\begin{document} \title[Fibonacci self-reciprocal polynomials and Fibonacci PPs]{Fibonacci self-reciprocal polynomials and Fibonacci permutation polynomials} \author[Neranga Fernando]{Neranga Fernando} \address{Department of Mathematics, Northeastern University, Boston, MA 02115} \email{[email protected]} \author[Mohammad Rashid]{Mohammad Rashid} \address{Northeastern University, Boston, MA 02115} \email{[email protected]} \begin{abstract} Let $p$ be a prime. In this paper, we give a complete classification of self-reciprocal polynomials arising from Fibonacci polynomials over $\mathbb{Z}$ and $\mathbb{Z}_p$, where $p=2$ and $p>5$. We also present some partial results when $p=3, 5$. We also compute the first and second moments of Fibonacci polynomials $f_{n}(x)$ over finite fields, which give necessary conditions for Fibonacci polynomials to be permutation polynomials over finite fields. \end{abstract} \keywords{Fibonacci polynomial, permutation polynomial, self-reciprocal polynomial, finite field, Dickson polynomial} \subjclass[2010]{11B39, 11T06, 11T55} \maketitle

\section{Introduction}

{\it Fibonacci polynomials} were first studied in 1833 by Eugene Charles Catalan. Since then, Fibonacci polynomials have been extensively studied by many for their general and arithmetic properties; see \cite{Abd-Elhameed-Youssri-El-Sissi-Sadek, Cigler, Hoggat-Bicknell, Koshy-2001, Levy, Tasyurdu-Deveci, Yuan-Zhang}. In a recent paper, Koroglu, Ozbek and Siap studied cyclic codes whose generators are Fibonacci polynomials over finite fields; see \cite{Koroglu-Ozbek-Siap}. In another recent paper, Kitayama and Shiomi studied the irreducibility of Fibonacci polynomials over finite fields; see \cite{Kitayama-Shiomi-2017}.

Fibonacci polynomials are defined by the recurrence relation $f_0(x)=0$, $f_1(x)=1$, and $$f_n(x)=xf_{n-1}(x)+f_{n-2}(x),\,\, \textnormal{for} \,\,n\geq 2.$$ The Fibonacci polynomial sequence is a generalization of the Fibonacci number sequence: $f_n(1)=F_n$ for all $n$, where $F_n$ denotes the $n$-th Fibonacci number. Moreover, $f_n(2)$ defines the well-known {\it Pell numbers} $1, 2, 5, 12, 29, \ldots$. Fibonacci polynomials can also be extended to negative subscripts (see \cite[Chapter 37]{Koshy-2001}): $$f_{-n}(x)=(-1)^{n+1}f_{n}(x).$$

In the first part of this paper, we explore self-reciprocal polynomials arising from Fibonacci polynomials. The reciprocal $f^*(x)$ of a polynomial $f(x)$ of degree $n$ is defined by $f^*(x)=x^n\,f(\frac{1}{x})$. A polynomial $f(x)$ is called {\it self-reciprocal} if $f^*(x)=f(x)$; i.e. if $f(x)=a_0+a_1x+a_2x^2+\cdots +a_nx^n$, $a_n\neq 0$, is self-reciprocal, then $a_i=a_{n-i}$ for $0\leq i\leq n$. The coefficients of a self-reciprocal polynomial form a palindrome, which is why a self-reciprocal polynomial is also called \textit{palindromic}. Many authors have studied self-reciprocal polynomials for their applications in the theory of error correcting codes, DNA computing, and in the area of quantum error-correcting codes. We explain one application in the next paragraph.

Let $C$ be a code of length $n$ over $R$, where $R$ is either a ring or a field. The reverse of the codeword $c=(c_0,c_1,\ldots ,c_{n-2}, c_{n-1})$ in $C$ is denoted by $c^r$, and it is given by $c^r=(c_{n-1},c_{n-2},\ldots ,c_1, c_0)$. If $c^r\in C$ for all $c\in C$, then the code $C$ is said to be reversible. Let $\tau$ denote the cyclic shift. Then $\tau(c)=(c_{n-1},c_0,\ldots ,c_{n-2})$.
If the cyclic shift of each codeword is also a codeword, then the code $C$ is said to be a {\it cyclic code}. It is a well-known fact that cyclic codes have a representation in terms of polynomials. For instance, the codeword $c=(c_0,c_1,\ldots ,c_{n-1})$ can be represented by the polynomial $h(x)=c_0+c_1x+\cdots +c_{n-1}x^{n-1}$. The cyclic shifts of $c$ correspond to the polynomials $x^ih(x)\pmod{x^n-1}$ for $i=0,1, \ldots , n-1$. There is a unique codeword among all non-zero codewords in a cyclic code $C$ whose corresponding polynomial $g(x)$ has minimum degree and divides $x^n-1$. The polynomial $g(x)$ is called the generator polynomial of the cyclic code $C$. In \cite{Massey-1964}, Massey studied reversible codes over finite fields and showed that the cyclic code generated by the monic polynomial $g(x)$ is reversible if and only if $g(x)$ is self-reciprocal.

We present a complete classification of self-reciprocal polynomials arising from Fibonacci polynomials over $\mathbb{Z}$ and $\mathbb{Z}_p$, where $p=2$ and $p>5$. According to our numerical results obtained from the computer, the cases $p=3$ and $p=5$ seem to be inconclusive, since the number of patterns of $n$ increases as $n$ increases. However, we present some sufficient conditions on $n$ for $f_n$ to be self-reciprocal when $p=3$ and $p=5$. We refer the reader to \cite{Adleman-1994}, \cite{Guenda-Jitman-AG}, \cite{GG-2013} and \cite{Massey-1964} for more details about self-reciprocal polynomials.

In the second part of the paper, we explore necessary conditions for Fibonacci polynomials to be permutation polynomials over finite fields. Permutation polynomials over finite fields also have many important applications in coding theory. Let $p$ be a prime and $q=p^e$, where $e$ is a positive integer. Let $\Bbb F_{p^e}$ be the finite field with $p^e$ elements. A polynomial $f \in \Bbb F_{p^e}[{\tt x}]$ is called a \textit{permutation polynomial} (PP) of $\Bbb F_{p^e}$ if the associated mapping $x\mapsto f(x)$ from $\Bbb F_{p^e}$ to $\Bbb F_{p^e}$ is a permutation of $\Bbb F_{p^e}$. It is a well-known fact that a function $f:\Bbb F_q\to \Bbb F_q$ is bijective if and only if \[ \sum_{a\in\Bbb F_q}f(a)^i \begin{cases} =0&\text{if}\ 1\le i\le q-2,\cr \ne 0&\text{if}\ i=q-1. \end{cases} \] Therefore, an explicit evaluation of the sum $\sum_{a\in\Bbb F_q}f_{n}(a)^i$ for any $1\le i\le q-1$ would provide necessary conditions for $f_{n}$ to be a PP of $\Bbb F_q$. We compute the sums $\sum_{a\in \Bbb F_q}f_{n}(a)$ and $\sum_{a\in \Bbb F_q}f_{n}^2(a)$ in this paper.

Dickson polynomials have played a pivotal role in the area of permutation polynomials over finite fields. We point out the connection between Fibonacci polynomials and Dickson polynomials of the second kind (DPSK). We would also like to highlight the fact that the first moment and the second moment of DPSK have never appeared in the literature before. Since the permutation behaviour of DPSK is not completely known yet, we believe that the results concerning $\sum_{a\in \Bbb F_q}f_{n}(a)$ and $\sum_{a\in \Bbb F_q}f_{n}^2(a)$ would motivate further investigation of $\sum_{a\in\Bbb F_q}f_{n}(a)^i$, where $3\le i\le q-1$, and help forward the area.

Here is an overview of the paper. In Subsection 1.1, we present more properties of Fibonacci polynomials that will be used throughout the paper. In Section 2, we explore self-reciprocal polynomials arising from Fibonacci polynomials over $\mathbb{Z}$ and $\mathbb{Z}_p$, where $p=2$ and $p>5$.
We also present some partial results when $p=3$ and $p=5$. In Section 3, we discuss the connection between Fibonacci polynomials and Dickson polynomials of the second kind (DPSK), which are related to Chebyshev polynomials. In Section 4, we find necessary conditions for Fibonacci polynomials to be permutation polynomials over finite fields.

\subsection{Some properties of Fibonacci polynomials}

An explicit expression for $f_{n}(x)$ is given by \begin{equation}\label{EE1} f_{n+1}(x)=\displaystyle\sum_{j=0}^{\lfloor \frac{n}{2} \rfloor}\,\binom{n-j}{j}\,x^{n-2j},\,\, \textnormal{for} \,\,n\geq 0. \end{equation} Therefore \begin{equation}\label{EE2} f_{n}(x)=\displaystyle\sum_{j=0}^{\lfloor \frac{n-1}{2} \rfloor}\,\binom{n-j-1}{j}\,x^{n-2j-1},\,\, \textnormal{for} \,\,n\geq 0. \end{equation} Another explicit formula for $f_n(x)$ is given by \begin{equation}\label{EE1.3} f_n(x)=\displaystyle\sum_{k=0}^{n}\,f(n,k)\,x^k, \end{equation} where $f(n,k)=\displaystyle\binom{\frac{n+k-1}{2}}{k}$ when $n$ and $k$ have different parity, and $f(n,k)=0$ otherwise; see \cite[Section 9.4]{Benjamin-Quinn}. The coefficient $f(n,k)$ can also be thought of as the number of ways of writing $n-1$ as an ordered sum involving only 1 and 2, so that 1 is used exactly $k$ times. There is yet another explicit formula for $f_n(x)$: \begin{equation}\label{EN1.4} f_n(x)=\displaystyle\frac{\alpha^n(x)-\beta^n(x)}{\alpha(x)-\beta(x)}, \end{equation} where $$\alpha(x)=\displaystyle\frac{x+\sqrt{x^2+4}}{2}\,\,\,\textnormal{and}\,\,\,\beta(x)=\displaystyle\frac{x-\sqrt{x^2+4}}{2}$$ are the solutions of the quadratic equation $u^2-xu-1=0$. The generating function of $f_n(x)$ is given by \begin{equation}\label{EE3} \displaystyle\sum_{n=0}^{\infty}\,f_n(x)\,z^n=\displaystyle\frac{z}{1-xz-z^2}; \end{equation} see \cite[Chapter 37]{Koshy-2001}.

\section{Fibonacci Self-reciprocal polynomials}

\subsection{Self-reciprocal polynomials over $\mathbb{Z}$} In this subsection, we completely classify the self-reciprocal polynomials arising from Fibonacci polynomials over $\mathbb{Z}$.

\begin{thm}\label{T2.1} If $f_n$ is self-reciprocal, then $n$ is odd. \end{thm}

\begin{proof} Assume that $n$ is even. Then from \eqref{EE2} it is clear that there is no constant term in the polynomial. So $f_n$ cannot be self-reciprocal. \end{proof}

\begin{rmk} The above result is true in any characteristic. \end{rmk}

Hereafter we always assume that $n$ is even since we consider the explicit expression of $f_{n+1}$ (Eq.~\eqref{EE1}) in the proofs in Section 2.

\begin{thm} $f_n$ is self-reciprocal if and only if $n\in \{3, 5\}$. \end{thm}

\begin{proof} The polynomials $f_3=x^2+1$ and $f_5=x^4+3x^2+1$ are clearly self-reciprocal. We show that $f_n$ is not self-reciprocal when $n\neq 3, 5$. Recall that \[ \begin{split} & f_{n+1}(x)=\displaystyle\sum_{j=0}^{\lfloor \frac{n}{2} \rfloor}\,\binom{n-j}{j}\,x^{n-2j},\,\, \textnormal{for} \,\,n\geq 0. \end{split} \] Note that here $n$ is even since we consider $f_{n+1}(x)$. Let $n$ be even and $n\neq 2,4$. \[ \begin{split} & f_{n+1}(x)= x^{n} + \binom{n-1}{1}x^{n-2}+ \binom{n-2}{2}x^{n-4}+\cdots+\binom{\frac{n}{2} +1}{\frac{n}{2}-1}x^{2}+1. \end{split} \] We show that the coefficients of $x^{n-2}$ and $x^2$ are not equal. Assume that they are the same, i.e. $$\binom{n-1}{1}=\binom{\frac{n}{2}+1}{\frac{n}{2}-1}.$$ A straightforward computation yields that $n^2 - 6n+8=0$, which implies $n=2\,\textnormal{or}\,4$. This contradicts the fact that $n\neq 2,4$. \end{proof}
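The numerical experiments mentioned in the introduction are easy to reproduce. The following short Python sketch (ours, included only as an illustration and not part of any proof) builds $f_n$ from the recurrence and tests whether its coefficient vector is a palindrome, over $\mathbb{Z}$ or modulo a prime; for $2\leq n\leq 60$ the first list printed is $[3, 5]$, in agreement with the theorem above.

\begin{verbatim}
# A small sketch (ours, for illustration only): generate Fibonacci polynomials
# from f_n = x*f_{n-1} + f_{n-2} and test palindromicity of the coefficients.
# Polynomials are stored as coefficient lists [a_0, a_1, ..., a_n].

def fibonacci_polys(n_max):
    polys = [[0], [1]]                        # f_0 = 0, f_1 = 1
    for _ in range(2, n_max + 1):
        prev, prev2 = polys[-1], polys[-2]
        shifted = [0] + prev                  # multiply f_{n-1} by x
        polys.append([shifted[i] + (prev2[i] if i < len(prev2) else 0)
                      for i in range(len(shifted))])
    return polys

def is_self_reciprocal(coeffs, p=None):
    if p is not None:
        coeffs = [c % p for c in coeffs]
    while len(coeffs) > 1 and coeffs[-1] == 0:  # trim to the actual degree
        coeffs = coeffs[:-1]
    return coeffs == coeffs[::-1]

polys = fibonacci_polys(60)
print([n for n in range(2, 61) if is_self_reciprocal(polys[n])])       # over Z
print([n for n in range(2, 61) if is_self_reciprocal(polys[n], p=3)])  # mod 3
\end{verbatim}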
\subsection{In even characteristic} In this subsection, we consider the even characteristic case.

\begin{thm} $f_n$ is self-reciprocal if and only if $n\in \{3, 5\}$. \end{thm}

\begin{proof} $f_3=x^2+1$ and $f_5=x^4+x^2+1$ are clearly self-reciprocal. Now assume that $n\neq 3, 5$; we show that $f_n$ is not self-reciprocal. Recall that \[ \begin{split} & f_{n+1}(x)=\displaystyle\sum_{j=0}^{\lfloor \frac{n}{2} \rfloor}\,\binom{n-j}{j}\,x^{n-2j},\,\, \textnormal{for} \,\,n\geq 0. \end{split} \] Note that \begin{center} $f_{n+1}$ is self-reciprocal if and only if $\binom{n-j}{j}=\binom{\frac{n}{2}+j}{\frac{n}{2}-j}$, for all $0\leq j\leq \frac{n}{2}$. \end{center} We divide the proof into three cases: \begin{itemize} \item [(i)] $n\equiv 0\,\textnormal{or}\,6\pmod{8}$. \item [(ii)] $n\equiv 2\pmod{8}$. \item [(iii)] $n\equiv 4\pmod{8}$. \end{itemize}

\noindent \textbf{Case 1.} $n\equiv 0\,\textnormal{or}\,6\pmod{8}$. We show that $\binom{\frac{n}{2}+1}{\frac{n}{2}-1}\equiv 0\pmod{2}$, but $\binom{n-1}{1}\not\equiv 0\pmod{2}$. $\binom{n-1}{1}=n-1\equiv 5\,\textnormal{or}\,7\pmod{8}$, which implies $\binom{n-1}{1}\not\equiv 0\pmod{2}$. Consider $\binom{\frac{n}{2}+1}{\frac{n}{2}-1}=\frac{n(n+2)}{8}$. Since $n\equiv 0\,\textnormal{or}\,6\pmod{8}$, we have $n=8\ell_1$ or $n+2=8\ell_2$ for some integers $\ell_1>0$ and $\ell_2>0$. Then we have $\binom{\frac{n}{2}+1}{\frac{n}{2}-1}=\frac{n(n+2)}{8}=\ell_1(n+2)\,\,\textnormal{or}\,\,n\ell_2$. In either case, we have $\binom{\frac{n}{2}+1}{\frac{n}{2}-1}\equiv 0\pmod{2}$ since $n$ and $n+2$ are even.

\noindent \textbf{Case 2.} $n\equiv 2\pmod{8}$. We show that the coefficient of $x^6$, $\binom{\frac{n}{2}+3}{\frac{n}{2}-3}$, is congruent to zero modulo $2$, but the coefficient of $x^{n-6}$, $\binom{n-3}{3}$, is not congruent to zero modulo $2$.

\noindent \textbf{Case 2.1} First consider $\binom{n-3}{3}$. We have $$\binom{n-3}{3}=\frac{(n-3)(n-4)(n-5)}{6}.$$ Since $n\equiv 2\pmod{8}$, we have $n=2+8\ell_1$, for some integer $\ell_1$, which implies $n-4=2(4\ell_1-1).$ Since $\textnormal{gcd}(3,8)=1$, we have $$\binom{n-3}{3}=\frac{(n-3)(4\ell_1-1)(n-5)}{3}\equiv 4\ell_1-1\pmod{8},$$ which implies $\binom{n-3}{3}\equiv 1\pmod{2}.$

\noindent \textbf{Case 2.2} Next we consider $\binom{\frac{n}{2}+3}{\frac{n}{2}-3}$. Note that since $n\equiv 2 \pmod{8}$, $\frac{n}{2}\equiv 1\,\textnormal{or}\,5 \pmod{8}$.

\noindent \textbf{Subcase 2.2a.} $\frac{n}{2} \equiv 1 \pmod{8}$. We have \begin{equation}\label{Dec233} \binom{\frac{n}{2}+3}{\frac{n}{2}-3} = \frac{(\frac{n}{2}+3)(\frac{n}{2}+2)(\frac{n}{2}+1)(\frac{n}{2})(\frac{n}{2}-1)(\frac{n}{2}-2)}{6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1}. \end{equation} Since $\frac{n}{2} \equiv 1 \pmod{8}$, $(\frac{n}{2}-1)=8\ell_1$ and $(\frac{n}{2}+3)=4\ell_2$, for some integers $\ell_1, \ell_2$. From \eqref{Dec233} we have $$\binom{\frac{n}{2}+3}{\frac{n}{2}-3} \equiv 0\pmod{2}.$$

\noindent \textbf{Subcase 2.2b.} $\frac{n}{2} \equiv 5 \pmod{8}$. We have \begin{equation}\label{Dec234} \binom{\frac{n}{2}+3}{\frac{n}{2}-3} = \frac{(\frac{n}{2}+3)(\frac{n}{2}+2)(\frac{n}{2}+1)(\frac{n}{2})(\frac{n}{2}-1)(\frac{n}{2}-2)}{6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1}. \end{equation} Since $\frac{n}{2} \equiv 5 \pmod{8}$, $(\frac{n}{2}-1)=4\ell_1$ and $(\frac{n}{2}+3)=8\ell_2$, for some integers $\ell_1, \ell_2$.
From \eqref{Dec234} we have $$\binom{\frac{n}{2}+3}{\frac{n}{2}-3} \equiv 0\pmod{2}.$$

\noindent \textbf{Case 3.} $n\equiv 4\pmod{8}$. Note that since $n\equiv 4\pmod{8}$, we have $n\equiv 12\pmod{16}$ or $n\equiv 4\pmod{16}$.

\noindent \textbf{Subcase 3.1.} $n\equiv 12\pmod{16}$. We show that the coefficient of $x^4$, $\binom{\frac{n}{2}+2}{\frac{n}{2}-2}$, is congruent to zero modulo 2, but the coefficient of $x^{n-4}$, $\binom{n-2}{2}$, is not. We have \[ \begin{split} \binom{\frac{n}{2}+2}{\frac{n}{2}-2}&=\frac{(\frac{n}{2}+2)(\frac{n}{2}+1)(\frac{n}{2})(\frac{n}{2}-1)}{4\cdot 3\cdot 2\cdot 1}. \end{split} \] Since $n\equiv 12\pmod{16}$, we have $\frac{n}{2}\equiv 6\pmod{8},$ which implies $(\frac{n}{2}+2)=8\ell_1$ and $\frac{n}{2}=2\ell_2$ for some integers $\ell_1, \ell_2$. Hence $\binom{\frac{n}{2}+2}{\frac{n}{2}-2}\equiv 0\pmod{2}$. Consider $\binom{n-2}{2}$. We have \[ \begin{split} \binom{n-2}{2}&=\frac{(n-2)(n-3)}{2}. \end{split} \] Since $n\equiv 12\pmod{16}$, both $\frac{(n-2)}{2}$ and $(n-3)$ are odd, which implies $\binom{n-2}{2}\equiv 1\pmod{2}$.

\noindent \textbf{Subcase 3.2.} $n\equiv 4\pmod{16}$. We show that the coefficient of $x^{12}$, $\binom{\frac{n}{2}+6}{\frac{n}{2}-6}$, is congruent to zero modulo 2, but the coefficient of $x^{n-12}$, $\binom{n-6}{6}$, is not. We have \[ \begin{split} \binom{\frac{n}{2}+6}{\frac{n}{2}-6}&=\frac{\displaystyle\prod_{i=-5}^{6}\,\Big(\frac{n}{2}+i\Big)}{12!}\cr &=\frac{\displaystyle\prod_{i=-5}^{6}\,\Big(\frac{n}{2}+i\Big)}{2^{10}\cdot 3^5\cdot 5^2\cdot 7\cdot 11}. \end{split} \] Since $n\equiv 4\pmod{16}$, we have $\frac{n}{2}\equiv 2\pmod{8},$ which implies $(\frac{n}{2}+6)=8\ell_1$, $(\frac{n}{2}+4)=2\ell_2$, $(\frac{n}{2}+2)=4\ell_3$, $\frac{n}{2}=2\ell_4$, $(\frac{n}{2}-2)=8\ell_5$, and $(\frac{n}{2}-4)=2\ell_6$ for some integers $\ell_1, \ell_2, \ell_3, \ell_4, \ell_5, \ell_6$. Hence $\binom{\frac{n}{2}+6}{\frac{n}{2}-6}\equiv 0\pmod{2}$. Consider $\binom{n-6}{6}$. We have \[ \begin{split} \binom{n-6}{6}&=\frac{(n-6)(n-7)(n-8)(n-9)(n-10)(n-11)}{6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1}\cr &=\frac{(n-6)(n-7)(n-8)(n-9)(n-10)(n-11)}{2^4\cdot 3^2\cdot 5\cdot 1}. \end{split} \] Since $n\equiv 4\pmod{16}$, $n-6=2\ell_1, n-8=4\ell_2, n-10=2\ell_3$, where $\ell_1, \ell_2, \ell_3$ are odd integers. Also, $n-7, n-9$ and $n-11$ are odd. Hence $\binom{n-6}{6}\equiv 1\pmod{2}$. This completes the proof. \end{proof}

\subsection{In odd characteristic} In this subsection, we consider the odd characteristic case.

\begin{thm} Let $p$ be a prime and $p>5$. Then $f_n$ is self-reciprocal if and only if $n\in \{3, 5\}$. \end{thm}

\begin{proof} Assume that $p>5$. $f_3=x^2+1$ and $f_5=x^4+3x^2+1$ are clearly self-reciprocal. Now assume that $n\neq 3, 5$. We show that $f_n$ is not self-reciprocal. Let $n$ be even and recall that \[ \begin{split} & f_{n+1}(x)=\displaystyle\sum_{j=0}^{\frac{n}{2}}\,\binom{n-j}{j}\,x^{n-2j},\,\, \textnormal{for} \,\,n\geq 0. \end{split} \] Then we have \[ \begin{split} & f_{n+1}(x)= x^{n} + \binom{n-1}{1}x^{n-2}+ \binom{n-2}{2}x^{n-4}+\cdots+\binom{\frac{n}{2} +1}{\frac{n}{2}-1}x^{2}+1. \end{split} \] Recall that \begin{center} $f_{n+1}$ is self-reciprocal if and only if $\binom{n-j}{j}=\binom{\frac{n}{2}+j}{\frac{n}{2}-j}$, for all $0\leq j\leq \frac{n}{2}$.
\end{center} We claim that $$\binom{n-1}{1}\not \equiv \binom{\frac{n}{2}+1}{\frac{n}{2}-1}\pmod{p}.$$ We first show that if $\binom{n-1}{1}\equiv 0\pmod{p}$, then $\binom{\frac{n}{2}+1}{\frac{n}{2}-1}\not\equiv 0\pmod{p}$. Assume that $\binom{n-1}{1}\equiv 0\pmod{p}$, which implies $n\equiv 1\pmod{p}$. Since $p>5$, $$\binom{\frac{n}{2}+1}{\frac{n}{2}-1}=\frac{n(n+2)}{8}\equiv \frac{3}{8}\not\equiv 0\pmod{p}.$$ Now we claim the following. If $\binom{n-1}{1}\not \equiv 0\pmod{p}$ and $\binom{\frac{n}{2}+1}{\frac{n}{2}-1}\not\equiv 0\pmod{p}$, then $$\binom{n-1}{1}\not \equiv \binom{\frac{n}{2}+1}{\frac{n}{2}-1}\pmod{p}.$$ Let $\binom{n-1}{1}\not \equiv 0\pmod{p}$, $\binom{\frac{n}{2}+1}{\frac{n}{2}-1}\not\equiv 0\pmod{p}$, and assume to the contrary \begin{equation}\label{Dec1} \binom{n-1}{1}\equiv \binom{\frac{n}{2}+1}{\frac{n}{2}-1}\pmod{p}, \end{equation} which implies $n\equiv 2\pmod{p}$ or $n\equiv 4\pmod{p}$. We show that when $n\equiv 2\pmod{p}$ or $n\equiv 4\pmod{p}$, $f_n$ is not self-reciprocal.

Let $n\equiv 2\pmod{p}$. We consider the coefficient of $x^{n-6}$, $\binom{n-3}{3}$, and the coefficient of $x^6$, $\binom{\frac{n}{2}+3}{\frac{n}{2}-3}$. Since $n\equiv 2 \pmod{p}$, $n=2+p\ell,$ where $\ell$ is an even integer. We have $$\binom{n-3}{3}=\frac{(n-3)(n-4)(n-5)}{3!}.$$ Note that $\textnormal{gcd}(3!,p)=1$ since $p>5$. We have $$(n-3)(n-4)(n-5)=(p\ell-1)(p\ell-2)(p\ell-3)\equiv -6\not\equiv 0\pmod{p}.$$ Hence $\binom{n-3}{3}\not\equiv 0\pmod{p}$. Now we show that $\binom{\frac{n}{2}+3}{\frac{n}{2}-3}$ is congruent to zero modulo $p$. We have \begin{equation}\label{Dec231} \binom{\frac{n}{2}+3}{\frac{n}{2}-3}=\frac{(\frac{n}{2}+3)!}{(\frac{n}{2}-3)!\,\,6!}. \end{equation} Since $n\equiv 2\pmod{p}$, we have $\frac{n}{2}\equiv 1\pmod{p}$. Since $\textnormal{gcd}(6!,p)=1$ and the right hand side of \eqref{Dec231} contains the term $(\frac{n}{2}-1)$, we have $\binom{\frac{n}{2}+3}{\frac{n}{2}-3}\equiv 0\pmod{p}$. Hence $f_n$ is not self-reciprocal.

Let $n\equiv 4\pmod{p}$. We consider the coefficient of $x^{n-10}$, $\binom{n-5}{5}$, and the coefficient of $x^{10}$, $\binom{\frac{n}{2}+5}{\frac{n}{2}-5}$. Since $n\equiv 4 \pmod{p}$, $n=4+p\ell,$ where $\ell$ is an even integer. We have $$\binom{n-5}{5}=\frac{(n-5)(n-6)(n-7)(n-8)(n-9)}{5!}.$$ Note that $\textnormal{gcd}(5!,p)=1$ since $p>5$. We have \[ \begin{split} (n-5)(n-6)(n-7)(n-8)(n-9)&=(p\ell-1)(p\ell-2)(p\ell-3)(p\ell-4)(p\ell-5)\cr &\not\equiv 0\pmod{p}. \end{split} \] Hence $\binom{n-5}{5}\not\equiv 0\pmod{p}$. Now we show that $\binom{\frac{n}{2}+5}{\frac{n}{2}-5}$ is congruent to zero modulo $p$. We have \begin{equation}\label{Dec232} \binom{\frac{n}{2}+5}{\frac{n}{2}-5}=\frac{(\frac{n}{2}+5)!}{(\frac{n}{2}-5)!\,\,10!}. \end{equation} Since $n\equiv 4\pmod{p}$, we have $\frac{n}{2}\equiv 2\pmod{p}$.

\noindent \textbf{Case 1.} $p>7$. Since $\textnormal{gcd}(10!,p)=1$ and the right hand side of \eqref{Dec232} contains the term $(\frac{n}{2}-2)$, we have $\binom{\frac{n}{2}+5}{\frac{n}{2}-5}\equiv 0\pmod{p}$.

\noindent \textbf{Case 2.} $p=7$. The numerator on the right hand side of \eqref{Dec232} contains both $(\frac{n}{2}+5)$ and $(\frac{n}{2}-2)$, each of which is a multiple of $7$, whereas $7$ divides $10!$ only once. Hence $\binom{\frac{n}{2}+5}{\frac{n}{2}-5}\equiv 0\pmod{p}$. Hence $f_n$ is not self-reciprocal.
\end{proof}

\begin{rmk} Let $p=3$ and $\ell\geq 0$ be an integer. Then $f_n$ is self-reciprocal if $n$ satisfies one of the following: \begin{enumerate} \item [(i)] $n=p^{\ell}$, \item [(ii)] $n=5\cdot p^{\ell}$, \item [(iii)] $n=41 \cdot p^{\ell}$, \item [(iv)] $n=5\cdot73\cdot p^{\ell}$, \item [(v)] $n=5^{2}\cdot1181 \cdot p^{\ell}$. \end{enumerate} \vskip 0.1in Let $p=5$ and $\ell\geq 0$ be an integer. Then $f_n$ is self-reciprocal if $n$ satisfies one of the following: \begin{enumerate} \item [(i)] $n=p^{\ell}$, \item [(ii)] $n=3\cdot p^{\ell}$, \item [(iii)] $n=13 \cdot p^{\ell}$, \item [(iv)] $n=3\cdot 29 \cdot p^{\ell}$, \item [(v)] $n=3^{2}\cdot7 \cdot p^{\ell}$. \end{enumerate} \end{rmk}

\begin{rmk} \textnormal{Our numerical results obtained from the computer indicate that the number of patterns of $n$ increases as $n$ increases. So it would be an arduous task to find necessary and sufficient conditions on $n$ for $f_n$ to be self-reciprocal when $p=3$ and $p=5$.} \end{rmk}

\section{Fibonacci Permutation Polynomials}

\subsection{Fibonacci polynomials and Dickson polynomials} Dickson polynomials have played a pivotal role in the study of permutation polynomials over finite fields. The $n$-th Dickson polynomial of the second kind $E_n(x,a)$ is defined by \[ E_{n}(x,a) = \sum_{i=0}^{\lfloor\frac n2\rfloor}\dbinom{n-i}{i}(-a)^{i}x^{n-2i}, \] where $a\in \Bbb F_q$ is a parameter; see \cite[Chapter 2]{Lidl-Mullen-Turnwald-1993}. Dickson polynomials of the second kind (DPSK) are closely related to the well-known Chebyshev polynomials over the complex numbers by $$E_n(2x,1)=U_n(x),$$ where $U_n(x)$ is the Chebyshev polynomial of degree $n$ of the second kind.

Let $x=u+\frac{a}{u}$ and $u-\frac{a}{u}\neq 0$, i.e. $u\neq \pm \sqrt{a}$. Note that $x=u+\frac{a}{u}$ implies $u^2-xu+a=0$, where $u$ is in the extension field $\Bbb F_{q^2}$. Then the functional expression of DPSK is given by \begin{equation}\label{E2.1} E_n(u+\frac{a}{u}, a)=\displaystyle\frac{u^{n+1}-(\frac{a}{u})^{n+1}}{u-\frac{a}{u}}. \end{equation} For $u=\sqrt{a}$ and $u=-\sqrt{a}$ we have $$E_n(2\sqrt{a}, a)=(n+1)(\sqrt{a})^n$$ and $$E_n(-2\sqrt{a}, a)=(n+1)(-\sqrt{a})^n;$$ see \cite[Chapter 2]{Lidl-Mullen-Turnwald-1993}.

We note to the reader that when $a=-1$, the $n$-th Dickson polynomial of the second kind is the $(n+1)$-st Fibonacci polynomial. Therefore the functional expression of Fibonacci polynomials is given by \begin{equation}\label{EEE2.2} f_{n+1}(u-\frac{1}{u})=\displaystyle\frac{u^{n+1}-(\frac{-1}{u})^{n+1}}{u+\frac{1}{u}} \end{equation} with the condition that $u\neq \pm b$ if $b^2=-1$ for some $b\in \Bbb F_q$. When $u=\pm b$ with $b^2=-1$ for some $b\in \Bbb F_q$, i.e. $x^2+4=0$, we have \begin{equation}\label{EEE2.3} f_{n+1}(\pm 2b)=(n+1)(\pm b)^n. \end{equation}

The permutation behaviour of DPSK is not completely known yet, but it has been studied by many authors to a large degree: Stephen D. Cohen, Rex Matthews, Mihai Cipu, Marie Henderson and Robert Coulter, to name a few. We refer the reader to \cite{Cipu-2004, Cipu-Cohen-2008, Cohen-1994, Cohen-1993, Coulter-Matthews-2002, Henderson-1997, Henderson-Matthews-1998, Henderson-Matthews-1995, Lidl-Mullen-Turnwald-1993, Matthews-Thesis-1982} for more details about DPSK and their permutation behaviour over finite fields.

\section{When is $f_{n}=f_{m}$?} In \cite{Wang-Yucas-FFA-2012}, Wang and Yucas introduced the $n$-th Dickson polynomial of the $(k+1)$-th kind and the $n$-th reversed Dickson polynomial of the $(k+1)$-th kind.
For $a\in \Bbb F_q$, the $n$-th Dickson polynomial of the $(k+1)$-th kind $D_{n,k}(x,a)$ is defined by \[ D_{n,k}(x,a) = \sum_{i=0}^{\lfloor\frac n2\rfloor}\frac{n-ki}{n-i}\dbinom{n-i}{i}(-a)^{i}x^{n-2i}, \] and $D_{0,k}(x,a)=2-k$. When $a=1$, Wang and Yucas showed that the sequence of Dickson polynomials of the $(k+1)$-th kind in terms of degrees modulo $x^q-x$ is a periodic function with period $2c$, where $c=\frac{p(q^2-1)}{4}$.

\subsection{$p\equiv 1\pmod{4}$ with any $e$ or $p\equiv 3\pmod{4}$ with even $e$.} When $p\equiv 1\pmod{4}$ with any $e$ or $p\equiv 3\pmod{4}$ with even $e$, the element $-1$ is a square in $\Bbb F_{p^e}$, and the following theorem follows from \cite[Theorem 2.12]{Wang-Yucas-FFA-2012}. This result also appeared in \cite{Henderson-Matthews-1995}.

\begin{thm}\label{TD261} Let $e$ be even or $p\equiv 1\pmod{4}$ with odd $e$. If $n_1\equiv n_2\pmod{\frac{p(p^{2e}-1)}{2}}$, then $f_{n_1}=f_{n_2}$ for all $x\in \Bbb F_{p^e}$. \end{thm}

We would like to point out to the reader that the results in Theorems \ref{TD261}, \ref{TD262}, and \ref{TD2} also follow from the sign class argument that appeared in \cite{Matthews-Thesis-1982}. For completeness, we give proofs for the following two theorems.

\subsection{$p\equiv 3\pmod{4}$ and $e$ is odd.} From \eqref{EN1.4} we have $$f_n(u-\frac{1}{u})=\displaystyle\frac{u_1^n-u_2^n}{u_1-u_2},$$ where $u_1=\displaystyle\frac{x+\sqrt{x^2+4}}{2}$ and $u_2=\displaystyle\frac{x-\sqrt{x^2+4}}{2}$ are the solutions of the quadratic equation $u^2-xu-1=0$. Here $u$ is in the extension field $\Bbb F_{p^{2e}}$. We note to the reader that $u_1 \neq u_2$ when $p\equiv 3\pmod{4}$ and $e$ is odd since $u_1=u_2$ implies $x^2=-4$, which is not true when $p\equiv 3\pmod{4}$ and $e$ is odd. Thus we have the following result.

\begin{thm}\label{TD262} Let $p\equiv 3\pmod{4}$ and $e$ be odd. If $n_1\equiv n_2\pmod{p^{2e}-1}$, then $f_{n_1}=f_{n_2}$ for all $x\in \Bbb F_{p^e}$. \end{thm}

\begin{proof} For $x\in \Bbb F_{p^e}$, there exists $u\in \Bbb F_{p^{2e}}$ such that $x=u-\frac{1}{u}$. Then we have \[ \begin{split} f_{n_1}(x)&=\displaystyle\frac{u_1^{n_1}-u_2^{n_1}}{u_1-u_2}\cr &=\displaystyle\frac{u_1^{n_2}-u_2^{n_2}}{u_1-u_2}\cr &=f_{n_2}(x). \end{split} \] \end{proof}

\subsection{$p=2$.}

\begin{thm}\label{TD2} If $n_{1}\equiv n_{2}\,\, \pmod{2^{{2e+1}}-2}$, then $f_{n_1}=f_{n_2}$. \end{thm}

\begin{proof} Let $n_{1}\equiv n_{2}\,\, \pmod{2^{{2e+1}}-2}$. Note that $2^{{2e+1}}-2=(2^{2e}-1)+(2^{2e}-1)$. When $u\neq1$, i.e. $x\neq0$, we have \[ \begin{split} f_{n_1}(x)&=\displaystyle\frac{u^{n_1}+\Big(\frac{1}{u}\Big)^{n_1}}{u+\frac{1}{u}} \cr &= \displaystyle\frac{u^{n_2}+\Big(\frac{1}{u}\Big)^{n_2}}{u+\frac{1}{u}} \cr &= f_{n_2}(x). \end{split} \] When $u=1$, i.e. $x=0$, we have in characteristic 2, \[ \begin{split} f_{n_1}(x)=n_{1} =n_{2}=f_{n_2}(x). \end{split} \] Thus for all ${x\in \Bbb F_{2^e}}$, $f_{n_1}(x)=f_{n_2}(x)$. \end{proof}

\section{Computation of $\sum_{a\in \Bbb F_q}f_{n}(a)$}

In this section, we compute the sums $\sum_{a\in \Bbb F_q}f_{n}(a)$ and $\sum_{a\in \Bbb F_q}f_{n}^2(a)$. We would also like to emphasize again that the sums $\sum_{a\in \Bbb F_q}f_{n}(a)$ and $\sum_{a\in \Bbb F_q}f_{n}^2(a)$, which provide necessary conditions for $f_{n}$ to be a PP of $\Bbb F_q$, have never appeared in the literature about DPSK before. We believe that the results of this section would help forward the area.
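Before carrying out the computation, we record a brute-force check that is useful for verifying the closed forms obtained below. The following Python sketch (ours, written for a prime field $\Bbb F_p$ only, so that field arithmetic is plain arithmetic modulo $p$; it is not part of any proof) evaluates $f_n$ at every element of the field and accumulates the first and second power sums.

\begin{verbatim}
# A brute-force sketch (ours): first and second moments of f_n over a prime
# field F_p, i.e. sum_a f_n(a) and sum_a f_n(a)^2 with a running over F_p.

def fib_poly_value(n, a, p):
    # Evaluate f_n(a) mod p using f_0 = 0, f_1 = 1, f_k = a*f_{k-1} + f_{k-2}.
    if n == 0:
        return 0
    f_prev, f_cur = 0, 1
    for _ in range(n - 1):
        f_prev, f_cur = f_cur, (a * f_cur + f_prev) % p
    return f_cur

def moments(n, p):
    values = [fib_poly_value(n, a, p) for a in range(p)]
    return sum(values) % p, sum(v * v for v in values) % p

p = 7
for n in range(1, 2 * p * p):   # a small window of indices; the sums are periodic in n
    print(n, moments(n, p))
\end{verbatim}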
\subsection{$q$ odd} In this subsection, we compute $\sum_{a\in \Bbb F_q}f_{n}(a)$ when $q$ is odd. By \eqref{EE3}, we have \begin{equation}\label{E4.1} \begin{split} \displaystyle\sum_{n=0}^{\infty}\,f_{n}(x)\,z^n&=\displaystyle\frac{z}{1-xz-z^2}\cr &=\displaystyle\frac{z}{1-z^2}\,\,\displaystyle\frac{1}{1+(\frac{z}{z^2-1})\,x}\cr &=\displaystyle\frac{z}{1-z^2}\,\,\displaystyle\sum_{k\geq 0} \Big(\frac{z}{z^2-1}\Big)^k\,\,(-1)^k\,\,x^k\cr &=\displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\displaystyle\sum_{l\geq 0} \Big(\frac{z}{z^2-1}\Big)^{k+l(q-1)}\,\,(-1)^{k+l(q-1)}\,\,x^{k+l(q-1)} \Big]\cr &\equiv \displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\displaystyle\sum_{l\geq 0} \Big(\frac{z}{z^2-1}\Big)^{k+l(q-1)}\,\,(-1)^{k}\,\,x^k \Big]\,\,\,\pmod{x^q-x}\cr &=\displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,\displaystyle\frac{(\frac{z}{z^2-1})^{k}}{1-(\frac{z}{z^2-1})^{q-1}}\,\,x^k \Big]\cr &=\displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,\displaystyle\frac{(z^2-1)^{q-1-k}\,\,z^{k}}{(z^2-1)^{q-1} - z^{q-1}}\,\,x^k \Big]\cr \end{split} \end{equation}

\subsection{The case $p\equiv 3\pmod{4}$ and $e$ is odd.} Since $f_{n_1}\equiv f_{n_2} \pmod{x^q-x}$ when $n_1, n_2 >0$ and $n_1\equiv n_2 \pmod{q^2-1}$, we have the following. \begin{equation}\label{E4.2} \begin{split} \displaystyle\sum_{n\geq 0} \,f_{n}\,z^n&= \displaystyle\sum_{n\geq 1} \,f_{n}\,z^n\cr &=\displaystyle\sum_{n=1}^{q^2-1}\,\, \displaystyle\sum_{l\geq 0} \,f_{n+l(q^2-1)}\,z^{n+l(q^2-1)}\cr &\equiv \displaystyle\sum_{n=1}^{q^2-1}\,\,f_n\,\, \displaystyle\sum_{l\geq 0}\,z^{n+l(q^2-1)}\,\,\pmod{x^q-x}\cr &=\displaystyle\frac{1}{1-z^{q^2-1}} \displaystyle\sum_{n=1}^{q^2-1}\,f_n\,z^n \end{split} \end{equation} Combining \eqref{E4.1} and \eqref{E4.2} gives \[ \begin{split} \displaystyle\frac{1}{1-z^{q^2-1}} \displaystyle\sum_{n=1}^{q^2-1}\,f_n\,z^n \equiv \displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,\displaystyle\frac{(z^2-1)^{q-1-k}\,\,z^{k}}{(z^2-1)^{q-1} - z^{q-1}}\,\,x^k\Big]\,\,\pmod{x^q-x}, \end{split} \] i.e. \begin{equation}\label{E1} \begin{split} \displaystyle\sum_{n=1}^{q^2-1}\,f_n\,z^n \equiv \displaystyle\frac{z\,(z^{q^2-1}-1)}{z^2-1} +\,\,h(z)\,\,\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,(z^2-1)^{q-1-k}\,\,z^{k}\,\,x^k \,\,\pmod{x^q-x}, \end{split} \end{equation} where $$h(z)=\displaystyle\frac{z\,(z^{q^2-1}-1)}{(z^2-1)\,[(z^2-1)^{q-1}-z^{q-1}]}.$$ Note that \[ \begin{split} h(z) &= \displaystyle\frac{z\,(z^{q^2-1}-1)}{(z^2-1)\,[(z^2-1)^{q-1}-z^{q-1}]}\cr &= \displaystyle\frac{z\,(z^{q^2-1}-1)}{(z^2-1)^{q}-z^{q-1}(z^2-1)}\cr &= \displaystyle\frac{z\,(z^{q^2-1}-1)}{(z^{q-1}-1)\,(z^{q+1}+1)}\cr &= \displaystyle\frac{- z\,(z^{q^2}-z)}{(z-z^{q})\,(z^{q+1}+1)}\cr &= \displaystyle\frac{z\,(1+(z-z^q)^{q-1})}{(z^{q+1}+1)}. \end{split} \] Let $\displaystyle\sum_{k=1}^{q^2-q+1}\,b_kz^k = z\,(1+(z-z^q)^{q-1})$. Write $k=\alpha + \beta q$ where $0\leq \alpha, \beta \leq q-1$. Then we have the following. $$ b_k = \left\{ \begin{array}{ll} (-1)^{\beta} \,\binom{q-1}{\beta} & \textnormal{if}\,\,\alpha +\beta =q,\\[0.3cm] 1 & \textnormal{if}\,\,\alpha +\beta =1, \\[0.3cm] 0 & \textnormal{otherwise}. \end{array} \right.
$$ Summing both sides of \eqref{E1} as $x$ runs over $\Bbb F_q$, we have \[ \begin{split} \displaystyle\sum_{n=1}^{q^2-1}\,\Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big)\,z^n \equiv h(z)\,\,\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,(z^2-1)^{q-1-k}\,\,z^{k}\,\,\Big(\displaystyle\sum_{x\in \Bbb F_q}\,x^k\Big)\,\,\pmod{x^q-x}, \end{split} \] which implies \begin{equation}\label{E2} \begin{split} \displaystyle\sum_{n=1}^{q^2-1}\, \Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big) z^n &=\,\,- h(z)\,\,z^{q-1}. \end{split} \end{equation} From \eqref{E2} we have \[ \begin{split} \displaystyle\sum_{n=1}^{q^2-1}\, \Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big) z^n &=\,\,-\,\,z^{q-1}\,\displaystyle\sum_{k=1}^{q^2-q+1}\,b_kz^k\, \displaystyle\frac{1}{(z^{q+1}+1)}. \end{split} \] Now let $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)$. Then the above equation can be written as follows. \begin{equation}\label{E3} \begin{split} (z^{q+1}+1)\,\,\displaystyle\sum_{n=1}^{q^2-1}\,d_n z^n &=\,\,-\,\,z^{q-1}\,\displaystyle\sum_{k=1}^{q^2-q+1}\,b_kz^k. \end{split} \end{equation}

\begin{prop} By comparing the coefficient of $z^i$ on both sides of \eqref{E3}, we have the following.

Case 1. When $q=3$. \[ \begin{split} & d_j=0 \, \ \textnormal{if} \ 1\leq j \leq q-1; \\ & d_j=-b_{j-(q-1)} \ \textnormal{if} \ q\leq j \leq q+1; \\ & d_j=-b_{j+2} \ \textnormal{if} \ j=q^{2}-(q+1); \\ & d_j=0 \ \textnormal{if} \ j \geq q^{2}-q. \end{split} \]

Case 2. When $q>3$. \[ \begin{split} & d_j=0 \, \ \textnormal{if} \ 1\leq j \leq q-1; \\ & d_j=-b_{j-(q-1)} \ \textnormal{if} \ q\leq j \leq q+1; \\ & d_j=-b_{j-(q-1)} - d_{j-(q+1)} \ \textnormal{if} \ q+2\leq j \leq {q^2}-(q+2); \\ & d_j=-b_{j+2} \ \textnormal{if} \ j=q^{2}-(q+1); \\ & d_j=0 \ \textnormal{if} \ j \geq q^{2}-q. \end{split} \] \end{prop}

The following theorem is an immediate consequence of the above Proposition and the fact that $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)$.

\begin{thm} Let $b_k$ be defined as in \eqref{E3} for $1\leq k\leq q^2-q+1$. Then we have the following.

Case 1. When $q=3$. \[ \begin{split} & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)= 0 \,\, \ \textnormal{if} \ 1\leq j \leq q-1; \\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j-(q-1)} \,\,\ \textnormal{if} \ q\leq j \leq q+1;\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j+2} \,\, \ \textnormal{if} \ j=q^{2}-(q+1);\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=0 \,\, \ \textnormal{if} \ j \geq q^{2}-q. \end{split} \]

Case 2. When $q>3$. \[ \begin{split} & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)= 0 \,\, \ \textnormal{if} \ 1\leq j \leq q-1; \\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j-(q-1)} \,\,\ \textnormal{if} \ q\leq j \leq q+1;\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j-(q-1)} - \displaystyle\sum_{x\in \Bbb F_q}\,f_{j-(q+1)}(x) \,\,\ \textnormal{if} \ q+2\leq j \leq {q^2}-(q+2);\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j+2} \,\, \ \textnormal{if} \ j=q^{2}-(q+1);\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=0 \,\, \ \textnormal{if} \ j \geq q^{2}-q. \end{split} \] \end{thm}

\subsection{The case $p\equiv 1\pmod{4}$ with any $e$ or $p\equiv 3\pmod{4}$ with even $e$.} Let $s=\frac{p(q^2-1)}{2}$. Since $f_{n_1}\equiv f_{n_2} \pmod{x^q-x}$ when $n_1, n_2 >0$ and $n_1\equiv n_2 \pmod{s}$, we have the following from a computation similar to that of \eqref{E4.2}.
\begin{equation}\label{E4.2N} \begin{split} \displaystyle\sum_{n\geq 0} \,f_{n}\,z^n&=\displaystyle\frac{1}{1-z^{s}} \displaystyle\sum_{n=1}^{s}\,f_n\,z^n \end{split} \end{equation} Combining \eqref{E4.1} and \eqref{E4.2N} gives \[ \begin{split} \displaystyle\frac{1}{1-z^{s}} \displaystyle\sum_{n=1}^{s}\,f_n\,z^n \equiv \displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,\displaystyle\frac{(z^2-1)^{q-1-k}\,\,z^{k}}{(z^2-1)^{q-1} - z^{q-1}}\,\,x^k\Big]\,\,\pmod{x^q-x}, \end{split} \] i.e. \begin{equation}\label{E4} \begin{split} \displaystyle\sum_{n=1}^{s}\,f_n\,z^n \equiv \displaystyle\frac{z\,(z^{s}-1)}{z^2-1} +\,\,h(z)\,\,\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,(z^2-1)^{q-1-k}\,\,z^{k}\,\,x^k \,\,\pmod{x^q-x}, \end{split} \end{equation} where $$h(z)=\displaystyle\frac{z\,(z^{s}-1)}{(z^2-1)\,[(z^2-1)^{q-1}-z^{q-1}]}.$$ Note that \[ \begin{split} h(z) &= \displaystyle\frac{z\,(z^{s}-1)}{(z^2-1)\,[(z^2-1)^{q-1}-z^{q-1}]}\cr &= \displaystyle\frac{z\,(z^{s}-1)}{(z^2-1)^{q}-z^{q-1}(z^2-1)}\cr &= \displaystyle\frac{z\,(z^{s}-1)}{(z^{q-1}-1)\,(z^{q+1}+1)}\cr &= \displaystyle\frac{- z\,(z^{s+1}-z)}{(z-z^{q})\,(z^{q+1}+1)} \end{split} \] Summing both sides of \eqref{E4} as $x$ runs over $\Bbb F_q$, we have \[ \begin{split} \displaystyle\sum_{n=1}^{s}\,\Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big)\,z^n \equiv h(z)\,\,\displaystyle\sum_{k=1}^{q-1}\,\,(-1)^k\,\,(z^2-1)^{q-1-k}\,\,z^{k}\,\,\Big(\displaystyle\sum_{x\in \Bbb F_q}\,x^k\Big)\,\,\pmod{x^q-x}, \end{split} \] which implies \begin{equation}\label{E5} \begin{split} \displaystyle\sum_{n=1}^{s}\, \Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big) z^n &=\,\,- h(z)\,\,z^{q-1}. \end{split} \end{equation} From \eqref{E5} we have \begin{equation}\label{March25} \begin{split} \displaystyle\sum_{n=1}^{s}\, \Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big) z^n &=\displaystyle\frac{z^q\,(z^{s+1}-z)}{(z-z^{q})\,(z^{q+1}+1)}. \end{split} \end{equation} Now let $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)$ and $\displaystyle\sum_{k=1}^{s-q+2}\,b_kz^k=\displaystyle\frac{z^{s+1}-z}{z^{q-1}-1}$. Then we have the following. $$ b_k = \left\{ \begin{array}{ll} 1 & \textnormal{if}\,\,k=1+t(q-1),\,\,\,\,\textnormal{where}\,\,\,\,0\leq t\leq \frac{p(q+1)}{2}-1,\\[0.3cm] 0 & \textnormal{otherwise}. \end{array} \right. $$ Now \eqref{March25} can be written as follows. \begin{equation}\label{E6} \begin{split} (z^{q+1}+1)\,\,\displaystyle\sum_{n=1}^{s}\,d_n z^n &=-z^{q-1}\,\displaystyle\sum_{k=1}^{s-q+2}\,b_kz^k. \end{split} \end{equation}

\begin{prop} By comparing the coefficient of $z^i$ on both sides of \eqref{E6}, we have the following. \[ \begin{split} & d_j=0 \, \ \textnormal{if} \ 1\leq j \leq q-1; \\ & d_j=-b_{j-(q-1)} \ \textnormal{if} \ q\leq j \leq q+1; \\ & d_j=-b_{j-(q-1)} - d_{j-(q+1)} \ \textnormal{if} \ q+2\leq j \leq s-(q+1); \\ & d_j=-b_{j+2} \ \textnormal{if} \ j=s-q; \\ & d_j=0 \ \textnormal{if} \ j \geq s-(q-1). \end{split} \] \end{prop}

The following theorem is an immediate consequence of the above Proposition and the fact that $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)$.

\begin{thm} Let $f_n$ be the $n$-th Fibonacci polynomial. Assume that $p\equiv 1\pmod{4}$ with any $e$ or $p\equiv 3\pmod{4}$ with even $e$. Let $q=p^e$. Then we have the following.
\[ \begin{split} & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)= 0 \, \ \textnormal{if} \ 1\leq j \leq q-1; \\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j-(q-1)} \ \textnormal{if} \ q\leq j \leq q+1;\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j-(q-1)} - \displaystyle\sum_{x\in \Bbb F_q}\,f_{j-(q+1)}(x) \ \textnormal{if} \ q+2\leq j \leq s-(q+1);\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=-b_{j+2} \ \textnormal{if} \ j=s-q;\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=0 \ \textnormal{if} \ j \geq s-(q-1). \end{split} \] \end{thm}

\subsection{$q$ even} In this subsection, we assume that $q$ is even and compute the sum $\sum_{a\in \Bbb F_q}f_{n}(a)$. When $q$ is even, a computation similar to \eqref{E4.1} shows that \begin{equation}\label{NE4.1} \begin{split} \displaystyle\sum_{n=0}^{\infty}\,f_{n}(x)\,z^n&\equiv \displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\,\,\displaystyle\frac{(z^2-1)^{q-1-k}\,\,z^{k}}{(z^2-1)^{q-1} - z^{q-1}}\,\,x^{k} \Big]\,\,\,\pmod{x^q-x} \end{split} \end{equation} Since $f_{n_1}\equiv f_{n_2} \pmod{x^q-x}$ when $n_1, n_2 >0$ and $n_1\equiv n_2 \pmod{2q^2-2}$, we have the following. \begin{equation}\label{NE4.2} \begin{split} \displaystyle\sum_{n\geq 0} \,f_{n}\,z^n&= \displaystyle\sum_{n\geq 1} \,f_{n}\,z^n\cr &=\displaystyle\sum_{n=1}^{2q^2-2}\,\, \displaystyle\sum_{l\geq 0} \,f_{n+l(2q^2-2)}\,z^{n+l(2q^2-2)}\cr &\equiv \displaystyle\sum_{n=1}^{2q^2-2}\,\,f_n\,\, \displaystyle\sum_{l\geq 0}\,z^{n+l(2q^2-2)}\,\,\pmod{x^q-x}\cr &=\displaystyle\frac{1}{1-z^{2q^2-2}} \displaystyle\sum_{n=1}^{2q^2-2}\,f_n\,z^n \end{split} \end{equation} Combining \eqref{NE4.1} and \eqref{NE4.2} gives \[ \begin{split} \displaystyle\frac{1}{1-z^{2q^2-2}} \displaystyle\sum_{n=1}^{2q^2-2}\,f_n\,z^n \equiv \displaystyle\frac{z}{1-z^2}\,\,\Big[1+\displaystyle\sum_{k=1}^{q-1}\,\,\displaystyle\frac{(z^2-1)^{q-1-k}\,\,z^{k}}{(z^2-1)^{q-1} - z^{q-1}}\,\,x^{k}\Big]\,\,\pmod{x^q-x}, \end{split} \] i.e. \begin{equation}\label{NE1} \begin{split} \displaystyle\sum_{n=1}^{2q^2-2}\,f_n\,z^n \equiv \displaystyle\frac{z\,(z^{2q^2-2}-1)}{z^2-1} +\,\,h(z)\,\,\displaystyle\sum_{k=1}^{q-1}\,\,(z^2-1)^{q-1-k}\,\,z^{k}\,\,x^{k} \,\,\pmod{x^q-x}, \end{split} \end{equation} where $$h(z)=\displaystyle\frac{z\,(z^{2q^2-1}-1)}{(z^2-1)\,[(z^2-1)^{q-1}-z^{q-1}]}.$$ Note that \[ \begin{split} h(z) &= \displaystyle\frac{z\,(z^{2q^2-1}-1)}{(z^2-1)\,[(z^2-1)^{q-1}-z^{q-1}]}\cr &= \displaystyle\frac{z\,(z^{2q^2-1}-1)}{(z^2-1)^{q}-z^{q-1}(z^2-1)}\cr &= \displaystyle\frac{z\,(z^{2q^2-1}-1)}{(z^{q-1}-1)\,(z^{q+1}+1)}\cr &= \displaystyle\frac{- z\,(z^{2q^2}-z)}{(z-z^{q})\,(z^{q+1}+1)}\cr &= \displaystyle\frac{(z^{2q^2-2}-z)}{(z^{q-1}-1)\,(z^{q+1}+1)}. \end{split} \] Summing both sides of \eqref{NE1} as $x$ runs over $\Bbb F_q$, we have \[ \begin{split} \displaystyle\sum_{n=1}^{2q^2-2}\,\Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big)\,z^n \equiv h(z)\,\,\displaystyle\sum_{k=1}^{q-1}\,\,(z^2-1)^{q-1-k}\,\,z^{k}\,\,\Big(\displaystyle\sum_{x\in \Bbb F_q}\,x^{k}\Big)\,\,\pmod{x^q-x}, \end{split} \] which implies \begin{equation}\label{NE2} \begin{split} \displaystyle\sum_{n=1}^{2q^2-2}\, \Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big) z^n &=\,\,h(z)\,\,z^{q-1}. \end{split} \end{equation} From \eqref{NE2} we have \[ \begin{split} \displaystyle\sum_{n=1}^{2q^2-2}\, \Big(\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)\Big) z^n &=\,\,\,\,\displaystyle\frac{(z^{2q^2+q-3}-z^q)}{(z^{q-1}-1)\,(z^{q+1}+1)}\,.
\end{split} \] Now let $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)$. Then the above equation can be written as follows. \begin{equation}\label{NE3} \begin{split} (z^{q-1}-1)\,(z^{q+1}+1)\,\,\displaystyle\sum_{n=1}^{2q^2-2}\,d_n z^n &=\,\,\,\,{(z^{2q^2+q-3}-z^q)}\,. \end{split} \end{equation}

\begin{prop} By comparing the coefficient of $z^i$ on both sides of \eqref{NE3}, we have the following. \[ \begin{split} & d_j=1+ d_{q}\ \textnormal{if} \ j=1; \\ & d_j=1+ d_{2q^2-4}-d_{2q^2-q-3}\ \textnormal{if} \ j=2q^2-2;\\ & d_j=0\ \textnormal{otherwise}. \end{split} \] \end{prop}

The following theorem is an immediate consequence of the above Proposition and the fact that $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)$.

\begin{thm}\label{NT2.6} Let $q$ be even and let $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n(x)$. Then we have the following. \[ \begin{split} & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)= 1+ d_{q}\ \textnormal{if} \ j=1; \\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=1+ d_{2q^2-4}-d_{2q^2-q-3}\ \textnormal{if} \ j=2q^2-2;\\ & \displaystyle\sum_{x\in \Bbb F_q}\,f_j(x)=0\ \textnormal{otherwise}. \end{split} \] \end{thm}

\begin{rmk} \textnormal{When $q$ is even, it is easy to see that the sum $\sum_{a\in \Bbb F_q}f_{n}^2(a)$ agrees with Theorem~\ref{NT2.6}, where $d_n=\displaystyle\sum_{x\in \Bbb F_q}\,f_n^2(x)$.} \end{rmk}

\noindent \textbf{Open Question:} Compute $\sum_{a\in \Bbb F_q}f_{n}^2(a)$ when $q$ is odd.

\begin{rmk} \textnormal{When $q$ is odd, computing the generating function $\displaystyle\sum_{n=0}^{\infty}\,f_n(x)^2\,z^n$ turns out to be an arduous task since a useful identity for $f_{n-1}(x)f_{n-2}(x)$ is not known for Fibonacci polynomials.} \end{rmk}

\end{document}
\begin{document} \title{The Prevalence of Persistent Tangles.} \author{Louis H. Kauffman\\ Department of Mathematics, Statistics and Computer Science \\ 851 South Morgan Street \\ University of Illinois at Chicago\\ Chicago, Illinois 60607-7045 USA\\ and\\ Department of Mechanics and Mathematics\\ Novosibirsk State University\\Novosibirsk, Russia \\ \texttt{[email protected]}\\ and\\ Pedro Lopes\\ Center for Mathematical Analysis, Geometry, and Dynamical Systems, \\ Department of Mathematics, \\ Instituto Superior T\'{e}cnico, Universidade de Lisboa\\ 1049-001 Lisbon, Portugal \\ \texttt{[email protected]}} \maketitle

\begin{abstract} This article addresses persistent tangles. These are tangles whose presence in a knot diagram forces that diagram to be knotted. We provide new methods for constructing persistent tangles. Our techniques rely mainly on the existence of non-trivial colorings for the tangles in question. Our main result in this article is that any knot admitting a non-trivial coloring gives rise to persistent tangles. Furthermore, we discuss when these persistent tangles are non-trivial. \end{abstract}

Keywords: knots, tangles, persistent tangles, colorings, irreducible tangles

Mathematics Subject Classification 2010: 57M25

\section{Introduction}\label{sec:intro}

This article addresses the notion of \emph{persistent tangle}, by which we mean a tangle whose appearance in a knot diagram forces that diagram to be knotted. We show that persistent tangles are prevalent, as subtangles, in diagrams of non-trivially colored knots.

This article also addresses the following issue: local features that provide global information. For instance, we have in mind the identification of entanglement in long polymers or DNA. The size of these long molecules complicates the identification of entanglement (global information). Therefore, the recognition of persistent tangles (local feature) should be relevant in this context.

The techniques in the proofs are mainly elaborations of the following idea: we endow our tangles with specific non-trivial colorings that assign the same color to the start- and end-points of the tangle, over an appropriate modulus, see \cite{SilverWilliams}. This coloring can be extended (monochromatically) to the rest of the knot diagram it may belong to. Thus, that diagram is non-trivially colored, and therefore knotted. We often use Fox colorings, but not exclusively. In a Fox coloring the colors are in $\mathbf{Z}/N\mathbf{Z}$ for an appropriate positive integer $N$ (or simply in $\mathbf{Z}$) and the sum of the colors assigned to the undercrossing arcs at a crossing is twice the color assigned to the overcrossing arc \cite{lhKauffman}. The term \emph{knot} in this article means a $1$-component link.

Along with giving new constructions for persistent tangles, we also formulate a conjecture that ``irreducible tangles are persistent'' (see Section \ref{sec:results} for the definition of irreducibility and the precise statement of this conjecture). A solution to this conjecture appears to require techniques beyond the reach of the present paper.

\subsection{Acknowledgements.}\label{subsec:ack} Kauffman's work was supported by the Laboratory of Topology and Dynamics, Novosibirsk State University (contract no. 14.Y26.31.0025 with the Ministry of Education and Science of the Russian Federation). Lopes acknowledges support from FCT (Funda\c c\~ao para a Ci\^encia e a Tecnologia), Portugal, through project FCT PTDC/MAT-PUR/31089/2017, ``Higher Structures and Applications''.
\section{First results}\label{sec:1st-results}

We now recall the basic definitions and identify the original persistent tangle and the trivial ones.

\begin{definition} A coloring of a knot $K$ by a quandle $X$ is a homomorphism from the fundamental quandle of the knot $K$ to the quandle $X$ \cite{SMatveev, DJoyce}. A non-trivial coloring is one such homomorphism whose image is not a singleton. \end{definition}

\begin{rem}\label{thm:non-trivialcols.} The unknot does not admit non-trivial colorings. Note however that unlinks can sometimes admit non-trivial colorings. \end{rem}

\begin{proof} The standard diagram of the unknot is a circle on the plane without self-intersections. Therefore, the colorings it admits involve only one color. \end{proof}

\begin{rem} We will use Remark \ref{thm:non-trivialcols.} in the following form. If a knot admits non-trivial colorings then it is non-trivial. \end{rem}

\begin{definition}\label{def:tangle} A tangle is an embedding of one $($resp.\ two$)$ arc$($s$)$ in a ball with the fixed end points on the surface of the ball. Two tangles are equivalent if they are related by an ambient isotopy, keeping the endpoints fixed. The diagrammatical counterpart is a piece of knot diagram on a disc, with the endpoints on the boundary of the disc. The Reidemeister moves are restricted to the disc with the endpoints fixed on the boundary of the disc. See Figures \ref{fig:krebes}, \ref{fig:noname}, \ref{fig:persist-1-arc}, and \ref{fig:persist-1-arc-bis} for illustrative examples. In a subsequent article we will look into the $n$-tangle case with $n>2$. \end{definition}

\begin{definition}\label{def:persistent-tangle} A persistent tangle is a tangle whose presence in a knot diagram implies this knot is non-trivial. Figure \ref{fig:noname} provides an example of a persistent tangle. \end{definition}

The original persistent tangle is due to Krebes \cite{Krebes} and is depicted in Figure \ref{fig:krebes}, see also Figure \ref{fig:noname}. Krebes proved persistence of tangles like those depicted in Figures \ref{fig:krebes} and \ref{fig:noname} by way of the bracket polynomial. Later Silver and Williams proved the same sort of result by way of Fox colorings \cite{SilverWilliams}. Our approach here is in the spirit of \cite{SilverWilliams} but we consider other colorings besides the Fox colorings. We acknowledge also the work of other authors in related matters \cite{SMAbernathy, KrebesSilverWilliams, KauffmanGoldman, PrzytyckiSilverWilliams, Ruberman, SilverWilliams2}. In \cite{KauffmanGoldman} invariants of knots and tangles are formulated via sums over weighted trees in the same way as in the more recent paper by Silver and Williams, \cite{SilverWilliams2}, and it is shown how the tangle fraction and some generalizations arise by using the checkerboard graph for knots and links. Furthermore, \cite{KauffmanGoldman} interprets this combinatorics in terms of the current flow in electrical circuits. It is possible that there is more work to be done about persistent tangles in this domain.

\begin{figure} \caption{The original persistent tangle, by Krebes. The non-trivial coloring at issue uses the dihedral quandle of order $3$.} \label{fig:krebes} \end{figure}

Note that the Krebes example is not a rational tangle. In fact, no persistent tangle can be rational \cite{HenrichKauffman} since any rational tangle can be inserted into an unknot \cite{LambropoulouKauffman}. The reader may enjoy proving this as an exercise.

\begin{figure} \caption{A persistent tangle.
The non-trivial coloring at issue uses the dihedral quandle of order $p$, for an odd prime $p$.} \label{fig:noname} \end{figure}

We now elaborate on a number of constructions that obviously give rise to persistent tangles. We call these trivial persistent tangles. We start with the case of a $1$-tangle. We recall that the \emph{genus of a knot} is the least genus of an oriented surface whose boundary is the knot at issue.

\begin{theorem}\label{thm:genus} Genus is additive under connected sums of knots. A knot is trivial if and only if its genus is $0$. \end{theorem}

\begin{proof} These are known results. See \cite{Lickorish} for a proof. \end{proof}

\begin{cor}\label{cor:genus} A non-trivial knot gives rise to a persistent $1$-tangle by disconnecting any of its arcs in one of its diagrams. This is our first instance of a trivial persistent tangle. \end{cor}

\begin{proof} Attaching to our $1$-tangle a second $1$-tangle amounts to performing a connected sum of a non-trivial knot with another knot. Applying Theorem \ref{thm:genus} concludes the proof. See Figure \ref{fig:persist-1-tangle}, disregarding colorings. \end{proof}

\begin{cor}\label{cor:persistent-colored-1-tangles} Every knot admitting a non-trivial coloring yields a persistent $1$-tangle via disconnecting any one of its arcs. \end{cor}

\begin{proof} If the knot admits a non-trivial coloring then it is non-trivial. We can now apply Corollary \ref{cor:genus} to conclude the proof. See Figure \ref{fig:persist-1-tangle}, again. \end{proof}

\begin{figure} \caption{Left-hand side: trefoil with a non-trivial coloring - the non-trivial knot. Middle: the non-trivial knot converted into a $1$-tangle by disconnecting one arc. Right-hand side: identification of the $1$-tangle in a new knot implies the new knot is non-trivial.} \label{fig:persist-1-tangle} \end{figure}

Corollary \ref{thm:persistent-2-tangles} is a different view of the fact expressed in Corollary \ref{cor:persistent-colored-1-tangles}, yet it paves the way for the subsequent material.

\begin{cor}\label{thm:persistent-2-tangles} Assume $K$ is a knot admitting non-trivial colorings. Then $K$ gives rise to a persistent $2$-tangle by cutting an arc at two distinct points. \end{cor}

\begin{proof} Since $K$ admits a non-trivial coloring, there exists a diagram $D$ of $K$ which supports such a coloring. We disconnect one arc at two distinct points thereby producing a tangle with two start-points and two end-points, all of them receiving the same color, say $a$. Clearly, if this tangle is found in another knot diagram, this knot diagram is non-trivially colored. The arcs of the new diagram which do not belong to the tangle are monochromatically colored with $a$; the tangle part of the new diagram is colored as in the original knot diagram. Figure \ref{fig:persist-1-arc} provides an illustration of this process. \begin{figure} \caption{Left-hand side: a non-trivial coloring (Fox tricoloring) on a knot (trefoil) diagram. Middle: disconnecting one of the arcs at two points, producing a persistent tangle. Right-hand side: the persistent tangle inside a new non-trivial knot. This example produces a connected sum of knots. Note the difference from Figure \ref{fig:persist-1-tangle}.} \label{fig:persist-1-arc} \end{figure} \end{proof}

The current article is an extension of these results, especially Corollary \ref{thm:persistent-2-tangles}, but we will disconnect two distinct arcs of (certain) knot diagrams instead of one, in order to produce persistent $2$-tangles.
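The hypothesis in the corollaries above, namely that the knot admits a non-trivial coloring, can be checked by brute force on small diagrams. The Python sketch below (ours, purely illustrative; the arc labels and the crossing list encode the standard trefoil diagram, with each crossing recorded as (over-arc, under-arc, under-arc)) enumerates all Fox colorings modulo $3$; it finds $9$ colorings, of which $6$ are non-trivial, so the trefoil satisfies the hypothesis of Corollaries \ref{cor:persistent-colored-1-tangles} and \ref{thm:persistent-2-tangles}.

\begin{verbatim}
# A brute-force sketch (ours): enumerate Fox colorings of the standard trefoil
# modulo 3.  At each crossing the two undercrossing colors must sum to twice
# the overcrossing color; a coloring is non-trivial if it uses more than one color.
from itertools import product

MODULUS = 3
ARCS = ["a", "b", "c"]
# Each crossing of the standard trefoil diagram: (over-arc, under-arc, under-arc).
CROSSINGS = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

def fox_colorings(crossings, arcs, modulus):
    for values in product(range(modulus), repeat=len(arcs)):
        colors = dict(zip(arcs, values))
        if all((colors[u1] + colors[u2] - 2 * colors[over]) % modulus == 0
               for over, u1, u2 in crossings):
            yield colors

all_colorings = list(fox_colorings(CROSSINGS, ARCS, MODULUS))
nontrivial = [c for c in all_colorings if len(set(c.values())) > 1]
print(len(all_colorings), len(nontrivial))   # prints: 9 6
\end{verbatim}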
\section{Results}\label{sec:results}

The next result opens up the possibility of finding persistent tangles inside knot diagrams that are not connected sums. In this way, we hope to enlarge the variety of persistent tangles. In particular we hope to obtain persistent tangles which give rise to knots which are not connected sums.

\begin{theorem}\label{thm:1st-theorem} If a knot $K$ admits a non-trivial coloring over a diagram $D$ with distinct arcs bearing the same color, then it gives rise to a persistent tangle by cutting at one point each of the two arcs that have the same color. \end{theorem}

\begin{proof} We distinguish two situations. \begin{enumerate} \item In the first one, the two arcs receiving the same color are both adjacent to the same face of the diagram, see Figure \ref{fig:persist-2-tangle} and Figure \ref{fig:persist-1-arc-bis} (the two tangle diagrams). We then disconnect each of the arcs referred to, thereby obtaining a tangle. This tangle can be non-trivially colored, since it stems from a knot that can be non-trivially colored. Thus, if this tangle is found in a new knot diagram, the rest of this diagram can be monochromatically colored. The tangle obtained is thus a persistent tangle. This concludes the proof in this situation. \item In the second situation, the two arcs receiving the same color are adjacent to different faces of the diagram. In this case we perform a finite number of type II Reidemeister moves in order to bring one of the arcs over to the vicinity of the other, see Figure \ref{fig:persist-1-arc-bis} (the two knot diagrams). Also note that the new coloring obtained by consistently recoloring after each of the type II Reidemeister moves is a non-trivial coloring, since recoloring after Reidemeister moves (colored Reidemeister moves) preserves non-triviality, \cite{pLopes}. Now there are two arcs bearing the same color and both are adjacent to the same face of the diagram. We can then apply the reasoning in case $1$ to conclude the proof in this situation. \end{enumerate} The proof is complete. \end{proof}

\begin{figure} \caption{Left-hand side: the knot diagram equipped with a non-trivial coloring (not shown) with arcs $a_1$ and $a_2$ that receive the same color. Right-hand side: the $2$-tangle obtained by disconnecting arcs $a_1$ and $a_2$; $A_1, A'_1, A_2, A'_2$ are start- and/or end-points after disconnecting arcs $a_1$ and $a_2$. The shadowed regions do not contain any arcs or crossings of the diagram.} \label{fig:persist-2-tangle} \end{figure}

\begin{figure} \caption{We enumerate the diagrams in this Figure $1$ through $4$, starting from the left-most. $1$: the diagram of a trefoil that evolves into ($2$) an equivalent diagram via a type II Reidemeister move. $3$: another type II Reidemeister move, and we obtain two arcs bearing the same color and adjacent to the same face. The appropriate disconnections are performed, thereby obtaining a tangle. $4$: (type II) Reidemeister moves are performed on the tangle (endpoints are kept fixed).} \label{fig:persist-1-arc-bis} \end{figure}

We remark that in spite of the conditions of Theorem \ref{thm:1st-theorem} being satisfied, we may end up with unexpected outputs, like unlinked components, see Figure \ref{fig:8-16}.

\begin{figure} \caption{Top part of the Figure: knot $8_{16}$.} \label{fig:8-16} \end{figure}

In Corollary \ref{cor:linking} we give a sufficient condition for the output to display linking among the components.
\begin{cor}\label{cor:linking} Assume $K$ is a knot admitting a non-trivial coloring over a diagram such that two distinct arcs receive the same color. Then there is an equivalent diagram of $K$ with two arcs receiving the same color and such that cutting at one point each of the two arcs that have the same color yields a tangle with non-zero linking number. \end{cor} \begin{proof} Look at Figure \ref{fig:6-2}. The right number of Type II Reidemeister moves increases the linking number while preserving the desired color on the arcs to be disconnected. \end{proof} \begin{figure} \caption{Top: knot $6_2$ non-trivially colored mod $11$; the boxed crossing is worked out below via type II Reidemeister moves and later inserted into the diagram. In the bottom we have a $2$-tangle whose arcs are knotted.} \label{fig:6-2} \end{figure} Corollary \ref{cor:rat-persist-tangle} gives another systematic way of producing persistent tangles in the spirit of Corollary \ref{thm:persistent-2-tangles}. \begin{cor}\label{cor:rat-persist-tangle} Given any rational tangle $T$, the tangle addition of $T$ to its mirror image $T^{\star}$ produces a persistent tangle. \end{cor} \begin{proof} See Figure \ref{fig:T-Tstar} for an illustration. \begin{figure} \caption{A persistent tangle. The non-trivial coloring at issue is by the dihedral quandle of order $7$.} \label{fig:T-Tstar} \end{figure} \end{proof} \begin{definition}\label{def:rationallyirred} A tangle is {\bf rationally irreducible} if no ambient isotopy plus adjacent end twisting can make it into a tangle with fewer crossings, and it is neither an infinity tangle nor a zero tangle. Note that rational knots and links are by definition closures of rational tangles. Rational tangles are those tangles that are rationally reducible. But non-rational tangles can sometimes reduce to rational knots (we have specific examples in the paper). \end{definition} \begin{definition}\label{def:irred} A tangle $T$ is said to be {\bf irreducible} if it is rationally irreducible, without local knots, and if whenever the numerator $N(T)$ or denominator $D(T)$ closure has one component, then it is a non-trivial knot. \end{definition} \begin{conj} An irreducible tangle is a persistent tangle. \end{conj} A small example of an irreducible tangle whose persistence we are not yet able to prove is given in Figure \ref{fig:irred-2-tangle}. Many of the persistent tangles already discussed in this paper are irreducible. However, a proof of this conjecture would definitely require techniques beyond the coloring approach of the present paper. For example, the tangle in Figure \ref{fig:non-rat-tangle} is shown, in that figure, to admit no non-trivial coloring that constantly labels all of its tangle ends. \begin{figure} \caption{Left: Figure-$8$ knot; numerator closure of tangle $T$ (notation, $N(T)$) in the middle. Middle: tangle $T$. Right: denominator closure of $T$, $D(T)$ (it is the trefoil). Remark: both $N(T)$ and $D(T)$ are non-trivial knots. $T$ is thus an example of an irreducible tangle. However, if we color the slashed arcs with the same color then this assignment will necessarily extend to a trivial coloring.
Therefore, coloring arguments are not enough to prove this tangle is persistent - should it be persistent.} \label{fig:irred-2-tangle} \end{figure} \begin{figure} \caption{Cutting open a rational knot to give a non-rational tangle that is persistent and reduces to the Hercules tangle.} \label{fig:irred-persistent} \end{figure} \begin{figure} \caption{The same $2$-tangle with the four different assignments of orientations to its arcs; the configurations on the left-hand side are prepared for denominator closure whereas the ones on the right-hand side are prepared for numerator closure, for instance. We try to equip them with a non-trivial coloring keeping the start- and end-points with the same color, $a$. We arrive at an inconsistency: $e=c$ in each instance, which implies the colorings have to be trivial, again in each instance.} \label{fig:non-rat-tangle} \end{figure} Figure \ref{fig:hercules-over} shows yet another instance. It depicts a non-trivial knot, since its diagram features Krebes' original tangle, although in a more elaborate way. \begin{figure} \caption{A non-trivial knot, since the diagram features Krebes' original tangle. A subtlety: Krebes' original tangle lies over other portions of the diagram.} \label{fig:hercules-over} \end{figure} \end{document}
\begin{document} \draft \preprint{} \title{Two-mode heterodyne phase detection} \author{G. M. D'Ariano\cite{dar} and M. F. Sacchi\cite{sac}} \address{Dipartimento di Fisica ``Alessandro Volta'', Universit\`a degli Studi di Pavia, Via A. Bassi 6, I--27100 Pavia, Italy} \maketitle \begin{abstract} We present an experimental scheme that achieves ideal phase detection on a two-mode field. The two modes $a$ and $b$ are the signal and image band modes of an heterodyne detector, with the field approaching an eigenstate of the photocurrent $\hat{Z}=a+b^{\dag}$. The field is obtained by means of a high-gain phase-insensitive amplifier followed by a high-transmissivity beam-splitter with a strong local oscillator at the frequency of one of the two modes. \end{abstract} \pacs{PACS number(s): 03.65.Bz, 42.50.Dv} \begin{multicols}{2} The quantum-mechanical measurement of the phase of the radiation field is the essential problem of high sensitive interferometry, and has received much attention in quantum optics \cite{rev1,rev2}. Most of the work has been devoted to measurements on a single-mode electromagnetic field, where the measurement cannot be achieved exactly, even in principle, due to the lack of a unique self-adjoint operator \cite{pom}. \par It can be readily recognized that the absence of a proper self-adjoint operator in the one-mode case is mainly due to the semiboundedness of the spectrum of the number operator \cite{shsh,ban}, which is canonically conjugated to the phase in the sense of a Fourier-transform pair \cite{shap}. This observation discloses the route toward an exact phase measurement in terms of two-mode fields, where a phase-difference operator becomes conjugated to an unbounded number-difference operator \cite{luis}. Moreover, as already noticed in Ref.~\cite{shwa}, a two mode field corresponds to a complex photocurrent $\hat Z$ such that $[\hat{Z},\hat{Z}^{\dag}]=0$, with a self-adjoint phase operator $\hat{\phi}=\arg(\hat{Z})$ that can concretely be measured. Despite its promising possibilities, not much work has been devoted to the two-mode phase detection, and attention has been focused mostly on the algebraic structure the photocurrents (see Refs. \cite{ban,shap,luis} and references therein). Only in Ref. \cite{shwa} a concrete experimental set-up has been devised, based on unconventional field heterodyning with the signal and image-band modes both nonvacuum. \par Here in this letter, following the route opened by Ref.~\cite{shwa}, we study the eigenstates of the heterodyne photocurrent $\hat{Z}$ and provide an experimental scheme that approaches them. We then analyze the measurement of the two-mode phase $\hat{\phi}=\arg(\hat{Z})$ showing that the ideal sensitivity limit $\delta \phi =1/\overline{n}$ can be achieved for large mean number of photons $\overline{n}$. \par It has been proved by Yuen and Shapiro \cite{yuen} that the output photocurrent $\hat Z$ of a heterodyne detector (for unit quantum efficiency, and in the limit of strong local oscillator and vanishing beam splitter reflectivity) is just the operator $\hat{Z}=a+b^{\dag}$, where $a$ denotes (the annihilator~of) the signal mode, and $b$ the image-band mode. In ordinary heterodyning the image-band mode $b$ is vacuum, and is responsible for the additional 3dB noise. Here, similarly to Ref.~\cite{shwa}, we use the heterodyne detector in an unconventional way, namely with a nonvacuum $b$ mode, and look for field states which are eigenvectors of the current $\hat Z$. 
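\par Note that $[\hat{Z},\hat{Z}^{\dag}]=[a,a^{\dag}]-[b,b^{\dag}]=1-1=0$, so that $\hat{Z}$ is a normal operator: its real and imaginary parts commute and can be measured jointly, and its (generalized) eigenvectors form a complete orthogonal set, as made explicit below.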
\par \noindent It is easy to check that the following vector \cite{shwa} \begin{eqnarray} |z\rangle\!\rangle &=& \int_{-\infty}^{+\infty}\frac{dx}{\sqrt{\pi}} e^{2ix\hbox{\scriptsize Im}z}|x\rangle_{0}\otimes|\hbox{Re}z-x \rangle_{0}\nonumber \\ &=&\int_{-\infty}^{+\infty}\frac{dy}{\sqrt{\pi}} e^{-2iy\hbox{\scriptsize Re}z}|y+\hbox{Im}z\rangle_{\pi/2} \otimes |y\rangle_{\pi /2}\;\label{zeta} \end{eqnarray} is eigenvector of $\hat Z$ with complex eigenvalue $z$. In Eq.~ (\ref{zeta}) $|\psi \rangle \otimes |\varphi \rangle $ denotes a vector in the two-mode Hilbert space $\cal H=\cal H_{a}\otimes\cal H_{b}$, and $|x\rangle_{\phi}$ represents an eigenvector of the quadrature $\hat{X}_{\phi}=\frac{1}{2}(c^{\dag}e^{i\phi}+\hbox{h.c.})$ of the pertaining mode $c=a,b$. The notation $|\ \rangle\!\rangle$ remembers that the state is a two-mode one. The set $\{|z\rangle\!\rangle\}$ is complete orthonormal for $\cal H$, with scalar product: \begin{eqnarray} \langle\!\langle z|z'\rangle\!\rangle =\delta^{(2)}(z-z')\equiv \delta (\hbox{Re}z-\hbox{Re}z')\,\delta (\hbox{Im}z-\hbox{Im}z')\,.\!\!\! \label{delta} \end{eqnarray} In the number representation the vector (\ref{zeta}) reads as follows \begin{eqnarray} |z\rangle\!\rangle = e^{i\hbox{\scriptsize Re}z\hbox{\scriptsize Im}z} \sum_{n,m=0}^\infty \hbox{c}_{n,m}(z,\overline{z})|n\rangle\otimes|m\rangle\;, \label{eigen} \end{eqnarray} with \begin{eqnarray} &&\hbox{c}_{n,n+\lambda}(z,\overline{z})=\overline{\hbox{c}}_{n+\lambda,n} (z,\overline{z})=\nonumber\\ &&=\frac{(-)^{n}}{\sqrt{\pi}}\sqrt{\frac{n!}{(n+\lambda)!}} \,\overline{z}^{\lambda}\,\hbox{L}_{n}^{\lambda}(|z|^{2})\,\exp \left(-\frac{1}{2}|z|^{2}\right) \label{cn}\;. \end{eqnarray} Eq. (\ref{cn}) is obtained from Eq. (\ref{zeta}) using the number representation of the quadrature \begin{eqnarray} {}_{\phi }\langle x|n\rangle = \left( \frac{2}{\pi}\right)^{1/4}\frac {e^{in\phi}}{\sqrt{2^{n}n!}}e^{-x^2}\hbox{H}_{n}(\sqrt{2}\,x)\;,\label{herm} \end{eqnarray} along with the following identity between Hermite and Laguerre polynomials \begin{eqnarray} &&\int_{-\infty}^{+\infty}\frac{dx}{\sqrt{\pi}}e^{-x^{2}} \hbox{H}_{n}(x+y)\hbox{H}_{n+\lambda}(x+t)\nonumber\\ &&=2^{n+\lambda }\,n!\,\hbox{L}_{n}^{\lambda}(-2yt)\,t^{\lambda}\;.\label{lag} \end{eqnarray} The Dirac-normalized states $|z\rangle\!\rangle$ have infinite total number of photons, and we seek physically realizable states approaching $|z\rangle\!\rangle$ for infinite photon numbers. The eigenstate corresponding to zero eigenvalue is given by: \begin{eqnarray} |0\rangle\!\rangle=\frac{1}{\sqrt{\pi}}\sum_{n=0}^{\infty}(-)^{n} |n\rangle\otimes|n\rangle\;.\label{twin} \end{eqnarray} This is just the ``twin-beams'' at the output of a phase-insensitive amplifier (PIA) in the limit of infinite gain \cite{mauro1}. One has \begin{eqnarray} |0\rangle\!\rangle=\lim_{\lambda\rightarrow 1^{-}} |0\rangle\!\rangle_{\lambda}\;, \end{eqnarray} with \begin{eqnarray} |0\rangle\!\rangle_{\lambda}&=&(1-\lambda ^{2})^{1/2} \sum_{n=0}^{\infty}(-\lambda)^{n}|n\rangle\otimes|n\rangle= \nonumber \\ &=&\exp [\hbox{tanh}^{-1}\,\lambda\,(ab-a^{\dag}b^{\dag})]\, |0\rangle\otimes|0\rangle\;.\label{zero} \end{eqnarray} In the parametric approximation of infinite classical (undepleted) pump the modes $a$ and $b$ are identified with a couple of signal and idler modes of the amplifier (the gain is $(1-\lambda ^{2})^{-1}$). Apart from an irrelevant phase factor, the eigenstate $|z\rangle\!\rangle$ can be generated by $|0\rangle\!\rangle$ upon displacing either $a$ or $b$. 
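\par As a numerical sanity check (ours, not part of the original derivation; it assumes Python with NumPy and a Fock truncation dimension $N$ large enough that $\lambda ^{N}$ is negligible), one can verify that the normalized twin-beams state $|0\rangle\!\rangle_{\lambda}$ of Eq.~(\ref{zero}) is an approximate null eigenvector of $\hat{Z}=a+b^{\dag}$: a direct computation gives $\|\hat{Z}\,|0\rangle\!\rangle_{\lambda}\|=[(1-\lambda )/(1+\lambda )]^{1/2}$, which vanishes for $\lambda \rightarrow 1^{-}$.
\begin{verbatim}
import numpy as np

def residual(lam, N=50):
    # truncated annihilation operator: a|n> = sqrt(n)|n-1>
    a = np.diag(np.sqrt(np.arange(1, N)), 1)
    I = np.eye(N)
    Z = np.kron(a, I) + np.kron(I, a.T)      # Z = a + b^dagger
    # twin beams |0>>_lambda = sqrt(1-lam^2) * sum_n (-lam)^n |n>|n>
    psi = np.zeros(N * N)
    for n in range(N):
        psi[n * N + n] = (-lam) ** n
    psi *= np.sqrt(1.0 - lam ** 2)
    return np.linalg.norm(Z @ psi)

for lam in (0.5, 0.8, 0.9):
    print(lam, residual(lam), np.sqrt((1 - lam) / (1 + lam)))
\end{verbatim}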
Displacing the mode $a$ we have \begin{eqnarray} |z\rangle\!\rangle=e^{i\phi_{z}}e^{za^{\dag}-\overline{z}a} |0\rangle\!\rangle\;.\label{disp} \end{eqnarray} The physical (normalizable) state $|z\rangle\!\rangle_{\lambda}$ approaching $|z\rangle\!\rangle $ for infinite gain is obtained in the same way \begin{eqnarray} |z\rangle\!\rangle_{\lambda}=e^{i\phi_{z}}e^{za^{\dag}-\overline{z}a} (1-\lambda ^{2})^{1/2} \sum_{n=0}^{\infty}(-\lambda)^{n}|n\rangle\otimes|n\rangle \;.\label{zetal} \end{eqnarray} The displacement in Eq. (\ref{zetal}) can be achieved by combining the ``twin-beams'' $|0\rangle\!\rangle_\lambda $ with a strong coherent local oscillator $|\beta\rangle \ (\beta \rightarrow \infty)$ in a beam splitter with a transmissivity $\tau \rightarrow 1$, such that $|\beta|\sqrt{1-\tau}=|z|$ (the local oscillator is at the frequency of the signal mode $a$). \par The experimental set-up to generate the state (\ref{zetal}) is sketched in Fig.~1. The state (\ref{zetal}) has average number of photons \begin{eqnarray} \overline{n}={}_{\lambda}\langle\!\langle z|a^{\dag}a+b^{\dag}b |z\rangle\!\rangle {}_{\lambda}=|z|^{2}+\frac{2\lambda^2}{1-\lambda^2} \;.\label{num} \end{eqnarray} The state (\ref{zetal}) is now impinged into a heterodyne detector with signal mode $a$ and image-band mode $b$. The probability density of getting the value $z$ for the output photocurrent $\hat Z$ with the field in the state $|w\rangle\!\rangle_{\lambda}$ is given by \begin{eqnarray} |\langle\!\langle z|w\rangle\!\rangle_{\lambda}|^{2} &=&(1-\lambda ^{2})\left|\sum_{n=0}^{\infty}(-\lambda)^{n} \,\hbox{c}_{n,n}(z-w,\overline{z-w})\right|^{2}\nonumber\\ &=&\frac{1-\lambda^{2}}{\pi}\exp{(-|z-w|^{2})} \left|\sum_{n=0}^{\infty}\lambda^{n}\hbox{L}_{n}(|z-w|^{2})\right|^2 \nonumber \\ &=&\frac{1}{\pi\Delta_{\lambda}^{2}}\exp\left( {-\frac{|z-w|^{2}} {\Delta_{\lambda}^{2}}}\right) \;\label{prob} \end{eqnarray} where \begin{eqnarray} \Delta_{\lambda}^{2}=\frac{1-\lambda }{1+\lambda }\;.\label{D} \end{eqnarray} In the limit $\lambda \rightarrow 1^-$ one has that $|\langle\!\langle z|w\rangle\!\rangle_{\lambda}|^{2}\rightarrow \delta ^{(2)}(z-w)$, confirming that the state $|w\rangle\!\rangle_{\lambda}$ approaches the eigenstate $|w\rangle\!\rangle$ of the current $\hat Z$\@. \par The detection of the phase $\hat{\phi}=\arg(\hat{Z})$ is described by the marginal probability density of (\ref{prob}), namely \begin{eqnarray} &&p(\phi)=\frac{1}{\pi \Delta_{\lambda}^{2}}\int_{0}^{+\infty}dr \,r\,\exp\left( -{|re^{i\phi}-|w|e^{i\theta}|^2 \over \Delta_{\lambda}^{2}}\right) \nonumber \\ &&={1\over 2\pi}e^{-{|w|^{2}\over\Delta_{\lambda}^{2} }} +{|w|\over\pi\Delta_{\lambda} }\cos(\phi-\theta) \nonumber \\&&\times{\sqrt{\pi}\over 2} \,\left[1+\hbox{erf}\left({|w|\cos(\phi -\theta)\over\Delta_{\lambda} }\right) \right]\,e^{-{|w|^{2}\over \Delta_{\lambda}^{2}}\sin ^{2}(\phi -\theta)} \;,\label{pfi} \end{eqnarray} where $\theta =\arg(w)$, and $\hbox{erf}(x)$ denotes the error function $\hbox{erf}(x)={2\over\sqrt{\pi}}\int_{0}^{x}dt\,e^{-t^2}$. Notice that the probability density (\ref{pfi}) is just the Born rule for the self-adjoint operator $\hat{\phi}=\arg(\hat{Z})=-{i\over2} \log(\hat Z/\hat{Z}^{\dag})$: this is well defined on the Hilbert space ${\cal H}_{0}^{\bot}$, orthogonal complement in $\cal H$ of the space ${\cal H}_{0}$ spanned by vector $|0\rangle\!\rangle$ in Eq. (\ref{twin}) \cite{hrad}. The integral over $r$ in Eq. 
(\ref{pfi}) just sums up degeneracies of eigenvectors (\ref{eigen}): the zero-eigenvalue vector is not degenerate, and gives a zero-measure contribution to the integral. The first Gaussian term in the last side of Eq.~(\ref{pfi}) gives a uniform phase probability distribution for the ``twin-beams'' input state $|0\rangle\!\rangle_\lambda $. \par For $\Delta_{\lambda}\ll |w|$ Eq. (\ref{pfi}) approaches the Gaussian form \begin{eqnarray} p(\phi)\simeq\frac{|w|}{\sqrt{\pi}\Delta_{\lambda}}\exp\left[ -{|w|^2 \over\Delta_{\lambda}^2}(\phi -\theta)^2\right] \;\label{Ga} \end{eqnarray} corresponding to the r.m.s. phase sensitivity \cite{cramer} \begin{eqnarray} \delta\phi=\langle\Delta\phi^{2}\rangle^{1/2}= {1\over\sqrt 2}{\Delta_{\lambda}\over|w|}\;. \end{eqnarray} In the limit of infinite gain at the PIA $(\lambda\rightarrow 1^-)$ one has $\Delta_{\lambda}^{2}\simeq{1\over2}(1-\lambda)$ and $\bar n\simeq |w|^{2}+(1-\lambda)^{-1}$. [Notice that the classical approximation for the local oscillator at the beam splitter requires that its intensity $|\beta |^2$ must be much greater than the input photon number $\simeq (1-\lambda )^{-1}$ of the ``twin beams''.] Optimizing $\delta \phi$ versus $|w|$ at fixed $\bar n$ one obtains the sensitivity \begin{eqnarray} \delta\phi\simeq{1\over\bar n}\;\label{1sun} \end{eqnarray} for $|w|^2=(1-\lambda)^{-1}$, namely for signal photons equal to the ``twin-beams'' photons. The sensitivity (\ref{1sun}) obeys the same power-law as the ideal sensitivity for one-mode phase detection (actually it is improved by a constant factor equal to 1.36: see Ref. \cite{rev1}). \par The ideal phase sensitivity (\ref{1sun}) has been derived with the hypothesis of unit efficiency at the heterodyne photodetector. It is easy to show that for nonunit quantum efficiency (independent on frequency in the range between signal and image-band modes) Eq. (\ref{D}) becomes \begin{eqnarray} \Delta_{\lambda}^2 \rightarrow \Delta_{\lambda}^2(\eta)= \Delta_{\lambda}^2+{1-\eta\over\eta}\;. \end{eqnarray} Then, it is clear that the result (\ref{1sun}) holds only in the limit $1-\eta\ll|w|^{-2}$, whereas in the opposite situation $1-\eta\gg|w|^{-2}$ one obtains the usual shot noise $\delta\phi =\sqrt{(1-\eta)/2\bar n}$. \par In conclusion, we have presented a feasible scheme to detect a two-mode phase of the field, approaching the eigenstates of the heterodyne current $\hat Z$. The state of the field is obtained by means of a high gain PIA followed by a high transmissivity beam splitter with strong local oscillator at the signal frequency. The ideal r.m.s. sensitivity $\delta\phi=1/\bar n$ is achieved for large photon numbers $\bar n\gg1$ and for signal photons $|w|^{2}=\bar {n}/2$. The gain of the PIA (parametrically ideal) is tuned to the value $g={1\over 4}\bar n$, and the quantum efficiency at the photodetector must be very good, namely $1-\eta\ll 2/{\bar n}$. \par Hence, the two-mode phase detection could be experimentally achieved, but the technical requirements are strict: two local oscillators plus a classical pump at the PIA (all of them coherent and at different frequencies); linear amplification for high gains, with the pump still undepleted; very good quantum efficiency. This shows how technical difficulties can rise when going from one-mode to two-mode phase detection. One of us (G. M. D'Ariano) acknowledges stimulating discussions with P. Kumar and H. P. Yuen. 
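\par For completeness, the optimization leading to Eq.~(\ref{1sun}) can be spelled out as follows. Writing $t=(1-\lambda )^{-1}$, so that $\Delta_{\lambda}^{2}\simeq 1/(2t)$ and $\bar n\simeq |w|^{2}+t$ in the limit $\lambda \rightarrow 1^{-}$, one has \begin{eqnarray} \delta\phi ^{2}={\Delta_{\lambda}^{2}\over 2|w|^{2}}={1\over 4t(\bar n -t)}\geq {1\over \bar n^{2}}\;, \end{eqnarray} with equality for $t=\bar n /2$, namely for $|w|^{2}=(1-\lambda )^{-1}=\bar n /2$, which reproduces Eq.~(\ref{1sun}).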
\begin{references} \bibitem[*]{dar}E-mail: [email protected] \bibitem[\dag]{sac}E-mail: [email protected] \bibitem{rev1} For a recent review see: G. M. D'Ariano and M. Paris, Phys. Rev. A {\bf 49}, 3022 (1994). \bibitem{rev2} Physica Scripta T{\bf 48} (1993) (special issue on {\em Quantum Phase and Phase Dependent measurements}). \bibitem{pom} Quantum estimation theory provides a more general description of quantum statistics in terms of POM's (positive operator-valued measures) and gives the theoretical definition of an optimized phase measurement (see C. W. Helstrom, {\em Quantum Detection and Estimation Theory}, Academic, New York, 1976). However, no feasible scheme that can even approach such an optimal measurement has been devised yet. \bibitem{shsh} J. H. Shapiro and S. R. Shepard, Phys. Rev. A {\bf 43}, 3795 (1991). \bibitem{ban} M. Ban, Phys. Rev. A {\bf 50}, 2785 (1994). \bibitem{shap} J. H. Shapiro, Physica Scripta T {\bf 48}, 105 (1993). \bibitem{luis} A. Luis and L. L. S\'anchez-Soto, Phys. Rev. A {\bf 48}, 4702 (1993). \bibitem{shwa} J. H. Shapiro and S. S. Wagner, IEEE J. Quantum Electron. QE {\bf 20}, 803 (1984). \bibitem{yuen} H. P. Yuen and J. H. Shapiro, IEEE Trans. Inform. Theory IT {\bf 26}, 78 (1980). \bibitem{mauro1} G. M. D'Ariano, Int. J. Mod. Phys. B {\bf 6}, 1292 (1992). \bibitem{hrad} Z. Hradil (unpublished). \bibitem{cramer} For Gaussian distributions the average maximizes the likelihood and is an asymptotically efficient estimate of the phase shift $\theta $ in Eq. (\ref{Ga}) with efficiency equal to the variance (see H. Cram\'er, {\em Mathematical Methods of Statistics}, Princeton Univ. Press, Princeton, NJ, 1951, pp. 489--506). \end{references} \end{multicols} \begin{figure} \caption{Outline of the experimental setup to generate two-mode phase states approaching heterodyne eigenstates. The PIA produces the ``twin-beams'' in Eq. (\protect\ref{zero}).} \label{f:outline} \end{figure} \end{document}
\begin{document} \date{} \title{A matrix equation $X^n = aI$} \begin{abstract} In this paper, we study a matrix equation $X^n = aI$. We factorize $X^n - aI$ based upon the factorization of $x^n - a$ and then give a necessary and sufficient condition for one of the factors to be the zero matrix. \end{abstract} \noindent {\it Keywords:} matrix equations; non-simple $n$th roots of $aI$; Jordan matrices. \noindent{\small {{MSC2010:} 15A24}} \section{Introduction} A polynomial $x^n - a$ with $n \ge 2$ can be factored into \[ (x - a^\frac{1}{n})(x^{n-1} + a^\frac{1}{n}x^{n-2} + \cdots + a^\frac{n-2}{n} x + a^\frac{n-1}{n}) \] if $a \ge 0$ or $n$ is odd. For the same reason, a matrix polynomial $X^n - aI$ with $n \ge 2$ can be factored into \[ (X - a^\frac{1}{n}I)(X^{n-1} + a^\frac{1}{n}X^{n-2} + \cdots + a^\frac{n-2}{n} X + a^\frac{n-1}{n} I) \] if $a \ge 0$ or $n$ is odd. From the factorization of $x^n-a$, we know that any root of $x^n-a = 0$ satisfies $x - a^\frac{1}{n} = 0$ or $x^{n-1} + a^\frac{1}{n}x^{n-2} + \cdots + a^\frac{n-2}{n} x + a^\frac{n-1}{n} = 0$. Though the ring $M_k(\mathbb{R})$ is not an integral domain, it is still interesting to ask for which $k,n,a$ the same situation occurs, that is, $X^n - aI = O$ and $X \neq a^\frac{1}{n}I$ imply \[ X^{n-1} + a^\frac{1}{n}X^{n-2} + \cdots + a^\frac{n-2}{n} X + a^\frac{n-1}{n} I = O. \] Motivated by this question, we will study the sentence \begin{equation}\label{eqn:sentence} (\forall X \in M_k(\mathbb{R}))\left[X^n = aI \wedge X \neq a^\frac{1}{n}I \Rightarrow X^{n-1} + a^\frac{1}{n}X^{n-2} + \cdots + a^\frac{n-2}{n} X + a^\frac{n-1}{n} I = O\right] \end{equation} to obtain the following theorem. \begin{Thm}\label{thm:char} For integers $k,n$ $(k,n \ge 2)$ and $a \in \mathbb{R}$ satisfying the property that if $a < 0$, then $n$ is odd, the sentence \[ (\forall X \in M_k(\mathbb{R}))\left[X^n = aI \wedge X \neq a^\frac{1}{n}I \Rightarrow X^{n-1} + a^\frac{1}{n}X^{n-2} + \cdots + a^\frac{n-2}{n} X + a^\frac{n-1}{n} I = O\right] \] becomes true if and only if one of the following holds: \begin{enumerate} \item[(i)] $a \neq 0$, $k = 2$, and $n$ is odd; \item[(ii)] $a=0$ and $n \ge k+1$. \end{enumerate} \end{Thm} Suppose that $n$ is even and $a<0$. Then $x^n-a$ cannot have a linear factor over $\mathbb{R}$, so it is not meaningful to consider the sentence (\ref{eqn:sentence}) for the matrix equation $X^n - aI = O$ if $n$ is even and $a<0$. However, the polynomial $x^n -a$ can be factored into \[ (x-(-a)^\frac{1}{n}\zeta)(x- (-a)^\frac{1}{n}\zeta^3)\cdots(x- (-a)^\frac{1}{n}\zeta^{2n-1}) = \prod_{i=1}^{n/2} \left( x^2 - 2(-a)^\frac{1}{n} \cos \frac{(2i-1)\pi}{n} x + (-a)^\frac{2}{n} \right) \] where $\zeta = \exp(\frac{\pi i}{n})$. For the same reason, a matrix polynomial $X^n - aI$ can be factored into \[ \prod_{i=1}^{n/2} \left( X^2 - 2(-a)^\frac{1}{n} \cos \frac{(2i-1)\pi}{n} X + (-a)^\frac{2}{n}I \right) \] if $n$ is even and $a<0$. In the same context as the case where $n \ge 2$ and $a \ge 0$, or $n$ is odd, we may ask for which $k,n,a$ the equation $X^n - aI = O$ implies \[ X^2 - 2(-a)^\frac{1}{n} \cos \frac{(2i-1)\pi}{n} X + (-a)^\frac{2}{n}I = O \] for some $i \in \{1,2,\ldots,\frac{n}{2}\}$.
Based on this question, if $n$ is even and $a<0$, we will study the sentence \begin{equation}\label{eqn:sentence2} (\forall X \in M_k(\mathbb{R}))\left[ X^n = aI \Rightarrow \left( \exists i \in \left\{1,2,\ldots,\frac{n}{2} \right\} \right) \left[X^2 - 2(-a)^\frac{1}{n} \cos \frac{(2i-1)\pi}{n} X + (-a)^\frac{2}{n}I = O\right] \right] \end{equation} to present the following theorem. \begin{Thm}\label{thm:char2} For integers $k,n$ $(k,n \ge 2)$ with $n$ even and $a < 0$, the sentence \[ (\forall X \in M_k(\mathbb{R}))\left[ X^n = aI \Rightarrow \left( \exists i \in \left\{1,2,\ldots,\frac{n}{2} \right\} \right) \left[X^2 - 2(-a)^\frac{1}{n} \cos \frac{(2i-1)\pi}{n} X + (-a)^\frac{2}{n}I = O\right] \right] \] becomes true if and only if $k$ is odd, or $k$ is even and $n=2$. \end{Thm} We will prove Theorem~\ref{thm:char} in Section~2 and Theorem~\ref{thm:char2} in Section~3. For undefined terms, the reader may refer to \cite{HJ}. \section{A proof of Theorem~\ref{thm:char}} We take a matrix $A$ with real entries. We call $A$ a {\it non-simple $n$th root of $aI$} if it satisfies \[ A^n = aI \mbox{ and } A \neq a^\frac{1}{n}I. \] We first show that Theorem~\ref{thm:char} holds for $a=0$. To show the `only if' part, we consider the case $a = 0$ and $n \le k$. We define the matrix $A = (a_{ij})$ by \[ a_{ij} = \begin{cases} 1 & \mbox{if } j = i+k-n+1; \\ 0 & \mbox{otherwise}. \end{cases} \] See the matrix below for an illustration for $n=k$: \[ A = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}. \] It can easily be checked that $A \neq O$, $A^n=O$, but $A^{n-1} \neq O$. Therefore the `only if' part of Theorem~\ref{thm:char} follows if $a=0$. To show the `if' part, we consider the case $a = 0$ and $n \ge k+1$. Suppose that $A^n = O$. Then $0$ is the only eigenvalue of $A$ and so the Jordan matrix of $A$ is of the form \[ J_A = \begin{pmatrix} J_{n_1}(0) & O & O & \cdots & O & O \\ O & J_{n_2}(0) & O & \cdots & O & O \\ O & O & J_{n_3}(0) & \cdots & O & O \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ O & O & O & \cdots & J_{n_{l-1}}(0) & O \\ O & O & O & \cdots & O & J_{n_l}(0) \end{pmatrix} \] where $n_1 + \cdots + n_l = k$ and $J_{n_i}(0)$ is the Jordan block of order $n_i$ with eigenvalue $0$. Since $J_A$ is a strictly upper triangular matrix of order $k$, it is true that $(J_A)^k = O$. Because $A$ is similar to $J_A$, $A^k = O$. Then $A^{n-1} = A^k A^{n-k-1} = O A^{n-k-1} = O$. Therefore the `if' part of Theorem~\ref{thm:char} follows if $a=0$. Now we show Theorem~\ref{thm:char} when $a \neq 0$. If $a > 0$, then \begin{equation}\label{simple} A^n=aI \Leftrightarrow \left(a^{-\frac{1}{n}} A \right)^n = I. \end{equation} Therefore, by substituting $a^\frac{1}{n} X$ or $a^{-\frac{1}{n}} X$ into $X$, it is sufficient to consider the case $a=1$ if $a>0$. Suppose $a<0$. Note that $X^n = aI = -(-a)I$. Then, since $-a>0$, by (\ref{simple}), \begin{equation}\label{simple2} A^n = aI \Leftrightarrow \left( (-a)^{-\frac{1}{n}} A \right)^n = -I \end{equation} and therefore it is sufficient to consider the case $a = -1$. We need the following lemma. \begin{Lem} Suppose that $\left( \begin{smallmatrix} p & q \\ 0 & r \end{smallmatrix} \right) \in M_2(\mathbb{C})$ is a non-simple $n$th root of $I$ for an integer $n \ge 2$.
Then \[ (p, r)=((\zeta_n)^u, (\zeta_n)^v) \] for some $u, v \in \{ 0, 1, \ldots, n-1\}$. where $\zeta_n=\exp(2\pi i/n)$. \label{lem:cyclotomic} \end{Lem} \begin{proof} For notational convenience, let $A = \left( \begin{smallmatrix} p & q \\ 0 & r \end{smallmatrix} \right)$. We first prove by induction on $n$ that \begin{equation} \label{eqn:comp} A^n = (p^{n-1} + p^{n-2}r + \cdots + pr^{n-2}+r^{n-1})A -pr(p^{n-2} + \cdots + r^{n-2})I. \end{equation} The statement~(\ref{eqn:comp}) is true for $n=2$ by the Cayley-Hamilton Theorem. Suppose that (\ref{eqn:comp}) is true for $n$. Then, by the induction hypothesis, \begin{align*} A^{n+1} &= A^nA = (p^{n-1} + p^{n-2}r + \cdots + pr^{n-2}+r^{n-1})A^2 -pr(p^{n-2} + \cdots + r^{n-2})A \\ &= (p^{n-1} + p^{n-2}r + \cdots + pr^{n-2}+r^{n-1})((p+r)A - prI) -pr(p^{n-2} + \cdots + r^{n-2})A. \end{align*} By simplifying the right-hand side of the second equality, we can check that (\ref{eqn:comp}) is true for $n+1$. Now, since $A$ is an $n$th root of $I$, by~(\ref{eqn:comp}), \[ I = A^n = (p^{n-1}+p^{n-2}r + \cdots + pr^{n-2}+r^{n-1})A -pr(p^{n-2}+p^{n-3}r + \cdots + pr^{n-3}+r^{n-2})I \] and, by comparing $(1,2)$ and $(2,2)$ entries of the matrix on the right with those of $I$ on the left, we obtain the system of equations \[ q \left(p^{n-1}+p^{n-2}r + \cdots + pr^{n-2}+r^{n-1}\right) = 0,\] and \[ 1 = (p^{n-1} + p^{n-2}r + \cdots + pr^{n-2}+r^{n-1})r - pr(p^{n-2}+p^{n-3}r + \cdots + pr^{n-3}+r^{n-2}) \] or \[ 1 = r^n. \] If $q=0$, then $I = A^n= \left( \begin{smallmatrix} p^n & 0 \\ 0 & r^n \end{smallmatrix} \right)$ and so the lemma follows. If $q \neq 0$, then by solving this system, we have \[ (p, r)=((\zeta_n)^u, (\zeta_n)^v) \] for some $u, v \in \{ 0, 1, \ldots, n-1\}$. \end{proof} To show the `if' part, take a non-simple $n$th root $A \in M_2(\mathbb{R})$ of $I$. Since $n$ is odd, \begin{equation} \label{eqn:factorization} A^{n-1} + \cdots + A + I = \prod_{w=1}^{(n-1)/2} \left(A^2-2\cos \left( \frac{2\pi}{n} w \right) A + I \right). \end{equation} Let $J_A := \left( \begin{smallmatrix} p & q \\ 0 & r \end{smallmatrix} \right)$ be the Jordan matrix of $A$. Since $A$ is similar to $J_A$, $J_A$ is also a non-simple $n$th root of $I$. By Lemma~\ref{lem:cyclotomic}, \begin{equation} \label{eqn:pair} (p, r)=((\zeta_n)^u, (\zeta_n)^v) \end{equation} for some $u, v \in \{ 0, 1, \ldots, n-1\}$. Moreover, by the similarity, $\det(A-\lambda I)=\det(J_A-\lambda I)$. Since $A \in M_2(\mathbb{R})$, $\det(A-\lambda I)$ is a polynomial over $\mathbb{R}$ and so is $\det(J_A - \lambda I)$. Then $p+r$ and $pr$ are in $\mathbb{R}$ and so $u+v = n$ for $u,v$ in (\ref{eqn:pair}). Therefore, by the symmetry of $u$ and $v$, \begin{equation} \label{eqn:quadratic} A^2-2\cos \left( \frac{2\pi}{n} u\right) A + I = O \end{equation} for some $u \in \{0,1,\ldots,\frac{n-1}{2}\}$. Suppose that $u=0$. Then $v=n$ and so $J_A = \left( \begin{smallmatrix} 1 & q \\ 0 & 1 \end{smallmatrix} \right)$. However, $(J_A)^n = \left( \begin{smallmatrix} 1 & nq \\ 0 & 1 \end{smallmatrix} \right)$ which cannot equal $I$ unless $J_A = I$, and we reach a contradiction to the fact that $J_A$ is a non-simple $n$th root of $I$. Thus $u \in \{1,2,\ldots,\frac{n-1}{2}\}$ in the statement (\ref{eqn:quadratic}) and so, by (\ref{eqn:factorization}), \[ A^{n-1} + \cdots + A + I = O. \] Now take a non-simple $n$th root $A \in M_2(\mathbb{R})$ of $-I$. If $n$ is odd, then $-A$ is a non-simple $n$th root of $I$ and so \[ O = (-A)^{n-1} + (-A)^{n-2} + \cdots + (-A) + I = A^{n-1} - A^{n-2} + \cdots - A + I. 
\] Hence we have shown that the `if' part of Theorem~\ref{thm:char} is true when $a \neq 0$. It remains to show the `only if' part, that is, the sentence (\ref{eqn:sentence}) is not true if either $a \neq 0$ and $k \ge 3$, or $a \neq 0$ and $n$ is even. We will give a counterexample for each of the following cases: \begin{table}[h] \begin{center} \begin{tabular}{c|c|c} & $a=1$ & $a=-1$ \\ \hline $n$ is even, $k$ is even & (i) & \\ \hline $n$ is even, $k$ is odd & (ii) & \\ \hline $n$ is odd, $k$ is even $(k \ge 3)$ & (iii) & (v) \\ \hline $n$ is odd, $k$ is odd & (iv) & (vi) \end{tabular} \end{center} \end{table} We denote the matrix $\left( \begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix} \right)$ by $T$, the zero matrix of order two by $O_2$, and the rotation matrix \[ \begin{pmatrix} \cos {\theta} & -\sin {\theta} \\ \sin {\theta} & \cos {\theta} \end{pmatrix} \] by $R_\theta$. In addition, we distinguish identity matrices by denoting the identity matrix of order $l$ by $I_l$. \noindent(i) {\it $n$ is even and $k$ is even.} We take the matrix of order $k$ \[ A := \begin{pmatrix} T & O_2 & \cdots & O_2 \\ O_2 & T & \cdots & O_2 \\ \vdots & \vdots & \ddots & \vdots \\ O_2 & O_2 & \cdots & T \end{pmatrix} \] By block multiplication, \[ A^n = \begin{pmatrix} T^n & O_2 & \cdots & O_2 \\ O_2 & T^n & \cdots & O_2 \\ \vdots & \vdots & \ddots & \vdots \\ O_2 & O_2 & \cdots & T^n \end{pmatrix} = I_k \] as an even power of $T$ is the identity matrix of order two. Since all of the diagonal entries of $A$ are zero, obviously $A \neq I_k$. However, \begin{eqnarray*} && A^{n-1} + a^\frac{1}{n}A^{n-2} + a^\frac{2}{n}A^{n-3} + a^\frac{3}{n}A^{n-4} + \cdots + a^\frac{n-2}{n} A + a^\frac{n-1}{n} I_k \\ &=& A^{n-1} + A^{n-2} + A^{n-3} + A^{n-4} + \cdots + A + I_k \\ &=& A + I_k + A + I_k + \cdots + A + I_k \\ &=& \frac{n}{2}(A+I_k) \neq O, \end{eqnarray*} so $A$ is a counterexample to the sentence (\ref{eqn:sentence}). \noindent(ii) {\it $n$ is even and $k$ is odd.} We take the matrix of order $k$ \[ \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & T & O_2 & \cdots & O_2 \\ 0 & O_2 & T & \cdots & O_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & O & O & \cdots & T \end{pmatrix}. \] By applying a similar argument for the case (i), we may show that the given matrix is a counterexample to the sentence (\ref{eqn:sentence}). \noindent(iii) {\it $n$ is odd and $k$ is even $(k \ge 3)$.} We take the matrix of order $k$ \[ A:=\begin{pmatrix} I_2 & O_2 & O_2 & \cdots & O_2 \\ O_2 & R_{2\pi/n} & O_2 & \cdots & O_2 \\ O_2 & O_2 & R_{2\pi/n} & \cdots & O_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ O_2 & O_2 & O_2 & \cdots & R_{2\pi/n} \end{pmatrix}. \] By block multiplication, \[ A^n = \begin{pmatrix} (I_2)^n & O_2 & \cdots & O_2 \\ O_2 & (R_{2\pi/n})^n & \cdots & O_2 \\ \vdots & \vdots & \ddots & \vdots \\ O_2 & O_2 & \cdots & (R_{2\pi/n})^n \end{pmatrix} = I_k \] as the $n$th power of $R_{2\pi/n}$ is the identity matrix of order two. Since $k \ge 3$, $(3,3)$ entry of $A$ exists and, by the hypothesis that $n \ge 3$, the $(3,3)$ entry of $A$ is not equal to $1$. However, the $(1,1)$ entry of $A$ is $1$, so $A \neq I_k$. Moreover, the $(1,1)$ entry of $A^i$ equals $1$ for any nonnegative integer $i$, so the $(1,1)$ entry of $A^{n-1} + a^\frac{1}{n}A^{n-2} + \cdots + a^\frac{n-2}{n} A + a^\frac{n-1}{n} I_k $ cannot be zero. Thus $A^{n-1} + a^\frac{1}{n}A^{n-2} + \cdots + a^\frac{n-2}{n} A + a^\frac{n-1}{n} I_k \neq O$ and so $A$ is a counterexample to the sentence (\ref{eqn:sentence}). 
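The counterexamples in cases (i) and (iii) are easy to confirm numerically, and the remaining cases below can be checked in the same way. A minimal sketch (ours, assuming Python with NumPy and small representative values of $n$ and $k$):
\begin{verbatim}
import numpy as np

def geom_poly(A, n):
    # A^{n-1} + A^{n-2} + ... + A + I  (the second factor when a = 1)
    S, P = np.zeros_like(A), np.eye(A.shape[0])
    for _ in range(n):
        S, P = S + P, P @ A
    return S

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

T = np.array([[0.0, 1.0], [1.0, 0.0]])

A1 = np.kron(np.eye(2), T)                     # case (i):   n = 4, k = 4
A2 = np.block([[np.eye(2), np.zeros((2, 2))],  # case (iii): n = 3, k = 4
               [np.zeros((2, 2)), rot(2 * np.pi / 3)]])

for A, n, label in ((A1, 4, "(i)"), (A2, 3, "(iii)")):
    Id = np.eye(A.shape[0])
    print(label,
          np.allclose(np.linalg.matrix_power(A, n), Id),   # A^n  = I
          not np.allclose(A, Id),                          # A   != I
          not np.allclose(geom_poly(A, n), 0))             # factor != O
\end{verbatim}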
\noindent(iv) {\it $n$ is odd and $k$ is odd $(k \ge 3)$.} We take the matrix of order $k$ \[ \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & R_{2\pi/n} & O_2 & \cdots & O_2 \\ 0& O_2 & R_{2\pi/n} & \cdots & O_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & O_2 & O_2 & \cdots & R_{2\pi/n} \end{pmatrix}. \] By applying a similar argument for the case (iii), we may show that the given matrix is a counterexample to the sentence (\ref{eqn:sentence}). \noindent(v) {\it $n$ is odd and $k$ is even $(k \ge 3)$.} We take the matrix of order $k$ \[ A := \begin{pmatrix} -I_2 & O_2 & O_2 & \cdots & O_2 \\ O_2 & -R_{2\pi/n} & O_2 & \cdots & O_2 \\ O_2 & O_2 & -R_{2\pi/n} & \cdots & O_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ O_2 & O_2 & O_2 & \cdots & -R_{2\pi/n} \end{pmatrix}. \] By block multiplication, \[ A^n = \begin{pmatrix} (-I_2)^n & O_2 & \cdots & O_2 \\ O_2 & (-R_{2\pi/n})^n & \cdots & O_2 \\ \vdots & \vdots & \ddots & \vdots \\ O_2 & O_2 & \cdots & (-R_{2\pi/n})^n \end{pmatrix} = -I_k \] as the $n$th power of $-R_{2\pi/n}$ equals $-I_2$. Since $k \ge 3$, $(3,3)$ entry of $A$ exists and, by the hypothesis that $n \ge 3$, the $(3,3)$ entry of $A$ is not equal to $-1$. However, the $(1,1)$ entry of $A$ is $-1$, so $A \neq -I$. However, \begin{eqnarray*} && A^{n-1} + a^\frac{1}{n}A^{n-2} + a^\frac{2}{n}A^{n-3} + a^\frac{3}{n}A^{n-4} + \cdots + a^\frac{n-2}{n} A + a^\frac{n-1}{n} I_k \\ &=& A^{n-1} - A^{n-2} + A^{n-3} - A^{n-4} + \cdots - A + I_k. \end{eqnarray*} Now, the $(1,1)$ entry of $A^i$ equals 1 if $i$ is even and $-1$ if $i$ is odd. Therefore the $(1,1)$ entry of $A^{n-1} - A^{n-2} + A^{n-3} - A^{n-4} + \cdots - A + I_k$ equals $n$ and so $A^{n-1} - A^{n-2} + A^{n-3} - A^{n-4} + \cdots - A + I_k \neq O$. Hence $A$ is a counterexample to the sentence (\ref{eqn:sentence}). \noindent(vi) {\it $n$ is odd and $k$ is odd.} We take the matrix of order $k$ \[ \begin{pmatrix} -1 & 0 & 0 & \cdots & 0 \\ 0 & -R_{2\pi/n} & O_2 & \cdots & O_2 \\ 0& O_2 & -R_{2\pi/n} & \cdots & O_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & O_2 & O_2 & \cdots & -R_{2\pi/n} \end{pmatrix}. \] By applying a similar argument for the case (v), we may show that the given matrix is a counterexample to the sentence (\ref{eqn:sentence}). Hence we have shown the `only if' part of Theorem~\ref{thm:char} and the proof of Theorem~\ref{thm:char} is complete. \section{A proof of Theorem~\ref{thm:char2}} In this section, it is assumed that $n$ is even and $a<0$. By (\ref{simple2}), it is sufficient to consider the case $a = -1$. First we show the `if' part of Theorem~\ref{thm:char2}. Suppose that $k$ is odd. We will show that there is no matrix whose $n$th power equals $-I$. Assume, to the contrary, that there exists $A \in M_k(\mathbb{R})$ such that $A^n = -I$. Then, for the Jordan matrix $J_A$ of $A$, the following holds: \begin{equation} \label{eqn:negative} (J_A)^n=-I \end{equation} and \begin{equation} \label{eqn:det} \det(A-\lambda I)=\det(J_A - \lambda I). \end{equation} We denote the $(j,j)$ entry of $J_A$ by $a_j$ for each $j=1, 2, \ldots, k$. Since $J_A$ is upper triangular, taking the $n$th power of $J_A$ gives diagonal elements the $n$th power of diagonal elements of $J_A$. By (\ref{eqn:negative}), $a_j^n=-1$. Since $A \in M_k(\mathbb{R})$, $\det(A-\lambda I)$ is a polynomial in $\lambda$ with real coefficients and so is $\det(J_A - \lambda I)$ by (\ref{eqn:det}). Therefore the constant term $-a_1a_2 \cdots a_k$ is real. 
On the other hand, since $k$ is odd, \[ (a_1a_2 \cdots a_k)^n = (a_1)^n (a_2)^n \cdots (a_k)^n = (-1)^k = -1. \] However, since $n$ is even, there is no real $a_1a_2 \cdots a_k$ satisfying the last equality and we reach a contradiction. Hence there is no matrix whose $n$th power equals $-I$, and the `if' part is vacuously true if $k$ is odd. Now suppose that $k$ is even and $n = 2$. Then the sentence (\ref{eqn:sentence2}) becomes \[ (\forall X \in M_k(\mathbb{R}))\left[ X^2 = -I \Rightarrow X^2 + I = O \right], \] which is trivially true. Hence the `if' part holds. We show the `only if' part by giving a counterexample to the sentence (\ref{eqn:sentence2}) when $k$ is even and $n \ge 4$. For notational convenience, we denote \[ \begin{pmatrix} \displaystyle \cos\frac{(2j-1)\pi}{n} & \displaystyle -\sin \frac{(2j-1)\pi}{n} \\ \displaystyle \sin\frac{(2j-1)\pi}{n} & \displaystyle \cos\frac{(2j-1)\pi}{n} \end{pmatrix} \] by $R_j$ instead of $R_{{(2j-1)\pi}/{n}}$. Now we take the following matrix \[ A = \begin{pmatrix} R_1 & O & O & \cdots & O \\ O & R_2 & O & \cdots & O \\ O & O & R_2 & \cdots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ O & O & O & \cdots & R_2 \end{pmatrix}. \] Since $(R_j)^n = -I$ for $j=1,2,\ldots,\frac{n}{2}$, $A^n = -I$. Take any $i \in \{ 1,2,\ldots,\frac{n}{2} \}$. By the Cayley-Hamilton Theorem, $$(R_j)^2-2\cos\frac{(2j-1)\pi}{n}R_j+I=O$$ for each $j=1,2,\ldots,\frac{n}{2}$. Then, if $i=1$, $$(R_2)^2-2\cos\frac{\pi}{n}R_2+I = \left( -2\cos\frac{\pi}{n} + 2\cos\frac{3\pi}{n} \right) R_2 \neq O$$ and so $A^2-2\cos\frac{\pi}{n}A+I \neq O$. If $i \neq 1$, then $$(R_1)^2-2\cos\frac{(2i-1)\pi}{n}R_1+I = \left( -2\cos\frac{(2i-1)\pi}{n} + 2\cos\frac{\pi}{n} \right) R_1 \neq O$$ and so $A^2-2\cos\frac{(2i-1)\pi}{n}A+I \neq O$. Thus $A$ is a counterexample to the sentence (\ref{eqn:sentence2}) and we complete the proof of Theorem~\ref{thm:char2}. \section{Closing remarks} We may consider the complex number version of Sentence (\ref{eqn:sentence}) \[ (\forall X \in M_k(\mathbb{C}))\left[X^n = aI \wedge X \neq a^\frac{1}{n}I \Rightarrow X^{n-1} + a^\frac{1}{n}X^{n-2} + \cdots + a^\frac{n-2}{n} X + a^\frac{n-1}{n} I = O\right] \] for integers $k, n$ with $k,n \ge 2$ and $a \in \mathbb{C}$. However, it cannot hold except in the case $a=0$ and $n \ge k+1$. If $a=0$, then the same argument as in the real case applies. If $a \neq 0$, then the matrix \[\begin{pmatrix} a^{\frac{1}{n}} & 0 & 0 & \cdots & 0 \\ 0 & a^{\frac{1}{n}}\zeta_n & 0 & \cdots & 0 \\ 0 & 0 & a^{\frac{1}{n}}\zeta_n & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a^{\frac{1}{n}}\zeta_n \end{pmatrix} \] becomes a counterexample when $a^\frac{1}{n}$ is a number satisfying $z^n = a$ and $\zeta_n = \exp(\frac{2\pi i}{n})$. \end{document}
\begin{document} \begin{abstract} We study propagation of phase space singularities for the initial value Cauchy problem for a class of Schr\"odinger equations. The Hamiltonian is the Weyl quantization of a quadratic form whose real part is non-negative. The equations are studied in the framework of projective Gelfand--Shilov spaces and their distribution duals. The corresponding notion of singularities is called the Gelfand--Shilov wave front set and means the lack of exponential decay in open cones in phase space. Our main result shows that the propagation is determined by the singular space of the quadratic form, just as in the framework of the Schwartz space, where the notion of singularity is the Gabor wave front set. \end{abstract} \keywords{Schr\"odinger equation, heat equation, propagation of singularities, phase space singularities, Gelfand--Shilov wave front set. MSC 2010 codes: 35A18, 35A21, 35Q40, 35Q79, 35S10.} \maketitle \section{Introduction}\label{sec:intro} The goal of this paper is to study propagation of singularities for the initial value Cauchy problem for a Schr\"odinger type equation \begin{equation}\label{schrodingerequation} \left\{ \begin{array}{rl} \partial_t u(t,x) + q^w(x,D) u (t,x) & = 0, \qquad t \geqslant 0, \quad x \in \rr d, \\ u(0,\, \cdot \,) & = u_0, \end{array} \right. \end{equation} where $u_0$ is a Gelfand--Shilov distribution on $\rr d$, $q=q(x,\xi)$ is a quadratic form on the phase space $(x,\xi) \in T^* \rr d$ with ${\rm Re} \, q \geqslant 0$, and $q^w(x,D)$ is a pseudodifferential operator in the Weyl quantization. This family of equations comprises the free Schr\"odinger equation where $q(x,\xi)= i |\xi|^2$, the harmonic oscillator where $q(x,\xi)= i (|x|^2 + |\xi|^2)$ and the heat equation where $q(x,\xi)= |\xi|^2$. The problem has been studied in the space of tempered distributions \cite{Rodino2,Wahlberg1} where the natural notion of singularity is the Gabor wave front set. This concept of singularity is defined as the conical subset of the phase space $T^* \rr d \setminus \{0\}$ in which the short-time Fourier transform does not have rapid (superpolynomial) decay. The Gabor wave front set of a tempered distribution is empty exactly when the distribution is a Schwartz function so it measures deviation from regularity in the sense of both smoothness and decay at infinity comprehensively. In this work we study propagation of singularities for the equation \eqref{schrodingerequation} in the functional framework of the Gelfand--Shilov test function spaces and their dual distribution spaces. More precisely we use the projective limit (Beurling type) Gelfand--Shilov space $\Sigma_s(\rr d)$ for $s>1/2$ that consists of smooth functions satisfying \begin{equation*} \forall h>0 \ \ \exists C_h>0 : \quad |x^\alpha \pd \beta f(x)| \leqslant C_h h^{|\alpha+\beta|} (\alpha! \beta!)^s, \quad x \in \rr d, \quad \alpha,\beta \in \nn d. \end{equation*} This means that $\Sigma_s(\rr d)$ is smaller than the Schwartz space, and hence its dual $\Sigma_s'(\rr d)$ is a space of distributions that contains the tempered distributions. The natural concept of phase space singularities in the realm of Gelfand--Shilov spaces is the $s$-Gelfand--Shilov wave front set. The idea was introduced by H\"ormander \cite{Hormander1} under the name analytic wave front set for tempered distributions, and further developed by Cappiello and Schulz \cite{Cappiello1} and Cordero, Nicola and Rodino \cite{Cordero6} for Gelfand--Shilov distributions.
These authors had an approach based on the inductive limit Gelfand--Shilov spaces as opposed to our concept that is based on the projective limit spaces. A concept similar to the $s$-Gelfand--Shilov wave front set has been studied by Mizuhara \cite{Mizuhara1}. This is the homogeneous wave front set of Gevrey order $s>1$. It is included in the $s$-Gelfand--Shilov wave front set. Propagation results for the homogeneous Gevrey wave front set are proved in \cite{Mizuhara1} for Schr\"odinger equations, albeit of a different type than ours. In this paper the $s$-Gelfand--Shilov wave front set $WF^s(u) \subseteq T^* \rr d \setminus \{0\}$ of $u \in \Sigma_s'(\rr d)$ for $s>1/2$ is defined as follows: $WF^s(u)$ is the complement in $T^* \rr d \setminus \{0\}$ of the set of $z_0 \in T^* \rr d \setminus \{0\}$ such that there exists an open conic set $\Gamma \subseteq T^* \rr d \setminus \{ 0 \}$ containing $z_0$, and \begin{equation*} \forall A>0: \ \sup_{z \in \Gamma} e^{A |z|^{1/s}} |V_\varphi u(z) | < \infty. \end{equation*} Here $V_\varphi u$ denotes the short-time Fourier transform defined by $\varphi \in \Sigma_s (\rr d) \setminus \{0\}$. Thus the short-time Fourier transform decays like $e^{-A |z|^{1/s}}$ for any $A>0$ in an open cone around $z_0$. Note that this means that the decay can be close to but not quite like a Gaussian $e^{-A |z|^2}$, due to our assumption $s>1/2$. For a tempered distribution, the $s$-Gelfand--Shilov wave front set contains the Gabor wave front set, and thus gives an enlarged notion of singularity. Our main result on propagation of singularities for Schr\"odinger type equations goes as follows, where $e^{-t q^w(x,D)}$ denotes the solution operator (propagator) of the equation \eqref{schrodingerequation}. Let $q$ be the quadratic form on $T^*\rr d$ defined by $q(x,\xi) = \langle (x,\xi), Q(x,\xi) \rangle$ for a symmetric matrix $Q \in \cc {2d \times 2d}$ with ${\rm Re} \, Q \geqslant 0$, let $F=\mathcal{J} Q$ where \begin{equation}\label{Jdef} \mathcal{J} = \left( \begin{array}{cc} 0 & I \\ -I & 0 \end{array} \right) \in \rr {2d \times 2d} \end{equation} and let $s > 1/2$. Then for $u_0 \in \Sigma_s'(\rr d)$ \begin{align*} WF^s (e^{-t q^w(x,D)}u_0) & \subseteq \left( e^{2 t {\rm Im} \, F} \left( WF^s (u_0) \cap S \right) \right) \cap S, \quad t > 0, \end{align*} where $S$ is the singular space \begin{equation*} S=\Big(\bigcap_{j=0}^{2d-1} \operatorname{Ker}\big[{\rm Re} \, F({\rm Im} \, F)^j \big]\Big) \cap T^*\rr d \subseteq T^*\rr d \end{equation*} of the quadratic form $q$. This result is verbatim the same as \cite[Theorem~5.2]{Rodino2} when $u_0$ is restricted to be a tempered distribution and when the $s$-Gelfand--Shilov wave front set is replaced by the Gabor wave front set (cf. \cite[Corollary~4.6]{Wahlberg1}). Thus it gives a new manifestation of the importance of the singular space for propagation of phase space singularities for the considered class of equations of Schr\"odinger type. The singular space has attracted much attention recently and occurs in several works on spectral and hypoelliptic properties of non-elliptic quadratic operators \cite{Hitrik1,Hitrik2,Hitrik3,Pravda-Starov1,Pravda-Starov2,Viola1,Viola2}. The paper is organized as follows. Section \ref{sec:prelim} contains notations and background material on Gelfand--Shilov spaces, pseudodifferential operators, the short-time Fourier transform and the Gabor wave front set.
Section {\rm Re} \, f{sec:seminorm} gives a comprehensive discussion on three alternative families of seminorms for the projective Gelfand--Shilov spaces. In Section {\rm Re} \, f{sec:defprop} we define the $s$-Gelfand--Shilov wave front set and deduce some properties: independence of the window function, symplectic invariance, behavior under tensor product and composition with surjective matrices, and microlocality with respect to pseudodifferential operators with certain symbols. In this process we show the continuity of metaplectic operators on $\Sigma_s(\rr d)$ and on $\Sigma_s'(\rr d)$. Section {\rm Re} \, f{sec:formulation} gives a brief discussion on the solution operator to the equation \eqref{schrodingerequation}, that is formulated for $u_0 \in L^2(\rr d)$ by means of semigroup theory. Exact propagation results are given for the case ${\rm Re} \, Q = 0$. Section {\rm Re} \, f{sec:proplinop} treats propagation of the $s$-Gelfand--Shilov wave front set for a class on linear operators, continuous on $\Sigma_s(\rr d)$ and uniquely extendible to continuous operators on $\Sigma_s'(\rr d)$. In Section {\rm Re} \, f{sec:oscint} we discuss shortly H\"ormander's oscillatory integrals with quadratic phase function, and we prove the inclusion of the $s$-Gelfand--Shilov wave front set of such an oscillatory integral in the intersection of its corresponding positive Lagrangian in $T^* \cc d$ with $T^* \rr d$. Section {\rm Re} \, f{sec:kernelschrod} gives an account of H\"ormander's description of the Schwartz kernel of the propagator as an oscillatory integral and we show the continuity of the propagator on $\Sigma_s(\rr d)$ and on $\Sigma_s'(\rr d)$. Finally in Section {\rm Re} \, f{sec:propsing} we assemble the results of Sections {\rm Re} \, f{sec:proplinop}, {\rm Re} \, f{sec:oscint} and {\rm Re} \, f{sec:kernelschrod} to prove our results on propagation of the $s$-Gelfand--Shilov wave front set for equations of the form \eqref{schrodingerequation}. \section{Preliminaries}\langlebel{sec:prelim} \subsection{Notations and basic definitions}\langlebel{subsec:notations} The gradient of a function $f$ with respect to the variable $x \in \rr d$ is denoted by $f'_x$ and the mixed Hessian matrix $(\partial_{x_i} \partial_{y_j} f)_{i,j}$ with respect to $x \in \rr d$ and $y \in \rr n$ is denoted $f_{x y}''$. The Fourier transform of $f \in \mathscr{S}(\rr d)$ (the Schwartz space) is normalized as \begin{equation*} \mathscr{F} f(\xi) = \widehat f(\xi) = \int_{\rr d} f(x) e^{- i \langle x, \xi \rangle} dx, \end{equation*} where $\langle x, \xi \rangle$ denotes the inner product on $\rr d$. The topological dual of $\mathscr S(\rr d)$ is the space of tempered distributions $\mathscr S'(\rr d)$. As conventional $D_j = -i \partial_j$ for $1 \leqslant j \leqslant d$. We will make frequent use of the inequality \begin{equation*} |x+y|^{1/s} \leqslant C_s ( |x|^{1/s} + |y|^{1/s}), \quad x,y \in \rr d, \end{equation*} where \begin{equation*} C_s = \left\{ \begin{array}{ll} 1 & \mbox{if} \ s \geqslant 1 \\ 2^{1/s-1} & \mbox{if} \ 0 < s < 1 \end{array} \right. . \end{equation*} Since this inequality will be used only for $s>1/2$ we may use the cruder estimate $C_s=2$, which leads to the inequalities \begin{align} e^{A |x+y|^{1/s} } & \leqslant e^{2A |x|^{1/s}} e^{2A |y|^{1/s}}, \quad A >0, \quad x,y \in \rr d, \langlebel{exppeetre1} \\ e^{- A |x+y|^{1/s} } & \leqslant e^{- \frac{A}{2} |x|^{1/s}} e^{A |y|^{1/s}}, \quad A >0, \quad x,y \in \rr d. 
\langlebel{exppeetre2} \end{align} The Japanese bracket is $\eabs{x} = (1+|x|^2)^{1/2}$. For a positive measurable weight function $\omega$ defined on $\rr d$, the Banach space $L_{\omega}^1(\rr d)$ is endowed with the norm $\| f \|_{L_{\omega}^1}= \| f \omega \|_{L^1}$. The unit sphere in $\rr d$ is denoted $S_{d-1} = \{ x \in \rr d: \ |x|=1 \}$. For a matrix $A \in \rr {d \times d}$, $A \geqslant 0$ means that $A$ is positive semidefinite, and $A^t$ is the transpose. If $A$ is invertible then $A^{-t}$ denotes the inverse transpose. In estimates the notation $f (x) \lesssim g (x)$ understands that $f(x) \leqslant C g(x)$ holds for some constant $C>0$ that is uniform for all $x$ in the domain of $f$ and $g$. If $f(x) \lesssim g(x) \lesssim f(x)$ then we write $f(x) \asymp g(x)$. We denote the translation operator by $T_x f(y)=f(y-x)$, the modulation operator by $M_\xi f(y)=e^{i \langle y, \xi \rangle} f(y)$, $x,y,\xi \in \rr d$, and the phase space translation operator by $\Pi(z) = M_\xi T_x$, $z=(x,\xi) \in \rr {2d}$. \subsection{Gelfand--Shilov spaces}\langlebel{subsec:GelfandShilov} Let $h,s,t >0$. The space $\mathcal S_{t,h}^s(\rr d)$ is defined as all $f\in C^\infty (\rr d)$ such that \begin{equation}\langlebel{gfseminorm} \nm f{\mathcal S_{t,h}^s}\equiv \sup \frac {|x^\alpha \partial ^\beta f(x)|}{h^{|\alpha +\beta |}\alpha !^s\, \beta !^t} \end{equation} is finite. The supremum refers to all $\alpha ,\beta \in \mathbf N^d$ and $x\in \rr d$. We set $\mathcal S_{s,h}=\mathcal S_{s,h}^s$. The Banach space $\mathcal S_{t,h}^s$ increases with $h$, $s$ and $t$, and the embedding $\mathcal S_{t,h}^s\subseteq \mathscr{S}$ holds for all $h,s,t >0$. If $s,t>1/2$, or $s=t =1/2$ and $h$ is sufficiently large, then $\mathcal S_{t,h}^s$ contains all finite linear combinations of Hermite functions. The \emph{Gelfand--Shilov spaces} $\mathcal S_{t}^s(\rr d)$ and $\Sigma _{t}^s(\rr d)$ are the inductive and projective limits respectively of $\mathcal S_{t,h}^s(\rr d)$ with respect to $h>0$. This means on the one hand \begin{equation*} \mathcal S_{t}^s(\rr d) = \bigcup _{h>0}\mathcal S_{t,h}^s(\rr d) \quad \text{and}\quad \Sigma _{t}^s(\rr d) =\bigcap _{h>0}\mathcal S_{t,h}^s(\rr d). \end{equation*} On the other hand it means that the topology for $\mathcal S_{t}^s(\rr d)$ is the strongest topology such that the inclusion $\mathcal S_{t,h}^s(\rr d) \subseteq \mathcal S_{t}^s(\rr d)$ is continuous for each $h>0$, and the topology for $\Sigma_{t}^s(\rr d)$ is the weakest topology such that the inclusion $\Sigma_{t}^s(\rr d) \subseteq \mathcal S_{t,h}^s(\rr d)$ is continuous for each $h>0$. The space $\Sigma _t^s(\rr d)$ is a Fr\'echet space with seminorms $\| \, \cdot \, t \|_{\mathcal S_{s,h}^t}$, $h>0$. The Gelfand--Shilov spaces are invariant under translation, modulation, dilation, linear coordinate transformations and tensor products. It holds $\Sigma _t^s(\rr d)\neq \{ 0\}$ if and only if $s,t>0$, $s+t \geqslant 1$ and $(s,t) \neq (1/2,1/2)$. We set $\mathcal S_{s}=\mathcal S_{s}^s$ and $\Sigma _{s}=\Sigma _{s}^s$. Then $\mathcal S_s(\rr d)$ is zero when $s<1/2$, and $\Sigma _s(\rr d)$ is zero when $s \le 1/2$. From now on we assume that $s>1/2$ when considering $\Sigma _s(\rr d)$. The \emph{Gelfand--Shilov distribution spaces} $(\mathcal S_{t}^s)'(\rr d)$ and $(\Sigma _{t}^s)'(\rr d)$ are the projective and inductive limits respectively of $(\mathcal S_{t,h}^s)'(\rr d)$. 
This implies that \begin{equation*} (\mathcal S_{t}^s)'(\rr d) = \bigcap _{h>0}(\mathcal S_{t,h}^s)'(\rr d)\quad \text{and}\quad (\Sigma _{t}^s)'(\rr d) =\bigcup _{h>0} (\mathcal S_{t,h}^s)'(\rr d). \end{equation*} The space $(\mathcal S_{t}^s)'(\rr d)$ is the topological dual of $\mathcal S_{t}^s(\rr d)$, and if $s>1/2$ then $(\Sigma _{t}^s)'(\rr d)$ is the topological dual of $\Sigma _{t}^s(\rr d)$ \cite{Gelfand1}. In this paper we work with the spaces $\Sigma _s(\rr d)$ and $\Sigma _s'(\rr d) = (\Sigma _s^s)'(\rr d)$ for $s>1/2$. These spaces are embedded with respect to the Schwartz space and the tempered distributions as \begin{equation*} \Sigma _s(\rr d) \subseteq \mathscr{S}(\rr d) \subseteq \mathscr{S}'(\rr d) \subseteq \Sigma_s'(\rr d), \quad s > 1/2. \end{equation*} For $s>1/2$ the (partial) Fourier transform extends uniquely to homeomorphisms on $\mathscr S'(\rr d)$, $\mathcal S_s'(\rr d)$ and $\Sigma _s'(\rr d)$, and restricts to homeomorphisms on $\mathscr S(\rr d)$, $\mathcal S_s(\rr d)$ and $\Sigma _s(\rr d)$. \subsection{Pseudodifferential operators and the Gabor wave front set}\langlebel{subset:pseudo} Let $s>1/2$. Given a window function $\varphi \in \Sigma_s(\rr d) \setminus \{ 0 \}$, the short-time Fourier transform (STFT) \cite{Grochenig1} of $u \in \Sigma_s'(\rr d)$ is defined by \begin{equation*} V_\varphi u(x,\xi) = ( u, M_\xi T_x \varphi ) = \mathscr{F}(u \, T_x \overline{\varphi}) (\xi), \quad x, \, \xi \in \rr d, \end{equation*} where $(\, \cdot \, t,\, \cdot \, t)$ denotes the conjugate linear action of $\Sigma_s'$ on $\Sigma_s$, consistent with the inner product $(\, \cdot \, t,\, \cdot \, t)_{L^2}$ which is conjugate linear in the second argument. The function $\rr {2d} \ni z \rightarrow V_\varphi u(z)$ is smooth. Let $\varphi,\psi \in \Sigma_s(\rr d) \setminus 0$. By \cite[Theorem~2.5]{Toft1} we have \begin{equation}\langlebel{STFTgrowth} \forall u \in \Sigma_s'(\rr d) \quad \exists M \geqslant 0: \quad |V_\varphi u (z)| \lesssim e^{M |z|^{1/s} }, \quad z \in \rr {2d}, \end{equation} and by \cite[Lemma~11.3.3]{Grochenig1} we have \begin{equation}\langlebel{STFTconvolution} |V_\psi u(z)| \leqslant (2 \pi)^{-d} \| \varphi \|_{L^2} |V_\varphi u| * |V_\psi \varphi| (z), \quad z \in \rr {2d}, \end{equation} where $V_\psi \varphi \in \Sigma_s(\rr {2d})$. If $\varphi \in \Sigma_s (\rr d)$ and $\| \varphi \|_{L^2}=1$, the STFT inversion formula \cite[Corollary~11.2.7]{Grochenig1} reads \begin{equation}\langlebel{STFTrecon2} (f,g ) = (2 \pi)^{-d} \int_{\rr {2d}} V_\varphi f(z) \, \overline{V_\varphi g(z)} \, dz, \quad f \in \Sigma_s'(\rr d), \quad g \in \Sigma_s(\rr d). \end{equation} The Weyl quantization of pseudodifferential operators (cf. \cite{Folland1,Hormander0,Shubin1}) is the map from symbols $a \in \mathscr S(\rr {2d})$ to operators acting on $f \in \mathscr S(\rr d)$ defined by \begin{equation}\mathbf Nnumber a^w(x,D) f(x) = (2 \pi)^{-d} \iint_{\rr {2d}} e^{i \langle x-y,\xi \rangle} a \left( \frac{x+y}{2},\xi \right) \, f(y) \, dy \, d \xi. \end{equation} The conditions on $a$ and $f$ can be modified and relaxed in various ways. The Weyl quantization can be formulated in the framework of Gelfand--Shilov spaces \cite{Cappiello2}. For certain symbols the operator $a^w(x,D)$ acts continuously on $\Sigma_s(\rr d)$ when $s>1/2$. 
If $a \in \Sigma_s'(\rr {2d})$ the Weyl quantization extends to a continuous operator $\Sigma_s(\rr d) \rightarrow \Sigma_s'(\rr d)$ that satisfies \begin{equation*} (a^w(x,D) f, g) = (2 \pi)^{-d} (a, W(g,f) ), \quad f, g \in \Sigma_s(\rr d), \end{equation*} where the cross-Wigner distribution is defined as \begin{equation*} W(g,f) (x,\xi) = \int_{\rr d} g (x+y/2) \overline{f(x-y/2)} e^{- i \langle y, \xi \rangle} dy, \quad (x,\xi) \in \rr {2d}. \end{equation*} We have $W(g,f) \in \Sigma_s(\rr {2d})$ when $f,g \in \Sigma_s(\rr d)$. We need the following symbol classes for pseudodifferential operators that act on $\mathscr{S}(\rr d)$ in order to define the Gabor wave front set and explain its properties. \begin{defn}\label{shubinclasses1}\cite{Shubin1} For $m\in \mathbf R$ the Shubin symbol class $G^m$ is the subspace of all $a \in C^\infty(\rr {2d})$ such that for every $\alpha,\beta \in \nn d$ \begin{equation*} |\partial_x^\alpha \partial_\xi^\beta a(x,\xi)| \lesssim \langle (x,\xi) \rangle^{m-|\alpha|-|\beta|}, \quad (x,\xi)\in \rr {2d}. \end{equation*} \end{defn} \begin{defn}\label{hormanderclasses}\cite{Hormander0} For $m\in \mathbf R$, $0 \leqslant \rho \leqslant 1$, $0 \leqslant \delta < 1$, the H\"ormander symbol class $S_{\rho,\delta}^m$ is the subspace of all $a \in C^\infty(\rr {2d})$ such that for every $\alpha,\beta \in \nn d$ \begin{equation}\label{symbolestimate2} |\partial_x^\alpha \partial_\xi^\beta a(x,\xi)| \lesssim \eabs{\xi}^{m - \rho|\beta| + \delta |\alpha|}, \quad (x,\xi)\in \rr {2d}. \end{equation} \end{defn} Both $G^m$ and $S_{\rho,\delta}^m$ are Fr\'echet spaces with respect to their naturally defined seminorms. The following definition involves conic sets in the phase space $T^* \rr d \setminus 0 \simeq \rr {2d} \setminus 0$. A set is conic if it is invariant under multiplication with positive reals. Note the difference to the frequency-conic sets that are used in the definition of the (classical) $C^\infty$ wave front set \cite{Hormander0}. \begin{defn}\label{noncharacteristic2} Given $a \in G^m$, a point $z_0 \in T^* \rr d \setminus 0$ is called non-characteristic for $a$ provided there exist $A,\varepsilon>0$ and an open conic set $\Gamma \subseteq T^* \rr d \setminus 0$ such that $z_0 \in \Gamma$ and \begin{equation*} |a(z )| \geqslant \varepsilon \eabs{z}^m, \quad z \in \Gamma, \quad |z| \geqslant A. \end{equation*} \end{defn} The Gabor wave front set is defined as follows where $\operatorname{char} (a)$ is the complement in $T^* \rr d \setminus 0$ of the set of non-characteristic points for $a$. \begin{defn}\label{wavefront1} \cite{Hormander1} If $u \in \mathscr S'(\rr d)$ then the Gabor wave front set $WF(u)$ is the set of all $z \in T^*\rr d \setminus 0$ such that $a \in G^m$ for some $m \in \mathbf R$ and $a^w(x,D) u \in \mathscr S$ implies $z \in \operatorname{char}(a)$. \end{defn} According to \cite[Proposition 6.8]{Hormander1} and \cite[Corollary 4.3]{Rodino1}, the Gabor wave front set can be characterized microlocally by means of the STFT as follows. If $u \in \mathscr S'(\rr d)$ and $\varphi \in \mathscr S(\rr d) \setminus 0$ then $z_0 \in T^*\rr d \setminus 0$ satisfies $z_0 \notin WF(u)$ if and only if there exists an open conic set $\Gamma_{z_0} \subseteq T^*\rr d \setminus 0$ containing $z_0$ such that \begin{equation}\label{WFchar} \sup_{z \in \Gamma_{z_0}} \eabs{z}^N |V_\varphi u(z)| < \infty \quad \forall N \geqslant 0.
\end{equation} The most important properties of the Gabor wave front set include the following facts. Here the microsupport $\mu \operatorname{supp} (a)$ of $a \in G^m$ is defined as follows (cf. \cite{Schulz1}). For $z_0 \in T^* \rr d \setminus 0$ we have $z_0 \notin \mu \operatorname{supp} (a)$ if there exists an open cone $\Gamma \subseteq T^* \rr d \setminus 0$ containing $z_0$ such that \begin{equation*} \sup_{z \in \Gamma} \eabs{z}^N |\pd \alpha a (z)| < \infty, \quad \alpha \in \nn {2d}, \quad N \geqslant 0. \end{equation*} \begin{enumerate} \item If $u \in \mathscr{S}'(\rr d)$ then $WF(u) = \emptyset$ if and only if $u \in \mathscr{S} (\rr d)$ \cite[Proposition 2.4]{Hormander1}. \item If $u \in \mathscr S'(\rr d)$ and $a \in G^m$ then \begin{align*} WF( a^w(x,D) u) & \subseteq WF(u) \cap \mu \operatorname{supp} (a) \\ & \subseteq WF( a^w(x,D) u) \ \bigcup \ \operatorname{char} (a). \end{align*} \item If $a \in S_{0,0}^0$ and $u \in \mathscr{S}'(\rr d)$ then by \cite[Theorem 5.1]{Rodino1} \begin{equation}\label{microlocal2} WF(a^w(x,D) u) \subseteq WF(u). \end{equation} In particular $WF(\Pi(z) u) = WF(u)$ for any $z \in \rr {2d}$. \end{enumerate} As three basic examples of the Gabor wave front set we mention (cf. \cite[Examples~6.4--6.6]{Rodino1}) \begin{equation}\label{example1} WF(\delta_x) = \{ 0 \} \times (\rr d \setminus 0 ), \quad x \in \rr d, \end{equation} \begin{equation*} WF(e^{i \langle \, \cdot \, ,\xi \rangle}) = (\rr d \setminus 0) \times \{ 0 \}, \quad \xi \in \rr d, \end{equation*} and \begin{equation*} WF(e^{i \langle x, A x \rangle/2 } ) = \{ (x, Ax): \, x \in \rr d \setminus 0 \}, \quad A \in \rr {d \times d} \quad \mbox{symmetric}. \end{equation*} The canonical symplectic form on $T^* \rr d$ is \begin{equation*} \sigma((x,\xi), (x',\xi')) = \langle x' , \xi \rangle - \langle x, \xi' \rangle, \quad (x,\xi), (x',\xi') \in T^* \rr d. \end{equation*} With the matrix \eqref{Jdef} this can be expressed with the inner product on $\rr {2d}$ as \begin{equation*} \sigma((x,\xi), (x',\xi')) = \langle \mathcal{J} (x,\xi), (x',\xi') \rangle, \quad (x,\xi), (x',\xi') \in T^* \rr d. \end{equation*} To each symplectic matrix $\chi \in \operatorname{Sp}(d,\mathbf R)$ is associated an operator $\mu(\chi)$ that is unitary on $L^2(\rr d)$, and determined up to a complex factor of modulus one, such that \begin{equation}\label{symplecticoperator} \mu(\chi)^{-1} a^w(x,D) \, \mu(\chi) = (a \circ \chi)^w(x,D), \quad a \in \mathscr{S}'(\rr {2d}) \end{equation} (cf. \cite{Folland1,Hormander0}). The operator $\mu(\chi)$ is a homeomorphism on $\mathscr S$ and on $\mathscr S'$. The mapping $\operatorname{Sp}(d,\mathbf R) \ni \chi \rightarrow \mu(\chi)$ is called the \emph{metaplectic representation} \cite{Folland1,Taylor1}. It is in fact a representation of the so called $2$-fold covering group of $\operatorname{Sp}(d,\mathbf R)$, which is called the metaplectic group and denoted $\operatorname{Mp}(d,\mathbf R)$. The metaplectic representation satisfies the homomorphism relation modulo a change of sign: \begin{equation*} \mu( \chi \chi') = \pm \mu(\chi ) \mu(\chi' ), \quad \chi, \chi' \in \operatorname{Sp}(d,\mathbf R). \end{equation*} According to \cite[Proposition~2.2]{Hormander1} the Gabor wave front set is symplectically invariant as \begin{equation*} WF( \mu(\chi) u) = \chi WF(u), \quad \chi \in \operatorname{Sp}(d, \mathbf R), \quad u \in \mathscr{S}'(\rr d). \end{equation*}
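As a quick consistency check, included only for orientation, note that the formula for $\sigma$ above corresponds to $\mathcal J(x,\xi) = (\xi,-x)$, and that $\mu(\mathcal J)$ is a constant multiple of the Fourier transform $\mathscr F$ (cf. \cite{Folland1}). The symplectic invariance therefore gives
\begin{equation*}
WF(\widehat u \, ) = \mathcal J \, WF(u), \quad u \in \mathscr S'(\rr d),
\end{equation*}
that is, $(x,\xi) \in WF(u)$ if and only if $(\xi,-x) \in WF(\widehat u \,)$. For instance $\widehat{\delta_0}$ is a constant function, and indeed $\mathcal J$ maps $WF(\delta_0) = \{0\} \times (\rr d \setminus 0)$ onto $(\rr d \setminus 0) \times \{0\}$, in agreement with \eqref{example1}.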
The work \cite{Cordero5} contains a study of the propagation of the Gabor wave front set for linear Schr\"odinger equations, and \cite{Nicola2,Nicola1} contain studies of the same question for semilinear Schr\"odinger-type equations. \section{Seminorms on Gelfand--Shilov spaces}\label{sec:seminorm} We need to work with several families of seminorms on $\Sigma_s(\rr d)$ for $s>1/2$ apart from the seminorms defined by \eqref{gfseminorm}. The next result shows that there are three families of seminorms for $\Sigma _s(\rr d)$ that are each equivalent to the family of seminorms $\{\| f \|_{\mathcal S_{s,h}}, \ h > 0 \}$ defined by \eqref{gfseminorm}. The three families of seminorms are firstly $\{\| f \|_A', \ \| \widehat f \|_B', \ A,B>0 \}$ where \begin{equation}\label{seminorm1} \| f \|_A' = \sup_{x \in \rr d} e^{A |x|^{1/s}} |f(x)|, \end{equation} secondly $\{| f |_A, \ A>0 \}$ where \begin{equation}\label{seminorm3} | f |_A = \sup_{x \in \rr d, \ \beta \in \nn d} \frac{A^{|\beta|} e^{A |x|^{1/s}} |D^\beta f(x)|}{(\beta!)^s}, \end{equation} and thirdly $\{\| f \|_A'', \ A>0 \}$ where \begin{equation}\label{seminorm2} \| f \|_A'' = \sup_{z \in \rr {2d}} e^{A |z|^{1/s}} |V_\varphi f(z)|, \end{equation} for $\varphi \in \Sigma_s(\rr d) \setminus 0$ fixed. It will turn out that the choice of $\varphi \in \Sigma_s(\rr d) \setminus 0$ in the definition of the seminorms $\{\| f \|_A'', \ A>0 \}$ is arbitrary (see the proof of Proposition \ref{seminormequivalence}). The essential arguments in the proof of the following proposition can be found in several places, e.g. \cite{Gelfand1,Grochenig3,Chung1,Nicola3,Toft2}. Nevertheless we prefer to give a detailed account since it is a cornerstone for our results, and in order to give a self-contained narrative. \begin{prop}\label{seminormequivalence} Let $s>1/2$. Then \begin{equation}\label{seminorm2a} \forall A,B>0 \quad \exists h>0: \ \| f \|_A' + \| \widehat f \|_B' \lesssim \| f \|_{\mathcal S_{s,h}}, \quad f \in \Sigma_s(\rr d), \end{equation} and \begin{equation}\label{seminorm2b} \forall h>0 \quad \exists A,B>0: \ \| f \|_{\mathcal S_{s,h}} \lesssim \| f \|_A' + \| \widehat f \|_B' , \quad f \in \Sigma_s(\rr d). \end{equation} Likewise \begin{equation}\label{seminorm4a} \forall A>0 \quad \exists h>0: \ | f |_A \lesssim \| f \|_{\mathcal S_{s,h}}, \quad f \in \Sigma_s(\rr d), \end{equation} and \begin{equation}\label{seminorm4b} \forall h>0 \quad \exists A>0: \ \| f \|_{\mathcal S_{s,h}} \lesssim | f |_A , \quad f \in \Sigma_s(\rr d). \end{equation} Finally \begin{equation}\label{seminorm3a} \forall A>0 \quad \exists h>0: \ \| f \|_A'' \lesssim \| f \|_{\mathcal S_{s,h}}, \quad f \in \Sigma_s(\rr d), \end{equation} and \begin{equation}\label{seminorm3b} \forall h>0 \quad \exists A>0: \ \| f \|_{\mathcal S_{s,h}} \lesssim \| f \|_A'', \quad f \in \Sigma_s(\rr d). \end{equation} \end{prop} \begin{proof} We start with \eqref{seminorm2a}. Let $f \in \Sigma_s(\rr d)$. From \eqref{gfseminorm} we have for any $h>0$ \begin{equation*} |x^\alpha D^\beta f(x)| \leqslant \| f \|_{\mathcal S_{s,h}} (\alpha! \beta! )^s h^{|\alpha+\beta|}, \quad \alpha, \beta \in \nn d, \quad x \in \rr d. \end{equation*} This gives for any $n \in \mathbf N$ and any $\beta \in \nn d$ \begin{equation}\label{alphabetaestimate} |x|^n |D^\beta f(x)| \leqslant d^{n/2} \max_{|\alpha|=n} |x^\alpha D^\beta f(x)| \leqslant d^{n/2} \| f \|_{\mathcal S_{s,h}} (n!
\beta!)^s h^{n+|\beta|}, \quad x \in \rr d, \end{equation} which in turn gives with $\beta=0$ and $A=2^{-1}s (d^{1/2} h)^{-1/s}$ \begin{align*} \exp\left(\frac{A}{s}|x|^{1/s}\right) |f(x)|^{1/s} & = \sum_{n=0}^\infty \frac{|x|^{n/s} |f(x)|^{1/s} (d^{1/2} h)^{-n/s}}{n!} \left(\frac{A (d^{1/2} h)^{1/s}}{s}\right)^n \\ & \leqslant \| f \|_{\mathcal S_{s,h}}^{1/s} \sum_{n=0}^\infty 2^{-n}, \quad x \in \rr d. \end{align*} For any $A>0$ we thus have \begin{equation}\label{seminorm2a1} \| f \|_A' \lesssim \| f \|_{\mathcal S_{s,h_1}}, \quad f \in \Sigma_s(\rr d), \end{equation} if $h_1=(s/(2A))^{s} d^{-1/2}$. Since the Fourier transform is continuous on $\Sigma_s(\rr d)$ we get from \eqref{seminorm2a1} for any $B>0$ \begin{equation}\label{seminorm2a2} \| \widehat f \|_B' \lesssim \| \widehat f \|_{\mathcal S_{s,h_0}} \lesssim \| f \|_{\mathcal S_{s,h_2}}, \quad f \in \Sigma_s(\rr d), \end{equation} for some $h_0, h_2>0$. Addition of \eqref{seminorm2a1} and \eqref{seminorm2a2} proves \eqref{seminorm2a} for $h=\min(h_1,h_2)$. The second and longer argument of this proof serves to prove \eqref{seminorm2b}. The argument follows closely that of the proof of \cite[Theorem~6.1.6]{Nicola3}. For completeness' sake we give the full details. First we deduce two estimates that are needed. From \eqref{seminorm2a} it follows that $\| f \|_A' < \infty$ and $\| \widehat f \|_B' < \infty$ for any $A,B>0$ when $f \in \Sigma_s(\rr d)$. Thus for any $A>0$ we have \begin{align*} \sum_{n=0}^\infty \frac{|x|^{n/s} |f(x)|^{1/s}}{n!} \left(\frac{A}{s} \right)^n & = \exp\left( \frac{A}{s} |x|^{1/s} \right) |f(x)|^{1/s} \leqslant (\| f \|_A')^{1/s}, \quad x \in \rr d, \end{align*} which gives the estimate \begin{equation*} |x|^{n} |f(x)| \leqslant \| f \|_A' (n!)^s \left(\frac{s}{A} \right)^{sn}, \quad x \in \rr d, \quad n \in \mathbf N. \end{equation*} Using $|\alpha|! \leqslant d^{|\alpha|} \alpha!$ (cf. \cite[Eq.~(0.3.3)]{Nicola3}) this gives in turn \begin{equation*} |x^\alpha f(x)| \leqslant \| f \|_A' (\alpha!)^s \left( \frac{ds}{A}\right)^{s|\alpha|}, \quad x \in \rr d, \quad \alpha \in \nn d. \end{equation*} Finally we take the $L^2$ norm and estimate for an integer $k>d/4$ with $\varepsilon=4k-d>0$: \begin{equation}\label{L2est1} \begin{aligned} \| x^\alpha f \|_{L^2} & \lesssim \sup_{x \in \rr d} \eabs{x}^{(d+\varepsilon)/2} |x^\alpha f(x)| \lesssim \sup_{x \in \rr d, \ |\gamma| \leqslant 2 k} |x^{\alpha+\gamma} f(x)| \\ & \leqslant \| f \|_A' ((\alpha+\gamma)!)^s \left( \frac{ds}{A}\right)^{s|\alpha+\gamma|} \\ & \lesssim \| f \|_A' (\alpha!)^s \left( \frac{2 d s}{A}\right)^{s|\alpha|}, \quad \alpha \in \nn d, \end{aligned} \end{equation} using $(\alpha+\gamma)! \leqslant 2^{|\alpha+\gamma|} \alpha! \gamma!$ (cf. \cite{Nicola3}) and considering $k$ a fixed parameter. From \eqref{L2est1}, $\| \widehat f \|_B' < \infty$ for any $B>0$, and Parseval's theorem we obtain \begin{equation}\label{L2est2} \| D^\beta f \|_{L^2} = (2 \pi)^{-d/2} \| \xi^\beta \widehat f \|_{L^2} \lesssim \| \widehat f \|_B' (\beta!)^s \left( \frac{2 d s}{B}\right)^{s|\beta|}, \quad \beta \in \nn d. \end{equation} Since $\| f \|_{A}' \leqslant \| f \|_{A+A_0}'$ when $A_0\geqslant 0$ and $A>0$ we may use $B=A$ when we now set out to prove \eqref{seminorm2b}. It suffices to assume $h \leqslant 1$.
We have for $\alpha,\beta \in \nn d$ arbitrary and $f \in \Sigma_s(\rr d)$, using the Cauchy--Schwarz inequality, Parseval's theorem and the Leibniz rule \begin{equation}\label{intermediateestimate1} \begin{aligned} |x^\alpha D^\beta f(x)| & = (2\pi)^{-d} \left| \int_{\rr d} \widehat{x^\alpha D^\beta f} (\xi) e^{i \langle x, \xi \rangle} d \xi \right| \lesssim \| \eabs{\, \cdot \,}^{(d+\varepsilon)/2} \widehat{x^\alpha D^\beta f} \|_{L^2} \\ & \lesssim \max_{|\gamma| \leqslant 2k} \| D^\gamma (x^\alpha D^\beta f) \|_{L^2} \\ & \lesssim \max_{|\gamma| \leqslant 2k} \sum_{\mu \leqslant \min(\alpha,\gamma) } \binom{\gamma}{\mu} \binom{\alpha}{\mu} \mu! \| x^{\alpha-\mu} D^{\beta + \gamma-\mu} f \|_{L^2}, \quad x \in \rr d. \end{aligned} \end{equation} In an intermediate step we rewrite the expression for the $L^2$ norm squared using integration by parts and estimate it as \begin{align*} & \| x^{\alpha-\mu} D^{\beta + \gamma-\mu} f \|_{L^2}^2 \\ & = |(D^{\beta + \gamma-\mu} f , x^{2\alpha-2\mu} D^{\beta + \gamma-\mu} f )| \\ & = |(f , D^{\beta + \gamma-\mu} (x^{2\alpha-2\mu} D^{\beta + \gamma-\mu} f) )| \\ & \leqslant \sum_{\kappa \leqslant \min(\beta+\gamma-\mu,2\alpha-2\mu)} \binom{\beta+\gamma-\mu}{\kappa} \binom{2\alpha-2\mu}{\kappa} \kappa! |(x^{2\alpha-2\mu-\kappa} f, D^{2\beta + 2\gamma-2\mu-\kappa} f )| \\ & \leqslant \sum_{\kappa \leqslant \min(\beta+\gamma-\mu,2\alpha-2\mu)} \binom{\beta+\gamma-\mu}{\kappa} \binom{2\alpha-2\mu}{\kappa} \kappa! \| x^{2\alpha-2\mu-\kappa} f \|_{L^2} \| D^{2\beta + 2\gamma-2\mu-\kappa} f \|_{L^2}. \end{align*} Setting $h = 2^{2s+5/2} (2d s/A)^s$ and using \eqref{L2est1}, \eqref{L2est2} and $\kappa! = \kappa!^{2s -\delta}$ where $\delta=2s-1>0$, we get \begin{align*} & \| x^{\alpha-\mu} D^{\beta + \gamma-\mu} f \|_{L^2}^2 \\ & \lesssim 2^{2 |\alpha+\beta|} \| f \|_A' \| \widehat f \|_A' \sum_{\kappa \leqslant \min(\beta+\gamma-\mu,2\alpha-2\mu)} \kappa! ((2\alpha-2\mu-\kappa)! (2\beta + 2\gamma-2\mu-\kappa)!)^s \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \left(2^{-5/2-2s} h \right)^{|2\alpha-4\mu-2\kappa + 2\beta + 2\gamma|} \\ & \leqslant (2^{-3-4s}h^2)^{|\alpha+\beta|} \| f \|_A' \| \widehat f \|_A' \sum_{\kappa \leqslant \min(\beta+\gamma-\mu,2\alpha-2\mu)} (\kappa!)^{-\delta} ((2\alpha-2\mu)! (2\beta + 2\gamma-2\mu)!)^s \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \left(2^{-5/2-2s} h \right)^{|-4\mu-2\kappa+2\gamma|} \\ & \lesssim (2^{-3-4s} h^2)^{|\alpha+\beta|} \| f \|_A' \| \widehat f \|_A' \sum_{\kappa \leqslant \min(\beta+\gamma-\mu,2\alpha-2\mu)} ((2\alpha-2\mu)! (2\beta + 2\gamma-2\mu)!)^s \\ & \lesssim (2^{-2-2s} h^2)^{|\alpha+\beta|} \| f \|_A' \| \widehat f \|_A' ((2\alpha-2\mu)! (2\beta-2\mu)!)^s \end{align*} since we have assumed $h \leqslant 1$, since $|\mu| \leqslant 2 k$ which is a fixed constant, and since \begin{equation*} (\kappa!)^{-\delta} \left( 2^{-5/2-2s} h \right)^{- 2 |\kappa|} \leqslant \exp\left( \delta d \left( h^{-1} 2^{5/2+2s} \right)^{2/\delta} \right), \quad \kappa \in \nn d. \end{equation*} We insert this into \eqref{intermediateestimate1} which gives, using $\mu! \leqslant \mu!^{2s}$ and $$ (2(\alpha-\mu))! \leqslant 2^{2|\alpha|} ((\alpha-\mu)!)^2, $$ \begin{align*} & |x^\alpha D^\beta f(x)| \\ & \lesssim (2^{-1-s} h)^{|\alpha+\beta|} (\| f \|_A' \| \widehat f \|_A')^{1/2} \max_{|\gamma| \leqslant 2k} \sum_{\mu \leqslant \min(\alpha,\gamma) } \binom{\gamma}{\mu} \binom{\alpha}{\mu} \mu! ((2\alpha-2\mu)!
(2\beta-2\mu)!)^{s/2} \\ & \lesssim ( 2^{-1} h)^{|\alpha+\beta|} (\| f \|_A' \| \widehat f \|_A')^{1/2} \max_{|\gamma| \leqslant 2k} \sum_{\mu \leqslant \min(\alpha,\gamma) } \binom{\gamma}{\mu} \binom{\alpha}{\mu} (\alpha! \beta!)^{s} \\ & \lesssim h^{|\alpha+\beta|} (\alpha! \beta!)^{s} (\| f \|_A' + \| \widehat f \|_A'), \quad x \in \rr d, \quad \alpha,\beta \in \nn d. \end{align*} This finally proves \eqref{seminorm2b}, since for any $h>0$ we may take $A = 2^{3+5/2s} h^{-1/s} d s$. Next we show \eqref{seminorm4a} and \eqref{seminorm4b}. We start with \eqref{seminorm4a}. From \eqref{alphabetaestimate} it follows that we have for $A>0$ \begin{align*} \exp\left(\frac{A}{s}|x|^{1/s}\right) |D^\beta f(x)|^{1/s} & = \sum_{n=0}^\infty \frac{|x|^{n/s} |D^\beta f(x)|^{1/s} (d^{1/2} h)^{-n/s}}{n!} \left(\frac{A (d^{1/2} h)^{1/s}}{s}\right)^n \\ & \leqslant \| f \|_{\mathcal S_{s,h}}^{1/s} \beta! h^{|\beta|/s} \sum_{n=0}^\infty 2^{-n}, \quad x \in \rr d, \quad \beta \in \nn d, \end{align*} provided $A \leqslant 2^{-1}s (d^{1/2} h)^{-1/s}$. Thus \begin{equation*} e^{A|x|^{1/s}} |D^\beta f(x)| \lesssim \| f \|_{\mathcal S_{s,h}} (\beta!)^s h^{|\beta|}, \quad x \in \rr d, \quad \beta \in \nn d, \end{equation*} which gives \begin{equation*} |f|_A \lesssim \| f \|_{\mathcal S_{s,h}}, \quad f \in \Sigma_s(\rr d), \end{equation*} provided $h \leqslant \min(A^{-1}, (s/(2A))^s d^{-1/2})$. Hence we have proved \eqref{seminorm4a}. We continue with the proof of \eqref{seminorm4b}. From \eqref{seminorm4a} we know that $|f|_A < \infty$ for any $A>0$ when $f \in \Sigma_s(\rr d)$. Hence for any $A>0$, $\beta \in \nn d$ and $x \in \rr d$ \begin{align*} \sum_{n=0}^\infty \frac{|x|^{n/s} |D^\beta f(x)|^{1/s}}{n!} \left(\frac{A}{s} \right)^n & = \exp\left( \frac{A}{s} |x|^{1/s} \right) |D^\beta f(x)|^{1/s} \leqslant | f |_A^{1/s} \beta! A^{-|\beta|/s}, \end{align*} which gives \begin{align*} |x|^{n} |D^\beta f(x)| & \leqslant | f |_A (n! \beta!)^s A^{-|\beta|} \left( \frac{s}{A} \right)^{sn}, \quad n \in \mathbf N, \quad \beta \in \nn d, \quad x \in \rr d, \end{align*} and thus \begin{align*} |x^\alpha D^\beta f(x)| & \leqslant | f |_A (\alpha! \beta!)^s A^{-|\beta|} \left( \frac{ds}{A} \right)^{s|\alpha|}, \quad \alpha, \beta \in \nn d, \quad x \in \rr d. \end{align*} From this it follows that \begin{equation*} \| f \|_{\mathcal S_{s,h}} \lesssim |f|_A , \quad f \in \Sigma_s(\rr d), \end{equation*} for any $h>0$ provided $A \geqslant \max(h^{-1}, s d h^{-1/s})$. This proves \eqref{seminorm4b}. It remains to show \eqref{seminorm3a} and \eqref{seminorm3b}. We start with \eqref{seminorm3a}. Let $A>0$ and $\varphi \in \Sigma_s(\rr d) \setminus 0$. We have for $f \in \Sigma_s(\rr d)$ \begin{align*} |V_\varphi f (x,\xi)| & = |\widehat{f T_x \overline{\varphi}} (\xi)| \lesssim |\widehat{f}| * |\widehat{T_x \overline{\varphi}}| (\xi) = \int_{\rr d} |\widehat{f}(\xi-\eta)| \, |\widehat{\varphi}(-\eta)| \, d \eta \\ & \lesssim \| \widehat f \|_{8A}' \int_{\rr d} \exp(-8A|\xi-\eta|^{1/s}) \, |\widehat{\varphi}(-\eta)| \, d \eta \\ & \lesssim \| \widehat f \|_{8A}' \exp(-4A|\xi|^{1/s}) \int_{\rr d} \exp(8A|\eta|^{1/s}) \, |\widehat{\varphi}(-\eta)| \, d \eta \\ & \lesssim \| \widehat f \|_{8A}' \exp(-4A|\xi|^{1/s}), \quad x, \xi \in \rr d, \end{align*} using \eqref{exppeetre2} and \eqref{seminorm2a}. From this estimate and $|V_\varphi f (x,\xi)| = (2 \pi)^{-d} |V_{\widehat \varphi} \widehat f (\xi,-x)|$ we also obtain \begin{equation*} |V_\varphi f (x,\xi)| \lesssim \| f \|_{8A}' \exp(-4A|x|^{1/s}), \quad x, \xi \in \rr d.
\end{equation*} With the aid of \eqref{exppeetre1} we may conclude \begin{align*} e^{2 A |(x,\xi)|^{1/s}} |V_\varphi f (x,\xi)|^2 & \leqslant e^{4 A |x|^{1/s}} |V_\varphi f (x,\xi)| \ e^{4 A |\xi|^{1/s}} |V_\varphi f (x,\xi)| \\ & \lesssim \| f \|_{8A}' \ \| \widehat f \|_{8A}' \end{align*} which gives \begin{equation*} \| f \|_A'' \lesssim (\| f \|_{8A}' \ \| \widehat f \|_{8A}')^{1/2} \lesssim \| f \|_{8A}' + \| \widehat f \|_{8A}'. \end{equation*} Combining with \eqref{seminorm2a} we have proved \eqref{seminorm3a}. We now show \eqref{seminorm3b}. For that purpose we use the strong version of the STFT inversion formula \eqref{STFTrecon2} and its Fourier transform, that is \begin{align} f(x) & = (2 \pi)^{-d} \int_{\rr {2d}} V_\varphi f(y,\eta) M_\eta T_y \varphi (x) \, dy \, d \eta, \label{STFTinv1} \\ \widehat f(\xi) & = (2 \pi)^{-d} \int_{\rr {2d}} V_\varphi f(y,\eta) T_\eta M_{-y} \widehat \varphi (\xi) \, dy \, d \eta, \label{STFTinv2} \end{align} where $f \in \Sigma_s(\rr d)$ and $\varphi \in \Sigma_s(\rr d)$ satisfies $\| \varphi \|_{L^2} = 1$. From \eqref{STFTinv1} we obtain for any $A>0$ \begin{align*} e^{A |x|^{1/s}} |f(x)| & \lesssim \int_{\rr {2d}} |V_\varphi f(y,\eta)| \, e^{A |x|^{1/s}} |\varphi (x-y)| \, dy \, d \eta, \\ & \lesssim \| f \|_{3A}'' \int_{\rr {2d}} e^{-3A|(y,\eta)|^{1/s}} \, e^{A |x|^{1/s} -2 A |x-y|^{1/s}} dy \, d \eta, \\ & \lesssim \| f \|_{3A}'' \int_{\rr {2d}} e^{-3A|(y,\eta)|^{1/s}} \, e^{2 A |y|^{1/s}} dy \, d \eta, \\ & \lesssim \| f \|_{3A}'', \quad x \in \rr d, \end{align*} which gives $\| f \|_A' \lesssim \| f \|_{3A}''$. From \eqref{STFTinv2} we obtain for any $A>0$ \begin{align*} e^{A |\xi|^{1/s}} |\widehat f(\xi)| & \lesssim \int_{\rr {2d}} |V_\varphi f(y,\eta)| \, e^{A |\xi|^{1/s}} |\widehat \varphi (\xi-\eta)| \, dy \, d \eta, \\ & \lesssim \| f \|_{3A}'' \int_{\rr {2d}} e^{-3A|(y,\eta)|^{1/s}} \, e^{A |\xi|^{1/s} -2 A |\xi-\eta|^{1/s}} dy \, d \eta, \\ & \lesssim \| f \|_{3A}'', \quad \xi \in \rr d, \end{align*} which gives $\| \widehat f \|_A' \lesssim \| f \|_{3A}''$. Thus $\| f \|_A' + \| \widehat f \|_A' \lesssim \| f \|_{3A}''$ so combining with \eqref{seminorm2b} we have proved \eqref{seminorm3b}. Finally we show that the seminorms $\{\| f \|_{A}'', \ A>0\}$ are equivalent to the same family of seminorms when the window function $\varphi \in \Sigma_s(\rr d) \setminus 0$ is replaced by another function $\psi \in \Sigma_s(\rr d) \setminus 0$. From \eqref{STFTconvolution} we obtain for $A>0$ \begin{align*} e^{A |z|^{1/s}} |V_\psi f(z)| & \lesssim \int_{\rr {2d}} e^{A |z|^{1/s}} |V_\varphi f(z-w)| \, |V_\psi \varphi(w)| \, dw \\ & \lesssim \| f \|_{2A}'' \int_{\rr {2d}} e^{A |z|^{1/s} -2A|z-w|^{1/s}} \, |V_\psi \varphi(w)| \, dw \\ & \lesssim \| f \|_{2A}'' \int_{\rr {2d}} e^{2A|w|^{1/s}} \, |V_\psi \varphi(w)| \, dw \\ & \lesssim \| f \|_{2A}'', \quad z \in \rr {2d}, \end{align*} using \eqref{seminorm3a} applied to $\varphi \in \Sigma_s(\rr d)$, i.e. $\| \varphi \|_{C}'' < \infty$ for all $C>0$. This proves the claim that the window function $\psi \in \Sigma_s(\rr d) \setminus 0$ gives seminorms equivalent to those of $\varphi \in \Sigma_s(\rr d) \setminus 0$. \end{proof}
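As a concrete illustration of the seminorms $\| \, \cdot \, \|_A''$ we record a routine computation with the Gaussian, included only for orientation. Let $\varphi(x) = e^{-|x|^2/2}$. Completing the square and using the Fourier transform of a Gaussian gives
\begin{equation*}
V_\varphi \varphi(x,\xi) = \int_{\rr d} e^{-|y|^2/2} \, e^{-|y-x|^2/2} \, e^{-i \langle y, \xi \rangle} \, dy
= \pi^{d/2} \, e^{-(|x|^2+|\xi|^2)/4} \, e^{-i \langle x, \xi \rangle/2},
\end{equation*}
so $|V_\varphi \varphi(z)| = \pi^{d/2} e^{-|z|^2/4}$ for $z \in \rr {2d}$. Since $1/s < 2$, the function $e^{A|z|^{1/s} - |z|^2/4}$ is bounded on $\rr {2d}$ for every $A>0$, so all seminorms $\| \varphi \|_A''$, $A>0$, are finite, in accordance with Proposition \ref{seminormequivalence} and the fact that $\varphi \in \Sigma_s(\rr d)$ for every $s>1/2$.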
\section{Definition and properties of the $s$-Gelfand--Shilov wave front set} \label{sec:defprop} For $s>1/2$ and $u \in \Sigma_s' (\rr d)$ we define the $s$-Gelfand--Shilov wave front set $WF^s (u)$ as follows, modifying slightly Cappiello's and Schulz's \cite[Definition~2.1]{Cappiello1}. This concept is a coarsening of the Gabor wave front set $WF(u)$ in the sense that $WF(u) \subseteq WF^s(u)$ for all $s > 1/2$ and all $u \in \mathscr{S}'(\rr d)$. \begin{defn}\label{wavefronts} Let $s > 1/2$, $\varphi \in \Sigma_s(\rr d) \setminus 0$ and $u \in \Sigma_s'(\rr d)$. Then $z_0 \in T^*\rr d \setminus 0$ satisfies $z_0 \notin WF^s (u)$ if there exists an open conic set $\Gamma_{z_0} \subseteq T^*\rr d \setminus 0$ containing $z_0$ such that for every $A>0$ \begin{equation*} \sup_{z \in \Gamma_{z_0}} e^{A | z |^{1/s}} |V_\varphi u(z)| < \infty. \end{equation*} \end{defn} It follows that $WF^s (u) = \emptyset$ if and only if $u \in \Sigma_s'(\rr d)$ satisfies \begin{equation*} |V_\varphi u(z)| \lesssim e^{-A | z |^{1/s}}, \quad z \in \rr {2d}, \end{equation*} for any $A>0$. By Proposition \ref{seminormequivalence} (cf. \cite[Proposition~2.4]{Toft1}) this is equivalent to $u \in \Sigma_s(\rr d)$. The following lemma is needed in the proof of the independence of $WF^s (u)$ of the window function $\varphi \in \Sigma_s(\rr d) \setminus 0$. \begin{lem}\label{convolutioninvariance} Let $s>1/2$ and let $f$ be a measurable function on $\rr d$ that satisfies \begin{equation}\label{polynomialbound1} |f(x)| \lesssim e^{M | x |^{1/s}}, \quad x \in \rr d, \end{equation} for some $M \geqslant 0$. Suppose there exists a non-empty open conic set $\Gamma \subseteq \rr d \setminus 0$ such that \begin{equation}\label{conedecay1} \sup_{x \in \Gamma} e^{A | x |^{1/s}} |f(x)| < \infty \end{equation} for all $A>0$. If \begin{equation}\label{L1intersection} g \in \bigcap_{A > 0} L_{\exp(A |\, \cdot \, |^{1/s})}^1(\rr d) \end{equation} then for any open conic set $\Gamma' \subseteq \rr d \setminus 0$ such that $\overline{\Gamma' \cap S_{d-1}} \subseteq \Gamma$, we have \begin{equation}\label{conedecay2} \sup_{x \in \Gamma'} e^{A | x |^{1/s}} |f * g(x)| < \infty \end{equation} for all $A>0$. \end{lem} \begin{proof} By \eqref{exppeetre1} and the assumptions \eqref{polynomialbound1} and \eqref{L1intersection} we have \begin{equation*} |f * g (x)| \lesssim e^{2M | x |^{1/s}}, \quad x \in \rr d, \end{equation*} so it suffices to assume $|x| \geqslant L$ for some large number $L>0$. Let $\varepsilon>0$. We estimate and split the convolution integral as \begin{equation*} |f * g(x)| \leqslant \underbrace{\int_{\eabs{y} \leqslant \varepsilon \eabs{x}} |f(x-y)| \, | g (y)| \, d y}_{:= I_1} + \underbrace{\int_{\eabs{y} > \varepsilon \eabs{x}} |f(x-y)| \, | g (y)| \, d y}_{:= I_2}. \end{equation*} Consider $I_1$. Since $\eabs{y} \leqslant \varepsilon \eabs{x}$ we have $x-y \in \Gamma$ if $x \in \Gamma'$, $|x| \geqslant 1$, and $\varepsilon>0$ is chosen sufficiently small. The assumptions \eqref{conedecay1}, \eqref{L1intersection}, and \eqref{exppeetre2} give \begin{equation}\label{intuppsk1} \begin{aligned} I_1 & \lesssim \int_{\eabs{y} \leqslant \varepsilon \eabs{x}} e^{- A |x+y|^{1/s} } |g (y)| \, d y \lesssim e^{- \frac{A}{2} |x|^{1/s}} \int_{\rr d} e^{A |y|^{1/s}} |g(y)| \, d y \\ & \lesssim e^{- \frac{A}{2} |x|^{1/s}}, \quad x \in \Gamma', \quad |x| \geqslant 1, \end{aligned} \end{equation} for any $A>0$. Next we estimate $I_2$ using \eqref{polynomialbound1} and $\eabs{y} > \varepsilon \eabs{x}$. The latter inequality implies that $|y|^{1/s} \geqslant |x|^{1/s} \varepsilon^{1/s}/2$ when $|x| \geqslant L$ if $L>0$ is sufficiently large.
This gives for any $A>0$ \begin{equation}\label{intuppsk2} \begin{aligned} I_2 & \lesssim \int_{\eabs{y} > \varepsilon \eabs{x}} e^{M |x-y|^{1/s} } \, |g(y) | \, dy \\ & \leqslant e^{2M |x|^{1/s} } \int_{\eabs{y} > \varepsilon \eabs{x}} e^{2M |y|^{1/s} } \, e^{-2 \varepsilon^{-1/s}(2M+A) |y|^{1/s} } \, \, e^{2 \varepsilon^{-1/s}(2M+A) |y|^{1/s} }\, |g(y) | \, dy \\ & \leqslant e^{(2M-2M-A) |x|^{1/s} } \int_{\rr d} e^{(2M+2 \varepsilon^{-1/s} (2M +A))|y|^{1/s} } \, |g(y) | \, dy \\ & \lesssim e^{-A |x|^{1/s} }, \quad x \in \rr d, \quad |x| \geqslant L, \end{aligned} \end{equation} again using \eqref{L1intersection}. A combination of \eqref{intuppsk1} and \eqref{intuppsk2} proves \eqref{conedecay2} for $A>0$ arbitrary. \end{proof} Using Lemma \ref{convolutioninvariance} we show next that Definition \ref{wavefronts} does not depend on the choice of the window function $\varphi \in \Sigma_s(\rr d) \setminus 0$. \begin{prop}\label{sGaborinvariance} Suppose $s>1/2$ and $u \in \Sigma_s'(\rr d)$. The definition of the $s$-Gelfand--Shilov wave front set $WF^s(u)$ does not depend on the window function $\varphi \in \Sigma_s(\rr d) \setminus 0$. \end{prop} \begin{proof} Let $\varphi,\psi \in \Sigma_s(\rr d) \setminus 0$. By \eqref{STFTgrowth} we have for some $M \geqslant 0$ $$ |V_\varphi u (z)| \lesssim e^{M |z|^{1/s} }, \quad z \in \rr {2d}, $$ and by \eqref{STFTconvolution} we have \begin{equation*} |V_\psi u(z)| \leqslant (2 \pi)^{-d} \| \varphi \|_{L^2} |V_\varphi u| * |V_\psi \varphi| (z), \quad z \in \rr {2d}. \end{equation*} By Proposition \ref{seminormequivalence} (cf. \cite[Theorem~2.4]{Toft1}) we have \begin{equation*} |V_\psi \varphi (z)| \lesssim e^{-A |z|^{1/s}}, \quad z \in \rr {2d}, \end{equation*} for any $A>0$, and hence \begin{equation*} V_\psi \varphi \in \bigcap_{A > 0} L_{\exp(A| \, \cdot \, |^{1/s})}^1(\rr {2d}). \end{equation*} From Lemma \ref{convolutioninvariance} we may now draw the following conclusion. If $|V_\varphi u(z)|$ decays like $e^{-A |z|^{1/s} }$ for any $A>0$ in a conic set $\Gamma \subseteq T^* \rr d \setminus 0$ containing $z_0 \neq 0$ then we get decay like $e^{-A |z|^{1/s} }$ for any $A>0$ in a smaller cone containing $z_0$ for $|V_\psi u(z)|$. Hence, by symmetry, decay of order $e^{-A |z|^{1/s} }$ for any $A>0$ in an open cone around a point in $T^* \rr d \setminus 0$ happens simultaneously for $V_\varphi u$ and $V_\psi u$. \end{proof} The $s$-Gelfand--Shilov wave front set $WF^s (u)$ decreases when the index $s$ increases: \begin{equation}\label{sGaborinclusion} t \geqslant s \quad \Longrightarrow \quad WF^t(u) \subseteq WF^s(u). \end{equation} From $WF(u) \subseteq WF^s(u)$ for $u \in \mathscr{S}'(\rr d)$ and \eqref{example1} we have for any $s > 1/2$ \begin{equation*} WF^s(\delta_0) \supseteq \{ 0 \} \times (\rr d \setminus 0 ). \end{equation*} To see the opposite inclusion we note that if $x_0 \in \rr d \setminus 0$ and $\xi_0 \in \rr d$ then $(x_0,\xi_0) \in \Gamma = \{(x,\xi) \in T^* \rr d \setminus 0: |\xi| < C |x| \}$ for some $C>0$, which is an open conic subset of $T^* \rr d \setminus 0$. Let $\varphi \in \Sigma_s(\rr d) \setminus 0$ and let $\varepsilon>0$. Since $|V_\varphi \delta_0 (x, \xi)| = |\varphi(-x)|$ it follows from Proposition \ref{seminormequivalence} that for any $A>0$ \begin{equation*} \sup_{z \in \Gamma} e^{A | z |^{1/s}} |V_\varphi \delta_0(z)| \leqslant \sup_{z \in \Gamma} e^{2A (1+C^{1/s}) | x |^{1/s}} |\varphi(-x)| < \infty.
\end{equation*} This shows that $(x_0,\xi_0) \notin WF^s(\delta_0)$, and proves \begin{equation*} WF^s(\delta_0) \subseteq \{ 0 \} \times (\rr d \setminus 0 ). \end{equation*} Hence \begin{equation}\label{WFsdirac} WF^s(\delta_0) = \{ 0 \} \times (\rr d \setminus 0 ), \quad s > 1/2. \end{equation} Next we show continuity of the metaplectic operators when they act on $\Sigma_{s}(\rr d)$ for $s>1/2$. We note that continuity of metaplectic operators acting on $\mathcal S_s (\rr d)$ for $s \geqslant 1$ is contained in \cite[Proposition~3.5]{Cordero7}, and G.~Tranquilli \cite[Theorem~32]{Tranquilli1} has shown continuity on $\mathcal S_s (\rr d)$ for $s \geqslant 1/2$. Our proof is inspired by hers. \begin{prop}\label{metaplecticcont} If $s>1/2$ and $\chi \in \operatorname{Sp}(d,\mathbf R)$ then the metaplectic operator $\mu(\chi)$ acts continuously on $\Sigma_{s}(\rr d)$, and extends uniquely to a continuous operator on $\Sigma_s'(\rr d)$. \end{prop} \begin{proof} By \cite[Proposition~4.10]{Folland1} each matrix $\chi \in \operatorname{Sp}(d,\mathbf R)$ is a finite product of matrices of the form \begin{equation*} \mathcal{J}, \quad \left( \begin{array}{cc} A^{-1} & 0 \\ 0 & A^{t} \end{array} \right), \quad \left( \begin{array}{cc} I & 0 \\ B & I \end{array} \right), \end{equation*} for $A \in \operatorname{GL}(d,\mathbf R)$ and $B \in \rr {d \times d}$ symmetric. To show that $\mu(\chi)$ is continuous on $\Sigma_{s}(\rr d)$ it thus suffices to show that $\mu(\chi)$ is continuous on $\Sigma_{s}(\rr d)$ when $\chi$ has each of these three forms. We have $\mu(\mathcal{J}) = (2 \pi)^{-d/2} \mathscr{F}$, and $\mu(\chi) f(x) = |A|^{1/2} f(Ax)$ when $A \in \operatorname{GL}(d,\mathbf R)$ and \begin{equation*} \chi = \left( \begin{array}{cc} A^{-1} & 0 \\ 0 & A^{t} \end{array} \right). \end{equation*} The Fourier transform and linear coordinate transformations are continuous operators on $\Sigma_{s}(\rr d)$. Therefore it remains to prove that $\mu(\chi)$ is continuous on $\Sigma_{s}(\rr d)$ when $B \in \rr {d \times d}$ is symmetric and \begin{equation}\label{chichirp} \chi = \left( \begin{array}{cc} I & 0 \\ B & I \end{array} \right). \end{equation} We have $\mu(\chi)f (x) = e^{i \langle B x,x \rangle/2} f(x)$ when \eqref{chichirp} holds (cf. \eqref{symplecticoperator} and \cite{Folland1}). Due to the continuity of coordinate transformations on $\Sigma_{s}(\rr d)$, it suffices to consider diagonal matrices $B$ with non-negative entries. By an induction argument applied to the seminorms \eqref{gfseminorm} it further suffices to work in dimension $d=1$ and prove continuity on $\Sigma_{s}(\mathbf R)$ of the multiplication operator $f \rightarrow g f$ for $g(x) = e^{i x^2/2}$. It may be confirmed by induction that for any $k \in \mathbf N$ we have $D^k g = p_k g$ where $p_k$ is the polynomial of order $k$ \begin{equation*} p_k (x) = k! \sum_{m=0}^{\lfloor k/2 \rfloor} \frac{x^{k-2m} (-i)^m}{m!(k-2m)! 2^m}. \end{equation*} Using $k! \leqslant 2^k (k-2m)! (2m)!$ we can estimate $|p_k(x)|$ as \begin{equation*} |p_k (x)| \leqslant \sum_{m=0}^{\lfloor k/2 \rfloor} \frac{|x|^{k-2m} (2m)!}{m! 2^{m-k}} \leqslant \sum_{m=0}^{\lfloor k/2 \rfloor} |x|^{k-2m} m! 2^{m+k}.
\end{equation*} By Leibniz' rule we have for any $A>0$ and any $B \geqslant A$ \begin{align*} |D^n (g f)(x)| & \leqslant \sum_{k \leqslant n} \binom{n}{k} |p_k(x)| \, |D^{n-k} f(x)| \\ & \leqslant |f|_{2B} (n!)^s e^{-B|x|^{1/s}} A^{-n} \sum_{k \leqslant n} \binom{n}{k} |p_k(x)| e^{-B|x|^{1/s}} A^{k} \left( \frac{(n-k)!}{n!} \right)^s \\ & \leqslant |f|_{2B} (n!)^s e^{-A|x|^{1/s}} A^{-n} \sum_{k \leqslant n} \binom{n}{k} |p_k(x)| e^{-B|x|^{1/s}} A^{k} (k!)^{-s}, \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad n \in \mathbf N, \quad x \in \mathbf R. \end{align*} As an intermediate step we compute and estimate, using $m! = m!^{2s -\varepsilon}$ where $\varepsilon=2s-1>0$, \begin{align*} |p_k(x)| A^{k} (k!)^{-s} & \leqslant \sum_{m=0}^{\lfloor k/2 \rfloor} |x|^{k-2m} m! 2^{m+k} A^{k} (k!)^{-s} \\ & \leqslant \sum_{m=0}^{\lfloor k/2 \rfloor} 2^{m+k} A^{2m} m! \left(\frac{(k-2m)!}{k!}\right)^s \left(\frac{(A|x|)^{(k-2m)/s}}{(k-2m)!}\right)^s \\ & \leqslant e^{s A^{1/s} |x|^{1/s}} \sum_{m=0}^{\lfloor k/2 \rfloor} 2^{m+k} \left(\frac{A^{2m/\varepsilon}}{m!} \right)^\varepsilon \left(\frac{m!^2(k-2m)!}{k!}\right)^s \\ & \leqslant e^{s A^{1/s} |x|^{1/s}} \sum_{m=0}^{\lfloor k/2 \rfloor} 2^{m+k} \left(\frac{A^{2m/\varepsilon}}{m!} \right)^\varepsilon \\ & \lesssim e^{s A^{1/s} |x|^{1/s}} e^{\varepsilon A^{2/\varepsilon}} 2^{2k}. \end{align*} If $B \geqslant sA^{1/s}$ we thus obtain \begin{align*} |D^n (g f)(x)| & \lesssim |f|_{2B} (n!)^s e^{-A|x|^{1/s}} (A/5)^{-n} \\ & \leqslant |f|_{2B} (n!)^s e^{-(A/5)|x|^{1/s}} (A/5)^{-n}, \quad n \in \mathbf N, \quad x \in \mathbf R. \end{align*} Hence we have shown that for any $A>0$ we have \begin{equation*} |g f|_A \lesssim |f|_B \end{equation*} for $B \geqslant \max(2 s(5A)^{1/s}, 10A)$. In view of Proposition \ref{seminormequivalence} this shows that the multiplication operator $f \rightarrow g f$ with $g(x) = e^{i x^2/2}$ is continuous on $\Sigma_{s}(\mathbf R)$. Thus $\mu(\chi)$ is continuous on $\Sigma_s(\rr d)$ when $\chi$ has the form \eqref{chichirp} with $B$ symmetric. We may now conclude that $\mu(\chi)$ is continuous on $\Sigma_s(\rr d)$ for all $\chi \in \operatorname{Sp}(d, \mathbf R)$. Finally the unique extension to a continuous operator on $\Sigma_s'(\rr d)$ follows from the facts that $\mu(\chi)$ is unitary and $\Sigma_s(\rr d)$ is dense in $\Sigma_s'(\rr d)$. \end{proof} The combination of Propositions \ref{sGaborinvariance} and \ref{metaplecticcont} gives the symplectic invariance of the $s$-Gelfand--Shilov wave front set, as follows. \begin{cor}\label{symplecticWFs} If $s>1/2$ then \begin{equation*} WF^s(\mu(\chi) u) = \chi WF^s(u), \quad \chi \in \operatorname{Sp}(d,\mathbf R), \quad u \in \Sigma_s' (\rr d). \end{equation*} \end{cor} \begin{proof} By the proof of \cite[Lemma~3.7]{Wahlberg1} we have \begin{equation*} \left| V_{\mu(\chi) \varphi} \left( \mu(\chi) u \right)(\chi z) \right| = \left| V_\varphi u (z) \right| \end{equation*} for $\varphi \in \Sigma_s(\rr d)$, $u \in \Sigma_s'(\rr d)$, $\chi \in \operatorname{Sp}(d,\mathbf R)$ and $z \in \rr {2d}$. By Proposition \ref{metaplecticcont} $\mu(\chi) \varphi \in \Sigma_s(\rr d)$ so the result follows immediately from Proposition \ref{sGaborinvariance}. \end{proof} \begin{example} A combination of \eqref{WFsdirac} and $\mu(\mathcal{J}) = (2\pi)^{-d/2} \mathscr{F}$ gives \begin{equation}\label{WFsone} WF^s( 1 ) = (\rr d \setminus 0) \times \{ 0 \}, \quad s> 1/2.
\end{equation} If $B \in \rr {d \times d}$ is symmetric then $\chi$ defined by \eqref{chichirp} defines the metaplectic multiplication operator $\mu(\chi) = e^{i \langle Bx,x \rangle/2}$. Corollary \ref{symplecticWFs} combined with \eqref{WFsone} yields \begin{equation}\label{WFschirp} WF^s( e^{i \langle Bx,x \rangle/2} ) = \{ (x,Bx) : \ x \in \rr d \setminus 0 \}, \quad s > 1/2. \end{equation} \end{example} Next we show a result on the $s$-Gelfand--Shilov wave front set of a tensor product. The corresponding result for the Gabor wave front set is \cite[Proposition~2.8]{Hormander1}. With obvious modification of the proof given below one obtains an alternative proof of the latter result. Here we use the notation $x=(x',x'') \in \rr {m+n}$, $x' \in \rr m$, $x'' \in \rr n$. \begin{prop}\label{tensorWFs} If $s>1/2$, $u \in \Sigma_s'(\rr m)$, and $v \in \Sigma_s'(\rr n)$ then \begin{align*} & WF^s (u \otimes v) \subseteq \left( ( WF^s(u) \cup \{0\} ) \times ( WF^s(v) \cup \{0\} ) \right)\setminus 0 \\ & = \{ (x,\xi) \in T^* \rr {m+n} \setminus 0: \ (x',\xi') \in WF^s(u) \cup \{ 0 \}, \ (x'',\xi'') \in WF^s(v) \cup \{ 0 \} \} \setminus 0. \end{align*} \end{prop} \begin{proof} Let $\varphi \in \Sigma_s(\rr m) \setminus 0$ and $\psi \in \Sigma_s(\rr n) \setminus 0$. Suppose $(x_0,\xi_0) \in T^* \rr {m+n} \setminus 0$ does not belong to the set on the right hand side. Then either $(x_0',\xi_0') \notin WF^s(u) \cup \{ 0 \}$ or $(x_0'',\xi_0'') \notin WF^s(v) \cup \{ 0 \}$. For reasons of symmetry we may assume $(x_0',\xi_0') \notin WF^s(u) \cup \{ 0 \}$. Then $(x_0',\xi_0') \in \Gamma' \subseteq T^* \rr m \setminus 0$ where $\Gamma'$ is an open conic subset, and \begin{equation*} \sup_{(x',\xi') \in \Gamma'} e^{A | (x',\xi') |^{1/s}} |V_\varphi u(x',\xi')| < \infty \end{equation*} for all $A>0$. Define for $C>0$ the open conic set \begin{equation*} \Gamma = \{ (x,\xi) \in T^* \rr {m+n}: \ (x',\xi') \in \Gamma', \ |(x'',\xi'')| < C |(x',\xi')| \} \subseteq T^* \rr {m+n} \setminus 0. \end{equation*} Then $(x_0,\xi_0) \in \Gamma$ for $C>0$ sufficiently large since $(x_0',\xi_0') \neq 0$. By \eqref{STFTgrowth} we have for some $M \geqslant 0$ \begin{equation*} |V_\psi v (z)| \lesssim e^{M |z|^{1/s} }, \quad z \in \rr {2d}. \end{equation*} This gives for any $A>0$ \begin{align*} \sup_{(x,\xi) \in \Gamma} e^{A | (x,\xi)|^{1/s}}& |V_{\varphi \otimes \psi} u \otimes v (x,\xi)| = \sup_{(x,\xi) \in \Gamma} e^{A | (x,\xi)|^{1/s}} |V_\varphi u (x',\xi')| \, |V_\psi v (x'',\xi'')| \\ & \lesssim \sup_{(x,\xi) \in \Gamma} e^{2A | (x',\xi')|^{1/s}+(2A+M)|(x'',\xi'')|^{1/s}} |V_\varphi u (x',\xi')| \\ & \leqslant \sup_{(x',\xi') \in \Gamma'} e^{(2A+C^{1/s}(2A+M)) | (x',\xi')|^{1/s}} |V_\varphi u (x',\xi')| \\ & < \infty. \end{align*} It follows that $(x_0,\xi_0) \notin WF^s (u \otimes v)$. \end{proof} We need in Section \ref{sec:oscint} the following result which is an adaptation of \cite[Proposition~2.3]{Hormander1} from the Gabor wave front set to the $s$-Gelfand--Shilov wave front set. With natural modifications the proof can be considered an alternative proof of the latter result. \begin{prop}\label{linsurj} If $s>1/2$, $u \in \Sigma_s'(\rr d) \setminus 0$ and $A \in \rr {d \times n}$ is a surjective matrix, then \begin{equation*} WF^s(u \circ A) = \{ (x,A^t \xi) \in T^* \rr n \setminus 0: \ (Ax,\xi) \in WF^s(u) \} \cup (\operatorname{Ker} A \setminus 0) \times \{ 0 \}.
\end{equation*} \end{prop} \begin{proof} Due to Corollary \ref{symplecticWFs}, and $\mu(\chi) f(x) = |B|^{1/2} f(Bx)$ when $B \in \operatorname{GL}(d,\mathbf R)$ and \begin{equation*} \chi = \left( \begin{array}{cc} B^{-1} & 0 \\ 0 & B^{t} \end{array} \right), \end{equation*} it suffices to assume $k := n - d > 0$ and $A = ( I_d \ \ 0)$ where $0 \in \rr {d \times k}$. We split variables as $x = (x',x'') \in \rr n$ with $x' \in \rr d$ and $x'' \in \rr k$. We need to prove \begin{equation}\label{WFequality1} \begin{aligned} & WF^s(u \otimes 1) \\ & = \{ (x; \xi',0) \in T^* \rr n \setminus 0: \ (x',\xi') \in WF^s(u) \} \cup \left( \{ 0_d \} \times \rr k \setminus 0 \times \{ 0_n \} \right) \end{aligned} \end{equation} where we use the notation $0_n = 0\in \rr n$. The inclusion \begin{align*} & WF^s(u \otimes 1) \\ & \subseteq \{ (x; \xi',0) \in T^* \rr n \setminus 0: \ (x',\xi') \in WF^s(u) \} \cup \left( \{ 0_d \} \times \rr k \setminus 0 \times \{ 0_n \} \right) \end{align*} is a particular case of Proposition \ref{tensorWFs}, combined with \eqref{WFsone}. To prove the opposite inclusion, we first show \begin{equation}\label{opposite1} WF^s(u \otimes 1) \supseteq \{ 0_d \} \times \rr k \setminus 0 \times \{ 0_n \}. \end{equation} Let $\varphi \in \Sigma_s(\rr d) \setminus 0$ satisfy $(u,\varphi) \neq 0$ and let $\psi \in \Sigma_s(\rr k) \setminus 0$ satisfy $\widehat \psi(0) \neq 0$. If $x'' \in \rr k \setminus 0$ then due to \begin{equation}\label{STFTtensor} |V_{\varphi \otimes \psi} u \otimes 1 (x,\xi)| = |V_\varphi u(x',\xi')| \ |\widehat \psi(-\xi'')| \end{equation} we have for any $t>0$ \begin{equation*} |V_{\varphi \otimes \psi} u \otimes 1 (t(0,x'';0))| = |V_\varphi u(0,0)| \ |\widehat \psi(0)| = |(u,\varphi)| \ |\widehat \psi(0)| \neq 0. \end{equation*} Thus $V_{\varphi \otimes \psi} u \otimes 1$ does not decay in any conical neighborhood of $(0,x'';0) \in T^* \rr n$, which proves \eqref{opposite1}. To prove \eqref{WFequality1} it thus suffices to show the inclusion \begin{equation}\label{WFequality2} WF^s(u \otimes 1) \supseteq \{ (x; \xi',0) \in T^* \rr n \setminus 0: \ (x',\xi') \in WF^s(u) \}. \end{equation} Suppose $0 \neq (x_0; \xi_0',0) \notin WF^s(u \otimes 1)$. If $(x_0',\xi_0') = 0$ then $(x_0',\xi_0') \notin WF^s(u)$, so we may assume $(x_0',\xi_0') \neq 0$. We have $(x_0; \xi_0',0) \in \Gamma \subseteq T^* \rr n \setminus 0$ where $\Gamma$ is an open conic set such that \begin{equation*} \sup_{(x,\xi) \in \Gamma} e^{A | (x,\xi) |^{1/s}} |V_\varphi u(x',\xi')| \ |\widehat \psi(-\xi'')| < \infty \end{equation*} for all $A>0$, cf. \eqref{STFTtensor}. Define the open conic set \begin{equation*} \Gamma' = \{ (x',\xi') \in T^* \rr d \setminus 0: \ \exists x'' \in \rr k: \ (x',x'',\xi',0) \in \Gamma \} \subseteq T^* \rr d \setminus 0. \end{equation*} Then $(x_0',\xi_0') \in \Gamma'$ since $(x_0',\xi_0') \neq 0$. Let $A>0$ be arbitrary. Define the functions \begin{align*} f(x',\xi') & = e^{A | (x',\xi') |^{1/s}} |V_\varphi u(x',\xi')| \ |\widehat \psi(0)|, \quad (x',\xi') \in T^* \rr d, \\ g(x,\xi) & = e^{A | (x',\xi') |^{1/s}} |V_\varphi u(x',\xi')| \ |\widehat \psi(-\xi'')| \\ & \leqslant e^{A | (x,\xi) |^{1/s}} |V_\varphi u(x',\xi')| \ |\widehat \psi(-\xi'')|, \quad (x,\xi) \in T^* \rr n.
\end{align*} For some sequence $(x_n',\xi'_n)_{n \in \mathbf N} \subseteq \Gamma'$, where for each $n \in \mathbf N$ there exists $x_n'' \in \rr k$ such that $(x_n',x_n'',\xi_n',0) \in \Gamma$, we have \begin{align*} \sup_{(x',\xi') \in \Gamma'} e^{A | (x',\xi') |^{1/s}} |V_\varphi u(x',\xi')| & = |\widehat \psi(0)|^{-1} \lim_{n \rightarrow \infty} f(x_n',\xi_n') \\ & = |\widehat \psi(0)|^{-1} \lim_{n \rightarrow \infty} g(x_n',x_n'',\xi_n',0) \\ & \lesssim \sup_{(x,\xi) \in \Gamma} e^{A | (x,\xi) |^{1/s}} |V_\varphi u(x',\xi')| \ |\widehat \psi(-\xi'')| \\ & < \infty. \end{align*} This means that $(x_0',\xi_0') \notin WF^s(u)$ which proves \eqref{WFequality2}. \end{proof} Let $s>1/2$ and suppose $a \in C^\infty(\rr {2d})$ satisfies the estimates \begin{equation}\label{symbolestimate0} |\pd \alpha a(z)| \lesssim h^{|\alpha|} (\alpha!)^s, \quad \alpha \in \nn {2d}, \quad z \in \rr {2d}, \end{equation} for all $h>0$. According to \cite[Theorem 3.4]{Cappiello2} $a^w(x,D)$ is then a continuous operator on $\Sigma_s (\rr d)$ that extends uniquely to a continuous operator on $\Sigma_s' (\rr d)$. In particular $WF^s (a^w(x,D) u)$ is well defined for $u \in \Sigma_s' (\rr d)$. The next result shows that these pseudodifferential operators are microlocal with respect to the $s$-Gelfand--Shilov wave front set. First we need a lemma. \begin{lem}\label{symbolSTFT} If $\varphi \in \Sigma_s(\rr {2d}) \setminus 0$ and $a \in C^\infty(\rr {2d})$ satisfies the estimates \eqref{symbolestimate0} for all $h>0$, then for any $A>0$ \begin{equation*} \left| V_{\varphi} a(x,\xi) \right| \lesssim e^{- A |\xi|^{1/s}}, \quad x \in \rr {2d}, \ \xi \in \rr {2d}. \end{equation*} \end{lem} \begin{proof} We start by estimating a seminorm \eqref{gfseminorm} of $\overline{\varphi} \, T_{-x} a$. From \eqref{symbolestimate0} we obtain for any $h>0$ \begin{align*} |y^\alpha D_y^\beta (\overline{\varphi(y)} a(y+x))| & \leqslant \sum_{\gamma \leqslant \beta} \binom{\beta}{\gamma} |y^\alpha D^{\beta-\gamma} \varphi(y)| \, |D^\gamma a(y+x)| \\ & \lesssim \| \varphi \|_{\mathcal S_{s,h/2}} \sum_{\gamma \leqslant \beta} \binom{\beta}{\gamma} (h/2)^{|\beta-\gamma + \alpha+\gamma|} ((\beta-\gamma)! \gamma! \alpha!)^s \\ & \lesssim (h/2)^{|\alpha + \beta|} (\beta! \alpha!)^s \sum_{\gamma \leqslant \beta} \binom{\beta}{\gamma} \\ & \lesssim h^{|\alpha + \beta|} (\beta! \alpha!)^s, \quad x, y \in \rr {2d}, \quad \alpha,\beta \in \nn {2d}. \end{align*} It follows that for any $h>0$ we have the estimate \begin{equation*} \| \overline{\varphi} \, T_{-x} a \|_{\mathcal S_{s,h}} \leqslant C_h, \quad x \in \rr {2d}, \end{equation*} where $C_h>0$. Note that the estimate is uniform over $x \in \rr {2d}$. By Proposition \ref{seminormequivalence}, or more precisely \eqref{seminorm2a}, we have for any $A>0$ \begin{equation*} \| \widehat{\overline{\varphi} \, T_{-x} a} \|_A' \leqslant C_A, \quad x \in \rr {2d}, \end{equation*} for some $C_A>0$. This gives finally for any $A>0$ \begin{equation*} \left| V_{\varphi} a(x,\xi) \right| = |\widehat{a T_x \overline{\varphi}}(\xi)| = |\widehat{\overline{\varphi} \, T_{-x} a}(\xi)| \lesssim e^{- A |\xi|^{1/s}}, \quad x \in \rr {2d}, \ \xi \in \rr {2d}. \end{equation*} \end{proof} \begin{prop}\label{microlocalWFs} If $s>1/2$ and $a \in C^\infty(\rr {2d})$ satisfies the estimates \eqref{symbolestimate0} for all $h>0$ then \begin{equation*} WF^s (a^w(x,D) u) \subseteq WF^s (u), \quad u \in \Sigma_s'(\rr d).
\end{equation*} \end{prop} \begin{proof} Pick $\varphi \in \Sigma_s(\rr d)$ such that $\| \varphi \|_{L^2}=1$. Denoting the formal adjoint of $a^w(x,D)$ by $a^w(x,D)^*$, \eqref{STFTrecon2} gives for $u \in \Sigma_s'(\rr d)$ and $z \in \rr {2d}$ \begin{align*} V_\varphi (a^w(x,D) u) (z) & = ( a^w(x,D) u, \Pi(z) \varphi ) \\ & = ( u, a^w(x,D)^* \Pi(z) \varphi ) \\ & = (2 \pi)^{-d} \int_{\rr {2d}} V_\varphi u(w) \, ( \Pi(w) \varphi,a^w(x,D)^* \Pi(z) \varphi ) \, dw \\ & = (2 \pi)^{-d} \int_{\rr {2d}} V_\varphi u(w) \, ( a^w(x,D) \, \Pi(w) \varphi,\Pi(z) \varphi ) \, dw \\ & = (2 \pi)^{-d} \int_{\rr {2d}} V_\varphi u(z-w) \, ( a^w(x,D) \, \Pi(z-w) \varphi,\Pi(z) \varphi ) \, dw. \end{align*} By e.g. \cite[Lemma 3.1]{Grochenig2} we have \begin{equation*} |( a^w(x,D) \, \Pi(z-w) \varphi,\Pi(z) \varphi )| = (2 \pi)^{-d} \left| V_\Phi a \left( z-\frac{w}{2}, \mathcal{J} w \right) \right| \end{equation*} where $\Phi$ is the Wigner distribution $\Phi = W(\varphi,\varphi) \in \Sigma_s(\rr {2d})$. Defining \begin{equation*} g(w) = \sup_{z \in \rr {2d}} | ( a^w(x,D) \, \Pi(z-w) \varphi,\Pi(z) \varphi ) |, \quad w \in \rr {2d}, \end{equation*} we thus obtain from Lemma \ref{symbolSTFT} \begin{equation*} g \in \bigcap_{A>0} L_{\exp(A | \, \cdot \, |^{1/s})}^1(\rr {2d}) \end{equation*} and \begin{align}\label{convolution1} |V_\varphi (a^w(x,D) u) (z)| & \lesssim |V_\varphi u| * g(z), \quad z \in \rr {2d}. \end{align} If $0 \neq z_0 \in T^*\rr d \setminus WF^s(u)$ then there exists an open conic set $\Gamma \subseteq T^* \rr d \setminus 0$ containing $z_0$ such that for all $A>0$ \begin{equation*} \sup_{z \in \Gamma} e^{A |z|^{1/s}} |V_\varphi u(z)| < \infty. \end{equation*} By \eqref{STFTgrowth} we have for some $M > 0$ \begin{equation*} |V_\varphi u (z)| \lesssim e^{M |z|^{1/s}}, \quad z \in \rr {2d}. \end{equation*} It now follows from \eqref{convolution1} and Lemma \ref{convolutioninvariance} that for any open conic set $\Gamma'$ containing $z_0$ such that $\overline{\Gamma' \cap S_{2d-1}} \subseteq \Gamma$ we have for all $A > 0$ \begin{equation*} \sup_{z \in \Gamma'} e^{A |z|^{1/s}} |V_\varphi (a^w(x,D) u) (z)| < \infty, \end{equation*} which proves that $z_0 \notin WF^s( a^w(x,D) u)$. We have thus shown \begin{equation*} WF^s( a^w(x,D) u) \subseteq WF^s(u). \end{equation*} \end{proof} \begin{cor} Let $s>1/2$ and $u \in \Sigma_s'(\rr d)$. For any $z \in \rr {2d}$ we have $WF^s( \Pi(z) u) = WF^s(u)$. \end{cor} \begin{proof} Since $\Pi(-z) \Pi(z) = e^{i \langle x, \xi \rangle}$ for $z=(x,\xi) \in \rr {2d}$, it suffices to show $WF^s( \Pi(z) u) \subseteq WF^s(u)$. The latter inclusion follows from Proposition \ref{microlocalWFs} if we succeed in showing that the Weyl symbol for $\Pi(z)$ is smooth and satisfies \eqref{symbolestimate0} for any $h>0$. We have $\Pi(z) = a_z^w(x,D)$ where \begin{equation*} a_z(w) = e^{i\langle x,\xi \rangle/2 + i \langle \mathcal{J} z, w \rangle}, \quad z=(x,\xi), \quad w \in \rr {2d} \end{equation*} (cf. the proof of \cite[Lemma~3.7]{Wahlberg1}). Thus \begin{align*} |\pd \alpha a_z (w)| & \leqslant |z|^{|\alpha|} = h^{|\alpha|} (\alpha!)^s \left( \frac{(|z|/h)^{|\alpha|/s}}{\alpha!} \right)^s \\ & \leqslant h^{|\alpha|} (\alpha!)^s \left( \frac{(d(|z|/h)^{1/s})^{|\alpha|}}{|\alpha|!} \right)^s \\ & \leqslant h^{|\alpha|} (\alpha!)^s \exp(s d (|z|/h)^{1/s}), \quad \alpha \in \nn {2d}, \quad w \in \rr {2d}, \end{align*} for any $h>0$. The estimates \eqref{symbolestimate0} are thus satisfied. \end{proof}
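As a simple illustration we may combine the corollary with \eqref{WFsdirac}: since $\delta_{x_0} = T_{x_0} \delta_0 = \Pi((x_0,0)) \delta_0$ for $x_0 \in \rr d$, we obtain
\begin{equation*}
WF^s(\delta_{x_0}) = WF^s(\delta_0) = \{ 0 \} \times (\rr d \setminus 0), \quad x_0 \in \rr d, \quad s > 1/2,
\end{equation*}
in analogy with \eqref{example1} for the Gabor wave front set.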
\section{Schr\"odinger equations and solution operators} \label{sec:formulation} As stated in Section \ref{sec:intro} the ultimate purpose of this paper is to prove results on propagation of the $s$-Gelfand--Shilov wave front set for the initial value Cauchy problem for a class of Schr\"odinger equations. More precisely we study the equation \begin{equation}\label{schrodeq} \left\{ \begin{array}{rl} \partial_t u(t,x) + q^w(x,D) u (t,x) & = 0, \\ u(0,\, \cdot \,) & = u_0, \end{array} \right. \end{equation} where $s>1/2$, $u_0 \in \Sigma_s'(\rr d)$, $t \geqslant 0$ and $x \in \rr d$. The Hamiltonian $q^w(x,D)$ has the quadratic form Weyl symbol \begin{equation*} q(x,\xi) = \langle (x, \xi), Q (x, \xi) \rangle, \quad x, \, \xi \in \rr d, \end{equation*} where $Q \in \cc {2d \times 2d}$ is a symmetric matrix with ${\rm Re} \, Q \geqslant 0$. The special case ${\rm Re} \, Q = 0$ will allow us to study the equation for $t \in \mathbf R$ instead of $t \geqslant 0$. According to \cite[Theorem 3.4]{Cappiello2} $q^w(x,D)$ extends to a continuous operator on $\Sigma_s'(\rr d)$, and we will later prove that also the solution operator is continuous on $\Sigma_s'(\rr d)$ for each $t \geqslant 0$ (see Corollary \ref{propagatorcontGF}). The \emph{Hamilton map} $F$ corresponding to $q$ is defined by \begin{equation*} \sigma(Y, F X) = q(Y,X), \quad X,Y \in \rr {2d}, \end{equation*} where $q(Y,X)$ is the bilinear polarized version of the form $q$, i.e. $q(X,Y)=q(Y,X)$ and $q(X,X)=q(X)$. The Hamilton map $F$ is the matrix \begin{equation*} F = \mathcal{J} Q \in \cc {2d \times 2d} \end{equation*} where $\mathcal{J}$ is the matrix \eqref{Jdef}. For $u_0 \in L^2(\rr d)$ the equation \eqref{schrodeq} is solved for $t \geqslant 0$ by \begin{equation*} u(t,x) = e^{-t q^w(x,D)} u_0(x) \end{equation*} where the solution operator (propagator) $e^{-t q^w(x,D)}$ is the contraction semigroup that is generated by the operator $-q^w(x,D)$. Contraction semigroup means a strongly continuous semigroup with $L^2$ operator norm $\leqslant 1$ for all $t \geqslant 0$ (cf. \cite{Yosida1}). The reason why $- q^w(x,D)$, or more precisely the closure $M_{-q}$ as an unbounded linear operator in $L^2(\rr d)$ of the operator $- q^w(x,D)$ defined on $\mathscr{S}(\rr d)$, generates such a semigroup is explained in \cite[pp. 425--426]{Hormander2}. The contraction semigroup property is a consequence of $M_{- q}$ and its adjoint $M_{-\overline q}$ being \emph{dissipative} operators \cite{Yosida1}. For $M_{-q}$ this means \begin{equation*} {\rm Re} \, (M_{-q} u,u) = (M_{-{\rm Re} \, q} u,u) \leqslant 0, \quad u \in D(M_{- q}), \end{equation*} $D(M_{-q}) \subseteq L^2(\rr d)$ denoting the domain of $M_{-q}$, which follows from the assumption ${\rm Re} \, Q \geqslant 0$. Note the feature $M_{-\overline q} = M_{-q}^*$ that holds for the Weyl quantization. Our objective is the propagation of the $s$-Gelfand--Shilov wave front set with $s>1/2$ for the Schr\"odinger propagator $e^{-t q^w(x,D)}$. This means that we seek inclusions for \begin{equation*} WF^s(e^{-t q^w(x,D)} u_0) \end{equation*} in terms of $WF^s(u_0)$, $F$ and $t \geqslant 0$ for $u_0 \in \Sigma_s'(\rr d)$. If ${\rm Re} \, Q=0$ then the propagator is given by means of the metaplectic representation. To wit, if ${\rm Re} \, Q=0$ then $e^{-t q^w(x,D)}$ is a group of unitary operators, and we have by \cite[Theorem 4.45]{Folland1} \begin{equation*} e^{-t q^w(x,D)} = \mu(e^{-2 i t F}), \quad t \in \mathbf R. \end{equation*} In this case $F$ is purely imaginary and $i F \in \operatorname{sp}(d,\mathbf R)$, the symplectic Lie algebra, which implies that $e^{-2 i t F} \in \operatorname{Sp}(d,\mathbf R)$ for any $t \in \mathbf R$ \cite{Folland1}. According to Corollary \ref{symplecticWFs} we thus have if $s > 1/2$ \begin{equation*} WF^s(e^{-t q^w(x,D)} u_0) = e^{-2 i t F} WF^s(u_0), \quad t \in \mathbf R, \quad u_0 \in \Sigma_s'(\rr d). \end{equation*} The propagation of the $s$-Gelfand--Shilov wave front set is thus exact when ${\rm Re} \, Q=0$.
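For orientation we work out a routine special case, using the convention $\mathcal J(x,\xi) = (\xi,-x)$ that corresponds to the formula $\sigma((x,\xi),(x',\xi')) = \langle \mathcal J(x,\xi),(x',\xi') \rangle$ above. Let $q(x,\xi) = i|\xi|^2$, so that \eqref{schrodeq} is a free Schr\"odinger equation. Then
\begin{equation*}
Q = \left( \begin{array}{cc} 0 & 0 \\ 0 & i I \end{array} \right), \qquad
F = \mathcal{J} Q = \left( \begin{array}{cc} 0 & i I \\ 0 & 0 \end{array} \right), \qquad
e^{-2itF} = I - 2itF = \left( \begin{array}{cc} I & 2tI \\ 0 & I \end{array} \right),
\end{equation*}
since $F^2 = 0$. Here ${\rm Re} \, Q = 0$, and the formula above gives
\begin{equation*}
WF^s(e^{-t q^w(x,D)} u_0) = \{ (x + 2t\xi, \xi) : \ (x,\xi) \in WF^s(u_0) \}, \quad t \in \mathbf R, \quad u_0 \in \Sigma_s'(\rr d),
\end{equation*}
so the singularities travel along straight lines in phase space.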
In the rest of the paper we study the more general assumption ${\rm Re} \, Q \geqslant 0$. Under this assumption we will show in Section \ref{sec:kernelschrod} that the propagator $e^{-t q^w(x,D)}$ is a continuous operator on $\Sigma_s(\rr d)$ and extends uniquely to a continuous operator on $\Sigma_s'(\rr d)$ when $s>1/2$. \section{Propagation of the $s$-Gelfand--Shilov wave front set for certain linear operators}\label{sec:proplinop} In this section we prepare for the results on propagation of the $s$-Gelfand--Shilov wave front set for $e^{-t q^w(x,D)}$ in Section \ref{sec:propsing}. We show propagation of singularities for linear operators in terms of their Schwartz kernels. For $s>1/2$ a kernel $K \in \Sigma_s'(\rr {2d})$ defines a continuous linear map $\mathscr{K}: \Sigma_s (\rr d) \rightarrow \Sigma_s'(\rr d)$ by \begin{equation}\label{kernelop} (\mathscr{K} f, g) = (K, g \otimes \overline f), \quad f,g \in \Sigma_s(\rr d). \end{equation} Let $\varphi \in \Sigma_s(\rr d)$ satisfy $\| \varphi \|_{L^2} = 1$ and set $\Phi = \varphi \otimes \varphi \in \Sigma_s(\rr {2d})$. By \cite[Lemma~4.1]{Wahlberg1} we have for $u,\psi \in \Sigma_s(\rr d)$ \begin{equation}\label{TSTFT} \begin{aligned} (\mathscr{K} u, \psi) = (2 \pi)^{-2d} \int_{\rr {4d}} V_\Phi K(x,y,\xi,-\eta) \, \overline{V_\varphi \psi (x,\xi)} \, V_{\overline \varphi} u(y,\eta) \, dx \, dy \, d \xi \, d \eta. \end{aligned} \end{equation} In the following results we need a definition from \cite{Hormander1}, adapted from the Gabor to the $s$-Gelfand--Shilov wave front set. For $K \in \Sigma_s'(\rr {2d})$ we define \begin{equation}\label{WFproj} \begin{aligned} WF_1^s(K) & = \{ (x,\xi) \in T^* \rr d: \ (x, 0, \xi, 0) \in WF^s(K) \} & \subseteq T^* \rr d \setminus 0, \\ WF_2^s(K) & = \{ (y,\eta) \in T^* \rr d: \ (0, y, 0, -\eta) \in WF^s(K) \} & \subseteq T^* \rr d \setminus 0. \end{aligned} \end{equation} \begin{lem}\label{WFkernelaxes} If $s>1/2$, $K \in \Sigma_s'(\rr {2d})$ and $WF_1^s(K) = WF_2^s(K) = \emptyset$, then there exists $C > 1$ such that \begin{equation*} WF^s (K) \subseteq \{ (x,y,\xi,\eta) \in T^* \rr {2d}: \ C^{-1} |(x,\xi)| < |(y,\eta)| < C |(x,\xi)| \}. \end{equation*} \end{lem} \begin{proof} Suppose \begin{equation*} WF^s (K) \subseteq \{ (x,y,\xi,\eta) \in T^* \rr {2d}: \ |(y,\eta)| < C |(x,\xi)| \} \end{equation*} does not hold for any $C>0$. Then for each $n \in \mathbf N$ there exists $(x_n,y_n,\xi_n,\eta_n) \in WF^s(K)$ such that $|(y_n,\eta_n)| \geqslant n |(x_n,\xi_n)|$. We may assume that $|(x_n,y_n,\xi_n,\eta_n)| = 1$ for each $n \in \mathbf N$ since $WF^s(K)$ is conic. Thus $(x_n,\xi_n) \rightarrow 0$ as $n \rightarrow \infty$. Passing to a subsequence (without change of notation) and using the closedness of $WF^s(K)$ gives \begin{equation*} (x_n,y_n,\xi_n,\eta_n) \rightarrow (0,y,0,\eta) \in WF^s(K), \quad n \rightarrow \infty, \end{equation*} for some $(y,\eta) \in S_{2d-1}$.
This implies $(y,-\eta) \in WF_2^s(K)$ which is a contradiction. Similarly one shows \begin{equation*} WK^s (K) \subseteq \{ (x,y,\xi,\eta) \in T^* \rr {2d}: \ |(x,\xi)| < C |(y,\eta)| \} \end{equation*} for some $C>0$ using $WF_1^s(K) = \emptyset$. \end{proof} In the next result we use the conventional notation (cf. \cite{Hormander1,Hormander2}) for the reflection operator in the fourth $\rr d$ coordinate on $\rr {4d}$ \begin{equation*} (x,y,\xi,\eta)' = (x,y,\xi,-\eta), \quad x,y,\xi,\eta \in \rr d. \end{equation*} \begin{lem}\langlebel{STFTkernelformula} Suppose $s>1/2$, $K \in \Sigma_s'(\rr {2d})$ and $WF_1^s(K) = WF_2^s(K) = \emptyset$. Suppose that the linear map $\mathscr{K}: \Sigma_s (\rr d) \rightarrow \Sigma_s'(\rr d)$ defined by \eqref{kernelop} is continuous $\mathscr{K}: \Sigma_s (\rr d) \rightarrow \Sigma_s(\rr d)$ and extends uniquely to a continuous linear operator $\mathscr{K}: \Sigma_s' (\rr d) \rightarrow \Sigma_s'(\rr d)$. If $\varphi \in \Sigma_s(\rr d)$ satisfies $\| \varphi \|_{L^2} = 1$ and $\Phi = \varphi \otimes \varphi$ then \eqref{TSTFT} extends to $u \in \Sigma_s'(\rr d)$ and $\psi \in \Sigma_s(\rr d)$. \end{lem} \begin{proof} Let $\varphi \in \Sigma_s(\rr d)$ satisfy $\| \varphi \|_{L^2} = 1$ and let $u \in \Sigma_s'(\rr d)$. By \eqref{STFTgrowth} we have for some $M \geqslant 0$ \begin{equation}\langlebel{STFTupperbound1} |V_\varphi u (z)| \lesssim e^{M |z|^{1/s}}, \quad z \in \rr {2d}. \end{equation} Define for $n \in \mathbf N$ \begin{equation*} u_n = (2 \pi)^{-d} \int_{|z| \leqslant n} V_\varphi u(z) \Pi(z) \varphi \, dz. \end{equation*} In order to verify $u_n \in \Sigma_s(\rr d)$ we use the seminorms \eqref{seminorm2} for $A>0$. We have for any $A>0$ \begin{align*} e^{A|w|^{1/s}} |V_\varphi u_n (w)| & \lesssim \int_{|z| \leqslant n} |V_\varphi u(z)| \, e^{A|w|^{1/s}} |V_\varphi \varphi(w-z)| \, dz \\ & \lesssim \int_{|z| \leqslant n} e^{M |z|^{1/s} + A|w|^{1/s} - 2 A |w-z|^{1/s}} \, dz \\ & \lesssim \int_{|z| \leqslant n} e^{(M+2A) |z|^{1/s}} \, dz \\ & \lesssim 1, \quad w \in \rr {2d}. \end{align*} It follows that $u_n \in \Sigma_s(\rr d)$ for $n \in \mathbf N$. To prove that $u_n \rightarrow u$ in $\Sigma_s'(\rr d)$ as $n \rightarrow \infty$ we pick $\psi \in \Sigma_s(\rr d)$. From \eqref{STFTupperbound1}, the estimate (cf. Proposition {\rm Re} \, f{seminormequivalence} and \cite[Theorem~2.4]{Toft1}) \begin{equation}\langlebel{STFTupperbound2} |V_\varphi \psi (z)| \lesssim e^{-A |z|^{1/s}}, \quad z \in \rr {2d}, \quad A>0, \end{equation} we obtain by means of dominated convergence and \eqref{STFTrecon2} \begin{align*} (u_n,\psi) & = (2 \pi)^{-d} \int_{|z| \leqslant n} V_\varphi u (z) \ \overline{V_\varphi \psi (z)} \, dz \\ & \longrightarrow (2 \pi)^{-d} \int_{\rr {2d}} V_\varphi u(z) \ \overline{V_\varphi \psi (z)} \, dz \\ & = (u,\psi), \quad n \rightarrow \infty. \end{align*} This proves the claim that $u_n \rightarrow u$ in $\Sigma_s'(\rr d)$ as $n \rightarrow \infty$. We also need the estimate (cf. 
\cite[Eq.~(11.29)]{Grochenig1}) \begin{equation*} |V_{\overline{\varphi}} u_n (z)| \leqslant (2 \pi)^{-d} |V_\varphi u| * |V_{\overline{\varphi}} \varphi| (z), \quad z \in \rr {2d}, \end{equation*} which in view of \eqref{STFTupperbound1} and \eqref{STFTupperbound2} with $\psi$ replaced by $\varphi$ and conjugation of the window function gives the bound \begin{equation}\langlebel{STFTupperbound3} |V_{\overline{\varphi}} u_n (z)| \lesssim e^{4M |z|^{1/s}}, \quad z \in \rr {2d}, \quad n \in \mathbf N, \end{equation} that holds uniformly over $n \in \mathbf N$. We are now in a position to assemble the arguments into a proof of formula \eqref{TSTFT} for $u \in \Sigma_s'(\rr d)$ and $\psi \in \Sigma_s(\rr d)$. Using \eqref{TSTFT} for $u_n$ gives \begin{equation}\langlebel{STFTequalitylimit} (\mathscr{K} u, \psi) = \lim_{n \rightarrow \infty} (2 \pi)^{-2d} \int_{\rr {4d}} V_\Phi K(x,y,\xi,-\eta) \, \overline{V_\varphi \psi (x,\xi)} \, V_{\overline \varphi} u_n(y,\eta) \, dx \, dy \, d \xi \, d \eta. \end{equation} Since $V_{\overline \varphi} u_n(y,\eta) \rightarrow V_{\overline \varphi} u(y,\eta)$ as $n \rightarrow \infty$ for all $(y,\eta) \in \rr {2d}$, the formula \eqref{TSTFT} follows from dominated convergence if we can show that the modulus of the integrand in \eqref{STFTequalitylimit} is bounded by an integrable function that does not depend on $n \in \mathbf N$. For $C > 1$ define the open conic set \begin{equation*} {G(X|Y)}amma = \{ (x,y,\xi,\eta) \in T^* \rr {2d}: \ C^{-1} |(x,\xi)| < |(y,\eta)| < C |(x,\xi)| \} \subseteq T^* \rr {2d} \setminus 0. \end{equation*} If $C$ is chosen properly then we have $WK^s (K) \subseteq {G(X|Y)}amma$ by Lemma {\rm Re} \, f{WFkernelaxes}. Let $\varepsilon>0$. For the integral in \eqref{STFTequalitylimit} over ${G(X|Y)}amma'$ we may estimate, using the estimate \begin{equation}\langlebel{kernelSTFT} |V_\Phi K(x,y,\xi,-\eta)| \lesssim e^{B |(x,y,\xi,\eta)|^{1/s}}, \quad (x,y,\xi,\eta) \in \rr {4d}, \end{equation} for some $B \geqslant 0$ (cf. \eqref{STFTgrowth}), and \eqref{STFTupperbound2}, \eqref{STFTupperbound3}, for any $A>0$ \begin{equation}\langlebel{integralGamma} \begin{aligned} & \int_{{G(X|Y)}amma'} |V_\Phi K(x,y,\xi,-\eta)| \,|V_\varphi \psi (x,\xi)| \, |V_{\overline \varphi} u_n(y,\eta)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{{G(X|Y)}amma'} e^{-\varepsilon |(y,\eta)|^{1/s}} e^{(2B-A) |(x,\xi)|^{1/s} + (2B+4M +\varepsilon) |(y,\eta)|^{1/s}} \, dx \, dy \, d \xi \, d \eta \\ & \leqslant \int_{{G(X|Y)}amma'} e^{-\varepsilon |(y,\eta)|^{1/s}} e^{(2B-A+C^{1/s}(2B+4M +\varepsilon)) |(x,\xi)|^{1/s}} \, dx \, dy \, d \xi \, d \eta < \infty, \end{aligned} \end{equation} in the final inequality assuming that $A>0$ is sufficiently large. Since ${G(X|Y)}amma \subseteq T^* \rr {2d} \setminus 0$ is open and $WK^s (K) \subseteq {G(X|Y)}amma$ we have for any $A>0$ \begin{equation*} |V_\Phi K(x,y,\xi,-\eta)| \lesssim e^{-A |(x,y,\xi,\eta)|^{1/s}}, \quad (x,y,\xi,-\eta) \in \rr {4d} \setminus {G(X|Y)}amma. 
\end{equation*} This gives for the integral in \eqref{STFTequalitylimit} over $\rr {4d} \setminus {G(X|Y)}amma'$, again using \eqref{STFTupperbound3}, \begin{equation}\langlebel{integralGammacomp} \begin{aligned} & \int_{\rr {4d} \setminus {G(X|Y)}amma'} |V_\Phi K(x,y,\xi,-\eta)| \,|V_\varphi \psi (x,\xi)| \, |V_{\overline \varphi} u_n(y,\eta)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{\rr {4d} \setminus {G(X|Y)}amma'} e^{-A |(x,y,\xi,\eta)|^{1/s} + 4M |(y,\eta)|^{1/s} } \, dx \, dy \, d \xi \, d \eta \\ & \leqslant \int_{ \rr {4d} } e^{ (4M-A) |(x,y,\xi,\eta)|^{1/s} } \, dx \, dy \, d \xi \, d \eta < \infty \end{aligned} \end{equation} provided $A>0$ is sufficiently large. Combined, \eqref{integralGamma} and \eqref{integralGammacomp} show our claim that the modulus of the integrand in \eqref{STFTequalitylimit} is bounded by an integrable function that does not depend on $n \in \mathbf N$. \end{proof} Since \begin{equation*} \overline{V_\varphi \Pi(t,\theta) \varphi (x,\xi)} = e^{i \langle x, \xi - \theta \rangle} V_\varphi \varphi (t-x,\theta-\xi), \quad t,x,\theta,\xi \in \rr d, \end{equation*} we obtain from Lemma {\rm Re} \, f{STFTkernelformula} with $\psi = \Pi(t,\theta) \varphi$ for $(t,\theta) \in \rr {2d}$, $u \in \Sigma_s'(\rr d)$, $\varphi \in \Sigma_s(\rr d)$ and $\| \varphi \|_{L^2} = 1$ \begin{equation}\langlebel{STFTop1} \begin{aligned} & V_\varphi(\mathscr{K} u) (t, \theta) = (\mathscr{K} u, \Pi(t,\theta) \varphi) \\ & = (2 \pi)^{-2d} \int_{\rr {4d}} e^{i \langle x,\xi-\theta \rangle} V_\Phi K (x,y,\xi,-\eta) V_\varphi \varphi (t-x,\theta-\xi) \, V_{\overline \varphi} u(y,\eta) \, dx \, dy \, d \xi \, d \eta. \end{aligned} \end{equation} This formula will be useful in the proof of Theorem {\rm Re} \, f{WFphaseincl}. The following result concerns propagation of singularities for linear operators and is a version of H\"ormander's \cite[Proposition~2.11]{Hormander1} adapted to the $s$-Gelfand--Shilov wave front set. We use the relation mapping notation \begin{align*} & WF^s(K)' \circ WF^s (u) \\ & = \{ (x,\xi) \in T^* \rr d: \, \exists (y,\eta) \in WF^s (u) : \, (x,y,\xi,-\eta) \in WF^s(K) \}. \end{align*} \begin{thm}\langlebel{WFphaseincl} Let $s>1/2$ and let $\mathscr{K}$ be the continuous linear operator \eqref{kernelop} defined by the Schwartz kernel $K \in \Sigma_s'(\rr {2d})$. Suppose $\mathscr{K}: \Sigma_s(\rr d) \rightarrow \Sigma_s(\rr d)$ is continuous and extends uniquely to a continuous linear operator $\mathscr{K}: \Sigma_s' (\rr d) \rightarrow \Sigma_s' (\rr d)$, and suppose \begin{equation}\langlebel{WKjempty} WF_1^s(K) = WF_2^s(K) = \emptyset. \end{equation} Then for $u \in \Sigma_s'(\rr d)$ we have \begin{equation}\langlebel{WFphaseinclusion} WF^s (\mathscr{K} u) \subseteq WF^s(K)' \circ WF^s (u). \end{equation} \end{thm} \begin{proof} It follows from \eqref{STFTgrowth} that \eqref{kernelSTFT} is satisfied for some $B \geqslant 0$ if $\Phi \in \Sigma_s(\rr {2d}) \setminus 0$. Denote by \begin{align*} p_{1,3}(x,y,\xi,\eta) & = (x,\xi), \\ p_{2,-4}(x,y,\xi,\eta) & = (y,-\eta), \quad x,y,\xi, \eta \in \rr d, \end{align*} the projections $\rr {4d} \rightarrow \rr {2d}$ onto the first and the third $\rr d$ coordinate, and onto the second and the fourth $\rr d$ coordinate with a change sign in the latter, respectively. 
By Lemma {\rm Re} \, f{WFkernelaxes} there exists $c>1$ such that \begin{equation*} WF^s (K) \subseteq {G(X|Y)}amma_1 = \{ (x,y,\xi,\eta) \in T^* \rr {2d}: \ c^{-1} |(x,\xi)| < |(y,\eta)| < c |(x,\xi)| \}, \end{equation*} and, defining \begin{align*} {G(X|Y)}amma_{1,3} & = \{(x,y,\xi,\eta) \in T^* \rr {2d}: \ c |(x,\xi)| \leqslant |(y,\eta)| \}, \\ {G(X|Y)}amma_{2,4} & = \{(x,y,\xi,\eta) \in T^* \rr {2d}: \ c |(y,\eta)| \leqslant |(x,\xi)| \}, \\ \end{align*} we thus have \begin{equation}\langlebel{inclusionG1} {G(X|Y)}amma_1 \subseteq \rr {4d} \setminus ({G(X|Y)}amma_{1,3} \cup {G(X|Y)}amma_{2,4} ). \end{equation} We show the inclusion \eqref{WFphaseinclusion} by showing that \begin{equation}\langlebel{assumption1} 0 \neq (t_0,\theta_0) \mathbf Ntin WF^s(K)' \circ WF^s (u) \end{equation} implies $(t_0,\theta_0) \mathbf Ntin WF^s (\mathscr{K} u)$. Thus we suppose \eqref{assumption1}. By \cite[Lemma~4.2]{Wahlberg1} we may assume that $(t_0,\theta_0) \in \Omega_0$ and $\overline \Omega_0 \cap WF^s(K)' \circ \overline \Omega_2 = \emptyset$ where $\Omega_0, \Omega_2 \subseteq T^* \rr d \setminus 0$ are conic, open and $WF^s (u) \subseteq \Omega_2$. (The assumption of \cite[Lemma~4.2]{Wahlberg1} corresponds to the assumption \eqref{WKjempty}.) Here we use the notation $\overline \Omega \subseteq T^* \rr d \setminus 0$ for the closure in the usual topology in $T^* \rr d \setminus 0$ of a conical subset $\Omega \subseteq T^* \rr d \setminus 0$. Hence \begin{equation*} \overline \Omega_0 \cap p_{1,3} \left( WF^s(K) \cap p_{2,-4}^{-1} \, \overline \Omega_2 \right) = \emptyset, \end{equation*} or, equivalently, \begin{equation*} p_{1,3}^{-1} \, \overline{\Omega}_0 \cap WF^s(K) \cap p_{2,-4}^{-1} \, \overline \Omega_2 = \emptyset. \end{equation*} Due to assumption \eqref{WKjempty} we may strengthen this into \begin{equation*} p_{1,3}^{-1} \, (\overline{\Omega}_0 \cup \{ 0 \} ) \setminus 0 \cap WF^s(K) \cap p_{2,-4}^{-1} \, (\overline \Omega_2 \cup \{ 0 \} ) \setminus 0 = \emptyset. \end{equation*} Since $p_{1,3}^{-1} \, (\overline{\Omega}_0 \cup \{ 0 \} ) \setminus 0$ and $p_{2,-4}^{-1} \, (\overline \Omega_2 \cup \{ 0 \} ) \setminus 0$ are closed conic subsets of $\rr {4d} \setminus 0$, decreasing ${G(X|Y)}amma_1 \subseteq \rr {4d} \setminus 0$ if necessary, there exist open conic subsets ${G(X|Y)}amma_0, {G(X|Y)}amma_1, {G(X|Y)}amma_2 \subseteq \rr {4d} \setminus 0$ such that \begin{equation*} WF^s(K) \subseteq {G(X|Y)}amma_1, \qquad p_{1,3}^{-1} \, \overline{\Omega}_0 \subseteq {G(X|Y)}amma_0, \qquad p_{2,-4}^{-1} \, \overline{\Omega}_2 \subseteq {G(X|Y)}amma_2, \end{equation*} and \begin{equation}\langlebel{intersection1} {G(X|Y)}amma_0 \cap {G(X|Y)}amma_1 \cap {G(X|Y)}amma_2 = \emptyset. \end{equation} Let $\Sigma_0 \subseteq T^* \rr d \setminus 0$ be an open conic set such that $(t_0,\theta_0) \in \Sigma_0$ and $\overline{\Sigma_0 \cap S_{2d-1}} \subseteq \Omega_0$. Let $\varphi \in \Sigma_s(\rr d)$, $\| \varphi \|_{L^2} = 1$ and $\Phi = \varphi \otimes \varphi$. From Lemma {\rm Re} \, f{STFTkernelformula} we know that formula \eqref{STFTop1} holds. Therefore we have for any $A>0$ \begin{equation}\langlebel{estimand1} \begin{aligned} & e^{A |(t,\theta)|^{1/s}} |V_\varphi(\mathscr{K} u) (t, \theta)| \\ & \lesssim \int_{\rr {4d}} |V_\Phi K(x,y,\xi,-\eta)| \, e^{A |(t,\theta)|^{1/s}} \, | V_\varphi \varphi (t-x,\theta-\xi)| \, |V_{\overline \varphi} u(y,\eta)| \, dx \, dy \, d \xi \, d \eta. 
\end{aligned} \end{equation} We will show that this integral is bounded when $(t,\theta) \in \Sigma_0$ for any $A>0$ which proves that $(t_0,\theta_0) \mathbf Ntin WF^s (\mathscr{K} u)$. Consider first the right hand side integral over $(x,y,\xi,-\eta) \in \rr {4d} \setminus {G(X|Y)}amma_1$. We have for any $b>0$ \begin{equation}\langlebel{WFcompl} |V_\Phi K(x,y,\xi,-\eta)| \lesssim e^{-b |(x,y,\xi,\eta)|^{1/s}} , \quad (x,y,\xi,-\eta) \in \rr {4d} \setminus {G(X|Y)}amma_1, \end{equation} due to $WF^s(K) \subseteq {G(X|Y)}amma_1$ and ${G(X|Y)}amma_1 \subseteq \rr {4d} \setminus 0$ being open. By \eqref{STFTgrowth} we have \begin{equation}\langlebel{STFTu} |V_\varphi u (z)| \lesssim e^{M |z|^{1/s}}, \quad z \in \rr {2d}, \end{equation} for some $M \geqslant 0$, and by Proposition {\rm Re} \, f{seminormequivalence} we have \begin{equation}\langlebel{STFTphi} |V_\varphi \varphi (z)| \lesssim e^{-c |z|^{1/s}}, \quad z \in \rr {2d}, \end{equation} for any $c>0$. Thus \begin{equation}\langlebel{estimateA} \begin{aligned} & \int_{\rr {4d} \setminus {G(X|Y)}amma_1'} |V_\Phi K(x,y,\xi,-\eta)| \, e^{A |(t,\theta)|^{1/s}} \, | V_\varphi \varphi (t-x,\theta-\xi)| \, |V_{\overline \varphi} u(y,\eta)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{\rr {4d} \setminus {G(X|Y)}amma_1'} e^{-b |(x,y,\xi,\eta)|^{1/s} + 2 A |(x,\xi)|^{1/s} + M |(y,\eta)|^{1/s}} \, e^{2A |(t-x,\theta-\xi)|^{1/s}} \, | V_\varphi \varphi (t-x,\theta-\xi)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{\rr {4d}} e^{(2 A + M - b) |(x,y,\xi,\eta)|^{1/s}} \, dx \, dy \, d \xi \, d \eta < \infty \end{aligned} \end{equation} if $b > 0$ is chosen sufficiently large. The estimate holds for all $(t,\theta) \in \rr {2d}$. It remains to estimate the right hand side integral \eqref{estimand1} over $(x,y,\xi,-\eta) \in {G(X|Y)}amma_1$. By \eqref{inclusionG1} and \eqref{intersection1} we have ${G(X|Y)}amma_1 \subseteq G_1 \cup G_2$ where \begin{equation*} G_1 = \rr {4d} \setminus ({G(X|Y)}amma_{1,3} \cup {G(X|Y)}amma_{2,4} \cup {G(X|Y)}amma_0), \quad G_2 = \rr {4d} \setminus ({G(X|Y)}amma_{1,3} \cup {G(X|Y)}amma_{2,4} \cup {G(X|Y)}amma_2 ). \end{equation*} When $(x,y,\xi,-\eta) \in {G(X|Y)}amma_1$ we have $|(x,\xi)|\asymp |(y,\eta)|$. First we study $(x,y,\xi,-\eta) \in G_1$. Then $(x,y,\xi,-\eta) \mathbf Ntin {G(X|Y)}amma_0$ which implies $(x,\xi) \mathbf Ntin \Omega_0$. There exists $\delta>0$ such that \begin{equation*} |(x,\xi) - (t,\theta)| \geqslant \delta |(x,\xi)|, \quad (x,\xi) \mathbf Ntin \Omega_0, \quad (t,\theta) \in \Sigma_0. \end{equation*} Let $\varepsilon>0$. 
For $(t,\theta) \in \Sigma_0$ we obtain with the aid of \eqref{kernelSTFT} using $|(x,\xi)|\asymp |(y,\eta)|$, \eqref{STFTu} and \eqref{STFTphi}, where $B,M \geqslant 0$ are fixed and $c>0$ is arbitrary, with $B_1>0$ a new constant that depends on $B, M, \varepsilon, s$, \begin{equation}\langlebel{estimateB} \begin{aligned} & \int_{G_1'} |V_\Phi K(x,y,\xi,-\eta)| \, e^{A |(t,\theta)|^{1/s}} \, | V_\varphi \varphi (t-x,\theta-\xi)| \, |V_{\overline \varphi} u(y,\eta)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{G_1'} e^{B |(x,y,\xi,\eta)|^{1/s} + 2A |(x,\xi)|^{1/s} + M |(y,\eta)|^{1/s} } \\ & \qquad \qquad \qquad \qquad \qquad \times e^{2A |(t-x,\theta-\xi)|^{1/s}} \, | V_\varphi \varphi (t-x,\theta-\xi)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{G_1'} e^{-\varepsilon |(x,y,\xi,\eta)|^{1/s} + B_1 |(x,\xi)|^{1/s} - c |(t-x,\theta-\xi)|^{1/s} } \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{\rr {4d}} e^{-\varepsilon |(x,y,\xi,\eta)|^{1/s} + (B_1- c \delta^{1/s}) |(x,\xi)|^{1/s} } \, dx \, dy \, d \xi \, d \eta \\ & < \infty \end{aligned} \end{equation} provided $c \geqslant B_1 \delta^{-1/s}$. Finally we study $(x,y,\xi,-\eta) \in G_2$. Then $(x,y,\xi,-\eta) \mathbf Ntin {G(X|Y)}amma_2$ so we have $(y,\eta) \mathbf Ntin \Omega_2$. Hence $(y,\eta) \in G$ where $G \subseteq T^* \rr d$ is closed, conic and does not intersect $WF^s(u)$. We obtain with the aid of \eqref{kernelSTFT} for any $(t,\theta) \in T^* \rr d$, using $|(x,\xi)|\asymp |(y,\eta)|$ and \eqref{STFTphi}, for $B \geqslant 0$ fixed and $B_1>0$ a new constant that depends on $A, B, \varepsilon, s$, \begin{equation}\langlebel{estimateC} \begin{aligned} & \int_{G_2'} |V_\Phi K(x,y,\xi,-\eta)| \, e^{A |(t,\theta)|^{1/s}} \, | V_\varphi \varphi (t-x,\theta-\xi)| \, |V_{\overline \varphi} u(y,\eta)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{G_2'} e^{B |(x,y,\xi,\eta)|^{1/s} + 2A |(x,\xi)|^{1/s}} \, |V_{\overline \varphi} u(y,\eta)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \int_{G_2'} e^{-\varepsilon |(x,y,\xi,\eta)|^{1/s} + B_1 |(y,\eta)|^{1/s}} \, |V_{\overline \varphi} u(y,\eta)| \, dx \, dy \, d \xi \, d \eta \\ & \lesssim \sup_{w \in G} \,e^{B_1 |w|^{1/s}} \, |V_{\overline \varphi} u(w)| < \infty. \end{aligned} \end{equation} We can now combine \eqref{estimand1}, \eqref{estimateA}, ${G(X|Y)}amma_1 \subseteq G_1 \cup G_2$, \eqref{estimateB} and \eqref{estimateC} to conclude \begin{equation*} \sup_{(t,\theta) \in \Sigma_0} e^{A |(t,\theta)|^{1/s}} \, |V_\varphi (\mathscr{K} u) (t,\theta)| < \infty \end{equation*} for any $A>0$. Thus $(t_0,\theta_0) \mathbf Ntin WF^s (\mathscr{K} u)$. \end{proof} \section{The $s$-Gelfand--Shilov wave front set of oscillatory integrals}\langlebel{sec:oscint} \subsection{Oscillatory integrals}\langlebel{secoscint_one} We need to describe a class of oscillatory integrals with quadratic phase functions introduced by H\"ormander \cite{Hormander2}. This is useful due to the fact that the Schwartz kernel of the Schr\"odinger propagator $e^{-t q^w(x,D)}$ is an integral of this form, as we will describe in Section {\rm Re} \, f{sec:kernelschrod}. Our discussion on oscillatory integrals is brief. For a richer account we refer to \cite{Hormander2,Rodino2}. 
Let $p$ be a complex-valued quadratic form on $\rr {d + N}$, \begin{equation}\langlebel{pform} p(x,\theta) = \langle (x, \theta), P (x, \theta) \rangle, \quad x \in \rr d, \quad \theta \in \rr N, \end{equation} where $P \in \cc {(d+N) \times (d+N)}$ is the symmetric matrix \begin{equation}\langlebel{Pmatrix} P=\left( \begin{array}{ll} P_{xx} & P_{x \theta} \\ P_{\theta x} & P_{\theta \theta} \end{array} \right) \end{equation} where $P_{xx} \in \cc {d \times d}$, $P_{x \theta} \in \cc {d \times N}$ and $P_{\theta \theta} \in \cc {N \times N}$. Suppose $P$ satisfies the following two conditions. \begin{enumerate} \item ${\rm Im} \, P \geqslant 0$; \item the row vectors of the submatrix \begin{equation*} \left( \begin{array}{ll} P_{\theta x} & P_{\theta \theta} \end{array} \right) \in \cc {N \times (d+N)} \end{equation*} are linearly independent over $\mathbf C$. \end{enumerate} Under these circumstances the oscillatory integral \begin{equation}\langlebel{oscillint1} u(x) = \int_{\rr N} e^{i p(x,\theta)} \, d \theta, \quad x \in \rr d, \end{equation} can be given a unique meaning as an element in $\mathscr{S}'(\rr d)$, by means of a regularization procedure \cite{Hormander2,Rodino2}. Due to the embedding $\mathscr{S}'(\rr d) \subseteq \Sigma_s'(\rr d)$ ($s>1/2$), the oscillatory integral defines a unique element $u \in \Sigma_s'(\rr d)$. An oscillatory integral of the form \eqref{oscillint1} is, up to multiplication with an element in $\mathbf C \setminus 0$, bijectively associated with a Lagrangian subspace $\langlembdabda \subseteq T^* \cc d$, that is positive in the sense of \begin{equation*} i \sigma (\overline X, X ) \geqslant 0, \quad X \in \langlembdabda. \end{equation*} The positive Lagrangian associated with the oscillatory integral \eqref{oscillint1} is \begin{equation}\langlebel{lagrangian1} \langlembdabda = \{ (x, p_x'(x,\theta) ) \in T^* \cc d: \ p_\theta'(x,\theta) = 0, \ (x,\theta) \in \cc {d+N} \} \subseteq T^* \cc d. \end{equation} The integer $N$ in the integral \eqref{oscillint1} is not uniquely determined by $u$. In fact, it may be possible to decrease $N$ and obtain the same $u$ times a nonzero complex constant. This procedure may be iterated until the term that is quadratic in $\theta$ disappears. The form $p$ is then (cf. \cite[Propositions 5.6 and 5.7]{Hormander2}) \begin{equation}\langlebel{canonicp} p(x,\theta) = \rho(x) + \langle L \theta,x \rangle \end{equation} where $\rho$ is a quadratic form, ${\rm Im} \, \rho \geqslant 0$ and $L \in \rr {d \times N}$ is an injective matrix. The matrix $L$ is uniquely determined by the Lagrangian $\langlembdabda$ modulo invertible right factors, and similarly the values of $\rho$ on $\operatorname{Ker} L^t$ are uniquely determined. The oscillatory integral is (cf. \cite[Proposition 5.7]{Hormander2}) \begin{equation}\langlebel{oscillint2} u(x) = (2 \pi)^N \delta_0 (L^t x) e^{i \rho(x)}, \quad x \in \rr d, \end{equation} where $\delta_0 = \delta_0(\rr N)$. \subsection{The $s$-Gelfand--Shilov wave front set of an oscillatory integral} \begin{thm}\langlebel{WFsoscint} Let $u \in \mathscr{S}'(\rr d)$ be the oscillatory integral \eqref{oscillint1} with the associated positive Lagrangian $\langlembdabda$ given by \eqref{lagrangian1}. Then for $s > 1/2$ \begin{equation}\langlebel{WFincl} WF^s(u) \subseteq (\langlembdabda \cap T^* \rr d) \setminus 0. \end{equation} \end{thm} \begin{proof} First we assume $N \geqslant 1$ in \eqref{oscillint1} and in the end we will take care of the case $N=0$. 
As discussed above we may assume that $p$ is of the form \eqref{canonicp} where $\rho$ is a quadratic form, ${\rm Im} \, \rho \geqslant 0$ and $L \in \rr {d \times N}$ is injective, and $u$ is given by \eqref{oscillint2}. The positive Lagrangian \eqref{lagrangian1} associated to $u$ is hence \begin{equation}\langlebel{lagrangian3} \langlembdabda = \{ (x,\rho'(x) + L \theta): \, (x,\theta) \in \cc {d+N}, \, L^t x = 0 \} \subseteq T^* \cc d. \end{equation} By \cite[Proposition~3.4]{Rodino2} and the uniqueness part of \cite[Proposition 5.7]{Hormander2} we may assume \begin{equation*} {\rm Re} \, (\operatorname{Ran} \rho') \perp \operatorname{Ran} L, \quad {\rm Im} \, (\operatorname{Ran} \rho') \perp \operatorname{Ran} L. \end{equation*} If $x \in \rr d$, $\theta \in \cc N$ and $0 = {\rm Im} \, (\rho'(x) + L \theta) = {\rm Im} \, (\rho')(x) + L {\rm Im} \, \theta$, we may thus conclude ${\rm Im} \, (\rho')(x)=0$ and $L {\rm Im} \, \theta=0$, so the injectivity of $L$ forces $\theta \in \rr N$. Hence \begin{equation}\langlebel{lambdareal} \langlembdabda \cap T^* \rr d = \{ (x,{\rm Re} \, (\rho')(x) + L \theta): \, (x,\theta) \in \rr {d+N}, \, L^t x = 0 \}. \end{equation} According to Proposition {\rm Re} \, f{linsurj} and \eqref{WFsdirac} \begin{align*} WF^s( \delta_0 (L^t \, \cdot \, t) ) & = \{ (x,L \xi) \in T^* \rr d \setminus 0: \, L^t x=0, \, \xi \in \rr N \setminus 0 \} \cup \operatorname{Ker} L^t \setminus 0 \times \{ 0 \} \\ & = (\operatorname{Ker} L^t \times L \rr N) \setminus 0 \subseteq T^* \rr d \setminus 0. \end{align*} We write $\rho(x) = \rho_r(x) + i \rho_i(x)$ and \begin{equation}\langlebel{rhodef} \rho_r (x) = \langle R_r x,x \rangle, \quad \rho_i (x) = \langle R_i x, x \rangle \end{equation} with $R_r, R_i \in \rr {d \times d}$ symmetric and $R_i \geqslant 0$. The function $e^{ i \rho_r(x)}$, considered as a multiplication operator, is the metaplectic operator \begin{equation*} e^{ i \rho_r(x)} = \mu (\chi) \end{equation*} where \begin{equation}\langlebel{chidef} \chi = \left( \begin{array}{ll} I & 0 \\ 2 R_r & I \end{array} \right) \in \operatorname{Sp}(d,\mathbf R). \end{equation} The function $g(x) = e^{- \rho_i(x) }$ satisfies the estimates \eqref{symbolestimate0} for all $h>0$. In fact, since $R_i \geqslant 0$ it suffices to verify the estimates for $g(Ux) = \Pi_{j=1}^n e^{-\mu_j x_j^2}$ where $U \in \rr {d \times d}$ is an orthogonal matrix and $\mu_j>0$ for $1 \leqslant j \leqslant n \leqslant d$. The function $x \rightarrow g(Ux)$ clearly satisfies \eqref{symbolestimate0} for all $h>0$ since it is a tensor product of a Gaussian on $\rr n$, that belongs to $\Sigma_s(\rr n)$, and the function one on $\rr {d-n}$. If we consider $e^{- \rho_i(x) }$ as a function of $(x,\xi) \in T^* \rr d$, constant with respect to the $\xi$ variable, then the corresponding Weyl pseudodifferential operator is multiplication with $e^{- \rho_i(x) }$. Proposition {\rm Re} \, f{microlocalWFs} gives \begin{equation}\langlebel{propagation0} WF^s(e^{- \rho_i } u) \subseteq WF^s(u), \quad u \in \Sigma_s'(\rr d). 
\end{equation} Piecing these arguments together, using Corollary {\rm Re} \, f{symplecticWFs} and \eqref{lambdareal}, gives \begin{equation}\langlebel{WFincl0} \begin{aligned} WF^s( u ) & = WF^s ( e^{- \rho_i } e^{i \rho_r} \delta_0 (L^t \, \cdot \, t) ) \\ & \subseteq WF^s ( e^{i \rho_r} \delta_0 (L^t \, \cdot \, t) ) \\ & = \chi WF^s ( \delta_0 (L^t \, \cdot \, t) ) \\ & = \{ (x, 2 R_r x + L \theta): \, (x,\theta) \in \rr {d+N}, \, L^t x = 0 \} \setminus 0 \\ & = \{ (x, \rho_r'(x) + L \theta): \, (x,\theta) \in \rr {d+N}, \, L^t x = 0 \} \setminus 0 \\ & = (\langlembdabda \cap T^* \rr d) \setminus 0. \end{aligned} \end{equation} This ends the proof when $N \geqslant 1$. Finally, if $N=0$ then the (degenerate) oscillatory integral \eqref{oscillint1} is \begin{equation*} u(x) = e^{ i \rho(x)} \end{equation*} where $\rho = \rho_r + i \rho_i$ is given by \eqref{rhodef} with $R_r, R_i \in \rr {d \times d}$ symmetric and $R_i \geqslant 0$. By \eqref{lagrangian1} the corresponding positive Lagrangian is \begin{equation}\langlebel{lambda0} \langlembdabda = \{ (x, 2 (R_r + i R_i) x): \, x \in \cc d \} \subseteq T^* \cc d \end{equation} which gives \begin{equation*} \langlembdabda \cap T^* \rr d = \{ (x, 2 R_r x): \, x \in \rr d \}. \end{equation*} Since $WF^s(1)= (\rr d \setminus 0) \times \{0\}$ (cf. \eqref{WFsone}) we obtain, again using \eqref{propagation0} and recycling the argument above, \begin{equation*} \begin{aligned} WF^s( u ) & = WF^s ( e^{- \rho_i } e^{i \rho_r} ) \\ & \subseteq WF^s ( e^{i \rho_r} ) \\ & = \chi WF^s ( 1 ) \\ & = \{ (x, 2 R_r x): \, x \in \rr d \setminus 0 \} \\ & = (\langlembdabda \cap T^* \rr d) \setminus 0. \end{aligned} \end{equation*} \end{proof} \begin{rem} From $WF(u) \subseteq WF^s(u)$ for $u \in \mathscr{S}'(\rr d)$ and $s>1/2$ it follows that Theorem {\rm Re} \, f{WFsoscint} is a sharpening of \cite[Theorem~3.6]{Rodino2}. \end{rem} \section{The Schwartz kernel of the Schr\"odinger propagator}\langlebel{sec:kernelschrod} Let $q$ be a quadratic form on $T^*\rr d$ defined by a symmetric matrix $Q \in \cc {2 d \times 2 d}$ with Hamilton map $F=\mathcal{J} Q$ and ${\rm Re} \, Q \geqslant 0$. According to \cite[Theorem 5.12]{Hormander2} the Schr\"odinger propagator is \begin{equation*} e^{-t q^w(x,D)} = \mathscr{K}_{e^{-2 i t F}} \end{equation*} where $\mathscr{K}_{e^{-2 i t F}}: \mathscr{S}(\rr d) \rightarrow \mathscr{S}'(\rr d) $ is the linear continuous operator with Schwartz kernel \begin{equation}\langlebel{schwartzkernel1} K_T (x,y) = (2 \pi)^{-(d+N)/2} \sqrt{\det \left( \begin{array}{ll} p_{\theta \theta}''/i & p_{\theta y}'' \\ p_{x \theta}'' & i p_{x y}'' \end{array} \right) } \int_{\rr N} e^{i p(x,y,\theta)} d\theta \in \mathscr{S}'(\rr {2d}) \end{equation} with $T = e^{-2 i t F}$. This kernel is an oscillatory with respect to a quadratic form $p$ on $\rr {2 d + N}$ as discussed in Section {\rm Re} \, f{secoscint_one}. The positive Lagrangian associated to the Schwartz kernel $K_{e^{-2 i t F}}$ is \begin{equation}\langlebel{twistgraph} \langlembdabda = \{ (x, y, \xi, -\eta) \in T^* \cc {2d}: \, (x,\xi) = e^{-2 i t F} (y,\eta) \}. \end{equation} By \cite[Lemma~4.2]{Rodino2} the Lagrangian $\langlembdabda = \langlembdabda_{e^{-2 i t F}}$ given by \eqref{twistgraph} is a positive twisted graph Lagrangian defined by the matrix $e^{-2 i t F} \in \operatorname{Sp}(d,\mathbf C)$. When a twisted graph Lagrangian defined by a matrix $T \in \operatorname{Sp}(d,\mathbf C)$ is positive, also the matrix is called positive \cite{Hormander2}. 
This means \begin{equation*} i \left( \sigma(\overline{TX}, TX) - \sigma(\overline{X},X) \right) \geqslant 0, \quad X \in T^* \cc d. \end{equation*} Since $K_{e^{-2 i t F}} \in \mathscr{S}'(\rr {2d})$ the propagator is a continuous operator $e^{-t q^w(x,D)}: \mathscr{S} (\rr d) \rightarrow \mathscr{S}' (\rr d)$. We have in fact continuity $e^{-t q^w(x,D)}: \mathscr{S} (\rr d) \rightarrow \mathscr{S} (\rr d)$. This follows from \cite[Proposition~5.8 and Theorem~5.12]{Hormander2} which says that $\mathscr{K}_T: \mathscr{S}(\rr d) \rightarrow \mathscr{S}(\rr d)$ is continuous for any positive matrix $T \in \operatorname{Sp}(d,\mathbf C)$. The next result shows that $\mathscr{K}_T: \Sigma_s (\rr d) \rightarrow \Sigma_s (\rr d)$ is continuous and $\mathscr{K}_T$ extends uniquely to a continuous operator $\mathscr{K}_T: \Sigma_s' (\rr d) \rightarrow \Sigma_s' (\rr d)$. \begin{prop}\langlebel{KTcontGF} Suppose $T \in \operatorname{Sp} (d,\mathbf C)$ is positive and let $\mathscr{K}_T: \mathscr{S}(\rr d) \rightarrow \mathscr{S}'(\rr d)$ be the continuous linear operator having Schwartz kernel $K_T \in \mathscr{S}'(\rr {2d})$ defined by \eqref{schwartzkernel1}. For $s > 1/2$ the operator $\mathscr{K}_T$ is continuous on $\Sigma_s(\rr d)$ and $\mathscr{K}_T$ extends uniquely to a continuous operator on $\Sigma_s'(\rr d)$. \end{prop} \begin{proof} Due to the above mentioned continuity of $\mathscr{K}_T$ on $\mathscr{S}(\rr d)$, we have for some integer $L \geqslant 0$ \begin{equation*} \sup_{x \in \rr d} |\mathscr{K}_T f(x)| \lesssim \sum_{|\alpha+\beta| \leqslant L} \sup_{x \in \rr d} |x^\alpha D^\beta f(x)|. \end{equation*} Using the seminorms \eqref{gfseminorm} it can be seen readily that the operators $f \rightarrow D^\beta f$ and $f \rightarrow x^\alpha f$ are continuous operators on $\Sigma_s(\rr d)$. By Proposition {\rm Re} \, f{seminormequivalence} we therefore have for any $A>0$ \begin{equation*} e^{A |x|^{1/s}} | x^\alpha D^\beta f (x)| \leqslant \| x^\alpha D^\beta f \|_A' \lesssim \| f \|_B' + \| \widehat f \|_B', \quad |\alpha+\beta| \leqslant L, \quad x \in \rr d, \end{equation*} for some $B>0$, where we use the seminorm \eqref{seminorm1}. This gives \begin{equation}\langlebel{firstestimate} \| \mathscr{K}_T f \|_A' = \sup_{x \in \rr d} e^{A |x|^{1/s}} |\mathscr{K}_T f(x)| \lesssim \| f \|_B' + \| \widehat f \|_B' \end{equation} which gives a desired continuity estimate for one of the two families of seminorms $\{\| f \|_A', \ \| \widehat f \|_B', \ A,B>0 \}$. To prove that $\mathscr{K}_T$ is continuous on $\Sigma_s(\rr d)$ it remains to estimate $\| \mathscr{F} (\mathscr{K}_T f) \|_A'$ for any $A>0$. The Fourier transform has Schwartz kernel $K(x,y)=e^{-i \langle x, y \rangle}$ which is a degenerate oscillatory integral with $N=0$, cf. \eqref{oscillint1}. The corresponding Lagrangian is by \eqref{lagrangian1} \begin{align*} \langlembdabda & = \{ (x,y, - y, -x) \in T^* \cc {2d}: (x,y) \in T^* \cc d \} \\ & = \{ (x,y, \xi, -\eta) \in T^* \cc {2d}: (x,\xi) = \mathcal{J}(y,\eta) \}. \end{align*} Due to the uniqueness, modulo multiplication with a nonzero complex number, of the correspondence between oscillatory integrals and Lagrangians, it follows that $\mathscr{F} = (2 \pi)^{d/2} \mu(\mathcal{J}) = c \mathscr{K}_{\mathcal{J}}$ for some $c \in \mathbf C \setminus 0$. We may hence write for some $c \in \mathbf C \setminus 0$, using the semigroup property of $T \rightarrow \mathscr{K}_T$ modulo sign, when $T \in \operatorname{Sp}(d,\mathbf C)$ is positive (cf. 
\cite[Proposition~5.9]{Hormander2}) \begin{align*} \mathscr{F} (\mathscr{K}_T f) & = \mathscr{F} \mathscr{K}_T \mathscr{F} \mathscr{F}^{-1} f = c^2 \mathscr{K}_\mathcal{J} \mathscr{K}_T \mathscr{K}_\mathcal{J} \mathscr{F}^{-1} f \\ & = \pm c^2 \mathscr{K}_{\mathcal{J} T \mathcal{J}} \mathscr{F}^{-1} f. \end{align*} Since $\mathcal{J} T \mathcal{J} \in \operatorname{Sp}(d,\mathbf C)$ is positive, and since the estimate \eqref{firstestimate} holds for any positive $T \in \operatorname{Sp}(d,\mathbf C)$ for some $B>0$, we obtain for any $A>0$ \begin{equation}\langlebel{secondestimate} \| \mathscr{F}(\mathscr{K}_T f) \|_A' \lesssim \| f \|_B' + \| \widehat f \|_B' \end{equation} for some $B>0$. Combining \eqref{firstestimate} and \eqref{secondestimate} and referring to Proposition {\rm Re} \, f{seminormequivalence}, we have proved that $\mathscr{K}_T$ is continuous on $\Sigma_s(\rr d)$. Finally we show that $\mathscr{K}_T$ extends uniquely to a continuous operator on $\Sigma_s'(\rr d)$. The formal adjoint of $\mathscr{K}_T$ is $\mathscr{K}_{\overline{T}^{-1}}$ which is indexed by the inverse conjugate matrix $\overline{T}^{-1} \in \operatorname{Sp}(d,\mathbf C)$ \cite{Hormander2}. The positivity of $\overline{T}^{-1}$ is an immediate consequence of the assumed positivity of $T$. Thus $\mathscr{K}_T$ may be defined on $\Sigma_s' (\rr d)$ by \begin{equation*} ( \mathscr{K}_T u, \varphi ) = ( u, \mathscr{K}_{\overline{T}^{-1}} \varphi ), \quad u \in \Sigma_s' (\rr d), \quad \varphi \in \Sigma_s (\rr d), \quad s>1/2, \end{equation*} which gives a uniquely defined extension of $\mathscr{K}_T$ as a continuous operator on $\Sigma_s' (\rr d)$. \end{proof} \begin{cor}\langlebel{propagatorcontGF} The Schr\"odinger propagator $e^{-t q^w(x,D)}$ has Schwartz kernel $K_{e^{-2 i t F}} \in \mathscr{S}'(\rr {2d})$. For $t \geqslant 0$ and $s>1/2$ it is a continuous operator on $\Sigma_s (\rr d)$, and it extends uniquely to a continuous operator on $\Sigma_s' (\rr d)$. \end{cor} \section{Propagation of the $s$-Gelfand--Shilov wave front set for Schr\"odinger equations}\langlebel{sec:propsing} Since the Schwartz kernel of the Schr\"odinger propagator $e^{-t q^w(x,D)}$ is an oscillatory integral corresponding to the positive Lagrangian \eqref{twistgraph}, an appeal to Theorem {\rm Re} \, f{WFsoscint} gives the following result. For $s>1/2$ the $s$-Gelfand--Shilov wave front set of the Schwartz kernel $K_{e^{-2 i t F}}$ of the propagator $e^{-t q^w(x,D)}$ for $t \geqslant 0$ obeys the inclusion \begin{equation}\langlebel{WFkernel1} \begin{aligned} & WF^s( K_{e^{-2 i t F}} ) \\ & \subseteq \{ (x, y, \xi, -\eta) \in T^* \rr {2d} \setminus 0 : \, (x,\xi) = e^{-2 i t F} (y,\eta), \, {\rm Im} \, e^{-2 i t F} (y,\eta) = 0 \}. \end{aligned} \end{equation} Combining this with Corollary {\rm Re} \, f{propagatorcontGF} and Theorem {\rm Re} \, f{WFphaseincl} now gives a result on propagation of singularities. The assumptions of the latter theorem are satisfied for $\mathscr{K} = \mathscr{K}_{e^{-2 i t F}} = e^{-t q^w(x,D)}$, since $WF_1^s ( K_{e^{-2 i t F}} ) = WF_2^s ( K_{e^{-2 i t F}} )= \emptyset$ follows from \eqref{WFkernel1} and the invertibility of $e^{-2 i t F} \in \cc {2d \times 2d}$, cf. \eqref{WFproj}. \begin{cor}\langlebel{propagationsing1} Suppose $q$ is a quadratic form on $T^*\rr d$ defined by a symmetric matrix $Q \in \cc {2d \times 2d}$, ${\rm Re} \, Q \geqslant 0$, $F=\mathcal{J} Q$, and $s > 1/2$. 
Then for $u \in \Sigma_s'(\rr d)$ \begin{align*} WF^s (e^{-t q^w(x,D)}u) & \subseteq e^{-2 i t F} \left( WF^s (u) \cap \operatorname{Ker} ({\rm Im} \, e^{-2 i t F} ) \right), \quad t \geqslant 0. \end{align*} \end{cor} As in the proof of \cite[Theorem 5.2]{Rodino2}, the latter inclusion can be sharpened using the semigroup modulo sign property of the propagator \cite{Hormander2} \begin{equation*} e^{-(t_1+t_2)q^w(x,D)}=\pm e^{-t_1q^w(x,D)}e^{-t_2q^w(x,D)}, \quad t_1,t_2 \geqslant 0. \end{equation*} In fact, using this property and some elementary arguments one obtains the inclusions \begin{align*} WF^s (e^{-t q^w(x,D)}u) & \subseteq \left( e^{2 t {\rm Im} \, F} \left( WF^s (u) \cap S \right) \right) \cap S \\ & \subseteq e^{-2 i t F} \left( WF^s (u) \cap \operatorname{Ker} ({\rm Im} \, e^{-2 i t F} ) \right) \end{align*} where the singular space \begin{equation*} S=\mathcal{B}ig(\bigcap_{j=0}^{2d-1} \operatorname{Ker}\big[{\rm Re} \, F({\rm Im} \, F)^j \big]\mathcal{B}ig) \cap T^*\rr d \subseteq T^*\rr d \end{equation*} of the quadratic form $q$ plays a crucial role. \begin{cor}\langlebel{propagationsing2} Suppose $q$ is a quadratic form on $T^*\rr d$ defined by a symmetric matrix $Q \in \cc {2d \times 2d}$, ${\rm Re} \, Q \geqslant 0$, $F=\mathcal{J} Q$, and $s > 1/2$. Then for $u \in \Sigma_s'(\rr d)$ \begin{align*} WF^s (e^{-t q^w(x,D)}u) & \subseteq \left( e^{2 t {\rm Im} \, F} \left( WF^s (u) \cap S \right) \right) \cap S, \quad t > 0. \end{align*} \end{cor} \end{document}
\begin{document} \newcommand{\half}{\mbox{$\textstyle \frac{1}{2}$}} \newcommand{\quat}{\mbox{$\textstyle \frac{1}{4}$}} \newcommand{\octa}{\mbox{$\textstyle \frac{1}{8}$}} \newcommand{{\rm d}}{{\rm d}} \newcommand{{\rm i}}{{\rm i}} \newcommand{{\rm e}}{{\rm e}} \title{On quantum microcanonical equilibrium} \author{Dorje~C.~Brody${}^1$, Daniel~W.~Hook${}^2$, and Lane~P.~Hughston${}^3$} \address{${}^1$Department of Mathematics, Imperial College, London SW7 2BZ, UK} \address{${}^2$Blackett Laboratory, Imperial College, London SW7 2BZ, UK} \address{${}^3$Department of Mathematics, King's College London, The Strand, London WC2R 2LS, UK} \begin{abstract} A quantum microcanonical postulate is proposed as a basis for the equilibrium properties of small quantum systems. Expressions for the corresponding density of states are derived, and are used to establish the existence of phase transitions for finite quantum systems. A grand microcanonical ensemble is introduced, which can be used to obtain new rigorous results in quantum statistical mechanics. \end{abstract} \section{Introduction} \label{sec:1} The purpose of this paper is to examine properties of quantum systems in thermal equilibrium. Questions that arise in this context, for example, are: ``What is the state of a system in equilibrium?'' or ``What is the temperature of an isolated system in equilibrium?'' In the case of a classical system immersed in a heat bath, the equilibrium distribution takes the Gibbs form $\exp(-\beta H)/Z(\beta)$, where $\beta=1/k_BT$ is the inverse temperature of the bath. What about a quantum system? Is the equilibrium state given by the Gibbs density matrix $\exp(-\beta {\hat H})/Z(\beta)$? If so, how does one verify that the parameter $\beta$ appearing in the density matrix is the inverse temperature of the bath? To investigate questions of this kind it is useful to consider first the classical situation. In the classical case, we take the system and the bath as a whole and regard this as a single isolated system. The Hamiltonian (symplectic) structure of classical phase space $\Gamma$ then allows us to define the density of states $\Omega(E)=\int_\Gamma\delta(H(x)-E){\rm d} V$ as the weighted volume of the phase space occupied by states with energy $E$. In equilibrium the state of the system maximises entropy and thus is given by a uniform distribution over the energy surface; this can be derived if the Hamiltonian evolution exhibits ergodicity. The entropy of the equilibrium state is thus given by $S(E)=k_B\ln\Omega(E)$, and the temperature is defined by the thermodynamic relation $T{\rm d} S={\rm d} E$. These are the necessary ingredients for the consideration of the equilibrium properties of a small subsystem. In particular, under a set of reasonable assumptions, it is possible to deduce, by use of the law of large numbers, that the equilibrium properties of a small subsystem are described by the Gibbs state. A complete derivation of these results is outlined in the seminal work of Khinchin~\cite{khinchin1}. Although the derivation of the equilibrium state is surprisingly complicated, once the relevant assumptions are specified, there are no ambiguities in the matter, and familiar results associated with the canonical ensemble can be obtained rigorously. The situation is markedly different in the case of a quantum system. First, in the usual Hilbert space formulation of quantum mechanics it is not clear how one can exploit the Hamiltonian structure. 
This leads to a difficulty in defining the temperature of a closed system. Second, since no rigorous derivation of the temperature exists (at least for finite quantum systems), it is not possible to verify whether the parameter $\beta$ appearing in the Gibbs density matrix agrees with the inverse temperature of the bath. In the literature on quantum statistics it is often postulated that the microcanonical density matrix of a quantum system with eigenenergy $E_i$ is given by the projection operator onto the Hilbert subspace spanned by states with that energy, normalised by the dimensionality $n_{E_i}$ of that subspace. The entropy is then defined by the expression $S=k_B\ln n_{E_i}$. A rigorous derivation of this density matrix is given by Khinchin~\cite{khinchin2}; however, the assumptions required to obtain the result go beyond those required for the classical case. In particular, it is necessary to forbid all superpositions of states with different energy. The exclusion of general superpositions, however, contradicts the superposition principle of quantum mechanics. This incompatibility between quantum mechanics and quantum statistical mechanics is an issue that has troubled many authors. For example, Schr\"odinger remarked in this connection that ``. . . this assumption is irreconcilable with the very foundations of quantum mechanics'', and that ``. . . to adopt this view is to think along severely `classical' lines'' \cite{schrodinger}. Confronted with this apparent contradiction, Schr\"odinger was nonetheless able to offer an argument to show, in effect, that in thermodynamic limit (where the number of particles in the system approaches infinity) the assumption that general superpositions are forbidden is justified \cite{schrodinger}. There is another important shortcoming in the familiar derivation of quantum statistical mechanics, namely, that the entropy is a discontinuous function of the energy. As a consequence, the temperature of a finite isolated system is undefined. This issue is addressed by Griffiths~\cite{griffiths}, who demonstrated the existence of a thermodynamic limit in which thermodynamic functions are well defined. We thus see that to make sense of the conventional approach to quantum statistics, a ``macroscopic'' limit is required. In this limit, however, we expect quantum systems to behave semiclassically so that superpositions, in particular, are excluded. For finite quantum systems, these issues remain unresolved. While the notion of a thermodynamic limit was justified both theoretically and experimentally some forty years ago, there have been experiments carried out on quantum systems over the past decade that involve small numbers of particles (see Gross~\cite{gross} and references cited therein). In particular, phase transitions have been observed in small systems---for example, the spherically symmetric cluster of $139$ sodium atoms exhibits a solid-to-liquid phase transition at about $267$~K~\cite{schmidt}. Such experiments demonstrate the breakdown of the conventional approach in which phase transitions are predicted only in thermodynamic limits. To obtain an equilibrium distribution that is well defined for finite systems, and to address the issue of the observed finite-size phase transitions, we have recently introduced an alternative formulation to quantum microcanonical equilibrium \cite{bhh}. 
The idea is to follow the derivation of the traditional result, as outlined in Khinchin~\cite{khinchin2}, as closely as possible, but to relax just one of the assumptions; namely, for a fixed energy $E$, we allow the system to be in a superposition of energy eigenstates with distinct eigenvalues. \section{Thermodynamic equilibrium} \label{sec:2} The idea of the new microcanonical equilibrium can be described heuristically as follows. We consider a gas consisting of a large number $N$ of weakly-interacting identical quantum molecules. As in the conventional approach, the intermolecular interactions are assumed strong enough to allow the gas to thermalise but weak enough so that, to a good approximation, the total system energy can be written as $\sum_{i=1}^N {\hat H}_i\approx{\hat H}_{\rm total}$, where $\{{\hat H}_i\}_{i=1,2,\ldots,N}$ are the Hamiltonians of the individual constituents. If the composite system is in isolation, then the total energy is a fixed constant: $\sum_{i=1}^N \langle{\hat H}_i \rangle=E_{\rm total}$. Now consider the result of a hypothetical measurement of the energy of one of the constituents. In equilibrium, the state of each constituent should be such that the average outcome of an energy measurement should be the same; that is, $\langle{\hat H}_i\rangle=E$, where $E=N^{-1}E_{\rm total}$. In other words, the equilibrium state of each constituent must lie on the energy surface ${\mathcal E}_E=\{|\psi\rangle\large|~\langle\psi|{\hat H}_i|\psi\rangle=E\}$ in the pure-state manifold for that constituent. Since $N$ is large, this will ensure that the uncertainty in the total energy of the composite system, as a fraction of the expectation of the total energy, is vanishingly small. It is convenient to describe the distribution of the various constituent pure states, on their respective energy surfaces, as if we were considering a probability measure on the energy surface ${\mathcal E}_E$ of a single constituent. In reality, we have a large number of approximately independent constituents; but owing to the fact that the respective state spaces are isomorphic we can represent the behaviour of the aggregate system with the specification of a probability distribution on the energy surface of a single ``representative'' constituent. In thermal equilibrium the resulting distribution should be uniform over the energy surface ${\mathcal E}_E$ since it must maximise the entropy. Therefore, the density of states is given by \begin{eqnarray} \Omega(E) = \int_\Gamma \delta(H(\psi)-E){\rm d} V_\Gamma. \end{eqnarray} Here, $\Gamma$ denotes the pure state manifold and ${\rm d} V_\Gamma$ is the associated Fubini-Study volume element of $\Gamma$. Once $\Omega(E)$ is specified, the entropy is given by $S(E)=k_B\ln \Omega(E)$. It follows that the temperature and the specific heat can be deduced from thermodynamic relations $T{\rm d} S={\rm d} E$ and $C(T)={\rm d} E/{\rm d} T$. A short calculation shows that \begin{eqnarray} k_BT=\frac{\Omega(E)}{\Omega'(E)}, \quad {\rm and}\quad C(T) = \frac{k_B(\Omega')^2}{(\Omega')^2-\Omega\Omega''}. \label{eq:2} \end{eqnarray} The advantage of the present formulation over the traditional approach is that the entropy is a continuous function of the energy. As a consequence, thermodynamic functions such as those in ({\rm e}f{eq:2}) are well defined for finite quantum systems. However, to justify the term ``temperature'' for the ratio $\Omega/\Omega'$ we must show its properties are consistent with the requirements of thermodynamic equilibrium. 
For this purpose, consider two independent systems, each in equilibrium, with state densities $[\Omega_1(E_1)]^{N_1}$ and $[\Omega_2(E_2)]^{N_2}$. We let them interact for a period of time, during which energy $\epsilon$ is exchanged. We then separate them and let them relax again to equilibrium. Because of the interaction the state densities of the systems are now $[\Omega_1 (E_1 + \epsilon/N_1)]^{N_1}$ and $[\Omega_2(E_2- \epsilon/ N_2)]^{N_2}$. The value of $\epsilon$ is determined so that the total entropy $S(E)=k_B\ln [\Omega_1(E_1 + \epsilon/N_1)]^{N_1} [\Omega_2(E_2- \epsilon/N_2)]^{N_2}$ is maximised. This condition is satisfied if and only if $\epsilon$ is such that the temperatures of the two systems defined according to ({\rm e}f{eq:2}) are equal. It follows that our definitions are thermodynamically consistent. \section{Expressions for the density of states} \label{sec:3} Let us now try to obtain a direct representation for the density of states $\Omega(E)$ in terms of the energy eigenvalues. We consider first the two-level system with energy eigenvalues $E_1,E_2$. The Fubini-Study volume element for the pure state manifold is ${\rm d} V_\Gamma= \frac{1}{4}\sin\theta{\rm d}\theta{\rm d}\phi$, where $0\leq\theta\leq\pi$ and $0\leq\phi<2\pi$. Since the energy expectation in a generic state $|\psi\rangle = \cos\half\,\theta|E_2\rangle+ \sin\half\, \theta\,{\rm e}^{{\rm i}\phi}|E_2\rangle$ is $E_2\cos^2\half\, \theta+E_1\sin^2\half\,\theta=\half(E_2-E_1)(1+\cos\theta)+E_1$, the density of states is \begin{eqnarray} \Omega(E)= \frac{1}{8\pi} \int_{-\infty}^\infty {\rm d} \lambda \int_0^{2\pi}{\rm d} \phi \int_0^{\pi}{\rm d}\theta \,{\rm e}^{-{\rm i} \lambda\left({\bar E}(1+\cos\theta)+E_1-E{\rm i}ght)} \sin\theta. \label{eq:4} \end{eqnarray} Here, we have made use of the integral representation for the delta-function, and we have also defined ${\bar E}=(E_2-E_1)/2$. By use of the relation $\int_{-\infty}^\infty{\rm d}\lambda\, {\rm e}^{-{\rm i} b\lambda} \lambda^{-1} \sin(a\lambda)=\pi$ for $a>b$ and $=0$ for $a<b$ we thus deduce that $\Omega(E)=\pi/(E_2-E_1)$ for $E_1\leq E\leq E_2$, and $\Omega(E)=0$ otherwise. An analogous calculation can be performed for a three level system. If we let $E_1,E_2,E_3$ denote the eigenvalues of the Hamiltonian, then the energy constraint for a generic state $|\psi\rangle = \sin\half\,\theta\cos\half\,\varphi|E_3\rangle+ \sin\half\, \theta\sin\half\,\varphi\,{\rm e}^{{\rm i}\xi}|E_2\rangle + \cos\half\,\theta\,{\rm e}^{{\rm i}\eta}|E_1\rangle$ is given by $E_3\sin^2\half\,\theta\cos^2\half\,\varphi + E_2 \sin^2\half\, \theta\sin^2\half\,\varphi + E_1\cos^2\half\,\theta=E$. Since the Fubini-Study volume element in this case is ${\rm d} V_\Gamma = \frac{1}{32} \sin\theta(1-\cos\theta)\sin\varphi {\rm d}\theta {\rm d} \varphi {\rm d}\xi{\rm d}\eta$, we carry out the relevant integration and obtain \begin{eqnarray} \Omega(E)=\frac{\pi^2(E-E_1)}{(E_3-E_1)(E_2-E_1)} \quad {\rm or} \quad \Omega(E) = -\frac{\pi^2(E-E_3)}{(E_3-E_1)(E_3-E_2)}, \label{eq:6} \end{eqnarray} depending on $E_1\leq E\leq E_2$ or $E_2< E\leq E_3$. By pursuit of this line of argument we deduce more generally that the density of states $\Omega(E)$ is given by a piecewise polynomial function of energy $E$. In particular, if the energy spectrum is nondegenerate, then we have the representation \begin{eqnarray} \Omega(E)=\frac{(-\pi)^n}{(n-1)!} \sum_{k=1}^{n+1} (E_k-E)^{n-1} \prod_{l\neq k}^{n+1} \frac{{\mathbf 1}_{\{E_k> E\}}}{E_l-E_k}. 
\label{eq:7} \end{eqnarray} Here ${\mathbf 1}_{\{A\}}$ denotes the indicator function (${\mathbf 1}_{\{A\}}=1$ if $A$ is true, and $0$ otherwise). To offer an intuition for the behaviour of the density of states, examples of $\Omega(E)$ are shown in Figure~{\rm e}f{fig:1}. \begin{figure} \caption{\label{fig:1} \label{fig:1} \end{figure} Once the density of states $\Omega(E)$ is obtained for the microcanonical equilibrium, thermal expectation values of physical observables in the corresponding canonical distribution can be computed by use of the canonical partition function $Z(\beta)$, which is the Laplace transform of $\Omega(E)$. When energy eigenvalues are nondegenerate, we have \begin{eqnarray} Z(\beta) = \sum_{k=1}^{n+1}{\rm e}^{-\beta E_{k}} \prod_{l=1,\neq k}^{n+1}\frac{\pi}{\beta(E_{l}-E_{k})} . \label{eq:9} \end{eqnarray} A line of argument in Khinchin~\cite{khinchin1} for classical systems can then be applied here in the quantum context to prove that the parameter $\beta$ appearing in ({\rm e}f{eq:9}) for the canonical partition function agrees with the microcanonical definition of temperature in ({\rm e}f{eq:2}). \section{Quantum phase transitions} \label{sec:4} An interesting consequence of the microcanonical framework, whether classical or quantum, is that the density of states in general need not be an analytic function for finite systems. In contrast, the partition function in the canonical counterpart is necessarily analytic. In other words, while it is necessary in the canonical framework to take thermodynamic limit to describe phase transitions, in the microcanonical formalism this is not the case. Therefore, an approach based on microcanonical equilibrium might provide an adequate description of the phase transitions for small systems observed in the laboratory. We note in this connection that there are many classical systems for which finite-size phase transitions are predicted in microcanonical equilibrium~\cite{kastner,pettini}. \begin{figure} \caption{\label{fig:2} \label{fig:2} \end{figure} In quantum microcanonical equilibrium, the breakdown of analyticity of $\Omega(E)$ gives rise to phase transitions in the sense that discontinuities in the higher-order derivatives of $\Omega(E)$ emerge. Specifically, if we solve the first equation in ({\rm e}f{eq:2}) for the energy to obtain $E(T)$, then for a system with $n+1$ nondegenerate energy eigenvalues, the $(n-1)$-th derivative of the energy with respect to the temperature has a discontinuity. As an illustration we consider the specific heat $C(T)$ for a gas of weakly interacting molecules, where each molecule is modelled by a strongly interacting chain of three Ising-type spins (see Figure~{\rm e}f{fig:2}). The molecular Hamiltonian is ${\hat H} = -J \sum_{k=1}^3 \sigma_z^k \sigma_z^{k+1} - B \sum_{k=1}^3 \sigma_z^k$, where $\sigma_z^k$ is the third Pauli matrix for spin $k$, and $J,B$ are constants. For this system, the specific heat grows rapidly in the vicinity of the critical point $T_c=(2J+B)/3k_B$, where the system exhibits a discontinuity in the second derivative of the specific heat. The plot of the specific heat is shown in Figure~{\rm e}f{fig:3}, along with the corresponding plot for a simple four-level molecular gas; the latter exhibits a second-order phase transition at the critical temperature $k_BT_c = \half \varepsilon$ and critical energy $E_c = \varepsilon$, where $\varepsilon$ is the spacing of energy eigenvalues. 
\begin{figure} \caption{\label{fig:3} \label{fig:3} \end{figure} \section{Towards quantum grand microcanonical equilibrium} \label{sec:5} In the foregoing discussion we have made use of the energy conservation property of the unitary evolution to introduce a \emph{quantum microcanonical hypothesis} which asserts that in equilibrium, every quantum state with given energy $E$ is realised with an equal probability. This hypothesis can be refined in the following manner, leading to what might appropriately be called the quantum \textit{grand microcanonical hypothesis}. For a given quantum mechanical system there are $n$ linearly independent conserved observables, where $n+1$ is the Hilbert space dimensionality. Therefore, when an isolated quantum system with a generic Hamiltonian evolves unitarily, the associated dynamics exhibit ergodicity on the toroidal subspace ${\mathcal T}^n \subset{\mathcal E}_E$ of the energy surface determined by simultaneously fixing the expectation values of the commuting family of observables (cf.~\cite{brody}). A theorem of Birkhoff~\cite{khinchin1} applies to show that the dynamical average of an observable can be replaced by the ensemble average with respect to a uniform distribution over ${\mathcal T}^n$. The density of states is then determined by the weighted volume of the subspace ${\mathcal T}^n$ of the quantum state space. In the case of the energy observable, the conjugate variable is given by the inverse temperature. For other observables belonging to the commuting family, the associated conjugate variables can be thought of as \emph{generalised chemical potentials}. In this respect, a refinement of the microcanonical postulate leads to an ensemble of the grand canonical form. Let us consider the simplest nontrivial example $n=2$. We choose the two projection operators ${\hat\Pi}_1=|E_1\rangle\langle E_1|$ and ${\hat\Pi}_2= |E_2\rangle\langle E_2|$ for the independent pair of commuting observables. Since these observables are conserved, we let the two constraints be $\langle{\hat\Pi}_1\rangle=p$ and $\langle {\hat\Pi}_2\rangle=q$. It follows from the resolution of identity that $\langle {\hat\Pi}_3\rangle=1-p-q$. In terms of the usual parametrisation, these constraints read $\cos^2\half\,\theta=p$ and $\sin^2\half\,\theta \sin^2\half\,\varphi=q$, respectively. The generalised density of states $\Omega(p,q)=\int_\Gamma \delta(\langle{\hat\Pi}_1\rangle-p) \delta(\langle{\hat\Pi}_2\rangle-q) {\rm d} V_\Gamma$ can then be calculated to yield \begin{eqnarray} \Omega(p,q)=\quat\pi^2\left(\Upsilon(p)-\Upsilon(p+q-1) {\rm i}ght) \left(\Upsilon(q)-\Upsilon(q-1) {\rm i}ght), \label{eq:10} \end{eqnarray} where $\Upsilon(x)=-1$ for $x\leq0$ and $\Upsilon(x)=1$ for $x>0$. We have $\Omega(p,q)=\pi^2$ in the range $0\leq p,q\leq1$ and $0\leq 1-p-q \leq1$. To establish its relation with the density of states $\Omega(E)$ we solve the energy constraint $pE_1+qE_2+(1-p-q)E_3=E$ for, say, $p$, then substitute the result in $\Omega(p,q)$, and integrate over $q$ from $0$ to $1$. The temperature of the system can then be obtained by differentiation. It would be of interest to further investigate properties of the grand microcanonical equilibrium for general systems, which in our view holds the promise for many new rigorous results in quantum statistical mechanics. \ack DCB acknowledges support from The Royal Society. DWH thanks the organisers of the DICE2006 conference in Piombino, Italy, 11-15 September 2006 where this work was presented. 
The authors thank M.~Parry for comments. \vskip10pt \end{document}
\begin{document} \title[Quantum state estimation when qubits are lost]{Quantum state estimation when qubits are lost: A no-data-left-behind approach} \author{Brian P. Williams and Pavel Lougovski} \address{Quantum Information Science Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee USA 37831} \ead{[email protected]; [email protected]} \begin{abstract} We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean estimate for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.\footnote{\scriptsize This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan( \href{http://energy.gov/downloads/doe-public-access-plan}{http://energy.gov/downloads/doe-public-access-plan}). }\end{abstract} \noindent{\it \small Keywords}:\small quantum state estimation, qubit, Bayesian mean estimation\\ \normalsize \tableofcontents \pagestyle{fancy} \section{Introduction} The problem of estimating the state of a quantum system from observed measurement outcomes has been around since the dawn of quantum mechanics. In recent years, driven by technological advances in building and controlling quantum systems, this question has received a renewed interest especially in the context of quantum information processing (QIP). Since QIP relies on one's ability to prepare arbitrary multi-qubit quantum states, verifying experimentally that a quantum state has been indeed prepared acceptably close (by some metric) to a target is of paramount importance. As a result, methods built on classical statistical parameter estimation procedures have been adopted in order to satisfy a demand for quantum state characterization tools. Arguably the most widespread quantum state estimation approach relies on classical maximum likelihood estimation (MLE) technique. Pioneered in~\cite{Hradil}, it offers a great simplicity in numerical implementation but suffers from several dangerous flaws. Most prominently, MLE is prone to rank-deficient estimates of a density matrix from a {\it finite} number of measurement samples~\cite{blume2010optimal} which in turn implies that probability of observing certain states is zero -- a statement that is only valid in the limit of {\it infinite} number of observations. Also, MLE does not provide a straightforward way to place error bars on an estimated quantum state. 
Fortunately, there is an alternative parameter estimation technique called the Bayesian mean estimate (BME) that can be applied to quantum state characterization~\cite{blume2010optimal,jones1991principles,Granade2016} and is free of the aforementioned shortcomings. In addition, the BME minimizes the mean square error (MSE)~\cite{blume2010optimal,jaynes2003probability}, i.e. the average squared difference between the parameter and its estimate. Thus, the BME offers a more accurate estimate of a quantum state. But the BME poses an implementation challenge. It computes a posterior distribution over quantum states for given measurement data by using Bayes' rule, which in turn requires one to calculate the probability of the data by integrating over the manifold of all physical quantum states. While it may be possible to carry out the multi-dimensional integration analytically, the ultimate evaluation may still be computationally prohibitive. Thus, numerical routines that use Monte Carlo (MC) methods are applied in order to sample from the posterior distribution. There is a trade-off between the speed and accuracy of the BME depending on which MC algorithm is used. To our knowledge, two types of MC algorithms have been proposed for quantum state BME so far. The first one, the Metropolis-Hastings~\cite{MetropolisHastings} (MH) algorithm--an example of Markov chain MC (MCMC)--was adopted in~\cite{blume2010optimal}. The second one, sequential MC (SMC)~\cite{del2006sequential}--an importance-sampling-based algorithm--was recently used for adaptive quantum state tomography~\cite{adaptive}. The MH algorithm is known for its ability to reproduce probability distributions very accurately at the expense of slow convergence. On the other hand, the SMC algorithm is fast but may converge to a sample that does not faithfully represent the distribution of interest. When to apply the MLE or the BME depends on many factors. For instance, for a small measurement data set and a large number of unknown parameters--a typical situation for multi-qubit systems--the BME is superior to the MLE, as we demonstrate in Section~\ref{performance} of this paper. But perhaps even more crucially, the applicability of the BME approach depends on the choice of the form and parametrization of the likelihood function. The experimental likelihood most often used in applications is a simple multinomial that connects the observed data set directly with the quantum probabilities using Born's rule \cite{Hradil}. This approach assumes that the data set, the observed measurement outcomes, results directly and only from the quantum state and unitary operations. This is not always the case, since the measurement apparatus often introduces operation bias (not always unitary) and inefficiencies that modify the probability of an experimental observation. Previously, James et al. \cite{James01} accounted for bias in qubit operations in their two-photon tomography method using MLE. More recent MLE works such as Gate Set Tomography \cite{stark2014self,blume2013robust} assume nothing about the qubit state or operations (gates) other than their dimension. However, previous methods stop short of fully accounting for non-unitary operations such as qubit loss. Thus, experimentalists may find themselves applying normalizing constants to account for deficiencies in the defined likelihood. These normalizing constants require preliminary experiments to obtain, and the method for obtaining these constants is often neither well defined nor reported.
In this paper we develop a BME-based quantum state reconstruction method that utilizes the slice sampling~\cite{sliceSampling} (SS) algorithm which has the accuracy of the MH algorithm but demonstrates faster convergence~\cite{NealConv} and is more resilient for a numerical implementation. We show that by using the hyperspherical parameterization of the manifold of density matrices the BME of a state of a single qubit can be computed analytically by using a uniform prior. For a two-qubit system, in a situation when individual qubits may be lost during the measurement process, we apply SS algorithm to the same parameterization and an experiment-specific likelihood demonstrating a computationally stable and efficient way of sampling from the posterior distribution over the density matrices. We compare the resulting BME estimates to the corresponding MLE estimates as a function of the number of measurements and observe the superiority of the BME method, especially in the limit of small sample sizes. We begin this paper with a quick outline of our method in Section~\ref{Sec:outline}. We derive a closed BME for the ideal single qubit experiment in Section~\ref{Sec:singlequbit}. This approachable example illustrates our method and contrasts it with traditional MLE methods. It may also inspire further research into closed-form BME solutions of higher dimensional quantum systems. Next, in Section~\ref{Sec:twoqubits}, we derive a likelihood for a finite data two-photon experiment where detector inefficiencies and experimental asymmetries are taken into account. Utilization of this approach results in the real world benefit of eliminating the need to perform preliminary experiments to determine normalization constants. Subsequently, in Section~\ref{performance} we simulate a multitude of two-qubit photon experiments generating data sets from which we compare the performance of various MLE and BME approaches. Lastly, we apply our estimation to a real world two-photon experiment in Section~\ref{Sec:Experimental}. In the Appendices, we describe a common MLE approach using a traditional likelihood, we detail numerical procedures for sampling density matrices from the true state distribution using slice sampling, and describe the optimization method used in likelihood maximization. \section{Approach Outline}\label{Sec:outline} The components of our quantum state estimation pipeline are outlined in Fig. \ref{outline}. First, we define a model of our experiment by enumerating all the possible outcomes. This enumeration allows us to specify an experiment-specific likelihood $P(\mathcal{D}|\alpha)$, the probability of observing a specific data set $\mathcal{D}$ given the experimental parameters $\alpha=\{\alpha_{1},\cdots,\alpha_{N}\}$. In our case parameters $\alpha$ are elements of a density matrix $\rho$ representing the quantum state to be estimated. Bayes' rule, \begin{equation}P(\alpha|\mathcal{D})=\frac{P(\mathcal{D}|\alpha)P(\alpha)}{P(\mathcal{D})}\end{equation} then allows us to express $P(\alpha|\mathcal{D})$, a posterior distribution (PD) for the variables $\alpha$, given an observed data set $\mathcal{D}$ and a prior probability distribution $P(\alpha)$. \begin{figure} \caption{Our Bayesian mean estimation of density matrices is outlined above. 
a) We define our experiment by specifying a likelihood function $P(\mathcal{D}|\alpha)$.\label{outline}} \end{figure} Next, the BME for a specific parameter $\alpha_i$ given a data set $\mathcal{D}$ is \begin{equation}\overline{\alpha_i}=\int d\alpha P(\alpha|\mathcal{D})\times \alpha_i \textrm{.}\end{equation} We expand our analysis to quantum systems by assuming that parameters $\alpha$ are entries of a density matrix $\rho$ ($\rho_{ij}=\alpha_{k}$) describing a valid quantum state (i.e. $\rho\ge 0$, $\rho=\rho^{\dagger}$, $\textrm{Tr}(\rho)=1$). Therefore, the $\alpha$'s are not independent, as we must enforce quantum constraints. To achieve this in a computationally tractable fashion, instead of using the Cartesian parametrization given by the $\alpha$'s we parametrize a density matrix $\rho$ utilizing a Cholesky decomposition (see panel {\bf c} in Fig.~\ref{outline}) and hyperspherical parameters as suggested by Daboul \cite{daboul1967conditions}. We abbreviate this parametrization with $\tau$ to distinguish it from the Cartesian parametrization $\alpha$. Next, in order to compute the BME estimator, we need to select a prior probability distribution over density matrices $P(\tau)\equiv P(\rho(\tau))$ and an integration measure $d\tau$ over the set of all physical quantum states such that $d\mu(\rho(\tau))=P(\tau)d\tau$ is a valid probability measure, i.e. $\int d\mu(\rho(\tau)) = 1$. We use a non-informative prior $P(\tau) = \textrm{const}$ and derive the integration measure $d\tau$ induced by the Riemannian metric $g_{ij}$ computed from the Euclidean length element between density matrices $(ds)^{2} = \textrm{Tr}\left(d\rho(\tau)\cdot d\rho^{\dagger}(\tau)\right)$~\cite{fyodorov2005introduction}. This choice of the integration measure guarantees Haar invariance of the probability measure over the set of density matrices. Thus, the probability of a state $\rho(\tau)$ is invariant under an arbitrary unitary rotation $U$, i.e. $P(\rho(\tau))=P(U\rho(\tau) U^{\dagger})$. Then the BME of an unknown quantum state reads \begin{equation}\overline{\rho}=\int d\tau P(\tau|\mathcal{D})\times \rho(\tau).\end{equation} The latter expression for the BME can, in principle, be evaluated analytically. However, in practice it almost surely demands computational resources and (or) time that make analytical evaluation prohibitive. In this case, an estimate can be obtained using numerical sampling from the posterior distribution $P(\tau|\mathcal{D})$. For example, in later sections we utilize numerical slice sampling \cite{sliceSampling} to arrive at approximate estimates for a two-photon experiment. \section{An example: Bayesian mean estimation of an ideal single qubit}\label{Sec:singlequbit} Consider an ideal single-qubit experiment. In this experiment we can reliably and repeatedly prepare a qubit in an unknown state $\rho$ and measure the value of any desired observable $M$ without error or qubit loss. If $M$ is a two-outcome POVM defined by operators $M_i$, $i\in\{0,1\}$, with $M_0 + M_1 = I$, then the respective probabilities $p_i$ to observe outcome $i$ are determined by the unknown quantum state via $p_i = \textrm{Tr}\left(\rho\cdot M_i\right)$. To fully describe a single qubit we need to measure a set of informationally complete POVMs, which will fully define the density matrix. For concreteness let us consider the case where the qubit is represented by the polarization degree of freedom of a single photon.
In this case a complete state description can be achieved by estimating the probability of observing one of two orthogonal outcomes in the rectilinear basis ($Z$, horizontal ($h$) and vertical ($v$) polarization), the diagonal basis ($X$, diagonal ($d$) and anti-diagonal ($a$) polarization), or the circular basis ($Y$, left ($l$) and right ($r$) circular polarization). The likelihood of observing a data set $\mathcal{D}$ from these measurements given we know the probabilities of each outcome exactly is \begin{equation}P(\mathcal{D}|\alpha)=p_h^{c_h} (1\textrm{-} p_h)^{c_v} p_d^{c_d}(1\textrm{-} p_d)^{c_a} p_l^{c_l} (1\textrm{-} p_l)^{c_r}\label{likelihood}\end{equation} where $\alpha=\{p_h,p_d,p_l\}$, $\mathcal{D}=\{c_h,c_v,c_d,c_a,c_l,c_r\}$, we have enforced the single basis requirement that the sum of orthogonal probabilities is unity, $p_{h,d,l}+p_{v,a,r}=1$. Using Bayes rule, the distribution for $\alpha$ given $\mathcal{D}$ is \begin{equation}P(\alpha|\mathcal{D})=\frac{P(\mathcal{D}|\alpha)P(\alpha)}{\int d\alpha P(\mathcal{D}|\alpha)P(\alpha)}\end{equation} which has no quantum constraints, i.e. associated density matrices may not be physical. A physical density matrix $\rho$ for the single-qubit must fulfill constraints \begin{eqnarray}\textrm{Tr}\left(\rho\right)=1 \quad &&\textrm{probabilities sum to 1} \nonumber \\ \left\langle \phi \right| \rho \left|\phi\right\rangle \geq 0 \quad &&\textrm{positive semi-definite} \nonumber \\ \rho=\rho^\dagger \quad &&\textrm{hermitian} \textrm{.}\nonumber \end{eqnarray} These can all be fulfilled by parametrizing the density matrix as suggested by Daboul \cite{daboul1967conditions}. For the single-qubit the parametrized matrix is \begin{equation}\rho(\tau)\!=\!\left(\!\begin{array}{cc} \cos^2\left(u\right) & \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{i\phi} \\ \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{-i\phi} & \sin^2\left(u\right) \end{array}\!\right)\label{ideal_rho}\end{equation} where parameter ranges $\tau=\{u,\theta,\phi\}$, $u\in[0,\frac{\pi}{2}]$, $\theta\in[0,\frac{\pi}{2}]$, and $\phi\in[0,2\pi]$ ensure there is no state redundancy, states having multiple representations. This matrix heeds all quantum constraints for any values of the parameters. The parameters $\alpha$ in terms of the new parameters $\tau$ are \begin{align}p_h(\tau) &= \cos^2\left(u\right)\label{pH}\\ p_d(\tau)&=\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\label{pV}\\ p_l(\tau)&=\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\textrm{.}\label{pL}\end{align} This results in the new likelihood \begin{equation}P(\mathcal{D}|\tau)=p_h(\tau)^{c_h} (1\textrm{-} p_h(\tau))^{c_v} p_d(\tau)^{c_d}(1\textrm{-} p_d(\tau))^{c_a} p_l(\tau)^{c_l} (1\textrm{-} p_l(\tau))^{c_r}\label{tau_likelihood}\textrm{.}\end{equation} To complete the new description we must define a new integration measure in $\tau$ space. Our original probability space has an infinitesimal length element $(ds)^2=(dp_h)^2+(dp_d)^2+(dp_r)^2$. The measure in this case is reduced to the volume element in Cartesian coordinates $d\alpha=dp_h dp_d dp_l$. This space can be considered a``cube" that includes both physical and unphysical states. Within this cube, the new space is a sphere containing only and all physical density matrices. 
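Before specifying that measure, the parametrization and likelihood above can be summarized in a short illustrative sketch (Python with NumPy; the function names are ours, and this is not part of our analysis pipeline):
\begin{verbatim}
import numpy as np

def rho_single_qubit(u, theta, phi):
    # the parametrized single-qubit density matrix given above
    off = 0.5 * np.cos(theta) * np.sin(2 * u) * np.exp(1j * phi)
    return np.array([[np.cos(u)**2, off],
                     [np.conj(off), np.sin(u)**2]])

def outcome_probabilities(u, theta, phi):
    # probabilities of the h, d and l outcomes in terms of tau
    p_h = np.cos(u)**2
    p_d = 0.5 + 0.5 * np.sin(2 * u) * np.cos(theta) * np.cos(phi)
    p_l = 0.5 + 0.5 * np.sin(2 * u) * np.cos(theta) * np.sin(phi)
    return p_h, p_d, p_l

def log_likelihood(tau, counts):
    # log P(D|tau); counts = (c_h, c_v, c_d, c_a, c_l, c_r)
    c_h, c_v, c_d, c_a, c_l, c_r = counts
    p_h, p_d, p_l = outcome_probabilities(*tau)
    eps = 1e-300  # guard against log(0) at the boundary of parameter space
    return (c_h * np.log(p_h + eps) + c_v * np.log(1 - p_h + eps)
            + c_d * np.log(p_d + eps) + c_a * np.log(1 - p_d + eps)
            + c_l * np.log(p_l + eps) + c_r * np.log(1 - p_l + eps))
\end{verbatim}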
The length element in this space is \cite{fyodorov2005introduction} \begin{equation}(ds)^2=\textrm{Tr}\left(d\rho \cdot d\rho^\dagger\right)=\sum\limits_{i,j}\textrm{Tr}\left(\frac{\partial \rho}{\partial \tau_i}\cdot\frac{\partial \rho}{\partial \tau_j}\right)d\tau_i d\tau_j\label{measure}\end{equation} where $\tau_i\in\{u,\theta,\phi\}$. The new measure, the infinitesimal volume, is \begin{equation}d\tau = d\tau_0\; d\tau_1 ..d\tau_m \textrm{Det}\sqrt{g} \label{dTau}\end{equation} where \begin{equation}g_{i j}=\textrm{Tr}\left(\frac{\partial \rho}{\partial \tau_i}\cdot\frac{\partial \rho}{\partial \tau_j}\right)\textrm{.}\label{gIJ}\end{equation} The integration measure in the ideal single qubit experiment is \begin{equation}d\tau = du\; d\theta\; d\phi\;\frac{\textrm{sin}^3\left(2u\right)\textrm{sin}\left(2\theta\right)}{2\sqrt{2}} \textrm{.}\nonumber\end{equation} As described earlier, this measure is Haar invariant. We will also consider how this parametrization relates to the Pauli operators \begin{equation}\sigma_z= \left(\!\begin{array}{cc} 1 & 0\\ 0 & -1 \\ \end{array}\!\right) \quad \sigma_x= \left(\!\begin{array}{cc} 0 & 1\\ 1 & 0 \\ \end{array}\!\right)\quad \sigma_y= \left(\!\begin{array}{cc} 0 & -i\\ i & 0 \\ \end{array}\!\right) \end{equation} and their expectations \begin{align}z&=\textrm{Tr}\left(\sigma_z\cdot\rho\right)=\cos(2u)\label{pauliZ}\\ x&=\textrm{Tr}\left(\sigma_x\cdot\rho\right)=\sin(2u)\cos(\theta)\cos(\phi)\label{pauliX}\\ y&=\textrm{Tr}\left(\sigma_y\cdot\rho\right)=\sin(2u)\cos(\theta)\sin(\phi)\label{pauliY}\textrm{.} \end{align} With our likelihood defined, one estimation technique is to approximate the true distribution utilizing Laplace's method \cite{mackay2003information}, also known as the saddle-point approximation. This is a multivariate Gaussian centered on the MLE defined by $k$ parameters. This MLE is found by simultaneously solving $k$ equations of the form \begin{equation}\frac{\partial P(\mathcal{D}|\tau)}{\partial \tau_i}=0\end{equation} and verifying this point represents the global maximum. The uncertainty in the parameters can be captured utilizing the covariance matrix which we estimate as \begin{equation}A_{ij}=\left.-\frac{\partial^2 \log\left(P(\mathcal{D}|\tau)\right)}{\partial \tau_i \partial \tau_j}\right|_{\tau=\tau_{\textrm{ml}}}\textrm{.}\end{equation} The approximate distribution is then \begin{equation}P(\mathcal{D}|\tau)\approx\sqrt{\frac{(2\pi)^k}{\det\left(\mathbf{A}\right)}}e^{-\frac{1}{2}\left(\mathbf{\tau}-\mathbf{\tau}_{mle}\right)^T \cdot A \cdot\left(\mathbf{\tau}-\mathbf{\tau}_{mle}\right)}\end{equation} where $\mathbf{\tau}$ is a column vector. For the ideal single-qubit we find unbounded MLE \begin{equation}u_{\textrm{uml}}=\frac{\arccos\left(z_f\right)}{2}\quad\quad \theta_{\textrm{uml}}=\arccos\left(\sqrt{\frac{x_f^2+y_f^2}{1-z_f^2}}\right)\quad\quad \phi_{\textrm{uml}}=\arctan\left(x_f,y_f\right)\label{unbounded}\end{equation} where $z_f$, $x_f$, and $y_f$ are the frequency based linear inversion estimates (LIE) of the Pauli operator expectations \begin{equation} z_f=\frac{c_h-c_v}{c_h+c_v}\quad\quad\quad x_f=\frac{c_d-c_a}{c_d+c_a}\quad\quad\quad y_f=\frac{c_l-c_r}{c_l+c_r}\textrm{.}\label{unbounded2} \end{equation} When $x_f^2+y_f^2+z_f^2\leq1$ these LIE are the correct MLE. However the parameter set given in Eq. \ref{unbounded} and \ref{unbounded2} is undefined for unphysical states, when $x_f^2+y_f^2+z_f^2>1$. 
When this is the case, the MLE is found on the boundary of the Bloch sphere due to the concavity of the likelihood given by Eq. \ref{tau_likelihood}. This point is not necessarily the one with smallest Euclidean distance to the unbounded MLE. Determination of the boundary MLE is accomplished by setting $\theta=0$, restricting us to the boundary, and maximizing the parametrized likelihood Eq. \ref{tau_likelihood} over the parameter ranges $u\in\left[0,\frac{\pi}{2}\right]$ and $\phi\in\left[0,2\pi\right]$. Next, we derive a closed-form BME which always results in a quantum bound obedient estimate. To calculate the BME for the single qubit density matrix we first evaluate the normalizing constant \begin{equation}P(\mathcal{D})=\int d\tau P(\mathcal{D}|\tau)P(\tau)\end{equation} and then estimate our mean density matrix \begin{align}\overline{\rho}&=\frac{1}{P(\mathcal{D})}\int d\tau P(\mathcal{D}|\tau)P(\tau)\times \rho\;(\tau)\nonumber\\ &=\frac{1}{P(\mathcal{D})}\int du\; d\theta\; d\phi\; \frac{\textrm{sin}^3\left(2u\right)\textrm{sin}\left(2\theta\right)}{2\sqrt{2}}\left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\ &\quad\times \left(\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\right)^{c_d}\left(\frac{1}{2}-\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\right)^{c_a}\nonumber \\ &\quad\times \left(\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\right)^{c_l}\left(\frac{1}{2}-\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\right)^{c_r}\nonumber \\ &\quad\times \left(\!\begin{array}{cc} \cos^2\left(u\right) & \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{i\phi} \\ \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{-i\phi} & \sin^2\left(u\right) \end{array}\!\right)\nonumber \textrm{.} \end{align} Using the binomial theorem we can rewrite this as \small \begin{align}&\overline{\rho}\;=\frac{1}{P(\mathcal{D})}\int_0^{\pi/2}\!\!\!\!\!du\int_0^{\pi/2}\!\!\!\!\!d\theta\int_0^{2\pi}\!\!\!\!\!d\phi\; \frac{8\; \textrm{sin}^3\left(u\right)\textrm{cos}^3\left(u\right)\sin\left(\theta\right)\cos\left(\theta\right)}{\sqrt{2}}\left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\ &\times\!\!\sum_{k_d=0}^{c_d}\!\!\binom{c_d}{k_d} 2^{-k_d} \!\left(\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_d-k_d}\sum_{k_a=0}^{c_a}\!\!\binom{c_a}{k_a} 2^{\textrm{-} k_a} \!\left(\textrm{-}\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_a-k_a}\nonumber \\ &\times\!\!\sum_{k_l=0}^{c_l}\!\!\binom{c_l}{k_l} 2^{-k_r} \!\left(\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_l-k_l}\sum_{k_r=0}^{c_r}\!\!\binom{c_r}{k_r} 2^{\textrm{-} k_l} \!\left(\textrm{-}\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_r-k_r}\nonumber \\ &\quad\times \left(\!\begin{array}{cc} \cos^2\!\left(u\right) & \frac{1}{2}\cos\!\left(\theta\right)\sin\!\left(2u\right)e^{i\phi} \\ \frac{1}{2}\cos\!\left(\theta\right)\sin\!\left(2u\right)e^{-i\phi} & \sin^2\!\left(u\right) \end{array}\!\right)\nonumber \textrm{.} \end{align} \normalsize The integral over $u$ has solution \begin{equation}\int_0^{\pi/2} du \sin^x\!u \cos^y\!u =\frac{1}{2}\;\textrm{Beta}\left(\frac{1+x}{2},\frac{1+y}{2}\right)\end{equation} and similar for $\theta$. 
The integral over $\phi$ can be shown to be \begin{equation}\int_0^{2 \pi} d\phi\; \sin^x \phi \cos^y \phi= \frac{\left(1+(\textrm{-}1)^x\right)\left(1+(\textrm{-}1)^y\right)}{2}\;\textrm{Beta}\left(\frac{1+x}{2},\frac{1+y}{2}\right)\end{equation} which is zero for odd $x$ or $y$. To ease representation of the solutions, define \scriptsize \begin{align}&F_{u_0,u_1,\theta_0,\theta_1,\phi_0,\phi_1}\nonumber \\ &=\int_0^{\pi/2}\!\!\!\!\!du\int_0^{\pi/2}\!\!\!\!\!d\theta\int_0^{2\pi}\!\!\!\!\!d\phi\; \frac{8\; \textrm{sin}^3\left(u\right)\textrm{cos}^3\left(u\right)\sin\left(\theta\right)\cos\left(\theta\right)}{\sqrt{2}} \left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\ &\quad\times\!\!\sum_{k_d=0}^{c_d}\!\!\binom{c_d}{k_d} 2^{-k_d} \!\left(\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_d-k_d} \sum_{k_a=0}^{c_a}\!\!\binom{n_d\textrm{-} c_d}{k_a} 2^{-k_a} \!\left(-\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_a-k_a}\nonumber \\ &\quad\times\!\!\sum_{k_l=0}^{c_l}\!\!\binom{c_r}{k_r} 2^{-k_r} \!\left(\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_l-k_l} \sum_{k_r=0}^{c_r}\!\!\binom{n_c\textrm{-} c_r}{k_l} 2^{-k_l} \!\left(-\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_r-k_r}\nonumber \\ &\quad\times \cos^{u_0}(u)\sin^{u_1}(u)\cos^{\theta_0}(\theta)\sin^{\theta_1}(\theta)\cos^{\phi_0}(\phi)\sin^{\phi_1}(\phi) \nonumber \\ &=\sum_{k_d=0}^{c_d}\sum_{k_d=0}^{c_d}\sum_{k_l=0}^{c_l}\sum_{k_r=0}^{c_r}\binom{c_d}{k_d}\binom{c_a}{k_a}\binom{c_l}{k_l}\binom{c_r}{k_r} 2^{-k_d-k_a-k_l-k_r}(\textrm{-} 1)^{c_a+c_r-k_a-k_r}\!\left(\!1\!+\!(\textrm{-} 1)^{n_c \textrm{-} k_l \textrm{-} k_r \textrm{+} \phi_0}\right)\left(\!1\!+\!(\textrm{-} 1)^{n_d \textrm{-} k_d \textrm{-} k_a \textrm{+} \phi_1}\right)\nonumber \\ &\quad\times\textrm{Beta}\left(\frac{4 \textrm{+} 2\;c_h + n_d +n_c - k_d - k_a - k_l - k_r + u_0}{2},\frac{4 + 2\;c_v + n_d + n_c - k_d - k_a - k_l - k_r + u_1}{2}\right)\nonumber \\ &\quad\times\textrm{Beta}\left(\frac{2+\theta_0}{2},\frac{1 + n_d + n_c - kd - ka - k_l - k_r + \theta_1}{2}\right)\;\textrm{Beta}\left(\frac{1 + n_c - k_l - k_r + \phi_0}{2},\frac{1 + n_d - k_d - k_a + \phi_1}{2}\right)\nonumber\textrm{.} \end{align} \normalsize The BME for our ideal single-qubit is \begin{equation}\overline{\rho}= \frac{1}{F_{0,0,0,0,0,0}} \left(\!\begin{array}{cc} F_{2,0,0,0,0,0} & F_{1,1,0,1,0,1}\textrm{+} i F_{1,1,0,1,1,0}\\ F_{1,1,0,1,0,1}\textrm{-} i F_{1,1,0,1,1,0} & F_{0,2,0,0,0,0} \\\end{array}\!\right)\textrm{.}\nonumber \end{equation} This is the best possible estimation of the ideal single-qubit given a set of data $\mathcal{D}$ and a uniform prior. \begin{figure} \caption{We plot posterior marginal distributions $P(x,z|\mathcal{D} \label{bloch} \end{figure} To illustrate the physicality of the distribution $P(\tau|D)$ and the differences between the MLE, LIE, and the BME consider a true quantum state $\rho_0$ defined by the parameters $u_0=0.864$, $\theta_0=0.393$, and $\phi_0=5.18$. For visualization we use the Bloch sphere where the state is represented by the expectations of the the Pauli operators given in Eq. \ref{pauliZ}-\ref{pauliY}. We simulated taking 10 measurements in the $Z$, $X$, and $Y$ bases from which we generated counts $c_h=7$, $c_v=3$, $c_d=7$, $c_a=3$, $c_l=0$, and $c_r=10$. We plot the distributions $P(x,z|\mathcal{D})$, $P(y,z|\mathcal{D})$, and $P(x,y|\mathcal{D})$ in Fig. \ref{bloch}. The coordinates for each estimate are given in Table \ref{tableEstimates}. 
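As an independent cross-check of the closed-form result, $\;\overline{\rho}\;$ for this data set can also be approximated by brute-force Monte Carlo integration over $\tau$; the constant $1/(2\sqrt{2})$ in the measure cancels between numerator and denominator. The sketch below (Python with NumPy, purely illustrative) weights uniform draws in the parameter box by the measure times the likelihood:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
counts = dict(c_h=7, c_v=3, c_d=7, c_a=3, c_l=0, c_r=10)

def rho(u, th, ph):
    off = 0.5 * np.cos(th) * np.sin(2 * u) * np.exp(1j * ph)
    return np.array([[np.cos(u)**2, off], [np.conj(off), np.sin(u)**2]])

def weight(u, th, ph, c):
    # Haar measure (up to a constant) times the parametrized likelihood
    p_h = np.cos(u)**2
    p_d = 0.5 + 0.5 * np.sin(2 * u) * np.cos(th) * np.cos(ph)
    p_l = 0.5 + 0.5 * np.sin(2 * u) * np.cos(th) * np.sin(ph)
    like = (p_h**c['c_h'] * (1 - p_h)**c['c_v'] * p_d**c['c_d']
            * (1 - p_d)**c['c_a'] * p_l**c['c_l'] * (1 - p_l)**c['c_r'])
    return np.sin(2 * u)**3 * np.sin(2 * th) * like

num, den = np.zeros((2, 2), dtype=complex), 0.0
for _ in range(100000):                  # increase for higher accuracy
    u, th = rng.uniform(0, np.pi / 2, size=2)
    ph = rng.uniform(0, 2 * np.pi)
    w = weight(u, th, ph, counts)
    num, den = num + w * rho(u, th, ph), den + w
print(num / den)                         # approximates the closed-form BME
\end{verbatim}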
This small data set emphasizes the parametrized distribution's physicality and the qualitative difference between the MLE and BME. The gray locations correspond to unphysical states. As can be seen in the top right and bottom plots in Fig. \ref{bloch}, the LIE can be unphysical. To correct this, the MLE is found on the boundary, a pure state. In contrast, the BME will always be located within the physical space. This illustration is not made to emphasize the performance of any of specific approach. Performance is addressed in Section \ref{performance}. \begin{table}[t] \centering \small \begin{tabular}{|c|c|c|c|c|} \hline & $z$ & $x$ & $y$ & $\sqrt{z^2+x^2+y^2}$ \\ \hline true &-0.156&0.414&-0.813& 0.925\\\hline MLE &0.263& 0.263& -0.928& 1.00\\\hline LIE &0.400& 0.400& -1.00& 1.15$^\dagger$\\\hline BME &0.226&0.216&-0.695&0.762\\\hline \hline \end{tabular} \normalsize \caption{Bloch sphere coordinates for the true state, MLE, LIE, and BME. $\dagger$In this case, the LIE is unphysical.\label{tableEstimates}} \end{table} In order to utilize the ideal single qubit formalism with single-qubit experiments, the data can be renormalized to the lowest efficiency measurement similar to the procedure used in Appendix \ref{mleAppendix}. This method does not fully utilize the available information and has the additional complication that preliminary experiments must transpire to determine the measurement efficiencies. In the remainder of our manuscript we address qubit estimation for experiments. \section{Bayesian mean estimation for multi-qubit experiments}\label{Sec:twoqubits} In an experiment the probability of observing an outcome depends not only on the quantum state but also on the measurement apparatus itself. In this case imperfections and asymmetries in the measurement process prohibit the type of ``perfect" estimate we investigated in the last section. Our experiment of investigation is the common two-photon experiment for which we introduce the fundamental assumptions and model below. James et al. \cite{measureQubits2001} previously reported an MLE approach to this experiment as well as higher dimensional experiments. In contrast to that method, we account for qubit loss within our defined likelihood and enable determination of the BME, the best estimate on average \cite{blume2010optimal}, which avoids MLE pitfalls such as ``zero" probabilities, impossible outcomes. \subsection{Estimating parameters in a single-basis two-photon experiment}\label{A} In this section, we give an example of estimating parameters in a single-basis experiment. To begin, we assume the existence of a photon pair. A member of this pair is sent to Alice and the other one to Bob each of whom has chosen a measurement basis as seen in Fig. \ref{setup}. A single photon can result in one of two observable orthogonal outcomes, $0$ or $1$, and one unobservable outcome, the photon is lost. All observable outcomes have probabilities of occurrence proportional to the joint probabilities $p_{00}$, $p_{01}$, $p_{10}$, and $p_{11}$ as seen in Fig. \ref{bayes_tree}a. Additionally, Fig. \ref{bayes_tree}b illustrates the four possible outcomes for a given ``destiny" when pathway efficiencies are considered. The possibilities include both photons being counted giving one coincidence count and two singles counts, one photon being counted and one lost giving one singles count, or both photons being lost giving no counts. \begin{figure} \caption{The two-photon experiment is illustrated above. 
One member of a photon pair, a qubit, is sent to Alice and the other to Bob, each of whom has chosen a measurement basis. An individual qubit can result in one of two orthogonal outcomes, $0$ or $1$, or the qubit can be lost.\label{setup}} \end{figure} Alice and Bob record events $0$ and $1$ with counts $A_0,A_1\leq N$ and $B_0,B_1\leq N$, respectively, since typically some portion of the $N$ photons is lost. Losses are due to Alice and Bob's suboptimal pathway efficiencies $\left\{a_0,a_1,b_0,b_1\right\}\in\left[0,1\right]$. In the event both members of a photon pair are detected, Alice and Bob observe joint results, giving coincidence totals $c_{00},c_{01},c_{10},$ and $c_{11}$. \begin{figure} \caption{a) Our Bayesian tree begins with the existence of a photon pair. This pair is then ``destined'' to the joint outcome $ij$ according to probability $p_{ij}$.\label{bayes_tree}} \end{figure} From these data we may enumerate the number of each type of event. The number of joint coincidence events is straightforward: it is given by the $c_{ij}$, with the probability of these events being $a_i b_j p_{ij}$. The number of events where Alice registers result $i$ and Bob loses his photon is $A_i\textrm{-} c_{i0}\textrm{-} c_{i1}$. The probability of this occurrence is $a_i\left[(1-b_0)p_{i0}+(1-b_1)p_{i1}\right]$. The terms for Bob registering a photon and Alice losing her photon are similar. The number of events where both photons are lost is $N\textrm{-} A_{0}\textrm{-} A_{1}\textrm{-} B_{0}\textrm{-} B_{1}\textrm{+} c_{00}\textrm{+} c_{01}\textrm{+} c_{10}\textrm{+} c_{11}$ with probability \begin{equation}p_{\substack{pair\\lost}}=(1\textrm{-} a_0)(1\textrm{-} b_0)p_{00}+(1\textrm{-} a_0)(1\textrm{-} b_1)p_{01}+(1\textrm{-} a_1)(1\textrm{-} b_0)p_{10}+(1\textrm{-} a_1)(1\textrm{-} b_1)p_{11}\textrm{.}\nonumber\end{equation} For now, assume the photon number $N$ is known.
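The data-generating model just described is straightforward to simulate. The sketch below (Python with NumPy; the parameter values are the ones used in the single-basis simulation reported below) draws singles and coincidence counts for one basis:
\begin{verbatim}
import numpy as np

def simulate_single_basis(N, p, a, b, rng):
    # p = {(i, j): p_ij}, a = (a0, a1), b = (b0, b1)
    outcomes, probs = list(p.keys()), list(p.values())
    A, B = [0, 0], [0, 0]
    c = {o: 0 for o in outcomes}
    for _ in range(N):
        i, j = outcomes[rng.choice(len(outcomes), p=probs)]  # the pair's "destiny"
        alice = rng.random() < a[i]   # Alice's photon survives her pathway
        bob = rng.random() < b[j]     # Bob's photon survives his pathway
        if alice: A[i] += 1
        if bob: B[j] += 1
        if alice and bob: c[(i, j)] += 1
    return A, B, c

rng = np.random.default_rng(1)
p = {(0, 0): 0.3, (0, 1): 0.05, (1, 0): 0.2, (1, 1): 0.45}
A, B, c = simulate_single_basis(10000, p, a=(0.3, 0.7), b=(0.9, 0.5), rng=rng)
print(A, B, c)
\end{verbatim}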
In this case, using Bayes' rule, the event number, and the probabilities given above, the PD is \begin{equation}P\left(\alpha|\mathcal{D},N\right)=\frac{P\left(\mathcal{D},N|\alpha\right)P(\alpha)}{P\left(\mathcal{D},N\right)}\end{equation} where $\alpha$=$\left\{p_{00},p_{01},p_{10},p_{11},a_0,a_1,b_0,b_1\right\}$ are the unknown parameters, joint probabilities and pathway efficiencies, the data set $\mathcal{D}$=$\left\{c_{00},c_{01},c_{10},c_{11},A_0,A_1,B_0,B_1\right\}$ consists of the known singles and coincidence count values totalling \begin{equation}s=A_0+A_1+B_0+B_1\quad\quad\quad n=c_{00}+c_{01}+c_{10}+c_{11}\textrm{,}\nonumber\end{equation} respectively, \begin{align} &P(\mathcal{D},N|\alpha)=\nonumber\\ &\gamma\!\left(N\right)(a_0b_0p_{00})^{c_{00}}(a_0b_1p_{01})^{c_{01}}(a_1b_0p_{10})^{c_{10}}(a_1b_1p_{11})^{c_{11}}\nonumber\\ &\qquad\times [a_0\left(p_{00}(1-b_0)+p_{01}(1-b_1)\right)]^{A0-c_{00}-c_{01}}[a_1\left(p_{10}(1-b_0)+p_{11}(1-b_1)\right)]^{A1-c_{10}-c_{11}}\nonumber\\ &\qquad\times [b_0\left(p_{00}(1-a_0)+p_{10}(1-a_1)\right)]^{B0-c_{00}-c_{10}}[b_1\left(p_{01}(1-a_0)+p_{11}(1-a_1)\right)]^{B1-c_{01}-c_{11}}\nonumber\\ &\qquad\times[p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)+p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)+p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)+p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)]^{N-(s-n)}\textrm{,}\nonumber\\ &P(\alpha)=1\textrm{,}\nonumber\\ &P(\mathcal{D},N)=\!\!\!\!\int\!\!d\alpha \;P(\mathcal{D},N|\alpha)P(\alpha)\textrm{,}\nonumber\\ &\gamma\!\left(N\right)=\frac{N!}{(N\!-\!(s\!-\!n))!(A_0\textrm{-} c_{00}\textrm{-} c_{01})!(A_1\textrm{-} c_{10}\textrm{-} c_{11})!(B_0\textrm{-} c_{00}\textrm{-} c_{10})!(B_1\textrm{-} c_{01}\textrm{-} c_{11})!c_{00}!c_{01}!c_{10}!c_{11}!}\textrm{.}\nonumber\end{align} The likelihood $P(\mathcal{D},N|\alpha)$ consists of the probability of each type of event with a multiplicity equal to the number of times it occurred. Both the probabilities and number of events were described in the preceding paragraph. We have retained the full form of the likelihood that includes $\gamma(N)$ for use below. It is typical in two-photon experiments that the photon number $N$ is not known. If $N$ is known, the following step may be skipped and the above PD is the appropriate choice. Otherwise, we must make $N$ an unobserved parameter or seek a way to eliminate it. Fortunately, there is an analytical method to remove $N$ from the PD completely \cite{jaynes2003probability} by taking an average over the $N$ distribution using the summation formula \begin{equation}\sum_{m=0}^\infty\binom{m+y}{m}m^zx^m=\left(x\frac{d}{dx}\right)^z(1-x)^{-(y+1)}\textrm{.}\label{sumOverN}\end{equation} Since the average is taken over the distribution, see Fig. \ref{estimates}, only probable values of $N$ will have appreciable contribution. 
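The identity is easy to check numerically for the two cases used here, $z=0$ (to marginalize $N$) and $z=1$ (for the mean $\overline{N}$); a purely illustrative snippet:
\begin{verbatim}
from math import comb

def lhs(x, y, z, terms=2000):
    # truncated left-hand side of the summation formula
    return sum(comb(m + y, m) * m**z * x**m for m in range(terms))

x, y = 0.3, 5
print(lhs(x, y, 0), (1 - x)**(-(y + 1)))               # z = 0: marginalizes N
print(lhs(x, y, 1), x * (y + 1) * (1 - x)**(-(y + 2)))  # z = 1: gives the mean N
\end{verbatim}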
Applying this formula removes $N$, giving the PD \begin{equation}P\left(\alpha|\mathcal{D}\right)=\frac{\sum_{N=s-n}^{\infty}P\left(\mathcal{D},N|\alpha\right)P(\alpha)}{P\left(\mathcal{D}\right)}=\frac{P\left(\mathcal{D}|\alpha\right)P(\alpha)}{P\left(\mathcal{D}\right)}\label{PD}\end{equation} where \begin{align}& P(\mathcal{D}|\alpha)=a_0^{A_0}a_1^{A_1}b_0^{B_0}b_1^{B_1}p_{00}^{c_{00}}p_{01}^{c_{01}}p_{10}^{c_{10}}p_{11}^{c_{11}}\nonumber\\ &\qquad\times[p_{00}(1-b_0)+p_{01}(1-b_1)]^{A_0-c_{00}-c_{01}}[p_{10}(1-b_0)+p_{11}(1-b_1)]^{A_1-c_{10}-c_{11}}\nonumber\\ &\qquad\times[p_{00}(1-a_0)+p_{10}(1-a_1)]^{B_0-c_{00}-c_{10}}[p_{01}(1-a_0)+p_{11}(1-a_1)]^{B_1-c_{01}-c_{11}}\nonumber\\ &\qquad\times[1\textrm{-} p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)\textrm{-} p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)\textrm{-} p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)\textrm{-} p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)]^{-s+n-1}\textrm{,}\label{likelihood2}\\ &P(\alpha)=1\textrm{,}\nonumber\\ &P(\mathcal{D})=\int\!\!d\alpha\;P(\mathcal{D}|\alpha)P(\alpha)\textrm{,}\nonumber\end{align} and, in this specific case, \begin{equation}\int\! d\alpha \!\equiv\! \!\int_0^1\!\!\!\!da_0\!\int_0^1\!\!\!\!da_1\!\int_0^1\!\!\!\!db_0\!\int_0^1\!\!\!\!db_1\!\!\int_{0}^1\!\!\!\! dp_{00}\!\! \int_{0}^{1-p_{00}}\hspace{-25pt}dp_{01}\!\!\int_{0}^{1-p_{00}-p_{01}}\hspace{-42pt}dp_{10}\nonumber \end{equation} with $p_{11}=1-p_{00}-p_{01}-p_{10}$. We omitted all constants. Assuming the integral can be carried out, we can make estimates of any parameter via its mean value, for instance, \begin{equation}\overline{p_{00}}=\int\!\!d\alpha P\left(\alpha|\mathcal{D}\right)\times p_{00}.\end{equation} Likewise, any other parameter mean $\;\overline{p_{ij}}\;$, $\;\overline{a_{i}}\;$, or $\;\overline{b_{i}}\;$ as well as their standard deviations may be estimated. One exception is the mean value $\overline{N}$. We find this mean by setting $z=1$ in Eq. (\ref{sumOverN}), \begin{equation}\overline{N}=\int\!\!d\alpha P\left(\alpha|\mathcal{D}\right)\times\frac{s-n+g(\alpha)}{1-g(\alpha)}\end{equation} where \begin{equation}g(\alpha)=p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)+p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)+p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)+p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)\textrm{.}\end{equation} In principle, all of the above integrals have analytical solutions via the multinomial theorem, \begin{equation}(x_0+x_1+\cdots+x_m)^n =\sum_{k_0+k_1+\cdots+k_m=n}\!\!\binom{n}{k_0,k_1,\ldots,k_m}x_0^{k_0}x_1^{k_1}\cdots x_m^{k_m}\textrm{,}\nonumber\end{equation} which gives exact answers in the form of sums of Beta and Gamma functions. However, the computation needed to carry out the resultant sums is prohibitive. If we cannot efficiently make our parameter estimations analytically, we can utilize numerical sampling to approximate the BMEs of interest. We discuss this in detail in Appendix \ref{ss}.
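For concreteness, the marginalized single-basis likelihood of Eq.~(\ref{likelihood2}) can be coded directly; the sketch below (Python with NumPy; no guards against $\log 0$, purely illustrative) evaluates $\log P(\mathcal{D}|\alpha)$:
\begin{verbatim}
import numpy as np

def log_likelihood_single_basis(alpha, data):
    # alpha = (p00, p01, p10, p11, a0, a1, b0, b1)
    # data  = (c00, c01, c10, c11, A0, A1, B0, B1)
    p00, p01, p10, p11, a0, a1, b0, b1 = alpha
    c00, c01, c10, c11, A0, A1, B0, B1 = data
    s, n = A0 + A1 + B0 + B1, c00 + c01 + c10 + c11
    lost = (p00*(1 - a0)*(1 - b0) + p01*(1 - a0)*(1 - b1)
            + p10*(1 - a1)*(1 - b0) + p11*(1 - a1)*(1 - b1))
    terms = [(a0, A0), (a1, A1), (b0, B0), (b1, B1),
             (p00, c00), (p01, c01), (p10, c10), (p11, c11),
             (p00*(1 - b0) + p01*(1 - b1), A0 - c00 - c01),
             (p10*(1 - b0) + p11*(1 - b1), A1 - c10 - c11),
             (p00*(1 - a0) + p10*(1 - a1), B0 - c00 - c10),
             (p01*(1 - a0) + p11*(1 - a1), B1 - c01 - c11),
             (1 - lost, -(s - n + 1))]
    return sum(k * np.log(base) for base, k in terms)
\end{verbatim}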
If the probability of obtaining a sample $\alpha^{(r)}$ tends to the true probability $P(\alpha^{(r)}|\mathcal{D})$, the mean estimations can then be made by repetitive sampling, \begin{align}\overline{\alpha}=\frac{1}{R}\sum_{r=0}^R\alpha^{(r)}&=\frac{1}{R}\sum_{r=0}^R\left\{p_{00}^{(r)},p_{01}^{(r)},p_{10}^{(r)},p_{11}^{(r)},a_0^{(r)},a_1^{(r)},b_0^{(r)},b_1^{(r)}\right\}\nonumber\\ &=\left\{\overline{p_{00}},\;\overline{p_{01}},\;\overline{p_{10}},\;\overline{p_{11}},\;\overline{a_{0}},\;\overline{a_{1}},\;\overline{b_{0}},\;\overline{b_{1}}\right\}\textrm{.}\nonumber\end{align} \subsection{Single-basis simulation} \begin{figure} \caption{Left. Sample histograms given proportional to the approximate the probability distribution for each parameter $p_{ij} \label{estimates} \end{figure} Consider a single-basis simulation where unbeknownst to Alice and Bob a source generates $N$=$10,000$ photon pairs with joint probabilities and pathways efficiencies \begin{align}p_{00}&=0.3\quad\quad p_{01}=0.05\quad\quad p_{10}=0.2\quad\quad p_{11}=0.45\nonumber\\ a_{0}&=0.3\quad\quad\; a_{1}=0.7\quad\quad \;\;\;b_{0}=0.9\quad\quad \;\;b_{1}=0.5\;\textrm{.}\nonumber\end{align} The only information available to Alice and Bob are their count numbers \begin{align}A_0&=1079\quad\quad A_{1}=4553\quad\quad B_{0}=4474\quad\quad B_{1}=2565\nonumber\\ c_{00}&=829\phantom{0}\quad\quad c_{01}=89\phantom{00}\quad\quad c_{10}=1245\quad\quad c_{11}\!=1624\nonumber\textrm{.}\end{align} Numerical sampling, see Appendix \ref{ss}, is used to produce sample $\alpha^{(r)}$ from $P\left(\alpha|\mathcal{D}\right)$. Fig. \ref{estimates} includes histograms for each parameter from 25,200 $\alpha$ samples. Each parameter histogram contains 100 bins. This large sample size was chosen to illustrate that the samples do come from a distribution. For the typical application a much smaller sample size would likely be adequate. From the distribution $P\left(\alpha|\mathcal{D}\right)$ parameter mean values are found to be \begin{align}\overline{p_{00}}=0.300\pm 0.008& \quad\overline{p_{01}}=0.059 \pm 0.007 \quad \overline{p_{10}}=0.191\pm0.007&&\quad\overline{p_{11}}=0.450\pm0.009\nonumber\\ \overline{a_{0}}=0.303\pm0.011&\quad \overline{a_{1}}=0.716\pm0.014 \quad \overline{b_{0}}=0.918\pm0.018&&\quad \overline{b_{1}}=0.508\pm0.011\nonumber\end{align} and mean photon number $\overline{N}=9926\pm 50.9$. Comparison with the above true values shows qualitative agreement. \subsection{Parametrizing the n-dimensional density matrix}\label{C} For approachability, we obscured the construction of the density matrix for the ideal single qubit given in Eq. \ref{ideal_rho}. We briefly describe this construction here, for a full proof with discussion see \cite{daboul1967conditions}. We note that this construction is similar to that recently proposed by Seah et al. \cite{MCsamplingQStates_II} whose density matrix sampling application is similar to our approach. Daboul's parametrization can be extended to quantum systems of any dimension. For an $n$-dimensional Hilbert space, the density matrix is formed using the Cholesky decomposition, requiring $\rho=L(\!\tau\!)L(\!\tau\!)^\dagger$ with \begin{equation}L(\!\tau\!)\! =\!\!\left(\!\begin{array}{ccccc} L_{11} (\!\tau\!)\!\!& 0& 0 & \cdots& 0 \\ L_{21}(\!\tau\!)\! \! & L_{22}(\!\tau\!)\!\! & 0 & \cdots& 0 \\ L_{31}(\!\tau\!)\! \! & L_{32}(\!\tau\!)\!\!& L_{33}(\!\tau\!)\!\! & \cdots& 0 \\ \vdots& \vdots& \vdots & \ddots& \vdots \\ L_{n1}(\!\tau\!)\!\! & L_{n2}(\!\tau\!)\!\! & L_{n3}(\!\tau\!)\!\! 
&\cdots& \!\!\! L_{nn}(\!\tau\!) \end{array}\right)\nonumber \end{equation} being a lower triangular matrix with positive real diagonal elements. The parameter set $\tau$ include $n^2$$-$$1$ parameters which describe a unique density matrix. The elements $L_{ij}$ may be written as \begin{align} L_{ij} (\!\tau\!)&=U_i V_{ij}\phantom{0}\quad\quad (j\leq i)\nonumber \\ L_{ij} (\!\tau\!)&=0\phantom{U_i V_{ij}}\quad\quad (j> i) \nonumber \end{align} where \begin{align}U_{1}&=\cos\left(u_1\right) \hspace{0.3\linewidth} V_{ii}=1 \nonumber \\ U_{k}&=\cos\left(u_k\right)\prod_{j=1}^{k-1}\sin\left(u_j\right)\quad \!\!\textrm{\scriptsize $(1<k<n)$} \;\hspace{0.05\linewidth} V_{i1}=\cos\left(\theta_{i1}\right)e^{i\phi_{i1}}\quad (i>1)\nonumber \\ U_{n}&=\prod_{j=1}^{n-1}\sin\left(u_j\right) \hspace{0.26\linewidth} V_{ik}=\cos\left(\theta_{ik}\right)e^{i\phi_{ik}}\prod_{j=1}^{k-1}\sin\left(\theta_{ij}\right)\quad \!\!\textrm{\scriptsize $(1<k<i)$} \textrm{.}\nonumber \end{align} Consider the case of two qubits with dimension $n$=$4$, the parametrized matrix elements of $L(\!\tau\!)$ are \begin{align}L_{11}(\!\tau\!)\!&=\!\cos(u_1)\nonumber\\ L_{21}(\!\tau\!)\!&=\!\sin(u_1)\cos(u_2)\cos(\theta_{21})e^{i \phi_{21}}\nonumber\\ L_{22}(\!\tau\!)\!&=\!\sin(u_1)\cos(u_2) \sin(\theta_{21})\nonumber\\ L_{31}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\cos(\theta_{31})e^{i \phi_{31}}\nonumber\\ L_{32}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\sin(\theta_{31})\cos(\theta_{32})e^{i \phi_{32}}\nonumber\\ L_{33}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\sin(\theta_{31})\sin(\theta_{32})\nonumber\\ L_{41}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\cos(\theta_{41})e^{i \phi_{41}}\nonumber\\ L_{42}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\cos(\theta_{42})e^{i \phi_{42}}\nonumber\\ L_{43}(\!\tau\!)\!&=\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\sin(\theta_{42})\cos(\theta_{43})e^{i \phi_{43}}\nonumber\\ L_{44}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\sin(\theta_{42})\sin(\theta_{43}) \nonumber \end{align} with $u_{i}\in[0,\frac{\pi}{2}]$, $\theta_{ij}\in[0,\frac{\pi}{2}]$, and $\phi_{ij}\in[0,2\pi]$. Indeed, one could instead change the $u_i$ and $\theta_{ij}$ trigonometric terms to \begin{equation}\cos(u_i)\rightarrow \sqrt{u_i'}\quad\quad \sin(u_i)\rightarrow \sqrt{1-u_i'}\quad\quad \cos(\theta_i)\rightarrow \sqrt{\theta_i'}\quad\quad \sin(\theta_i)\rightarrow \sqrt{1-\theta_i'}\nonumber \end{equation} with $u_i'\in[0,1]$, $\theta_{ij}'\in[0,1]$. The complex terms involving $\phi_{ij}$ remain unchanged. A similar adjustment was used by Chung and Trueman \cite{ChungTrueman}. \subsection{Estimating parameters in a multi-basis two-photon experiment}\label{multi} In section \ref{A} and \ref{C}, respectively, we defined our experimental likelihood for the single-basis experiment and detailed the parametrization of any $n$-dimensional density matrix. To make estimations using data from multi-basis two-photon experiment we will use both of these pieces. Our example will be full-state tomography. Other multiple basis experiments will have similar estimation constructions. In the case that the data set is incomplete, our method will still return an estimate true to both the given data and all quantum constraints. To complete full-state tomography Alice and Bob each take measurements in bases $Z$, $X$, and $Y$ such that all outcomes are observable in each basis combination $ZZ$, $ZX$, $XZ$, $ZY$, $YZ$, $XX$, $XY$, $YX$, and $YY$. 
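Before assembling the multi-basis likelihood, the two-qubit ($n$=$4$) parametrization above can be made concrete with a short sketch (Python with NumPy; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def rho_two_qubit(u, th, ph):
    # rho = L L^dagger from the hyperspherical parameters:
    # u = (u1, u2, u3), th = {(i, j): theta_ij}, ph = {(i, j): phi_ij}
    u1, u2, u3 = u
    s, c = np.sin, np.cos
    e = lambda ij: np.exp(1j * ph[ij])
    L = np.zeros((4, 4), dtype=complex)
    L[0, 0] = c(u1)
    L[1, 0] = s(u1)*c(u2)*c(th[2, 1])*e((2, 1))
    L[1, 1] = s(u1)*c(u2)*s(th[2, 1])
    L[2, 0] = s(u1)*s(u2)*c(u3)*c(th[3, 1])*e((3, 1))
    L[2, 1] = s(u1)*s(u2)*c(u3)*s(th[3, 1])*c(th[3, 2])*e((3, 2))
    L[2, 2] = s(u1)*s(u2)*c(u3)*s(th[3, 1])*s(th[3, 2])
    L[3, 0] = s(u1)*s(u2)*s(u3)*c(th[4, 1])*e((4, 1))
    L[3, 1] = s(u1)*s(u2)*s(u3)*s(th[4, 1])*c(th[4, 2])*e((4, 2))
    L[3, 2] = s(u1)*s(u2)*s(u3)*s(th[4, 1])*s(th[4, 2])*c(th[4, 3])*e((4, 3))
    L[3, 3] = s(u1)*s(u2)*s(u3)*s(th[4, 1])*s(th[4, 2])*s(th[4, 3])
    return L @ L.conj().T

rng = np.random.default_rng(2)
pairs = [(2, 1), (3, 1), (3, 2), (4, 1), (4, 2), (4, 3)]
u = rng.uniform(0, np.pi/2, 3)
th = {ij: rng.uniform(0, np.pi/2) for ij in pairs}
ph = {ij: rng.uniform(0, 2*np.pi) for ij in pairs}
r = rho_two_qubit(u, th, ph)
print(np.trace(r).real, np.linalg.eigvalsh(r).min())  # 1.0 and >= 0
\end{verbatim}
By construction $\textrm{Tr}\left(\rho\right)=1$, $\rho=\rho^\dagger$, and $\rho\geq0$ for any values of the parameters.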
Thus, Alice and Bob's data set will include the data from all 9 basis combinations. The likelihood is a product of the single-basis likelihoods, Eq. \ref{likelihood2}, from each of these basis combinations, \begin{equation}P(\mathcal{D}|\alpha)\!=\!P(\mathcal{D}_{ZZ}|\alpha_{ZZ})P(\mathcal{D}_{ZX}|\alpha_{ZX})\cdots P(\mathcal{D}_{YY}|\alpha_{YY})\label{likelihood_product}\end{equation} where $\alpha$ includes the probabilities of all measurement outcomes and the four experimental pathway efficiencies which we assume are the same over all bases. However, in the experimental section, Section \ref{Sec:Experimental}, we do not make this assumtion. Next, we parametrize our density matrix using the hyperspherical parameters described in the previous section, \begin{equation}P(\mathcal{D}|\alpha)\rightarrow P(\mathcal{D}|\tau)\textrm{.}\end{equation} This parametrization comes with a new measure defined by Eq. \ref{measure}, \ref{dTau}, and \ref{gIJ}. Putting it all together we can make any BME of interest, for instance the mean density matrix \begin{equation}\overline{\rho}\;= \frac{1}{P(\mathcal{D})}\int d\tau P(\mathcal{D}|\tau) P(\tau) \times \rho(\tau) \label{hardintegral}\end{equation} where $P(\mathcal{D})=\int d\tau P(\mathcal{D}|\tau) P(\tau)$. If it is not computationally convenient to evaluate the integrals of the type given in Eq. \ref{hardintegral}, we can utilize numerical sampling. If we can draw samples $\rho^{(r)}$ from the distribution $P(\tau|\mathcal{D})$ we can estimate the BME of our density matrix as \begin{equation}\overline{\rho}\;= \lim_{R\rightarrow\infty}\frac{1}{R} \sum_{i=1}^{R} \rho^{(r)}\label{PDlast}\textrm{.}\end{equation} We address our numerical sampling approach in Appendix \ref{ss}. \subsection{State certainty} When reporting the values of experimental measurements such as the visibility of an interference curve $V$ or the value of the Bell parameter $S$, it is typical to provide a standard deviation to describe the uncertainty in the parameter, e.g. $V=0.98\pm 0.01$ or $S=2.65\pm 0.05$. This gives a quantification of the uncertainty in the estimate. When the BME is multi-dimensional the uncertainty can be represented by a covariance matrix \cite{blume2010optimal,Granade2016} \begin{equation}\Delta \rho(\tau)\!=\!\left(\!\begin{array}{cccc} \Delta \tau_0^2 & \Delta \tau_0 \tau_1 & \cdots &\Delta \tau_0 \tau_k\\ \Delta \tau_1 \tau_0 & \Delta \tau_1^2 & \cdots &\Delta \tau_1 \tau_k\\ \vdots & \vdots & \ddots & \vdots\\ \Delta \tau_2 \tau_0 & \Delta \tau_2 \tau_1 & \cdots &\Delta \tau_k^2\\ \end{array}\!\right)\end{equation} with each element being a covariance, \begin{equation}\Delta \tau_i \tau_j= \overline{\tau_i \tau_j}-(\overline{\tau_i})(\overline{\tau_j})\end{equation} where $\overline{\tau_i}$ is the expectation, mean value, of $\tau_i$. For $i=j$ this is just the usual variance. Here the $k$ parameters include the $n^2-1$ parameters needed to define the density matrix as well as any additional experimental parameters such as the efficiencies. We can also define a single quantity that captures a compact representation of the certainty in the estimation. 
We use the \emph{trace distance deviation} $\Delta D$, which is the mean trace distance over the distribution with the distribution's mean density matrix $\overline{\rho}$, \begin{equation}\Delta D=\int d\tau D\textrm{\large$($}\;\overline{\rho},\rho(\tau)\textrm{\large$)$}\textrm{.}\label{dd}\end{equation} The trace distance is \begin{equation}D(\rho,\sigma)=\textrm{Tr}\left(\sqrt{\left(\rho-\sigma\right)^2}\right)=\frac{1}{2}\sum_{i}\left|\lambda_i\right|\label{traceD}\end{equation} where the $\lambda_i$ are the eigenvalues of $\rho-\sigma$. We approximate $\Delta D$ with numerical sampling using the formula \begin{equation}\Delta D\approx\frac{1}{R}\sum_{r=1}^R D\textrm{\large$($}\;\overline{\rho},\rho^{(r)}\textrm{\large$)$}\textrm{.}\end{equation} When the certainty is high all samples will be close to the mean value giving $\Delta D\rightarrow 0$ which can be compared to the typical standard deviation where smaller is better. This is also useful when there is no particular state one wishes to compare the estimations with such as is typically done when reporting the fidelity. \section{BME performance with numerical sampling}\label{performance} To characterize the performance of the presented estimation methods we used the following procedure. For each $N\in\{10,10^2,10^3,10^4,10^5\}$ the following steps are repeated: \begin{enumerate}[1.] \item A density matrix $\rho$ is sampled from a uniform distribution using a Haar measure in the hypershperical parameter space. \item A random set of pathway efficiencies $a_0, a_1, b_0,$ and $b_1$ are chosen from the range $\left[0,1\right]$. These are chosen to be the same across all bases. \item We simulate a two-photon experiment for $N$ identical states $\rho$ in each of the 9 bases given in Section \ref{multi}, $9N$ total identical states to generate a data set $\mathcal{D}$. Each simulated experiment for a single basis is the same as described by the Bayesian tree in Section \ref{A}. \item Using $\mathcal{D}$ we find using a traditional likelihood and the actual randomly chosen pathway efficiencies as described in Appendix~\ref{mleAppendix}. Also with $\mathcal{D}$, we find the BME and MLE using the experiment-specific likelihood in which the pathway efficiencies are not known as described in Section \ref{multi}. Thus, the traditional MLE has the unfair and unrealistic advantage of knowing the pathway efficiencies exactly. \item The distance $D$, Eq. \ref{traceD}, is found between each estimate and the true state $\rho$. \item If the experiment-specific BME or MLE is closer to the true state than the traditional MLE, that estimation type has a win tallied. \item Steps 1.-6. are repeated for $1000$ repetitions. \item The average distance $\overline{D}$ over all $1000$ repetitions is found for the traditional MLE approach and the experiment-specific BME and MLE. The total wins versus the traditional MLE are also recorded. \end{enumerate} For these simulations the average distance $\overline{D}$ results are given at left in Fig.~\ref{results}, and the win percentages for the experiment-specific likelihood MLE and BME versus the traditional MLE are given at right in Fig.~\ref{results}. The MLE was found using an gradient ascent method described in Appendix~\ref{searchAppendix}. We emphasize that these results are conservative, since we give the traditional MLE process the pathway efficiencies exactly--these would not be known exactly in an experiment. 
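The trace distance of Eq.~(\ref{traceD}) and the trace distance deviation of Eq.~(\ref{dd}) used in step 5 are simple to compute from posterior samples; a minimal helper (Python with NumPy, assuming the samples are stored as an array of density matrices) might look like:
\begin{verbatim}
import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) * sum of |eigenvalues of rho - sigma|
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def trace_distance_deviation(rho_samples):
    # Delta D: mean trace distance between the sample-mean state and the samples
    rho_bar = np.mean(rho_samples, axis=0)
    return np.mean([trace_distance(rho_bar, r) for r in rho_samples])
\end{verbatim}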
\begin{figure} \caption{We generated data from simulating two-photon experiments for various states and photon pair number $N$ as outlined in this section. Top. We have plotted the average distance estimate for photon pair number $N$ over 1000 randomly sampled states. Bottom. We give the win percentage for the experiment-specific BME and MLE versus the traditional maximum likelihood method. The best performance is achieved using the experiment-specific Bayesian mean estimate. Another observation from this data is that an experimentalist can achieve a better estimate by switching to an experiment-specific likelihood which allows them to forgo any preliminary experiments to determine normalizing constants.\label{results} \label{results} \end{figure} The states $\rho$ were drawn from a uniform distribution which is Haar invariant when using the measure obtained from Eq. \ref{measure}, \ref{dTau}, and \ref{gIJ}. We also use the same distribution as a prior to compute our BME estimate. We also made estimates using a non-Haar invariant measure to evaluate the significance of prior selection. This results in a drastically different prior relative to that used in generating the random state. In Fig. \ref{results} the experiment-specific BME with the original advantageous prior and the ``bad" prior are both plotted. As can be seen, there is, possibly, a small gain using the advantageous prior for the smaller photon pair number estimations. But, it also highlights that the prior choice can be made effectively inconsequential given enough data \cite{jaynes2003probability}. The prior certainly can improve the estimate when little data is available. Granade and colleagues discuss this in depth \cite{Granade2016}. \section{Experimental tomography}\label{Sec:Experimental} We performed state tomography on a two-photon polarization entangled target state \begin{equation}\left|\Psi^+\right\rangle=\frac{1}{\sqrt{2}}\left(\left|H_A\right\rangle\otimes\left|V_B\right\rangle+\left|V_A\right\rangle\otimes\left|H_B\right\rangle\right)\end{equation} generated by pumping a periodically poled potassium titanyl phosphate (PPKTP) nonlinear crystal inside a Sagnac loop with two counterpropagating 405nm pump beams \cite{SagnacSource,tamperSeal}. The two possibilities of Type II \footnote{The signal and idler photons are produced with orthogonal polarizations in Type II SPDC.} spontaneous parametric downconverison (SPDC), either the clockwise or counter-clockwise beam generated a 810nm photon pair, leads to a polarization entangled state output into the idler and signal modes received by Alice and Bob, respectively. Alice and Bob each choose a basis by inclusion or omission of waveplates. Since this requires a physical adjustment to our apparatus for each basis choice, we assume in our likelihood that the efficiencies are independent parameters in each basis. The half-wave and quarter-wave plate matrix operations are, respectively, \begin{equation}\textrm{H}=\left( \begin{array}{cc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{array}\right) \quad \textrm{Q}=\left( \begin{array}{cc} 1 & 0 \\ 0 & i \end{array}\right)\textrm{.}\nonumber \end{equation} To measure in basis $Z$ Alice omits her waveplates. To measure in $X$ she includes the half-wave plate, she operates on her single-photon with $H$. Finally, to measure in the $Y$ basis she includes both waveplates, operating with $Q$ then $H$. 
Single-photon detectors record the detection mode, orthogonal outcomes 0 or 1 in each basis. \begin{figure} \caption{Our two-photon polarization entangled state is generated by pumping a nonlinear PPKTP crystal inside a Sagnac loop with two counterpropagating pump beams each of which may generate Type II SPDC pairs. This leads to a polarization entangled state shared by Alice and Bob. Alice and Bob each choose a basis by inclusion or ommission of waveplates. Single-photon detectors record the detection mode, 0 or 1. ds$\equiv$dichroic splitter, pbs$\equiv$polarizing beamsplitter, pf$\equiv$pump filter, hwp$\equiv$ half-wave plate, qwp$\equiv$ quarter-wave plate\label{experiment} \label{experiment} \end{figure} From the experimental data given in Table 1, our mean density matrix is found to be \begin{equation}\overline{\rho}\!=\!\! \left(\!\begin{array}{cccc} 0.01 & 0.03\textrm{+} i0.00 & 0.03\textrm{+} i0.00 & \textrm{-} 0.00\textrm{-} i0.01 \\ 0.02\textrm{-} i0.00 & 0.48 & 0.48\textrm{-} i0.02 &\textrm{-} 0.01\textrm{-} i0.04 \\ 0.03\textrm{-} i0.00 & 0.48\textrm{+} i0.02 & 0.49 & \textrm{-} 0.01\textrm{-} i0.05 \\ \textrm{-}0.00\textrm{+} i0.01 & \textrm{-}0.01\textrm{+} i0.04 & \textrm{-} 0.01\textrm{+} i0.05 & 0.02 \end{array}\!\right)\nonumber\end{equation} with trace distance deviation $\Delta D=0.006$ defined in Eq. \ref{dd}. We have reported only 2 significant digits in $\;\overline{\rho}\;$ for brevity. Every element has a finite value, i.e. every outcome has a non-zero probability of occurrence. The fidelity of our mean $\overline{\rho}$ with the intended state $\Psi^+$ is \begin{equation}\mathcal{F}=\sqrt{\left\langle \Psi^+ \right|\;\overline{\rho}\;\left|\Psi^+\right\rangle }=0.9838\pm0.0005 \textrm{.}\nonumber\end{equation} We have not removed accidental coincidences from our estimation, we have assumed this contribution is negligible. \begin{table}[t] \centering \small \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Basis & $A_0$ & $A_1$ & $B_0$ & $B_1$ & $c_{00}$ & $c_{01}$ & $c_{10}$ & $c_{11}$ \\ \hline $ZZ$ &47718&50367&45793&44942&189&7302&7903&250\\\hline $ZX$ &47117&50726&45467&45831&2735&3826&4075&5061\\\hline $XZ$ &45985&51051&45509&44441&4077&3643&3806&4317\\\hline $ZY$ &47775&51018&46149&45415&2579&4382&4650&4545\\\hline $YZ$ &44564&49626&45739&44157&3382&4155&4414&3505\\\hline $XX$ &46547&50920&45186&45658&6801&104&148&9083\\\hline $XY$ &45630&50932&44970&44155&3131&3770&3309&4638\\\hline $YX$ &44553&49430&45364&45428&3775&3318&2909&4650\\\hline $YY$ &44499&49666&45718&45152&6586&61&177&8915\\\hline $\textrm{Dark}$ &418&460&406&440&0&0&0&0\\ \hline \end{tabular} \normalsize \caption{Experimental tomography data for our two-photon experiment. Counts were 1 second in length. A final dark count was taken with the photon source blocked.} \end{table} \section{Conclusions} We have presented a novel method of Bayesian mean estimation using hyperspherical parametrization and an experiment-specific likelihood. This method has allowed us to derive a closed-form BME for the ideal single-qubit and to develop a numerical approach to approximating the BME for a two-qubit experiment using numerical slice sampling. Our approach offers the real world benefit of eliminating the need for preliminary experiments in common two-photon experiments by accounting for qubit loss within the likelihood. Our method is also scalable beyond two-qubit systems. Finally, we illustrated our approach by applying it to the measurement data obtained from a real-world two-photon entangled state. 
\begin{appendices} \section{Maximum likelihood estimation with a traditional likelihood}\label{mleAppendix} Traditional MLE relies on a simple multinomial likelihood that relates the probability of an observation to its quantum probability using Born's rule, \begin{equation}P(\mathcal{D}|\alpha)=\prod_{i}\textrm{Tr}\left(E_i\cdot\rho\right)^{c_i}\end{equation} where the $E_i$ are the observables of interest, for instance the Pauli operators $\sigma_z$, $\sigma_x$, $\sigma_y$ or Kronecker products of them, such as $\sigma_z \otimes \sigma_x$, as dimensionality demands. For instance, the likelihood for a two-photon state with measurement results from a single basis is \begin{equation}P(\mathcal{D}|\alpha)=p_{00}^{c_{00}}p_{01}^{c_{01}}p_{10}^{c_{10}}\left(1\!-\!p_{00}\!-\!p_{01}\!-\!p_{10}\right)^{c_{11}}\end{equation} with $\mathcal{D}=\{c_{00},c_{01},c_{10},c_{11}\}$ and $\alpha=\{p_{00}, p_{01}, p_{10}\}$. The problem with this simple view is that only in an ideal, or possibly perfectly symmetric, experiment does the data set $\mathcal{D}$ truly result from the quantum probabilities alone. Instead, the measurement apparatus adds its own bias and inefficiency to the result. Thus, the data set may need to be corrected using experimental constants determined from initial experiments. In this paper we focus on the pathway efficiencies $a_0$, $a_1$, $b_0$ and $b_1$ present in the two-photon apparatus. Given that we know these exactly for a single basis, we can correct the data set in the following manner. Identify the smallest efficiency for both Alice and Bob \begin{equation}a_m=\textrm{min}(a_0,a_1) \quad\quad b_m=\textrm{min}(b_0,b_1)\end{equation} and use these to correct the counts to \begin{equation}k_{ij}=\frac{a_m b_m}{a_i b_j}c_{ij}\textrm.\end{equation} If $a_0$=$a_1$ and $b_0$=$b_1$, it should be apparent that no correction is needed. With the corrected data set $\mathcal{D'}$=$\{k_{00},k_{01},k_{10},k_{11}\}$, maximization of $P(\mathcal{D}'|\alpha)$ can proceed. \section{Numerically sampling density matrices}\label{ss} In this Appendix we describe our procedure for sampling density matrices from the true distribution. While other methods of numerical sampling could be used, we have utilized slice sampling \cite{sliceSampling,mackay2003information}. We give a brief description of slice sampling from a single-parameter distribution before describing its extension to density matrix sampling. \subsection{Slice sampling} \begin{figure} \caption{If one could fill the space under the curve of an unknown and unnormalized distribution $P^*(x)$ uniformly, each of these points would represent a sample from the true distribution $P(x)$ as the number of points goes to infinity.\label{fill_curve}} \end{figure} Slice sampling is an approach to sampling parameter values $x$ from the distribution $P(x)$ given that only the unnormalized distribution $P^*(x)$ is known \cite{sliceSampling, mackay2003information}. This is useful when evaluating $P^*(x)$ everywhere is resource prohibitive, since the normalization $Z$=$\int dx P^*(x)$ is unknown. In this case, consider Fig. \ref{fill_curve}a) where the dashed curve represents the likelihood $P^*(x)$, which we do not know everywhere but can evaluate anywhere. Now consider that we can uniformly fill the space under this curve with points as depicted in Fig. \ref{fill_curve}b). If we forget the ``y'' coordinate and randomly select one of the $R$ points, we would tend to be sampling from the true distribution as $R \rightarrow \infty$.
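The fill-the-area picture can be made concrete with a naive accept/reject scheme: scatter points uniformly in a box that contains the curve, keep the points that fall under it, and discard their vertical coordinates. The sketch below is illustrative only and is not the method used in this work; it assumes the numpy library, a bounded parameter range, and a known upper bound on $P^*(x)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def p_star(x):
    # Any evaluable, unnormalized density; a Gaussian bump is used here
    # purely for illustration.
    return np.exp(-0.5 * (x - 1.0) ** 2)

def fill_under_curve(n_points, x_lo=-5.0, x_hi=5.0, y_max=1.0):
    """Scatter points uniformly in [x_lo, x_hi] x [0, y_max] and keep
    those lying under the curve.  Forgetting the y coordinate of the
    kept points leaves approximate samples from the normalized P(x)."""
    xs = rng.uniform(x_lo, x_hi, n_points)
    ys = rng.uniform(0.0, y_max, n_points)
    return xs[ys < p_star(xs)]

samples = fill_under_curve(100000)
print(samples.mean(), samples.std())   # close to 1.0 and 1.0
\end{verbatim}
The drawback of such a scheme is that it needs a bounded range and a global bound on $P^*(x)$, neither of which is required by slice sampling.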
Slice sampling is one method by which to uniformly fill the space under $P^*(x)$ with points. There are different slice sampling methods, but we will use the one introduced by Neal \cite{sliceSampling,mackay2003information}. Throughout, we only need to evaluate the unnormalized distribution $P^*(x)$ in our sampling algorithm. Referring to Fig. \ref{procedure}, \begin{figure} \caption{This sequence depicts one loop of the slice sampling algorithm.\label{procedure}} \end{figure} \begin{itemize} \item[a)] Start at point $x_0$. We assume, for now, that this is a random point. \item[b)] At this location a vertical coordinate $y=q\cdot P^*(x_0)$, $q\in\left[0,1\right]$, is chosen uniformly. \item[c)] From $x_0$ we ``step out'' to the left and to the right in steps of $w$ until the left point $l$ and the right point $r$ are both outside the curve, $P^*(l)<y$ and $P^*(r)<y$. \item[d)] We uniformly sample a new point $h$ from the interval $x\in\left[l,r\right]$. If $h$ lies inside the curve, $P^*(h)\geq y$, we accept this as our new point. If $h$ lies outside the curve, $P^*(h)<y$, we reject it and set the new leftmost or rightmost point equal to $h$; whether $h$ is greater or less than $x_0$ determines which endpoint is replaced. We do this for our expected unimodal distribution to increase the chance of acceptance of the next sample by eliminating known rejection regions. In the depicted sequence the first point is rejected and the interval is shrunk. \item[e)] We select and accept a new point $h=x_1$ from the shrunken interval, since $P^*(x_1)\geq y$. \item[f)] We start over at step a) from point $x_1$. \end{itemize} In practice, we use $\ln P^*(x)$ to compare points in the distribution, which is more numerically convenient. The first point $x_0$ in step a) has not come from the true distribution. To resolve this, samples can be taken and forgotten until samples are being drawn from $P(x)$ with no influence of the starting point $x_0$ present. To evaluate when this point has been reached we utilize multiple independent samplers. Our stopping criterion should ensure that all samplers have converged, and also that enough samples have been taken to sufficiently approximate the distribution $P(x)$. Using multiple samplers has compound utility in that convergence can be assessed and more samples can be taken in less time when samplers are run in parallel. Our sampling procedure is to perform a ``burn-in'' in which each sampler takes $k_0$ samples, and the mean $\overline{x}$ and the mean standard deviation $\sigma$ are calculated for each sampler. When the mean of the $i$th sampler $\overline{x}_i$ is separated from each of the other sampler means $\overline{x}_j$ by less than $m\sigma_j$ (a smaller $m$ requires closer convergence), we stop the burn-in. If this criterion is not met after $k_i$ samples, we forget all but the last sample point, double the number of samples $k_{i+1}=2k_i$, and start over. Once the criterion is met, we repeat the last sampling request for each sampler. The data from this final request is combined as the final set of samples. These samples tend to be from the true distribution $P(x)$ as the convergence parameter $m\rightarrow 0$. The estimations we have made in this article have been from distributions assumed to be unimodal. While this is certainly true for the simplest distributions given, it is not conclusive for the more complex distributions we have derived. However, our assumption is corroborated by simulations in which multiple samplers are given the same data set and started in random independent locations, and always converge. A minimal sketch of a single slice-sampling update is given below.
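The following sketch implements one stepping-out/shrinkage update of the kind described in steps a)--f), for a single scalar parameter and working with $\ln P^*(x)$. It is illustrative only and is not the code used in this work; the function names, the default step width, and the optional hard bounds are assumptions.
\begin{verbatim}
import math
import random

def slice_sample_step(log_p_star, x0, w=0.5,
                      lo=-math.inf, hi=math.inf, max_steps=50):
    """One slice-sampling update starting from x0, given log P*(x)."""
    # b) choose the vertical level under the curve (in log space).
    log_y = log_p_star(x0) + math.log(random.random())
    # c) step out until both interval ends are below the level.
    left = x0 - w * random.random()
    right = left + w
    n = 0
    while left > lo and log_p_star(left) > log_y and n < max_steps:
        left -= w; n += 1
    n = 0
    while right < hi and log_p_star(right) > log_y and n < max_steps:
        right += w; n += 1
    # d)-e) draw from the interval, shrinking it on each rejection.
    while True:
        x1 = random.uniform(left, right)
        if log_p_star(x1) >= log_y:
            return x1              # accepted point
        if x1 < x0:
            left = x1              # shrink from the left
        else:
            right = x1             # shrink from the right

# Example: a chain targeting an unnormalized Gaussian.
log_p = lambda x: -0.5 * x * x
chain = [0.0]
for _ in range(5000):
    chain.append(slice_sample_step(log_p, chain[-1]))
\end{verbatim}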
However, we have not addressed how to approach known multi-modal distributions. In this case, more advanced methods such as sequential Monte Carlo (SMC) \cite{del2006sequential} must be applied to fully characterize the posterior distribution. \subsection{Density matrix sampling} The goal of our slice sampling application is to sample density matrices from the true but unknown PD $P(\tau|\mathcal{D})=\frac{1}{Z}P(\mathcal{D}|\tau)P(\tau)$ used in Eq. \ref{PDlast}. To do this we slice sample a parameter $x$ in $\tau$ from the conditional unnormalized distribution $P^*\left(x|\tau'\right)$ by keeping all other parameters $\tau'$ fixed, $x\notin \tau'$. The sampling procedure is performed sequentially for the $k$, $k\geq n^2-1$, parameters, keeping each new parameter sample point for the following parameters' conditional likelihoods. Once each parameter has been sampled, the new points constitute one sample $\rho^{(r)}$. Beginning with sample point $\rho(\tau^i)$, $\tau^i=\{\tau^i_{1},\tau^i_{2},\cdots,\tau^i_{k}\}$, \begin{itemize} \item[1)] sample $\tau^{i+1}_1$ from $P^*\!\left(\tau_1|\tau^i_{2},\cdots,\tau^i_{k}\right)$ \item[2)] sample $\tau^{i+1}_2$ from $P^*\!\left(\tau_2|\tau^{i+1}_{1},\cdots,\tau^i_{k}\right)$ \item[j)] sample $\tau^{i+1}_j$ from $P^*\!\left(\tau_j|\tau^{i+1}_{1},\cdots,\tau^{i+1}_{j-1},\tau^{i}_{j+1},\cdots,\tau^i_{k}\right)$ \item[k)] sample $\tau^{i+1}_k$ from $P^*\!\left(\tau_k|\tau^{i+1}_{1},\tau^{i+1}_{2},\cdots,\tau^{i+1}_{k-1}\right)\textrm{.}$ \end{itemize} The $(i+1)^{\textrm{th}}$ sample is $\rho^{(i+1)}=\rho(\tau^{i+1})$. Each parameter may have a unique range, so slice sampling for each must account for the minimum and maximum parameter values $x\in[\textrm{min},\textrm{max}]$. Additionally, some parameters can be cyclic; for instance, an angle $\phi\in[0,2\pi]$. Extra consideration must be taken for these parameters since their distribution can be centered on a boundary, which hard bounds would not allow. For these parameters we gave no bounds during the slice sampling algorithm. \section{Maximization using gradient ascent}\label{searchAppendix} \begin{figure} \caption{This sequence depicts a few iterations of the gradient ascent method. The bottom axis represents an idealized axis along the direction of steepest ascent within the multi-dimensional space.\label{search}} \end{figure} To locate a local maximum of the likelihood we use a gradient ascent method as shown in Fig. \ref{search}. This method auto-tunes the step size to make both large and small steps when appropriate. \begin{itemize} \item[a)] Starting from the current multi-dimensional point $\textbf{x}$ the gradient $\nabla P^*(\textbf{x})$ is determined using the finite difference method. The direction of ascent $\textbf{d}$ is found by normalizing the gradient with the $\ell^2$ norm \begin{align}\textbf{d}&=\frac{\nabla P^*(\textbf{x})}{|\nabla P^*(\textbf{x})|}\nonumber\\ |\nabla P^*(\textbf{x})|&=\sqrt{\sum_i \left(\frac{\partial P^*(\textbf{x})}{\partial x_i}\right)^2}\nonumber\end{align} \item[b)] A step of size $w$ is made in the direction of ascent, $\textbf{x}\rightarrow \textbf{x}+w\cdot\textbf{d}$. This point is accepted since $P^*(\textbf{x}+w\cdot\textbf{d}) \geq P^*(\textbf{x})$. The step size is doubled. \item[c)] The direction of ascent $\textbf{d}$ is found again. A step of $2w$ is made in this direction, $\textbf{x}\rightarrow \textbf{x}+2w\cdot\textbf{d}$. This point is accepted since $P^*(\textbf{x}+2w\cdot\textbf{d}) \geq P^*(\textbf{x})$.
The step size is doubled. \item[d)] The direction of ascent $\textbf{d}$ is found again. A step of $4w$ is made in this direction, $\textbf{x}\rightarrow \textbf{x}+4w\cdot\textbf{d}$. This point is not accepted since $P^*(\textbf{x}+4w\cdot\textbf{d}) < P^*(\textbf{x})$. The step size is halved. \item[e)] A step of $2w$ is made in the same direction, $\textbf{x}\rightarrow \textbf{x}+2w\cdot\textbf{d}$. This point is not accepted since $P^*(\textbf{x}+2w\cdot\textbf{d}) < P^*(\textbf{x})$. The step size is halved. \item[f)] A step of $w$ is made in the same direction, $\textbf{x}\rightarrow \textbf{x}+w\cdot\textbf{d}$. This point is accepted since $P^*(\textbf{x}+w\cdot\textbf{d}) \geq P^*(\textbf{x})$. The step size is doubled. \end{itemize} In this way, the local maximum is ultimately approached; a minimal sketch of this adaptive-step search is given below.
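The following sketch of the adaptive-step ascent is illustrative only and is not the code used in this work; it assumes the numpy library, and it recomputes the ascent direction at every iteration, which is a slight simplification of the step sequence depicted above.
\begin{verbatim}
import numpy as np

def grad_fd(f, x, eps=1e-6):
    """Finite-difference estimate of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        g[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return g

def gradient_ascent(f, x0, w=0.1, w_min=1e-8, max_iter=10000):
    """Adaptive-step ascent: double the step after an accepted move,
    halve it after a rejected one."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_fd(f, x)
        norm = np.linalg.norm(g)
        if norm == 0.0 or w < w_min:
            break
        d = g / norm                  # unit direction of steepest ascent
        if f(x + w * d) >= f(x):
            x = x + w * d             # accept the step and double it
            w *= 2.0
        else:
            w *= 0.5                  # reject the step and halve it
    return x

# Example: maximizing a simple concave objective.
x_hat = gradient_ascent(lambda v: -np.sum((v - 1.0) ** 2), np.zeros(3))
print(x_hat)   # close to [1, 1, 1]
\end{verbatim}
\end{appendices} \end{document}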
\begin{document} \title{Bell's inequalities in the tomographic representation} \author{C. Lupo$^1$, V. I. Man'ko$^2$, G. Marmo$^1$} \address{$^1$ Dipartimento di Scienze Fisiche, Universit\'a "Federico II" e sezione INFN di Napoli, Complesso Universitario di Monte Sant'Angelo, via Cintia, 80126 Napoli, Italy} \address{$^2$ P. N. Lebedev Physical Institute, Leninskii Prospect 53, Moscow 119991, Russia} \ead{\mailto{[email protected]}, \mailto{[email protected]}, \mailto{[email protected]}} \begin{abstract} The tomographic approach to quantum mechanics is revisited as a direct tool to investigate the violation of Bell-like inequalities. Since quantum tomograms are well defined probability distributions, the tomographic approach is emphasized to be the most natural one for comparing the predictions of classical and quantum theory. Examples of inequalities for two qubits and two qutrits are considered in the tomographic probability representation of spin states. \end{abstract} \pacs{03.65.Ud, 03.67.-a} \section{Introduction} Bell's inequalities were originally formulated \cite{Bell} in order to provide a mathematical characterization of classical local hidden variables theories. In their original formulation, Bell's inequalities are propositions concerning expectation values of dichotomic observables (such as spin$-1/2$ polarization), when two spatially separated systems and local measurements are considered, in the presence of perfect (anti-)correlations between the relevant observables of the two systems (such as two spin$-1/2$ particles in a singlet state). The experimental violation of these inequalities is evidence against classical local variables models. Later on, other inequalities were proposed that generalize Bell's idea to the case of non-perfectly (anti-)correlated spin$-1/2$ systems \cite{CHSH,CH}, to spins of higher value \cite{Mermin}, and to probabilities of measurement outcomes instead of measurement expectation values \cite{Wigner}. It is a remarkable fact that not all states of a (say) bipartite quantum system violate some Bell-like inequality: only states that are \emph{entangled} can be truly non-local and escape description by means of a \emph{classical} local variables model. With the development of the theory of quantum information, and in view of the special role played by entangled states in quantum information protocols, the violation of a Bell-like inequality has also assumed an operational role as a witness of entanglement. The power of Bell-like inequalities is that they refer only to observable quantities, such as expectation values, correlations and probabilities, without an explicit link to the underlying theory. If a Bell-like inequality is a proposition that is true for a classical theory, it is nevertheless a well defined proposition (not necessarily true) in the framework of quantum theory. Hence the very idea of Bell's inequalities leads one to consider a unified description of both classical and quantum mechanics based on fundamental quantities such as probability distributions. The conventional description of pure quantum states is by means of wave functions \cite{Sch} or state vectors in Hilbert space \cite{Dirac}. For mixed states, the density matrix \cite{Landau27,vonN} is used to describe quantum states.
The problem of measuring the quantum states was considered as the problem of finding the Wigner function \cite{Wigner32}, by means of which the optical tomograms of the states \cite{Bertrand-Bertrand87,Vogel-Risken89}, i.e. the probability distribution densities of the homodyne photon quadratures, can be determined. In \cite{mancini.manko.tombesi} the use of symplectic tomograms as a tool for state reconstruction was extended in order to describe the quantum state by a probability distribution from the very beginning. This approach is called the ``tomographic probability representation of quantum states''. For spin degrees of freedom the probability representation was found in \cite{discrete.variables,OlgaJEPT} for one qudit and in \cite{Manko_2} for two qudits. In the framework of the tomographic representation, the spin state is identified with the probability distribution of the spin projections onto directions labeled by angular coordinates on the Bloch spheres, for an arbitrary number of qudits. The tomographic map from state vectors or density matrices onto fair probability distributions contains complete information on the quantum states. Its mathematical structure was recently found in \cite{ventriglia}. The relation of the tomographic probability representation with the star-product quantization procedure was established in \cite{kernel}. The aim of this work is to find new explicit formulas for spin tomograms of two qubits and two qutrits and to analyze, by means of these formulas, some Bell-like inequalities. The paper is organized as follows. In section \ref{tomo.sep} we review the separability problem using the tomographic probability description of spin states. In section \ref{qubits} we derive the formulas for spin tomograms of two qubits and study the CHSH inequalities \cite{CHSH}. In section \ref{qutrits} we obtain the probability representation for multiqutrit states. In section \ref{conclusions} we present the conclusions. \section{Tomograms and separability}\label{tomo.sep} A tomographic description of a quantum system can be formulated for systems with both discrete and continuous variables \cite{ventriglia}. Here we are interested in the case of discrete-variable systems, which we describe in the framework of spin tomography. For qudit states with spin $j$ the tomographic probability distribution is defined by the diagonal elements of the density operator \begin{equation} \rho_U = U^\dag \rho U \end{equation} in a standard basis $\{ |m\rangle \}_{m=-j,\dots j}$, where $U$ is an operator of the unitary irreducible representation of the $\mathrm{SU}(2)$ group. The tomogram of the qudit state reads \begin{equation} \omega(m,\stackrel{\rightarrow}{n}) = \langle m | \rho_U | m \rangle = \langle m | U^\dag \rho U | m \rangle\;. \end{equation} Here $\stackrel{\rightarrow}{n} = (\sin{\theta}\cos{\phi},\sin{\theta}\sin{\phi},\cos{\theta})$ is a unit vector determining a point on the Bloch sphere. The tomogram is, by construction, the probability distribution of the spin projection $m$ onto the direction $\stackrel{\rightarrow}{n}$. The probability distribution determines the density matrix $\rho$. The formula connecting the tomogram $\omega(m,\stackrel{\rightarrow}{n})$ with the density matrix $\rho$ was obtained in \cite{OlgaJEPT}.
For example, the tomographic probability of the qubit state \begin{eqnarray} \rho = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right] \end{eqnarray} reads as follows \begin{eqnarray} \omega(1/2,\stackrel{\rightarrow}{n}) & = & \cos^2{\theta/2}\;, \\ \omega(-1/2,\stackrel{\rightarrow}{n}) & = & \sin^2{\theta/2}\;. \end{eqnarray} Here we used the matrix $U$ rotating the spinor, in the form \begin{eqnarray} U = \left[ \begin{array}{cc} \cos{\theta/2}e^{i\frac{\phi+\psi}{2}} & \sin{\theta/2}e^{i\frac{\phi-\psi}{2}} \\ -\sin{\theta/2}e^{-i\frac{\phi-\psi}{2}} & \cos{\theta/2}e^{-i\frac{\phi+\psi}{2}} \end{array}\right]\;, \end{eqnarray} where $\phi,\theta,\psi$ are the Euler angles. For two qudits the tomogram is defined as follows: \begin{equation} \omega(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \langle m_1 m_2 | \mathcal{U}^\dag \rho \mathcal{U} | m_1 m_2 \rangle\;, \end{equation} where $\rho$ is a density matrix of two qudits, $\mathcal{U}=U_1 \otimes U_2$, and the matrices $U_1$ and $U_2$ are matrices of the irreducible representations of the group $\mathrm{SU}(2)$ corresponding to the first and second qudit, respectively. The spin projections $m_1$ and $m_2$ onto the directions $\stackrel{\rightarrow}{n_1}$ and $\stackrel{\rightarrow}{n_2}$ are the random variables of the tomogram, which is the joint probability distribution function of the two spin projections. Below we discuss in more detail the generic qudit tomograms. Let us consider an operator $A^{(j)}$ acting on the space of a spin$-j$ irreducible representation of $\mathrm{SU}(2)$. Given a standard basis $\{ |jm\rangle \}$ with $m=-j,-j+1,...j-1,j$, the matrix elements of the operator \begin{equation} A_{m,m'}^{(j)} = \langle m | A^{(j)} | m' \rangle \end{equation} of course completely determine the operator \begin{equation} A^{(j)} = \sum A^{(j)}_{m,m'} | m \rangle\langle m' |\;. \end{equation} We consider the diagonal elements in a rotated frame \begin{equation} \label{tomog} \omega_A(m,\Omega) = \langle m | R^\dag(\Omega) A^{(j)} R(\Omega) | m \rangle = \tr\left[ A^{(j)} R(\Omega) |m \rangle\langle m| R^\dag(\Omega) \right]\;, \end{equation} where $R(\Omega)$ is a unitary spin$-j$ representation of $\mathrm{SU}(2)$ and $\Omega$ is a shorthand notation for the three Euler angles $\alpha$, $\beta$ and $\gamma$. The diagonal elements, as functions of the variable $m$ and of the parameters $\Omega$, define the spin tomogram of the operator $A^{(j)}$. In the case in which $A^{(j)}$ represents a density operator describing the state of a spin$-j$ system, the tomogram $\omega_A(m,\Omega)$ is interpreted as the probability of finding the system with polarization $m$ along the $z$ axis in a reference frame rotated by the Euler angles $\Omega$. The tomogram (\ref{tomog}) is a family of well defined probability distributions in the variable $m$ with parameter $\stackrel{\rightarrow}{n}$: \begin{eqnarray} \omega_A(m,\stackrel{\rightarrow}{n}) & \geq & 0\;,\\ \sum_m \omega_A(m,\stackrel{\rightarrow}{n}) & = & 1\;. \end{eqnarray} It is a remarkable result that the knowledge of only the diagonal matrix elements in a generic rotated frame is sufficient to reconstruct the operator: \begin{equation} A^{(j)} = \sum_{m=-j}^{j} \int d\Omega K(m,\Omega) \omega_A(m,\Omega)\;, \end{equation} where \begin{equation} \int d\Omega = \int_0^{2\pi} d\alpha \int_0^\pi \sin{\beta}d\beta \int_0^{2\pi} d\gamma\;. \end{equation} The explicit expression for the \emph{quantizer} operator $K(m,\Omega)$ was found in \cite{kernel}.
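As a quick numerical illustration of the qubit example above (not part of the original text, and assuming the numpy library), the following sketch rotates the density matrix with the matrix $U$ given above and reads off the diagonal elements, recovering $\cos^2(\theta/2)$ and $\sin^2(\theta/2)$.
\begin{verbatim}
import numpy as np

theta, phi, psi = 0.7, 1.3, 0.4          # arbitrary Euler angles
c, s = np.cos(theta / 2), np.sin(theta / 2)
U = np.array([[ c * np.exp(1j * (phi + psi) / 2),
                s * np.exp(1j * (phi - psi) / 2)],
              [-s * np.exp(-1j * (phi - psi) / 2),
                c * np.exp(-1j * (phi + psi) / 2)]])

rho = np.diag([1.0, 0.0])                 # the qubit state of the example
omega = np.real(np.diag(U.conj().T @ rho @ U))

print(omega)                              # tomogram [w(1/2,n), w(-1/2,n)]
print(np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2)
\end{verbatim}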
Notice that as long as the polarization along the $z$ axis is considered, the spin tomogram (\ref{tomog}) depends only on two Euler angles: in the following we write \begin{equation} \Pi^{(j)}(m,\stackrel{\rightarrow}{n}) = R(\Omega) |m \rangle\langle m| R^\dag(\Omega)\;, \end{equation} where $\stackrel{\rightarrow}{n} = (\cos{\alpha}\sin{\beta},\sin{\alpha}\sin{\beta},\cos{\beta})$ is the rotated axis of polarization. Hence, in the tomographic approach, the state of a quantum system is described by means of a well defined probability distribution $\omega(m,\stackrel{\rightarrow}{n})$ related to a Stern-Gerlach-like measurement along the direction $\stackrel{\rightarrow}{n}$. Notice that a Bloch sphere description is obtained for the quantum state even for $j > 1/2$. One of the open problems in quantum mechanics and quantum information theory is to give a complete characterization of entangled states. Given a bipartite system, a quantum state of the system is said to be separable if it can be written as a convex sum of factorized states: \begin{equation} \label{def-sep} \rho = \sum_k p_k \rho_k^{(A)} \otimes \rho_k^{(B)}\;, \ \ \sum_k p_k = 1\;. \end{equation} Otherwise the state is said to be entangled. Let us also recall that a factorized state $\rho = \rho^{(A)}\otimes\rho^{(B)}$ is called a simply separable state. These definitions can be generalized, with some care, to the case of multi-partite systems \cite{Cirac,Lupo}. The relation between local realism and separability of quantum states has been widely studied. It is clear from the definition (\ref{def-sep}) that every separable state can be described by means of a local hidden variables model (where the hidden variable can be identified with the index $k$). In \cite{Werner} it was first shown with an example that the converse is not true, \emph{i.e.}\ there exist quantum states that can be described by a local hidden variables model but are nevertheless entangled. This means that the violation of a Bell's inequality by a given quantum state is a sufficient (though not necessary) condition for the state to be entangled. Although a systematic approach to generate all Bell's inequalities exists \cite{Pitowski}, how to find the inequality that presents a maximal violation for a given entangled state is still an open problem. From the point of view of entanglement detection and characterization, it is interesting to consider the tomographic description of states of multipartite quantum systems. To fix the ideas, let us consider a bipartite system composed of one spin$-j_1$ and one spin$-j_2$: in this case the spin tomogram of a state of the compound system described by the density matrix $\rho$ is written as follows: \begin{equation} \omega_\rho(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \tr\left( \rho \Pi^{(j_1)}(m_1,\stackrel{\rightarrow}{n_1})\otimes\Pi^{(j_2)}(m_2,\stackrel{\rightarrow}{n_2}) \right)\;. \end{equation} This definition is simply generalized to the case of multipartite spin systems and refers to local Stern-Gerlach-like measurements; a small numerical check of the bipartite definition is sketched below.
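The following sketch (not part of the original text) checks this bipartite definition numerically for two qubits, assuming the numpy library: the projectors $\Pi(m,\vec n)$ are built as eigenprojectors of $\vec n\cdot\vec\sigma$ with the two outcomes labeled $m=\pm 1$, and for a simply separable state the joint tomogram is verified to factorize.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(m, n):
    """Projector onto the spin-1/2 state with polarization m = +/-1
    along the unit vector n (eigenprojector of n.sigma)."""
    return 0.5 * (np.eye(2) + m * (n[0] * sx + n[1] * sy + n[2] * sz))

def tomogram2(rho, m1, m2, n1, n2):
    """Joint spin tomogram of a two-qubit density matrix rho."""
    return np.real(np.trace(rho @ np.kron(proj(m1, n1), proj(m2, n2))))

# Simply separable state |0><0| (x) |0><0|: the tomogram factorizes.
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0
rho1 = np.diag([1.0, 0.0])
n1 = np.array([np.sin(0.6), 0.0, np.cos(0.6)])
n2 = np.array([0.0, np.sin(1.1), np.cos(1.1)])
for m1 in (+1, -1):
    for m2 in (+1, -1):
        w = tomogram2(rho, m1, m2, n1, n2)
        w1 = np.real(np.trace(rho1 @ proj(m1, n1)))
        w2 = np.real(np.trace(rho1 @ proj(m2, n2)))
        assert abs(w - w1 * w2) < 1e-12
\end{verbatim}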
For example, the tomographic probability distribution function for the two-qubit state \begin{eqnarray} \rho = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] \end{eqnarray} reads \begin{eqnarray} \omega(1/2,1/2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) & = & \cos^2{\theta_1/2} \cos^2{\theta_2/2}\;, \\ \omega(1/2,-1/2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) & = & \cos^2{\theta_1/2} \sin^2{\theta_2/2}\;, \\ \omega(-1/2,1/2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) & = & \sin^2{\theta_1/2} \cos^2{\theta_2/2}\;, \\ \omega(-1/2,-1/2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) & = & \sin^2{\theta_1/2} \sin^2{\theta_2/2}\;. \end{eqnarray} The state is simply separable and the tomographic probability has the form of a factorized joint probability distribution \begin{equation} \omega(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \omega_1(m_1,\stackrel{\rightarrow}{n_1}) \omega_2(m_2,\stackrel{\rightarrow}{n_2})\;, \end{equation} where the probability distributions $\omega_1$ and $\omega_2$ describe the states of the first and second spin respectively. The joint tomographic probability determines the density matrix by means of the inversion formula obtained in \cite{Manko_2}. Due to the linearity of the tomographic map of density matrices onto joint probability distributions of spin projections, the tomogram of a separable state is the convex sum of the factorized joint probability distributions of the simply separable states: \begin{equation} \omega(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \sum_k p_k \omega_1^{(k)}(m_1,\stackrel{\rightarrow}{n_1}) \omega_2^{(k)}(m_2,\stackrel{\rightarrow}{n_2})\;. \end{equation} \section{Qubit tomograms}\label{qubits} In this section we discuss the tomographic representation for spin$-1/2$ (qubit) systems and its link with the standard density matrix description. Let us first consider a one-qubit system. It is well known that a qubit density matrix can be written in terms of Pauli matrices: \begin{equation} \rho_1 = \frac{1}{2} \left( \sigma_0 + x^i \sigma_i \right)\;, \end{equation} where (the sum over repeated indices is intended) \begin{equation} \label{trace} x^i = \delta^{ij} \tr(\rho \sigma_j) = \delta^{ij} x_j \end{equation} since \begin{equation} \tr(\sigma_i \sigma_j) = 2\delta_{ij}\;. \end{equation} In the following we take $m=-1,1$. With this convention, from the definition (\ref{tomog}) it follows that in the tomographic representation \begin{equation} \label{1-qubit-tomo} \omega(m,\stackrel{\rightarrow}{n}) = \tr\left( \rho_1 \Pi(m,\stackrel{\rightarrow}{n}) \right)\;, \end{equation} where \begin{equation} \Pi(m,\stackrel{\rightarrow}{n}) = \frac{1}{2} \left( \sigma_0 + m n^i \sigma_i \right) \end{equation} is the projector on the eigenstate with polarization $m=\pm 1$ along the direction $\stackrel{\rightarrow}{n}=(n^1,n^2,n^3)$. The operator $\Pi(m,\stackrel{\rightarrow}{n})$ plays the role of the de-quantizer operator used in the star-product quantization scheme \cite{star-prod}.
From (\ref{1-qubit-tomo}) and (\ref{trace}) it follows that the explicit expression for a generic qubit tomogram is \begin{equation} \omega_1(m,\stackrel{\rightarrow}{n}) = \frac{1}{2} \left( 1 + m \stackrel{\rightarrow}{n} \cdot \stackrel{\rightarrow}{x} \right)\;, \end{equation} where $\stackrel{\rightarrow}{x}=(x_1,x_2,x_3)$ and $\stackrel{\rightarrow}{n} \cdot \stackrel{\rightarrow}{x} = n^i x_i$. The expression (\ref{1-qubit-tomo}) can be immediately generalized to the case of a multi-qubit system. In the case of a system of $N$ qubits in a global state $\rho_N$, the (global) tomogram is given by the following relation: \begin{equation} \label{N-qubit-tomo} \omega_N(m_1,m_2,\dots m_N;\stackrel{\rightarrow}{n_1}, \stackrel{\rightarrow}{n_2},\dots \stackrel{\rightarrow}{n_N}) = \tr\left[ \rho_N \bigotimes_{i=1...N} \Pi(m_i,\stackrel{\rightarrow}{n_i})\right]\;. \end{equation} In the case of a system of two qubits, (\ref{N-qubit-tomo}) simplifies to \begin{equation} \label{2-qubit-tomo} \omega_2(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \tr\left[ \rho_2 \frac{1}{4}(\sigma_0 + m_1 n_1^i \sigma_i) \otimes (\tau_0 + m_2 n_2^i \tau_i) \right]\;, \end{equation} where $\sigma_\mu$ and $\tau_\mu$ are the Pauli matrices related to the first and second qubit, respectively. Defining $x_i = \tr(\rho_2 \sigma_i)$, $y_i = \tr(\rho_2 \tau_i)$ and $z_{ij} = \tr(\rho_2 \sigma_i\otimes\tau_j)$, where $\sigma_i$ and $\tau_i$ are short-hand notation for $\sigma_i\otimes\tau_0$ and $\sigma_0\otimes\tau_i$ respectively, the tomogram (\ref{2-qubit-tomo}) reads: \begin{equation}\label{2-qubit-tomo-function} \omega(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \frac{1}{4} \left( 1 + m_1 n_1^i x_i + m_2 n_2^i y_i + m_1 m_2 n_1^i z_{ij} n_2^j \right)\;. \end{equation} Notice that for simply separable states $\tr(\rho_2 \sigma_i \otimes \tau_j) = \tr(\rho_2 \sigma_i) \tr(\rho_2 \tau_j)$, \emph{i.e.}\ $z_{ij} = x_i y_j$, and the tomogram assumes a factorized form: \begin{equation} \omega(m_1,m_2;\stackrel{\rightarrow}{n_1}, \stackrel{\rightarrow}{n_2}) = \frac{1}{4} \left( 1 + m_1 \stackrel{\rightarrow}{n_1}\cdot\stackrel{\rightarrow}{x} \right) \left( 1 + m_2 \stackrel{\rightarrow}{n_2}\cdot\stackrel{\rightarrow}{y} \right)\;. \end{equation} \subsection{Two spin$-1/2$ Bell-Wigner inequalities} Let us consider the inequality proposed in \cite{Wigner}. It is related to the case of two spin$-1/2$ particles with perfect anti-correlation. For each particle the polarization is independently measured along three arbitrary directions. The joint probability of finding the first and the second particle polarized respectively along the $\stackrel{\rightarrow}{n_1}$ and $\stackrel{\rightarrow}{n_2}$ directions is denoted by $P(\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2})$. The hypothesis of perfect anti-correlation implies that the probability of measuring parallel polarizations along a fixed direction vanishes: \begin{equation} P(\stackrel{\rightarrow}{n},\stackrel{\rightarrow}{n}) = 0\;. \end{equation} Given three arbitrary directions $\stackrel{\rightarrow}{n_a}$, $\stackrel{\rightarrow}{n_b}$ and $\stackrel{\rightarrow}{n_c}$, the following inequality holds for a classically correlated state \cite{Wigner}: \begin{equation} \label{W-ineq} P(\stackrel{\rightarrow}{n_a},\stackrel{\rightarrow}{n_b}) + P(\stackrel{\rightarrow}{n_b},\stackrel{\rightarrow}{n_c}) - P(\stackrel{\rightarrow}{n_a},\stackrel{\rightarrow}{n_c}) \geq 0\;.
\end{equation} Notice that these probability distributions are directly given in the tomographic representation, since \begin{equation} P(\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \omega(1,1; \stackrel{\rightarrow}{n_1}, \stackrel{\rightarrow}{n_2})\;. \end{equation} Inequality (\ref{W-ineq}) is obtained for perfectly classically anti-correlated states. It is easy to see that a quantum simply separable state cannot exhibit perfect (anti-)correlations; hence we consider non-perfect anti-correlation in a simply separable state of the following form: \begin{equation} \omega(m_1,m_2;\stackrel{\rightarrow}{n_1}, \stackrel{\rightarrow}{n_2}) = \frac{1}{4} \left[ 1 + m_1 (\stackrel{\rightarrow}{n_1}\cdot\stackrel{\rightarrow}{x}) \right] \left[1 - m_2 (\stackrel{\rightarrow}{n_2}\cdot\stackrel{\rightarrow}{x}) \right]\;. \end{equation} For such a state the inequalities (\ref{W-ineq}) are always fulfilled and are simply written as follows: \begin{eqnarray} \omega(1,1;\stackrel{\rightarrow}{n_a},\stackrel{\rightarrow}{n_b}) + \omega(1,1;\stackrel{\rightarrow}{n_b},\stackrel{\rightarrow}{n_c}) - \omega(1,1;\stackrel{\rightarrow}{n_a},\stackrel{\rightarrow}{n_c}) = \\ \frac{1}{4}\left[ 1 - (\stackrel{\rightarrow}{n_a}\cdot\stackrel{\rightarrow}{x}) (\stackrel{\rightarrow}{n_b}\cdot\stackrel{\rightarrow}{x}) - (\stackrel{\rightarrow}{n_b}\cdot\stackrel{\rightarrow}{x}) (\stackrel{\rightarrow}{n_c}\cdot\stackrel{\rightarrow}{x}) + (\stackrel{\rightarrow}{n_a}\cdot\stackrel{\rightarrow}{x}) (\stackrel{\rightarrow}{n_c}\cdot\stackrel{\rightarrow}{x}) \right] \geq 0\;, \end{eqnarray} that is \begin{equation} (\stackrel{\rightarrow}{n_a}\cdot\stackrel{\rightarrow}{x}) (\stackrel{\rightarrow}{n_b}\cdot\stackrel{\rightarrow}{x}) + (\stackrel{\rightarrow}{n_b}\cdot\stackrel{\rightarrow}{x}) (\stackrel{\rightarrow}{n_c}\cdot\stackrel{\rightarrow}{x}) - (\stackrel{\rightarrow}{n_a}\cdot\stackrel{\rightarrow}{x}) (\stackrel{\rightarrow}{n_c}\cdot\stackrel{\rightarrow}{x}) \leq 1\;. \end{equation} Since the inequalities are fulfilled by non-perfectly anti-correlated particles in a factorized state, it follows that the same is true for a generic anti-correlated separable state. As a simple example, we consider the case of a two-qudit system in the Werner state, defined for $\phi \in [-1,1]$ as follows: \begin{equation} \label{Wer} \rho_d(\phi) = \frac{1}{d^3-d}\left[ (d-\phi) Id_{d^2} + (d\phi-1) V \right]\;, \end{equation} where $Id_{d^2}$ is the identity operator in the compound system space and $V$ is the swap operator ($V \psi\otimes\phi = \phi\otimes\psi$). These states are symmetric under local unitary operations of the kind $U \otimes U$: hence we expect a particularly simple tomographic expression for these states. The state (\ref{Wer}) is known to be entangled for $\phi < 0$ and separable otherwise. Notice that a spin$-j$ system can be viewed as a qudit with $d=2j+1$. In the case of two qubits ($d=2$) the tomogram of (\ref{Wer}) reads as follows: \begin{equation} \label{W-tomo} \omega_W = \frac{1}{4} \left[ 1 + \frac{2\phi-1}{3} m_1 m_2 ( \stackrel{\rightarrow}{n_1} \cdot \stackrel{\rightarrow}{n_2} ) \right]\;. \end{equation} In terms of the tomogram, the inequality (\ref{W-ineq}) is immediately written as \begin{equation} \frac{2\phi-1}{3} \left[ (\stackrel{\rightarrow}{n_a} \cdot \stackrel{\rightarrow}{n_c}) - (\stackrel{\rightarrow}{n_a} \cdot \stackrel{\rightarrow}{n_b}) - (\stackrel{\rightarrow}{n_b} \cdot \stackrel{\rightarrow}{n_c}) \right] \leq 1\;.
\end{equation} It follows that the inequality (\ref{W-ineq}) is violated for any $\phi < -1/2$. \subsection{Two spin$-1/2$ CHSH inequalities} As we have recalled above, both Bell's inequalities \cite{Bell} and the Bell-Wigner inequalities \cite{Wigner} assume perfect (anti-)correlations between the two spins. The inequalities known as CHSH inequalities were introduced \cite{CHSH} in order to relax the hypothesis of perfect correlation between the two systems. Also in this case we deal with dichotomic observables. In the following we consider the case of a composite system of two spin$-1/2$ particles, and the relevant observables are local magnetizations along a pair of directions. As in the original Bell argument, but in contrast with the Wigner approach, these inequalities are expressed in terms of expectation values and correlations of local observables. Some aspects of CHSH inequalities and their relation to tomographic probabilities were discussed in \cite{Manko}. Given two arbitrary directions $\stackrel{\rightarrow}{n_1}$ and $\stackrel{\rightarrow}{n_2}$, let us consider the function \begin{equation} \label{CHSH-corr} M(\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \tr(\rho_2 n_1^i\sigma_i \otimes n_2^j\tau_j)\;, \end{equation} which represents the correlation between the polarizations along the $\stackrel{\rightarrow}{n_1}$ and $\stackrel{\rightarrow}{n_2}$ directions, for the first and second qubit respectively, in the two-qubit state $\rho_2$. Notice that, in terms of tomograms, the correlation function (\ref{CHSH-corr}) can be easily written as \begin{equation} M(\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \sum_{m_1,m_2} m_1 m_2 \omega(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2})\;. \end{equation} Given four arbitrary directions $\stackrel{\rightarrow}{n_a}$, $\stackrel{\rightarrow}{n_b}$, $\stackrel{\rightarrow}{n_c}$ and $\stackrel{\rightarrow}{n_{b'}}$, the CHSH inequalities read as follows: \begin{equation} \label{CHSH-ineq} | M(\stackrel{\rightarrow}{n_a},\stackrel{\rightarrow}{n_b}) - M(\stackrel{\rightarrow}{n_a},\stackrel{\rightarrow}{n_c}) | + M(\stackrel{\rightarrow}{n_{b'}},\stackrel{\rightarrow}{n_b}) + M(\stackrel{\rightarrow}{n_{b'}},\stackrel{\rightarrow}{n_c}) - 2 \leq 0\;. \end{equation} For the two-qubit Werner state, using (\ref{W-tomo}), the correlation function is easily written as \begin{equation} M(\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \frac{2\phi-1}{3} \left( \stackrel{\rightarrow}{n_1} \cdot \stackrel{\rightarrow}{n_2} \right)\;. \end{equation} The inequality (\ref{CHSH-ineq}) reads \begin{equation} \frac{|2\phi-1|}{3} \left[ | \stackrel{\rightarrow}{n_a}\cdot(\stackrel{\rightarrow}{n_b} - \stackrel{\rightarrow}{n_c}) | - \stackrel{\rightarrow}{n_{b'}}\cdot(\stackrel{\rightarrow}{n_b} + \stackrel{\rightarrow}{n_c}) \right] \leq 2\;.
\end{equation} Notice that the maximum of the function \begin{equation} Y(\stackrel{\rightarrow}{n_a},\stackrel{\rightarrow}{n_b},\stackrel{\rightarrow}{n_{b'}},\stackrel{\rightarrow}{n_c}) = | \stackrel{\rightarrow}{n_a}\cdot (\stackrel{\rightarrow}{n_b} - \stackrel{\rightarrow}{n_c}) | - \stackrel{\rightarrow}{n_{b'}}\cdot (\stackrel{\rightarrow}{n_b} + \stackrel{\rightarrow}{n_c}) \end{equation} is reached when \begin{eqnarray} \stackrel{\rightarrow}{n_a} &=& \pm\frac{\stackrel{\rightarrow}{n_b}-\stackrel{\rightarrow}{n_c}}{|\stackrel{\rightarrow}{n_b}-\stackrel{\rightarrow}{n_c}|} \\ \stackrel{\rightarrow}{n_{b'}} &=& - \frac{\stackrel{\rightarrow}{n_b}+\stackrel{\rightarrow}{n_c}}{|\stackrel{\rightarrow}{n_b}+\stackrel{\rightarrow}{n_c}|} \end{eqnarray} and $\stackrel{\rightarrow}{n_b} \cdot \stackrel{\rightarrow}{n_c} = 0$, and it is equal to $2\sqrt{2}$. The inequality is violated for any $\phi < -\frac{3\sqrt{2}-2}{4}$. Hence, the violation of the inequality does not detect entanglement when $-\frac{3\sqrt{2}-2}{4} \leq \phi < 0$. \section{Qutrit tomography}\label{qutrits} In the previous sections we were dealing with qubit systems. Let us now consider the case of qutrits. In order to write the spin tomogram for a generic qutrit state, one has to consider the $s=1$ irreducible representation of the group $\mathrm{SU}(2)$. Let us consider a realization of the angular momentum as qutrit operators $J_1, J_2, J_3$, such that \begin{equation} [ J_i , J_j ] = i \epsilon_{ij}^k J_k\;. \end{equation} In terms of this representation, the spin tomogram of a qutrit state is related to the standard density matrix description via the following relation: \begin{equation} \label{1-qutrit-tomo} \omega(m,\stackrel{\rightarrow}{n}) = \tr( \rho_1 \Pi(m,\stackrel{\rightarrow}{n}) )\;, \end{equation} where $m=-1,0,1$, and the qutrit \emph{de-quantizer} operator is now given by \begin{equation} \Pi(m,\stackrel{\rightarrow}{n}) = \left(1-m^2\right)Id_3 + \frac{m}{2} n^i J_i + \left(\frac{3}{2} m^2 -1\right)(n^i J_i)^2\;, \end{equation} where $Id_3$ is the qutrit identity operator, and $\Pi(m,\stackrel{\rightarrow}{n})$ is the projector on the eigenvector of polarization $m$ along the $\stackrel{\rightarrow}{n}$ direction. The relation (\ref{1-qutrit-tomo}) is easily generalized to the case of a system of $N$ qutrits as follows: \begin{equation} \omega(m_1,m_2,\dots m_N;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2},\dots \stackrel{\rightarrow}{n_N}) = \tr\left[ \rho_N \bigotimes_{i=1...N} \Pi(m_i,\stackrel{\rightarrow}{n_i})\right]\;. \end{equation} As an example let us consider the two-qutrit Werner state obtained from (\ref{Wer}) with $d=3$: \begin{equation} \label{Werner-state} \rho_W = \frac{3-\phi}{24} Id_9 + \frac{3\phi-1}{24} V\;. \end{equation} The tomographic representation is explicitly given by \begin{equation} \omega(m_1,m_2;\stackrel{\rightarrow}{n_1},\stackrel{\rightarrow}{n_2}) = \tr\left[ \rho_W \Pi(m_1,\stackrel{\rightarrow}{n_1})\otimes\Pi(m_2,\stackrel{\rightarrow}{n_2}) \right]\;, \end{equation} which yields \begin{eqnarray} \label{2-qutrit-tomo} \omega(m_1, m_2 ; \stackrel{\rightarrow}{n_1} , \stackrel{\rightarrow}{n_2}) &=& \frac{3-\phi}{24} + \frac{3\phi-1}{24}\left[ 3\left(1-m_1^2\right)\left(1-m_2^2\right) \right. \nonumber \\ &+& \left(1-m_1^2\right)\left(3m_2^2-2\right) + \left(1-m_2^2\right)\left(3m_1^2-2\right) \nonumber \\ &+& \frac{m_1 m_2}{2} (\stackrel{\rightarrow}{n_1} \cdot \stackrel{\rightarrow}{n_2}) \nonumber \\ &+& \left.
\left(\frac{3}{2} m_1^2 -1\right)\left(\frac{3}{2} m_2^2 -1\right)\left(1 + (\stackrel{\rightarrow}{n_1} \cdot \stackrel{\rightarrow}{n_2})^2\right) \right]\;. \end{eqnarray} As another example, we discuss the non-linear Bell-like inequality proposed in \cite{Uffink}: \begin{equation} \langle A B' + A' B \rangle^2 + \langle A B - A' B' \rangle^2 \leq 1\;, \end{equation} where $A$, $A'$ and $B$, $B'$ are local observables for a system composed of two spins, with the orthogonality property $\tr(A A')=0$, $\tr(B B')=0$. Although this inequality was formulated for a system of two qubits, it can be considered for a system of two qutrits as well. If $A = n_A^i J_i$, $A' = n_{A'}^i J_i$, $B = n_B^i J_i$, $B' = n_{B'}^i J_i$, from (\ref{2-qutrit-tomo}) we obtain that \begin{equation} \langle A B' \rangle = \frac{3\phi-1}{12} \stackrel{\rightarrow}{n_A}\cdot\stackrel{\rightarrow}{n_{B'}}\;, \end{equation} and the inequality reads as follows: \begin{equation} \left[ \frac{3\phi-1}{12}\right]^2 \left[ \left( \stackrel{\rightarrow}{n_A}\cdot\stackrel{\rightarrow}{n_{B'}} + \stackrel{\rightarrow}{n_{A'}}\cdot\stackrel{\rightarrow}{n_B} \right)^2 + \left( \stackrel{\rightarrow}{n_A}\cdot\stackrel{\rightarrow}{n_B} - \stackrel{\rightarrow}{n_{A'}}\cdot\stackrel{\rightarrow}{n_{B'}}\right)^2 \right] \leq 1\;. \end{equation} Notice that $\left[ \left( \stackrel{\rightarrow}{n_A}\cdot\stackrel{\rightarrow}{n_{B'}} + \stackrel{\rightarrow}{n_{A'}}\cdot\stackrel{\rightarrow}{n_B} \right)^2 + \left( \stackrel{\rightarrow}{n_A}\cdot\stackrel{\rightarrow}{n_B} - \stackrel{\rightarrow}{n_{A'}}\cdot\stackrel{\rightarrow}{n_{B'}}\right)^2 \right]< 8$, therefore the inequality is never violated. \section{Conclusions}\label{conclusions} To conclude, we point out the main results of the paper. We have developed a formulation of Bell's inequalities by means of the tomographic probability distributions of spin projections, which describe the quantum states completely. New formulas, convenient for further analysis, were obtained for the tomograms of one qubit and two qubits and for the tomogram of the two-qutrit Werner state. The de-quantizer operator for the qutrit is also a new result presented in the paper. We demonstrated that both the Wigner inequalities and the CHSH inequalities, as well as their violations, can be easily explained using the joint probability distributions (tomograms) of spin projections. There are bounds for the violation of CHSH inequalities discussed in \cite{bounds,Tsirel,Popescu,bManko}. Since the CHSH inequalities (\ref{CHSH-ineq}) are expressed exactly in terms of the function (\ref{2-qubit-tomo-function}), it follows that the bound can be found as the maximum of the left-hand side of (\ref{CHSH-ineq}). We will develop the analysis of Bell's inequalities based on the tomographic star-product approach in future publications. \section*{References} \end{document}
\begin{document} \title[On Induced Matching of Graphs]{On Induced Matching numbers of stacked-book graphs} \author[T.C Adefokun]{Tayo Charles Adefokun$^1$ } \address{$^1$Department of Computer and Mathematical Sciences, \newline \indent Crawford University, \newline \indent Nigeria} \email{[email protected]} \keywords{Stacked-book graphs, Maximum Induced Matching Number, Cartesian product, star graphs, paths\\ \indent 2010 {\it Mathematics Subject Classification}. Primary: 05C70, 05C15} \begin{abstract} Suppose that $G$ is a simple, undirected graph. An induced matching in $G$ is a set of edges $M \subseteq E(G)$ such that no two edges of $M$ are joined by an edge of $G$; equivalently, no endpoint of an edge in $M$ is adjacent to an endpoint of any other edge in $M$. The maximum cardinality of such an $M$, denoted by $im(G)$, is known as the induced matching number of $G$. In this work, we probe $im(G)$ where $G = G_{m,n}$ is the stacked-book graph obtained as the Cartesian product of the star graph $S_m$ and the path $P_n$. \end{abstract} \maketitle \section{Introduction} Suppose that $G$ is a graph with $E(G)$ as the edge set of $G$, while $V(G)$ denotes the vertex set of $G$. Let $M$ be a subset of $E(G)$ such that no two edges of $M$ are joined by an edge of $E(G)$; that is, no endpoints of two distinct edges of $M$ are adjacent in $G$. The maximum induced matching (MIM) problem is a generalization of the classical graph matching problem, and it was introduced in \cite{SV1}. Suppose that $M$ is a largest induced matching in $G$; then the cardinality of $M$, denoted by $im(G)$, is called the maximum induced matching number of $G$. Much work has been done on this subject. It has attracted interest both because it is theoretically interesting and because it has a number of direct applications. In \cite{SV1}, the authors described the MIM problem as the ``risk-free'' marriage problem, in which married couples who are perfectly matched are identified. Its usefulness in cryptography is also obvious. Cameron, in her earlier work \cite{C1}, showed that even though the MIM problem is NP-complete for bipartite graphs, it is easier to resolve for chordal graphs. This was also confirmed for circular graphs in \cite{GL1}. Golumbic and Lewenstein \cite{GL2} established a relationship between the MIM number and the redundancy number of graphs, and also showed that the MIM problem is polynomial-time solvable for tree graphs, while \cite{C2} investigated the MIM problem in intersection graphs. Recent works on the MIM problem include \cite{M1}, where the MIM number was extensively probed for grids $G_{n,m}=P_n \Box P_m$, the Cartesian product of paths $P_n$ and $P_m$. For odd $nm$, a bound $im(G_{n,m}) \leq \lfloor \frac{nm+1}{4} \rfloor$ was obtained. The bound was tightened in \cite{AA2} and further in \cite{AA1}. In \cite{XT1}, exact algorithms for the MIM problem on $n$-vertex graphs were investigated. In this work, we probe the maximum induced matching problem for the class of stacked-book graphs $G_{m,n}$, which are the graphs obtained from the Cartesian product of star graphs $S_m$ and paths $P_n$. The MIM numbers are obtained for the initial range of these graphs, while lower bounds on the MIM number are derived for the general class. \section{Definitions} To make this work self-contained, we give the following definitions, which we shall adopt in the course of the paper. Definitions that are not considered general will be given at the point of application.
The vertex set of a graph $G$ is $V(G)$, and $M$ is a subset of $E(G)$, the edge set of $G$, with $M$ an induced matching of $G$. A vertex $v \in V(G)$ is called saturated if it is an endpoint of an edge in $M$, and unsaturated otherwise. A star graph $S_m$ contains a central vertex $v_1$ (except if specifically indicated otherwise) with $m-1$ leaves, which are all incident to $v_1$ as pendant vertices. A path $P_n$ contains $n$ vertices and $n-1$ edges, while a cycle $C_m$ contains $m$ vertices and $m$ edges. Suppose that $u$ and $v$ are members of $V(G)$; then $d(u,v)$ is a non-negative integer, the distance between $u$ and $v$ in $G$. A vertex $v \in V(G)$ is called unsaturable if, by virtue of its position, it cannot be saturated, either because of its distance from a saturated vertex, or because, although it is at the right distance, it is not adjacent to any vertex that can be saturated so as to form an edge in the induced matching. A saturable vertex, therefore, is the opposite of an unsaturable vertex. The diameter of a graph, denoted by $diam(G)$, is the largest distance between any two vertices $u$ and $v$ of the graph. The set $[a,b]$ denotes the set of integers from $a$ to $b$, while $[a]$ is shorthand for $[1,a]$. \subsection{Structure of a Stacked-book graph} The stacked-book graph is the Cartesian product $S_m \Box P_n$ of a star graph $S_m$ and a path $P_n$. Structurally, $S_m \Box P_n$ contains $n$ copies of $S_m$, together with the edge set $E(G') \subseteq E(S_m \Box P_n)$, where $E(G')=\left\lbrace v_iu_i: v_i \in V(S_m(i)),\ u_i \in V(S_m(i+1)),\ i \in [n-1]\right\rbrace $. Clearly, $E(S_m \Box P_n)=E(G') \cup E(\cup^n_{i=1}S_m(i))$, where $S_m(i)$ is designated as the $i$th $S_m$ star graph for all $1 \leq i \leq n$. \subsection{Initial Results} The following results are known. \begin{theorem}\label{thm1} Let $P_n$ be a path graph on $n$ vertices. Then, $im(P_n)= \lceil \frac{n-1}{3}\rceil$. \end{theorem} \begin{theorem} \label{thm2} Let $C_n$ be a cycle graph on $n$ vertices. Then $im(C_n)= \lfloor \frac{n}{3}\rfloor$. \end{theorem} \begin{theorem}\label{thm3}\cite{M1} Suppose that $G_{3,n}$ is a grid graph obtained by the Cartesian product $P_3 \Box P_n$, where $n$ is even or odd. Then for a positive integer $k$, \begin{center} $im(P_3 \Box P_n) = \left\{ \begin{array}{ll} \lceil \frac{3n}{4} \rceil & \mbox{if} \;\; n \; \mbox {is even}; \\ \frac{3(n-1)}{4} & \mbox{if} \;\; n=4k+1 \\ \frac{3(n-1)+2}{4} & \mbox{if} \; \; n=4k+3\\ \end{array} \right.$ \end{center} \end{theorem} \section{Result} Now we present the results we have obtained in this work. First we show a result on induced matchings of the star graph $S_m$. \begin{theorem}\label{thm4} Let $S_m$ be a star graph whose central vertex $v_1$ is adjacent to $m-1$ leaves. Then $im(S_m)=1$. \end{theorem} (The implication of this result is that every star contains at most one element in its induced matching set.) \begin{proof} Let $S_m$ be a star with $v_1$ as the central vertex, and let $M$ be a non-empty induced matching of $S_m$. Since every edge of $S_m$ is incident to $v_1$, the vertex $v_1$ is saturated; say $v_1v_k \in M$ for some $k \leq m$. Now, for all $i$ with $i \neq k$, the vertex $v_i \in V(S_m)$ is unsaturated, since $diam(S_m)=2$. Thus $im(S_m)=|M|=1$. \end{proof} Now we present our first results on the induced matching of the stacked-book graph $G_{m,n}$. \begin{lemma}\label{lem1} Suppose that $G_{m,n}$ is a stacked-book graph.
Then, if the induced matching number of $G_{m,n}$ is attained, the central vertices $v_1(1)$ and $v_1(2)$ of the factor stars $S_m(1)$ and $S_m(2)$ of $G_{m,n}$ are unsaturated. \end{lemma} \begin{proof} Suppose that $v_1$ and $u_1$ are the central vertices of the $S_m(1)$ and $S_m(2)$ stars, and suppose that $v_1$ is saturated. Then either $v_1v_i \in M$ for some $v_i \in V(S_m(1))$, $2 \leq i < m$, or $v_1u_1 \in M$. Suppose that $v_1 v_i \in M$. By Theorem \ref{thm4}, if $v_1$ is saturated, then at least $m-2$ vertices of $S_m(1)$ will be unsaturated. Thus, for all $v_i \in V(S_m(1))$, $v_iu_i \notin M$. The same argument holds if $u_1 \in S_m(2)$ is saturated. Thus, $im(G_{m,2}) = 1$, which contradicts the maximality of $M$. Now, suppose that $v_1u_1 \in M$. Since $d(v_1,v_i)=1=d(u_1,u_i)$ for all $i \in [2,m]$, the vertices $v_i,u_i$ are unsaturated for all $i \in [2,m]$. But clearly there exists a path $P_5$ in $G_{m,2}$, and from Theorem \ref{thm1}, $P_5$ contributes two edges to an induced matching of $G_{m,2}$. Thus a contradiction. \end{proof} Now we present the first theorem. \begin{theorem} Let $G_{m,2}$ be a stacked-book graph. Then $im(G_{m,2})=m-1$. \end{theorem} \begin{proof} Let $G_{m,2}$ be a stacked-book graph. Then there exist $S_m(1), S_m(2) \subseteq G_{m,2}$ with vertices $v_1, v_2, \cdots, v_m$ and $u_1, u_2, \cdots, u_m$, and a path $P_5(i) = v_i \rightarrow u_i \rightarrow u_1 \rightarrow u_{i+1} \rightarrow v_{i+1}$ for $i \in [2,m-1]$. Thus, there exists the set $\bar{P}=\left\lbrace P_5(2), P_5(3), \cdots , P_5(\frac{m-1}{2})\right\rbrace$ if $m$ is odd, so that there are $\frac{m-1}{2}$ such $P_5$-paths. Now, by Theorem \ref{thm1}, $im(P_5)= 2$. Clearly, $\bar{P}$ covers all the edges in $E(G_{m,2})$ that can be in $M$. Therefore, $im(G_{m,2}) \leq 2\left(\frac{m-1}{2} \right)=m-1$. Suppose that $m$ is even. Then consider the set $P^*=\left\lbrace P_5(2), P_5(3), \cdots , P_5(\frac{m-2}{2}), P_3(t) \right\rbrace $, where $P_3(t)= v_k \rightarrow u_k \rightarrow u_1$. So, $im(P^* \backslash P_3(t))=2 \left(\frac{m-2}{2}\right)=m-2$. By an earlier result, $im(P_3(t))=1$. Therefore, $im(P^*)=m-1$. Hence, for any integer $m$, $im(G_{m,2}) \leq m-1$. Conversely, by the definition of an induced matching and of the stacked-book graph, the edges $v_2u_2, v_3u_3, \cdots, v_mu_m$ satisfy the distance conditions to belong to $M$. Thus, $im(G_{m,2}) \geq m-1$ and hence the claim. \end{proof} Next we consider the induced matching in $G_{m,3}$, where $m$ is either even or odd, and show that the graph has the same induced matching number as $G_{m,2}$. \begin{theorem}\label{thm5} Let $G_{m,3}$ be a stacked-book graph. Then $im(G_{m,3})=m-1$. \end{theorem} To prove Theorem \ref{thm5}, we need two results. The first, which is about the nature of induced matchings and distances between vertices of graphs, is folklore, as it follows from the definition of an induced matching. \begin{lemma}\label{lem2} Let $e_1$ be a member of $M$ of a graph $G$. Then some edge $e_2 \in E(G)$ may also belong to $M$ if there exist $v_1 \in e_1$ and $u_1 \in e_2$ such that $d(v_1,u_1)\geq 3$, and $v_2 \in e_1$ and $u_2 \in e_2$ such that $d(v_2,u_2) \geq 2$.
\end{lemma} \begin{proof} The proof follows from the definition of the induced matching $M$ of a graph $G$. \end{proof} \begin{lemma} \label{lem3} Let $G_{m,3}$ be a stacked-book graph with factor star graphs $S_m(1)$, $S_m(2)$ and $S_m(3)$ such that $v_1 \rightarrow u_1 \rightarrow w_1$ is a $P_3$ path in $G_{m,3}$, where $v_1, u_1$ and $w_1$ are the central vertices of the respective factor star graphs. Then if $u_1$ is saturated, and $u_1v_k \in M$ for some $v_k \in V(G_{m,3})$, then $|M|=1$, and thus $M$ is not a maximum induced matching of $G_{m,3}$. \end{lemma} \begin{proof} For $v_k \in V(G_{m,3})$ with $v_k \neq u_1$, there may exist $v_i \in V(G_{m,3})$ such that $d(v_k,v_i)=3$, since $diam(G_{m,3})= 3$. However, suppose that $v_iv_j \in E(G_{m,3})$ for such a vertex $v_i$ with $d(v_k,v_i)=3$. It is clear that $v_i$ is a leaf of some $S_m(t)$, $t \in \left\lbrace 1,3\right\rbrace $. Thus, $d(u_1,v_j) =1$, a contradiction to Lemma \ref{lem2}, and hence the result. \end{proof} \subsection{Proof of Theorem \ref{thm5}} Now we proceed to prove Theorem \ref{thm5}. \begin{proof} Suppose that $|M| > m-1$. Let $v_1,u_1$ and $w_1$ be the central vertices of $S_m(1), S_m(2)$ and $S_m(3)$ respectively. Clearly, $v_1u_1, u_1w_1 \notin M$ by Lemma \ref{lem3}. Now, first we show that $v_1$ is not saturable. Suppose that $v_1$ is saturable; then $v_1v_q \in M$, where $v_q$ is a leaf on $S_m(1)$. By an earlier result, the subgraph induced by $S_m(1)$ and $S_m(2)$ does not contain another member of $M$. Also, let $v_qu_q \in E(G_{m,3})$, with $u_q \in S_m(2)$, and $u_qw_q \in E(G)$, with $w_q \in S_m(3)$. By an earlier result, $u_qw_q \notin M$. In like manner, if $w_1$ is saturated and $w_1w_q \in M$, no other edge in the subgraph of $G_{m,3}$ induced by $S_m(2)$ and $S_m(3)$ is a member of $M$, and $v_qu_q \notin M$. Without loss of generality, suppose that $v_1v_q \in M$; then only the members of $\bar{M}=\left\lbrace u_iw_i : i \in [2,m]; \; i \neq q\right\rbrace \subset E(G_{m,3}) $ can belong to $M$. Thus $|\bar{M}|=m-2$ and so $|M|=m-1$, which is a contradiction. Now it has been established that none of the pendant edges of $S_m(1), S_m(2)$ and $S_m(3)$ can be in $M$. Thus, the possible members of $M$ are $\left\lbrace v_iu_i: i \in [2,m]\right\rbrace \cup \left\lbrace u_iw_i: i \in [2,m] \right\rbrace = M' $. Clearly, $|M'|=2(m-1)$. By Lemma \ref{lem2}, only half of the members of $M'$ can be in $M$. Thus, $im(G_{m,3}) \leq m-1$. Reasonably, $im(G_{m,2}) \leq im(G_{m,3})$. By the earlier result, therefore, $im(G_{m,3}) \geq m-1$ and thus $im(G_{m,3}) = m-1$. \end{proof} Next we investigate the induced matching number of $G_{m,4}$. We start with a lemma that will be employed in the main result. \begin{lemma} \label{lem4} Let $G_{m,4}$ be a stacked-book graph such that $S_m(1), S_m(2), S_m(3)$ and $S_m(4)$ are the factor stars of $G_{m,4}$. Suppose that $im(G_{m,4}) \geq m$. Then if $M' = \left\lbrace u_iw_i: i \in [2,m]; \; u_i \in S_m(2), w_i \in S_m(3) \right\rbrace $, then $M'$ is not a subset of $M$. \end{lemma} \begin{proof} It is easy to see that $|M'| = m-1$. Now, suppose that $M' \subset M$; then $u_i, w_i$ are saturated for all $i \in [2,m]$. Thus, no vertex $v_i \in S_m(1)$ or $r_i \in S_m(4)$ is saturable, for $i \in [2,m]$, which implies that $im(G_{m,4})=m-1$ and thus a contradiction. \end{proof} Next we consider the main theorem. \begin{theorem}\label{thm6} Let $S_m(1), S_m(2), S_m(3)$ and $S_m(4)$ be the factor star graphs of the stacked-book graph $G_{m,4}$. Then, $im(G_{m,4})=m$.
\end{theorem} \begin{proof} By Lemma \ref{lem4}, at least one edge in \\$M' = \left\lbrace u_iw_i: i \in [2,m]; \; u_i\in S_m(2), w_i \in S_m(3) \right\rbrace $ is not in $M$. Suppose therefore that $u_kw_k \notin M$. Then for $v_k\in S_m(1)$ and $r_k \in S_m(4)$, we may take $v_1v_k, r_1r_k \in M$, where $v_1$ and $r_1$ are the central vertices of $S_m(1)$ and $S_m(4)$ respectively. Thus, $im(G_{m,4}) \geq m$. Conversely, suppose that $im(G_{m,4})=m+1$. Let $u_1, w_1$ be the central vertices of $S_m(2)$ and $S_m(3)$ respectively. Suppose that one of $u_i,w_i$, say $u_i$, is saturated such that $u_1u_i \in M$. Then, by an earlier result, no edge in the subgraph of $G_{m,4}$ induced by $S_m(1)$, $S_m(2)$ and $S_m(3)$ is contained in $M$. Likewise, if $w_1w_i \in M$, then all other vertices of the subgraph of $G_{m,4}$ induced by $S_m(2)$, $S_m(3)$ and $S_m(4)$ are unsaturated. If any pendant edge of $S_m(2)$ or $S_m(3)$ is in $M$, then $|M|=2$. Note as well that if $u_1w_1 \in M$, then by the distances of $u_1$ and $w_1$ to the remaining vertices of $S_m(1), S_m(2), S_m(3)$ and $S_m(4)$, only $u_1w_1$ will be in $M$. Thus, for an optimal $M$, some members of $M''= \left\lbrace v_iu_i: i \in [2,m]\right\rbrace$ or $M'''= \left\lbrace w_ir_i: i \in[2,m] \right\rbrace $ will have to be in $M$. Now it can be seen that $|M' \cup M''|=2(m-1)$ and only $m-1$ members of $M' \cup M''$ can be in $M$. Based on this observation, there will exist at least one $w_i \in S_m(3)$ that is not saturated. Thus, there exists a saturated vertex $r_i \in S_m(4)$ such that $r_1r_i \in M$. By an earlier result, no other pendant edge of $S_m(4)$ is in $M$. Thus, $im(G_{m,4}) < m+1$, a contradiction. Therefore, $im(G_{m,4}) \leq m$ and the claim follows. \end{proof} Now we consider the case of $G_{m,5}$. We shall need some new results to aid the proof. \begin{lemma}\label{lem5} Suppose that $w_1 \in S_m(3)$ is the central vertex of $S_m(3)$, where $\left\lbrace S_m(i): i \in [1,5]\right\rbrace $ is the set of factor stars of $G_{m,5}$. If $w_1$ is saturated, then for any induced matching $M$ of $G_{m,5}$, $|M| \leq 2m-3$. \end{lemma} \begin{proof} Suppose that $w_1$ is the central vertex of $S_m(3)$ and it is saturated. Then one of $w_1w_k, u_1w_1$ and $w_1r_1$ belongs to $M$, where $u_1, r_1$ are the central vertices of $S_m(2)$ and $S_m(4)$ respectively. Suppose that $w_1w_k \in M$, where $k \leq m$. Now for all $i \in [2,m], i \neq k$, $w_i \in S_m(3)$ is unsaturated by earlier results. Thus members of $\left\lbrace u_iw_i:i \in [2,m] \right\rbrace $ and $\left\lbrace w_ir_i: r_i \in S_m(4), i \in [2,m] \right\rbrace $ do not belong to $M$. It is also clear that both edges $v_ku_k, r_kt_k \notin M$, where $t_k \in S_m(5)$. By the earlier technique, it can be deduced that $v_1v_i, t_1t_i \notin M$ for all $i \in [2,m]$. Thus, only the edges in $E'=\left\lbrace v_iu_i: i \in [2,m], i \neq k \right\rbrace $ and $E'' = \left\lbrace r_it_i: i \in [2,m], i \neq k \right\rbrace $ can be in $M$. Clearly, $|E' \cup E''|=2(m-2)$. Thus $|M|\leq 2m-3$. Also, if $u_1w_1 \in M$, it can be seen from the definition of induced matching that no other edge in the subgraph of $G_{m,5}$ induced by $S_m(1), S_m(2)$ and $S_m(3)$ is a member of $M$, and from earlier results, only $m-1$ edges of the subgraph of $G_{m,5}$ induced by $S_m(3), S_m(4)$ and $S_m(5)$ can be in $M$. Thus, $M$ consists of at most $m$ edges, which is not more than $2m-3$, since $m \geq 3$. \end{proof} \begin{lemma}\label{lem6} Suppose that $im(G_{m,5}) \geq 2(m-1)$.
Then $u_1, w_1$ and $r_1$, the central vertices of $S_m(2), S_m(3)$ and $S_m(4)$ respectively, are unsaturated. \end{lemma} \begin{proof} The proof follows from Lemma \ref{lem5} and an earlier result. \end{proof} Now we proceed to probe the induced matching number of $G_{m,5}$. \begin{theorem} Let $G_{m,5}$ be a stacked-book graph. Then, $im(G_{m,5}) = 2(m-1)$. \end{theorem} \begin{proof} From the last results, we see that if a maximum induced matching $M$ is to contain more than $2m-3$ edges, then $u_1, w_1, r_1$ must be unsaturated. Now we show that $im(G_{m,5}) \geq 2(m-1)$. Note that there exists a path $P_5(i) = v_i \rightarrow u_i \rightarrow w_i \rightarrow r_i \rightarrow t_i$ for each $i \in [2,m]$. Therefore, there are $m-1$ such paths in $G_{m,5}$. From earlier results, $im(P_5)=2$. Thus, $im(G_{m,5})\geq 2(m-1)$. Conversely, for the claim to hold, $u_1, w_1, r_1$ must be unsaturated, as established above. The edges in $E(G_{m,5})$ left to be members of $M$ are the pendant edges of $S_m(1)$ and $S_m(5)$ and the edges of the paths $P_5(i)$ defined earlier. Suppose that one pendant edge from each of $S_m(1)$ and $S_m(5)$ belongs to $M$; then, by the definition of induced matching, at most one edge on each of the paths $P_5(i)$ can be a member of $M$. Thus $|M|\leq m+1$. The only alternative is that no pendant edge of $S_m(1)$ or $S_m(5)$ is a member of $M$. Then at most two edges on each path $P_5(i)$ will be in $M$. Thus, $|M| \leq 2(m-1)$ and so $im(G_{m,5})=2(m-1)$. \end{proof} Now we generalize the results. \begin{theorem} Let $G_{m,n}$ be a stacked-book graph such that $n$ is even. Then \begin{center} $im(G_{m,n}) \geq \left\{ \begin{array}{ll} m\lceil \frac{n}{4} \rceil -1 & \mbox{if} \;\; n \equiv 2 \mod 4; \\ \frac{mn}{4} & \mbox{if} \;\; n \equiv 0 \mod 4. \end{array} \right.$ \end{center} \end{theorem} \begin{proof} The claims follow by combining the results proved earlier for even $n$. \end{proof} \begin{theorem} Let $G_{m,n}$ be a stacked-book graph with $n$ odd. Then \begin{center} $im(G_{m,n}) \geq \left\{ \begin{array}{ll} m\lfloor \frac{n}{4} \rfloor +2 & \mbox{if} \;\; n \equiv 3 \mod 4; \\ \frac{mn+3m-8}{4} & \mbox{if} \;\; n \equiv 1 \mod 4. \end{array} \right.$ \end{center} \end{theorem} We have established lower bounds for the MIM numbers of the stacked-book graphs. From our preliminary work towards establishing tighter bounds, we have reason to suggest that the results in the last two theorems may coincide with the upper bounds, and thus we propose the conjectures below. \begin{conj} Let $G_{m,n}$ be a stacked-book graph such that $n$ is even. Then \begin{center} $im(G_{m,n}) = \left\{ \begin{array}{ll} m\lceil \frac{n}{4} \rceil -1 & \mbox{if} \;\; n \equiv 2 \mod 4; \\ \frac{mn}{4} & \mbox{if} \;\; n \equiv 0 \mod 4. \end{array} \right.$ \end{center} \end{conj} \begin{conj} Let $G_{m,n}$ be a stacked-book graph with $n$ odd. Then \begin{center} $im(G_{m,n}) = \left\{ \begin{array}{ll} m\lfloor \frac{n}{4} \rfloor +2 & \mbox{if} \;\; n \equiv 3 \mod 4; \\ \frac{mn+3m-8}{4} & \mbox{if} \;\; n \equiv 1 \mod 4. \end{array} \right.$ \end{center} \end{conj} \section{Conclusion} We have obtained the MIM number of stacked-book graphs $G_{m,n}$ for all $m$ and for $n \in [1,5]$. These results are building blocks for obtaining lower bounds for the cases where $n \geq 6$. The conjectures at the end of the work suggest that the lower bounds obtained in this work will in fact be equal to the upper bounds, if those can be found.
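As a small sanity check on the cases $n \in [1,5]$ treated above and on the conjectured formulas, the induced matching numbers of the smallest stacked-book graphs can be computed by brute force. The following sketch is ours and is not part of the original computations; it assumes that $G_{m,n}$ is the Cartesian product of the star $S_m$ and the path $P_n$, and the helper names are illustrative only.
\begin{verbatim}
# Brute-force check of im(G_{m,n}) for small stacked-book graphs.
# Sketch only: assumes G_{m,n} = S_m (star on m vertices) box P_n (path on n vertices).
from itertools import combinations
import networkx as nx

def stacked_book(m, n):
    star = nx.star_graph(m - 1)   # m vertices, centre 0
    path = nx.path_graph(n)       # n vertices
    return nx.cartesian_product(star, path)

def is_induced_matching(G, edges):
    verts = [v for e in edges for v in e]
    if len(set(verts)) != 2 * len(edges):   # edges must be pairwise disjoint
        return False
    # the subgraph induced by the endpoints may contain no extra edges
    return G.subgraph(verts).number_of_edges() == len(edges)

def im(G):
    E = list(G.edges())
    for k in range(len(E), 0, -1):
        if any(is_induced_matching(G, c) for c in combinations(E, k)):
            return k
    return 0

for m, n in [(3, 2), (4, 2), (3, 3), (3, 4)]:
    print(m, n, im(stacked_book(m, n)))
\end{verbatim}
For the small pairs $(m,n)$ listed, the output can be compared directly with the values and bounds established above.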
It must be noted that finding the lower bounds or the MIM numbers for the complete class of stacked-book graphs will take rigorous effort and may therefore be worth considering as a separate task. \end{document}
\begin{document} \draft \title{Quantum Error-Correcting Codes Need Not Completely Reveal the Error Syndrome} \author{P.W.~Shor, AT\&T Research, Murray Hill, NJ 07974\\ and J.A.~Smolin, University of California at Los Angeles, Los Angeles, CA 90024} \maketitle \begin{abstract} Quantum error-correcting codes so far proposed have not worked in the presence of noise which introduces more than one bit of entropy per qubit sent through a quantum channel, nor can any code which identifies the complete error syndrome. We describe a code which does not find the complete error syndrome and can be used for reliable transmission of quantum information through channels which add more than one bit of entropy per transmitted bit. In the case of the depolarizing channel our code can be used in a channel of fidelity .8096. The best existing code worked only down to .8107. \end{abstract} \pacs{PACS: 03.65.Bz, 89.70.+c} \narrowtext Several recent papers have dealt with the topic of good quantum error-correcting codes~\cite{SG,purification,CS,Steane,LF,ekert,PVK,EPP,SB}. All of the efficient codes completely identify what happens to the state as it interacts with the environment. In other words they identify the exact error syndrome. The formal conditions which any good code must satisfy (see~\cite{EPP}) are less restrictive, though some have conjectured that error-correcting codes must indeed identify the complete error syndrome. There are trivial codes which do not gain full knowledge about the error syndrome, for example any of the codes which do identify the error syndrome can be supplemented by an additional quantum system about which no information is sought or gained. Such examples are trivial since the additional system is in a product state with the system which is actually involved in the coding and it is clear that such a code can only be less efficient than the codes from which they are derived. In \cite{EPP}, a ``hashing'' code is presented which, while it does not completely identify the error syndrome, achieves precisely the same rate as the ``breeding'' protocol of \cite{purification,EPP} which does. Here we present a non-trivial code which does not identify the entire error syndrome {\em and} can work in a noisier channel than any code which does. The typical error model used in analyzing quantum error-correcting codes is that of independent depolarization. In terms of the probability $x$ of not being depolarized, each qubit (two state quantum system) which is sent through a channel has a probability $f=\frac{3x+1}{4}$ of being transmitted untouched, and equal probabilities $(1-f)/3$ of 1) flipping the amplitude ($\ket{\!\!\uparrow}$ vs. $\ket{\!\!\downarrow}$), 2) changing the sign of the relative phase of $\ket{\!\!\uparrow}$ and $\ket{\!\!\downarrow}$ or 3) both. The specification of which type of error (or none) happened to each qubit is what is known as the error syndrome. Clearly, if one knew the error syndrome, all the qubits could be corrected by simply flipping each bit's direction or phase (or both) as needed, using the Pauli matrices. We present our code first in the language of quantum entanglement purification protocols, and then describe the corresponding direct quantum error-correcting code. In quantum purification protocols~\cite{purification,EPP} two-particle states $\ket{\Phi^+}=1/\sqrt{2} (\ket{\!\uparrow\uparrow}+\ket{\!\downarrow\downarrow})$ are prepared by one participant (Alice) and one of the particles is sent through the channel to the other participant (Bob). 
Using the four Bell states \begin{eqnarray} &\ket{\Phi^\pm}=\frac{1}{\sqrt{2}} (\ket{\!\uparrow\uparrow}\pm\ket{\!\downarrow\downarrow})& \nonumber\\ &{\rm and}& \\ &\ket{\Psi^\pm}=\frac{1}{\sqrt{2}} (\ket{\!\uparrow\downarrow}\pm\ket{\!\downarrow\uparrow}) & \nonumber \end{eqnarray} as a basis, the error model can be expressed as taking the $\ket{\Phi^+}$ states into density matrices of the Werner form \begin{equation} W=\left( \begin{array}{cccc} f& & & \\ &\frac{1-f}{3}& & \\ & &\frac{1-f}{3}& \\ & & &\frac{1-f}{3}\\ \end{array}\right) \ . \label{werner} \end{equation} $f=\melement{\Phi^+}{W}$ is then the fidelity of $W$ relative to $\ket{\Phi^+}$. In this language, amplitude errors interchange $\ket{\Phi}$ and $\ket{\Psi}$ states, and phase errors interchange plus and minus states. Our improved purification protocol uses the Bilateral exclusive or (BXOR) operation of~\cite{purification}. Alice and Bob each apply the exclusive or (XOR) operation: \begin{equation} \begin{array}{ccr} U_{XOR}& = & \ket{\!\uparrow_S\uparrow_T}\bra{\uparrow_S\downarrow_T}+ \ket{\!\uparrow_S\downarrow_T}\bra{\uparrow_S\uparrow_T} \\ & & +\ket{\!\downarrow_S\downarrow_T}\bra{\downarrow_S\downarrow_T} + \ket{\!\downarrow_S\uparrow_T}\bra{\downarrow_S\uparrow_T} \end{array} \end{equation} to the corresponding particles of two Bell states which have been shared through the channel. It can be easily seen that when $U_{XOR}$ is applied to two qubits, each in one of the basis states $\ket{\!\uparrow}$ or $\ket{\!\downarrow}$, that one of them is left alone and the other is left in the state corresponding to the classical XOR of the two original states. These are called the source and target qubits respectively. The first stage of the purification protocol is for Alice and Bob to group their noisy pairs of particles into blocks of size $k$. Next they apply the BXOR operation with one pair as the source and each of the other pairs in the block as the target in turn. The target qubits are all measured in the $z$ basis and Alice sends her classical results as bitstring $x$ to Bob, whose results are bitstring $y$, as shown in Figure~\ref{ss2}. Bob compares his results to Alice's and checks whether each bit agrees or disagrees, which is just taking the bitwise XOR, $x \oplus y$. The remaining unmeasured source pair is then in one of $2^{k-1}$ post-selected density matrices corresponding to the $2^{k-1}$ results of $x\oplus y$. All are diagonal in the Bell basis. The expected entropy of this ensemble is expressed most simply by a recursively defined function: \begin{eqnarray} && S(n,M) := \nonumber \\ &&\ {\rm if}(n==1) {\rm\ then\ return\ }( h(M) ) \nonumber \\ &&\ {\rm else\ return}\nonumber\\ &&\ \ \ p_0(M) S(n-1,M_0(M)) + p_1(M) S(n-1,M_1(M))\nonumber\\ \end{eqnarray} where $h(M)=-{\rm Tr}(M\log M)$, $p_0(M)$ and $p_1(M)$ are the probabilities that Alice and Bob's results with matrix M as a source and target state $W$ will agree or disagree, and $M_0(M)$ and $M_1(M)$ are the post-selected density matrices for the source matrix $M$ when Alice and Bob's results agree and disagree. Bob's view of this is shown in Figure~\ref{ss1}. It is straightforward to calculate these functions using the facts that the BXOR operation maps Bell states into Bell states as shown in Table~\ref{belltable}, that the matrices $M$ and $W$ are Bell diagonal, and that Alice and Bob's measurements will agree when then have $\ket{\Phi^\pm}$ and disagree when they have $\ket{\Psi^\pm}$. 
We then have for the $p$ functions \widetext \begin{eqnarray} &p_0(M)=(f+g)\langle\Phi^+|M|\Phi^+\rangle + 2g \langle\Psi^+|M|\Psi^+\rangle + (f+g)\langle\Phi^-|M|\Phi^-\rangle + 2g \langle\Psi^-|M|\Psi^-\rangle & \nonumber\\ &p_1(M)=2g \langle\Phi^+|M|\Phi^+\rangle + (f+g)\langle\Psi^+|M|\Psi^+\rangle + 2g \langle\Phi^-|M|\Phi^-\rangle + (f+g)\langle\Psi^-|M|\Psi^-\rangle &\end{eqnarray} \narrowtext \noindent and for the $M$ functions \begin{eqnarray} &&\langle\Phi^+|M_0(M)|\Phi^+\rangle =\frac{f\melement{\Phi^+}{M} + g\melement{\Phi^-}{M}}{p_0(M)} \nonumber\\ &&\melement{\Psi^+}{M_0(M)} =\frac{g\melement{\Psi^+}{M} + g\melement{\Psi^-}{M}}{p_0(M)} \nonumber\\ &&\melement{\Phi^-}{M_0(M)} =\frac{g\melement{\Phi^+}{M} + f\melement{\Phi^-}{M}}{p_0(M)} \nonumber\\ &&\melement{\Psi^-}{M_0(M)} =\frac{g\melement{\Psi^+}{M} + g\melement{\Psi^-}{M}}{p_0(M)} \end{eqnarray} and \begin{eqnarray} &&\langle\Phi^+|M_1(M)|\Phi^+\rangle =\frac{g\melement{\Phi^+}{M} + g\melement{\Phi^-}{M}}{p_1(M)} \nonumber\\ &&\langle\Psi^+|M_1(M)|\Psi^+\rangle =\frac{f\melement{\Psi^+}{M} + g\melement{\Psi^-}{M}}{p_1(M)} \nonumber\\ &&\langle\Phi^-|M_1(M)|\Phi^-\rangle =\frac{g\melement{\Phi^+}{M} + g\melement{\Phi^-}{M}}{p_1(M)} \nonumber\\ &&\langle\Psi^-|M_1(M)|\Psi^-\rangle =\frac{g\melement{\Psi^+}{M} + f\melement{\Psi^-}{M}}{p_1(M)} \end{eqnarray} where we have written $g=(1-f)/3$ for convenience. Note that $M_0(M)$ and $M_1(M)$ are diagonal, so these equations specify them completely. If Alice and Bob have a large number of such results, and when $S(k,W) < 1$, they can use the breeding purification method of~\cite{purification} to completely determine the error syndrome {\em of these remaining states}~\cite{FN}. The complete error syndrome of the amplitude errors is found, but since the BXORs done within the blocks of $k$ determine nothing about the phase errors, only the overall phase error of the block of $k$ is determined. This procedure will result in a yield of pure $\ket{\Phi^+}$ states of $\frac{1-S(k,W)}{k}$. These can then be used for quantum teleportation~\cite{teleportation} to transmit qubits safely through the noisy channel. The breeding protocol assumes Alice and Bob share a set of unknown Bell states and a supply of $\ket{\Phi^+}$ states known to be pure. If a sequence of $n/2$ Bell states is represented by a length $n$ bitstring $x$, the parity $x \cdot s$ of any subset $s$ of the bits of the string can be collected into the amplitude bit ($\ket{\Phi}$ vs. $\ket{\Psi}$) of one of the initially pure pairs, without disturbing the $n/2$ unknown Bell states. This is accomplished by repeatedly using the BXOR operation with the pure pair as the target. Each of the unknown states whose amplitude bit is part of $s$ is used as a source, and each one whose phase bit is selected by $s$ is pre- and post-processed by the bilateral rotation of $\pi/2$ around the $y$ axis (which has the effect of swapping the amplitude and phase bits, and then swapping them back). The subset parity $x \cdot s$ is then determined by measuring the target state in the $z$ basis. The probability of any two distinct strings $x$ and $x'$ having $m$ such random subset parities all agree is $1/2^m$. Given a random independent noise process, the original ensemble of possible bitstrings has most of its weight in a set of ``typical'' strings containing $2^{\frac{n}{2}S+\delta}$ ($S$ is the entropy per Bell state, $\delta$ is small compared to $nS$).
For such a distribution the collision probability of {\em any} string in the typical set other than $x$ having the same $m$ random subset parities is \begin{equation} p_c=\frac{2^{\frac{n}{2}S+\delta}}{2^m}\ . \end{equation} The probability of $x$ falling outside of the typical set is of order $O({\rm exp}(-\delta^2 n))$~\cite{schumacher}. Therefore, if $m$ is chosen slightly larger than $\frac{n}{2}S$, the original string $x$ can be determined from the $m$ subset parities with high probability. All the Bell states can then be corrected to pure $\ket{\Phi^+}$ states. About $m\approx \frac{n}{2}S$ pure $\ket{\Phi^+}$ states had to be measured in the process of finding the $m$ subset parities, and so must be replaced, for a net yield of $D=1-S$. The breeding method was only shown in~\cite{purification} to work on a single Werner channel rather than the ensemble resulting from our $k$-way encoding. If Alice and Bob simply had $k-1$ channels of different fidelities they could clearly just use the breeding method, or any other, on each channel separately. However, Alice does not know into which type of channel each pair falls. Fortunately, the breeding protocol depends only on an ensemble of $n$ bits having most of its weight in a set of ``typical'' strings containing $2^{\frac{n}{2}S+\delta}$ members, which the receiver Bob can enumerate. It is apparent that the individual $k-1$ channels each have such a typical set, and so will the collection of all of them, even though only Bob can determine this set. Another important feature of the breeding and hashing protocols is that Alice and Bob choose {\em randomly} among a set of operations determined only by the channel fidelity. This implies that Alice can do her part of the procedure with no knowledge of any sort from Bob. Because of the formal equivalence of measurement of half of a Bell state and preparation of a qubit, any purification protocol requiring only one-way communication can be converted into a more explicit quantum error-correcting code~\cite{EPP}. Our protocol must work regardless of Alice's classical measurement results within the blocks of $k$. (Different results cannot convey any information to Alice, because her half of each pair has not even interacted with the noise.) In particular, our protocol must work when Alice's results are all $\ket{\!\!\downarrow}$. This means that Bob's bits, before having been acted on by the noise, must have been prepared all in the same state, without specifying which state that is. In other words, Alice prepares a state of the form $\frac{1}{\sqrt{2}}(\ket{\!\uparrow\uparrow\uparrow\ldots\uparrow\uparrow}+ \ket{\!\downarrow\downarrow\downarrow\ldots\downarrow\downarrow})$ and sends $k-1$ of the bits through the channel. Bob's half of the BXOR operation is done as the decoding step, and amounts to the incomplete measurement of which of the qubits have different amplitude from the first, without determining the actual amplitude of any of them other than relative to the first. The hashing method applied directly to the states $W$ (the $k=1$ case) determines the full error syndrome, and allows error correction in channels whose fidelity satisfies $h(W_f)<1$. $h(W_f)=1$ for $f=.8107$. Our new method extends this to as low as $f=.8096$ for $k=5$. Other values are given in Table~\ref{ktab}. The fraction $D$ of the bits transmitted through a channel which can be protected for a given channel fidelity is plotted in Figure~\ref{plot} for $k=1$~to~$7$.
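The threshold fidelities quoted above and in Table~\ref{ktab} can be reproduced numerically from the recursion for $S(n,M)$ together with the post-selection maps $M_0$ and $M_1$. The sketch below is our own illustration, not part of the original text; it orders the Bell-diagonal components as $(\Phi^+,\Psi^+,\Phi^-,\Psi^-)$, uses the $M_0$, $M_1$ entries as given above, and bisects for the fidelity at which $S(k,W_f)=1$.
\begin{verbatim}
from math import log2

def h(M):
    # entropy (bits) of a Bell-diagonal state given as probabilities
    return -sum(p * log2(p) for p in M if p > 0)

def step(M, f):
    # one BXOR with a fresh Werner pair of fidelity f as target;
    # returns agree/disagree probabilities and post-selected source states
    g = (1 - f) / 3
    mPp, mSp, mPm, mSm = M                # Phi+, Psi+, Phi-, Psi-
    p0 = (f + g) * (mPp + mPm) + 2 * g * (mSp + mSm)
    p1 = 2 * g * (mPp + mPm) + (f + g) * (mSp + mSm)
    M0 = [(f * mPp + g * mPm) / p0, (g * mSp + g * mSm) / p0,
          (g * mPp + f * mPm) / p0, (g * mSp + g * mSm) / p0]
    M1 = [(g * mPp + g * mPm) / p1, (f * mSp + g * mSm) / p1,
          (g * mPp + g * mPm) / p1, (g * mSp + f * mSm) / p1]
    return p0, M0, p1, M1

def S(n, M, f):
    # expected entropy of the source pair after BXORs with n-1 targets
    if n == 1:
        return h(M)
    p0, M0, p1, M1 = step(M, f)
    return p0 * S(n - 1, M0, f) + p1 * S(n - 1, M1, f)

def threshold(k, lo=0.75, hi=0.95, tol=1e-6):
    # bisect for the fidelity f at which S(k, W_f) = 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        W = [mid] + [(1 - mid) / 3] * 3
        if S(k, W, mid) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for k in range(1, 8):
    print(k, round(threshold(k), 4))
\end{verbatim}
For $k=1$ this reduces to solving $h(W_f)=1$, which reproduces $f\approx .8107$.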
It is not yet known what the minimum fidelity channel is which can still have some capacity for transmission of undisturbed qubits, and our result only improves the previously known result by about 0.1\%. It is known, and proved in~\cite{EPP}, that channels of fidelity $f\leq 5/8$ have no capacity. There is still obviously a lot of room between that result and ours. Our result demonstrates that quantum error-correcting codes do not need to find the whole error syndrome, a property that any lower bound on the fidelity of a channel which can transmit undisturbed qubits must share. It should be noted as well that this and other protocols which are designed to work on depolarizing Werner channels will work on any noise which acts independently on each particle transmitted through the channel and which turns $\ket{\Phi^+}$'s into density matrices satisfying \begin{equation} g={\rm Max}(\melement{\zeta}{\rho})\geq f_c \label{spin} \end{equation} where the maximum is found over all maximally entangled four-dimensional $\ket{\zeta}$ and $f_c$ is the cutoff fidelity above which the code would work in a depolarizing channel. This is seen by rotating $\zeta_{max}$ into the direction of $\ket{\Phi^+}$, which can be done by entirely local actions~\cite{Gisin}, and then randomly rotating the state by applying a randomly selected $SU(2)$ to both Alice's and Bob's particles (this is the random bilateral rotation procedure of~\cite{purification}, explained in more detail in~\cite{EPP}). This results in a Werner density matrix of fidelity $f=g$ given by Eq.~\ref{spin}. \begin{table}[p] \begin{tabular}{r|llll|l} &\multicolumn{4}{c|}{source}& \\ target&$\Psi^-$&$\Phi^-$&$\Phi^+$&$\Psi^+$\\ \cline{1-6} &$\Psi^+$&$\Phi^+$&$\Phi^-$&$\Psi^-$&(source)\\ $\Psi^-$&$\Phi^-$&$\Psi^-$&$\Psi^-$&$\Phi^-$&(target)\\ \cline{1-6} &$\Psi^+$&$\Phi^+$&$\Phi^-$&$\Psi^-$&(source)\\ $\Phi^-$&$\Psi^-$&$\Phi^-$&$\Phi^-$&$\Psi^-$&(target)\\ \cline{1-6} &$\Psi^-$&$\Phi^-$&$\Phi^+$&$\Psi^+$&(source)\\ $\Phi^+$&$\Psi^+$&$\Phi^+$&$\Phi^+$&$\Psi^+$&(target)\\ \cline{1-6} &$\Psi^-$&$\Phi^-$&$\Phi^+$&$\Psi^+$&(source)\\ $\Psi^+$&$\Phi^+$&$\Psi^+$&$\Psi^+$&$\Phi^+$&(target)\\ \end{tabular} \caption{The BXOR mappings of Bell states onto Bell states \label{belltable}} \end{table} \begin{table} \begin{tabular}{lll|ll} k&f&&k&f\\ \cline{1-5} 1&.8107&&8&.8101\\ 2&.8115&&9&.8101\\ 3&.8099&&10&.8103\\ 4&.8101&&11&.8104\\ 5&.8096\ \ \ Best&&12&.8106\\ 6&.8100&&13&.8107\\ 7&.8098&&14&.8108\\ \end{tabular} \caption{The value of fidelity $f$ for which $S(k,f)=1$. Values of $k$ not shown all work less well than the direct hashing method ($k=1$). \label{ktab}} \end{table} \widetext \begin{figure} \caption{The first stage of the $k=4$ code, showing the overall purification view.} \label{ss2} \end{figure} \begin{figure} \caption{Bob's view of the conditional $M$'s as the BXORs are done in sequence.} \label{ss1} \end{figure} \begin{figure} \caption{The yield $D$ of distillable $\ket{\Phi^+}$ states as a function of channel fidelity, for $k=1$ to $7$.} \label{plot} \end{figure} \end{document}
\begin{document} \title{Theory of versatile fidelity estimation with confidence} \author{Akshay Seshadri} \affiliation{Department of Physics, University of Colorado Boulder, Boulder, USA} \author{Martin Ringbauer} \affiliation{Institut f\"{u}r Experimentalphysik, Universit\"{a}t Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria} \author{Thomas Monz} \affiliation{Institut f\"{u}r Experimentalphysik, Universit\"{a}t Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria} \affiliation{Alpine Quantum Technologies GmbH, 6020 Innsbruck, Austria} \author{Stephen Becker} \affiliation{Department of Applied Mathematics, University of Colorado Boulder, Boulder, USA} \date{\today} \begin{abstract} Estimating the fidelity with a target state is important in quantum information tasks. Many fidelity estimation techniques present a suitable measurement scheme to perform the estimation. In contrast, we present techniques that allow the experimentalist to choose a convenient measurement setting. Our primary focus lies on a method that constructs an estimator with nearly minimax optimal confidence interval for any specified measurement setting. We demonstrate, through a combination of theoretical and numerical results, various desirable properties for the method: robustness against experimental imperfections, competitive sample complexity, and accurate estimates in practice. We compare this method with Maximum Likelihood Estimation and the associated Profile Likelihood method, a Semi-Definite Programming based approach, as well as a popular direct fidelity estimation technique. \end{abstract} \maketitle \section{Introduction} Quantum information, though a relatively nascent branch of research, has pervaded into many areas of physics and even other subjects like computer science. This prevalence is owed to a large part to the wide range of applications that benefit from a quantum approach, such as computation~\cite{ekert1996quantum}, simulation~\cite{Georgescu2014Review}, communication~\cite{gisin2007quantum}, and metrology and sensing~\cite{degen2017quantum, giovannetti2011advances}. All of these applications require us to accurately and reliably prepare a desired quantum state. Yet, \emph{verifying} that the experimentally prepared state is indeed what we intended is a non-trivial task. A commonly used metric to judge the success of state preparation is the fidelity between the target state $\rho$ and the prepared state $\sigma$. When the target state is pure, the fidelity is simply given as $F(\rho, \sigma) = \tr(\rho \sigma)$~\cite{Nielsen2000}. A common way to estimate the fidelity is to first reconstruct the experimentally prepared quantum state and then compute the fidelity between the reconstruction and the target state. Quantum state tomography is an active area of research, and several methods have been proposed to reconstruct the quantum state \cite{Vogel1989,Hradil1997, Blume-Kohout2010BME, Gross2010, Cramer2010}. However, these methods suffer from an exponential resource requirement as the Hilbert space grows, making them infeasible for all but the smallest systems. Particularly, when the task is merely to estimate the fidelity, a method known as Direct Fidelity Estimation (DFE)~\cite{Flammia2011, DaSilva2011} has been shown to achieve the task with exponentially fewer resources. DFE and related methods~\cite{Huang2020} estimate the fidelity directly from measurement outcomes without going through the intermediate step of reconstructing the state. 
These fidelity estimation methods specify measurement protocols, typically involving sampling measurement settings randomly, that one need to follow in order to estimate the fidelity. In practice, however, experimenters tend to use a fixed set of measurement settings instead of randomly sampling them, which would result in loss of theoretical guarantees. Furthermore, there can be cases where it is preferable to implement a different measurement scheme than the ones prescribed by these approaches. On the other hand, methods like maximum likelihood estimation (MLE) work for arbitrary measurement schemes, yet, in addition to the need for a costly reconstruction of the quantum state, they do not provide any guarantees on the estimate. Rigorous confidence intervals for estimated fidelities are crucial for applications from quantum error correction~\cite{Gottesman1997} to quantum key distribution, where the security of a protocol rests on proving the presence of entanglement~ \cite{curty2004entanglement, van2007experimental}. Entanglement verification in turn is typically achieved using an entanglement witness~\cite{brandao2005quantifying, bourennane2004experimental}, which is intimately tied to fidelity estimation~\cite{bourennane2004experimental}, again highlighting the need for rigorous confidence intervals. Even in cases where state reconstruction with a good confidence interval is possible, such as a bound in trace distance as given in Ref.~\cite{guctua2020FastTomography}, the number of measurements needed for tomography can be much larger than what is needed for fidelity estimation~\cite{Flammia2011}. There is a different approach called Variational Quantum Fidelity Estimation that gives a fidelity estimate with rigorous bounds by running a variational quantum algorithm on a quantum computer, but it is guaranteed to work only for low-rank states \cite{cerezo2020variational}. To summarize, we have identified two concerns with standard approaches for fidelity estimation: most do not allow for arbitrary measurement settings, and/or do not provide rigorous confidence intervals. Here, we address both of these concerns. We provide a versatile approach for estimating the fidelity directly from raw data for \emph{any} measurement scheme, without the need for state reconstruction. More precisely, given any target state, measurement scheme, and desired confidence level, our method constructs an estimator for the fidelity that takes raw data and gives a fidelity estimate almost instantaneously. Furthermore, the estimator comes equipped with a rigorous confidence interval that is guaranteed to be close to minimax optimal.\footnote{Larger at most by a factor of order 1 for sufficiently large confidence levels; see section~\ref{secn:minimax_method_theory}.} Turning this around, for a target confidence level $1 - \epsilon$, our method achieves a confidence interval of size $2\widehat{\mathcal{R}}_\ast$ -- where $\widehat{\mathcal{R}}_\ast$ is called the \emph{risk} (or additive error) of the estimate -- with a sample complexity of $\approx \ln(2/\epsilon)/(2\widehat{\mathcal{R}}_\ast^2)$ independent of the target state $\rho$, when measuring in the basis defined by $\rho$ (see section \ref{secn:minimax_method_optimal_risk}). While such a measurement scheme may often be impractical, we will introduce an alternative scheme using Pauli measurements that achieves a similar sample complexity for stabilizer states. 
This finding demonstrates that our method can give a scalable sample complexity with a judicious choice of the measurement protocol. We will begin by describing the theory behind our approach in the section \ref{secn:minimax_method_theory}. Then, we compute the sample complexity of the method in section \ref{secn:minimax_method_optimal_risk} and describe a Pauli measurement scheme similar to DFE (in procedure and sample complexity) in section \ref{secn:minimax_method_RPM_scheme}. In section \ref{secn:minimax_method_robustness}, we demonstrate the robustness of the estimator generated by our method against noise. Finally, we compare our method with DFE, MLE, profile likelihood and a semidefinite programming approach in section \ref{secn:other_methods_comparison}. \section{Minimax method\label{secn:minimax_method}} Our approach is based on recent results in statistics by Juditsky \& Nemirovski~\cite{Juditsky2009,goldenshluger2015hypothesis,juditsky2018near}, who describe the estimation of linear functionals in a general setting. The risk associated with the estimate is nearly minimax optimal, and so we call it the minimax method. Roughly speaking, a minimax optimal estimator gives the smallest possible symmetric confidence interval in the worst possible scenario (this will be made precise later). In section \ref{secn:minimax_method_theory}, we elaborate the main ideas involved in estimating fidelity using the minimax method. An abstract version of the method with the theoretical details is given in Appendix~\ref{app:minimax_theory}, while an application oriented presentation of the key results is given in the accompanying Ref.~\cite{PRL}. \subsection{Fidelity estimation using the minimax method\label{secn:minimax_method_theory}} Suppose that we are working with a quantum system that has a $d$-dimensional Hilbert space over $\mathbb{C}$, $d \in \mathbb{N}$. The states of this system can be described by density matrices, which are $d \times d$ positive semidefinite (complex-valued) matrices with unit trace. We denote the set of density matrices by $\mathcal{X}$. Our goal is to estimate the fidelity of the state $\sigma \in \mathcal{X}$ prepared in the lab with a pure target state $\rho \in \mathcal{X}$, where pure means that $\rho$ is a rank-one matrix. In this case, the fidelity between $\rho$ and $\sigma$ is given as $F(\rho, \sigma) = \tr(\rho \sigma)$, which is a linear function of $\sigma$ for a fixed $\rho$. In practice, one needs to estimate the fidelity from partial information about the state obtained through measurements. Any quantum measurement can be described by a positive operator-valued measure (POVM), which comprises of a set of positive semidefinite matrices that sum to identity. We consider the case where an experimenter uses $L$ different measurement settings, where the POVM for the $l^\text{th}$ setting ($l = 1, \dotsc, L$), is described by $\{E^{(l)}_1, \dotsc, E^{(l)}_{N_l}\}$. The experimenter performs $R_l$ repetitions (shots) of the $l^\text{th}$ POVM. The probability $p_\sigma^{(l)}(k)$ that a particular outcome $k \in \{1, \dotsc, N_l\}$ is obtained upon measuring the $l^{\text{th}}$ POVM when the state of the system is $\sigma$ is given by Born's rule as $p_\sigma^{(l)}(k) = \tr(E^{(l)}_k \sigma)$. Note that Born's rule can give zero probabilities for some of the outcomes depending on the state and the POVM. On the other hand, a technical requirement of Juditsky \& Nemirovski's method \cite{Juditsky2009} is that all outcome probabilities are non-zero. 
For this reason, we include a small positive parameter $\epsilon_o \ll 1$ to make the outcome probabilities positive, i.e., \begin{equation*} p_\sigma^{(l)}(k) = \frac{\tr(E^{(l)}_k \sigma) + \epsilon_o/N_l}{1 + \epsilon_o}. \end{equation*} We choose $\epsilon_o = 10^{-5}$ in the numerical simulations, so that the Born probabilities are practically unaffected. Because $\epsilon_o$ is small with respect to typical experimental imperfections, we will usually drop terms proportional to $\epsilon_o$ in the theoretical calculations. Our goal is to construct an estimator $\widehat{F}$ for the fidelity $F(\rho, \sigma)$ between the experimental state $\sigma$ and the target state $\rho$, using the observed outcomes corresponding to the chosen measurement settings. By an estimator, we mean any function that takes the observed outcomes as input and gives an estimate for the fidelity as an output. Constructing a good estimator first requires a measure of error to judge the performance of the estimator. For this purpose, we use the $\epsilon$-risk defined by Juditsky \& Nemirovski \cite{Juditsky2009}. Intuitively speaking, having an $\epsilon$-risk of $\mathcal{R}$ means that the error in the estimation is at most $\mathcal{R}$ with a probability larger than $1 - \epsilon$. Therefore, the $\epsilon$-risk defines a symmetric confidence interval for a confidence level of $1 - \epsilon$. More precisely, given a fidelity estimator $\widehat{F}$ and a confidence level $1 - \epsilon$, the $\epsilon$-risk of $\widehat{F}$ is given by \begin{align} \mathcal{R}(\widehat{F}; \epsilon) = \inf \bigg\{&\delta\ \big\vert \sup_{\chi \in \mathcal{X}} \nonumber \\ \Prob_{\text{outcomes} \sim p_\chi}\bigg[ &\left|\widehat{F}(\text{outcomes}) - \tr(\rho \chi)\right| > \delta \bigg] < \epsilon\bigg\}. \label{eqn:eps_risk} \end{align} Here, ``$\text{outcomes} \sim p_\chi$'' means that the outcomes for the $l^\text{th}$ measurement are given by the ($\epsilon_o$-modified) Born rule probabilities $p_\chi^{(l)}$ for a state $\chi$, for all $l = 1, \dotsc, L$. The definition of the $\epsilon$-risk says that $\mathcal{R}(\widehat{F}; \epsilon)$ is the smallest number such that the probability that the estimator $\widehat{F}$ has an error larger than $\mathcal{R}(\widehat{F}; \epsilon)$ is less than $\epsilon$, irrespective of the underlying state of the system. Importantly, the $\epsilon$-risk for any given estimator $\widehat{F}$ can be pre-computed, even before performing a measurement. This is possible because it depends only on the target state, the confidence level, and the chosen measurement settings, while implicitly accounting for all possible measurement outcomes in all possible states. We can thus construct estimators that nearly achieve the minimum possible risk before taking any data. This minimum possible risk $\mathcal{R}_*(\epsilon)$ for a chosen confidence level is called the \textit{minimax optimal risk}, and is obtained by minimizing $\mathcal{R}(\widehat{F}; \epsilon)$ over all possible estimators $\widehat{F}$, i.e., $\mathcal{R}_*(\epsilon) = \inf_{\widehat{F}} \mathcal{R}(\widehat{F}; \epsilon)$. Therefore, the minimax optimal risk gives the smallest possible error by minimizing over all the estimators $\widehat{F}$, while maximizing over all the states $\chi \in \mathcal{X}$ (see Eq.~\eqref{eqn:eps_risk}). In practice, however, it is computationally very difficult to sift through all possible estimators and choose the optimal one.
For this purpose, Juditsky \& Nemirovski~\cite{Juditsky2009} restrict their attention to a subset $\mathcal{F}$ of possible estimators called affine estimators. Any affine estimator $\phi \in \mathcal{F}$ is of the form $\phi = \sum_{l = 1}^L \phi^{(l)}$, where $\phi^{(l)}$ is an estimator for the $l^{\text{th}}$ measurement setting. That is, $\phi^{(l)}$ takes an outcome of $l^\text{th}$ POVM as input and returns a number as the output. Remarkably, Juditsky \& Nemirovski~\cite{Juditsky2009} show how to construct an affine estimator that achieves nearly minimax optimal performance, and therefore, restricting attention to affine estimators is not a problem. Specifically, if $\widehat{F}_* \in \mathcal{F}$ is the near-optimal affine estimator constructed by Juditsky \& Nemirovski's procedure, then the risk $\widehat{\mathcal{R}}_*(\epsilon)$ of $\widehat{F}_*$ computed by their procedure satisfies \begin{align} \mathcal{R}(\widehat{F}_*; \epsilon) &\leq \widehat{\mathcal{R}}_*(\epsilon) \leq \vartheta(\epsilon) \mathcal{R}_*(\epsilon), \label{eqn:risk_minimax_guarantee} \\ \vartheta(\epsilon) &= 2 + \frac{\ln(64)}{\ln(0.25/\epsilon)} \label{eqn:risk_minimax_guarantee_factor} \end{align} for $\epsilon \in (0, 0.25)$. We restrict $\epsilon$ to the interval $(0, 0.25)$ (or equivalently, to confidence levels greater than $75\%$) so that $\vartheta(\epsilon)$ is well-defined. From Eq.~\eqref{eqn:risk_minimax_guarantee}, we see that the risk $\widehat{\mathcal{R}}_*(\epsilon)$ computed by Juditsky \& Nemirovski's procedure is actually an upper bound on the (unknown) $\epsilon$-risk $\mathcal{R}(\widehat{F}_*, \epsilon)$ of the near-optimal affine estimator $\widehat{F}_*$. Since the probability that the estimator $\widehat{F}_*$ fails by an error more than $\mathcal{R}(\widehat{F}_*, \epsilon)$ is less than $\epsilon$, and because $\mathcal{R}(\widehat{F}_*, \epsilon) \leq \widehat{\mathcal{R}}_*(\epsilon)$ we can conclude that the probability that $\widehat{F}_*$ fails by more than $\widehat{\mathcal{R}}_*(\epsilon)$ is also less than $\epsilon$. Therefore, the risk $\widehat{\mathcal{R}}_*(\epsilon)$ defines a confidence interval corresponding to the chosen confidence level $1 - \epsilon$. Eq.~\eqref{eqn:risk_minimax_guarantee} guarantees that the confidence interval defined by $\widehat{\mathcal{R}}_*(\epsilon)$ is nearly minimax optimal. This is because the risk $\widehat{\mathcal{R}}_*(\epsilon)$ of the near-optimal affine estimator $\widehat{F}_*$ computed by Juditsky \& Nemirovski's procedure is at most a constant times the minimax optimal risk, where the constant factor $\vartheta(\epsilon)$ depends only on the chosen confidence level. Note that $\vartheta(0.1) < 6.54$ and that $\vartheta(\epsilon)$ is a decreasing function of $\epsilon$, converging to $2$ as $\epsilon \to 0$. Therefore, for confidence levels greater than $90\%$, $\widehat{\mathcal{R}}_*$ is guaranteed to be close to the minimax optimal risk by a factor less than $6.5$. In practice, the estimator typically performs better than the theoretically guaranteed bound of Eq.~\eqref{eqn:risk_minimax_guarantee}. In the remainder of this study, we omit the argument $\epsilon$ when writing $\widehat{\mathcal{R}}_*$ for the sake of notational simplicity. A short summary of the different types of risk mentioned in this section can be found in Table~\ref{tab:types_of_risk}. 
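As a quick arithmetic check of the constant $\vartheta(\epsilon)$ in Eq.~\eqref{eqn:risk_minimax_guarantee_factor}, the following small sketch (ours, not part of the original analysis) evaluates it for a few confidence levels:
\begin{verbatim}
from math import log

def theta(eps):
    # vartheta(eps) = 2 + ln(64) / ln(0.25/eps), valid for eps in (0, 0.25)
    return 2 + log(64) / log(0.25 / eps)

print(theta(0.10))   # about 6.54, matching the bound quoted in the text
print(theta(0.01))   # decreases towards 2 as eps -> 0
\end{verbatim}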
\begin{table}[h] \begin{tabular}{l p{7.3cm}} $\mathcal{R}_*(\epsilon)$ & Minimax optimal risk, which is the minimum possible additive error with a confidence level of $1 - \epsilon$, accounting for all possible states, measurement outcomes, and all possible estimators. \\[0.3cm] $\mathcal{R}(\widehat{F}_*; \epsilon)$ & $\epsilon$-risk of the estimator $\widehat{F}_*$ as defined in Eq.~\eqref{eqn:eps_risk}. This is the smallest possible additive error for the estimator $\widehat{F}_*$ with a confidence level of $1 - \epsilon$, accounting for all possible states and measurement outcomes. \\[0.3cm] $\widehat{\mathcal{R}}_*(\epsilon)$ & Risk (or additive error) of the estimator $\widehat{F}_*$ for a confidence level of $1 - \epsilon$ that is computed by Juditsky \& Nemirovski's procedure. This is an upper bound on the $\epsilon$-risk $\mathcal{R}(\widehat{F}_*; \epsilon)$ and it is nearly minimax optimal in the sense of Eq.~\eqref{eqn:risk_minimax_guarantee}. \end{tabular} \caption{Different types of risk considered in this study. In each case, the true state is unknown, the measurement settings are fixed, and all measurement outcomes consistent with the state and measurement settings are allowed. $\widehat{F}_*$ is the near-optimal affine estimator constructed by Juditsky \& Nemirovski's \cite{Juditsky2009} procedure.} \label{tab:types_of_risk} \end{table} A practically implementable version~\cite{seshadri2021computation} of the procedure outlined by Juditsky \& Nemirovski \cite{Juditsky2009} for constructing the near-optimal affine estimator $\widehat{F}_*$ and its associated risk $\widehat{\mathcal{R}}_*$ is as follows: \begin{enumerate}[leftmargin=0.2cm] \item Find the saddle-point value of the function $\Phi\colon (\mathcal{X} \times \mathcal{X}) \times (\mathcal{F} \times \mathbb{R}_+) \to \mathbb{R}$ defined as \begin{align} \Phi(\chi_1, &\chi_2;\ \phi, \alpha) = \tr(\rho \chi_1) - \tr(\rho \chi_2) + 2\alpha \ln(2/\epsilon) \nonumber \\ &+ \alpha \sum_{l = 1}^L R_l \Bigg[\ln\left(\sum_{k = 1}^{N_l} e^{-\phi^{(l)}_k/\alpha} \frac{\tr(E^{(l)}_k \chi_1) + \epsilon_o/N_l}{1 + \epsilon_o}\right) \nonumber \\ &\hspace{1cm} + \ln\left(\sum_{k = 1}^{N_l} e^{\phi^{(l)}_k/\alpha} \frac{\tr(E^{(l)}_k \chi_2) + \epsilon_o/N_l}{1 + \epsilon_o}\right)\Bigg] \label{eqn:Phi_quantum} \end{align} to a given precision, where $\rho$ is the target state, $\chi_1, \chi_2 \in \mathcal{X}$ are density matrices, $\phi \in \mathcal{F}$ is an affine estimator, and $\alpha > 0$ is a positive number. The number $\alpha$ as such has no intuitive meaning, and enters the function $\Phi$ for mathematical reasons. The function $\Phi$ is itself a mathematical device designed by Juditsky \& Nemirovski to compute the near-optimal affine estimator and the associated risk. Recall that any affine estimator $\phi \in \mathcal{F}$ can be written as $\phi = \sum_{l = 1}^L \phi^{(l)}$. Here, $\phi^{(l)}$ is a function taking outcomes of the $l^{\text{th}}$ POVM as input, and giving a real number as an output. The $l^{\text{th}}$ POVM has $N_l$ measurement outcomes, and $\phi_k^{(l)}$ denotes the output of $\phi^{(l)}$ for the $k^{\text{th}}$ measurement outcome, for each $k = 1, \dotsc, N_l$. Juditsky \& Nemirvoski \cite{Juditsky2009} prove that the function $\Phi$ has a well-defined saddle-point value (see Appendix~\ref{app:minimax_theory} for more details). We present an algorithm to compute the saddle-point value of $\Phi$ to a given precision using convex optimization in Appendix~\ref{app:minimax_numerical}. 
\item We denote the saddle-point value of $\Phi$ by $2\widehat{\mathcal{R}}_*$, i.e., \begin{align} \widehat{\mathcal{R}}_* &= \frac{1}{2} \sup_{\chi_1, \chi_2 \in \mathcal{X}} \inf_{\phi \in \mathcal{F}, \alpha > 0} \Phi(\chi_1, \chi_2; \phi, \alpha) \nonumber \\ &= \frac{1}{2} \inf_{\phi \in \mathcal{F}, \alpha > 0} \max_{\chi_1, \chi_2 \in \mathcal{X}} \Phi(\chi_1, \chi_2; \phi, \alpha) . \label{eqn:JN_risk_saddle_point} \end{align} Suppose that the saddle-point value is attained at $\chi_1^*, \chi_2^* \in \mathcal{X}$, $\phi_* \in \mathcal{F}$ and $\alpha_* > 0$ to within the given precision. Since $\phi^* \in \mathcal{F}$ is an affine estimator, we can write $\phi_* = \sum_{l = 1}^L \phi^{(l)}_*$. Suppose that independent and identically distributed outcomes $\{o^{(l)}_1,\dotsc, o^{(l)}_{R_l}\}$ are observed upon measurement of the $l^\text{th}$ POVM. Then, the optimal estimator $\widehat{F}_* \in \mathcal{F}$ for estimating the fidelity of the state prepared in the lab with the target state is given as \begin{align} \widehat{F}_*(\{o^{(l)}_1, \dotsc, &o^{(l)}_{R_l}\}_{l = 1}^L) = \sum_{l = 1}^L \sum_{r = 1}^{R_l} \phi^{(l)}_*(o^{(l)}_r) + c , \label{eqn:JN_estimator} \intertext{where the constant $c$ is} c &= \frac{1}{2} \left(\tr(\rho \chi_1^*) + \tr(\rho \chi_2^*)\right) . \label{eqn:JN_estimator_constant} \end{align} The procedure outputs $\widehat{F}_*$ and $\widehat{\mathcal{R}}_*$, such that $\mathcal{R}(\widehat{F}_*; \epsilon)~\leq~\widehat{\mathcal{R}}_*$. Then, from Eq.~\eqref{eqn:eps_risk}, we can infer that $|\widehat{F}_*(\text{outcomes}) - \tr(\rho \sigma)| \leq \widehat{\mathcal{R}}_*$ with probability greater than or equal to $1 - \epsilon$, where $\sigma$ is the actual state prepared in the lab. The guarantees given by Eqs.~\eqref{eqn:risk_minimax_guarantee} \& \eqref{eqn:risk_minimax_guarantee_factor} apply. \end{enumerate} Observe that $\widehat{F}_*$ is a function of the outcomes (see Eq.~\eqref{eqn:JN_estimator}), while, the risk $\widehat{\mathcal{R}}_*$ of the estimator is a function of the measurement protocol. We thus know how well the constructed estimator does even before performing an experiment, which can be used for benchmarking different measurement protocols. Although the formal procedure above is rather abstract, in section~\ref{secn:minimax_method_robustness}, we show that the estimator $\widehat{F}_*$ can be understood as an appropriately weighted sum of the observed frequencies, where the weights depend on the target state and measurement scheme. In section~\ref{secn:minimax_method_optimal_risk}, we show that the risk $\widehat{\mathcal{R}}_*$ can be written in terms of classical fidelities determined by the POVMs in the measurement protocol. This is helpful in performing theoretical calculations involving the risk, and potentially useful for benchmarking measurement protocols. Importantly, the above procedure can not only be used to construct an estimator for the fidelity, but for the expectation value of any observable. This can be achieved by replacing the target state $\rho$ in the above equations with the Hermitian operator $\mathcal{O}$ corresponding to the observable whose expectation value we wish to estimate. We discuss the theoretical underpinnings of the minimax method in Appendix~\ref{app:minimax_theory}, starting with a brief overview of Juditsky \& Nemirovski's general set-up \cite{Juditsky2009}, followed by details on adapting their method to estimating fidelity and expectation values. 
Subsequently, details regarding the numerical implementation of the minimax method are given in Appendix~\ref{app:minimax_numerical}. In particular, we outline a convex optimization algorithm for constructing the optimal estimator $\widehat{F}_*$ and the associated risk $\widehat{\mathcal{R}}_*$ given any target state and measurement settings. \subsection{Toy problem: 1-qubit target state} We explain in detail a simple setting where it is possible to understand the estimator that is constructed by the procedure described above. Let the target state be $\rho = \op{1}{1}$, where $\{\ket{0}, \ket{1}\}$ are the eigenstates of Pauli $Z$. Suppose that our experiment consists of performing Pauli $Z$ measurements, and we want to estimate the fidelity of the lab state $\sigma$ with the target state $\rho$. This problem is essentially classical: given the quantum state $\sigma$, we can write it in the computational basis as $\sigma = (1 - p) \op{0}{0} + q \op{0}{1} + q^* \op{1}{0} + p \op{1}{1}$. Since we measure $Z$, we will obtain the outcome $\ket{0}$ with a probability of $1 - p$ and $\ket{1}$ with a probability of $p$, and the problem is to estimate the fidelity $\tr(\rho \sigma) = p$. This is the same as having a Bernoulli random variable with parameter $p$, which we want to estimate. Note that the parameter $p$ corresponds to the probability of observing the outcome $1$ for the Bernoulli random variable. In order to numerically compute the fidelity estimator, we choose a confidence level of $95\%$ and $R = 100$ repetitions of Pauli $Z$ measurement. Since we measure only one POVM, the estimator given in Eq.~\eqref{eqn:JN_estimator} can be written as $\widehat{F}_*(\{o_1, \dotsc, o_R\}) = \sum_{i = 1}^R \phi_*(o_i) + c$, where $o_1, \dotsc, o_R \in \{0, 1\}$ are the measurement outcomes. Here, $\phi_* \in \mathcal{F}$ is an affine estimator corresponding to the saddle-point of the function $\Phi$ (see Eq.~\eqref{eqn:Phi_quantum}) that accepts a measurement outcome ($0$ or $1$) as input and gives a number as an output, while the constant $c$ is as given in Eq.~\eqref{eqn:JN_estimator_constant}. Numerically, we find that the affine estimator $\phi_*$ gives the values $\phi_*(0) \approx -0.476 \times 10^{-2}$ and $\phi_*(1) \approx 0.476 \times 10^{-2}$ for the measurement outcomes $0$ and $1$, respectively. The value of the constant $c$ computed numerically is $c = 0.5$. Now, we show that the estimator $\widehat{F}_*$ constructed by the minimax method is essentially the sample mean estimator for a Bernoulli random variable, but with a small additive constant that pushes the estimate away from the boundary of the parameter space. Note that the sample mean estimator is also the Maximum Likelihood estimator for estimating the parameter of a Bernoulli random variable. Recall that our estimator for fidelity is given as $\widehat{F}_*(\{o_1, \dotsc, o_R\}) = \sum_{i = 1}^R \phi_*(o_i) + c$ for a given set of outcomes $\{o_i\}_{i = 1}^R$. We can then write the affine estimator $\phi_*$ as $\phi_*(x) = (\phi_*(1) - \phi_*(0)) x + \phi_*(0)$ for any outcome $x \in \{0, 1\}$. Therefore, \begin{align} \widehat{F}_*(\{o_i\}_{i = 1}^R) &= \sum_{i = 1}^R \left((\phi_*(1) - \phi_*(0)) o_i + \phi_*(0)\right) + c \notag \\ &= (\phi_*(1) - \phi_*(0)) \sum_{i = 1}^R o_i + (R \phi_*(0) + c) \notag \intertext{where $o_i \in \{0, 1\}$. 
Using the values obtained numerically, we find that} \widehat{F}_*(\{o_i\}_{i = 1}^{100}) &= \frac{0.952}{100} \sum_{i = 1}^{100} o_i + 0.024 \nonumber \end{align} Thus, we can interpret $\widehat{F}_*$ as (approximately) the mean of the sample $o_1, \dotsc, o_{100}$, but with a small additive constant of $0.024$. For finite sample sizes the constant term is justified, because even upon observing all $0$ outcomes, we cannot be certain that the probability $p$ of observing outcome $1$ is $p = 0$. For a similar reason, we have $0.952/100$ as the coefficient for the sum instead of $1/100$. We observe numerically that this coefficient approaches $1 / R$ and the additive constant goes to $0$ as $R$ increases. What if we decide to measure Pauli $X$ (measurement~$2$) in addition to Pauli $Z$ (measurement~$1$)? In this case, the estimator $\widehat{F}_*$ can be written as $\widehat{F}_*(\{o_1, \dotsc, o_R\}) = \sum_{i = 1}^R \phi^{(1)}_*(o_i) + \sum_{i = 1}^R \phi^{(2)}_*(o_i) + c$. Here, $\phi^{(1)}_*$ is the affine estimator at the saddle-point corresponding to the Pauli $Z$ measurement, while $\phi^{(2)}_*$ is the affine estimator at the saddle-point corresponding to the Pauli $X$ measurement. In such a scenario, we find that the saddle-point value corresponding to the Pauli $Z$ measurement is unchanged. That is, $\phi^{(1)}_*(0) \approx -0.476 \times 10^{-2},\ \phi^{(1)}_*(1) \approx 0.476 \times 10^{-2}$, and the constant $c = 0.5$ is also the same as before. On the other hand, the saddle-point value corresponding to Pauli $X$ measurement is $\phi^{(2)}_*(0) = \phi^{(2)}_*(1) = 0$. Therefore, the outcomes from the $Z$ measurement are weighted as before, but those from $X$ measurement are discarded. The reason is simple: for estimating the fidelity with $\rho = \op{1}{1}$, measurement of Pauli $X$ gives no useful information, irrespective of what the actual state $\sigma$ is. Indeed, upon measuring $X$, we get $\ket{+}$ with probability $1/2 + \textnormal{Re}(q)$ and $\ket{-}$ with probability $1/2 - \textnormal{Re}(q)$, both of which are independent of $\tr(\rho \sigma) = p$ that we want to estimate. Thus, our method properly incorporates the available measurements to give an estimator for fidelity. \subsection{Risk and the best sample complexity\label{secn:minimax_method_optimal_risk}} We now learned about the estimator, but what about the risk? We found that the risk $\widehat{\mathcal{R}}_*$ given by Eq.~\eqref{eqn:JN_risk_saddle_point} is half the saddle-point value of the function $\Phi$. However, in this form, it is difficult to infer what this quantity is. In Appendix~\ref{app:minimax_theory}, we show that one can write the risk $\widehat{\mathcal{R}}_*$ given by the minimax method in a form that is more amenable to interpretation as \begin{align} &\widehat{\mathcal{R}}_* = \max_{\chi_1, \chi_2 \in \mathcal{X}} \frac{1}{2} \left(\tr(\rho \chi_1) - \tr(\rho \chi_2)\right) \nonumber \\ &\hspace{1.6cm} \text{s.t.}\ \prod_{l = 1}^L \left[F_C(\chi_1, \chi_2, \{E^{(l)}_k\})\right]^{R_l/2} \geq \frac{\epsilon}{2} \label{eqn:JNriskprop3.1} \intertext{where} &F_C(\chi_1, \chi_2, \{E^{(l)}_k\}) = \left(\sum_{k = 1}^{N_l} \sqrt{\tr\left(E^{(l)}_k \chi_1\right) \tr\left(E^{(l)}_k \chi_2\right)}\right)^2 \label{eqn:classicalfidelity} \end{align} denotes the classical fidelity between the probabilities determined by states $\chi_1$ and $\chi_2$ corresponding to the POVM measurement $\{E^{(l)}_k\}$. 
Here, the measurement protocol chosen by the experimenter corresponds to measuring $L$ different POVMs, where the $l^{\text{th}}$ POVM $\{E^{(l)}_1, \dotsc, E^{(l)}_{N_l}\}$ is measured $R_l$ times, for $l = 1, \dotsc, L$. Note that the fidelity between any two quantum states must lie between $0$ and $1$, and thus, $0 \leq \tr(\rho \chi) \leq 1$ for any density matrix $\chi$. Then, we can infer from Eq.~\eqref{eqn:JNriskprop3.1} that the maximum possible value of risk is $0.5$. Indeed, an uncertainty of $\pm 0.5$ gives a confidence interval of length $1$, which is also the size of interval for fidelity. On the other hand, it can be shown from Eq.~\eqref{eqn:JNriskprop3.1} that the risk decreases when the number of measurement settings or the number of shots for any measurement setting is increased. To that end, note that $F_C$ ranges between $0$ and $1$, so raising it to a larger power makes it smaller. Thus, increasing the number of shots $R_l$ makes it harder to satisfy the constraint that $\prod_l F_C^{R_l/2}$ must be larger than $\epsilon/2$. Similarly, if we include another measurement setting (in effect, increasing $L$), we have one more fraction $F_C$ multiplying the left-hand side of the constraint, thereby making the constraint tighter. Since a tighter constraint implies a more restricted search space for maximization, we can infer that the risk will become smaller (or stay the same) in either case. This also shows that the risk is dependent on the chosen measurement protocol, and protocols that make the constraint tighter will have a smaller risk in estimating the fidelity. We numerically quantify the variation of the risk with the number of measurement settings and the number of repetitions for a $4$-qubit randomly chosen target state. We apply $10\%$ depolarizing noise to obtain the actual state, and perform Pauli measurements. We see from Fig.~\ref{fig:JN_risk_analysis} that the risk decreases with the number of Pauli measurements as well as the number of repetitions. \begin{figure} \caption{Variation of the risk with the number of Pauli measurements $L$ and number of repetitions $R_L$ of each Pauli measurement for a $4$-qubit random target state. All the computed risks correspond to a $95\%$ confidence level.} \label{fig:JN_risk_analysis} \end{figure} A natural question that arises is how low the risk can be. To find a lower bound, recall that we can write the fidelity between any two states as a minimization of the classical fidelity over all POVMs \cite{fuchs1996distinguishability}. \begin{equation*} F(\chi_1, \chi_2) = \min_{\text{POVM } \{F_i\}} F_C(\chi_1, \chi_2, \{F_i\}) \end{equation*} In particular, we have $F(\chi_1, \chi_2) \leq F_C(\chi_1, \chi_2, \{E^{(l)}_k\})$ for every POVM $\{E^{(l)}_k\}$ that we are using. Thus, we obtain the following lower bound on our risk \begin{align*} \widehat{\mathcal{R}}_* \geq &\max_{\chi_1, \chi_2 \in \mathcal{X}} \frac{1}{2} \left(\tr(\rho \chi_1) - \tr(\rho \chi_2)\right) \nonumber \\ &\qquad \text{s.t.}\quad F(\chi_1, \chi_2) \geq \left(\frac{\epsilon}{2}\right)^{\frac{2}{R}} \end{align*} where $R = \sum_{l = 1}^L R_l$ is the total number of shots. Evaluating the right-hand side of the above equation, we obtain a lower bound for the risk. This result is summarized in the following theorem. \begin{theorem} \label{thm:minimax_method_best_sample_complexity} Let $\rho$ be any pure target state. 
Suppose that $\widehat{\mathcal{R}}_*$ is the risk associated with the fidelity estimator given by the minimax method corresponding to a confidence level of $1 - \epsilon \in (0.75, 1)$ and any measurement scheme. Then, the risk is bounded below as
\begin{equation}
\widehat{\mathcal{R}}_* \geq \frac{1}{2} \sqrt{1 - \left(\frac{\epsilon}{2}\right)^{2/R}} \label{eqn:optimalJNrisk}
\end{equation}
where $R$ is the total number of measurement outcomes \textnormal{(}all measurement settings combined\textnormal{)} to be supplied to the estimator. This lower bound can be achieved by measuring in the basis defined by $\rho$. That is, $R$ repetitions of the POVM $\{\rho, \id - \rho\}$ achieve the risk $\widehat{\mathcal{R}}_*$. Stated differently, the best sample complexity that can be obtained using the minimax method corresponding to a risk of $\widehat{\mathcal{R}}_* \in (0, 0.5)$ and confidence level $1 - \epsilon$ is given by
\begin{align}
R &\geq \frac{2 \ln(2/\epsilon)}{\left|\ln(1 - 4\widehat{\mathcal{R}}_*^2)\right|} \label{eq:riskBound} \\
&\approx \frac{\ln(2/\epsilon)}{2\widehat{\mathcal{R}}_*^2} \text{ when } \widehat{\mathcal{R}}_*^2\ll 1. \nonumber
\end{align}
\end{theorem}
\begin{proof}
See Appendix~\ref{proof:minimax_method_best_sample_complexity}.
\end{proof}
Note that the sample complexity refers to the smallest number of measurement outcomes required for achieving the desired risk for the chosen confidence level. Observe that the lower bound on the risk in Eq.~\eqref{eqn:optimalJNrisk} is independent of the system dimension, the target state and the true state. It only depends on the confidence level and the total number of repetitions. The lowest possible risk (or equivalently, the best possible sample complexity) can, in principle, be achieved by measuring in a basis defined by the target state $\rho$. We acknowledge that, in practice, it is often impractical to implement the POVM $\{\rho, \id - \rho\}$ that achieves this sample complexity. However, we show next that we can achieve something very close to this optimal sample complexity for stabilizer states using a practical measurement scheme.
\subsection{Stabilizer states\label{secn:minimax_method_stabilizer_states}}
Stabilizer states are the cornerstone of numerous applications, ranging from measurement-based quantum computing~\cite{BriegelMBQC2009} to quantum error correction~\cite{Gottesman1997}. An $n$-qubit stabilizer state $\rho$ is the unique $+1$ eigenstate of exactly $n$ Pauli operators that generate a stabilizer group of size $d = 2^n$. The state $\rho$ can thus be written as \cite{klieschcharacterization}
\begin{equation}
\rho = \frac{1}{d} \left(\id + \sum_{S \in \mathcal{S}_{n} \setminus \{\id\}} S\right)
\end{equation}
where $\mathcal{S}_n$ is the stabilizer subgroup corresponding to the stabilizer state $\rho$. To estimate the fidelity with a target stabilizer state efficiently, we implement the minimax optimal strategy \cite{pallister2018optimal, klieschcharacterization} while restricting to Pauli measurements. The measurement strategy is simple: uniformly sample an element from the stabilizer subgroup (excluding the identity), and record the eigenvalue outcome ($+1$ or $-1$) of the stabilizer measurement. This strategy is very similar to DFE, with the exception that we exclude the identity.
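As an illustration of this sampling step, the minimal numpy sketch below simulates the strategy for a $2$-qubit Bell state stabilized by $XX$ and $ZZ$; the noisy state $\sigma$ and all variable names are our own illustrative choices and are not part of the protocol itself.
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
gens = [np.kron(X, X), np.kron(Z, Z)]  # stabilizer generators of the Bell state
n = len(gens)
rng = np.random.default_rng(0)

def sample_stabilizer():
    # Uniformly sample a non-identity stabilizer element: choose a random
    # non-empty subset of generators and multiply them together.
    bits = rng.integers(0, 2, size=n)
    while not bits.any():
        bits = rng.integers(0, 2, size=n)
    S = np.eye(2 ** n, dtype=complex)
    for b, g in zip(bits, gens):
        if b:
            S = S @ g
    return S

def measure_pm1(S, sigma):
    # Simulate the +1/-1 outcome of measuring the stabilizer S on the state sigma.
    P_plus = (np.eye(2 ** n) + S) / 2  # projector onto the +1 eigenspace
    p_plus = np.real(np.trace(P_plus @ sigma))
    return +1 if rng.random() < p_plus else -1

bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5  # |Phi+><Phi+|
sigma = 0.9 * bell + 0.1 * np.eye(4) / 4  # Bell state with depolarizing noise
outcomes = [measure_pm1(sample_stabilizer(), sigma) for _ in range(10)]
\end{verbatim}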
This measurement scheme can be implemented by an effective POVM with elements $\{\Theta,\Delta_\Theta\}$ given by
\begin{align}
\Theta &= \rho + \frac{d/2 - 1}{d - 1} \Delta_\rho \nonumber \\
\Delta_\Theta &= \frac{d/2}{d - 1} \Delta_\rho , \label{eqn:stabilizer_minimaxoptimal_POVM}
\end{align}
where $\Delta_\rho = \id - \rho$. See the proof of Theorem \ref{thm:minimax_method_pauli_scheme_sample_complexity} for details on how this is obtained (also see Ref.~\cite{pallister2018optimal, klieschcharacterization}). We can see that the effective POVM is a combination of the target state $\rho$ and $\Delta_\rho$ (which is orthogonal to $\rho$). This is similar to measuring in the basis defined by the target state, so from our results in section~\ref{secn:minimax_method_optimal_risk}, we can expect the risk to be small. Indeed, we find that for a sufficient number of repetitions of this measurement (independent of the dimension or the stabilizer state), the risk of the estimator is at most four times the optimal risk described in Eq.~\eqref{eqn:optimalJNrisk}. We summarize this result below.
\begin{proposition}
\label{prop:minimax_method_stabilizer_sample_complexity}
Let $\rho$ be an $n$-qubit stabilizer state. Suppose that we uniformly sample $R$ elements from the stabilizer group (with replacement) and measure them. Then, for any risk $\widehat{\mathcal{R}}_* \in (0, 0.5)$ and confidence level ${1-\epsilon} \in (0.75, 1)$,
\begin{align}
R &\geq 2 \frac{\ln\left(2/\epsilon\right)}{\left|\ln\left(1 - \left(\frac{d}{d - 1}\right)^2 \widehat{\mathcal{R}}_*^2\right)\right|} \label{eqn:sample_complexity_stabilizers} \\
&\approx 2 \left(\frac{d - 1}{d}\right)^2 \frac{\ln(2/\epsilon)}{\widehat{\mathcal{R}}_*^2} \text{ when } \widehat{\mathcal{R}}_*^2\ll 1
\end{align}
suffices to build an estimator using the minimax method that achieves this risk. Here, $d = 2^n$ is the dimension of the system.
\end{proposition}
\begin{proof}
Take $\delta = d$ in Corollary \ref{corr:minimax_sample_complexity_stabilizer} in the appendix.
\end{proof}
Since $(d - 1)/d < 1$, the dimension dependence actually improves the sample complexity, and for large dimensions one needs to measure only a constant number of stabilizers. This sample complexity is of the same order as that obtained using DFE. Note that one needs only $O(\ln(1/\epsilon)/\widehat{\mathcal{R}}_*)$ outcomes in the protocol given by Pallister \textit{et al.} \cite{pallister2018optimal}; however, their protocol is not meant to estimate the fidelity. For determining the distance between the target and the actual state, they would need to use the Fuchs-van de Graaf inequality, which would result in a sample complexity comparable to our method. Furthermore, following the ideas given in the proof of Theorem \ref{thm:minimax_sample_complexity_2outcomePOVM}, we can simplify the algorithm given in Appendix \ref{app:minimax_numerical} to compute the estimator. In the simplified algorithm, we only need to perform a two-dimensional optimization regardless of the dimension of the system. Consequently, building the estimator is both time and memory efficient. In the following, we present simulated results for $2$-, $3$- and $4$-qubit stabilizer states, for a risk of $0.05$ and a confidence level of $95\%$. We implement the measurement strategy discussed above, namely randomly sampling $R$ stabilizers and recording the outcome eigenvalue. The estimated fidelity in each case is summarized in Table~\ref{tab:JNstabilizers}.
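For reference, the values of $R$ reported in Table~\ref{tab:JNstabilizers} below are consistent with the smallest integers satisfying Eq.~\eqref{eqn:sample_complexity_stabilizers}; a minimal Python sketch (standard library only) that reproduces them is as follows.
\begin{verbatim}
from math import ceil, log

def stabilizer_sample_size(n_qubits, risk, eps):
    # Smallest integer R satisfying Eq. (sample_complexity_stabilizers).
    d = 2 ** n_qubits
    return ceil(2 * log(2 / eps) / abs(log(1 - (d / (d - 1)) ** 2 * risk ** 2)))

for n in (2, 3, 4):
    print(n, stabilizer_sample_size(n, risk=0.05, eps=0.05))
# prints 1657, 2256, and 2591 for n = 2, 3, 4, matching R in the table below
\end{verbatim}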
Even for the $4$-qubit stabilizer state, constructing the estimator and generating measurements to compute the estimate together took only a few seconds on a personal computer.
\begin{table}[!ht]
\centering
\begin{tabular}{l c c c c}
\toprule
& & \multicolumn{3}{c}{$n$} \\
\cmidrule(l){3-5}
& & 2 & 3 & 4 \\
\midrule
True fidelity & $F(\rho, \sigma)$ & 0.925 & 0.912 & 0.906 \\
Estimate of fidelity & $\widehat{F}_*(\text{outcomes})$ & 0.933 & 0.911 & 0.898 \\
Number of samples & $R$ & 1657 & 2256 & 2591 \\
\bottomrule
\end{tabular}
\caption{True fidelity, the fidelity estimate, and the number of stabilizers sampled corresponding to a risk of $0.05$ and $95\%$ confidence level for $n$-qubit stabilizer states.}
\label{tab:JNstabilizers}
\end{table}
Now, on the other extreme, we consider the case where an insufficient number of stabilizer measurements is provided. Specifically, we consider a measurement protocol where we measure only $n - 1$ generators of an $n$-qubit stabilizer state. We assume these to be subspace measurements, i.e., the measurements correspond to projecting on the eigenspaces of the generators with eigenvalue $\pm 1$. This is an insufficient measurement protocol because the measurement of $n - 1$ stabilizers cannot uniquely identify the stabilizer state, as there are two orthogonal states that will be consistent with the measurement statistics. As a consequence, any reasonable fidelity estimation protocol should give complete uncertainty for the estimated fidelity. Indeed, the following result shows that the minimax method gives a risk of $0.5$ (implying total uncertainty) for this measurement protocol.
\begin{proposition}
\label{prop:minimax_method_stabilizer_insufficient_measurements}
Let $\rho$ be an $n$-qubit stabilizer state generated by $S_1, \dotsc, S_n$. Suppose that the measurement protocol consists of measuring the first $n - 1$ generators $S_1, \dotsc, S_{n - 1}$, such that the measurements correspond to projecting on eigenspaces of $S_i$ with eigenvalue $\pm 1$ for $i = 1, \dotsc, n - 1$. Then, irrespective of the number of shots, the minimax method gives a risk of $0.5$.
\end{proposition}
\begin{proof}
See the end of Appendix~\ref{proof:minimax_method_stabilizer_insufficient_measurements}.
\end{proof}
We remark that if we consider eigenbasis measurements (in contrast to the subspace measurements considered above), computing the risk analytically is more complicated. By an eigenbasis measurement, we mean that the measurement of the stabilizer $S$ has the POVM $\{E_1, \dotsc, E_d\}$ where $E_i$ is the projection onto the $i^{\text{th}}$ eigenvector of $S$. Note that the eigenbasis for $S$ is not unique since $S$ has a degenerate spectrum and, therefore, the risk can depend on the choice of eigenbasis used for the measurement. For example, if the target state $\rho$ happens to coincide with one of the eigenvectors used in the measurement, then that measurement has sufficient information to accurately estimate the fidelity. However, when this does not happen, one can expect the risk to be $0.5$; see section \ref{secn:mle_pl} for an example.
\subsection{Randomized Pauli measurement scheme\label{secn:minimax_method_RPM_scheme}}
In the spirit of the randomized measurement strategy for stabilizer states we discussed above, we present a generalized version of such a Pauli measurement scheme for arbitrary (pure) states.
The idea comes from the simple observation that any density matrix $\chi$ can be written as \begin{align} \chi &= \frac{\id}{d} + \sum_{i = 1}^{d^2 - 1} \frac{\tr(W_i \chi)}{d} W_i \\ &= \frac{\id}{d} + \sum_{i = 1}^{d^2 - 1} \frac{|\tr(W_i \chi)|}{d} S_i \end{align} where $S_i = \text{sign}(\tr(W_i \chi)) W_i$ are the Pauli operators appended with a sign. This is, in particular, valid for the target state $\rho$. Then, consider the probability distribution \begin{equation} p_i = \frac{|\tr(W_i \rho)|}{\sum_{i = 1}^{d^2 - 1} |\tr(W_i \rho)|}, \quad i = 1, \dotsc, d^2 - 1 \label{eqn:randomized_pauli_measurement_probability} \end{equation} where $\rho$ is the target state. The measurement scheme is as follows: \begin{tcolorbox}[colback=white, title={\hypertarget{box:pauli_measurement_scheme}{Box \ref*{secn:minimax_method}.1}: Pauli Measurement Scheme}] \begin{enumerate} \item Sample a (non-identity) Pauli operator $W_i$ with probability $p_i$ ($i = 1, \dotsc, d^2 - 1$) given in Eq.~\eqref{eqn:randomized_pauli_measurement_probability} and record the outcome ($\pm 1$) of the measurement. \item Flip the measurement outcome $\pm 1 \to \mp 1$ if $\tr(\rho W_i) < 0$, else retain the original measurement outcome. \item Repeat this procedure $R$ times and feed the outcomes into the estimator given by the minimax method. \end{enumerate} \end{tcolorbox} Because we exclude measurement of identity, $i$ runs from $1$ to $d^2 - 1$. We flip the outcomes because we need to measure $S_i = \text{sign}(\tr(W_i \rho)) W_i$. In Theorem~\ref{thm:minimax_method_pauli_scheme_sample_complexity}, we show how to choose the number of repetitions $R$ so as to obtain a desired value of the risk. Note that the Pauli measurement can either be a projection on the subspace with eigenvalue $\pm 1$ or a projection on the eigenvectors. For the estimator given by the minimax method, the values $+1$, $-1$ are inconsequential; all that matters is how many times $+1$ and $-1$ are observed. Note that a very similar measurement scheme has been considered for verifying the ground state of a class of Hamiltonians \cite{takeuchi2018verification}. Further, the random sampling scheme described above is very similar to the measurement strategy used in DFE, except that we sample the Pauli operators using a different probability distribution. For the proposed measurement strategy, the minimax method gives the following sample complexity. \begin{theorem} \label{thm:minimax_method_pauli_scheme_sample_complexity} Let $\rho$ be an $n$-qubit pure target state. Suppose that we perform $R$ Pauli measurements as described in Box \hyperlink{box:pauli_measurement_scheme}{\ref*{secn:minimax_method}.1}. Then, for a given risk $\widehat{\mathcal{R}}_* \in (0, 0.5)$ and a confidence level $1 - \epsilon \in (0.75, 1)$, \begin{align} R &\geq 2\frac{\ln(2/\epsilon)}{\left|\ln\left(1 - \frac{d^2}{\mathcal{N}^2}\widehat{\mathcal{R}}_*^2\right)\right|} \label{eqn:minimax_method_pauli_scheme_sample_complexity} \\ &\approx 2 \left(\frac{\mathcal{N}}{d}\right)^2 \frac{\ln(2/\epsilon)}{\widehat{\mathcal{R}}_*^2} \text{ when } \widehat{\mathcal{R}}_*^2\ll 1 \label{eqn:minimax_method_pauli_scheme_sample_complexity_approx} \end{align} measurements are sufficient to achieve the risk. Here, \begin{equation*} \mathcal{N} = \sum_{i = 1}^{d^2 - 1} |\tr(W_i \rho)| \end{equation*} and for any pure target state $\rho$, we have \begin{align*} \mathcal{N} &\leq (d - 1)\sqrt{d + 1} . 
\intertext{Therefore}
2 \left(\frac{\mathcal{N}}{d}\right)^2 \frac{\ln(2/\epsilon)}{\widehat{\mathcal{R}}_*^2} &\leq 2 (d + 1) \left(1 - \frac{1}{d}\right)^2 \frac{\ln(2/\epsilon)}{\widehat{\mathcal{R}}_*^2}
\end{align*}
\end{theorem}
\begin{proof}
See the end of Appendix \ref{proof:minimax_method_pauli_scheme_sample_complexity}.
\end{proof}
Notably, the sample complexity is upper bounded by $O(d) \ln(2/\epsilon) / \widehat{\mathcal{R}}_*^2$, which is comparable to the upper bound on (the expected value of) the sample complexity in DFE (see Eq. 10 in Ref.~\cite{Flammia2011}). We can show that the sample complexity indeed improves when considering a subset of well-conditioned states defined by Flammia \& Liu \cite{Flammia2011}. To that end, suppose that for any given $i$, the state $\rho$ satisfies either $|\tr(\rho W_i)| = \alpha$ or $\tr(\rho W_i) = 0$. Then, using Eq.~\eqref{eqn:minimax_method_pauli_scheme_sample_complexity_approx} and Eq.~\eqref{eqn:pure_state_pauli_weights_constraint}, we obtain the bound of
\begin{equation}
\frac{2}{\alpha^2} \left(\frac{d - 1}{d}\right)^2 \frac{\ln(2/\epsilon)}{\widehat{\mathcal{R}}_*^2}
\end{equation}
on the sample complexity. As before, this is comparable to DFE. In addition to giving a good sample complexity, we can obtain the estimator efficiently by reducing the optimization to a two-dimensional problem, following the ideas in Theorem \ref{thm:minimax_sample_complexity_2outcomePOVM}. Thus, if $\mathcal{N}$ can be computed efficiently, we can efficiently construct the estimator even for large dimensions.
\subsection{Robustness against noise\label{secn:minimax_method_robustness}}
For any fidelity estimator to be useful in practice, it is crucial for it to be robust to common types of experimental noise. We show that the estimator given by the minimax method possesses this property. More precisely, we first show that the estimator is affine in the observed frequencies. Then we show that small perturbations in the state and POVM elements only slightly change the estimate given by such affine estimators. Suppose that the experimentalist uses $L$ measurement settings, with POVM $\{E^{(l)}_1, \dotsc, E^{(l)}_{N_l}\}$ for the $l^\text{th}$ measurement setting. For $l = 1, \dotsc, L$, we denote the set of outcomes for the $l^\text{th}$ measurement setting by $\Omega^{(l)} = \{1, \dotsc, N_l\}$. Here, the outcome $k \in \Omega^{(l)}$ refers to the index of the POVM element $E^{(l)}_k$ observed in the experiment. The number of possible outcomes for the $l^{\text{th}}$ measurement is $N_l$. The $l^{\text{th}}$ POVM is repeated $R_l$ times. Now, corresponding to each outcome $k \in \Omega^{(l)}$, we associate a canonical basis element $\bm{e}^{(l)}_k \in \mathbb{R}^{N_l}$. From Eq.~\eqref{eqn:JN_estimator}, we know that the fidelity estimator is given as
\begin{equation*}
\widehat{F}_*(\{o^{(l)}_1, \dotsc, o^{(l)}_{R_l}\}_{l = 1}^{L}) = \sum_{l = 1}^L \sum_{r = 1}^{R_l} \phi^{(l)}_*(o^{(l)}_r) + c
\end{equation*}
where $\{o^{(l)}_1, \dotsc, o^{(l)}_{R_l}\}$ are the outcomes corresponding to the $l^\text{th}$ measurement setting, and the constant $c$ is given by Eq.~\eqref{eqn:JN_estimator_constant}. In the form written above, $\phi^{(l)}_* \in \mathcal{F}^{(l)}$ are real-valued functions defined on the set $\Omega^{(l)}$.
Regarding $\phi^{(l)}_*$ as an $N_l$-dimensional real vector with ``coefficient'' vector
\begin{equation*}
\bm{a}^{(l)} = \begin{pmatrix} \phi^{(l)}_*(1), & \dotsc, & \phi^{(l)}_*(N_l) \end{pmatrix} \in \mathbb{R}^{N_l}
\end{equation*}
we can write $\phi^{(l)}_*(k) = \langle\bm{a}^{(l)}, \bm{e}^{(l)}_k\rangle$ for ${k\in\Omega^{(l)}}$. Then, the estimator can be written as
\begin{equation*}
\widehat{F}_*(\{o^{(l)}_1, \dotsc, o^{(l)}_{R_l}\}_{l = 1}^{L}) = \sum_{l = 1}^L \sum_{r = 1}^{R_l} \ip{\bm{a}^{(l)}, \bm{e}^{(l)}_{o^{(l)}_r}} + c
\end{equation*}
which is affine in $\bm{e}^{(l)}_{o^{(l)}_r}$. However, in the above equation, $\widehat{F}_*$ is not an affine function of the input. To remedy this, we define the vector of experimentally observed frequencies $\bm{f}^{(l)} \in \mathbb{R}^{N_l}$, obtained by binning the outcomes $o^{(l)}_1, \dotsc, o^{(l)}_{R_l}$, such that $f^{(l)}_k$ denotes the relative frequency of the outcome $k \in \Omega^{(l)}$ observed in the experiment. We can then write
\begin{equation}
\bm{f}^{(l)} = \frac{1}{R_l} \sum_{r = 1}^{R_l} \bm{e}^{(l)}_{o^{(l)}_r} . \notag
\end{equation}
We now express the fidelity estimator in a way that is indeed affine in the observed frequencies as
\begin{equation}
\widehat{F}_*(\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)}) = \sum_{l = 1}^L R_l \ip{\bm{a}^{(l)}, \bm{f}^{(l)}} + c. \label{eqn:JN_estimator_affine}
\end{equation}
Eq.~\eqref{eqn:JN_estimator_affine} gives an intuitive understanding of the estimator constructed by the minimax method: it is simply an appropriate weighting of the relative frequencies observed in the experiment. The weights are obtained from the saddle-point of the function $\Phi$ defined in Eq.~\eqref{eqn:Phi_quantum}. The following result shows that any estimator that is affine in the observed frequencies is robust to small perturbations in the state and measurements.
\begin{theorem}
\label{thm:affine_estimator_robustness_informal}
\textnormal{(}Informal\textnormal{)} Suppose that we want to estimate the fidelity with a target state $\rho$ using many repetitions of $L$ different measurement settings. Suppose that the state $\sigma$ prepared experimentally experiences an unknown \textnormal{(}and possibly different\textnormal{)} perturbation during each measurement. Similarly, say the POVM corresponding to each measurement setting experiences an unknown \textnormal{(}and possibly different\textnormal{)} perturbation during each measurement. Suppose that $\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)}$ are the relative frequencies corresponding to the $L$ measurement settings one would have observed in the noiseless case \textnormal{(}i.e., when the state and POVMs do not experience any perturbation\textnormal{)}. On the other hand, let $\widetilde{\bm{f}}^{(1)}, \dotsc, \widetilde{\bm{f}}^{(L)}$ be the actual frequencies observed because of the state and POVM elements undergoing perturbation. Suppose that we are working with some estimator $\mathscr{L}$ for the fidelity that is affine in the observed frequencies. More precisely, we have $\mathscr{L}(\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)}) = \ip{\bm{\ell}, \bm{f}} + b$, where $\bm{\ell}$ is a vector, $b$ is a constant, and $\bm{f}$ is the vector obtained by combining all the observed frequency vectors, i.e., $\bm{f} = \begin{pmatrix} \bm{f}^{(1)} & \dotsc & \bm{f}^{(L)} \end{pmatrix}$.
Then, the fidelity estimate obtained in the perturbed case is not too different from the fidelity estimate obtained in the noiseless case:
\begin{equation*}
\left|\mathscr{L}(\widetilde{\bm{f}}^{(1)}, \dotsc, \widetilde{\bm{f}}^{(L)}) - \mathscr{L}(\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)})\right| \leq \delta_{\mathscr{L}}.
\end{equation*}
The quantity $\delta_{\mathscr{L}}$ is small when the perturbations are small and when the errors in constructing the histograms $\bm{f}^{(l)}$ and $\widetilde{\bm{f}}^{(l)}$ \textnormal{(}due to a finite number of repetitions\textnormal{)} are small for each $l = 1, \dotsc, L$. Note that the error in constructing the histograms can be mitigated by taking a sufficient amount of data.
\end{theorem}
\begin{proof}
See Appendix \ref{app:minimax_robustness} for a formal statement of the theorem and its proof.
\end{proof}
The deviation $\delta_{\mathscr{L}}$ of the fidelity estimates in the noiseless and noisy cases depends on the errors in constructing the histograms because of our proof technique. In practice, this condition may not be needed. So long as the observed frequencies in the noisy case remain close to the frequencies in the noiseless case, we are ensured that robustness holds (see Appendix~\ref{app:minimax_robustness}).
\begin{corollary}
\label{corr:affine_estimator_robustness}
The estimator $\widehat{F}_*$ given by the minimax method is robust against noise in the sense of Theorem \ref{thm:affine_estimator_robustness_informal}.
\end{corollary}
\begin{proof}
We re-write the estimator given in Eq.~\eqref{eqn:JN_estimator_affine} in the form required by Theorem \ref{thm:affine_estimator_robustness_informal}. For this, we define a matrix of coefficients and a vector of number of repetitions,
\begin{align*}
C_{\bm{a}} &= \textnormal{diag}\begin{pmatrix}a^{(1)}_1 & \dotsc & a^{(1)}_{N_1} & \dotsc & a^{(L)}_1 & \dotsc & a^{(L)}_{N_L}\end{pmatrix} \\
\bm{R} &= \begin{pmatrix} R_1 & \dotsc & R_1 & R_2 & \dotsc & R_2 & \dotsc & R_L & \dotsc & R_L\end{pmatrix}^T
\end{align*}
where $R_l$ is repeated $N_l$ times. Then, the estimator can be expressed as
\begin{equation}
\widehat{F}_*(\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)}) = \ip{C_{\bm{a}} \bm{R}, \bm{f}} + c \label{eqn:JN_estimator_affine_matrix}
\end{equation}
where $\bm{f} = \begin{pmatrix} \bm{f}^{(1)} & \dotsc & \bm{f}^{(L)} \end{pmatrix}$ as noted in Theorem \ref{thm:affine_estimator_robustness_informal}.
\end{proof}
\section{Comparison with other methods\label{secn:other_methods_comparison}}
We now compare the minimax method with two commonly used techniques to estimate the fidelity: direct fidelity estimation (DFE) and maximum likelihood estimation (MLE) (and a related approach called Profile Likelihood (PL)). We further compare it with a simple semidefinite programming (SDP) based approach.
\subsection{Direct fidelity estimation}
Flammia \& Liu \cite{Flammia2011} and da Silva \textit{et al.} \cite{DaSilva2011} construct a fidelity estimator by judiciously sampling Pauli measurements. If the target state $\rho$ is well-conditioned, i.e., $|\tr(\rho W)| \geq \alpha$ or $\tr(\rho W) = 0$ for each Pauli operator $W$ and some fixed $\alpha > 0$, then DFE gives a good sample complexity. Specifically, their method uses $O(\log(1/\epsilon)/\alpha^2 \widehat{\mathcal{R}}_*^2)$ measurement outcomes to obtain an estimate for the fidelity within an additive error of $\widehat{\mathcal{R}}_*$ and a confidence level of $1 - \epsilon$ \cite{Flammia2011}.
Their rigid measurement scheme, however, can have disadvantages in practice, compared to a more flexible approach that works for arbitrary POVMs. Consider a random $4$-qubit state as the target state, with the actual state obtained by applying $10\%$ depolarizing noise to the target state. We choose a risk of $0.05$ in DFE and a confidence level of $95\%$, and obtain the Pauli measurements that need to be performed as prescribed by DFE. However, to perform these Pauli measurements, we choose two different types of POVM: $(1)$ projection on $+1$ and $-1$ eigenvalue subspaces of the Pauli operator, and $(2)$ projection on each eigenstate of the Pauli operator, which is common in experiments. For DFE, it doesn't matter much which of the two POVMs is implemented because the estimator there only uses the expectation value, while the minimax method makes a distinction between these POVMs. From Table~\ref{tab:DFE_JN_comparison}, we can see that the risk ($\approx 0.023$) for the minimax method is less than half the DFE risk ($0.05$) when using projection on subspaces $(1)$. When projecting on each eigenbasis element $(2)$, we obtain a risk ($\approx 0.012$), which is even lower than before, and clearly much better than the DFE risk. That is, we are able to use the larger expressive power of POVM $(2)$ compared to POVM $(1)$ to lower the risk. Since the minimax method has already computed an estimator with a low risk (as per the prescription of the DFE method), all subsequent experiments can use the \textit{same} measurement settings to estimate the fidelity. Random sampling of Pauli measurements is not necessary. \begin{table}[!ht] \begin{center} \begin{tabular}{l c c c c} \toprule \multicolumn{5}{c}{Random state} \\[2pt] \cmidrule{2-5} & \multicolumn{2}{c}{Subspace projection} & \multicolumn{2}{c}{Eigenbasis projection} \\[2pt] \cmidrule(r){2-3} \cmidrule(l){4-5} & DFE & Minimax & DFE & Minimax \\[2pt] \midrule True fidelity & \multicolumn{2}{c}{0.906} & \multicolumn{2}{c}{0.906} \\ Estimate & 0.895 & 0.895 & 0.900 & 0.902 \\ Risk & 0.050 & 0.023 & 0.050 & 0.012 \\[2pt] \cmidrule[\heavyrulewidth]{1-5} \multicolumn{5}{c}{GHZ state} \\[2pt] \cmidrule{2-5} & \multicolumn{2}{c}{Subspace projection} & \multicolumn{2}{c}{Eigenbasis projection} \\[2pt] \cmidrule(r){2-3} \cmidrule(l){4-5} & DFE & Minimax & DFE & Minimax \\[2pt] \midrule True fidelity & \multicolumn{2}{c}{0.906} & \multicolumn{2}{c}{0.906} \\ Estimate & 0.907 & 0.907 & 0.904 & 0.905 \\ Risk & 0.050 & 0.022 & 0.050 & 0.018 \\[2pt] \bottomrule \end{tabular} \end{center} \caption{Comparison of DFE method with the minimax method for a $4$-qubit random state and $4$-qubit GHZ state as target states. The Pauli measurements are performed as per the prescription of DFE, but two different types of POVMs are used: projection on subspace with $+1$ and $-1$ eigenvalue, and projection on each eigenstate of the Pauli operator. We find that the minimax method has lower risk in most cases, while the estimates are comparable to those of DFE method.} \label{tab:DFE_JN_comparison} \end{table} Next, we consider a $4$-qubit GHZ state as the target state, which is a well-conditioned state as per DFE. As before, we perform measurements as prescribed by DFE by choosing a risk of $0.05$ for DFE method and a confidence level of $95\%$. We summarize the estimated fidelity and risks in the second half of Table~\ref{tab:DFE_JN_comparison}. 
Similarly to the case of the random state, we observe that the risk given by the minimax method for the GHZ state is always lower than the DFE risk. Changing from POVM $(1)$ to POVM $(2)$ again leads to an improvement in the risk. Finally, we argue that the Pauli measurement scheme prescribed by DFE can be far from optimal for certain target states. To demonstrate this, we choose a rather extreme example of a random $4$-qubit target state with the following measurement scheme: perform as many measurements and repetitions as prescribed by DFE for achieving a risk of $0.05$ at a confidence level of $95\%$, but instead of performing Pauli measurements, we choose random POVMs, generated as per Ref.~\cite{heinosaari2020random}. In this case, we obtain a risk of $\approx 0.023$ from the minimax method, which is larger than the risk for Pauli measurements using the minimax method (see Tab.~\ref{tab:DFE_JN_comparison}), but still a factor of 2 lower than the DFE risk. This suggests that when the target state is random, performing random measurements is almost as good as performing Pauli measurements. Hence there is no real advantage in using DFE in terms of sample complexity. If, instead, we used the optimal (though impractical) measurement scheme described in section \ref{secn:minimax_method_optimal_risk}, we could obtain a risk of $0.05$ using just $\approx 700$ total measurements (independent of the target state and dimension) as opposed to $\approx 248000$ total measurements required by DFE for the random target state. While this random state example is rather atypical, it serves to demonstrate that there are cases where the DFE measurement scheme is outperformed by a more tailored scheme, which the minimax method can benefit from. A potential disadvantage for the minimax method is that once the dimension of the system becomes very large, depending on the measurement scheme, constructing the estimator can become inefficient. One reason for this inefficiency is that the algorithm we use for optimization requires projection onto the set of density matrices. Because we perform this projection through diagonalization, the projection can cost up to $O(d^3)$ operations, where $d$ is the dimension of the system. In contrast, DFE can handle large dimensions well because the classical computations involved are simple and most of the complexity is absorbed into performing the experiments. This drawback of the minimax method is alleviated if we use the Pauli measurement scheme described in Box \hyperlink{box:pauli_measurement_scheme}{\ref*{secn:minimax_method}.1}, which is similar to DFE. We have written an efficient algorithm to construct the estimator given by the minimax method for this measurement scheme, as noted in section~\ref{secn:minimax_method_RPM_scheme}. Moreover, the estimator can be pre-computed once the measurement scheme has been defined.
\subsection{MLE and Profile Likelihood\label{secn:mle_pl}}
Maximum Likelihood Estimation (MLE) is a popular approach used for quantum tomography \cite{Hradil1997}. One can think of MLE as minimizing the Kullback-Leibler divergence between the observed frequencies and the Born probabilities \cite{vrehavcek2001iterative}. This naturally leads to the (negative) log-likelihood function
\begin{equation}
\ell\left(\{\bm{f}^{(l)}\}_{l = 1}^L \big| \chi\right) = - \sum_{l = 1}^L \sum_{k = 1}^{N_l} f^{(l)}_k \ln(\tr(E^{(l)}_k \chi)).
\label{eqn:MLE_negative_log_likelihood} \end{equation} One then minimizes the function $\ell$ (or equivalently, maximizes the likelihood function) to obtain the MLE estimate $\widehat{\sigma}$ for the experimental quantum state. We calculate $\tr(\rho \widehat{\sigma})$ to estimate the fidelity with a target state $\rho$. While the estimates are usually good enough when a sufficient number of measurements is used for the reconstruction, a major disadvantage is that the method provides no confidence intervals for the estimated fidelity. A common approach to compute an uncertainty of the MLE estimate is Monte-Carlo (MC) re-sampling. Herein, one numerically generates (artificial) outcomes based on the observed frequencies and the expected statistical noise distribution, referred to as re-sampling. For each set of outcomes one reconstructs the state and estimates the fidelity as before. This process is repeated many times to obtain a large number of MLE fidelity estimates which can be used to calculate an (asymmetric) interval around the median that corresponds to the chosen confidence level. If necessary, hedging \cite{Blume-Kohout2010HedgedMLE} can be implemented to deal with zero probabilities. Note that the MC approach is similar to the nonparametric bootstrap method used for finding confidence intervals \cite{diciccio1996bootstrap}. A disadvantage of such an MLE based approach is that we need to reconstruct the state, which can be costly. Moreover, there are certain freedoms in the re-sampling and hedging definitions, which can affect the results. We argue that the MC approach can give overconfident uncertainty bounds. This issue can be seen in the following example: take a $2$-qubit Bell state, stabilized by $XX$ and $ZZ$, as the target state. We measure only $XX$ with $500$ repetitions, which is not enough to uniquely determine the state. The true state is obtained by applying $10\%$ depolarizing noise to the Bell state, so that the actual fidelity is $0.925$. We generate $100$ MLE fidelity estimates (with new measurement outcomes generated at every repetition), and we find that the average MLE fidelity estimate is $0.44$ when the measurements correspond to projection onto an eigenbasis of $XX$. The large error (an estimate of $0.44$ compared to the true value of $0.925$) is expected since the measurements are chosen poorly with respect to the state. This problem, however, is not detected by the MC approach, where we find that the average uncertainty corresponding to a confidence level of $95\%$ is $(0.32, 0.38)$. This indicates that MC uncertainty is overconfident (because $0.44 + 0.38 = 0.82 < 0.925$). Indeed, this is seen through the empirical coverage probability which turns out to be $0$, in stark contrast with the high confidence level of $95\%$ that was chosen. By empirical coverage probability, we mean the fraction of cases where the true fidelity lies inside the computed confidence interval. We obtain very similar results when using subspace measurements (average fidelity of $0.45$, average uncertainty of $(0.31, 0.37)$, and empirical coverage probability of $0$). Therefore, we cannot always trust the uncertainty given by the MC approach. In contrast, the minimax approach gives a risk of $0.5$ for this example (implying total uncertainty) when insufficient number of stabilizer measurements are provided irrespective of whether eigenbasis or subspace measurements are used. 
An alternate approach to obtain a bound for the MLE estimate is calculating the Profile Likelihood (PL) function \cite{murphy2000profile}. Given any value $F \in [0, 1]$, PL corresponds to the solution of the optimization problem \begin{align} \text{PL}(F) = &\min_{\chi \in \mathcal{X}}\ \ell\left(\{\bm{f}^{(l)}\}_{l = 1}^L \big| \chi\right) \nonumber \\ &\text{s.t.} \quad \tr(\rho \chi) = F. \end{align} Note that we define the profile likelihood in terms of negative log-likelihood instead of the likelihood function. Our definition of PL amounts to solving the MLE optimization problem, except for the added constraint that the fidelity with the target state must be equal to $F$. The MLE solution can be obtained from PL by adding an additional layer of optimization: $\text{MLE} = \min_{F \in [0, 1]} \text{PL}(F)$. It can be shown that $\text{PL}(F)$ is convex in $F$. The advantage of calculating PL is that given a cut-off value, one can obtain a bound on the fidelity similar to an error bar. To this end, we draw a horizontal line (the cut-off) on the PL vs $F$ plot, and the locations along the $F$ axis where this line intersects the curve gives a bound on estimated fidelity. The MLE estimate lies inside this interval because it corresponds to the minimum. A schematic of a typical PL plot is shown in Fig.~\ref{fig:PL_schematic}. A somewhat similar idea for obtaining confidence regions was proposed by Faist \& Renner~\cite{faist2016practical}. \begin{figure} \caption{A schematic of a typical Profile Likelihood (PL) plotted against the parameter $F$ (representing fidelity). The MLE estimate (green) corresponds to the minimum of the PL curve. A cut-off for PL (red dashed horizontal line) gives a bound $[F_1, F_2]$ for the estimated fidelity.} \label{fig:PL_schematic} \end{figure} Using PL is a natural way of providing a bound for MLE, since it returns an interval of fidelity estimates that correspond to large enough likelihood. However, it does not provide a true confidence interval as the location of the cut-off value is unknown. What we want is that given many MLE estimates (from actual experimental data), the true fidelity must lie $1 - \epsilon$ fraction of times inside the bound calculated from PL, where $1 - \epsilon$ is the chosen confidence level. One general heuristic that is used to give such a cut-off for PL is based on Wilks' Theorem~\cite{wilks1938large}. However, this theorem is only valid asymptotically as the sample size becomes arbitrarily large, and furthermore, it doesn't work well for quantum states, especially those that are low rank~\cite{Scholten2016, glancy2012gradient}. Note that Scholten \& Blume-Kohout~\cite{Scholten2016} give an alternative to Wilks' Theorem for quantum states, but like Wilks' Theorem, their alternative is only exact asymptotically. Furthermore, it uses models different from what we need for PL (they consider nested Hilbert spaces of increasing dimension, while we require density matrices of the same dimension but with an added constraint). Therefore, as of now, we do not know how to obtain a confidence interval using PL. Hence, for the purpose of demonstration, we choose different cut-off values for the PL and compute the bounds (as described below) on the MLE estimate corresponding to each chosen value. We take a $3$-qubit random target state and apply $10\%$ depolarizing noise to obtain the actual state. 
We measure $L = 48 = 0.75 \times 4^3$ Pauli operators, chosen in the decreasing order of weights given by DFE, repeating each measurement $100$ times. Using the knowledge of the true state, we check in which of these bounds the true fidelity lies. This process is repeated $10^3$ times by generating different observed frequencies using the true state, and subsequently, we obtain the coverage probability corresponding to each value of cut-off. We also find the average width of the bound for each cut-off value. We plot the coverage probability against this average width in Fig.~\ref{fig:PL_JN_SDP_ROC}, finding that the bounds are reasonably tight. Note that the true fidelity is usually not known, so the coverage probability cannot be computed in practice. \subsection{An SDP-based approach} Finally, we compare with a simple Semi-Definite Programming (SDP) method to obtain a bound on fidelity. This involves solving the following intuitive optimization problem to obtain bounds on fidelity. \begin{align} &F_{\min} = \min_{\chi \in \mathcal{X}} \tr(\rho \chi) \nonumber \\ &\hspace{1cm} \text{s.t.}\ \sum_{i = 1}^L \sum_{k = 1}^{N_i} \left(\tr(E^{(i)}_k \chi) - f^{(i)}_k\right)^2 \leq \epsilon_m \\ &F_{\max} = \max_{\chi \in \mathcal{X}} \tr(\rho \chi) \nonumber \\ &\hspace{1cm} \text{s.t.}\ \sum_{i = 1}^L \sum_{k = 1}^{N_i} \left(\tr(E^{(i)}_k \chi) - f^{(i)}_k\right)^2 \leq \epsilon_m \end{align} In essence, we find the minimum and maximum fidelity with the target state over density matrices that satisfy the measurement statistics up to an error of $\epsilon_m$. The advantage of such an approach is that the bounds obtained are independent of the method used to estimate fidelity. The drawback, however, is that the parameter $\epsilon_m$ needs to be chosen ``by hand". Since we need to tune the parameter $\epsilon_m$ similar to the cut-off in PL method, we plot the coverage probability against average width of the bound as before in the Fig.~\ref{fig:PL_JN_SDP_ROC}. We use the same state and measurement settings as for PL, and $10^3$ estimates to compute the coverage probability. We can see that the average width of the bound given by SDP method is typically larger than both PL and the minimax method. \begin{figure} \caption{Plot of empirical coverage probability against the average width of the bound given by the Profile Likelihood (PL) and Semi-Definite Programming (SDP) methods for a $3$-qubit random target state and $48$ Pauli measurements. It can be seen that PL gives tighter bounds than the SDP method. For reference, the minimax bound corresponding to a confidence level of $95\%$ is shown as well.} \label{fig:PL_JN_SDP_ROC} \end{figure} We remark that the minimax method gives a wider confidence interval than PL in Fig.~\ref{fig:PL_JN_SDP_ROC} due to its conservative definition. In turn, the minimax method's confidence interval is guaranteed to hold irrespective of what the true state of the system is. Indeed, we can construct the fidelity estimator using the minimax method even before taking any data. In contrast, PL and SDP methods compute bounds on the fidelity after the experimental data is collected, so they can be tighter in principle. On the other hand, the PL method requires choosing a cut-off while the SDP method requires choosing the parameter $\epsilon_m$. For experimental data, we have no systematic way to choose these quantities to ensure that the bound corresponds to a genuine confidence interval. 
Therefore, we cannot use PL and SDP methods to generate confidence intervals in practice. In contrast, the minimax method gives rigorous confidence intervals without free parameters. Table \ref{tab:fidelity_estimaiton_methods_comparison} provides a quick overview of the comparison between these different methods.
{
\renewcommand{\arraystretch}{1.5}
\begin{table*}
\begin{tabular}{>{\raggedright}p{4cm}p{2cm}p{2cm}p{1.8cm}p{1.5cm}p{1.8cm}}
& Minimax & MLE, MC & PL & SDP & DFE \tabularnewline
\hline
No unknown parameters required? & \yes$^a$ & \yes & \no & \no & \yes \tabularnewline
Provides a rigorous confidence interval (never overconfident)? & \yes & \no & \no$^b$ & \no$^b$ & \yes \tabularnewline
Risk level known before seeing the outcomes? & \yes & \no & \no & \no & \yes \tabularnewline
Applies to any measurement setting? & \yes & \yes & \yes & \yes & \no \tabularnewline
No significant computation required in practice? & \no$^c$ & \no & \no & \no & \yes \tabularnewline
\hline
\end{tabular}
\begin{minipage}[t]{0.75\textwidth}
\begin{flushleft}
$^a${\footnotesize \parbox[t]{\textwidth}{\raggedright Technically, $\epsilon_o$ can be considered as a free parameter, but we fix it at $\epsilon_o = 10^{-5}$. Since we don't have to tune $\epsilon_o$, we don't list it as an unknown parameter.}}
$^b${\footnotesize Because no systematic method of obtaining a confidence interval is known.}\\
$^c${\footnotesize \parbox[t]{\textwidth}{\raggedright The computational time required for the minimax method depends on the system dimension, the target state, the measurement settings, and the algorithm used. For the Pauli measurement scheme in section~\ref{secn:minimax_method_RPM_scheme}, we have a fast algorithm irrespective of the target state or the dimension.}}
\end{flushleft}
\end{minipage}
\caption{Comparison of different methods for estimation of fidelity: Minimax method, Maximum Likelihood Estimation (MLE) with Monte-Carlo (MC) sampling for estimating the fidelity and uncertainty, Profile Likelihood (PL) and Semi-Definite Programming (SDP) methods for calculating bounds on fidelity, and the direct fidelity estimation (DFE) method.}
\label{tab:fidelity_estimaiton_methods_comparison}
\end{table*}
}
\section{Concluding remarks and future research}
The minimax method can be used to obtain an estimator for the fidelity with a pure target state for any measurement scheme. For a given setting, the estimator only needs to be computed once, and can subsequently be evaluated on raw measurement outcomes instantaneously. This gives our method a practical advantage over other estimation protocols which require random sampling of measurement settings. Crucially, the minimax method not only constructs an estimator, but also provides rigorous confidence intervals that are nearly minimax optimal. We showed that this property translates to a practical sample complexity when the measurement scheme is carefully chosen. Notably, the risk is a property of the chosen measurement scheme (including the number of repetitions) and the target state, and is thus computed before seeing any data. Because the risk is pre-computed, it is taken to be symmetric around the fidelity estimate. As a consequence, it might be sub-optimal in practice, as the confidence interval might include unphysical values when the estimate is close to 0 or 1. On the other hand, the fact that the risk is known beforehand allows us to use the method for benchmarking experimental protocols without having to take any data. This can, therefore, be useful in guiding the design of experiments.
Further, when extending the method to quantum channels using the Choi-Jamio\l{}kowski isomorphism, such benchmarking can also be done for protocols that estimate gate fidelity. The computation involved in finding the fidelity estimator is practical for relatively small dimensions, but can become computationally intensive for larger systems because intermediate steps in our algorithm need $O(d^3)$ operations. Therefore, finding more efficient ways to do the optimization needed to find the estimator would prove very useful in practice. We showed that this is possible for a specific measurement protocol involving Pauli measurements (see section \ref{secn:minimax_method_RPM_scheme}), where the optimization is reduced to two dimensions irrespective of the dimension of the system. The only challenge is efficiently computing the Pauli weights for the measurement protocol, but that is not a drawback of the algorithm itself. For example, when these weights can be efficiently computed, as is possible for well-conditioned states, the estimator can be efficiently computed in very large dimensions. It would be interesting to extend such an approach to more general measurement settings in order to efficiently compute the fidelity estimator. An obvious advantage of the minimax method over tomographic methods like MLE is that the state need not be reconstructed. We showed that the uncertainty given by the Monte-Carlo method (for MLE), although typically tighter, need not correspond to a genuine confidence interval as it can be overconfident. PL also gives tighter bounds than the minimax method because it is computed after seeing the data, but we are not aware of any systematic method to obtain a confidence interval using PL. Finding an approach to obtain confidence intervals from PL, and similarly for the SDP approach outlined above, are interesting directions for future research. Finally, the confidence intervals from the minimax method, though guaranteed to be correct, are often not as tight as they could be. This could potentially be improved by generalizing to an asymmetric risk or by computing the risk after seeing the data, or both.
\section{Code availability}
An open source implementation of the minimax method can be found at \href{https://github.com/akshayseshadri/minimax-fidelity-estimation}{https://github.com/akshayseshadri/minimax-fidelity-estimation}.
\begin{acknowledgments}
The authors thank Emanuel Knill, Scott Glancy, Yanbao Zhang, and Rainer Blatt for helpful discussions on the manuscript.\\
This material is based upon work supported by the National Science Foundation under grant no. 1819251.\\
This work utilized the Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.\\
We gratefully acknowledge support by the Austrian Science Fund (FWF), through the SFB BeyondC (FWF Project No.\ F7109) and the Institut f\"ur Quanteninformation GmbH. We also acknowledge funding from the EU H2020-FETFLAG-2018-03 under Grant Agreement no. 820495, by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via US Army Research Office (ARO) grant no. W911NF-16-1-0070 and W911NF-20-1-0007, and the US Air Force Office of Scientific Research (AFOSR) via IOE Grant No. FA9550-19-1-7044 LASCEM.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 840450. \end{acknowledgments} \begin{thebibliography}{45} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Ekert and Jozsa}(1996)}]{ekert1996quantum} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Ekert}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Jozsa}}, \bibinfo{journal}{Reviews of Modern Physics} \textbf{\bibinfo{volume}{68}}, \bibinfo{pages}{733} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Georgescu et~al.}(2014)\citenamefont{Georgescu, Ashhab, and Nori}}]{Georgescu2014Review} \bibinfo{author}{\bibfnamefont{I.~M.} \bibnamefont{Georgescu}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Ashhab}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Nori}}, \bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{153} (\bibinfo{year}{2014}). \bibitem[{\citenamefont{Gisin and Thew}(2007)}]{gisin2007quantum} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Gisin}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Thew}}, \bibinfo{journal}{Nature photonics} \textbf{\bibinfo{volume}{1}}, \bibinfo{pages}{165} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Degen et~al.}(2017)\citenamefont{Degen, Reinhard, and Cappellaro}}]{degen2017quantum} \bibinfo{author}{\bibfnamefont{C.~L.} \bibnamefont{Degen}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Reinhard}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Cappellaro}}, \bibinfo{journal}{Reviews of modern physics} \textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{035002} (\bibinfo{year}{2017}). \bibitem[{\citenamefont{Giovannetti et~al.}(2011)\citenamefont{Giovannetti, Lloyd, and Maccone}}]{giovannetti2011advances} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lloyd}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Maccone}}, \bibinfo{journal}{Nature photonics} \textbf{\bibinfo{volume}{5}}, \bibinfo{pages}{222} (\bibinfo{year}{2011}). \bibitem[{\citenamefont{Nielsen and Chuang}(2010)}]{Nielsen2000} \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Nielsen}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~L.} \bibnamefont{Chuang}}, \emph{\bibinfo{title}{{Quantum Computation and Quantum Information}}} (\bibinfo{publisher}{Cambridge University Press}, \bibinfo{address}{Cambridge}, \bibinfo{year}{2010}), ISBN \bibinfo{isbn}{9780511976667}. \bibitem[{\citenamefont{Vogel and Risken}(1989)}]{Vogel1989} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Vogel}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Risken}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{40}}, \bibinfo{pages}{2847} (\bibinfo{year}{1989}). \bibitem[{\citenamefont{Hradil}(1997)}]{Hradil1997} \bibinfo{author}{\bibfnamefont{Z.}~\bibnamefont{Hradil}}, \bibinfo{journal}{Phys. Rev. 
A} \textbf{\bibinfo{volume}{55}}, \bibinfo{pages}{R1561} (\bibinfo{year}{1997}). \bibitem[{\citenamefont{Blume-Kohout}(2010{\natexlab{a}})}]{Blume-Kohout2010BME} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blume-Kohout}}, \bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{043034} (\bibinfo{year}{2010}{\natexlab{a}}). \bibitem[{\citenamefont{Gross et~al.}(2010)\citenamefont{Gross, Liu, Flammia, Becker, and Eisert}}]{Gross2010} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Gross}}, \bibinfo{author}{\bibfnamefont{Y.-K.} \bibnamefont{Liu}}, \bibinfo{author}{\bibfnamefont{S.~T.} \bibnamefont{Flammia}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Becker}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Eisert}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{105}}, \bibinfo{pages}{150401} (\bibinfo{year}{2010}). \bibitem[{\citenamefont{Cramer et~al.}(2010)\citenamefont{Cramer, Plenio, Flammia, Somma, Gross, Bartlett, Landon-Cardinal, Poulin, and Liu}}]{Cramer2010} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Cramer}}, \bibinfo{author}{\bibfnamefont{M.~B.} \bibnamefont{Plenio}}, \bibinfo{author}{\bibfnamefont{S.~T.} \bibnamefont{Flammia}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Somma}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Gross}}, \bibinfo{author}{\bibfnamefont{S.~D.} \bibnamefont{Bartlett}}, \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Landon-Cardinal}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Poulin}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.-K.} \bibnamefont{Liu}}, \bibinfo{journal}{Nat. Commun.} \textbf{\bibinfo{volume}{1}}, \bibinfo{pages}{149} (\bibinfo{year}{2010}). \bibitem[{\citenamefont{Flammia and Liu}(2011)}]{Flammia2011} \bibinfo{author}{\bibfnamefont{S.~T.} \bibnamefont{Flammia}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.-K.} \bibnamefont{Liu}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{106}}, \bibinfo{pages}{230501} (\bibinfo{year}{2011}). \bibitem[{\citenamefont{da~Silva et~al.}(2011)\citenamefont{da~Silva, Landon-Cardinal, and Poulin}}]{DaSilva2011} \bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{da~Silva}}, \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Landon-Cardinal}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Poulin}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{107}}, \bibinfo{pages}{210404} (\bibinfo{year}{2011}). \bibitem[{\citenamefont{Huang et~al.}(2020)\citenamefont{Huang, Kueng, and Preskill}}]{Huang2020} \bibinfo{author}{\bibfnamefont{H.-Y.} \bibnamefont{Huang}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Kueng}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Preskill}}, \bibinfo{journal}{Nat. Phys.} \textbf{\bibinfo{volume}{16}}, \bibinfo{pages}{1050} (\bibinfo{year}{2020}). \bibitem[{\citenamefont{Gottesman}(1997)}]{Gottesman1997} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Gottesman}}, \textbf{\bibinfo{volume}{2008}} (\bibinfo{year}{1997}), \eprint{9705052}. \bibitem[{\citenamefont{Curty et~al.}(2004)\citenamefont{Curty, Lewenstein, and L{\"u}tkenhaus}}]{curty2004entanglement} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Curty}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Lewenstein}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{L{\"u}tkenhaus}}, \bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{217903} (\bibinfo{year}{2004}). 
\bibitem[{\citenamefont{van Enk et~al.}(2007)\citenamefont{van Enk, L{\"u}tkenhaus, and Kimble}}]{van2007experimental} \bibinfo{author}{\bibfnamefont{S.~J.} \bibnamefont{van Enk}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{L{\"u}tkenhaus}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.~J.} \bibnamefont{Kimble}}, \bibinfo{journal}{Physical Review A} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{052318} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Brandao}(2005)}]{brandao2005quantifying} \bibinfo{author}{\bibfnamefont{F.~G. S.~L.} \bibnamefont{Brandao}}, \bibinfo{journal}{Physical Review A} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{022310} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Bourennane et~al.}(2004)\citenamefont{Bourennane, Eibl, Kurtsiefer, Gaertner, Weinfurter, G{\"u}hne, Hyllus, Bru{\ss}, Lewenstein, and Sanpera}}]{bourennane2004experimental} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Bourennane}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Eibl}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kurtsiefer}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gaertner}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{G{\"u}hne}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Hyllus}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bru{\ss}}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Lewenstein}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Sanpera}}, \bibinfo{journal}{Physical Review Letters} \textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{087902} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Gu{\c{t}}{\u{a}} et~al.}(2020)\citenamefont{Gu{\c{t}}{\u{a}}, Kahn, Kueng, and Tropp}}]{guctua2020FastTomography} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Gu{\c{t}}{\u{a}}}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Kahn}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Kueng}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~A.} \bibnamefont{Tropp}}, \bibinfo{journal}{Journal of Physics A: Mathematical and Theoretical} \textbf{\bibinfo{volume}{53}}, \bibinfo{pages}{204001} (\bibinfo{year}{2020}). \bibitem[{\citenamefont{Cerezo et~al.}(2020)\citenamefont{Cerezo, Poremba, Cincio, and Coles}}]{cerezo2020variational} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Cerezo}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Poremba}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Cincio}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~J.} \bibnamefont{Coles}}, \bibinfo{journal}{Quantum} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{248} (\bibinfo{year}{2020}). \bibitem[{\citenamefont{Juditsky and Nemirovski}(2009)}]{Juditsky2009} \bibinfo{author}{\bibfnamefont{A.~B.} \bibnamefont{Juditsky}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{Nemirovski}}, \bibinfo{journal}{Ann. Stat.} \textbf{\bibinfo{volume}{37}}, \bibinfo{pages}{2278} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Goldenshluger et~al.}(2015)\citenamefont{Goldenshluger, Juditsky, and Nemirovski}}]{goldenshluger2015hypothesis} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Goldenshluger}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Juditsky}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Nemirovski}}, \bibinfo{journal}{Electron. J. Stat.} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{1645} (\bibinfo{year}{2015}). 
\bibitem[{\citenamefont{Juditsky and Nemirovski}(2019)}]{juditsky2018near} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Juditsky}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Nemirovski}}, \bibinfo{journal}{IMA Inf. Inference} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{423} (\bibinfo{year}{2019}). \bibitem[{PRL()}]{PRL} \bibinfo{note}{See accompanying letter submission for details.} \bibitem[{\citenamefont{Seshadri and Becker}(2021)}]{seshadri2021computation} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Seshadri}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Becker}}, \bibinfo{journal}{arXiv preprint arXiv:2112.03390} (\bibinfo{year}{2021}). \bibitem[{\citenamefont{Fuchs}(1996)}]{fuchs1996distinguishability} \bibinfo{author}{\bibfnamefont{C.~A.} \bibnamefont{Fuchs}}, \bibinfo{journal}{arXiv preprint quant-ph/9601020} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Briegel et~al.}(2009)\citenamefont{Briegel, Browne, W., Raussendorf, and Van~den Nest}}]{BriegelMBQC2009} \bibinfo{author}{\bibfnamefont{H.-J.} \bibnamefont{Briegel}}, \bibinfo{author}{\bibfnamefont{D.~E.} \bibnamefont{Browne}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{W.}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Raussendorf}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Van~den Nest}}, \bibinfo{journal}{Nat. Phys.} \textbf{\bibinfo{volume}{19}}, \bibinfo{pages}{5} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Kliesch}()}]{klieschcharacterization} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Kliesch}}, \emph{\bibinfo{title}{Lecture notes: Characterization, certification, and validation of quantum systems}}, \bibinfo{note}{{H}einrich-Heine-Universit{\"a}t D{\"u}sseldorf (2020)}. \bibitem[{\citenamefont{Pallister et~al.}(2018)\citenamefont{Pallister, Linden, and Montanaro}}]{pallister2018optimal} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Pallister}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Linden}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Montanaro}}, \bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{120}}, \bibinfo{pages}{170502} (\bibinfo{year}{2018}). \bibitem[{\citenamefont{Takeuchi and Morimae}(2018)}]{takeuchi2018verification} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Takeuchi}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Morimae}}, \bibinfo{journal}{Physical Review X} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{021060} (\bibinfo{year}{2018}). \bibitem[{\citenamefont{Heinosaari et~al.}(2020)\citenamefont{Heinosaari, Jivulescu, and Nechita}}]{heinosaari2020random} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Heinosaari}}, \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Jivulescu}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Nechita}}, \bibinfo{journal}{Journal of Mathematical Physics} \textbf{\bibinfo{volume}{61}}, \bibinfo{pages}{042202} (\bibinfo{year}{2020}). \bibitem[{\citenamefont{{\v{R}}eh{\'a}{\v{c}}ek et~al.}(2001)\citenamefont{{\v{R}}eh{\'a}{\v{c}}ek, Hradil, and Je{\v{z}}ek}}]{vrehavcek2001iterative} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{\v{R}}eh{\'a}{\v{c}}ek}}, \bibinfo{author}{\bibfnamefont{Z.}~\bibnamefont{Hradil}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Je{\v{z}}ek}}, \bibinfo{journal}{Physical Review A} \textbf{\bibinfo{volume}{63}}, \bibinfo{pages}{040303} (\bibinfo{year}{2001}). 
\bibitem[{\citenamefont{Blume-Kohout}(2010{\natexlab{b}})}]{Blume-Kohout2010HedgedMLE} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blume-Kohout}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{105}}, \bibinfo{pages}{200504} (\bibinfo{year}{2010}{\natexlab{b}}). \bibitem[{\citenamefont{DiCiccio and Efron}(1996)}]{diciccio1996bootstrap} \bibinfo{author}{\bibfnamefont{T.~J.} \bibnamefont{DiCiccio}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Efron}}, \bibinfo{journal}{Statistical science} pp. \bibinfo{pages}{189--212} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Murphy and Van~der Vaart}(2000)}]{murphy2000profile} \bibinfo{author}{\bibfnamefont{S.~A.} \bibnamefont{Murphy}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~W.} \bibnamefont{Van~der Vaart}}, \bibinfo{journal}{Journal of the American Statistical Association} \textbf{\bibinfo{volume}{95}}, \bibinfo{pages}{449} (\bibinfo{year}{2000}). \bibitem[{\citenamefont{Faist and Renner}(2016)}]{faist2016practical} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Faist}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Renner}}, \bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{117}}, \bibinfo{pages}{010404} (\bibinfo{year}{2016}). \bibitem[{\citenamefont{Wilks}(1938)}]{wilks1938large} \bibinfo{author}{\bibfnamefont{S.~S.} \bibnamefont{Wilks}}, \bibinfo{journal}{The annals of mathematical statistics} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{60} (\bibinfo{year}{1938}). \bibitem[{\citenamefont{Scholten and Blume-Kohout}(2018)}]{Scholten2016} \bibinfo{author}{\bibfnamefont{T.~L.} \bibnamefont{Scholten}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blume-Kohout}}, \bibinfo{journal}{New J. Phys.} \textbf{\bibinfo{volume}{20}}, \bibinfo{pages}{023050} (\bibinfo{year}{2018}). \bibitem[{\citenamefont{Glancy et~al.}(2012)\citenamefont{Glancy, Knill, and Girard}}]{glancy2012gradient} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Glancy}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Knill}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Girard}}, \bibinfo{journal}{New Journal of Physics} \textbf{\bibinfo{volume}{14}}, \bibinfo{pages}{095017} (\bibinfo{year}{2012}). \bibitem[{\citenamefont{Fuchs and Van De~Graaf}(1999)}]{fuchs1999cryptographic} \bibinfo{author}{\bibfnamefont{C.~A.} \bibnamefont{Fuchs}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Van De~Graaf}}, \bibinfo{journal}{IEEE Transactions on Information Theory} \textbf{\bibinfo{volume}{45}}, \bibinfo{pages}{1216} (\bibinfo{year}{1999}). \bibitem[{\citenamefont{Nesterov}(1988)}]{nesterov1988approach} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Nesterov}}, \bibinfo{journal}{Ekonomika i Mateaticheskie Metody} \textbf{\bibinfo{volume}{24}}, \bibinfo{pages}{509} (\bibinfo{year}{1988}). \bibitem[{\citenamefont{Tseng}(2010)}]{tseng2010approximation} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Tseng}}, \bibinfo{journal}{Mathematical Programming} \textbf{\bibinfo{volume}{125}}, \bibinfo{pages}{263} (\bibinfo{year}{2010}). \bibitem[{\citenamefont{Wilde}(2011)}]{wilde2011classical} \bibinfo{author}{\bibfnamefont{M.~M.} \bibnamefont{Wilde}}, \bibinfo{journal}{arXiv preprint arXiv:1106.1445} (\bibinfo{year}{2011}). 
\bibitem[{\citenamefont{Boyd and Vandenberghe}(2004)}]{boyd2004convex} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Boyd}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Vandenberghe}}, \emph{\bibinfo{title}{Convex optimization}} (\bibinfo{publisher}{Cambridge university press}, \bibinfo{year}{2004}). \end{thebibliography} \appendix \section{Minimax Method: Theory\label{app:minimax_theory}} We discuss here the theory surrounding the minimax method for fidelity estimation. We begin by giving a short overview of Juditsky \& Nemirovski's framework \cite{Juditsky2009} in appendix~\ref{app:JN_premise}. Then, in appendix~\ref{app:JN_fidelity_estimation}, we describe how we adapt their general method for the purpose of fidelity estimation so as to obtain the procedure described in section~\ref{secn:minimax_method_theory}. \subsection{Juditsky \& Nemirovski's premise\label{app:JN_premise}} Suppose we are given a set of ``states" $\mathcal{X} \subseteq \mathbb{R}^{d}$ that is a compact and convex subset of $\mathbb{R}^d$. We wish to estimate the linear functional $\ip{g, x}$, where $g \in \mathbb{R}^d$ is some fixed vector, while the state $x \in \mathcal{X}$ of the system is unknown to us. We do not have direct access to the state $x$. However, we have access to a single measurement outcome determined by the state $x$. Measurements are modelled using random variables that assign probabilities to the possible outcomes depending on the state. To that end, Juditsky \& Nemirovski~\cite{Juditsky2009} consider a family of random variables $\mathrm{Z}_\mu$ parametrized by $\mu \in \mathcal{M}$, where $\mathcal{M} \subseteq \mathbb{R}^m$ is some set of parameters. These random variables take values in a Polish space\footnote{A Polish space is a topological space that is homeomorphic to a separable complete metric space. We endow this with the Borel $\sigma$-algebra.} $(\Omega, \Sigma)$ equipped with a $\sigma$-finite Borel measure $\mathbb{P}$~\cite{Juditsky2009}. We assume that $\mathrm{Z}_\mu$ has a probability density $p_\mu$ with respect to this reference measure $\mathbb{P}$. The state $x \in \mathcal{X}$ determines the random variable $\mathrm{Z}_{A(x)}$ through an affine function $A\colon \mathcal{X} \to \mathcal{M}$, and we are given one outcome of this random variable for the purpose of estimation. Looking ahead to the specialization of this framework to the quantum setting, one can think of the set of observations $\Omega$ as a finite set, Borel measurable functions on $\Omega$ as functions on $\Omega$, and the integrals $\int_\Omega f(\omega) d\mathbb{P}$ as the finite sum $\sum_{\omega \in \Omega} f(\omega)$. Our goal is to construct an estimator for $\ip{g, x}$ that uses an outcome of the random variable $\mathrm{Z}_{A(x)}$ to give an estimate. An estimator is a real-valued Borel measurable function on $\Omega$. The set of estimators $\mathcal{F}$ we are allowed to work with is any finite-dimensional vector space comprised of real-valued Borel measurable functions on $\Omega$ as long as it contains constant functions~\cite{Juditsky2009}. The mapping $\mathcal{D}(\mu) = p_\mu$ between the parameter $\mu$ and the corresponding probability density $p_\mu$ is called a parametric density family~\cite{Juditsky2009}. 
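To make this abstract setup concrete, the following minimal sketch (ours, not part of Juditsky \& Nemirovski's presentation; all names and numerical values are purely illustrative) instantiates the ingredients above for a finite outcome space: the states $\mathcal{X}$ are probability vectors, $A$ is the identity map, the densities $p_\mu$ are categorical distributions with respect to the counting measure, and an estimator is simply a vector of real values indexed by outcomes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy instantiation (illustrative names only).
# States: probability vectors in R^3, a compact convex set X.
x_true = np.array([0.5, 0.3, 0.2])   # unknown state x in X
g = np.array([1.0, 0.0, -1.0])       # fixed vector; we estimate <g, x>

def A(x):
    # Affine map A: X -> M; here the identity, so p_{A(x)} is the
    # categorical density x on Omega = {0, 1, 2} (counting measure).
    return x

# For finite Omega, an estimator is just a vector of values, one per outcome.
g_hat = np.array([0.9, 0.1, -0.8])   # some estimator in F (illustrative)

omega = rng.choice(3, p=A(x_true))   # one observation omega ~ p_{A(x)}
print("<g, x> =", g @ x_true, "  estimate =", g_hat[omega])
\end{verbatim}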
In order to be able to choose an appropriate estimator from $\mathcal{F}$ given that the probability density of the random variable is $p_{A(x)}$, we want the set of estimators $\mathcal{F}$ and the parametric density family $\mathcal{D}$ to interact well with each other. This gives rise to the notion of a good pair defined by Juditsky \& Nemirovski~\cite{Juditsky2009}. \begin{definition}[Good pair] \label{defn:good_pair} We call a given pair $(\mathcal{D}, \mathcal{F})$ of parametric density family $\mathcal{D}$ and finite-dimensional space $\mathcal{F}$ of Borel functions on $\Omega$ a good pair if the following conditions hold. \begin{enumerate} \item $\mathcal{M}$ is a relatively open convex set in $\mathbb{R}^m$. By relatively open, we mean $\mathcal{M} = \text{relint}(\mathcal{M}) \equiv \{x \in \mathcal{M} \mid \exists r > 0 \text{ with } B(x, r) \cap \text{aff}(\mathcal{M}) \subseteq \mathcal{M}\}$, where $\text{aff}(\mathcal{M})$ is the affine hull of $\mathcal{M}$. \item Whenever $\mu \in \mathcal{M}$, we have $p_\mu(\omega) > 0$ for all $\omega \in \Omega$. \item Whenever $\mu, \nu \in \mathcal{M}$, $\phi(\omega) = \ln(p_\mu(\omega)/p_\nu(\omega)) \in \mathcal{F}$. \item Whenever $\phi \in \mathcal{F}$, the function \begin{equation*} F_{\phi}(\mu) = \ln\left(\int_\Omega \exp\left(\phi(\omega)\right) p_\mu(\omega) d\mathbb{P}\right) \end{equation*} is well-defined and concave in $\mu \in \mathcal{M}$. \end{enumerate} \end{definition} Any estimator $\widehat{g} \in \mathcal{F}$ is called an affine estimator (note, however, that $\widehat{g}$ need not be an affine function). To judge the performance of an arbitrary estimator $\widehat{g}$, we define the $\epsilon$-risk as follows~\cite{Juditsky2009}. \begin{definition}[$\epsilon$-risk] Given a confidence level $1 - \epsilon \in (0, 1)$, we define the $\epsilon$-risk associated with an estimator $\widehat{g}$ as \begin{equation*} \mathcal{R}(\widehat{g}; \epsilon) = \inf\left\{\delta: \sup_{x \in \mathcal{X}} \Prob_{\omega \sim p_{A(x)}}\left\{\omega: |\widehat{g}(\omega) - \ip{g, x}| > \delta\right\} < \epsilon\right\} , \end{equation*} where $\omega \sim p_{A(x)}$ means that $\omega$ is sampled according to $p_{A(x)}$. The corresponding minimax optimal risk is defined as \begin{equation*} \mathcal{R}_*(\epsilon) = \inf_{\widehat{g}} \mathcal{R}(\widehat{g}; \epsilon) \end{equation*} where the infimum is taken over \textit{all} Borel functions $\widehat{g}$ on $\Omega$. Restricting to just the affine estimators, the affine risk is defined as \begin{equation*} \mathcal{R}_{\text{aff}}(\epsilon) = \inf_{\widehat{g} \in \mathcal{F}} \mathcal{R}(\widehat{g}; \epsilon) . \end{equation*} \end{definition} It turns out that we don't lose much by restricting our attention to affine estimators. Indeed, Juditsky \& Nemirovski~\cite{Juditsky2009} prove that if $(\mathcal{D}, \mathcal{F})$ is a good pair, there is an estimator $\widehat{F}_* \in \mathcal{F}$ with $\epsilon$-risk at most $\widehat{\mathcal{R}}_*$, such that \begin{align*} \mathcal{R}_{\text{aff}}(\epsilon) &\leq \widehat{\mathcal{R}}_* \leq \vartheta(\epsilon) \mathcal{R}_*(\epsilon) \\ \vartheta(\epsilon) &= 2 + \frac{\ln(64)}{\ln(0.25/\epsilon)} \end{align*} for $\epsilon \in (0, 0.25)$. The estimator $\widehat{F}_*$ and the risk $\widehat{\mathcal{R}}_*$ are constructed as follows~\cite{seshadri2021computation, Juditsky2009}. 
\begin{enumerate}[leftmargin=0.2cm] \item Find the saddle-point value of the function $\Phi\colon (\mathcal{X} \times \mathcal{X}) \times (\mathcal{F} \times \mathbb{R}_+) \to \mathbb{R}$ defined as \begin{align} \Phi(x, y;&\ \phi, \alpha) = \ip{g, x} - \ip{g, y} + 2\alpha \ln(2/\epsilon) \nonumber \\ &+ \alpha \Bigg[\ln\left(\int_\Omega \exp(-\phi(\omega)/\alpha) p_{A(x)}(\omega) d\mathbb{P}\right) \nonumber \\ &\hspace{1cm} + \ln\left(\int_\Omega \exp(\phi(\omega)/\alpha) p_{A(y)}(\omega) d\mathbb{P}\right)\Bigg] . \label{eqn:Phi_general} \end{align} to a given precision. Juditsky \& Nemirovski~\cite{Juditsky2009} show that $\Phi$ has the following properties. $\Phi$ is concave in $(x, y)$ and convex in $(\phi, \alpha)$, and also $\Phi \geq 0$. Further, $\Phi$ has a well-defined saddle-point. See Ref.~\cite{Juditsky2009} for other properties and a more general treatment of the problem. \item Denote the saddle-point value of $\Phi$ by $2\widehat{\mathcal{R}}_*$: \begin{align} \widehat{\mathcal{R}}_* &= \frac{1}{2} \sup_{x, y \in \mathcal{X}} \inf_{\phi \in \mathcal{F}, \alpha > 0} \Phi(x, y; \phi, \alpha) \nonumber \\ &= \frac{1}{2} \inf_{\phi \in \mathcal{F}, \alpha > 0} \max_{x, y \in \mathcal{X}} \Phi(x, y; \phi, \alpha) \label{eqn:JN_risk_saddle_point_general}. \end{align} Say the saddle-point value is achieved at some points $x^*, y^* \in \mathcal{X}$, $\phi_* \in \mathcal{F}$ and $\alpha_* > 0$ to a precision $\delta~>~0$. Suppose that an outcome $\omega \in \Omega$ is observed upon measurement of the random variable $\mathrm{Z}_{A(x)}$. Then, the estimator $\widehat{F}_* \in \mathcal{F}$ is given as \begin{align} &\widehat{F}_*(\omega) = \phi_*(\omega) + c \nonumber \intertext{where the affine estimator $\phi_*$ is given by} &\frac{\phi_*}{\alpha_*} = \frac{1}{2} \ln\left(\frac{p_{A(x^*)}}{p_{A(y^*)}}\right) \label{eqn:phi_alpha_opt} \intertext{and the constant $c$ is} &c = \frac{1}{2} \left(\ip{g, x^*} + \ip{g, y^*}\right) \label{eqn:JN_estimator_constant_general} \end{align} The $\epsilon$-risk associated with this estimator satisfies $\mathcal{R}(\widehat{F}_*; \epsilon) \leq \widehat{\mathcal{R}}_* + \delta$, so the final output of the procedure is $\widehat{F}_*(\omega) \pm (\widehat{\mathcal{R}}_* + \delta)$. See Refs.~\cite{seshadri2021computation, Juditsky2009} for details. \end{enumerate} Importantly, the estimator $\widehat{F}_*$ is a function that can accept any outcome $\omega \in \Omega$. In other words, the estimate is provided depending on the outcome, but the risk $\widehat{R}_*$ is computed before seeing any outcome. So far we have described the one-shot scenario, i.e., producing an estimate for $\ip{g, x}$ from one outcome of a single random variable $\mathrm{Z}_{A(x)}$. In practice, we will need to consider outcomes of different random variables $\mathrm{Z}_{A^{(l)}(x)}$, which corresponds to $l = 1, \dotsc, L$ different measurement settings. More precisely, we are given Polish spaces $(\Omega^{(l)}, \Sigma^{(l)})$ equipped with a $\sigma$-finite Borel measure $\mathbb{P}^{(l)}$ for $l = 1, \dotsc, L$. We are also given a set of parameters $\mathcal{M}^{(l)}$ for $l = 1, \dotsc, L$. For each $l = 1, \dotsc, L$, we are given a family of random variables $\mathrm{Z}_{\mu_l}$ taking values in $\Omega^{(l)}$, where $\mu_l \in \mathcal{M}^{(l)}$. The random variable $\mathrm{Z}_{\mu_l}$ has a probability density $p^{(l)}_{\mu_l}$ with respect to the reference measure $\mathbb{P}^{(l)}$. 
As before, we are given affine mappings $A^{(l)}\colon \mathcal{X} \to \mathcal{M}^{(l)}$ for $l = 1, \dotsc, L$ that map the state $x \in \mathcal{X}$ of the system to a corresponding parameter. For each $l = 1, \dotsc, L$, we can choose estimators for the $l^{\text{th}}$ measurement from the set $\mathcal{F}^{(l)}$, which is a finite-dimensional vector space of real-valued Borel measurable functions on $\Omega^{(l)}$ that contains constant functions. To incorporate the outcomes of these different random variables, Juditsky \& Nemirovski~\cite{Juditsky2009} define the direct product of good pairs, which essentially constructs one large good pair from many smaller ones. \begin{definition}[Direct product of good pairs] \label{defn:direct_product_good_pair} Consider the following quantities for $l = 1, \dotsc, L$. Let $(\Omega^{(l)}, \Sigma^{(l)})$ be a Polish space endowed with a Borel $\sigma$-finite measure $\mathbb{P}^{(l)}$. Let $\mathcal{D}^{(l)}(\mu_l) = p^{(l)}_{\mu_l}$ be the parametric density family for $\mu_l \in \mathcal{M}^{(l)}$. Let $\mathcal{F}^{(l)}$ be a finite-dimensional linear space of Borel functions on $\Omega^{(l)}$ containing constants, such that the pair $(\mathcal{D}^{(l)}, \mathcal{F}^{(l)})$ is good. Then the direct product of these good pairs $(\mathcal{D}, \mathcal{F}) = \bigotimes_{l = 1}^L (\mathcal{D}^{(l)}, \mathcal{F}^{(l)})$ is defined as follows. \begin{enumerate} \item The large space is $\Omega = \Omega^{(1)} \times \dotsb \times \Omega^{(L)}$ endowed with the product measure $\mathbb{P} = \mathbb{P}^{(1)} \times \dotsb \times \mathbb{P}^{(L)}$. \item The set of parameters is $\mathcal{M} = \mathcal{M}^{(1)} \times \dotsb \times \mathcal{M}^{(L)}$, and the associated parametric density family is $\mathcal{D}(\mu) = p_\mu \equiv \prod_{l = 1}^L p^{(l)}_{\mu_l}$ for $\mu = (\mu_1, \dotsc, \mu_L) \in \mathcal{M}$. \item The linear space $\mathcal{F}$ comprises all functions $\phi$ defined as $\phi(\omega_1, \omega_2, \dotsc, \omega_L) = \sum_{l = 1}^L \phi^{(l)}(\omega_l)$, where $\phi^{(l)} \in \mathcal{F}^{(l)}$ and $\omega_l \in \Omega^{(l)}$ for $l = 1, \dotsc, L$. \end{enumerate} \end{definition} It can be verified that the direct product of good pairs is a good pair~\cite{Juditsky2009}. Therefore, we can apply the above procedure for constructing an optimal estimator to the direct product of good pairs to obtain an optimal estimator that accounts for all the given measurement outcomes. We now note a simplification of the risk $\widehat{\mathcal{R}}_*$ obtained using results of Juditsky \& Nemirovski \cite{Juditsky2009}. Recall that we defined the risk $\widehat{\mathcal{R}}_*$ as half the saddle-point value of the function $\Phi$ (see Eq.~\eqref{eqn:JN_risk_saddle_point_general}). However, this definition is not very amenable to theoretical calculations. Therefore, we note an alternate expression for the risk, given by Proposition~3.1 of Juditsky \& Nemirovski~\cite{Juditsky2009}. \begin{equation} \widehat{\mathcal{R}}_* = \frac{1}{2} \max_{x, y \in \mathcal{X}} \left\{\ip{g, x} - \ip{g, y} \mid \affh(A(x), A(y)) \geq \frac{\epsilon}{2}\right\} \label{eqn:JNriskprop3.1_general} \end{equation} Juditsky \& Nemirovski~\cite{Juditsky2009} refer to the quantity \begin{equation} \affh(\mu, \nu) = \int_\Omega \sqrt{p_\mu p_\nu} d\mathbb{P} \label{eqn:Hellinger_affinity} \end{equation} as the Hellinger affinity (this quantity is sometimes referred to as the Bhattacharyya coefficient in the discrete case; see for example Ref.~\cite{fuchs1999cryptographic}). 
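For the finite outcome spaces used below, the Hellinger affinity of Eq.~\eqref{eqn:Hellinger_affinity} reduces to the Bhattacharyya coefficient $\sum_\omega \sqrt{p_\mu(\omega) p_\nu(\omega)}$, and for a direct product of settings in which setting $l$ is repeated $R_l$ times the per-setting affinities multiply with exponents $R_l$. The short sketch below (ours; the numerical values are arbitrary placeholders) computes both quantities.
\begin{verbatim}
import numpy as np

def affinity(p, q):
    # Hellinger affinity (Bhattacharyya coefficient) of two discrete densities.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.sqrt(p * q)))

def product_affinity(ps, qs, reps):
    # Affinity of the direct product: settings l = 1..L, setting l repeated
    # reps[l] times, so the affinities multiply with exponents reps[l].
    return float(np.prod([affinity(p, q) ** r
                          for p, q, r in zip(ps, qs, reps)]))

# Example: two binary settings, repeated 10 and 5 times.
p1, q1 = [0.9, 0.1], [0.8, 0.2]
p2, q2 = [0.6, 0.4], [0.5, 0.5]
print(affinity(p1, q1))
print(product_affinity([p1, p2], [q1, q2], [10, 5]))
\end{verbatim}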
Juditsky \& Nemirovski~\cite{Juditsky2009} prove that the Hellinger affinity is continuous and log-concave in $(\mu, \nu) \in \mathcal{M} \times \mathcal{M}$ when $(\mathcal{D}, \mathcal{F})$ is a good pair. \subsection{Fidelity estimation\label{app:JN_fidelity_estimation}} Now we give the details on how we adapt Juditsky \& Nemirovski's general framework to fidelity estimation. In Table~\ref{tab:JN_quantities_quantum}, we provide a dictionary relating the general quantities defined in appendix~\ref{app:JN_premise} to our scenario of fidelity estimation. We assume that the quantum system has a $d$-dimensional Hilbert space over $\mathbb{C}$, $d \in \mathbb{N}$. The set of $d \times d$ complex-valued Hermitian matrices forms a real vector space that is isomorphic to $\mathbb{R}^{d^2}$, and thus, we can write $\mathcal{X} \subseteq \mathbb{R}^{d^2}$, where $\mathcal{X}$ is the set of density matrices. Note that $\mathcal{X}$ is a compact and convex set. For the $l^\text{th}$ measurement setting, we consider the positive operator-valued measure (POVM) $\{E^{(l)}_1, \dotsc, E^{(l)}_{N_l}\}$, where $l = 1, \dotsc, L$. Since the definition of a good pair requires that the probability of each outcome is non-zero, we add a small parameter $0 < \epsilon_o \ll 1$ to make the outcome probabilities positive, as noted in section~\ref{secn:minimax_method_theory}. These outcome probabilities are represented by the affine map $A^{(l)}\colon \mathcal{X} \to \mathcal{M}^{(l)}$ given in Table~\ref{tab:JN_quantities_quantum}. \begin{table}[!ht] \begin{center} \begin{tabular}{l l} \toprule $\mathcal{X}$ & Set of density matrices \\ $\Omega^{(l)}$ & Measurement outcomes $\{1, \dotsc, N_l\}$ \\ $\mathbb{P}^{(l)}$ & Counting measure on $(\Omega^{(l)}, \Sigma^{(l)})$ with $\Sigma^{(l)} = 2^{\Omega^{(l)}}$ \\ $\mathcal{M}^{(l)}$ & Relatively open simplex $\{x \in \mathbb{R}^{N_l} \mid x_i > 0,\ \sum_i x_i = 1\}$ \\ $p_\mu$ & $p_\mu = (\mu_1, \dotsc, \mu_{N_l})$, $\mu \in \mathcal{M}^{(l)}$ \\ $A^{(l)}$ & $A^{(l)}(\chi)_k = \frac{\tr(E^{(l)}_k \chi) + \epsilon_o/N_l}{1 + \epsilon_o}$, $k = 1, \dotsc, N_l$, $\epsilon_o > 0$ \\ $\mathcal{F}^{(l)}$ & Set of estimators: real-valued functions on $\Omega^{(l)}$ \\ $g$ & Pure target state $\rho$ \\ \bottomrule \end{tabular} \end{center} \caption{A dictionary specifying the meaning of each quantity appearing in the text for the purpose of fidelity estimation. The index $l$ varies from $1$ to $L$, where $L$ denotes the number of measurement settings. The $l^\text{th}$ measurement setting is described by the POVM $\{E^{(l)}_1, \dotsc, E^{(l)}_{N_l}\}$.} \label{tab:JN_quantities_quantum} \end{table} Note that the set of outcomes $\Omega^{(l)}$ is a finite set for each $l = 1, \dotsc, L$. We consider the discrete topology $\Sigma^{(l)} = 2^{\Omega^{(l)}}$ on $\Omega^{(l)}$, so that $(\Omega^{(l)}, \Sigma^{(l)})$ forms a Polish space. The Borel $\sigma$-algebra coincides with the topology, so we use the same symbol for both. Because $\Sigma^{(l)}$ is discrete, any real-valued function on $\Omega^{(l)}$ is Borel measurable, and therefore, we omit the phrase ``Borel measurable" when talking about functions (or estimators) on $\Omega^{(l)}$. Furthermore, since each $\Omega^{(l)}$ is a finite set, real-valued functions defined on it can be considered as $|\Omega^{(l)}| = N_l$ dimensional real vectors. Thus, we treat elements of $\mathcal{F}^{(l)}$ as vectors. 
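As a concrete illustration of the dictionary in Table~\ref{tab:JN_quantities_quantum}, the following sketch (ours; the POVM and state are arbitrary placeholders) builds the smoothed outcome probabilities $A^{(l)}(\chi)_k = (\tr(E^{(l)}_k \chi) + \epsilon_o/N_l)/(1 + \epsilon_o)$ for a single measurement setting; by construction they are strictly positive and sum to one.
\begin{verbatim}
import numpy as np

def outcome_probabilities(povm, chi, eps_o=1e-5):
    # Smoothed probabilities A(chi)_k = (tr(E_k chi) + eps_o/N)/(1 + eps_o)
    # for one setting; strictly positive and summing to one.
    N = len(povm)
    p = np.array([np.real(np.trace(E @ chi)) for E in povm])
    return (p + eps_o / N) / (1.0 + eps_o)

# Example: computational-basis measurement of a single qubit in |+><+|.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
povm_z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(outcome_probabilities(povm_z, plus))   # ~[0.5, 0.5]
\end{verbatim}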
Using these facts, we show that $\mathcal{D}^{(l)}$ and $\mathcal{F}^{(l)}$ as defined in Table~\ref{tab:JN_quantities_quantum} form a good pair. By definition of $\mathcal{M}^{(l)}$ and $p_\mu$ in Table~\ref{tab:JN_quantities_quantum}, it is easy to see that the first and second conditions for a good pair given in definition~\ref{defn:good_pair} hold. We check that the last two conditions given in definition~\ref{defn:good_pair} hold. Since $\mathcal{F}^{(l)}$ contains all functions on $\Omega^{(l)}$, it contains $\ln(p_\mu/p_\nu)$ in particular. Next, we see that for $\phi^{(l)} \in \mathcal{F}^{(l)}$, we can write \begin{equation*} F_{\phi^{(l)}}(\mu) = \ln\left(\sum_{k = 1}^{N_l} \exp\left(\phi^{(l)}_k\right) \mu_k\right) \end{equation*} because $\mathbb{P}^{(l)}$ is the counting measure. As noted above, we consider $\phi^{(l)} \in \mathcal{F}^{(l)}$ as an $N_l$-dimensional real vector. Then, since $\nabla_\mu^2 F_{\phi^{(l)}} = -\bm{e} \bm{e}^T/(\ip{\bm{e}, \mu})^2$, where $\bm{e} = (e^{\phi^{(l)}_1}, \dotsc, e^{\phi^{(l)}_{N_l}})$, $F_{\phi^{(l)}}$ is concave in $\mu$. Thus, $(\mathcal{D}^{(l)}, \mathcal{F}^{(l)})$ forms a good pair. Now, suppose that we perform $R_l$ repetitions (shots) of the $l^\text{th}$ measurement setting. Then, the space to be considered for all measurement settings put together is $\Omega = (\Omega^{(1)})^{R_1} \times \dotsm \times (\Omega^{(L)})^{R_L}$, the parameter space for probability distributions is $\mathcal{M} = (\mathcal{M}^{(1)})^{R_1} \times \dotsm \times (\mathcal{M}^{(L)})^{R_L}$, and the set of estimators $\mathcal{F} \subseteq (\mathcal{F}^{(1)})^{R_1} \times \dotsm \times (\mathcal{F}^{(L)})^{R_L}$ is chosen as per definition~\ref{defn:direct_product_good_pair}. The mapping $A: \mathcal{X} \to \mathcal{M}$ is given by $A(\chi) = \bigoplus_{l = 1}^L \bigoplus_{r = 1}^{R_l} A^{(l)}(\chi) \equiv (A^{(1)}(\chi), A^{(1)}(\chi), \dotsc, A^{(L)}(\chi), A^{(L)}(\chi))$, where $A^{(l)}(\chi)$ is repeated $R_l$ times. Then, we can use the direct product of good pairs (definition~\ref{defn:direct_product_good_pair}) to compute the function $\Phi$ defined in Eq.~\eqref{eqn:Phi_general} when all the measurement settings are considered together. Note that $\phi \in \mathcal{F} \subseteq (\mathcal{F}^{(1)})^{R_1} \times \dotsm \times (\mathcal{F}^{(L)})^{R_L}$ implies $\phi = \sum_{l = 1}^L \sum_{r = 1}^{R_l} \phi^{(l, r)}$, where $\phi^{(l, r)}$ belongs to the $r^\text{th}$ copy of $\mathcal{F}^{(l)}$. Then, using Eq.~\eqref{eqn:Phi_general}, we obtain \begin{align*} \Phi(&\chi_1, \chi_2; \phi, \alpha) = \tr(\rho \chi_1) - \tr(\rho \chi_2) + 2\alpha \ln(2/\epsilon) \nonumber \\ &+ \alpha \sum_{l = 1}^L \sum_{r = 1}^{R_l} \Bigg[\ln\left(\sum_{k = 1}^{N_l} e^{-\phi^{(l, r)}_k/\alpha} \frac{\tr(E^{(l)}_k \chi_1) + \epsilon_o/N_l}{1 + \epsilon_o}\right) \nonumber \\ &\hspace{1cm} + \ln\left(\sum_{k = 1}^{N_l} e^{\phi^{(l, r)}_k/\alpha} \frac{\tr(E^{(l)}_k \chi_2) + \epsilon_o/N_l}{1 + \epsilon_o}\right)\Bigg] \end{align*} where we have used the fact that $\exp(\sum_{l, r} \phi^{(l, r)}) = \prod_{l, r} \exp(\phi^{(l, r)})$, $p_\mu = \prod_{l, r} p^{(l)}_{\mu_{l, r}}$, $A(\chi) = \bigoplus_{l, r} A^{(l)}(\chi)$, and that $\mathbb{P} = (\mathbb{P}^{(1)})^{R_1} \times \dotsm \times (\mathbb{P}^{(L)})^{R_L}$ is a product measure. By Remark~3.2 of Juditsky \& Nemirovski \cite{Juditsky2009}, we can just use $R_l$ copies of $\phi^{(l)}$ instead of $\phi^{(l, r)}$ in finding the saddle-point of the above function, and thus we obtain Eq.~\eqref{eqn:Phi_quantum}. 
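A direct transcription of this objective into code is a useful sanity check when implementing the saddle-point search. The sketch below (ours, not part of the original construction) evaluates $\Phi(\chi_1, \chi_2; \phi, \alpha)$ for given POVMs, repetition counts, and a pure target state $\rho$, using the simplification just cited, i.e., a single vector $\phi^{(l)}$ per setting weighted by $R_l$; all numerical values in the usage example are illustrative.
\begin{verbatim}
import numpy as np

def smoothed_probs(povm, chi, eps_o):
    N = len(povm)
    p = np.array([np.real(np.trace(E @ chi)) for E in povm])
    return (p + eps_o / N) / (1.0 + eps_o)

def phi_objective(rho, chi1, chi2, povms, reps, phis, alpha, eps, eps_o=1e-5):
    # Evaluate Phi(chi1, chi2; phi, alpha), with one phi vector per setting
    # (reused for all R_l repetitions, as in the simplification noted above).
    val = np.real(np.trace(rho @ chi1)) - np.real(np.trace(rho @ chi2))
    val += 2.0 * alpha * np.log(2.0 / eps)
    for povm, R, phi in zip(povms, reps, phis):
        p1 = smoothed_probs(povm, chi1, eps_o)
        p2 = smoothed_probs(povm, chi2, eps_o)
        val += alpha * R * (np.log(np.sum(np.exp(-phi / alpha) * p1))
                            + np.log(np.sum(np.exp(phi / alpha) * p2)))
    return val

# Example: target |0><0|, one Z-basis measurement repeated 100 times, phi = 0.
rho = np.diag([1.0, 0.0])
povm_z = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(phi_objective(rho, rho, np.diag([0.0, 1.0]), [povm_z], [100],
                    [np.zeros(2)], alpha=0.1, eps=0.05))
\end{verbatim}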
Suppose that the $(\chi_1, \chi_2)$ and the $\alpha$ components of the saddle-point of the function $\Phi$ are attained at $\chi_1^*, \chi_2^* \in \mathcal{X}$ and $\alpha_* > 0$, respectively, to a given precision. Then, from Eq.~\eqref{eqn:phi_alpha_opt}, definition~\ref{defn:direct_product_good_pair}, and preceding remarks, we can infer that the $\phi$-component of the saddle-point can be described by the function $\phi_* = \sum_{l = 1}^L \sum_{r = 1}^{R_l} \phi^{(l)}_*$, where for $l = 1, \dotsc, L$, we have \begin{equation} \phi^{(l)}_* = \frac{\alpha_*}{2} \ln\left(\frac{p_{A^{(l)}(\chi_1^*)}}{p_{A^{(l)}(\chi_2^*)}}\right). \label{eqn:phi_opt_quantum} \end{equation} Note that replacing $g = \rho$ with $g = \mathcal{O}$ for any Hermitian operator (observable) $\mathcal{O}$, we can obtain an estimator for the expectation value of that observable. Finally, to adapt the simplified expression for risk given in Eq.~\eqref{eqn:JNriskprop3.1_general} to the quantum case, we compute the Hellinger affinity (see Eq.~\eqref{eqn:Hellinger_affinity}). The Hellinger affinity for the quantum case is given as \begin{align} \affh(A(\chi_1), A(\chi_2)) &= \prod_{l = 1}^L \Bigg[\sum_{k = 1}^{N_l} \left(\frac{\tr(E^{(l)}_k \chi_1) + \epsilon_o/N_l}{1 + \epsilon_o}\right)^{1/2} \nonumber \\ &\hspace{1.5cm} \left(\frac{\tr(E^{(l)}_k \chi_2) + \epsilon_o/N_l}{1 + \epsilon_o}\right)^{1/2}\Bigg]^{R_l} \label{eqn:Hellinger_affinity_quantum} \\ &\approx \prod_{l = 1}^L \left[F_C(\chi_1, \chi_2; \{E^{(l)}_k\})\right]^{R_l/2} \nonumber \end{align} where in the last step, we neglect $\epsilon_o \ll 1$ to simplify the equations (we include $\epsilon_o$ in the numerical simulations). Here, $F_C$ is the classical fidelity defined in Eq.~\eqref{eqn:classicalfidelity}. Then substituting the expression for the Hellinger affinity in Eq.~\eqref{eqn:JNriskprop3.1_general}, we obtain Eq.~\eqref{eqn:JNriskprop3.1}. \section{Minimax Method: Numerical Implementation\label{app:minimax_numerical}} We outline the procedure followed to find the saddle-point of the function $\Phi$ defined in Eq.~\eqref{eqn:Phi_quantum}, from which we can compute the fidelity estimator. This is done in two steps: find the $\chi_1^*, \chi_2^* \in \mathcal{X}$ and $\alpha_* > 0$ components of the saddle-point and then compute $\phi_*/\alpha_*$ to obtain the $\phi_* \in \mathcal{F}$ component of the saddle-point. To execute the first step, we resort to the following expression for the saddle-point value noted in Ref.~\cite{seshadri2021computation}, which is obtained by appropriately re-writing the expression given by Juditsky \& Nemirovski~\cite{Juditsky2009}, \begin{align} 2\widehat{\mathcal{R}}_* &= \inf_{\alpha > 0} \bigg\{2 \alpha \ln(2/\epsilon) + \max_{\chi_1, \chi_2 \in \mathcal{X}} \bigg[\ip{\rho, \chi_1} - \ip{\rho, \chi_2} \nonumber \\ &\hspace{3.5cm}+ 2 \alpha \ln(\affh(A(\chi_1), A(\chi_2)))\bigg]\bigg\} \label{eqn:saddle_point_expression_optimization} \end{align} where the Hellinger affinity $\affh(A(\chi_1), A(\chi_2))$ is given in Eq.~\eqref{eqn:Hellinger_affinity_quantum}. Now, we note that as per the definitions used in Tab.~\ref{tab:JN_quantities_quantum}, the function $\affh(\mu, \nu) = \prod_{l = 1}^L \left[\sum_{i = 1}^{N_l} \sqrt{\mu^{(l)}_i \nu^{(l)}_i}\right]^{R_l}$ is smooth on its domain because $\mu, \nu > 0$ (component-wise) for each $\mu, \nu \in \mathcal{M}$. Since $\affh(\mu, \nu) > 0$, the function $\ln(\affh(\mu, \nu))$ is well-defined and smooth on its domain. 
Further, since $A\colon \mathcal{X} \to \mathcal{M}$ is affine, $\ln(\affh(A(\chi_1), A(\chi_2)))$ is smooth on $\mathcal{X} \times \mathcal{X}$. In particular, the derivatives of $\ln(\affh(A(\chi_1), A(\chi_2)))$ are continuous. Since $\mathcal{X} \times \mathcal{X}$ is compact, the Hessian of $\ln(\affh(A(\chi_1), A(\chi_2)))$ is bounded, and therefore, the gradient of $\ln(\affh(A(\chi_1), A(\chi_2)))$ is Lipschitz continuous. Moreover, $\ln(\affh(A(\chi_1), A(\chi_2)))$ is a jointly concave function of the density matrices (see Appendix~\ref{app:JN_premise}). With this in mind, we use the following procedure to find a saddle-point of the function $\Phi$ defined in Eq.~\eqref{eqn:Phi_quantum} to any given precision. \begin{enumerate}[leftmargin=0.2cm] \item For any fixed $\alpha > 0$, we solve the ``inner" convex optimization problem in Eq.~\eqref{eqn:saddle_point_expression_optimization} \begin{align*} \max_{\chi_1, \chi_2 \in \mathcal{X}} &\bigg[ \underbrace{\tr(\rho \chi_1) - \tr(\rho \chi_2) + 2 \alpha \ln(\affh(A(\chi_1), A(\chi_2)))}_{f(\chi_1, \chi_2)} \bigg] \\ = \max_{\chi \in \overline{\mathcal{X}}}\ &f(\chi) \nonumber \end{align*} using the version of Nesterov's second method \cite{nesterov1988approach} given in Ref.\ \cite{tseng2010approximation}. For the second equation above, we define $\chi = (\chi_1, \chi_2)$ and $\overline{\mathcal{X}} = \mathcal{X} \times \mathcal{X}$. Nesterov's second method is suited to problems where the objective $f$ is a convex function\footnote{Or a concave function in the case of maximization.} with a Lipschitz continuous gradient (see Theorem 1 (c) in Ref.~\cite{tseng2010approximation} for convergence guarantee). In such scenarios, Nesterov's second method gives an accelerated version of projected gradient ascent/descent, such that each iterate lies in the domain $\overline{\mathcal{X}}$. This is a useful method to optimize convex functions of density matrices. When the Lipschitz constant is not known, a backtracking scheme can be used~\cite{tseng2010approximation}. \item We perform the ``outer" convex optimization over $\alpha$ in Eq.~\eqref{eqn:saddle_point_expression_optimization} using \texttt{scipy}'s \texttt{minimize\_scalar} routine. Through this optimization, we obtain the $\chi_1^*, \chi_2^* \in \mathcal{X}$ and $\alpha_* > 0$ components of the saddle-point. \item Using the so obtained $\chi_1^*, \chi_2^* \in \mathcal{X}$ and $\alpha_* > 0$, we find $\phi_* \in \mathcal{F}$ using Eq.~\eqref{eqn:phi_opt_quantum}. \end{enumerate} Once we have a saddle-point, the estimator can be easily computed using Eqs.~\eqref{eqn:JN_estimator} \& \eqref{eqn:JN_estimator_constant}. Note that we embed the Hermitian matrices into a real vector space before performing the above optimizations. This is possible because an isometric isomorphism exists between the set of Hermitian operators of a fixed size and a real vector space. We use $\epsilon_o = 10^{-5}$ in the numerical implementation. \section{Minimax Method: Sample Complexity\label{app:minimax_sample_complexity}} We begin by computing the best sample complexity that can be achieved by the minimax method. A detailed statement of this result is given in Theorem~\ref{thm:minimax_method_best_sample_complexity}. Below, we present a proof of this result. 
\begin{proof}[Proof of Theorem~\ref{thm:minimax_method_best_sample_complexity}] \label{proof:minimax_method_best_sample_complexity} From Eq.~\eqref{eqn:JNriskprop3.1}, we know that the risk can be written as \begin{align*} &\widehat{\mathcal{R}}_* = \frac{1}{2} \max_{\chi_1, \chi_2 \in \mathcal{X}} \bigg\{\tr(\rho \chi_1) - \tr(\rho \chi_2)\ \bigg| \nonumber \\ &\hspace{3cm} \prod_{l = 1}^L \left[F_C(\chi_1, \chi_2, \{E^{(l)}_k\})\right]^{R_l/2} \geq \frac{\epsilon}{2} \bigg\} \intertext{where} &F_C(\chi_1, \chi_2, \{E^{(l)}_k\}) = \left(\sum_{k = 1}^{N_l} \sqrt{\tr\left(E^{(l)}_k \chi_1\right) \tr\left(E^{(l)}_k \chi_2\right)}\right)^2 \end{align*} is the classical fidelity between $\chi_1$ and $\chi_2$ determined by the POVM $\{E^{(l)}_k\}$. As noted in section~\ref{secn:minimax_method_optimal_risk}, we can write the fidelity between any two states as follows~\cite{fuchs1996distinguishability}. \begin{equation*} F(\chi_1, \chi_2) = \min_{\text{POVM } \{F_i\}} F_C(\chi_1, \chi_2, \{F_i\}) \end{equation*} In particular, we have $F(\chi_1, \chi_2) \leq F_C(\chi_1, \chi_2, \{E^{(l)}_k\})$ for every POVM $\{E^{(l)}_k\}$ that we are using. Thus, we obtain the following lower bound on our risk \begin{align*} \widehat{\mathcal{R}}_* \geq \frac{1}{2} \max_{\chi_1, \chi_2 \in \mathcal{X}} \bigg\{&\tr(\rho \chi_1) - \tr(\rho \chi_2)\ \bigg| \nonumber \\ &F(\chi_1, \chi_2) \geq \left(\frac{\epsilon}{2}\right)^{\frac{2}{R}}\bigg\} \end{align*} where $R = \sum_{l = 1}^L R_l$ is the total number of shots. We now proceed to evaluate the lower bound. For convenience, we denote $\gamma = (\epsilon/2)^{2/R}$, so that the constraint becomes $F(\chi_1, \chi_2) \geq \gamma$. Next, we note that for a pure state $\rho$ and possibly mixed states $\chi_1$ and $\chi_2$, we have the following inequality for the fidelity in terms of the trace distance (see chapter 9 in Ref.~\cite{wilde2011classical}) \begin{equation} \tr(\rho \chi_1) \leq \tr(\rho \chi_2) + \frac{1}{2} \norm{\chi_1 - \chi_2}_1 \end{equation} where $\norm{\chi}_1$ is the Schatten $1$-norm of $\chi$. Then, using the Fuchs--van de Graaf inequality $(1/2) \norm{\chi_1 - \chi_2}_1 \leq \sqrt{1 - F(\chi_1, \chi_2)}$ \cite{fuchs1999cryptographic}, we can write \begin{align} \tr(\rho \chi_1) - \tr(\rho \chi_2) &\leq \sqrt{1 - F(\chi_1, \chi_2)} \notag \\ &\leq \sqrt{1 - \gamma} \label{eqn:fuchsvdg_ineq_constraint} \end{align} where the second line holds when $F(\chi_1, \chi_2) \geq \gamma$. We show that the upper bound in Eq.~\eqref{eqn:fuchsvdg_ineq_constraint} can be achieved by explicitly constructing the density matrices $\chi_1^*$ and $\chi_2^*$ achieving the maximum. For this purpose, we define $\Delta_\rho = \id - \rho$, and suppose that the dimension of the system is $d$ (i.e., $\rho$ is a $d \times d$ matrix). Then, let \begin{align*} \chi_1^* &= \frac{1 + \sqrt{1 - \gamma}}{2} \rho + \frac{1 - \sqrt{1 - \gamma}}{2} \frac{\Delta_\rho}{d - 1}, \\ \chi_2^* &= \frac{1 - \sqrt{1 - \gamma}}{2} \rho + \frac{1 + \sqrt{1 - \gamma}}{2} \frac{\Delta_\rho}{d - 1}. \end{align*} Since $\rho$ is pure, there is some (normalized) vector $\ket{v_1}$ such that $\rho = \op{v_1}{v_1}$. Let $\{\ket{v_2}, \dotsc, \ket{v_d}\}$ be any orthonormal basis for the orthogonal complement of the subspace spanned by $\ket{v_1}$. Then, we can write $\Delta_\rho = \sum_{i = 2}^d \op{v_i}{v_i}$ using the resolution of identity. Therefore, in the basis $\{\ket{v_1}, \dotsc, \ket{v_d}\}$, the matrices $\chi_1^*$ and $\chi_2^*$ are diagonal. 
Since $\gamma < 1$, the diagonal entries of these matrices are real (and positive), and it is easy to check that they sum to $1$, showing that $\chi_1^*$ and $\chi_2^*$ are density matrices. Since they are diagonal, it is easy to compute the fidelity between them. We find that $F(\chi_1^*, \chi_2^*) = \gamma$, and therefore, these density matrices satisfy the constraint $F(\chi_1^*, \chi_2^*) \geq \gamma$. Further, we can see that these density matrices saturate the upper bound in Eq.~\eqref{eqn:fuchsvdg_ineq_constraint}, i.e., $\tr(\rho \chi_1^*) - \tr(\rho \chi_2^*) = \sqrt{1 - \gamma}$. Thus, we find that the lower bound on the risk is \begin{equation*} \widehat{\mathcal{R}}_* \geq \frac{1}{2} \sqrt{1 - \left(\frac{\epsilon}{2}\right)^{2/R}} \end{equation*} where we used $\gamma = (\epsilon/2)^{2/R}$. The inequality given in the above equation is tight: the POVM defined by $\{\rho, \Delta_\rho\}$ achieves the lower bound (see corollary \ref{corr:minimax_method_optimal_risk}). Thus, the best sample complexity given by the minimax method corresponding to a risk of $\widehat{\mathcal{R}}_* < 0.5$ and confidence level $1 - \epsilon \in (0.75, 1)$ is \begin{align*} R &\geq \frac{2 \ln(2/\epsilon)}{\left|\ln(1 - 4\widehat{\mathcal{R}}_*^2)\right|} \\ &\approx \frac{\ln(2/\epsilon)}{2\widehat{\mathcal{R}}_*^2} \text{ when } \widehat{\mathcal{R}}_*^2\ll 1. \end{align*} as noted. \end{proof} We now consider a family of two-outcome POVM measurements, and show that the risk given by the minimax method can be obtained by solving a one-dimensional optimization problem. Using this, we obtain a simple formula for an upper bound on the risk, and consequently, also a good bound on the sample complexity. In particular, this provides an upper bound on the sample complexity for the randomized Pauli measurement scheme given in Box~\hyperlink{box:pauli_measurement_scheme}{\ref*{secn:minimax_method}.1}. \begin{theorem} \label{thm:minimax_sample_complexity_2outcomePOVM} Suppose we are given a pure target state $\rho$, and we perform $R$ repetitions of the POVM $\{\Theta, \Delta_\Theta\}$ defined as \begin{align*} \Theta &= \omega_1 \rho + \omega_2 \Delta_\rho \\ \Delta_\Theta &= (1 - \omega_1) \rho + (1 - \omega_2) \Delta_\rho \end{align*} where $\Delta_\rho = \id - \rho$ and $\omega_1, \omega_2 \in [0, 1]$ are parameters satisfying $\omega_1 > \omega_2$. Also, define \begin{align*} \gamma &= \left(\frac{\epsilon}{2}\right)^{2/R} \intertext{and} R_o &= \frac{\ln(2/\epsilon)}{|\ln(\sqrt{\omega_1 \omega_2} + \sqrt{(1 - \omega_1)(1 - \omega_2)})|} \end{align*} \noindent Then, if $R > R_o$, the risk of the estimator given by the minimax method can be obtained by solving the one-dimensional optimization problem \begin{align} \widehat{\mathcal{R}}_* &= \frac{\sqrt{1 - \gamma}}{2(\omega_1 - \omega_2)} \max_{a \in \mathcal{A}_a} \sqrt{1 - (2a - 1)^2\gamma} \label{eqn:thm_sample_complexity_2outcomePOVM-risk} \intertext{where the set of allowed values for $a$ is given as} \mathcal{A}_a &= [0, 1] \cap \left((-\infty, a^{(1)}_-] \cup [a^{(1)}_+, \infty)\right) \nonumber \\ &\hspace{2.5cm} \cap \left((-\infty, a^{(2)}_-] \cup [a^{(2)}_+, \infty)\right) \nonumber \intertext{with} a^{(1)}_{\pm} &= \omega_1 \pm \sqrt{\omega_1 (1 - \omega_1) \frac{(1 - \gamma)}{\gamma}} \nonumber \\ a^{(2)}_{\pm} &= \omega_2 \pm \sqrt{\omega_2 (1 - \omega_2) \frac{(1 - \gamma)}{\gamma}}. \nonumber \end{align} For $R \leq R_o$, the risk is $\widehat{\mathcal{R}}_* = 0.5$. 
\noindent In particular, for any risk $\widehat{\mathcal{R}}_* \in (0, 0.5)$, \begin{align} R &\geq 2\frac{\ln\left(2/\epsilon\right)}{\left|\ln\left(1 - 4 (\omega_1 - \omega_2)^2 \widehat{\mathcal{R}}_*^2\right)\right|} \label{eqn:thm_sample_complexity_2outcomePOVM-repetitions} \\ &\approx \frac{1}{2(\omega_1 - \omega_2)^2} \frac{\ln(2/\epsilon)}{\widehat{\mathcal{R}}_*^2} \nonumber \end{align} repetitions of the measurement are sufficient to achieve that risk with a confidence level of $1 - \epsilon \in (3/4, 1)$. \end{theorem} \begin{proof} For the case that we have $R$ repetitions of a single POVM $\{\Theta, \Delta_\Theta\}$, the risk can be written as (see Appendix~\ref{app:minimax_theory}) \begin{align*} \widehat{\mathcal{R}}_* &= \frac{1}{2} \max_{\chi_1, \chi_2 \in \mathcal{X}} \bigg\{\tr(\rho \chi_1) - \tr(\rho \chi_2)\ \bigg| \nonumber \\ &\hspace{2cm}\text{AffH}(A(\chi_1), A(\chi_2)) \geq \sqrt{\gamma}\bigg\} \end{align*} where $A(\chi) = \left((\tr(\Theta \chi) + \epsilon_o/2) / (1 + \epsilon_o),\ (\tr(\Delta_\Theta \chi) + \epsilon_o/2) / (1 + \epsilon_o)\right)$. To begin with, we simplify this to a 2-dimensional optimization problem. For this purpose, we write the density matrices $\chi_1$, $\chi_2$ as a convex combination of the target state $\rho$ and some other density matrix in the orthogonal complement of the subspace generated by $\rho$: \begin{align*} \chi_1 &= \alpha_1 \rho + (1 - \alpha_1) \rho_1^\perp \nonumber \\ \chi_2 &= \alpha_2 \rho + (1 - \alpha_2) \rho_2^\perp \end{align*} with $\tr(\rho \rho_1^\perp) = \tr(\rho \rho_2^\perp) = 0$ and $0 \leq \alpha_1, \alpha_2 \leq 1$. Using this, the Hellinger affinity can be written as \begin{align} \text{AffH}(\alpha_1, \alpha_2) &= \frac{1}{1 + \epsilon_o} \left(\omega_1 \alpha_1 + \omega_2 (1 - \alpha_1) + \frac{\epsilon_o}{2}\right)^{1/2} \nonumber \\ &\hspace{2cm} \left(\omega_1 \alpha_2 + \omega_2 (1 - \alpha_2) + \frac{\epsilon_o}{2}\right)^{1/2} \nonumber \\ &\hspace{-1cm} + \frac{1}{1 + \epsilon_o} \left((1 - \omega_1) \alpha_1 + (1 - \omega_2) (1 - \alpha_1) + \frac{\epsilon_o}{2}\right)^{1/2} \nonumber \\ &\hspace{0.8cm} \left((1 - \omega_1) \alpha_2 + (1 - \omega_2) (1 - \alpha_2) + \frac{\epsilon_o}{2}\right)^{1/2} \nonumber \\ &\hspace{-1.8cm}\approx \sqrt{(\omega_2 + (\omega_1 - \omega_2) \alpha_1) (\omega_2 + (\omega_1 - \omega_2) \alpha_2)} \nonumber \\ &\hspace{-1.5cm} + \sqrt{((1 - \omega_2) + (\omega_2 - \omega_1) \alpha_1) ((1 - \omega_2) + (\omega_2 - \omega_1) \alpha_2)} \label{eqn:affh_sample_complexity_2outcomePOVM} \end{align} Note that because of the parameter $\epsilon_o > 0$, the Hellinger affinity is differentiable. Since $\epsilon_o \ll 1$, we neglect it in Eq.~\eqref{eqn:affh_sample_complexity_2outcomePOVM} to prevent the equations from becoming cumbersome later. We can write the risk as \begin{align} 2 \widehat{\mathcal{R}}_* &= \max_{\alpha_1, \alpha_2 \in [0, 1]} (\alpha_1 - \alpha_2) \nonumber \\ &\hspace{1.1cm} \text{s.t. } -\ln(\affh(\alpha_1, \alpha_2)) \leq -\ln(\sqrt{\gamma}) \label{eqn:thm_sample_complexity_2outcomePOVM-risk_optimization} \end{align} We take a logarithm to make the optimization problem convex (see Proposition 3.1 in \cite{Juditsky2009}). Now, consider the case $R > R_o$, where $R_o$ is as defined in the statement of the theorem. Then, we argue that at the optimum, $\affh = \sqrt{\gamma}$. 
To see this, we first convert the above maximization to a minimization problem, and write its Lagrangian as \begin{align*} \mathcal{L} &= -\alpha_1 + \alpha_2 - \lambda \ln(\affh(\alpha_1, \alpha_2)) + \lambda \ln(\sqrt{\gamma}) \nonumber \\ &\hspace{1cm}- \nu^1_0 \alpha_1 + \nu^1_1 (\alpha_1 - 1) - \nu^2_0 \alpha_2 + \nu^2_1 (\alpha_2 - 1) \end{align*} where $\lambda, \nu^1_0, \nu^1_1, \nu^2_0, \nu^2_1$ are dual variables. At the optimum, the Karush-Kuhn-Tucker (KKT) conditions must be satisfied \cite{boyd2004convex}, which we list below for convenience. \begin{enumerate} \item (Primal feasibility) The ``primal" variables $\alpha_1, \alpha_2$ must lie in the domain $[0, 1]$. \item (Dual feasibility) The dual variables (corresponding to inequality constraints) $\lambda, \nu^1_0, \nu^1_1, \nu^2_0, \nu^2_1$ must be non-negative. \item (Complementary slackness) Either the dual variable must vanish or the constraint must be tight. \item (Stationarity) The gradient of the Lagrangian with respect to the primal variables must vanish. \end{enumerate} Since the gradient of $\mathcal{L}$ with respect to $\alpha_1$ and $\alpha_2$ must vanish at the optimum, we have \begin{align*} \frac{\partial \mathcal{L}}{\partial \alpha_1} &= -1 - \lambda \frac{\partial \ln(\affh)}{\partial \alpha_1} - \nu^1_0 + \nu^1_1 = 0 \\ \frac{\partial \mathcal{L}}{\partial \alpha_2} &= 1 - \lambda \frac{\partial \ln(\affh)}{\partial \alpha_2} - \nu^2_0 + \nu^2_1 = 0. \end{align*} Dual feasibility implies $\lambda, \nu^1_0, \nu^1_1, \nu^2_0, \nu^2_1 \geq 0$ at the optimum. If $\lambda = 0$ at the optimum, we must have $\nu^1_1 = 1 + \nu^1_0 > 0$ and $\nu^2_0 = 1 + \nu^2_1 > 0$. Then complementary slackness implies that $\affh \geq \sqrt{\gamma}$, $\alpha_1 = 1$ and $\alpha_2 = 0$ at the optimum. However, for $R > R_o$, we have \begin{align*} \affh(\alpha_1 = 1, \alpha_2 = 0) &= \sqrt{\omega_1 \omega_2} + \sqrt{(1 - \omega_1)(1 - \omega_2)} \\ &< \sqrt{\gamma} \end{align*} contradicting with the constraint. Thus, we must have $\lambda > 0$, implying that $\affh = \sqrt{\gamma}$ as claimed. Using this, we can reduce the problem to a one-dimensional problem that will eventually help perform the optimization. We do this by appropriately parametrizing each term in $\affh$: \begin{align*} &\sqrt{(\omega_2 + (\omega_1 - \omega_2) \alpha_1) (\omega_2 + (\omega_1 - \omega_2) \alpha_2)} \equiv a \sqrt{\gamma} \\ &\sqrt{((1 - \omega_2) + (\omega_2 - \omega_1) \alpha_1) ((1 - \omega_2) + (\omega_2 - \omega_1) \alpha_2)} \nonumber \\ &\hspace{2cm} \equiv b \sqrt{\gamma} \\ \intertext{Then, $\affh = \sqrt{\gamma}$ implies} &a + b = 1 \end{align*} where $a, b \geq 0$. From the above equations, we can deduce that \begin{align*} \alpha_1 + \alpha_2 &= \frac{(a^2 - b^2)}{\omega_1 - \omega_2} \gamma + \frac{(1 - 2\omega_2)}{\omega_1 - \omega_2} \\ \alpha_1 \alpha_2 &= \frac{((1 - \omega_2) a^2 + \omega_2 b^2)}{(\omega_1 - \omega_2)^2} \gamma - \frac{\omega_2 (1 - \omega_2)}{(\omega_1 - \omega_2)^2}. \end{align*} These equations are well-defined because $\omega_1 > \omega_2$. 
Solving these simultaneous equations and applying the constraint $a + b = 1$, we obtain \begin{align*} \alpha_1 &= \frac{(2a - 1)\gamma + (1 - 2\omega_2)}{2(\omega_1 - \omega_2)} + \frac{\sqrt{1 - \gamma}}{2(\omega_1 - \omega_2)} \sqrt{1 - (2a - 1)^2\gamma} \\ \alpha_2 &= \frac{(2a - 1)\gamma + (1 - 2\omega_2)}{2(\omega_1 - \omega_2)} - \frac{\sqrt{1 - \gamma}}{2(\omega_1 - \omega_2)} \sqrt{1 - (2a - 1)^2\gamma} \end{align*} Since $a \in [0, 1]$, $(2a - 1)^2 \in [0, 1]$ and $\gamma \in (0, 1)$, the term in the square-root is non-negative, so $\alpha_1, \alpha_2$ are real. Furthermore, we have used the fact that $\alpha_1 \geq \alpha_2$ at the optimum because the risk involves maximization of $\alpha_1 - \alpha_2$; see Eq.~\eqref{eqn:thm_sample_complexity_2outcomePOVM-risk_optimization}. Next, we need to impose the constraints $\alpha_1, \alpha_2 \in [0, 1]$. Requiring $\alpha_2 \geq 0$ (and thus $\alpha_1 \geq 0$) gives \begin{align*} &(a - a^{(2)}_+) (a - a^{(2)}_-) \geq 0 \\ &a^{(2)}_{\pm} = \omega_2 \pm \sqrt{\omega_2 (1 - \omega_2) \frac{(1 - \gamma)}{\gamma}} \end{align*} which means $a$ must lie in the region $(-\infty, a^{(2)}_-] \cup [a^{(2)}_+, \infty)$. Similarly, requiring $\alpha_1 \leq 1$ (and thus $\alpha_2 \leq 1$) gives \begin{align*} &(a - a^{(1)}_+)(a - a^{(1)}_-) \geq 0 \\ &a^{(1)}_{\pm} = \omega_1 \pm \sqrt{\omega_1 (1 - \omega_1) \frac{(1 - \gamma)}{\gamma}} \end{align*} which implies that $a$ must lie in the region $(-\infty, a^{(1)}_-] \cup [a^{(1)}_+, \infty)$. Therefore, the allowed values of $a$ are \begin{align*} \mathcal{A}_a &= [0, 1] \cap \left((-\infty, a^{(2)}_-] \cup [a^{(2)}_+, \infty)\right) \nonumber \\ &\hspace{2cm} \cap \left((-\infty, a^{(1)}_-] \cup [a^{(1)}_+, \infty)\right) \end{align*} Note that the optimization problem defined by Eq.~\eqref{eqn:thm_sample_complexity_2outcomePOVM-risk_optimization} has a solution for all $R > 0$ (i.e., $\gamma \in (0, 1)$) because any $\alpha_1, \alpha_2 \in [0, 1]$ with $\alpha_1 = \alpha_2$ satisfies the constraints. Therefore, we must have $\mathcal{A}_a \neq \varnothing$. Thus, the risk is given as \begin{align*} \widehat{\mathcal{R}}_* &= \max_{a \in \mathcal{A}_a} \frac{1}{2} (\alpha_1 - \alpha_2) \\ &= \frac{\sqrt{1 - \gamma}}{2(\omega_1 - \omega_2)} \max_{a \in \mathcal{A}_a} \sqrt{1 - (2a - 1)^2\gamma} \end{align*} Now, for the case when $R \leq R_o$, we have $\sqrt{\gamma} \leq \sqrt{\omega_1 \omega_2} + \sqrt{(1 - \omega_1)(1 - \omega_2)} = \affh(\alpha_1 = 1, \alpha_2 = 0)$. Therefore, $\alpha_1 = 1$ and $\alpha_2 = 0$ satisfy the constraint of the optimization in Eq.~\eqref{eqn:thm_sample_complexity_2outcomePOVM-risk_optimization}, giving $\widehat{\mathcal{R}}_* = 0.5$. The last part of the statement of the theorem follows from the observation that Eq.~\eqref{eqn:thm_sample_complexity_2outcomePOVM-repetitions} implies $R > R_o$ when $\widehat{\mathcal{R}}_* < 0.5$, and for $R > R_o$, we have the inequality $\widehat{\mathcal{R}}_* \leq \sqrt{1 - \gamma}/(2(\omega_1 - \omega_2))$. \end{proof} Note that there is no loss of generality in requiring that $\omega_1 > \omega_2$, for if $\omega_1 < \omega_2$, we can simply swap $\Theta$ and $\Delta_\Theta$. When $\omega_1 = \omega_2 = \omega$, we have $\Theta = \omega \id$ and $\Delta_\Theta = (1 - \omega) \id$, so the outcome probabilities are independent of the state and we learn nothing about it. Indeed $R_o \to \infty$ as $\omega_1 \to \omega_2$, alluding to this fact. On the other hand, the best case is when $\omega_1$ and $\omega_2$ are as far apart as possible, and this leads to the following result; a brief numerical check of the risk formula in Eq.~\eqref{eqn:thm_sample_complexity_2outcomePOVM-risk} is sketched below, before the corollary. 
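The following sketch (ours, not part of the proof; the grid search is a brute-force stand-in for the case analysis above, and all parameter values are illustrative) evaluates the risk formula of Eq.~\eqref{eqn:thm_sample_complexity_2outcomePOVM-risk} numerically, which is convenient for spot-checking special cases such as the corollary that follows.
\begin{verbatim}
import numpy as np

def two_outcome_risk(omega1, omega2, eps, R, grid=200001):
    # Risk for R shots of {Theta, Delta_Theta} with
    # Theta = omega1*rho + omega2*Delta_rho and omega1 > omega2,
    # evaluated by a grid search over the allowed set A_a.
    gamma = (eps / 2.0) ** (2.0 / R)
    aff0 = np.sqrt(omega1 * omega2) + np.sqrt((1 - omega1) * (1 - omega2))
    R_o = 0.0 if aff0 == 0.0 else np.log(2.0 / eps) / abs(np.log(aff0))
    if R <= R_o:
        return 0.5
    a = np.linspace(0.0, 1.0, grid)
    s = np.sqrt((1.0 - gamma) / gamma)
    a1m = omega1 - np.sqrt(omega1 * (1 - omega1)) * s
    a1p = omega1 + np.sqrt(omega1 * (1 - omega1)) * s
    a2m = omega2 - np.sqrt(omega2 * (1 - omega2)) * s
    a2p = omega2 + np.sqrt(omega2 * (1 - omega2)) * s
    allowed = ((a <= a1m) | (a >= a1p)) & ((a <= a2m) | (a >= a2p))
    obj = np.sqrt(1.0 - (2.0 * a - 1.0) ** 2 * gamma)
    return np.sqrt(1.0 - gamma) / (2.0 * (omega1 - omega2)) * obj[allowed].max()

# omega1 = 1, omega2 = 0 reproduces sqrt(1 - gamma)/2, cf. the corollary below.
print(two_outcome_risk(1.0, 0.0, eps=0.05, R=100))
print(0.5 * np.sqrt(1.0 - (0.05 / 2.0) ** (2.0 / 100.0)))
\end{verbatim}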
\begin{corollary} \label{corr:minimax_method_optimal_risk} Let $\rho$ be any pure target state, and $\Delta_\rho = \id - \rho$. Then, for $R$ repetitions of the POVM $\{\rho, \Delta_\rho\}$, the estimator given by minimax method achieves the risk \begin{equation*} \widehat{\mathcal{R}}_* = \frac{1}{2} \sqrt{1 - \left(\frac{\epsilon}{2}\right)^{2/R}} \end{equation*} \end{corollary} \begin{proof} We have $\omega_1 = 1$ and $\omega_2 = 0$. Substituting this in Theorem \ref{thm:minimax_sample_complexity_2outcomePOVM}, we can see that $a^{(1)}_{\pm} = 1$ and $a^{(2)}_{\pm} = 0$. Therefore, the allowed values of $a$ are $\mathcal{A}_a = [0, 1]$. Subsequently, the risk is given as \begin{align*} \widehat{\mathcal{R}}_* &= \frac{\sqrt{1 - \gamma}}{2} \max_{a \in [0, 1]} \sqrt{1 - (2a - 1)^2\gamma} \\ &= \frac{\sqrt{1 - \gamma}}{2} \end{align*} as claimed. \end{proof} We also consider a more restricted family of POVMs that are relevant to the stabilizer measurements described in section \ref{secn:minimax_method_stabilizer_states}. \begin{corollary} \label{corr:minimax_sample_complexity_stabilizer} Suppose we are given a pure target state $\rho$, and we perform $R$ repetitions of the POVM $\{\Theta, \Delta_\Theta\}$ defined as \begin{align*} \Theta &= \rho + \frac{\delta/2 - 1}{\delta - 1} \Delta_\rho \\ \Delta_\Theta &= \frac{\delta/2}{\delta - 1} \Delta_\rho \end{align*} where $\Delta_\rho = \id - \rho$ and $\delta \geq 2$ is a parameter. Also, define \begin{equation*} R_o = 2 \frac{\ln(2/\epsilon)}{\ln\left(\frac{\delta - 1}{\delta/2 - 1}\right)}. \end{equation*} \noindent Then, if $R > R_o$, the risk of the estimator given by the minimax method is \begin{align} \widehat{\mathcal{R}}_* &= \begin{cases} \left(\frac{\delta - 1}{\delta}\right) \sqrt{1 - \gamma} & b_- \geq 1 \\ \left(\frac{\delta - 1}{\delta}\right) (1 - \gamma) \sqrt{1 + b_-(2 - b_-) \left(\frac{\gamma}{1 - \gamma}\right)} & b_- < 1,\\ &\hspace{-1.5cm}|b_- - 1| \leq |b_+ - 1| \\ \left(\frac{\delta - 1}{\delta}\right) (1 - \gamma) \sqrt{1 + b_+(2 - b_+) \left(\frac{\gamma}{1 - \gamma}\right)} & b_- < 1,\\ &\hspace{-1.5cm}|b_- - 1| > |b_+ - 1| \end{cases} \label{eqn:thm_sample_complexity-risk} \intertext{where} \gamma &= \left(\frac{\epsilon}{2}\right)^{2/R} \notag \intertext{and} b_{\pm} &= \left(\frac{\delta}{\delta - 1}\right) \left[1 \pm \sqrt{\left(\frac{1 - \gamma}{\gamma}\right) \left(\frac{\delta - 2}{\delta}\right)}\right] \nonumber \end{align} For $R \leq R_o$, the risk is $\widehat{\mathcal{R}}_* = 0.5$. \noindent In particular, for any risk $\widehat{\mathcal{R}}_* \in (0, 0.5)$, \begin{align} R &\geq 2\frac{\ln\left(2/\epsilon\right)}{\left|\ln\left(1 - \left(\frac{\delta}{\delta - 1}\right)^2 \widehat{\mathcal{R}}_*^2\right)\right|} \label{eqn:thm_sample_complexity-repetitions} \\ &\approx 2 \left(\frac{\delta - 1}{\delta}\right)^2 \frac{\ln(2/\epsilon)}{\widehat{\mathcal{R}}_*^2} \nonumber \end{align} repetitions of the measurement are sufficient to achieve that risk with a confidence level of $1 - \epsilon \in (3/4, 1)$. 
\end{corollary} \begin{proof} In the context of Theorem \ref{thm:minimax_sample_complexity_2outcomePOVM}, we have \begin{equation*} \omega_1 = 1 \quad \text{and} \quad \omega_2 = \frac{\delta/2 - 1}{\delta - 1} \end{equation*} This implies $a^{(1)}_{\pm} = 1$, and also \begin{equation*} a^{(2)}_{\pm} = \frac{\delta/2 - 1}{\delta - 1} \pm \frac{\delta/2}{\delta - 1} \sqrt{\frac{(1 - \gamma)}{\gamma} \frac{\delta - 2}{\delta}} \end{equation*} For convenience, we perform the change of variables $b = 2(1 - a)$, and define \begin{align*} b_{\pm} &\equiv 2(1 - a^{(2)}_{\mp}) \\ &= \left(\frac{\delta}{\delta - 1}\right) \left[1 \pm \sqrt{\left(\frac{1 - \gamma}{\gamma}\right) \left(\frac{\delta - 2}{\delta}\right)}\right] \end{align*} Clearly, $b_- \leq b_+$. Note that $R > R_o$ implies $b_- > 0$, which means $a_+ < 1$. Thus, we have that $\mathcal{A}_a = [0, 1] \cap ((-\infty, a^{(2)}_-] \cup [a^{(2)}_+, \infty)) = [0, a^{(2)}_-] \cup [a^{(2)}_+, 1]$ is non-empty. With respect to the variable $b$, these allowed values can be expressed as $\mathcal{A}_b = [0, b_-] \cup [b_+, 2]$. The risk is then given as \begin{align*} \widehat{\mathcal{R}}_* &= \left(\frac{\delta - 1}{\delta}\right) \sqrt{1 - \gamma} \max_{b \in \mathcal{A}_b} \sqrt{1 - (1 - b)^2 \gamma} \notag \\ &= \left(\frac{\delta - 1}{\delta}\right) (1 - \gamma) \max_{b \in \mathcal{A}_b} \sqrt{1 + b (2 - b) \left(\frac{\gamma}{1 - \gamma}\right)}. \notag \end{align*} Since the objective of maximization is symmetric about $b = 1$ and the maximum is achieved at $b = 1$, the allowed value of $b$ closest to $1$ achieves the maximum. Noting that $b_+ > 1$, we obtain the expression given in the statement of the corollary. \end{proof} Finally, we obtain a bound on the sample complexity of the randomized Pauli measurement scheme described in Box \hyperlink{box:pauli_measurement_scheme}{\ref*{secn:minimax_method}.1}. The statement is given in Theorem~\ref{thm:minimax_method_pauli_scheme_sample_complexity}, so we simply give a proof. \begin{proof}[Proof of Theorem~\ref{thm:minimax_method_pauli_scheme_sample_complexity}] \label{proof:minimax_method_pauli_scheme_sample_complexity} For convenience, we reproduce below the measurement strategy given in Box \hyperlink{box:pauli_measurement_scheme}{\ref*{secn:minimax_method}.1}. Given a target state $\rho$, \begin{enumerate} \item Sample a (non-identity) Pauli operator $W_i$ with probability \begin{equation*} p_i = \frac{|\tr(W_i \rho)|}{\sum_{i = 1}^{d^2 - 1} |\tr(W_i \rho)|}, \quad i = 1, \dotsc, d^2 - 1, \end{equation*} and record the outcome ($\pm 1$) of the measurement. \item Flip the measurement outcome (i.e, $\pm 1 \to \mp 1$) if $\tr(\rho W_i) < 0$, else retain the original measurement outcome \item Repeat this procedure $R$ times. \end{enumerate} Note that measuring the Pauli $W_i$ and flipping the measurement outcome is equivalent to measuring the operator $S_i~=~\text{sign}(\tr(W_i \rho)) W_i$. We can describe the above measurement strategy using the effective POVM described below, which we obtain by finding a positive semidefinite operator that reproduces the measurement statistics. The probability of obtaining $+1$ outcome can be written as \begin{align*} \text{Pr}(+1) &= \sum_{i = 1}^{d^2 - 1} (\text{Pr. choosing $S_i$}) \nonumber \\ &\hspace{1cm} (\text{Pr. outcome $1$ upon measuring $S_i$}) \\ &\equiv \tr(\Theta \sigma) \\ \Theta &= \sum_{i = 1}^{d^2 - 1} p_i \mathbb{P}^+_i \\ \mathbb{P}^+_i &= \frac{\id + S_i}{2} \end{align*} where $S_i = \text{sign}(\tr(W_i \rho)) W_i$. 
Substituting for $p_i$ and rearranging terms gives \begin{align*} \Theta &= \frac{\id}{2} + \frac{d}{2\mathcal{N}} \sum_{i = 1}^{d^2 - 1} \frac{|\tr(W_i \rho)|}{d} S_i \\ &= \frac{(d + (\mathcal{N} - 1))}{2\mathcal{N}} \rho + \frac{(\mathcal{N} - 1)}{2\mathcal{N}} \Delta_\rho. \end{align*} $\Delta_\Theta = \id - \Theta$ is given as \begin{equation*} \Delta_\Theta = \frac{((\mathcal{N} + 1) - d)}{2\mathcal{N}} \rho + \frac{(\mathcal{N} + 1)}{2\mathcal{N}} \Delta_\rho. \end{equation*} The effective POVM is then $\{\Theta, \Delta_\Theta\}$. Substituting \begin{equation*} \omega_1 = \frac{(d + (\mathcal{N} - 1))}{2\mathcal{N}} \quad \text{and} \quad \omega_2 = \frac{(\mathcal{N} - 1)}{2\mathcal{N}} \end{equation*} in Theorem \ref{thm:minimax_sample_complexity_2outcomePOVM}, we obtain Eq.~\eqref{eqn:minimax_method_pauli_scheme_sample_complexity}. Now, note that for any pure state $\rho$, we have $\tr(\rho^2) = 1$ and this gives us \begin{equation} \sum_{i = 1}^{d^2 - 1} (\tr(W_i \rho))^2 = d - 1. \label{eqn:pure_state_pauli_weights_constraint} \end{equation} Therefore, to obtain an upper bound on $\mathcal{N}$, we solve the following optimization problem. Let $M$ be a positive integer and $\beta > 0$ be any real number such that $M > \beta$. We solve \begin{align*} \max\ &\sum_{i = 1}^M x_i \nonumber \\ \text{s.t. } &x_i \in [0, 1]\quad \forall i = 1, \dotsc, M \nonumber \\ &\sum_{i = 1}^M x_i^2 \leq \beta \end{align*} The optimal value of this problem upper bounds $\mathcal{N}$ in the special case $x_i = |\tr(W_i \rho)|$, $M = d^2 - 1$ and $\beta = d - 1$, while saving the trouble of optimizing over all density matrices. Note that we consider the relaxation $\sum_{i = 1}^M x_i^2 \leq \beta$, instead of the original equality constraint $\sum_{i = 1}^M x_i^2 = \beta$, because quadratic equality constraints do not define a convex set in general. Such a relaxation is inconsequential because equality is attained at the optimum. It can be shown using the KKT conditions that the optimum corresponds to \begin{align*} x_i &= \sqrt{\frac{\beta}{M}}\quad \forall i \in \{1, \dotsc, M\}\notag \\ \implies \sum_{i = 1}^M x_i &= \sqrt{M \beta}. \notag \end{align*} Returning to the problem of Pauli weights, since $M = d^2 - 1$ and $\beta = d - 1$, we obtain $\mathcal{N} \leq \sqrt{(d^2 - 1)(d - 1)} = \sqrt{d + 1} (d - 1)$ as claimed. Substituting in the expression for sample complexity gives the desired upper bound. \end{proof} In contrast to the above result, which shows that a good sample complexity can be obtained for the randomized Pauli measurement scheme, we now consider a bad measurement protocol. Namely, we are given an $n$-qubit stabilizer state and we measure $n - 1$ of its generators, where the measurements are subspace measurements. Then, as noted in Proposition~\ref{prop:minimax_method_stabilizer_insufficient_measurements}, the minimax method gives a risk of $0.5$. Here, we present a proof for this statement. \begin{proof}[Proof of Proposition~\ref{prop:minimax_method_stabilizer_insufficient_measurements}] \label{proof:minimax_method_stabilizer_insufficient_measurements} Let $\rho$ be an $n$-qubit stabilizer state generated by $S_1, \dotsc, S_n$. The measurement protocol corresponds to measuring only the first $n - 1$ generators $S_1, \dotsc, S_{n - 1}$. The measurement of $S_l$ has the POVM $\{E^{(l)}_1, E^{(l)}_2\}$ where $E^{(l)}_1$ is the projection onto the $+1$ eigenspace of $S_l$ while $E^{(l)}_2$ is the projection onto the $-1$ eigenspace of $S_l$, for $l = 1, \dotsc, n - 1$.
Suppose that the $l^{\text{th}}$ measurement is repeated $R_l$ times. From Eq.~\eqref{eqn:JNriskprop3.1}, we know that the risk of the minimax method can be written as \begin{align*} &\widehat{\mathcal{R}}_* = \frac{1}{2} \max_{\chi_1, \chi_2 \in \mathcal{X}} \bigg\{\tr(\rho \chi_1) - \tr(\rho \chi_2)\ \bigg| \nonumber \\ &\hspace{3cm} \prod_{l = 1}^{n - 1} \left[F_C(\chi_1, \chi_2, \{E^{(l)}_k\})\right]^{R_l/2} \geq \frac{\epsilon}{2} \bigg\} \intertext{where} &F_C(\chi_1, \chi_2, \{E^{(l)}_k\}) = \left(\sum_{k = 1}^2 \sqrt{\tr\left(E^{(l)}_k \chi_1\right) \tr\left(E^{(l)}_k \chi_2\right)}\right)^2 \end{align*} is the classical fidelity corresponding to the POVM $\{E^{(l)}_1, E^{(l)}_2\}$ for $l = 1, \dotsc, n - 1$. Our strategy is to construct two density matrices $\chi_1$ and $\chi_2$ that satisfy the constraints of the optimization defining the risk, such that the value of the risk is $0.5$. To that end, let $\tilde{\rho}$ be the stabilizer state generated by $S_1, \dotsc, -S_n$, where the last generator of $\tilde{\rho}$ differs from that of $\rho$ by a negative sign. Note that the states $\rho$ and $\tilde{\rho}$ are orthogonal to each other. Observe that the classical fidelity between the states $\rho$ and $\tilde{\rho}$ corresponding to the POVM $\{E^{(l)}_1, E^{(l)}_2\}$ is $F_C(\rho, \tilde{\rho}, \{E^{(l)}_1, E^{(l)}_2\}) = 1$ for all measured stabilizers since $\tr(E^{(l)}_1 \rho) = \tr(E^{(l)}_1 \tilde{\rho}) = 1$ while $\tr(E^{(l)}_2 \rho) = \tr(E^{(l)}_2 \tilde{\rho}) = 0$ for all $l = 1, \dotsc, n - 1$. Thus, taking $\chi_1 = \rho$ and $\chi_2 = \tilde{\rho}$, we find that the risk is $\widehat{\mathcal{R}}_* = 0.5$, which is the maximum possible value for the risk. In other words, when an insufficient number of stabilizer measurements are provided, the minimax method infers that the fidelity cannot be estimated accurately. \end{proof} \section{Minimax Method: Robustness\label{app:minimax_robustness}} In this section, we give a formal statement and proof of Theorem~\ref{thm:affine_estimator_robustness_informal}. To that end, recall that for any vector $\bm{M}$ of size $m$, its $p$-norm is defined as $\norm{\bm{M}}_p = (\sum_{i = 1}^m |M_i|^p)^{1/p}$ when $1 \leq p < \infty$. For $p = \infty$, $\norm{\bm{M}}_p = \max_{1 \leq i \leq m} |M_i|$ denotes the largest entry of $\bm{M}$ (in absolute value). For any matrix $M$, the symbol $\bm{M}$ denotes the vectorized version of the matrix (which can be obtained, for example, by stacking the rows). Therefore, the norm $\norm{\bm{M}}_p$ is the element-wise $p$-norm of the matrix $M$ (and \textit{not} the Schatten norm). \begin{theorem}[Formal version of Theorem~\ref{thm:affine_estimator_robustness_informal}] \label{thm:affine_estimator_robustness} Suppose that we want to estimate the fidelity with a target state $\rho$ using outcomes from $R_l$ repetitions of the POVM $\{E^{(l)}_1, \dotsc, E^{(l)}_{N_l}\}$, for $l = 1, \dotsc, L$. \begin{enumerate} \item Suppose that the state $\sigma$ prepared experimentally experiences an unknown perturbation $\sigma'_{r, l}(t)$ during the $r^{\text{th}}$ repetition of the $l^{\text{th}}$ measurement setting, with $\norm{\bm{\sigma}'_{r, l}(T_{r, l})}_1 \leq \delta_S$ $\forall r,\ \forall l$. $T_{r, l}$ is the time at which this measurement is performed. Similarly, assume that each POVM element $E^{(l)}_k$ undergoes an unknown perturbation $(E'_r)^{(l)}_k(t)$, with $\max_{l, k} \norm{(\bm{E}'_r)^{(l)}_k(T_{r, l})}_1 \leq \delta_M$ $\forall r$.
Denote the perturbed state as $\widetilde{\sigma}_{r, l}(t) = \sigma + \sigma'_{r, l}(t)$ and the perturbed POVM elements as $(\widetilde{E}_r)^{(l)}_k(t) = E^{(l)}_k + (E'_r)^{(l)}_k(t)$. \item Let $\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)}$ be the relative frequencies that would have been observed were the state and POVM elements fixed \textnormal{(}i.e., the ``noiseless'' case\textnormal{)}, and let $\widetilde{\bm{f}}^{(1)}, \dotsc, \widetilde{\bm{f}}^{(L)}$ be the actual frequencies observed because of the state and POVM elements undergoing perturbation. \item Let $\mathscr{L}(\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)}) = \ip{\bm{\ell}, \bm{f}} + b$ be any affine estimator for fidelity, where $\bm{\ell}$ is a vector, $b$ is a constant, and $\bm{f}$ is the vector obtained by combining all the observed frequency vectors, i.e., $\bm{f} = \begin{pmatrix} \bm{f}^{(1)} & \dotsc & \bm{f}^{(L)} \end{pmatrix}$. \end{enumerate} Then, we have \begin{align} &\left|\mathscr{L}(\widetilde{\bm{f}}^{(1)}, \dotsc, \widetilde{\bm{f}}^{(L)}) - \mathscr{L}(\bm{f}^{(1)}, \dotsc, \bm{f}^{(L)})\right| \leq \delta_{\mathscr{L}} \nonumber \\ \intertext{where} &\delta_{\mathscr{L}} = \norm{\bm{\ell}}_1 \bigg(\max_{l, k} \norm{\bm{E}^{(l)}_{k}}_\infty \delta_S + \norm{\bm{\sigma}}_\infty \delta_M + \delta_M \delta_S \nonumber \\ &\hspace{3cm}+ \max_l \delta^{(l)} + \max_l \widetilde{\delta}^{(l)}\bigg) \label{eqn:minimax_method_robustness_estimate_error} \end{align} and $\delta^{(l)}$ and $\widetilde{\delta}^{(l)}$ denote the error in building the histogram corresponding to the $l^\text{th}$ measurement setting. That is, $\norm{\bm{f}^{(l)} - \bm{p}^{(l)}}_\infty \leq \delta^{(l)}$ and $\norm{\widetilde{\bm{f}}^{(l)} - \langle\widetilde{\bm{p}}^{(l)}(T_{r, l})\rangle_r}_\infty \leq \widetilde{\delta}^{(l)}$, where $\bm{p}^{(l)}_k = \tr(\sigma E^{(l)}_k)$, $\widetilde{\bm{p}}^{(l)}_k = \tr(\widetilde{\sigma}_{r, l} (\widetilde{E}_r)^{(l)}_k)$ for $l = 1, \dotsc, L$. $\ip{\widetilde{\bm{p}}^{(l)}}_r$ denotes the ensemble average over $R_l$ repetitions. \end{theorem} \begin{proof} Consider the difference \begin{equation*} \Delta \bm{f} = \widetilde{\bm{f}} - \bm{f} \end{equation*} where $\bm{f} = \begin{pmatrix} \bm{f}^{(1)}, & \dotsc, & \bm{f}^{(L)} \end{pmatrix}$ are the frequencies (for all measurement settings combined) assuming that the state and measurements are fixed, and similarly, $\widetilde{\bm{f}} = \begin{pmatrix} \widetilde{\bm{f}}^{(1)}, & \dotsc, & \widetilde{\bm{f}}^{(L)} \end{pmatrix}$ are the observed frequencies that account for perturbations. Then, using H\"older's inequality, we can write \begin{align} \big|\mathscr{L}(\widetilde{\bm{f}}^{(1)}, \dotsc, \widetilde{\bm{f}}^{(L)}) - \mathscr{L}(\bm{f}^{(1)}, &\dotsc, \bm{f}^{(L)})\big| = |\ip{\bm{\ell}, \Delta\bm{f}}| \notag \\ &\leq \norm{\bm{\ell}}_1 \norm{\Delta\bm{f}}_\infty. \label{eqn:minimax_robustness_estimate_error_intermediate} \end{align} Suppose that $\norm{\Delta\bm{f}}_\infty = \max_{l, k} |\widetilde{f}^{(l)}_k - f^{(l)}_k|$ is achieved at $(l^*, k^*)$, i.e., $\norm{\Delta\bm{f}}_\infty = |\widetilde{f}^{(l^*)}_{k^*} - f^{(l^*)}_{k^*}|$.
Now, we insert the Born probabilities appropriately to obtain \begin{align} \norm{\Delta\bm{f}}_\infty &= |(\langle\widetilde{p}^{(l^*)}_{k^*}(T_{r, l^*})\rangle - p^{(l^*)}_{k^*}) + (\widetilde{f}^{(l^*)}_{k^*} - \langle\widetilde{p}^{(l^*)}_{k^*}(T_{r, l^*})\rangle) \nonumber \\ &\hspace{2cm} + (p^{(l^*)}_{k^*} - f^{(l^*)}_{k^*})| \nonumber \\ &\leq |\langle\widetilde{p}^{(l^*)}_{k^*}(T_{r, l^*})\rangle - p^{(l^*)}_{k^*}| + |\widetilde{f}^{(l^*)}_{k^*} - \langle\widetilde{p}^{(l^*)}_{k^*}(T_{r, l^*})\rangle| \nonumber \\ &\hspace{2cm} + |f^{(l^*)}_{k^*} - p^{(l^*)}_{k^*}| \nonumber \\ &\leq |\langle\widetilde{p}^{(l^*)}_{k^*}(T_{r, l^*})\rangle - p^{(l^*)}_{k^*}| + \delta^{(l^*)} + \widetilde{\delta}^{(l^*)}. \label{eqn:minimax_robustness_deltaf_intermediate} \end{align} Now, we expand $\langle\widetilde{p}^{(l)}_{k}(T_{r, l})\rangle$ in terms of the perturbation to obtain \begin{align*} &|\langle\widetilde{p}^{(l)}_k(T_{r, l})\rangle - p^{(l)}_k| \nonumber \\ &= \left|\frac{1}{R_l} \sum_{r = 1}^{R_l} \tr(\widetilde{\sigma}_{r, l}(T_{r, l})\ (\widetilde{E}_r)^{(l)}_k(T_{r, l})) - p^{(l)}_k\right| \\ &\leq \frac{1}{R_l} \sum_{r = 1}^{R_l} \bigg(\left|\tr(\sigma\ (E_r')^{(l)}_k(T_{r, l}))\right| + \left|\tr(\sigma'_{r, l}(T_{r, l})\ E^{(l)}_k)\right| \nonumber \\ &\hspace{2cm}+ \left|\tr(\sigma'_{r, l}(T_{r, l})\ (E_r')^{(l)}_k(T_{r, l}))\right|\bigg) \end{align*} where we use the triangle inequality in the last step. Note that for Hermitian matrices $P$, $Q$, we can write $\tr(P Q) = \ip{\bm{P}, \bm{Q}}$, where $\bm{P}$ denotes the vectorized form of $P$. Then, by H\"older's inequality, we have $|\ip{\bm{P}, \bm{Q}}| \leq \norm{\bm{P}}_1 \norm{\bm{Q}}_\infty$. Similarly, we can write $|\ip{\bm{P}, \bm{Q}}| \leq \norm{\bm{P}}_2 \norm{\bm{Q}}_2 \leq \norm{\bm{P}}_1 \norm{\bm{Q}}_1$ because $\norm{\bm{P}}_2 \leq \norm{\bm{P}}_1$ for any vector $\bm{P}$. Using this, we obtain \begin{align*} &|\langle\widetilde{p}^{(l)}_k(T_{r, l})\rangle - p^{(l)}_k| \nonumber \\ &\leq \frac{1}{R_l} \sum_{r = 1}^{R_l} \bigg(\norm{\bm{\sigma}}_\infty \norm{(\bm{E}_r')^{(l)}_k(T_{r, l})}_1 + \norm{\bm{E}^{(l)}_k}_\infty \norm{\bm{\sigma}'_{r, l}(T_{r, l})}_1 \nonumber \\ &\hspace{2cm}+ \norm{\bm{\sigma}'_{r, l}(T_{r, l})}_1 \norm{(\bm{E}_r')^{(l)}_k(T_{r, l})}_1\bigg) \notag \\ &\leq \norm{\bm{\sigma}}_\infty \delta_M + \norm{\bm{E}^{(l)}_k}_\infty \delta_S + \delta_S \delta_M. \end{align*} Substituting this in Eq.~\eqref{eqn:minimax_robustness_deltaf_intermediate}, we get \begin{align*} \norm{\Delta\bm{f}}_\infty &\leq \norm{\bm{\sigma}}_\infty \delta_M + \norm{\bm{E}^{(l^*)}_{k^*}}_\infty \delta_S + \delta_S \delta_M + \delta^{(l^*)} + \widetilde{\delta}^{(l^*)} \notag \\ &\leq \max_{l, k} \norm{\bm{E}^{(l)}_k}_\infty \delta_S + \norm{\bm{\sigma}}_\infty \delta_M + \delta_S \delta_M \nonumber \\ &\hspace{1.5cm}+ \max_l \delta^{(l)} + \max_l \widetilde{\delta}^{(l)} \end{align*} Substituting this in Eq.~\eqref{eqn:minimax_robustness_estimate_error_intermediate}, we get the desired result. \end{proof} As noted before, $\lVert\bm{E}^{(l)}_{k}\rVert_\infty$ and $\lVert\bm{\sigma}\rVert_\infty$ refer to the largest entry (in terms of absolute value) in $E^{(l)}_k$ and $\sigma$, respectively. In many situations, even in large dimensions, these quantities can be expected to be of the order of $1$. The price paid, however, is that perturbations have to be smaller in size with increasing dimension. Also, we expect the histogram errors $\delta^{(l)}$ and $\widetilde{\delta}^{(l)}$ to decrease with an increasing number of repetitions.
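To illustrate the first of these points, note that $\norm{\bm{\sigma}}_\infty \leq 1$ holds for every density matrix $\sigma$, with equality for computational-basis states, and that a projective measurement in the computational basis has $\norm{\bm{E}^{(l)}_k}_\infty = 1$ for each of its elements.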
Note that these restrictions are required for the above theoretical guarantee in terms of $\delta_{\mathscr{L}}$ to be useful. In practice, robustness holds quite generally, as long as the observed frequencies remain close to the unperturbed ones, as evident from Eq.~\eqref{eqn:minimax_robustness_estimate_error_intermediate}. Note that the norm $\norm{\bm{\ell}}_1$ of the affine estimator appearing in Eq.~\eqref{eqn:minimax_method_robustness_estimate_error} for the case of the minimax estimator is $\norm{C_{\bm{a}} \bm{R}}_1$ (see Corollary~\ref{corr:affine_estimator_robustness}). This norm can be written as $\norm{C_{\bm{a}} \bm{R}}_1 = \sum_{l = 1}^L R_l \sum_{k = 1}^{N_l} |a^{(l)}_k|$, which depends solely on the estimator. We argue that $\norm{C_{\bm{a}}\bm{R}}_1$ is not likely to increase very much as the number of repetitions increases. Recall that by the definition of the risk and from Eq.~\eqref{eqn:JN_estimator_affine_matrix}, we have \begin{align*} &\left|\ip{C_{\bm{a}}\bm{R}, \bm{f}} + c - \tr(\rho \sigma)\right| < \widehat{\mathcal{R}}_* \\ \implies &\left|\ip{C_{\bm{a}}\bm{R}, \bm{f}}\right| < \tr(\rho \sigma) + c + \widehat{\mathcal{R}}_* < 2 \end{align*} with probability greater than $1 - \epsilon$. In the second inequality, we use the fact that $0 \leq c \leq 1$ and $0 \leq c + \widehat{\mathcal{R}}_* \leq 1$ because $c = (\tr(\rho \chi_1^*) + \tr(\rho \chi_2^*))/2$ (Eq.~\eqref{eqn:JN_estimator_constant}) and $\widehat{\mathcal{R}}_* = (\tr(\rho \chi_1^*) - \tr(\rho \chi_2^*))/2$ (Eq.~\eqref{eqn:JNriskprop3.1}). Thus, we expect that increasing the number of repetitions proportionately decreases the values of the entries in $C_{\bm{a}}$. This is only a heuristic argument because entries of $C_{\bm{a}}$ can be negative, but we see in practice that the absolute values of the entries in $C_{\bm{a}}$ do decrease as the number of repetitions increases. \end{document}
\begin{document} \title{Blow up of conductors} \author[C.~Birghila]{Corina Birghila} \address{C.~Birghila\\ Department of Statistics and Operations Research\\ University of Vienna\\ Oskar-Morgenstern-Platz 1\\ 1090 Vienna\\ Austria} \email{\href{mailto:[email protected]}{[email protected]}} \author[M.~Schulze]{Mathias Schulze} \address{M.~Schulze\\ Department of Mathematics\\ TU Kaiserslautern\\ 67663 Kaiserslautern\\ Germany} \email{\href{mailto:[email protected]}{[email protected]}} \thanks{The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n\textsuperscript{o} PCIG12-GA-2012-334355.} \subjclass[2010]{Primary 14E05; Secondary 14M05} \keywords{blow up, normalization, Gorenstein, canonical module, fractional ideal} \begin{abstract} We generalize results of P.M.H.~Wilson describing situations where the blow up of the conductor ideal of a scheme coincides with the normalization. \end{abstract} \maketitle \section{Introduction}\label{53} Blowup and normalization are fundamental operations in the study of varieties and singularities. While normalization modifies the non-normal locus defined by the conductor ideal, blow up modifies the locus defined by any given ideal. In typical cases the normalization is finite while blow ups are not. It is therefore a special situation when the blow up of the conductor ideal yields the normalization. P.M.H.~Wilson described instances where this happens. He considers irreducible projective varieties over an algebraically closed field and proves the following results (see \cite[Cor.~1.4, Thm.~2.7, Rem.~2.8]{Wil78}). \begin{prp}[Wilson] Let $C$ be a curve with normalization $\widetilde C$ and let $C'$ be the blow up of $C$ in its conductor ideal. Then $C'=\widetilde C$. \end{prp} \begin{thm}[Wilson] The blow up $V'$ of a hypersurface $V$ in its conductor ideal $\mathscr{C}$ is the same as the normalization $\widetilde V$ if and only if the dualizing sheaf $\omega_{\widetilde V}$ is invertible. In particular, if $V$ is a surface, then $V'=\widetilde V$ if and only if $\widetilde V$ is Gorenstein.\qed \end{thm} Ragni Piene generalized the \enquote{if}-part of Wilson's theorem to reduced (not necessarily irreducible) algebraic schemes over an algebraically closed field, replacing the normalization by a finite birational morphism (see \cite[Prop.~(2.9)]{Pie78}). \begin{prp}[Piene]\label{3} Let $f\colon Y\to X$ be a finite, birational morphism between Gorenstein schemes. Then $f$ is isomorphic to the blow up of the conductor of $Y$ in $X$. \end{prp} In this note we further generalize Wilson's results, dropping the base field. We consider Cohen--Macaulay schemes equipped with a canonical (fractional) ideal sheaf. Our main result, Theorem~\ref{47}, generalizes Piene's result and yields \begin{thm}\label{48} Let $X$ be a reduced Gorenstein Nagata scheme with Cohen--Macaulay normalization $\widetilde X\to X$. Denote by $\Bl_{\mathscr{C}_{\widetilde X/X}} X$ the blow up of $X$ in the conductor ideal $\mathscr{C}_{\widetilde X/X}$. Then $\widetilde X=\Bl_{\mathscr{C}_{\widetilde X/X}} X$ if and only if $\widetilde X$ is Gorenstein.\qed \end{thm} The above-mentioned main result involves the blow up of fractional ideals. In preparation, we collect results on sheaves of rational functions and consider morphisms that allow for a pullback of fractional ideals.
To some extent we describe these concepts in relation to associated points of schemes. A slightly different account of this topic is given in \cite[\S7.1]{Liu02}. Although we work with sheaves on locally Noetherian schemes, our results are mostly local in the realm of commutative algebra. The question under consideration also appears in work of Mitsuo Shinagawa \cite{Shi82} that aims at deducing properties of a scheme from its normalization. Under the strong condition of normal flatness (that we do not pursue here) he proves \begin{thm}[M.~Shinagawa] Let $X$ be a reduced Noetherian scheme with finite normalization $\widetilde X$, $Y$ be the closed subscheme defined by the conductor of $X$ in $\widetilde X$, and $X'$ the blow up of $X$ in $Y$. If $X$ is normally flat along $Y$ and $Y$ is of pure codimension $1$ in $X$, then $X'$ is naturally isomorphic to $X$.\qed \end{thm} \subsection*{Acknowledgments} Preliminary results towards the ones presented here were obtained in the first named author's Master's thesis~\cite{Bir14}. \section{Rational functions}\label{49} All rings will be Noetherian commutative rings with unity. For a ring $A$ we denote by $A^\mathrm{reg}$ the set of its regular elements and by \[ Q(A):=(A^\mathrm{reg})^{-1}A \] its total ring of fractions. All schemes will be locally Noetherian and all morphisms quasicompact, that is, locally on the target, morphisms of Noetherian schemes. A property that holds over each affine open set is referred to as an affine local property. Let $X$ be a scheme. Then $x\in X$ is called an \emph{associated point of $X$} if $\mathfrak{m}_{X,x}$ is an associated prime of $\mathcal{O}_{X,x}$. We denote by $\mathscr{A}ss X$ the (locally finite) set of associated points of $X$. For $x\in X$ we set \[ \mathscr{A}ss(X,x):=\mathscr{A}ss(\mathcal{O}_{X,x}). \] Note that $U\cap\mathscr{A}ss X=\mathscr{A}ss U$ for any open $U\subset X$. The following result is well-known; we give a proof. \begin{lem}\label{7} If $X=\Spec A$ is affine, then $\mathscr{A}ss X=\mathscr{A}ss A$. \end{lem} \begin{proof}\ \begin{asparaitem} \item[($\subset$)] Let $\mathfrak{p}=\ideal{p_1,\dots,p_n}\in\mathscr{A}ss X$. This means that $\mathfrak{p} A_\mathfrak{p}=\mathscr{A}nn_{A_\mathfrak{p}}(g/1)$ for some $g/1\in A_\mathfrak{p}$. Then $g\in\mathfrak{p}$ and there are $q_i\not\in\mathfrak{p}$ such that $p_iq_ig=0$ in $A$. It follows that $\mathfrak{p}\subset\mathscr{A}nn_A(qg)$ where $q=q_1\cdots q_n\not\in\mathfrak{p}$. Conversely, let $r\in\mathscr{A}nn_A(qg)$, then $r/1\in\mathscr{A}nn_A(qg)_\mathfrak{p}=\mathscr{A}nn_{A_\mathfrak{p}}(g/1)=\mathfrak{p} A_\mathfrak{p}$ implies $r\in\mathfrak{p}$. Thus, $\mathfrak{p}=\mathscr{A}nn_A(qg)$ which means that $\mathfrak{p}\in\mathscr{A}ss A$. \item[($\supset$)] Let $\mathfrak{p}\in\mathscr{A}ss A$. Then there is an inclusion $A/\mathfrak{p}\hookrightarrow A$ and hence $A_\mathfrak{p}/\mathfrak{p} A_\mathfrak{p}\hookrightarrow A_\mathfrak{p}$ by exactness of localization. This means that $\mathfrak{p}\in\mathscr{A}ss X$.\qedhere \end{asparaitem} \end{proof} For $x,y\in X$, we say that $y$ \emph{specializes to} $x$ (or $x$ \emph{generalizes to} $y$) and write $y\leadsto x$ if $x$ is in the closure of $y$. This makes $X$ and hence $\mathscr{A}ss X$ into a poset by setting $x\ge y$ if and only if $y\leadsto x$. For $X=\Spec(A)$ and $x=\mathfrak{p}$ and $y=\mathfrak{q}$ this is equivalent to $\mathfrak{q}\subset\mathfrak{p}$. \begin{lem}\label{56} Any point of a locally Noetherian scheme specializes to a closed point.
\end{lem} \begin{proof} See \cite[\href{http://stacks.math.columbia.edu/tag/02IL}{Lem.~02IL}]{stacks-project}. \end{proof} We equip $\mathscr{A}ss X\subset X$ with the subspace Zariski topology. By the following result its open sets are exactly the decreasing subsets, that is, the subsets stable under generalization. \begin{lem}\label{41} For each $x\in X$, \begin{equation}\label{6} \mathscr{A}ss(X,x)=\{y\in\mathscr{A}ss X\mid y\leadsto x\}. \end{equation} In particular, $\mathscr{A}ss(X,x)\subset\mathscr{A}ss X$ is open and equals the intersection of $\mathscr{A}ss X$ with all open neighborhoods of $x\in X$. In case $x\in\mathscr{A}ss X$ this means that $\mathscr{A}ss(X,x)$ is the smallest open neighborhood of $x\in\mathscr{A}ss X$. \end{lem} \begin{proof} Replacing $X$ by an affine open neighborhood of $x$ we may assume that $X=\Spec A$ is affine and we write $\mathfrak{p}$ for $x$. In particular, $\mathscr{A}ss(X,x)=\mathscr{A}ss A_\mathfrak{p}$ and $\mathscr{A}ss X=\mathscr{A}ss A$ by Lemma~\ref{7}. \begin{asparaitem} \item[($\subset$)] Let $\mathfrak{q}'\in\mathscr{A}ss A_\mathfrak{p}$ correspond to $\mathfrak{q}\in\Spec A$ with $\mathfrak{q}\subset\mathfrak{p}$. Then there is an inclusion $A_\mathfrak{p}/\mathfrak{q} A_\mathfrak{p}=A_\mathfrak{p}/\mathfrak{q}'\hookrightarrow A_\mathfrak{p}$ and hence $A_\mathfrak{q}/\mathfrak{q} A_\mathfrak{q}\hookrightarrow A_\mathfrak{q}$ by exactness of localization. This means that $\mathfrak{q}\in\mathscr{A}ss A$. \item[($\supset$)] Let $\mathfrak{q}\in\mathscr{A}ss A$ with $\mathfrak{q}\subset \mathfrak{p}$. This means that there is an inclusion $A/\mathfrak{q}\hookrightarrow A$ and hence $A_\mathfrak{p}/\mathfrak{q} A_\mathfrak{p}\subset A_\mathfrak{p}$ by exactness of localization. This means that $\mathfrak{q}'=\mathfrak{q} A_\mathfrak{p}\in\mathscr{A}ss A_\mathfrak{p}$. \end{asparaitem} Let now $\{\mathfrak{q}\in\mathscr{A}ss A\mid\mathfrak{q}\not\subset\mathfrak{p}\}=\{\mathfrak{q}_1,\dots,\mathfrak{q}_n\}$. Pick $f_i\in\mathfrak{q}_i\setminus\mathfrak{p}$ and set $f:=f_1\cdots f_n$. Then $\{\mathfrak{q}\in\mathscr{A}ss A\mid\mathfrak{q}\subset\mathfrak{p}\}=D(f)\cap\mathscr{A}ss A$ is open in $\mathscr{A}ss A$. \end{proof} \begin{dfn}\label{27} The $\mathcal{O}_X$-algebra of \emph{rational functions on $X$} can be defined by \begin{equation}\label{30} \mathscr{Q}_X:=i_*i^{-1}\mathcal{O}_X \end{equation} where $i:\mathscr{A}ss X\to X$ denotes the inclusion. \end{dfn} By \cite{Kle79}, $\mathscr{Q}_X$ is quasicoherent for reduced $X$ (but not in general). Moreover, its stalks and sections over affine open sets can be described as follows (see \cite[(20.2.11.1)]{EGA4}). \begin{lem}\label{4}\pushQED{\qed} Let $X$ be a scheme. \begin{enumerate}[(a)] \item\label{4b} We have $\Gamma(U,\mathscr{Q}_X)=Q(A)$ for any affine open $U=\Spec A\subset X$. \item\label{4a} We have $\mathscr{Q}_{X,x}=Q(\mathcal{O}_{X,x})$ for any $x\in X$, hence $\mathscr{Q}_{X,x}=\mathcal{O}_{X,x}$ if $x\in\mathscr{A}ss X$. \end{enumerate} \end{lem} \begin{proof}\pushQED{\qed} \begin{asparaenum}[(a)] \item Recall that the $D(t)=\{\mathfrak{p}\in\Spec A\mid t\not\in\mathfrak{p}\}$ for $t\in A$ form a basis of the Zariski topology on $U$. By Lemma~\ref{7}, \begin{equation}\label{39} A^\mathrm{reg}=\{t\in A\mid D(t)\supset\mathscr{A}ss U\} \end{equation} The set $S:=A^\mathrm{reg}$ is multiplicatively closed and directed by setting $t\le t'$ if and only if $t\mid t'$. For any $t,t'\in S$ with $t\le t'$, there is a morphism $A_t\to A_{t'}$.
These morphisms form a directed system and, using \eqref{39}, \begin{equation}\label{37} Q(A)=S^{-1}A=\varinjlim_{t\in S}A_t=\varinjlim_{t\in S}\Gamma(D(t),\mathcal{O}_X)=\varinjlim_{D(t)\supset\mathscr{A}ss U}\Gamma(D(t),\mathcal{O}_X). \end{equation} By \eqref{30}, \begin{equation}\label{38} \Gamma(U,\mathscr{Q}_X)=\Gamma(U\cap\mathscr{A}ss X,i^{-1}\mathcal{O}_X)=\varinjlim_{V\supset\mathscr{A}ss U}\Gamma(V,\mathcal{O}_X). \end{equation} Combining \eqref{37} and \eqref{38} yields a natural morphism $Q(A)\to\Gamma(U,\mathscr{Q}_X)$. Conversely, for any open subset $V\supset\mathscr{A}ss U$, prime avoidance yields a $t\in A$ such that $V\supset D(t)\supset\mathscr{A}ss U$. The claim follows. \item We may assume that $X=\Spec A$ is affine. Then, using \eqref{4b}, \eqref{37} and Lemma~\ref{41}, \begin{align*} \mathscr{Q}_{X,x}& =\varinjlim_{x\in D(s)}\Gamma(D(s),\mathscr{Q}_X) =\varinjlim_{x\in D(s)}Q(A_s)\\ &=\varinjlim_{x\in D(s)}\varinjlim_{D(st)\supset\mathscr{A}ss(D(s))}A_{st} =\varinjlim_{D(s)\supset\mathscr{A}ss(X,x)}A_s =Q(\mathcal{O}_{X,x}).\qedhere \end{align*} \end{asparaenum} \end{proof} In particular, it follows from Lemma~\ref{4}.\eqref{4a} that \[ \mathcal{O}_X\hookrightarrow\mathscr{Q}_X. \] We shall describe sections of $\mathscr{Q}_X$, and more generally of $\mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X$ for coherent $\mathscr{M}$, over arbitrary open sets. We abbreviate \[ X':=\mathscr{A}ss X,\quad X'_x:=\mathscr{A}ss(X,x). \] Lemma~\ref{41} shows that \begin{equation}\label{28} (i_*\mathscr{F})_x=\mathscr{F}(X'_x) \end{equation} for any sheaf $\mathscr{F}$ on $X'$ and any $x\in X$. In case $x=x'\in X'$, the latter becomes \begin{equation}\label{35} \mathscr{F}(X'_{x'})=\mathscr{F}_{x'}. \end{equation} \begin{lem}\label{9}\ \begin{asparaenum}[(a)] \item\label{9a} Let $\mathscr{M}$ be a coherent $\mathcal{O}_X$-module. Then \begin{equation}\label{36} \mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X=i_*i^{-1}\mathscr{M}=\left(U\mapsto\varprojlim_{x\in X'\cap U}\mathscr{M}_x\right). \end{equation} Note that $\varprojlim_{X'\cap U}=\prod_{X'\cap U}$ if $X$ has no embedded points. \item\label{9c} For any affine open $U=\Spec A\subset X$ and $M:=\Gamma(U,\mathscr{M})$, \begin{equation}\label{50} \Gamma(U,\mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X)=M\otimes_AQ(A). \end{equation} \item\label{9b} We have $\mathscr{M}_x\otimes_{\mathcal{O}_{X,x}}\mathscr{Q}_{X,x}=\varprojlim_{x'\in X'_x}\mathscr{M}_{x'}$ for any $x\in X$. \end{asparaenum} \end{lem} \begin{proof} Setting $\mathscr{M}'(V):=\varprojlim_{x'\in V}\mathscr{M}_{x'}$ for any open $V\subset X'$ defines a sheaf on $X'$ (see \cite[\S4.2.2]{Cur14}). For any open $U\subset X$, \begin{equation}\label{29} i_*\mathscr{M}'(U)=\varprojlim_{x\in X'\cap U}\mathscr{M}_x. \end{equation} We may therefore read the right-hand sheaf in \eqref{36} as $i_*\mathscr{M}'$ and the right-hand side of \eqref{9b} as $\mathscr{M}'(X'_x)$. Now \eqref{28} reduces \eqref{9b} to proving \eqref{9a}. By \eqref{30} and \eqref{29}, we settle \eqref{9a} in case $\mathscr{M}=\mathcal{O}_X$, by proving that \begin{equation}\label{31} i^{-1}\mathscr{M}=\mathscr{M}'. \end{equation} There is a natural morphism of sheaves $\mathscr{M}\to i_*\mathscr{M}'$. Since $i^{-1}$ is left-adjoint to $i_*$, this gives rise to a morphism $i^{-1}\mathscr{M}\to\mathscr{M}'$.
That it is an isomorphism can be checked stalk-wise at any $x'\in X'$ using \eqref{35}: \begin{equation}\label{40} (i^{-1}\mathscr{M})_{x'}=\mathscr{M}_{x'}=\varprojlim_{X'\ni y'\leadsto x'}\mathscr{M}_{y'}=\mathscr{M}'(X'_{x'})=\mathscr{M}'_{x'}. \end{equation} Since $i^{-1}$ is left-adjoint to $i_*$, the identity morphism of $i^{-1}\mathscr{M}$ induces a natural morphism $\mathscr{M}\to i_*i^{-1}\mathscr{M}$. Using \eqref{30} and \eqref{31}, this yields a natural morphism of sheaves \[ \mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X\to i_*\mathscr{M}'. \] To establish both \eqref{9a} and \eqref{9c} we show that this induces an isomorphism of global sections over any affine open using the presheaf tensor product. To this end, we assume that $X=\Spec A$ and set $M:=\Gamma(X,\mathscr{M})$. Then, using Lemma~\ref{7} and the claim in case $\mathscr{M}=\mathcal{O}_X$, it suffices to show that \begin{equation}\label{32} M\otimes_A\varprojlim_{\mathfrak{p}\in\mathscr{A}ss A}A_\mathfrak{p}\cong\varprojlim_{\mathfrak{p}\in\mathscr{A}ss A}M_\mathfrak{p}. \end{equation} By $\mathcal{O}_X$-coherence of $\mathscr{M}$, $M$ is a finitely presented $A$-module. Since $A$ is Noetherian, $\mathscr{A}ss A$ is finite and hence the Mittag--Leffler condition is trivially satisfied. Therefore, the inverse limit commutes with tensor product and \eqref{32} holds true. \end{proof} \section{Rank of coherent modules} Recall that an $A$-module $M$ has rank $\rk M=\rk_A M=r$ if $M\otimes_AQ(A)\cong Q(A)^r$ (see \cite[Def.~1.4.2]{BH93}). In case $M$ is finite, this is equivalent to $M_\mathfrak{p}\cong A_\mathfrak{p}^r$ for all $\mathfrak{p}\in\mathscr{A}ss A$ (see \cite[Prop.~1.4.3]{BH93}). \begin{dfn}\label{51} Let $\mathscr{M}$ be a coherent $\mathcal{O}_X$-module. We say that $\mathscr{M}$ has \emph{global rank} $\rk\mathscr{M}=\rk_X\mathscr{M}=r$ if \begin{equation}\label{24} \mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X\cong\mathscr{Q}_X^r. \end{equation} We say that $\mathscr{M}$ has \emph{local rank} $\rk\mathscr{M}=\rk_X\mathscr{M}=r$ if $\mathscr{M}_{x'}\cong\mathcal{O}_{X,x'}^r$ for all $x'\in X'$. In case $\mathscr{M}\hookrightarrow\mathscr{Q}_X^r$, we say that $\mathscr{M}$ has a \emph{rank} if it has a local, or equivalently global, rank (see Lemma~\ref{23} below). \end{dfn} The following easy result applies to $A=Q(A)$. \begin{lem}\label{26} Let $A$ be a ring in which all regular elements are units. Then any inclusion of free modules of equal finite rank is an equality.\qed \end{lem} \begin{lem}\label{23} Let $\mathscr{M}$ be a coherent $\mathcal{O}_X$-module. \begin{enumerate}[(a)] \item\label{23c} If $\mathscr{M}$ has a global rank, then $\mathscr{M}$ has the same local rank. \item\label{23d} $\mathscr{M}$ has a local rank $\rk\mathscr{M}=r$ if and only if $\mathscr{M}_{X,x}$ has rank $\rk\mathscr{M}_{X,x}=r$ for all $x\in X$. \item\label{23a} If $X=\Spec A$ is affine and $\mathscr{M}=\widetilde M$, then the following are equivalent. \begin{enumerate}[(1)] \item\label{23a2} $\mathscr{M}$ has global rank $\rk\mathscr{M}=r$. \item\label{23a0} $\mathscr{M}$ has local rank $\rk\mathscr{M}=r$. \item\label{23a1} $M$ has rank $\rk M=r$. \end{enumerate} \item\label{23b} If $\mathscr{M}\hookrightarrow\mathscr{Q}_X^r$, then the following are equivalent. \begin{enumerate}[(1)] \item\label{23b0} $\mathscr{M}$ has local rank $\rk\mathscr{M}=r$. \item\label{23b1} $\mathscr{M}$ has local rank $\rk\mathscr{M}\ge r$.
\item\label{23b2} The induced morphism $\mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X\to\mathscr{Q}_X^r$ is an isomorphism. \item\label{23b3} For any affine open $U=\Spec A\subset X$ and $M:=\Gamma(U,\mathscr{M})$, the induced morphism $M\otimes_AQ(A)\to Q(A)^r$ is an isomorphism. \end{enumerate} In particular, a local rank is global in this case. \end{enumerate} \end{lem} \begin{proof}\pushQED{\qed} By coherence of $\mathscr{M}$, $M$ is finite. \begin{asparaenum}[(a)] \item Taking stalks at $x\in X$ in \eqref{24} this follows from Lemma~\ref{4}.\eqref{4a}. \item This follows from Lemma~\ref{41}. \item By \eqref{23c}, \eqref{23a2} implies \eqref{23a0}. By Lemma~\ref{7}, points $x\in X'$ correspond one-to-one to prime ideals $\mathfrak{p}\in\mathscr{A}ss A$. Then $\mathscr{M}_x=M_\mathfrak{p}$ and $\mathcal{O}_{X,x}=A_\mathfrak{p}$ and \eqref{23a0} implies \eqref{23a1} by \cite[Prop.~1.4.3]{BH93}. Assuming the latter, loc.~cit.~gives a short exact sequence of $A$-modules \[ 0\to F\to M\to T\to 0 \] where $F$ is free of rank $r$ and $T$ is torsion. These properties are preserved under localization. By Lemmas~\ref{4}.\eqref{4a} and \ref{26}, applying $\widetilde{-}\otimes_{\mathcal{O}_X}\mathscr{Q}_X$ turns it into an isomorphism $\mathscr{Q}_X^r\cong\mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X$. Thus, \eqref{23a1} implies \eqref{23a2}. \item By \eqref{23a}, \eqref{23b3} implies \eqref{23b0}, which trivially implies \eqref{23b1}. By Lemma~\ref{9}.\eqref{9a}, the morphism in \eqref{23b2} reads \[ \mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X=i_*i^{-1}\mathscr{M}\to i_*i^{-1}\mathcal{O}_X^r=\mathscr{Q}_X^r \] and is induced by $i^{-1}\mathscr{M}\to i^{-1}\mathcal{O}_X^r$. Its stalk at $x'\in X'$ is the inclusion $\mathscr{M}_{x'}\hookrightarrow\mathcal{O}_{X,x'}^r$ from the hypothesis. If \eqref{23b1} holds it must be an equality by Lemma~\ref{26} and \eqref{23b2} follows. For $U$ and $M$ as in \eqref{23b3}, by Lemma~\ref{4}.\eqref{4b} and injectivity of sheafification on sections, \eqref{23b2} yields an inclusion $M\otimes_AQ(A)\hookrightarrow\Gamma(U,\mathscr{M}\otimes_{\mathcal{O}_X}\mathscr{Q}_X)=Q(A)^r$. For it to be an isomorphism it suffices to show that $\rk M=r$ by Lemma~\ref{26}. By \cite[Prop.~1.4.3]{BH93} and Lemma~\ref{7}, this is equivalent to $\mathscr{M}_{x'}\cong\mathcal{O}_{X,x'}^r$ for all $x'\in U\cap X'$. This follows from \eqref{23b2} due to Lemma~\ref{4}.\eqref{4a}. Alternatively one could use that $\mathscr{M}_U\otimes_{\mathcal{O}_U}\mathscr{Q}_U$ is the presheaf tensor product as observed in the proof of Lemma~\ref{9}. \qedhere \end{asparaenum} \end{proof} \section{Fractional morphisms} \begin{dfn}\label{2} Let $\mathscr{I}$ be a coherent $\mathcal{O}_X$-submodule of $\mathscr{Q}_X$ and let $f:Y\to X$ be a morphism of schemes. \begin{enumerate}[(i)] \item\label{2a} We call $\mathscr{I}$ a \emph{fractional ideal on $X$} if it has a rank. \item\label{2b} We call $f$ a \emph{fractional morphism} if it induces a morphism \begin{equation}\label{44} \xymat{ \mathcal{O}_Y\ar@{^(->}[r] & \mathscr{Q}_Y\\ f^{-1}\mathcal{O}_X\ar@{^(->}[r]\ar[u]^-{f^\#} & f^{-1}\mathscr{Q}_X\ar@{-->}[u] } \end{equation} We call it \emph{bifractional} if this morphism induces an isomorphism $\mathscr{Q}_X\cong f_*\mathscr{Q}_Y$. \item\label{2c} For a fractional $f$, the \emph{inverse image of $\mathscr{I}$ under $f$} is the $\mathcal{O}_Y$-submodule \[ \mathcal{O}_Y\mathscr{I}:=\mathcal{O}_Yf^\#(f^{-1}\mathscr{I})\subset\mathscr{Q}_Y.
\] \end{enumerate} \end{dfn} Due to Lemma~\ref{23}, fractionality of ideals is an affine local property. \begin{lem}\label{25} Let $\mathscr{I}\subset\mathscr{Q}_X$ be a quasicoherent $\mathcal{O}_X$-submodule. Then $\mathscr{I}$ is a fractional ideal on $X$ if and only if $I=\Gamma(U,\mathscr{I})$ is a fractional ideal of $A$ for each affine open $U=\Spec A\subset X$.\qed \end{lem} \begin{rmk}\label{11} Let $\mathscr{I}\subset\mathscr{Q}_X$ be a quasicoherent $\mathcal{O}_X$-submodule. \begin{enumerate}[(a)] \item\label{11b} By left exactness of the section functor and Lemma~\ref{4}.\eqref{4b}, $\mathscr{I}$ is coherent if and only if (affine) locally $\alpha\mathscr{I}\subset\mathcal{O}_X$ for some regular $\alpha\in\mathcal{O}_X$. \item\label{11c} By Lemma~\ref{23}.\eqref{23b}, any fractional ideal $\mathscr{I}\ne0$ has (global) rank $\rk\mathscr{I}=1$ which means that (affine) locally $\alpha\mathcal{O}_X\subset\mathscr{I}$ for some regular $\alpha\in\mathcal{O}_X$. \item\label{11a} By Lemma~\ref{4}.\eqref{4b}, $f:Y\to X$ is a fractional morphism if and only if for any restriction $\Spec B\to\Spec A$ of $f$ to affine open subsets, $B$ is a torsion free $A$-module. \item\label{11d} Setting $\mathscr{I}=\mathcal{O}_X$ in Definition~\ref{2}.\eqref{2c}, $\mathcal{O}_Y\mathcal{O}_X=f^*\mathcal{O}_X=\mathcal{O}_Y$. \item\label{11e} If $\alpha$ is as in \eqref{11b} or \eqref{11c} for $\mathscr{I}$, then $f^\#(\alpha)$ has the corresponding property for $\mathcal{O}_Y\mathscr{I}$. In particular, $\mathcal{O}_Y\mathscr{I}$ inherits coherence and fractionality from $\mathscr{I}$. \end{enumerate} \end{rmk} \section{Associating morphisms} The notion of a fractional morphism in Definition~\ref{2}.\eqref{2b} can be expressed in terms of associated points. To this end we consider a variation of the notions of dominant and birational morphism (see \cite[\href{http://stacks.math.columbia.edu/tag/01RJ}{Def.~01RJ}, \href{http://stacks.math.columbia.edu/tag/01RO}{Def.~01RO}]{stacks-project} for comparison). \begin{dfn}\label{8} Let $f:Y\to X$ be a quasicompact morphism of locally Noetherian schemes. \begin{enumerate}[(i)] \item\label{8a} We call $f$ \emph{associating} if it induces a map $\mathscr{A}ss Y\to\mathscr{A}ss X$. \item\label{8b} We call $f$ \emph{biassociating} if it induces a bijection $\mathscr{A}ss Y\to\mathscr{A}ss X$ and an isomorphism $f^\#_{\widetilde y}:\mathcal{O}_{X,f(\widetilde y)}\to\mathcal{O}_{Y,\widetilde y}$ for any maximal/closed $\widetilde y\in\mathscr{A}ss Y$. \end{enumerate} \end{dfn} \begin{lem}\label{5}\ \begin{enumerate}[(a)] \item\label{5a} If $f:Y\to X$ is associating, then it is fractional. \item\label{5b} The converse statement of \eqref{5a} holds if $X$ has no embedded points. \item\label{5c} With $X$ also $Y$ has no embedded points if $f:\mathscr{A}ss Y\to\mathscr{A}ss X$ is a bijection. \item\label{5d} If $f:Y\to X$ is biassociating, then it is bifractional and induces a homeomorphism $\mathscr{A}ss Y\to\mathscr{A}ss X$. \end{enumerate} \end{lem} \begin{proof}\pushQED{\qed}\ \begin{asparaenum}[(a)] \item Suppose that $f:Y\to X$ is associating. Then there is a commutative diagram \begin{equation}\label{19} \xymat{ Y'=\mathscr{A}ss Y\ar[d]_g\ar@{^(->}[r]_-j & Y\ar[d]^f \\ X'=\mathscr{A}ss X\ar@{^(->}[r]_-i & X } \end{equation} where $g$ is a map of posets. Applying $j^{-1}$ to $f^\#:f^{-1}\mathcal{O}_X\to\mathcal{O}_Y$, this yields a morphism \[ j^{-1}f^\#:g^{-1}i^{-1}\mathcal{O}_X=j^{-1}f^{-1}\mathcal{O}_X\to j^{-1}\mathcal{O}_Y.
\] Applying $j_*$ and composing with the natural transformation $f^{-1}i_*\to j_*g^{-1}$ applied to $i^{-1}\mathcal{O}_X$ yields the desired morphism \[ \xymat{ f^{-1}\mathscr{Q}_X=f^{-1}i_*i^{-1}\mathcal{O}_X\ar[r] & j_*g^{-1}i^{-1}\mathcal{O}_X\ar[r]^-{j_*j^{-1}f^\#} & j_*j^{-1}\mathcal{O}_Y=\mathscr{Q}_Y. } \] To see the above natural transformation, let $\mathscr{F}$ be a sheaf on $X'$ and $U\subset Y$ open. Since sheafification is left adjoint to the forgetful functor from sheaves to presheaves, we may use the presheaf inverse image. Then \[ (f^{-1}i_*\mathscr{F})(U)=\varinjlim_{f(U)\subset V}\mathscr{F}(i^{-1}(V)),\quad (j_*g^{-1}\mathscr{F})(U)=\varinjlim_{g(j^{-1}(U))\subset W}\mathscr{F}(W). \] By the commutative diagram \eqref{19}, $i^{-1}(V)$ is a valid choice for $W$. Indeed, $f(U)\subset V$ means $U\subset f^{-1}(V)$. Applying $j^{-1}$ this gives $j^{-1}(U)\subset j^{-1}f^{-1}(V)=g^{-1}i^{-1}(V)$, and finally $g(j^{-1}(U))\subset i^{-1}(V)$ as claimed. \item Suppose conversely that $f$ induces a morphism $f^{-1}\mathscr{Q}_X\to\mathscr{Q}_Y$. Let $y\in\mathscr{A}ss Y$. By Lemma~\ref{7}, we may assume that $X=\Spec A$ and $Y=\Spec B$ with $y=\mathfrak{q}\in\mathscr{A}ss B$. By Lemma~\ref{4}.\eqref{4b}, the hypothesis then becomes that $f^\#$ induces a morphism $Q(A)\to Q(B)$. Assume that $x=f(y)\not\in\mathscr{A}ss X$. Then $x=\mathfrak{p}=(f^\#)^{-1}(\mathfrak{q})\not\in\mathscr{A}ss A$ and hence $\mathfrak{p}\not\subset\mathfrak{p}'$ for any $\mathfrak{p}'\in\mathscr{A}ss A$ as $X$ has no embedded points by hypothesis. Then prime avoidance yields an $\alpha\in\mathfrak{p}\setminus\bigcup\mathscr{A}ss A$. This means that $\alpha\in A$ is regular while $f^\#(\alpha)\in\mathfrak{q}\in\mathscr{A}ss B$ is not, a contradiction. It follows that $x\in\mathscr{A}ss X$ and hence that $f$ is associating. \item Since $f$ is continuous it is order preserving and the claim follows. \item Let $y\in Y'$ and $x=g(y)\in X'$. Pick $\bar x\in X'$ maximal such that $x\le\bar x$. By the first part of Definition~\ref{8}.\eqref{8b} there is a unique $\bar y\in Y'$ with $g(\bar y)=\bar x$, which is then maximal by continuity of $g$. By its second part, $g^\#_{\bar y}:\mathcal{O}_{X,\bar x}\to\mathcal{O}_{Y,\bar y}$ is an isomorphism. Due to \eqref{6} $y\le\bar y$ and $g^{-1}(X'_x)=Y'_y$. In particular, $g$ is a homeomorphism. Localizing $g^\#_{\bar y}$ at the prime of $\mathcal{O}_{X,\bar x}$ corresponding to $x$ via \eqref{6}, \eqref{40} yields an isomorphism \[ g^\#_y:\mathcal{O}'_X(X'_x)=\mathcal{O}_{X,x}\to\mathcal{O}_{Y,y}=\mathcal{O}'_Y(Y'_y)=(g_*\mathcal{O}'_Y)(X'_x). \] Since the $X'_x$ form a basis of the topology of $X'$, it follows that $g^\#:\mathcal{O}'_X\to g_*\mathcal{O}'_Y$ is an isomorphism. By \eqref{30}, \eqref{31} and commutativity of \eqref{19}, applying $i_*$ yields the desired isomorphism \[ \xymat{ f^\#:\mathscr{Q}_X=i_*\mathcal{O}'_X\ar[r]^-{i_*g^\#}_-\cong & i_*g_*\mathcal{O}'_Y=f_*j_*\mathcal{O}'_Y=f_*\mathscr{Q}_Y. }\qedhere \] \end{asparaenum} \end{proof} \section{Blowup of fractional ideals} \begin{dfn} Let $X$ be a scheme and let $\mathscr{I}$ be a coherent $\mathcal{O}_X$-submodule of $\mathscr{Q}_X$. Then the \emph{blow up of $X$ along $\mathscr{I}$} is defined as \begin{equation}\label{1} \xymat{ Z:=\Bl_\mathscr{I} X:=\Proj_X\bigoplus_{i\ge0}\mathscr{I}^i\ar[r]^-{b} & X. } \end{equation} \end{dfn} \begin{lem}\label{10} The blow up morphism $b$ in \eqref{1} is fractional.
\end{lem} \begin{proof} Using Remark~\ref{11}.\eqref{11a}, we may assume that $X=\Spec A$, $\mathscr{I}=\widetilde I$ for some $I\subset Q(A)$, and replace $Z$ by an affine open $V=\Spec B$ where $B=(\bigoplus_{i\ge0}I^i)_{(g)}$ for some $g\in I^i$ with $i\ge 1$. We have to show that $B$ is a torsion free $A$-module. Any $a\in A^\mathrm{reg}$ is a unit in $Q(A)$ and hence regular on $\bigoplus_{i\ge0}I^i$ since $I^i\subset Q(A)$. Since localization and projection to a direct summand are exact operations, $a$ is also regular on $B$. \end{proof} \begin{rmk}\label{45}\ \begin{asparaenum}[(a)] \item\label{45a} Denote by $\mathscr{A}:=\bigoplus_{i\ge0}\mathscr{I}^i$ the graded $\mathcal{O}_X$-algebra defining $\Bl_\mathscr{I} X$. The inverse image $\mathcal{O}_Z\mathscr{I}=\mathcal{O}_Z(1)$ is the sheaf associated to the graded $\mathscr{A}$-module \[ \mathscr{I}\bigoplus_{i\ge0}\mathscr{I}^i=\bigoplus_{i\ge0}\mathscr{I}^{i+1}. \] It is invertible since $\mathscr{A}$ is generated by $\mathscr{A}_1=\mathscr{I}$. In case $\mathscr{I}$ is invertible, $\Bl_\mathscr{I} X=X$. \item\label{45b} By Remark~\ref{11}.\eqref{11b}, there exists locally a unit $\alpha$ in $\mathscr{Q}_X$ such that $\alpha\mathscr{I}$ is an $\mathcal{O}_X$-ideal. Then $Z=\Bl_\mathscr{I} X$ is locally isomorphic to the blow up $Z':=\Bl_{\alpha\mathscr{I}}X$ of $X$ along $\alpha\mathscr{I}$ since multiplication by $\alpha^i$ in degree $i$ induces a graded isomorphism of $\mathcal{O}_X$-algebras \[ \xymat{ \bigoplus_{i\ge0}\mathscr{I}^i\ar[r]_-\cong^-{\alpha^\bullet\cdot} & \bigoplus_{i\ge0}\alpha^i\mathscr{I}^i. } \] Over it, there is a graded isomorphism \[ \xymat{ \bigoplus_{i\ge0}\mathscr{I}^{i+1}\ar[r]_-\cong^-{\alpha^\bullet\cdot} & \frac1\alpha\bigoplus_{i\ge0}(\alpha\mathscr{I})^{i+1} } \] which shows that $\mathcal{O}_Z(1)$ is isomorphic to $\mathcal{O}_{Z'}(1)$ via the local isomorphism $Z\cong Z'$. \end{asparaenum} \end{rmk} The universal property of blow up \cite[Def.~IV-16]{EH00} extends to blow ups of fractional ideals as follows. This was remarked in \cite[Prop.~2.1]{OZ91} under slightly stronger hypotheses. \begin{prp}\label{17} Let $X$ be a scheme and let $\mathscr{I}$ be a coherent $\mathcal{O}_X$-submodule of $\mathscr{Q}_X$. In the full subcategory of fractional morphisms $f:Y\to X$ such that $\mathcal{O}_Y\mathscr{I}$ is invertible, the blow up \eqref{1} is a terminal object. \end{prp} \begin{proof} Let $f:Y\to X$ be a morphism as in the statement. Set $\mathcal{L}:=\mathcal{O}_Y\mathscr{I}$ such that $f^*\mathscr{I}\twoheadrightarrow\mathcal{L}$. Then \[ f^*\bigoplus_{i\ge0}\mathscr{I}^i\to\bigoplus_{i\ge0}\mathcal{O}_Y\mathscr{I}^i=\bigoplus_{i\ge0}\mathcal{L}^{\otimes i} \] is a morphism of graded $\mathcal{O}_Y$-algebras. By the universal property of $\Proj_X$ (see \cite[\href{http://stacks.math.columbia.edu/tag/01O4}{Lem.~01O4}]{stacks-project}), this induces a morphism $Y\to Z$ of schemes over $X$ as required. \end{proof} \section{Blowup of conductors}\label{14} \begin{dfn}\label{58}\ \begin{enumerate}[(a)] \item Let $f:Y\to X$ be a finite morphism of Noetherian schemes and let $\mathscr{F}$ be a coherent $\mathcal{O}_X$-module. Then $\SHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y,\mathscr{F})$ defines a coherent $\mathcal{O}_Y$-module $f^!\mathscr{F}$ (see \cite[Prop.~6.4.25.(a)]{Liu02}). The $\mathcal{O}_X$-ideal $\mathscr{C}_{Y/X}:=\SHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y,\mathcal{O}_X)$ is the \emph{conductor of $f$}.
\item Let $X$ be a reduced scheme and let $\overline{\mathcal{O}}_X$ be the integral closure of $\mathcal{O}_X$ in $\mathscr{Q}_X$. Then $f:\widetilde X:=\Spec_X\overline{\mathcal{O}}_X\to X$ is the \emph{normalization} and $\mathscr{C}_{\widetilde X/X}$ is the \emph{conductor of $X$}. \end{enumerate} \end{dfn} \begin{rmk}\label{18} A scheme $X$ is called \emph{Nagata} if for any affine open $U=\Spec A\subset X$ (in some cover) the ring $A$ is Nagata. Nagata schemes have a finite normalization morphism. Since we need only this latter property, we omit the details of the definition. \end{rmk} To define conductors of fractional ideals we apply the following easy result. \begin{lem}\label{20} Let $A$ be a ring and let $J$ and $I$ be fractional ideals of $A$. Then $\Hom_A(J,I)\cong I:_{Q(A)}J$ given by $\frac{\varphi(a)}{a}\leftrightarrow\varphi$ for any $a\in A^\mathrm{reg}\cap J$. In particular, $\Hom_A(J,I)$ is again a fractional ideal.\qed \end{lem} \begin{lem}\label{21} Let $f:Y\to X$ be a finite bifractional morphism of schemes. Let $\mathscr{J}$ and $\mathscr{I}$ be fractional ideals on $Y$ and $X$ respectively. Then $f_*\mathscr{J}$ and \[ \mathscr{C}_{\mathscr{J}/\mathscr{I}}:=\SHom_{\mathcal{O}_X}(f_*\mathscr{J},\mathscr{I})=\mathscr{I}:_{\mathscr{Q}_X}f_*\mathscr{J} \] are fractional ideals on $X$ and there is a unique fractional ideal $\mathscr{C}'_{\mathscr{J}/\mathscr{I}}$ on $Y$ such that \[ f_*(\mathscr{C}'_{\mathscr{J}/\mathscr{I}})=\mathscr{C}_{\mathscr{J}/\mathscr{I}}. \] \end{lem} \begin{proof} Since $f$ is finite and hence affine, $f_*\mathscr{J}$ is coherent and we may assume that both $X=\Spec(A)$ and $Y=\Spec(B)$ are affine. By Lemma~\ref{4}.\eqref{4b}, \eqref{44} up to isomorphism translates to \begin{equation}\label{43} \xymat{ B\ar@{^(->}[r] & Q(B)\\ A\ar@{^(->}[r]\ar@{^(->}[u]^-{\varphi} & Q(A)\ar@{=}[u] } \end{equation} By Lemma~\ref{25}, $J:=\Gamma(Y,\mathscr{J})=\Gamma(X,f_*\mathscr{J})$ and $I:=\Gamma(X,\mathscr{I})$ are fractional ideals of $B$ and $A$ respectively, and $\SHom_{\mathcal{O}_X}(f_*\mathscr{J},\mathscr{I})$ is the sheaf associated to $\Hom_A(J,I)$ where $J$ is an $A$-module via $\varphi$. We may assume that $I\ne0\ne J$. Since $\varphi:A\to B$ is finite, $J$ is a finite $A$-module and $J\otimes_AQ(A)=J\otimes_B Q(B)\cong Q(B)=Q(A)$ which means that $\rk_AJ=\rk_BJ=\rk_Y\mathscr{J}=1$ by Lemma~\ref{23}.\eqref{23a} and Remark~\ref{11}.\eqref{11c}. Hence $J$, and then by Lemma~\ref{20} also $\Hom_A(J,I)$, is a nonzero fractional ideal of $A$. With $J$ also $\Hom_A(J,I)$ is a $B$-module which is finite over $A$ and hence over $B$. Thus, $\Hom_A(J,I)\otimes_BQ(B)=\Hom_A(J,I)\otimes_AQ(A)\cong Q(A)$ and therefore $\rk_B\Hom_A(J,I)=\rk_A\Hom_A(J,I)=1$. Thus, using Lemma~\ref{25}, the fractional ideal \begin{equation}\label{33} \mathscr{C}'_{\mathscr{J}/\mathscr{I}}:=\widetilde{\Hom_A(J,I)} \end{equation} is the $\mathcal{O}_Y$-module associated to the $B$-module $\Hom_A(J,I)$. \end{proof} \begin{rmk}\label{62} If $f:Y\to X$ is finite bifractional, then $\mathscr{C}'_{\mathcal{O}_Y/\mathscr{I}}=f^!\mathscr{I}$ in Lemma~\ref{21}. \end{rmk} \begin{lem}\label{13} Under the hypotheses of Lemma~\ref{21}, $\Bl_{\mathscr{C}_{\mathscr{J}/\mathscr{I}}}X=\Bl_{\mathscr{C}'_{\mathscr{J}/\mathscr{I}}}Y$. \end{lem} \begin{proof} Reduce to an affine situation as in Lemma~\ref{21}.
Setting $C:=\Gamma(X,\mathscr{C}_{\mathscr{J}/\mathscr{I}})=\Gamma(Y,\mathscr{C}'_{\mathscr{J}/\mathscr{I}})$, we have \[ \Bl_{\mathscr{C}_{\mathscr{J}/\mathscr{I}}} X=\Proj_A(A\oplus C\oplus C^2\oplus\cdots)=\Proj_B(B\oplus C\oplus C^2\oplus\cdots)=\Bl_{\mathscr{C}'_{\mathscr{J}/\mathscr{I}}}Y \] which proves the claim. \end{proof} Combining Remark~\ref{45}.\eqref{45a} and Lemma~\ref{13} immediately implies the following \begin{thm}\label{22} Let $f:Y\to X$ be a finite bifractional morphism of schemes. Let $\mathscr{J}$ and $\mathscr{I}$ be fractional ideals on $Y$ and $X$ respectively. Then $\Bl_{\mathscr{C}_{\mathscr{J}/\mathscr{I}}} X=Y$ if and only if $\mathscr{C}'_{\mathscr{J}/\mathscr{I}}$ is invertible.\qed \end{thm} Using Remark~\ref{18}, we recover a result of P.M.H.~Wilson~\cite[Cor.~1.4]{Wil78} with reduced hypotheses. \begin{cor} Let $X$ be a reduced one-dimensional locally Noetherian scheme with finite normalization $\widetilde X\to X$. Then $\Bl_{\mathscr{C}_{\widetilde X/X}} X=\widetilde X$. \end{cor} \begin{proof} By hypothesis $\mathcal{O}_{\widetilde X}$ is a sheaf of PIDs and hence any fractional ideal on $\widetilde X$ is invertible. Then the claim follows from Theorem~\ref{22}. \end{proof} In this context it is interesting to know when $\mathscr{C}'_{\mathscr{J}/\mathscr{I}}$ is reflexive. \begin{prp} In addition to the hypotheses of Theorem~\ref{22}, assume that $Y$ is $(S_2)$ and Gorenstein in codimension up to one. If $\mathscr{I}$ is $(S_2)$, then $\mathscr{C}'_{\mathscr{J}/\mathscr{I}}$ is reflexive. \end{prp} \begin{proof}\pushQED{\qed} Reduce to an affine situation as in Lemma~\ref{21}. By \eqref{33} and \cite[Thm.~3.6]{EG85}, it suffices to show that $\Hom_A(J,I)_\mathfrak{q}$ is an $(S_2)$ $B_\mathfrak{q}$-module for any $\mathfrak{q}\in\Spec B$. Setting $\mathfrak{p}:=\varphi^{-1}(\mathfrak{q})\in\Spec A$, we may replace $A$ by $A_\mathfrak{p}$ with maximal ideal $\mathfrak{m}:=\mathfrak{p} A_\mathfrak{p}$, $\mathfrak{q}$ by $\mathfrak{n}:=\mathfrak{q} B_\mathfrak{p}$, and assume that $A$ is local. By finiteness of $B$ over $A$, $\dim A\ge\dim B\ge\dim B_{\mathfrak{q}}$. Using \cite[Prop.~1.2.10.(a), Exc.~1.4.19]{BH93} and that $I$ is $(S_2)$, this implies \begin{align*} \depth_{B_\mathfrak{q}}\Hom_A(J,I)_\mathfrak{q} &\ge\grade(\mathfrak{m},\Hom_A(J,I)) =\depth_A\Hom_A(J,I)\\ &\ge\min\{2,\depth_AI\} =\min\{2,\dim A\}\\ &\ge\min\{2,\dim B\}\ge\min\{2,\dim B_{\mathfrak{q}}\}.\qedhere \end{align*} \end{proof} \section{Blowup of canonical ideals}\label{42} Let $X$ be a \emph{Cohen--Macaulay scheme}. That is, $X$ is locally Noetherian and $\mathcal{O}_{X,x}$ is a Cohen--Macaulay ring for all (closed) points $x\in X$ (see \cite[\href{http://stacks.math.columbia.edu/tag/02IP}{Lem.~02IP}]{stacks-project}). The condition on closed points suffices due to Lemma~\ref{56} and since the Cohen--Macaulay property localizes (see \cite[Thm.~2.1.3.(b)]{BH93}). We use an analogous definition for a \emph{Gorenstein scheme} (see \cite[\href{http://stacks.math.columbia.edu/tag/0AWW}{Def.~0AWW}]{stacks-project}). By a \emph{canonical module} $\omega_X$ on a Cohen--Macaulay scheme $X$ we mean a coherent $\mathcal{O}_X$-module such that $\omega_{X,x}$ is a canonical module for $\mathcal{O}_{X,x}$ in the sense of \cite[Def.~3.3.1]{BH93} for all (closed) points $x\in X$. Recall that by \cite[Thm.~3.3.5.(b)]{BH93} the canonical module localizes. If $\omega_X$ is isomorphic to a fractional ideal we call it a \emph{canonical ideal}.
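For instance, if $X$ is Gorenstein, then $\mathcal{O}_X$ itself is a canonical ideal (see \cite[Thm.~3.3.7]{BH93}).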
\begin{lem}\label{52} Let $X$ be a Cohen--Macaulay scheme with canonical module $\omega_X$. Assume that $\omega_X$ has a global rank. Then $\omega_X$ is a canonical ideal. \end{lem} \begin{proof} By Lemma~\ref{23}.\eqref{23c} and \cite[Prop.~3.3.18]{BH93}, $\rk\omega_{X,x}=1$, that is, $X$ is generically Gorenstein. Since $\omega_{X,x}$ is a maximal Cohen--Macaulay module it is torsion free and hence taking stalks of the canonical morphism $\omega_X\to\omega_X\otimes_{\mathcal{O}_X}\mathscr{Q}_X\cong\mathscr{Q}_X$ yields the claim. \end{proof} Let $X$ be a Cohen--Macaulay scheme with canonical ideal $\omega_X$. Let $f:Y\to X$ be a finite bifractional morphism of schemes. We abbreviate \[ \omega_Y:=f^!\omega_X. \] By Lemma~\ref{21} and Remark~\ref{62}, $\SHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y,\omega_X)$ and $\omega_Y$ are fractional ideals on $X$ and $Y$ respectively and related by \begin{equation}\label{15} f_*\omega_Y=\SHom_{\mathcal{O}_X}(f_*\mathcal{O}_Y,\omega_X). \end{equation} \begin{dfn} We say that a morphism $f:Y\to X$ is \emph{equidimensional along fibers of closed points} if, for all closed points $x\in X$, $\dim\mathcal{O}_{Y,y}$ is independent of $y\in f^{-1}(x)$. \end{dfn} \begin{prp}\label{12} Let $f:Y\to X$ be a finite bifractional morphism of Cohen--Macaulay schemes which is equidimensional along fibers of closed points and let $\omega_X$ be a canonical ideal on $X$. Then $\omega_Y$ is invertible if and only if $Y$ is Gorenstein. \end{prp} \begin{proof} Since $\omega_Y$ is coherent, it is invertible if and only if $\omega_{Y,y}\cong\mathcal{O}_{Y,y}$ for all closed $y\in Y$. By faithful flatness of completion, this is equivalent to $\widehat{\omega_{Y,y}}\cong\widehat{\mathcal{O}_{Y,y}}$ for all closed $y\in Y$. By \cite[Prop.~3.1.19, Thm.~3.3.7]{BH93} and Lemma~\ref{16} below, this is equivalent to $Y$ being Gorenstein. \end{proof} \begin{lem}\label{57} Let $f:Y\to X$ be a finite bifractional morphism of schemes. Then $y\in Y$ is closed if and only if $x=f(y)\in X$ is closed. \end{lem} \begin{proof} By hypothesis $f$ corresponds affine locally in the target to an integral extension $\varphi$ as in \eqref{43}. Here the going-up and incomparability theorems apply and Lemma~\ref{56} implies the claim. \end{proof} \begin{lem}\label{16} Under the hypotheses of Proposition~\ref{12}, $\widehat{\omega_{Y,y}}$ is a canonical module for $\widehat{\mathcal{O}_{Y,y}}$ for all closed $y\in Y$. \end{lem} \begin{proof} Let $y\in Y$ be closed. Then $x=f(y)\in X$ is closed by Lemma~\ref{57}. By hypothesis $f$ is finite hence quasifinite and affine. Then $x$ has finite preimage $f^{-1}(x)=\{y=y_1,y_2,\dots,y_n\}$. We may assume that both $Y=\Spec B$ and $X=\Spec A$ are affine and identify $x=\mathfrak{p}\in\Spec A$, $y=\mathfrak{q},y_i=\mathfrak{q}_i\in\Spec B$. Since $f$ is finite bifractional, $\varphi:A\hookrightarrow B$ in \eqref{43} and hence $\varphi_\mathfrak{p}:A_\mathfrak{p}\hookrightarrow B_\mathfrak{p}$ is a finite extension. Setting $\omega_A=\Gamma(X,\omega_X)$ and $\omega_B=\Gamma(Y,\omega_Y)$, \eqref{15} reads $\omega_B=\Hom_A(B,\omega_A)$ and hence $\omega_{B,\mathfrak{p}}=\Hom_{A_\mathfrak{p}}(B_\mathfrak{p},\omega_{A_\mathfrak{p}})$. We may therefore assume that $A=(A,\mathfrak{p})$ is local with canonical module $\omega_A$. Since $B$ is integral over $A$, $\dim A=\dim B\ge\dim B_{\mathfrak{q}_i}$ with equality for some $i$. Since $y$ is closed, $\mathfrak{q}\lhd B$ is maximal, which is equivalent to $\mathfrak{p}\lhd A$ being maximal by \cite[Cor.~5.8]{AM69}.
It follows that $B$ is semilocal with maximal ideals $\mathfrak{q}_1,\dots,\mathfrak{q}_n\lhd B$. Using the equidimensionality hypothesis, it follows that $\dim B_{\mathfrak{q}_i}=\dim A$ for all $i$. The ideal $\mathfrak{p} B$ defines the same topology as the Jacobson radical $\bigcap_{i=1}^n\mathfrak{q}_i$ and hence $B\otimes_A\widehat A=\widehat B$. By \cite[Thm.~8.15]{Mat89}, there is a product decomposition \begin{equation}\label{60} \widehat B=\prod_{i=1}^n\widehat{B_{\mathfrak{q}_i}}. \end{equation} Then each $\widehat{B_{\mathfrak{q}_i}}$ is a finite $\widehat A$-module of dimension $\dim\widehat{B_{\mathfrak{q}_i}}=\dim \widehat A$. Since $B$ is finitely presented and $\widehat A$ is flat over $A$, \cite[Thm.~3.3.5.(c), Thm.~3.3.7.(b)]{BH93} yields that \begin{align*} \omega_B\otimes_B\widehat B &=\Hom_A(B,\omega_A)\otimes_B\widehat B =\Hom_A(B,\omega_A)\otimes_A\widehat A =\Hom_{\widehat A}(B\otimes_A\widehat A,\omega_A\otimes_A\widehat A)\\ &=\Hom_{\widehat A}(\prod_{i=1}^n\widehat{B_{\mathfrak{q}_i}},\omega_{\widehat A}) =\bigoplus_{i=1}^n\Hom_{\widehat A}(\widehat{B_{\mathfrak{q}_i}},\omega_{\widehat A}) =\bigoplus_{i=1}^n\omega_{\widehat{B_{\mathfrak{q}_i}}}. \end{align*} The claim then follows by applying $-\otimes_{\widehat B}\widehat{B_\mathfrak{q}}$: \[ \widehat{\omega_{Y,y}}=\widehat{\omega_{B,\mathfrak{q}_1}}=\omega_B\otimes_BB_\mathfrak{q}\otimes_{B_\mathfrak{q}}\widehat{B_\mathfrak{q}}=\omega_B\otimes_B\widehat{B_\mathfrak{q}}=\omega_B\otimes_B\widehat B\otimes_{\widehat B}\widehat{B_\mathfrak{q}}=\omega_{\widehat{B_\mathfrak{q}}}=\omega_{\widehat{\mathcal{O}_{Y,y}}}. \] \end{proof} We now generalize a result of P.M.H.~Wilson~\cite[Thm.~2.7, Rem.~2.8]{Wil78}. \begin{thm}\label{47} Let $f:Y\to X$ be a finite bifractional morphism of schemes. Assume that $X$ is Cohen--Macaulay with canonical ideal $\omega_X$. Then $\Bl_{f_*f^!\omega_X} X=Y$ if and only if $f^!\omega_X$ is invertible. The latter is equivalent to $Y$ being Gorenstein if $Y$ is Cohen--Macaulay and $f$ is equidimensional along fibers of closed points. \end{thm} \begin{proof} This follows from Theorem~\ref{22} and Proposition~\ref{12}. \end{proof} Taking Remark~\ref{18} into account, Theorem~\ref{48} in \S\ref{53} is now a consequence of Theorem~\ref{47} and the following result. \begin{prp}\label{54} Let $X$ be a reduced Cohen--Macaulay scheme. If $f:Y\to X$ is finite bifractional, then it is equidimensional along fibers of closed points. \end{prp} \begin{proof} Let $x\in X$ be a closed point and $y\in f^{-1}(x)$. We may return to the affine local situation of the proof of Lemma~\ref{16}. Since $A$ is reduced we have $Q(A)_\mathfrak{p}=Q(A_\mathfrak{p})$. By exactness of localization, we may assume that $A=(A,\mathfrak{p})$ is local Cohen--Macaulay and \begin{equation}\label{61} \xymat{ A\ar@{^(->}[r]^-\varphi & B\ar@{^(->}[r] & Q(A) } \end{equation} with $\varphi$ a finite extension. By exactness of completion, $-\otimes_A\widehat A$ preserves injectivity and finiteness in \eqref{61} and $Q(A)\otimes_A\widehat A\hookrightarrow Q(\widehat A)$. Since the Cohen--Macaulay property commutes with completion (\cite[Cor.~2.1.8]{BH93}), we may assume that $A$ is complete and that $B$ is decomposed as in \eqref{60}. In particular, $B$ is catenary (see \cite[Thm.~2.1.12]{BH93}). Let $\mathfrak{q}'\in\Spec B$ be such that $\dim B_\mathfrak{q}=\dim\mathfrak{q}'$. In particular, $\mathfrak{q}'$ is minimal and hence $\mathfrak{q}'\in\mathscr{A}ss B$. Then $\mathfrak{p}':=\varphi^{-1}(\mathfrak{q}')$ must be minimal.
Otherwise, there is a $t\in\mathfrak{p}'$ not contained in any minimal prime of $A$ by prime avoidance. Since $A$ is reduced, this means that $t\in A^\mathrm{reg}\cap\mathfrak{p}'$. Then $t$ becomes a unit in $Q(A)$ and hence $\varphi(t)\in B^\mathrm{reg}$, in contradiction to $\varphi(t)\in\mathfrak{q}'\in\mathscr{A}ss B$. Since local Cohen--Macaulay rings are equidimensional (see \cite[Thm.~2.1.2.(a)]{BH93}), $\dim\mathfrak{p}'=\dim A$. Applying the going-up theorem to $\mathfrak{q}'$ and a maximal chain of primes between $\mathfrak{p}'$ and $\mathfrak{p}$ gives $\dim A=\dim B_\mathfrak{q}$ and the claim follows. \end{proof} \end{document}
\begin{document} \begin{abstract} The aim of this paper is to investigate the singular relaxation limits for the Euler--Korteweg and the Navier--Stokes--Korteweg system in the high friction regime. We shall prove that the viscosity term is present only in higher orders in the proposed scaling and therefore it does not affect the limiting dynamics, and the two models share the same equilibrium equation. The analysis of the limit is carried out using the relative entropy techniques in the framework of weak, finite energy solutions of the relaxation models converging toward smooth solutions of the equilibrium. The results proved here take advantage of the enlarged formulation of the models in terms of the \emph{drift velocity} introduced in \cite{Bresh}, generalizing in this way the ones proved in \cite{LT2} for the Euler--Korteweg model. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} The objective of this work is to study the high friction limit for the Euler--Korteweg and the Navier--Stokes--Korteweg systems, that is: \begin{equation}\label{ek} \left\{\begin{aligned} & \partial_t \rho + \dive m =0\\ & \partial_t m + \dive \left( \frac{m \otimes m}{\rho}\right) + \nabla p(\rho) = {2 \nu} \dive(\mu_L(\rho)Du) + \nu \nabla(\lambda_L(\rho) \dive u) \\ & \qquad\qquad\qquad + \rho \nabla \left( k(\rho)\Delta\rho + \frac{1}{2}k'(\rho)|\nabla \rho|^2 \right) - \xi \rho u, \end{aligned}\right. \end{equation} where $t>0$, $x\in\mathbb{T}^n$, the $n$--dimensional torus, $\rho$ is the density, $m=\rho u$ is the momentum, and the constants $\xi > 0$ and $\nu\geq 0$ stand for the friction and the viscosity coefficients, respectively ($\nu=0$ in the case of the Euler--Korteweg system). As usual, in the viscosity terms of \eqref{ek} $$Du= \frac{ \nabla u + {}^t\nabla u}{2}$$ is the symmetric part of the gradient $\nabla u$ and the Lam\'e coefficients $\mu_L(\rho)$ and $\lambda_L(\rho)$ verify \begin{equation} \label{lame} \mu_L(\rho)\geq 0;\ \frac2n \mu_L(\rho) +\lambda_L(\rho) \geq 0. \end{equation} Moreover, $p(\rho)$ stands for the pressure, connected to the internal energy $e(\rho)$ by the relations \begin{equation}\label{eq:defh} e'(\rho)= \frac{p(\rho)}{\rho^2};\ h(\rho)= \rho e(\rho);\ h''(\rho) = \frac{p'(\rho)}{\rho};\ p(\rho)=\rho h'(\rho) - h(\rho). \end{equation} As a consequence, we readily obtain \begin{equation*} \rho \nabla (h'(\rho)) = \rho h''(\rho) \nabla \rho = \nabla p(\rho). \end{equation*} In what follows we shall confine ourselves to the case of a monotone pressure and, for simplicity, we shall consider the classical $\gamma$--law $p(\rho)= \rho^{\gamma}$ for $\gamma > 1$, for which the function $h$ is given by \begin{equation*} h(\rho) = \frac{1}{\gamma-1}\rho^\gamma. \end{equation*} The literature concerning this kind of systems, which includes in particular Quantum Hydrodynamic models, is very wide, and a complete description of it is beyond the main interest of our present research, which is focused on the study of the relaxation limit for weak, finite energy solutions of \eqref{ek}. In particular, we are not interested here in investigating the existence of such solutions, but solely in understanding their behavior in the high friction regime. However, for some rigorous mathematical studies of such systems, regarding in particular the existence of weak solutions, the dedicated reader may refer to \cite{AM1,AM2,AS,AS2} and the references therein.
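Let us also record the elementary verification of \eqref{eq:defh} for this $\gamma$--law: one has \[ h''(\rho)=\gamma\rho^{\gamma-2}=\frac{p'(\rho)}{\rho}, \qquad \rho h'(\rho)-h(\rho)=\frac{\gamma}{\gamma-1}\rho^{\gamma}-\frac{1}{\gamma-1}\rho^{\gamma}=\rho^{\gamma}=p(\rho), \] together with $e(\rho)=h(\rho)/\rho=\rho^{\gamma-1}/(\gamma-1)$ and $e'(\rho)=\rho^{\gamma-2}=p(\rho)/\rho^2$.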
The high friction limit, after an appropriate time scaling, is in both cases given by the following equation: \begin{equation}\label{eq:diff-limit} \rho_t = \dive_x \left( \rho \nabla_x \left(h'(\rho) - k(\rho)\Delta\rho - \frac{1}{2}k'(\rho)|\nabla \rho|^2 \right) \right), \end{equation} as one can easily check by performing the classical Hilbert expansion. Moreover, the rigorous study of this singular limit in terms of relative entropy techniques does not present significant differences when $\nu>0$; therefore we shall first discuss the case of the Euler--Korteweg system in full detail, and leave the discussion of the Navier--Stokes--Korteweg system for the last section, where we shall only emphasize how to control the new terms due to the presence of the viscosity in \eqref{ek}. Moreover, it is worth observing that, besides the natural condition \eqref{lame} needed to guarantee the dissipative nature of the viscosity terms, we shall assume only appropriate uniform integrability conditions on these functions (which can be deduced from a bound of their $L^1$ norm in terms of the energy), without the precise connection with the capillarity coefficient $k(\rho)$ that is usually needed in the analysis of these models. The singular limits under investigation here enter the realm of diffusive relaxations, for which hyperbolic systems of balance laws (such as \eqref{ek} for $\nu=0$) converge in a diffusive scaling toward parabolic equilibrium systems. This kind of asymptotic analysis has been addressed in various frameworks and with several techniques; in particular we refer to \cite{DM04} and the references therein for the results concerning weak solutions and compactness arguments. More recently, this kind of limit has also been successfully addressed by means of relative entropy techniques, starting from the well--known case of the Euler system with friction (obtained by choosing $k(\rho) =0$ in \eqref{ek} in addition to $\nu=0$) converging to the porous media equation \cite{LT}. It is worth recalling that this asymptotic behavior has also been analyzed from many different viewpoints, and in particular for this remarkable example we refer to \cite{MM90,HMP05,HPW11}. However, the study of such limits with the present technique, even if it is confined to the case of smooth solutions at equilibrium, has the advantage of providing a stability estimate and hence a rate of convergence as the relaxation parameter goes to zero. More recently, many other diffusive limits have been addressed following the same ideas; among others, see \cite{Bia19,FT19,HJT19,OR20,Carrillo}, and in particular here we recall the general framework introduced in \cite{GLT,LT2}, where the relative entropy calculation and the analysis of the diffusive limits have been presented for abstract Euler flows generated by the first variation of an energy functional $\mathcal{E}(\rho)$: \begin{equation*} \left\{\begin{aligned} & \partial_t \rho + \dive (\rho u) =0\\ & \rho \partial_tu + \rho u \cdot \nabla u = - \rho \nabla \frac{\delta \mathcal{E}}{\delta \rho} - \xi \rho u. \end{aligned}\right. \end{equation*} The system $\eqref{ek}$ under consideration here belongs to this class of abstract flows for the following particular choice for $\mathcal{E}(\rho)$: \begin{equation}\label{eq:potintro} \mathcal{E}(\rho) = \int \left (h(\rho) + \frac12 k(\rho) |\nabla \rho|^2 \right )dx.
\end{equation} Referring in particular to the analysis of the large friction limit, among other possible instances, we recall here that in \cite{LT2} the authors showed the emergence of the (Cahn--Hilliard type) equation \eqref{eq:diff-limit} as high friction limit of the Euler--Korteweg system solely in the case of constant capillarity $k(\rho) = C_k$. This result is based on the following general relative entropy relation for the aforementioned abstract Euler equations \cite{GLT,LT2} \begin{equation*} \begin{split} & \frac{d}{dt} \left( \mathcal{E}(\rho|\bar{\rho}) + \int \frac{1}{2} \rho|u-\bar{u}|^2 \right) + \xi \int \rho|u-\bar{u}|^2 dx = \\ & \int \nabla \bar{u} : S(\rho|\bar{\rho}) dx - \int \rho \nabla \bar{u} : (u-\bar{u}) \otimes (u-\bar{u}) dx, \end{split} \end{equation*} written here for $(\rho,u)$ and $(\bar{\rho},\bar{u})$ smooth solutions of this system. The stress tensor $S$ appearing in the relation above can be defined in many examples of physical interest starting from the energy functional as follows: \begin{equation*} -\rho \nabla \frac{\delta \mathcal{E}}{\delta \rho} = \dive S. \end{equation*} In the particular case under consideration here, this relation becomes \begin{equation*} -\nabla p (\rho) + \rho \nabla \left( k(\rho)\Delta\rho + \frac{1}{2}k'(\rho)|\nabla \rho|^2 \right) = \dive S. \end{equation*} The relation recalled above, and thus the corresponding control of the diffusive limit, can be improved if we confine our attention to the specific form of the Euler--Korteweg system \eqref{ek}, as recently proved in \cite{Bresh}. Indeed, the relative entropy technique turns out to be more effective if one introduces the \emph{drift velocity} \begin{equation*} v= \frac{\nabla \mu(\rho)}{\rho}, \end{equation*} where $\mu(\rho)$ satisfies $\mu'(\rho) = \sqrt{\rho k(\rho)}$. In this way, it is possible to obtain an augmented formulation of \eqref{ek}, which, for the Euler--Korteweg system (that is, with $\nu=0$), reads as follows: \begin{equation}\label{ekb} \left\{\begin{aligned} & \partial_t \rho + \dive (\rho u) =0\\ & \partial_t( \rho u ) + \dive( \rho u\otimes u) + \nabla p( \rho) = \dive(\mu(\rho)\nabla v) + \frac{1}{2}\nabla(\lambda(\rho)\dive v) - \xi \rho u\\ & \partial_t (\rho v) + \dive(\rho v \otimes u) + \dive(\mu(\rho)^t\nabla u) + \frac{1}{2}\nabla (\lambda(\rho)\dive u)=0, \end{aligned}\right. \end{equation} where $\lambda(\rho)= 2(\mu'(\rho)\rho - \mu(\rho))$ and, thanks to the Bohm identity (see \cite{Bresh}), we also have \begin{equation*} \dive(\mu(\rho)\nabla v) + \frac{1}{2}\nabla(\lambda(\rho)\dive v) = \dive S_1 \end{equation*} thus defining the new stress tensor $S_1$ in $\eqref{ekb}$ due solely to the capillarity effects. As we shall prove in the sequel, this approach will lead us to control the high friction limits for non-constant capillarities, obtaining the same advantages already pointed out in \cite{Bresh} also in the context of diffusive relaxation, thus generalizing the results of \cite{LT2} for this particular system. More precisely, the strategy is to define a new \emph{momentum} $J=\rho v$ and then to estimate the following relative entropy: \begin{align*} \eta(\rho,m,J | \bar{\rho},\bar{m},\bar{J}) & = \eta(\rho,m,J) - \eta(\bar{\rho},\bar{m},\bar{J}) - \bar{\eta}_{\rho}(\rho- \bar{\rho})- \bar{\eta}_m \cdot (m-\bar{m}) \\ & \ - \bar{\eta}_J \cdot(J-\bar{J}) \\ & = \frac{1}{2} \rho |u - \bar{u}|^2 + \frac{1}{2} \rho |v-\bar{v}|^2 + h(\rho|\bar{\rho}).
\end{align*} In the present analysis, which involves a relaxation limit between two different diffusive theories, the equilibrium (smooth) solution $(\bar{\rho},\bar{m},\bar{J})$ will solve the corresponding diffusive limiting equation, which shall then be recast as an appropriate correction of the relaxing system \eqref{ekb}, as already done in previous works \cite{LT,LT2}. The outline of this work is as follows. In Section \ref{sec:hilbert}, after the appropriate time scaling, we perform the Hilbert expansion of \eqref{ekb} in order to recognize the limit equation. Then we rewrite the latter as a correction of the relaxation system \eqref{ekb} to take full advantage of the relative entropy tools. Section \ref{sec:relenes} is devoted to obtaining the relative entropy inequality, which will be used as a yardstick to measure the distance between the two solutions in the relaxation limit of the subsequent section. Finally, in Section \ref{sec:NSK} we describe how all our results can be adapted in a straightforward way to the case of the Navier--Stokes--Korteweg model \eqref{ek} with $\nu>0$. \section{Hilbert expansion and formal diffusive limit for the Euler--Korteweg model} \label{sec:hilbert} In this section we shall present the correct scaling for which \eqref{ek}, and hence \eqref{ekb}, exhibits the desired diffusive limit. More precisely, for $\xi = 1/\epsilon$, we rescale the time so that $\partial_t \rightarrow \epsilon\partial_t$ and \eqref{ek} with $\nu=0$ becomes: \begin{equation}\label{ek-scaled} \left\{\begin{aligned} & \partial_t \rho + \frac{1}{\epsilon} \dive m =0 \\ & \partial_t m + \frac{1}{\epsilon} \dive \left( \frac{m \otimes m}{\rho}\right) + \frac{1}{\epsilon} \nabla p( \rho) = \frac{1}{\epsilon}\dive S_1 - \frac{1}{\epsilon^2} \rho u. \end{aligned}\right. \end{equation} Accordingly, \eqref{ekb} reads \begin{equation}\label{ekb-scaled} \left\{\begin{aligned} & \partial_t \rho + \frac{1}{\epsilon} \dive (m) =0\\ & \partial_t m + \frac{1}{\epsilon} \dive \left( \frac{m \otimes m}{\rho}\right) + \frac{1}{\epsilon} \nabla p( \rho) = \frac{1}{\epsilon}\dive S_1 - \frac{1}{\epsilon^2} \rho u\\ & \partial_t J + \frac{1}{\epsilon} \dive \left( \frac{ J \otimes m}{\rho} \right) + \frac{1}{\epsilon}\dive S_2=0, \end{aligned}\right. \end{equation} where $J=\rho v$ and (see \cite{Bresh} for further details) \begin{equation*} \dive S_2 = \dive(\mu(\rho)^t\nabla u) + \frac{1}{2}\nabla (\lambda(\rho)\dive u). \end{equation*} In order to perform the Hilbert expansion, we need to introduce the asymptotic expansions of $\rho$ and $m$ in \eqref{ekb-scaled}; the one for $J$ will then follow, since $J = \rho v = \nabla \mu(\rho)$ (see the remark below). To this end, we write \begin{align*} &\rho = \rho_0 + \epsilon\rho_1 + \epsilon^2 \rho_2 + \cdots\\ &m = m_{0} + \epsilon m_{1} + \epsilon^2 m_{2} + \cdots \end{align*} and collect the terms of the same order.
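Note that, since $J = \rho v = \nabla\mu(\rho)$, the expansion of $J$ is completely determined by the one of $\rho$; to the orders needed below it reads \[ J = \nabla\mu(\rho_0) + \epsilon\,\nabla\big(\mu'(\rho_0)\rho_1\big) + O(\epsilon^2). \]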
From the mass conservation we get: \begin{align*} &O(\epsilon^{-1}): & & \dive m_{0} = 0; \\ &O(1): & &\partial_t \rho_0 + \dive m_{1} = 0; \\ &O(\epsilon): & & \dots \\ \end{align*} From the momentum equation we get: \begin{align*} & O(\epsilon^{-2}) : & &m_{0}=0; \\ & O(\epsilon^{-1}) : & &-m_{1} = \nabla p(\rho_0) - \dive S_1(\rho_0);& \\ & O(1): & &\dots \\ \end{align*} Hence, from these first relations, we recover the equilibrium relation $m_{0} = 0$, Darcy's law $m_{1} = - \nabla_x p(\rho_0) + \dive_x S_1(\rho_0)$, and the following gradient flow dynamics for $\rho_0$: \begin{equation}\label{gf} \partial_t\rho_{0} + \dive \left( - \nabla p(\rho_0) + \dive S_1(\rho_0) \right)= 0, \end{equation} that is, the formal limit as $\epsilon \rightarrow 0$ of \eqref{ekb-scaled} (see also the remark at the end of this section). In order to compare weak solutions of \eqref{ekb-scaled} and strong solutions of its parabolic equilibrium \eqref{gf}, and to take full advantage of the relative entropy estimate for hyperbolic systems, as already done in \cite{LT, LT2}, we rewrite the latter as an Euler--Korteweg system with friction plus an error term, as follows. Let us denote by $\bar{\rho}$ the (smooth) solution of $\eqref{gf}$. Then $(\bar{\rho}, \bar{m} = \bar{\rho}\bar{u})$ solves \begin{equation}\label{ek-scaledstrong} \left\{\begin{aligned} & \partial_t \bar{\rho} + \frac{1}{\epsilon} \dive \bar{m} =0\\ & \partial_t\bar{ m} + \frac{1}{\epsilon} \dive \left( \frac{\bar{m} \otimes \bar{m}}{\bar{\rho}}\right) + \frac{1}{\epsilon} \nabla p(\bar{ \rho}) = \frac{1}{\epsilon}\dive \bar{S_1} - \frac{1}{\epsilon^2} \bar{m} + e(\bar{\rho},\bar{m}), \end{aligned}\right. \end{equation} where \begin{equation*} \bar{m} = \epsilon \left(- \nabla p(\bar{\rho})+ \dive S_1(\bar{\rho})\right). \end{equation*} Clearly, in \eqref{ek-scaledstrong}, the error term $e(\bar{\rho},\bar{m})= \bar{e}$ is given by: \begin{align}\label{eq:error} \bar{e} & = \frac{1}{\epsilon} \dive_x \left( \frac{\bar{m} \otimes \bar{m} }{\bar{\rho}} \right) + \bar{m}_t \nonumber\\ & = \epsilon \dive_x \left( \frac{(- \nabla p(\bar{\rho})+ \dive S_1(\bar{\rho})) \otimes (- \nabla p(\bar{\rho})+ \dive S_1(\bar{\rho}))}{\bar{\rho}} \right) \nonumber\\ & \ + \epsilon \left( - \nabla p(\bar{\rho})+ \dive S_1(\bar{\rho}) \right)_t \nonumber\\ & = O(\epsilon). \end{align} Introducing the notation $\bar{J}= \bar{\rho}\bar{v} = \nabla \mu(\bar{\rho})$, the equilibrium can be rewritten also as follows: \begin{equation}\label{ekb-scaledstrong} \left\{\begin{aligned} & \partial_t \bar{\rho} + \frac{1}{\epsilon} \dive \bar{m} =0\\ & \partial_t\bar{ m} + \frac{1}{\epsilon} \dive \left( \frac{\bar{m} \otimes \bar{m}}{\bar{\rho}}\right) + \frac{1}{\epsilon} \nabla p(\bar{ \rho}) = \frac{1}{\epsilon}\dive \bar{S_1} - \frac{1}{\epsilon^2} \bar{m} + e(\bar{\rho},\bar{m})\\ & \partial_t \bar{J} + \frac{1}{\epsilon} \dive \left( \frac{ \bar{J} \otimes \bar{m}}{\bar{\rho}} \right) + \frac{1}{\epsilon} \dive \bar{S_2}=0. \end{aligned}\right. \end{equation} As already done previously in \cite{LT,LT2}, in the next section we shall rigorously validate the large friction limit using relative entropy estimates, this time using the enlarged reformulation in terms of the drift velocity, thus considering the singular limit from \eqref{ekb-scaled} to \eqref{ekb-scaledstrong}.
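We conclude this section by observing that, thanks to the Bohm identity $\dive S_1(\rho_0) = \rho_0\nabla\big(k(\rho_0)\Delta\rho_0 + \tfrac{1}{2}k'(\rho_0)|\nabla\rho_0|^2\big)$ and to the relation $\nabla p(\rho_0) = \rho_0\nabla h'(\rho_0)$, the gradient flow \eqref{gf} can be rewritten as \[ \partial_t\rho_0 = \dive\Big(\rho_0\,\nabla\big(h'(\rho_0) - k(\rho_0)\Delta\rho_0 - \tfrac{1}{2}k'(\rho_0)|\nabla\rho_0|^2\big)\Big), \] that is, the limit equation \eqref{eq:diff-limit} stated in the Introduction.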
\section{Relative entropy estimate for the Euler--Korteweg model} \label{sec:relenes} Let us start by recalling the entropy--entropy flux pair $(\eta, Q)$ associated to the original Euler--Korteweg system \eqref{ek} with $\xi = 1/\epsilon$ and after the related time scaling. Using the notation of \cite{GLT,LT2}, we obtain the potential energy (see \eqref{eq:potintro}) $$F(\rho, \nabla \rho)= h(\rho) + \frac{1}{2} k(\rho)|\nabla \rho|^2,$$ while the kinetic energy reads $$E_K = \frac{1}{2} \rho |u|^2.$$ Moreover, the couple $(\eta, Q)$ is defined in the following way: \begin{align*} \eta(\rho,m, \nabla \rho) &= \frac{1}{2} \rho|u|^2+ \frac{1}{2} k(\rho)|\nabla \rho|^2 + h(\rho);\\ Q(\rho,m, \nabla \rho) &= \frac{1}{2}\rho u |u|^2 + \rho u \left( h'(\rho) + \frac{1}{2}k'(\rho)|\nabla \rho|^2 -\dive (k(\rho)\nabla \rho)\right) \\ &\ + k(\rho)\nabla \rho\dive (\rho u). \end{align*} Before giving the rigorous justification of the relative entropy calculation in the context of the weak solutions we are interested in, let us first briefly present the (formal) computation leading to the desired expression in the case when both solutions (of the relaxation and of the limiting equation) are regular. Let us emphasize once again that in the sequel we shall take advantage of the reformulation \eqref{ekb-scaled} in terms of the drift velocity, and the rewriting of the equilibrium equation in \eqref{ekb-scaledstrong}. If we introduce $m = \rho u$, then (smooth) solutions of \eqref{ek} in the diffusive regime satisfy \begin{align*} & \partial_t \eta(\rho,m,\nabla \rho) + \frac{1}{\epsilon} \dive \Bigg ( \frac{1}{2}m \frac{|m|^2}{\rho^2} + m \left( h'(\rho) + \frac{1}{2}k'(\rho)|\nabla \rho|^2 -\dive (k(\rho)\nabla \rho) \right ) + \\ &\ k(\rho)\nabla \rho\dive m \Bigg ) = -\frac{1}{\epsilon^2} \frac{|m|^2}{\rho} \leq 0, \end{align*} while (smooth) solutions of \eqref{ek-scaledstrong} satisfy the following energy dissipation identity: \begin{align}\label{etaq-strong} & \partial_t \eta(\bar{\rho}, \bar{m}, \nabla \bar \rho ) + \frac{1}{\epsilon} \dive Q(\bar{\rho}, \bar{m}, \nabla \bar \rho) = - \frac{1}{\epsilon^2} \frac{ |\bar{m}|^2}{\bar \rho} + \frac{\bar{m}}{\bar{\rho}} \cdot \bar{e}. \end{align} It is worth observing here that \eqref{etaq-strong} is a rewriting of the classical energy relation valid for the solution $\bar\rho$ to the equilibrium gradient flow equation \eqref{gf}. At this point, the main difference with respect to the arguments in \cite{GLT,LT,LT2} lies in the fact that we use the notation of \cite{Bresh}: we introduce a fictitious velocity $v = \sqrt{\frac{k(\rho)}{\rho}} \nabla \rho$ and correspondingly its transport equation along the velocity $u$ (see \eqref{ekb-scaled}$_3$). This leads us to define a ``new'' entropy--entropy flux pair $(\eta,Q)$ related to the ``new'' potential energy $$F(\rho,J) = h(\rho) + \frac{1}{2} \frac{|J|^2}{\rho},$$ where $J=\rho v$.
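Observe that, since $v = \sqrt{k(\rho)/\rho}\,\nabla\rho$, one has \[ \frac{1}{2}\,\frac{|J|^2}{\rho} = \frac{1}{2}\,\rho|v|^2 = \frac{1}{2}\,k(\rho)|\nabla\rho|^2, \] so that the ``new'' potential energy $F(\rho,J)$ coincides with the potential energy $F(\rho,\nabla\rho)$ recalled at the beginning of this section.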
Hence, the entropy rewrites as follows: $$\eta(\rho, m , J) = \frac{1}{2} \frac{|m|^2}{\rho} + h(\rho) + \frac{1}{2} \frac{|J|^2}{\rho},$$ while its flux $Q$ is given by: $$Q(\rho, m, J) = \frac{1}{2} m \frac{|m|^2}{\rho^2} + mh'(\rho) + \frac{1}{2} m \frac{|J|^2}{\rho^2}.$$ We get: \begin{align}\label{single eta} \partial_t \eta(\rho,m,J) + \frac{1}{\epsilon} \dive Q(\rho,m,J) = \frac{1}{\epsilon} \frac{m}{\rho} \cdot \dive S_1 - \frac{1}{\epsilon} \frac{J}{\rho} \cdot \dive S_2 - \frac{1}{\epsilon^2} \frac{|m|^2}{\rho}, \end{align} while for the regular solution of the parabolic equation we get: \begin{align}\label{single etabar} \partial_t {\eta}(\bar \rho,\bar m, \bar J) + \frac{1}{\epsilon} \dive {Q}(\bar \rho,\bar m, \bar J)& = \frac{1}{\epsilon} \frac{\bar{m}}{\bar{\rho}} \cdot \dive \bar{S_1} - \frac{1}{\epsilon} \frac{\bar{J}}{\bar{\rho}} \cdot \dive \bar{S_2} - \frac{1}{\epsilon^2} \frac{|\bar{m}|^2}{\bar{\rho}}+ \bar{e} \cdot \frac{\bar{m}}{\bar{\rho}}. \end{align} Before proving the relative entropy relation rigorously in the context of weak solutions, we sketch here the derivation of \eqref{single eta} for the system $\eqref{ekb-scaled}$ and state the final result. To this end, a direct computation shows \begin{align*} \partial_t \left(\frac{1}{2} \frac{|m|^2}{\rho} \right) + \frac{1}{\epsilon} \dive \left( \frac{1}{2} m \frac{|m|^2}{\rho^2} \right) = - \frac{1}{\epsilon} u \cdot \nabla p(\rho) + \frac{1}{\epsilon} u \cdot \dive S_1 - \frac{1}{\epsilon^2} \rho |u|^2, \end{align*} and \begin{align*} \partial_t F(\rho, J) & = \partial_t \left( h(\rho) + \frac{1}{2} \frac{|J|^2}{\rho} \right)= - \frac{1}{\epsilon} \dive \left( m \left(h'(\rho) + \frac{1}{2}|v|^2 \right) \right) \\ &\ + \frac{1}{\epsilon} u \cdot \nabla p(\rho) - \frac{1}{\epsilon} v \cdot \dive S_2, \end{align*} leading to \eqref{single eta}. In this framework, the relative entropy is defined as: \begin{align*} \eta(\rho,m,J|\bar{\rho},\bar{m},\bar{J}) &= \eta(\rho,m,J) - \eta(\bar{\rho},\bar{m},\bar{J}) - \eta_{\rho}(\bar{\rho},\bar{m},\bar{J})(\rho- \bar{\rho}) \\ &\ - \eta_{m}(\bar{\rho},\bar{m},\bar{J})\cdot(m-\bar{m}) - \eta_{J}(\bar{\rho},\bar{m},\bar{J})\cdot(J-\bar{J}).
\end{align*} When both solutions are regular, it satisfies the following relation: \begin{align*} & \partial_t \eta (\rho,m,J| \bar{\rho}, \bar{m}, \bar{J}) + \frac{1}{\epsilon} \dive_x Q(\rho,m,J|\bar{\rho}, \bar{m}, \bar{J} ) = \\ & - \frac{1}{\epsilon} \rho \nabla \bar{u} : (u-\bar{u}) \otimes (u-\bar{u}) - \frac{1}{\epsilon^2} \rho |u-\bar{u}|^2 - \frac{\rho}{\bar{\rho}} \bar{e}\cdot(u-\bar{u}) - \frac{1}{\epsilon}p(\rho| \bar{\rho}) \dive \bar{u} \\ & - \frac{1}{\epsilon} \rho \; \nabla \bar{u} : (v-\bar{v}) \otimes (v-\bar{v}) - \frac{1}{\epsilon}\rho( \mu''(\rho) \nabla \rho - \mu''(\bar{\rho}) \nabla \bar{\rho}) \cdot ((v - \bar{v}) \dive \bar{u} \\ & \ - (u - \bar{u}) \dive \bar{v}) \\ & - \frac{1}{\epsilon} \rho(\mu'(\rho) - \mu'(\bar{\rho}))((v-\bar{v}) \cdot \nabla (\dive\bar{u}) - (u - \bar{u}) \cdot \nabla (\dive \bar{v})), \end{align*} where the relative flux is given by \begin{align*} Q(\rho,u,v | \bar{\rho}, \bar{u}, \bar{v}) = & \rho u \frac{1}{2}|u-\bar{u}|^2 + \rho u (h'(\rho) - h'(\bar{\rho}) )+ \frac{1}{2} \rho u |v-\bar{v}|^2 \\ & - \mu(\rho)\nabla v (u-\bar{u}) - \frac{1}{2} \lambda(\rho)\dive v (u-\bar{u}) \\ & - \mu(\rho) \nabla u (\bar{v}-v) - \frac{1}{2} \lambda(\rho)\dive u (\bar{v}-v) \\ & - \mu(\bar{\rho}) \frac{\rho}{\bar{\rho}} \nabla \bar{v} (\bar{u}-u) + \mu(\bar{\rho}) \frac{\rho}{\bar{\rho}}\nabla \bar{u} (\bar{v}-v) \\ & - \rho\left( \frac{\mu(\rho)}{\rho} - \frac{\mu(\bar{\rho})}{\bar{\rho}} \right) ( \nabla \bar{u} (v-\bar{v}) - \nabla \bar{v}(u-\bar{u})) \\ & - \frac{1}{2}\left(\lambda(\rho) - \frac{\rho}{\bar{\rho}}\lambda(\bar{\rho}) \right)((v- \bar{v})\dive \bar{u} - (u-\bar{u})\dive \bar{v}) \end{align*} and the relative entropy can also be rewritten as $$ \eta( \rho, m,J| \bar{\rho}, \bar{m}, \bar{J}) = \frac{1}{2} \rho |u - \bar{u}|^2 + \frac{1}{2} \rho |v-\bar{v}|^2 + h(\rho|\bar{\rho}).$$ Now, to generalize this relation to weak solutions, let us first state the precise definition of the latter, based on the one introduced in \cite{LT2}. We recall that we shall consider here $\gamma$--law pressures $p(\rho) = \rho^\gamma$, while the capillarity coefficient $k(\rho)$ is given by $k(\rho) = \frac{(s+3)^2}{4} \rho^s$, for which we obtain $\mu(\rho) = \rho^{\frac{s+3}{2}}$, with the conditions $\gamma > 1$, $s+2 \leq \gamma$ and $s \geq -1$.
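With this choice of $k(\rho)$ one checks directly that $\mu'(\rho) = \frac{s+3}{2}\,\rho^{\frac{s+1}{2}} = \sqrt{\rho\, k(\rho)}$, and moreover \[ \lambda(\rho) = 2\big(\mu'(\rho)\rho - \mu(\rho)\big) = (s+1)\,\rho^{\frac{s+3}{2}} = (s+1)\,\mu(\rho), \qquad \mu''(\rho)\nabla\rho = \frac{s+1}{2}\,v, \] the last identity being the one used in the estimates of Section \ref{sec:stabconv}.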
\begin{definition}\label{ws} $(\rho, m, J)$ with $\rho \in C([0, \infty);L^1(\mathbb{T}^n))$, $(m,J) \in C([0, \infty);(L^1(\mathbb{T}^n))^{2n})$, $\rho \geq 0$, is a weak (periodic) solution of $\eqref{ekb-scaled}$ if $$ \sqrt{\rho}u, \sqrt{\rho}v \in L^{\infty}((0,T);L^2(\mathbb{T}^n)^n),\ \rho \in C([0, \infty);L^\gamma(\mathbb{T}^n)),$$ and $(\rho, m,J)$ satisfy for all $\psi \in C^1_c([0, \infty); C^1(\mathbb{T}^n))$ and for all $\phi, \varphi \in C^1_c([0, \infty); C^1(\mathbb{T}^n)^n)$: \begin{align*} & - \iint_{(0,+\infty)\times\mathbb{T}^n}\Bigg (\rho \psi_t + \frac{1}{\epsilon} m \cdot \nabla_x \psi \Bigg )dxdt = \int_{\mathbb{T}^n} \rho(x,0)\psi(x,0)dx;\\ & - \iint_{(0,+\infty)\times\mathbb{T}^n} \Bigg[m \cdot \phi_t + \frac{1}{\epsilon}\left(\frac{m \otimes m}{\rho} : \nabla_x \phi \right) + \frac{1}{\epsilon} p(\rho) \dive \phi \\ & + \frac{1}{\epsilon}\left( \mu(\rho) v \cdot \nabla \dive (\phi) + \nabla \mu(\rho) \cdot (\nabla \phi v ) + \frac{1}{2} \nabla \lambda(\rho) \cdot v \dive \phi + \frac{1}{2} \lambda(\rho) v \cdot \nabla \dive \phi \right) \Bigg]dxdt \\ & \ = - \frac{1}{\epsilon^2} \iint_{(0,+\infty)\times\mathbb{T}^n}m \cdot \phi dxdt + \int_{\mathbb{T}^n} m(x,0) \cdot\phi(x,0)dx, \end{align*} where we have used the identity $$\displaystyle{S= - p(\rho) \mathbb{I} + S_1= - p(\rho)\mathbb{I} + \mu(\rho)\nabla v + \frac{1}{2}\lambda(\rho)\dive v \mathbb{I}};$$ \begin{align*} & - \iint_{(0,+\infty)\times\mathbb{T}^n} \Bigg[J \cdot\varphi_t + \frac{1}{\epsilon}\left(\frac{J \otimes m}{\rho} : \nabla_x \varphi \right) - \frac{1}{\epsilon}\Bigg( \mu(\rho) u \cdot (\nabla \dive \varphi ) + \nabla \mu(\rho) \cdot (\nabla \varphi u ) \\ &\ + \frac{1}{2} \nabla \lambda(\rho) \cdot u \dive \varphi + \frac{1}{2} \lambda(\rho) u \cdot \nabla \dive \varphi \Bigg) \Bigg]dxdt = \int_{\mathbb{T}^n} J(x,0) \cdot \varphi(x,0)dx, \end{align*} where we have used the identity $$\displaystyle{S_2= \mu(\rho)^t\nabla u + \frac{1}{2}\lambda(\rho)\dive u \mathbb{I}}.$$ If in addition $ \eta (\rho,m,J) \in C([0, \infty); L^1(\mathbb{T}^n))$ and $(\rho,m,J)$ satisfy \begin{align}\label{diss} & \iint_{(0,+\infty)\times\mathbb{T}^n} \left( \eta(\rho,m,J) \right) \dot{\theta}(t) dxdt \leq \int_{\mathbb{T}^n} \left( \eta(\rho,m,J)\right)|_{t=0} \theta(0)dx \nonumber\\ &\ - \frac{1}{\epsilon^2} \iint_{(0,+\infty)\times\mathbb{T}^n} \frac{|m|^2}{\rho} \theta(t) dxdt \end{align} for any non-negative $\theta \in W^{1,\infty}([0, \infty))$ compactly supported in $[0,\infty)$, then $(\rho,m,J)$ is called a \emph{dissipative} weak solution. If $\eta(\rho,m,J) \in C([0,\infty);L^1(\mathbb{T}^n))$ and $(\rho,m,J)$ satisfy $\eqref{diss}$ as an equality, then $(\rho,m,J)$ is called a \emph{conservative} weak solution. We say that a dissipative (or conservative) weak (periodic) solution $(\rho,m,J)$ of $\eqref{ekb-scaled}$ with $\rho \geq 0$ has finite total mass and energy if $$ \sup_{t \in (0,T)} \int_{\mathbb{T}^n} \rho dx \leq M <+ \infty,$$ and $$ \sup_{t \in (0,T)} \int_{\mathbb{T}^n} \eta(\rho,m,J) dx \leq E_o <+ \infty.$$ \end{definition} \begin{theorem}\label{relativentropy} Let $(\rho,m,J)$ be a dissipative (or conservative) weak solution of $\eqref{ekb-scaled}$ with finite total mass and energy according to Definition \ref{ws}, and let $\bar{\rho}$ be a smooth solution of $\eqref{gf}$.
Then \begin{align}\label{eq:relen} & \int_{\mathbb{T}^n} \eta(\rho,m,J| \bar{\rho}, \bar{m}, \bar{J})(t)dx \leq \int_{\mathbb{T}^n} \eta(\rho,m,J| \bar{\rho}, \bar{m}, \bar{J})(0)dx \nonumber\\ & - \frac{1}{\epsilon^2} \iint_{(0,t) \times \mathbb{T}^n} \rho |u-\bar{u}|^2 dxd\tau - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \rho \nabla \bar{u}: (u-\bar{u}) \otimes (u-\bar{u})dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n}p(\rho|\bar{\rho}) \dive \bar{u} dxd\tau - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \rho \; \nabla \bar{u}: (v-\bar{v}) \otimes (v-\bar{v}) dxd\tau \nonumber\\ & - \iint_{(0,t) \times \mathbb{T}^n} e(\bar{\rho},\bar{m}) \cdot \frac{\rho}{\bar{\rho}} (u-\bar{u})dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \rho[(\mu''(\rho)\nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot((v-\bar{v})\dive \bar{u} - (u-\bar{u})\dive \bar{v})]dxd\tau \nonumber\\ & - \frac{1}{\epsilon}\iint_{(0,t) \times \mathbb{T}^n} \rho (\mu'(\rho)- \mu'(\bar{\rho}))[(v-\bar{v})\cdot\nabla \dive \bar{u}-(u- \bar{u})\cdot\nabla \dive \bar{v}]dxd\tau, \end{align} where \begin{equation}\label{eq:defbarutheo} \bar{m} = \bar{\rho}\bar{u}= \epsilon \left(- \nabla p(\bar{\rho})+ \dive S_1(\bar{\rho})\right); \ \bar{J}= \bar{\rho}\bar{v} = \nabla \mu(\bar{\rho}). \end{equation} \end{theorem} \begin{proof} Let $(\rho,m,J)$ be a dissipative (or conservative) weak solution of $\eqref{ekb-scaled}$ according to Definition \ref{ws} and let $\bar \rho$ be a strong solution of $\eqref{gf}$, so that, using \eqref{eq:defbarutheo}, $(\bar{\rho}, \bar{m},\bar{J})$ satisfies $\eqref{ekb-scaledstrong}$. We consider the following function $\theta(\tau)$ in the energy (in)equality \eqref{diss} of Definition \ref{ws}: \begin{equation*} \theta(\tau)= \begin{cases} 1, & \hbox{ for } 0 \leq \tau < t, \\ \frac{t-\tau}{\mu} + 1, & \hbox{ for } t \leq \tau < t+\mu, \\ 0, & \hbox{ for } \tau \geq t +\mu. \end{cases} \end{equation*} Then, as $\mu \rightarrow 0$, we readily obtain: \begin{align*} \int_{\mathbb{T}^n} (\eta(\rho,m,J))\big|_{\tau =0}^t dx \leq - \frac{1}{\epsilon^2} \iint_{(0,t)\times\mathbb{T}^n} \frac{|m|^2}{\rho} dxd\tau.
\end{align*} Moreover, by a direct integration in $(0,t) \times \mathbb{T}^n$ of $\eqref{single etabar}$ we get: \begin{equation} \begin{split} \int_{\mathbb{T}^n} \eta(\bar{\rho},\bar{m}, \bar{J})\big|_{\tau = 0}^t dx = & - \frac{1}{\epsilon^2} \iint_{(0,t)\times\mathbb{T}^n} \frac{|\bar{m}|^2}{\bar{\rho}}dxd\tau + \iint_{(0,t)\times\mathbb{T}^n} \frac{\bar{m}}{\bar{\rho}} \cdot \bar{e}\, dxd\tau \end{split} \end{equation} because \begin{align}\label{rhobar} 0 &= - \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n}\big (\nabla \bar{u} : \bar{S_1} - \nabla \bar{v} : \bar{S_2} \big ) dxd\tau \nonumber\\ & = \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n}\big (\bar{u} \cdot \dive \bar{S_1} - \bar{v} \cdot \dive \bar{S_2} \big ) dxd\tau \nonumber\\ &= \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n} \big (\nabla \mu (\bar \rho) \cdot (\nabla \bar v \bar u - \nabla \bar u \bar v) + \mu(\bar \rho) (\bar u \cdot \nabla \dive \bar v - \bar v \cdot \nabla \dive \bar u)\big)dxd\tau \nonumber\\ & \ + \frac{1}{2\epsilon} \iint_{(0,t)\times\mathbb{T}^n} \big( \nabla\lambda(\bar \rho)\cdot(\bar u \dive \bar v - \bar v \dive \bar u) + \lambda(\bar \rho)(\bar u \cdot \nabla \dive \bar v - \bar v \cdot \nabla \dive \bar u)\big) dxd\tau, \end{align} where we integrated by parts, using the spatial periodicity, and used the definitions of $\bar{S_1}$ and $\bar{S_2}$: \begin{align*} &\bar{S_1}= \mu(\bar\rho)\nabla \bar v + \frac{1}{2}\lambda(\bar\rho)\dive \bar v \mathbb{I}; \\ & \bar{S_2} = \mu(\bar\rho)^t\nabla \bar u + \frac{1}{2}\lambda(\bar\rho)\dive \bar u \mathbb{I}. \end{align*} Indeed we have: \begin{equation*} \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n}\nabla \bar{u} : \bar{S_1} dxd\tau = \frac{1}{\epsilon} \iint_{(0,t)\times\mathbb{T}^n}\left ( \mu(\bar{\rho}) \nabla \bar{u} : \nabla \bar{v} + \frac{1}{2} \lambda(\bar{\rho}) \dive \bar{v} \dive \bar{u} \right )dxd\tau, \end{equation*} and \begin{equation*} \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n}\nabla \bar{v} : \bar{S_2} dxd\tau = \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n} \left (\mu(\bar{\rho}) \nabla \bar{v} : {}^t\nabla \bar{u} + \frac{1}{2} \lambda(\bar{\rho}) \dive \bar{u} \dive \bar{v} \right )dxd\tau. \end{equation*} Therefore, since $$\nabla \bar{v} = \nabla\left ( \frac{\mu'(\bar\rho)}{\bar\rho}\nabla\bar\rho\right ) = \nabla^2 M(\bar\rho) $$ (where $M$ denotes a primitive of $\mu'(\rho)/\rho$) is symmetric, it holds: $$\nabla \bar{u}: \nabla \bar{v}- \nabla \bar{v}: {}^t\nabla \bar{u} = \nabla \bar{u}: \nabla \bar{v}- \nabla \bar{u}: {}^t\nabla \bar{v} = \nabla \bar{u}: \nabla \bar{v}- \nabla \bar{u}: \nabla \bar{v} = 0,$$ and the integral \begin{align*} & \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n}\big ( \nabla \bar{u} : \bar{S_1} - \nabla \bar{v} : \bar{S_2} \big )dxd\tau \\ & \ = \frac{1}{\epsilon}\iint_{(0,t)\times\mathbb{T}^n}\left ( \mu(\bar{\rho}) (\nabla \bar{u}: \nabla \bar{v}- \nabla \bar{v}: {}^t\nabla \bar{u}) + \frac{1}{2} \lambda(\bar{\rho}) (\dive \bar{v}\dive \bar{u} - \dive\bar{v}\dive\bar{u}) \right )dxd\tau \end{align*} vanishes.
Now we want to evaluate the linear part of the relative entropy for the difference $(\rho - \bar{\rho}, m - \bar{m}, J- \bar{J})$ choosing suitable test functions in the weak formulation (according to Definition \ref{ws}) of the equation satisfied by these differences, namely: \begin{equation}\label{psi} - \iint_{[0,\infty) \times \mathbb{T}^n} \left( \psi_t(\rho- \bar{\rho}) + \frac{1}{\epsilon} \psi_{x_i}(m_i-\bar{m_i}) \right)dxd\tau = \int_{\mathbb{T}^n} (\rho - \bar{\rho})\psi|_{t=0}dx, \end{equation} \begin{equation}\label{phi} \begin{split} & - \iint_{[0,\infty) \times \mathbb{T}^n} \phi_{t} \cdot (m-\bar{m}) + \frac{1}{\epsilon} \left( \frac{m_i m_j}{\rho} - \frac{\bar{m_i}\bar{m_j}}{\bar{\rho}} \right)\partial_{x_j}\phi_{i} + \frac{1}{\epsilon} [p(\rho)- p(\bar{\rho})] \partial_{x_i} \phi_{i} dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,\infty) \times \mathbb{T}^n} (\mu(\rho)v_i-\mu(\bar\rho)\bar{v_i}) \partial_{x_i} \partial_{x_j} \phi_{j} + (\partial_{x_i} \mu(\rho)v_j - \partial_{x_i}\mu(\bar\rho) \bar{v_j})\partial_{x_j} \phi_{i} dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,\infty) \times \mathbb{T}^n} \frac{1}{2}( \partial_{x_i}( \lambda(\rho)) v_i - \partial_{x_i}(\lambda(\bar \rho)) \bar{v_i}) \partial_{x_j}\phi_{j} + \frac{1}{2} (\lambda(\rho) v_i- \lambda(\bar\rho)\bar{v_i}) \partial_{x_i} \partial_{x_j}\phi_{j} dxd\tau \\ & \ = - \frac{1}{\epsilon^2} \iint_{[0, \infty) \times \mathbb{T}^n} (m - \bar{m}) \cdot \phi dx d\tau - \iint_{[0, \infty) \times T^n} \bar{e} \cdot \phi dx d\tau + \int_{\mathbb{T}^n} (m - \bar m)\cdot \phi|_{t=0}dx \end{split} \end{equation} and \begin{equation}\label{phi2} \begin{split} & - \iint_{[0,\infty) \times \mathbb{T}^n} \left( \varphi_t \cdot (J-\bar{J}) \right) + \frac{1}{\epsilon} \left( \frac{J_{i} m_{j}}{\rho} - \frac{\bar{J_{i}}\bar{m_{j}}}{\bar{\rho}} \right) \partial_{x_j}\varphi_i dxd\tau \\ & + \frac{1}{\epsilon} \iint_{[0,\infty) \times \mathbb{T}^n} (\mu(\rho)u_i-\mu(\bar\rho)\bar{u_i}) \partial_{x_i} \partial_{x_j} \varphi_j + (\partial_{x_i} \mu(\rho)u_j-\partial_{x_i}\mu(\bar\rho)\bar{u_j})\partial_{x_j} \varphi_i dxd\tau \\ & + \frac{1}{\epsilon} \iint_{[0,\infty) \times \mathbb{T}^n} \frac{1}{2}(\partial_{x_i} \lambda(\rho)u_i-\partial_{x_i} \lambda(\bar\rho)\bar{u_i}) \partial_{x_j} \varphi_j + \frac{1}{2}(\lambda(\rho)u_i- \lambda(\bar\rho)\bar{u_i}) \partial_{x_i} \partial_{x_j} \varphi_j dxd\tau \\ &\ = \int_{\mathbb{T}^n} (J - \bar J)\cdot \varphi|_{t=0}dx, \end{split} \end{equation} where $\psi,\phi, \varphi$ are Lipschitz test functions, $\phi, \varphi$ vector--valued, compactly supported in $[0, + \infty)$ in time and periodic in space. 
In the above relations we choose in particular \begin{align*} &\psi = \theta(\tau) \left( h'(\bar{\rho})- \frac{1}{2} \frac{|\bar{m}|^2}{\bar{\rho}^2} - \frac{1}{2}\frac{|\bar{J}|^2}{ \bar{\rho}^2} \right) \text{ \; and} \\ & \Phi = (\phi, \varphi)= \theta(\tau) \left( \frac{\bar{m}}{\bar{\rho}}, \frac{\bar{J}}{\bar{\rho}} \right), \text{where $\theta(\tau)$ is defined above.} \end{align*} Then, letting $\mu \rightarrow 0$ in $\eqref{psi}$ we obtain \begin{align*} & \int_{\mathbb{T}^n}\left( h'(\bar{\rho})- \frac{1}{2} \frac{|\bar{m}|^2}{\bar{\rho}^2} - \frac{1}{2}\frac{|\bar{J}|^2}{ \bar{\rho}^2} \right)(\rho - \bar{\rho} ) \Big|_{\tau = 0}^t dx \\ & - \iint_{[0,t] \times \mathbb{T}^n} \partial_{\tau} \left(h'(\bar{\rho}) - \frac{1}{2} \frac{|\bar{m}|^2}{\bar{\rho}^2} - \frac{1}{2}\frac{|\bar{J}|^2}{ \bar{\rho}^2} \right) (\rho- \bar{\rho}) dx d\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \nabla_x \left(h'(\bar{\rho}) - \frac{1}{2}\frac{|\bar{m}|^2}{ \bar{\rho}^2} - \frac{1}{2}\frac{|\bar{J}|^2}{ \bar{\rho}^2} \right) \cdot (m- \bar{m})dxd\tau = 0. \end{align*} From $\eqref{phi}$: \begin{align*} & \int_{\mathbb{T}^n} \frac{\bar{m}}{\bar{\rho}} \cdot (m - \bar{m})|_{\tau=0}^tdx - \iint_{[0,t] \times \mathbb{T}^n} \partial_{\tau} \left( \frac{\bar{m}}{\bar{\rho}} \right) \cdot (m-\bar{m}) dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left [ \left( \frac{m_{i} m_{j}}{ \rho} - \frac{ \bar{m_{i}} \bar{m_{j}} }{ \bar{\rho} } \right) \partial_{x_j}\left( \frac{\bar{m_{i}}}{\bar{\rho}} \right) + (p(\rho)- p(\bar{\rho})) \dive \left( \frac{\bar{m}}{\bar{\rho}} \right) \right] dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n}\left [ \mu(\rho)(v-\bar{v}) \cdot \nabla\dive \left( \frac{\bar{m}}{\bar{\rho}} \right) + \nabla \mu(\rho)\cdot \nabla \left( \frac{\bar{m}}{\bar{\rho}} \right)(v-\bar{v}) \right ]dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left [(\mu(\rho) - \mu(\bar\rho)) \bar v \cdot \nabla\dive \left( \frac{\bar{m}}{\bar{\rho}} \right) + \nabla( \mu(\rho) -\mu(\bar\rho)) \nabla \left( \frac{\bar{m}}{\bar{\rho}} \right) \bar v \right ] dxd\tau \\ & - \frac{1}{2\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left( \nabla \lambda(\rho)\cdot (v-\bar{v}) \dive \left( \frac{\bar{m}}{\bar{\rho}} \right) + \lambda(\rho)(v-\bar{v}) \cdot \nabla \dive \left( \frac{\bar{m}}{\bar{\rho}} \right) \right )dxd\tau \\ & -\frac{1}{ 2 \epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left [ \nabla(\lambda(\rho) - \lambda(\bar \rho)) \cdot \bar v \dive \left( \frac{\bar{m}}{\bar{\rho}} \right) + (\lambda(\rho) - \lambda(\bar \rho)) \bar v \cdot \nabla \dive \left( \frac{\bar{m}}{\bar{\rho}} \right) \right ]dxd\tau \\ &=- \frac{1}{\epsilon^2} \iint_{[0,t] \times \mathbb{T}^n} \frac{\bar{m} }{ \bar{\rho}} \cdot (m -\bar{m}) dxd\tau - \iint_{[0,t] \times \mathbb{T}^n} \frac{\bar{m}}{\bar{\rho}} \cdot \bar{e}\, dxd\tau.
\end{align*} Analogously, from $\eqref{phi2}$: \begin{align*} & \int_{\mathbb{T}^n} \frac{\bar{J}}{\bar{\rho}} \cdot (J - \bar{J})|_{\tau=0}^tdx - \iint_{[0,t] \times \mathbb{T}^n} \partial_{\tau} \left( \frac{ \bar{J} }{ \bar{\rho}} \right) \cdot (J - \bar{J}) dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left( \frac{m_{j} J_{i} }{\rho} - \frac{ \bar{m_{j}} \bar{ J_{i} } }{\bar{\rho}} \right)\partial_{x_j} \left( \frac{\bar{J_{i}} }{\bar{\rho}} \right) dxd\tau \\ & + \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n}\left [ \mu(\rho)\left (\frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}}\right ) \cdot \nabla (\dive \bar{v}) + \nabla \mu(\rho) \cdot \nabla \bar{v} \left (\frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}}\right ) \right ]dxd\tau \\ & + \frac{1}{\epsilon} \iint_{[0, t) \times \mathbb{T}^n} \left [(\mu(\rho) - \mu(\bar\rho) ) \frac{\bar{m}}{\bar{\rho}}\cdot \nabla \dive \bar v + \nabla( \mu(\rho) -\mu(\bar\rho)) \nabla \bar v \frac{\bar{m}}{\bar{\rho}} \right ] dxd\tau \\ & + \frac{1}{2\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left [ \nabla \lambda(\rho) \cdot \left (\frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}}\right )\dive \bar{v} + \lambda(\rho) \left (\frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}}\right ) \cdot \nabla \dive \bar{v} \right ]dxd\tau \\ & + \frac{1}{2 \epsilon} \iint_{[0,t] \times \mathbb{T}^n} \nabla(\lambda(\rho) -\lambda(\bar\rho)) \cdot \frac{\bar{m}}{\bar{\rho}} \dive \bar v + (\lambda(\rho) -\lambda(\bar{\rho})) \frac{\bar{m}}{\bar{\rho}} \cdot \nabla \dive \bar v dxd\tau = 0. \end{align*} Combining the above relations we get: \begin{align}\label{corradded} & \int_{\mathbb{T}^n} \left[ \eta(\rho,m,J| \bar{\rho}, \bar{m}, \bar{J}) \right]|_{\tau=0}^t dx \leq - \frac{1}{\epsilon^2} \iint_{[0,t]\times \mathbb{T}^n} [ \rho |u|^2 - \bar{\rho}|\bar{u}|^2 - \bar{u}(\rho u - \bar{\rho} \bar{u})]dxd\tau \nonumber\\ & - \iint_{[0,t]\times \mathbb{T}^n} \left[ \partial_{\tau} \left(h'(\bar{\rho}) - \frac{1}{2}|\bar{u}|^2 - \frac{1}{2}|\bar{v}|^2 \right)(\rho - \bar{\rho}) + \partial_{\tau}(\bar{u})(\rho u - \bar{\rho} \bar{u}) + \partial_{\tau}(\bar{v})(\rho v - \bar{\rho} \bar{v}) \right]dxd\tau \nonumber \\ & - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \nabla \left( h'(\bar{\rho}) - \frac{1}{2}|\bar{u}|^2 - \frac{1}{2}|\bar{v}|^2 \right)(\rho u - \bar{\rho} \bar{u}) dxd\tau - \frac{1}{\epsilon} \iint_{[0,\infty) \times \mathbb{T}^n} [p(\rho)-p(\bar{\rho})] \dive \bar{u} dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} (\rho u_i u_j - \bar{\rho} \bar{u_i} \bar{u_j})\partial_{x_j}(\bar{u_i}) + (\rho v_i u_j - \bar{\rho} \bar{v_i} \bar{u_j})\partial_{x_j}(\bar{v_i}) dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \mu(\rho)[(v-\bar{v}) \nabla \dive \bar{u} - (u-\bar{u}) \nabla \dive \bar{v}] + \nabla \mu(\rho)[ \nabla \bar{u}(v-\bar{v}) - \nabla \bar{v}(u-\bar{u})]dxd\tau \nonumber\\ & - \frac{1}{2\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \nabla \lambda(\rho)[ (v-\bar{v}) \dive \bar{u} - (u-\bar{u}) \dive \bar{v}] + \lambda(\rho)[(v-\bar{v}) \nabla \dive \bar{u} - (u-\bar{u}) \nabla \dive \bar{v}]dxd\tau \nonumber\\ & + \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \big( (\mu(\rho) - \mu(\bar\rho))(\bar u \cdot\nabla \dive \bar v - \bar v \cdot\nabla \dive \bar u ) + \nabla (\mu(\rho)-\mu(\bar\rho))\cdot(\nabla \bar v \bar u - \nabla \bar u \bar v )\big )dxd\tau \nonumber\\ & +\frac{1}{2 \epsilon}\iint_{[0,t] \times \mathbb{T}^n} \big((\lambda(\rho) -\lambda(\bar\rho)) \left(\bar u 
\cdot \nabla \dive \bar v - \bar v \cdot \nabla \dive \bar u \right) + \nabla(\lambda(\rho)-\lambda(\bar\rho))\cdot (\bar u \dive \bar v - \bar v \dive \bar u )\big)dxd\tau. \end{align} First of all, let us observe that the last two lines of the relation above are indeed zero, as one can easily prove by repeating the arguments leading to \eqref{rhobar}, with the differences $\mu(\rho)-\mu(\bar\rho)$ and $\lambda(\rho)-\lambda(\bar\rho)$ replacing $\mu(\bar\rho)$ and $\lambda(\bar\rho)$ inside the definition of the tensors $S_1$ and $S_2$. Moreover, using the relation $h''(\bar{\rho}) = p'(\bar{\rho})/ \bar{\rho}$ and the continuity equation for $\bar\rho$, we get \begin{align*} & - \iint_{[0,t]\times \mathbb{T}^n} \left ( \partial_{\tau}h'(\bar{\rho})(\rho- \bar{\rho}) + \frac{1}{\epsilon} \nabla h'(\bar{\rho}) (\rho u - \bar{\rho}\bar{u}) \right) dxd\tau \\ & = \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \left ( p'(\bar{\rho})(\rho- \bar{\rho})\dive \bar{u} + \frac{\rho}{\bar{\rho}} \nabla p(\bar{\rho})(\bar{u}- u) \right )dxd\tau. \end{align*} We multiply the transport equations of $\bar{u}$ and $\bar{v}$, namely \begin{align*} & \partial_\tau \bar{u} + \frac{1}{\epsilon} ( \bar{u} \cdot \nabla \bar{u}) + \frac{1}{\epsilon} \frac{\nabla p(\bar{\rho})}{\bar{\rho}} - \frac{1}{\epsilon}\frac{\dive \bar{S_1}}{\bar{\rho}} - \frac{\bar{e}}{\bar{\rho}}= - \frac{1}{\epsilon^2} \bar{u}, \\ & \partial_\tau \bar{v} + \frac{1}{\epsilon} (\bar{u} \cdot \nabla \bar{v}) + \frac{1}{\epsilon} \frac{\dive \bar{S_2}}{\bar{\rho}} = 0, \end{align*} by $\rho( \bar{u} - u)$ and $\rho( \bar{v} - v)$ respectively to conclude \begin{align*} &\partial_\tau \left( \frac{1}{2} |\bar{u}|^2 \right)(\rho - \bar{\rho}) + \frac{1}{\epsilon} \nabla \left( \frac{1}{2}|\bar{u}|^2 \right)\cdot (\rho u - \bar{\rho} \bar{u}) - \partial_\tau \bar{u}\cdot (\rho u - \bar{\rho} \bar{u}) - \frac{1}{\epsilon} \partial_{x_j} \bar{u_i}( \rho u_iu_j - \bar{\rho} \bar{u_i} \bar{u_j}) \\ & = - \frac{1}{\epsilon} \rho \nabla \bar{u}:[(u- \bar{u}) \otimes (u-\bar{u})] - \frac{1}{\epsilon} \frac{\nabla p(\bar{\rho})}{\bar{\rho}} \rho \cdot (\bar{u}-u) + \frac{1}{\epsilon} \frac{\rho}{\bar{\rho}} \dive \bar{S_1}\cdot(\bar{u}- u) + \bar{e} \frac{\rho}{\bar{\rho}}\cdot( \bar{u} - u) \\ & \ - \frac{1}{\epsilon^2} \rho \bar{u}\cdot(\bar{u}-u) \end{align*} and \begin{align*} & \partial_\tau \left( \frac{1}{2} |\bar{v}|^2 \right)(\rho - \bar{\rho}) + \frac{1}{\epsilon} \nabla \left( \frac{1}{2}|\bar{v}|^2 \right)\cdot(\rho u - \bar{\rho} \bar{u}) - \partial_\tau \bar{v}\cdot(\rho u - \bar{\rho} \bar{u}) - \frac{1}{\epsilon} \partial_{x_j} \bar{v_i}( \rho v_iu_j - \bar{\rho} \bar{v_i} \bar{u_j}) \\ & = - \frac{1}{\epsilon} \rho \nabla \bar{v} :[(v-\bar v) \otimes ( u - \bar u)] - \frac{1}{\epsilon} \frac{\rho}{\bar{\rho}} \dive \bar{S_2} \cdot (\bar{v}-v).
\end{align*} In view of the calculation above, \eqref{corradded} rewrites as follows: \begin{align}\label{corradded2} & \int_{\mathbb{T}^n} \left[ \eta(\rho,m,J| \bar{\rho}, \bar{m}, \bar{J}) \right]|_{\tau=0}^t dx \leq - \frac{1}{\epsilon^2} \iint_{[0,t] \times \mathbb{T}^n} \rho |u- \bar{u}| ^2 dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho \nabla \bar{u} :[(u-\bar{u}) \otimes (u-\bar{u})]dxd\tau - \frac{1}{\epsilon} \int \int_{[0,t] \times \mathbb{T}^n} \rho \nabla \bar{v} : [ (v-\bar v) \otimes ( u - \bar u)] dxd\tau \nonumber\\ & + \iint_{[0,t] \times \mathbb{T}^n} \bar{e}\cdot\frac{\rho}{\bar{\rho}}(\bar{u} - u)dxd\tau - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} p(\rho|\bar{\rho}) \dive \bar{u} dx d\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n}\big[ \mu(\rho)((v-\bar{v})\cdot \nabla \dive \bar{u} - (u-\bar{u})\cdot\nabla \dive \bar{v}) + \nabla \mu(\rho)\cdot(\nabla \bar{u}(v-\bar{v}) - \nabla \bar{v}(u-\bar{u}))\big]dxd\tau \nonumber\\ & - \frac{1}{2\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \big[ \nabla \lambda(\rho)\cdot((v-\bar{v}) \dive \bar{u} - (u-\bar{u}) \dive \bar{v}) + \lambda(\rho)((v-\bar{v})\cdot \nabla \dive \bar{u} - (u-\bar{u})\cdot \nabla \dive \bar{v})\big ]dxd\tau \nonumber\\ & + \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \frac{\rho}{\bar{\rho}} \Bigg [(\mu(\bar{\rho}) \dive \nabla \bar{v} + {}^t\nabla \mu(\bar{\rho}) {}^t\nabla \bar{v})\cdot(\bar{u} - u) + \frac{1}{2}( \nabla \lambda(\bar{\rho}) \dive \bar{v} + \lambda(\bar{\rho}) \nabla \dive \bar{v} )\cdot(\bar{u} - u)\Bigg ]dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \frac{\rho}{\bar{\rho}} \Bigg [ (\mu(\bar{\rho}) \dive {}^t\nabla \bar{u} + {}^t\nabla \mu(\bar{\rho}) \nabla \bar{u})\cdot(\bar{v}-v) + \frac{1}{2} (\nabla \lambda(\bar{\rho}) \dive \bar{u} + \lambda(\bar{\rho}) \nabla \dive \bar{u} )\cdot (\bar{v}-v) \Bigg ]dxd\tau. \end{align} We recall that $\dive {}^t\nabla \bar u = \nabla \dive \bar u$ and therefore $\dive \nabla \bar v = \nabla \dive \bar v$ being $\nabla \bar v$ symmetric. Hence, we can collect terms as follows: \begin{align*} I_1 := & -\frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left ( \mu(\rho)- \frac{\rho}{\bar{\rho}} \mu(\bar{\rho}) \right) ( \nabla \dive \bar{u} \cdot (v-\bar{v})- \nabla \dive\bar{v} \cdot (u-\bar{u}))dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left( \nabla \mu(\rho) - \frac{\rho}{\bar{\rho}} \nabla \mu(\bar{\rho}) \right)\cdot( \nabla \bar{u}(v-\bar{v}) - \nabla \bar{v} (u-\bar{u}))dxd\tau. \end{align*} In addition, recalling also the definition of $\displaystyle{v = \frac{\nabla \mu(\rho)}{\rho}}$, we conclude: \begin{equation*} \begin{split} I_1 = & -\frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho \left ( \frac{\mu(\rho)}{\rho}- \frac{\mu(\bar\rho)}{\bar{\rho}} \right) ( \nabla \dive \bar{u} \cdot (v-\bar{v})- \nabla \dive\bar{v} \cdot (u-\bar{u}))dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n}\rho ( v - \bar{v} )\cdot( \nabla \bar{u}(v-\bar{v}) - \nabla \bar{v} (u-\bar{u}))dxd\tau. 
\end{split} \end{equation*} Moreover, we define \begin{equation*} \begin{split} I_2 := &- \frac{1}{2\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \left(\lambda(\rho) - \frac{\rho}{\bar{\rho}} \lambda(\bar{\rho}) \right)((v-\bar{v}) \cdot \nabla \dive \bar{u} - (u-\bar{u})\cdot \nabla \dive \bar{v}) dxd\tau \\ & - \frac{1}{2\epsilon}\iint_{[0,t] \times \mathbb{T}^n} \left(\nabla \lambda(\rho) - \frac{\rho}{\bar{\rho}} \nabla \lambda(\bar{\rho}) \right)\cdot ((v-\bar{v})\dive \bar{u} - (u-\bar{u}) \dive \bar{v} )dxd\tau. \end{split} \end{equation*} Since $\displaystyle{\lambda(\rho)= 2(\rho\mu'(\rho)-\mu(\rho))}$, one has \begin{equation*} \begin{split} I_2 = & - \frac{1}{2\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho \left(\frac{\lambda(\rho)}{\rho} - \frac{\lambda(\bar{\rho})}{\bar{\rho}} \right)((v-\bar{v}) \cdot \nabla \dive \bar{u} - (u-\bar{u})\cdot \nabla \dive \bar{v}) dxd\tau \\ & - \frac{1}{\epsilon}\iint_{[0,t] \times \mathbb{T}^n} \rho (\mu''(\rho)\nabla\rho - \mu''(\bar \rho)\nabla\bar\rho)\cdot ((v-\bar{v})\dive \bar{u} - (u-\bar{u}) \dive \bar{v})dxd\tau. \end{split} \end{equation*} Therefore \begin{align}\label{eq:last} I_1 + I_2 = & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho((\mu''(\rho)\nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot((v-\bar{v})\dive \bar{u} + (\bar{u}-u)\dive \bar{v}))dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho (\mu'(\rho)- \mu'(\bar{\rho}))((v-\bar{v})\cdot\nabla \dive \bar{u}+(\bar{u}-u)\cdot\nabla \dive \bar{v})dxd\tau \nonumber\\ & - \frac{1}{\epsilon}\iint_{[0,t] \times \mathbb{T}^n} \left [\rho (v-\bar{v}) \cdot \nabla \bar{u} (v-\bar{v}) - \rho(v-\bar{v}) \nabla \bar{v} (u-\bar{u}) \right ] dxd\tau. \end{align} Finally, using \eqref{eq:last} in \eqref{corradded2} we obtain \eqref{eq:relen} and the proof is complete. \end{proof} \section{Stability result and convergence of the diffusive limit} \label{sec:stabconv} With the relative entropy estimate \eqref{eq:relen} of Theorem \ref{relativentropy} at hand, we are now able to control our diffusive relaxation limit in terms of the quantity \begin{equation}\label{eq:distfi} \Psi(t) := \int_{\mathbb{T}^n} \left (h(\rho|\bar{\rho}) + \frac{1}{2} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}}\right|^2 + \frac{1}{2} \rho \left|\frac{J}{\rho} - \frac{\bar{J}}{\bar{\rho}}\right|^2 \right )dx. \end{equation} The proof of our convergence result will follow the blueprint of \cite{LT,LT2}, in particular generalizing the results of the latter to our more general case in terms of the capillarity coefficient, thanks to the enlarged reformulation of the system due to \cite{Bresh}. To this end, let us first remark that, since we are dealing here with $\gamma$--law gases, $\gamma>1$, we have \begin{equation*} h(\rho) = \frac{1}{\gamma-1}\rho^\gamma. \end{equation*} Therefore \begin{equation}\label{pcontrol} p(\rho|\bar{\rho})= (\gamma - 1) h(\rho|\bar{\rho}), \end{equation} and the error term in \eqref{eq:relen} involving the pressure will then be controlled in terms of the relative entropy, namely in terms of the ``distance'' $\Psi$ defined in \eqref{eq:distfi}. It is worth observing that the same kind of control can be obtained for general monotone pressure laws, with $h$ given as in \eqref{eq:defh} and satisfying appropriate conditions, and for positive densities; see \cite{LT,GLT,LT2} for details, as well as for discussions about the metric induced by \eqref{eq:distfi}.
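For the reader's convenience, we recall that for a smooth function $f$ the relative quantity is defined as $f(\rho|\bar\rho) := f(\rho) - f(\bar\rho) - f'(\bar\rho)(\rho-\bar\rho)$; for the $\gamma$--law one then computes directly \[ p(\rho|\bar\rho) = \rho^\gamma - \bar\rho^{\,\gamma} - \gamma\bar\rho^{\,\gamma-1}(\rho-\bar\rho) = (\gamma-1)\left[\frac{\rho^\gamma}{\gamma-1} - \frac{\bar\rho^{\,\gamma}}{\gamma-1} - \frac{\gamma}{\gamma-1}\bar\rho^{\,\gamma-1}(\rho-\bar\rho)\right] = (\gamma-1)\, h(\rho|\bar\rho), \] which is precisely \eqref{pcontrol}.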
Moreover, to control the last two terms of \eqref{eq:relen}, we take advantage of the results contained in \cite{Bresh}, and in particular of the following one, which we report below for the sake of completeness. \begin{lemma}\cite[Lemma 35]{Bresh}\label{lemma8} Let us assume $\mu(\rho)= \rho^{\frac{s+3}{2}}$ with $\gamma \geq s+2$ and $s \geq -1$. We have $$\rho |\mu'(\rho)-\mu'(\bar{\rho})|^2 \leq C(\bar{\rho})h(\rho|\bar{\rho}),$$ with $C(\bar{\rho})$ uniformly bounded for $\bar{\rho}$ belonging to compact sets in $\mathbb{R}^+ \times \mathbb{T}^n$. \end{lemma} We are now ready to state our main convergence theorem. \begin{theorem}\label{STABILITY} Let $T>0$ be fixed and let $(\rho,m, J)$ be as in Definition \ref{ws} and $\bar\rho$ be a smooth solution of $\eqref{gf}$ with $\bar{\rho} \geq \delta > 0$, and define $\bar m$ and $\bar J$ by \eqref{eq:defbarutheo}. Assume the pressure $p(\rho)$ is given by the $\gamma$--law $\rho^{\gamma}$, $\gamma > 1$, and assume $\mu(\rho)= \rho^{\frac{s+3}{2}}$ with $\gamma \geq s+2$ and $s \geq -1$. Then, for any $t \in [0,T]$, the stability estimate \begin{equation}\label{stabtheosec4} \Psi(t) \leq C(\Psi(0) + \epsilon^4), \end{equation} holds true, where $C$ is a positive constant depending on $T$, on $M$ (the $L^1$ bound for $\rho$, assumed to be uniform in $\epsilon$), and on $\bar\rho$ and its derivatives. Moreover, if $\Psi(0) \rightarrow 0$ as $\epsilon \rightarrow 0$, then as $\epsilon \rightarrow 0$ \begin{equation*} \sup_{t \in [0,T]} \Psi(t) \rightarrow 0. \end{equation*} \end{theorem} \begin{proof} In view of the definition of $\Psi$ in \eqref{eq:distfi}, from the relative entropy estimate given by Theorem \ref{relativentropy} we get: \begin{align}\label{phie} & \Psi(t) + \frac{1}{\epsilon^2} \iint_{[0,t]\times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau \leq \Psi(0) + |Q| + \iint_{[0,t]\times \mathbb{T}^n} |E| \,dxd\tau, \end{align} where the terms $Q$ and $E$ are given by \begin{equation*} E : = \bar{e} \cdot \frac{\rho}{\bar{\rho}} \left( \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right), \ Q = Q_1 + Q_2, \end{equation*} with \begin{align*} Q_1 : = & - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \rho \nabla \bar{u} : [(u-\bar{u}) \otimes (u-\bar{u})]dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho \nabla \bar{u} : [(v-\bar{v}) \otimes (v-\bar{v})]dxd\tau - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} p(\rho|\bar{\rho}) \dive \bar{u} dxd\tau \end{align*} \begin{align*} Q_2 : = & - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \rho[(\mu''(\rho)\nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot((v-\bar{v})\dive \bar{u} - (u-\bar{u})\dive \bar{v})]dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \rho (\mu'(\rho)- \mu'(\bar{\rho}))((v-\bar{v}) \cdot \nabla \dive \bar{u} - (u-\bar{u})\cdot \nabla \dive \bar{v})dxd\tau. \end{align*} We use the Young inequality and the previous results to estimate $E$ and $Q_1$ (as in \cite{LT,LT2}) and $Q_2$ (following \cite{Bresh}) in terms of the relative entropy itself.
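Several of the estimates below rely on the elementary weighted Young inequality: for any $\delta>0$, \[ \frac{1}{\epsilon}\,|X\cdot Y| \;\leq\; \delta\,|X|^2 + \frac{1}{4\delta\,\epsilon^2}\,|Y|^2, \] which is applied with $Y=\sqrt{\rho}\,(u-\bar u)$, so that the singular factor $1/\epsilon^2$ is absorbed by the friction term appearing on the left-hand side of \eqref{phie}.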
We start from the error term $E$: \begin{align*} \iint_{[0,t] \times \mathbb{T}^n} \left|E\right|dxd\tau & \leq \frac{\epsilon^2}{2} \iint_{[0,t] \times \mathbb{T}^n} \left| \frac{\bar{e}}{\bar{\rho}} \right|^2 \rho dxd\tau + \frac{1}{2\epsilon^2}\iint_{[0,t] \times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau \\ & \leq CT\epsilon^4 + \frac{1}{4\epsilon^2}\iint_{[0,t] \times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau, \end{align*} using the bounds for $\bar\rho$, the $L^1$ bound for $\rho$, and in view of the fact that, as shown in \eqref{eq:error}, the error term $\bar e$ is $O(\epsilon)$. For the term $Q_1$ we use again the fact that ${\nabla\bar{u}}=O(\epsilon)$ to conclude \begin{align*} \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho \nabla \bar{u} : [(u-\bar{u}) \otimes (u-\bar{u})]dxd\tau \leq C_1 \iint_{[0,t] \times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau, \end{align*} \begin{align*} \frac{1}{\epsilon}\iint_{[0,t] \times \mathbb{T}^n} \rho \nabla \bar{u} : [(v-\bar{v}) \otimes (v-\bar{v})]dxd\tau & \leq C_2 \iint_{[0,t] \times \mathbb{T}^n} \rho \left|\frac{J}{\rho} - \frac{\bar{J}}{\bar{\rho}} \right|^2dxd\tau, \end{align*} \begin{align*} \frac{1}{\epsilon}\iint_{[0,t] \times \mathbb{T}^n} p(\rho|\bar{\rho})\dive \bar{u} dxd\tau \leq C_3 \iint_{[0,t] \times \mathbb{T}^n} h(\rho|\bar{\rho})dxd\tau, \end{align*} the latter thanks to \eqref{pcontrol} as well. For the new term $Q_2$ coming from the formulation of the relative entropy estimate of \cite{Bresh}, the strategy is the same: we shall take advantage of the estimates from that paper, by carefully taking into account the singular coefficient in terms of the relaxation parameter $\epsilon$. For the first term we define \begin{align*} & \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho(\mu''(\rho) \nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot((v-\bar{v})\dive \bar{u} + (\bar{u}-u)\dive \bar{v})dxd\tau \\ & = Q_{21} + Q_{22}, \end{align*} where \begin{align*} & Q_{21} := \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \sqrt{\rho}(\mu''(\rho) \nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot \sqrt{\rho}(v-\bar{v})\dive \bar{u} dxd\tau, \\ & Q_{22} := \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \sqrt{\rho}(\mu''(\rho) \nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot\sqrt{\rho}(\bar u -u)\dive \bar{v}dxd\tau . \end{align*} Again, ${\dive \bar u }=O(\epsilon)$ and, since $\mu''(\rho)\nabla \rho - \mu''(\bar \rho)\nabla \bar \rho = \frac{s+1}{2}(v- \bar v) $, we readily obtain \begin{align*} & Q_{21} \leq C_4 \iint_{[0,t] \times \mathbb{T}^n} \rho \left|\frac{J}{\rho}- \frac{\bar J}{\bar \rho}\right|^2 dxd\tau, \\ & Q_{22} \leq C_5 \iint_{[0,t] \times \mathbb{T}^n} \rho \left|\frac{J}{\rho}-\frac{\bar J}{\bar \rho}\right|^2 dxd\tau + \frac{1}{4 \epsilon^2}\iint_{[0,t] \times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau, \end{align*} using Young's inequality for the second estimate.
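For the reader's convenience, here is the one-line verification of the identity just used; it relies only on $\mu(\rho)=\rho^{\frac{s+3}{2}}$ and on the definitions $v=\nabla\mu(\rho)/\rho$ and $\bar v=\nabla\mu(\bar\rho)/\bar\rho$ used throughout: \begin{equation*} \mu''(\rho)\nabla\rho = \frac{s+3}{2}\,\frac{s+1}{2}\,\rho^{\frac{s-1}{2}}\nabla\rho = \frac{s+1}{2}\,\frac{\mu'(\rho)\nabla\rho}{\rho} = \frac{s+1}{2}\,v, \end{equation*} and the same computation with $\bar\rho$ in place of $\rho$ gives $\mu''(\bar\rho)\nabla\bar\rho=\frac{s+1}{2}\,\bar v$, whence $\mu''(\rho)\nabla\rho-\mu''(\bar\rho)\nabla\bar\rho=\frac{s+1}{2}(v-\bar v)$.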
Analogously, we split the second term in $Q_2$ in two: \begin{align*} & \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho(\mu'(\rho)-\mu'(\bar{\rho}))((v-\bar{v})\cdot\nabla \dive \bar{u}+ (\bar{u}-u)\cdot\nabla \dive \bar{v})dxd\tau \\ & = \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \sqrt{\rho}(\mu'(\rho)-\mu'(\bar{\rho}))\sqrt{\rho}(v-\bar{v})\cdot\nabla \dive \bar{u}dxd\tau \\ & + \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \sqrt{\rho}(\mu'(\rho)-\mu'(\bar{\rho}))\sqrt{\rho}(\bar u -u )\cdot\nabla \dive \bar{v}dxd\tau. \end{align*} Hence, we use Young's inequality and Lemma $\ref{lemma8}$ to bound the first term in view of ${\nabla \dive \bar u } =O(\epsilon)$, while for the second one we take advantage of the control given by the friction term: \begin{align*} & \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho(\mu'(\rho)-\mu'(\bar{\rho}))((v-\bar{v})\cdot\nabla \dive \bar{u}+ (\bar{u}-u)\cdot\nabla \dive \bar{v})dxd\tau \\ & \leq C_6 \iint_{[0,t] \times \mathbb{T}^n} \left ( h(\rho|\bar{\rho}) + \rho \left|\frac{J}{\rho} - \frac{\bar{J}}{\bar{\rho}}\right|^2\right )dxd\tau + \frac{1}{8\epsilon^2} \iint_{[0,t] \times \mathbb{T}^n} \rho \left|\frac{m}{\rho}-\frac{\bar{m}}{\bar{\rho}}\right|^2 dxd\tau. \end{align*} Finally the relative entropy inequality becomes: \begin{align*} \Psi(t) + \frac{1}{2\epsilon^2} \int \int_{[0,t] \times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau \leq \Psi(0) + \tilde{C}\epsilon^4 + C \int_0^t \Psi(\tau)d\tau, \end{align*} and the Gronwall's Lemma gives the desired result. \end{proof} \section{The high friction limit of Navier--Stokes--Korteweg system}\label{sec:NSK} In the same spirit of the previous discussions, in this section we want to study the high-friction limit in the case of the Navier--Stokes--Korteweg system, which, in the (enlarged) formulation and after the scaling described above, rewrites as follows: \begin{equation}\label{NSK} \left\{\begin{aligned} & \partial_t \rho + \frac{1}{\epsilon} \dive m = 0 \\ & \partial_t m + \frac{1}{\epsilon} \dive \left( \frac{m \otimes m}{\rho} \right) + \frac{1}{\epsilon} \nabla p(\rho) - \frac{2 \nu}{\epsilon} \dive(\mu_L(\rho)Du) - \frac{\nu}{\epsilon} \nabla(\lambda_L(\rho) \dive u) \\ & \ = \frac{1}{\epsilon} \dive S_1 - \frac{1}{\epsilon^2}m \\ &\partial_t J + \frac{1}{\epsilon} \dive\left(\frac{J \otimes m}{\rho} \right) + \frac{1}{\epsilon} \dive S_2= 0. \end{aligned}\right. \end{equation} In system \eqref{NSK}, $m=\rho u$ , $J=\rho v $, the viscosity coefficient $\nu$ is positive, and, as denoted above, $$Du= \frac{ \nabla u + {}^t\nabla u}{2}$$ is the symmetric part of the gradient $\nabla u$ and we recall that the Lam\'e coefficient verifies \begin{equation} \label{lame2} \mu_L(\rho)\geq 0;\ \frac2n \mu_L(\rho) +\lambda_L(\rho) \geq 0. \end{equation} Moreover, the effective velocity $v = \nabla \mu(\rho)/\rho$ and the stresses $S_1$ and $S_2$ are the same of the Euler--Korteweg system, namely $$ \dive S_1 = \dive (\mu(\rho)\nabla v) + \frac{1}{2 \epsilon} \nabla(\lambda(\rho)\dive v)$$ and $$\dive S_2 = (\mu(\rho) {}^t\nabla u) + \frac{1}{2 \epsilon} \nabla(\lambda(\rho)\dive u),$$ as well as the definition of the functions $\mu(\rho)$ and $\lambda(\rho)$. 
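For completeness, the last step is the integral form of Gronwall's Lemma: dropping the nonnegative friction term on the left-hand side of the inequality above, one obtains \begin{equation*} \Psi(t) \le \Psi(0) + \tilde C\epsilon^4 + C\int_0^t \Psi(\tau)\,d\tau \quad\Longrightarrow\quad \Psi(t) \le \big(\Psi(0)+\tilde C\epsilon^4\big)\,e^{Ct} \le \big(\Psi(0)+\tilde C\epsilon^4\big)\,e^{CT}, \end{equation*} for all $t\in[0,T]$, which is \eqref{stabtheosec4}; in particular, $\sup_{t\in[0,T]}\Psi(t)\to 0$ whenever $\Psi(0)\to 0$ and $\epsilon\to 0$.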
As already pointed out above, we stress once again that the coefficients $\mu_L(\rho)$ and $\lambda_L(\rho)$ need not coincide with $\mu(\rho)$ and $\lambda(\rho)$, and we shall only assume that their $L^1$ norms are bounded uniformly in $\epsilon$, which can be viewed as a control of them in terms of the pressure term $\rho^\gamma$ via the energy bound $E_o$. The Hilbert expansion applied to system $\eqref{NSK}$ will give us the same formal limit as in the previous case, that is, the viscosity terms affect the expansion only at higher orders, and therefore the limit solution $\bar\rho$ as $\epsilon \rightarrow 0$ satisfies the following equation: \begin{equation}\label{GFfor NS} \partial_t \bar{\rho} + \dive(- \nabla p(\bar{\rho}) + \dive S_1(\bar{\rho})) = 0, \end{equation} while the (nonzero) leading term for the momentum is given by \begin{equation} \label{eq:equmom2} \bar m = \epsilon(-\nabla p(\bar{\rho}) + \dive S_1(\bar{\rho})). \end{equation} Indeed, we introduce the asymptotic expansion of the state variables: \begin{align*} &\rho = \rho_0 + \epsilon\rho_1 + \epsilon^2 \rho_2 + \cdots\\ &m = m_{0} + \epsilon m_{1} + \epsilon^2 m_{2} + \cdots \end{align*} in the system $\eqref{NSK}$ and collect the terms of the same order; the expansion for $J$ will clearly come from the one for $\rho$. Then, from the mass conservation we get: \begin{align*} &O(\epsilon^{-1}): & & \dive m_{0} = 0; \\ &O(1): & & \partial_t \rho_0 + \dive m_{1} = 0; \\ &O(\epsilon): & & \partial_t \rho_1 + \dive m_{2} =0;\\ &O(\epsilon^2): & & \dots\\ \end{align*} while, from the momentum equation we get: \begin{align*} &O(\epsilon^{-2}) : & & m_{0}=0; \\ &O(\epsilon^{-1}) : & & -m_{1} = \nabla p(\rho_0) - \dive S_1(\rho_0); \\ &O(1): & & -m_{2} = \nabla( p'(\rho_0)\rho_1) - \dive(\mu'(\rho_0)\rho_1 \nabla v_0 + \mu(\rho_0)\nabla v_1) \\ & & & \qquad \qquad + \nabla(\lambda'(\rho_0)\rho_1 \dive v_0 + \lambda(\rho_0) \dive v_1) \\ & & & \qquad\qquad - 2 \nu\dive\left(\mu_L(\rho_0)D\left(\frac{m_1}{\rho_0}\right)\right) - \nu \nabla\left(\lambda_L(\rho_0)\dive \frac{m_1}{\rho_0}\right); \\ &O(\epsilon): & & \dots\\ \end{align*} Hence, from these first relations, we recover the equilibrium relation $m_{0} = 0$, Darcy's law $m_{1} = - \nabla_x p(\rho_0) + \dive_x S_1(\rho_0)$, and the gradient flow dynamic $\eqref{GFfor NS}$ for the leading term $\rho_0$. In the same spirit as Section 2, we rewrite the scalar equation $\eqref{GFfor NS}$ in the same form as the ``hyperbolic part'' of system \eqref{NSK} by adding an appropriate error term. To this end, let us consider $\bar\rho$ a smooth solution of \eqref{GFfor NS} and assume $\bar m$ is given by \eqref{eq:equmom2} and, as said before, $\bar J = \nabla \mu(\bar\rho)$. Then $(\bar \rho , \bar m, \bar J)$ satisfies \begin{equation}\label{ns-scaledstrong} \left\{\begin{aligned} & \partial_t \bar \rho + \frac{1}{\epsilon} \dive\bar m = 0 \\ & \partial_t\bar m + \frac{1}{\epsilon} \dive \left(\frac{\bar m \otimes \bar m }{\bar\rho} \right) + \frac{1}{\epsilon} \nabla p( \bar \rho) = \frac{1}{\epsilon} \dive \bar S_1 - \frac{1}{\epsilon^2} \bar m + \bar e\\ &\partial_t \bar J + \frac{1}{\epsilon} \dive \left(\frac{\bar J \otimes \bar m}{\bar \rho} \right) + \frac{1}{\epsilon} \dive \bar S_2 = 0, \end{aligned}\right. \end{equation} where $$\bar e = \frac{1}{\epsilon} \dive \left(\frac{\bar m \otimes \bar m }{\bar \rho} \right) + \bar m_t = O(\epsilon). 
$$ The idea of introducing the system $\eqref{ns-scaledstrong}$ is that of reconstructing the same first--order part of the relaxing system, to take advantage of the properties which link the entropy and the convective terms, and then obtain in a more direct way the relative energy estimate, as already done in Section \ref{sec:relenes}; therefore there is no need to introduce viscosity terms (and thus extra errors) in this reformulation of the equilibrium dynamics. As a consequence, the structure of the two systems is the same as the one considered above, and hence we shall emphasize here below only the differences with respect to the previous calculations in obtaining the desired relative entropy inequality. Let us start by recalling the constitutive relations for the functions involved in $\eqref{NSK}$, that is the $\gamma$--law pressure $p(\rho) = \rho^{\gamma}$, $\mu(\rho)= \rho^{\frac{s+3}{2}}$ with the conditions $\gamma > 1$, $s+2 \leq \gamma$, $s \geq -1$ and $\lambda(\rho)=2( \rho \mu'(\rho) - \mu(\rho))$. The mechanical energy associated with $\eqref{NSK}$ is given by \begin{equation*} \eta(\rho,m,J) = \frac{1}{2} \frac{|m|^2}{\rho} + \frac{1}{2} \frac{|J|^2}{\rho} + h(\rho), \end{equation*} and, proceeding as in the previous sections, we (formally) obtain \begin{equation*} \frac{d}{dt} \int_{\mathbb{T}^n} \eta(\rho,m,J) dx + \frac{2\nu}{\epsilon} \int_{\mathbb{T}^n}\mu_L(\rho)|D(u)|^2 dx + \frac{\nu}{\epsilon} \int_{\mathbb{T}^n} \lambda_L(\rho)|\dive u|^2 dx = - \frac{1}{\epsilon^2} \int_{\mathbb{T}^n} \frac{|m|^2}{\rho}dx. \end{equation*} In particular, as is well known, condition \eqref{lame2} implies that the mechanical energy dissipates along solutions of $\eqref{NSK}$. On the other hand, the entropy $\bar \eta(\bar \rho, \bar m , \bar J)$ associated with $\eqref{ns-scaledstrong}$ satisfies: \begin{equation}\label{etabarNS} \frac{d}{dt} \int_{\mathbb{T}^n} \bar \eta(\bar \rho, \bar m , \bar J) dx = - \frac{1}{\epsilon^2} \int_{\mathbb{T}^n} \frac{|\bar m|^2}{\bar\rho}dx + \int_{\mathbb{T}^n} \bar e \cdot \frac{\bar m }{\bar \rho}dx. 
\end{equation} We state here below the definition of weak solutions we shall consider in the study of our relaxation limit. \begin{definition}\label{deFNS} ($\rho$, $m$, $J$) with $\rho \in C([0, \infty);L^1(\mathbb{T}^n))$, $(m,J) \in C([0, \infty);L^1(\mathbb{T}^n)^{2n})$, $\rho \geq 0$, is a weak (periodic) solution of $\eqref{NSK}$ if \begin{align*} & \sqrt{\rho}u, \sqrt{\rho}v \in L^{\infty}((0,T);L^2(\mathbb{T}^n)^n),\ \rho \in C([0, \infty);L^\gamma(\mathbb{T}^n)), \\ & \mu_L(\rho)D(u) \in L^1((0,T);L^1(\mathbb{T}^n)^{2n} ), \ \lambda_L(\rho)\dive u \in L^1((0,T);L^1(\mathbb{T}^n)) \end{align*} and $(\rho, m,J)$ satisfy for all $\psi \in C^1_c([0, \infty); C^1(\mathbb{T}^n))$ and for all $\phi, \varphi \in C^1_c([0, \infty); C^1(\mathbb{T}^n)^n)$: \begin{align*} & - \iint_{(0,+\infty)\times\mathbb{T}^n}\Bigg (\rho \psi_t + \frac{1}{\epsilon} m \cdot \nabla_x \psi \Bigg )dxdt = \int_{\mathbb{T}^n} \rho(x,0)\psi(x,0)\,dx;\\ & - \iint_{(0,+\infty)\times\mathbb{T}^n} \Bigg[m \cdot (\phi)_t + \frac{1}{\epsilon}\left(\frac{m \otimes m}{\rho} : \nabla_x \phi \right) + \frac{1}{\epsilon} p(\rho) \dive \phi - \frac{2\nu}{\epsilon} \mu_L(\rho) D(u): \nabla \phi \\ & \; \; - \frac{\nu}{\epsilon} \lambda_L(\rho)\dive u \dive \phi + \frac{1}{\epsilon}\left( \mu(\rho) v \cdot \nabla \dive (\phi) + \nabla \mu(\rho) \cdot (\nabla \phi v ) \right) + \\ & \; \; \; \frac{1}{\epsilon} \left( \frac{1}{2} \nabla \lambda(\rho) \cdot v \dive \phi + \frac{1}{2} \lambda(\rho) v \cdot \nabla \dive \phi \right) \Bigg]dxdt = \\ & \ - \frac{1}{\epsilon^2} \iint_{(0,+\infty)\times\mathbb{T}^n}m \cdot \phi dxdt + \int_{\mathbb{T}^n} m(x,0) \cdot\phi(x,0)dx, \end{align*} where we have used the identity $$\displaystyle{S= - p(\rho) \mathbb{I} + S_1= - p(\rho)\mathbb{I} + \mu(\rho)\nabla v + \frac{1}{2}\lambda(\rho)\dive v \mathbb{I}},$$ \begin{align*} & - \iint_{(0,+\infty)\times\mathbb{T}^n} \Bigg[J \cdot\varphi_t + \frac{1}{\epsilon}\left(\frac{J \otimes m}{\rho} : \nabla_x \varphi \right) - \frac{1}{\epsilon}\Bigg( \mu(\rho) u \cdot (\nabla \dive \varphi ) + \nabla \mu(\rho) \cdot (\nabla \varphi u ) \\ &\ + \frac{1}{2} \nabla \lambda(\rho) \cdot u \dive \varphi + \frac{1}{2} \lambda(\rho) u \cdot \nabla \dive \varphi \Bigg) \Bigg]dxdt = \int_{\mathbb{T}^n} J(x,0) \cdot \varphi(x,0)dx, \end{align*} where we have used the identity $$\displaystyle{S_2= \mu(\rho)^t\nabla u + \frac{1}{2}\lambda(\rho)\dive u \mathbb{I}}.$$ If in addition $ \eta (\rho,m,J) \in C([0, \infty); L^1(\mathbb{T}^n))$ and $(\rho,m,J)$ satisfy \begin{align}\label{dissNS} & \iint_{(0,+\infty)\times\mathbb{T}^n} \left( \eta(\rho,m,J) \right) \dot{\theta}(t) dxdt \leq \int_{\mathbb{T}^n} \left( \eta(\rho,m,J)\right)|_{t=0} \theta(0)dx \nonumber\\ & \ - \frac{1}{\epsilon^2} \iint_{(0,+\infty)\times\mathbb{T}^n} \frac{|m|^2}{\rho} \theta(t) dxdt - \frac{2\nu}{\epsilon}\iint_{(0,\infty) \times \mathbb{T}^n} \mu_L(\rho)|D(u)|^2\theta(t) dxdt \nonumber \\ & \ - \frac{\nu}{\epsilon}\iint_{(0,\infty) \times \mathbb{T}^n} \lambda_L(\rho)|\dive u|^2 \theta(t)dxdt \end{align} for any non-negative $\theta \in W^{1,\infty}[0, \infty)$ compactly supported on $[0,\infty)$, then $(\rho,m,J)$ is called a \emph{dissipative} weak solution. If $\eta(\rho,m,J) \in C([0,\infty);L^1(\mathbb{T}^n))$ and $(\rho,m,J)$ satisfy $\eqref{dissNS}$ as an equality, then $(\rho,m,J)$ is called a \emph{conservative} weak solution. 
We say that a dissipative (or conservative) weak (periodic) solution $(\rho,m,J)$ of $\eqref{NSK}$ with $\rho \geq 0$ has finite total mass and energy if $$ \sup_{t \in (0,T)} \int_{\mathbb{T}^n} \rho dx \leq M <+ \infty,$$ and $$ \sup_{t \in (0,T)} \int_{\mathbb{T}^n} \eta(\rho,m,J) dx \leq E_o <+ \infty.$$ \end{definition} The relative entropy calculation is contained in the next theorem. \begin{theorem}\label{relativeentropyNS} Let $(\rho,m,J)$ be a dissipative (or conservative) weak solution of $\eqref{NSK}$ with finite total mass and energy according to Definition \ref{deFNS}, and $\bar \rho$ a smooth solution of $\eqref{GFfor NS}$. Then \begin{align}\label{eq:relenNS} &\int_{\mathbb{T}^n} \eta(\rho,m,J| \bar{\rho}, \bar{m}, \bar{J})(t)dx \leq \int_{\mathbb{T}^n} \eta(\rho,m,J| \bar{\rho}, \bar{m}, \bar{J})(0)dx \nonumber \\ & - \frac{2\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n } \mu_L(\rho)|D(u-\bar u)|^2 dxd\tau - \frac{\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n }\lambda_L(\rho)|\dive(u-\bar u )|^2dxd\tau \nonumber\\ & - \frac{1}{\epsilon^2} \iint_{(0,t) \times \mathbb{T}^n} \rho |u-\bar{u}|^2 dxd\tau - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \rho \nabla \bar{u}: (u-\bar{u}) \otimes (u-\bar{u})dxd\tau \nonumber\\ & - \iint_{(0,t) \times \mathbb{T}^n} e(\bar{\rho},\bar{m}) \cdot \frac{\rho}{\bar{\rho}} (u-\bar{u})dxd\tau - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n}p(\rho|\bar{\rho}) \dive \bar{u} dxd\tau \nonumber \\ & - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \rho \; \nabla \bar{u}: (v-\bar{v}) \otimes (v-\bar{v}) dxd\tau \nonumber\\ & - \frac{1}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \rho[(\mu''(\rho)\nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot((v-\bar{v})\dive \bar{u} - (u-\bar{u})\dive \bar{v})]dxd\tau \nonumber\\ & - \frac{1}{\epsilon}\iint_{(0,t) \times \mathbb{T}^n} \rho (\mu'(\rho)- \mu'(\bar{\rho}))[(v-\bar{v})\cdot\nabla \dive \bar{u}-(u- \bar{u})\cdot\nabla \dive \bar{v}]dxd\tau \nonumber\\ & - \frac{2\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n } \mu_L(\rho) D(\bar u):D(u-\bar u) dxd\tau - \frac{\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n }\lambda_L(\rho)\dive \bar u(\dive u - \dive \bar u)dxd\tau, \end{align} where \begin{equation*}\bar{m} = \bar{\rho}\bar{u}= \epsilon \left(- \nabla p(\bar{\rho})+ \dive S_1(\bar{\rho})\right), \qquad \bar{J}= \bar{\rho}\bar{v} = \nabla \mu(\bar{\rho}), \end{equation*} are defined as in \eqref{eq:defbarutheo}. \end{theorem} \begin{proof} To prove the relation $\eqref{eq:relenNS}$ we underline here only the differences coming from the presence of the viscosity terms in the equation for the momentum $m$. To this end, we recall that from the energy inequality $\eqref{dissNS}$, using the test function $\theta(\tau)$: \begin{equation*} \theta(\tau)= \begin{cases} 1, & \hbox{ for } 0 \leq \tau < t, \\ \frac{t-\tau}{\mu} + 1, & \hbox{ for } t \leq \tau < t+\mu, \\ 0, & \hbox{ for } \tau \geq t +\mu, \end{cases} \end{equation*} as $\mu \rightarrow 0$, one has: \begin{align*} \int_{\mathbb{T}^n} \eta(\rho,m,J)\, dx \Big|_{\tau =0}^t & \leq - \frac{1}{\epsilon^2} \iint_{(0,t)\times\mathbb{T}^n} \frac{|m|^2}{\rho} dxd\tau - \frac{2\nu}{\epsilon}\iint_{(0,t) \times \mathbb{T}^n} \mu_L(\rho)|D(u)|^2dxd\tau \\ & - \frac{\nu}{\epsilon}\iint_{(0,t) \times \mathbb{T}^n} \lambda_L(\rho)|\dive u|^2 dxd\tau. 
\end{align*} On the other hand, integrating over $(0,t)$ the relation $\eqref{etabarNS}$ we get: \begin{equation*} \int_{\mathbb{T}^n} \bar \eta(\bar \rho, \bar m , \bar J)\, dx\Big|_{\tau = 0}^t = - \frac{1}{\epsilon^2} \iint_{ (0,t) \times \mathbb{T}^n } \frac{|\bar m|^2}{\bar\rho}dxd\tau + \iint_{(0,t) \times \mathbb{T}^n} \bar e \cdot \frac{\bar m }{\bar \rho}dxd\tau. \end{equation*} To control the linear correction of the entropy we choose, as in Theorem \ref{relativentropy}, the following test functions in the weak formulation for the differences $(\rho - \bar\rho, m - \bar m, J-\bar J)$: \begin{align*} &\psi = \theta(\tau) \left( h'(\bar{\rho})- \frac{1}{2} \frac{|\bar{m}|^2}{\bar{\rho}^2} - \frac{1}{2} \frac{|\bar{J}|^2}{ \bar{\rho}^2} \right) \text{ \; and} \\ & \Phi = (\phi, \varphi)= \theta(\tau) \left( \frac{\bar{m}}{\bar{\rho}}, \frac{\bar{J}}{\bar{\rho}} \right), \end{align*} where $\theta(\tau)$ is defined above. Since $D(u):\nabla \phi = D(u):D(\phi)$ and the equation for $\bar m$ does not involve viscosity terms, the new terms due to the viscosity in the weak formulation of the equation for $m-\bar m$ are given solely by: \begin{align*} +\frac{\nu}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \big (2 \mu_L(\rho)D(u):D(\bar u) + \lambda_L(\rho) \dive u \dive \bar u \big ) dxd\tau. \end{align*} Hence, the new terms we need to handle here with respect to Theorem \ref{relativentropy} are the following integrals: \begin{align*} & \frac{2 \nu}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \mu_L(\rho)[ D(u):D(\bar u)]dxd\tau + \frac{\nu}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \lambda_L(\rho) \dive u \dive \bar u dxd\tau\\ & -\frac{2\nu}{\epsilon} \iint_{ (0,t) \times \mathbb{T}^n } \mu_L(\rho)|D(u)|^2 \; dxd\tau - \frac{\nu}{\epsilon} \iint_{ (0,t) \times \mathbb{T}^n } \lambda_L(\rho) |\dive u|^2 dxd\tau, \end{align*} which can be rearranged as follows: \begin{align*} & - \frac{2\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n } \mu_L(\rho)|D(u-\bar u)|^2 dxd\tau - \frac{2\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n }\mu_L(\rho) D(\bar u):D(u-\bar u) dxd\tau \\ & - \frac{\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n }\lambda_L(\rho)|\dive(u-\bar u )|^2dxd\tau - \frac{\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n }\lambda_L(\rho)\dive \bar u(\dive u - \dive \bar u)dxd\tau. \end{align*} Hence, repeating the same calculations as in Theorem \ref{relativentropy} for all remaining terms we readily obtain \eqref{eq:relenNS} and the proof is complete. \end{proof} Now we use Theorem \ref{relativeentropyNS} to measure the distance between the two solutions in terms of the relative entropy as in Section \ref{sec:stabconv}. To this end, we recall the definition \eqref{eq:distfi} of the ``distance'' $\Psi(t)$: \begin{equation*} \Psi(t) = \int_{\mathbb{T}^n} \left (h(\rho|\bar{\rho}) + \frac{1}{2} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}}\right|^2 + \frac{1}{2} \rho \left|\frac{J}{\rho} - \frac{\bar{J}}{\bar{\rho}}\right|^2 \right )dx. \end{equation*} Then the following theorem holds. \begin{theorem}\label{theo:stabNSK} Let $T>0$ be fixed, let $(\rho,m, J)$ be as in Definition $\ref{deFNS}$ and $\bar{\rho}$ be a smooth solution of $\eqref{GFfor NS}$ such that $\bar{\rho} \geq \delta > 0$, with $\bar m$ and $\bar J$ defined as in \eqref{eq:defbarutheo}. Assume the pressure $p(\rho)$ is given by the $\gamma$--law $\rho^{\gamma}$ with $\gamma > 1$. 
Assume $\mu(\rho) = \rho^{\frac{s+3}{2}}$ with $\gamma \geq s+2$ and $s \geq -1$, and \begin{equation} \label{lamecontrol} \left\| \mu_L(\rho) \right\|_{L^{\infty}((0,t);L^1(\mathbb{T}^n))} , \left\| \lambda_L(\rho) \right\|_{L^{\infty}((0,t);L^1(\mathbb{T}^n))} \leq \tilde E, \end{equation} for a positive constant $\tilde E$ independent of $\epsilon$. Then, for $t \in [0,T]$, the stability estimate \begin{equation}\label{stabtheosec5} \Psi(t) \leq C(\Psi(0) + \epsilon^4 +\nu\epsilon), \end{equation} holds true, where $C$ is a positive constant depending on $T$, on $M$ (the $L^1$ bound for $\rho$) and $E_o$ (the energy bound), both assumed to be uniform in $\epsilon$, and on $\bar{\rho}$ and its derivatives. Moreover, if $\Psi(0) \rightarrow 0$ as $\epsilon \rightarrow 0$, then as $\epsilon \rightarrow 0$ \begin{equation} \sup_{t \in [0,T]} \Psi(t) \rightarrow 0. \end{equation} \end{theorem} \begin{proof} From the definition of $\Psi(t)$ and from the relative entropy estimate given by Theorem $\ref{relativeentropyNS}$ we obtain for $t \in [0,T]$: \begin{align*} & \Psi(t) + \frac{1}{\epsilon^2} \iint_{[0,t]\times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau + \frac{2\nu}{\epsilon} \iint_{ (0,t) \times \mathbb{T}^n } \mu_L(\rho)|D(u)-D(\bar u)|^2 dxd\tau \\ & +\frac{\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n } \lambda_L(\rho)[\dive u - \dive \bar u]^2 dxd\tau \leq \Psi(0) + \iint_{ (0,t) \times \mathbb{T}^n } \big(|E| +|Q| + |E_2| \big)dxd\tau. \end{align*} The terms $Q$ and $E$ are exactly the same as in Section \ref{sec:stabconv}, that is \begin{equation*} E = \bar{e} \cdot \frac{\rho}{\bar{\rho}} \left( \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right) \end{equation*} and \begin{align*} Q = & - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} \rho \nabla \bar{u} : [(u-\bar{u}) \otimes (u-\bar{u})]dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t] \times \mathbb{T}^n} \rho \nabla \bar{u} : [(v-\bar{v}) \otimes (v-\bar{v})]dxd\tau - \frac{1}{\epsilon} \iint_{[0,t]\times \mathbb{T}^n} p(\rho|\bar{\rho}) \dive \bar{u} dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t]\times\mathbb{T}^n} \rho[(\mu''(\rho)\nabla \rho - \mu''(\bar{\rho})\nabla \bar{\rho})\cdot((v-\bar{v})\dive \bar{u} - (u-\bar{u})\dive \bar{v})]dxd\tau \\ & - \frac{1}{\epsilon} \iint_{[0,t]\times\mathbb{T}^n} \rho (\mu'(\rho)- \mu'(\bar{\rho}))[(v-\bar{v}) \cdot \nabla \dive \bar{u} - (u-\bar{u})\cdot\nabla \dive \bar{v}]dxd\tau, \end{align*} while the new error term $E_2$ is defined as follows: \begin{align*} E_2 &: = - \frac{2\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n }\mu_L(\rho) D(\bar u):D(u-\bar u) dxd\tau - \frac{\nu}{\epsilon}\iint_{ (0,t) \times \mathbb{T}^n }\lambda_L(\rho)\dive \bar u(\dive u - \dive \bar u)dxd\tau \\ & = : E_{21} + E_{22}. \end{align*} Clearly, the terms $Q$ and $E$ can be bounded as in Theorem \ref{STABILITY}, namely \begin{equation*} \iint_{[0,t]\times \mathbb{T}^n}|E|dxd\tau \leq CT\epsilon^4 + \frac{1}{4\epsilon^2}\iint_{[0,t]\times \mathbb{T}^n} \rho\left|\frac{m}{\rho} - \frac{\bar m }{\bar \rho}\right|^2dxd\tau, \end{equation*} where $C$ depends on the bound for $\bar \rho$ and on the (uniform) $L^1$ bound for $\rho$. Here we also used the fact that $\bar e = O(\epsilon)$. 
Moreover, we recall the estimate for $Q$ as well: \begin{equation*} \iint_{[0,t]\times \mathbb{T}^n}|Q|dxd\tau \leq \frac{1}{4\epsilon^2}\iint_{[0,t]\times \mathbb{T}^n} \rho\left|\frac{m}{\rho} - \frac{\bar m }{\bar \rho}\right|^2dxd\tau + \tilde C \int_{[0,t]}\Psi(\tau)d\tau, \end{equation*} where $\tilde C$ depends on the bounds $\dive \bar u/ \epsilon=O(1)$ and $\nabla \dive \bar u/ \epsilon=O(1)$. To bound the new terms $E_{21}$ and $E_{22}$ we shall use the uniform bound \eqref{lamecontrol} as follows. \begin{align*} & E_{21} = - \frac{2 \nu}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \mu_L(\rho)D(\bar u): [D(u-\bar u)]dxd\tau \leq \\ & \; \frac{4\nu}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \mu_L(\rho)|D(\bar u)|^2dxd\tau + \frac{\nu}{\epsilon}\iint_{(0,t) \times \mathbb{T}^n}\mu_L(\rho)|D(u- \bar u)|^2 dxd\tau \leq \\ & \; \frac{\nu}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \mu_L(\rho)|D(u-\bar u )|^2 dxd\tau + \nu C_2T \epsilon, \end{align*} where we have used ${D(\bar u)} = O(\epsilon)$, and $C_2$ depends also on $E_o$ in view of \eqref{lamecontrol}. The estimate for $E_{22}$ is analogous: we use the fact that ${\dive \bar u }= O(\epsilon)$ as follows \begin{align*} E_{22} = &- \frac{\nu}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \lambda_L(\rho) \dive \bar u( \dive u -\dive \bar u) dxd\tau \leq \\ & \frac{\nu}{2\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \lambda_L(\rho) |\dive (u - \bar u)|^2 dxd\tau + \frac{2\nu}{\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \lambda_L(\rho) |\dive \bar u|^2 dxd\tau \leq \\ & +\nu C_4T \epsilon + \frac{\nu}{2\epsilon} \iint_{(0,t) \times \mathbb{T}^n} \lambda_L(\rho) |\dive (u - \bar u)|^2 dxd\tau, \end{align*} where $C_4$ depends also on $E_o$, again using \eqref{lamecontrol}. Finally, we get: \begin{align*} & \Psi(t) + \frac{1}{2\epsilon^2} \iint_{[0,t]\times \mathbb{T}^n} \rho \left| \frac{m}{\rho} - \frac{\bar{m}}{\bar{\rho}} \right|^2dxd\tau + \frac{\nu}{\epsilon} \iint_{ (0,t) \times \mathbb{T}^n } \mu_L(\rho)|D(u)-D(\bar u)|^2 dxd\tau \nonumber\\ &+ \frac{\nu}{2\epsilon}\iint_{ (0,t) \times \mathbb{T}^n } \lambda_L(\rho)[\dive u - \dive \bar u]^2 dxd\tau \leq \Psi(0) + \tilde{C}\epsilon^4 + \nu \bar{C} \epsilon + \int_0^t \Psi(\tau) d\tau, \end{align*} and, since from relation $\eqref{lame2}$ we obtain \begin{align*} 0 &\leq \frac{\nu}{\epsilon} \iint_{ (0,t) \times \mathbb{T}^n } \frac{1}{2}\left( \lambda_L(\rho) + \frac{2}{n}\mu_L(\rho) \right) |\dive(u-\bar{u})|^2dxd\tau \\ & \leq \frac{\nu}{\epsilon} \iint_{ (0,t) \times \mathbb{T}^n } \mu_L(\rho)|D(u)-D(\bar u)|^2 dxd\tau + \frac{\nu}{2\epsilon}\iint_{ (0,t) \times \mathbb{T}^n } \lambda_L(\rho)[\dive u - \dive \bar u]^2 dxd\tau, \end{align*} Gronwall's Lemma gives the result. \end{proof} \begin{remark} Let us emphasize that the choice $\mu_L(\rho) = \mu(\rho)$ and $\lambda_L(\rho) = \lambda(\rho)$ is compatible with \eqref{lame2} and \eqref{lamecontrol} in the range of exponents considered here. Indeed, we have $\mu(\rho)=\rho^{\frac{s+3}{2}}$ with $s\geq -1$, $s+2 \leq \gamma$, and $\gamma>1$, and $\lambda(\rho)=(s+1) \mu(\rho)$. Then $\mu(\rho)$ and $\lambda(\rho)$ are both nonnegative and \begin{equation*} \left\| \mu_L(\rho) \right\|_{L^{\infty}((0,t);L^1(\mathbb{T}^n))} , \left\| \lambda_L(\rho) \right\|_{L^{\infty}((0,t);L^1(\mathbb{T}^n))} \leq \bar C||\rho||_{L^{\infty}((0,t);L^{\gamma}(\mathbb{T}^n))} \leq \bar C E_o^{\frac{1}{\gamma}}. 
\end{equation*} Moreover, it is worth observing the difference between the stability estimate \eqref{stabtheosec4} obtained for the Euler--Korteweg model and \eqref{stabtheosec5} of Theorem \ref{theo:stabNSK}. Besides the common control of the initial relative entropy $\Psi(0)$, the latter gives a control of the errors of the form $O(\epsilon^4) + O(\nu \epsilon)$, which is consistent with the one in \eqref{stabtheosec4} as $\nu\to 0+$. In other words, the stability estimate obtained in the Euler--Korteweg case is better; nevertheless, it is recovered from the one obtained in the presence of the viscosity terms. The leeway which allows us to perform this estimate in the case of the high friction limit for the Navier--Stokes--Korteweg system is linked to the fact that the viscosity terms appear at an intermediate order in the Hilbert expansion and are ``less singular'' than the ones coming from the friction term, and therefore they can be controlled in the relative entropy estimate. \end{remark} \end{document}
\begin{document} \begin{center} \vskip 1cm{\LARGE\bf A Combinatorial Interpretation of $\frac{j}{n}\binom{kn}{n+j} $ } \vskip 1cm \large David Callan\\ Department of Statistics \\ University of Wisconsin-Madison \\ 1300 University Ave \\ Madison, WI \ 53706-1532 \\ \href{mailto:[email protected]}{\tt [email protected]} \\ \end{center} \vspace*{5mm} \begin{abstract} The identity $\frac{j}{n}\binom{kn}{n+j} =(k-1)\binom{kn-1}{n+j-1}-\binom{kn-1}{n+j}$ shows that $\frac{j}{n}\binom{kn}{n+j} $ is always an integer. Here we give a combinatorial interpretation of this integer in terms of lattice paths, using a uniformly distributed statistic. In particular, the case $j=1,k=2$ gives yet another manifestation of the Catalan numbers. \end{abstract} \vspace{6mm} \section{Introduction} For each pair of integers $j\ge 1$ and $k\ge 2$, the sequence $\big(\frac{j}{n}\binom{kn}{n+j}\big)_{n\ge \frac{j}{k-1}}$ consists of integers since $\frac{j}{n}\binom{kn}{n+j} =(k-1)\binom{kn-1}{n+j-1}-\binom{kn-1}{n+j}$. For $j=1,k=2$ this sequence is the Catalan numbers, \htmladdnormallink{A000108}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A000108} in the \htmladdnormallink{On-Line Encyclopedia of Integer Sequences}{http://www.research.att.com/~njas/sequences/Seis.html}; for $j=1,k=3$ it is \htmladdnormallink{A007226}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A007226} and for $j=1,k=4$ it is \htmladdnormallink{A007228}{http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A007228}. In this note, we give a combinatorial interpretation for all $j,k$ in terms of lattice paths. We first treat the case $j=1$, which is simpler (\S 2), specialize to $k=2$ (\S 3), then generalize to larger $j$ (\S 4), and end with some remarks (\S 5). \section{Case \emph{j} = 1} Let $\ensuremath{\mathcal P}\xspace_{n,k}$ denote the set of lattice paths of $n+1$ upsteps $U=(1,1)$ and $(k-1)n-1$ downsteps $D=(1,-1)$. Clearly, $\vert \ensuremath{\mathcal P}\xspace_{n,k}\vert=\binom{kn}{n+1}\,:$ choose locations for the upsteps among the total of $kn$ steps. A path in $\ensuremath{\mathcal P}\xspace_{n,k}$ has $kn+1$ vertices or \emph{points}: its initial and terminal points and $kn-1$ interior points. Define the \emph{baseline\xspace} of $P\in \ensuremath{\mathcal P}\xspace_{n,k}$ to be the line joining its initial and terminal points. For $P\in \ensuremath{\mathcal P}\xspace_{n,k}$ label its points $0,1,2,\ldots,kn$ left to right and define the \emph{$k$-divisible\xspace} points of $P$ to be those whose label is divisible by $k$. An example with $n=5$ and $k=3$ is illustrated ($k$-divisible\xspace points indicated by a heavy dot). 
\medskip \noindent \emph{[Figure: a path in $\ensuremath{\mathcal P}\xspace_{5,3}$, with its points labeled $0$ through $15$, the dotted baseline\xspace joining its initial and terminal points, and the $k$-divisible\xspace points $0,3,6,9,12,15$ marked by heavy dots.]} \medskip Consider the statistic $X$ on $\ensuremath{\mathcal P}\xspace_{n,k}$ defined by $X=\#\:$interior $k$-divisible\xspace points lying strictly above the baseline\xspace. In the illustration $X=3$ (points 6,\ 9 and 12). \begin{theorem} The statistic $X$ on $\ensuremath{\mathcal P}\xspace_{n,k}$ is uniformly distributed over $0,1,2,\ldots,n-1$. \end{theorem} The following count is an immediate consequence of the theorem, obtained by considering the paths with $X=n-1$. \begin{cor} $\frac{1}{n}\binom{kn}{n+1}$ is the number of paths in $\ensuremath{\mathcal P}\xspace_{n,k}$ all of whose interior $k$-divisible\xspace points lie strictly above its baseline\xspace. \end{cor} \noindent \textbf{Proof of Theorem}\quad Consider the operation ``rotate left $k$ units'' on $\ensuremath{\mathcal P}\xspace_{n,k}$ defined by transferring the initial $k$ steps of a path in $\ensuremath{\mathcal P}\xspace_{n,k}$ to the end. This rotation operation partitions $\ensuremath{\mathcal P}\xspace_{n,k}$ into rotation classes. We claim (i) each such rotation class has size $n$, and (ii) $X$ assumes the values $0,1,2,\ldots,n-1$ in turn on the paths of a rotation class. The first claim follows from \begin{lemma} Given $P \in \ensuremath{\mathcal P}\xspace_{n,k}$, the only $k$-divisible\xspace points lying on its baseline\xspace are its initial and terminal points. \end{lemma} \noindent \textbf{Proof of Lemma}\quad Suppose $ik,\ 0\le i \le n$, is a $k$-divisible\xspace point on the baseline\xspace. 
Since the slope of the baseline\xspace is $-\frac{(k-2)n-2}{kn}$, this says that the point with coordinates $(ik,-i \frac{(k-2)n-2}{n})$ lies on $P$ (taking the initial point of $P$ as origin). For each point $(x,y)$ on $P$, $x$ and $y$ must have the same even/odd parity. Hence $ik \equiv i \frac{(k-2)n-2}{n}$\:mod\:2. Simplifying, we find $2i \equiv \frac{2i}{n} \:\textrm{mod}\:2 \Rightarrow i \equiv \frac{i}{n} \:\textrm{mod}\:1 \Rightarrow n|\,i \Rightarrow i=0$ or $n$, the last implication because $0\le i \le n$. \qed To prove the second claim, we exhibit a bijection from the paths in $\ensuremath{\mathcal P}\xspace_{n,k}$ with $X=n-1$ to those with $X=i$ for each $i\in [0,n-1]$. Given $P\in \ensuremath{\mathcal P}\xspace_{n,k}$ with $X=n-1$, draw its baseline\xspace $L$. The entire rotation class of $P$ can be viewed in a single diagram: draw a second contiguous copy of $P$ as illustrated, then join the two occurrences of each interior $k$-divisible\xspace point. This results in $n$ parallel line segments (no two collinear, by the Lemma), each the baseline\xspace of a path in the rotation class of $P$. Label the lines (at their endpoints) 0 through $n-1$ from top to bottom. \medskip \noindent \emph{[Figure: the rotation class of $P=UDU^{3}D^{5}UDUD^{2} \in \ensuremath{\mathcal P}\xspace_{5,3}$, obtained by appending a second contiguous copy of $P$ and joining the two occurrences of each interior $k$-divisible\xspace point, so that the $n=5$ parallel baselines appear in a single diagram.]} \medskip Now the path $Q$ with baseline\xspace $i$ has the form $BA$ when $P$ is decomposed as $AB$ with $A$ an initial segment of $P$. 
Hence $Q$ is in $\ensuremath{\mathcal P}\xspace_{n,k}$ and has $X=i$ since the interior $k$-divisible\xspace points of $Q$ lying (strictly) above its baseline\xspace are precisely those labeled $0,1,\ldots,i-1$. The path $B$ can be retrieved in $Q$ as the initial subpath of $Q$ terminating at its ``lowest'' $k$-divisible\xspace point, where ``lowest'' is measured relative to the parallel lines, and so the mapping is invertible. The diagram used in this proof is reminiscent of the one used in \emph{Concrete Mathematics} \cite[p.\,360]{gkp} to prove Raney's Lemma, also known as the Cycle Lemma \cite{zaks,sw}. \section{Special Case} The case $j=1,k=2$ gives a new interpretation of the Catalan numbers: $C_{n}$ is the number of lattice paths of $n+1$ upsteps and $n-1$ downsteps such that the interior even-numbered vertices all lie strictly above the line joining the initial and terminal points. The $C_{3}=5$ paths with $n=3$ are shown. \medskip \noindent \emph{[Figure: the $C_{3}=5$ paths in $\ensuremath{\mathcal P}\xspace_{3,2}$, namely $UUUUDD$, $UUUDUD$, $UUUDDU$, $UUDUUD$ and $UUDUDU$, drawn with their even-numbered vertices marked.]} \medskip \vspace*{5mm} \section{General Case} The general case $j\ge 1$ is similar but a little more complicated. Let $\ensuremath{\mathcal P}\xspace_{n,k,j}$ denote the set of paths of $kn$ upsteps/downsteps of which $n+j$ are upsteps. Thus $\vert \ensuremath{\mathcal P}\xspace_{n,k,j} \vert =\binom{kn}{n+j}$. The ``$j$'' factor in the numerator of $\frac{j}{n}\binom{kn}{n+j}$ requires that we consider the Cartesian product $\ensuremath{\mathcal P}\xspace_{n,k,j}^{*}:= \ensuremath{\mathcal P}\xspace_{n,k,j} \times [j]$ whose size is $j\binom{kn}{n+j}$. Given $(P,i)\in \ensuremath{\mathcal P}\xspace_{n,k,j}^{*}$, introduce an $x$-$y$ coordinate system with origin at the initial point of $P$, identify the parameter $i$ with the line segment joining $(0,2(i-1))$ and $(kn,-(k-2)n+2i)$, and call this the baseline for $(P,i)$; it coincides with the previous notion of baseline\xspace when $j=1$, forcing $i=1$. It is easy to see that, once again, the baseline\xspace never contains an interior $k$-divisible\xspace point of $P$. Define $X$ on $(P,i)\in \ensuremath{\mathcal P}\xspace_{n,k,j}^{*}$ by $X=\#\ $interior $k$-divisible\xspace points of $P$ lying strictly above the baseline\xspace. We first show that $X$ is uniformly distributed over $0,1,2,\ldots,n-1$. It is no longer true that orbits in $\ensuremath{\mathcal P}\xspace_{n,k,j}$ under the ``rotate left by $k$'' operator $R$ all have size $n$ but no matter: in general, $P\in \ensuremath{\mathcal P}\xspace_{n,k,j}$ uniquely has the form $P_{1}^{r}$ with $P_{1}$ of length divisible by $k$ and $r$ maximal. 
Then $r$ necessarily divides $n$ and $j$, and the orbit of $P$ under $R$ has size $n/r$. In case $r> 1$, everything will merely be cut down by a factor of $r$. Declare two elements $(P_{1},i_{1})$ and $(P_{2},i_{2})$ to be \emph{rotation-equivalent} if $P_{1}$ and $P_{2}$ are in the same rotation class under $R$ (regardless of $i_{1}$ and $i_{2}$). As before, all elements of a rotation-equivalence class can be seen in a single diagram as illustrated. \medskip \noindent \emph{[Figure: the rotation-equivalence class in $\ensuremath{\mathcal P}\xspace_{3,2,2}^{*}$ of $P=U^{5}D$.]} \medskip \noindent \emph{[Figure: the rotation-equivalence class in $\ensuremath{\mathcal P}\xspace_{4,3,3}^{*}$ of $P=U^{3}D^{4}U^{3}DU$.]} \medskip Label the baselines (there are $jn/r$ of them; both illustrations have $r=1$) at their endpoints as follows (each of $0,1,\ldots,n-1$ will be the label on $j/r$ endpoints). First take the highest endpoint $p$ and consider the set of all endpoints lying weakly to the left of the vertical line through $p$. Since there are $j-1$ endpoints directly below $p$, this set has size at least $j$. Place label 0 on the $j/r$ highest points in this set, favoring points to the left if a choice must be made between points at the same height. Then take the highest unlabeled endpoint, consider the set of all unlabeled endpoints lying weakly to the left of its vertical line, and place the label 1 on the $j/r$ highest points in this set, again favoring ``left''. Continue in like manner until all endpoints are labeled. Then, for each $i=0,1,\ldots,n-1$, the $j/r$ objects in the rotation-equivalence class with label $i$ all have $X=i$, and the uniform distribution of $X$ follows. By considering the objects in $\ensuremath{\mathcal P}\xspace_{n,k,j}^{*}$ with $X=n-1$, we obtain our main result. \begin{main} Suppose $j\ge 1,\ k\ge 2,$ and $n\ge\frac{j}{k-1}$. Then $\frac{j}{n}\binom{kn}{n+j}$ is the number of lattice paths of $n+j$ upsteps $(1,1)$ and $kn-(n+j)$ downsteps $(1,-1)$ which $($i$\,)$ start at $(0,-2i)$ for some $i\ge 0$, and $($ii$\,)$ have all interior $k$-divisible\xspace points (strictly) above the line through the origin of slope $-\frac{(k-2)n-2}{kn}$. \end{main} \vspace*{5mm} \section{Concluding Remarks} The main theorem can be generalized somewhat further (essentially the same proof): $\frac{d}{n}\binom{an}{cn+d}$ is the number of lattice paths of $cn+d$ upsteps and $an-(cn+d)$ downsteps which (i) start at $(0,-2i)$ for some $i\ge 0$, and (ii) have all interior $a$-divisible\xspace points (strictly) above the line through the origin of slope $-\frac{(a-2c)n-2}{an}$. 
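As a quick numerical illustration of the main theorem (added here only for concreteness; it is not part of the proof), take $j=2$, $k=2$, $n=3$: the theorem predicts $\frac{2}{3}\binom{6}{5}=4$ such paths. Indeed, a qualifying path has $5$ upsteps and one downstep, the line in (ii) has slope $\frac{1}{3}$, and the interior $2$-divisible points are those labeled $2$ and $4$. If the path starts at $(0,-2i)$ with $i\ge 1$, the point labeled $2$ has height at most $0$, so $i=0$ is forced; the point labeled $2$ then lies above the line exactly when the downstep occurs after step $2$, in which case the point labeled $4$ is automatically high enough as well. This leaves exactly the four paths $UUDUUU$, $UUUDUU$, $UUUUDU$ and $UUUUUD$.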
There is also a well-known generalization of the Catalan numbers in a different direction: $\frac{j}{kn+j}\binom{kn+j}{n}$ is the number of lattice paths of $n$ steps east (1,0) and $(k-1)n+j-1$ steps north $(0,1)$ that start at the origin and lie weakly above the line $y=(k-1)x$. One way to prove this (slightly generalizing the approach in \cite{woan}) is as follows. Consider the set $\ensuremath{\mathcal P}\xspace_{n,k,j}$ of paths consisting of $n$ steps east and $(k-1)n+j$ steps north. Measuring the ``height'' of a point above $y=(k-1)x$ as the perpendicular distance to $y=(k-1)x$, define $j$ \emph{high points} for a path $P\in \ensuremath{\mathcal P}\xspace_{n,k,j}$: the first high point is the leftmost of the highest points on the path, the second high point is the leftmost of the next highest points of the path, and so on. Note that all $j$ high points necessarily lie strictly above $y=(k-1)x$. Mark any one of these high points to obtain the set $\ensuremath{\mathcal P}\xspace_{n,k,j}^{*}$ of marked $\ensuremath{\mathcal P}\xspace_{n,k,j}$-paths. Clearly, $\vert \ensuremath{\mathcal P}\xspace_{n,k,j}^{*} \vert = j\binom{kn+j}{n}$. Label the $kn+j+1$ points on a marked path $P^{*} \in \ensuremath{\mathcal P}\xspace_{n,k,j}^{*}$ in order $0,1,2,\ldots,kn+j$ starting at the origin. Set $X=$ label of the marked high point. Then $X$ is uniformly distributed over $1,2,\ldots,kn+j$. The paths with $X=kn+j$ yield the desired paths by deleting the last step (necessarily a north step) and rotating $180^{\circ}$. All the above generalizations of the Catalan numbers are incorporated in the expression \[ \frac{ad-bc}{an+b}\binom{an+b}{cn+d}=(a-c)\binom{an+b-1}{cn+d-1}-c\binom{an+b-1}{cn+d} \] and it would be interesting to find a unified combinatorial interpretation for it. \noindent 2000 {\it Mathematics Subject Classification}: 05A15. \noindent \emph{Keywords: } Catalan, uniformly distributed, $k$-divisible, baseline, Cycle Lemma. \end{document}
\begin{document} \begin{frontmatter} \title{Estimation of the relative risk following a group sequential procedure based upon the weighted log-rank statistic\thanksref{t1}} \runtitle{Estimation following group sequential trial} \thankstext{t1}{This article is a U.S. Government work and is in the public domain in the U.S.A.} \begin{aug} \author{\fnms{Grant} \snm{Izmirlian}\corref{}\ead[label=e1]{[email protected]}} \address{National Cancer Institute; Executive Plaza North, Suite 3131\\ 6130 Executive Blvd, MSC 7354; Bethesda, MD 20892-7354\\ \printead{e1}} \runauthor{G. Izmirlian} \end{aug} \begin{abstract} In this paper we consider a group sequentially monitored trial on a survival endpoint, monitored using a weighted log-rank (WLR) statistic with deterministic weight function. We introduce a summary statistic in the form of a weighted average logged relative risk and show that if there is no sign change in the instantaneous logged relative risk, there always exists a bijection between the WLR statistic and the weighted average logged relative risk. We show that this bijection can be consistently estimated at each analysis under a suitable shape assumption, for which we have listed two possibilities. We indicate how to derive a design-adjusted p-value and confidence interval and suggest how to apply the bias-correction method. Finally, we document several decisions made in the design of the NLST interim analysis plan and in reporting its results on the primary endpoint. \end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{62L12} \kwd[; secondary ]{62N022} \end{keyword} \begin{keyword} \kwd{Weighted Logrank Statistic} \kwd{Group Sequential} \kwd{Interim Analysis} \kwd{Estimation} \end{keyword} \end{frontmatter} \section{Introduction} Time to event, e.g. disease specific mortality, is the primary endpoint in many clinical trials. The use of group sequential boundaries in monitoring the trial is not only commonplace, but ethically mandated in all trials of human subjects. The logrank statistic is often the monitoring statistic of choice due to its natural connection with the relative risk, which is often the parameter of inference. This natural connection, which is based upon the assumption of proportional hazards, admits a one-to-one correspondence between the inferential procedure based upon the usual standard normal scale and that based on the scale of the natural parameter. However, the assumption of proportional hazards is not always a reasonable assumption. In many subject areas, e.g. in disease-prevention trials, one expects that the hazard ratio will not be constant. Much of the prior work on the use of the weighted logrank statistic in a sequential design is confined to the use of a weighting function from the $G^{\rho, \gamma}(t) = S^{\rho}(t) (1-S(t))^{\gamma}$ family of Fleming and Harrington \cite{FlemingT:1991}. This prior work points out two major types of problems which can arise. First, they argue that use of the weighted logrank statistic does not reproduce the single point analysis in the way that is desired. Most notably, they argue, there is no clinically meaningful parameter that allows the values of the monitoring statistic and sequential boundaries to be cast into a clinically meaningful scale. 
They believe that this problem is further aggravated when the range of the weighting function over the duration of the trial is quite large, such as is the case with the $G^{0, 1}$ weight function (Gillen and Emerson, \cite{Gillen:2005a}), and suggest a re-weighting scheme whereby the most weight is given to the most recent data collected at each analysis. Secondly, they argue that if the chosen weighting function is non-deterministic or trial-specific then it is impossible to compare results from different clinical trials (Gillen and Emerson, \cite{Gillen:2005b, Gillen:2007}). While the bulk of these cautionary remarks is useful in its own right, several important points have been omitted from the discussion. Firstly, as we will show, there is a natural, clinically meaningful parameter, the weighted average logged relative risk, that is connected bijectively to the weighted logrank statistic when there is no change in sign in the instantaneous logged relative risk. Under suitable shape assumptions, the bijection can be estimated at each analysis. We will show that the asymptotic distribution of the WLR statistic, suitably normalized, is a Brownian motion plus drift under nothing but boundedness conditions. In two corollaries, we demonstrate how each of two presented shape assumptions translates into a form of the drift function and, consequently, into an estimator of the weighted average logged relative risk. We then demonstrate how the usual results concerning monitoring and end-of-trial estimation follow. Finally, we note that this bijection between the weighted logrank statistic and the weighted average logged relative risk allows the values of the monitoring statistic, efficacy and futility boundaries, and reported point estimate and confidence interval to be cast into a clinically meaningful scale. \section{Terminology and framework} We consider a two-armed randomized trial of the effect of an intervention upon a time to event that is run until time $\tau$. Let $\tilde T_i$ be the possibly unobserved time to event and let $C_i$ be a right censoring time. We assume non-informative censoring for simplicity. Let $T_i = \tilde T_i \wedge C_i$ be the observed time on study and let $\delta_i = I(\tilde T_i \leq C_i)$ be the event indicator. Let $X_i$ indicate membership in the intervention arm ($X_i=1$) or control arm ($X_i=0$). We assume, conditional upon $X_i$, that individuals $i = 1,\ldots,n$ are distributed independently and identically. Let $dH_0(t)$ and $dH_1(t)$ be the trial-arm-specific cumulative hazard increments. We assume throughout that $H_0(t)$ is finite for all $t$ on $[0,\tau]$. For the instantaneous logged hazard ratio, we write \begin{equation} \beta(t) = \log \left\{ \frac{dH_1(t)}{dH_0(t)}\right\}\,. \end{equation} Let $N_i(t) = I( T_i \leq t, \delta_i = 1)$ and $dN_i(t) = N_i(t) - N_i(t-)$ be the subject level counting process and its increments, respectively. Let $N_n(t) = \sum_i N_i(t)$ and $dN_n(t) = N_n(t) - N_n(t-)$ be the aggregated counting process and its increments, respectively. 
Note that the following difference is a compensated counting process martingale: \begin{equation} dM_i(t) = dN_i(t) - I(T_i \geq t) \exp(X_i \beta(t)) dH_0(t) \end{equation} Let $E_n(t, 0) = \sum_i X_i I(T_i \geq t)/\sum_i I(T_i \geq t)$ denote the proportion of the population at risk at time $t$ in the intervention arm, let $e(t, 0) = \mathop{{\mathrm{lim}}_{a.s.}}_{n\rightarrow\infty} E_n(t, 0)$ and let $G(t) = \mathop{{\mathrm{lim}}_{a.s.}} dN_n(t)/n$. Let ${I}\kern-.3em{F}_n(t) = \int_0^t E_n(\xi, 0) (1-E_n(\xi, 0))\,dN_n(\xi)/n$ and let ${I}\kern-.3em{F}(t) = \int_0^t e(\xi, 0) (1-e(\xi, 0))\,dG(\xi)$. We introduce the following notation for cross moment integrals against $d{I}\kern-.3em{F}$ over $(0,t)$: \begin{equation} \langle \psi_1 | {I}\kern-.3em{F} | \psi_2\rangle_t = \int_0^t \psi_1(\xi) \,\psi_2(\xi) d{I}\kern-.3em{F}(\xi)\,.\label{eqn:brkt} \end{equation} \noindent For reasons that will become clear below, we consider the target of our investigation to be the following weighted average logged relative risk: \begin{equation} \beta^{\star} = \frac{\langle Q | {I}\kern-.3em{F} |\beta\rangle_{\tau}}{\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}} \,.\label{eqn:betastar} \end{equation} Let $q(t) = \beta(t)/\beta^{\star}$. This provides a representation of the instantaneous logged relative risk function, $\beta(t) = \beta^{\star}\,q(t)$, as the product of its weighted average value, $\beta^{\star}$, times a shape function, $q$. Note it follows that the shape function has weighted average value equal to 1: \begin{equation} 1 = \frac{\langle Q | {I}\kern-.3em{F} | q \rangle_{\tau}}{\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}} \,.\label{eqn:intQqeq1} \end{equation} \noindent At follow-up time $t$, the $\sqrt{n}$ normalized score statistic with weighting function $Q$ is: \begin{equation} U_n(t) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \int_0^{t} Q(\xi) \left\{ X_i - E_n(\xi, 0)\right\} dN_i(\xi)\,. \end{equation} Its estimated variance is: \begin{equation} V_n(t) = \frac{1}{n} \int_0^{t} Q^2(\xi) E_n(\xi, 0) \left(1-E_n(\xi, 0)\right) dN_n(\xi) = \langle Q | {I}\kern-.3em{F}_n | Q \rangle_t\,. \end{equation} Let $v(t) = \mathop{{\mathrm{lim}}_{a.s.}} V_n(t)$. Note that $v(t)= \langle Q | {I}\kern-.3em{F} | Q \rangle_t$. Let $f_n(t;\tau) = V_n(t)/V_n(\tau)$ and $f(t; \tau) = v(t)/v(\tau)$. We will on occasion use the shorthand $f_{n,j}$ and $f_j$ for $f_n(t_j; \tau)$ and $f(t_j;\tau)$, respectively, where $t_j$ denotes the calendar time of the $j$-th analysis. Also, let $m_n(t) = \langle Q | {I}\kern-.3em{F}_n | 1 \rangle_t$ and $m(t) = \langle Q | {I}\kern-.3em{F} | 1 \rangle_t$. We consider the weighted log-rank (WLR) statistic at time $t$ on several ``scales'' \begin{itemize} \item[(i)]{The standard normal scale: $Z_n(t) = U_n(t)/\sqrt{V_n(t)}$} \item[(ii)]{The ``Brownian scale'': $X_n(t) = U_n(t)/\sqrt{V_n(\tau)}$} \end{itemize} \section{Main Result} \begin{cond}\label{cond:betabdd} The instantaneous logged relative risk function, $\beta$, is bounded on $[0,\tau]$. \end{cond} \begin{cond}\label{cond:Qbdd} The chosen weighting function, $Q$, is bounded on $[0,\tau]$ and deterministic. \end{cond} Recall that a weighting function is always non-negative. The stipulated boundedness in conditions \ref{cond:betabdd} and \ref{cond:Qbdd} above can be relaxed to being of class $L^2$ with respect to the measure $d{I}\kern-.3em{F}$, as this is all that is really required.
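The quantities defined above are straightforward to compute from trial data. The following is a minimal illustrative sketch (in Python, with generic variable names; it is not part of the formal development) of how $U_n(t)$ and $V_n(t)$, and hence $Z_n(t)$ and $X_n(t)$, might be evaluated for an arbitrary deterministic weighting function $Q$. \begin{verbatim}
import numpy as np

def wlr(T, delta, X, Q, t):
    """Weighted log-rank pieces at follow-up time t.
    T, delta, X: numpy arrays of observed times, event indicators, arm labels.
    Q: deterministic weight function of time.
    Returns (U_n(t), V_n(t)); Z_n(t) = U/sqrt(V) and
    X_n(t) = U/sqrt(V_n(tau))."""
    n = len(T)
    U, V = 0.0, 0.0
    for s in np.unique(T[(delta == 1) & (T <= t)]):   # distinct event times
        at_risk = T >= s
        E = X[at_risk].sum() / at_risk.sum()          # E_n(s, 0)
        d = (T == s) & (delta == 1)                   # events at time s
        U += Q(s) * (X[d] - E).sum()
        V += Q(s) ** 2 * E * (1.0 - E) * d.sum()
    return U / np.sqrt(n), V / n
\end{verbatim}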
\noindent While the context will involve monitoring the statistic at a sequence of interim analyses, for the time being we suppress this aspect and consider instead the following more general and generic result, which holds under the weakest set of assumptions: \begin{thm}\label{thm:asymp} Assume conditions \ref{cond:betabdd} and \ref{cond:Qbdd}. Then, under the family of local alternatives, $\beta_n^{\star} = b^{\star}/\sqrt{n}$, the score statistic, normalized to the ``Brownian scale'', is asymptotically a Brownian motion on $[0, 1]$ plus a drift: \begin{equation} X_n(t) \buildrel{\cal D}\over\longrightarrow W(f(t;\tau)) + \mu(t)\,, \end{equation} where the ``time scale'' for the Brownian motion is the variance ratio or information fraction, $f(t;\tau) = v(t)/v(\tau)$, and the drift, parameterized by $t$, is \begin{equation} \mu(t) = \frac{\langle Q | {I}\kern-.3em{F} | q\rangle_t}{\sqrt{\langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}}} \, b^{\star}\,.\label{eqn:mut} \end{equation} \end{thm} The proof of Theorem \ref{thm:asymp} is given in appendix \ref{sec:thm1proof}. Notice, first, that from equations \ref{eqn:intQqeq1} and \ref{eqn:mut}, it follows that the value of the drift function at the scheduled end of the trial is \begin{equation} \mu(\tau) = \frac{\langle Q | {I}\kern-.3em{F} | 1\rangle_{\tau}}{\sqrt{\langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}}} \, b^{\star}\,.\label{eqn:mutau} \end{equation} Thus, without any additional assumptions on the shape function, $q$, we have the following corollary: \begin{cor}\label{cor:bstarTau} At the planned conclusion of the trial, $\tau$, an estimate of $\beta^{\star}$ is given by the following: \begin{equation} {\widehat \beta^{\star}} = X_n(\tau) \frac{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}}{\sqrt{n} \,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}}\,. \end{equation} \begin{itemize} \item[(i)]{${\widehat \beta^{\star}}$ is unbiased.} \item[(ii)]{An estimate of its variance is given by \begin{equation} \var{\widehat \beta^{\star}} = \frac{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}{n \,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}^2}\,.\label{eqn:varhatbetatau} \end{equation} } \end{itemize} \end{cor} \section{Estimates of $\beta^{\star}$ in a Trial Stopped Early} \noindent Obtaining an estimate of $\beta^{\star}$ at a trial stopped early due to an efficacy boundary crossing will require more assumptions on the shape function, $q$. At a minimum, in order to have a monotone drift function, which is necessary for proper monitoring, we require the following. \begin{cond}\label{cond:qnonneg} The shape function, $q$, is non-negative. \end{cond} Since the drift function's dependence on $t$ is through an integral of a non-negative function, we have the following corollary: \begin{cor}\label{cor:bstarq} If conditions \ref{cond:betabdd}, \ref{cond:Qbdd} and \ref{cond:qnonneg} are true then the conclusion of theorem \ref{thm:asymp} holds and the drift function is monotone increasing or decreasing in $t$, depending upon the sign of $b^{\star}$. \end{cor} Note also that as the inverse of an increasing function is also increasing, the drift function can also be considered a monotone function of the information fraction. This would, of course, lead to a natural estimate of $\beta^{\star}$ in a trial stopped early except for the fact that we have no knowledge of $q$.
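We also record, though it is not stated explicitly in Corollary \ref{cor:bstarTau}, that at $\tau$ the Wald statistic formed from ${\widehat \beta^{\star}}$ and the variance estimate \ref{eqn:varhatbetatau} collapses back onto the WLR statistic on the standard normal scale: \begin{equation*} \frac{{\widehat \beta^{\star}}}{\sqrt{\var{\widehat \beta^{\star}}}} = X_n(\tau)\, \frac{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}}{\sqrt{n}\,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}} \cdot \frac{\sqrt{n}\,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}}{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}} = X_n(\tau) = Z_n(\tau)\,, \end{equation*} so that, at the planned conclusion of the trial, inference on the scale of the natural parameter agrees with inference on the usual standard normal scale, just as in the proportional hazards case.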
In order to have a more useful estimator for $\beta^{\star}$ in trials stopped early, we opt for a semi-parametric model. In the following, we list two possibilities. The most natural shape condition to impose holds if our choice of weighting function was the optimal one among all possible choices. \begin{cond}\label{cond:qeqKQ} The shape function, $q$, is proportional to our chosen weighting function, $q(t) = K\,Q(t)$. \end{cond} Note that as the weighted average of the shape function must equal 1, as in equation \ref{eqn:intQqeq1}, it follows that the constant of proportionality, $K$, must be \begin{equation} K = \frac{\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}}{\langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}} \,.\label{eqn:Kdef} \end{equation} \begin{cor}\label{cor:bstarqeqKQ} If conditions \ref{cond:betabdd}, \ref{cond:Qbdd} and \ref{cond:qeqKQ} are true then \begin{itemize} \item[(i)]{$X_n$ is asymptotically a Brownian motion with a drift that is linear in the information fraction: \begin{equation} \mu(t) = \frac{\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}}{\sqrt{\langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}}} f(t; \tau) \, b^{\star}\,.\label{eqn:mutKQ} \end{equation} } \item[(ii)]{If the trial is stopped at analysis number $J$ at calendar time $t_J$ due to an efficacy boundary crossing, then we have the following estimate of $\beta^{\star}$ \begin{equation} {\widehat \beta^{\star}} = \frac{X_n(t_J)}{f_n(t_J; \tau)} \,\frac{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}}{\sqrt{n} \,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}} \end{equation} } \item[(iii)]{An estimate of the mean-squared error is given by: \begin{equation} \mse{\widehat \beta^{\star}} = \frac{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}{n \,f_n(t_J; \tau)\,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}^2} \end{equation} } \end{itemize} \end{cor} Another natural shape condition holds when we have opted for a weighted statistic but the true shape is constant. \begin{cond}\label{cond:qeq1} The shape function, $q$, is identically 1. \end{cond} \begin{cor}\label{cor:bstarqeq1} If conditions \ref{cond:betabdd}, \ref{cond:Qbdd} and \ref{cond:qeq1} are true then \begin{itemize} \item[(i)]{$X_n$ is asymptotically a Brownian motion with the following drift: \begin{equation} \mu(t) = \frac{\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}}{\sqrt{\langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}}} r(t, \tau) \, b^{\star}\,,\label{eqn:mut1} \end{equation} where $r(t; \tau) = \langle Q | {I}\kern-.3em{F} | 1 \rangle_t/\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}$, which is an increasing function of $t$ and takes the values $0$ at $t=0$ and $1$ at $t=\tau$.
} \item[(ii)]{If the trial is stopped at analysis number $J$ at calendar time $t_J$ due to an efficacy boundary crossing, then we have the following estimate of $\beta^{\star}$ \begin{equation} {\widehat \beta^{\star}} = \frac{X_n(t_J)}{r_n(t_J; \tau)} \, \frac{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}}{\sqrt{n} \,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}}\,, \end{equation} where $r_n(t; \tau) = \langle Q | {I}\kern-.3em{F}_n | 1 \rangle_t/\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}$ } \item[(iii)]{An estimate of the mean-squared error is given by: \begin{equation} \mse{\widehat \beta^{\star}} = \frac{f_n(t_J;\tau) \,\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}{n \,r_n(t_J;\tau)^2\, \langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}^2} \end{equation} } \end{itemize} \end{cor} \section{Application to Monitoring and Final Reporting in a Clinical Trial} The relationship between the drift of the WLR statistic and the weighted average logged relative risk parameter provided by theorem \ref{thm:asymp} and its corollaries can be used in the monitoring and final reporting of a clinical trial. \subsection{Futility Boundary} Our comments regarding monitoring a trial are made within the context of boundaries constructed using the Lan-Demets procedure, \cite{LanK:1983}. Construction of the efficacy boundary is done under the null hypothesis that the drift function is identically zero and can be done without appealing to the results presented here. If a futility boundary is specified in the design, then under either of the shape assumptions one can apply the corresponding corollary \ref{cor:bstarqeqKQ} or corollary \ref{cor:bstarqeq1} to calculate the drift function at each interim analysis, which is required to compute the futility boundary under the Lan-Demets approach \cite{LanK:1983}. Note that the shape assumption being made must be part of the interim analysis plan design. In the following discussion we will assume that the optimal weighting shape condition \ref{cond:qeqKQ} was specified in the design, so that the discussion focuses on the application of corollary \ref{cor:bstarqeqKQ}. In this case, $\beta^{\star}$ is the weighted average logged relative risk that the study is powered to detect and must also be specified in the interim analysis plan design. The values of $v(\tau)=\langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}$ and $m(\tau)=\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}$ at the planned termination of the study, $\tau$, must also be specified in the interim analysis plan design. We demonstrate in appendix \ref{sec:EOSfunctionals} how these functionals can be projected for a specific choice of weighting function, $Q$, when the only sources of censoring are administrative censoring and other cause mortality, based upon projected values of the cross-arm pooled cumulative hazard function at several landmark times on study. We remark here that, following consensus, we recommend using a non-binding futility boundary which is constructed after construction of an efficacy boundary that ignores the existence of the futility boundary. This is preferred to the joint construction of efficacy and futility boundaries, as that approach results in a discounted efficacy criterion.
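To make the preceding paragraph concrete, the following minimal sketch (in Python; the numerical inputs are hypothetical and purely illustrative) computes the drift values required by the Lan-Demets futility calculation at each interim analysis under the optimal weighting shape assumption \ref{cond:qeqKQ}, i.e. equation \ref{eqn:mutKQ} with $b^{\star}=\sqrt{n}\,\beta^{\star}$. \begin{verbatim}
import numpy as np

# hypothetical design inputs (illustrative only)
n         = 50000                 # total randomized
beta_star = np.log(0.85)          # design alternative, log relative-risk scale
v_tau     = 0.012                 # projected <Q|F|Q> at planned termination
m_tau     = 0.020                 # projected <Q|F|1> at planned termination
f         = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # information fractions

# drift on the Brownian scale, linear in the information fraction
mu = (m_tau / np.sqrt(v_tau)) * f * np.sqrt(n) * beta_star
\end{verbatim}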
\subsection{Prediction at End of Trial} When the trial is stopped at an efficacy or futility boundary crossing, or at the scheduled end of the trial, and if the optimal weighting shape assumption \ref{cond:qeqKQ} was specified in the design, then corollary \ref{cor:bstarqeqKQ} can be used to convert the value of the WLR statistic on the Brownian scale, $X_n(t_j)$, to an estimate of the weighted average logged relative risk, $\widehat\beta^{\star}$. Therefore, our point estimate is \begin{equation} {\widehat \beta^{\star}} = \frac{X_n(t_j)}{f_{n,j}} \,\frac{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}}{\sqrt{n} \,\langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}} \end{equation} We use the values of $v(\tau)=\langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}$ and $m(\tau)=\langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}$ which are specified in the interim analysis plan design. As mentioned above, when obtained at an efficacy boundary crossing, estimates of this type are known to be biased away from the null (see e.g. Liu and Hall, \cite{LiuA:1999}). The construction of a design-adjusted confidence interval and adjustment of this estimate for the above mentioned bias are standard results, especially under the optimal weighting shape condition \ref{cond:qeqKQ} which leads, in corollary \ref{cor:bstarqeqKQ}, to a drift that is linear in the information fraction. For the sake of completeness, we outline below how to compute a design-adjusted p-value, construct a design-adjusted confidence interval and calculate the bias adjusted estimate of the weighted average logged relative risk. All three of these tasks involve the sampling density under the null hypothesis of the sufficient statistic, $(J, X_n(t_J))$, where $J$ and $X_n(t_J)$ are the analysis number and the value of the weighted logrank statistic at an efficacy crossing. The sampling density of $(J, X_n(t_J))$ takes the following form. First, for $j=1$, $\pi((1,x))$ is the density of $X_n(t_1)$ evaluated at $x$. For $j>1$, \begin{eqnarray} \pi((j, x) \kern-0.75em &;&\kern-0.75em \mathbf{b}_{1:(j-1)}, \mathbf{f}_{1:j}) \label{eqn:psi}\\ &=& \frac{d}{dx} {{\rm I}\kern-.18em{\rm P}}_{H_0}\{J=j \mathrm{~and~} X_n(t_{\ell}) < \sqrt{f_{\ell}} b_{\ell}\,,\, \ell=1,\ldots,j-1, X_n(t_j) \leq x\}\nonumber \end{eqnarray} Here $\mathbf{b}_{1:(j-1)}$ is the sequence of efficacy boundary points at all prior analyses and $\mathbf{f}_{1:j}$ is the sequence of information fractions at all prior and current analyses. In the following, $\mathbf{b}_{1:{\ell}}$ and $\mathbf{f}_{1:{\ell}}$ for ${\ell} < 1$ denote the empty sequence. The construction and form of this density is reviewed in appendix \ref{sec:density}. Let \begin{equation} \bar{\Pi}((j,x) ; \mathbf{b}_{1:(j-1)}, \mathbf{f}_{1:j}) = \int_x^{\infty} \pi((j,\xi); \mathbf{b}_{1:(j-1)}, \mathbf{f}_{1:j}) d\xi \label{eqn:Psibar} \end{equation} be the joint probability under $\pi$ that $J=j$ and $X_n(t_j)$ is in the right tail $(x, \infty)$. In order to calculate a p-value and construct a confidence interval which account for the sequential design, we must choose an ordering of the sample space for the statistic $(J, X_n(t_J))$. Here we prefer to use the following ordering: $(j, x) > (k, y)$ if and only if ($j=k$ and $x>y$) or $j<k$.
This ordering is applicable when the rejection region is convex, as is the case with Lan-Demets boundaries constructed using a smooth spending function. The discussion of the p-value and of the confidence interval is in the setting of symmetric two-sided boundaries with a positive sign of the alternative hypothesis, as it is a simple matter to apply these results to the case where the sign of the alternative hypothesis is negative. \vskip0.5truein \noindent{\bf{P-value}}\hfil\break Under the ordering given above, the region further away from the null than $(J, X_n(t_J))$ is the union of all prior rejection regions with the right tail at $X_n(t_J)$. Thus the design-adjusted or sequential p-value is: \begin{equation} \bar{\Pi}((J,X_n(t_J)) ; \mathbf{b}_{1:(J-1)}, \mathbf{f}_{1:J}) + \sum_{\ell=1}^{J-1}\bar{\Pi}((\ell,b_{\ell}) ; \mathbf{b}_{1:{\ell-1}}, \mathbf{f}_{1:\ell})\,. \end{equation} \vskip0.5truein \noindent{\bf{Confidence Interval}}\hfil\break \noindent If the probability of type one error that remained prior to analysis $J$ is $\alpha_{tot} - \alpha_{J-1}$, then a two-sided design-adjusted confidence interval for $\widehat\beta^{\star}$ is derived as follows. If we denote by $x_u$ the solution in $x$ of the equation \begin{equation} \alpha_{tot} - \alpha_{J-1} = \bar{\Pi}((J,x) ; \mathbf{b}_{1:(J-1)}, \mathbf{f}_{1:J}) + \sum_{\ell=1}^{J-1}\bar{\Pi}((\ell,b_{\ell}) ; \mathbf{b}_{1:{\ell-1}}, \mathbf{f}_{1:\ell})\,, \end{equation} then the design-adjusted confidence interval is \begin{equation} \widehat\beta^{\star} \pm \frac{x_u}{\sqrt{f_{n,J}}}\sqrt{\mse{\widehat \beta^{\star}}}\,, \end{equation} where $\mse{\widehat \beta^{\star}}$ is the estimated mean-squared error of $\widehat\beta^{\star}$ as given in part (iii) of corollary \ref{cor:bstarqeqKQ}. Note that when the efficacy boundary is one-sided one can still construct a two-sided confidence interval by replacing $\alpha_{tot} - \alpha_{J-1}$ above with one half its value. \vskip0.5truein \noindent{\bf{Bias Adjustment}}\hfil\break \noindent As in Liu and Hall, \cite{LiuA:1999}, bias adjustment is done recursively as follows.
First, \begin{equation} \widetilde \zeta(1, x) = \frac{x}{f_1} \end{equation} Continuing, \begin{equation} \widetilde \zeta(j, x) = \int_{-\infty}^{\sqrt{f_{j-1}} b_{j-1}} \widetilde \zeta(j-1, \xi)\,\pi((j-1, \xi); \mathbf{b}_{1:(j-2)}, \mathbf{f}_{1:(j-1)}) \, \phi_{_{\Delta_j}}(x-\xi) \,d\xi \end{equation} \noindent The bias adjusted estimate, $\widetilde\beta^{\star}$, of the weighted average logged relative risk, $\beta^{\star}$, is obtained by replacing $X_n(t_J)/f_{n,J}$ in part (ii) of corollary \ref{cor:bstarqeqKQ} with $\widetilde\zeta(J, X_n(t_J))$ to obtain the following: \begin{equation} \widetilde\beta^{\star} = \widetilde\zeta(J, X_n(t_J))\,\frac{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}}{\sqrt{n} \, \langle Q | {I}\kern-.3em{F}_n | 1 \rangle_{\tau}} \end{equation} The design-adjusted confidence interval is the same as given above, but now centered about $\widetilde\beta^{\star}$: \begin{equation} \widetilde\beta^{\star} \pm \frac{x_u}{\sqrt{f_{n,J}}}\sqrt{\mse{\widehat \beta^{\star}}}\,, \end{equation} \section{The NLST} The design of the National Lung Screening Trial (NLST) \cite{NLST:2011} interim analysis plan stipulated a one-sided efficacy boundary constructed using the Lan-Demets procedure with a total probability of type one error set to 0.05. The trial had 90\% power to detect a relative risk of 0.79 at a sample size of 25,000 per arm, accounting for contamination and non-compliance that could attenuate this effect to 0.85. The trial began randomization on August 5th, 2002 and concluded randomization on April 26th, 2004. A non-binding futility boundary was used. The drift was derived under the optimal weighting shape assumption, \ref{cond:qeqKQ}, and incorporated the design alternative $\beta^{\star} = \log(0.85)$. Initial estimates of $v(\tau)$ and $m(\tau)$ were posed in the design. These were updated by using a least squares quadratic curve to project required future values of $H$ as data accumulated. During the run of the trial, projected values of the end of trial functionals $v(\tau)$ and $m(\tau)$ did not vary more than $\pm 5\%$. Interim analyses occurred starting in the spring of 2006 and continued annually until the 5th analysis. The 6th analysis occurred 6 months after the 5th. Data on the primary endpoint were backdated roughly 18 months to allow more complete ascertainment by the endpoint verification team. The efficacy boundary was crossed at the sixth interim analysis, using data backdated to January 15th, 2009. Data on the primary endpoint were collected only for events occurring through December 31, 2009, so this was used as the scheduled termination date. The raw estimated weighted logged relative risk and its design-adjusted confidence interval were derived. The bias adjusted weighted logged relative risk was compared to the raw estimate. As the raw estimate is asymptotically unbiased, and since the crude risk ratio is the most straightforward and tangible summary of the trial results, the trial leadership decided to report the crude risk ratio together with the exponentiated raw estimate's design-adjusted confidence interval. \section{Discussion} We have shown that there is a natural clinically meaningful parameter, the weighted average logged relative risk, that is connected to the weighted logrank statistic. When $\beta(t)$ does not change sign, the connection is a bijection. We have shown that under suitable shape assumptions, this bijection can be estimated at each analysis.
We have shown how this bijection between the weighted logrank statistic and the weighted average logged relative risk allows the values of the monitoring statistic, efficacy and futility boundaries, and reported point estimate and confidence interval to be cast into a clinically meaningful scale. We have indicated how to derive a design-adjusted p-value and confidence interval and how bias adjustment of the estimate may be done using known methods. Finally, we have documented several decisions made in the design of the NLST interim analysis plan and in reporting its results on the primary endpoint. \section{Appendices} \subsection{Proof of Theorem \ref{thm:asymp}} \label{sec:thm1proof} \noindent We follow the usual method of adding and subtracting the differential of the compensator, and thereby express $U_n$ as the sum of a term that is asymptotically a mean-zero Gaussian process and a drift term which grows as $\sqrt{n}$. \begin{eqnarray} U_n(t) &=& \frac{1}{\sqrt{n}} \sum_{i=1}^n \int_0^{t} Q(\xi) \left\{ X_i - E_n(\xi, 0)\right\} dM_i(\xi) \nonumber\\ &&+\;\;\frac{1}{\sqrt{n}} \sum_{i=1}^n \int_0^{t} Q(\xi) \left\{ X_i - E_n(\xi, 0)\right\} I(T_i\geq \xi) \exp(X_i q(\xi) \beta^{\star}) dH_0(\xi)\nonumber\\ &&\nonumber\\ &=& \frac{1}{\sqrt{n}} \sum_{i=1}^n \int_0^{t} Q(\xi) \left\{ X_i - E_n(\xi, 0)\right\} dM_i(\xi) \nonumber\\ &&+\;\;\sqrt{n} \int_0^{t} Q(\xi) \left\{ E_n(\xi, \beta^{\star}) - E_n(\xi, 0)\right\} R_n(\xi, \beta^{\star}) dH_0(\xi)\,, \end{eqnarray} where in the above, $R_n(\xi, \beta^{\star}) = 1/n \sum_i I(T_i \geq \xi)\exp(X_i q(\xi) \beta^{\star})$, and $E_n(\xi, \beta^{\star}) = 1/(n R_n(\xi, \beta^{\star})) \sum_i X_i I(T_i \geq \xi)\exp(X_i q(\xi) \beta^{\star})$. \noindent By linearizing the difference, $E_n(\xi, \beta^{\star}) - E_n(\xi, 0)$, about $\beta^{\star} = 0$ we obtain \begin{eqnarray} U_n(t) &=& \frac{1}{\sqrt{n}} \sum_{i=1}^n \int_0^{t} Q(\xi) \left\{ X_i - E_n(\xi, 0)\right\} dM_i(\xi) \nonumber\\ &&+\;\;\sqrt{n} \beta^{\star} \int_0^{t} Q(\xi) q(\xi) E_n(\xi, 0)\left\{1 - E_n(\xi, 0)\right\} R_n(\xi, \beta^{\star}) dH_0(\xi)\,.\nonumber\\ &&\label{eqn:U} \end{eqnarray} We normalize by $\sqrt{V_n(\tau)}$ and replace the differential $R_n(\xi, \beta^{\star}) dH_0(\xi)$ with $dN_n(\xi)/n$. The latter is possible because integrals of bounded functions against the difference of the differentials are consistent to zero. \begin{eqnarray} X_n(t) &=& \frac{1}{\sqrt{n\,V_n(\tau)}} \sum_{i=1}^n \int_0^{t} Q(\xi) \left\{ X_i - E_n(\xi, 0)\right\} dM_i(\xi) \nonumber\\ &&+\;\;\sqrt{\frac{n}{V_n(\tau)}} \beta^{\star} \int_0^{t} Q(\xi) q(\xi) E_n(\xi, 0)\left\{1 - E_n(\xi, 0)\right\} \frac{dN_n(\xi)}{n}\nonumber\\ &&\nonumber\\ &=& W_n(f_n(t;\tau)) \;+\; \frac{\langle Q | {I}\kern-.3em{F}_n | q\rangle_t}{\sqrt{\langle Q | {I}\kern-.3em{F}_n | Q \rangle_{\tau}}} \, \sqrt{n} \beta^{\star}\,. \end{eqnarray} The first term is easily recognized to be asymptotic in distribution to a standard Brownian motion. The reader can either directly apply Rebolledo's martingale central limit theorem, verifying that in the case that integrands and intensities are bounded all conditions are satisfied, or apply a more direct result, such as theorem (6.2.1) in Fleming and Harrington \cite{FlemingT:1991}.
Under the family of local alternatives, $\beta^{\star}_n = b^{\star}/\sqrt{n}$, by the comments following expression \ref{eqn:U} the second term is easily seen to be consistent for the drift function listed in expression \ref{eqn:mut}. Therefore the result follows by Slutsky's theorem. \subsection{End of Trial Functionals} \label{sec:EOSfunctionals} \noindent In this section we demonstrate how to project values of the variance $v(\tau) = \langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau}$, and the ``first moment'' $m(\tau) = \langle Q | {I}\kern-.3em{F} | 1 \rangle_{\tau}$ at the scheduled end of study, $\tau$. This is done in the specific case of the ``ramp plateau'' weighting function which was used for interim monitoring and reporting in the NLST. This is the function which takes the value 0 at $t=0$, increases linearly to the value 1 at $t=t_c$, and then maintains this constant value thereafter: \begin{equation} Q(t) = \frac{t}{t_c} \wedge 1 \end{equation} In the NLST, the value $t_c=4$ years was used. Next, by imposing some mild assumptions we will be able to express all quantities in the integrands in terms of the cross-arm pooled cancer mortality cumulative hazard function, $H$, and thereby solve the integrals via a simple change of variables. The resulting expressions require only values of $H(t)$ at $t=t_c$, $t=\tau-t_{er}$ and $t=\tau$, where $t_{er}$ is the calendar time at which randomization was concluded. First we shall list the required assumptions. In the following discussion, $S$, $S_{lr}$ and $S_{oth}$ are survival functions corresponding to the cross-arm pooled cancer mortality, administrative censoring or ``live removal'', and other cause mortality. The latter two were the only sources of censoring in the NLST because complete ascertainment with respect to mortality was possible through the use of matching death certificates via the National Death Index. \begin{cond}\label{cond:prop} Other cause mortality is proportional to cancer mortality, i.e. $\theta = -d\log(S_{oth})/dH$ is constant. \end{cond} \begin{cond}\label{cond:propall} Proportional allocation: $e(\xi, 0) \equiv e(0, 0)$. \end{cond} \begin{cond}\label{cond:unifaccrH} Accrual is uniform on the scale of $H$, so that \begin{equation} S_{lr}(\xi) = \frac{H(\tau) - H(\xi)}{H(\tau) - H(\tau - t_{er})}\wedge 1, \end{equation} where $\tau$ is the time at which the required number of events are obtained, and $t_{er}$ is the time at which randomization is completed. \end{cond} \begin{cond}\label{cond:wtfn} \begin{equation} Q(\xi) = \frac{\xi}{t_c} \wedge 1 \equiv \frac{1-\exp(-H(\xi) \wedge H(t_c) )}{1-\exp(-H(t_c))}. \end{equation} \end{cond} \noindent The other cause versus cancer proportionality assumption is perhaps the most arguable. However, the extent to which it is violated in practice has little impact upon our results, as other cause mortality enters our results only through its survival function, which maintains a value in excess of 0.95 throughout the trial. The proportional allocation assumption approximates what we see in practice quite closely, especially in the case of a large trial of a rare event. In the NLST there was 1 to 1 randomization, so that $e(0, 0) = 1/2$. The extent to which the latter two assumptions, \ref{cond:unifaccrH} and \ref{cond:wtfn}, hold depends upon the extent to which pooled cancer-specific mortality grows at a constant rate.
In the case of the NLST, the pooled cancer mortality cumulative hazard function did grow at an approximately linear rate. \hfil\break \noindent{\bf{Variance at Planned Termination}} \begin{eqnarray} v(\tau) &=& \langle Q | {I}\kern-.3em{F} | Q \rangle_{\tau} = \int_0^{\tau} Q^2(\xi) e(\xi, 0) \left(1-e(\xi, 0)\right) dG(\xi) \nonumber\\ &&\nonumber\\ &=& \int_0^{\tau} Q^2(\xi) e(\xi, 0) \left(1-e(\xi, 0)\right) S_{oth}(\xi) S_{lr}(\xi) S(\xi) dH(\xi)\,,\label{eqn:computeV} \end{eqnarray} where, as above, $S$, $S_{lr}$ and $S_{oth}$ are the survival functions for cross-arm pooled cancer mortality, administrative censoring (``live removal'') and other cause mortality; since the latter two were the only sources of censoring in the NLST, the differential $dG$ may be expressed in this way. Under assumptions \ref{cond:prop}, \ref{cond:propall}, \ref{cond:unifaccrH}, and \ref{cond:wtfn}, we apply the change of variables, $\eta = H(\xi)$, to obtain \begin{eqnarray*} v(\tau) &=& \frac{1}{4} \int_0^{H(\tau)} \left(1-{\mathrm{e}}^{-(\eta\wedge H(t_c))}\right)^2 \,{\mathrm{e}}^{-\theta \eta} \left\{\frac{H(\tau) - \eta}{H(\tau) - H(\tau-t_{er})} \wedge 1 \right\} \,{\mathrm{e}}^{-\eta} d\eta\\ \\ &=& \frac{1}{4} \int_0^{H(t_c)\wedge H(\tau-t_{er})} \left(1 - 2 {\mathrm{e}}^{-\eta} + {\mathrm{e}}^{-2\eta}\right) \,{\mathrm{e}}^{-(\theta + 1)\eta} d\eta\\ \\ &&\;\;+ \frac{I\left(t_c < \tau - t_{er}\right)}{4} \,\left(1-{\mathrm{e}}^{-H(t_c)}\right)^2\, \int_{H(t_c)}^{H(\tau - t_{er})} {\mathrm{e}}^{-(\theta + 1)\eta} d\eta\\ \\ &&\;\;+ \frac{I(\tau-t_{er} < t_c)}{4\left(H(\tau) - H(\tau-t_{er})\right)} \, \int_{H(\tau-t_{er})}^{H(t_c)}\,\left(1 - 2{\mathrm{e}}^{-\eta} + {\mathrm{e}}^{-2\eta}\right)\,{\mathrm{e}}^{-(\theta+1)\eta}\,\left(H(\tau) - \eta\right) d\eta\\ \\ &&\;\;+ \frac{\left(1-{\mathrm{e}}^{-H(t_c)}\right)^2}{4\left(H(\tau)-H(\tau-t_{er})\right)}\, \int_{H(\tau-t_{er})\vee H(t_c)}^{H(\tau)} {\mathrm{e}}^{-(\theta+1)\eta} \, \left(H(\tau)-\eta\right) d\eta\\ \\ &=& I_1 + I_2 + I_3 + I_4\,.
\end{eqnarray*} \noindent These evaluate to: \begin{eqnarray*} I_1 &=& \frac{1}{4}\left\{\frac{1-{\mathrm{e}}^{-(\theta+1)H_m}}{\theta+1}\;-\;2\,\frac{1-{\mathrm{e}}^{-(\theta+2)H_m}}{\theta+2}\;+\; \frac{1-{\mathrm{e}}^{-(\theta+3)H_m}}{\theta+3}\right\}\;\;{\rm where~} H_m = H(t_c)\wedge H(\tau-t_{er})\,,\\ \\ I_2 &=& I(t_c<\tau-t_{er})\,\left(1-{\mathrm{e}}^{-H(t_c)}\right)^2\, \frac{{\mathrm{e}}^{-(\theta+1)H(t_c)}-{\mathrm{e}}^{-(\theta+1)H(\tau-t_{er})}}{4(\theta+1)}\,,\\ \\ I_3 &=& \frac{I(\tau-t_{er} < t_c)}{4(H(\tau)-H(\tau-t_{er}))}\\ \\ &&\;\; \times \;\left\{ \left(\frac{{\mathrm{e}}^{-(\theta+1)H(\tau-t_{er})}}{\theta+1}- 2\frac{{\mathrm{e}}^{-(\theta+2)H(\tau-t_{er})}}{\theta+2} + \frac{{\mathrm{e}}^{-(\theta+3)H(\tau-t_{er})}}{\theta+3}\right)\left(H(\tau)- H(\tau-t_{er})\right) \right.\\ \\ &&\qquad-\; \left(\frac{{\mathrm{e}}^{-(\theta+1) H(t_c)}}{\theta+1} - 2\frac{{\mathrm{e}}^{-(\theta+2) H(t_c)}}{\theta+2} + \frac{{\mathrm{e}}^{-(\theta+3) H(t_c)}}{\theta+3}\right)\left(H(\tau) - H(t_c)\right) \\ \\ &&\qquad-\; \left(\frac{{\mathrm{e}}^{-(\theta+1)H(\tau-t_{er})}-{\mathrm{e}}^{-(\theta+1) H(t_c)}}{(\theta+1)^2} - 2\frac{{\mathrm{e}}^{-(\theta+2)H(\tau-t_{er})}-{\mathrm{e}}^{-(\theta+2) H(t_c)}}{(\theta+2)^2}\right. \\ \\ &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +\;\left.\left.\frac{{\mathrm{e}}^{-(\theta+3)H(\tau-t_{er})}-{\mathrm{e}}^{-(\theta+3) H(t_c)}}{(\theta+3)^2}\right)\right\}\,,\\ \\ I_4 &=& \frac{\left(1-{\mathrm{e}}^{-H(t_c)}\right)^2}{4 (\theta+1)}\\ \\ &&\kern-1em\times \;\left\{ \frac{H(\tau) -(H(\tau-t_{er})\vee H(t_c))}{H(\tau)-H(\tau-t_{er})}\,{\mathrm{e}}^{-(\theta+1)\left(H(\tau-t_{er}) \vee H(t_c) \right)}\; -\;\frac{{\mathrm{e}}^{-(\theta+1)(H(\tau-t_{er}) \vee H(t_c))}-{\mathrm{e}}^{-(\theta+1)H(\tau)}}{(\theta+1)(H(\tau)-H(\tau-t_{er}))}\right\} \end{eqnarray*} respectively.
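The closed forms above are easy to cross-check numerically. The sketch below (in Python; the values of $\theta$ and of $H$ at the landmark times are hypothetical) evaluates the integral in the first line of the display for $v(\tau)$ by quadrature; replacing the squared weight by a single factor gives $m(\tau)$ in the same way. \begin{verbatim}
import numpy as np
from scipy.integrate import quad

# hypothetical inputs: theta and H at the landmark times
theta, H_tc, H_ter, H_tau = 0.5, 0.008, 0.012, 0.020

def Q_eta(eta):            # ramp-plateau weight expressed on the H scale
    return (1 - np.exp(-min(eta, H_tc))) / (1 - np.exp(-H_tc))

def S_lr(eta):             # censoring survival under uniform accrual on H
    return min((H_tau - eta) / (H_tau - H_ter), 1.0)

def integrand(eta, p):     # p = 2 gives v(tau), p = 1 gives m(tau)
    return 0.25 * Q_eta(eta)**p * np.exp(-theta*eta) * S_lr(eta) * np.exp(-eta)

v_tau = quad(lambda e: integrand(e, 2), 0.0, H_tau)[0]
m_tau = quad(lambda e: integrand(e, 1), 0.0, H_tau)[0]
\end{verbatim}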
\vskip0.5truein \noindent{\bf{First Moment at Planned Termination}} \begin{eqnarray} m(\tau) &=& \int_0^{\tau} Q(\xi) e(\xi, 0) \left(1-e(\xi, 0)\right) dG(\xi) \nonumber\\ &&\nonumber\\ &=& \int_0^{\tau} Q(\xi) e(\xi, 0) \left(1-e(\xi, 0)\right) S_{oth}(\xi) S_{lr}(\xi) S(\xi) dH(\xi)\,.\label{eqn:computeM} \end{eqnarray} \noindent Under assumptions \ref{cond:prop}, \ref{cond:propall}, \ref{cond:unifaccrH}, and \ref{cond:wtfn}, we again apply the change of variables, $\eta = H(\xi)$, to obtain \begin{eqnarray*} m(\tau) &=& \frac{1}{4} \int_0^{H(\tau)} \left(1 - {\mathrm{e}}^{-\eta\wedge H(t_c)}\right)\,{\mathrm{e}}^{-\theta \eta}\,\left\{\frac{H(\tau) - \eta}{H(\tau) - H(\tau-t_{er})} \wedge 1 \right\} \,{\mathrm{e}}^{-\eta} d\eta\\ \\ &=& \frac{1}{4} \int_0^{H(t_c)\wedge H(\tau-t_{er})} \left(1-{\mathrm{e}}^{-\eta}\right)\,{\mathrm{e}}^{-\theta \eta}\,{\mathrm{e}}^{-\eta} d\eta\\ \\ &&\;+\;\frac{1}{4}\,I(t_c < \tau-t_{er})\,\left(1-{\mathrm{e}}^{-H(t_c)}\right)\,\int_{H(t_c)}^{H(\tau-t_{er})}\,{\mathrm{e}}^{-\theta\eta}\, {\mathrm{e}}^{-\eta} d\eta\\ \\ &&\;+\; \frac{1}{4}\,I(t_c > \tau-t_{er})\,\int_{H(\tau-t_{er})}^{H(t_c)} \left(1-{\mathrm{e}}^{-\eta}\right)\,{\mathrm{e}}^{-\theta \eta}\, \frac{H(\tau) - \eta}{H(\tau) - H(\tau-t_{er})}\,{\mathrm{e}}^{-\eta} d\eta\\ \\ &&\;+\; \frac{1}{4}\,I(t_c < \tau) \,\left(1-{\mathrm{e}}^{-H(t_c)}\right)\,\int_{H(t_c)\vee H(\tau-t_{er})}^{H(\tau)}\, {\mathrm{e}}^{-\theta \eta}\,\frac{H(\tau) - \eta}{H(\tau) - H(\tau-t_{er})} \,{\mathrm{e}}^{-\eta} d\eta\\ \\ &=& J_1 + J_2 + J_3 + J_4 \end{eqnarray*} \noindent These evaluate to \begin{eqnarray*} J_1 &=& \frac{1}{4} \left\{\frac{1-{\mathrm{e}}^{-(\theta+1)(H(t_c)\wedge H(\tau-t_{er}))}}{\theta+1} - \frac{1-{\mathrm{e}}^{-(\theta+2)(H(t_c)\wedge H(\tau-t_{er}))}}{\theta+2} \right\}\,,\\ \\ J_2 &=& \frac{1}{4}\,I(t_c < \tau-t_{er})\,\left(1-{\mathrm{e}}^{-H(t_c)}\right)\, \frac{{\mathrm{e}}^{-(\theta+1)H(t_c)} - {\mathrm{e}}^{-(\theta+1)H(\tau-t_{er})}}{\theta+1}\,,\\ \\ J_3 &=&\frac{I(t_c > \tau-t_{er})}{4\left(H(\tau) - H(\tau-t_{er})\right)}\\ \\ &&\qquad\qquad\times\;\left\{\left(\frac{\left(H(\tau)-H(\tau-t_{er})\right)\,{\mathrm{e}}^{-(\theta+1)H(\tau-t_{er})} - \left(H(\tau)-H(t_c)\right)\,{\mathrm{e}}^{-(\theta+1)H(t_c)}}{\theta+1}\right.\right.\\ \\ &&\qquad\qquad -\left.\frac{\left(H(\tau)-H(\tau-t_{er})\right)\,{\mathrm{e}}^{-(\theta+2)H(\tau-t_{er})} - \left(H(\tau)-H(t_c)\right)\,{\mathrm{e}}^{-(\theta+2)H(t_c)}}{\theta+2}\right)\\ \\ &&\qquad\qquad -\left.\left(\frac{ {\mathrm{e}}^{-(\theta+1)H(\tau-t_{er})} - {\mathrm{e}}^{-(\theta+1)H(t_c)}}{(\theta+1)^2} \;-\;\frac{{\mathrm{e}}^{-(\theta+2)H(\tau-t_{er})} - {\mathrm{e}}^{-(\theta+2)H(t_c)}}{(\theta+2)^2}\right)\right\}\\ \\ J_4 &=&\frac{I(t_c < \tau) \,\left(1-{\mathrm{e}}^{-H(t_c)}\right)}{4\left(H(\tau) - H(\tau-t_{er})\right)}\\ \\ &&\quad\times\;\left\{\frac{\left(H(\tau)-H(t_c\vee(\tau-t_{er}))\right)\, {\mathrm{e}}^{-(\theta+1)H(t_c\vee(\tau-t_{er}))}}{\theta+1} \; - \; \frac{{\mathrm{e}}^{-(\theta+1)H(t_c\vee(\tau-t_{er}))} - {\mathrm{e}}^{-(\theta+1)H(\tau)}}{(\theta+1)^2}\right\} \end{eqnarray*} respectively. \vskip0.5truein \noindent{\bf{Duration of Trial}}\hfil\break \noindent The duration of the NLST was part of the design.
In other situations, in which the design stipulates that the trial should run until the required number of events is attained, the above change of variables technique can be used to find a closed form expression for \begin{equation} G(\tau) = \int_0^{\tau} S_{oth}(\xi) S_{lr}(\xi) S(\xi) dH(\xi)\label{eqn:computeG} \end{equation} in terms of the projected values of $H$ at $t=\tau$ and $t=\tau-t_{er}$. Then, using the plug-in estimate ${{\rm I}\kern-.18em{\rm E}} N_n(\tau)/n$ for $G(\tau)$, this expression can be inverted to solve for $\tau$, the duration of the trial. \subsection{Sampling density of $(J, X_n(t_J))$} \label{sec:density} As in Armitage, McPherson and Rowe, \cite{ArmitageP:1969}, the sampling density of $(J, X_n(t_J))$ can be derived recursively as follows. Let $\Delta_j = f_{n,j} - f_{n,j-1}$ and let $\phi_v(x) = \phi(x/\sqrt{v})/\sqrt{v}$, where $\phi$ is the density of the standard normal. First, \begin{eqnarray} \pi((1,x)) = \phi_{_{f_1}}(x)\,. \end{eqnarray} Next, for all $j> 1$, \begin{eqnarray} \pi((j,x) \kern-1em &;&\kern-1em \mathbf{b}_{1:(j-1)}, \mathbf{f}_{1:j}) \nonumber\\ && \nonumber\\ \kern-1em&=&\kern-1em \int_{-\infty}^{\sqrt{f_{j-1}} b_{j-1}} \pi((j-1, \xi);\mathbf{b}_{1:(j-2)},\mathbf{f}_{1:(j-1)})\, \phi_{_{\Delta_j}}(x-\xi) \,d\xi \nonumber\\ \label{eqn:pij} \end{eqnarray} \end{document}
\begin{document} \title{Quantum interferences from cross talk in $J=1/2\leftrightarrow J=1/2$ transitions} \author{Shubhrangshu Dasgupta} \affiliation{Physical Research Laboratory, Navrangpura, Ahmedabad - 380 009, India} \date{\today} \begin{abstract} We consider the possibility of a control field opening up multiple pathways and thereby leading to new interference and coherence effects. We illustrate the idea by considering the $J=1/2\leftrightarrow J=1/2$ transition. As a result of the additional pathways, we show the possibilities of nonzero refractive index without absorption and gain without inversion. We explain these results in terms of the coherence produced by the opening of an extra pathway. \end{abstract} \pacs{42.50.-p, 42.50.Gy} \maketitle \section{\label{sec:intro}Introduction} The use of a coherent field to control the optical properties of a medium has led to many remarkable results such as enhanced nonlinear optical effects \cite{spt,harris:eit}, electromagnetically induced transparency (EIT) \cite{harris:pt}, lasing without inversion \cite{lwi,lwi:rev,gsa1:lwi}, ultraslow light \cite{hau,scully,budker}, storage and revival of optical pulses \cite{lukin} and many others \cite{index1,index:expt,Zhu0304,DengPRL}. Most of these effects rely on quantum interferences which are created by the application of a coherent field. The coherent field opens up a new channel for the process under consideration. The application of the coherent field gives us considerable flexibility, as by changing the strength and the frequency of the field we can obtain a variety of control on the optical properties of the medium. The Zeeman degeneracy of the atomic levels adds to this flexibility, as the coherent field may open up more than one pathway, thereby leading to new interferences. In this paper, we consider a specific atomic energy level scheme consisting of $J=1/2\leftrightarrow J=1/2$ transitions. We show how a control field opens up more than one pathway and how this leads to new coherent effects. We report interesting results on gain without inversion and on the production of a refractive index without absorption \cite{footnote1}. The organization of the paper is as follows. In Sec. II, we describe the atomic configuration in detail. We put forward all the relevant equations and the steady state solutions. In Sec. III, we discuss the absorption and dispersion profiles of the probe field. In Sec. IV, we show how new features arise in these profiles as effects of the new coherence. \begin{figure} \caption{Level scheme for the $J=1/2\leftrightarrow J=1/2$ transition considered in this work.\label{chap5fig1}} \end{figure} \begin{figure} \caption{The scheme of Fig.~\ref{chap5fig1} viewed as two $\Lambda$ systems coupled by the common control field.\label{fignew}} \end{figure} \section{\label{sec:model}Model configuration} We consider the $J=1/2\leftrightarrow J=1/2$ transition in alkali atoms, as shown in Fig.~\ref{chap5fig1}. This kind of configuration is important for studying the effects of cross talk among different transitions \cite{brown}. Note that cross talk in a $\Lambda$-system can lead to gain in the transmission of the probe field \cite{menon}. A similar configuration with degenerate sublevels also leads to electromagnetically induced absorption while interacting with control and probe fields of the same polarization \cite{goren}. We apply a dc magnetic field to remove the degeneracy of the excited and the ground states.
In general, the Zeeman separation $2B$ of the excited magnetic sublevels $m_e=\pm 1/2 (\equiv |e_\mp\rangle)$ is not the same as the Zeeman separation $2B'$ of the ground levels $(\equiv |g_\mp\rangle)$, due to difference in the Land\'e $g$-factors in these manifolds. For example, in $^{39}$K atom, $B'=3B$, where $B=\mu_Bg_eM/\hbar$ ($\mu_B$ is the Bohr magneton) and $g_e=2/3$ and $g_g=2$ are the Land\'e $g$-factors of the excited and the ground sublevels. We apply a weak $\hat{x}$ polarized field $\vec{E}_p=\hat{x}{\cal E}_pe^{-i\omega_1 t}+\textrm{c.c.}$ to probe the properties of the atom, where $\omega_1$ is the angular frequency of the field. The $\sigma_\pm$ components of this probe field interact with the $|e_\mp\rangle\leftrightarrow |g_\pm\rangle$ transitions. The Rabi frequencies are given by $2g_\pm=2(\vec{d}_{e_\mp g_\pm}.\hat{x}{\cal E}_p)/\hbar$, where $\vec{d}_{ij}$ is the electric dipole moment between the levels $|i\rangle$ and $|j\rangle$. We also apply a strong $\pi$-polarized control field \begin{equation} \vec{E}_c=\vec{\cal E}_ce^{-i\omega_2 t}+\textrm{c.c.}\;, \end{equation} which interacts with the $|e_\pm\rangle\leftrightarrow |g_\pm\rangle$ transitions. We assume that the corresponding Rabi frequencies $2\vec{d}_{e_\pm g_\pm}.\vec{\cal E}_c/\hbar$ are equal to $2G$. We emphasize that the system of Fig.~\ref{chap5fig1} can be visualized as two $\Lambda$ systems (as shown in Fig.~\ref{fignew}), which talk to each other, as the same control field $G$ drives both the transitions $|e_\pm\rangle\leftrightarrow |g_\pm\rangle$. The interaction Hamiltonian of this system in dipole approximation is \begin{eqnarray} \frac{H}{\hbar}&=&\left[(\omega_{e_-g_-}|e_-\rangle\langle e_-|+\omega_{e_+g_-}|e_+\rangle\langle e_+|+\omega_{g_+g_-}|g_+\rangle\langle g_+|)\right.\nonumber\\ &-&(\vec{d}_{e_-g_+}|e_-\rangle\langle g_+|+\vec{d}_{e_+g_-}|e_+\rangle\langle g_-|+\textrm{h.c.}).\vec{E}_p\nonumber\\ &-&\left.(\vec{d}_{e_+g_+}|e_+\rangle\langle g_+|+\vec{d}_{e_-g_-}|e_-\rangle\langle g_-|+\textrm{h.c.}).\vec{E}_c\right]\;. \label{hamil} \end{eqnarray} Here the zero of energy is defined at the level $|g_-\rangle$ and $\hbar\omega_{\alpha\beta}$ is the energy difference between the levels $|\alpha\rangle$ and $|\beta\rangle$. 
We consider the natural decay terms in our analysis and hence invoke the density matrix formalism to find the following equations for different density matrix elements: \begin{widetext} \begin{eqnarray} \dot{\tilde{\rho}}_{e_+g_-}&=&-\left[i(-\Delta +2B')+\Gamma_{e_+g_-}\right]\tilde{\rho}_{e_+g_-}+i\left[g_-e^{-i\omega_{12} t}(\tilde{\rho}_{g_-g_-}-\tilde{\rho}_{e_+e_+})+G\tilde{\rho}_{g_+g_-}-G\tilde{\rho}_{e_+e_-}\right]\;,\nonumber\\ \dot{\tilde{\rho}}_{e_-g_+}&=&-\left[i(-\Delta -2B)+\Gamma_{e_-g_+}\right]\tilde{\rho}_{e_-g_+}+i\left[g_+e^{-i\omega_{12} t}(\tilde{\rho}_{g_+g_+}-\tilde{\rho}_{e_-e_-})+G\tilde{\rho}_{g_-g_+}-G\tilde{\rho}_{e_-e_+}\right]\;,\nonumber\\ \dot{\tilde{\rho}}_{e_+g_+}&=&-\left[-i\Delta +\Gamma_{e_+g_+}\right]\tilde{\rho}_{e_+g_+}+i\left[G(\tilde{\rho}_{g_+g_+}-\tilde{\rho}_{e_+e_+})+g_-e^{-i\omega_{12}t}\tilde{\rho}_{g_-g_+}-g_+e^{-i\omega_{12} t}\tilde{\rho}_{e_+e_-}\right]\;,\nonumber\\ \dot{\tilde{\rho}}_{e_-g_-}&=&-\left[i(-\Delta -2B+2B')+\Gamma_{e_-g_-}\right]\tilde{\rho}_{e_-g_-}+i\left[g_+e^{-i\omega_{12} t}\tilde{\rho}_{g_+g_-}-g_-e^{-i\omega_{12} t}\tilde{\rho}_{e_-e_+}+G(\tilde{\rho}_{g_-g_-}-\tilde{\rho}_{e_-e_-})\right]\;,\nonumber\\ \label{doteq}\dot{\tilde{\rho}}_{g_+g_-}&=&-(2iB'+\Gamma_{g_+g_-})\tilde{\rho}_{g_+g_-}+i\left[G^*\tilde{\rho}_{e_+g_-}-G\tilde{\rho}_{g_+e_-}+g_+^*e^{i\omega_{12} t}\tilde{\rho}_{e_-g_-}-g_-e^{-i\omega_{12} t}\tilde{\rho}_{g_+e_+}\right]\;,\\ \dot{\tilde{\rho}}_{e_+e_-}&=&-(2iB+\Gamma_{e_+e_-})\tilde{\rho}_{e_+e_-}-i\left[G^*\tilde{\rho}_{e_+g_-}-G\tilde{\rho}_{g_+e_-}+g_+^*e^{i\omega_{12} t}\tilde{\rho}_{e_+g_+}-g_-e^{-i\omega_{12} t}\tilde{\rho}_{g_-e_-}\right]\;,\nonumber\\ \dot{\tilde{\rho}}_{g_-g_-}&=&\gamma_{g_-e_-}\tilde{\rho}_{e_-e_-}+\gamma_{g_-e_+}\tilde{\rho}_{e_+e_+}+i\left[g_-^*e^{i\omega_{12} t}\tilde{\rho}_{e_+g_-}+G^*\tilde{\rho}_{e_-g_-}-\textrm{h.c.}\right]\;,\nonumber\\ \dot{\tilde{\rho}}_{e_-e_-}&=&-(\gamma_{g_-e_-}+\gamma_{g_+e_-})\tilde{\rho}_{e_-e_-}+i\left[g_+e^{-i\omega_{12} t}\tilde{\rho}_{g_+e_-}+G\tilde{\rho}_{g_-e_-}-\textrm{h.c.}\right]\;,\nonumber\\ \dot{\tilde{\rho}}_{e_+e_+}&=&-(\gamma_{g_-e_+}+\gamma_{g_+e_+})\tilde{\rho}_{e_+e_+}+i\left[g_-e^{-i\omega_{12} t}\tilde{\rho}_{g_-e_+}+G\tilde{\rho}_{g_+e_+}-\textrm{h.c.}\right]\;,\nonumber \end{eqnarray} \end{widetext} where $\Delta =\omega_2-\omega_{e_+g_+}$ is the detuning of the control field from the transition $|e_+\rangle\leftrightarrow |g_+\rangle$, $\delta=\omega_1-\omega_{e_+g_-}$ is the detuning of the $\sigma_-$ component of the probe field from the transition $|e_+\rangle\leftrightarrow |g_-\rangle$, $\omega_{12}=\omega_1-\omega_2=\delta-\Delta+2B'$ is the difference between frequencies of the probe and control fields, $\gamma_{\alpha\beta}$ is the spontaneous emission rate from the level $|\beta\rangle$ to $|\alpha\rangle$, $\Gamma_{\alpha\beta}=\frac{1}{2}\sum_k(\gamma_{k\alpha}+\gamma_{k\beta})$ is the dephasing rate of the coherence between the levels $|\alpha\rangle$ and $|\beta\rangle$. Here onwards, we assume that $\gamma_{g_+e_-}=\gamma_{g_-e_+}=\gamma_1$ and $\gamma_{g_+e_+}=\gamma_{g_-e_-}=\gamma_2$ without loss of generality, so that $\Gamma_{e_+g_\pm}=\Gamma_{e_-g_\pm}=\Gamma=(\gamma_1+\gamma_2)/2$, $\Gamma_{e_+e_-}=\gamma_1+\gamma_2$, and $\Gamma_{g_+g_-}=0$ \cite{note}. \begin{figure*} \caption{\label{chap5fig2} \label{chap5fig2} \end{figure*} To obtain the above equations, we have applied the rotating wave approximation to neglect the highly oscillating terms. 
The transformed matrix elements are given by $\tilde{\rho}_{e_\pm g_\mp}=\rho_{e_\pm g_\mp}e^{i\omega_2 t}$ and $\tilde{\rho}_{e_\pm g_\pm}=\rho_{e_\pm g_\pm}e^{i\omega_2 t}$, whereas the other elements remain unchanged. Before solving Eqs. (\ref{doteq}), let us first analyze two different cases. Case I: We consider Fig.~\ref{fignew}(b). If the probe field $\sigma_-$ is absent, all the populations from the level $|g_+\rangle$ would be optically pumped to the state $|g_-\rangle$ by the action of the control field $G$. Thus, the level $|g_-\rangle$ would be the steady state. When the probe field is on, population transfer from the level $|g_-\rangle$ to $|g_+\rangle$ would occur via the following pathway: absorption from the $\sigma_-$ component followed by the emission in the control field in the $|e_+\rangle\rightarrow |g_+\rangle$ transition. The relevant susceptibility of the probe field would be the same as that in case of a $\Lambda$ system. We provide the corresponding expression at the end of this section. Case II: We consider Fig.~\ref{chap5fig1}. If both the $\sigma_\pm$ components are absent, then in the steady state, the population gets distributed in four levels depending upon the detunings and the Rabi frequencies of the control fields. In this case, when both the $\sigma_\pm$ components are switched on, an {\it additional\/} pathway for population transfer from the level $|g_-\rangle$ to $|g_+\rangle$ would arise, in addition to the one described in the Case I. This pathway can be described as follows: absorption from the control field in $|g_-\rangle\rightarrow |e_-\rangle$ transition followed by the emission in the $\sigma_+$ component. Interference of these two pathways (i.e., cross talk between two $\Lambda$ systems in Fig.~\ref{fignew}) leads to new coherence in the four-level system of Fig.~\ref{chap5fig1}, which would not arise in a $\Lambda$-system (Case I). Later we show that all the new features described in this paper can be attributed to this coherence. Note that coherences have been recognized as the major source of newer effects in multilevel systems \cite{harris:pt,lwi:rev,gsa1:lwi}. For the Case II, the steady state solutions of the equations (\ref{doteq}) can be found by expanding the density matrix elements in terms of the harmonics of $\omega_{12}$ as \begin{eqnarray} \tilde{\rho}_{\alpha\beta}&=&\tilde{\rho}_{\alpha\beta}^{(0)}+g_-e^{-i\omega_{12} t}\tilde{\rho}_{\alpha\beta}^{'(-1)}+g_-^*e^{i\omega_{12} t}\tilde{\rho}_{\alpha\beta}^{''(-1)}\nonumber\\ &&+g_+e^{-i\omega_{12} t}\tilde{\rho}_{\alpha\beta}^{'(+1)}+g_+^*e^{i\omega_{12} t}\tilde{\rho}_{\alpha\beta}^{''(+1)}\;. \end{eqnarray} Thus, we obtain a set of algebraic equations for $\tilde{\rho}_{\alpha\beta}^{(r)}$'s. Solving them, we find the following zeroth order population terms (i.e., when both the $\sigma_\pm$ components are absent): \begin{subequations} \label{popul} \begin{eqnarray} \tilde{\rho}_{e_\pm e_\pm}^{(0)}&=&\frac{xy}{Q}\;,\\ \tilde{\rho}_{g_-g_-}^{(0)}&=&\frac{y}{Q}(\gamma_1+\gamma_2+x)\;,\\ \tilde{\rho}_{g_+g_+}^{(0)}&=&\frac{x}{Q}(\gamma_1+\gamma_2+y)\;, \end{eqnarray} \end{subequations} where \begin{eqnarray} &&x=\frac{2|G|^2\Gamma}{|d|^2}\;;~y=\frac{2|G|^2\Gamma}{|c|^2}\;,\nonumber\\ &&Q=(x+y)(\gamma_1+\gamma_2)+4xy\;,\nonumber\\ &&c=i\Delta+\Gamma\;,\\ &&d=-i(-\Delta-2B+2B')+\Gamma\;.\nonumber \end{eqnarray} Clearly, the population is distributed among the levels $|e_\pm\rangle$ and $|g_\pm\rangle$. This is due to optical pumping as we have discussed earlier. 
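Note, as a quick consistency check, that the zeroth-order populations of Eqs.~(\ref{popul}) add up to unity, \begin{equation*} \tilde{\rho}_{e_+e_+}^{(0)}+\tilde{\rho}_{e_-e_-}^{(0)}+\tilde{\rho}_{g_-g_-}^{(0)}+\tilde{\rho}_{g_+g_+}^{(0)} = \frac{2xy+y(\gamma_1+\gamma_2+x)+x(\gamma_1+\gamma_2+y)}{Q} = \frac{(x+y)(\gamma_1+\gamma_2)+4xy}{Q}=1\,, \end{equation*} as required of a trace-one density matrix.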
\begin{figure*} \caption{\label{fig3} \label{fig3} \end{figure*} The relevant zeroth order coherence terms turn out to be \begin{subequations} \begin{eqnarray} \label{rhogeEx}\tilde{\rho}_{g_+e_+}^{(0)}&=&-\frac{ixG^*}{cQ}(\gamma_1+\gamma_2)\;,\\ \tilde{\rho}_{g_-e_-}^{(0)}&=&-\frac{iyG^*}{dQ}(\gamma_1+\gamma_2)\;, \end{eqnarray} \end{subequations} which vanish in absence of any control field (i.e., for $G=0$). The susceptibilities of the $\sigma_\mp$ components of the probe field can be obtained from first-order solutions which we write as \begin{eqnarray} \tilde{\rho}_{e_+g_-}^{'(-1)}&=&\frac{1}{M_1}\left[Ga_+p_+\tilde{\rho}_{g_+e_+}^{(0)}+Gb_+p_+\tilde{\rho}_{g_-e_-}^{(0)}\right.\nonumber\\ &+&\left.i\{a_+b_+p_++|G|^2(a_++b_+)\}(\tilde{\rho}_{g_-g_-}^{(0)}-\tilde{\rho}_{e_+e_+}^{(0)})\right]\;,\nonumber\\ &&\label{rho_egN}\\ \tilde{\rho}_{e_-g_+}^{'(+1)}&=&\frac{1}{M_2}\left[Gb_-q_-\tilde{\rho}_{g_+e_+}^{(0)}+Ga_-q_-\tilde{\rho}_{g_-e_-}^{(0)}\right.\nonumber\\ &+&\left.i\{a_-b_-q_-+|G|^2(a_-+b_-)\}(\tilde{\rho}_{g_+g_+}^{(0)}-\tilde{\rho}_{e_-e_-}^{(0)})\right]\;,\nonumber\\ && \end{eqnarray} where \begin{eqnarray} M_1&=&a_+b_+p_+q_++|G|^2(p_++q_+)(a_++b_+)\;,\nonumber\\ M_2&=&a_-b_-p_-q_-+|G|^2(p_-+q_-)(a_-+b_-)\;,\nonumber \end{eqnarray} and \begin{eqnarray} a_\pm &=& -i\omega_{12}\pm 2iB +\Gamma_{e_+e_-}\;,\nonumber\\ b_\pm &=& -i\omega_{12}\pm 2iB'+\Gamma_{g_+g_-}\;,\nonumber\\ p_\pm &=& -i\omega_{12} \pm i(\Delta+2B)+\Gamma\;,\nonumber\\ q_\pm &=& -i\omega_{12} \pm i(-\Delta+2B')+\Gamma\;.\nonumber \end{eqnarray} Note that the difference between the frequencies of the probe and control fields is given by $\omega_{12}=\delta-\Delta+2B'$. The above susceptibility is to be compared with the one in the absence of cross-talk (Case I). For the Case I, we have \begin{subequations} \begin{eqnarray} \label{rho_eg1}\tilde{\rho}_{e_+g_-}^{'(-1)}&=&\frac{ib_+}{b_+q_++|G|^2}(\tilde{\rho}_{g_-g_-}^{(0)}-\tilde{\rho}_{e_+e_+}^{(0)})\\ \label{cohG1}&=&\frac{-i\{i(\delta-\Delta)-\Gamma_{g_+g_-}\}}{\{i(\delta-\Delta)-\Gamma_{g_+g_-}\}(i\delta-\Gamma)+|G|^2}\;, \end{eqnarray} \end{subequations} where we have used the fact that in steady state, $\tilde{\rho}_{g_-g_-}^{(0)}=1$ and the populations in the other levels vanish. In this case, the zeroth-order coherence between the levels $|e_+\rangle, |g_+\rangle$ also vanishes. From Eq.~(\ref{cohG1}), we find that the real part of the susceptibility vanishes at the detunings $\delta=\Delta+\Delta_-$, satisfying the following equation: \begin{equation} \Delta_-^3+\Delta\Delta_-^2+(\Gamma_{g_+g_-}^2+2\Gamma\Gamma_{g_+g_-}+|G|^2)\Delta_-+\Delta\Gamma_{g_+g_-}^2=0\;. \end{equation} Three different solutions of the above equation for $\Delta_-$ corresponding to vanishing real part of $\chi_-$, can be obtained from the Cardano's formula \cite{cardano}, given by \begin{equation} \Delta_-^1=-\frac{\Delta}{3}+A_+\;, \Delta_-^{2,3}=-\frac{\Delta}{3}-\frac{1}{2}A_+\pm\frac{i}{2}\sqrt{3}A_-\;, \end{equation} where \begin{eqnarray} &&A_\pm=[R+\sqrt{Q^3+R^2}]^{1/3}\pm [R-\sqrt{Q^3+R^2}]^{1/3}\;,\nonumber\\ &&Q=\frac{1}{9}(3a_1-\Delta^2)\;, R=\frac{1}{54}(9\Delta a_1-27a_0-2\Delta^3)\;,\nonumber\\ &&a_0=\Delta\Gamma_{g_+g_-}^2\;, a_1=\Gamma_{g_+g_-}^2+2\Gamma\Gamma_{g_+g_-}+|G|^2\;. \end{eqnarray} \section{\label{sec:absorb}Absorption and dispersion profiles} We first recall the features of the $\Lambda$ system of Fig.~\ref{fignew}(b). 
The usual line-shapes can be obtained for a resonant control field, i.e., by putting $\Delta=0$ in the susceptibility given by Eq.~(\ref{cohG1}). Then the absorption and dispersion profiles would be symmetric around $\delta=0$. However, for non-zero $\Delta$, the line-shapes depend upon the values of $\Delta$. We show the dispersion and absorption profiles of the $\sigma_-$ component in Figs.~\ref{chap5fig2}(a) and \ref{chap5fig2}(b) for a fixed detuning $\Delta=B'-B$ of the control field. Clearly, the real and imaginary parts of the susceptibility $\chi_-$ [$\equiv (N|\vec{d}_{e_+g_-}|^2/\hbar\gamma)\tilde{\rho}_{e_+g_-}^{'(-1)}$, $N$ being the number density of the atomic medium] vanish at the two-photon resonance $\delta=\Delta$, which occurs at $\delta=\Delta=4\gamma$ for $B=2\gamma$. These are the usual features of a $\Lambda$ system at two-photon resonance. Next we analyze the four-level system of Fig.~\ref{chap5fig1}. We show the dispersion and absorption profiles of the $\sigma_-$ component in Figs.~\ref{chap5fig2} for $\Delta=B'-B$ \cite{sigmaplus}. \begin{figure} \caption{Individual contributions of the coherence term and the population-difference term to the absorption spectrum of the $\sigma_-$ component.\label{fig4}} \end{figure} \begin{figure} \caption{\label{fig5}} \end{figure} At two-photon resonance (i.e., at $\delta=\Delta$), the real part of the susceptibility $\chi_-$ is {\it non-zero and negative\/}, in contrast to the case of a $\Lambda$ system. On the other hand, at two-photon resonance $\delta=\Delta$, the medium continues to remain transparent, as in the case of a $\Lambda$-system. Further, in a certain region of the detuning $\delta$ ($> \Delta$) of the probe field, the imaginary part of $\chi_-$ becomes {\it negative\/}, leading to {\it gain in the $\sigma_-$ component\/}. For the case of a $\Lambda$-system, there would be no possibility of gain in the medium [solid line in Fig.~\ref{chap5fig2}(b)] for any $\delta$. In the next section, we analyze these new features in terms of the new coherence that we discussed in Sec.~II. We should mention here that two-photon gain in hot alkali vapor has been reported in recent experiments \cite{gau1}. The gain involves absorption of two photons from the detuned $\sigma_-$ polarized control field and emission of two photons into the $\pi$-polarized probe field at Raman resonance. Detailed theoretical analysis \cite{gau2} has shown that these are due to quantum interference of several excitation pathways, originating from hyperfine structures. Because in our model gain arises due to interference of two different pathways, as described in Sec.~II, we should emphasize the main difference between the model in \cite{gau2} and our model. We consider a different set of polarizations and frequencies of the electric fields, interacting with non-degenerate electronic levels, contrary to \cite{gau2}, where degenerate hyperfine levels have been considered. In our model, the gain, associated with emission of a single photon into the $\sigma_-$ component, is essentially a two-photon process, and thus much larger in effect than that arising from the four-photon process described in \cite{gau1,gau2}. In addition, gain occurs when the probe field is {\it not\/} at Raman resonance with the control field. \section{\label{sec:discuss}Discussions} \subsection{Origin of non-zero susceptibility} We start with the expression (\ref{rho_eg1}) for $\tilde{\rho}_{e_+g_-}^{'(-1)}$ in the $\Lambda$-system.
Clearly, the susceptibility of the probe field arises only from the population difference between the relevant levels $|e_+\rangle$ and $|g_-\rangle$, as the zeroth-order coherence between the levels $|g_+\rangle$ and $|e_+\rangle$ is zero in this case. In addition, at two-photon resonance ($\delta=\Delta$), $b_+=0$ and the susceptibility vanishes. But in the case of the four-level system of Fig.~\ref{chap5fig1}, the coherence $\tilde{\rho}_{g_+ e_+}^{(0)}$ also contributes to the susceptibility $\chi_-$ [see Eq. (\ref{rho_egN})], while the contribution from $\tilde{\rho}_{g_-e_-}^{(0)}$ to $\chi_-$ vanishes at two-photon resonance ($\delta=\Delta$) as $b_+=0$. Then, using Eqs.~(\ref{popul}) and (\ref{rhogeEx}), we can write for $\delta=\Delta=B'-B$
\begin{equation}
\tilde{\rho}_{e_+g_-}^{'(-1)}=\frac{ix(\gamma_1+\gamma_2)}{2q_+Q}\left(1-\frac{q_+}{c}\right)\;,
\end{equation}
where $p_+=q_+=c^*=\Gamma+i(B-B')$, and $Q>0$. The first term inside the bracket in the above equation is due to the contribution of $\tilde{\rho}_{g_-g_-}^{(0)}-\tilde{\rho}_{e_+e_+}^{(0)}$, and the second term is due to the coherence $\tilde{\rho}_{g_+e_+}^{(0)}$. We see from the above expression that the real parts of these two terms cancel each other, and it is essentially their imaginary parts which contribute to the susceptibility. We find that
\begin{equation}
\label{nonzero}\tilde{\rho}_{e_+g_-}^{'(-1)}=\frac{-(B'-B)}{2\left\{\Gamma^2+(B'-B)^2+2|G|^2\right\}}
\end{equation}
which is non-zero as $B'\neq B$. Thus the nonzero susceptibility of the system of Fig.~\ref{chap5fig1} manifests itself as an effect of the {\it zeroth-order coherence\/} in the $|e_+\rangle\leftrightarrow |g_+\rangle$ transition. Further, it is essentially associated with no absorption, as Eq.~(\ref{nonzero}) has no imaginary part. We should mention here that the susceptibility becomes negative due to the larger Land\'e $g$ factor of the ground-state manifold (i.e., $B'>B$). Clearly, in the absence of the magnetic field, the medium becomes isotropic, and no special features can be found in the dispersion profile. In Fig.~\ref{fig3}(a), we show how the real part of $\chi_-$ varies with the detuning $\Delta$ of the control field at two-photon resonance $\delta=\Delta$. Clearly, even for a resonant control field (i.e., for $\Delta=0$), the susceptibility is negative. Moreover, the real part of $\chi_-$ becomes zero at two-photon resonance for a certain value of $\delta$. Putting $b_+=0$ in Eq.~(\ref{rho_egN}), one can calculate the corresponding value
\begin{equation}
\delta_0=\frac{2(B'-B)^2+\Gamma^2}{B'-B}\;.
\end{equation}
For the present parameters, we find that $\delta_0=10.25\gamma$, which is shown in Fig.~\ref{fig3}(a). In fact, for $\delta=\Delta=\delta_0$, the contributions of the population-difference term and the coherence term to Re($\chi_-$) cancel each other and the susceptibility of the probe field vanishes. In Fig.~\ref{fig3}(b), we show that the control field can be a good control parameter for the susceptibility. The susceptibility remains negative for the entire range of $G$ at two-photon resonance. This is unlike the case of a $\Lambda$ system [see Fig.~\ref{fignew}(b)], for which the real part of $\chi_-$ would remain zero at two-photon resonance irrespective of the value of the Rabi frequency $G$ of the control field.
\subsection{Origin of gain}
It is well understood that the absorption spectrum of the probe field in the $\Lambda$ system of Fig.~\ref{fignew}(b) shows an Autler-Townes doublet. Moreover, no gain arises in the medium.
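(Before turning to the four-level system, the numbers quoted in the previous subsection are straightforward to reproduce. In the minimal sketch below, the values $B'-B=4\gamma$ and $\Gamma=3\gamma$ are assumptions inferred from the quoted $\delta_0=10.25\gamma$, since only $\Delta=B'-B$ and $B=2\gamma$ are stated explicitly, and $G$ is an arbitrary illustrative value.)
\begin{verbatim}
# Reproducing delta_0 and the sign of Eq. (nonzero); units of gamma.
# Assumed: Bp_minus_B = 4, Gamma = 3 (inferred, not stated); G = 2 (arbitrary).
Bp_minus_B = 4.0
Gamma = 3.0
G = 2.0

delta0 = (2.0 * Bp_minus_B**2 + Gamma**2) / Bp_minus_B
print(delta0)     # 10.25, matching the value quoted in the text

rho = -Bp_minus_B / (2.0 * (Gamma**2 + Bp_minus_B**2 + 2.0 * abs(G)**2))
print(rho)        # purely real and negative, since B' > B
\end{verbatim}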
At two-photon resonance $\delta=\Delta$, the absorption becomes zero. However, the case of the four-level system of Fig.~\ref{chap5fig1} is different. We have already noted that the susceptibility $\chi_-$ of the $\sigma_-$ component receives contributions from two terms: $\tilde{\rho}_{g_+e_+}^{(0)}$ and $\tilde{\rho}_{g_-g_-}^{(0)}-\tilde{\rho}_{e_+e_+}^{(0)}$. We show the individual contributions of these two terms to the absorption spectrum in Fig.~\ref{fig4}. From this figure one can see that in a certain frequency region ($\delta > B'-B$), the negative contribution of $\tilde{\rho}_{g_+e_+}^{(0)}$ is larger in magnitude than the positive contribution of $\tilde{\rho}_{g_-g_-}^{(0)}-\tilde{\rho}_{e_+e_+}^{(0)}$. Thus, gain arises in the medium. In our model, this novel feature can be attributed to the control field $G$, which gives rise to the non-zero coherence $\tilde{\rho}_{g_+e_+}^{(0)}$. Further, from the expressions of $\tilde{\rho}^{(0)}_{g_\pm g_\pm}$ and $\tilde{\rho}^{(0)}_{e_\pm e_\pm}$ [Eqs.~(\ref{popul})], one sees that the zeroth-order populations in both the $|g_\pm\rangle$ levels are larger than those in the levels $|e_\pm\rangle$ due to the presence of the non-zero decay terms $\gamma_1$ and $\gamma_2$. Thus there is no population inversion in the bare basis and we have {\it gain without inversion\/}. Note that at two-photon resonance ($\delta=\Delta$), the contributions from the terms $\tilde{\rho}_{g_+e_+}^{(0)}$ and $\tilde{\rho}_{g_-g_-}^{(0)}-\tilde{\rho}_{e_+e_+}^{(0)}$ to the absorption profile cancel each other, leading to transparency. We should further mention here that the contribution of $\tilde{\rho}_{g_-e_-}^{(0)}$ to the gain is negligible for all $\delta$.
\section{Conclusions}
In conclusion, we have shown how a control field can give rise to new coherence effects in a specific four-level system. The control field leads to multiple pathways, the interference between which gives rise to effects such as gain without inversion and a non-zero susceptibility associated with zero absorption. We have explained these results in terms of the new coherence arising from this interference.
\end{document}
\begin{document} \title{Energy convexity estimates for non-degenerate ground states of nonlinear 1D Schr\"odinger systems} \author{ Eugenio Montefusco\thanks{Research supported by the MIUR national research project\newline {\it Variational Methods and Nonlinear Differential Equations}.},\; Benedetta Pellacci$^{*}$,\; Marco Squassina\thanks{Research supported by the MIUR national research project\newline {\it``Variational and Topological Methods in the Study of Nonlinear Phenomena''}. \vskip4pt \noindent {\it 2000 Mathematics Subject Classifications.} 34B18, 34G20, 35Q55. \vskip1pt \noindent {\it Keywords.} Weakly coupled nonlinear Schr\"odinger systems, stability, nondegeneracy, ground states. }} \date{\today} \maketitle \begin{abstract} We study the spectral structure of the complex linearized operator for a class of nonlinear Schr\"odinger systems, obtaining as byproduct some interesting properties of non-degenerate ground state of the associated elliptic system, such as being isolated and orbitally stable. \end{abstract} \section{Introduction and main results} In the last few years, the interest in the study of Schr\"odinger systems has considerably increased, in particular, for the following class of two weakly coupled nonlinear Schr\"odinger equations \begin{equation}\lambdabel{schr} \begin{cases} \displaystyle {\rm i}\partial_{t}\phi_{1}+ \frac12\partial_{xx}\phi_{1}+\big(|\phi_{1}|^{2p} +\beta|\phi_{2}|^{p+1}|\phi_{1}|^{p-1}\big)\phi_{1}=0 & \text{in $\R\times \R^{+}\!\!,$} \\ \displaystyle {\rm i}\partial_{t}\phi_{2}+\frac12\partial_{xx}\phi_{2}+\big(|\phi_{2}|^{2p} +\beta|\phi_{1}|^{p-1}|\phi_{2}|^{p+1}\big)\phi_{2}=0 & \text{in $\R\times \R^{+}\!\!,$} \\ \phi_{1}(0,x)=\phi_{1}^{0}(x), \quad \phi_{2}(0,x)=\phi_{2}^{0}(x) & \text{in $\R$}, \end{cases} \end{equation} where $\mathcal{P}hi=(\phi_{1},\phi_{2})$ and $\phi_{i}:[0,\infty)\times\R\to\C$, $\phi_{i}^{0}:\R\to\C$, $0< p<2$. Usually the coupling constant $\beta>0$ models the birefringence effects inside a given anisotropic material (see e.g. \cite{manak}, \cite{men}). A soliton or standing wave solution is a solution of the form $\mathcal{P}hi(x,t)=(u_{1}(x)e^{{\rm i} t},u_{2}(x)e^{{\rm i} t})$ where $U(x)=(u_{1}(x),u_{2}(x))$ solves the elliptic system \begin{equation}\lambdabel{ellittico} \begin{cases} -\displaystyle \frac12 \partial_{xx} r_{1}+r_{1}=r_{1}^{2p+1} +\beta r_{1}^{p} r_{2}^{p+1} & \text{in $\R$}, \\ \displaystyle -\frac12\partial_{xx} r_{2}+r_{2}=r_{2}^{2p+1} +\beta r_{2}^{p} r_{1}^{p+1} & \text{in $\R$}. \end{cases} \end{equation} Among all the solutions of \eqref{ellittico} there are the ground states, namely least energy solutions. It is known (see e.g. \cite{mmp1}, \cite{si}) that for $p\geq 1$ there exists a ground state $R=(r_{1},r_{2})$ $\in C^{2}(\R)\cap W^{2,s}(\R)$ for any positive $s$; Moreover, $R$ has nonnegative components $r_i$ which are even, decreasing on $\R^{+}$ and exponentially decaying. In \cite{mmp3} it is shown that $R$ can be characterized as a solutions of the following minimization problem \begin{equation}\lambdabel{elle2} \mathcal{E}(R)=\inf_{\mathcal{M}}\mathcal{E}(V)\qquad \text{where }\quad \mathcal{M}:=\left\{V\in H^{1}(\R)\times H^{1}(\R),\,\|V\|_{2}=\|R\|_{2} \right\}, \end{equation} and \begin{equation}\lambdabel{defE} \mathcal{E}(V)=\mathcal{E}(v_{1},v_{2})=\frac{1}{2}\left\|\partial_{x} V\right\|_2^2 -\frac1{p+1}\int\big(|v_1|^{2p+2}+|v_2|^{2p+2} +2\beta |v_1v_2|^{p+1}\big), \end{equation} when the exponent $p$ satisfies \begin{equation}\lambdabel{pzero} 1\leq p<2. 
\end{equation} The interest in finding ground states is also motivated by their properties with respect of the analysis of the dynamical system \eqref{schr}, such as stability properties. For the single Schr\"odinger equation many notions of stability have been introduced and proved, among all, we recall \cite{cl} and \cite{weinsteinMS,weinsteinCpam}; in the former it is proved that the ground state, which is unique, of the equation \begin{equation} \lambdabel{eeqr} -\frac12\partial_{xx} z+z=z^{2p+1}\quad \text{in $\R$}, \end{equation} is orbitally stable, that is, roughly speaking, if $\phi^{0}$ is a function close to $z$ with respect to the $H^{1}$ norm then the solution of the Cauchy problem \begin{equation}\lambdabel{scalSE} \begin{cases} \displaystyle {\rm i}\partial_{t}\phi+ \frac12\partial_{xx}\phi+|\phi|^{2p}\phi=0 & \text{in $\R\times\R^{+}\!\!,$} \\ \phi(0,x)=\phi^0(x) & \text{in $\R,$} \end{cases} \end{equation} where $\phi:[0,\infty)\times\R\to\C$, $\phi^{0}:\R\to\C$ and $1\leq p<2$, remains close to $z$ up to phase rotations and translations. In \cite{weinsteinMS,weinsteinCpam} the study becomes deeper assuming that $z$ is non-degenerate, that is the linearized operator for~\eqref{eeqr} has a $1$-dimensional kernel which is spanned by $\partialrtial_{x}z$. More precisely, it is proved that for every $\phi\in H^1(\R)$ such that $\|\phi\|_{L^2}=\|z\|_{L^2}$, the following inequality holds \begin{equation}\lambdabel{modscal} \mathcal{E}(\phi)-\mathcal{E}(z)\geq C\inf_{x_{0}\in \R \atop\theta\in [0,2\pi)} \|\phi-e^{i\theta}z(\cdot-x_0)\|_{H^1}^{2}, \end{equation} for some positive constant $C$, provided that the energy $\mathcal{E}(\phi)$ is sufficiently close to $\mathcal{E}(z)$. Here, $\mathcal{E}$ is the energy defined in \eqref{defE} once we consider $V=(z,0)$. Inequality \eqref{modscal} allows to provide not only the same orbital stability result proved in \cite{cl}, but it also permits to derive explicit differential equation to which the phase and position adjustment have to obey for the ground state to be linearly stable. Moreover, \eqref{modscal} tells us that the energy functional can be seen as a Lyapunov functional, as it measures the deviation of the solution of \eqref{schr} from the ground state orbit. \\ The main goal of this paper is to extend inequality~\eqref{modscal} to the more general framework of 1D vector Schr\"odinger problems. In order to do this we are lead to consider non-degenerate ground state for system \eqref{ellittico}. This notion is introduced in the following definition. \begin{definition}\lambdabel{nondeg} We will say that a ground state solution $R=(r_1,r_2)$ of system~\eqref{ellittico} is non-degenerate if the set of solutions of the linearized system \begin{equation}\lambdabel{linsyste} \begin{cases} -\frac{1}{2}\partial_{xx}\phi+\phi=[(2p+1)r_1^{2p}+ \beta p r_1^{p-1}r_2^{p+1}]\phi +\beta(p+1)r_1^pr_2^p\psi & \text{in $\R$,} \\ -\frac{1}{2}\partial_{xx}\psi+\psi=[(2p+1)r_2^{2p} +\beta p r_1^{p+1}r_2^{p-1}]\psi+ \beta(p+1)r_1^pr_2^p\phi & \text{in $\R$,} \end{cases} \end{equation} is an $1$-dimensional vector space and any solution $(\phi,\psi)$ of~\eqref{linsyste} is given by $\theta\partial_{x} R$, for some $\theta\in\R$. \end{definition} The main result of the paper is stated in the following \betae\lambdabel{main} Let $R$ be non-degenerate and assume~\eqref{pzero}. 
Then, for every $\mathcal{P}hi\in H^{1}\times H^{1}$ with $$ \|\mathcal{P}hi\|_{L^{2}\times L^{2}}=\|R\|_{L^{2}\times L^{2}}, $$ the following inequality holds \begin{align*} \mathcal{E}(\mathcal{P}hi)-\mathcal{E}(R)\geq & \inf_{x\in \R \atop\theta\in [0,2\pi)^2}\|\mathcal{P}hi-(e^{{\rm i}\theta_{1}} r_{1}(\cdot-x),e^{{\rm i}\theta_{2}}r_{2}(\cdot-x))\|_{H^{1}\times H^{1}}^{2} \\ &\quad +\text{{\small o}}\Big( \inf_{x\in \R\atop\theta\in [0,2\pi)^2}\|\mathcal{P}hi-(e^{{\rm i}\theta_{1}} r_{1}(\cdot-x),e^{{\rm i}\theta_{2}}r_{2}(\cdot-x))\|_{H^{1}\times H^{1}}^{2}\Big) \end{align*} where $\text{{\small o}}(x)$ satisfies $\text{ {\small o}}(x)/x\to 0$ as $x\to 0$. \end{theorem} As interesting consequences, we will obtain the property of being isolated, and of being orbitally stable for a non-degenrate ground state. In \cite{mmp3} it has been recently proved that the set of ground states of \eqref{ellittico} enjoys the orbital stability property. To this respect, we have to recall that up to now it is not yet been proved a uniqueness result for ground state solutions of the system \eqref{ellittico}. Therefore, a solution of \eqref{schr} which starts near a ground state $R$, may leave the orbit around $R$ and approach the orbit generated by another ground state. But, this is not the case, once we know that the ground states are isolated. This property is easily obtained as a consequence of Theorem \ref{main} as stated in the following corollary. \begin{corollary}\lambdabel{cor1} Let $R$ be non-degenerate and assume~\eqref{pzero}. Then $R$ is isolated, that is, if there exists a ground state of~\eqref{ellittico} $S$ satisfying $\|R-S\|_{\varmathbb{H}^{1}}<\delta$ for a $\delta>0$ sufficiently small, then $S=R$ up to a translation and a phase change. \end{corollary} Then, we can also prove the following \begin{corollary} \lambdabel{cor2} Let $R$ be non-degenerate and assume~\eqref{pzero}. Then $R$ is orbitally stable. \end{corollary} We recall that a ground state $R=(r_{1},r_{2})$ is said to be orbitally stable if for any given $\varepsilon>0$, there exist $\delta(\varepsilon)>0$ such that $$ \sup_{t\in [0,\infty)}\inf_{x\in\R\atop \theta\in [0,2\pi)^2} \| \mathcal{P}si(t,\cdot)-(e^{{\rm i}\theta_{1}} r_{1}(\cdot-x), e^{{\rm i}\theta_{2}}r_{2}(\cdot-x)\|_{H^{1}\times H^{1}}< \varepsilon, $$ provided that $$ \inf_{x\in\R\atop \theta\in [0,2\pi)^2} \| \mathcal{P}si^{0}-(e^{{\rm i}\theta_{1}} r_{1}(\cdot-x),e^{{\rm i}\theta_{2}}r_{2}(\cdot-x)\|_{H^{1}\times H^{1}}< \delta, $$ where $\mathcal{P}si$ is the solution of~\eqref{schr} with initial datum $\mathcal{P}si^{0}$. \\ Theorem \ref{main} plays a very important role also in the study of the so-called {\em soliton dynamics} for Schr\"o\-din\-ger. More precisely, when one considers \eqref{schr} when the Plank's constant $\hbar$ explicitly appears in the equations, and studies the evolution, in the semi-classical limit ($\hbar\to 0$), of the solution of \eqref{schr} starting from a $\hbar$-scaling of a soliton, once the action of external forces appears. We refer the reader to \cite{bj,K1,K2} for the scalar case and to~\cite{mps-soliton} for systems, where the authors have recently showed, in semi-classical regime, how the soliton dynamics can be derived from Theorem \ref{main}. \\ Finally, we have to point out that some of our results can be proved in general dimension $n\geq 1$ as well, with minor changes. 
Unfortunately, this is not the case for our main Theorem, since, in order to work on the linearized equation, and to perform Taylor expansion on the energy functional ${\mathcal E}$, we need enough regularity on the nonlinear term and this forces us to restrict the range of $p$ because of the presence of the coupling term. Of course, it is a really interesting open problem, to prove the assertion of Theorem~\ref{main} for any $n\geq 1$ and any $0<p<2/n$. \vskip8pt In Section~\ref{spectral}, we will study some delicate spectral properties of the linearized system introduced in Definition~\ref{nondeg}. The proofs of Theorem~\ref{main} and of Corollaries~\ref{cor1} and~\ref{cor2} will be carried out in Section~\ref{proofsection}. Finally, in Section~\ref{ground}, we shall prove that there exists a non-degenerate ground state for system~\eqref{ellittico}. \section{Spectral analysis of the linearized operators}\lambdabel{spectral} In this section we will prove some important properties concerning the linearized Schr\"odinger system associated with~\eqref{schr}. \\ We will make use of the functional spaces $\varmathbb{L}^{2}=\ldue(\R,\C)\times\ldue(\R, \C)$ and $\varmathbb{H}^{1}=\huno(\R,\C)\times\huno(\R,\C)$. We recall that the inner product between $u,\,v\in \C$ is given by $u\cdot v=\Re(u\bar v)=1/2(u\bar v+v\bar u)$. It is known (see \cite{caz,susu}) that \eqref{schr} is well locally posed in time, for any $p$, in the space $\varmathbb{H}^{1}$ endowed with the norm $\|\mathcal{P}hi\|_{\varmathbb{H}^{1}}^{2}=\|\partial_{x}\mathcal{P}hi\|_{2}^{2}+\|\mathcal{P}hi\|_{2}^{2}$ for every $\mathcal{P}hi=(\phi_1,\phi_2)\in\varmathbb{H}^{1}$. Moreover we set the ${\varmathbb{L}^{q}}$ norm as $\|\mathcal{P}hi\|_q^q=\|\phi_1\|_q^q+\|\phi_2\|_q^q$ for any $q\in\left[1,\infty\right)$, we denote by $(U,V)$ the inner scalar product in $\varmathbb{L}^{2}$ and by $(U,V)_{\varmathbb{H}^{1}}$ the inner scalar product in $\varmathbb{H}^{1}$. In \cite{fm} it is proved that, for $p$ satisfying $0<p<2$ the solution of the Cauchy problem \eqref{schr} exists globally in time and the mass of a solution and its total energy are preserved in time, that is having defined the total energy of system~\eqref{schr} as \begin{equation}\lambdabel{energy} \mathcal{E}\left(\mathcal{P}hi(t)\right)=\frac{1}{2}\left\|\partial_{x}\mathcal{P}hi (t)\right\|_2^2 -\int F\left(\mathcal{P}hi(t)\right) \end{equation} where \begin{equation}\lambdabel{defF} F(U)=F(u_{1},u_{2})=\frac1{p+1}\Big(|u_{1}|^{2p+2}+|u_{2}|^{2p+2} +2\beta |u_{1}u_{2}|^{p+1}\Big), \end{equation} the following conservation laws hold (see \cite{fm}): \begin{equation}\lambdabel{mass} \|\phi_1\|_2^2=\|\phi^{0}_{1}\|_2^2,\qquad \|\phi_2\|_2^2=\|\phi^{0}_{2}\|_2^2, \qquad \mathcal{E}\left(\mathcal{P}hi(t)\right)=\mathcal{E}(0)=\frac{1}{2}\left\|\partial_{x}\mathcal{P}hi^{0}\right\|_2^2 -\int F\left(\mathcal{P}hi^{0}\right). 
\end{equation} Setting $\phi_i=r_i+\varepsilon w_i$, $i=1,2$, the linearized Schr\"odinger system at $r_i$ in $w_i$ is given by \begin{equation}\lambdabel{schrLIN} \begin{cases} \displaystyle i\partial_{t}w_{1}+ \frac12\partial_{xx} w_{1}-w_1+G_1(w_1,w_2)=0 & \text{in $\R$}, \\ \displaystyle i\partial_{t}w_{2}+\frac12\partial_{xx} w_{2}-w_2+G_2(w_1,w_2)=0 & \text{in $\R$}, \end{cases} \end{equation} where we have set $$ G_1(w_1,w_2) = \left[r_1^{2p}+\beta r_1^{p-1}r_2^{p+1}\right]w_1 +\left[2pr_1^{2p}+\beta(p-1)r_1^{p-1}r_2^{p+1}\right]\Re(w_1) +\beta (p+1)r_1^pr_2^p \Re(w_2) , $$ $$ G_2(w_1,w_2) = \left[r_2^{2p}+ \beta r_1^{p+1}r_2^{p-1}\right]w_2 +\left[2pr_2^{2p}+\beta(p-1)r_1^{p+1}r_2^{p-1}\right]\Re(w_2) +\beta(p+1)r_1^pr_2^p\Re(w_1). $$ System~\eqref{schrLIN} can be written down as $\partialrtial_t W=LW,$ for $L:\varmathbb{L}^{2}\times \varmathbb{L}^{2}\to \varmathbb{L}^{2}\times \varmathbb{L}^{2}$ defined by \begin{equation*} L=\left( \begin{array}{cc} 0 & L_{-} \\\\ -L_{+} & 0 \end{array}\right),\qquad W\in\C^2,\, W=(w_{1},w_{2}) \end{equation*} and where the operators $L_{-},\,L_{+}:L^{2}(\R,\R)\times L^{2}(\R,\R) \to L^{2}(\R,\R)\times L^{2}(\R,\R)$ acting respectively on the real and imaginary parts of $w_{i}$. are the following \begin{equation} \lambdabel{operator} L_{+}=\left( \begin{array}{cc} L_{+}^{11} & L_{+}^{12} \\\\ L_{+}^{21} & L_{+}^{22} \end{array} \right) \qquad L_{-}=\left( \begin{array}{cc} L_{-}^{11} & 0 \\\\ 0 & L_{-}^{22} \end{array} \right) \end{equation} where $L_{+,-}^{ij}:L^{2}(\R,\R)\to L^{2}(\R,\R)$ are defined by \begin{align*} L_{+}^{11}&=\,-\frac12\partial_{xx} +1-H^{11}(R) \qquad L_{+}^{12}=L_{+}^{21}=-H^{12}(R) \\\noalign{\vskip4pt} L_{+}^{22}&=\,-\frac12\partial_{xx} +1-H^{22}(R) \\\noalign{\vskip4pt} L_{-}^{11}&=\,-\frac12\partial_{xx} +1-\left[r_{1}^{2p}+\beta r_{1}^{p-1}r_{2}^{p+1}\right] \qquad\qquad L_{-}^{22}=\,-\frac12\partial_{xx} +1-\left[r_{2}^{2p}+\beta r_{1}^{p+1}r_{2}^{p-1}\right] \end{align*} and the Hessian matrix $H_{F}(U)=(H^{ij}):(\R^{+})^{2}\to {M_{2\times 2}}(\R)$ is given by \begin{align*} H^{11}&= (2p+1)u_{1}^{2p}+p\beta u_{1}^{p-1}u_{2}^{p+1} \qquad H^{12}=H^{21}=\, (p+1)\beta u_{2}^{p}u_{1}^{p} \\\noalign{\vskip4pt} H^{22}&= (2p+1)u_{2}^{2p}+p\beta u_{2}^{p-1}u_{1}^{p+1}. \end{align*} We will study $L_{+}$ on $ \mathcal{V}$, namely the closed subspace of $\varmathbb{H}^{1}$ defined as \begin{equation}\lambdabel{defV} \mathcal{V}=\left\{U\in \varmathbb{H}^{1}\,:\,(U,R)=0\right\}. \end{equation} The first important property of $L_{+}$ on $\mathcal{V}$ is proved in the following proposition. \begin{proposition}\lambdabel{L+prima} Assume \eqref{pzero} and that $R$ a ground state of \eqref{ellittico}. Then $\displaystyle \inf\limits_{\mathcal{V}}\left(L_{+}(U),U\right)=0$. \end{proposition} {\noindent{\bf Proof.}\,\,\,} First notice that $U_{*}=(r_1',r_2')$ belongs to $\mathcal{ V}$ and $U_{*}$ satisfies $(L_{+}(U_{*}),U_{*})=0$, showing that the infimum is less or equal than zero. On the other hand, since $R$ solves problem \eqref{elle2}, of course $R$ is also a minimum point of $\mathcal{I}=\mathcal{E}(\mathcal{P}hi)+ \|\mathcal{P}hi\|_{2}^{2}$ on $\mathcal{M}$. Consequently, for any smooth curve $\varphi:[-1,1]\to \mathcal{M}$ such that $\varphi(0)=R$, it follows $$ \frac{d^{2}\mathcal{I}(\varphi(s))}{ds^{2}}\Bigg|_{s=0}\geq 0. 
$$ Therefore, taking into account that $\mathcal{I}'(R)=0$, we get \begin{align*} 0 & \leq \lambdangle \mathcal{I}{''}(\varphi(s))\varphi'(s), \varphi'(s)\rangle\Big|_{s=0} +\lambdangle \mathcal{I}'(\varphi(s)),\varphi''(s)\rangle \Big|_{s=0} \\& =\lambdangle \mathcal{I}{''}(R)\varphi'(0),\varphi'(0)\rangle +\lambdangle \mathcal{I}'(R),\varphi''(0)\rangle =\lambdangle \mathcal{I}{''}(R)\varphi'(0),\varphi'(0)\rangle. \end{align*} Now, taking into account that the map $s\mapsto \|\varphi(s)\|_{2}$ is constant, it readily follows that $\varphi'(0)$ belongs to $\mathcal{ V}$, which yields the assertion by the arbitrariness of $\varphi$. \edim The above result is the first step to show that $L_{+}$ is coercive once we restrict it on a closed subspace of $\mathcal{V}$, as shown in the following proposition. \begin{proposition} \lambdabel{oneofmaitoools} Assume \eqref{pzero} and that $R$ is a ground state of \eqref{ellittico} satisfying Definition \ref{nondeg}. Then \begin{equation}\lambdabel{infpos} \inf_{U\in\mathcal{V}_{0},\;\; } \frac{\left(L_{+}(U),U\right)}{\|U\|^{2}_{2}}>0,\qquad \mathcal{V}_{0}=\left\{U\in\varmathbb{H}^{1} : (U,R)=(U,H_{F}(R)\partial_{x}R)=0\right\}. \end{equation} \end{proposition} {\noindent{\bf Proof.}\,\,\,} Denoting with $\alphapha$ the infimum $$ \alphapha=\inf_{\|V\|_{L^2}=1,\,V\in\mathcal{V}_{0}}(L_{+}(V),V), $$ first notice that Proposition \ref{L+prima} implies that $\alphapha$ is nonnegative, so that we only have to show that $\alphapha$ is not zero. Let us argue by contradiction and suppose that $\alphapha=0$. Taken $U_{n}$ a minimizing sequence, from the regularity properties of $R$ it follows that ${U}_{n}$ is bounded in $\varmathbb{H}^{1}$. These gives us a function $U\in \varmathbb{H}^{1}$, such that $U_{n}\rightharpoonup U$ weakly (up to a subsequence) in $\varmathbb{H}^{1}$, implying that $U\in \mathcal{V}_{0}$. From Proposition \ref{L+prima} and \eqref{infpos}, we get $$ 0\leq \left(L_{+}(U),U\right) \leq \liminf_{n\to\infty} \left\{\|U_{n}\|^{2}_{\varmathbb{H}^{1}}-(U_{n},H_{F}(R)U_{n})\right\} =\lim_{n\to\infty} \left(L_{+}(U_{n}),U_{n}\right)=0. $$ So that $U$ solves $\left(L_{+}(U),U\right)=0$ and $\left(L_{+}(U_{n}),U_{n}\right)\to \left(L_{+}(U),U\right)$. Moreover, \begin{align*} \|U\|^{2}_{\varmathbb{H}^{1}} \leq \liminf_{n\to\infty}\|U_{n}\|^{2}_{\varmathbb{H}^{1}} & \leq \limsup_{n\to\infty}\|U_{n}\|^{2}_{\varmathbb{H}^{1}}= \lim_{n\to\infty} \big\{\left(L_{+}(U_{n}),U_{n}\right) +(U_{n},H_{F}(R)U_{n})\big\}\\ & =\left(L_{+}(U),U\right)+(U,H_{F}(R)U)=\|U\|^{2}_{\varmathbb{H}^{1}}, \end{align*} from which $U_{n}\to U$ strongly in $\varmathbb{H}^{1}$, so that $\|U\|_{\varmathbb{L}^{2}}=1$ and $U$ solves the constrained minimization problem \eqref{infpos}. When we derive the functional $(L_{+}(V),V)/\|V\|_{\varmathbb{L}^{2}}^{2}$ and use that $(L_{+}(U),U)=0$ we obtain that there exists Lagrange multipliers $\mu,\,\gammamma\in \R$ such that \begin{equation}\lambdabel{lagrange} \left(L_{+}U,V\right)=\mu\left( R,V\right)+\left(\gammamma\cdot H_{F}(R) \partial_{x} R,V\right),\qquad \text{for every $V\in \varmathbb{H}^{1}$}. 
\end{equation} Choosing as test function $V=\partial_{x} R$ and taking into consideration that $(R,\partialrtial_jR)=0$, gives $$ 0=\left(L_{+}(U),\partial_{x} R\right)= \left(\gammamma\cdot H_{F}(R)\partial_{x} R,\partial_{x} R\right) =\gammamma (H_{F}(R)\partial_{x}R,\partial_{x}R), $$ where we have taken into account that $L_{+}$ is a self-adjoint operator and $\partial_{x}R=\left(\partial_{x}r_{1},\partial_{x}r_{2}\right)$ is a solution of $L_{+}V=0$. Since $R$ has even components the summands on the right hand side are nonzero, so that $\gammamma=0$. As a consequence, $U$ solves $L_{+}U=\mu R$. Moreover, we consider the vector $x \cdot\partial_{x} R$, whose components are $x \cdot\partial_{x} R=(x\partial_{x}r_{1},x\partial_{x}r_{2})$ and we compute $L_{+}(x \cdot\partial_{x} R)$. After some simple calculations, one reaches $$ L_{+}(x \cdot\partial_{x} R)=(-\partial_{xx} r_{1},-\partial_{xx} r_{2}) \quad \text{and} \quad L_{+}(R/p)=-2(r_1^{2p+1}+\beta r_2^{p+1}r_1^{p}, r_2^{2p+1}+\beta r_1^{p+1} r_2^{p}). $$ Then, in turn, we get $L_{+}(R/p+x \cdot\partial_{x} R)=-2R$, and by linearity $$ L_{+}\left(-\mu/2(R/p+x \cdot\partial_{x} R)\right)=\mu R. $$ Then, Definition~\ref{nondeg} (nondegeneracy) immediately yields \begin{equation}\lambdabel{Uequatt} U=-\mu/2(R/p+x \cdot\partial_{x} R)+\theta\cdot\partial_{x} R \end{equation} for some constant $\theta\in\R$. Now we have to show that $\theta=0$, by using the available constraints. By applying to equation~\eqref{Uequatt} the self-adjoint operator $H_F=H_F(R)$, we get $$ H_FU=-\frac{\mu}{2p} H_F R-\frac{\mu}{2}H_F x\cdot\partial_{x} R+H_F\theta\cdot\partial_{x} R. $$ As $U\in\mathcal{V}_{0}$, it results $ (H_FU,\partial_{x} R)=(U,H_F\partial_{x} R)=0.$ Furthermore, since $R$ is a radial solution of \eqref{ellittico}, we also have that $(H_F R,\partial_{x} R)= (H_F x\cdot\partial_{x} R,\partial_{x} R)=0.$ On the other hand $$ (H_F\theta\cdot\partial_{x} R,\partial_{x} R)=\theta (H_F\partial_{x} R, \partial_{x} R)=c\theta $$ with $c\neq 0$, so it has to be $\theta=0$. Then~\eqref{Uequatt} reduces to $$ U=-\frac{\mu}{2p}R-\frac{\mu}{2}x\cdot\partial_{x} R. $$ Computing the $L^{2}$-scalar product with $R$ and keeping in mind that $U\in \mathcal{V}_{0}$ yields $$ 0=(U,R)=-\frac{\mu}2\left[\frac1{p}\|R\|_{2}^{2}+(x\cdot\partial_{x} R,R)\right]. $$ As far as concern the last term in the previous relation, we integrate by parts and obtain $$ (x\cdot\partial_{x} R,R)=-\frac{1}2 \|R\|_{2}^{2}. $$ The last two equations and \eqref{pzero} give the desired contradiction. \edim \begin{remark}\rm\lambdabel{coerc} The argument in the proof of the previous Proposition shows that there exists a positive constant $\alphapha_0$ such that \begin{equation} \lambdabel{coercL+} (L_{+}V,V)\geq \alphapha_{0} \|V\|_{2}^{2},\qquad\text{for all $V\in \mathcal{V}_{0}$}. \end{equation} Moreover, if we consider $|||U|||=\sqrt{(L_{+}U,U)}$ for every $U\in \mathcal{V}_{0}$, we obtain that $|||\cdot|||$ satisfies all the required properties of a norm, by \eqref{coercL+} and by the self-adjointness property of $L_{+}$. In addition, every Cauchy sequence $\{U_{n}\}$ with respect to $|||\cdot|||$ has a strong limit $U$ belonging $L^{2}$; moreover $U$ satisfies all the orthogonality relations required in $\mathcal{V}_{0}$. Besides, computing $(L_{+}(U_{n}-U_{m}),U_{n}-U_{m})$ gives that also $\{\partial_{x} U_{n}\}$ is a Cauchy sequence in $L^{2}$ then $U$ is necessarily the strong limit of $\{U_{n}\}$ in $\varmathbb{H}^{1}$. 
Finally, $|||U_{n}-U|||\to 0$ by the definition of $L_{+}$. As a consequence, $\mathcal{V}_{0}$ is a Banach space with respect to this norm, and we get the equivalence with the standard $\varmathbb{H}^{1}$ norm, namely there exists $\alphapha>0$ such that $$ \left(L_{+}V,V\right)\geq \alphapha \|V\|_{\varmathbb{H}^{1}}^{2},\qquad\text{for all $V\in \mathcal{V}_{0}$}. $$ \end{remark} Before stating our next result let us prove the following lemma. \begin{lemma} Let us take $\mathcal{P}hi\in \varmathbb{L}^{2}$ such that $\|\mathcal{P}hi\|_{2}=\|R\|_{2}$ and consider the difference $W=\mathcal{P}hi-R$. Denoting with $U$ and $V$ the real and imaginary part of $W$, it results \begin{equation}\lambdabel{ideU} \left(R,U\right)=-\frac12 \left[\|U\|_{2}^{2}+\|V\|_{2}^{2}\right]=-\frac12 \|W\|_{2}^{2} \end{equation} \end{lemma} {\noindent{\bf Proof.}\,\,\,} The above identity immediately follows by imposing $\|R+W\|_{2}^{2}=\|R\|_{2}^{2}$ and by recalling that $R$ is a real function. \edim \begin{proposition}\lambdabel{fineL+} Assume \eqref{pzero} and that $R$ satisfies Definition \ref{nondeg}. Moreover, let us take $W=U+iV$ satisfying \eqref{ideU} with $U$ verifying \begin{equation}\lambdabel{ortder} (U,H_{F}(R)\partial_{x}R)=0. \end{equation} Then, there exists positive constants $D,\,D_{i}$ such that \begin{equation}\lambdabel{disL+} \left(L_{+}U,U\right)\geq D\|U\|_{\varmathbb{H}^{1}}^{2}-D_{1} \|W\|_{2}^{4}-D_{2} \|W\|_{2}^{2}\|\partial_{x} W\|_{2} \end{equation} \end{proposition} {\noindent{\bf Proof.}\,\,\,} Without loss of generality, we can suppose that $\|R\|_{2}=1$; moreover, we decompose $U$ as $U=U_{||}+U_{\bot}$ where $U_{||}=\left(U,R\right)R$, while $U_{\bot}=U-U_{||}$ is orthogonal to $R$ with respect to the $L^{2}$ scalar product. Since $L_{+}$ is self-adjoint it results \begin{equation}\lambdabel{deco} \left(L_{+}U,U\right)=\left(L_{+}U_{||},U_{||}\right)+2\left(L_{+}U_{\bot},U_{||}\right) +\left(L_{+}U_{\bot},U_{\bot}\right). \end{equation} Next, we study separately the summands on the right hand side of this formula. Observe that, taking into account identity~\eqref{ideU}, we have \begin{equation} \lambdabel{graddineqq} \|\partial_{x} U_\bot\|_{2}^2\geq \|\partial_{x} U\|_{2}^2-C\|W\|_{2}^{2}\|\partial_{x} W\|_{2}, \end{equation} for some positive constant $C$. Since $(U_{||},H_{F}(R)\partial_{x}R)=0$, condition~\eqref{ortder} implies that also $U_{\bot}$ has to be orthogonal to $H_{F}(R)\partial_{x}R$, hence $U_{\bot}$ is in $\mathcal{V}_{0}$. Then Remark~\ref{coerc},~\eqref{graddineqq} and~\eqref{ideU} give us \begin{align}\lambdabel{uort} \left(L_{+}U_{\bot},U_{\bot}\right)& \geq D \|U_{\bot}\|_{\varmathbb{H}^{1}}^{2}\geq D\| U\|_{\varmathbb{H}^{1}}^2-CD\|W\|_{2}^{2}\|\partial_{x} W\|_{2}- D\|U_{||}\|_{2}^{2}\\ &=D\|U\|_{\varmathbb{H}^{1}}^{2}-d_{1}\|W\|_{2}^{2}\left[ \|W\|_{2}^{2}+\|\partial_{x} W\|_{2}\right] . \notag \end{align} We also obtain from \eqref{ideU} that \begin{equation}\lambdabel{prod} \left(L_{+}U_{\bot},U_{||}\right)=(R,U)\left(L_{+}U_{\bot},R\right) = -\frac12 \|W\|_{2}^{2} \left(L_{+}U_{\bot},R\right)\geq -d_{2} \|W\|_{2}^{2}\|\partial_{x} W\|_{2}. \end{equation} As far as concern the last term in \eqref{deco}, it results $$ \left(L_{+}U_{||},U_{||}\right)=(U,R)^{2}\left(L_{+}R,R\right) =\frac14\|W\|_{2}^{4}\left(L_{+}R,R\right)\geq -d_{3}\|W\|_{2}^{4}. $$ This last equation, joint with \eqref{uort} and \eqref{prod} yields the conclusion. 
\edim \begin{proposition} It results $\displaystyle \inf\limits_{V\neq 0, \; (v_{i},r_{i})_{H^{1}}=0} \frac{\left(L_{-}(V),V\right)}{\|V\|^{2}_{2}}>0$. \end{proposition} {\noindent{\bf Proof.}\,\,\,} Let us first prove that $L_{-}$ is a positive operator. Denoting with $\sigma_{d}(L_{-})$ the discrete spectrum of the operator $L_{-}$ it results \begin{equation}\lambdabel{spettro} \sigma_{d}(L_{-})=\sigma_{d}(L^{11}_{-})\cup\sigma_{d}(L^{22}_{-}). \end{equation} Indeed, if $\lambdambda\in \sigma_{d}(L^{11}_{-})$ we get that $L^{11}_{-}(u)=\lambdambda u$, then $\lambdambda\in \sigma_{d}(L_{-})$ with eigenfunction $U=(u,0)$, analogous argument holds for $\lambdambda\in \sigma_{d}(L^{22}_{-})$, proving that $\sigma_{d}(L^{11}_{-})\cup\sigma_{d}(L^{22}_{-})\subseteq \sigma_{d}(L_{-})$. On the other hand, if $\lambdambda\in \sigma_{d}(L_{-})$ there exists $U=(u_{1},u_{2})\neq (0,0)$ such that $$ L_{-}^{11}u_{1}=\lambdambda u_{1},\quad L_{-}^{22}u_{2}=\lambdambda u_{2} $$ so that, if $u_{1}\neq 0$ $\lambdambda\in \sigma_{d}(L^{11}_{-})$, otherwise $u_{2}\neq 0$ and $\lambdambda\in \sigma_{d}(L^{22}_{-})$, showing \eqref{spettro}. Moreover, since $L_{-}R=0$, with $R=(r_{1},r_{2})\neq (0,0)$, $r_{i}\geq 0$, we get that $\lambdambda =0$ is the first eigenvalue of $L^{11}_{-}$ and $L^{22}_{-}$ when both $r_{1},\,r_{2}\neq 0$. Besides, if for example $r_{1}\equiv 0$, $\lambdambda=0$ is the first eigenvalue of $L_{-}^{22}$, while $L_{-}^{11}=-\partial_{xx}+1$ and its discrete spectrum is empty (see e.g. Chapter 3 in \cite{bershu}), yielding that $\lambdambda=0$ is the first eigenvalue of $L_{-}$. Then $(L_{-}(V),V)\geq 0$ for every function $V\in\varmathbb{H}^{1}$, proving that $L_{-}$ is a positive operator. Arguing now as in the proof of Proposition~\ref{oneofmaitoools}, and considering the (nonnegative) infimum $$ \alphapha=\inf_{\|V\|_{L^2}=1,\,\, (V_{i},r_{i})_{H^{1}}=0}(L_{-}(V),V), $$ assuming by contradiction that $\alphapha=0$, we find that there exists a nonzero minimizer $U$ (satisfying the constraints) for the problem such that \begin{equation} \lambdabel{minimizconstrain} (L_{-}U,U)=0 \end{equation} Taking into account that the constraints $(U_{i},r_{i})_{H^{1}}=0$ can be written in the $L^2$ form \begin{equation} \lambdabel{constrains-i} (q^{11}_{-}(R)R_1,U)=0,\qquad (q^{22}_{-}(R)R_2,U)=0, \end{equation} where we have set $$ q^{11}_{-}(R)=r_{1}^{2p}+\beta r_{1}^{p-1}r_{2}^{p+1}, \qquad q^{22}_{-}(R)=r_{2}^{2p}+\beta r_{1}^{p+1}r_{2}^{p-1}, \quad R_1=(r_1,0), \quad R_2=(0,r_2). $$ we have three lagrange parameters $\lambdambda,\gammamma_1,\gammamma_2\in\R$ such that $$ (L_{-}U,V)=\lambdambda (U,V)+\gammamma_1(q^{11}_{-}(R)R_1,V)+\gammamma_2(q^{22}_{-}(R)R_2,V) $$ for all $V\in\varmathbb{H}^{1}$. Hence, by choosing $V=U$ and taking into account~\eqref{minimizconstrain} and that $U$ satisfies the constraints~\eqref{constrains-i}, we immediately get $\lambdambda=0$. Choosing now $V=R_1$ and $V=R_2$ and taking into account $L_{-}$ is self-adjoint and that $L_{-}R_i=0$ we obtain $\gammamma_1=\gammamma_2=0$. Therefore, we conclude that $$ L_{-}U=0, $$ namely $L_{-}^{11}u_1=0$ and $L_{-}^{22}u_2=0$ where we set $U=(u_1,u_2)$. In turn, $u_i$ is a first eigenfunction of $L_{-}^{ii}$, which yields $u_i\in{\rm span}(r_i)$ since the first eigenvalue is simple (see e.g.\ Theorem 3.4 in~\cite{bershu}). This is of course a contradiction with~\eqref{constrains-i}. Hence $\alphapha>0$ and the proof is complete. 
\edim \begin{remark}\rm \lambdabel{stimaLmeno} Arguing as in Remark~\ref{coerc}, it is possible to find a positive constant $\alphapha>0$ such that \begin{equation*} (L_{-}V,V)\geq \alphapha \|V\|_{\varmathbb{H}^{1}}^2,\qquad \text{for all $V\in\varmathbb{H}^{1}$ with $(v_{i},r_{i})_{H^{1}}=0$, $i=1,2$}. \end{equation*} \end{remark} \section{Proofs of the main results} \lambdabel{proofsection} In order to prove Theorem \ref{main}, the following characterization will be crucial. \begin{proposition} \lambdabel{ortogonal} Let us consider $y_{0}\in \R$ and $\mathcal{G}amma=(\gammamma_1,\gammamma_2)\in \R^{2}$ be such that \begin{equation}\lambdabel{minimo} \min_{x_{0}\in \R \atop\Theta\in \R^{2}} \|(\phi_{1}(\cdot+x_{0})e^{{\rm i}\theta_{1}},\phi_{2}(\cdot+x_{0})e^{{\rm i}\theta_{2}})-R\|_{\varmathbb{H}^{1}}^{2} = \|(\phi_{1}(\cdot+y_{0},t)e^{{\rm i}\gammamma_{1}},\phi_{2}(\cdot+y_{0}) e^{{\rm i}\gammamma_{2}})-R\|_{\varmathbb{H}^{1}}^{2} \end{equation} Then, writing $$ (\phi_{1}(\cdot+y_{0},t)e^{{\rm i}\gammamma_{1}},\phi_{2}(\cdot+y_{0},t) e^{{\rm i}\gammamma_{2}})=R+W, $$ where $W=U+{\rm i} V$, the following orthogonality condition are satisfied \begin{equation}\lambdabel{orto} \left(U, H_{F}(R)\partial_{x} R \right)=0, \qquad \left(v_{1},r_{1}\right)_{H^{1}}=\left(v_{2},r_{2}\right)_{H^{1}}=0. \end{equation} \end{proposition} {\noindent{\bf Proof.}\,\,\,} Let us introduce the functions $P,\,Q:\R\times \R^{2}\to \R$ defined by \begin{align*} P(x_{0}, \Theta) &=P(x_{0}, \theta_{1},\theta_{2})=\|(\phi_{1}(\cdot+x_{0})e^{{\rm i}\theta_{1}}, \phi_{2}(\cdot+x_{0})e^{i\theta_{2}})-R\|_{2}^{2} \\ Q(x_{0}, \Theta) &=Q(x_{0}, \theta_{1},\theta_{2})=\| (\partial_{x} \phi_{1}(\cdot+x_{0})e^{{\rm i}\theta_{1}},\partial_{x} \phi_{2}(\cdot+x_{0})e^{{\rm i}\theta_{2}})-\partial_{x} R\|_{2}^{2}. 
\end{align*} Writing down the partial derivatives of $P$ and $Q$ and integrating by parts, give us \begin{align*} \partial_{x_{0}}P(x_{0},\Theta) & =\sum_{j=1}^{2}\int\left( \phi_{j}e^{{\rm i}\theta_{j}}-r_{j}\right)e^{-{\rm i}\theta_{j}} \partial_{x_{0}}\overlineerline{\phi}_{j}+ \left(\overlineerline{\phi}_{j}e^{-{\rm i}\theta_{j}}-r_{j}\right)e^{{\rm i}\theta_{j}} \partial_{x_{0}}\phi_{j} \\ &=-2\sum_{j=1}^{2} \int r_{j}\Re\left(e^{{\rm i}\theta_{j}} \partial_{x_{0}} \phi_{j}\right); \\ \partial_{x_{0}} Q(x_{0}, \Theta) & =\sum_{j=1}^{2}\int \partial_{x}\left(\phi_{j}e^{{\rm i}\theta_{j}}-r_{j}\right) \partial_{x} \partial_{x_{0}} \overlineerline{\phi}_{j} e^{-{\rm i}\theta_{j}}+ \partial_{x}\left(\overlineerline{\phi}_{j}e^{-{\rm i}\theta_{j}}-r_{j}\right) \partial_{x} \partial_{x_{0}} \phi_{j} e^{{\rm i}\theta_{j}} \\ &=- 2\sum_{j=1}^{2} \int \partial_{x} r_{j}\Re\left(\partial_{x} \partial_{x_{0}}\phi_{j} e^{{\rm i}\theta_{j}}\right); \\ \frac{\partialrtial P}{\partialrtial \theta_{j}}(x_{0}, \Theta) &={\rm i}\int\left[ -\left(\phi_{j}e^{{\rm i}\theta_{j}}-r_{j}\right) e^{-{\rm i}\theta_{j}} \overlineerline{\phi}_{j}+ \left(\overlineerline{\phi}_{j}e^{-{\rm i}\theta_{j}}-r_{j}\right) e^{{\rm i}\theta_{j}} \phi_{j}\right] \\ &=2 \int r_{j}\mathcal{I}m\left(e^{{\rm i}\theta_{j}}\phi_{j}\right); \\ \frac{\partialrtial Q}{\partialrtial\theta_{j}} (x_{0}, \Theta) &={\rm i}\int\left[ -\partial_{x}\left(\phi_{j}e^{{\rm i}\theta_{j}}-r_{j}\right) \partial_{x}\overlineerline\phi_{j} e^{-{\rm i}\theta_{j}}+ \partial_{x}\left(\overlineerline{\phi}_{j}e^{-{\rm i}\theta_{j}}-r_{j}\right) \partial_{x}\phi_{j} e^{{\rm i}\theta_{j}}\right] \\ &=2\int \partial_{x} r_{j}\mathcal{I}m\left(\partial_{x} \phi_{j} e^{{\rm i}\theta_{j}}\right). \end{align*} If $x_{0}=y_{0}$ and $\mathcal{G}amma=(\gammamma_1,\gammamma_2)$ realize the minimum in~\eqref{minimo}, the following equations are satisfied \begin{align*} \frac{\partialrtial (P+Q)}{\partialrtial x_{0}}(x_{0}, \Theta) &= -2\sum_{j=1}^{2} \int \left[r_{j}(x)\Re\left(e^{{\rm i}\gammamma_{j}}\frac{\partialrtial {\phi}_{j}}{\partialrtial x_{0}}(x-y_{0})\right)+\partial_{x} r_{j}(x) \Re\left(e^{{\rm i}\gammamma_{j}}\partial_{x} \frac{\partialrtial {\phi}_{j}}{\partialrtial x_{0}} (x-y_{0}) \right)\right]=0 \\ \frac{\partialrtial (P+Q)}{\partialrtial \theta_{j}}(x_{0}, \Theta) &= 2\int\left[ r_{j}(x)\mathcal{I}m\left(e^{{\rm i}\gammamma_{j}}\phi_{j}(x-y_{0})\right) +\partial_{x} r_{j}(x)\mathcal{I}m\left(e^{{\rm i}\gammamma_{j}}\partial_{x} \phi_{j}(x-y_{0}) \right)\right]=0. \end{align*} Denoting with $U$ and $V$ the real and imaginary (respectively) part of $W=\mathcal{P}hi(x-y_{0})e^{{\rm i}\mathcal{G}amma}-R(x)$ and taking into account that $R$ is real and does not depend on $x_{0}$, it follows \begin{align*} \frac{\partialrtial (P+Q)}{\partialrtial x_{0}}(x_{0}, \Theta) &= \sum_{j=1}^{2} \int \left[r_{j}\frac{\partialrtial u_{j}}{\partialrtial x_{0}}+\partial_{x} r_{j}\partial_{x} \frac{\partialrtial u_{j}}{\partialrtial x_{0}}\right] =-\sum_{j=1}^{2} \int \left[u_{j}\frac{\partialrtial r_{j}}{\partialrtial x_{0}}+\partial_{x} u_{j}\partial_{x} \frac{\partialrtial r_{j}}{\partialrtial x_{0}}\right]=0 \\ \frac{\partialrtial (P+Q)}{\partialrtial \theta_{j}}(x_{0}, \Theta) &= \int\left[ r_{j} v_{j}+\partial_{x} r_{j}\partial_{x} v_{j}\right]=0,\quad j=1,2. \end{align*} The second line of the above equations can be read as the orthogonality conditions on $V$ in~\eqref{orto}. 
As far as regards $U$, we only have to notice that $\partial_{x}R$ satisfies the linearized system of~\eqref{ellittico} so that all the conditions in~\eqref{orto} are proved. \edim \noindent We are now ready to complete the proof of the main result, Theorem~\ref{main}. \vskip4pt \noindent{\bf Proof of Theorem~\ref{main} concluded.} Let us consider $\mathcal{P}hi\in \varmathbb{H}^{1}$ with $\|\mathcal{P}hi\|_{2}=\|R\|_{2}$ and $W(x)=\mathcal{P}hi(x-y_{0})e^{{\rm i}\mathcal{G}amma}-R(x)$, where $y_{0}\in\R$ and $\mathcal{G}amma\in\R^2$ satisfy the minimality conditions~\eqref{minimo}. We want to control the $\varmathbb{H}^{1}$ norm of $W$ in terms of the difference $\mathcal{I}(\mathcal{P}hi)-\mathcal{I}(R)$, being $\mathcal{I}$ is the action functional associated to the system and defined as $$ \mathcal{I}(\mathcal{P}hi)=\mathcal{E}(\mathcal{P}hi)+ \|\mathcal{P}hi\|_{2}^{2}. $$ To this aim, we first compute the difference $\mathcal{I}(\mathcal{P}hi)-\mathcal{I}(R)$ and we use scale invariance, obtaining $\mathcal{I}(\mathcal{P}hi)-\mathcal{I}(R)=\mathcal{I}(R+W)-\mathcal{I}(R)$. Then, recalling that $\lambdangle \mathcal{I}'(R),W\rangle=0$, Taylor expansion gives \begin{align*} \mathcal{I}(\mathcal{P}hi)-\mathcal{I}(R)&=\mathcal{I}(R+W)-\mathcal{I}(R)=\lambdangle \mathcal{I}'(R),W\rangle+\lambdangle \mathcal{I}''(R+\vartheta W)W,W\rangle \\ &=\lambdangle \mathcal{I}''(R)W,W\rangle +\lambdangle \mathcal{I}''(R+\vartheta W)W,W\rangle-\lambdangle \mathcal{I}''(R)W,W\rangle. \end{align*} In order to evaluate the difference on the right hand side we will use the $C^{2}$ regularity of $\mathcal{I}$, at this point it is crucial \eqref{pzero}. For simplicity, let us consider separately the nonlinear terms in $\mathcal{I}$. The term $G:\varmathbb{H}^{1}\to \R$ defined by $$ G(U)=G(u_1,u_2)= \|u_1\|_{2p+2}^{2p+2}+\|u_2\|_{2p+2}^{2p+2}, $$ is of class $C^{3}$, as $p\geq 1$, so that \begin{equation}\lambdabel{gsec} \lambdangle G''(R+\vartheta W)W,W\rangle-\lambdangle G''(R)W,W\rangle\geq -c_{1}\|W\|_{\varmathbb{H}^{1}}^{3}. \end{equation} As far as concern the coupling term $\Upsilon:\varmathbb{H}^{1}\to \R$ defined by $\Upsilon(U)=\Upsilon(u_1,u_2)=\|u_1u_{2}\|_{p+1}^{p+1},$ it results \begin{align*} \lambdangle \Upsilon''(U)W,W\rangle & = (p^2-1)\int |u_{1}|^{p-3}|u_2|^{p-3}\left[ |u_{2}|^{4}\Re^2(u_1)|w_1|^2 +|u_{1}|^{4}\Re^2(u_2)|w_2|^2 \right] \\ & +(p+1)\int |u_1|^{p-1}|u_2|^{p-1} \left[|u_2|^{2}|w_1|^2 +|u_1|^{2}|w_2|^2 \right] \\ &+2(p+1)^2\int |u_1|^{p-1}|u_2|^{p-1}\Re(u_1)\Re(u_2)\Re(w_1\overlineerline{w}_2). \end{align*} When we write the difference $\lambdangle \Upsilon''(R)W,W\rangle- \lambdangle \Upsilon''(R+\vartheta W)W,W\rangle $ we use that $R$ is a real function and we control the first two terms with the real parts by the modulus; finally we use the inequality $$ \left||r_j+\vartheta w_j|^{p-1}-|r_j|^{p-1}\right|\leq C|w_j|^{p-1}, $$ to get \begin{equation} \lambdangle \Upsilon''(R)W,W\rangle- \lambdangle \Upsilon''(R+\vartheta W)W,W\rangle \geq -c_{1}\|W\|_{\varmathbb{H}^{1}}^{2+\mu}\qquad\text{for some $\mu>0$.} \end{equation} This inequality joint with \eqref{gsec} implies that \begin{equation} \lambdangle \mathcal{I}''(R+\vartheta W)W,W\rangle -\lambdangle \mathcal{I}''(R)W,W\rangle \geq -C \|W\|_{\varmathbb{H}^{1}}^{2+\mu}. \end{equation} Therefore, \begin{equation*} \mathcal{I}(\mathcal{P}hi)-\mathcal{I}(R)\geq \lambdangle \mathcal{I}''(R)W,W\rangle-C\|W\|_{\varmathbb{H}^{1}}^{2+\mu}= \lambdangle L_- V,V\rangle+\lambdangle L_+ U,U\rangle-C\|W\|_{\varmathbb{H}^{1}}^{2+\mu}. 
\end{equation*} Taking into account the orthogonality conditions of Proposition~\ref{ortogonal}, the assertion now follows from Proposition~\ref{fineL+} and Remark~\ref{stimaLmeno}. \edim \noindent{\bf Proof of Corollary~\ref{cor1}}\quad Let $\delta$ be a positive number to be chosen later. Moreover, let $R=(r_1,r_2)\in\varmathbb{H}^{1}$ and $S=(s_1,s_1)\in \varmathbb{H}^{1}$ be two given non-degenerate ground state solutions to system~\eqref{ellittico} such that $$ \|R-S\|_{\varmathbb{H}^{1}}^2<\delta. $$ Then, taking into account the variational characterization~\eqref{elle2} for ground states, we learn that $$ \mathcal{E}(R)=\mathcal{E}(S),\qquad \|R\|_{\varmathbb{L}^{2}}=\|S\|_{\varmathbb{L}^{2}}. $$ Notice also that $$ \inf_{x_{0}\in \R\atop\theta\in \R^{2}}\|R-(e^{i\theta_{1}} s_{1}(\cdot-x_{0}),e^{i\theta_{2}}s_{2}(\cdot-x_{0}))\|_{\varmathbb{H}^{1}}^{2} \leq \|R-S\|_{\varmathbb{H}^{1}}^2<\delta. $$ Therefore, by applying Theorem~\ref{main}, if $\delta>0$ is chosen sufficiently small, we get \begin{equation*} \inf_{x_{0}\in \R \atop\theta\in \R^{2}}\|R-(e^{i\theta_{1}} s_{1}(\cdot-x_0),e^{i\theta_{2}}s_{2}(\cdot-x_0))\|_{\varmathbb{H}^{1}}^{2}\leq 0. \end{equation*} In turn we conclude that $R=S$, up to a suitable translation and phase change. \vskip4pt \noindent \noindent{\bf Proof of Corollary~\ref{cor2}}\quad Let $T>0$ and let us fix $\varepsilon>0$ sufficiently small. Consider the solution $\mathcal{P}si$ of system~\eqref{schr} with initial datum $\mathcal{P}si^{0}$. By the conservation laws, we have $$ \|\mathcal{P}si(t)\|_{\varmathbb{L}^{2}}=\|\mathcal{P}si^0\|_{\varmathbb{L}^{2}},\,\,\quad \mathcal{E}(\mathcal{P}si(t))=\mathcal{E}(\mathcal{P}si^0),\quad \text{for all $t\in [0,\infty)$}. $$ By the continuity of the energy $\mathcal{E}$, there exists $\delta=\delta(\varepsilon)>0$ such that $$ \mathcal{E}(\mathcal{P}si(t))-\mathcal{E}(R)=\mathcal{E}(\mathcal{P}si^0)-\mathcal{E}(R)<\varepsilon,\quad\text{for all $t\in [0,\infty)$}, $$ provided that \begin{equation} \lambdabel{deltasmall} \inf_{\theta\in\R^{2} \atop x\in\R} \| \mathcal{P}si^0(\cdot)-(e^{i\theta_{1}} r_{1}(\cdot-x),e^{i\theta_{2}}r_{2}(\cdot-x))\|_{\varmathbb{H}^{1}}^2< \delta. \end{equation} Then, if we define for any $t>0$ the positive number \begin{equation*} \mathcal{G}amma_{\mathcal{P}si(t)}=\inf_{\theta\in\R^{2}\atop x\in\R} \|\mathcal{P}si(t)-(e^{i\theta_1}r_1(\cdot-x),e^{i\theta_2}r_2(\cdot-x))\|_{\varmathbb{H}^{1}}^{2}, \end{equation*} we learn from Theorem~\ref{main} that there exist two positive constants ${\mathcal A}$ and $C$ such that \begin{equation} \lambdabel{strongconclusion} \mathcal{G}amma_{\mathcal{P}si(t)}\leq C(\mathcal{E}(\mathcal{P}si(t))-\mathcal{E}(R)), \end{equation} provided that $\mathcal{G}amma_{\mathcal{P}si(t)}<{\mathcal A}$. Let us define the value $$ T_0:=\sup\big\{t\in [0,T]:\,\, \mathcal{G}amma_{\mathcal{P}si(s)}<{\mathcal A}\,\,\text{for all $s\in [0,t)$}\big\}. $$ Of course, it holds $T\geq T_0>0$ by means of~\eqref{deltasmall} (up to reducing the size of $\delta$, if necessary) and the continuity of $\mathcal{P}si(t)$. Hence, we deduce that \begin{equation} \lambdabel{firstconcl} \sup_{t\in [0,T_0]}\inf_{\theta\in\R^{2} \atop x\in\R} \| \mathcal{P}si(t,\cdot)-(e^{i\theta_{1}} r_{1}(\cdot-x),e^{i\theta_{2}}r_{2}(\cdot-x))\|_{\varmathbb{H}^{1}}^2\leq C(\mathcal{E}(\mathcal{P}si(t))-\mathcal{E}(R))=C(\mathcal{E}(\mathcal{P}si^0)-\mathcal{E}(R))< C\varepsilon . \end{equation} On the other hand, it is readily seen that, from this inequality, one obtains $T_0=T$. 
In fact, assume by contradiction that $T_0<T$. Then, since by~\eqref{firstconcl}
$$ \Gamma_{\Psi(T_0)}=\inf_{\theta\in\R^{2} \atop x\in\R} \| \Psi(T_0,\cdot)-(e^{i\theta_{1}} r_{1}(\cdot-x),e^{i\theta_{2}}r_{2}(\cdot-x))\|_{\varmathbb{H}^{1}}^2<C\varepsilon, $$
the inequality $\Gamma_{\Psi(t)}<{\mathcal A}$ holds true by continuity for any $t\in[T_0,T_0+\rho)$, for some small $\rho>0$, which is a contradiction with the definition of $T_0$. Hence $T_0=T$ and, for any $T>0$, from~\eqref{firstconcl} we get
\begin{equation*}
\sup_{t\in [0,T]}\inf_{\theta\in\R^{2} \atop x\in\R} \| \Psi(t,\cdot)-(e^{i\theta_{1}} r_{1}(\cdot-x),e^{i\theta_{2}}r_{2}(\cdot-x))\|_{\varmathbb{H}^{1}}^2<C\varepsilon,
\end{equation*}
which is the desired property on $[0,T]$. By the arbitrariness of $T$ the assertion follows.
\section{Existence of a non-degenerate ground state}\label{ground}
In this section we will show that there exists a non-degenerate ground state $Z$. More precisely, let $z$ be the unique positive radial least energy solution of \eqref{eeqr} and let $a$ be given by
\begin{equation}\label{defa}
a=(1+\beta)^{-1/2p}.
\end{equation}
We will prove the following result.
\begin{theorem}\label{gsnondegex}
Let $a$ be given by \eqref{defa}; then the vector $Z=a(z,z)$ is a non-degenerate ground state of system \eqref{ellittico} for every $p>0$, $\beta>1$ and $p\neq \beta$.
\end{theorem}
\begin{remark}\rm In \cite{mmp1} it is proved that for $\beta\leq 1$ every ground state of \eqref{ellittico} necessarily has one trivial component, which is the reason for the assumption $\beta>1$. Moreover, it can be easily seen that for $p=\beta$ the ground state $Z$ is a degenerate solution, which is why we assume $p\neq \beta$.
\end{remark}
This result will be a consequence of the following two results.
\begin{theorem}\label{esi}
Let $a$ be given by \eqref{defa}; then the vector $Z=a(z,z)$ is a ground state of system \eqref{ellittico} for every $p>0$, $\beta>1$.
\end{theorem}
\begin{theorem}\label{deg}
Let $a$ be given by \eqref{defa}; then the vector $Z=a(z,z)$ is a non-degenerate ground state of system \eqref{ellittico} for every $p>0$, $\beta>1$ and $p\neq \beta$.
\end{theorem}
\begin{remark}\rm In \cite{fm} the global existence for the Cauchy problem \eqref{schr} is studied, and it is proved that the solution exists for all times if $p<2/n$, while it can blow up if $p\geq 2/n$. In the critical case $p=2/n$, a bound on the $L^2$-norm of the initial data is given which guarantees the global existence of the solution (see Theorem 2). Since Theorem \ref{esi} shows that the test functions used in \cite{fm} to estimate the blow-up threshold belong to the set of ground state solutions, as a by-product we obtain that the bound given in \cite{fm} is the exact threshold value.
\end{remark}
\begin{remark}\rm The above results have been proved for $p=1$, respectively, in~\cite{si} and~\cite{dw} in any dimension. Actually, the same arguments work for any $p>0$. In the following we include the details for completeness. Let us notice that the same proof of Theorem \ref{esi} holds in dimension greater than one; in addition, the arguments used in \cite{dw} hold for $p\in (0,2/n)$ for every $n\geq 1$. Thus, the vector $Z$ is a non-degenerate ground state solution of \eqref{ellittico} in any dimension $n\geq 1$; our conjecture is that it is the only one if $\beta>1$.
Here our interest, is restricted to the one dimension setting so that we will see the proof of Theorem \ref{gsnondegex} in this case. \end{remark} \subsection{Proof of Theorem \ref{esi}}\lambdabel{existenceproof} First, we recall this simple facts. \begin{proposition} Let us set $$ S_{1}=\inf_{H^{1}(\R)\setminus\{0\}}\frac{\|u\|^{2}_{H^1}}{\| u\|_{2p+2}^{2}}, \qquad T_{1}=\inf_{\mathcal N_{1}}\Big\{\frac12\|u\|^{2}_{H^1}-\frac1{2p+2}\|u\|_{2p+2}^{2p+2}\Big\}, $$ where $$ \mathcal N_{1}=\big\{u\inH^{1}(\R):\, u\neq 0,\,\,\|u\|^{2}_{H^1}=\|u\|_{2p+2}^{2p+2}\big\}. $$ Then, the following equality holds $$ T_{1}=\frac12\frac{p}{p+1}(S_{1})^{(p+1)/p}. $$ \end{proposition} {\noindent{\bf Proof.}\,\,\,} As $z$ solves the minimization problems that defines $S_{1}$ and $T_{1}$, using~\eqref{eeqr} we get $$ S_{1}=\frac{\|z\|^{2}_{H^1}}{\| z\|_{2p+2}^{2}}= \frac{\|z\|^{2}_{H^1}}{\| z\|^{2/(p+1)}}=\|z\|^{2p/(p+1)}_{H^1}= \|z\|_{2p+2}^{2p}, $$ namely \begin{equation} \lambdabel{euqaS} \|z\|^{2}_{H^1}=S_{1}^{(p+1)/p} \,\,\quad \text{and} \,\,\quad \|z\|_{2p+2}=S_{1}^{1/2p}. \end{equation} Using these equalities in the definition of $T_{1}$ permits to conclude the proof. \edim \vskip2pt \noindent Define now the sets \begin{align*} {\mathcal N}_{0} &=\left\{U\in\varmathbb{H}^{1}: U\neq (0,0),\,\, \|U\|^{2}_{\varmathbb{H}^{1}}=\|U\|_{2p+2}^{2p+2}+2\beta\|u_{1}u_{2}\|_{p+1}^{p+1}\right\}, \\ {\mathcal N} &=\big\{ U\in\varmathbb{H}^{1}: u_{i}\neq 0,\,\, \|u_{i}\|^{2}_{H^1}=\|u_{i}\|_{2p+2}^{2p+2}+ \beta\| u_{1}u_{2}\|^{p+1}_{p+1},\,\,i=1,2\big\}. \end{align*} Moreover, if $\varmathbb{H}^{1}_{r}$ is the set of radial function of $\varmathbb{H}^{1}$, we introduce the numbers \begin{equation} \lambdabel{definf} A_{0}=\inf_{U\in \mathcal N_{0}} \mathcal{I}(U),\qquad A=\inf_{U\in \mathcal N} \mathcal{I}(U), \qquad A_{r}=\inf_{U\in \mathcal N\cap \varmathbb{H}^{1}_{r}} \mathcal{I}(U), \end{equation} where $$ \mathcal{I}(U)=\frac{1}{2}\|U\|^{2}_{\varmathbb{H}^{1}}-\frac{1}{2p+2}\|U\|_{2p+2}^{2p+2} -\frac{1}{p+1}\beta\|u_{1}u_{2}\|_{p+1}^{p+1}. $$ Let $a$ be a positive number. Writing down the equations that define $\mathcal N$ and recalling that $z$ satisfies \eqref{eeqr} it is easy to see that $a(z,z)\in \mathcal N$ if $a$ satisfies \eqref{defa}. \\ \noindent Concerning the infimum problems $A_0,A,A_r$, in \cite{si} the following result is proved for $p=1$; actually the same proof holds for any $p$ satisfying \eqref{pzero}, we include some details. \begin{proposition}\lambdabel{disinf} Let $a$ satisfies \eqref{defa}. Then the following inequalities hold \begin{equation}\lambdabel{disA} 0<A_{0}\leq A\leq A_{r}\leq \frac{p}{p+1}a^{2}S_{1}^{(p+1)/p}, \end{equation} where the values $A_{0}$ and $A_{r}$ are defined in~\eqref{definf}. \end{proposition} {\noindent{\bf Proof.}\,\,\,} First note that, taken any $U=(u_1,u_2)\in \mathcal N_{0} $, the value $\mathcal{I}(U)$ is equal to \begin{equation}\lambdabel{disI} \mathcal{I}(U)=\frac12\Big(\frac{p}{p+1}\Big)\big[\|U\|_{2p+2}^{2p+2}+2\beta \|u_{1}u_{2}\|_{p+1}^{p+1}\big]=\frac12\Big(\frac{p}{p+1}\Big)\|U\|^{2}_{\varmathbb{H}^{1}}. \end{equation} Moreover, since $a(z,z)\in \mathcal N$ and has radial components, recalling~\eqref{euqaS} we get \begin{equation}\lambdabel{disar} A_{r} \leq \mathcal{I}(az,az) = \frac12\Big(\frac{p}{p+1}\Big) \|(az,az)\|^{2}_{H^1}=\Big(\frac{p}{p+1}\Big)a^{2}\|z\|^{2}_{H^1} =\Big(\frac{p}{p+1}\Big)a^{2} S_{1}^{(p+1)/p}, \end{equation} which is the last inequality on the right-hand side in \eqref{disA}. It just remains to show that $A_{0}>0$. 
To this aim, take $U\in \mathcal N_{0}$ and observe that the H\"older and Sobolev inequalities imply that there exist positive constants $C_0,C_1$ such that
$$ \|U\|^{2}_{\varmathbb{H}^{1}}=\|U\|_{2p+2}^{2p+2}+2\beta\|u_{1}u_{2}\|_{p+1}^{p+1}\leq C_0\|U\|_{2p+2}^{2p+2}\leq C_1\|U\|^{2p+2}_{\varmathbb{H}^{1}} $$
so that the norm $\|U\|_{\varmathbb{H}^{1}}$ remains uniformly away from zero. Hence, recalling formula~\eqref{disI}, we conclude the proof. \edim
\vskip4pt
We are now ready to complete the proof of Theorem \ref{esi}. \\
\noindent{\bf Proof of Theorem~\ref{esi} concluded.} We will obtain Theorem \ref{esi} by showing that the infimum $A$ equals $A_{r}$ and it is achieved at the couple $a(z,z)$, which is thus a ground state solution of~\eqref{ellittico}. \\
First, let $(U_{m})=(u_{m,1},u_{m,2})\subset {\mathcal N}$ be a minimizing sequence for $A$, namely $\mathcal{I}(U_m)=A+o(1)$ as $m\to\infty$. Let us set $y_{m,i}=\|u_{m,i}\|_{2p+2}^{2}$ for any $m\in\N$ and $i=1,2$. Hence, by the definition of $S_1$ and the H\"older inequality, it follows that
\begin{equation} \label{disy}
S_{1}y_{m,1}\leq \|u_{m,1}\|^{2}_{H^1}=\|u_{m,1}\|_{2p+2}^{2p+2}+\beta \|u_{m,1}u_{m,2}\|_{p+1}^{p+1}\leq y_{m,1}^{p+1}+\beta y_{m,1}^{(p+1)/2} y_{m,2}^{(p+1)/2},
\end{equation}
for all $m\in\N$. Of course, for all $m\in\N$, the analogous inequality holds
\begin{equation} \label{disy-2}
S_{1}y_{m,2}\leq \|u_{m,2}\|^{2}_{H^1}=\|u_{m,2}\|_{2p+2}^{2p+2}+\beta \|u_{m,1}u_{m,2}\|_{p+1}^{p+1}\leq y_{m,2}^{p+1}+\beta y_{m,1}^{(p+1)/2} y_{m,2}^{(p+1)/2}.
\end{equation}
Furthermore, taking into account formula~\eqref{disI}, by addition of the first inequalities in~\eqref{disy} and~\eqref{disy-2} one obtains
\begin{equation}\label{disfina}
S_{1}(y_{m,1}+y_{m,2})\leq 2\frac{p+1}{p}\mathcal{I} (U_{m})=2\frac{p+1}{p} A+o(1),\quad\text{as $m\to\infty$}.
\end{equation}
Combining this inequality with Proposition~\ref{disinf} gives
$$ S_{1}(y_{m,1}+y_{m,2})\leq 2a^{2}S_{1}^{(p+1)/p}+o(1),\quad\text{as $m\to\infty$}. $$
Hence, defining $z_{m,i}=y_{m,i}/S_{1}^{1/p}$, we derive $z_{m,1}+z_{m,2}\leq 2a^{2}+o(1)$ as $m$ tends to infinity. Also, by dividing \eqref{disy} by $S_{1}y_{m,1}$ and~\eqref{disy-2} by $S_{1}y_{m,2}$ and using $S_{1}=S_{1}^{(p-1)/2p}S_{1}^{(p+1)/2p}$, we obtain that, as $m\to\infty$, $(z_{m,1},z_{m,2})$ satisfies the following system of inequalities
$$ \begin{cases} z_{m,1}+z_{m,2}\leq 2a^{2}+o(1), \\ z^{p}_{m,1}+\beta z_{m,1}^{(p-1)/2}z_{m,2}^{(p+1)/2} \geq 1, \\ z^{p}_{m,2}+\beta z_{m,1}^{(p+1)/2}z_{m,2}^{(p-1)/2} \geq 1. \end{cases} $$
Taking into account \eqref{defa}, we are led to the study of the associated algebraic system of inequalities
\begin{equation}\label{algebinequalit}
\begin{cases} x+y \leq 2a^{2}, \\ x^{p}+\beta x^{(p-1)/2}y^{(p+1)/2} \geq (1+\beta)a^{2p}, \\ y^{p}+\beta x^{(p+1)/2}y^{(p-1)/2} \geq (1+\beta)a^{2p}, \end{cases}
\end{equation}
for which we refer to Figure 1. Then, for $\beta >1$, the sequences $(z_{m,i})$, $i=1,2$, remain bounded away from zero, and from the first inequality of~\eqref{algebinequalit} with $x=y$ (see Figure 1) it has to be $z_{m,1}\to a^{2}$ and $z_{m,2}\to a^{2}$ as $m\to\infty$, so that $y_{m,1}\to a^{2}S_{1}^{1/p}$ and $y_{m,2}\to a^{2}S_{1}^{1/p}$ as $m$ diverges.
Whence, passing to the limit in formula~\eqref{disfina}, in light of Proposition \ref{disinf} we obtain $$ 2S_{1}^{(p+1)/p}a^{2}\leq 2\frac{p+1}{p} A\leq 2a^{2}S_{1}^{(p+1)/p}, $$ so that \eqref{disar} gives $$ A\leq A_{r} \leq \mathcal{I}(az,az) \leq \Big(\frac{p}{p+1}\Big)a^{2} \left(S_{1}\right)^{(p+1)/p}=A, $$ which gives $A=A_{r}=\mathcal{I}(az,az)$, concluding the proof. \edim \subsection{Proof of Theorem \ref{deg}} According to Section~\ref{existenceproof}, let us consider $Z=a(z,z)$, the particular ground state solution of \eqref{ellittico} with $a$ given in \eqref{defa}; we will now show the non-degeneracy property of $Z$. First, notice that the linearized system \eqref{linsyste} can be obtained using the operator $L_{+}$ acting on $Z$, and by the explicit expression of $Z$ we get \begin{equation*} L_{+}= \left( \begin{array}{cc} -\dfrac12\partial_{xx} +1 & 0 \\\\ 0 & -\dfrac12\partial_{xx} +1 \end{array} \right)-\left( \begin{array}{cc} \dfrac{p(2+\beta)+1}{1+\beta}z^{2p} & \dfrac{\beta(p+1)}{1+\beta}z^{2p} \\\\ \dfrac{\beta(p+1)}{1+\beta}z^{2p} & \dfrac{p(2+\beta) +1}{1+\beta}z^{2p} \end{array} \right). \end{equation*} In accordance with Section \ref{spectral}, we denote by $H_{F}(Z)$ the second matrix on the right-hand side. The quadratic form related to $H_{F}(Z)$ can be diagonalized by an orthonormal change of coordinates, introducing \begin{equation}\label{coordinatech} w_{1}=\dfrac{\sqrt{2}}{2}(\phi_{1}+\phi_{2}),\quad w_{2}=\dfrac{\sqrt{2}}{2}(\phi_{1}-\phi_{2}). \end{equation} Since we have \begin{displaymath} \hbox{Tr}(H_{F}(Z))=2 \dfrac{(2+\beta)p+1}{1+\beta} =(2p+1)+\dfrac{2p+1-\beta}{1+\beta},\qquad \hbox{Det}(H_{F}(Z))=\dfrac{(2p+1)(2p+1-\beta)}{1+\beta}, \end{displaymath} it follows that its eigenvalues are \begin{equation}\label{auto} \lambda_{1}=2p+1,\qquad \lambda_{2}=\dfrac{2p+1-\beta}{1+\beta}\in (-1,2p+1), \end{equation} so the linear elliptic system $L_{+}\Phi=0$ decouples and reduces to \begin{equation}\label{diagonale} \begin{cases} -\frac12\partial_{xx} w_{1}+w_{1}=(2p+1)z^{2p}(x) w_{1}, & \text{in $\R$}\\ -\frac12\partial_{xx} w_{2}+w_{2}=\dfrac{2p+1-\beta}{1+\beta}z^{2p}(x) w_{2}, & \text{in $\R$}. \end{cases} \end{equation} Taking into account that the weight $z$ is exponentially decaying, the spectrum of the linear self-adjoint operator $-\frac{1}{2}\partial_{xx}+{\rm Id}-\mu z^{2p}$ is discrete. Furthermore, from \cite[(a) and (b) of Proposition 2.8]{weinsteinMS}, with proofs for $n=1$ in~\cite[Appendix A]{weinsteinMS}, we learn that the eigenvalues of \begin{equation}\label{eigen} -\dfrac12\partial_{xx} w+w-\mu z^{2p}(x) w=0\qquad \text{in $\R$}, \end{equation} are given by $\mu_{1}=1,$ $\mu_{2}=2p+1,$ $\mu_{3}>2p+1,$ and, denoting by $V_{\mu_i}$ the eigenspace corresponding to the eigenvalue $\mu_i$, we have $ V_{\mu_1}={\rm span}\big\{z\big\},$ $ V_{\mu_2}={\rm span}\big\{\partial_x z\big\}.$ Therefore, from the first equation of~\eqref{diagonale} we deduce $w_1\in {\rm span}\big\{\partial_x z\big\}.$ From \eqref{auto} we also deduce, from the second equation of~\eqref{diagonale}, that $w_2=0.$ In turn, by the orthonormal change of coordinates~\eqref{coordinatech} we obtain $\phi_1=\phi_2=c\partial_x z$, for some coefficient $c\in\R$. Whence $\hbox{Ker}(L_{+})=\langle\partial_{x}Z_{\beta}\rangle$, which concludes the proof. \edim \end{document}
\begin{document} \title[Fusion procedure]{Fusion procedure for cyclotomic BMW algebras} \author[Weideng Cui]{Weideng Cui} \address{School of Mathematics, Shandong University, Jinan, Shandong 250100, P.R. China.} \email{[email protected]} \begin{abstract} Inspired by the work [IMOg2], in this note, we prove that the pairwise orthogonal primitive idempotents of generic cyclotomic Birman-Murakami-Wenzl algebras can be constructed by consecutive evaluations of a certain rational function. In the appendix, we prove a similar result for generic cyclotomic Nazarov-Wenzl algebras. \end{abstract} \maketitle \section{Introduction} \subsection{} The primitive idempotents of a symmetric group $\mathfrak{S}_n,$ showed by Jucys [Juc], can be obtained by taking a certain limiting process on a rational function. The process is now commonly known as the fusion procedure, which has been further developed in the situation of Hecke algebras [Ch]; see also [Na2-4]. In [Mo], Molev has presented another approach of the fusion procedure for $\mathfrak{S}_n,$ which depends on the existence of a maximal commutative subalgebra generated by the Jucys-Murphy elements. In his approach, the primitive idempotents are obtained by consecutive evaluations of a certain rational function. The new version of the fusion procedure has been generalized to the Hecke algebras of type $A$ [IMO], to the Brauer algebras [IM, IMOg1], to the Birman-Murakami-Wenzl algebras [IMOg2], to the complex reflection groups of type $G(d,1,n)$ [OgPA1], to the Ariki-Koike algebras [OgPA2], to the wreath products of finite groups by the symmetric group [PA], to the degenerate cyclotomic Hecke algebras [ZL], to the Yokonuma-Hecke algebras [C1], to the cyclotomic Yokonuma-Hecke algebras [C2, Appendix] and to the degenerate cyclotomic Yokonuma-Hecke algebras [C3]. \subsection{} The Birman-Murakami-Wenzl (for brevity, BMW) algebra was algebraically defined by Birman and Wenzl [BW], and independently Murakami [Mu], which is an algebra generated by some elements satisfying certain particular relations. These relations are in fact implicitly modeled on the ones of certain algebra of tangles studied by Kauffman [Ka] and Morton and Traczyk [MT], which is known as a Kauffman tangle algebra. BMW algebra are closely related to Artin braid groups of type $A,$ Iwahori-Hecke algebras of type $A,$ quantum groups, Brauer algebras and other diagram algebras; see [Eny1-2, HuXi, Hu, LeRa, MW, RuSi4-6, RuSo, Xi] and the references therein. Motivated by studying link invariants, H\"{a}ring-Oldenburg [HO] introduced a class of finite dimensional associative algebras called cyclotomic Birman-Murakami-Wenzl (for brevity, BMW) algebras, generalizing the notions of BMW algebras. Such algebras are closely related to Artin braid groups of type $B,$ cyclotomic Hecke algebras and other research objects, and have been studied by a lot of authors from different perspectives; see [Go1-4, GoHM1-2, HO, OrRa, RuSi2-3, RuXu, Si, WiYu1-3, Xu, Yu] and so on. \subsection{} Inspired by the work [IMOg2] on the fusion procedures of BMW algebras, in this note we prove that a complete set of pairwise orthogonal primitive idempotents of cyclotomic BMW algebras can be derived by consecutive evaluations of a certain rational function in several variables. In the appendix, we prove a similar result for generic cyclotomic Nazarov-Wenzl algebras. This paper is organized as follows. 
In Section 2, we recall some preliminaries and introduce the the primitive idempotents $E_{\mathcal{T}}$ of cyclotomic BMW algebras. In Section 3, we establish the fusion formula for the primitive idempotent $E_{\mathcal{T}}.$ In Section 4 (Appendix), we develop the fusion formulas for the primitive idempotents of cyclotomic Nazarov-Wenzl algebras. \section{Preliminaries} \subsection{Cyclotomic Birman-Murakami-Wenzl algebras} \begin{definition} Assume that $\mathbb{K}$ is an algebraically closed field containing $\delta_{j},$ $0\leq j\leq d-1,$ and some nonzero elements $\rho,$ $q$, $q-q^{-1}$ and $v_i,$ $1\leq i\leq d$, and that they satisfy the relation $\rho-\rho^{-1}=(q-q^{-1})(\delta_{0}-1).$\vskip2mm Fix $n\geq 1.$ The cyclotomic Birman-Murakami-Wenzl algebra $\mathscr{B}_{d, n}$ is the $\mathbb{K}$-algebra generated by the elements $X_{1}^{\pm 1}, T_{i}^{\pm 1}$ and $E_{i}$ ($1\leq i\leq n-1$) with the following relations:\vskip2mm (1) (Inverses) $T_{i}T_{i}^{-1}=T_{i}^{-1}T_{i}=1$ and $X_{1}X_{1}^{-1}=X_{1}^{-1}X_{1}=1.$ (2) (Idempotent relations) $E_{i}^{2}=\delta_{0} E_{i}$ for $1\leq i\leq n-1.$ (3) (Affine braid relations) \hspace{0.7cm}(a) $T_{i}T_{i+1}T_{i}=T_{i+1}T_{i}T_{i+1}$ and $T_{i}T_{j}=T_{j}T_{i}$ if $|i-j|\geq 2.$ \hspace{0.7cm}(b) $X_{1}T_{1}X_{1}T_{1}=T_{1}X_{1}T_{1}X_{1}$ and $X_{1}T_{j}=T_{j}X_{1}$ if $j\geq 2.$ (4) (Tangle relations) \hspace{0.7cm}(a) $E_{i}E_{i\pm 1}E_{i}=E_{i}.$ \hspace{0.7cm}(b) $T_{i}T_{i\pm 1}E_{i}=E_{i\pm 1}E_{i}$ and $E_{i}T_{i\pm 1}T_{i}=E_{i}E_{i\pm 1}.$ \hspace{0.7cm}(c) For $1\leq j\leq d-1,$ $E_{1}X_{1}^{j}E_{1}=\delta_{j}E_{1}.$ (5) (Kauffman skein relations) $T_{i}-T_{i}^{-1}=(q-q^{-1})(1-E_{i})$ for $1\leq i\leq n-1.$ (6) (Untwisting relations) $T_{i}E_{i}=E_{i}T_{i}=\rho^{-1}E_{i}$ for $1\leq i\leq n-1.$ (7) (Unwrapping relations) $E_{1}X_{1}T_{1}X_{1}=\rho E_{1}=X_{1}T_{1}X_{1}E_{1}.$ (8) (Cyclotomic relation) $(X_1-v_1)(X_1-v_2)\cdots (X_1-v_d)=0.$ \end{definition} In $\mathscr{B}_{d, n}$, We define inductively the following elements: \begin{equation}\label{JMur-elements} X_{i+1} :=T_{i}X_iT_i\quad\mbox{for}~i=1,\ldots,n-1. \end{equation} It can be easily checked that the elements $X_1,\ldots,X_n$ commute with each other, and moreover, we have \begin{equation}\label{JMur-elements1} E_{i}X_{i}X_{i+1}=X_{i}X_{i+1}E_{i}=E_{i}\quad\mbox{for}~i=1,\ldots,n-1. \end{equation} We now define the following elements (see [IMOg2, (2.15)]): \begin{equation}\label{Baxterized-elements11} T_{i}(u,v)=T_{i}+\frac{(q-q^{-1})u}{v-u}+\frac{(q-q^{-1})u}{u+\rho qv}E_{i}\quad\mbox{for}~i=1,\ldots,n-1. \end{equation} Note that $E_{i}^{2}=\delta_{0}E_{i}$, where $\delta_{0}=\frac{(q^{-1}+\rho^{-1})(\rho q-1)}{q-q^{-1}}.$ By using this, it can be easily checked that (see [IMOg2, (2.17-18)]) \begin{equation}\label{Baxterized-elements111} T_{i}(u,v)T_{i}(v,u)=f(u,v)\quad\mbox{for}~i=1,\ldots,n-1, \end{equation} where \begin{equation}\label{Baxterized-elements1111} f(u,v)=f(v,u)=\frac{(u-q^{2}v)(u-q^{-2}v)}{(u-v)^{2}}. 
\end{equation} \subsection{Combinatorics} $\lambda=(\lambda_{1},\ldots,\lambda_{k})$ is called a partition of $n$ if it is a finite sequence of weakly decreasing nonnegative integers whose sum is $n.$ We set $|\lambda| :=n.$ We shall identify a partition $\lambda$ with a Young diagram, which is the set $$[\lambda] :=\{(i,j)\:|\:i\geq 1~\mathrm{and}~1\leq j\leq \lambda_{i}\}.$$ We shall regard $\lambda$ as a left-justified array of boxes such that there exist $\lambda_{j}$ boxes in the $j$-th row for $j=1,\ldots,k.$ We write $\theta=(a,b)$ if the box $\theta$ lies in row $a$ and column $b.$ Similarly, a $d$-partition of $n$ is an ordered $d$-tuple $\bm{\lambda}=(\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(d)})$ of partitions $\lambda^{(k)}$ such that $\sum_{k=1}^{d}|\lambda^{(k)}|=n.$ We denote by $\mathcal{P}_{d}(n)$ the set of $d$-partitions of $n.$ We shall identify a $d$-partition $\bm{\lambda}$ with its Young diagram, which is the ordered $d$-tuple of the Young diagrams of its components. We write $\bm{\theta}=(\theta, s)$ if the box $\theta$ lies in the component $\lambda^{(s)}.$ Assume that $\bm{\lambda}$ and $\bm{\mu}$ are two $d$-partitions. We say that $\bm{\lambda}$ is obtained from $\bm{\mu}$ by adding a box if there exists a pair $(j,t)$ such that $\lambda_{j}^{(t)}=\mu_{j}^{(t)}+1$ and $\lambda_{i}^{(s)}=\mu_{i}^{(s)}$ for $(i,s)\neq (j,t).$ In this case, we will also say that $\bm{\mu}$ is obtained from $\bm{\lambda}$ by removing a box. Set \[\Lambda_{d,n}^{+} :=\{(l,\bm{\lambda})\:|\:0\leq l\leq \lfloor n/2\rfloor, \bm{\lambda}\in \mathcal{P}_{d}(n-2l)\}.\] The combinatorial objects appearing in the representation theory of $\mathscr{B}_{d, n}$ will be updown tableaux. For $(f, \bm{\lambda})\in \Lambda_{d,n}^{+},$ an $n$-updown $\bm{\lambda}$-tableau, or more simply an updown $\bm{\lambda}$-tableau, is a sequence $\mathcal{T}=(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{n})$ of $d$-partitions such that $\mathcal{T}_{n}=\bm{\lambda}$ and $\mathcal{T}_{i}$ is obtained from $\mathcal{T}_{i-1}$ by either adding or removing a box, for $i=1,\ldots,n$, where we set $\mathcal{T}_{0}=\emptyset.$ Let $\mathscr{T}_{n}^{ud}(\bm{\lambda})$ be the set of updown $\bm{\lambda}$-tableaux of $n.$ Suppose that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+}$ and $\mathcal{U}=(\mathcal{U}_{1},\ldots,\mathcal{U}_{n})\in \mathscr{T}_{n}^{ud}(\bm{\lambda}).$ Let \begin{align}\label{symme-forms} \mathrm{c}(\mathcal{U}|k)= \begin{cases} v_{s}q^{2(j-i)} & \text{if } \mathcal{U}_{k}=\mathcal{U}_{k-1}\cup ((i,j),s), \\ v_{s}^{-1}q^{2(i-j)} & \text{if } \mathcal{U}_{k-1}=\mathcal{U}_{k}\cup ((i,j),s). \end{cases} \end{align} Given a box $\bm{\alpha}=((i,j),s),$ we define the content of it by \begin{align}\label{symme-forms11113344} \mathrm{c}(\mathcal{U}|\bm{\alpha})= \begin{cases} v_{s}q^{2(j-i)} & \text{if }\bm{\alpha}\text{ is an addable box of }\mathcal{U}, \\ v_{s}^{-1}q^{2(i-j)} & \text{if }\bm{\alpha}\text{ is a removable box of }\mathcal{U}. \end{cases} \end{align} We give the generalizations of some constructions in [IM, Section 3]. 
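Before doing so, we record a small, purely illustrative instance of the contents \eqref{symme-forms}; the specific tableau below is chosen only for concreteness. Take $d=1$ and $n=3$, and (identifying a $1$-partition with its unique component) consider the updown tableau $\mathcal{S}=((1),(2),(1))$, so that $(f,\bm{\lambda})=(1,(1))\in \Lambda_{1,3}^{+}$: the box $((1,1),1)$ is added at the first step, while the box $((1,2),1)$ is added at the second step and removed at the third, whence \[ \mathrm{c}(\mathcal{S}|1)=v_{1},\qquad \mathrm{c}(\mathcal{S}|2)=v_{1}q^{2},\qquad \mathrm{c}(\mathcal{S}|3)=v_{1}^{-1}q^{-2}. \]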
Suppose that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+}$ and $\mathcal{T}=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n})\in \mathscr{T}_{n}^{ud}(\bm{\lambda}).$ Set $\bm{\mu}=\mathcal{T}_{n-1}$ and consider the updown $\bm{\mu}$-tableau $\mathcal{U}=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n-1}).$ We now define two $d$-tuples of infinite matrices \[M(\mathcal{U})=(m_{1}(\mathcal{U}),\ldots,m_{d}(\mathcal{U}))\quad \mathrm{ and }\quad \overline{M}(\mathcal{U})=(\overline{m}_{1}(\mathcal{U}),\ldots,\overline{m}_{d}(\mathcal{U})),\] here the rows and columns of each $m_{s}(\mathcal{U})$ or $\overline{m}_{s}(\mathcal{U})$ are labelled by positive integers and only a finite number of entries in each of the matrices are nonzero. The entry $m_{ij}^{s}$ of the matrix $m_{s}(\mathcal{U})$ (respectively, the entry $\overline{m}_{ij}^{s}$ of the matrix $\overline{m}_{s}(\mathcal{U})$) equals the number of times that the box $((i,j),s)$ is added (respectively, removed) in the sequence $(\emptyset, \mathcal{T}_{1},\ldots, \mathcal{T}_{n-1}).$ For each $k\in \mathbb{Z}$ and $1\leq s\leq d,$ we define two nonnegative integers $d_{k}^{s}=d_{k}(m_{s}(\mathcal{U}))$ and $\overline{d}_{k}^{s}=d_{k}(\overline{m}_{s}(\mathcal{U}))$ as the sums of the entries of the matrices $m_{s}(\mathcal{U})$ and $\overline{m}_{s}(\mathcal{U})$ on the $k$-th diagonal, that is, \begin{equation}\label{dsk-dskbar} d_{k}^{s}=\sum_{j-i=k}m_{ij}^{s}\quad\mathrm{ and }\quad \overline{d}_{k}^{s}=\sum_{j-i=k}\overline{m}_{ij}^{s}. \end{equation} Furthermore, we define the indexes $g_{k}^{s}=g_{k}(m_{s}(\mathcal{U}))$ and $\overline{g}_{k}^{s}=g_{k}(\overline{m}_{s}(\mathcal{U}))$ as follows: \begin{equation}\label{index-indexbar} g_{k}^{s}=\delta_{k0}+d_{k-1}^{s}+d_{k+1}^{s}-2d_{k}^{s}\quad\mathrm{ and }\quad \overline{g}_{k}^{s}=\overline{d}_{k-1}^{s}+\overline{d}_{k+1}^{s}-2\overline{d}_{k}^{s}. 
\end{equation} Finally, we define some integer $p_{1},\ldots,p_{n}$ associated to $\mathcal{T}$ inductively such that $p_{k}$ depends only on the first $k$ $d$-partitions $(\mathcal{T}_{1},\ldots,\mathcal{T}_{k})$ of $\mathcal{T}.$ Therefore, it is enough to define $p_{n}.$ We set \begin{equation}\label{integer-indexbar} p_{n}=1-g_{k_{n}}(m_{s_{n}}(\mathcal{U})) \end{equation} if $\mathcal{T}_{n}$ is obtained from $\mathcal{T}_{n-1}$ by adding a box $((i_n,j_n),s_n)$, where $k_n=j_n-i_n;$ \begin{equation}\label{integer-indexbar11} p_{n}=1-g_{k_{n}'}(\overline{m}_{s_{n}'}(\mathcal{U})) \end{equation} if $\mathcal{T}_{n}$ is obtained from $\mathcal{T}_{n-1}$ by removing a box $((i_{n}',j_{n}'),s_{n}')$, where $k_{n}'=j_{n}'-i_{n}'.$ Assume that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+},$ $\mathcal{T}=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n})$ is an $n$-updown $\bm{\lambda}$-tableau and that $\mathcal{U}=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n-1}).$ We then define the element $f(\mathcal{T})$ inductively by \begin{equation}\label{hooklength-indexbar11} f(\mathcal{T})=f(\mathcal{U})\varphi(\mathcal{U}, \mathcal{T}), \end{equation} where \begin{equation*} \varphi(\mathcal{U}, \mathcal{T})=\prod_{\substack{k\neq k_{n}\\k\in \mathbb{Z}}}(q^{2k_{n}}-q^{2k})^{g_{k}^{s_{n}}}\prod_{\substack{1\leq t\leq d; t\neq s_{n}\\k\in \mathbb{Z}}}\hspace{-2mm}(v_{s_{n}}q^{2k_{n}}-v_{t}q^{2k})^{g_{k}^{t}} \prod_{\substack{1\leq r\leq d\\k\in \mathbb{Z}}}(v_{s_{n}}q^{2k_{n}}-v_{r}^{-1}q^{-2k})^{\overline{g}_{k}^{r}} \end{equation*} if $\mathcal{T}_{n}$ is obtained from $\mathcal{T}_{n-1}$ by adding a box $((i_n,j_n),s_n)$, where $k_n=j_n-i_n;$ \begin{equation*} \varphi(\mathcal{U}, \mathcal{T})=\prod_{\substack{k\neq k_{n}'\\k\in \mathbb{Z}}}(q^{-2k_{n}'}-q^{-2k})^{\overline{g}_{k}^{s_{n}'}}\hspace{-1.5mm} \prod_{\substack{1\leq t\leq d; t\neq s_{n}'\\k\in \mathbb{Z}}}\hspace{-2mm}(v_{s_{n}'}^{-1}q^{-2k_{n}'}-v_{t}^{-1}q^{-2k})^{\overline{g}_{k}^{t}} \prod_{\substack{1\leq r\leq d\\k\in \mathbb{Z}}}(v_{s_{n}'}^{-1}q^{-2k_{n}'}-v_{r}q^{2k})^{g_{k}^{r}} \end{equation*} if $\mathcal{T}_{n}$ is obtained from $\mathcal{T}_{n-1}$ by removing a box $((i_{n}',j_{n}'),s_{n}')$, where $k_{n}'=j_{n}'-i_{n}'.$ In the special situation when $f=0,$ that is, $\bm{\lambda}$ is a $d$-partition of $n,$ there is a natural bijection between the set of $n$-updown $\bm{\lambda}$-tableaux and the set of standard $\bm{\lambda}$-tableaux defined in [DJM, Definition (3.10)]. The following proposition is inspired by [IM, Proposition 3.3] and can be proved similarly. 
\begin{proposition}\label{special-propo} If $\bm{\lambda}$ is a $d$-partition of $n$ and $\mathcal{T}=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n})$ is an $n$-updown $\bm{\lambda}$-tableau, then $p_1,\ldots,p_{n}$ are all equal to zero, and $f(\mathcal{T})$ is exactly equal to $\emph{F}_{\bm{\lambda}}^{-1}$ defined in $[\emph{OgPA}2, \emph{Section } 2.2(12)]$ when $d=m.$ \end{proposition} \subsection{Idempotents of $\mathscr{B}_{d, n}$} Following [RuXu, Definition 3.4], we say that $\mathscr{B}_{d, n}$ is generic if the parameters $v_i$, $1\leq i\leq d$, and $q$ satisfy the conditions (1) the order $o(q^{2})$ of $q^{2}$ satisfies $o(q^{2})> 2n$; (2) $|r|\geq 2n$ whenever there exists $r\in \mathbb{Z}$ such that either $v_{i}v_{j}^{\pm 1}=q^{2r}$ for $i\neq j,$ or $v_{i}=\pm q^{r}.$ Following [WiYu1, Corollary 4.5], we say that $\mathscr{B}_{d, n}$ is admissible if the set $\{E_{1}, E_{1}X_{1},\ldots,E_{1}X_{1}^{d-1}\}$ is linearly independent in $\mathscr{B}_{d, 2}.$ It has been proved by Goodman [Go2, Theorem 4.4] that this admissible condition coincides with the $\bm{\mathrm{u}}$-admissible condition defined in [RuXu, Definition 2.27]. From now on, we always assume that $\mathscr{B}_{d, n}$ is generic and admissible. Thus, by [RuXu, Lemma 3.5], we have $\mathcal{S}=\mathcal{T}$ if and only if $\mathrm{c}(\mathcal{S}|k)=\mathrm{c}(\mathcal{T}|k)$ for all $1\leq k\leq n.$ Therefore, the set $\{X_1,\ldots,X_n\},$ as a family of JM-elements for $\mathscr{B}_{d, n}$ in the abstract sense defined in [Ma, Definition 2.4], satisfies the separation condition associated to the weakly cellular basis of $\mathscr{B}_{d, n}$ constructed in [RuXu, Theorem 4.19]. Note that the results in [Ma] also hold for $\mathscr{B}_{d, n}$ with respect to the weakly cellular basis. In particular, we can construct the primitive idempotents of $\mathscr{B}_{d, n}$ following the arguments in [Ma, Section 3]. For each $1\leq k\leq n,$ we define the following set: \[\mathcal{R}(k) :=\{\mathrm{c}(\mathcal{S}|k)\:|\:\mathcal{S}\in \mathscr{T}_{n}^{ud}(\bm{\lambda}) \text{ for some }(f, \bm{\lambda})\in \Lambda_{d,n}^{+}\}.\] Suppose that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+}$ and $\mathcal{T}\in \mathscr{T}_{n}^{ud}(\bm{\lambda}).$ We set \begin{equation}\label{hooklength-idempotentelement11} E_{\mathcal{T}}=\prod_{k=1}^{n}\bigg(\prod_{\substack{c\in \mathcal{R}(k)\\c\neq \mathrm{c}(\mathcal{T}|k)}}\frac{X_{k}-c}{\mathrm{c}(\mathcal{T}|k)-c} \bigg). \end{equation} By standard arguments in [Ma, Section 3], the elements $\{E_{\mathcal{T}}\:|\:\mathcal{T}\in \mathscr{T}_{n}^{ud}(\bm{\lambda}) \text{ for some }(f, \bm{\lambda})\in \Lambda_{d,n}^{+}\}$ form a complete set of pairwise orthogonal primitive idempotents of $\mathscr{B}_{d, n}.$ Moreover, the elements $X_1,\ldots,X_n$ generate a maximal commutative subalgebra of $\mathscr{B}_{d, n}.$ We also have \begin{equation}\label{hooklength-idempotentelement1111} X_{k}E_{\mathcal{T}}=E_{\mathcal{T}}X_{k}=\mathrm{c}(\mathcal{T}|k)E_{\mathcal{T}}. \end{equation} \section{Fusion procedure for cyclotomic BMW algebras} Assume that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+}$ and that $\mathcal{T}=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n})$ is an $n$-updown $\bm{\lambda}$-tableau. Set $\bm{\mu}=\mathcal{T}_{n-1}$ and $\mathcal{U}=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n-1})$ as an updown $\bm{\mu}$-tableau. 
Let $\bm{\theta}$ be the box that is addable to or removable from $\bm{\mu}$ to get $\bm{\lambda}.$ For simplicity, we set $\mathrm{c}_{k} :=\mathrm{c}(\mathcal{T}|k).$ By \eqref{hooklength-idempotentelement11}, we can rewrite $E_{\mathcal{T}}$ inductively as follows: \begin{equation}\label{idempotentele-induc} E_{\mathcal{T}}=E_{\mathcal{U}}\frac{(X_{n}-b_1)\cdots (X_{n}-b_k)}{(\mathrm{c}_{n}-b_1)\cdots (\mathrm{c}_{n}-b_k)}, \end{equation} where $b_1,\ldots,b_k$ are the contents of all boxes except $\bm{\theta},$ which can be addable to or removable from $\bm{\mu}$ to get a $d$-partition. We denote by $\{\Lambda_{1},\ldots,\Lambda_{h}\}$ the set of all $d$-partitions obtained from $\bm{\mu}$ by adding a box or removing one. Set $\mathscr{T}_{j} :=(\mathcal{T}_{1},\ldots,\mathcal{T}_{n-1},\Lambda_{j})$ for $1\leq j\leq h.$ Note that $\mathcal{T}\in \{\mathscr{T}_{1},\ldots,\mathscr{T}_{h}\}.$ Since $\mathscr{B}_{d, n}$ is generic, hence it is semisimple. By [RuSi3, (4.16)] we have \begin{equation}\label{sum-formula11} E_{\mathcal{U}}=\sum_{j=1}^{h}E_{\mathscr{T}_{j}}. \end{equation} The property \eqref{hooklength-idempotentelement1111} implies that the following rational function \begin{equation}\label{rational-function11} E_{\mathcal{U}}\frac{u-\text{c}_n}{u-X_{n}} \end{equation} is regular at $u=\text{c}_n,$ and by \eqref{sum-formula11}, we get \begin{equation}\label{sum-function1111} E_{\mathcal{U}}\frac{u-\text{c}_n}{u-X_{n}}\Big|_{u=\text{c}_n}=E_{\mathcal{T}}. \end{equation} For $1\leq i\leq n-1,$ we set \begin{align}\label{Q-function41} Q_{i}(u,v;c) :=T_{i}+\frac{q-q^{-1}}{\rho^{-1}cuv-1}+\frac{q-q^{-1}}{1+qcuv}E_{i}. \end{align} Let $\phi_{1}(u) :=\frac{cuX_{1}-\rho}{u-X_1}.$ For $k=2,\ldots,n$, we set \begin{align}\label{phi-function42} \phi_k(u_1,\ldots,u_{k-1},u)& :=Q_{k-1}(u_{k-1},u;c)\phi_{k-1}(u_1,\ldots,u_{k-2},u)T_{k-1}(u_{k-1}, u)\notag\\ =Q_{k-1}&(u_{k-1},u;c)\cdots Q_{1}(u_{1},u;c)\phi_{1}(u)T_{1}(u_{1}, u)\cdots T_{k-1}(u_{k-1}, u). \end{align} From now on, we always set $c :=-q^{-1}.$ The following lemma is inspired by [IMOg2, Lemma 1] and can be proved similarly. \begin{lemma}\label{phi-phi-phi111} Assume that $n\geq 1.$ We have \begin{align}\label{F-PhiEu43} E_{\mathcal{U}}\phi_n(\mathrm{c}_1,\ldots,\mathrm{c}_{n-1},u)\prod_{r=1}^{n-1}f(u, \mathrm{c}_{r})^{-1}=E_{\mathcal{U}}\frac{cuX_{n}-\rho}{u-X_n}. \end{align} \end{lemma} \begin{proof} We shall prove \eqref{F-PhiEu43} by induction on $n.$ For $n=1,$ the situation is trivial. We set \begin{align}\label{phi-function421} \phi'_n(\mathrm{c}_1,&\ldots,\mathrm{c}_{n-1},u)\notag\\ &=Q_{n-1}(\mathrm{c}_{n-1},u;c)\cdots Q_{1}(\mathrm{c}_{1},u;c)\phi_{1}(u)T_{1}(u, \mathrm{c}_{1})^{-1}\cdots T_{n-1}(u, \mathrm{c}_{n-1})^{-1}. \end{align} By \eqref{Baxterized-elements111} and \eqref{phi-function421}, in order to show \eqref{F-PhiEu43}, it suffices to prove that \begin{align}\label{F-PhiEu4321} E_{\mathcal{U}}\phi'_n(\mathrm{c}_1,\ldots,\mathrm{c}_{n-1},u)=E_{\mathcal{U}}\frac{cuX_{n}-\rho}{u-X_n}. \end{align} By the induction hypothesis, it boils down to proving the following equality: \begin{align}\label{EUEU-PhiEu5} E_{\mathcal{U}}Q_{n-1}(\mathrm{c}_{n-1},u;c)\frac{cuX_{n-1}-\rho}{u-X_{n-1}}T_{n-1}(u, \mathrm{c}_{n-1})^{-1}=E_{\mathcal{U}}\frac{cuX_{n}-\rho}{u-X_n}. 
\end{align} Since $X_{n}$ commutes with $E_{\mathcal{U}},$ we can rewrite \eqref{EUEU-PhiEu5} as follows: \begin{align}\label{EUEU-PhiEu6} E_{\mathcal{U}}(u-X_n)Q_{n-1}(&\mathrm{c}_{n-1},u;c)(cuX_{n-1}-\rho)\notag\\ &=E_{\mathcal{U}}(cuX_{n}-\rho)T_{n-1}(u, \mathrm{c}_{n-1})(u-X_{n-1}). \end{align} By \eqref{Baxterized-elements11} and \eqref{Q-function41}, the equality \eqref{EUEU-PhiEu6} becomes \begin{align}\label{EUEU-PhiEu7} E_{\mathcal{U}}(u&-X_n)\Big(T_{n-1}+\frac{q-q^{-1}}{\rho^{-1}cu\mathrm{c}_{n-1}-1}+\frac{q-q^{-1}}{1+qcu\mathrm{c}_{n-1}}E_{n-1}\Big)(cuX_{n-1}-\rho)\notag\\ &=E_{\mathcal{U}}(cuX_{n}-\rho)\Big(T_{n-1}+\frac{(q-q^{-1})u}{\mathrm{c}_{n-1}-u}+\frac{(q-q^{-1})u}{u+\rho q\mathrm{c}_{n-1}}E_{n-1}\Big)(u-X_{n-1}). \end{align} By definition, we have $T_{n-1}X_{n-1}=X_{n}T_{n-1}-(q-q^{-1})X_{n}+(q-q^{-1})X_{n}E_{n-1}.$ Thus, we get that \eqref{EUEU-PhiEu7} is equivalent to \begin{align}\label{EUEU-PhiEu8} E_{\mathcal{U}}(u&-X_n)\Big(cu(X_{n}T_{n-1}-(q-q^{-1})X_{n}+(q-q^{-1})X_{n}E_{n-1})-\rho T_{n-1}\notag\\ &\hspace{2cm}+(q-q^{-1})\rho+\frac{q-q^{-1}}{1+qcu\mathrm{c}_{n-1}}E_{n-1}(cuX_{n-1}-\rho)\Big)\notag\\ &=E_{\mathcal{U}}(cuX_{n}-\rho)\Big(-X_{n}T_{n-1}+(q-q^{-1})X_{n}-(q-q^{-1})X_{n}E_{n-1}+uT_{n-1}\notag\\ &\hspace{2cm}-(q-q^{-1})u+\frac{(q-q^{-1})u}{u+\rho q\mathrm{c}_{n-1}}E_{n-1}(u-X_{n-1})\Big). \end{align} Since we have \begin{align*} cu^{2}&X_{n}T_{n-1}-cuX_{n}^{2}T_{n-1}-(q-q^{-1})cuX_{n}(u-X_n)-\rho(u-X_n)(T_{n-1}-(q-q^{-1}))\notag\\ =&-cuX_{n}^{2}T_{n-1}+\rho X_nT_{n-1}+(q-q^{-1})(cuX_{n}-\rho)X_{n}+u(cuX_{n}-\rho)(T_{n-1}-(q-q^{-1})), \end{align*} it is easy to see that the equality \eqref{EUEU-PhiEu8} comes down to the following equality: \begin{align}\label{EUEU-PhiEu9} E_{\mathcal{U}}&(u-X_n)\Big(cuX_{n}E_{n-1}+\frac{1}{1+qcu\mathrm{c}_{n-1}}E_{n-1}(cuX_{n-1}-\rho)\Big)\notag\\ &=E_{\mathcal{U}}(cuX_{n}-\rho)\Big(-X_{n}E_{n-1}+\frac{u}{u+\rho q\mathrm{c}_{n-1}}E_{n-1}(u-X_{n-1})\Big). \end{align} By definition, we have $E_{\mathcal{U}}X_{n-1}=\mathrm{c}_{n-1}E_{\mathcal{U}}.$ Hence, we get $E_{\mathcal{U}}X_{n}E_{n-1}=\frac{1}{\mathrm{c}_{n-1}}E_{\mathcal{U}}E_{n-1}$ by \eqref{JMur-elements1}. According to this, by comparing the coefficients of the terms involving $E_{\mathcal{U}}E_{n-1}X_{n-1}$, we see that it suffices to show that \begin{align}\label{EUEU-PhiEu11} \frac{1}{1+qcu\mathrm{c}_{n-1}}\cdot \frac{cu^{2}\mathrm{c}_{n-1}-cu}{\mathrm{c}_{n-1}}=\frac{u}{u+\rho q\mathrm{c}_{n-1}}\cdot \frac{\rho\mathrm{c}_{n-1}-cu}{\mathrm{c}_{n-1}}. \end{align} By comparing the coefficients of the terms involving $E_{\mathcal{U}}E_{n-1}$, it suffices to show that \begin{align}\label{EUEU-PhiEu12} \frac{cu^{2}-\rho}{\mathrm{c}_{n-1}}+\frac{1}{1+qcu\mathrm{c}_{n-1}}\cdot \frac{\rho-\rho u\mathrm{c}_{n-1}}{\mathrm{c}_{n-1}}= \frac{u}{u+\rho q\mathrm{c}_{n-1}}\cdot \frac{cu^{2}-\rho u\mathrm{c}_{n-1}}{\mathrm{c}_{n-1}}. \end{align} Noting that $c=-q^{-1},$ it is easy to verify that \eqref{EUEU-PhiEu11} and \eqref{EUEU-PhiEu12} are true. Thus, \eqref{EUEU-PhiEu9} holds. The lemma is proved. \end{proof} Let $\overline{\phi}_{1}(u) :=(u-v_{1})\cdots (u-v_{d})\frac{cuX_{1}-\rho}{u-X_1}.$ For $k=2,\ldots,n$, we set \begin{align}\label{phi-function424242} \overline{\phi}_k(u_1,\ldots,u_{k-1},u)& :=Q_{k-1}(u_{k-1},u;c)\overline{\phi}_{k-1}(u_1,\ldots,u_{k-2},u)T_{k-1}(u_{k-1}, u)\notag\\ =Q_{k-1}&(u_{k-1},u;c)\cdots Q_{1}(u_{1},u;c)\overline{\phi}_{1}(u)T_{1}(u_{1}, u)\cdots T_{k-1}(u_{k-1}, u). 
\end{align} We also define the following rational function: \begin{align}\label{Phi-function111} \Phi(u_1,\ldots,u_n) :=\overline{\phi}_1(u_1)\cdots \overline{\phi}_{n-1}(u_1,\ldots,u_{n-1})\overline{\phi}_n(u_1,\ldots,u_{n}). \end{align} Recall that the integers $p_{1},\ldots,p_{n}$ associated to $\mathcal{T}$ have been defined as in \eqref{integer-indexbar} or \eqref{integer-indexbar11}. Now we can state the main result of this paper. \begin{theorem}\label{main-theorem11112} The idempotent $E_{\mathcal{T}}$ of $\mathscr{B}_{d, n}$ corresponding to an $n$-updown $\bm{\lambda}$-tableau $\mathcal{T}$ can be derived by the following consecutive evaluations$:$ \begin{equation}\label{idempotents111} E_{\mathcal{T}}=\frac{1}{f(\mathcal{T})}\Big(\prod_{k=1}^{n}\frac{(u_{k}-\mathrm{c}_{k})^{p_{k}}}{cu_{k}\mathrm{c}_{k}-\rho}\Big) \Phi(u_1,\ldots,u_n)\Big|_{u_{1}=\emph{c}_1}\cdots\Big|_{u_{n}=\emph{c}_{n}}. \end{equation} \end{theorem} \begin{proof} We shall prove the theorem by induction on $n.$ For $n=1,$ we have $p_{1}=0$ by Proposition \ref{special-propo}. Thus, we get that the right-hand side of \eqref{idempotents111} is equal to \begin{align}\label{n-1-istrue} \frac{1}{f(\mathcal{T})}&\frac{(u_{1}-v_{1})\cdots (u_{1}-v_{d})}{cu_{1}\mathrm{c}_{1}-\rho} \frac{cu_{1}X_{1}-\rho}{u_{1}-X_1}\Big|_{u_{1}=\mathrm{c}_1}\notag\\ &=\frac{1}{f(\mathcal{T})}\frac{(u_{1}-v_{1})\cdots (u_{1}-v_{d})}{u_{1}-\mathrm{c}_{1}}\frac{u_{1}-\mathrm{c}_{1}}{cu_{1}\mathrm{c}_{1}-\rho}\frac{cu_{1}X_{1}-\rho}{u_{1}-X_1}\Big|_{u_{1}=\mathrm{c}_1}. \end{align} Moreover, by \eqref{hooklength-indexbar11}, we have \[f(\mathcal{T})=\prod_{1\leq k\leq d;v_{k}\neq \mathrm{c}_{1}}(\mathrm{c}_{1}-v_{k}).\] Therefore, it is easy to see that \eqref{n-1-istrue} is equal to $E_{\mathcal{T}}$ by \eqref{hooklength-idempotentelement1111} and \eqref{sum-function1111}. For $n\geq 2,$ by the induction hypothesis we can write the right-hand side of \eqref{idempotents111} as follows: \begin{align}\label{n-1-istrue2} \frac{f(\mathcal{U})}{f(\mathcal{T})}\frac{(u_{n}-\mathrm{c}_{n})^{p_{n}}}{cu_{n}\mathrm{c}_{n}-\rho}E_{\mathcal{U}} \overline{\phi}_n(\mathrm{c}_{1},\ldots,\mathrm{c}_{n-1},u_n)\Big|_{u_{n}=\mathrm{c}_{n}}. \end{align} Note that $\overline{\phi}_n(\mathrm{c}_{1},\ldots,\mathrm{c}_{n-1},u_n)=(u_{n}-v_{1})\cdots (u_{n}-v_{d})\phi_n(\mathrm{c}_{1},\ldots,\mathrm{c}_{n-1},u_n).$ By \eqref{F-PhiEu43}, we can rewrite the expression \eqref{n-1-istrue2} as \begin{align}\label{n-1-istrue3} \frac{f(\mathcal{U})}{f(\mathcal{T})}\frac{(u_{n}-\mathrm{c}_{n})^{p_{n}}}{cu_{n}\mathrm{c}_{n}-\rho}(u_{n}-v_{1})\cdots (u_{n}-v_{d})\prod_{r=1}^{n-1}f(u_{n}, \mathrm{c}_{r})E_{\mathcal{U}}\frac{cu_{n}X_{n}-\rho}{u_{n}-X_n}\Big|_{u_{n}=\mathrm{c}_{n}}. \end{align} By \eqref{hooklength-indexbar11}, we see that \begin{align*} \frac{f(\mathcal{U})}{f(\mathcal{T})}(u_{n}&-v_{1})\cdots (u_{n}-v_{d})\prod_{r=1}^{n-1}f(u_{n}, \mathrm{c}_{r})(u_{n}-\mathrm{c}_{n})^{p_{n}-1}\notag\\ &=\frac{f(\mathcal{U})}{f(\mathcal{T})}(u_{n}-v_{1})\cdots (u_{n}-v_{d})\prod_{r=1}^{n-1}\frac{(u_{n}-q^{2}\mathrm{c}_{r})(u_{n}-q^{-2}\mathrm{c}_{r})}{(u_{n}-\mathrm{c}_{r})^{2}}(u_{n}-\mathrm{c}_{n})^{p_{n}-1} \end{align*} is regular at $u_n=\mathrm{c}_{n}$ and is equal to $1.$ Thus, the expression \eqref{n-1-istrue3} equals \begin{align}\label{n-1-istrue4} E_{\mathcal{U}}\frac{u_{n}-\mathrm{c}_{n}}{u_{n}-X_n}\frac{cu_{n}X_{n}-\rho}{cu_{n}\mathrm{c}_{n}-\rho}\Big|_{u_{n}=\mathrm{c}_{n}}. 
\end{align} By \eqref{sum-function1111}, we see that \eqref{n-1-istrue4} is equal to \begin{align}\label{n-1-istrue5} E_{\mathcal{T}}\frac{cu_{n}X_{n}-\rho}{cu_{n}\mathrm{c}_{n}-\rho}\Big|_{u_{n}=\mathrm{c}_{n}}. \end{align} By \eqref{hooklength-idempotentelement1111}, we have $E_{\mathcal{T}}X_{n}=\mathrm{c}_{n}E_{\mathcal{T}}.$ Thus, we get that the expression \eqref{n-1-istrue5}, that is, the right-hand side of \eqref{idempotents111} equals $E_{\mathcal{T}}.$ \end{proof} \begin{remark}\label{remark111} Let $\mathscr{H}_{d, n}$ be the cyclotomic Hecke algebra defined in [AK]. It has been proved in [RuXu, Proposition 4.1] that $\mathscr{H}_{d, n}$ is isomorphic to the quotient of $\mathscr{B}_{d, n}$ by the two-sided ideal generated by all $E_{i}.$ In the process of taking quotient, the parameter $\rho$ disappears; however, the parameter $c$ is reserved and can be arbitrary. If we replace the $T_{i}(u, v),$ $Q_{i}(u,v;c),$ $\phi_{1}(u)$ in \eqref{phi-function42} with \begin{align*} \overline{T}_{i}(u,v)=T_{i}+\frac{(q-q^{-1})u}{v-u},\quad \overline{Q}_{i}(u,v;c) :=T_{i}+\frac{q-q^{-1}}{cuv-1},\quad \psi_{1}(u) :=\frac{cuX_{1}-1}{u-X_1}, \end{align*} it is easy to see that the analogue of Lemma \ref{phi-phi-phi111} holds. Let $\overline{\psi}_{1}(u) :=(u-v_{1})\cdots (u-v_{d})\frac{cuX_{1}-1}{u-X_1},$ and for $k=2,\ldots,n$, set \begin{align*} \overline{\psi}_k(u_1,\ldots,u_{k-1},u)& :=\overline{Q}_{k-1}(u_{k-1},u;c)\overline{\psi}_{k-1}(u_1,\ldots,u_{k-2},u)\overline{T}_{k-1}(u_{k-1}, u). \end{align*} We also define a rational function by \begin{align*} \Upsilon(u_1,\ldots,u_n) :=\overline{\psi}_1(u_1)\cdots \overline{\psi}_{n-1}(u_1,\ldots,u_{n-1})\overline{\psi}_n(u_1,\ldots,u_{n}). \end{align*} Then it is easy to see that the analogue of Theorem \ref{main-theorem11112} is true. Thus, we get a one-parameter family of the fusion procedures for cyclotomic Hecke algebras, generalizing the results obtained in [OgPA2]. \end{remark} \section{Appendix. Fusion procedure for cyclotomic Nazarov-Wenzl algebras} When studying the representations of Brauer algebras, Nazarov [Na1] introduced a class of infinite dimensional algebras under the name affine Wenzl algebras. In order to study finite dimensional irreducible representations of affine Wenzl algebras, Ariki, Mathas and Rui [AMR] defined the finite dimensional quotients of them, known as the cyclotomic Nazarov-Wenzl algebras. Cyclotomic Nazarov-Wenzl algebras are related to degenerate cyclotomic Hecke algebras just in the same way that cyclotomic BMW algebras are connected with cyclotomic Hecke algebras. Cyclotomic Nazarov-Wenzl algebras have been studied by many authors; see [Go3-4, RuSi1-2, Xu] and so on. 
\subsection{Cyclotomic Nazarov-Wenzl algebras} \begin{definition} Suppose that $\mathbb{K}$ is an algebraically closed field containing $\omega_{j}$ ($0\leq j\leq d-1$), $v_i$ ($1\leq i\leq d$), and the invertible element $2.$\vskip2mm Fix $n\geq 1.$ The cyclotomic Nazarov-Wenzl algebra $\mathscr{W}_{d, n}$ is the $\mathbb{K}$-algebra generated by the elements $S_{i}, E_{i}$ ($1\leq i\leq n-1$) and $X_{j}$ ($1\leq j\leq n$) satisfying the following relations:\vskip2mm (1) (Involutions) $S_{i}^{2}=1$ for $1\leq i\leq n-1.$ (2) (Idempotent relations) $E_{i}^{2}=\omega_{0} E_{i}$ for $1\leq i\leq n-1.$ (3) (Affine braid relations) \hspace{0.7cm}(a) $S_{i}S_{i+1}S_{i}=S_{i+1}S_{i}S_{i+1}$ and $S_{i}S_{j}=S_{j}S_{i}$ if $|i-j|\geq 2.$ \hspace{0.7cm}(b) $S_{i}X_{j}=X_{j}S_{i}$ if $j\neq i, i+1.$ (4) (Tangle relations) \hspace{0.7cm}(a) $E_{i}E_{i\pm 1}E_{i}=E_{i}.$ \hspace{0.7cm}(b) $S_{i}S_{i\pm 1}E_{i}=E_{i\pm 1}E_{i}$ and $E_{i}S_{i\pm 1}S_{i}=E_{i}E_{i\pm 1}.$ \hspace{0.7cm}(c) For $1\leq k\leq d-1,$ $E_{1}X_{1}^{k}E_{1}=\omega_{k}E_{1}.$ (5) (Untwisting relations) $S_{i}E_{i}=E_{i}S_{i}=E_{i}$ for $1\leq i\leq n-1.$ (6) (Skein relations) $S_{i}X_{i}-X_{i+1}S_{i}=E_{i}-1$ for $1\leq i\leq n-1.$ (7) (Anti-symmetry relations) $E_{i}(X_{i}+X_{i+1})=(X_{i}+X_{i+1})E_{i}=0$ for $1\leq i\leq n-1.$ (8) (Commutative relations) \hspace{0.7cm}(a) $S_{i}E_{j}=E_{j}S_{i}$ and $E_{i}E_{j}=E_{j}E_{i}$ if $|i-j|\geq 2.$ \hspace{0.7cm}(b) $E_{i}X_{j}=X_{j}E_{i}$ if $j\neq i, i+1.$ \hspace{0.7cm}(c) $X_{i}X_{j}=X_{j}X_{i}$ for $1\leq i,j \leq n.$ (9) (Cyclotomic relation) $(X_1-v_1)(X_1-v_2)\cdots (X_1-v_d)=0.$ \end{definition} We define the following elements: \begin{equation}\label{Baxterized-elements11cde} S_{i}(u,v)=S_{i}+\frac{1}{v-u}-\frac{1}{v-u+\frac{\omega_{0}}{2}-1}E_{i}\quad\mbox{for}~1\leq i\leq n-1. \end{equation} By using the fact that $E_{i}^{2}=\omega_{0}E_{i}$, we can easily get \begin{equation}\label{Baxterized-elements111cde} S_{i}(u,v)S_{i}(v,u)=g(u,v)\quad\mbox{for}~1\leq i\leq n-1, \end{equation} where \begin{equation}\label{Baxterized-elements1111cde} g(u,v)=g(v,u)=\frac{(u-v+1)(u-v-1)}{(u-v)^{2}}. \end{equation} \subsection{Combinatorics} Suppose that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+}$ and $\mathfrak{s}=(\mathfrak{s}_{1},\ldots,\mathfrak{s}_{n})\in \mathscr{T}_{n}^{ud}(\bm{\lambda}).$ We can define the integers $d_{k}^{s},$ $\overline{d}_{k}^{s},$ $g_{k}^{s},$ $\overline{g}_{k}^{s}$ and some integers $p_{1},\ldots,p_{n}$ associated to $\mathfrak{s}$ in exactly the same way as those related to some $\mathcal{T}$ defined in Subsection 2.2. We shall follow the notations and only emphasize the differences. Set \begin{align}\label{symme-formscde} \mathrm{c}(\mathfrak{s}|k)= \begin{cases} v_{s}+j-i & \text{if } \mathfrak{s}_{k}=\mathfrak{s}_{k-1}\cup ((i,j),s), \\ -v_{s}+i-j & \text{if } \mathfrak{s}_{k-1}=\mathfrak{s}_{k}\cup ((i,j),s). \end{cases} \end{align} Given a box $\bm{\beta}=((i,j),s),$ we define the content of it by \begin{align}\label{symme-forms11113344cde} \mathrm{c}(\mathcal{U}|\bm{\beta})= \begin{cases} v_{s}+j-i & \text{if }\bm{\beta}\text{ is an addable box of }\mathfrak{s}, \\ -v_{s}+i-j & \text{if }\bm{\beta}\text{ is a removable box of }\mathfrak{s}. 
\end{cases} \end{align} Assume that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+},$ $\mathfrak{t}=(\mathfrak{t}_{1},\ldots,\mathfrak{t}_{n})$ is an $n$-updown $\bm{\lambda}$-tableau and that $\mathfrak{u}=(\mathfrak{t}_{1},\ldots,\mathfrak{t}_{n-1}).$ We then define the element $g(\mathfrak{t})$ inductively by \begin{equation}\label{hooklength-indexbar11cde} g(\mathfrak{t})=g(\mathfrak{u})\psi(\mathfrak{u}, \mathfrak{t}), \end{equation} where \begin{equation*} \psi(\mathfrak{u}, \mathfrak{t})=\prod_{\substack{k\neq k_{n}\\k\in \mathbb{Z}}}(k_{n}-k)^{g_{k}^{s_{n}}}\prod_{\substack{1\leq t\leq d; t\neq s_{n}\\k\in \mathbb{Z}}}\hspace{-2mm}(v_{s_{n}}-v_{t}+k_{n}-k)^{g_{k}^{t}}\prod_{\substack{1\leq r\leq d\\k\in \mathbb{Z}}}(v_{s_{n}}+v_{r}+k_{n}+k)^{\overline{g}_{k}^{r}} \end{equation*} if $\mathfrak{t}_{n}$ is obtained from $\mathfrak{t}_{n-1}$ by adding a box $((i_n,j_n),s_n)$, where $k_n=j_n-i_n;$ \begin{equation*} \psi(\mathfrak{u}, \mathfrak{t})=\prod_{\substack{k\neq k_{n}'\\k\in \mathbb{Z}}}(-k_{n}'+k)^{\overline{g}_{k}^{s_{n}'}}\prod_{\substack{1\leq t\leq d; t\neq s_{n}'\\k\in \mathbb{Z}}}(-v_{s_{n}'}+v_{t}-k_{n}'+k)^{\overline{g}_{k}^{t}}\prod_{\substack{1\leq r\leq d\\k\in \mathbb{Z}}}(-v_{s_{n}'}-v_{r}-k_{n}'-k)^{g_{k}^{r}} \end{equation*} if $\mathfrak{t}_{n}$ is obtained from $\mathfrak{t}_{n-1}$ by removing a box $((i_{n}',j_{n}'),s_{n}')$, where $k_{n}'=j_{n}'-i_{n}'.$ The following proposition is inspired by [IM, Proposition 3.3] and can be proved similarly. \begin{proposition}\label{special-propocde} If $\bm{\lambda}$ is a $d$-partition of $n$ and $\mathfrak{t}=(\mathfrak{t}_{1},\ldots,\mathfrak{t}_{n})$ is an $n$-updown $\bm{\lambda}$-tableau, then $p_1,\ldots,p_{n}$ are all equal to zero, and $g(\mathfrak{t})$ is exactly equal to $\Theta_{\bm{\lambda}}(Q)^{-1}$ defined in $[\emph{ZL}, (3.2)]$ when $d=m$ and $v_{s}=q_{s}$ for $1\leq s\leq m.$ \end{proposition} \subsection{Idempotents of $\mathscr{W}_{d, n}$} Following [AMR, Definition 4.3], we say that $\mathscr{W}_{d, n}$ is generic if the parameters $v_i$, $1\leq i\leq d$, satisfy the conditions (1) the characteristic $p$ of $\mathbb{K}$ satisfies $p=0$ or $p> 2n;$ (2) $|r|\geq 2n$ whenever there exists $r\in \mathbb{Z}$ such that either $v_{i}\pm v_{j}=r$ and $i\neq j,$ or $2v_{i}=r.$ Following [Go3, Definition 4.2], we say that $\mathscr{W}_{d, n}$ is admissible if the set $\{E_{1}, E_{1}X_{1},\ldots,E_{1}X_{1}^{d-1}\}$ is linearly independent in $\mathscr{W}_{d, 2}.$ It has been proved by Goodman [Go3, Theorem 5.2] that this admissible condition coincides with the $\bm{\mathrm{u}}$-admissible condition defined in [AMR, Definition 3.6]. From now on, we always assume that $\mathscr{W}_{d, n}$ is generic and admissible. Thus, by [AMR, Lemma 4.4], we have $\mathfrak{s}=\mathfrak{t}$ if and only if $\mathrm{c}(\mathfrak{s}|k)=\mathrm{c}(\mathfrak{t}|k)$ for all $1\leq k\leq n.$ Therefore, the set $\{X_1,\ldots,X_n\},$ as a family of JM-elements for $\mathscr{W}_{d, n}$ in the abstract sense defined in [Ma, Definition 2.4], satisfies the separation condition associated to the cellular basis of $\mathscr{W}_{d, n}$ constructed in [AMR, Theorem 7.17]. In particular, we can construct the primitive idempotents of $\mathscr{W}_{d, n}$ following the arguments in [Ma, Section 3].
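To illustrate this separation of updown tableaux by contents in the simplest (purely illustrative) case, take $d=1$, $n=3$ and the updown tableaux $\mathfrak{s}=((1),(2),(1))$ and $\mathfrak{t}=((1),\emptyset,(1))$, where we again identify a $1$-partition with its unique component. By \eqref{symme-formscde}, \[ \mathrm{c}(\mathfrak{s}|2)=v_{1}+1, \qquad \mathrm{c}(\mathfrak{t}|2)=-v_{1}, \] and these values differ since $\mathscr{W}_{d, n}$ is generic; thus the contents already distinguish $\mathfrak{s}$ from $\mathfrak{t}$, in accordance with [AMR, Lemma 4.4].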
For each $1\leq k\leq n,$ we define the following set: \[\mathscr{R}(k) :=\{\mathrm{c}(\mathfrak{s}|k)\:|\:\mathfrak{s}\in \mathscr{T}_{n}^{ud}(\bm{\lambda}) \text{ for some }(f, \bm{\lambda})\in \Lambda_{d,n}^{+}\}.\] Suppose that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+}$ and $\mathfrak{t}\in \mathscr{T}_{n}^{ud}(\bm{\lambda}).$ We set \begin{equation}\label{hooklength-idempotentelement11cde} E_{\mathfrak{t}}=\prod_{k=1}^{n}\bigg(\prod_{\substack{a\in \mathscr{R}(k)\\a\neq \mathrm{c}(\mathfrak{t}|k)}}\frac{X_{k}-a}{\mathrm{c}(\mathfrak{t}|k)-a} \bigg). \end{equation} By standard arguments in [Ma, Section 3], the elements $\{E_{\mathfrak{t}}\:|\:\mathfrak{t}\in \mathscr{T}_{n}^{ud}(\bm{\lambda}) \text{ for some }(f, \bm{\lambda})\in \Lambda_{d,n}^{+}\}$ form a complete set of pairwise orthogonal primitive idempotents of $\mathscr{W}_{d, n}.$ Moreover, the elements $X_1,\ldots,X_n$ generate a maximal commutative subalgebra of $\mathscr{W}_{d, n}.$ We also have \begin{equation}\label{hooklength-idempotentelement1111cde} X_{k}E_{\mathfrak{t}}=E_{\mathfrak{t}}X_{k}=\mathrm{c}(\mathfrak{t}|k)E_{\mathfrak{t}}. \end{equation} \subsection{Fusion procedure for cyclotomic Nazarov-Wenzl algebras} Assume that $(f, \bm{\lambda})\in \Lambda_{d,n}^{+}$ and that $\mathfrak{t}=(\mathfrak{t}_{1},\ldots,\mathfrak{t}_{n})$ is an $n$-updown $\bm{\lambda}$-tableau. Set $\bm{\mu}=\mathfrak{t}_{n-1}$ and $\mathfrak{u}=(\mathfrak{t}_{1},\ldots,\mathfrak{t}_{n-1})$ as an updown $\bm{\mu}$-tableau. Let $\bm{\theta}$ be the box that is addable to or removable from $\bm{\mu}$ to get $\bm{\lambda}.$ For simplicity, we set $\mathrm{c}_{k} :=\mathrm{c}(\mathfrak{t}|k).$ By \eqref{hooklength-idempotentelement11cde}, we can rewrite $E_{\mathfrak{t}}$ inductively as follows: \begin{equation}\label{idempotentele-induccde} E_{\mathfrak{t}}=E_{\mathfrak{u}}\frac{(X_{n}-a_1)\cdots (X_{n}-a_k)}{(\mathrm{c}_{n}-a_1)\cdots (\mathrm{c}_{n}-a_k)}, \end{equation} where $a_1,\ldots,a_k$ are the contents of all boxes, other than $\bm{\theta}$, that can be added to or removed from $\bm{\mu}$ to get a $d$-partition. We denote by $\{\Delta_{1},\ldots,\Delta_{e}\}$ the set of all $d$-partitions obtained from $\bm{\mu}$ by adding a box or removing one. Set $\mathscr{S}_{j} :=(\mathfrak{t}_{1},\ldots,\mathfrak{t}_{n-1},\Delta_{j})$ for $1\leq j\leq e.$ Note that $\mathfrak{t}\in \{\mathscr{S}_{1},\ldots,\mathscr{S}_{e}\}.$ Since $\mathscr{W}_{d, n}$ is generic, it is semisimple. By [AMR, Theorem 5.3 a)] we have \begin{equation}\label{sum-formula11cde} E_{\mathfrak{u}}=\sum_{j=1}^{e}E_{\mathscr{S}_{j}}. \end{equation} The equality \eqref{hooklength-idempotentelement1111cde} implies that the following rational function \begin{equation}\label{rational-function11cde} E_{\mathfrak{u}}\frac{u-\text{c}_n}{u-X_{n}} \end{equation} is regular at $u=\text{c}_n,$ and by \eqref{sum-formula11cde}, we get \begin{equation}\label{sum-function1111cde} E_{\mathfrak{u}}\frac{u-\text{c}_n}{u-X_{n}}\Big|_{u=\text{c}_n}=E_{\mathfrak{t}}. \end{equation} For $1\leq i\leq n-1,$ we set \begin{align}\label{Q-function41cde} R_{i}(u,v;c) :=S_{i}+\frac{1}{u+v+c}-\frac{1}{u+v}E_{i}. \end{align} Let $\varphi_{1}(u) :=\frac{u+X_{1}+c}{u-X_1}.$ For $k=2,\ldots,n$, we set \begin{align}\label{phi-function42cde} \varphi_k(u_1,\ldots,u_{k-1},u)& :=R_{k-1}(u_{k-1},u;c)\varphi_{k-1}(u_1,\ldots,u_{k-2},u)S_{k-1}(u_{k-1}, u)\notag\\ =R_{k-1}&(u_{k-1},u;c)\cdots R_{1}(u_{1},u;c)\varphi_{1}(u)S_{1}(u_{1}, u)\cdots S_{k-1}(u_{k-1}, u).
\end{align} From now on, we always set $c :=1-\frac{\omega_{0}}{2}.$ The following lemma is inspired by [IMOg2, Lemma 1] and can be proved similarly. \begin{lemma}\label{phi-phi-phi111cde} Assume that $n\geq 1.$ We have \begin{align}\label{F-PhiEu43cde} E_{\mathfrak{u}}\varphi_n(\mathrm{c}_1,\ldots,\mathrm{c}_{n-1},u)\prod_{r=1}^{n-1}g(u, \mathrm{c}_{r})^{-1}=E_{\mathfrak{u}}\frac{u+X_{n}+c}{u-X_n}. \end{align} \end{lemma} \begin{proof} We shall prove \eqref{F-PhiEu43cde} by induction on $n.$ For $n=1,$ the situation is trivial. We set \begin{align}\label{phi-function421cde} \varphi'_n(\mathrm{c}_1,&\ldots,\mathrm{c}_{n-1},u)\notag\\ &=R_{n-1}(\mathrm{c}_{n-1},u;c)\cdots R_{1}(\mathrm{c}_{1},u;c)\varphi_{1}(u)S_{1}(u, \mathrm{c}_{1})^{-1}\cdots S_{n-1}(u, \mathrm{c}_{n-1})^{-1}. \end{align} By \eqref{Baxterized-elements111cde} and \eqref{phi-function421cde}, in order to show \eqref{F-PhiEu43cde}, it suffices to prove that \begin{align}\label{F-PhiEu4321cde} E_{\mathfrak{u}}\varphi'_n(\mathrm{c}_1,\ldots,\mathrm{c}_{n-1},u)=E_{\mathfrak{u}}\frac{u+X_{n}+c}{u-X_n}. \end{align} By the induction hypothesis, it boils down to proving the following equality: \begin{align}\label{EUEU-PhiEu5cde} E_{\mathfrak{u}}R_{n-1}(\mathrm{c}_{n-1},u;c)\frac{u+X_{n-1}+c}{u-X_{n-1}}S_{n-1}(u, \mathrm{c}_{n-1})^{-1}=E_{\mathfrak{u}}\frac{u+X_{n}+c}{u-X_n}. \end{align} Since $X_{n}$ commutes with $E_{\mathfrak{u}},$ we can rewrite \eqref{EUEU-PhiEu5cde} as follows: \begin{align}\label{EUEU-PhiEu6cde} E_{\mathfrak{u}}(u-X_n)R_{n-1}(&\mathrm{c}_{n-1},u;c)(u+X_{n-1}+c)\notag\\ &=E_{\mathfrak{u}}(u+X_{n}+c)S_{n-1}(u, \mathrm{c}_{n-1})(u-X_{n-1}). \end{align} By \eqref{Baxterized-elements11cde} and \eqref{Q-function41cde}, the equality \eqref{EUEU-PhiEu6cde} becomes \begin{align}\label{EUEU-PhiEu7cde} E_{\mathfrak{u}}&(u-X_n)\Big(S_{n-1}+\frac{1}{\mathrm{c}_{n-1}+u+c}-\frac{1}{\mathrm{c}_{n-1}+u}E_{n-1}\Big)(u+X_{n-1}+c)\notag\\ &=E_{\mathfrak{u}}(u+X_{n}+c)\Big(S_{n-1}+\frac{1}{\mathrm{c}_{n-1}-u}-\frac{1}{\mathrm{c}_{n-1}-u+\frac{\omega_{0}}{2}-1}E_{n-1}\Big)(u-X_{n-1}). \end{align} By definition, we have $S_{n-1}X_{n-1}=X_{n}S_{n-1}+E_{n-1}-1.$ Thus, we get that \eqref{EUEU-PhiEu7cde} is equivalent to \begin{align}\label{EUEU-PhiEu8cde} E_{\mathfrak{u}}(u&-X_n)\Big(uS_{n-1}+(X_{n}S_{n-1}+E_{n-1}-1)+cS_{n-1}+1\notag\\ &\hspace{2cm}-\frac{1}{\mathrm{c}_{n-1}+u}E_{n-1}(u+X_{n-1}+c)\Big)\notag\\ &=E_{\mathfrak{u}}(u+X_{n}+c)\Big(uS_{n-1}-(X_{n}S_{n-1}+E_{n-1}-1)-1\notag\\ &\hspace{2cm}-\frac{1}{\mathrm{c}_{n-1}-u+\frac{\omega_{0}}{2}-1}E_{n-1}(u-X_{n-1})\Big). \end{align} It is easy to see that the equality \eqref{EUEU-PhiEu8cde} comes down to the following equality: \begin{align}\label{EUEU-PhiEu9cde} (c+2u)&E_{\mathfrak{u}}-E_{\mathfrak{u}}(u-X_n)\frac{1}{\mathrm{c}_{n-1}+u}E_{n-1}(u+X_{n-1}+c)\notag\\ &=-E_{\mathfrak{u}}(u+X_{n}+c)\frac{1}{\mathrm{c}_{n-1}-u+\frac{\omega_{0}}{2}-1}E_{n-1}(u-X_{n-1}). \end{align} By definition, we have $E_{\mathfrak{u}}X_{n-1}=\mathrm{c}_{n-1}E_{\mathfrak{u}}.$ Hence, we get $E_{\mathfrak{u}}X_{n}E_{n-1}=-\mathrm{c}_{n-1}E_{\mathfrak{u}}E_{n-1}$ by definition. According to this, by comparing the coefficients of the terms involving $E_{\mathfrak{u}}E_{n-1}X_{n-1}$, we see that it suffices to show that \begin{align}\label{EUEU-PhiEu11cde} \frac{-u-\mathrm{c}_{n-1}}{\mathrm{c}_{n-1}+u}=\frac{u-\mathrm{c}_{n-1}+c}{\mathrm{c}_{n-1}-u+\frac{\omega_{0}}{2}-1}. 
\end{align} By comparing the coefficients of the terms involving $E_{\mathfrak{u}}E_{n-1}$, it suffices to show that \begin{align}\label{EUEU-PhiEu12cde} (c+2u)+\frac{-(c+u)(\mathrm{c}_{n-1}+u)}{\mathrm{c}_{n-1}+u}=\frac{u(-u+\mathrm{c}_{n-1}-c)}{\mathrm{c}_{n-1}-u+\frac{\omega_{0}}{2}-1}. \end{align} Noting that $c=1-\frac{\omega_{0}}{2},$ it is easy to verify that \eqref{EUEU-PhiEu11cde} and \eqref{EUEU-PhiEu12cde} are true. Thus, \eqref{EUEU-PhiEu9cde} holds. The lemma is proved. \end{proof} Let $\overline{\varphi}_{1}(u) :=(u-v_{1})\cdots (u-v_{d})\frac{u+X_{1}+c}{u-X_1}.$ For $k=2,\ldots,n$, we set \begin{align}\label{phi-function424242cde} \overline{\varphi}_k(u_1,\ldots,u_{k-1},u)& :=R_{k-1}(u_{k-1},u;c)\overline{\varphi}_{k-1}(u_1,\ldots,u_{k-2},u)S_{k-1}(u_{k-1}, u)\notag\\ =R_{k-1}&(u_{k-1},u;c)\cdots R_{1}(u_{1},u;c)\overline{\varphi}_{1}(u)S_{1}(u_{1}, u)\cdots S_{k-1}(u_{k-1}, u). \end{align} We also define the following rational function: \begin{align}\label{Phi-function111cde} \Psi(u_1,\ldots,u_n) :=\overline{\varphi}_1(u_1)\cdots \overline{\varphi}_{n-1}(u_1,\ldots,u_{n-1})\overline{\varphi}_n(u_1,\ldots,u_{n}). \end{align} Recall that the integers $p_{1},\ldots,p_{n}$ associated to $\mathfrak{t}$ have been defined as in \eqref{integer-indexbar} or \eqref{integer-indexbar11}. Now we can state the main result of this paper. \begin{theorem}\label{main-theorem11112cde} The idempotent $E_{\mathfrak{t}}$ of $\mathscr{W}_{d, n}$ corresponding to an $n$-updown $\bm{\lambda}$-tableau $\mathfrak{t}$ can be derived by the following consecutive evaluations$:$ \begin{equation}\label{idempotents111cde} E_{\mathfrak{t}}=\frac{1}{g(\mathfrak{t})}\Big(\prod_{k=1}^{n}\frac{(u_{k}-\mathrm{c}_{k})^{p_{k}}}{u_{k}+\mathrm{c}_{k}+c}\Big) \Psi(u_1,\ldots,u_n)\Big|_{u_{1}=\emph{c}_1}\cdots\Big|_{u_{n}=\emph{c}_{n}}. \end{equation} \end{theorem} \begin{proof} We shall prove the theorem by induction on $n.$ For $n=1,$ we have $p_{1}=0$ by Proposition \ref{special-propocde}. Thus, we get that the right-hand side of \eqref{idempotents111cde} is equal to \begin{align}\label{n-1-istruecde} \frac{1}{g(\mathfrak{t})}&\frac{(u_{1}-v_{1})\cdots (u_{1}-v_{d})}{u_{1}+\mathrm{c}_{1}+c} \frac{u_{1}+X_{1}+c}{u_{1}-X_1}\Big|_{u_{1}=\mathrm{c}_1}\notag\\ &=\frac{1}{g(\mathfrak{t})}\frac{(u_{1}-v_{1})\cdots (u_{1}-v_{d})}{u_{1}-\mathrm{c}_{1}}\frac{u_{1}-\mathrm{c}_{1}}{u_{1}+\mathrm{c}_{1}+c}\frac{u_{1}+X_{1}+c}{u_{1}-X_1}\Big|_{u_{1}=\mathrm{c}_1}. \end{align} Moreover, by \eqref{hooklength-indexbar11cde}, we have \[g(\mathfrak{t})=\prod_{1\leq k\leq d;v_{k}\neq \mathrm{c}_{1}}(\mathrm{c}_{1}-v_{k}).\] Therefore, it is easy to see that \eqref{n-1-istruecde} is equal to $E_{\mathfrak{t}}$ by \eqref{hooklength-idempotentelement1111cde} and \eqref{sum-function1111cde}. For $n\geq 2,$ by the induction hypothesis we can write the right-hand side of \eqref{idempotents111cde} as follows: \begin{align}\label{n-1-istrue2cde} \frac{g(\mathfrak{u})}{g(\mathfrak{t})}\frac{(u_{n}-\mathrm{c}_{n})^{p_{n}}}{u_{n}+\mathrm{c}_{n}+c}E_{\mathfrak{u}} \overline{\varphi}_n(\mathrm{c}_{1},\ldots,\mathrm{c}_{n-1},u_n)\Big|_{u_{n}=\mathrm{c}_{n}}. 
\end{align} Note that $\overline{\varphi}_n(\mathrm{c}_{1},\ldots,\mathrm{c}_{n-1},u_n)=(u_{n}-v_{1})\cdots (u_{n}-v_{d})\varphi_n(\mathrm{c}_{1},\ldots,\mathrm{c}_{n-1},u_n).$ By \eqref{F-PhiEu43cde}, we can rewrite the expression \eqref{n-1-istrue2cde} as \begin{align}\label{n-1-istrue3cde} \frac{g(\mathfrak{u})}{g(\mathfrak{t})}\frac{(u_{n}-\mathrm{c}_{n})^{p_{n}}}{u_{n}+\mathrm{c}_{n}+c}(u_{n}-v_{1})\cdots (u_{n}-v_{d})\prod_{r=1}^{n-1}g(u_{n}, \mathrm{c}_{r})E_{\mathfrak{u}}\frac{u_{n}+X_{n}+c}{u_{n}-X_n}\Big|_{u_{n}=\mathrm{c}_{n}}. \end{align} By \eqref{hooklength-indexbar11cde}, we see that \begin{align*} \frac{g(\mathfrak{u})}{g(\mathfrak{t})}(u_{n}&-v_{1})\cdots (u_{n}-v_{d})\prod_{r=1}^{n-1}g(u_{n}, \mathrm{c}_{r})(u_{n}-\mathrm{c}_{n})^{p_{n}-1}\notag\\ &=\frac{g(\mathfrak{u})}{g(\mathfrak{t})}(u_{n}-v_{1})\cdots (u_{n}-v_{d})\prod_{r=1}^{n-1}\frac{(u_{n}-\mathrm{c}_{r}+1)(u_{n}-\mathrm{c}_{r}-1)}{(u_{n}-\mathrm{c}_{r})^{2}}(u_{n}-\mathrm{c}_{n})^{p_{n}-1} \end{align*} is regular at $u_n=\mathrm{c}_{n}$ and is equal to $1.$ Thus, the expression \eqref{n-1-istrue3cde} equals \begin{align}\label{n-1-istrue4cde} E_{\mathfrak{u}}\frac{u_{n}-\mathrm{c}_{n}}{u_{n}-X_n}\frac{u_{n}+X_{n}+c}{u_{n}+\mathrm{c}_{n}+c}\Big|_{u_{n}=\mathrm{c}_{n}}. \end{align} By \eqref{sum-function1111cde}, we see that \eqref{n-1-istrue4cde} is equal to \begin{align}\label{n-1-istrue5cde} E_{\mathfrak{t}}\frac{u_{n}+X_{n}+c}{u_{n}+\mathrm{c}_{n}+c}\Big|_{u_{n}=\mathrm{c}_{n}}. \end{align} By \eqref{hooklength-idempotentelement1111cde}, we have $E_{\mathfrak{t}}X_{n}=\mathrm{c}_{n}E_{\mathfrak{t}}.$ Thus, we get that the expression \eqref{n-1-istrue5cde}, that is, the right-hand side of \eqref{idempotents111cde} equals $E_{\mathfrak{t}}.$ \end{proof} \begin{remark}\label{remark111cde} Let $\mathscr{D}_{d, n}$ be the degenerate cyclotomic Hecke algebra. It has been proved in [AMR, Proposition 7.2] that $\mathscr{D}_{d, n}$ is isomorphic to the quotient of $\mathscr{W}_{d, n}$ by the two-sided ideal generated by all $E_{i}.$ In the process of taking quotient, the parameter $\omega_{0}$ disappears; however, the parameter $c$ is reserved and can be arbitrary. If we replace the $S_{i}(u, v),$ $R_{i}(u,v;c),$ $\varphi_{1}(u)$ in \eqref{phi-function42cde} with \begin{align*} \overline{S}_{i}(u,v)=S_{i}+\frac{1}{v-u},\quad \overline{R}_{i}(u,v;c) :=S_{i}+\frac{1}{u+v+c},\quad \chi_{1}(u) :=\frac{u+X_{1}+c}{u-X_1}, \end{align*} it is easy to see that the analogue of Lemma \ref{phi-phi-phi111cde} holds. Let $\overline{\chi}_{1}(u) :=(u-v_{1})\cdots (u-v_{d})\frac{u+X_{1}+c}{u-X_1},$ and for $k=2,\ldots,n$, set \begin{align*} \overline{\chi}_k(u_1,\ldots,u_{k-1},u)& :=\overline{R}_{k-1}(u_{k-1},u;c)\overline{\chi}_{k-1}(u_1,\ldots,u_{k-2},u)\overline{S}_{k-1}(u_{k-1}, u). \end{align*} We also define a rational function by \begin{align*} \Omega(u_1,\ldots,u_n) :=\overline{\chi}_1(u_1)\cdots \overline{\chi}_{n-1}(u_1,\ldots,u_{n-1})\overline{\chi}_n(u_1,\ldots,u_{n}). \end{align*} Then it is easy to see that the analogue of Theorem \ref{main-theorem11112cde} is true. Thus, we get a one-parameter family of the fusion procedures for degenerate cyclotomic Hecke algebras, generalizing the results obtained in [ZL]. \end{remark} \noindent{\bf Acknowledgements.} The author is deeply indebted to Dr. Shoumin Liu for posing the question about fusion procedures for cyclotomic Nazarov-Wenzl algebras to him. \end{document}
\begin{document} \begin{abstract} We show that the finite dimensional nilpotent complex Lie superalgebras $\mathfrak{g}$ whose injective hulls of simple $U(\mathfrak{g})$-modules are locally Artinian are precisely those whose even part $\mathfrak{g}_0$ is isomorphic to a nilpotent Lie algebra with an abelian ideal of codimension $1$ or to a direct product of an abelian Lie algebra and a certain $5$-dimensional or a certain $6$-dimensional nilpotent Lie algebra. \end{abstract} \maketitle \section{Introduction} Injective modules are the building blocks in the theory of Noetherian rings. Matlis showed that any indecomposable injective module over a commutative Noetherian ring is isomorphic to the injective hull $E(R/P)$ of some prime ideal $P$ of $R$. He also showed that any injective hull of a simple module is Artinian (see \cite{Matlis} and \cite[Proposition 3]{Matlis_DCC}). In connection with the Jacobson Conjecture for Noetherian rings, Jategaonkar showed in \cite{Jategaonkar_conj} (see also \cite{Cauchon, Schelter}) that the injective hulls of simple modules are locally Artinian provided the ring $R$ is fully bounded Noetherian (FBN). This led him to answer the Jacobson Conjecture in the affirmative for FBN rings. Recall that a module is called {\it locally Artinian} if every finitely generated submodule of it is Artinian. After Jategaonkar's result the question arose whether the condition \begin{center}$(\diamond)\:\:$ Injective hulls of simple right $A$-modules are locally Artinian\end{center} was sufficient to prove an affirmative answer to the Jacobson Conjecture, which quickly turned out not to be the case. However, property $(\diamond)$ remained a subtle condition for Noetherian rings whose meaning is not yet fully understood. Property $(\diamond)$ says that all finitely generated essential extensions of simple right $A$-modules are Artinian. In case $A$ is right Noetherian, property $(\diamond)$ is equivalent to the condition that the class of semi-Artinian right $A$-modules, i.e. modules $M$ that are the union of their socle series, is closed under essential extensions. For algebras related to $U(\mathfrak{sl}_2)$ the condition has been examined in \cite{Dahlberg, injective_modules_over_down-up_algebras, PaulaIan, musson_classification}. One of the first examples of a Noetherian domain that does not satisfy $(\diamond)$ was found by Ian Musson in \cite{some-examples-of-modules-over-noetherian-rings-musson}, showing that whenever $\mathfrak{g}$ is a finite dimensional solvable non-nilpotent Lie algebra, then $U(\mathfrak{g})$ does not satisfy property $(\diamond)$. It is then natural to ask for which finite dimensional complex nilpotent Lie algebras $\mathfrak{g}$ the enveloping algebra $U(\mathfrak{g})$ satisfies $(\diamond)$. We will answer this question completely and will show that those Lie algebras are close to abelian Lie algebras. Slightly more generally, we prove our Main Theorem for Lie superalgebras: \begin{thm}\label{Main_Theorem} The following statements are equivalent for a finite dimensional nilpotent complex Lie superalgebra $\mathfrak{g}=\mathfrak{g}_0\oplus \mathfrak{g}_1$: \begin{enumerate} \item[(a)] Finitely generated essential extensions of simple $U(\mathfrak{g})$-modules are Artinian. \item[(b)] Finitely generated essential extensions of simple $U(\mathfrak{g}_0)$-modules are Artinian. \item[(c)] $\indx{\mathfrak{g}_0} \geq \dim(\mathfrak{g}_0)-2$, where $\indx{\mathfrak{g}_0}$ denotes the index of $\mathfrak{g}_0$.
\item[(d)] Up to a central abelian direct factor, $\mathfrak{g}_0$ is isomorphic \begin{enumerate} \item[(i)] to a nilpotent Lie algebra with an abelian ideal of codimension $1$; \item[(ii)] to the $5$-dimensional Lie algebra $\mathfrak{h}_5$ with basis $\{e_1,e_2,e_3,e_4,e_5\}$ and nonzero brackets given by $$[e_{1}, e_{2}] = e_{3},\ [e_{1}, e_{3}] = e_{4},\ [e_{2}, e_{3}] = e_{5}.$$ \item[(iii)] to the $6$-dimensional Lie algebra $\mathfrak{h}_6$ with basis $\{e_1,e_2,e_3,e_4,e_5,e_6\}$ and nonzero brackets given by $$[e_{1}, e_{2}] = e_{3},\ [e_{1}, e_{4}] = e_{5},\ [e_{2}, e_{4}] = e_{6}.$$ \end{enumerate} \end{enumerate} \end{thm} Together with Musson's solvable counterexample we have a characterisation of finite dimensional complex solvable Lie algebras $\mathfrak{g}$ whose enveloping algebra $U(\mathfrak{g})$ satisfies condition $(\diamond)$. \begin{cor} Let $\mathfrak{g}$ be a finite dimensional solvable complex Lie algebra. $U(\mathfrak{g})$ satisfies $(\diamond)$ if and only if $\mathfrak{g}$ is isomorphic, up to an abelian direct factor, to a Lie algebra with an abelian ideal of codimension $1$ or to $\mathfrak{h}_5$ or to $\mathfrak{h}_6$. \end{cor} The proof of the Main Theorem is organized in four steps. In the first step we show that Noetherian rings whose primitive ideals contain non-zero ideals with a normalizing set of generators satisfy $(\diamond)$ if all of their primitive factors satisfy property $(\diamond)$. In a second step we verify that ideals of the enveloping algebra $U(\mathfrak{g})$ of a finite dimensional nilpotent Lie superalgebra $\mathfrak{g}$ have a supercentralizing set of generators, which together with the first step shifts our problem to the study of primitive factors of $U(\mathfrak{g})$. In the third step we combine the description of primitive factors of $U(\mathfrak{g})$ given by A.Bell and I.Musson as tensor products of the form $\mathrm{Cliff}_q(\mathbb{C})\otimes A_p(\mathbb{C})$ with a result of T.Stafford that says that the only Weyl algebra $A_p(\mathbb{C})$ satisfying $(\diamond)$ is the first Weyl algebra. A result by E.Herscovich shows that the order $p$ of possible Weyl algebras appearing in the primitive factors of $U(\mathfrak{g})$ is determined by the index $\indx{\mathfrak{g}_0}$ of the underlying even part $\mathfrak{g}_0$ of $\mathfrak{g}$, which in our case imposes $\indx{\mathfrak{g}_0}\geq \dim(\mathfrak{g}_0)-2$. The last step lists all finite dimensional nilpotent Lie algebras $\mathfrak{g}$ with $\indx{\mathfrak{g}}\geq \dim(\mathfrak{g})-2$. \section{Noetherian rings with enough normal elements} The purpose of this section is to examine the influence that normal elements have on property $(\diamond)$. Recall that a module $M$ is a subdirect product of a family of modules $\{F_\lambda\}_\Lambda$ if there exists an embedding $\imath: M\rightarrow \prod_{\lambda \in \Lambda} F_\lambda$ into a product of the modules $F_\lambda$ such that for each projection $\pi_\mu:\prod F_\lambda \rightarrow F_\mu$ the composition $\pi_\mu\imath$ is surjective. Compare the next result with \cite[Theorem 1.1]{on-injective-hulls-of-simple-modules-hirano}. \begin{lem}\label{Diamond_subdirectproduct} A ring $R$ has property $(\diamond)$ if and only if every left $R$-module is a subdirect product of locally Artinian modules.
\end{lem} \begin{proof} A standard fact in module theory \cite[14.9]{wisbauer} says that every module is a subdirect product of factor modules that are essential extensions of a simple module\footnote{those modules occur in the literature under various names like {\it subdirectly irreducible}, {\it cocyclic}, {\it colocal} or {\it monolithic}}. Since property $(\diamond)$ is equivalent to subdirectly irreducible modules to be locally Artinian, the Lemma follows. \end{proof} A ring extension $R \subseteq S$ is said to be a \textit{finite normalizing extension} if there exists a finite set $\{a_{1}, \ldots a_{k}\}$ of elements of $S$ such that $S = \sum_{i=1}^{k}a_{i}R$ and $a_{i}R = Ra_{i}$, $\forall i=1, \ldots k$. The following is an adaption of Hirano's result {\cite[1.8]{on-injective-hulls-of-simple-modules-hirano}}: \begin{prop}\label{hirano_adapt} Let $S$ be a finite normalizing extension of a ring $R$. If $R$ satisfies $(\diamond)$ then so does $S$. \end{prop} \begin{proof} Let $M$ be a nonzero left $S$-module. By Lemma \ref{Diamond_subdirectproduct} there exists a family $\{N_\lambda \}$ of $R$-submodules of $M$ such that $M/N_\lambda$ is locally Artinian for all $\lambda$ and $\bigcap_\lambda N_\lambda = 0$. For any $R$-submodule $N$ of $M$ denote the largest $S$-submodule of $M$ contained in $N$ by $b(N)$ (called the bound of $N$ in \cite{noncommutative_noetherian_rings}). In fact, $b(N) = \cap_{i=1}^{k}a_{i}^{-1}N$, where $$a_{i}^{-1}N = \{m \in M \mid a_{i}m \in N\}.$$ Since $b(N_\lambda)\subseteq N_\lambda$, we certainly have $\bigcap_{\lambda} b(N_\lambda)=0$. By \cite[10.1.6]{noncommutative_noetherian_rings}, there is a lattice embedding of $R$-modules $\mathcal{L}(M/b(N_\lambda)) \longrightarrow \mathcal{L}(M/N_\lambda)$ which implies also that $b(N_\lambda)$ is locally Artinian. Hence $M$ is a subdirect product of locally Artinian $S$-modules. \end{proof} As a consequence we have the following. \begin{cor}\label{tensor_with_fdim_algebra} Let $C$ be a finite dimensional algebra and $A$ be any algebra. If $A$ satisfies $(\diamond)$ then $C \otimes A$ satisfies $(\diamond)$ too. \end{cor} \begin{proof} Let $\{x_{1}, \ldots, x_{n}\}$ be a basis of $C$. Then we have $C \otimes A = \sum_{i}^{n}(x_{i} \otimes 1)A$ where each $x_{i} \otimes 1$ is a normal element and so $C \otimes A$ is a finite normalizing extension of $A$ and hence it satisfies $(\diamond)$ by Proposition~\ref{hirano_adapt}. \end{proof} A set of elements $\{x_{1}, \ldots, x_{n}\}$ of a ring $R$ is called a \textit{normalizing (resp. centralizing) set} if for each $j = 0,\ldots,n-1$ the image of $x_{j+1}$ in $R/\sum_{i=1}^{j}x_{i}R$ is a normal (resp. central) element. McConnell showed in \cite{intersection_theorem_for_rings} that every ideal in the enveloping algebra of a finite dimensional nilpotent Lie algebra has a centralizing generator set. In the next section we will show a super version of his result. \begin{lem}\label{annihilator_of_q} Let $A$ be a Noetherian algebra, $E$ be a simple $A$-module and $E \leq M$ be an essential extension of left $A$-modules. Let $Q \subseteq\mathcal{A}nn{A}{E}$ be an ideal of $A$ that has a normalizing set of generators. Then $M$ is Artinian if and only if $M' = \mathcal{A}nn{M}{Q}$ is Artinian. \end{lem} \begin{proof} We proceed by induction on the number of elements of the generating set of $Q$. Suppose $Q = \langle x_{1}\rangle$ with $ x_{1} $ being a normal element. Define a map $ f: M \longrightarrow M $ by $ f(m) = x_{1}m $. 
This map is $Z(A)$-linear and preserves $A$-submodules of $M$ because if $U \leq M$ is an $A$-submodule of $M$, then $A\cdot f(U) = Ax_{1}U = x_{1}AU = x_{1}U = f(U)$ and so $ f(U) $ is an $A$-submodule of $M$. Since $Q$ is generated by a normal element, it satisfies the Artin-Rees property (see \cite[4.1.10]{noncommutative_noetherian_rings}) and so there exists a natural number $n > 0$ such that $ Q^{n}M = x_{1}^{n}M = 0 $. In other words $\Ker{f^{n}} = M$. Hence we have a finite filtration $$0 \subseteq \Ker{f} = \mathcal{A}nn{M}{Q} \subseteq \Ker{f^{2}} \subseteq \ldots \subseteq \Ker{f^{n-1}} \subseteq \Ker{f^{n}} = M$$ whose subfactors are $A/Q$-modules and $f$ induces a submodule-preserving chain of embeddings $$M/\Ker{f^{n-1}} \hookrightarrow \Ker{f^{n-1}} / \Ker{f^{n-2}} \hookrightarrow \ldots \hookrightarrow \Ker{f^{2}} / \Ker{f} \hookrightarrow \Ker{f}.$$ Hence $M$ is Artinian if and only if $M'=\Ker{f}=\mathcal{A}nn{M}{Q}$ is Artinian. Now let $n>0$ and suppose that the assertion holds for all Noetherian algebras and finitely generated essential extensions $E\subseteq M$ of simple left $A$-modules $E$ such that $\mathcal{A}nn{A}{E}$ contains an ideal $Q$ which has a normalizing set of generators with less than $n$ elements. Let $E\subseteq M$ be a finitely generated essential extension of a simple $A$-module such that $Q\subseteq \mathcal{A}nn{A}{E}$ has a normalizing set of generators $ \{x_{1}, \ldots, x_{n}\} $ of $n$ elements. Consider the submodule $M' = \mathcal{A}nn{M}{x_{1}}$. Since $x_{1}$ is a normal element, we can apply the same procedure to conclude that $M$ is Artinian if and only if $M'$ is Artinian. Let $A' = A / Ax_{1}$ and $ Q' = Q / Ax_{1} $. Then $Q'\subseteq \mathcal{A}nn{A'}{E}$ is generated by the set $ \{\overline{x_{2}}, \ldots , \overline{x_{n}}\} $ of normalizing elements, where $ \overline{x_{i}} $ is the image of $ x_{i} $ in $A'$ for $ i = 2, \ldots,n $. Now, $E \leq M'$ is an essential extension of $A'$-modules such that $Q'E = 0$. Since $Q'$ is generated by a normalizing set of $n-1$ elements, by the induction hypothesis we conclude that $M$ is Artinian if and only if $\mathcal{A}nn{M'}{Q'} = \mathcal{A}nn{M}{Q}$ is Artinian as an $A'$-module and hence also as an $A$-module. \end{proof} \begin{lem}\label{Lemma_pre_Conclusion} Suppose that $A$ is a Noetherian algebra such that every primitive ideal $P$ of $A$ contains an ideal $Q\subseteq P$ which has a normalizing set of generators and $A/Q$ satisfies $(\diamond)$. Then $A$ satisfies $(\diamond)$. \end{lem} \begin{proof} Let $E$ be a simple $A$-module, $P = \mathcal{A}nn{A}{E}$ and let $E \leq M$ be a finitely generated essential extension of $E$. Let $M' = \mathcal{A}nn{M}{Q}$, where $Q\subseteq P$ is an ideal that has a normalizing set of generators and with $A/Q$ satisfying $(\diamond)$. Then $E \leq M'$ is a finitely generated essential extension of $A/Q$-modules and so $M'$ is Artinian because $A/Q$ satisfies $ (\diamond) $. Since by Lemma~\ref{annihilator_of_q} $M'$ is Artinian if and only if $M$ is Artinian, it follows that $M$ is Artinian and $A$ satisfies $(\diamond)$. \end{proof} Recall that a {\bf superalgebra} is a $\mathbb{Z}_2$-graded algebra $A=A_0\oplus A_1$. We denote by $|a|$ the degree of a homogeneous element of $A$. When referring to graded ideals $I$ of $A$ we mean ideals $I=I_0 \oplus I_1$ that are graded with respect to the $\mathbb{Z}_2$-grading of $A$.
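For instance, the complex Clifford algebra $\mathrm{Cliff}_q(\mathbb{C})$, generated by elements $e_1,\ldots,e_q$ subject to the relations $e_i^2=1$ and $e_ie_j=-e_je_i$ for $i\neq j$, is a superalgebra whose even (resp. odd) part is spanned by the products $e_{i_1}\cdots e_{i_k}$ with $k$ even (resp. odd); Clifford algebras of this kind will reappear in the description of primitive factors below.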
Given any ideal $P$ of $A$ it is easy to see that $Q=P\cap \sigma(P)$ is a graded ideal where $\sigma$ denotes the automorphism : $$\sigma: A \rightarrow A \qquad a_0+a_1 \mapsto a_0-a_1 \qquad \forall a_0\in A_0, a_1\in A_1.$$ \begin{thm}\label{Conclusion} Let $A$ be a Noetherian superalgebra such that every primitive ideal is maximal and every graded primitive ideal is generated by a normalizing set of generators. Then $A$ satisfies property $(\diamond)$ if and only if every primitive factor of A does. \end{thm} \begin{proof} The part $(\Rightarrow)$ is clear since the property $(\diamond)$ is inherited by factor rings.\\ $(\Leftarrow)$ Suppose that every primitive factor of $A$ satisfies $(\diamond)$. Let $E$ be a simple $A$-module, $P = \mathcal{A}nn{A}{E}$, and let $E \leq M$ be an essential extension of $E$. $P$ is maximal by assumption. The ideal $Q=P\cap \sigma(P)$ is graded and has a normalizing set of generators by assumption. If $P$ is graded, then $P=Q$ and $A/Q$ satisfies $(\diamond)$ by hypothesis. If $P$ is not graded, then $P\neq \sigma(P)$ and as $P$ is maximal, $A=P + \sigma(P)$. Hence $A/Q\simeq A/P \times A/\sigma(P)$. Note that any left $A/Q$-module is $M$ can be written as a direct sum $M=M_1\oplus M_2$ of an $A/P$-module $M_1$ and an $A/\sigma(P)$-module $M_2$. Thus if $E'$ is a simple $A/Q$-module and $M'$ is a finitely generated essential extension of $E'$ as $A/Q$-module, $E'\subseteq M'$ is also a finitely generated essential extension of $A/P$- (resp. $A/\sigma(P)$-) modules. Since $A/P$ satisfies $(\diamond)$ and since $A/P\simeq A/\sigma(P)$, also $A/Q$ satisfies $(\diamond)$. By Lemma~\ref{Lemma_pre_Conclusion} we conclude that $A$ satisfies $(\diamond)$. \end{proof} \section{Ideals in enveloping algebras of nilpotent Lie superalgebras} McConnell showed in \cite{intersection_theorem_for_rings} that every ideal of the enveloping algebra of a finite dimensional nilpotent Lie algebra has a centralizing set of generators. We intend to prove an analogous result for superalgebras. The {\bf supercommutator} of two homogeneous elements $a,b$ is the element $$ \llbracket a,b \rrbracket \ := \ ab - (-1)^{|a||b|}ba$$ and is extended bilinearly to a form $\llbracket - , - \rrbracket : A \rightarrow A$. The {\bf supercenter} of $A$ is the set $\SZ{A} = \{a\in A \mid \forall b\in A: \llbracket a,b \rrbracket = 0\}$ and its elements are called {\bf supercentral}. Given a supercentral element $a\in A$, the ideal $I=Aa$ is a graded and $A/Aa$ is again a superalgebra. We say that a set of elements $\{x_{1}, \ldots, x_{n}\}$ of a superalgebra $A$ is a \textit{supercentralizing set} if for each $j = 0,\ldots,n-1$ the image of $x_{j+1}$ in $A/\sum_{i=1}^{j}x_{i}A$ is a supercentral element. A homogenous superderivation of a superalgebra $A$ is a linear map $f: A \longrightarrow A$ such that $$f(ab) = f(a)b + (-1)^{|a||b|}af(b)$$ for all homogeneous $a,b \in A$. The supercommutator $\llbracket a, - \rrbracket$ for a homogeneous element is a superderivation. In case $|a|=0$, $\llbracket a, - \rrbracket$ is a derivation of $A$. Let $\mathfrak{g}=\mathfrak{g}_0 \oplus \mathfrak{g}_1$ be a Lie superalgebra and choose a basis $\{x_1,\ldots, x_n\}$ of $\mathfrak{g}_0$ and a basis $\{y_1,\ldots, y_m\}$ of $\mathfrak{g}_1$. 
The PBW theorem for Lie superalgebras (see \cite{Behr}) says that the monomials $x_1^{\alpha_1}\cdots x_n^{\alpha_n} y_1^{\beta_1}\cdots y_m^{\beta_m}$ with $\alpha_i, \beta_j \in \mathbb{N}_{0}$ and $\beta_j\leq 1$ form a basis of the enveloping algebra $A=U(\mathfrak{g})$. For $i\in \{0,1\}$ let $$A_i=\mathrm{span} \{ x_1^{\alpha_1}\cdots x_n^{\alpha_n} y_1^{\beta_1}\cdots y_m^{\beta_m} \mid \beta_1+\cdots + \beta_m \equiv i \pmod{2} \}.$$ Then $A=A_0\oplus A_1$ is a superalgebra such that the degree of a homogeneous element of $\mathfrak{g}$ equals its degree in $A$. For any $x\in \mathfrak{g}$, the adjoint action of $x$ on $A$ is defined by $$\ad{x}: A \rightarrow A \qquad \ad{x}(a)=\llbracket x,a \rrbracket \:\:\forall a\in A.$$ By definition of the enveloping algebra we have for all $x,y \in \mathfrak{g}$: $$ \ad{x}(y) = \llbracket x,y \rrbracket \ =\ [x,y].$$ The following lemma follows from a direct computation, which we carry out for the convenience of the reader. \begin{lem}\label{Lemma1} For any $x,y \in \mathfrak{g}$ one has \begin{equation}\ad{x}\circ \ad{y} - (-1)^{|x||y|} \ad{y}\circ\ad{x} = \ad{[x,y]}.\end{equation} \end{lem} \begin{proof} Let $a$ be a homogeneous element of $A$, $x,y \in \mathfrak{g}$. \begin{eqnarray*} \lefteqn{\llbracket x, \llbracket y,a\rrbracket\rrbracket-(-1)^{|x||y|}\llbracket y, \llbracket x,a\rrbracket\rrbracket}\\ &=& x(ya-(-1)^{|y||a|}ay)-(-1)^{|x|(|y|+|a|)}(ya-(-1)^{|y||a|}ay)x \\ && -(-1)^{|x||y|}\{y(xa-(-1)^{|x||a|}ax)-(-1)^{|y|(|x|+|a|)}(xa-(-1)^{|x||a|}ax)y \}\\ &=& xya+(-1)^{|x||y|+|x||a|+|y||a|}ayx -(-1)^{|x||y|}yxa-(-1)^{|a||y|+|x||a|}axy\\ &=& [x,y]a-(-1)^{|a|(|x|+|y|)}a[x,y] = \llbracket [x,y], a\rrbracket \end{eqnarray*} \end{proof} \begin{cor}\label{corollary_lemma1} For any $x\in \mathfrak{g}_1$ one has $\ad{x}^2 = 0$. \end{cor} \begin{proof} Setting $y=x$ in Lemma \ref{Lemma1} gives $2\ad{x}^2 = \ad{[x,x]} = 0$. \end{proof} Recall that a map $f:A\rightarrow A$ is called locally nilpotent if for every $a\in A$ there exists a number $n(a)\geq 0$ such that $f^{n(a)}(a)=0$. \begin{prop}\label{all_inner_locally_nilpotent} Let $\mathfrak{g}$ be a finite dimensional nilpotent Lie superalgebra. Then $\ad{x}$ is a locally nilpotent superderivation of $A=U(\mathfrak{g})$ for every homogeneous element $x\in \mathfrak{g}$. \end{prop} \begin{proof} In case $x \in \mathfrak{g}_1$ is odd, we see from Corollary \ref{corollary_lemma1} that $\ad{x}$ is nilpotent. In case $x\in \mathfrak{g}_0$ is even, $\ad{x}=\llbracket x, - \rrbracket$ is an ordinary derivation of $A$. Let $r$ be the nilpotency degree of $\mathfrak{g}$, i.e. $\mathfrak{g}^{r}=0$. Then for any $a\in \mathfrak{g}$ we have $\ad{x}^r(a)=0$. Let $m\geq 0$. Suppose that for every monomial $a\in A$ of length $m$ there exists $n(a)\geq 0$ such that $\ad{x}^{n(a)}(a)=0$. Let $y\in \mathfrak{g}$. Then $$\ad{x}^{n(a)+r}(ay) = \sum_{i=1}^{n(a)+r} { {n(a)+r} \choose {i} } \ad{x}^i(a)\ad{x}^{n(a)+r-i}(y) = 0.$$ By induction, $\ad{x}$ is locally nilpotent on all basis elements of $A$. \end{proof} Given an $l$-tuple of superderivations $\partial=(\partial_1, \ldots, \partial_l)$ of a superalgebra $A$ we say that a subset $X$ of $A$ is $\partial$-stable if $\partial_i(X)\subseteq X$ for all $1\leq i \leq l$. Note that if all superderivations $\partial_i$ are inner, i.e. $\partial_i=\llbracket x_i,-\rrbracket$ for some homogeneous $x_i\in A$, then any ideal $I$ is $\partial$-stable.
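To illustrate Proposition \ref{all_inner_locally_nilpotent}, consider the $3$-dimensional Heisenberg Lie algebra $\mathfrak{g}$ (viewed as a Lie superalgebra with $\mathfrak{g}_1=0$) with basis $\{x,y,z\}$ and only non-zero bracket $[x,y]=z$. In $A=U(\mathfrak{g})$ the element $z$ is central and $\ad{x}(y^m)=m\,y^{m-1}z$, hence $\ad{x}^{k}(y^m)=m(m-1)\cdots(m-k+1)\,y^{m-k}z^{k}$ for $k\leq m$. Thus $\ad{x}^{m+1}(y^m)=0$ while $\ad{x}^{m}(y^m)=m!\,z^{m}\neq 0$, so the inner superderivation $\ad{x}$ is locally nilpotent but not nilpotent.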
\begin{thm}\label{superalgebra_case} Let $A$ be a superalgebra with locally nilpotent superderivations $\partial_1, \ldots, \partial_l$ such that $\bigcap_{i=1}^l \ker\partial_i \subseteq \SZ{A}$ and such that for all $i\leq j$ there exist $\lambda_{i,j}\in \mathbb{C}$ with \begin{equation}\label{relation_eq} \partial_i \circ \partial_j - \lambda_{i,j} \partial_j \circ \partial_i \in \sum_{s=1}^{i-1} \mathbb{C} \partial_s.\end{equation} Then any non-zero $\partial$-stable ideal $I$ of $A$ contains a non-zero supercentral element. In particular if $I$ is graded and Noetherian, then it contains a supercentralizing set of generators. \end{thm} \begin{proof} For each $1\leq t \leq l$ set $K_t = \bigcap_{i=1}^t \ker\partial_i$. We will first show that the $K_t$ are $\partial$-stable subalgebras of $A$. Let $1\leq t, j \leq l$ and $a\in K_t$. If $j\leq t$, then $\partial_j(a)=0\in K_t$ by definition. Hence suppose $j>t$. By hypothesis for any $1\leq i \leq t<j$ we have $$\partial_i(\partial_j(a)) = \lambda_{i,j}\partial_j(\partial_i(a)) + \sum_{s=1}^{i-1} \mu_{i,j,s} \partial_s(a) = 0$$ for some $\lambda_{i,j}, \mu_{i,j,s} \in \mathbb{C}$. Thus $\partial_j(a)\in K_t$. To show that $I$ contains a non-zero element of the supercentre of $A$ note that since $\partial_1$ is locally nilpotent, for any $0\neq a \in I$ there exists $n_1\geq 0$ such that $0\neq a'=\partial_1^{n_1}(a) \in \ker \partial_1=K_1$. Since $I$ is $\partial_1$-stable, $a'\in I\cap K_1$. Suppose $1\leq t \leq l$ and $0\neq a_t \in I\cap K_t$; then since $\partial_{t+1}$ is locally nilpotent, there exists $n_{t+1}\geq 0$ such that $0\neq a'=\partial_{t+1}^{n_{t+1}}(a_t) \in \ker \partial_{t+1}$. Since $I$ and $K_t$ are $\partial$-stable, we have $a'\in I\cap K_{t+1}$. Hence for $t=l$, we get $0\neq I\cap K_l \subseteq I\cap \SZ{A}$. Assume that $I$ is graded and Noetherian and let $0\neq a=a_0+a_1 \in I \cap \SZ{A}$. Since $I$ and $\SZ{A}$ are graded, both parts $a_0$ and $a_1$ belong to $I\cap \SZ{A}$, one of them being non-zero. Thus we may choose $a$ to be homogeneous. Let $J_1=Aa$ be the graded ideal generated by $a$; then all superderivations $\partial_i$ lift to superderivations of $A/J_1$ satisfying the same relation (\ref{relation_eq}) as before. Moreover $I/J_1$ is a graded Noetherian $\partial$-stable ideal of $A/J_1$. Applying the procedure of obtaining a supercentral element to $I/J_1$ in $A/J_1$ yields a supercentral homogeneous element $a'\in I/J_1 \cap \SZ{A/J_1}$. Set $J_2=Aa + Aa'$. Continuing in this way leads to an ascending chain of ideals $J_1\subseteq J_2 \subseteq \cdots \subseteq I$ that eventually has to stop, i.e. $I=J_m$ for some $m$. By construction, the generators used to build up $J_1, J_2, \ldots, J_m$ form a supercentralizing set of generators for $I$. \end{proof} In order to apply the last theorem to the enveloping algebra of a finite dimensional nilpotent Lie superalgebra $\mathfrak{g}$, we have to choose an appropriate basis of homogeneous elements. Without loss of generality we might assume that $\mathfrak{g}$ has a refined central series $$ \mathfrak{g}=\mathfrak{g}^n \supset \mathfrak{g}^{n-1} \supset \mathfrak{g}^{n-2} \supset \cdots \supset \mathfrak{g}^1 \supset \mathfrak{g}^{0}=\{0\}$$ with $[\mathfrak{g},\mathfrak{g}^i] \subseteq \mathfrak{g}^{i-1}$ and $\dim(\mathfrak{g}^i/\mathfrak{g}^{i-1})=1$ for all $1\leq i \leq n$.
Let $x_1,x_2, \ldots, x_{n}$ be a basis of $\mathfrak{g}$ such that each element $x_i+\mathfrak{g}^{i-1}$ is non-zero (and hence forms a basis) in $\mathfrak{g}^i/\mathfrak{g}^{i-1}$. Actually each $x_i$ is homogeneous, since if $x_i={x_i}_0 + {x_i}_1$ with ${x_i}_j$ homogeneous, then as ${x_i}_0$ and ${x_i}_1$ cannot be linearly independent as $g^i/g^{i-1}$ is 1-dimensional, one of them belongs to $\mathfrak{g}^{i-1}$. \begin{cor}\label{supercentral} Any graded ideal of the enveloping algebra of a finite dimensional nilpotent Lie superalgebra has a supercentralizing set of generators. \end{cor} \begin{proof} Let $\mathfrak{g}$ and $A=U(\mathfrak{g})$ be as above, as well as the chosen basis of $\mathfrak{g}$ $x_1, \ldots, x_n$ of homogeneous elements. Set $\partial_i=\ad{x_i}$. By Proposition \ref{all_inner_locally_nilpotent} all superderivations $\partial_i$ are locally nilpotent. Let $i<j$, then $[x_i,x_j] \in \mathfrak{g}^{i-1}$ show that there are scalars $\mu_{i,j,s}\in \mathbb{C}$ such that $$[x_i,x_j] = \sum_{s=1}^{i-1} \mu_{i,j,s} x_s.$$ Note that $\ad{[x_i,x_j]} = \sum_{s=1}^{i-1} \mu_{i,j,s} \ad{x_s}.$ Therefore, using Lemma \ref{Lemma1}, we have $$\partial_i\circ \partial_j = (-1)^{|x_i||x_j|}\partial_j\circ \partial_i + \sum_{s=1}^{i-1} \mu_{i,j,s} \partial_s.$$ Hence the assumptions of Theorem \ref{superalgebra_case} are fulfilled and our claim follows (since $A$ is Noetherian). \end{proof} This last result with Theorem~\ref{Conclusion} gives the following: \begin{cor}\label{conclusion_super_lie} Let $\mathfrak{g}$ be a finite dimensional nilpotent Lie superalgebra. Then $U=U(\mathfrak{g})$ satisfies property $(\diamond)$ if and only if every primitive factor of $U$ does. \end{cor} \begin{proof} By Corollary \ref{supercentral} any graded ideal is generated by supercentral hence normal elements. Moreover every primitive ideal of $U(\mathfrak{g})$ is maximal by \cite[Corollary 1.6]{Letzter}. Hence the result follows from Theorem \ref{Conclusion}. \end{proof} \section{Primitive factors of nilpotent Lie superalgebras} It is a standard fact that primitive factors of enveloping algebras of finite dimensional nilpotent Lie algebras are Weyl algebras. Recall that the $n$th Weyl algebra over $\mathbb{C}$ is the algebra $A_{n}(\mathbb{C})$ generated by $2n$ elements $x_{1}, \ldots , x_{n}, y_{1}, \ldots , y_{n}$ subject to the relations $x_{i}y_{j}-y_jx_i= \delta_{ij}$, for all $1\leq i,j\leq n$. A.Bell and I.Musson showed in \cite{primitive_factors_of_enveloping_algebras_of_nilpotent_lie_superalgebras-musson_bell} that primitive factors of enveloping algebras of finite dimensional nilpotent Lie superalgebras are of the form $\mathrm{Cliff}_q(\mathbb{C})\otimes A_p(\mathbb{C})$ where $\mathrm{Cliff}_q(\mathbb{C})$ is a Clifford algebra. We know from \cite{Lam_QuadraticForms} that $$\mathrm{Cliff}_0(\mathbb{C}) = \mathbb{C},\qquad \mathrm{Cliff}_1(\mathbb{C}) = \mathbb{C} \times \mathbb{C}, \qquad \mathrm{Cliff}_2(\mathbb{C}) = M_2(\mathbb{C})$$ and $\mathrm{Cliff}_{n+2}(\mathbb{C}) = \mathrm{Cliff}_n(\mathbb{C}) \otimes M_2(\mathbb{C})$ for all $n>2$. The next Lemma shows that property $(\diamond)$ is stable under tensoring with a Clifford algebra: \begin{lem}\label{Clifford} A $\mathbb{C}$-algebra $A$ satisfies $(\diamond)$ if and only if $\mathrm{Cliff}_q(\mathbb{C}) \otimes A$ satisfies $(\diamond)$ for all (for one) $q$. \end{lem} \begin{proof} By Corollary~\ref{tensor_with_fdim_algebra}, $\mathrm{Cliff}_q(\mathbb{C}) \otimes A$ satisfies $(\diamond)$ if $A$ does. 
On the other hand suppose that there exists $q>0$ such that $\mathrm{Cliff}_q(\mathbb{C}) \otimes A$ satisfies $(\diamond)$. If $q=2m$ is even, then $\mathrm{Cliff}_q(\mathbb{C}) \otimes A = M_{2^m}(A)$ which is Morita equivalent to $A$. Since $(\diamond)$ is a Morita-invariant property as the equivalence between module categories yields lattice isomorphisms of the lattice of submodules of modules, we get that $A$ satisfies $(\diamond)$. If $q=2m+1$ is odd, then $\mathrm{Cliff}_q(\mathbb{C}) \otimes A = M_{2^m}(A)\times M_{2^m}(A)$. Since $A$ is Morita equivalent to the factor $M_{2^m}(A)$ it also satisfies $(\diamond)$. \end{proof} The question is hence which Weyl algebras do satisfy $(\diamond)$. Being a semiprime Noetherian ring of Krull dimension 1, the first Weyl algebra $A_{1}(\mathbb{C})$ satisfies the property $(\diamond)$ \cite{injective_modules_over_down-up_algebras}. However, for $n \mathfrak{g}eq 2$, the Weyl algebra $A_n=A_{n}(\mathbb{C})$ does not satisfy the property $(\diamond)$. In\cite{nonholonomic_modules_over_weyl_algebras_and_enveloping_algebras} J. T. Stafford constructs a simple $A_{n}(\mathbb{C})$-module which has an essential extension of Krull dimension $n-1$: \begin{thm}[T.Stafford {\cite[Theorem 1.1, Corollary 1.4]{nonholonomic_modules_over_weyl_algebras_and_enveloping_algebras}}]\label{stafford's_theorem} For $2 \leq i \leq n$ pick $\lambda_{i} \in \mathbb{C}$ that are linearly independent over $\mathbb{Q}$. Then the element $$\alpha = x_{1} + y_{1}\left(\sum_{2}^{n}\lambda_{i}x_{i}y_{i}\right) + \sum_{2}^{n}(x_{i} + y_{i})$$ generates a maximal right ideal of $A_n=A_{n}(\mathbb{C})$. In particular $A_{n} / x_{1}\alpha A_{n}$ is an essential extension of the simple $A_{n}$-module $A_{n} / \alpha A_{n}$ by the module $A_{n} / x_{1}A_{n}$, which has Krull dimension $n-1$. \end{thm} Since Artinian modules are exactly the ones with Krull dimension zero, this implies that $A_{n}(\mathbb{C})$ satisfies the property $(\diamond)$ if and only if $n=1$. Stafford's result is a key ingredients in the proof of our main theorem. The order of Weyl algebras appearing in the primitive factors of enveloping algebras $U(\mathfrak{g})$ of finite dimensional nilpotent Lie superalgebras $\mathfrak{g}$ has been determined by E.Herscovich in \cite{dixmier_map_for_nilpotent_super_lie_algebras-herscovich} and is related to the index of the underlying even part of $\mathfrak{g}$. Let $f\in \mathfrak{g}^*$ be a linear functional on a Lie algebra $\mathfrak{g}$ and set $$g^{f} = \{ x \in \mathfrak{g} \mid f([x,y]) = 0, \: \forall y\in \mathfrak{g} \}$$ be the orthogonal subspace of $\mathfrak{g}$ with respect to the bilinear form $f([-,-])$. The number $$\indx{\mathfrak{g}}:=\inf_{f \in \mathfrak{g}^{*}} \dim \mathfrak{g}^{f}$$ is called the \textit{index} of $\mathfrak{g}$. \begin{thm}[E.Herscovich {\cite{dixmier_map_for_nilpotent_super_lie_algebras-herscovich}}, A.Bell \& I.Musson {\cite{primitive_factors_of_enveloping_algebras_of_nilpotent_lie_superalgebras-musson_bell}}]\label{Estanislao} Let $\mathfrak{g}$ be a finite dimensional nilpotent complex Lie superalgebra. \begin{enumerate} \item For $f\in \mathfrak{g}^*$ there exists a graded primitive ideal $I(f)$ of $U(\mathfrak{g})$ such that $$U(\mathfrak{g})/I(f)\simeq \mathrm{Cliff}_q(\mathbb{C}) \otimes A_p(\mathbb{C}),$$ where $2p=\dim(\mathfrak{g}_0/\mathfrak{g}_0^f) \leq \dim(\mathfrak{g}_0)-\indx{\mathfrak{g}_0}$ and $q\mathfrak{g}eq 0$. 
\item For every graded primitive ideal $P$ of $U(\mathfrak{g})$ there exists $f\in \mathfrak{g}^*$ such that $P=I(f)$. \end{enumerate} \end{thm} Combining Stafford's and Herscovich's results with Corollary~\ref{conclusion_super_lie} now easily leads to the following \begin{prop}\label{diamond-index} Let $\mathfrak{g}=\mathfrak{g}_0\oplus \mathfrak{g}_1$ be a finite dimensional nilpotent complex Lie superalgebra. Then $U(\mathfrak{g})$ satisfies $(\diamond)$ if and only if $\indx{\mathfrak{g}_0} \geq \dim(\mathfrak{g}_0)-2$. \end{prop} \begin{proof} $(\Rightarrow)$ By Theorem~\ref{Estanislao} each primitive factor of $U(\mathfrak{g})$ is of the form $ \mathrm{Cliff}_q(\mathbb{C}) \otimes A_p(\mathbb{C}) $ where $2p=\dim(\mathfrak{g}_0/\mathfrak{g}_0^f) = \dim(\mathfrak{g}_0)-\dim{\mathfrak{g}_0^f}$. Since the property $(\diamond)$ is inherited by factor rings, this implies together with Theorem~\ref{stafford's_theorem} and Lemma~\ref{Clifford} that $p \leq 1$, that is, $\dim{\mathfrak{g}_0^f} \geq \dim(\mathfrak{g}_0)-2$, i.e. $\indx{\mathfrak{g}_0}\geq \dim(\mathfrak{g}_0)-2$.\\ $(\Leftarrow)$ If $\indx{\mathfrak{g}_0} \geq \dim(\mathfrak{g}_0)-2$ then the primitive factors of $U(\mathfrak{g})$ are either of the form $\mathrm{Cliff}_q(\mathbb{C})$ or $\mathrm{Cliff}_q(\mathbb{C}) \otimes A_1(\mathbb{C})$. Thus the primitive factors of $U(\mathfrak{g})$ satisfy the property $(\diamond)$ by Lemma~\ref{Clifford}. This implies together with Corollary~\ref{conclusion_super_lie} that $U(\mathfrak{g})$ satisfies $(\diamond)$. \end{proof} \section{Nilpotent Lie algebras with almost maximal index} In this last section we will classify all finite dimensional complex Lie algebras $\mathfrak{g}$ with index greater than or equal to $\dim{\mathfrak{g}}-2$. It is clear that if $\indx{\mathfrak{g}}=\dim{\mathfrak{g}}$, then $\mathfrak{g}$ is abelian. We say that a Lie algebra $\mathfrak{g}$ has {\it almost maximal index} if $\indx{\mathfrak{g}}=\dim(\mathfrak{g}) - 2$. As a first step we show that a direct product $\mathfrak{g}_1 \times \mathfrak{g}_2$ of two Lie algebras $\mathfrak{g}_{1}$ and $\mathfrak{g}_{2}$ has almost maximal index if and only if one of them is abelian and the other one has almost maximal index. Recall that the Lie bracket of the direct product $\mathfrak{g} = \mathfrak{g}_{1} \times \mathfrak{g}_{2}$ is defined as $$[(x_{1}, y_{1}), (x_{2}, y_{2})] := ([x_{1}, x_{2}], [y_{1}, y_{2}])$$ for all $x_{1}, x_{2} \in \mathfrak{g}_{1},\ y_{1}, y_{2} \in \mathfrak{g}_{2}$. For the product algebra, we have the following formula: \begin{lem}\label{index_formula} For Lie algebras $\mathfrak{g}_1, \mathfrak{g}_2$ the following formula holds: $$\indx{\mathfrak{g}_1 \times \mathfrak{g}_2} = \indx{\mathfrak{g}_1} + \indx{\mathfrak{g}_2}.$$ In particular $\mathfrak{g}_1\times \mathfrak{g}_2$ has almost maximal index if and only if one of the factors has almost maximal index and the other factor is abelian. \end{lem} \begin{proof} Set $\mathfrak{g}=\mathfrak{g}_1 \times \mathfrak{g}_2$. Since $\mathfrak{g}^* = \mathfrak{g}_1^* \times \mathfrak{g}_2^*$, for all $f \in \mathfrak{g}^{*}$ we have $\dim{\mathfrak{g}^f} = \dim{\mathfrak{g}_1^{f_1}} + \dim{\mathfrak{g}_2^{f_2}}$, with $f_i = f\circ\varepsilon_i \in \mathfrak{g}_i^*$ and inclusions $\varepsilon_i:\mathfrak{g}_i\rightarrow \mathfrak{g}$. Thus $\indx{\mathfrak{g}} = \indx{\mathfrak{g}_1} + \indx{\mathfrak{g}_2}$.
Note that in general $\indx{\mathfrak{g}_i}=\dim(\mathfrak{g}_i) - 2n_i$ for some $n_i\mathfrak{g}eq 0$ and let $\mathfrak{g}=\mathfrak{g}_1\times \mathfrak{g}_2$. Hence $$\indx{\mathfrak{g}}=\indx{g_1}+\indx{g_2} = \dim(\mathfrak{g}_1)-2n_1+\dim(\mathfrak{g}_2)-2n_2 = \dim(\mathfrak{g}) - 2(n_1+n_2) = \dim(\mathfrak{g})-2$$ if and only if $n_1+n_2=1$ which shows our claim. \end{proof} The Lemma together with Proposition \ref{diamond-index} implies: \begin{prop} Let $\mathfrak{g}$ be a finite dimensional complex nilpotent Lie algebra. Then \\$U(\mathfrak{g})[x_{1}, \ldots, x_{n}]$ has the property $(\diamond)$ if and only if $U(\mathfrak{g})$ has the property $(\diamond)$. \end{prop} \begin{proof} Suppose that $U(\mathfrak{g})$ has the property $(\diamond)$. We have $$ U(\mathfrak{g})[x_{1}, \ldots, x_{n}] = U(\mathfrak{g}) \otimes \mathbb{C}[x_{1}, \ldots, x_{n}] = U(\mathfrak{g}) \otimes U(\mathfrak{a}) = U(\mathfrak{g} \oplus \mathfrak{a}) $$ for an $n$-dimensional Abelian Lie algebra $\mathfrak{a}$. Since $U(\mathfrak{g})$ satisfies $(\diamond)$, $\mathfrak{g}$ has index at least $\dim(\mathfrak{g}) - 2$. By Lemma~\ref{index_formula}, we have $\operatorname{ind}(\mathfrak{g} \oplus \mathfrak{a}) \mathfrak{g}eq \dim(\mathfrak{g}) + n - 2 = \dim(\mathfrak{g} \oplus \mathfrak{a}) - 2$. Since $\mathfrak{g} \oplus \mathfrak{a}$ is nilpotent, it follows by Proposition~\ref{diamond-index} that $U(\mathfrak{g} \oplus \mathfrak{a})$ satisfies $(\diamond)$. Thus $U(\mathfrak{g})[x_{1}, \ldots, x_{n}]$ also satisfies $(\diamond)$. Conversely, if the polynomial algebra $U(\mathfrak{g})[x_{1}, \ldots, x_{n}]$ has the property $(\diamond)$, then $U(\mathfrak{g})$ also has it since it is inherited by factor rings. \end{proof} Note that in general it seems unknown whether property $(\diamond)$ is inherited by forming polynomial rings. Lemma \ref{index_formula} also shows that we can ignore abelian direct factors for the characterization of Lie algebras with almost maximal index. The following Proposition will classify those Lie algebras. \begin{prop}\label{listOfDiamond} A finite dimensional nilpotent Lie algebra $\mathfrak{g}$ has almost maximal index if and only if $\mathfrak{g}$ has an abelian ideal of codimension $1$ or if $\mathfrak{g}$ is isomorphic (up to an abelian direct factor) to $\mathfrak{h}_5$ or $\mathfrak{h}_6$. \end{prop} \begin{proof} Let $\mathfrak{g}$ be a finite dimensional nilpotent Lie algebra of dimension $n$ and index $n-2$ and suppose that $\mathfrak{g}$ does not have an abelian ideal of codimension $1$. There exists a functional $f\in \mathfrak{g}^*$ such that $\dim\mathfrak{g}^{f} = \indx{\mathfrak{g}}=n-2$. By \cite[1.11.7]{enveloping_algebras}, $\mathfrak{g}^{f}$ is an abelian Lie subalgebra of $\mathfrak{g}$. By \cite[5.1]{BurdeCeballos} there exists an abelian ideal $\mathfrak{a}$ of $\mathfrak{g}$ of codimension $2$. Let $\{e_{1}, \ldots , e_{n}\}$ be a basis of $\mathfrak{g}$, such that $\{e_3, \ldots, e_n\}$ is a basis for $\mathfrak{a}$. Since $\mathfrak{a}$ is abelian, the matrix of brackets $[e_i,e_j]$ has the form $$M=([e_i,e_j])_{i,j} = \left( \begin{array}{cc} \phantom{-}A & B \\ -B^{t} & \mathbf{0} \\ \end{array} \right) $$ where $A$ is a $2\times 2$ skew-symmetric matrix, $B$ is a $2\times(n-2)$ matrix with entries in $\mathfrak{a}$, and $\mathbf{0}$ is the $(n-2)\times(n-2)$ zero matrix. Let $$B_{ij}=\left(\begin{array}{ll} [e_1,e_i] & [e_1,e_j] \\ {[e_2,e_i]} & [e_2,e_j] \end{array}\right)$$ be a $2\times 2$-minor of $B$, for some $3\leq i\neq j\leq n$. 
We remark that $B_{ij}$ cannot have a triangular shape. For example if $[e_2,e_i]=0$, then we could define a linear form $f\in \mathfrak{g}^*$ such that $f$ is non-zero on $[e_1,e_i]$ and $[e_2,e_j]$ and zero outside $\mathbb{C}[e_1,e_i]+\mathbb{C}[e_2,e_j]$. Then $\{e_1,e_2,e_i\}$ would be linearly independent modulo $\mathfrak{g}^f$, i.e. $\dim(\mathfrak{g}^f)\leq n-3$, contradicting the hypothesis that the index is $n-2$. If one of the rows of $B$ is zero, then $\mathfrak{a}\oplus \mathbb{C} e_i$ is an abelian ideal of codimension $1$ for $i=1$ or $i=2$, contradicting our assumption. Hence there exist $3\leq i,j\leq n$ such that $[e_1,e_i]\neq 0 \neq [e_2,e_j]$. We will show that after a suitable base change we can assume $i=j$. Assume $i\neq j$ and suppose first that $[e_1,e_i]$ and $[e_1,e_j]$ are linearly independent. Note that $[e_1,e_i]$ and $[e_2,e_i]$ are linearly independent, otherwise if $[e_2,e_i]=\lambda [e_1,e_i]$ for some $\lambda \in \mathbb{C}$, then after a base change $e_2 \leftarrow e_2-\lambda e_1$ we have $[e_2,e_i]=0$ and hence $B_{ij}$ has triangular shape, which is not possible as mentioned before. This allows us to define a linear form $f\in \mathfrak{g}^*$ that is non-zero on $[e_1,e_j]$ and $[e_2,e_i]$, and zero on $[e_1,e_i]$. Then $\{e_1,e_2,e_i\}$ are linearly independent modulo $\mathfrak{g}^f$, contradicting the assumption on the index. Hence we must have that $[e_1,e_j]$ and $[e_1,e_i]$ are linearly dependent, say $[e_1,e_j]=\lambda[e_1,e_i]$, $\lambda\in \mathbb{C}$. After the base change $e_j \leftarrow e_j-\lambda e_i$ we have $[e_1,e_j]=0$, which gives the minor $B_{ij}$ a triangular shape since $[e_2,e_j]\neq 0$ and since $i\neq j$. But as said before, the minors $B_{ij}$ cannot have a triangular shape. Hence $i=j$ and without loss of generality we may assume $i=3$. Moreover we can change the basis of $\mathfrak{a}$ such that $e_4=[e_1,e_3]$ and $e_5=[e_2,e_3]$. Note that since $\mathfrak{g}/\mathfrak{a}$ is abelian, $[e_1,e_2]\in \mathfrak{a}$ and we are left with two cases: \textbf{Case 1.} If $[e_1,e_2] \not\in \langle e_3,e_4,e_5\rangle$, then change the basis of $\mathfrak{a}$ such that $e_6=[e_1,e_2]$. The Lie algebra $\mathfrak{g}$ is then equal to the direct product $\mathfrak{g} = \mathfrak{h}_6 \times \mathfrak{a}'$ where $\mathfrak{a}'=\langle e_7, \ldots, e_n\rangle$. \textbf{Case 2.} If $[e_1,e_2] \in \langle e_3,e_4,e_5\rangle$ then note first that $[e_1,e_2]\not\in\langle e_4,e_5\rangle$, since otherwise if $[e_1,e_2] = \alpha e_4 + \beta e_5$ for some $\alpha, \beta \in\mathbb{C}$, we have after the base change $e_1\leftarrow e_1+\beta e_3$ and $e_2\leftarrow e_2-\alpha e_3$ that $[e_1,e_2]=0$. Hence $\mathbb{C} e_1 \oplus \mathbb{C} e_2 \oplus \langle e_4,\ldots, e_n\rangle$ would be an abelian ideal of codimension $1$, a contradiction to our hypothesis. Thus $[e_1,e_2] = \alpha e_3 + \beta e_4 + \gamma e_5$ with $\alpha\neq 0$. After the basis change $$e_3 \leftarrow \alpha e_3 + \beta e_4 + \gamma e_5, \qquad e_4 \leftarrow \alpha^{-1} e_4, \qquad e_5 \leftarrow \alpha^{-1} e_5$$ we have $[e_1,e_2]=e_3$, $[e_1,e_3]=e_4$ and $[e_2,e_3]=e_5$. Hence $\mathfrak{g}=\mathfrak{h}_5 \times \mathfrak{a}'$ where $\mathfrak{a}' = \langle e_6, \ldots, e_n\rangle$ is abelian. \end{proof} \begin{proof}[Proof of the Main Theorem \ref{Main_Theorem}] $(a)\Leftrightarrow (c)$ and $(b)\Leftrightarrow (c)$ follow from Proposition \ref{diamond-index}. $(c)\Leftrightarrow (d)$ follows from Proposition \ref{listOfDiamond}.
\end{proof} \section{Examples} Finite dimensional Lie algebras $\mathfrak{g}$ with an abelian ideal of codimension $1$ are in bijection with finite dimensional vector spaces $V$ and nilpotent endomorphisms $f:V\rightarrow V$. For such data one defines $\mathfrak{g}=\mathbb{C} e \oplus V$ and $[e,x]=f(x)$ for all $x\in V$. An example of this construction is given by the $n$-dimensional {\bf standard filliform} Lie algebra, which is the Lie algebra on the vector space $\mathcal{L}_n=\mathrm{span} \{e_1, \ldots, e_n\}$ such that $[e_1,e_i]=e_{i+1}$ for all $2\leq i < n$ and $[e_i,e_j]=0$ for $i,j\neq 1$. Hence $\mathcal{L}_n$ provides an example of a non-abelian nilpotent Lie algebra $\mathfrak{g}$ such that $U(\mathfrak{g})$ has property $(\diamond)$. The $3$-dimensional Heisenberg Lie algebra occurs as $\mathcal{L}_3$. Given an even dimensional complex vector space $V=\mathbb{C}^{2n}$ and an anti-symmetric bilinear form $\omega: V\times V \rightarrow \mathbb{C}$, one defines the $2n+1$-dimensional {\bf Heisenberg Lie algebra} associated to $(V,\omega)$ as $\mathcal{H}_{2n+1}=V\oplus \mathbb{C} h$ with $h$ being central and $[x,y]=\omega(x,y)h$ for all $x,y\in V$. Note that $\indx{\mathcal{H}_{2n+1}} = 1$. Thus $U(\mathcal{H}_{2n+1})$ satisfies $(\diamond)$ if and only if $n=1$, i.e. for $\mathcal{H}_{3}=\mathcal{L}_3$. In \cite{HeisenbergLieSuperAlgebras} a finite dimensional Lie superalgebra $\mathfrak{g}$ is called a {\bf Heisenberg Lie superalgebra} if it has a $1$-dimensional homogeneous center $\mathbb{C} h= Z(\mathfrak{g})$ such that $[\mathfrak{g},\mathfrak{g}] \subseteq Z(\mathfrak{g})$ and such that the associated homogeneous skew-supersymmetric bilinear form $\omega: \mathfrak{g}\times \mathfrak{g} \rightarrow \mathbb{C}$ given by $[x,y]=\omega(x,y)h$ for all $x,y\in \mathfrak{g}$ is non-degenerated when extended to $\mathfrak{g}/Z(\mathfrak{g})$. On the other hand one can construct a Heisenberg Lie superalgebra on any finite-dimensional supersymplectic vector superspace $V$ with a homogeneous supersymplectic form $\omega$. By \cite[page 73]{HeisenbergLieSuperAlgebras} if $\omega$ is even, i.e. $\omega(\mathfrak{g}_0,\mathfrak{g}_1)=0$, then $\mathfrak{g}_0$ is a Heisenberg Lie algebra and if $\omega$ is odd, i.e. $\omega(\mathfrak{g}_i,\mathfrak{g}_i)=0$ for $i\in \{0,1\}$, then $\mathfrak{g}_0$ is Abelian. Hence $U(\mathfrak{g})$ satisfies $(\diamond)$ if and only if $\omega$ is odd or $\dim{\mathfrak{g}_0}\leq 3$. \providecommand{\bysame}{\leavevmode\mathfrak{h}box to3em{\mathfrak{h}rulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \mathfrak{h}ref{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\mathfrak{h}ref}[2]{#2} \end{document}
\begin{document} \title{On Low Tree-Depth Decompositions} \author{Jaroslav Ne{\v s}et{\v r}il} \address{Jaroslav Ne{\v s}et{\v r}il\\ Computer Science Institute of Charles University (IUUK and ITI)\\ Malostransk\' e n\' am.25, 11800 Praha 1, Czech Republic} \email{[email protected]} \thanks{Supported by grant ERCCZ LL-1201 and CE-ITI P202/12/G061, and by the European Associated Laboratory ``Structures in Combinatorics'' (LEA STRUCO)} \author{Patrice Ossona~de~Mendez} \address{Patrice~Ossona~de~Mendez\\ Centre d'Analyse et de Math\'ematiques Sociales (CNRS, UMR 8557)\\ 190-198 avenue de France, 75013 Paris, France --- and --- Computer Science Institute of Charles University (IUUK)\\ Malostransk\' e n\' am.25, 11800 Praha 1, Czech Republic} \email{[email protected]} \thanks{Supported by grant ERCCZ LL-1201 and by the European Associated Laboratory ``Structures in Combinatorics'' (LEA STRUCO), and partially supported by ANR project Stint under reference ANR-13-BS02-0007} \date{\today} \begin{abstract} The theory of sparse structures usually uses tree like structures as building blocks. In the context of sparse/dense dichotomy this role is played by graphs with bounded tree depth. In this paper we survey results related to this concept and particularly explain how these graphs are used to decompose and construct more complex graphs and structures. In more technical terms we survey some of the properties and applications of low tree depth decomposition of graphs. \end{abstract} \maketitle \section{Tree-Depth} The {\em tree-depth} of a graph is a minor montone graph invariant that has been defined in \cite{Taxi_tdepth}, and which is equivalent or similar to the {\em rank function} (used for the analysis of countable graphs, see e.g.\ \cite{Nev2003}), the {\em vertex ranking number} \cite{vertex_ranking,Schaffer}, and the minimum height of an elimination tree \cite{Bodlaender1995}. Tree-depth can also be seen as an analog for undirected graphs of the cycle rank defined by Eggan \cite{Eggan1963}, which is a parameter relating digraph complexity to other areas such as regular language complexity and asymmetric matrix factorization. The notion of tree-depth found a wide range of applications, from the study of non-repetitive coloring \cite{Thue_choos} to the proof of the homomorphism preservation theorem for finite structures \cite{Rossman2007}. Recall the definition of tree-depth: \begin{definition} The {\em tree-depth} ${\rm td}(G)$ of a graph $G$ is defined as the minimum height\footnote{Here the height is defined as the maximum number of vertices in a chain from a root to a leaf} of a rooted forest $Y$ such that $G$ is a subgraph of the closure of $Y$ (that is of the graph obtained by adding edges between a vertex and all its ancestors). In particular, the tree-depth of a disconnected graph is the maximum of the tree-depths of its connected components. \end{definition} Several characterizations of tree-depth have been given, which can be seen as possible alternative definitions. Let us mention: \begin{trivlist} \setlength{\itemsep}{3mm} \item {\bf TD$1$.} The tree-depth of a graph is the order of the largest clique in a trivially perfect supergraph of $G$ \cite{td-wiki}. Recall that a graph is {\em trivially perfect} if it has the property that in each of its induced subgraphs the size of the maximum independent set equals the number of maximal cliques \cite{Golumbic1978105}. 
This characterization follows directly from the property that a connected graph is trivially perfect if and only if it is the comparability graph of a rooted tree \cite{Golumbic1978105}. \item {\bf TD$2$.} The tree-depth of a graph is the minimum number of colors in a {\em centered coloring} of $G$, that is in a vertex coloring of $G$ such that in every connected subgraph of $G$ some color appears exactly once \cite{Taxi_tdepth}. \item {\bf TD$3$.} A strongly related notion is vertex ranking, which has been investigated in \cite{vertex_ranking,Schaffer}. The {\em vertex ranking} (or {\em ordered coloring}) of a graph is a vertex coloring by a linearly ordered set of colors such that for every path in the graph with end vertices of the same color there is a vertex on this path with a higher color. The equality of the minimum number of colors in a vertex ranking and the tree-depth is proved in \cite{Taxi_tdepth}. \item {\bf TD$4$.} The tree-depth of a graph $G$ with connected components $G_1,\dots,G_p$ is recursively defined by: $$ {\rm td}(G)=\begin{cases} 1&\text{ if }G\simeq K_1\\ \displaystyle\max_{i=1}^p {\rm td}(G_i)&\text{ if $G$ is disconnected}\\ \displaystyle 1+\min_{v\in V(G)}{\rm td}(G-v)&\text{ if $G$ is connected and }G\not\simeq K_1 \end{cases} $$ The equivalence between the value given by this recursive definition and the minimum height of an elimination tree, as well as the equality of this value with the tree-depth, are proved in \cite{Taxi_tdepth}. \item {\bf TD$5$.} The tree-depth can also be defined by means of games, see \cite{Giannopoulou2011, Gruber2008,Hunter2011}. In particular, this leads to a min-max formula for tree-depth in the spirit of the min-max formula relating tree-width and bramble size \cite{Seymour1993}. Precisely, a {\em shelter} in a graph $G$ is a family $\mathcal S$ of non-empty connected subgraphs of $G$ partially ordered by inclusion such that for every subgraph $H\in\mathcal S$ not minimal in $\mathcal S$ and for every $x\in H$ there exists $H'\in\mathcal S$ covered by $H$ (in the partial order) such that $x\not\in H'$. The {\em thickness} of a shelter $\mathcal S$ is the minimal length of a maximal chain of $\mathcal S$. Then the tree-depth of a graph $G$ equals the maximum thickness of a shelter in $G$ \cite{Giannopoulou2011}. \item {\bf TD$6$.} Also, graphs with tree-depth at most $t$ can be theoretically characterized by means of a finite set of forbidden minors, subgraphs, or even induced subgraphs. But in each case, the number of obstructions grows at least like a double (and at most a triple) exponential in $t$ \cite{Dvorak2012969}. \end{trivlist} More generally, classes with bounded tree-depth can be characterized by several properties: \begin{trivlist} \setlength{\itemsep}{3mm} \item {\bf TD$7$.} A class of graphs $\mathcal C$ has bounded tree-depth if and only if there is some integer $k$ such that graphs in $\mathcal C$ exclude $P_k$ as a subgraph. More precisely, while computing the tree-depth of a graph $G$ is a hard problem, it can be (very roughly) approximated by considering the height $h$ of a Depth-First Search tree of $G$, as $\lceil\log_2 (h+2)\rceil\leq {\rm td}(G)\leq h$ \cite{Sparsity}.
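For instance, for the path $P_n$ on $n$ vertices the recursive formula of {\bf TD$4$} yields ${\rm td}(P_n)=\lceil \log_2(n+1)\rceil$ (removing a middle vertex leaves two paths of roughly half the order), while a Depth-First Search started at an end vertex of $P_n$ produces a tree of height $n$; thus the upper bound $h$ may be exponentially larger than the actual tree-depth, whereas the lower bound is attained up to an additive constant.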
\item {\bf TD$8$.} A class of graphs $\mathcal C$ has bounded tree-depth if and only if there are integers $s,t,q$ such that graphs in $\mathcal C$ exclude $P_s, K_t,$ and $K_{q,q}$ as induced subgraphs (this follows from the previous item and \cite[Theorem 3]{Atminas2014}, which states that for every $s$, $t$, and $q$, there is a number $Z = Z(s, t, q)$ such that every graph with a path of length at least $Z$ contains either $P_s$ or $K_t$ or $K_{q,q}$ as an induced subgraph). \item {\bf TD$9$.} A monotone class of graphs has bounded tree-depth if and only if it is well quasi-ordered for the induced-subgraph relation (with vertices possibly colored using $k\geq 2$ colors) (follows from \cite{Ding1992}). \item {\bf TD$10$.} A monotone class of graphs has bounded tree-depth if and only if First-order logic (FO) and monadic second-order (MSO) logic have the same expressive power on the class \cite{6280445}. \end{trivlist} Classes of graphs with tree-depth at most $t$ are computationally very simple, as witnessed by the following properties: \begin{trivlist} \setlength{\itemsep}{3mm} \item It follows from {\bf TD$9$} that every hereditary property can be tested in polynomial time when restricted to graphs with tree-depth at most $t$. Let us emphasize how one can combine {\bf TD$8$} and {\bf TD$9$} to get complexity results for $P_s$-free graphs. Recall that a graph $G$ is {\em $k$-choosable} if for every assignment of a set $S(v)$ of $k$ colors to every vertex $v$ of $G$, there is a proper coloring of $G$ that assigns to each vertex $v$ a color from $S(v)$ \cite{vizing,ert}. Note that in general, for $k>2$, deciding $k$-choosability for bipartite graphs is $\Pi_2^P$-complete, hence more difficult than both NP and co-NP problems. It was proved in \cite{Heggernes2009} that for $P_5$-free graphs, that is, graphs excluding $P_5$ as an induced subgraph, $k$-choosability is fixed-parameter tractable. For general $P_s$-free graphs we prove: \begin{theorem} For all integers $s$ and $k$, there is a polynomial time algorithm to decide whether a $P_s$-free graph $G$ is $k$-choosable. \end{theorem} \begin{proof} Assume $G$ is $P_s$-free. We can decide in polynomial time whether $G$ includes $K_{k+1}$ or $K_{k,k^k}$ as an induced subgraph. In the affirmative, $G$ is not $k$-choosable. Otherwise, the tree-depth of $G$ is bounded by some constant $C(s,k)$. As the property of being $k$-choosable is hereditary, we can use a polynomial time algorithm deciding whether a graph with tree-depth at most $C(s,k)$ is $k$-choosable. \end{proof} \item Graphs with tree-depth at most $t$ have a (homomorphism) core of order bounded by a function of $t$ \cite{Taxi_tdepth}. In other words, every graph $G$ with tree-depth at most $t$ has an induced subgraph $H$ of order at most $F(t)$ such that there exists an adjacency preserving map (that is: a {\em homomorphism}) from $V(G)$ to $V(H)$. \item The satisfaction of an ${\rm MSO}_2$ property $\phi$ by a graph $G$ from a class with tree-depth at most $t$ can be checked in time $O(f(\phi, t)\cdot|G|)$, where $f$ has an elementary dependence on $\phi$ \cite{Gajarsky2012}. This is in contrast with the dependence arising for ${\rm MSO}_2$-model checking in classes with bounded treewidth using Courcelle's algorithm \cite{Courcelle2}, where $f$ involves a tower of exponents of height growing with $\phi$ (which is generally unavoidable \cite{Frick2004}).
These properties led to the study of classes with bounded shrub-depth, generalizing classes with bounded tree-depth, and enjoying similar properties for ${\rm MSO}_1$-logic \cite{Gajarsky2012,Ganian2012}. Concerning the dependency on the tree-depth $t$, note that the $(t + 1)$-fold exponential algorithm for MSO model-checking given by Gajarsk{\'y} and Hlin{\v e}n\'y in \cite{gajarsky2012faster} is essentially optimal \cite{Lampis2013}. \end{trivlist} Graphs with bounded tree-depth form the building blocks for more complicated graphs, with which we deal in the next section. \section{Low Tree-Depth Decomposition of Graphs} Several extensions of the chromatic number have been proposed and studied in the literature. For instance, the {\em acyclic chromatic number} is the minimum number of colors in a proper vertex-coloring such that any two colors induce an acyclic graph (see e.g. \cite{AMS,Borodin1979}). More generally, for a fixed parameter $p$, one can ask what is the minimum number of colors in a proper vertex-coloring of a graph $G$, such that any subset $I$ of at most $p$ colors induces a subgraph with treewidth at most $|I|-1$. In this setting, the value obtained for $p=1$ is the chromatic number, while the value obtained for $p=2$ is the acyclic chromatic number. In this setting, the following result has been proved by DeVos, Oporowski, Sanders, Reed, Seymour and Vertigan using the structure theorem for graphs excluding a minor: \begin{theorem}[\cite{2tw}] \label{th:2tw}For every proper minor closed class $\mathcal K$ and integer $k\geq 1$, there is an integer $N=N({\mathcal K},k)$, such that every graph $G \in {\mathcal K}$ has a vertex partition into $N$ graphs such that any $j\leq k$ parts form a graph with tree-width at most $j-1$. \end{theorem} The stronger concept of low tree-depth decomposition has been introduced by the authors in \cite{Taxi_tdepth}. \begin{definition} A {\em low tree-depth decomposition} with parameter $p$ of a graph $G$ is a coloring of the vertices of $G$, such that any subset $I$ of at most $p$ colors induces a subgraph with tree-depth at most $|I|$. The minimum number of colors in a low tree-depth decomposition with parameter $p$ of $G$ is denoted by $\chi_p(G)$. \end{definition} For instance, $\chi_1(G)$ is the (standard) chromatic number of $G$, while $\chi_2(G)$ is the {\em star chromatic number} of $G$, that is, the minimum number of colors in a proper vertex-coloring of $G$ such that any two colors induce a star forest (see e.g. \cite{alon2,Taxi_jcolor}). The authors were able to extend Theorem~\ref{th:2tw} to low tree-depth decompositions in \cite{Taxi_tdepth}. Then, using the concept of transitive fraternal augmentation \cite{POMNI}, the authors further extended the existence of low tree-depth decompositions (with a bounded number of colors) to classes with bounded expansion, the definition of which we recall now: \begin{definition} A class $\mathcal C$ has {\em bounded expansion} if there exists a function $f:\mathbb{N}\rightarrow\mathbb{N}$ such that every topological minor $H$ of a graph $G\in\mathcal{C}$ has an average degree bounded by $f(p)$, where $p$ is the maximum number of subdivisions per edge needed to turn $H$ into a subgraph of $G$.
\end{definition} Extending low tree-depth decompositions to classes with bounded expansion is in fact best possible: \begin{theorem}[\cite{POMNI}] \label{thm:chiBE} Let $\mathcal{C}$ be a class of graphs. Then the following are equivalent: \begin{enumerate} \item for every integer $p$ it holds $\sup_{G\in\mathcal{C}}\chi_p(G)<\infty$; \item the class $\mathcal{C}$ has bounded expansion. \end{enumerate} \end{theorem} Properties and characterizations of classes with bounded expansion will be discussed in more detail in Section~\ref{sec:taxonomy} (we refer the reader to \cite{Sparsity} for a thorough analysis). Let us mention that classes with bounded expansion in particular include proper minor closed classes (as for instance planar graphs or graphs embeddable on some fixed surface), classes with bounded degree, and more generally classes excluding a topological minor. Thus on the one side the classes of graphs with bounded expansion include most of the sparse classes of structural graph theory, yet on the other side they have pleasant algorithmic and extremal properties. On the other hand, one could ask whether for proper minor-closed classes there exists a stronger coloring than the one given by low tree-depth decompositions. Precisely, one can ask what is the minimum number of colors required for a vertex coloring of a graph $G$, so that any subgraph $H$ of $G$ gets at least $f(H)$ colors. (For instance, star coloring corresponds to the graph function where any $P_4$ gets at least $3$ colors.) Define the {\em upper chromatic number} $\overline{\chi}(H)$ of a graph $H$ as the greatest integer such that for any proper minor closed class of graphs $\mathcal C$, there exists a constant $N=N(\mathcal C, H)$, such that any graph $G\in\mathcal C$ has a vertex coloring by at most $N$ colors so that any subgraph of $G$ isomorphic to $H$ gets at least $\overline{\chi}(H)$ colors. The authors proved in \cite{Taxi_tdepth} that $\overline{\chi}(H)={\rm td}(H)$, showing that low tree-depth decomposition is the best we can achieve for proper minor closed classes. Note that the tree-depth of a graph $G$ is also related to the chromatic numbers $\chi_p(G)$ by ${\rm td}(G)=\max_p \chi_p(G)$ \cite{Taxi_tdepth}. \section{Low Tree-Depth Decomposition and Restricted Dualities} The original motivation of low tree-depth decomposition was to prove the existence of a triangle-free graph $H$ such that every triangle-free planar graph $G$ admits a homomorphism to $H$, thus providing a structural strengthening of Gr\"otzsch's theorem \cite{Taxi_jcolor}. Recall that a {\em homomorphism} of a graph $G$ to a graph $H$ is a mapping from the vertex set $V(G)$ of $G$ to the vertex set $V(H)$ of $H$ that preserves adjacency. The existence (resp. non-existence) of a homomorphism of $G$ to $H$ will be denoted by $G\rightarrow H$ (resp. by $G\nrightarrow H$). We refer the interested reader to the monograph \cite{HN} for a detailed study of graph homomorphisms. Thus the above planar triangle-free problem can be restated as follows: Prove that there exists a graph $H$ such that $K_3\nrightarrow H$ and such that for every planar graph $G$ it holds $$ K_3\nrightarrow G\quad\iff\quad G\rightarrow H.
$$ More generally, we are interested in the following problem: given a class of graphs $\mathcal C$ and a connected graph $F$, find a graph $D_{\mathcal{C}}(F)$ for $\mathcal{C}$ (which we shall refer to as a {\em dual} of $F$ for $\mathcal C$), such that $F\nrightarrow D_{\mathcal{C}}(F)$ and such that for every $G\in\mathcal C$ it holds $$ F\nrightarrow G\quad\iff\quad G\rightarrow D_{\mathcal{C}}(F). $$ (Note that $D_{\mathcal{C}}(F)$ is not uniquely determined by the above equivalence.) A couple $(F,D_{\mathcal{C}}(F))$ with the above property is called a {\em restricted duality} of $\mathcal C$. \begin{example} For the special case of triangle-free planar graphs, the existence of a dual was proved by the authors in \cite{Taxi_tdepth}, and the minimum order dual has been proved to be the Clebsch graph by Naserasr \cite{Nas}. $\forall\text{ planar }G:$ $$\duality[15mm]{K3}{Clebsch}.$$ Note that this restricted homomorphism duality extends to the class of all graphs excluding $K_5$ as a minor \cite{Naserasr20095789}. \end{example} \begin{example} A restricted homomorphism duality for toroidal graphs follows from the existence of a finite set of obstructions for $5$-coloring proved by Thomassen in \cite{Thomassen199411}, noticing that all the obstructions shown in Fig.~\ref{fig:6crit} are homomorphic images of one of them, namely $C_1^3$. \begin{figure} \caption{The $6$-critical graphs for the torus.} \label{fig:6crit} \end{figure} Thus we get the following restricted homomorphism duality. $\forall\text{ toroidal }G:$ $$\duality[15mm]{T11}{K5}$$ \end{example} \begin{definition} A class $\mathcal C$ with the property that every connected graph $F$ has a dual for $\mathcal C$ is said to have {\em all restricted dualities}. \end{definition} In \cite{Taxi_tdepth} we proved, using low tree-depth decomposition, that every proper minor closed class $\mathcal C$ has all restricted dualities. We generalized this result in \cite{POMNIII} to classes with bounded expansion. We briefly outline this. In the study of restricted homomorphism dualities, a main tool appeared to be the notion of $t$-approximation: \begin{definition} Let $G$ be a graph and let $t$ be a positive integer. A graph $H$ is a {\em $t$-approximation} of $G$ if $G$ is homomorphic to $H$ (i.e. $G\rightarrow H$) and every subgraph of $H$ of order at most $t$ is homomorphic to $G$. \end{definition} Indeed the following theorem is proved in \cite{FO_CSP}: \begin{theorem} \label{thm:dual_approx} Let $\mathcal C$ be a class of graphs. Then the following are equivalent: \begin{enumerate} \item The class $\mathcal C$ is bounded and has all restricted dualities (i.e. every connected graph $F$ has a dual for $\mathcal C$); \item For every integer $t$ there is a constant $N(t)$ such that every graph $G\in\mathcal{C}$ has a $t$-approximation of order at most $N(t)$. \end{enumerate} \end{theorem} The following theorem stresses the connection between $t$-approximations and low tree-depth decompositions: \begin{theorem}[\cite{FO_CSP}] \label{thm:chi2approx} For every integer $t$ there exists a constant $C_t$ such that every graph $G$ has a $t$-approximation $H$ with order $$ |H|\leq C_t^{\chi_t(G)^t}. $$ \end{theorem} Hence we have the following corollary of Theorems~\ref{thm:dual_approx}, \ref{thm:chi2approx}, and~\ref{thm:chiBE}, which was originally proved in \cite{POMNIII}: \begin{corollary} Every class with bounded expansion has all restricted dualities.
\end{corollary} The connection between classes with bounded expansion and restricted dualities appears to be even stronger, as witnessed by the following (partial) characterization theorem. \begin{theorem}[\cite{FO_CSP}] Let $\mathcal{C}$ be a topologically closed class of graphs (that is, a class closed under the operation of graph subdivision). Then the following are equivalent: \begin{enumerate} \item the class $\mathcal{C}$ has all restricted dualities; \item the class $\mathcal{C}$ has bounded expansion. \end{enumerate} \end{theorem} This theorem also has a variant in the context of directed graphs: \begin{theorem}[\cite{FO_CSP}] Let $\mathcal{C}$ be a class of directed graphs closed under reorientation. Then the following are equivalent: \begin{enumerate} \item the class $\mathcal{C}$ has all restricted dualities; \item the class $\mathcal{C}$ has bounded expansion. \end{enumerate} \end{theorem} \section{Intermezzo: Low Tree-Depth Decomposition and Odd-Distance Coloring} Let $n$ be an odd integer and let $G$ be a graph. The problem of finding a coloring of the vertices of $G$ with a minimum number of colors such that two vertices at distance $n$ are colored differently, called a {\em $D_n$-coloring} of $G$, was introduced in 1977 in Graph Theory Newsletter by E. Sampathkumar \cite{Sampathkumar1977} (see also \cite{jensen2011graph}). In \cite{Sampathkumar1977}, Sampathkumar claimed that, for every odd integer $n$, every planar graph has a $D_n$-coloring with $5$ colors, and conjectured that $4$ colors suffice. Unfortunately, the claimed result was flawed, as witnessed by the graph depicted in Figure~\ref{fig:D3col}, which needs $6$ colors for a $D_3$-coloring \cite{Sparsity}. \begin{figure} \caption{On the left, a planar graph $G$ needing $6$ colors for a $D_3$-coloring. On the right, a witness: this is a graph with vertex set $A\subset V(G)$ in which adjacent vertices are at distance $3$ in $G$, and thus should get distinct colors in a $D_3$-coloring of $G$.} \label{fig:D3col} \end{figure} Low tree-depth decomposition allows one to prove that for any odd integer $n$, a fixed number of colors is sufficient for $D_n$-coloring planar graphs, and this result extends to all classes with bounded expansion. \begin{theorem}[\cite{Sparsity}] \label{thm:oddD} For every class with bounded expansion $\mathcal C$ and every odd integer $n$ there exists a constant $N$ such that every graph $G\in\mathcal{C}$ has a $D_n$-coloring with at most $N$ colors. \end{theorem} The proof of Theorem~\ref{thm:oddD} relies on low tree-depth decomposition, and the bound $N$ given in \cite{Sparsity} for the number of colors sufficient for a $D_n$-coloring of a graph $G$ is double exponential in $\chi_n(G)$. Hence it is still not clear whether a uniform bound could exist for $D_n$-colorings of planar graphs. \begin{problem}[van den Heuvel and Naserasr] Does there exist a constant $C$ such that for every odd integer $n$, it holds that every planar graph has a $D_n$-coloring with at most $C$ colors? \end{problem} Note that, however, there exists no bound for the {\em odd-distance coloring} of planar graphs, which requires that two vertices at odd distance get different colors. Indeed, one can construct outerplanar graphs having an arbitrarily large subset of vertices pairwise at odd distance (see Fig.~\ref{fig:oddcl}). \begin{figure} \caption{There exist outerplanar graphs with arbitrarily large subset of vertices pairwise at odd distance.
(In the figure, the vertices in the periphery are pairwise at distance $1$, $3$, $5$, or $7$.)} \label{fig:oddcl} \end{figure} However, no construction requiring a large number of colors without having a large set of vertices pairwise at odd distance is known. Hence the following problem. \begin{problem}[Thomass\'e] Does there exist a function $f:\mathbb{N}\rightarrow\mathbb{N}$ such that every planar graph without $k$ vertices pairwise at odd distance has an odd-distance coloring with at most $f(k)$ colors? \end{problem} \section{Low Tree-Depth Decomposition and Density of Shallow Minors, Shallow Topological Minors, and Shallow Immersions} \label{sec:taxonomy} Classes with bounded expansion, which have been introduced in \cite{POMNI}, may be viewed as a relaxation of the notion of proper minor closed class. The original definition of classes with bounded expansion relates to the notion of shallow minor, as introduced by Plotkin, Rao, and Smith \cite{shallow}. \begin{definition} Let $G,H$ be graphs with $V(H)=\{v_1,\dots,v_h\}$ and let $r$ be an integer. A graph $H$ is a {\em shallow minor} of a graph $G$ {\em at depth} $r$ if there exist disjoint subsets $A_1,\dots,A_h$ of $V(G)$ such that (see Fig.~\ref{fig:shm}) \begin{itemize} \item the subgraph of $G$ induced by $A_i$ is connected and has radius at most $r$, \item if $v_i$ is adjacent to $v_j$ in $H$, then some vertex in $A_i$ is adjacent in $G$ to some vertex in $A_j$. \end{itemize} \end{definition} \begin{figure} \caption{A shallow minor} \label{fig:shm} \end{figure} We denote \cite{POMNI, Sparsity} by $G\,{\triangledown}\, r$ the class of the (simple) graphs which are shallow minors of $G$ at depth $r$, and we denote by $\rdens{r}(G)$ the maximum density of a graph in $G\,{\triangledown}\, r$, that is: $$\rdens{r}(G)=\max_{H\in G\,{\triangledown}\, r}\frac{\|H\|}{|H|}.$$ A class $\mathcal C$ has {\em bounded expansion} if $\sup_{G\in\mathcal{C}}\rdens{r}(G)<\infty$ for each value of $r$. Considering shallow minors may, at first glance, look arbitrary. Indeed, one can define as well the notions of shallow topological minors and shallow immersions: \begin{definition} A graph $H$ is a {\em shallow topological minor} at depth $r$ of a graph $G$ if some subgraph of $G$ is isomorphic to a subdivision of $H$ in which every edge has been subdivided at most $2r$ times (see Fig.~\ref{fig:stm}). \begin{figure} \caption{$H$ is a shallow topological minor of $G$ at depth $r$} \label{fig:stm} \end{figure} We denote \cite{POMNI, Sparsity} by $G\,\widetilde{\triangledown}\, r$ the class of the (simple) graphs which are shallow topological minors of $G$ at depth $r$, and we denote by $\trdens{r}(G)$ the maximum density of a graph in $G\,\widetilde{\triangledown}\, r$, that is: $$\trdens{r}(G)=\max_{H\in G\,\widetilde{\triangledown}\, r}\frac{\|H\|}{|H|}.$$ \end{definition} Note that shallow topological minors can be alternatively defined by considering how a graph $H$ can be topologically embedded in a graph $G$: a graph $H$ with vertex set $V(H)=\{a_1,\dots,a_k\}$ is a shallow topological minor of a graph $G$ at depth $r$ if there exist vertices $v_1,\dots,v_k$ in $G$ and a family $\mathcal P$ of paths of $G$ such that \begin{itemize} \item two vertices $a_i$ and $a_j$ are adjacent in $H$ if and only if there is a path in $\mathcal{P}$ linking $v_i$ and $v_j$; \item no vertex $v_i$ is interior to a path in $\mathcal{P}$; \item the paths in $\mathcal{P}$ are internally vertex disjoint; \item every path in $\mathcal{P}$ has length at most $2r+1$.
\end{itemize} We can similarly define the notion of shallow immersion: \begin{definition} A graph $H$ with vertex set $V(H)=\{a_1,\dots,a_k\}$ is a {\em shallow immersion} of a graph $G$ at depth $r$ if there exist vertices $v_1,\dots,v_k$ in $G$ and a family $\mathcal P$ of paths of $G$ such that \begin{itemize} \item two vertices $a_i$ and $a_j$ are adjacent in $H$ if and only if there is a path in $\mathcal{P}$ linking $v_i$ and $v_j$; \item the paths in $\mathcal{P}$ are edge disjoint; \item every path in $\mathcal{P}$ has length at most $2r+1$; \item no vertex of $G$ is internal to more than $r$ paths in $\mathcal{P}$. \end{itemize} We denote \cite{POMNI, Sparsity} by $G\shim r$ the class of the (simple) graphs which are shallow immersions of $G$ at depth $r$, and we denote by $\irdens{r}(G)$ the maximum density of a graph in $G\shim r$, that is: $$\irdens{r}(G)=\max_{H\in G\shim r}\frac{\|H\|}{|H|}.$$ \end{definition} It appears that although minors, topological minors, and immersions behave very differently, their shallow versions are deeply related, as witnessed by the following theorem: \begin{theorem}[\cite{Sparsity}] \label{thm:BE} Let $\mathcal{C}$ be a class of graphs. Then the following are equivalent: \begin{enumerate} \item the class $\mathcal{C}$ has bounded expansion; \item for every integer $r$ it holds $\sup_{G\in\mathcal{C}}\rdens{r}(G)<\infty$; \item for every integer $r$ it holds $\sup_{G\in\mathcal{C}}\trdens{r}(G)<\infty$; \item for every integer $r$ it holds $\sup_{G\in\mathcal{C}}\irdens{r}(G)<\infty$; \item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,{\triangledown}\, r}\chi(H)<\infty$; \item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,\widetilde{\triangledown}\, r}\chi(H)<\infty$; \item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\shim r}\chi(H)<\infty$. \end{enumerate} \end{theorem} In the above theorem, we see that not only do shallow minors, shallow topological minors, and shallow immersions behave similarly, but that the (sparse) graph density $\|G\|/|G|$ and the chromatic number $\chi(G)$ of a graph $G$ are also related. This last relation relies on the following result of Dvo\v{r}\'ak \cite{Dvo2007}. \begin{lemma} \label{lem:degchr} Let $c\geq 4$ be an integer and let $G$ be a graph with average degree $d>56(c-1)^2\frac{\log (c-1)}{\log c-\log (c-1)}$. Then the graph $G$ contains a subgraph $G'$ that is the $1$-subdivision of a graph with chromatic number $c$. \end{lemma} It follows from Theorem~\ref{thm:BE} that the notion of class with bounded expansion is quite robust. Not only can classes with bounded expansion be defined by edge densities and chromatic numbers, but also by virtually all common combinatorial parameters \cite{Sparsity}. If one considers the clique number instead of the density or the chromatic number, then a different type of classes is defined: \begin{definition} A class of graphs $\mathcal{C}$ is {\em somewhere dense} if there exists an integer $p$ such that every clique is a shallow topological minor at depth $p$ of some graph in $\mathcal C$ (in other words, $\mathcal{C}\,\widetilde{\triangledown}\, p$ contains all graphs); the class $\mathcal{C}$ is {\em nowhere dense} if it is not somewhere dense. \end{definition} Similarly to Theorem~\ref{thm:BE}, we have several characterizations of nowhere dense classes. \begin{theorem}[\cite{Sparsity}] \label{thm:ND} Let $\mathcal{C}$ be a class of graphs.
Then the following are equivalent: \begin{enumerate} \item the class $\mathcal{C}$ is nowhere dense; \item for every integer $r$ it holds $\limsup_{G\in\mathcal{C}} \frac{\log\rdens{r}(G)}{\log |G|}=0$; \item for every integer $r$ it holds $\limsup_{G\in\mathcal{C}} \frac{\log\trdens{r}(G)}{\log |G|}=0$; \item for every integer $r$ it holds $\limsup_{G\in\mathcal{C}} \frac{\log\irdens{r}(G)}{\log |G|}=0$; \item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,{\triangledown}\, r}\omega(H)<\infty$; \item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,\widetilde{\triangledown}\, r}\omega(H)<\infty$; \item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\shim r}\omega(H)<\infty$. \end{enumerate} \end{theorem} Note that every class with bounded expansion is nowhere dense. As mentioned in Theorem~\ref{thm:chiBE}, classes with bounded expansion are also characterized by the fact that they allow low tree-depth decompositions with a bounded number of colors. A similar statement holds for nowhere dense classes; precisely, we have the following: \begin{theorem} \label{thm:chiND} Let $\mathcal{C}$ be a class of graphs. Then the following are equivalent: \begin{enumerate} \item for every integer $p$ it holds $\limsup_{G\in\mathcal{C}}\frac{\chi_p(G)}{\log |G|}=0$; \item the class $\mathcal{C}$ is nowhere dense. \end{enumerate} \end{theorem} The direction bounding $\chi_p(G)$ in both Theorems~\ref{thm:chiBE} and~\ref{thm:chiND} follows from the next more precise result: \begin{theorem}[\cite{Sparsity}] \label{thm:chibound} For every integer $p$ there is a polynomial $P_p$ (${\rm deg}\,P_p\approx 2^{2^p}$) such that for every graph $G$ it holds $$ \chi_p(G)\leq P_p(\trdens{2^{p-2}+1}(G)). $$ \end{theorem} Note that the original proof given in \cite{POMNI} gave a slightly weaker bound, and that an alternative proof of this result has been obtained by Zhu \cite{Zhu2008}, in a paper relating low tree-depth decomposition with the generalized coloring numbers introduced by Kierstead and Yang \cite{Kierstead2003}. \section{Low Tree-Depth Decomposition and Covering} In a low tree-depth decomposition of a graph $G$ with $N$ colors and parameter $t$, the subsets of $t$ colors define a disjoint union of clusters that cover the graph, such that each cluster has tree-depth at most $t$, every vertex belongs to at most $\binom{N}{t}$ clusters, and every connected subgraph of order at most $t$ is included in at least one cluster. It is natural to ask whether the condition that such a covering comes from a coloring could be dropped. \begin{theorem} Let $\mathcal C$ be a monotone class. Then $\mathcal C$ has bounded expansion if and only if there exists a function $f$ such that for every integer $t$, every graph $G\in\mathcal C$ has a covering $C_1,\dots, C_k$ of its vertex set such that \begin{itemize} \item each $C_i$ induces a connected subgraph with tree-depth at most $t$; \item every vertex belongs to at most $f(t)$ clusters; \item every connected subgraph of order at most $t$ is included in at least one cluster. \end{itemize} \end{theorem} \begin{proof} One direction is a direct consequence of Theorem~\ref{thm:chiBE}. Conversely, assume that the class $\mathcal C$ does not have bounded expansion. Then there exists an integer $p$ such that for every integer $d$ the class $\mathcal C$ contains the $p$-th subdivision of a graph $H_d$ with average degree at least $d$.
Moreover, it is a standard argument that we can require $H_d$ to be bipartite (as every graph with average degree $2d$ contains a bipartite subgraph with average degree at least $d$). Let $t=2(p+1)$ and let $d=2f(t)+1$. Assume for contradiction that there exist clusters $C_1,\dots,C_k$ as required. Then we can cover $H_d$ by clusters $C_1',\dots,C_k'$ such that each $C_i'$ induces a star (possibly reduced to an edge), every vertex belongs to at most $f(t)$ clusters, and every edge is included in at least one cluster. If an edge $\{u,v\}$ of $H_d$ is included in more than two clusters, it is easily checked that (at least) one of $u$ and $v$ can be safely removed from one of the clusters. Hence we can assume that each edge of $H_d$ is covered exactly once. To each cluster $C_i'$ associate the center of the star induced by $C_i'$ (or an arbitrary vertex of $C_i'$ if $C_i'$ has cardinality $2$) and orient the edges of the star induced by $C_i'$ away from the center. This way, every edge is oriented once and every vertex gets indegree at most $f(t)$. However, summing the indegrees we get $f(t)\geq d/2$, a contradiction. \end{proof} It is natural to ask whether similar statements would hold if we weaken the condition that each cluster has tree-depth at most $t$ while we strengthen the condition that every connected subgraph of order at most $t$ is included in some cluster. Namely, we consider the question whether a similar statement holds if we allow each cluster to have radius at most $2t$ while requiring that every $t$-neighborhood is included in some cluster. In the context of their solution of the model checking problem for nowhere dense classes, Grohe, Kreutzer and Siebertz introduced in \cite{Grohe2013} the notion of $r$-neighborhood cover and proved that nowhere dense classes, and in particular classes with bounded expansion, admit such covers with small maximum degree. Precisely, for $r\in\mathbb{N}$, an {\em $r$-neighborhood cover} $\mathcal{X}$ of a graph $G$ is a set of connected subgraphs of $G$ called {\em clusters}, such that for every vertex $v\in V(G)$ there is some $X\in\mathcal{X}$ with $N_r(v)\subseteq X$. The {\em radius} ${\rm rad}(\mathcal{X})$ of a cover $\mathcal{X}$ is the maximum radius of its clusters. The {\em degree} $d^\mathcal{X}(v)$ of $v$ in $\mathcal{X}$ is the number of clusters that contain $v$. The {\em maximum degree} is $\Delta(\mathcal{X})=\max_{v\in V(G)}d^\mathcal{X}(v)$. For a graph $G$ and $r\in\mathbb{N}$ we define $\tau_r(G)$ as the minimum maximum degree of an $r$-neighborhood cover of radius at most $2r$ of $G$. The following theorem is proved in \cite{Grohe2013}. \begin{theorem} \label{thm:1b} Let $\mathcal{C}$ be a class of graphs with bounded expansion. Then there is a function $f$ such that for all $r\in\mathbb{N}$ and all graphs $G\in\mathcal{C}$, it holds $\tau_r(G)\leq f(r)$. \end{theorem} In order to prove the converse statement, we shall need the following result of K\"uhn and Osthus \cite{K`uhn2004}: \begin{theorem} \label{thm:ko} For every $k$ there exists $d = d(k)$ such that every graph of average degree at least $d$ contains a subgraph of average degree at least $k$ whose girth is at least six. \end{theorem} We are now ready to turn Theorem~\ref{thm:1b} into a characterization theorem of classes with bounded expansion. \begin{theorem} Let $\mathcal{C}$ be an infinite monotone class of graphs.
Then $\mathcal C$ has bounded expansion if and only if, for every integer $r$, it holds $$\sup_{G\in\mathcal{C}} \tau_r(G)<\infty.$$ \end{theorem} \begin{proof} One direction follows from Theorem~\ref{thm:1b}. For the other direction, assume that the class $\mathcal C$ does not have bounded expansion. Then there exists an integer $p$ such that for every integer $n$, $\mathcal C$ contains the $p$-th subdivision of a graph $G_n$ with average degree at least $n$. Let $d\in\mathbb{N}$. According to Theorem~\ref{thm:ko}, there exists $N(d)$ such that every graph with average degree at least $N(d)$ contains a subgraph of girth at least $6$ and average degree at least $d$. We deduce that $\mathcal C$ contains the $p$-th subdivision $H_d'$ of a graph $H_d$ with girth at least $6$ and average degree at least $d$. As in the proof of Theorem~\ref{thm:2}, we get $$\sup_{G\in\mathcal{C}} \tau_{p+1}(G) \geq \sup_{d} \tau_{p+1}(H_d') \geq \sup_{d}\tau_{1}(H_d) \geq \sup_{d}\frac{\|H_d\|}{|H_d|}=\infty.$$ \end{proof} Also, similar statements exist for nowhere dense classes: \begin{theorem} A hereditary class $\mathcal C$ is nowhere dense if and only if there exists a function $f$ such that for every integer $t$ and every $\epsilon>0$, every graph $G\in\mathcal C$ of order $n\geq f(t,\epsilon)$ has a covering $C_1,\dots, C_k$ of its vertex set such that \begin{itemize} \item each $C_i$ induces a connected subgraph with tree-depth at most $t$; \item every vertex belongs to at most $n^\epsilon$ clusters; \item every connected subgraph of order at most $t$ is included in at least one cluster. \end{itemize} \end{theorem} \begin{proof} One direction directly follows from Theorem~\ref{thm:chiND}. For the reverse direction, assume that $\mathcal C$ is not nowhere dense. Then there exists $p$ such that for every $n\in\mathbb{N}$, the class $\mathcal C$ contains a graph $G_n$ having the $p$-th subdivision of $K_n$ as a spanning subgraph. Assume that a covering exists for $t=3p+3$. Then every $p$-subdivided triangle of $K_n$ is included in some cluster. As the $p$-subdivided $K_n$ includes $\binom{n}{3}$ triangles, and as there are at most $n^{1+\epsilon}$ clusters including some principal vertex of the subdivided $K_n$ (which is necessary to include some subdivided triangle), some cluster $C$ includes at least $n^{2-\epsilon}$ triangles. It follows that the subgraph induced by $C$ has a minor $H$ of order at most $n$ with at least $n^{2-\epsilon}$ triangles. However, as tree-depth is minor monotone, the graph $H$ has tree-depth at most $t$, hence is $t$-degenerate, and thus cannot contain more than $\binom{t}{2}n$ triangles. Hence we are led to a contradiction if $n>\binom{t}{2}^{\frac{1}{1-\epsilon}}$. \end{proof} \begin{theorem}[\cite{Grohe2013}] \label{thm:1} Let $\mathcal{C}$ be a nowhere dense class of graphs. Then there is a function $f$ such that for all $r\in\mathbb{N}$ and $\epsilon>0$ and all graphs $G\in\mathcal{C}$ with $n\geq f(r,\epsilon)$ vertices, it holds $\tau_r(G)\leq n^\epsilon$. \end{theorem} In other words, every infinite nowhere dense class of graphs $\mathcal C$ is such that $$\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}=0.$$ We shall deduce from this theorem the following characterization of nowhere dense classes of graphs. \begin{theorem} \label{thm:2} Let $\mathcal{C}$ be an infinite monotone class of graphs.
Then $$\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}$$ is $0$ if $\mathcal C$ is nowhere dense, and at least $1/3$ if $\mathcal{C}$ is somewhere dense. \end{theorem} This theorem will directly follow from Theorem~\ref{thm:1} and the following two lemmas. \begin{lemma} \label{lem:1} Let $G$ be a graph of girth at least $5$. Then it holds $$ \tau_1(G)\geq \nabla_0(G), $$ where $$\nabla_0(G)=\max_{H\subseteq G}\frac{\|H\|}{|H|}.$$ \end{lemma} \begin{proof} Let $\mathcal{X}$ be a $1$-neighborhood cover of radius at most $2$ of $G$ with maximum degree $\tau_1(G)$. Let $X_1,\dots,X_k$ be the clusters of $\mathcal{X}$. For an edge $e=\{u,v\}$, let $i\leq k$ be the minimum integer such that $N_1(u)$ or $N_1(v)$ is included in $X_i$. Let $c_i$ be a center of $X_i$. Then $e$ belongs to a path of length at most $2$ with endpoint $c_i$. We orient $e$ according to the orientation of this path away from $c_i$. Note that by this process we orient every edge, and that every vertex $v$ gets at most one incoming edge per cluster that contains $v$. Hence we constructed an orientation of $G$ with maximum indegree at most $\tau_1(G)$. As the maximum indegree of an orientation of $G$ is at least $\nabla_0(G)$, we get $\tau_1(G)\geq \nabla_0(G)$. \end{proof} We deduce the following: \begin{lemma} \label{lem:2} Let $\mathcal{C}$ be a monotone somewhere dense class of graphs. Then $$\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}\geq \frac{1}{3}.$$ \end{lemma} \begin{proof} As $\mathcal{C}$ is monotone and somewhere dense, there exists an integer $p\geq 0$ such that for every $n\in\mathbb{N}$, the $p$-th subdivision ${\rm Sub}_p(K_n)$ of $K_n$ belongs to $\mathcal C$. For $n\in\mathbb{N}$, let $H_n$ be a graph of girth at least $5$, with order $|H_n|\sim n$ and size $\|H_n\|\sim n^{3/2}$. If $p=0$, then according to Lemma~\ref{lem:1} it holds \begin{align*} \adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}&\geq \limsup_{G\in\mathcal{C}}\frac{\log \tau_1(G)}{\log |G|}\\ &\geq \lim_{n\rightarrow\infty}\frac{\log \nabla_0(H_n)}{\log |H_n|}\\ &\geq \lim_{n\rightarrow\infty}\frac{\log \|H_n\|-\log |H_n|}{\log |H_n|} = \frac{1}{2}. \end{align*} Thus assume $p\geq 1$. Denote by $H_n'$ the $p$-th subdivision of $H_n$, where we identify $V(H_n)$ with a subset of $V(H_n')$ for convenience. Then $|H_n'|\sim p\,n^{3/2}$. Let $\mathcal X=\{X_1,\dots,X_k\}$ be a $(p+1)$-neighborhood cover of radius at most $2(p+1)$ of $H_n'$ with maximum degree $\tau_{p+1}(H_n')$. Let $c_i$ be a center of cluster $X_i$, and let $d_i$ be a vertex of $H_n$ at minimal distance from $c_i$ in $H_n'$. It is easily checked that there exists a cluster $X_i'$ with center $d_i$ and radius $2(p+1)$ such that $X_i\cap V(H_n)=X_i'\cap V(H_n)$. Define $Y_i=X_i'\cap V(H_n)$. As $\mathcal X$ is a $(p+1)$-neighborhood cover of radius at most $2(p+1)$ of $H_n'$ with maximum degree $\tau_{p+1}(H_n')$, the cover $\mathcal Y=\{Y_i\}$ is a $1$-neighborhood cover of radius $2$ of $H_n$ with maximum degree $\tau_{p+1}(H_n')$. Hence $\tau_1(H_n)\leq \tau_{p+1}(H_n')$. Thus it holds \begin{align*} \adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|} &\geq \lim_{n\rightarrow\infty}\frac{\log \tau_{p+1}(H_n')}{\log |H_n'|}\\ &\geq \lim_{n\rightarrow\infty}\frac{\log \tau_{1}(H_n)}{\log |H_n'|}\\ &\geq \lim_{n\rightarrow\infty}\frac{\log \|H_n\|-\log |H_n|}{\log |H_n'|}=\frac{1}{3}.
\end{align*} \end{proof} \section{Algorithmic Applications of Low Tree-Depth Decomposition} \label{sec:algo} Theorem~\ref{thm:chibound} has the following algorithmic version. \begin{theorem}[\cite{Sparsity}] \label{thm:chialgo} There exist polynomials $P_p$ (${\rm deg}\,P_p\approx 2^{2^p}$) and an algorithm that computes, for input graph $G$ and integer $p$, a low tree-depth decomposition of $G$ with parameter $p$ using $N_p(G)$ colors in time $O(N_p(G)\,|G|)$, where $$ \chi_p(G)\leq N_p(G)\leq P_p(\trdens{2^{p-2}+1}(G)). $$ \end{theorem} It is not surprising that low tree-depth decompositions have immediately found several algorithmic applications \cite{Taxi_stoc06, POMNII}. As noticed in \cite{chrobak}, the existence of an orientation of planar graphs with bounded out-degree allows for a planar graph $G$ (once such an orientation has been computed for $G$) an easy $O(1)$ adjacency test, and an enumeration of all the triangles of $G$ in linear time. For a fixed pattern $H$, the problem of checking whether an input graph $G$ has an induced subgraph isomorphic to $H$ is called the {\em subgraph isomorphism problem}. This problem is known to have complexity at most $O(n^{\omega l/3})$, where $l$ is the order of $H$ and where $\omega$ is the exponent of fast square matrix multiplication \cite{NP85} (hence $O(n^{0.792\ l})$ using the fast matrix multiplication algorithm of \cite{coppersmith90}). The particular case of subgraph isomorphism in planar graphs has been studied by Plehn and Voigt \cite{plehn91} and Alon \cite{alon95} with super-linear bounds, and then by Eppstein \cite{Epp-SODA-95,Epp-JGAA-99}, who gave the first linear time algorithm for fixed pattern $H$ and planar $G$. This was extended to graphs with bounded genus in \cite{Epp-Algo-00}. We further generalized this result to classes with bounded expansion \cite{POMNII}: \begin{theorem} \label{thm:count} There is a function $f$ and an algorithm that, for all input graphs $G$ and $H$, counts the number of occurrences of $H$ in $G$ in time $$O\bigl(f(H)\,(N_{|H|}(G))^{|H|}\,|G|\bigr),$$ where $N_p(G)$ is the number of colors computed by the algorithm in Theorem~\ref{thm:chialgo}. In particular, for every fixed bounded expansion class (resp. nowhere dense class) $\mathcal C$ and every fixed pattern $H$, the number of occurrences of $H$ in a graph $G\in\mathcal C$ can be computed in linear time (resp. in time $O(|G|^{1+\epsilon})$ for any fixed $\epsilon>0$). \end{theorem} Theorem~\ref{thm:count} can be extended from the subgraph isomorphism problem to first-order model checking. \begin{theorem}[\cite{DKT2}, see also \cite{Dawar2009}] Let $\mathcal C$ be a class of graphs with bounded expansion, and let $\phi$ be a first-order sentence (on the natural language of graphs). There exists a linear time algorithm that decides whether a graph $G\in\mathcal C$ satisfies $\phi$. \end{theorem} The above theorem relies on low tree-depth decomposition. However, the next result, due to Kazana and Segoufin, is based on the notion of transitive fraternal augmentation, which was introduced in \cite{POMNI} to prove Theorem~\ref{thm:chibound}. \begin{theorem}[\cite{Kazana2013}] \label{thm:Kazana} Let $\mathcal C$ be a class of graphs with bounded expansion and let $\phi$ be a first-order formula. Then, for all $G\in\mathcal C$, we can compute the number $|\phi(G)|$ of satisfying assignments for $\phi$ in $G$ in time $O(|G|)$.
Moreover, the set $\phi(G)$ can be enumerated in lexicographic order with constant time between consecutive outputs, after linear time preprocessing. \end{theorem} Eventually, the existence of an efficient model checking algorithm has been extended to nowhere dense classes by Grohe, Kreutzer, and Siebertz \cite{Grohe2013}, using the notion of $r$-neighborhood cover we already mentioned: \begin{theorem} \label{thm:FOND} For every nowhere dense class $\mathcal C$ and every $\epsilon >0$, every property of graphs definable in first-order logic can be decided in time $O(n^{1+\epsilon})$ on $\mathcal{C}$. \end{theorem} However, it is still open whether a counting version of Theorem~\ref{thm:FOND} (in the spirit of Theorem~\ref{thm:Kazana}) holds. \section{Low Tree-Depth Decomposition and Logarithmic Density of Patterns} We have seen in Section~\ref{sec:algo} that low tree-depth decompositions allow an easy counting of patterns. It appears that they also allow one to prove some ``extremal'' results. A typical problem studied in extremal graph theory is to determine the maximum number of edges ${\rm ex}(n,H)$ a graph on $n$ vertices can contain without containing a subgraph isomorphic to $H$. For a non-bipartite graph $H$, the seminal result of Erd\H os and Stone \cite{erdos1946structure} gives a tight bound: \begin{theorem} $${\rm ex}(n,H)=\left(1-\frac{1}{\chi(H)-1}\right)\binom{n}{2}+o(n^2).$$ \end{theorem} In the case of bipartite graphs, less is known. Let us mention the following result of Alon, Krivelevich and Sudakov \cite{alon2003turan}: \begin{theorem} Let $H$ be a bipartite graph with maximum degree $r$ on one side. Then $${\rm ex}(n,H)= O(n^{2-\frac{1}{r}}).$$ \end{theorem} The special case where $H$ is a subdivision of a complete graph will be of prime interest in the study of nowhere dense classes. Precisely, denoting by ${\rm ex}(n,K_t^{(\leq p)})$ the maximum number of edges a graph on $n$ vertices can contain without containing a subdivision of $K_t$ in which every edge is subdivided at most $p$ times, Jiang \cite{Jiang} proved the following bound: \begin{theorem} For all integers $t,p$ it holds $${\rm ex}(n,K_t^{(\leq p)})=O(n^{1+\frac{10}{p}}).$$ \end{theorem} From this theorem it follows that if a class $\mathcal{C}$ is such that $\limsup_{G\in\mathcal{C}\,\widetilde{\triangledown}\, t}\frac{\log \|G\|}{\log |G|}>1+\epsilon$, then $\mathcal C\,\widetilde{\triangledown}\, \frac{10t}{\epsilon}$ contains graphs with unbounded clique number. This property is a main ingredient in the proof of the following classification ``trichotomy'' theorem. \begin{theorem}[\cite{ND_characterization}] \label{thm:tri} Let $\mathcal C$ be an infinite class of graphs. Then \begin{equation*} \adjustlimits\sup_{t}\limsup_{G\in\mathcal C \,\widetilde{\triangledown}\, t}\frac{\log\|G\|}{\log|G|}\in\{-\infty,0,1,2\}. \end{equation*} Moreover, $\mathcal{C}$ is nowhere dense if and only if $\adjustlimits\sup_{t}\limsup_{G\in\mathcal C \,\widetilde{\triangledown}\, t}\frac{\log\|G\|}{\log|G|}\leq 1$. \end{theorem} Note that, to obtain the property that the logarithmic density of edges is integral, one needs to consider all the classes $\mathcal C\,\widetilde{\triangledown}\, t$. For instance, the class $\mathcal D$ of graphs with no $C_4$ has a bounding logarithmic edge density of $3/2$, which jumps to $2$ when one considers $\mathcal{D}\,\widetilde{\triangledown}\, 1$.
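To see where these two values come from, we recall the following standard observations, included here only as an illustration. By the K\H{o}v\'ari--S\'os--Tur\'an theorem, ${\rm ex}(n,C_4)=O(n^{3/2})$, and incidence graphs of finite projective planes show that this bound is attained, so $$\limsup_{G\in\mathcal D}\frac{\log\|G\|}{\log|G|}=\frac{3}{2}.$$ On the other hand, the $1$-subdivision of $K_n$ contains no $C_4$ (each of its cycles has length at least $6$), hence $K_n\in\mathcal{D}\,\widetilde{\triangledown}\,1$ for every $n$, and therefore $$\limsup_{G\in\mathcal{D}\,\widetilde{\triangledown}\,1}\frac{\log\|G\|}{\log|G|}=2.$$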
Using low tree-depth decomposition, it is possible to extend Theorem~\ref{thm:tri} to other pattern graphs: \begin{theorem}[\cite{Taxi_hom}] \label{thm:countF} For every infinite class of graphs $\mathcal C$ and every graph $F$, $$ \adjustlimits \lim_{i\rightarrow\infty}\limsup_{G\in\mathcal C\,\widetilde{\triangledown}\, i}\frac{\log (\#F\subseteq G)}{\log |G|}\in\{-\infty,0,1,\dots,\alpha(F),|F|\},$$ where $\alpha(F)$ is the stability number of $F$. Moreover, if $F$ has at least one edge, then $\mathcal{C}$ is nowhere dense if and only if $\adjustlimits \lim_{i\rightarrow\infty}\limsup_{G\in\mathcal C\,\widetilde{\triangledown}\, i}\frac{\log (\#F\subseteq G)}{\log |G|}\leq\alpha(F)$. \end{theorem} The main ingredient in the proof of this theorem is the analysis of local configurations, called $(k,F)$-sunflowers (see Fig.~\ref{fig:sunflower}). Precisely, for graphs $F$ and $G$, a {\em $(k,F)$-sunflower} in $G$ is a $(k+1)$-tuple $(C,\mathcal F_1,\dots,\mathcal F_k)$, such that $C\subseteq V(G)$, $\mathcal F_i\subseteq \mathcal P(V(G))$, the sets in $\{C\}\cup\bigcup_i\mathcal F_i$ are pairwise disjoint, and there exists a partition $(K,Y_1,\dots,Y_k)$ of $V(F)$ so that \begin{itemize} \item $\forall i\neq j,\ \omega(Y_i,Y_j)=\emptyset$, \item $G[C]\approx F[K]$, \item $\forall X_i\in\mathcal F_i, G[X_i]\approx F[Y_i]$, \item $\forall (X_1,\dots,X_k)\in\mathcal F_1\times\dots\times\mathcal F_k$, the subgraph of $G$ induced by $C\cup X_1\cup\dots\cup X_k$ is isomorphic to $F$. \end{itemize} \begin{figure} \caption{A $(3,{\rm Petersen})$-sunflower.} \label{fig:sunflower} \end{figure} The following stepping up lemma gives some indication on how low tree-depth decomposition is related to the proof of Theorem~\ref{thm:countF}: \begin{lemma}[\cite{Taxi_hom}] There exists a function $\tau$ such that for all integers $p,k$, every graph $F$ of order $p$, and every $0<\epsilon<1$, the following property holds: Every graph $G$ such that $(\#F\subseteq G)\,> |G|^{k+\epsilon}$ contains a $({k+1},F)$-sunflower $(C,\mathcal F_1,\dots,\mathcal F_{k+1})$ with $$ \min_i |\mathcal F_i|\geq \left(\frac{|G|}{\binom{\chi_p(G)}{p}^{1/\epsilon}}\right)^{\tau(\epsilon,p)}. $$ In particular, $G$ contains a subgraph $G'$ such that \begin{align*} &|G'|\geq (k+1)\left(\frac{{|G|}}{{\binom{\chi_p(G)}{p}^{1/\epsilon}}}\right)^{\tau(\epsilon,p)}\\ \text{and}\qquad&(\#F\subseteq G')\geq \left(\frac{|G'|-|F|}{k+1}\right)^{{k+1}}. \end{align*} \end{lemma} \end{document}
\begin{document} \title{Affine nil-Hecke algebras and braided differential structure on affine Weyl groups} \date{} \author{Anatol N. Kirillov and Toshiaki Maeno} \maketitle \begin{abstract} We construct a model of the affine nil-Hecke algebra as a subalgebra of the Nichols-Woronowicz algebra associated to a Yetter-Drinfeld module over the affine Weyl group. We also discuss the Peterson isomorphism between the homology of the affine Grassmannian and the small quantum cohomology ring of the flag variety in terms of the braided differential calculus. \end{abstract} \section*{Introduction} The cohomology ring of the flag variety is a fundamental object of research in the study of the Schubert calculus. Fomin and the first author \cite{FK} gave a combinatorial model of the cohomology ring $H^*(Fl_n)$ of the flag variety of type $A$ as a commutative subalgebra of a quadratic algebra ${\cal E}_n$. It is remarkable that the algebra ${\cal E}_n$ has a natural quantum deformation ${\cal E}_n^q$ so that ${\cal E}_n^q$ contains the quantum cohomology ring $QH^*(Fl_n)$ as a commutative subalgebra. It has been observed by Milinski and Schneider \cite{MS} and by Majid \cite{Maj2} that the defining relations of the Fomin-Kirillov quadratic algebra ${\cal E}_n$ are understandable from the viewpoint of a certain kind of braided Hopf algebra called the Nichols-Woronowicz algebra. Bazlov \cite{Ba} constructed a model of the coinvariant algebra of a finite Coxeter group as a commutative subalgebra of the Nichols-Woronowicz algebra. At the same time, the nil-Coxeter algebra, which is dual to the coinvariant algebra, is also realized as a subalgebra of the Nichols-Woronowicz algebra. The braided analogue of the symmetric or exterior algebra was introduced by Woronowicz \cite{Wo} for the study of the differential forms on the quantum groups. For a given braided vector space $M$ over a field $K$ of characteristic zero, the braided analogue ${\cal{B}}(M)$ of the symmetric algebra of $M$ is defined to be the quotient of the free tensor algebra of $M$ by the kernel of the braided symmetrizer. It is known that the algebra ${\cal{B}}(M)$ is a braided graded Hopf algebra characterized by the following conditions: \\ (1) ${\cal{B}}^0(M)=K,$ \\ (2) ${\cal{B}}^1(M)=M=\{ \textrm{primitive elements in ${\cal{B}}(M)$} \},$ \\ (3) ${\cal{B}}^1(M)$ generates ${\cal{B}}(M)$ as an algebra. \\ The Hopf algebra characterized by the above conditions has been studied by Nichols \cite{Ni} and named the Nichols algebra by Andruskiewitsch and Schneider \cite{AS}. The study of the algebra ${\cal{B}}(M)$ from the viewpoint of the free braided differential calculus was developed in \cite{Maj1}. In this paper we will simply call ${\cal{B}}(M)$ the Nichols-Woronowicz algebra, following \cite{Ba}. The aim of this paper is to construct the nil-Hecke algebra as a subalgebra of an extension of the Nichols-Woronowicz algebra ${\cal B}_{\rm af}$ associated to a Yetter-Drinfeld module over the affine Weyl group. Our construction is analogous to the one in \cite[Section 6]{Ba}. It is known that the affine Grassmannian ${\widehat{\rm Gr}}:=G({\mathbb{C}}((t)))/G({\mathbb{C}}[[t]])$ of a semisimple Lie group $G$ is homotopic to the loop group $\Omega K$ of the maximal compact subgroup $K\subset G.$ The homology $H_*({\widehat{\rm Gr}})\cong H_*(\Omega K)$ carries an associative algebra structure induced by the Pontryagin product. The structure of the Pontryagin ring $H_*(\Omega K)$ has been determined by Bott \cite{Bo}.
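For instance, in the simplest case $G={\rm SL}_2({\mathbb{C}})$, $K={\rm SU}(2)\cong S^3$, the Pontryagin ring is a polynomial ring on a single generator; we recall this standard fact only as an illustration: \[ H_*(\Omega\, {\rm SU}(2);{\mathbb{Q}})\cong {\mathbb{Q}}[x], \qquad \deg x =2 . \]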
The Schubert calculus for Kac-Moody flag varieties was studied by Kostant and Kumar \cite{KK} by using the nil-Hecke algebra. Peterson \cite{Pe} stated that the torus-equivariant homology $H^T_*({\widehat{\rm Gr}})$ of the affine Grassmannian is isomorphic to the so-called Peterson subalgebra of the affine nil-Hecke algebra. So our construction gives a model of $H^T_*({\widehat{\rm Gr}})$ as a commutative subalgebra of the Nichols-Woronowicz algebra ${\cal B}_{\rm af}(S),$ see Theorem 3.1. Peterson \cite{Pe} also pointed out that the Pontryagin ring $H^T_*({\widehat{\rm Gr}})$ is isomorphic, as an algebra, to the small quantum cohomology ring $QH^*_T(G/B)$ of the corresponding flag variety $G/B$ after a suitable localization. The affine Bruhat operator acting on $H^T_*({\widehat{\rm Gr}})$ introduced by Lam and Shimozono \cite{LS} gives an explicit comparison between the multiplicative structure of $H^T_*({\widehat{\rm Gr}})$ and that of $QH^*_T(G/B).$ In this paper, we will realize the affine Bruhat operator as a braided differential operator acting on our algebra ${\cal B}_{\rm af}.$ \subsection*{Acknowledgement.} The second author is supported by Grant-in-Aid for Scientific Research. \section{Affine nil-Hecke algebra} Let $G$ be a simply-connected semisimple complex Lie group and $W$ its Weyl group. Denote by $\Delta$ the set of roots. We fix the set $\Delta_+$ of the positive roots by choosing a set of simple roots $\alpha_1,\ldots,\alpha_r.$ The Weyl group $W$ acts on the weight lattice $P$ and the coroot lattice $Q^{\vee}$ of $G.$ The affine Weyl group ${W_{\rm aff}}$ is generated by the affine reflections $s_{\alpha, k},$ $\alpha\in \Delta,$ $k\in {\mathbb{Z}},$ with respect to the affine hyperplanes $H_{\alpha,k}:= \{ \lambda \in P\otimes {\mathbb{R}} \; | \; \langle\lambda ,\alpha^{\vee}\rangle=k \}.$ The affine Weyl group is the semidirect product of $W$ and $Q^{\vee},$ i.e., ${W_{\rm aff}}=W \ltimes Q^{\vee}.$ The affine Weyl group ${W_{\rm aff}}$ is generated by the simple reflections $s_1:=s_{\alpha_1,0},\ldots,s_r:=s_{\alpha_r,0}$ and $s_0:=s_{\theta,1},$ where $\theta=-\alpha_0$ is the highest root. The affine Weyl group ${W_{\rm aff}}$ has the following presentation as a Coxeter group: \[ {W_{\rm aff}} = \langle s_0,\ldots,s_r \; | \; s_0^2=\cdots =s_r^2=1, (s_is_j)^{m_{ij}}=1 \rangle .
\] \begin{dfn} {\rm The affine nil-Coxeter algebra ${\mathbb{A}}_0$ is the associative algebra generated by $\tau_0,\ldots,\tau_r$ subject to the relations \[ \tau_0^2=\cdots=\tau_r^2=0, \;\; (\tau_i\tau_j)^{[m_{ij}/2]}\tau_i^{\nu_{ij}}=(\tau_j\tau_i)^{[m_{ij}/2]}\tau_j^{\nu_{ij}} , \] where $\nu_{ij}:=m_{ij}-2[m_{ij}/2].$} \end{dfn} For a reduced expression $x=s_{i_1}\cdots s_{i_l}$ of an element $x\in {W_{\rm aff}},$ the element $\tau_x:=\tau_{i_1}\cdots \tau_{i_l}\in {\mathbb{A}}_0$ is independent of the choice of the reduced expression of $x.$ It is known that $\{ \tau_x \}_{x\in {W_{\rm aff}}}$ form a linear basis of ${\mathbb{A}}_0.$ The nil-Coxeter algebra ${\mathbb{A}}_0$ acts on $S:={\rm{Sym}} P_{{\mathbb{Q}}}$ via \[ \tau_0(f):=\partial_{\alpha_0}(f)=-(f-s_{\theta,0}f) / \theta , \] \[ \tau_i(f):= \partial_{\alpha_i}(f)=(f-s_{\alpha_i,0}f)/\alpha_i, \; \; i=1,\ldots, r, \] for $f\in S.$ \begin{dfn} {\rm (\cite{KK}) The nil-Hecke algebra ${\mathbb{A}}$ is defined to be the cross product ${\mathbb{A}}_0 \ltimes S,$ where the cross relation is given by \[ \tau_i f= \partial_{\alpha_i}(f)+s_i(f)\tau_i , \;\; f\in S, \; i=1,\ldots, r. \] } \end{dfn} Here, we summarize some known results on the homology of the affine Grassmannian. The affine Grassmannian ${\widehat{\rm Gr}}:=G({\mathbb{C}}((t)))/G({\mathbb{C}}[[t]])$ is homotopic to the loop group $\Omega K$ of the maximal compact subgroup $K\subset G.$ Let $T\subset G$ be the maximal torus. An associative algebra structure on the $T$-equivariant homology group $H^T_*({\widehat{\rm Gr}})\cong H^T_*(\Omega K)$ is induced from the group multiplication \[ \Omega K \times \Omega K \rightarrow \Omega K. \] It is known that the algebra $H^T_*({\widehat{\rm Gr}})$ is commutative. The algebra $H^T_*(\Omega K)$ is called the Pontryagin ring. We regard the $T$-equivariant homology $H_*^T({\widehat{\rm Gr}})$ as an $S$-algebra by identifying $S=H_T^*(pt).$ The diagonal embedding \[ \Omega K \rightarrow \Omega K \times \Omega K \] induces a coproduct on $H_*^T({\widehat{\rm Gr}}).$ \begin{prop} {\rm (\cite{Pe})} The $T$-equivariant homology $H_*^T({\widehat{\rm Gr}})$ is isomorphic to the centralizer $Z_{{\mathbb{A}}}(S)$ of $S$ in ${\mathbb{A}}$ as Hopf algebras. \end{prop} \section{Nichols-Woronowicz algebra for affine Weyl groups} We briefly recall the construction of the Nichols-Woronowicz algebra associated to a braided vector space. Let $M$ be a vector space over a field of characteristic zero and $\psi:M^{\otimes 2} \rightarrow M^{\otimes 2}$ be a fixed linear endomorphism satisfying the braid relations $\psi_{i}\psi_{i+1}\psi_{i}=\psi_{i+1}\psi_{i}\psi_{i+1},$ where $\psi_i:M^{\otimes n} \rightarrow M^{\otimes n}$ is the linear endomorphism obtained by applying $\psi$ to the $i$-th and $(i+1)$-st components. Denote by $s_i$ the simple transposition $(i,i+1)\in S_n.$ For any reduced expression $w=s_{i_1}\cdots s_{i_l}\in S_n,$ the endomorphism $\Psi_w=\psi_{i_1}\cdots \psi_{i_l}:M^{\otimes n} \rightarrow M^{\otimes n}$ is well-defined. The Woronowicz symmetrizer \cite{Wo} is given by $\sigma_n := \sum_{w\in S_n} \Psi_w .$ \begin{dfn} {\rm (\cite{Wo})} The Nichols-Woronowicz algebra associated to a braided vector space $M$ is defined by \[ {\cal{B}}(M):= \bigoplus_{n\geq 0} M^{\otimes n}/{\rm Ker}(\sigma_n), \] where $\sigma_n : M^{\otimes n} \rightarrow M^{\otimes n}$ is the Woronowicz symmetrizer.
\end{dfn} \begin{dfn} A vector space $M$ is called a Yetter-Drinfeld module over a group $\Gamma$ if the following conditions are satisfied: \\ $(1)$ $M$ is a $\Gamma$-module, \\ $(2)$ $M$ is $\Gamma$-graded, i.e. $M=\bigoplus_{g\in \Gamma}M_g,$ where $M_g$ is a linear subspace of $M,$ \\ $(3)$ for $h\in \Gamma$ and $v\in M_g,$ $h(v)\in M_{hgh^{-1}}.$ \end{dfn} A Yetter-Drinfeld module $M$ over a group $\Gamma$ is naturally braided with the braiding $\psi:M^{\otimes 2} \rightarrow M^{\otimes 2}$ defined by $\psi(a \otimes b)= g(b) \otimes a$ for $a \in M_g$ and $b\in M.$ In the following we are interested in a Yetter-Drinfeld module over the affine Weyl group ${W_{\rm aff}}.$ Denote by $t_{\lambda} \in {W_{\rm aff}}$ the translation by $\lambda \in Q^{\vee}.$ We define a Yetter-Drinfeld module ${V_{\rm aff}}$ over ${W_{\rm aff}}$ by \[ {V_{\rm aff}} := \bigoplus_{\alpha\in \Delta, k\in {\mathbb{Z}}} {\mathbb{Q}} \cdot [\alpha,k]/ ( [\alpha,k]+[-\alpha,-k]) , \] where ${W_{\rm aff}}$ acts on ${V_{\rm aff}}$ by \[ w [\alpha,k] := [w(\alpha),k], \;\; w\in W, \;\;\; t_{\lambda}[\alpha,k] := [\alpha,k+(\alpha,\lambda)] , \;\; \lambda \in Q^{\vee}. \] The ${W_{\rm aff}}$-grading is given by $\deg_{{W_{\rm aff}}}([\alpha,k]):=s_{\alpha,k}.$ Then it is easy to check the conditions in Definition 2.1. Now we have the Nichols-Woronowicz algebra ${\cal B}_{\rm af}:={\cal{B}}({V_{\rm aff}})$ associated to the Yetter-Drinfeld module ${V_{\rm aff}}.$ Let ${\cal{B}}_W$ be the Nichols-Woronowicz algebra associated to the Yetter-Drinfeld module $V=\oplus_{\alpha \in \Delta} {\mathbb{Q}} \cdot [\alpha]/ ([\alpha]+[-\alpha])$ as in \cite[Section 4]{Ba}. \begin{lem} $(1)$ We have a surjective homomorphism $\pi:{\cal B}_{\rm af} \rightarrow {\cal{B}}_W,$ $\pi([ \alpha ,k ]):= [ \alpha ] .$ \\ $(2)$ The algebra ${\cal B}_{\rm af}$ acts on $S$ via $[\alpha,k] f= \partial_{\alpha}(f)$ for all $k\in {\mathbb{Z}}.$ \end{lem} {\it Proof.} $(1)$ Denote by $\psi$ and $\bar{\psi}$ the braidings on ${V_{\rm aff}}$ and $V$ respectively. Let $\tilde{\pi}:\oplus_n {V_{\rm aff}}^{\otimes n} \rightarrow \oplus_n V^{\otimes n}$ be the lift of $\pi.$ Since \[ \psi([\alpha,k] \otimes [\beta,l])= [s_{\alpha}(\beta), l-\langle \alpha^{\vee},\beta \rangle k] \otimes [ \alpha, k ] \] and $\bar{\psi}([\alpha] \otimes [\beta])= [s_{\alpha}(\beta)]\otimes [\alpha],$ the map $\tilde{\pi}$ sends the kernel of the braided symmetrizer $\sigma_n$ of ${V_{\rm aff}}^{\otimes n}$ to that of $V^{\otimes n}.$ \\ $(2)$ In \cite{Ba}, it is shown that the algebra ${\cal{B}}_W$ acts on the coinvariant algebra $S_W$ via $[\alpha] \mapsto \partial_{\alpha}.$ Let $S^W$ be the $W$-invariant subalgebra of $S.$ Then we have the decomposition $S=S^W \otimes S_W.$ The operator $\partial_{\alpha}$ extends $S^W$-linearly to an operator on $S.$ Hence ${\cal{B}}_W$ acts on $S.$ We have seen the existence of the natural projection $\pi$ from ${\cal B}_{\rm af}$ to ${\cal{B}}_W,$ so $\pi$ induces the action of ${\cal B}_{\rm af}$ on $S.$ \\ Let us define the extension ${\cal B}_{\rm af} (S)={\cal B}_{\rm af} \ltimes S$ by the cross relation \[ [\alpha,k] f = \partial_{\alpha}f +s_{\alpha,0}(f)[\alpha,k] , \;\; [\alpha,k] \in {V_{\rm aff}}, f\in S.
\] \begin{prop} There exists a homomorphism $\varphi : {\mathbb{A}} \rightarrow {\cal B}_{\rm af} (S)$ given by $\tau_0 \mapsto [\alpha_0,-1],$ $\tau_i \mapsto [\alpha_i,0],$ $i=1,\ldots,r,$ and $f \mapsto f,$ $f\in S.$ \end{prop} {\it Proof.} It is enough to check the Coxeter relations among $\varphi(\tau_0),\ldots ,\varphi(\tau_r)$ in ${\cal B}_{\rm af}(S)$ based on the classification of the affine root systems. This is done by a direct computation of the symmetrizer for the subsystems of rank 2, in a similar manner to \cite[Section 6]{Ba}. \begin{ex} {\rm Here we list the Coxeter relations in ${\cal B}_{\rm af}$ involving $[\theta,1]=-[\alpha_0,-1]$ for the root systems of rank 2. Let $(\varepsilon_1,\ldots,\varepsilon_r)$ be an orthonormal basis of the $r$-dimensional Euclidean space. Put $[ij,k]:=[\varepsilon_i-\varepsilon_j,k],$ $[\overline{ij},k]:=[\varepsilon_i+\varepsilon_j,k],$ $[i,k]:=[\varepsilon_i,k]$ and $[\alpha]:=[\alpha,0].$ \\ (i) (Type $A_2$ case) \[ [13,1][23][13,1]+[23][13,1][23]=0, \;\; [13,1][12][13,1]+[12][13,1][12]=0 \] (ii) (Type $B_2$ case) \[ [\overline{12},1][2][\overline{12},1][2]=[2][\overline{12},1][2][\overline{12},1] \] (iii) (Type $G_2$ case) Let $\alpha_1,\alpha_2$ be the simple roots of the $G_2$-system. We assume that $\alpha_1$ is a short root and $\alpha_2$ is a long one. Then we have $\theta=3\alpha_1+2\alpha_2.$ \[ [\theta,1][\alpha_2][\theta,1]+[\alpha_2][\theta,1][\alpha_2]=0. \] } \end{ex} \section{Model of nil-Hecke algebra} The connected components of $P\otimes {\mathbb{R}} \setminus \cup_{\alpha\in \Delta_+,k\in {\mathbb{Z}}}H_{\alpha ,k}$ are called alcoves. The affine Weyl group ${W_{\rm aff}}$ acts simply transitively on the set of alcoves. \begin{dfn}{\rm (\cite{LP})} $(1)$ A sequence $(A_0,\ldots,A_l)$ of alcoves $A_i$ is called an alcove path if $A_i$ and $A_{i+1}$ have a common wall and $A_i \not= A_{i+1}.$ \\ $(2)$ An alcove path $(A_0,\ldots,A_l)$ is called reduced if the length $l$ of the path is minimal among all alcove paths connecting $A_0$ and $A_l.$ \\ $(3)$ We use the symbol $A_i \stackrel{\beta,k}{\longrightarrow} A_{i+1}$ when $A_i$ and $A_{i+1}$ have a common wall of the form $H_{\beta,k}$ and the direction of the root $\beta$ is from $A_i$ to $A_{i+1}.$ \end{dfn} The alcove $A^{\circ}$ defined by the inequalities $\langle \lambda,\alpha_0^{\vee}\rangle \geq -1$ and $\langle \lambda,\alpha_i^{\vee}\rangle \geq 0,$ $i=1,\ldots,r,$ is called the fundamental alcove. For a reduced alcove path $\gamma:A_0=A^{\circ} \stackrel{\beta_1,k_1}{\longrightarrow} \cdots \stackrel{\beta_l,k_l}{\longrightarrow} A_l,$ we define an element $[\gamma]\in {\cal B}_{\rm af}$ by \[ [ \gamma ] := [-\beta_1,-k_1] \cdots [-\beta_l,-k_l] . \] When $A_l=x^{-1}(A^{\circ})$ for $x\in {W_{\rm aff}},$ we will also use the symbol $[x]$ instead of $[\gamma],$ since $[\gamma]$ depends only on $x$ thanks to the Yang-Baxter relation. For a braided vector space $M,$ it is known that an element $a\in M$ acts on ${\cal{B}}(M^*)$ as a braided differential operator (see \cite{Ba}, \cite{Maj1}). Let us identify $M^*$ with $M$ via the ${W_{\rm aff}}$-invariant inner product $(\; , \; )$ given by \[ ([\alpha,k],[\beta ,l])= \left\{ \begin{array}{cc} 1, & \textrm{if $\alpha=\beta$ and $k=l,$} \\ 0, & \textrm{otherwise,} \end{array} \right.
\] for $\alpha,\beta \in \Delta_+,$ $k,l\in {\mathbb{Z}}.$ In our case, the differential operator ${\overleftarrow{D}}_{[\alpha,k]},$ $[\alpha,k]\in {V_{\rm aff}},$ acting from the right is determined by the following characterization: \\ (0) $(c){\overleftarrow{D}}_{[\alpha,k]}=0,$ $c\in {\mathbb{Q}},$ \\ (1) $([\alpha,k]){\overleftarrow{D}}_{[\beta ,l]} = ([\alpha,k],[\beta ,l]),$ \\ (2) $(FG){\overleftarrow{D}}_{[\alpha,k]}=F(G{\overleftarrow{D}}_{[\alpha,k]})+(F{\overleftarrow{D}}_{[\alpha,k]})s_{\alpha,k}(G),$ \\ for $\alpha,\beta \in \Delta,$ $k,l\in {\mathbb{Z}},$ $F,G\in {\cal B}_{\rm af}.$ The operator ${\overleftarrow{D}}_{[\alpha,k]}$ extends to an operator acting on ${\cal B}_{\rm af}(S)$ by the commutation relation $f\cdot{\overleftarrow{D}}_{[\alpha,k]}={\overleftarrow{D}}_{[\alpha,k]}\cdot s_{\alpha,k}(f),$ $f\in S.$ We use the abbreviations ${\overleftarrow{D}}_0:={\overleftarrow{D}}_{[\alpha_0,-1]},$ ${\overleftarrow{D}}_i:={\overleftarrow{D}}_{[\alpha_i,0]},$ $i=1,\ldots,r.$ For $x\in {W_{\rm aff}},$ fix a reduced decomposition $x=s_{i_1}\cdots s_{i_l}.$ We define the corresponding braided differential operator ${\overleftarrow{D}}_x$ acting on ${\cal B}_{\rm af}$ by the formula \[ {\overleftarrow{D}}_x := {\overleftarrow{D}}_{i_l}\cdots {\overleftarrow{D}}_{i_1} , \] which is also independent of the choice of the reduced decomposition of $x$ because of the braid relations. \begin{lem} For $x\in {W_{\rm aff}},$ take a reduced alcove path $\gamma$ from the fundamental alcove $A^{\circ}$ to $x^{-1}(A^{\circ}).$ Then, we have $([\gamma]){\overleftarrow{D}}_x=1.$ \end{lem} {\it Proof.} Let us take a reduced path \[ \gamma: A_0=A^{\circ} \stackrel{\beta_1,k_1}{\longrightarrow} A_1 \stackrel{\beta_2,k_2}{\longrightarrow} \cdots \stackrel{\beta_l,k_l}{\longrightarrow} A_l=x^{-1}(A^{\circ}). \] Define a sequence $\sigma_1,\ldots,\sigma_l \in {W_{\rm aff}}$ inductively by \[ \sigma_1:=s_{\beta_1,k_1}, \; \sigma_{j+1}:= \sigma_j s_{\beta_{j+1},k_{j+1}} \sigma_j. \] Then it is easy to see that $\sigma_{\nu}(A_j)\not= A^{\circ},$ $1\leq \nu \leq j-1,$ $\sigma_j(A_j)=A^{\circ},$ and that the walls $\sigma_j(H_{\beta_{j+1},k_{j+1}})$ correspond to simple roots. Hence, $\sigma_1,\ldots,\sigma_l$ are simple reflections. This sequence gives a reduced expression $x=\sigma_l \cdots \sigma_1.$ Put $\sigma_j=s_{\alpha_{i_j}}.$ Since the direction of $\beta_{j+1}$ is chosen to be from $A_j$ to $A_{j+1},$ we have \[ [\gamma] {\overleftarrow{D}}_x= ([\beta_1,k_1]){\overleftarrow{D}}_{i_1}\cdot (\sigma_1([\beta_2,k_2])){\overleftarrow{D}}_{i_2} \cdots (\sigma_{l-1}([\beta_l,k_l])){\overleftarrow{D}}_{i_l} =1.
\] \begin{ex} ($A_2$-case) The standard realization is given by $\alpha_1=\varepsilon_1-\varepsilon_2,$ $\alpha_2=\varepsilon_2-\varepsilon_3,$ $\alpha_0=\varepsilon_3-\varepsilon_1.$ Consider the translation $t_{\alpha_1}$ by the simple root $\alpha_1.$ If we take a reduced path \[ \gamma: A_0=A^{\circ} \stackrel{-\alpha_2,0}{\longrightarrow} A_1 \stackrel{\alpha_1,1}{\longrightarrow} A_2 \stackrel{-\alpha_0,1}{\longrightarrow} A_3 \stackrel{\alpha_1,2}{\longrightarrow} A_4=t_{\alpha_1}(A^{\circ}) , \] then we have $[\gamma]=[23][21,-1][31,-1][21,-2].$ On the other hand, the differential operator corresponding to $t_{-\alpha_1}$ is given by ${\overleftarrow{D}}_2{\overleftarrow{D}}_0{\overleftarrow{D}}_2{\overleftarrow{D}}_1,$ where ${\overleftarrow{D}}_0={\overleftarrow{D}}_{[31,-1]},{\overleftarrow{D}}_1={\overleftarrow{D}}_{[12]}, {\overleftarrow{D}}_2={\overleftarrow{D}}_{[23]}.$ It is easy to check by direct computation that \[ ([23][21,-1][31,-1][12,2]){\overleftarrow{D}}_2{\overleftarrow{D}}_0{\overleftarrow{D}}_2{\overleftarrow{D}}_1=1 . \] \end{ex} \begin{thm} The algebra homomorphism $\varphi: {\mathbb{A}} \rightarrow {\cal B}_{\rm af} (S)$ is injective. \end{thm} {\it Proof.} The nil-Hecke algebra ${\mathbb{A}}$ is also ${W_{\rm aff}}$-graded. Since the homomorphism $\varphi:{\mathbb{A}} \rightarrow {\cal B}_{\rm af} (S)$ preserves the ${W_{\rm aff}}$-grading, it is enough to check that $\varphi(\tau_x) \not=0$ for $x\in {W_{\rm aff}}$ in order to show the injectivity of $\varphi.$ On the other hand, ${\cal B}_{\rm af}^{op}$ acts on ${\cal B}_{\rm af}$ itself via the braided differential operators. Let $\gamma$ be a reduced alcove path from $A^{\circ}$ to $x^{-1}(A^{\circ}).$ Then we have $([\gamma]){\overleftarrow{D}}_x=1$ from Lemma 3.1. This shows ${\overleftarrow{D}}_x\not=0,$ so $\varphi(\tau_x)\not=0.$ This theorem implies the following (see Proposition 1.1): \begin{cor} The $T$-equivariant Pontryagin ring $H_*^T({\widehat{\rm Gr}})$ is a subalgebra of ${\cal B}_{\rm af}(S).$ \end{cor} By taking the non-equivariant limit, we also have: \begin{cor} The Pontryagin ring $H_*({\widehat{\rm Gr}})$ is a subalgebra of ${\cal B}_{\rm af}.$ \end{cor} \section{Affine Bruhat operators} We denote by $x \rightarrow y$ the cover relation in the Bruhat ordering of ${W_{\rm aff}},$ i.e. $y=xs_{\alpha,k}$ for some $\alpha \in \Delta$ and $k\in {\mathbb{Z}},$ and $l(y)=l(x)+1.$ We will use some terminology from \cite{LS}. Denote by $\tilde{Q}$ the set of antidominant elements in $Q^{\vee}.$ An element $x\in {W_{\rm aff}}$ can be expressed uniquely as a product of the form $x=wt_{v\lambda}\in {W_{\rm aff}}$ with $v,w \in W,$ $\lambda \in \tilde{Q}.$ We say that $x=wt_{v\lambda}$ belongs to the ``$v$-chamber''. An element $\lambda \in \tilde{Q}$ is called superregular when $|\langle \lambda, \alpha \rangle | > 2(\# W) +2$ for all $\alpha \in \Delta_+.$ If $\lambda\in \tilde{Q}$ is superregular, then $x=wt_{v\lambda}$ is called superregular.
The subset of superregular elements in ${W_{\rm aff}}$ is denoted by ${W_{\rm aff}}^{\rm sreg}.$ We say that a property holds for sufficiently superregular elements ${W_{\rm aff}}^{\rm ssreg}\subset {W_{\rm aff}}$ if there is a positive constant $k\in {\mathbb{Z}}$ such that the property holds for all $x\in {W_{\rm aff}}^{\rm sreg}$ satisfying the following condition: \[ y \in {W_{\rm aff}},\; y<x,\; \textrm{and} \; l(x)-l(y)<k \Rightarrow y\in {W_{\rm aff}}^{\rm sreg}. \] The meaning of ${W_{\rm aff}}^{\rm ssreg}$ depends on the context, see \cite[Section 4]{LS} for the details. For $v\in W,$ consider the $S$-submodule $M^{\rm ssreg}_v$ in ${\cal{B}}af$ generated by the sufficiently superregular elements $[x]$ where $x$ belongs to the $v$-chamber. \begin{lem} Let $x\in {W_{\rm aff}}.$ For $\alpha \in {\overleftarrow{D}}elta$ and $k\in {\mathbb{Z}}_{> 0},$ we have \[ [x] {\overleftarrow{D}}_{[\alpha,k]} = \left\{ \begin{array}{cc} [xs_{\alpha,k}] , & \textrm{if $l(x)=l(xs_{\alpha,k})+1,$} \\ 0, & \textrm{otherwise.} {\cal{E}}nd{array} \right. \] {\cal{E}}nd{lem} {\it Proof.} The fundamental alcove $A^{\circ}$ is contained in the region $\{ \lambda \in P\otimes {\mathbb{R}} | \langle \lambda,\alpha^{{\cal{V}}ee} \rangle < k \}$ for $\alpha \in {\overleftarrow{D}}elta$ and $k\in {\mathbb{Z}}_{> 0}.$ Let us choose any reduced path $\gamma:A_0 \stackrel{\beta_1,k_1}{\longrightarrow} \cdots \stackrel{\beta_l,k_l}{\longrightarrow} A_l=x^{-1}(A^{\circ})$ with $k_i \geq 0.$ If $l(x)>l(xs_{\alpha,k}),$ then $(\beta_i,k_i)=(\alpha,k)$ for some $i.$ Take the largest $i$ and consider the path \[ \gamma':A_0 \stackrel{\beta_1,k_1}{\longrightarrow} \cdots \stackrel{\beta_{i-1},k_{i-1}}{\longrightarrow} A_{i-1} \stackrel{\beta'_{i+1},k'_{i+1}}{\longrightarrow} s_{\alpha,k}(A_{i+1}) \stackrel{\beta'_{i+2},k'_{i+2}}{\longrightarrow} \cdots \] \[ \cdots \stackrel{\beta'_l,k'_l}{\longrightarrow} s_{\alpha,k}(A_l)= s_{\alpha,k}x^{-1}(A^{\circ})=(xs_{\alpha,k})^{-1}(A^{\circ}), \] where $(\beta'_j,k'_j)$ is determined by the condition $s_{\alpha,k}(H_{\beta_j,k_j})=H_{\beta'_j,k'_j}.$ If $l(x)=l(xs_{\alpha,k})+1,$ then the path $\gamma'$ is a reduced path. In this case, we have $[x] {\overleftarrow{D}}_{[\alpha,k]}=[xs_{\alpha,k}].$ If $l(x) > l(xs_{\alpha,k})+1,$ the above path $\gamma'$ is not reduced and $[x] {\overleftarrow{D}}_{[\alpha,k]}=0.$ When $l(x)<l(xs_{\alpha,k}),$ the element $[\alpha,k]$ does not appear in the monomial $[\gamma],$ so we have $[x] {\overleftarrow{D}}_{[\alpha,k]}=0.$ \begin{prop} {\rm (\cite[Proposition 4.1]{LS})} Let $\lambda \in \tilde{Q}$ be superregular. For $x=wt_{v\lambda}$ and $y=xs_{v\alpha,-n}$ with $v,w\in W,$ we have the cover relation $y \rightarrow x$ if and only if one of the following conditions holds: \\ $(1)$ $l(wv)=l(wvs_{\alpha})-1$ and $n=\langle \lambda, \alpha \rangle,$ giving $y=ws_{v(\alpha)}t_{v(\lambda)},$ \\ $(2)$ $l(wv)=l(wvs_{\alpha})+\langle \alpha^{{\cal{V}}ee},2\rho \rangle -1$ and $n=\langle \lambda, \alpha \rangle +1,$ giving $y=ws_{v(\alpha)}t_{v(\lambda+\alpha^{{\cal{V}}ee})},$ \\ $(3)$ $l(v)=l(vs_{\alpha})+1$ and $n=0,$ giving $y=ws_{v(\alpha)}t_{vs_{\alpha}(\lambda)},$ \\ $(4)$ $l(v)=l(vs_{\alpha})-\langle \alpha^{{\cal{V}}ee},2\rho \rangle +1$ and $n=-1,$ giving $y=ws_{v(\alpha)}t_{vs_{\alpha}(\lambda+\alpha^{{\cal{V}}ee})}.$ {\cal{E}}nd{prop} In \cite{LS}, the first kind of the conditions (1) and (2) are called the near relation because $x$ and $y$ belong to the same chamber. 
In this paper we denote the near relation by $y \rightarrow_{near} x.$ The affine Bruhat operator $B^{\mu}:S\langle {W_{\rm aff}}^{\rm ssreg} \rangle \rightarrow S\langle {W_{\rm aff}}^{\rm sreg}\rangle,$ $\mu \in P,$ due to Lam and Shimozono \cite[Section 5]{LS} is an $S$-linear map defined by the formula \[ B^{\mu}(x) =(\mu - wv \mu) x + \sum_{\alpha\in {\overleftarrow{D}}elta_+}\sum_{xs_{v(\alpha),k}\rightarrow_{near} x} \langle \alpha^{{\cal{V}}ee}, \mu \rangle xs_{v(\alpha),k} \] for $x=wt_{v\lambda}\in {W_{\rm aff}}^{\rm ssreg}.$ We also introduce the operator $\beta^{\mu}_v,$ $\mu \in P,$ acting on each $M^{\rm ssreg}_v$ by \[ \beta^{\mu}_v([x]):=(\mu - wv \mu) [x] + [x] \sum_{\alpha\in {\overleftarrow{D}}elta_+,k>1} \langle \alpha^{{\cal{V}}ee}, \mu \rangle {\overleftarrow{D}}_{[v(\alpha),k]}, \] where $x=wt_{v\lambda} \in {W_{\rm aff}}^{\rm ssreg}.$ Denote by ${W_{\rm aff}}^{\rm ssreg}(v)$ the subset of ${W_{\rm aff}}$ consisting of the superregular elements belonging to the $v$-chamber. Fix a left $S$-module isomorphism \[ \begin{array}{cccc} \iota : & S \langle {W_{\rm aff}}^{\rm ssreg}(v) \rangle & \rightarrow & M_v^{\rm ssreg} \\ & x & \mapsto & [x] . {\cal{E}}nd{array} \] \begin{prop} For each $v\in W$ and a sufficiently superregular element $x \in {W_{\rm aff}}^{\rm ssreg}(v),$ \[ \beta^{\mu}_v([x])=\iota (B^{\mu}(x)). \] {\cal{E}}nd{prop} {\it Proof.} This can be shown by using Lemma 4.1 and Proposition 4.1. \[ \beta^{\mu}_v( [x])=(\mu - wv \mu) [x] + [x] \sum_{\alpha\in {\overleftarrow{D}}elta_+,k>1} \langle \alpha^{{\cal{V}}ee}, \mu \rangle {\overleftarrow{D}}_{[v(\alpha),k]} \] \[ =(\mu - wv \mu) [x] + \sum_{\alpha\in {\overleftarrow{D}}elta_+}\sum_{k>1,l(xs_{[v(\alpha),k}])=l(x)-1} \langle \alpha^{{\cal{V}}ee}, \mu \rangle [xs_{v(\alpha),k}] \] \[ =(\mu - wv \mu) [x] + \sum_{\alpha\in {\overleftarrow{D}}elta_+}\sum_{xs_{v(\alpha),k}\rightarrow_{near} x} \langle \alpha^{{\cal{V}}ee}, \mu \rangle [xs_{v(\alpha),k}] = \iota (B^{\mu}(x)). \] \begin{rem} {\rm In \cite{KM} the authors introduced the quantization operators ${\cal{E}}ta_{\alpha}$ acting on the model of $H^*(G/B)\otimes {\mathbb{C}}[q_1,\ldots,q_r]$ realized as a subalgebra of ${\cal{B}}_W \otimes {\mathbb{C}}[q_1,\ldots,q_{n-1}].$ For a superregular element $\lambda \in \tilde{Q}$ and $w\in W,$ consider a homomorphism $\theta_w^{\lambda}$ from the $\lambda$-small elements (see \cite[Section 5]{LS}) of $H^*(G/B)\otimes {\mathbb{C}}[q]$ to ${\cal{B}}af$ defined by \[ \theta_w^{\lambda}(q^{\mu}\sigma^v):= [vw^{-1}t_{w(\lambda+\mu)}], \] where $\sigma^v$ is the Schubert class of $G/B$ corresponding to $v\in W$ and $q^{\mu}=q_1^{\mu_1}\cdots q_r^{\mu_r}$ for $\mu=\sum_{i=1}^r\mu_i\alpha_i^{{\cal{V}}ee}.$ The following is an interpretation of the formula of \cite[Proposition 5.1]{LS} in our setting: \[ \theta_w^{\lambda}({\cal{E}}ta_{\alpha}(\sigma))= \beta_w^{{\cal{V}}arpi_{\alpha}}(\theta_w^{\lambda}(\sigma)). \] } {\cal{E}}nd{rem} \section{Quadratic relations} For $\alpha\in {\overleftarrow{D}}elta_+$ and $v\in W,$ let us define the operator ${\overleftarrow{D}}D_v(\alpha)$ by \[ {\overleftarrow{D}}D_v(\alpha):=\sum_{k>1} {\overleftarrow{D}}_{[v(\alpha),k]}. \] Then we have \[ \beta^{\mu}_v([x])= (\mu - wv \mu) [x] + [x]\sum_{\alpha \in {\overleftarrow{D}}elta_+} \langle \alpha^{{\cal{V}}ee}, \mu \rangle {\overleftarrow{D}}D_v(\alpha) . 
\] In the following, we discuss the relations among the operators ${\overleftarrow{D}}D_v(\alpha),$ $\alpha\in {\overleftarrow{D}}elta_+,$ for the root system of type $A_{n-1}.$ For simplicity, we consider only the non-equivariant case with $v=id.$ Take the standard realization of the $A_{n-1}$-system: \[ {\overleftarrow{D}}elta= \{ {\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j \; | \; 1\leq i,j \leq n, i\not=j \} . \] Put ${\overleftarrow{D}}D(ij):={\overleftarrow{D}}D_{id.}({\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j)$ for $1\leq i<j \leq n,$ and ${\overleftarrow{D}}D(ij):=-{\overleftarrow{D}}D(ji)$ for $i>j.$ In this situation, we have a formula for the non-equivariant limit $\bar{\beta}^{{\cal{V}}arepsilon_i}_{id.}$ of the operator $\beta^{{\cal{V}}arepsilon_i}_{id.}:$ \[ \bar{\beta}^{{\cal{V}}arepsilon_i}_{id.}=\sum_{j\not=i}{\overleftarrow{D}}D(ij) . \] Note that this formula is analogous to the definition of the Dunkl elements in \cite{FK}. Let $T_i,$ $1\leq i \leq n-1,$ be linear operators on $M^{\rm ssreg}$ defined by $T_i([x]):=[xt_{\alpha_i}],$ where $x\in {W_{\rm aff}}$ and $\alpha_i={\cal{V}}arepsilon_i-{\cal{V}}arepsilon_{i+1}.$ It is easy to check from Proposition 4.1 that $(T_i[x]){\overleftarrow{D}}D(jk)=T_i([x]{\overleftarrow{D}}D(jk)).$ Our next goal is to show that the operators ${\overleftarrow{D}}D(ij)$ satisfy the defining relations of the quantum deformation ${\cal E}_n^q$ of the Fomin-Kirillov quadratic algebra \cite{FK}. \begin{prop} {\rm (i)} For $1\leq i<j \leq n,$ we have \[ {\overleftarrow{D}}D(ij)^2= \left\{ \begin{array}{cc} T_i, & \textrm{if $j=i+1,$} \\ 0, & \textrm{otherwise.} {\cal{E}}nd{array} \right. \] {\rm (ii)} If $\{i,j\} \cap \{ k,l \}={\cal{E}}mptyset,$ then we have ${\overleftarrow{D}}D(ij){\overleftarrow{D}}D(kl)={\overleftarrow{D}}D(kl){\overleftarrow{D}}D(ij).$ \\ {\rm (iii)} For pairwise distinct $1\leq i,j,k \leq n,$ we have \[ {\overleftarrow{D}}D(ij){\overleftarrow{D}}D(jk)+{\overleftarrow{D}}D(jk){\overleftarrow{D}}D(ki)+{\overleftarrow{D}}D(ki){\overleftarrow{D}}D(ij)=0. \] {\cal{E}}nd{prop} {\it Proof.} First of all, let us check the equality (i). We have \[ {\overleftarrow{D}}D(ij)^2= \sum_{k,l>1}{\overleftarrow{D}}_{[ij,k]} {\overleftarrow{D}}_{[ij,l]} . \] Let $\lambda \in \tilde{Q}$ be sufficiently superregular. For $x=wt_{\lambda}\in {W_{\rm aff}},$ assume that $[x]{\overleftarrow{D}}_{[ij,k]} {\overleftarrow{D}}_{[ij,l]} \not=0.$ Then we have the arrows $xs_{ij,k} \rightarrow_{near} x$ and $xs_{ij,k}s_{ij,l} \rightarrow_{near} xs_{ij,k}$ in the Bruhat ordering. From the conditions (1) and (2) in Proposition 4.1, one of the following conditions holds: \\ Case (1): $k=-\langle \lambda ,{\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j \rangle$ and $l(w)=l(ws_{ij})-1,$ \\ Case (2): $k=-\langle \lambda, {\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j \rangle -1$ and $l(w)=l(ws_{ij})+\langle {\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j,2\rho \rangle -1.$ \\ In Case (1), since the arrow $xs_{ij,k}s_{ij,l}=ws_{ij}t_{\lambda}s_{ij,l} \rightarrow_{near} xs_{ij,k}$ must come from the condition (2) of Proposition 4.1, we have $\langle {\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j,2\rho \rangle -1=1.$ This equality implies that ${\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j$ is a simple root $\alpha_i,$ and we get \[ [x]{\overleftarrow{D}}D(i \; i+1)^2 = [x]{\overleftarrow{D}}_{[\alpha_i,-\langle \lambda ,\alpha_i \rangle]}{\overleftarrow{D}}_{[\alpha_i,-\langle \lambda ,\alpha_i \rangle-1]} =[xt_{\alpha_i}]=T_i[x].
\] In Case (2), since the arrow $xs_{ij,k}s_{ij,l}= ws_{ij}t_{\lambda+{\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j}s_{ij,l} \rightarrow_{near} xs_{ij,k}$ comes from the condition (1) of Proposition 4.1, we again obtain $\langle {\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j,2\rho \rangle -1=1$ and ${\cal{V}}arepsilon_i-{\cal{V}}arepsilon_j=\alpha_i.$ Hence we get \[ [x]{\overleftarrow{D}}D(i \; i+1)^2 = [x]{\overleftarrow{D}}_{[\alpha_i,-\langle \lambda ,\alpha_i \rangle-1]}{\overleftarrow{D}}_{[\alpha_i,-\langle \lambda ,\alpha_i \rangle-2]} =[xt_{\alpha_i}]=T_i[x]. \] If $j\not=i+1,$ we have ${\overleftarrow{D}}D(ij)^2=0.$ The relations (ii) and (iii) follow from the identities $[ij,a][kl,b]=[kl,b][ij,a]$ for $\{ i,j\} \cap \{ k,l \} ={\cal{E}}mptyset,$ and \[ [ij,a][jk,b]+[jk,b][ki,-a-b]+[ki,-a-b][ij,a]=0 \] in ${\cal{B}}af.$ \begin{rem} The operators ${\overleftarrow{D}}D_v(\alpha)$ induce the quantum Bruhat representation of ${\cal E}_n^q$ via $\theta_v^{\lambda}.$ {\cal{E}}nd{rem} \begin{thebibliography}{99} \bibitem{AS} N. Andruskiewitsch and H.-J. Schneider, {\it Finite quantum groups and Cartan matrices,} Adv. Math. {\bf 154} (2000), 1-45. \bibitem{Ba} Y. Bazlov, {\it Nichols-Woronowicz algebra model for Schubert calculus on Coxeter groups,} J. Algebra, {\bf 297} (2006), 372-399. \bibitem{Bo} R. Bott, {\it The space of loops on a Lie group,} Michigan Math. J., {\bf 5} (1958), 35-61. \bibitem{FK} S. Fomin and A. N. Kirillov, {\it Quadratic algebras, Dunkl elements and Schubert calculus,} Advances in Geometry, (J.-L. Brylinski, R. Brylinski, V. Nistor, B. Tsygan, and P. Xu, eds. ) Progress in Math., {\bf 172}, Birkh\"auser, 1995, 147-182. \bibitem{KM} A. N. Kirillov and T. Maeno, {\it A note on quantization operators on Nichols algebra model for Schubert calculus on Weyl groups,} Lett. Math. Phys. {\bf 72} (2005), 233-241. \bibitem{KK} B. Kostant and S. Kumar, {\it The nil Hecke ring and cohomology of $G/P$ for a Kac-Moody group $G$ }, Adv. in Math. {\bf 62} (1986), 187-237. \bibitem{LS} T. Lam and M. Shimozono, {\it Quantum cohomology of $G/P$ and homology of affine Grassmannian,} Acta Math., {\bf 24} (2010), 49-90. \bibitem{LP} C. Lenart and A. Postnikov, {\it Affine Weyl groups in $K$-theory and representation theory,} Int. Math. Res. Notices {\bf 2007}, no. 12, Art. ID rnm038, 65pp. \bibitem{Maj1} S. Majid, {\it Free braided differential calculus, braided binomial theorem, and the braided exponential map,} J. Math. Phys., {\bf 34} (1993), 4843-4856. \bibitem{Maj2} S. Majid, {\it Noncommutative differentials and Yang-Mills on permutation groups $S_N,$} Lecture Notes in Pure and Appl. Math., vol. 239, Dekker, 2004, 189-214. \bibitem{MS} A. Milinski and H.-J. Schneider, {\it Pointed indecomposable Hopf algebras over Coxeter groups,} Contemp. Math., {\bf 267} (2000), 215-236. \bibitem{Ni} W. D. Nichols, {\it Bialgebras of type one,} Comm. Algebra, {\bf 6} (1978), 1521-1552. \bibitem{Pe} D. Peterson, Lecture notes at MIT, 1997. \bibitem{Wo} S. L. Woronowicz, {\it Differential calculus on compact matrix pseudogroups (quantum groups),} Commun. Math. Phys., {\bf 122} (1989), 125-170. {\cal{E}}nd{thebibliography} Research Institute for Mathematical Sciences \\ Kyoto University \\ Sakyo-ku, Kyoto 606-8502, Japan \\ e-mail: {\tt [email protected]} \\ URL: {\tt http://www.kurims.kyoto-u.ac.jp/\textasciitilde kirillov} \\ Department of Electrical Engineering, \\ Kyoto University, \\ Sakyo-ku, Kyoto 606-8501, Japan \\ e-mail: {\tt [email protected]} {\cal{E}}nd{document}
\begin{document} \begin{abstract} This paper provides a new class of moment models for linear kinetic equations in slab geometry. These models can be evaluated cheaply while preserving the important realizability property, that is the fact that the underlying closure is non-negative. Several comparisons with the (expensive) state-of-the-art minimum-entropy models are made, showing the similarity in approximation quality of the two classes. \end{abstract} \begin{keyword} moment models \sep minimum entropy \sep Kershaw closures \sep kinetic transport equation \MSC[2010] 35L40 \sep 35Q84 \sep 47B35 \sep 65M08 \sep 65M70 \end{keyword} \title{Kershaw closures for linear transport equations in slab geometry I: model derivation } \noindent \section{Introduction} In recent years, many approaches have been considered for solving time-dependent linear kinetic transport equations, which arise for example in electron radiation therapy or radiative heat transfer problems. Many of the most popular methods are \emph{moment methods}, also known as \emph{moment closures} because they are distinguished by how they close the truncated system of exact moment equations. Moments are defined by angular averages against basis functions to produce spectral approximations in the angle variable. A typical family of moment models are the so-called \emph{$\PN$ methods} \cite{Lewis-Miller-1984,Gel61}, which are pure spectral methods. However, many high-order moment methods, including $\PN$, do not take into account that the original kinetic particle distribution to be approximated must be non-negative. The moment vectors produced by such models are therefore often not realizable, that is the fact that there is no associated non-negative kinetic distribution consistent with the moment vector. Thus, the solutions can contain non-physical artefacts such as negative local particle densities \cite{Bru02}. The family of \emph{minimum-entropy models}, colloquially known as \emph{$\MN$ models} or entropy-based moment closures, solve this problem (for certain physically relevant entropies) by specifying the closure using a non-negative density reconstructed from the moments. Additionally, they are hyperbolic and dissipate entropy \cite{Levermore1998}.\\ All these properties of the minimum-entropy ansatz (for all types of angular bases) come at the price that the reconstruction of the distribution function involves solving an optimization problem at every point on the space-time mesh. These reconstructions can be parallelized, and so the recent emphasis on algorithms that can take advantage of massively parallel computing environments has led to renewed interest in the computation of $\MN$ solutions both for linear and non-linear kinetic equations \cite{DubFeu99,Hauck2010,Lam2014,AllHau12,Garrett2014, McDonald2012}. To avoid these expensive computations a new class of models has been introduced in \cite{Ker76}, which will be called Kershaw closures. These models aim at providing a closed flux function which is generated by a non-negative distribution, thus ensuring that this crucial property of the transport solution is not violated. One derivation of Kershaw closures has been provided in \cite{Monreal}. Unfortunately, it is not straight forward to lift this procedure to higher dimensions. Here, a new ansatz is derived for which it is possible to do so. This paper is organized as follows. First, the transport equation and its moment approximations are given. Then, the available realizability theory is shortly reviewed. 
This information allows us to derive and investigate the class of Kershaw closures, which is then tested intensively on well-known benchmark problems. Finally, conclusions and an outlook on future work are given. \section{Modelling} In slab geometry, the transport equation under consideration has the form \begin{align} \label{eq:TransportEquation1D} \partial_\timevar\distribution+\ensuremath{\Omega}height\partial_{\z}\distribution + \ensuremath{\sigma_a}\distribution = \ensuremath{\sigma_s}\collision{\distribution}+\ensuremath{Q}, \qquad \ensuremath{t}\in\ensuremath{T},\ensuremath{z}\in\ensuremath{X},\ensuremath{\Omega}height\in[-1,1]. \end{align} The physical parameters are the absorption and scattering coefficients $\ensuremath{\sigma_a},\ensuremath{\sigma_s}:\ensuremath{T}\times\ensuremath{X}\to\mathbb{R}pos$, respectively, and the emitting source $\ensuremath{Q}:\ensuremath{T}\times\ensuremath{X}\times[-1,1]\to\mathbb{R}pos$. Furthermore, $\ensuremath{\Omega}height\in[-1,1]$, and $\distribution = \distribution(\ensuremath{t},\ensuremath{z},\ensuremath{\Omega}height)$. The shorthand notation $\ints{\cdot} = \int\limits_{-1}^1\cdot~d\ensuremath{\Omega}height$ denotes integration over $[-1,1]$. \begin{assumption} \label{ass:CollisionOperator} Following \cite{Levermore1996}, the collision operator $\ensuremath{\cC}$ is assumed to have the following properties. \begin{enumerate} \begin{subequations} \label{eq:CollisionProperty} \item Mass conservation \begin{align} \label{eq:CollisionPropertyMass} \ints{\collision{\distribution}}=0. \end{align} \item Local entropy dissipation \begin{align} \label{eq:CollisionPropertyLocalDissipation} \ints{\ensuremath{\eta}'(\distribution)\collision{\distribution}}\leq 0, \end{align} where $\ensuremath{\eta}$ denotes a strictly convex, twice differentiable entropy. \item Constants in the kernel: \begin{align} \label{eq:ConstantKernel} \collision{c} = 0 \qquad \text{for every } c\in\mathbb{R}. \end{align} \end{subequations} \end{enumerate} \end{assumption} {A typical example for $\ensuremath{\cC}$ is the linear integral operator \begin{equation} \lincollision{\distribution} = \int\limits_{-1}^1 \ensuremath{K}(\ensuremath{\Omega}height, \ensuremath{\Omega}height^\prime) \distribution(\ensuremath{t}, \ensuremath{z}, \ensuremath{\Omega}height^\prime)~d\ensuremath{\Omega}height^\prime - \distribution(\ensuremath{t}, \ensuremath{z}, \ensuremath{\Omega}height), \label{eq:collisionOperatorLin1D} \end{equation} where $\ensuremath{K}$ is non-negative, symmetric in both arguments and normalized to $\int\limits_{-1}^1 \ensuremath{K}(\ensuremath{\Omega}height, \ensuremath{\Omega}height^\prime)~d\ensuremath{\Omega}height^\prime=1$.
It includes the often-used special case of the BGK-type isotropic-scattering operator if $\ensuremath{K}\equiv \frac12$, which will be used here as well.} \eqref{eq:TransportEquation1D} is supplemented by initial and boundary conditions: \begin{subequations} \begin{align} \distribution(0,\ensuremath{z},\ensuremath{\Omega}height) &= \ensuremath{\distribution[\ensuremath{t}=0]}(\ensuremath{z},\ensuremath{\Omega}height) &\text{for } \ensuremath{z}\in\ensuremath{X} = (\ensuremath{z}L,\ensuremath{z}R), \ensuremath{\Omega}height\in[-1,1], \label{eq:TransportEquation1DIC}\\ \distribution(\ensuremath{t},\ensuremath{z}L,\ensuremath{\Omega}height) &= \ensuremath{\distribution[b]}(\ensuremath{t},\ensuremath{z}L,\ensuremath{\Omega}height) &\text{for } \ensuremath{t}\in\ensuremath{T}, \ensuremath{\Omega}height>0, \label{eq:TransportEquation1DBCa}\\ \distribution(\ensuremath{t},\ensuremath{z}R,\ensuremath{\Omega}height) &= \ensuremath{\distribution[b]}(\ensuremath{t},\ensuremath{z}R,\ensuremath{\Omega}height) &\text{for } \ensuremath{t}\in\ensuremath{T}, \ensuremath{\Omega}height<0. \label{eq:TransportEquation1DBCb} \end{align} \end{subequations} \section{Moment models and realizability} In general, solving equation \eqref{eq:TransportEquation1D} is very expensive in higher dimensions due to the high dimensionality of the state space. For this reason it is convenient to use some type of spectral or Galerkin method to transform the high-dimensional equation into a system of lower-dimensional equations. Typically, one chooses to reduce the dimensionality by representing the angular dependence of $\distribution$ in terms of some basis $\basis$. \begin{definition} The vector of functions $\basis = \basis[\ensuremath{N}]:[-1,1]\to\mathbb{R}^{\ensuremath{N}+1}$ consisting of $\ensuremath{N}+1$ basis functions $\basiscomp[\ensuremath{i}]$, $\ensuremath{i}=0,\ldots\ensuremath{N}$ of maximal \emph{order} $\ensuremath{N}$ is called an \emph{angular basis}. The so-called \emph{moments} of a given distribution function $\distribution$ with respect to $\basis$ are then defined by \begin{align} \label{eq:moments} {\moments[] =\ints{{\basis}\distribution} = \left(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}}\right)^T}, \end{align} where the integration is performed componentwise.\\ Assuming for simplicity $\basiscomp[0]\equiv 1$, the quantity $\momentcomp{0} = \ints{\basiscomp[0]\distribution}=\ints{\distribution}$ is called \emph{local particle density}. Furthermore, \emph{normalized moments} $\ensuremath{\bn}izedmoments = \left(\ensuremath{\bn}izedmomentcomp{1},\ldots,\ensuremath{\bn}izedmomentcomp{\ensuremath{N}}\right)\in\mathbb{R}^{\ensuremath{N}}$ are defined as \begin{align} \label{eq:NormalizedMoments} \ensuremath{\bn}izedmoments = \cfrac{\ints{\ensuremath{\bn}izedbasis\distribution}}{\ints{\distribution}}~, \end{align} where $\ensuremath{\bn}izedbasis = \left(\basiscomp[1],\ldots,\basiscomp[\ensuremath{N}]\right)^T$ is the remainder of the basis $\basis$. \end{definition} To obtain a set of equations for $\moments$, \eqref{eq:TransportEquation1D} has to be multiplied through by $\basis$ and integrated over $[-1,1]$, giving \begin{align*} \ints{\basis\partial_\timevar\distribution}+\ints{\basis\partial_{\z}\ensuremath{\Omega}height\distribution} + \ints{\basis\ensuremath{\sigma_a}\distribution} = \ensuremath{\sigma_s}\ints{\basis\collision{\distribution}}+\ints{\basis\ensuremath{Q}}. 
\end{align*} Collecting known terms, and interchanging integrals and differentiation where possible, the moment system has the form \begin{align} \label{eq:MomentSystemUnclosed1D} \partial_\timevar\moments+\partial_{\z}\ints{\ensuremath{\Omega}height \basis\ansatz[\moments]} + \ensuremath{\sigma_a}\moments = \ensuremath{\sigma_s}\ints{\basis\collision{\ansatz[\moments]}}+\ints{\basis\ensuremath{Q}}. \end{align} The solution of \eqref{eq:MomentSystemUnclosed1D} is equivalent to the one of \eqref{eq:TransportEquation1D} if $\basis=\basis[\infty]$ is a basis of $\Lp{2}(\sphere,\mathbb{R})$. Since it is impractical to work with an infinite-dimensional system, only a finite number of $\ensuremath{N}+1<\infty$ basis functions $\basis[\ensuremath{N}]$ of order $\ensuremath{N}$ can be considered. Unfortunately, there always exists an index $\ensuremath{i}\in\{0,\dots,\ensuremath{N}\}$ such that the components of $\basiscomp\cdot\ensuremath{\Omega}height$ are not in the linear span of $\basis[\ensuremath{N}]$. Therefore, the flux term cannot be expressed in terms of $\moments[\ensuremath{N}]$ without additional information. Furthermore, the same might be true for the projection of the scattering operator onto the moment-space given by $\ints{\basis\collision{\distribution}}$. This is the so-called \emph{closure problem}. One usually prescribes some \emph{ansatz} distribution $\ansatz[\moments](\ensuremath{t},\ensuremath{\bx},\ensuremath{\Omega}):=\ansatz(\moments(\ensuremath{t},\ensuremath{\bx}),\basis(\ensuremath{\Omega}))$ to calculate the unknown quantities in \eqref{eq:MomentSystemUnclosed1D}. Note that the dependence on the angular basis in the short-hand notation $\ansatz[\moments]$ is neglected for notational simplicity.\\ In this paper, the \emph{full-moment monomial basis} $\basiscomp = \ensuremath{\Omega}height^\ensuremath{i}$ is considered. However, it is in principle possible to extend the derived concepts to other bases like half \cite{DubKla02,DubFraKlaTho03} or mixed moments \cite{Frank07,Schneider2014}. \subsection{Minimum-entropy approach} \label{sec:MinimumEntropy}\index{MinimumEntropy@\textbf{Minimum-entropy models}|(} An important class of models are the so-called \emph{minimum-entropy models}. Here the ansatz density $\ansatz$ is reconstructed from the moments $\moments$ by minimizing the \emph{entropy functional} \begin{align} \label{eq:entropyFunctional} \ensuremath{\eta}Functional(\distribution) = \ints{\ensuremath{\eta}(\distribution)} \end{align} under the moment constraints \index{Entropy@\textbf{Entropy}!Entropy density $\ensuremath{\eta}$}\index{Entropy@\textbf{Entropy}!Entropy functional $\ensuremath{\eta}Functional$} \begin{align} \label{eq:MomentConstraints} \ints{\basis\distribution} = \moments. \end{align} The kinetic \emph{entropy density} $\ensuremath{\eta}:\mathbb{R}\to\mathbb{R}$ is strictly convex and twice continuously differentiable and the minimum is simply taken over all functions $\distribution = \distribution(\ensuremath{\Omega})$ such that $\ensuremath{\eta}Functional(\distribution)$ is well defined. The obtained ansatz $\ansatz = \ansatz[\moments]$ by solving the constrained optimization problem is given by\index{Ansatz@\textbf{Ansatz} $\ansatz$} \begin{equation} \ansatz[\moments] = \argmin\limits_{\distribution:\ensuremath{\eta}(\distribution)\in\Lp{1}}\left\{\ints{\ensuremath{\eta}(\distribution)} : \ints{\basis \distribution} = \moments \right\}. 
\label{eq:primal} \end{equation} This problem is typically solved through its strictly convex finite-dimensional dual\index{Legendredual@\textbf{Legendre dual}} \begin{equation} \multipliers(\moments) := \argmin_{\tilde{\multipliers} \in \mathbb{R}^{\ensuremath{n}}} \ints{\ld{\ensuremath{\eta}}(\basis^T \tilde{\multipliers})} - \moments^T \tilde{\multipliers}, \label{eq:dual} \end{equation} where $\ld{\ensuremath{\eta}}$ is the Legendre dual \cite{courant2008methods} of $\ensuremath{\eta}$. Differentiating \eqref{eq:dual} with respect to the multipliers $\multipliers(\moments)$, the first-order necessary conditions show that the solution to \eqref{eq:primal} has the form \begin{equation} \ansatz[\moments] = \ld{\ensuremath{\eta}}' \left(\basis^T \multipliers(\moments) \right), \label{eq:psiME} \end{equation} where $\ld{\ensuremath{\eta}}'$ is the derivative of $\ld{\ensuremath{\eta}}$.\\ After substituting $\distribution$ in \eqref{eq:MomentSystemUnclosed1D} with $\ansatz[\moments]$, a closed system of equations remains, yielding \begin{align} \label{eq:MomentSystemClosed1D} \partial_\timevar\moments+\partial_{\z}\cdot\ints{\ensuremath{\Omega}height\basis\ansatz[\moments]} + \ensuremath{\sigma_a}\moments = \ensuremath{\sigma_s}\ints{\basis\collision{\ansatz[\moments]}}+\ints{\basis\ensuremath{Q}}. \end{align} For convenience, \eqref{eq:MomentSystemClosed1D} can be written in the form of a usual first-order system of balance laws \begin{align} \label{eq:GeneralHyperbolicSystem1D} \partial_\timevar\moments+\partial_{\z}\ensuremath{\bF}_3\left(\moments\right) = \ensuremath{\bs}\left(\moments\right), \end{align} where \begin{subequations} \label{eq:FluxDefinitions} \begin{align} \ensuremath{\bF}\left(\moments\right) &= \ints{\ensuremath{\Omega}height\basis\ansatz[\moments]}\in\mathbb{R}^{\ensuremath{N}+1},\\ \ensuremath{\bs}\left(\moments\right) &= \ensuremath{\sigma_s}\ints{\basis\collision{\ansatz[\moments]}}+\ints{\basis\ensuremath{Q}}-\ensuremath{\sigma_a}\moments. \end{align} \end{subequations} This system has several attractive properties, namely that it is hyperbolic and dissipates entropy \cite{Levermore1996,Levermore1998} and the eigenvalues of the flux Jacobian $\ensuremath{\bF}'(\moments)$ are bounded in absolute value by one \cite{Schneider2015a}, which corresponds to the original velocities of the transport equation since $\ensuremath{\Omega}height\in[-1,1]$. The kinetic entropy density $\ensuremath{\eta}$ can be chosen according to the physics being modelled. As in \cite{Levermore1996,Hauck2010}, \emph{Maxwell-Boltzmann entropy} \begin{align} \label{eq:EntropyM} \ensuremath{\eta}(\distribution) = \distribution \log(\distribution) - \distribution \end{align} is used, thus $\ld{\ensuremath{\eta}}(p) = \ld{\ensuremath{\eta}}'(p) = \exp(p)$. This entropy is used for non-interacting particles as in an ideal gas. Other physically relevant entropies are \cite{Levermore1996} \begin{align*} \ensuremath{\eta}(\distribution) = \distribution \log(\distribution) + \left(1-\distribution\right)\log\left(1-\distribution\right) \end{align*} for particles satisfying \emph{Fermi-Dirac} (e.g. fermions) or \begin{align*} \ensuremath{\eta}(\distribution) = \distribution \log(\distribution) - \left(1+\distribution\right)\log\left(1+\distribution\right) \end{align*} for particles satisfying \emph{Bose-Einstein statistics} (e.g. bosons). The resulting minimum-entropy model is commonly referred to as the $\MN$ model. 
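To make the computational effort hidden in \eqref{eq:dual} concrete, the following short Python sketch solves the dual problem for the $\MN[1]$ model (monomial basis $(1,\ensuremath{\Omega}height)$) with Maxwell-Boltzmann entropy by a plain Newton iteration on a Gauss-Legendre quadrature. It is an illustration only and not taken from the implementation used for the computations in this paper; the quadrature order, initial guess and stopping tolerance are ad-hoc choices, and a production solver would additionally require safeguarding (line searches, adaptive bases) near the realizability boundary. This optimization has to be carried out for every moment vector, i.e.\ in every cell of the space-time mesh, which is exactly the cost that the Kershaw closures introduced below avoid.
\begin{verbatim}
import numpy as np

# Gauss-Legendre quadrature on [-1, 1] for the angular integrals <.>
mu, w = np.polynomial.legendre.leggauss(80)   # ad-hoc quadrature order

def basis(mu):
    """Monomial angular basis (1, mu) for N = 1."""
    return np.vstack((np.ones_like(mu), mu))

def solve_dual(u, tol=1e-12, max_iter=100):
    """Plain Newton iteration for the dual problem with the
    Maxwell-Boltzmann ansatz psi = exp(alpha . b); no safeguarding."""
    b = basis(mu)
    alpha = np.array([np.log(u[0] / 2.0), 0.0])   # isotropic initial guess
    for _ in range(max_iter):
        psi = np.exp(alpha @ b)         # reconstructed ansatz on the quadrature
        g = b @ (w * psi) - u           # gradient:  <b psi> - u
        if np.linalg.norm(g) < tol:
            break
        H = (b * (w * psi)) @ b.T       # Hessian:   <b b^T psi>
        alpha -= np.linalg.solve(H, g)
    return alpha

def closure_M1(u):
    """Second moment <mu^2 psi_u> of the minimum-entropy ansatz."""
    alpha = solve_dual(u)
    psi = np.exp(alpha @ basis(mu))
    return np.sum(w * mu**2 * psi)

u = np.array([1.0, 0.4])     # a realizable moment vector (|u_1| < u_0)
print(closure_M1(u))         # second moment predicted by the M1 closure
\end{verbatim}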
Note that the classical $\PN$ approximation \cite{Eddington,Lewis-Miller-1984} is also an entropy-based moment closure using the entropy $$ \ensuremath{\eta}(\distribution) = \frac12 \distribution^2 = \ld{\ensuremath{\eta}}(\distribution) $$ and Legendre polynomials as angular basis \cite{Brunner2005b,Seibold2014}. \subsection{Realizability} Since the underlying kinetic density to be approximated is non-negative, a moment vector only makes sense physically if it can be associated with a non-negative distribution function. In this case the moment vector is called \emph{realizable}. \begin{definition} \label{def:RealizableSet} The \emph{realizable set} $\mathbb{R}D{\basis}{}$\index{Realizability@\textbf{Realizability}!Realizable set $\mathbb{R}D{\basis}{}$} is $$ \mathbb{R}D{\basis}{} = \left\{\moments~:~\exists \distribution(\ensuremath{\Omega}height)\ge 0,\, \ints{\distribution} > 0, \text{ such that } \moments =\ints{\basis\distribution} \right\}. $$ If $\moments\in\mathbb{R}D{\basis}{}$, then $\moments$ is called \emph{realizable}. Any $\distribution$ such that $\moments =\ints{\basis \distribution}$ is called a \emph{representing density}. If $\distribution$ is additionally a linear combination of Dirac deltas \cite{Hassani2009,Mathematics2011,Kuo2006}, it is called \emph{atomic} \cite{Curto1991}. \end{definition} \begin{definition} \mbox{ } \begin{enumerate}[(a)] \item Let $A,B\in\mathbb{R}^{\ensuremath{n}\times \ensuremath{n}}$ be Hermitian matrices. The partial ordering $"\geq"$ on such matrices is defined by $A\geq B$ if and only if $A-B$ is positive semi-definite. In particular $A\geq 0$ denotes that $A$ is positive semi-definite. \item A distribution function $\distribution$ is said to be $r-$\emph{atomic} with \emph{atoms} $\ensuremath{\Omega}height_i$ and \emph{densities} $\rho_i$ if it is a linear-combination of $r$ Dirac deltas of the form \begin{align*} \distribution = \sum_{i=0}^{r-1}\rho_i\ensuremath{\partiallta}(\ensuremath{\Omega}height-\ensuremath{\Omega}height_i). \end{align*} \end{enumerate} \end{definition} For the full-moment basis the question of finding practical characterizations of the realizable set $\mathbb{R}D{\basis}{}$ has been completely solved in \cite{Curto1991}. The resulting problems are special cases of the so-called \emph{truncated Hausdorff moment problem} which aims to find a realizing distribution with support on the interval $[a,b]$ for a moment vector $\moments$ in the set $$\left\{\moments\in\mathbb{R}^{\ensuremath{N}+1}~|~ \exists \distribution\geq 0 \text{ with } \momentcomp{\ensuremath{i}} = \int\limits_a^b \ensuremath{\Omega}height^\ensuremath{i}\distribution~d\ensuremath{\Omega}height,~\ensuremath{i}=0,\ldots,\ensuremath{N}\right\}.$$ The following characterizations of the above realizable set hold. {\begin{lemma}[Truncated Hausdorff moment problem \cite{Curto1991}] \label{lem:FullMomentRealizability} Define the \emph{Hankel matrices} $$ A(k):=\left(\momentcomp{i+j}\right)_{i,j=0}^k, \quad B(k):=\left(\momentcomp{i+j+1}\right)_{i,j=0}^k, \quad C(k):=\left(\momentcomp{i+j}\right)_{i,j=1}^k. $$ Then the truncated Hausdorff moment problem has a solution if and only if \begin{itemize} \item for $\ensuremath{N}=2k+1$, \begin{align} bA(k)\geq B(k)\geq aA(k); \label{eq:haus-odd} \end{align} \item for $\ensuremath{N}=2k$, \begin{align} A(k) &\geq 0 \qquad \text{and} \label{eq:haus-even-1} \\ (a+b)B(k-1) &\geq abA(k-1) + C(k). 
\label{eq:haus-even-2} \end{align} \end{itemize} \end{lemma} } \begin{corollary} \label{cor:FullMomentRealizability} The realizable set $\mathbb{R}D{\fmbasis}{}$ satisfies \begin{align*} \mathbb{R}D{\fmbasis}{} = \begin{cases} \left\{\moments\in\mathbb{R}^{\ensuremath{N}+1}~|~ {A(k)\geq B(k),~A(k)\geq -B(k)}\right\} & \text{ if } $\ensuremath{N}=2k+1$,\\ \left\{\moments\in\mathbb{R}^{\ensuremath{N}+1}~|~ A(k)\geq 0, A(k-1)\geq C(k)\right\} & \text{ if } $\ensuremath{N}=2k$.\\ \end{cases} \end{align*} \end{corollary} \begin{proof} {Follows directly from \lemref{lem:FullMomentRealizability} with $a = -1$, $b=1$.} \end{proof} Furthermore, it can be shown that there exists a minimal atomic representing distribution $\distribution$ (in the sense that it contains the fewest possible number of atoms while still representing the moments) and that one can directly find this distribution with the help of its \emph{generating function}. This is the consequence of a recursiveness property of the Hankel matrices $A(k)$ and $B(k)$ \cite{Curto1991}. \begin{corollary} \label{cor:GeneratingFunction} Let $\left(\gamma_0,\ldots,\gamma_{r-1}\right)^T:= A(r-1)^{-1}v(r,r-1)$, where $v(i,j) = \left(\momentcomp{i+l}\right)_{l=0}^j$ and $r$ is the smallest integer such that $A(r)$ is singular, \footnote{$r$ is also called the \emph{Hankel rank} \cite{Curto1991}.} the generating function is defined by \begin{align*} \generatingFunction(\ensuremath{\Omega}height) = \ensuremath{\Omega}height^r-\sum\limits_{i=0}^{r-1}\gamma_i\ensuremath{\Omega}height^i. \end{align*} The roots of this polynomial give the atoms $\ensuremath{\Omega}height_i$ of the distribution $\distribution = \sum_{i=0}^{r-1}\rho_i\ensuremath{\partiallta}(\ensuremath{\Omega}height-\ensuremath{\Omega}height_i)$ whereas the densities $\rho_i$ are calculated afterwards from the Vandermonde system \begin{align} \label{eq:VandermondeSystem} \rho_0\ensuremath{\Omega}height_0^i+\cdots +\rho_{r-1}\ensuremath{\Omega}height_{r-1}^i = v(i,0) = \momentcomp{i}~~~i=0\ldots r-1. \end{align} Furthermore, when the moment vector $\moments$ is on the boundary of the realizable set, the minimal atomic representing measure is the unique representing measure. \end{corollary} Due to the structure of the used Hankel matrices (the highest moment $\momentcomp{\ensuremath{N}}$ always appears exactly once in the entries of the matrices) it is always possible to rearrange the conditions involving this highest moment in \corref{cor:FullMomentRealizability} in such a way that \begin{align*} \ensuremath{f_{\text{up}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}-1})\geq \momentcomp{\ensuremath{N}} \geq \ensuremath{f_{\text{low}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}-1}) \end{align*} for functions $\ensuremath{f_{\text{up}}}$ and $\ensuremath{f_{\text{low}}}$. Whenever $\moments$ is realizable and $\momentcomp{\ensuremath{N}} = \ensuremath{f_{\text{low}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}-1})$, $\moments$ is said to be on the \emph{lower $\ensuremath{N}^{\text{th}}$-order realizability boundary}. Similarly, if $\momentcomp{\ensuremath{N}} = \ensuremath{f_{\text{up}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}-1})$, $\moments$ is said to be on the \emph{upper $\ensuremath{N}^{\text{th}}$-order realizability boundary}. The functions $\ensuremath{f_{\text{up}}}$ and $\ensuremath{f_{\text{low}}}$ can be specified using the pseudoinverses of $A(k-1)$ and $A(k-1)\pm B(k-1)$ due to Lemma 2.3 in \cite{Curto1991}. 
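The Hankel conditions in \corref{cor:FullMomentRealizability} are also easy to test numerically. The following Python sketch (an illustration with an ad-hoc tolerance, not part of any reference implementation) checks realizability of a full-moment vector on $[-1,1]$ by verifying positive semi-definiteness of the corresponding Hankel matrices.
\begin{verbatim}
import numpy as np

def hankel(u, i0, j0, size):
    """Hankel block (u_{i+j}) with row indices i0..i0+size-1 and
    column indices j0..j0+size-1."""
    return np.array([[u[i + j] for j in range(j0, j0 + size)]
                     for i in range(i0, i0 + size)])

def psd(M, tol=1e-12):
    """Positive semi-definiteness up to a small tolerance."""
    return np.linalg.eigvalsh(M).min() >= -tol

def is_realizable(u):
    """Hankel test for a full-moment vector u = (u_0, ..., u_N), N >= 1."""
    N = len(u) - 1
    k = N // 2
    if N % 2 == 1:                       # N = 2k+1: A(k) >= +-B(k)
        A = hankel(u, 0, 0, k + 1)
        B = hankel(u, 0, 1, k + 1)
        return psd(A - B) and psd(A + B)
    else:                                # N = 2k:   A(k) >= 0 and A(k-1) >= C(k)
        return psd(hankel(u, 0, 0, k + 1)) and \
               psd(hankel(u, 0, 0, k) - hankel(u, 1, 1, k))

print(is_realizable(np.array([1.0, 0.2, 0.5])))    # True
print(is_realizable(np.array([1.0, 0.2, 0.03])))   # False: u_0 u_2 < u_1^2
\end{verbatim}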
Also compare \cite{Schneider2014} where this relation is used for the mixed-moment setting. { \begin{lemma}[cf. Lemma 2.3 in \cite{Curto1991}] \label{lem:pd-ext} \mbox{ }\\ Let $A \in \mathbb{R}^{k \timesk}$ be symmetric, $\bsbeta \in \mathbb{R}^k$, and $c \in \mathbb{R}$ such that $$ \tildeA = \left(\begin{array}{cc} A & \bsbeta \\ \bsbeta^T & c \end{array}\right). $$ \begin{itemize} \item[(i)] If $\tildeA \ge 0$, then $A \ge 0$, $\bsbeta = A \bw \in \range{A}$ and $c \ge \bw^T A \bw$. \item[(ii)] If $A \ge 0$ and $\bsbeta = A \bw\in \range{A}$, then $\tildeA \ge 0$ if and only if $c \ge \bw^T A \bw$. \end{itemize} \end{lemma} } \begin{remark} \label{rem:Pseudoinverse} Since $A$ is invertible on its range, a more explicit formula for the bound on $c$ in Lemma \ref{lem:pd-ext} can be written using the pseudo-inverse $\pseudoinv{A}$ of $A$. That is, for all $\bw$ such that $\bsbeta = A \bw$, it is also $\bw^T A \bw = \bsbeta^T \pseudoinv{A} \bsbeta$. Thus the bound on $c$ is well-defined even when $A$ is singular. \end{remark} To simplify notation later, the following corollary is written in terms of $\momentcomp{\ensuremath{N}+1}$ instead of $\momentcomp{\ensuremath{N}}$. \begin{corollary} \label{cor:FupFlow} The functions $\ensuremath{f_{\text{up}}}$ and $\ensuremath{f_{\text{low}}}$ satisfying \begin{align} \label{eq:UpperLowerBoundsFullMoments} \ensuremath{f_{\text{up}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}})\geq \momentcomp{\ensuremath{N}+1} \geq \ensuremath{f_{\text{low}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}}) \end{align} are given by \begin{align*} \ensuremath{f_{\text{up}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}}) &= \begin{cases} \momentcomp{\ensuremath{N}-1}-\bsbeta_-^T\pseudoinv{\left(A(k-1)-C(k-1)\right)}\bsbeta_- & \text{ if } \ensuremath{N} = 2k+1\\ \momentcomp{\ensuremath{N}}-\bsbeta_-^T\pseudoinv{\left(A(k-1)-B(k-1)\right)}\bsbeta_- & \text{ if } \ensuremath{N} = 2k \end{cases}\\ \ensuremath{f_{\text{low}}}(\momentcomp{0},\ldots,\momentcomp{\ensuremath{N}}) &= \begin{cases} \bsbeta_+^T\pseudoinv{A}(k)\bsbeta_+& \text{ if } \ensuremath{N} = 2k+1\\ -\momentcomp{\ensuremath{N}}+\bsbeta_+^T\pseudoinv{\left(A(k-1)+B(k-1)\right)}\bsbeta_+~ & \text{ if } \ensuremath{N} = 2k \end{cases} \end{align*} where in the odd case \begin{align*} \bsbeta_- = \left(\momentcomp{k}-\momentcomp{k+2},\ldots,\momentcomp{\ensuremath{N}-2}-\momentcomp{\ensuremath{N}}\right)^T,\qquad \bsbeta_+ = \left(\momentcomp{k+1},\ldots,\momentcomp{\ensuremath{N}}\right)^T \end{align*} and in the even case \begin{align*} \bsbeta_\mp = \left(\momentcomp{k}\mp\momentcomp{k+1},\ldots,\momentcomp{\ensuremath{N}-1}\mp\momentcomp{\ensuremath{N}}\right)^T. \end{align*} \end{corollary} \begin{proof} For convenience, a proof of the even case if given. The odd case follows similarly. If $\ensuremath{N} = 2k>0$, the $(\ensuremath{N}+1)^\text{th}$-order realizable set is given by \begin{align*} \mathbb{R}D{\fmbasis+1}{} = \left\{\moments[\ensuremath{N}+1]\in\mathbb{R}^{\ensuremath{N}+2}~|~ {A(k)\geq B(k),~A(k)\geq -B(k)}\right\}. \end{align*} The moment constraints have the form \begin{align*} A(k)\mpB(k) = \begin{pmatrix} A(k-1)\mpB(k-1) & \bsbeta_\mp\\\bsbeta_\mp^T & \momentcomp{\ensuremath{N}}\mp \momentcomp{\ensuremath{N}+1} \end{pmatrix}\geq 0. 
\end{align*} Thus, by \lemref{lem:pd-ext} and \rmref{rem:Pseudoinverse}, it holds that \begin{align} \label{eq:bounds1} \momentcomp{\ensuremath{N}}\mp \momentcomp{\ensuremath{N}+1} &\geq \bsbeta_\mp^T\pseudoinv{\left(A(k-1)\mpB(k-1)\right)}\bsbeta_\mp = \bw_\mp^T\left(A(k-1)\mpB(k-1)\right)\bw_\mp~,\\ \label{eq:bounds2} \left(A(k-1)\mpB(k-1)\right)\bw_\mp &= \bsbeta_\mp \end{align} Consequently, \begin{align*} \momentcomp{\ensuremath{N}}-\bsbeta_-^T\pseudoinv{\left(A(k-1)-B(k-1)\right)}\bsbeta_- \geq \momentcomp{\ensuremath{N}+1} \geq -\momentcomp{\ensuremath{N}}+\bsbeta_+^T\pseudoinv{\left(A(k-1)+B(k-1)\right)}\bsbeta_+~. \end{align*} \end{proof} \begin{remark} By convention, $\bsbeta_-^T\pseudoinv{\left(A(k-1)-C(k-1)\right)}\bsbeta_- = 0$ if $\ensuremath{N} = 1$. \end{remark} { \begin{remark} An equivalent description for $\ensuremath{f_{\text{up}}}$ and $\ensuremath{f_{\text{low}}}$ has been derived in \cite{Ker76} using determinants of the Hankel matrices. However, that description is only well-defined if these matrices are invertible. This can lead to numerical problems close to the realizability boundary. In implementation, the formulation given in \lemref{lem:pd-ext} proved to be the most stable. For example in the proof of \corref{cor:FupFlow} it is more stable to first calculate the solution $\bw_\mp$ of \eqref{eq:bounds2} (this can be handled efficiently even in the singular case using the \emph{backslash} operator in Matlab \cite{MATLAB:2012}) and plug it into the quadratic form on the right-hand side of \eqref{eq:bounds1} instead of calculating the pseudoinverse of $A(k-1)\mpB(k-1)$. \end{remark} } \begin{example} \label{ex:M2Realizability} For the full-moment setting $\fmbasis[2]$, i.e. $k=1$, the Hankel matrices have the form \begin{align*} A(0) = \momentcomp{0},~ B(0) = \momentcomp{1},~ C(0) = \momentcomp{2},~ A(1) = \begin{pmatrix} \momentcomp{0} & \momentcomp{1}\\ \momentcomp{1} & \momentcomp{2} \end{pmatrix}. \end{align*} Sylvester's criterion implies that a symmetric matrix is positive semi-definite if its leading principal minors are non-negative \cite{meyer2000matrix}. Thus, all moments in $\mathbb{R}D{\basis[2]}{}$ satisfy \begin{align} \label{eq:M2Realizability} \momentcomp{0}\geq 0, \qquad\momentcomp{0}\momentcomp{2}\geq \momentcomp{1}^2,\quad \mbox{and} \quad \momentcomp{0}\geq\momentcomp{2}. \end{align} Note that these conditions also imply the first-order realizability condition \begin{align*} \pm\momentcomp{1}\leq \momentcomp{0}. \end{align*} The lower second-order realizability boundary is given by $\momentcomp{0}\momentcomp{2} = \momentcomp{1}^2$ while $\momentcomp{0}=\momentcomp{2}$ defines the upper one.\\ The third-order realizability conditions in the non-singular case $\pm \momentcomp{1} < \momentcomp{0}$ are given by \cite{Ker76} \begin{align} \label{eq:FullMomentsThirdOrderRealizabilityConditions} \momentcomp{2}-\cfrac{(\momentcomp{1}-\momentcomp{2})^2}{\momentcomp{0}-\momentcomp{1}}\geq \momentcomp{3} \geq -\momentcomp{2}+\cfrac{(\momentcomp{1}+\momentcomp{2})^2}{\momentcomp{0}+\momentcomp{1}}~. 
\end{align} \end{example} \begin{remark} Due to the structure of the Hankel matrices, the upper and lower bounds can be expressed in terms of normalized moments $\ensuremath{\bn}izedmoments$ as \begin{align} \label{eq:UpperLowerBoundsFullMomentsNorm} \ensuremath{f_{\text{up}}}(1,\ldots,\ensuremath{\bn}izedmomentcomp{\ensuremath{N}})\geq \ensuremath{\bn}izedmomentcomp{\ensuremath{N}+1} \geq \ensuremath{f_{\text{low}}}(1,\ldots,\ensuremath{\bn}izedmomentcomp{\ensuremath{N}}). \end{align} \end{remark} \section{Kershaw closures} \subsection{Derivation} With the previous realizability theory it is now possible to develop another closure strategy, which is called \emph{Kershaw} closure. The key idea of Kershaw in \cite{Ker76} was to derive an easy closure that preserves the realizability conditions, i.e. for every $\moments[{\basis[\ensuremath{N}]}]\in \mathbb{R}D{\basis[\ensuremath{N}]}{}$ the moment vector including the higher-order moments of the flux, generated by the closure relation, satisfies $\moments[{\basis[\ensuremath{N}+1]}]\in \mathbb{R}D{\basis[\ensuremath{N}+1]}{}$. If one uses the non-negative representing (atomic) distributions provided by the realizability theorems above, this is satisfied automatically. A very important property is defined as follows. \begin{definition} \mbox{ }\\ A representing distribution $\distribution$ for $\moments$ is said to \emph{interpolate the isotropic point} if $$\ints{\basis[\ensuremath{N}]\distribution} = \frac12\ints{\distribution}\ints{\basis[\ensuremath{N}]} = \frac12\ints{\distribution}\moments[\text{iso}] ~\text{ implies }~ \ints{\basis[\ensuremath{N}+1]\distribution} = \frac12\ints{\distribution}\ints{\basis[\ensuremath{N}+1]},$$ i.e. the moment vector including the $(\ensuremath{N}+1)^{\text{th}}$ moment is also isotropic with respect to $\basis[\ensuremath{N}+1]$. \end{definition} Unfortunately, it is in general wrong that this atomic distribution correctly interpolates the isotropic point, but it is possible to cure this problem abusing the structure of the bounded realizable set. Since $\mathbb{R}Done{\basis[\ensuremath{N}+1]}{} = \left\{\moments\in \mathbb{R}D{\basis[\ensuremath{N}+1]}{}~|~\momentcomp{0} = 1 \right\}$ is convex and bounded \cite{Schneider2014}, every vector $\moments[{\basis[\ensuremath{N}+1]}]\in\mathbb{R}Done{\basis[\ensuremath{N}+1]}{}$ can be written as a convex combination of moment vectors on the realizability boundary. Since all these boundary moments are by definition realizable, and due to the linearity of the moment problem, the convex combination of their representing densities is a representing density for $\moments[{\basis[\ensuremath{N}+1]}]$. All that remains is to choose the convex combination in such a way that the isotropic point is correctly interpolated. However, this requires $(\ensuremath{N}+1)^{\text{th}}$-order realizability information for a $\ensuremath{N}^{\text{th}}$-order closure. Since the above definition of a Kershaw closure is very abstract, the procedure is introduced in detail for $\fmbasis[1]$. Recall the second-order realizability conditions \eqref{eq:M2Realizability}. In normalized moments, they are equivalent to $$ -1 \leq \ensuremath{\bn}izedmomentcomp{1}\leq 1,\quad \ensuremath{\bn}izedmomentcomp{1}^2\leq \ensuremath{\bn}izedmomentcomp{2}\leq 1. 
$$ Thus the normalized second moment $\ensuremath{\bn}izedmomentcomp{2}$ is bounded from above and below by $\ensuremath{f_{\text{up}}}(\ensuremath{\bn}izedmomentcomp{1}) = 1$ and $\ensuremath{f_{\text{low}}}(\ensuremath{\bn}izedmomentcomp{1}) = \ensuremath{\bn}izedmomentcomp{1}^2$ depending only on $\ensuremath{\bn}izedmomentcomp{1}$. The distribution on the lower second-order realizability boundary ($\ensuremath{\bn}izedmomentcomp{2} = \ensuremath{\bn}izedmomentcomp{1}^2$) is given by \begin{align} \label{eq:K1psilow} \ansatz[\text{low}] = \momentcomp{0}\ensuremath{\partiallta}\left(\ensuremath{\bn}izedmomentcomp{1}-\ensuremath{\Omega}height\right) \end{align} while the upper second-order realizability-boundary distribution ($\ensuremath{\bn}izedmomentcomp{2}=1$) is given by \begin{align} \label{eq:K1psiup} \ansatz[\text{up}] = \momentcomp{0}\left(\frac{1-\ensuremath{\bn}izedmomentcomp{1}}{2}\ensuremath{\partiallta}(1+\ensuremath{\Omega}height) + \frac{1+\ensuremath{\bn}izedmomentcomp{1}}{2}\ensuremath{\partiallta}(1-\ensuremath{\Omega}height)\right). \end{align} By linearity of the problem, every convex combination \begin{align} \label{eq:K1Distribution} \ansatz = \ensuremath{\ensuremath{z}eta}\ansatz[\text{low}] + (1-\ensuremath{\ensuremath{z}eta})\ansatz[\text{up}],\quad \ensuremath{\ensuremath{z}eta}\in[0,1] \end{align} reproduces all moments up to first order and satisfies $\ansatz\geq 0$. \begin{remark} The choice of $\ensuremath{\ensuremath{z}eta} = \ensuremath{\ensuremath{z}eta}(\ensuremath{\bn}izedmoments)$ is completely arbitrary. There are only two conditions, namely $\ensuremath{\ensuremath{z}eta}\in[0,1]$ (convex combination property) and $\ensuremath{\ensuremath{z}eta}(\ensuremath{\bn}izedmoments[\text{iso}])$ is chosen such that the isotropic point is correctly interpolated. If, for whatever reason, other points should be interpolated as well, these interpolation conditions can be satisfied similarly. For simplicity, $\ensuremath{\ensuremath{z}eta}$ is chosen to be constant. This also ensures that the described procedure is uniquely determined. \end{remark} Calculating normalized isotropic moments up to order two gives $\ensuremath{\bn}izedmoments[\text{iso}] = (0,\frac{1}{3})$. The normalized second moment of \eqref{eq:K1Distribution} at the isotropic point therefore satisfies \begin{align*} \ensuremath{\bn}izedmomentcomp{2} = \ensuremath{\ensuremath{z}eta} (\ensuremath{\bn}izedmomentcomp{1})^2+(1-\ensuremath{\ensuremath{z}eta}) \stackrel{\ensuremath{\bn}izedmomentcomp{1}=0}{=} (1-\ensuremath{\ensuremath{z}eta}) \stackrel{!}{=} \frac{1}{3}. \end{align*} This implies $\ensuremath{\ensuremath{z}eta} = \frac{2}{3}$, altogether resulting in the analytically closed directive for the normalized second moment \begin{align} \label{eq:K1closure} \ensuremath{\bn}izedmomentcomp{2} = \frac23\ensuremath{\bn}izedmomentcomp{1}^2+\frac13. \end{align} This construction procedure is shown in \figref{fig:K1Construction}, where the corresponding upper and lower realizability conditions and their convex combination interpolating the isotropic point are plotted.\\ \begin{figure} \caption{Construction of the $\KN[1]$ closure. The red dash-dotted curve is the convex combination of the upper (black, dashed) and lower (blue, solid) second-order boundary-moments which interpolates the second-order isotropic point (red dot).} \label{fig:K1Construction} \end{figure} This model is called the \emph{Kershaw $\KN[1]$ closure}. Similarly, higher-order Kershaw closures can be derived. 
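The resulting closure is trivial to evaluate. The following Python fragment (illustration only, with ad-hoc names) confirms numerically that \eqref{eq:K1closure} interpolates the isotropic point and stays inside the realizable band $\ensuremath{\bn}izedmomentcomp{1}^2\leq\ensuremath{\bn}izedmomentcomp{2}\leq 1$.
\begin{verbatim}
import numpy as np

def closure_K1(u0, u1):
    """Kershaw K1 closure: un-normalized second moment from (u_0, u_1)."""
    phi1 = u1 / u0                                # normalized first moment
    return u0 * (2.0 / 3.0 * phi1**2 + 1.0 / 3.0)

# Isotropic point: u = (1, 0) must reproduce the isotropic second moment 1/3.
assert np.isclose(closure_K1(1.0, 0.0), 1.0 / 3.0)

# The closure stays inside the realizable band phi1^2 <= phi2 <= 1.
phi1 = np.linspace(-1.0, 1.0, 1001)
phi2 = 2.0 / 3.0 * phi1**2 + 1.0 / 3.0
assert np.all(phi1**2 <= phi2 + 1e-14) and np.all(phi2 <= 1.0 + 1e-14)
print("K1 closure realizable on the sampled grid")
\end{verbatim}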
If only the flux has to be closed, it suffices to calculate the interpolation on the moment level. \begin{example}[${\KN[2]}$ closure] The third-order realizability conditions \eqref{eq:FullMomentsThirdOrderRealizabilityConditions} can be written as \begin{align*} \ensuremath{\bn}izedmomentcomp{2}-\cfrac{(\ensuremath{\bn}izedmomentcomp{1}-\ensuremath{\bn}izedmomentcomp{2})^2}{1-\ensuremath{\bn}izedmomentcomp{1}}\geq \ensuremath{\bn}izedmomentcomp{3} \geq -\ensuremath{\bn}izedmomentcomp{2}+\cfrac{(\ensuremath{\bn}izedmomentcomp{1}+\ensuremath{\bn}izedmomentcomp{2})^2}{1+\ensuremath{\bn}izedmomentcomp{1}}~. \end{align*} The normalized isotropic moment of third order is $\ensuremath{\bn}izedmoments[\text{iso}] = \left(0,\frac13,0\right)^T$. To obtain the $\KN[2]$ closure, the ansatz is again a convex combination of upper and lower boundary moments, i.e. \begin{align} \label{eq:Kershaw2ConvexCombination} \ensuremath{\bn}izedmomentcomp{3}(\ensuremath{\bn}izedmoments) = \ensuremath{\ensuremath{z}eta}\left(-\ensuremath{\bn}izedmomentcomp{2}+\cfrac{(\ensuremath{\bn}izedmomentcomp{1}+\ensuremath{\bn}izedmomentcomp{2})^2}{1+\ensuremath{\bn}izedmomentcomp{1}}\right) +(1-\ensuremath{\ensuremath{z}eta})\left(\ensuremath{\bn}izedmomentcomp{2}-\cfrac{(\ensuremath{\bn}izedmomentcomp{1}-\ensuremath{\bn}izedmomentcomp{2})^2}{1-\ensuremath{\bn}izedmomentcomp{1}}\right). \end{align} Evaluating \eqref{eq:Kershaw2ConvexCombination} at $\ensuremath{\bn}izedmomentcomp{1} = 0$, $\ensuremath{\bn}izedmomentcomp{2} = \frac13$ gives \begin{align*} \ensuremath{\bn}izedmomentcomp{3}(\ensuremath{\bn}izedmoments[\text{iso}]) = \frac{4\, \ensuremath{\ensuremath{z}eta}}{9} - \frac{2}{9} \stackrel{!}{=} 0, \end{align*} which implies $\ensuremath{\ensuremath{z}eta} = \frac12$. Therefore, the $\KN[2]$ closure for the third moment is given by \begin{align} \label{eq:K2closure} \ensuremath{\bn}izedmomentcomp{3}(\ensuremath{\bn}izedmoments) = \frac{\ensuremath{\bn}izedmomentcomp{1}\, \left(\ensuremath{\bn}izedmomentcomp{1}^2 + \ensuremath{\bn}izedmomentcomp{2}^2 - 2\, \ensuremath{\bn}izedmomentcomp{2}\right)}{\ensuremath{\bn}izedmomentcomp{1}^2 - 1}. \end{align} See also \figref{fig:K2FluxAndDifference} for a visualization of \eqref{eq:K2closure} and its slight deviation from the minimum-entropy $\MN[2]$ model. \end{example} \begin{figure} \caption{Third normalized moment $\ensuremath{\bn} \label{fig:K2FluxAndDifference} \end{figure} \begin{remark} Note that the interpolation procedure given in \cite{Monreal} produces different results for $\ensuremath{N}>1$. There, the $\KN[2]$ closure is given by \begin{align*} \ensuremath{\bn}izedmomentcomp{3}(\ensuremath{\bn}izedmoments) = \ensuremath{\bn}izedmomentcomp{1}\ensuremath{\bn}izedmomentcomp{2}. \end{align*} { These two representations are identical on the realizability boundary but differ slightly in the interior. However, the overall structure generated by this approach is completely different, as investigated later on in \secref{sec:Eigenstructure}. Furthermore, the here-derived model is consistent with the construction of the mixed-moment Kershaw closures in \cite{Schneider2014}. 
} \end{remark} { \begin{theorem} \mbox{ }\\ The \emph{Kershaw closure} $\KN$ of order $\ensuremath{N}$ is given by \begin{align} \label{eq:KnAnsatz} \ensuremath{\bn}izedmomentcomp{\ensuremath{N}+1}(\ensuremath{\bn}izedmoments) = \ensuremath{\ensuremath{z}eta} \ensuremath{f_{\text{low}}}(\ensuremath{\bn}izedmoments)+(1-\ensuremath{\ensuremath{z}eta}) \ensuremath{f_{\text{up}}}(\ensuremath{\bn}izedmoments), \end{align} where the interpolation constant \begin{align} \label{eq:KnAnsatzScalar} \ensuremath{\ensuremath{z}eta} = \cfrac{\frac12\ints{\ensuremath{\Omega}height^{\ensuremath{N}+1}}-\ensuremath{f_{\text{up}}}(\ensuremath{\bn}izedmoments[\text{iso}])}{\ensuremath{f_{\text{low}}}(\ensuremath{\bn}izedmoments[\text{iso}])-\ensuremath{f_{\text{up}}}(\ensuremath{\bn}izedmoments[\text{iso}])} = \begin{cases} \frac{k+2}{2k+3} & \text{ if } \ensuremath{N} = 2k+1\\ \frac12 & \text{ if } \ensuremath{N} = 2k \end{cases} \end{align} is defined via the functions $\ensuremath{f_{\text{up}}}$ and $\ensuremath{f_{\text{low}}}$ as given in \corref{cor:FupFlow}. \end{theorem} } \begin{proof} Using the ansatz \eqref{eq:KnAnsatz}, the interpolation condition yields \begin{align*} \ensuremath{\bn}izedmomentcomp{\ensuremath{N}+1}(\ensuremath{\bn}izedmoments[\text{iso}]) = \ensuremath{\ensuremath{z}eta} \ensuremath{f_{\text{low}}}(\ensuremath{\bn}izedmoments[\text{iso}])+(1-\ensuremath{\ensuremath{z}eta}) \ensuremath{f_{\text{up}}}(\ensuremath{\bn}izedmoments[\text{iso}]) \stackrel{!}{=} \frac12\ints{\ensuremath{\Omega}height^{\ensuremath{N}+1}}. \end{align*} Solving this equation for $\ensuremath{\ensuremath{z}eta}$ gives the first equality in \eqref{eq:KnAnsatzScalar}. Since the proof of the second equality only requires simple algebra and analysis but is tedious and unenlightening, it is omitted here. \end{proof} \begin{remark} The big advantage of Kershaw closures is that the calculation of the fluxes is very cheap compared to minimum-entropy models (recall that they require to solve the non-linear moment system \eqref{eq:dual} to calculate the flux) while mimicking their behaviour. Indeed, the more robust $\bw$-notation in \corref{cor:FupFlow} only requires the solution of four linear systems of dimension at most $k+1$. Even more, for $\ensuremath{n}$ not too big, the closures can be derived symbolically leading to an even greater gain in efficiency. {Also note that the class of Kershaw closures is related to the often-used \emph{quadrature method of moments}. See \cite{Schneider2014} for a discussion of the similarities and differences between these two classes of models.} \end{remark} \subsection{Eigenvalues and characteristic fields} \label{sec:Eigenstructure} While much is known about the eigenstructure of minimum-entropy models, until now no general theory is available for Kershaw closures. This section briefly investigates the eigenstructure and the characteristic fields of the first two models in this hierarchy to show that they are indeed hyperbolic. 
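Before deriving the Jacobians analytically, the hyperbolicity can be probed numerically. The following Python sketch (illustration only; the finite-difference step and the sample points are ad-hoc choices) assembles the flux Jacobian of the $\KN[2]$ closure \eqref{eq:K2closure} by central differences at a few moment vectors in the interior of the realizable set and prints its eigenvalues, which are expected to be real there, in accordance with the discussion below.
\begin{verbatim}
import numpy as np

def closure_K2(u):
    """Kershaw K2 closure, eq. (K2closure): third moment from u = (u_0,u_1,u_2)."""
    u0, u1, u2 = u
    p1, p2 = u1 / u0, u2 / u0
    return u0 * p1 * (p1**2 + p2**2 - 2.0 * p2) / (p1**2 - 1.0)

def flux_K2(u):
    """Flux of the K2 moment system: (u_1, u_2, u_3(u))."""
    return np.array([u[1], u[2], closure_K2(u)])

def jacobian_fd(u, h=1e-6):
    """Central finite-difference approximation of the flux Jacobian F'(u)."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (flux_K2(u + e) - flux_K2(u - e)) / (2.0 * h)
    return J

# Normalized moments strictly inside the realizable set: phi1^2 < phi2 < 1.
for p1, p2 in [(0.0, 0.34), (0.3, 0.5), (-0.5, 0.6), (0.7, 0.8)]:
    lam = np.linalg.eigvals(jacobian_fd(np.array([1.0, p1, p2])))
    print(p1, p2, np.sort(lam.real))   # real eigenvalues are expected here
\end{verbatim}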
Due to the structure of the flux, similar to the minimum-entropy equations, the Jacobian $\ensuremath{\bF}'$ always has ones on the first super-diagonal and last row $\left(\nabla_{\moments[]} \momentcomp{\ensuremath{N}+1}\right)^T$, such that every eigenvalue $\lambda$ of $\ensuremath{\bF}'$ has the eigenvector \begin{align} \label{eq:MnEigenvectors} \ensuremath{\bv} &= \left(1,\ensuremath{\lambda},\ensuremath{\lambda}^2,\ldots,\ensuremath{\lambda}^{\ensuremath{N}}\right)^T\quad\text{ with }\\ \label{eq:MnEigenvalueEq} 0 &= \ensuremath{\lambda}^{\ensuremath{N}+1}-\left(\nabla_{\moments[]} \momentcomp{\ensuremath{N}+1}\right)^T\ensuremath{\bv}. \end{align} \begin{example}[The {$\KN[1]$} closure] Given \eqref{eq:K1closure}, it is easy to conclude that \begin{align} \ensuremath{\bF}'(\moments) = \left(\begin{array}{cc} 0 & 1\\ \frac{1}{3} - \frac{2\, \ensuremath{\bn}izedmomentcomp{1}^2}{3} & \frac{4\, \ensuremath{\bn}izedmomentcomp{1}}{3} \end{array}\right) \end{align} with eigenvalues \begin{align*} \ensuremath{\lambda}_{1,2} = \frac{2\, \ensuremath{\bn}izedmomentcomp{1}}{3} \mp \frac{\sqrt{3 - 2\, \ensuremath{\bn}izedmomentcomp{1}^2}}{3} \end{align*} and eigenvectors as in \eqref{eq:MnEigenvectors}. These eigenvalues are shown in \figref{fig:K1Eigenvalues}. Comparing these values with the eigenvalues of the $\MN[1]$ model (dashed) reveals, for the first time, a remarkable difference between minimum-entropy and Kershaw models. While the eigenvalues of the $\MN[1]$ model satisfy $\ensuremath{\lambda}_1(\pm 1) = \ensuremath{\lambda}_2(\pm 1)$, the $\KN[1]$ eigenvalues evaluate as \begin{align*} \ensuremath{\lambda}_1(1) = \frac13&\neq \ensuremath{\lambda}_2(1) = 1,\\ \ensuremath{\lambda}_2(-1) = -\frac13&\neq \ensuremath{\lambda}_1(-1) = -1. \end{align*} A simple calculation shows that \begin{align*} \ensuremath{\bv}_1\cdot \nabla_{\moments} \ensuremath{\lambda}_1 = -\cfrac{2}{9\momentcomp{0}}\left(2\ensuremath{\bn}izedmomentcomp{1}+\cfrac{3-\ensuremath{\bn}izedmomentcomp{1}^2}{\sqrt{3-2\ensuremath{\bn}izedmomentcomp{1}^2}}\right), \end{align*} which is zero for $\ensuremath{\bn}izedmomentcomp{1} = -1$, but not for $\ensuremath{\bn}izedmomentcomp{1} = 1$. A similar result is true for $\ensuremath{\bv}_2\cdot \nabla_{\moments} \ensuremath{\lambda}_2$, which is zero for $\ensuremath{\bn}izedmomentcomp{1} = 1$, but not for $\ensuremath{\bn}izedmomentcomp{1} = -1$. This means that only one of the characteristic fields of the $\KN[1]$ model degenerates on the realizability boundary. In contrast to the $\MN[1]$ model, $\ensuremath{\bv}_\ensuremath{i}\cdot\nabla_{\moments} \ensuremath{\lambda}_\ensuremath{i}$ is monotonic. The above-derived characteristic fields are shown in \figref{fig:K1Fields}. Note that the author of \cite{Monreal} claimed that at the realizability boundary $\ensuremath{\bn}izedmomentcomp{1} = \pm 1$ both characteristic fields of the $\KN[1]$ model are linearly degenerate. As shown above, this is incorrect; it stems from a mistake in the author's calculation of $\nabla_{\moments} \ensuremath{\lambda}_i$.
\end{example} \begin{figure} \caption{Eigenvalues and characteristic fields of the $\KN[1]$ model.} \label{fig:K1Eigenvalues} \end{figure} \begin{example}[The {$\KN[2]$} closure] \label{ex:K2Eigenvalues} Starting from \eqref{eq:K2closure}, it follows that \begin{align} \label{eq:K2Jacobian} \small\ensuremath{\bF}'(\moments) = \left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1\\ \frac{2\, \normalizedmomentcomp{1}\, \left(\normalizedmomentcomp{2} - 1\right)\, \left(\normalizedmomentcomp{2} - \normalizedmomentcomp{1}^2\right)}{{\left(\normalizedmomentcomp{1}^2 - 1\right)}^2} & 1 - \frac{{\left(\normalizedmomentcomp{2} - 1\right)}^2}{2\, {\left(\normalizedmomentcomp{1} + 1\right)}^2} - \frac{{\left(\normalizedmomentcomp{2} - 1\right)}^2}{2\, {\left(\normalizedmomentcomp{1} - 1\right)}^2} & \frac{2\, \normalizedmomentcomp{1}\, \left(\normalizedmomentcomp{2} - 1\right)}{\normalizedmomentcomp{1}^2 - 1} \end{array}\right). \end{align} Since the general formula for the eigenvalues is very lengthy, it is omitted here. Instead, the eigenvalues are plotted in \figref{fig:K2Eigenvalues}. The minimal and maximal distance between two adjacent eigenvalues is depicted in \figref{fig:K2Eigenvalues2}. It is visible that on the lower realizability boundary, where $\normalizedmomentcomp{2} = \normalizedmomentcomp{1}^2$ and \begin{align} \label{eq:K2JacobianOnLowerBoundary} \ensuremath{\bF}'(\moments) = \left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & - \normalizedmomentcomp{1}^2 & 2\, \normalizedmomentcomp{1} \end{array}\right), \end{align} the minimal distance is zero, while the maximal distance is greater than zero almost everywhere. Calculating the eigenvalues of \eqref{eq:K2JacobianOnLowerBoundary}, one obtains zero as a single eigenvalue and $\normalizedmomentcomp{1}$ twice. For $\normalizedmomentcomp{1}=\normalizedmomentcomp{2}=0$, all eigenvalues coincide. On the other hand, looking at the upper realizability boundary, i.e. $\normalizedmomentcomp{2}=1$, it follows that \begin{align} \label{eq:K2JacobianOnUpperBoundary} \ensuremath{\bF}'(\moments) = \left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{array}\right), \end{align} which has eigenvalues $\ensuremath{\lambda}_1 = -1$, $\ensuremath{\lambda}_2 = 0$ and $\ensuremath{\lambda}_3 = 1$. Therefore, the system is, like the $\KN[1]$ model, strictly hyperbolic on the upper realizability boundary. Note that this calculation also implies that the eigenvalues of \eqref{eq:K2Jacobian} are discontinuous at $\abs{\normalizedmomentcomp{1}}=\normalizedmomentcomp{2}=1$. This is different for the $\KN[2]$ model derived in \cite{Monreal}; however, the model there is linearly degenerate everywhere. Nevertheless, the $\MN[2]$ model shows the same behaviour, see e.g. \cite{Wright2009} where this is shown for a different entropy. \end{example} \begin{figure} \caption{Eigenvalues of the flux-Jacobian \eqref{eq:K2Jacobian}.} \label{fig:K2Eigenvalues} \end{figure} The characteristic fields, given in \figref{fig:K2Fields}, are always zero on exactly two parts of the realizability boundary. For example, $\ensuremath{\bv}_1\cdot\nabla_{\moments}\ensuremath{\lambda}_1$ is zero for $\normalizedmomentcomp{2}=\normalizedmomentcomp{1}^2$ and $\normalizedmomentcomp{1}\in[-1,0]$ or $\normalizedmomentcomp{2}=1$.
Similarly, $\ensuremath{\bv}_3\cdot\nabla_{\moments}\ensuremath{\lambda}_3 = 0$ for $\normalizedmomentcomp{2}=\normalizedmomentcomp{1}^2$ and $\normalizedmomentcomp{1}\in[0,1]$ or $\normalizedmomentcomp{2}=1$. The second field (\figref{fig:K2Field2}) is zero for $\normalizedmomentcomp{2}=\normalizedmomentcomp{1}^2$ or $\normalizedmomentcomp{1} = 0$. Overall, at least one of the characteristic fields is never linearly degenerate, which is in good agreement with the behaviour of the $\KN[1]$ system (see \figref{fig:K1Fields}). Note that the third characteristic field is skew-symmetric to the first field along the symmetry axis $\normalizedmomentcomp{1} = 0$. \begin{figure} \caption{Distance of adjacent eigenvalues of the flux-Jacobian \eqref{eq:K2Jacobian}.} \label{fig:K2Eigenvalues2} \end{figure} \begin{figure} \caption{Two characteristic fields of the $\KN[2]$ model. The values of the second characteristic field are limited to $[-10,10]$ for easier visualization.} \label{fig:K2Fields} \end{figure} \section{Numerical experiments} This section contains some often-used benchmark problems for moment models. With them it is possible to qualitatively investigate the differences between the derived moment models. The following results are calculated on a highly-resolved first-order grid ($\ensuremath{n_z} = 10000$), using the techniques given in \cite{Schneider2015a}. Everything is calculated with isotropic scattering, i.e. $\collision{\distribution} = \frac12\momentcomp{0}-\distribution$. The \emph{reference solution} is given by the $\PN[199]$ model. \subsection{Plane source} \label{sec:Planesource} In this test case an isotropic distribution with all mass concentrated in the middle of an infinite domain $\ensuremath{z} \in (-\infty, \infty)$ is defined as initial condition, i.e. \begin{align*} \ensuremath{\distribution[\ensuremath{t}=0]}(\ensuremath{z}, \ensuremath{\mu}) = \ensuremath{\distribution[\text{vac}]} + \delta(\ensuremath{z}), \end{align*} where the small parameter $\ensuremath{\distribution[\text{vac}]} = 0.5 \times 10^{-8}$ is used to approximate a vacuum. In practice, a bounded domain must be used which is large enough that the boundary has only a negligible effect on the solution. For the final time $\ensuremath{t_f} = 1$, the domain is set to $\ensuremath{X} = [-1.2, 1.2]$ (recall that for all presented models the maximal speed of propagation is bounded in absolute value by one). At the boundary the vacuum approximation \begin{align*} \ensuremath{\distribution[b]}(\ensuremath{t},\ensuremath{z_L},\ensuremath{\mu}) \equiv \ensuremath{\distribution[\text{vac}]} \quad \mbox{and} \quad \ensuremath{\distribution[b]}(\ensuremath{t},\ensuremath{z_R},\ensuremath{\mu}) \equiv \ensuremath{\distribution[\text{vac}]} \end{align*} is used again. Furthermore, the physical coefficients are set to $\ensuremath{\sigma_s} \equiv 1$, $\ensuremath{\sigma_a} \equiv 0$ and $\ensuremath{Q} \equiv 0$. All solutions are computed with an even number of cells, so the initial Dirac delta lies on a cell boundary. Therefore it is approximated by splitting it into the cells immediately to the left and right.
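As an illustration of this setup, the following short Python sketch (our own; variable names are hypothetical) builds the initial cell averages of the local particle density on such a grid, splitting the Dirac delta between the two cells adjacent to $\ensuremath{z} = 0$ on top of the vacuum floor.
\begin{verbatim}
import numpy as np

# Plane-source initial condition: vacuum floor plus a Dirac delta at z = 0,
# approximated by cell averages on a uniform grid with an even cell count.
# (The normalization of the angular average of the vacuum floor is omitted.)
n_z, z_min, z_max = 10000, -1.2, 1.2
dz = (z_max - z_min) / n_z
psi_vac = 0.5e-8

u0 = np.full(n_z, psi_vac)   # zeroth moment (local particle density)
mid = n_z // 2               # z = 0 lies on the interface between cells mid-1 and mid
u0[mid - 1] += 0.5 / dz      # half of the delta mass in the left neighbouring cell
u0[mid] += 0.5 / dz          # half of the delta mass in the right neighbouring cell
\end{verbatim}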
In all figures below, only positive $\ensuremath{z}$ are shown since the solutions are always symmetric around $\ensuremath{z} = 0$.\\ Noting that the method of moments is indeed a type of spectral method, it can be expected that, due to the non-smoothness of the initial condition, the convergence towards the kinetic solution of this test case is slow (note that $\ensuremath{\distribution[\ensuremath{t}=0]}(\cdot, \ensuremath{\mu})\notin\Lp{p}$ for any $p$). This is shown in \figref{fig:PlanesourceConvergence}, where the $\Lp{1}$- and $\Lp{\infty}$-errors measured in the local particle density evaluated at the final time are plotted against the number of moments for the isotropic-scattering operator. Minimum-entropy models behave noticeably better than the standard $\PN$ method. This has been observed as well in \cite{Hauck2010}. \begin{figure} \caption{$\Lp{p}$-errors in the local particle density for the plane-source test, plotted against the number of moments.} \label{fig:PlanesourceConvergence} \end{figure} A big surprise is the behaviour of the Kershaw models. Although there is a structural similarity between them and their corresponding minimum-entropy versions, the difference obviously increases with higher moment order. This can be seen in \figref{fig:PlanesourceIsotropicCutsFullMoments}, where $\MN$ (\figref{fig:PlanesourceIsotropicCutsMn}) and $\KN$ models (\figref{fig:PlanesourceIsotropicCutsKn}) of multiple orders are plotted beside each other. \begin{figure} \caption{$\MN$ and $\KN$ solutions of the plane-source test at $\ensuremath{t_f} = 1$.} \label{fig:PlanesourceIsotropicCutsFullMoments} \end{figure} \subsection{Source beam} \label{sec:SourceBeam} Finally, a discontinuous version of the source-beam problem from \cite{Hauck2013} is presented. The spatial domain is $\ensuremath{X} = [0,3]$, and \begin{gather*} \ensuremath{\sigma_a}(\ensuremath{z}) = \begin{cases} 1 & \text{ if } \ensuremath{z}\leq 2,\\ 0 & \text{ else}, \end{cases} \quad \ensuremath{\sigma_s}(\ensuremath{z}) = \begin{cases} 0 & \text{ if } \ensuremath{z}\leq 1,\\ 2 & \text{ if } 1<\ensuremath{z}\leq 2,\\ 10 & \text{ else} \end{cases} \quad \ensuremath{Q}(\ensuremath{z}) = \begin{cases} 1 & \text{ if } 1\leq \ensuremath{z}\leq 1.5,\\ 0 & \text{ else}, \end{cases} \end{gather*} with initial and boundary conditions \begin{gather*} \ensuremath{\distribution[\ensuremath{t}=0]}(\ensuremath{z}, \ensuremath{\mu}) \equiv \ensuremath{\distribution[\text{vac}]}, \\ \ensuremath{\distribution[b]}(\ensuremath{t},\ensuremath{z_L},\ensuremath{\mu}) = \cfrac{e^{-10^5(\ensuremath{\mu}-1)^2}}{\ints{e^{-10^5(\ensuremath{\mu}-1)^2}}} \quad \mbox{and} \quad \ensuremath{\distribution[b]}(\ensuremath{t},\ensuremath{z_R},\ensuremath{\mu}) \equiv \ensuremath{\distribution[\text{vac}]}. \end{gather*} The final time is $\ensuremath{t_f} = 2.5$ and the same vacuum approximation $\ensuremath{\distribution[\text{vac}]}$ as in the plane-source problem is used. $\Lp{p}$-errors are shown in \figref{fig:SourceBeamConvergence}. It is visible that the $\MN$ and $\KN$ models perform similarly. Both outperform the classical $\PN$ model. \begin{figure} \caption{$\Lp{p}$-errors for the source-beam test.} \label{fig:SourceBeamConvergence} \end{figure} \begin{figure} \caption{$\MN$ and $\KN$ solutions of the source-beam test at $\ensuremath{t_f} = 2.5$.} \label{fig:SourceBeamIsotropicCutsFullMoments} \end{figure} \section{Conclusions and outlook} In this paper, the basic concepts for deriving full-moment Kershaw closures were presented.
These models provide a substantial gain in efficiency compared to state-of-the-art minimum-entropy models, since they can be closed (in principle) analytically using the available realizability theory. Benchmark tests confirm that Kershaw closures can indeed compete with minimum-entropy models; in some situations, they are even better. Although the gain in efficiency is large, recent results for minimum-entropy models showed that using high-order numerics is still advantageous \cite{Schneider2015a,Schneider2015b}. The main difficulty in high-order schemes for moment models is that the property of realizability has to be preserved during the simulation, since otherwise the fluxes cannot be evaluated. Future work will have to investigate how to adapt the scheme in \cite{Schneider2015a} to Kershaw closures. Furthermore, different scattering operators should be taken into account, like the slightly more complicated (in terms of realizability preservation) Laplace-Beltrami operator. Finally, the concepts have to be lifted to higher dimensions. While fully three-dimensional first-order variants of Kershaw closures exist \cite{Ker76,Schneider2015c}, no higher-order models or completely closed theory are available yet. \end{document}
\begin{document} \begin{abstract} Efficient time integration schemes are necessary to capture the complex processes involved in atmospheric flows over long periods of time. In this work, we propose a high-order, implicit-explicit numerical scheme that combines Multi-Level Spectral Deferred Corrections (MLSDC) and the Spherical Harmonics (SH) transform to solve the wave-propagation problems arising from the shallow-water equations on the rotating sphere. The iterative temporal integration is based on a sequence of corrections distributed on coupled space-time levels to perform a significant portion of the calculations on a coarse representation of the problem and hence to reduce the time-to-solution while preserving accuracy. In our scheme, referred to as MLSDC-SH, the spatial discretization plays a key role in the efficiency of MLSDC, since the SH basis allows for consistent transfer functions between space-time levels that preserve important physical properties of the solution. We study the performance of the MLSDC-SH scheme with shallow-water test cases commonly used in numerical atmospheric modeling. We use this suite of test cases, which gradually adds more complexity to the nonlinear system of governing partial differential equations, to perform a detailed analysis of the accuracy of MLSDC-SH upon refinement in time. We illustrate the stability properties of MLSDC-SH and show that the proposed scheme achieves up to eighth-order convergence in time. Finally, we study the conditions in which MLSDC-SH achieves its theoretical speedup, and we show that it can significantly reduce the computational cost compared to single-level Spectral Deferred Corrections (SDC). \end{abstract} \begin{keyword} high-order time integration, multi-level spectral deferred corrections, implicit-explicit splitting, atmospheric flows, shallow-water equations on the rotating sphere, spherical harmonics \end{keyword} \title{\textbf{Multi-Level Spectral Deferred Corrections Scheme \\ for the Shallow Water Equations \\ on the Rotating Sphere}} \section{\label{section_introduction}Introduction} The numerical modeling of global atmospheric processes presents a challenging application area requiring accurate time integration methods for the discretized governing partial differential equations. These complex processes operate on a wide range of time scales but often have to be simulated over long periods of time -- up to a hundred years for long-term paleoclimate studies -- which constitutes a challenge for the design of stable and efficient integration schemes. One strategy for creating more efficient temporal integration schemes for such systems is to employ a semi-implicit scheme that allows larger time steps to be taken than with explicit methods at a cost that is less than that of fully implicit methods \citep{giraldo2005semi}. A second strategy is to use a parallel-in-time approach to solve multiple time steps concurrently on multiple processors. Examples of parallel-in-time methods include Parareal \citep{lions2001resolution}, the Parallel Full Approximation Scheme in Space and Time (PFASST, \cite{emmett2012toward}), and MultiGrid Reduction in Time (MGRIT, \cite{falgout2014parallel}). In this work, we consider semi-implicit, iterative, multi-level temporal integration methods based on Spectral Deferred Corrections (SDC) that are easily extended to high order and also serve as a first step toward constructing parallel-in-time integration methods for atmospheric dynamics based on PFASST.
SDC methods are first presented in \cite{dutt2000spectral} and consist in applying a sequence of low-order corrections -- referred to as sweeps -- to a provisional solution in order to achieve high-order accuracy. Single-level SDC schemes have been applied to a wide range of problems, including reacting flow simulation \citep{bourlioux2003high,layton2004conservative}, atmospheric modeling \citep{jia2013spectral}, particle motion in magnetic fields \citep{winkel2015high}, and radiative transport modeling \citep{crockatt2017arbitrary}. In \cite{jia2013spectral}, a fully implicit SDC scheme is combined with the Spectral Element Method (SEM) to solve the shallow-water equations on the rotating sphere. The authors demonstrate that the SDC method can take larger stable time steps than competing explicit schemes such as leapfrog, second-order Runge-Kutta methods, and implicit second-order Backward Differentiation Formula (BDF) method without loss of accuracy. The approach considered here for atmospheric simulations builds on the work of \cite{speck2015multi}, in which a Multi-Level Spectral Deferred Corrections (MLSDC) scheme is proposed to improve the efficiency of the SDC time integration process while preserving its high-order accuracy. MLSDC relies on the construction of coarse space-time representations -- referred to as levels -- of the problem under consideration. The calculations are then performed on this hierarchy of levels in a way that shifts a significant portion of the computational burden to the coarse levels. As in nonlinear multigrid methods, the space-time levels are coupled by the introduction of a Full Approximation Scheme (FAS) term in the collocation problems solved on coarse levels. With this multi-level approach, the iterative correction process requires fewer fine sweeps than the standard SDC scheme but still achieves fast convergence to the fixed point solution. Synthetic numerical examples demonstrate the efficiency and accuracy of the MLSDC approach. The MLSDC approach is combined here with a spatial discretization based on the global Spherical Harmonics (SH) transform to solve the shallow-water equations on the rotating sphere. This study is relevant for practical applications since the SH transform is implemented in major forecasting systems such as the Integrated Forecast System (IFS) at the European Centre for Medium-Range Weather Forecasts (ECMWF, \cite{wedi2013fast}) and the Global Spectral Model (GSM) at the Japan Meteorological Agency (JMA, \cite{kanamitsu1983description}). Using a highly accurate method in space significantly reduces the spatial discretization errors and allows us to focus on the temporal integration. Our approach, referred to as MLSDC-SH, uses a temporal splitting in which only the stiff linear terms in the governing equations are treated implicitly, whereas less stiff terms are evaluated explicitly. Here, the word \textit{stiff} is used to denote the terms that limit the time step size of fully explicit schemes. The temporal integration scheme retains the main features of the multi-level algorithm presented in \cite{speck2015multi}, and takes full advantage of the structure of the spatial discretization to achieve efficiency. Specifically, we construct accurate interpolation and restriction functions between space-time levels by padding or truncating the spectral representation of the variables in the SH transform. 
In addition, the spherical harmonics combined with the implicit-explicit temporal splitting considered in this work circumvent the need for a global linear solver and rely on an efficient local solver for the implicit systems. We illustrate the properties of MLSDC-SH using a widely used suite of shallow-water test cases \citep{williamson1992standard,galewsky2004initial}. We start the numerical study with a steady-state benchmark that highlights the connection between the magnitude of the spectral coefficients truncated during coarsening and the convergence rate of MLSDC-SH upon refinement in time. Then we proceed to more challenging unsteady test cases to show that MLSDC-SH is stable for large time steps and achieves up to eighth-order temporal convergence. Finally, we investigate the conditions in which the proposed scheme achieves its theoretical speedup and we demonstrate that MLSDC-SH can reduce the computational cost compared to single-level SDC schemes. In the remainder of the paper, we first introduce the system of governing equations in Section~\oldref{section_governing_equations}. Then, we briefly review the fundamentals of the spatial discretization based on the global SH transform in Section~\oldref{section_spatial_discretization}. In Section~\oldref{section_temporal_discretization}, we describe the implicit-explicit temporal integration scheme, with an emphasis on the Multi-Level Spectral Deferred Corrections (MLSDC) scheme. Finally, in Section~\oldref{section_numerical_examples}, we present numerical examples on the sphere demonstrating the efficiency and accuracy of our approach. \section{\label{section_governing_equations}Governing equations} We consider the Shallow-Water Equations (SWE) on the rotating sphere. These equations capture the main horizontal effects present in the full atmospheric equations. Well-defined test cases are available -- such as those considered in this work -- that relate the SWE to some key features of the full atmospheric equations. Hence, they provide a simplified assessment of the properties of temporal and spatial discretizations for atmospheric simulations on the rotating sphere. We use the vorticity-divergence formulation \citep{bourke1972efficient,hack1992description} in which the prognostic variables $\boldsymbol{U} = [ \Phi, \, \zeta, \, \delta ]^T$ are respectively the geopotential, $\Phi$, the vorticity, $\zeta$, and the divergence, $\delta$. Here, the vorticity and divergence state variables are used to overcome the singularities in the velocity field at the poles. The system of governing partial differential equations is \begin{align} \frac{\partial \Phi'}{\partial t} &= - \nabla \cdot ( \Phi' \boldsymbol{V} ) - \bar{\Phi} \delta + \nu \nabla^2 \Phi', \label{geopotential_equation} \\ \frac{\partial \zeta}{\partial t} &= - \nabla \cdot ( \zeta + f ) \boldsymbol{V} + \nu \nabla^2 \zeta, \label{vorticity_equation} \\ \frac{\partial \delta}{\partial t} &= \boldsymbol{k} \cdot \nabla \times (\zeta + f) \boldsymbol{V} - \nabla^2 \bigg( \Phi + \frac{\boldsymbol{V} \cdot \boldsymbol{V}}{2} \bigg) + \nu \nabla^2 \delta, \label{divergence_equation} \end{align} where $\boldsymbol{k}$ is the outward radial unit vector. The average geopotential, $\bar{\Phi} = g \bar{h}$, is the product of the gravitational acceleration and the average height, and $\Phi'$ is defined as $\Phi' = \Phi - \bar{\Phi}$.
The horizontal velocity vector is $\boldsymbol{V} \equiv \boldsymbol{i}u + \boldsymbol{j}v$, where $\boldsymbol{i}$ and $\boldsymbol{j}$ are the unit vectors in the eastward and northward directions, respectively. The Coriolis force is represented by $f = 2 \Omega \sin \phi$, where $\Omega$ is the angular rate of rotation, and $\phi$ is the latitude. The diffusion coefficient is denoted by $\nu$. Including a diffusion term in the governing equations is common practice in atmospheric simulations to stabilize the flow dynamics and reduce the errors caused by nonlinearly interacting modes. Using the inviscid equations is not a viable option due to the extremely fast generation of small-scale features \citep{galewsky2004initial}, in particular for global spectral methods using a collocated grid. For simplicity and reproducibility, we employ a second-order diffusion term with a diffusion coefficient set to $\nu = 1.0 \times 10^5 \, \text{m}^2\,\text{s}^{-1}$ for all spatial resolutions as in \cite{galewsky2004initial}. To express the velocities as a function of the prognostic variables, $\zeta$ and $\delta$, we first use the Helmholtz theorem which relates $\boldsymbol{V}$ to a scalar stream function, $\psi$, and a scalar velocity potential, $\chi$, \begin{equation} \boldsymbol{V} = \boldsymbol{k} \times \nabla \psi + \nabla \chi. \label{helmholtz_theorem} \end{equation} Using the identities \begin{align} \zeta &\equiv \boldsymbol{k} \cdot ( \nabla \times \boldsymbol{V} ), \label{identity_defining_zeta} \\ \delta &\equiv \nabla \cdot \boldsymbol{V}, \label{identity_defining_delta} \end{align} the application of the curl and divergence operators to \ref{helmholtz_theorem} yields $\zeta = \nabla^2 \psi$ and $\delta = \nabla^2 \chi$. The Laplacian operators can be efficiently inverted using the SH transform to compute the stream function, $\psi$, and the velocity potential, $\chi$, as a function of $\zeta$ and $\delta$, as explained in Section~\oldref{section_spatial_discretization}. Equations \ref{geopotential_equation}, \ref{vorticity_equation}, and \ref{divergence_equation} form the system that we would like to solve. Next, we use the identities \ref{identity_defining_zeta} and \ref{identity_defining_delta} to split the right-hand side of \ref{geopotential_equation}, \ref{vorticity_equation}, and \ref{divergence_equation} into linear and nonlinear parts as follows \begin{equation} \frac{\partial \boldsymbol{U}}{\partial t} = \boldsymbol{\mathcal{L}}_G( \boldsymbol{U} ) + \boldsymbol{\mathcal{L}}_F ( \boldsymbol{U} ) + \boldsymbol{\mathcal{N}} ( \boldsymbol{U} ). \label{linear_nonlinear_decomposition} \end{equation} The first term in the right-hand side of \ref{linear_nonlinear_decomposition} represents the linear wave motion induced by gravitational forces and also includes the diffusion term \begin{equation} \boldsymbol{\mathcal{L}}_G ( \boldsymbol{U} ) \equiv [ - \bar{\Phi} \delta + \nu \nabla^2 \Phi', \, \, \nu \nabla^2 \zeta, \, \, - \nabla^2 \Phi + \nu \nabla^2 \delta]^T. \label{linear_wave_motion_induced_by_gravitational_forces} \end{equation} The second term in the right-hand side of \ref{linear_nonlinear_decomposition} contains a linear harmonic oscillator on the velocity components that includes the Coriolis term \begin{align} \boldsymbol{\mathcal{L}}_F ( \boldsymbol{U} ) \equiv [ 0, \, \, -f \delta - \boldsymbol{V} \cdot \nabla f, \, \, f \zeta + \boldsymbol{k} \cdot (\nabla f) \times \boldsymbol{V} ]^T.
\label{linear_harmonic_oscillator_on_the_velocity_components} \end{align} The third term in the right-hand side of \ref{linear_nonlinear_decomposition} represents the nonlinear operators \begin{equation} \boldsymbol{\mathcal{N}} ( \boldsymbol{U} ) \equiv \left[ - \nabla \cdot ( \Phi' \boldsymbol{V} ), \, \, - \nabla \cdot ( \zeta \boldsymbol{V}), \, \, \boldsymbol{k} \cdot \nabla \times ( \zeta \boldsymbol{V} ) - \nabla^2 \frac{ \boldsymbol{V} \cdot \boldsymbol{V} }{2} \right]^T. \label{nonlinear_operators} \end{equation} In Section~\oldref{section_temporal_discretization}, this decomposition is used to define the temporal implicit-explicit splitting chosen based on the stiffness of the different terms. Next, the details of the spatial discretization of $\boldsymbol{\mathcal{L}}_F$, $\boldsymbol{\mathcal{L}}_G$, and $\boldsymbol{\mathcal{N}}$ are presented. \section{\label{section_spatial_discretization}Spatial discretization} This section presents an overview of the spatial discretization based on the global SH transform applied to the system of governing equations. The global SH transform is a key feature of the multi-level scheme presented here since it allows for simple and accurate data transfer between different spatial levels. We will show with numerical examples in Section~\oldref{section_numerical_examples} that this is critical for the design of efficient MLSDC schemes. In the SH scheme, the representation of a function of longitude $\lambda$ and Gaussian latitude $\mu \equiv \sin( \phi )$, $\xi(\lambda,\mu)$, consists of a sum of spherical harmonic basis functions $P^r_s(\mu) e^{i r \lambda}$ weighted by the spectral coefficients $\xi^r_s$, \begin{equation} \xi(\lambda, \mu) = \sum^R_{r = -R} \sum^{S(r)}_{s = |r|} \xi^r_s P^r_s(\mu) e^{i r \lambda}, \label{spectral_to_physical_space} \end{equation} where the index $r$ (respectively, $s$) refers to the longitudinal (respectively, latitudinal) mode. In \ref{spectral_to_physical_space}, $P^r_s$ is the normalized associated Legendre polynomial. Without loss of generality, we use a triangular truncation with $S(r) = R$. In Section~\oldref{subsubsection_coarsening_strategy_and_transfer_functions}, we will explain that a coarse representation of $\xi$ can be obtained by simply truncating the number of modes -- i.e., reducing $R$ and $S$ in \ref{spectral_to_physical_space} -- to construct a hierarchy of spatial levels with different degrees of coarsening in MLSDC-SH. The transformation from physical to spectral space is achieved in two steps. The first step consists in taking the discrete Fourier transform of $\xi(\lambda, \mu)$ in longitude -- i.e., over $\lambda$ --, defined as \begin{equation} \xi^r(\mu) = \frac{1}{I} \sum_{\iota=1}^I \xi(\lambda_\iota,\mu) e^{-i r \lambda_\iota}, \label{fourier_transform} \end{equation} where $I$ denotes the number of grid points in the longitudinal direction, located at longitudes $\lambda_{\iota} = \frac{2 \pi \iota}{I}$. Then, in the second step, the application of the discrete Legendre transformation in latitude yields \begin{equation} \xi^r_s = \sum_{j = 1}^J \xi^r(\mu_j) P^r_s(\mu_j) w_j. \label{legendre_transform} \end{equation} In \ref{legendre_transform}, $J$ is the number of Gaussian latitudes $\mu_j$, chosen as the roots of the Legendre polynomial of degree $J$, $P_J$, and $w_j$ denotes the Gaussian weight at latitude $\mu_j$.
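To make the two-step transform concrete, the following Python sketch (our own illustration, not the implementation used in this work) performs the analysis step \ref{fourier_transform}--\ref{legendre_transform} with numpy and scipy. The orthonormal normalization of the associated Legendre polynomials, the grid offset in longitude, and the function name are choices made for the sketch and may differ from the conventions of the SHTns library.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import lpmv

def sh_analysis(xi_grid, R):
    # Spectral coefficients xi^r_s, 0 <= r <= s <= R, from grid samples
    # xi_grid[j, i] = xi(lambda_i, mu_j) on a Gaussian grid (a sketch).
    J, I = xi_grid.shape
    mu, w = np.polynomial.legendre.leggauss(J)   # Gaussian latitudes and weights
    xi_r = np.fft.fft(xi_grid, axis=1) / I       # step 1: Fourier transform in longitude
    coeffs = {}
    for r in range(R + 1):
        for s in range(r, R + 1):
            norm = np.sqrt((2 * s + 1) / 2 * factorial(s - r) / factorial(s + r))
            P_rs = norm * lpmv(r, s, mu)         # normalized associated Legendre P^r_s(mu_j)
            coeffs[(r, s)] = np.sum(xi_r[:, r] * P_rs * w)   # step 2: Legendre transform
    return coeffs
\end{verbatim}
The corresponding synthesis step simply evaluates \ref{spectral_to_physical_space} with the same basis functions.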
This two-step global transform is applied to \ref{linear_nonlinear_decomposition} to obtain a system of coupled ordinary differential equations involving the prognostic variables in spectral space, $\boldsymbol{\Theta}^r_s = [ \Phi^r_s, \, \zeta^r_s, \, \delta^r_s ]$. Note that due to the symmetry of the spectral coefficients, it is sufficient to only include the indices $r \geq 0$. Hence, for $r \in \{ 0, \dots, R \}$ and $s \in \{ r, \dots, R \}$, the equations are \begin{equation} \frac{\partial \boldsymbol{\Theta}^r_s}{\partial t} = (\boldsymbol{L}_{G})^r_s( \boldsymbol{\Theta} ) + (\boldsymbol{L}_{F})^r_s ( \boldsymbol{\Theta} ) + \boldsymbol{N}^r_s ( \boldsymbol{\Theta} ), \label{semidiscrete_system} \end{equation} where $(\boldsymbol{L}_G)^r_s$, $(\boldsymbol{L}_F)^r_s$, and $\boldsymbol{N}^r_s$ are the discrete, spectral representations of the operators defined in \ref{linear_wave_motion_induced_by_gravitational_forces}, \ref{linear_harmonic_oscillator_on_the_velocity_components}, and \ref{nonlinear_operators}. The state variable in spectral space, $\boldsymbol{\Theta}$, is defined as a vector of size $K = 3 R (R+1)/2$ as follows \begin{equation} \boldsymbol{\Theta} \equiv [ \boldsymbol{\Theta}^0_0, \, \boldsymbol{\Theta}^{0}_{1}, \dots, \, \boldsymbol{\Theta}^{R-1}_{R}, \, \boldsymbol{\Theta}^R_R ]^T. \label{spectral_space} \end{equation} We refer to the work of \cite{hack1992description} for a thorough presentation of the scheme, including the full expression of the right-hand side of \ref{semidiscrete_system} in spectral space. More details about an efficient implementation of the global SH transform can be found in \cite{temperton1991scalar,rivier2002efficient}. The implementation of the spherical harmonics transformation used in this work is based on the SHTns library developed by \cite{schaeffer2013efficient}. Next, we proceed to the presentation of the discretization in time based on spectral deferred corrections. \section{\label{section_temporal_discretization}Temporal discretization} \subsection{\label{subsection_temporal_splitting}Temporal splitting} The choice of a temporal splitting for the right-hand side of \ref{semidiscrete_system} is one of the key determinants of the performance of the scheme. Fully explicit schemes are based on inexpensive local updates but are limited by a severe stability restriction on the time step size. In the context of atmospheric modeling, this limitation is often caused by the presence of fast waves (e.g., sound or gravity waves) propagating in the system. Fully implicit schemes overcome the stability constraint on the time step size but rely on costly nonlinear global implicit solves to update all the degrees of freedom simultaneously \citep{evans2010accuracy,jia2013spectral,lott2015algorithmically}. Instead, implicit-explicit (IMEX) schemes only treat the stiff terms responsible for the propagation of the fast-moving waves implicitly, while the non-stiff terms that represent processes operating on a slower time scale are evaluated explicitly. This strategy reduces the cost of the implicit solves, relative to fully implicit solves, and allows for relatively large stable time steps. A common IMEX approach employed in non-hydrostatic atmospheric modeling is based on dimensional splitting and implicitly discretizes only the terms involved in the (fast) vertical dynamics \citep{ullrich2012operator,durran2012implicit,weller2013runge,giraldo2013implicit,lock2014numerical,gardner2018implicit}.
Alternatively, the approach of \cite{robert1972implicit,giraldo2005semi} consists in linearizing the governing PDEs in the neighborhood of a reference state. The linearized piece is then discretized implicitly, and the term treated explicitly is obtained by subtracting the linearized piece from the nonlinear system. For the shallow-water equations, we directly discretize the fast linear terms on the right-hand side of \ref{semidiscrete_system} implicitly, while the other terms are evaluated explicitly. Specifically, we investigate an IMEX scheme based on the following splitting, for $r \in \{ 0, \dots, R \}$ and $s \in \{ r, \dots, R \}$, \begin{equation} \frac{\partial \boldsymbol{\Theta}^r_s}{\partial t} = (\boldsymbol{F}_I)^r_s(\boldsymbol{\Theta}) + (\boldsymbol{F}_E)^r_s(\boldsymbol{\Theta}), \label{system_of_odes} \end{equation} in which the implicit right-hand side, $(\boldsymbol{F}_I)^r_s$, contains the terms representing linear wave motion induced by gravitational forces and the diffusion term. The explicit right-hand side, $(\boldsymbol{F}_E)^r_s$, contains the linear harmonic oscillator and the nonlinear terms. This temporal splitting leads to implicit and explicit right-hand sides defined as \begin{align} (\boldsymbol{F}_I)^r_s &\equiv (\boldsymbol{L}_G)^r_s, \label{system_of_odes_implicit_part} \\ (\boldsymbol{F}_E)^r_s &\equiv (\boldsymbol{L}_F)^r_s + \boldsymbol{N}^r_s. \label{system_of_odes_explicit_part} \end{align} This implicit-explicit approach greatly simplifies the solution strategy for the implicit systems and circumvents the need for a global linear solver. The solution algorithm treats the geopotential separately from the divergence and vorticity variables. Since the Coriolis term and the nonlinear terms are treated explicitly, one can form a diagonal linear system in spectral space to update the geopotential, and then update locally the vorticity and divergence variables. This is explained in Section~\oldref{implicit_solver_for_sdc_and_mlsdc}. We will investigate the stability and accuracy of the splitting with numerical examples in Section \oldref{section_numerical_examples}. Next, we describe the multi-level temporal integration scheme starting with the fundamentals of SDC. \subsection{\label{subsection_implicit_explicit_spectral_deferred_correction}IMEX Spectral Deferred Corrections } We start with a review of the fundamentals of the Spectral Deferred Corrections (SDC) scheme. SDC methods have been introduced in \cite{dutt2000spectral} and later extended to methods with different temporal splittings in \cite{minion2003semi,bourlioux2003high,layton2004conservative}. In \cite{minion2003semi}, an implicit-explicit SDC method is described and referred to as {\it semi-implicit SDC} to contrast the method with subsequent {\it multi-implicit SDC} methods with multiple implicit terms introduced in \cite{bourlioux2003high}. Here we employ the more used term {\it IMEX} to refer to SDC methods with an implicit-explicit splitting. The properties of IMEX SDC schemes for fast-wave slow-wave problems are analyzed in \cite{ruprecht2016spectral}. 
We consider a system of coupled ODEs in the generic form \begin{align} \frac{\partial \boldsymbol{\Theta} }{\partial t} (t) &= \boldsymbol{F}_I \big( \boldsymbol{\Theta}(t) \big) + \boldsymbol{F}_E \big( \boldsymbol{\Theta}(t) \big), \qquad t \in [t^n,t^{n} + \Delta t], \\ \boldsymbol{\Theta}(t^n) &= \boldsymbol{\Theta}^n, \end{align} and its solution in integral form given by \begin{equation} \boldsymbol{\Theta}(t) = \boldsymbol{\Theta}^n + \int^t_{t^n} (\boldsymbol{F}_I + \boldsymbol{F}_E) \big( \boldsymbol{\Theta}(a) \big)d a = \boldsymbol{\Theta}^n + \int^t_{t^n} \boldsymbol{F} \big( \boldsymbol{\Theta}(a) \big)d a, \label{integral_form} \end{equation} where $\boldsymbol{F}_I$ and $\boldsymbol{F}_E$ are the implicit and explicit right-hand sides, respectively, with $\boldsymbol{F} = \boldsymbol{F}_I + \boldsymbol{F}_E$, and $\boldsymbol{\Theta}(t)$ is the state variable at time $t$. In \ref{integral_form}, the integral is applied componentwise. Denote by $\tilde{\boldsymbol{\Theta}}(t)$ an approximation of $\boldsymbol{\Theta}(t)$, and then define the correction $\boldsymbol{\Delta} \boldsymbol{\Theta}(t) = \boldsymbol{\Theta}(t) - \tilde{\boldsymbol{\Theta}}(t)$. The SDC scheme applied to the implicit-explicit temporal splitting described above iteratively improves the accuracy of the approximation based on a discretization of the update or correction equation \begin{align} \tilde{\boldsymbol{\Theta}}(t) + \boldsymbol{\Delta}\boldsymbol{\Theta}(t) = \boldsymbol{\Theta}^{n} & + \int_{t^{n}}^{t} \big[ \boldsymbol{F}_E \big( \tilde{\boldsymbol{\Theta}}(a) + \boldsymbol{\Delta}\boldsymbol{\Theta}(a) \big) - \boldsymbol{F}_E \big( \tilde{\boldsymbol{\Theta}}(a) \big) \big] d a \nonumber \\ & + \int_{t^{n}}^{t} \big[ \boldsymbol{F}_I \big( \tilde{\boldsymbol{\Theta}}(a) + \boldsymbol{\Delta}\boldsymbol{\Theta}(a) \big) - \boldsymbol{F}_I \big( \tilde{\boldsymbol{\Theta}}(a) \big) \big] d a \nonumber \\ & + \int_{t^n}^{t} \boldsymbol{F} \big( \tilde{\boldsymbol{\Theta}}(a) \big) d a, \label{standard_sdc_sweep_implicit} \end{align} where $\boldsymbol{\Theta}^{n}$ is the (known) state variable at the beginning of the time step. In the update equation \ref{standard_sdc_sweep_implicit}, the last integral is computed with a high-order Gaussian quadrature rule. However, the other integrals are approximated with simpler low-order quadrature rules. We mention here that all the quadrature rules used in \ref{standard_sdc_sweep_implicit} are based on a relatively small number of Gauss points -- up to five in this work -- compared to the quadrature rule used in the discrete Legendre transform to obtain \ref{legendre_transform}. Each pass of the discrete version of the update equation \ref{standard_sdc_sweep_implicit}, referred to as sweep, increases the formal order of accuracy by one until the order of accuracy of the quadrature applied to the third integral is reached \citep{hagstrom2007spectral,xia2007efficient,christlieb2009comments}. To discretize the update equation \ref{standard_sdc_sweep_implicit}, the correction algorithm uses a decomposition of the time interval $[t^n, t^{n+1}]$ into $M$ subintervals using $M+1$ temporal nodes, such that \begin{equation} t^n \equiv t^{n,0} < t^{n,1} < \dots < t^{n,M} = t^n + \Delta t \equiv t^{n+1}. \label{time_discretization_sdc_nodes} \end{equation} The points $t^{n,m}$ are chosen to correspond to Gaussian quadrature nodes. Throughout this paper, we use Gauss-Lobatto nodes. We use the shorthand notations $t^m = t^{n,m}$ and $\Delta t^m = t^{m+1} - t^m$. 
We denote by $\boldsymbol{\Theta}^{m+1,(k+1)}$ the approximate solution at node $m+1$ and at sweep $(k+1)$. The terms in the first integral of \ref{standard_sdc_sweep_implicit} are treated explicitly, and therefore this integral is discretized with a forward Euler method. Conversely, the second integral in \ref{standard_sdc_sweep_implicit} is discretized implicitly. The general form of the discrete version of equation \ref{standard_sdc_sweep_implicit} is then \begin{align} \boldsymbol{\Theta}^{m+1,(k+1)} = \boldsymbol{\Theta}^{n} &+ \Delta t \sum_{j = 1}^m \tilde{q}^E_{m+1,j} \big[ \boldsymbol{F}_E \big( \boldsymbol{\Theta}^{j,(k+1)} \big) - \boldsymbol{F}_E \big( \boldsymbol{\Theta}^{j,(k)} \big) \big] \nonumber \\ &+ \Delta t \sum_{j = 1}^{m+1} \tilde{q}^I_{m+1,j} \big[ \boldsymbol{F}_I \big( \boldsymbol{\Theta}^{j,(k+1)} \big) - \boldsymbol{F}_I \big( \boldsymbol{\Theta}^{j,(k)} \big) \big] \nonumber \\%[5pt] &+ \Delta t \sum_{j = 0}^{M} q_{m+1,j} \boldsymbol{F} \big( \boldsymbol{\Theta}^{j,(k)} \big). \label{update_equation_sdcq_discrete_form} \end{align} In \ref{update_equation_sdcq_discrete_form}, the coefficients $\tilde{q}^E_{m+1,j}$ correspond to forward-Euler time stepping. The coefficients $q_{m+1,j}$ correspond to the Lobatto IIIA optimal order collocation quadrature \begin{equation} q_{m+1,j} \equiv \frac{1}{\Delta t} \int_{t^{n,0}}^{t^{{n,m+1}}} L^j(a) da, \label{weights} \end{equation} where $L^j$ denotes the $j^{\text{th}}$ Lagrange polynomial constructed using the SDC nodes \ref{time_discretization_sdc_nodes}. We note that the formulation of the correction given in \ref{update_equation_sdcq_discrete_form} differs from that of \cite{jia2013spectral} in two ways. First, our scheme is based on an implicit-explicit splitting, whereas that of \cite{jia2013spectral} is fully implicit. Second, for the choice of the quadrature weights used in the discretization of the implicit correction integral, we adopt the approach of \cite{weiser2015faster}. Specifically, the weights $\tilde{q}^I_{m+1,j}$ in \ref{update_equation_sdcq_discrete_form} are chosen to be the coefficients of the upper triangular matrix in the LU decomposition of $\boldsymbol{Q} = \{ q_{ij} \} \in \mathbb{R}^{(M+1)\times(M+1)}$, while a diagonal matrix is used in \cite{jia2013spectral}. This formulation leads to a faster convergence of the iterative process to the fixed-point solution and remains convergent even when the underlying problem is stiff. We refer to \cite{weiser2015faster} for a proof, and to \cite{hamon2018concurrent} for numerical examples illustrating the improved convergence. Using these definitions, the integration scheme \ref{update_equation_sdcq_discrete_form} is effectively an iterative solution method for the collocation problem defined by \begin{equation} \boldsymbol{A} ( \vec{\boldsymbol{\Theta}} ) = \boldsymbol{1}_{M+1} \otimes \boldsymbol{\Theta}^{n,0}. \label{collocation_problem} \end{equation} The operator $\boldsymbol{A}$ is \begin{equation} \boldsymbol{A} ( \vec{\boldsymbol{\Theta}} ) \equiv \vec{\boldsymbol{\Theta}} - \Delta t ( \boldsymbol{Q} \otimes \boldsymbol{I}_{K} ) \vec{\boldsymbol{F}}, \end{equation} where $\otimes$ denotes the Kronecker product and $\boldsymbol{I}_{K} \in \mathbb{R}^{K \times K}$ is the identity matrix. $\boldsymbol{1}_{M+1} \in \mathbb{R}^{M+1}$ is a vector of ones. 
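As a concrete illustration of the update \ref{update_equation_sdcq_discrete_form} and of the quadrature matrix defined in \ref{weights}, the following self-contained Python sketch applies IMEX SDC sweeps to the scalar model problem $\partial \theta / \partial t = \lambda_I \theta + \lambda_E \theta$ and monitors the approach to the solution of the collocation problem. This is our own minimal implementation: the node construction, the forward-Euler weights $\tilde{q}^E$, and the implicit weights $\tilde{q}^I$ obtained from an LU factorization (computed here from the transposed quadrature matrix restricted to its nonzero block, in the spirit of \cite{weiser2015faster}) are choices made for the sketch, and the parameter values are arbitrary.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.interpolate import lagrange
from scipy.linalg import lu

# M+1 Gauss-Lobatto nodes on [0,1] and the collocation matrix Q.
def lobatto_nodes(M):
    interior = np.sort(np.real(Legendre.basis(M).deriv().roots()))
    return 0.5 * (np.concatenate(([-1.0], interior, [1.0])) + 1.0)

def collocation_matrix(tau):
    Q = np.zeros((len(tau), len(tau)))
    for j in range(len(tau)):
        e = np.zeros(len(tau)); e[j] = 1.0
        Lj = lagrange(tau, e).integ()      # antiderivative of the j-th Lagrange polynomial
        Q[:, j] = Lj(tau) - Lj(0.0)
    return Q

M = 3                                      # four Gauss-Lobatto nodes per time step
tau = lobatto_nodes(M)
Q = collocation_matrix(tau)

QE = np.zeros_like(Q)                      # forward-Euler (explicit) weights
for m in range(1, M + 1):
    QE[m, :m] = np.diff(tau)[:m]

QI = np.zeros_like(Q)                      # LU-based (implicit) weights
QI[1:, 1:] = lu(Q[1:, 1:].T)[2].T

# IMEX SDC sweeps for d(theta)/dt = lam_I*theta + lam_E*theta.
lam_I, lam_E, dt, theta0 = -10.0, 1.0, 0.1, 1.0
lam = lam_I + lam_E
theta = np.full(M + 1, theta0)             # initial guess: copy initial data to all nodes
lhs = np.eye(M + 1) - dt * (lam_E * QE + lam_I * QI)
collocation = np.linalg.solve(np.eye(M + 1) - dt * lam * Q, np.full(M + 1, theta0))
for k in range(8):                         # one sweep per iteration
    rhs = theta0 + dt * (lam * Q - lam_E * QE - lam_I * QI) @ theta
    theta = np.linalg.solve(lhs, rhs)
    print(k + 1, abs(theta[-1] - collocation[-1]))   # error w.r.t. the collocation solution
\end{verbatim}
The printed error decays rapidly with the number of sweeps, illustrating the iterative convergence towards the fixed point of the collocation problem \ref{collocation_problem}.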
Following the notation used in \cite{bolten2016multigrid}, the space-time vectors $\vec{\boldsymbol{\Theta}} \in \mathbb{C}^{(M+1)K}$ and $\vec{\boldsymbol{F}} \in \mathbb{C}^{(M+1)K}$ are such that \begin{align} \vec{\boldsymbol{\Theta}} &\equiv [\boldsymbol{\Theta}^{n,0}, \dots, \boldsymbol{\Theta}^{n,M}]^T, \\ \vec{\boldsymbol{F}} &\equiv \vec{\boldsymbol{F}} ( \vec{\boldsymbol{\Theta}} ) = [\boldsymbol{F}( \boldsymbol{\Theta}^{n,0} ), \dots, \boldsymbol{F}( \boldsymbol{\Theta}^{n,M}) ]^T. \end{align} Next, we introduce the multi-level algorithm based on SDC that we will apply to the collocation problem \ref{collocation_problem}. \subsection{\label{subsection_multi_level_spectral_deferred_correction}Multi-Level Spectral Deferred Corrections (MLSDC)} Multi-Level Spectral Deferred Corrections (MLSDC) schemes are based on the idea of replacing some of the SDC iterations required to converge to the collocation problem \ref{collocation_problem} with SDC sweeps performed on a coarsened (and hence computationally cheaper) version of the problem. The solutions on different levels are coupled by the introduction of a Full Approximation Scheme (FAS) correction term explained below as in nonlinear multigrid methods. The combination of performing SDC sweeps on multiple space-time levels with a FAS correction term first appears as part of the PFASST method in \cite{emmett2012toward}. The idea is generalized and analyzed in \cite{speck2015multi} showing how MLSDC can improve the efficiency for certain problems compared to single-level SDC methods. In this work, only two-level MLSDC schemes are considered, and the study of MLSDC with three or more space-time levels to integrate the shallow-water equations is left for future work. \subsubsection{Full Approximation Scheme (FAS)} We define two space-time levels to solve the collocation problem \ref{collocation_problem}, and we denote by $\ell = f$ (respectively, $\ell = c$) the fine level (respectively, the coarse level). We denote by $\vec{\boldsymbol{\Theta}}_{\ell} \in \mathbb{C}^{(M_{\ell}+1) K_{\ell}}$ and $\vec{\boldsymbol{F}}_{\ell} \in \mathbb{C}^{(M_{\ell}+1) K_{\ell}}$ the space-time vector and right-hand side at level $\ell$, respectively. The matrix $\boldsymbol{R}^{c}_{f} \in \mathbb{R}^{ (M_{c}+1)K_{c} \times (M_{f} + 1)K_{f}}$ is the linear restriction operator from the fine level to the coarse level. Here, $K_{\ell}$ represents the total number of spectral coefficients in \ref{spectral_space} on level $\ell$. As in nonlinear multigrid methods \citep{brandt1977multi}, the coarse problem is modified by the introduction of a correction term, denoted by $\vec{\boldsymbol{\tau}}_{c}$, that couples the solutions at the two space-time levels. Specifically, the coarse problem reads \begin{equation} \boldsymbol{A}_{c} ( \vec{\boldsymbol{\Theta}}_{c} ) - \vec{\boldsymbol{\tau}}_{c} = \boldsymbol{1}_{M_{c}+1} \otimes \boldsymbol{\Theta}^{n,0}_{c}, \label{collocation_problem_level_l+1} \end{equation} where the FAS correction term at the coarse level is defined as \begin{equation} \vec{\boldsymbol{\tau}}_{c} \equiv \boldsymbol{A}_{c} ( \boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\Theta}}_{f} ) - \boldsymbol{R}^{c}_{f} \boldsymbol{A}_{f} ( \vec{\boldsymbol{\Theta}}_{f} ) + \boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\tau}}_{f}, \label{tau_term_level_l+1} \end{equation} with, for the two-level case, $\vec{\boldsymbol{\tau}}_f = \boldsymbol{0}$ on the fine level. 
In \ref{collocation_problem_level_l+1}-\ref{tau_term_level_l+1}, the operator $\boldsymbol{A}_{c}$ denotes an approximation of $\boldsymbol{A}$ at the coarse level. We note that \begin{align} \boldsymbol{A}_{c} ( \boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\Theta}}_{f} ) - \vec{\boldsymbol{\tau}}_{c} &= \boldsymbol{A}_{c} ( \boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\Theta}}_{f} ) - \boldsymbol{A}_{c} ( \boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\Theta}}_{f} ) + \boldsymbol{R}^{c}_{f} \boldsymbol{A}_{f} ( \vec{\boldsymbol{\Theta}}_{f} ) - \boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\tau}}_{f} \nonumber \\ &= \boldsymbol{R}^{c}_{f} \big( \boldsymbol{A}_{f} ( \vec{\boldsymbol{\Theta}}_{f} )- \vec{\boldsymbol{\tau}}_{f} \big) , \end{align} which implies that the restriction of the fine solution, $\boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\Theta}}_{f}$, is a solution of the coarse problem. On the coarse problem \ref{collocation_problem_level_l+1}, the modified SDC update for temporal node $m+1$ at sweep $(k+1)$ is \begin{align} \boldsymbol{\Theta}^{m+1,(k+1)}_{c} &= \boldsymbol{\Theta}^{n,0}_{c} \nonumber \\ &+ \Delta t \sum_{j = 1}^m (\tilde{q}^E_{m+1,j})_{c} \big[ \boldsymbol{F}_{E,c} \big( \boldsymbol{\Theta}^{j,(k+1)}_{c} \big) - \boldsymbol{F}_{E, c} \big( \boldsymbol{\Theta}^{j,(k)}_{c} \big) \big] \nonumber \\ &+ \Delta t \sum_{j = 1}^{m+1} (\tilde{q}^I_{m+1,j})_{c} \big[ \boldsymbol{F}_{I, c} \big( \boldsymbol{\Theta}^{j,(k+1)}_{c} \big) - \boldsymbol{F}_{I, c} \big( \boldsymbol{\Theta}^{j,(k)}_{c} \big) \big] \nonumber \\ &+ \Delta t \sum_{j = 0}^{M} (q_{m+1,j})_c \boldsymbol{F}_c \big( \boldsymbol{\Theta}^{j,(k)}_c \big) + \boldsymbol{\tau}^{m+1,(k)}_{c} \label{modified_update_equation_sdcq_discrete_form} . \end{align} \subsubsection{IMEX MLSDC algorithm} We are now ready to review the steps of the MLSDC algorithm of \cite{emmett2012toward,speck2015multi} for the case of two space-time levels. In this section, $\boldsymbol{\Theta}^{m, (k)}_{\ell}$ denotes the approximate solution at temporal node $m$, space-time level $\ell$, and sweep $(k)$. $\vec{\boldsymbol{\Theta}}^{(k)}_{\ell}$ is the space-time vector that contains the approximate solution at all temporal nodes on level $\ell$. The vectors $\boldsymbol{F}^{m, (k)}_{\ell}$ and $\vec{\boldsymbol{F}}^{(k)}_{\ell}$ are defined analogously. The MLSDC iteration starts with an SDC sweep on the fine level. The iteration continues as in a V-cycle from the fine level to the coarse level, and then back to the fine level. The specifics of the MLSDC iteration with two space-time levels are detailed in Algorithm~\oldref{alg:mlsdc_iteration}. 
\begin{algorithm}[H] \SetAlgoLined \caption{\label{alg:mlsdc_iteration}IMEX MLSDC iteration on two space-time levels denoted by ``coarse'' and ``fine''.} \BlankLine \KwData{Initial data $\boldsymbol{\Theta}^{0,(k)}_{f}$ and function evaluations $\vec{\boldsymbol{F}}^{(k)}_{I, f}$, $\vec{\boldsymbol{F}}^{(k)}_{E, f}$ from the previous MLSDC iteration $(k)$ on the fine level.} \KwResult{Approximate solution $\vec{\boldsymbol{\Theta}}^{(k+1)}_{\ell}$ and function evaluations $\vec{\boldsymbol{F}}^{(k+1)}_{I, \ell}$, $\vec{\boldsymbol{F}}^{(k+1)}_{E, \ell}$ on all levels.} \textit{\textbf{A)} Perform a fine sweep} \\% and check convergence} \\ $\vec{\boldsymbol{\Theta}}^{(k+1)}_{f}, \, \vec{\boldsymbol{F}}^{(k+1)}_{I, f}, \, \vec{\boldsymbol{F}}^{(k+1)}_{E, f} \longleftarrow \textbf{SweepFine}\big( \vec{\boldsymbol{\Theta}}^{(k)}_{f}, \, \vec{\boldsymbol{F}}^{(k)}_{I, f}, \, \vec{\boldsymbol{F}}^{(k)}_{E, f} \big) $ \\ \textit{\textbf{B)} Restrict, re-evaluate, and save restriction} \\ \For{$m = 1, \dots, M_c$}{ $\boldsymbol{\Theta}^{m,(k)}_{c} \longleftarrow \textbf{Restrict} \big( \boldsymbol{\Theta}^{m,(k+1)}_{f} \big)$ \\ $\boldsymbol{F}^{m,(k)}_{I, c}, \, \boldsymbol{F}^{m,(k)}_{E, c} \longleftarrow \textbf{Evaluate\_F} \big( \boldsymbol{\Theta}^{m,(k)}_{c} \big)$ \\ $\boldsymbol{\tilde{\Theta}}^{m,(k)}_{c} \longleftarrow \boldsymbol{\Theta}^{m,(k)}_{c}$ \\ $\boldsymbol{\tilde{F}}^{m,(k)}_{I,c}, \, \boldsymbol{\tilde{F}}^{m,(k)}_{E,c} \longleftarrow \boldsymbol{F}^{m,(k)}_{I,c}, \, \boldsymbol{F}^{m,(k)}_{E,c}$ } \textit{\textbf{C)} Compute FAS correction and sweep} \\ $\boldsymbol{\tau}_{c} \longleftarrow \text{FAS} \big( \vec{\boldsymbol{F}}^{(k)}_{I, f}, \, \vec{\boldsymbol{F}}^{(k)}_{E, f}, \, \vec{\boldsymbol{F}}^{(k)}_{I, c}, \, \vec{\boldsymbol{F}}^{(k)}_{E, c}, \, \boldsymbol{\tau}_{f} \big)$ \\ $\vec{\boldsymbol{\Theta}}^{(k+1)}_{c}, \, \vec{\boldsymbol{F}}^{(k+1)}_{I, c}, \, \vec{\boldsymbol{F}}^{(k+1)}_{E, c} \longleftarrow \textbf{SweepCoarse} \big( \vec{\boldsymbol{\Theta}}^{(k)}_{c}, \, \vec{\boldsymbol{F}}^{(k)}_{I, c}, \, \vec{\boldsymbol{F}}^{(k)}_{E, c}, \, \boldsymbol{\tau}_{c} \big)$ \textit{\textbf{D)} Return to finest level before next iteration} \\ \For{$m = 1, \dots, M_f$}{ $\boldsymbol{\Theta}^{m,(k+1)}_{f} \longleftarrow \boldsymbol{\Theta}^{m,(k+1)}_{f} + \textbf{Interpolate} \big( \boldsymbol{\Theta}^{m, (k+1)}_{c} - \boldsymbol{\tilde{\Theta}}^{m, (k)}_{c} \big)$ \\ $\boldsymbol{F}^{m, (k+1)}_{I, f} \longleftarrow \boldsymbol{F}^{m, (k+1)}_{I, f} + \textbf{Interpolate} \big( \boldsymbol{F}^{m, (k+1)}_{I, c} - \boldsymbol{\tilde{F}}^{m, (k)}_{I, c}\big)$ \\ $\boldsymbol{F}^{m, (k+1)}_{E, f} \longleftarrow\boldsymbol{F}^{m, (k+1)}_{E, f} + \textbf{Interpolate} \big( \boldsymbol{F}^{m, (k+1)}_{E, c} - \boldsymbol{\tilde{F}}^{m, (k)}_{E, c}\big)$ } \end{algorithm} The single-level SDC iteration only consists of the fine sweep of Step $\boldsymbol{A}$. Both schemes share the same initialization procedure. Specifically, before the first iteration, for $k=0$, we initialize the algorithm described above by simply copying the initial data for the time step, denoted by $\boldsymbol{\Theta}^{n,0}_f$, to all the other SDC nodes, that is, for $m \in \{ 0, \dots, M_f \}$: \begin{equation} \boldsymbol{\Theta}^{m,(k = 0)}_f := \boldsymbol{\Theta}^{n,0}_f. \end{equation} In Algorithm~\oldref{alg:mlsdc_iteration}, the procedure \textit{SweepFine} consists in applying \ref{update_equation_sdcq_discrete_form} once on the fine level. 
The procedure \textit{SweepCoarse} involves applying the correction described by \ref{modified_update_equation_sdcq_discrete_form} on the coarse level. We found that for the numerical examples considered in this work, doing multiple sweeps on the coarse level instead of one every time \textit{SweepCoarse} is called does not improve the accuracy of MLSDC, but increases the computational cost. This is why the procedure \textit{SweepCoarse} only involves one sweep per call. The procedure \textit{Evaluate\_F} involves computing the implicit and explicit right-hand sides. We highlight that the last step of Algorithm~\oldref{alg:mlsdc_iteration} does not involve any function evaluation. Instead, when we return to the fine level, we interpolate the coarse solution update as well as the coarse right-hand side corrections to the fine level. By avoiding $M_f$ function evaluations, this reduces the computational cost of the algorithm without undermining the order of accuracy of the scheme, as shown with numerical examples in Section~\oldref{section_numerical_examples}. We now discuss two key determinants of the performance of MLSDC-SH, namely, the coarsening strategy and the solver for the implicit systems. \subsubsection{\label{subsubsection_coarsening_strategy_and_transfer_functions}Coarsening strategy and transfer functions} In this section, we describe the linear restriction and interpolation operators used in the MLSDC-SH algorithm to transfer the approximate solution from fine to coarse levels, and vice-versa. In this work, the spatial restriction and interpolation procedures are performed in spectral space and heavily rely on the decomposition \ref{spectral_space} resulting from the SH basis. As explained below, this approach is based on the truncation of high-frequency modes, and therefore avoids the generation of spurious modes in the approximate solution that would propagate in the spectrum due to nonlinear wave interweaving over one coarse sweep. We reiterate that $K_{\ell}$ denotes the number of spectral coefficients used in \ref{spectral_space} at level $\ell$ and $M_{\ell}+1$ denotes the number of SDC nodes at level $\ell$. Therefore, the space-time vector storing the state of the system at level $\ell$, denoted by $\vec{\boldsymbol{\Theta}}_{\ell}$, is in $\mathbb{C}^{(M_{\ell}+1)K_{\ell}}$. The two-step restriction process from fine level $\ell = f$ to coarse level $\ell = c$ consists in applying a restriction operator in time, denoted by $(\boldsymbol{R}^{t})^{c}_{f}$, followed by a restriction operator in space, denoted by $(\boldsymbol{R}^{s})^{c}_{f}$, that is, \begin{equation} \vec{\boldsymbol{\Theta}}_{c} = \boldsymbol{R}^{c}_{f} \vec{\boldsymbol{\Theta}}_{f} = ( \boldsymbol{R}^{\textit{s}} )^{c}_{f} ( \boldsymbol{R}^{\textit{t}} )^{c}_{f} \vec{\boldsymbol{\Theta}}_{f}. \label{two_step_restriction} \end{equation} In \ref{two_step_restriction}, the restriction operator in time is defined using the Kronecker product as \begin{equation} ( \boldsymbol{R}^{\textit{t}} )^{c}_{f} \equiv \boldsymbol{\Pi}^{c}_{f} \otimes \boldsymbol{I}_{K_{f}} \in \mathbb{R}^{(M_{c}+1)K_{f} \times (M_{f}+1)K_{f}}, \label{time_restriction} \end{equation} where $\boldsymbol{I}_{K_{f}} \in \mathbb{R}^{K_{f} \times K_{f}} $ is the identity matrix, and $\boldsymbol{\Pi}^{c}_{f} \in \mathbb{R}^{(M_{c}+1) \times (M_{f}+1)}$ is the rectangle matrix employed to interpolate a scalar function from the fine temporal discretization to the coarse temporal discretization. 
Using the Lagrange polynomials $L^{j}_{f}$ on the fine temporal discretization, this matrix reads \begin{equation} (\boldsymbol{\Pi}^{c}_{f})_{ij} = L^{j-1}_{f}(t^{i-1}_{c}), \end{equation} using the SDC node $i-1$ at the coarse level, denoted by $t^{i-1}_{c}$. We note that, in the special case of two, three, and five Gauss-Lobatto nodes, applying this restriction operator in time amounts to performing pointwise injection. The restriction operator in space consists in truncating the spectral representation of the primary variables \ref{spectral_space} based on the SH transform to remove the high-frequency features from the approximate solution. This is achieved by applying the matrix \begin{equation} ( \boldsymbol{R}^s )^{c}_{f} \equiv \boldsymbol{I}_{M_{c}+1} \otimes \boldsymbol{D}^{c}_{f} \in \mathbb{R}^{(M_{c}+1)K_{c} \times (M_{c} + 1)K_{f} }. \label{space_restriction} \end{equation} In \ref{space_restriction}, $\boldsymbol{D}^{c}_{f} \in \mathbb{R}^{K_{c} \times K_{f}}$ is a rectangle truncation matrix defined as \begin{equation} (\boldsymbol{D}^{c}_{f})_{ij} = \left\{ \begin{array}{l l} 1 & i = j \\[4pt] 0 & \text{otherwise.} \end{array} \right. \end{equation} In the interpolation procedure employed to transfer the approximate solution from the coarse level to the fine level, we start with the application of the interpolation operator in space, $(\boldsymbol{P}^s)^{f}_{c}$, followed by the application of the interpolation operator in time, $(\boldsymbol{P}^t)^{f}_{c}$, \begin{equation} \vec{\boldsymbol{\Theta}}_{f} \equiv \boldsymbol{P}^{f}_{c} \vec{\boldsymbol{\Theta}}_{c} = (\boldsymbol{P}^t)^{f}_{c} (\boldsymbol{P}^{s})^{f}_{c} \vec{\boldsymbol{\Theta}}_{c}. \end{equation} The interpolation operator in space consists in padding the spectral representation of the primary variables at the coarse level with $K_{f} - K_{c}$ zeros and can be defined as the transpose of the restriction operator in space (see \ref{space_restriction}), that is, \begin{equation} ( \boldsymbol{P}^s )^{f}_{c} \equiv \big( ( \boldsymbol{R}^s )^{c}_{f} \big)^T \in \mathbb{R}^{(M_{c}+1)K_{f} \times (M_{c}+1)K_{c}}. \end{equation} Finally, the interpolation operator in time is analogous to \ref{time_restriction} and reads \begin{equation} ( \boldsymbol{P}^t )^{f}_{c} \equiv \boldsymbol{\Pi}^{f}_{c} \otimes \boldsymbol{I}_{K_{f}} \in \mathbb{R}^{(M_{f}+1)K_{f} \times (M_{c}+1)K_{f}}, \end{equation} where the rectangle interpolation matrix $\boldsymbol{\Pi}^{f}_{c}$ is constructed with the Lagrange polynomials $L^j_{c}$ on the coarse temporal discretization. For two, three, and five Gauss-Lobatto nodes, this amounts to performing pointwise injection at the fine nodes that correspond to the coarse nodes, and then polynomial interpolation to compute the solution at the remaining fine nodes. This completes the presentation of the MLSDC-SH algorithm for the time integration of the shallow-water equations on the rotating sphere. Next, we discuss the implicit solver used in this work. 
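Before doing so, the spatial part of these transfer operators can be summarized in a short sketch. The Python fragment below is our own illustration and assumes that the spectral coefficients of one prognostic variable at one temporal node are stored in a dictionary keyed by $(r,s)$ rather than in the flattened vector \ref{spectral_space}; restriction truncates all modes beyond the coarse resolution $R_{c}$, and interpolation pads with zeros up to the fine resolution $R_{f}$.
\begin{verbatim}
# Spatial restriction (spectral truncation) and interpolation (zero-padding)
# between MLSDC-SH levels, for one variable at one temporal node (a sketch).
def restrict(coeffs_fine, R_c):
    # Keep only the modes with r <= s <= R_c.
    return {(r, s): v for (r, s), v in coeffs_fine.items() if s <= R_c}

def interpolate(coeffs_coarse, R_f):
    # Pad the coarse representation with zeros up to resolution R_f.
    return {(r, s): coeffs_coarse.get((r, s), 0.0)
            for r in range(R_f + 1) for s in range(r, R_f + 1)}
\end{verbatim}
Because the spherical harmonic basis is orthogonal, the truncation removes the high-frequency content exactly and the zero-padding introduces no spurious modes, which is precisely the property exploited by the coarsening strategy described above.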
\subsection{\label{implicit_solver_for_sdc_and_mlsdc}Solver for the implicit systems in SDC and MLSDC} The time integration schemes of Sections~\oldref{subsection_implicit_explicit_spectral_deferred_correction} and \oldref{subsection_multi_level_spectral_deferred_correction} involve solving implicit linear systems in the form \begin{equation} \boldsymbol{\Theta}^{m+1,(k+1)} - \Delta t \tilde{q}^I_{m+1,m+1} \boldsymbol{F}_I( \boldsymbol{\Theta}^{m+1,(k+1)} ) = \boldsymbol{b}, \label{implicit_systems} \end{equation} where $\boldsymbol{b}$ is obtained from \ref{update_equation_sdcq_discrete_form} or \ref{modified_update_equation_sdcq_discrete_form}, and where we have dropped the subscripts denoting the space-time levels for simplicity. The structure of the implicit linear systems results from the spatial discretization based on the SH transform, but also from the temporal splitting between implicit and explicit terms described in Section~\oldref{subsection_temporal_splitting}. The solution strategy for \ref{implicit_systems} is performed in spectral space and follows two steps briefly outlined below. First, we algebraically form a reduced linear system containing only the geopotential unknowns -- that is, $K/3$ degrees of freedom, where $K$ denotes the total number of spectral coefficients needed to represent the three primary variables in \ref{spectral_space}. Given that the longitudinal and latitudinal coupling terms present in the Coriolis term and in the nonlinear operators are discretized explicitly, the $K/3$ geopotential degrees of freedom are fully decoupled from one another. The geopotential linear system is therefore diagonal and trivial to solve. Second, we have to solve for the remaining $2K/3$ vorticity and divergence degrees of freedom. This is again a trivial operation that does not require a linear solver, since we have to solve two diagonal linear systems to update the vorticity and divergence variables, respectively. Therefore, solving the implicit system \ref{implicit_systems} is purely based on local operations during which the degrees of freedom are updated one at a time in spectral space. We refer to \cite{schreiber2018sph} for the detailed formulation of the geopotential, vorticity, and divergence diagonal linear systems. \subsection{\label{subsection_computational_cost_of_sdc_and_mlsdc}Computational cost of SDC and MLSDC} In this section, we compare the computational cost of the MLSDC-SH scheme described in Section \oldref{subsection_multi_level_spectral_deferred_correction} to that of the single-level SDC scheme. We refer to the single-level SDC scheme with $M_f+1$ temporal nodes and $N_S$ fine sweeps as SDC($M_f+1$,$N_S$). We denote by MLSDC($M_f+1$, $M_c+1$, $N_{ML}$, $\alpha$) the MLSDC-SH scheme with $M_f+1$ nodes on the fine level, $M_c + 1$ nodes of the coarse level, $N_{\textit{ML}}$ iterations, and a spatial coarsening ratio, $\alpha$, defined using \ref{spectral_to_physical_space} as $\alpha = R_{c}/R_{f}$. The parameters of the SDC and MLSDC-SH schemes are summarized in Tables~\oldref{tbl:sdc_parameters} and \oldref{tbl:mlsdc_overview}, respectively. \begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{SDC($M_f+1$, $N_{\textit{S}}$)} \\ \hline \textbf{Parameter} & \multicolumn{1}{c|}{\textbf{Description}} \\ \hline $M_f+1$ & SDC nodes on fine level\\ \hline $N_{S}$ & Number of SDC iterations \\ \hline \end{tabular} \end{center} \caption{\label{tbl:sdc_parameters}Parameters for the SDC scheme. 
The SDC iteration only involves one sweep on the fine level.} \end{table} \begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{MLSDC($M_f+1$, $M_c+1$, $N_{\textit{ML}}$, $\alpha$)} \\ \hline \textbf{Parameter} & \multicolumn{1}{c|}{\textbf{Description}} \\ \hline $M_f+1$ & SDC nodes on fine level\\ \hline $M_c+1$ & SDC nodes on coarse level\\ \hline $N_{ML}$ & Number of MLSDC iterations \\ \hline $\alpha$ & Spatial coarsening ratio \\ \hline \end{tabular} \end{center} \caption{\label{tbl:mlsdc_overview}Parameters for the MLSDC-SH scheme. The MLSDC-SH iteration is described in Alg.~\oldref{alg:mlsdc_iteration}, and involves one sweep on the fine level and one sweep on the coarse level.} \end{table} To evaluate the theoretical computational cost of the MLSDC-SH scheme, we count the number of function evaluations and the number of solves involved in a time step. We denote by $C^s_{\ell}$ the cost of a solve at level $\ell$, and by $C^{\textit{fi}}_{\ell}$ (respectively, $C^{\textit{fe}}_{\ell}$) the cost of an implicit (respectively, explicit) function evaluation at level $\ell$. We neglect the cost of computing the FAS correction. The quantities $C^s_{c}$, $C^{\textit{fi}}_{c}$, $C^{\textit{fe}}_{c}$ depend on the spatial coarsening ratio, $\alpha = R_c/R_f$. Here, $R_{\ell}$ represents the highest Fourier wavenumber in the east-west representation of \ref{spectral_to_physical_space} on level $\ell$. The cost of a time step with the two-level MLSDC($M_f+1$,$M_c+1$,$N_{\textit{ML}}$,$\alpha$) is \begin{align} C^{\textit{MLSDC}(M_f+1,M_c+1,N_{\textit{ML}},\alpha)} &= N_{\textit{ML}} M_{f} ( C^s_{f} + C^{\textit{fi}}_{f} + C^{\textit{fe}}_{f} ) \nonumber \\ &+ N_{\textit{ML}} M_{c} ( C^s_{c} + C^{\textit{fi}}_{c} + C^{\textit{fe}}_{c} ) \nonumber \\ &+ N_{\textit{ML}} M_{c} ( C^{\textit{fi}}_{c} + C^{\textit{fe}}_{c}), \label{computational_cost_mlsdc} \end{align} where the term in the right-hand side of the first line represents the cost of the fine sweeps, the second term represents the cost of the coarse sweeps, and the third term accounts for the cost of the evaluation of the right-hand sides at the coarse nodes after the restriction. This can be compared with the cost of a time step in SDC($M_f+1$,$N_{\textit{S}}$), given by \begin{equation} C^{\textit{SDC}(M_f+1,N_{\textit{S}})} = N_{\textit{S}} M_f ( C^s_{f} + C^{\textit{fi}}_{f} + C^{\textit{fe}}_{f} ). \label{computational_cost_sdc} \end{equation} Furthermore, we assume that the cost of a linear solve is the same as the cost of evaluating the right-hand side, that is, \begin{equation} C^s_{f} = C^{\textit{fi}}_{f} = C^{\textit{fe}}_{f}, \end{equation} with this assumption being motivated by Section~\oldref{implicit_solver_for_sdc_and_mlsdc}. In addition, we will also assume that the computational cost of the operators is proportional to the number of spectral coefficients in \ref{spectral_space}, denoted by $K_\ell$. There are three primary variables, and each of them is represented in the triangular truncation framework with $R_{\ell}(R_{\ell}+1)/2$ spectral coefficients, where $R_{\ell}$ denotes the highest Fourier wavenumber in the east-west representation of \ref{spectral_to_physical_space}. This yields $K_{\ell} = 3 R_{\ell}(R_{\ell}+1) / 2$. 
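As an illustrative data point (our own arithmetic, using the fine resolution $R_f = 256$ employed in the numerical examples below and a coarsening ratio $\alpha = 1/2$, i.e., $R_c = 128$), this gives
\begin{equation*}
K_f = \frac{3 \cdot 256 \cdot 257}{2} = 98\,688, \qquad K_c = \frac{3 \cdot 128 \cdot 129}{2} = 24\,768, \qquad \frac{K_c}{K_f} \approx 0.251 \approx \alpha^2.
\end{equation*}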
Using this notation and the definition $\alpha = R_c/R_f$, we can obtain an expression of $C^s_c$ as a function of $\alpha$, $C^s_f$, and $R_f$, by writing \begin{equation} C^s_c = \frac{K_c}{K_f} C^s_f = \frac{R_c(R_c+1)}{R_f(R_f+1)} C^s_f = \alpha^2 \frac{R_f + 1/\alpha}{R_f + 1} C^s_f. \end{equation} Similarly, using the same assumptions, we obtain \begin{equation} C^{fi}_c = \alpha^2 \frac{R_f+1/\alpha}{R_f+1} C^{fi}_f, \qquad \qquad C^{fe}_c = \alpha^2 \frac{R_f+1/\alpha}{R_f+1} C^{fe}_f. \end{equation} Using these notations and assuming that MLSDC-SH and SDC achieve the same accuracy, the theoretical speedup obtained with MLSDC-SH, denoted by $\mathcal{S}^{\textit{theo}}$, reads \begin{equation} \mathcal{S}^{\textit{theo}} = \frac{C^{\textit{SDC}(M_f+1,N_{\textit{S}})}}{C^{\textit{MLSDC}(M_f+1,M_c+1,N_{\textit{ML}},\alpha)}} = \frac{N_S}{N_{ML}} \times \frac{1}{1 + \displaystyle \alpha^2 \frac{5 (R_{f} + 1/\alpha) M_c}{3 (R_{f}+1) M_f }}. \label{theoretical_speedup} \end{equation} For instance, MLSDC(3,2,2,1/2), based on two fine sweeps and two coarse sweeps, yields a theoretical speedup $\mathcal{S}^{\textit{theo}} \approx 1.66$ compared to SDC(3,4), which uses four fine sweeps. This corresponds to a reduction of 40 \% in the wall-clock time. MLSDC(5,3,4,1/2), based on four fine sweeps and four coarse sweeps, also results in a theoretical speedup $\mathcal{S}^{\textit{theo}} \approx 1.66$ compared to SDC(5,8), which relies on eight fine sweeps. This reasoning assumes that one MLSDC-SH iteration can replace two single-level SDC iterations and still achieve the same accuracy. This point is investigated in the next section using numerical examples. We conclude this section by comparing the computational cost of MLSDC-SH to that of the fully implicit single-level SDC scheme based on the Spectral Element Method (SEM) presented in \cite{jia2013spectral} and referred to as SDC-SEM in the remainder of this paper. The fully implicit SDC-SEM iteration entails solving a large nonlinear system on the fine problem to update all the degrees of freedom simultaneously. This is achieved using the Jacobian-free Newton-Krylov (JFNK) method. As stated by the authors, the computational cost of this nonlinear solve can be large and heavily depends on the availability of a scalable preconditioner for the linear systems. Instead, the MLSDC-SH iteration involves trivial diagonal linear solves that can easily be parallelized. In addition, our multi-level time integration framework relies on a hierarchy of space-time levels to shift a significant fraction of the computational work to the coarser representation of the problem. This reduces the number of fine sweeps in the algorithm and therefore further reduces the cost of a time step. As a result, we expect IMEX MLSDC-SH to be significantly less expensive than the fully implicit SDC-SEM on a per-timestep basis for moderate resolutions. Assuming that the linear systems can be efficiently preconditioned -- as in \cite{lott2015algorithmically} -- the key to the performance of SDC-SEM lies in its ability to take much larger stable time steps than MLSDC-SH to compensate for its relatively high cost on a per-timestep basis. Exploration of this trade-off requires a careful analysis that will be presented in future work.

\section{\label{section_numerical_examples}Numerical examples}

We assess the performance of MLSDC-SH with state-of-the-art test cases for the development of dynamical cores. All the test cases are nonlinear.
They are selected to focus on particular challenges that arise with MLSDC-SH. The first test case in Section~\oldref{subsection_steady_zonal_jet} targets geostrophically balanced modes. It evaluates the effects of multi-level mode truncation and the relation to the diffusion used in the simulations. The second test case in Section~\oldref{subsection_nonlinear_propagation_of_gaussian_dome} studies the observed order of convergence of MLSDC-SH upon refinement in time for waves propagating on the rotating sphere. While the first two test cases are mainly dominated by the linear parts, the following benchmarks assess the performance of MLSDC-SH in the presence of stronger nonlinear interactions. The Rossby-Haurwitz benchmark in Section~\oldref{subsection_rossby_haurwitz_wave} studies the advection of a wave that propagates around the sphere without changing shape. This is followed by the unstable barotropic wave benchmark in Section~\oldref{subsection_galewsky} with an initially linear balanced flow perturbed by the introduction of a Gaussian bump in the geopotential field. All these benchmarks provide a key insight into the numerical properties of MLSDC-SH in the context of atmospheric simulations.

\subsection{\label{subsection_steady_zonal_jet}Steady zonal jet}

We first study the behavior of the multi-level SDC scheme on a steady test case derived from \cite{galewsky2004initial}. This test case consists in the simulation of a steady, analytically specified mid-latitude jet with an unperturbed, balanced height field. This test assesses the ability of the numerical schemes to maintain this balanced state for 144 hours. The vorticity field obtained with the single-level SDC(5,8) with a modal resolution of $R_f = S_f = 256$ is shown in Fig.~\oldref{fig:galewsky_test_case_without_localized_bump_vorticity_field}, along with the corresponding vorticity spectrum in Fig.~\oldref{fig:galewsky_test_case_no_localized_bump_vorticity_spectrum}. This steady numerical test is used to illustrate the order of convergence of MLSDC-SH upon refinement in time. We will consider the computational cost of the multi-level scheme in subsequent examples. We highlight that we do not use the steady geostrophic balance test case of \cite{williamson1992standard} here because it is based on an initial vorticity field that can be represented with a few modes only. Instead, the vorticity field of Fig.~\oldref{fig:galewsky_test_case_no_localized_bump_vorticity_spectrum} has a spectrum that spans a larger number of modes. This is key for our analysis because it allows us to better study the impact of the coarsening strategy based on spectral coefficient truncation (see Section~\oldref{subsubsection_coarsening_strategy_and_transfer_functions}) on the convergence rate upon temporal refinement. To the best of our knowledge, this is the first time that a study of the effect of the coarsening strategy on the observed order of convergence of MLSDC is conducted.

\begin{figure} \caption{Vorticity field obtained with the single-level SDC(5,8) scheme for the steady zonal jet test case ($R_f = S_f = 256$).} \label{fig:galewsky_test_case_without_localized_bump_vorticity_field} \end{figure}

\begin{figure} \caption{Spectrum of the vorticity field for the steady zonal jet test case.} \label{fig:galewsky_test_case_no_localized_bump_vorticity_spectrum} \end{figure}

We perform a refinement study in time to assess the impact of the spatial coarsening ratio, $\alpha$, on the observed order of convergence of the MLSDC-SH scheme. The study is done with a fixed fine resolution of $R_f = S_f = 256$.
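For reference, the observed orders of convergence reported in this refinement study can be estimated in the standard way from the errors obtained with two consecutive time step sizes,
\begin{equation*}
p \approx \frac{\ln \big( \| e(\Delta t_1) \| / \| e(\Delta t_2) \| \big)}{\ln \left( \Delta t_1 / \Delta t_2 \right)},
\end{equation*}
where $\| e(\Delta t) \|$ denotes the error norm obtained with the time step size $\Delta t$ (a standard estimate, stated here only as a reminder).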
We consider two configurations, denoted by $\mathcal{A}$ and $\mathcal{B}$, in which the diffusion coefficient is set to $\nu_{\mathcal{A}} = 1.0 \times 10^4 \, \text{m}^2. \text{s}^{-1}$ and $\nu_{\mathcal{B}} = 1.0 \times 10^5 \, \text{m}^2. \text{s}^{-1}$, respectively. In each configuration, the reference solution is obtained with SDC(5,8) using a time step size of $\Delta t_{\textit{ref}} = 90 \, \text{s}$. Fig.~\oldref{fig:galewsky_test_case_no_localized_bump_accuracy_vorticity} shows the norm of the error in the vorticity field with respect to the reference solution as a function of the time step size. We use the $L_{\infty}$-norm to illustrate the connection between the convergence rate of MLSDC-SH upon refinement in time and the magnitude of the spectral coefficients of the vorticity that are truncated during the spatial restriction from the fine level to the coarse level. For this test case, we focus on MLSDC(3,2,2,$\alpha$), which relies on three fine temporal nodes, two coarse temporal nodes, and uses two iterations (with one fine sweep and one coarse sweep per iteration). In both configurations, the observed order of convergence of MLSDC(3,2,2,$\alpha$) varies significantly as a function of the spatial coarsening ratio, $\alpha$.

\begin{figure} \caption{$L_{\infty}$-norm of the error in the vorticity field as a function of the time step size for the steady zonal jet test case, in configurations $\mathcal{A}$ and $\mathcal{B}$.} \label{fig:galewsky_test_case_no_localized_bump_accuracy_vorticity} \label{fig:galewsky_test_case_no_localized_bump_accuracy_vorticity_A} \label{fig:galewsky_test_case_no_localized_bump_accuracy_vorticity_B} \end{figure}

In configuration $\mathcal{A}$, MLSDC(3,2,2,1/2) achieves fourth-order convergence upon refinement in time for stable time steps larger than $120 \, \text{s}$, but exhibits only second-order convergence for shorter time steps. The reduction in the observed order of convergence can be explained by considering the vorticity spectrum of Fig.~\oldref{fig:galewsky_test_case_no_localized_bump_vorticity_spectrum}. Specifically, we note that for the time step size range defined by $\Delta t \leq 90 \, \text{s}$, the MLSDC(3,2,2,1/2) scheme reaches an $L_{\infty}$-norm of the error smaller than $10^{-9}$. We see in Fig.~\oldref{fig:galewsky_test_case_no_localized_bump_vorticity_spectrum} that this threshold corresponds to the order of magnitude of the truncated terms during the restriction to the coarse level when $R_c = S_c = 128$. In MLSDC(3,2,2,1/4) and MLSDC(3,2,2,1/8), the truncated coefficients in the vorticity spectrum are relatively large, which causes the observed convergence rate upon refinement in time to be reduced to second order in the entire range of stable time step sizes. Conversely, MLSDC(3,2,2,4/5) achieves the same convergence rate as SDC(3,4) over the full time step range (not shown here for brevity). In configuration $\mathcal{B}$, the use of a larger diffusion coefficient significantly reduces the magnitude of the spectral coefficients associated with the high-frequency modes. Therefore, the MLSDC(3,2,2,1/2) scheme achieves fourth-order convergence in the entire time step range considered here. MLSDC(3,2,2,1/4) achieves fourth-order convergence in a larger fraction of the range of stable time step sizes, but still exhibits a reduction of its observed order of convergence when the norm of the error reaches the magnitude of the terms that are truncated during the restriction procedure. MLSDC(3,2,2,1/8) is still limited to second-order convergence.
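For reference (simple arithmetic based on the definition $\alpha = R_c/R_f$ with $R_f = 256$), the coarsening ratios considered here correspond to coarse spectral resolutions of approximately $R_c = 204$, $128$, $64$, and $32$ for $\alpha = 4/5$, $1/2$, $1/4$, and $1/8$, respectively.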
For this test case, the order of convergence of the MLSDC-SH scheme for the geopotential and divergence variables, not shown here, is similar to that observed for the vorticity. The key insight of this section is that the accuracy of MLSDC-SH, as the time step is reduced, depends on the interplay between two key factors, namely the spectrum of the fine solution and the magnitude of the spatial coarsening ratio. They determine the range of scales of the fine solution that can be captured by the coarse correction, and as a result have a strong impact on the observed order of convergence of MLSDC-SH upon refinement in time. In particular, the presence of large high-frequency modes in the fine solution imposes a lower limit on the spatial coarsening ratio to preserve the high-order convergence of the multi-level scheme. Next, we study the computational cost of the MLSDC-SH scheme using unsteady test cases, starting with Gaussian dome propagation.

\subsection{\label{subsection_nonlinear_propagation_of_gaussian_dome}Propagation of a Gaussian dome}

We now consider an initial condition derived from the third numerical experiment of \cite{swarztrauber2004shallow}. The velocities are initially equal to zero ($u = v = 0$). We place a Gaussian dome in the initial geopotential field, such that \begin{equation} h( \lambda, \phi ) = \bar{h} + A \text{e}^{-\alpha (d / a )^2}, \label{gaussian_bump} \end{equation} where $a$ denotes the Earth radius. The distance $d$ is defined as \begin{equation} d = \sqrt{x^2 + y^2 + z^2}, \end{equation} with \begin{align} x &= a \big( \cos(\lambda) \cos(\phi) - \cos(\lambda_c) \cos(\phi_c) \big), \\ y &= a \big( \sin(\lambda) \cos(\phi) - \sin(\lambda_c) \cos(\phi_c) \big), \\ z &= a \big( \sin(\phi) - \sin(\phi_c) \big). \end{align} This corresponds to a Gaussian dome centered at $\lambda_c = \pi$ and $\phi_c = \pi/4$. We use realistic values for the Earth radius, the gravitational acceleration, and the angular rate of rotation $\Omega$ involved in the Coriolis force. We set $\bar{h} = 29400 \, \text{m}$ and $A = 6000 \, \text{m}$, which is about ten times larger than in the original test case. The simulation of the collapsing dome is run for one day to study the behavior of MLSDC-SH on an advection-dominated test case. The geopotential field at different times is shown in Fig.~\oldref{fig:gaussian_bump_test_case_geopotential_field}, and the corresponding spectrum in Fig.~\oldref{fig:gaussian_test_case_spectra}.

\begin{figure} \caption{Geopotential field at different times during the propagation of the Gaussian dome.} \label{fig:gaussian_bump_test_case_geopotential_field} \label{fig:gaussian_bump_test_case_geopotential_field_0} \label{fig:gaussian_bump_test_case_geopotential_field_2} \label{fig:gaussian_bump_test_case_geopotential_field_4} \end{figure}

\begin{figure} \caption{Spectrum of the geopotential field for the Gaussian dome test case.} \label{fig:gaussian_test_case_spectra} \end{figure}

To assess the accuracy of the MLSDC-SH scheme, we perform a refinement study in time for a fixed spatial resolution ($R_f = S_f = 256$) over one day. The reference solution is obtained with the single-level SDC(5,8) and a time step size of $\Delta t_{\textit{ref}} = 60 \, \text{s}$. The diffusion coefficient is set to $\nu = 1.0 \times 10^5 \, \text{m}^2. \text{s}^{-1}$. For this test case, we focus again on the geopotential and vorticity fields, because the results for the divergence field are qualitatively similar.
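For concreteness, the initial geopotential height \ref{gaussian_bump} can be evaluated on a latitude-longitude grid as in the short sketch below. This is an illustrative snippet of ours: the Earth radius is the standard value, while the Gaussian width parameter and the grid sizes are placeholder choices that are not specified in the text.

\begin{verbatim}
import numpy as np

a            = 6.37122e6          # Earth radius [m] (standard value)
h_bar, A     = 29400.0, 6000.0    # mean height and dome amplitude [m]
lam_c, phi_c = np.pi, np.pi / 4   # center of the Gaussian dome
alpha_width  = 10.0               # width parameter (placeholder value)

def h0(lam, phi):
    """Initial height field h(lambda, phi) of the Gaussian dome."""
    x = a * (np.cos(lam) * np.cos(phi) - np.cos(lam_c) * np.cos(phi_c))
    y = a * (np.sin(lam) * np.cos(phi) - np.sin(lam_c) * np.cos(phi_c))
    z = a * (np.sin(phi) - np.sin(phi_c))
    d = np.sqrt(x**2 + y**2 + z**2)
    return h_bar + A * np.exp(-alpha_width * (d / a)**2)

lam = np.linspace(0.0, 2.0 * np.pi, 512)        # longitudes
phi = np.linspace(-np.pi / 2, np.pi / 2, 256)   # latitudes
H0  = h0(*np.meshgrid(lam, phi))  # physical-space field before SH transform
\end{verbatim}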
For completeness, the study includes the results obtained with a second-order implicit-explicit Runge-Kutta scheme based on the temporal splitting \ref{system_of_odes} to \ref{system_of_odes_explicit_part}. The $L_{\infty}$-norm of the error with respect to the reference solution, as a function of time step size, is shown in Fig.~\oldref{fig:gaussian_bump_test_case_accuracy}.

\begin{figure} \caption{$L_{\infty}$-norm of the error in the geopotential and vorticity fields as a function of the time step size for the Gaussian dome test case.} \label{fig:gaussian_bump_test_case_accuracy} \label{fig:gaussian_bump_test_case_accuracy_geopotential} \label{fig:gaussian_bump_test_case_accuracy_vorticity} \end{figure}

We see that MLSDC(3,2,2,1/2) achieves fourth-order convergence and an error of the same magnitude as that obtained with the single-level SDC(3,4) for time step sizes such that $\Delta t \geq 60 \, \text{s}$. For a smaller time step size of $30 \, \text{s}$, the error induced by the truncation of high-frequency modes during coarsening reduces the observed convergence of MLSDC(3,2,2,1/2) to second order. As in the previous numerical example, the reduction in the observed convergence rate occurs when the $L_{\infty}$-norm of the error reaches the magnitude of the spectral coefficients truncated during spatial coarsening -- about $10^{-1}$ for the geopotential according to Fig.~\oldref{fig:gaussian_test_case_spectra}. Numerical results not included for brevity indicate that this reduction in accuracy caused by spatial coarsening persists even in the absence of temporal coarsening ($M_f = M_c$). Still, in Fig.~\oldref{fig:gaussian_bump_test_case_accuracy}, the magnitude of the error obtained with MLSDC(3,2,2,1/2) in this range of small time steps remains significantly smaller than that obtained with the single-level second-order SDC(2,2). Fig.~\oldref{fig:gaussian_bump_test_case_accuracy} also shows that MLSDC(5,3,4,1/2) achieves the same convergence rate as the single-level SDC(5,8) whenever $\Delta t \geq 400 \, \text{s}$. For smaller time step sizes, the observed convergence of MLSDC(5,3,4,1/2) is reduced to fourth order. But, Fig.~\oldref{fig:gaussian_bump_test_case_accuracy} demonstrates that doing more iterations with MLSDC(5,3,7,1/2) is sufficient to recover eighth-order convergence in the asymptotic range. To interpret these results, we distinguish two regimes in the temporal refinement study of Fig.~\oldref{fig:gaussian_bump_test_case_accuracy}. For very large time steps, large errors are caused by the fact that the fine and coarse corrections do not resolve the large temporal scales accurately and overstep the small scales present in the problem. Reducing the time step size in this regime reduces the errors associated with the large temporal scales. These scales can be resolved on both the coarse level and the fine level, which explains why MLSDC-SH and SDC converge at the same rate. The second regime starts for smaller time steps once these large scales have been resolved accurately. The error is then dominated by small-scale features that can be resolved by the fine correction but cannot be captured by the coarse correction due to the lower spatial resolution of the coarse problem. This undermines the observed order of convergence of MLSDC-SH as the time step size becomes very small.
\begin{figure} \caption{Error as a function of the wall-clock time for the Gaussian dome test case (geopotential and vorticity fields).} \label{fig:gaussian_bump_test_case_computational_cost} \label{fig:gaussian_bump_test_case_computational_cost_geopotential} \label{fig:gaussian_bump_test_case_computational_cost_vorticity} \end{figure}

In Fig.~\oldref{fig:gaussian_bump_test_case_computational_cost}, we investigate the computational cost of the MLSDC-SH scheme by measuring the wall-clock time of the simulations. MLSDC(3,2,2,1/2) is more efficient than the SDC(2,2) and SDC(3,4) schemes for the time step sizes considered here despite the reduced observed order of convergence when $\Delta t \leq 60 \, \text{s}$. MLSDC(5,3,4,1/2) is more efficient than SDC(5,8) whenever $\Delta t \geq 120 \, \text{s}$. Below this time step size, its efficiency deteriorates slightly. Since the MLSDC-SH scheme does not necessarily match the convergence rate of SDC in the simulations, we compute an observed speedup, $\mathcal{S}^{\textit{obs}}$, as the ratio of the computational cost of SDC over that of MLSDC-SH for a given error norm in Fig.~\oldref{fig:gaussian_bump_test_case_computational_cost}. For an error norm of $10^0$ in the geopotential field, MLSDC(3,2,2,1/2) achieves an observed speedup $\mathcal{S}^{\textit{obs}} \approx 1.58$ -- i.e., a reduction of 37 \% in wall-clock time -- compared to SDC(3,4). Considering that the cost of the FAS correction has been neglected in \ref{theoretical_speedup}, this is close to the theoretical speedup $\mathcal{S}^{\textit{theo}} \approx 1.66$ computed in Section~\oldref{section_temporal_discretization}. For the same magnitude of the error norm, MLSDC(5,3,4,1/2) achieves an observed speedup $\mathcal{S}^{\textit{obs}} \approx 1.50$ compared to SDC(5,8), which represents a reduction of 33 \% in wall-clock time. This is again relatively close to the theoretical speedup $\mathcal{S}^{\textit{theo}} \approx 1.66$. Although this is not included in the figure for clarity, we point out that MLSDC(5,3,5,1/2) and MLSDC(5,3,7,1/2) are more accurate, but also more expensive than MLSDC(5,3,4,1/2) in the range of time step sizes considered here.

\subsection{\label{subsection_rossby_haurwitz_wave}Rossby-Haurwitz wave}

In this section, we apply MLSDC-SH to the Rossby-Haurwitz wave test case included in \cite{williamson1992standard} and also considered in \cite{jia2013spectral}. The initial analytically specified velocity field is non-divergent, and is computed with wavenumber 4. The initial geopotential field is obtained by solving the balance equation. The resulting Haurwitz pattern moves from east to west. We consider a fine resolution defined by $R_f = S_f = 256$ and we use a diffusion coefficient $\nu = 1.0 \times 10^{5} \, \text{m}^2.\text{s}^{-1}$. In Fig.~\oldref{fig:rossby_haurwitz_wave_field} (respectively, Fig.~\oldref{fig:rossby_haurwitz_wave_vorticity_spectrum}), we show the solution (respectively, the vorticity spectrum) obtained with SDC(5,8) after one day.

\begin{figure} \caption{Geopotential and vorticity fields obtained with SDC(5,8) after one day for the Rossby-Haurwitz wave test case.} \label{fig:rossby_haurwitz_wave_field} \label{fig:rossby_haurwitz_wave_geopotential_field} \label{fig:rossby_haurwitz_wave_vorticity_field} \end{figure}

\begin{figure} \caption{Spectrum of the vorticity field for the Rossby-Haurwitz wave test case.} \label{fig:rossby_haurwitz_wave_vorticity_spectrum} \end{figure}

As in the previous sections, we carry out a refinement study in time using a reference solution obtained with SDC(5,8) over one day with a time step size $\Delta t_{\textit{ref}} = 120 \, \text{s}$.
The results, shown in Fig.~\oldref{fig:rossby_haurwitz_wave_test_case_accuracy}, differ between the geopotential variable and the vorticity variable. Specifically, for the former, MLSDC(3,2,2,1/2) achieves fourth-order convergence upon refinement in time and the same error magnitude as SDC(3,4) for the full time step size range considered here. MLSDC(5,3,4,1/2) is also more accurate than in the previous examples and reaches fifth-order convergence. But, for the vorticity variable, MLSDC(3,2,2,1/2) exhibits a reduction in its convergence rate when $\Delta t \leq 120 \, \text{s}$, which is slightly earlier than SDC(3,4). Here again, this reduction is caused by the truncation of high-frequency modes during coarsening (see the vorticity spectrum in Fig.~\oldref{fig:rossby_haurwitz_wave_vorticity_spectrum}). MLSDC(5,3,4,1/2) is not in the asymptotic range and has already converged for the range of time step sizes considered here, which explains the flat line in Fig.~\oldref{fig:rossby_haurwitz_wave_test_case_accuracy_vorticity}. In terms of wall-clock time, the most efficient scheme for both variables is MLSDC(5,3,4,1/2), as shown in Fig.~\oldref{fig:rossby_haurwitz_wave_test_case_computational_cost}. This multi-level scheme achieves a very low error norm for both variables while performing a large portion of the computations on the coarse level. MLSDC(3,2,2,1/2) is less efficient than MLSDC(5,3,4,1/2) but still more efficient than SDC(3,4) on the full range of time steps. In particular, for an error norm of $10^{-4}$ in the geopotential field, MLSDC(3,2,2,1/2) achieves an observed speedup $\mathcal{S}^{\textit{obs}} \approx 1.50$ compared to SDC(3,4), that is, a reduction in wall-clock time of 35 \%. This is in good agreement with the theoretical speedup $\mathcal{S}^{\textit{theo}} \approx 1.66$ computed in Section~\oldref{subsection_computational_cost_of_sdc_and_mlsdc}. The MLSDC(3,2,2,1/2) performance deteriorates for the vorticity variable for smaller time steps and the speedup compared to SDC(3,4) decreases because of the reduction in its observed order of convergence. For an error norm of $10^{-12}$ in the vorticity field, the observed speedup is $\mathcal{S}^{\textit{obs}} \approx 1.50$, but it is reduced to $\mathcal{S}^{\textit{obs}} \approx 1.13$ for an error norm of $10^{-14}$. Next, we conclude the analysis of MLSDC-SH with a challenging unsteady test case representative of atmospheric flows.

\begin{figure} \caption{Error in the geopotential and vorticity fields as a function of the time step size for the Rossby-Haurwitz wave test case.} \label{fig:rossby_haurwitz_wave_test_case_accuracy} \label{fig:rossby_haurwitz_wave_test_case_accuracy_geopotential} \label{fig:rossby_haurwitz_wave_test_case_accuracy_vorticity} \end{figure}

\begin{figure} \caption{Error as a function of the wall-clock time for the Rossby-Haurwitz wave test case (geopotential and vorticity fields).} \label{fig:rossby_haurwitz_wave_test_case_computational_cost} \label{fig:rossby_haurwitz_wave_test_case_computational_cost_geopotential} \label{fig:rossby_haurwitz_wave_test_case_computational_cost_vorticity} \end{figure}

\subsection{\label{subsection_galewsky}Nonlinear evolution of an unstable barotropic wave}

In this section, we consider the barotropic instability test case proposed in \cite{galewsky2004initial}. This is done by introducing a localized bump in the height field to perturb the balanced state described in Section~\oldref{subsection_steady_zonal_jet}. The perturbation first triggers the development of gravity waves and then leads to the formation of complex vortical dynamics.
These processes operate on multiple time scales and are representative of the horizontal features of atmospheric flows. We run the simulations using two configurations, $\mathcal{B}$ and $\mathcal{C}$, based on a diffusion coefficient $\nu_{\mathcal{B}} = 1.0 \times 10^5 \, \text{m}^2.\text{s}^{-1}$ -- as in \cite{galewsky2004initial} -- and $\nu_{\mathcal{C}} = 2.0 \times 10^5 \, \text{m}^2.\text{s}^{-1}$, respectively. The reference solutions for the refinement studies detailed below are obtained with SDC(5,8) with a time step size $\Delta t_{\textit{ref}} = 60 \, \text{s}$. In \cite{jia2013spectral}, the largest time step size used in the single-level SDC scheme combined with the Spectral Element Method (SEM) based on 24 elements along each cube edge and a polynomial basis of degree seven is $1200 \, \text{s}$, which is the same order of magnitude as the largest time step size used here. The vorticity fields after 122 hours and 144 hours for configuration $\mathcal{B}$ are shown in Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_vorticity_field}. Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_vorticity_spectrum} presents the corresponding spectrum of the vorticity field at the same times.

\begin{figure} \caption{Vorticity field after 122 hours and 144 hours for the unstable barotropic wave test case (configuration $\mathcal{B}$).} \label{fig:galewsky_test_case_with_localized_bump_vorticity_field} \label{fig:galewsky_test_case_with_localized_bump_vorticity_field_120} \label{fig:galewsky_test_case_with_localized_bump_vorticity_field_144} \end{figure}

\begin{figure} \caption{Spectrum of the vorticity field at the same times for the unstable barotropic wave test case.} \label{fig:galewsky_test_case_with_localized_bump_vorticity_spectrum} \end{figure}

As in Section~\oldref{subsection_steady_zonal_jet}, we first highlight the connection between the spectrum of the vorticity field and the observed order of convergence of the MLSDC-SH scheme upon refinement in time. This is done with MLSDC(3,2,2,1/2) -- that is, MLSDC-SH with three nodes on the fine level, two nodes on the coarse level, two iterations, and $R_c = S_c = 128$ -- in the refinement study in time shown in Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_vorticity_accuracy_l_infty_norm}. When $\nu_{\mathcal{B}} = 1.0 \times 10^5 \, \text{m}^2.\text{s}^{-1}$, the magnitude of the truncated terms in the vorticity spectrum is of the order of $10^{-8}$ (see Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_vorticity_spectrum}). Since this is also the order of the $L_{\infty}$-norm of the error for the largest stable time step ($\Delta t = 400 \, \text{s}$), MLSDC(3,2,2,1/2) achieves only second-order convergence for the range of time step sizes considered here. With $\nu_{\mathcal{C}} = 2.0 \times 10^5 \, \text{m}^2.\text{s}^{-1}$, the magnitude of the truncated terms is of the order of $10^{-9}$, and MLSDC(3,2,2,1/2) reaches fourth-order convergence until this threshold is reached for $\Delta t = 320 \, \text{s}$. For completeness, we have run the same test with $\nu = 3.0 \times 10^5 \, \text{m}^2.\text{s}^{-1}$, in which case this threshold is lower, which allows MLSDC(3,2,2,1/2) to exhibit fourth-order convergence for $\Delta t \geq 120 \, \text{s}$. In the following paragraphs, we show that this reduction in the observed order of convergence can be overcome by doing additional MLSDC-SH iterations. For instance, we demonstrate that MLSDC(3,2,3,1/2) recovers fourth-order convergence in configurations $\mathcal{B}$ and $\mathcal{C}$.
\begin{figure} \caption{$L_{\infty}$-norm of the error in the vorticity field as a function of the time step size for the unstable barotropic wave test case.} \label{fig:galewsky_test_case_with_localized_bump_vorticity_accuracy_l_infty_norm} \end{figure}

We now use this knowledge of the spectrum of the vorticity field to motivate our choice of the spatial coarsening ratio in each configuration. The goal is to make the coarse sweeps as inexpensive as possible without undermining the observed order of convergence of the MLSDC-SH scheme upon refinement in time. In configuration $\mathcal{B}$, we choose a relatively modest spatial coarsening ratio $\alpha_{\mathcal{B}} = 4/5$ to account for the presence of large spectral coefficients associated with the high-frequency modes. At the coarse level, this choice yields $R^{\mathcal{B}}_c = S^{\mathcal{B}}_c = 204$. In configuration $\mathcal{C}$, we can choose a more aggressive coarsening strategy with $\alpha_{\mathcal{C}} = 1/2$, leading to $R^{\mathcal{C}}_c = S^{\mathcal{C}}_c = 128$ as in the previous test cases. These choices are such that the truncated spectral vorticity coefficients have the same order of magnitude, that is, $|\zeta^{\mathcal{B}}_{204}| \approx |\zeta^{\mathcal{C}}_{128}| \approx 3 \times 10^{-9}$ in Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_vorticity_spectrum}. The results of the refinement study in $L_{\infty}$-norm are shown in Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_accuracy_geopotential} for the geopotential. The asymptotic rates observed for the divergence and the vorticity are qualitatively similar to those of the geopotential and are therefore omitted for brevity.

\begin{figure} \caption{$L_{\infty}$-norm of the error in the geopotential field as a function of the time step size for the unstable barotropic wave test case, in configurations $\mathcal{B}$ and $\mathcal{C}$.} \label{fig:galewsky_test_case_with_localized_bump_accuracy_geopotential} \label{fig:galewsky_test_case_with_localized_bump_accuracy_geopotential_A} \label{fig:galewsky_test_case_with_localized_bump_accuracy_geopotential_B} \end{figure}

In Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_accuracy_geopotential}, MLSDC(3,2,2,$\alpha$) exhibits the same observed order of convergence and error magnitude as SDC(3,4) for both diffusion configurations. MLSDC(5,3,4,$\alpha$) also converges at a fourth-order rate in the asymptotic range, but achieves a significantly smaller error magnitude than MLSDC(3,2,2,$\alpha$). MLSDC(5,3,4,$\alpha$) is as accurate as SDC(5,8) for larger time step sizes. But, to achieve the same observed order of convergence as SDC(5,8) in the entire range of time step sizes, seven iterations -- with MLSDC(5,3,7,$\alpha$) -- are needed. Finally, we note that numerical examples not shown here for brevity confirm that the observed order of convergence of MLSDC-SH increases significantly for a larger coefficient $\alpha$ -- i.e., less aggressive spatial coarsening -- but this also drastically increases the computational cost.

\begin{figure} \caption{$L_2$-norm of the error in the geopotential field as a function of the wall-clock time for the unstable barotropic wave test case.} \label{fig:galewsky_test_case_with_localized_bump_computational_cost_geopotential} \label{fig:galewsky_test_case_with_localized_bump_computational_cost_geopotential_A} \label{fig:galewsky_test_case_with_localized_bump_computational_cost_geopotential_B} \end{figure}

In Fig.~\oldref{fig:galewsky_test_case_with_localized_bump_computational_cost_geopotential}, we show the $L_2$-norm of the error as a function of the wall-clock time of the simulations for the geopotential.
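As a point of reference for the cost comparisons below, evaluating the theoretical speedup \ref{theoretical_speedup} of MLSDC(3,2,2,$\alpha$) over SDC(3,4) (our own arithmetic, with $M_f = 2$, $M_c = 1$, $N_S = 4$, $N_{\textit{ML}} = 2$, and neglecting the weak dependence on $R_f$) gives
\begin{equation*}
\mathcal{S}^{\textit{theo}} \approx \frac{2}{1 + \dfrac{5 \alpha^2}{6}},
\end{equation*}
that is, $\mathcal{S}^{\textit{theo}} \approx 1.66$ for $\alpha = 1/2$ and $\mathcal{S}^{\textit{theo}} \approx 1.30$ for $\alpha_{\mathcal{B}} = 4/5$.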
We see that MLSDC(3,2,2,$\alpha$) is significantly less expensive than SDC(3,4) in the full range of time step sizes. This cost reduction is larger with MLSDC(3,2,2,$\alpha_{\mathcal{C}}$) since configuration $\mathcal{C}$ allows for a more aggressive spatial coarsening strategy than configuration $\mathcal{B}$. In both configurations, the observed speedup of MLSDC(3,2,2,$\alpha$) compared to SDC(3,4) is close to the theoretical speedup. Specifically, in configuration $\mathcal{B}$, the observed speedup is $\mathcal{S}_{\mathcal{B}}^{\textit{obs}} \approx 1.28$ for an error norm of $3 \times 10^{-3}$ in the geopotential field, whereas the theoretical speedup -- obtained with \ref{theoretical_speedup} evaluated with $\alpha_{\mathcal{B}} = 4/5$ -- is $\mathcal{S}_{\mathcal{B}}^{\textit{theo}} \approx 1.30$. In configuration $\mathcal{C}$, MLSDC(3,2,2,1/2) achieves $\mathcal{S}^{\textit{obs}}_{\mathcal{C}} \approx 1.56$ for an error norm of $8 \times 10^{-5}$ in the geopotential, for a theoretical speedup $\mathcal{S}^{\textit{theo}}_{\mathcal{C}} \approx 1.66$. MLSDC(5,3,4,$\alpha$) is the most efficient scheme for relatively large error magnitudes and also achieves observed speedups close to the theoretical speedup. But, the performance of MLSDC(5,3,4,$\alpha$) deteriorates for lower error magnitudes. We found that doing additional iterations, for instance with MLSDC(3,2,3,$\alpha$) or MLSDC(5,3,5,$\alpha$), does not improve the efficiency of MLSDC-SH.

\section{\label{section_conclusion}Conclusions and future work}

We have studied a high-order implicit-explicit iterative multi-level time integration scheme for the nonlinear shallow-water equations on the rotating sphere. Our algorithm relies on the Multi-Level Spectral Deferred Corrections (MLSDC) scheme of \cite{emmett2012toward,speck2015multi} combined with a spatial discretization performed with the global Spherical Harmonics (SH) transform. MLSDC-SH applies a sequence of updates distributed on a hierarchy of space-time levels obtained by coarsening the problem in space and in time. This approach makes it possible to shift a significant portion of the computational work to the coarse representation of the problem to reduce the time-to-solution while preserving accuracy. We have discussed the requirements of consistent inter-level transfer operators which play a crucial role in MLSDC-SH. Our approach consists in exploiting the canonical basis of the multi-level scheme. This SH-based algorithm leads to restriction and interpolation procedures performed in spectral space to transfer the solution between different spatio-temporal levels. The proposed restriction and interpolation methods do not introduce spurious modes that would, driven by nonlinear interactions, propagate across the spectrum. Our results show that this is one of the key features needed to obtain an efficient MLSDC-SH scheme. The development of restriction and interpolation operators for other non-global spatial discretization schemes is left for future work. We have shown that MLSDC-SH is efficient for the nonlinear wave-propagation-dominated problems arising from the discretized Shallow-Water Equations (SWE) on the rotating sphere. Our numerical studies are based on challenging test cases that are representative of the horizontal effects present in the full atmospheric dynamics. With a steady zonal jet test case, we have first examined the impact of the coarsening strategy on the observed accuracy of MLSDC-SH upon refinement in time.
Then, using unsteady numerical examples, we have shown that MLSDC-SH can achieve up to eighth-order convergence upon refinement in time, and that MLSDC-SH can take stable time steps that are as large as those of the single-level SDC schemes. We have also demonstrated that MLSDC-SH is more efficient than the single-level SDC schemes, and in particular requires fewer function evaluations. Our results show that MLSDC-SH can reduce the wall-clock time of the simulations by up to 37\% compared to single-level SDC schemes. As a final note, we mention here that MLSDC is one of the key building blocks of the Parallel Full Approximation Scheme in Space and in Time (PFASST). The present work therefore lays the foundations of a parallel-in-time integration of the full shallow-water equations on the sphere with PFASST. \section{Code availability} The code used to generate the simulations presented here is publicly available \citep{schreiber2018sweet}. \end{document}
\begin{document} \allowdisplaybreaks \newcommand{1609.06157}{1609.06157} \renewcommand{\arabic{footnote}}{} \renewcommand{063}{063} \FirstPageHeading \ShortArticleName{$d$-Orthogonal Analogs of Classical Orthogonal Polynomials} \ArticleName{$\boldsymbol{d}$-Orthogonal Analogs of Classical Orthogonal\\ Polynomials\footnote{This paper is a~contribution to the Special Issue on Orthogonal Polynomials, Special Functions and Applications (OPSFA14). The full collection is available at \href{https://www.emis.de/journals/SIGMA/OPSFA2017.html}{https://www.emis.de/journals/SIGMA/OPSFA2017.html}}} \Author{Emil HOROZOV~$^{\dag\ddag}$} \AuthorNameForHeading{E.I.~Horozov} \Address{$^\dag$~Department of Mathematics and Informatics, Sofia University,\\ \hphantom{$^\dag$}~5 J.~Bourchier Blvd., Sofia 1126, Bulgaria} \EmailD{\href{mailto:[email protected]}{[email protected]}} \Address{$^\ddag$~Institute of Mathematics and Informatics, Bulg. Acad. of Sci.,\\ \hphantom{$^\ddag$}~Acad. G.~Bonchev Str., Block 8, 1113 Sofia, Bulgaria} \ArticleDates{Received October 01, 2017, in final form June 13, 2018; Published online June 26, 2018} \Abstract{Classical orthogonal polynomial systems of Jacobi, Hermite and Laguerre have the property that the polynomials of each system are eigenfunctions of a second order ordinary differential operator. According to a famous theorem by Bochner they are the only systems on the real line with this property. Similar results hold for the discrete orthogonal polynomials. In a recent paper we introduced a natural class of polynomial systems whose members are the eigenfunctions of a differential operator of higher order and which are orthogonal with respect to $d$ measures, rather than one. These polynomial systems, enjoy a~number of properties which make them a natural analog of the classical orthogonal polynomials. In the present paper we continue their study. The most important new properties are their hypergeometric representations which allow us to derive their generating functions and in some cases also Mehler--Heine type formulas.} \Keywords{$d$-orthogonal polynomials; finite recurrence relations; bispectral problem; gene\-ralized hypergeometric functions; generating functions} \Classification{34L20; 30C15; 33E05} \renewcommand{\arabic{footnote}}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} \label{intro} This paper is a natural continuation of the study initiated in \cite{Ho3} on the basis of several classes of examples in \cite{Ho1, Ho2}. There we constructed large families of polynomial systems that were called {\it $d$-orthogonal polynomials with the Bochner property.} The terminology ``Bochner's property'' derives from the Bochner's theorem~\cite{Bo} mentioned in the abstract and means that the polynomials are eigenfunction of a differential operator. We recall that, by definition, general $d$-orthogonal polynomials are polynomial systems $P_n(x)$, $ n =0,1, 2, \ldots$, $\deg (P_n) =n$ iff there exist $d$ linear functionals $\mathcal{L}_j$, $j =0,\ldots,d-1$ on the space of all polynomials~${\mathbb C}[x]$ such that \begin{gather*} \begin{cases} \mathcal{L}_j (P_nP_m) = 0, & m > nd+ j, \ n \geq 0, \\ \mathcal{L}_j(P_nP_{nd+ j}) \neq 0, & n \geq 0, \end{cases} \end{gather*} for each $j \in N_{d} := \{0, \ldots, d-1 \}$. When $d = 1$ this is the ordinary notion of orthogonal polynomials. The orthogonality is connected with $d$ functionals rather than with only one. 
According to~\cite{Ma, VIs} the above property is equivalent to the existence of a~linear recurrence relation of the form \begin{gather*} xP_n(x) = P_{n+1} + \sum_{j=0}^{d} \gamma_j(n) P_{n-j}(x). \end{gather*} Here and later we use mostly monic polynomials, i.e., whose coefficient at the highest degree is~1. The $d$-orthogonal polynomials and the more general class of the so-called multiple orthogonal polynomials have been intensively studied in the last 30 years due to their intriguing properties and applications, cf., e.g., \cite{Apt, ApKu, BDK} and~\cite[Chapter~23]{Is} and further references in the cited literature. In particular they have applications to random matrices \cite{Ku, KZh}, simultaneous Pad\'e approximations~\cite{DBr}, number theory \cite{ BR,Beu, So} (which in fact go back to Hermite), etc. Notice that the classical orthogonal polynomials have a number of properties that are missing \textit{in general} for the rest of the polynomial systems. Here we list some of them: \begin{itemize}\itemsep=0pt \item they are eigenfunctions of an ordinary differential operator, \item they have \textit{explicit differential} ladder operators (operators raising or lowering the index), \item they can be presented in terms of hypergeometric functions, \item they can be presented via Rodrigues formulas, \item there are Pearson's equations for the weights of their measures, \item they possess the Hahn's property, i.e., the polynomial system of their derivatives are again orthogonal polynomials. \end{itemize} The class of polynomial systems that we introduced in \cite{Ho3} also shares these properties. Some of them were established in the cited paper, e.g., \textit{explicit differential} ladder operators\footnote{As one of the referees kindly pointed to me in fact each polynomial system has ladder operators, cf.~\cite{BC}. However in~\cite{Ho3} we have given explicit differential ones. } and Rodrigues-like formulas, apart of the differential equation. All these properties were direct consequences of our main construction. Other properties will be found here~-- their hypergeometric representations, generating functions and in some cases~-- Mehler--Heine formulas. In another project we intend to obtain also the weights, defining the functionals $\mathcal{L}_j$, together with the Pearson equations for them and show their connection to biorthogonal ensembles, cf.~\cite{Bor, KZh}. All these properties make them close analogs of the classical orthogonal polynomials. In fact there are other polynomial systems that are analogs of classical orthogonal polyno\-mials. In \cite{ABV, VAC} the authors take another direction to generalize classical orthogonal polynomials. Namely they use the weights of the latter to produce a vector of weights. Their polynomial systems were later used in the study of different random matrix models~-- non-intersecting Brownian motion, matrix models with an external source, two-matrix models, cf.~\cite{ABK, BDK, Ku}. The main tools we used in \cite{Ho3} are the automorphisms of non-commutative algebras. We explain the construction for the case of the first Weyl algebra~$W_1$. It can be realized as the algebra of differential operators with polynomial coefficients in one variable. $W_1$ acts on the space of polynomials ${\mathbb C}[x]$. 
We consider the simplest polynomial system $\{\psi_n(x) = x^n, \, n=0, 1, \ldots\}$, and the differential operator $H= x\partial$ which satisfies \begin{gather*} H\psi_n(x) =n\psi_n(x), \qquad \partial \psi_n(x) = n\psi_{n-1}(x) \qquad \text{and} \qquad x\psi_n(x) = \psi_{n+1}(x). \end{gather*} Take any polynomial $q(\partial)$ in $\partial$, where $\partial := \frac{{\rm d}}{{\rm d}x}$. Below for simplicity we take $q(\partial) = -\partial^l/l$, $l\in {\mathbb N}$. It defines an automorphism $\sigma = e^{\operatorname{ad}_{q(\partial)}}$ of $W_1$, acting on $A \in W_1$ as \begin{gather*} \sigma(A) = e^{-\operatorname{ad}_{\partial^l/l}} A = \sum_{j=0}^{\infty} \frac{\operatorname{ad}^j_{-\partial^l/l}(A)}{j!}, \end{gather*} where $\operatorname{ad}_A(B) = [A,B]$. It is easy to see that the sum is finite. The images of the above operators are \begin{gather*} \sigma (H) = H - \partial^l, \qquad \sigma (\partial) = \partial, \qquad \sigma (x) = x -\partial^{l-1}. \end{gather*} If we define the polynomials \begin{gather*} P_n(x) = e^{-\partial^l/l} \psi_n(x) = \sum_{j=0}^{\infty}\frac{(-\partial^l/l)^jx^n}{j!}, \end{gather*} and put $L= \sigma (H)$ we easily see that \begin{gather*} LP_n(x) = n P_n(x), \\ xP_n(x) = P_{n+1}(x) + n(n-1)\cdots (n-l+2)P_{n-l+1}(x). \end{gather*} {\it The main point is that we constructed simultaneously the polynomial system $\{P_n(x)\}$, the differential operator $L$ and the finite-term recurrence.} Notice that for $l=2$ these are the Hermite polynomials. For arbitrary $l$ these are the Gould--Hopper polynomials~\cite{GH}. See also the examples. The same procedure can be repeated with other algebras. We can take instead of $\partial$ an operator of the form $G= R(H)\partial$, where $R(H)$ is any polynomial in $H$. Then instead of $W_1$ we can take the algebra spanned by $G$, $H$, $x$. The case of discrete orthogonal polynomials can be treated exactly in the same manner, realizing $W_1$ by difference operators, and starting with $\psi(x, n) = x(x-1)\cdots(x-n+1)$, cf.\ the next section or~\cite{Ho3}. In the present paper we study further the $d$-orthogonal polynomials with the Bochner property adding to the tools some new arguments which are not present in~\cite{Ho3}. Let us first list the new properties which we discuss below. The most fundamental one, of which the rest are consequences, is \textit{the hypergeometric representation} of the class of $d$-orthogonal polynomials with the Bochner property corresponding to the special case $q(G) = \rho G^l$, $l \in {\mathbb N}$, $\rho \in {\mathbb C}$, which we obtain both in the continuous and the discrete cases. Moreover, in both cases we provide two different representations in terms of hypergeometric functions. One of these representations has the advantage that the corresponding formula has the same values of hypergeometric parameters for all positive integers~$n$. However, the other formula, in which the corresponding parameters depend on $n$ $({\rm mod}~l)$, seems to be more useful in applications. The hypergeometric representations show that the class of $d$-orthogonal polynomials under consideration is very similar to the Gould--Hopper polynomials \cite{GH}, which correspond to $G= \partial_x$ and can also be included as a special case of our construction. For this reason, we call the $d$-orthogonal polynomials introduced above the {\it generalized Gould--Hopper $($GGH$)$ polynomials}.
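As a small explicit instance of the basic construction above (an illustrative computation of ours), take $q(\partial) = -\partial^3/3$, i.e., $l = 3$. Then \begin{gather*} P_1(x) = x, \qquad P_3(x) = e^{-\partial^3/3} x^3 = x^3 - 2, \qquad P_4(x) = e^{-\partial^3/3} x^4 = x^4 - 8x, \end{gather*} and the recurrence displayed above is verified directly: $xP_3(x) = x^4 - 2x = P_4(x) + 3 \cdot 2 \, P_1(x)$.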
One immediate consequence of the above hypergeometric representations is that these special polynomial systems naturally split into~$l$ families, each originating from the initial differential (difference) operator~$G$, exactly as in the case of the Hermite polynomials, which also is a~particular case of our general construction. Recall that the latter are naturally subdivided into two families: the even-indexed $H_{2n}(x)$, $n =0, 1, 2 \ldots$ and the odd-indexed ones $H_{2n +1}(x)$, $n =0, 1, 2 \ldots$. The corresponding representations are $H_{2n}(x)= L_n^{(-1/2)}\big(x^2\big) $ and $H_{2n+1}(x)= xL_n^{(1/2)}\big(x^2\big) $ (up to multiplicative constants), where $L^{(\alpha)}_n$ are the generalized Laguerre polynomials. It is worth pointing out that the above mentioned hypergeometric representations also use our main construction of the $d$-orthogonal polynomials in \cite{Ho3}, but they require some additional transformations. The point is that in these representations of the polynomials $P_n(x)$ each summand naturally corresponds to a summand in a generalized hypergeometric series. In the differential case when one uses $q(G) = G$, the hypergeometric representations were known before, see~\cite{BCD2}. Also the case of the Gould--Hopper polynomials is known \cite{LCS}. However, for $q(G) = \rho G^l$, $l >1,$ our formulas are new. In the discrete case, the formulas for the families corresponding to $G = a\Delta$, $a \in {\mathbb C}$ and $q(G) = G^l$, $l >1$ can be found in \cite{BCZ}. All other formulas, except for the Charlier and Meixner polynomials, are new. Our next goal is to find the \textit{generating functions} for the GGH polynomials. We present two such formulas, both based on the second hypergeometric representation. For the first family of generating functions, we use the well-known method to obtain the generating functions for the classical orthogonal polynomials. For the second family, we apply a formula due to Srivastava~\cite{Sr1}. Some of the $d$-orthogonal polynomials are known to have a generating function. These are exactly the ones mentioned above, which can be found in~\cite{BCD2}. Finally we find Mehler--Heine type asymptotic formulas \cite{Hei, Meh}, which are again based on the second hypergeometric representation. In the case of the multiple orthogonal polynomials, such formulas can be found in \cite{Tak, VAss}. In particular~\cite{VAss} contains our first result. We do not treat the discrete case, where, as one of the referees kindly pointed out to me, the corresponding notion is local separate convergence, although some authors again use Mehler--Heine type asymptotics, cf.~\cite{Dom}. It is essential to mention that the constructions of all systems of $d$-orthogonal polynomials in \cite{Ho3} are based on the so-called \textit{bispectral problem}, see~\cite{BHY1, BHY2, DG}. Namely, these polynomials are the eigenfunctions for two linear operators~-- the first one is differential (difference)~$L(x)$ in the variable~$x$, and the second one is the operator $\Lambda(n)$ in the variable~$n$, corresponding to the finite recurrence relation: \begin{gather*} L(x) P_n(x) = nP_n(x), \qquad xP_n(x) = \Lambda(n)P_n(x). \end{gather*} It turned out that both the generating functions and the Mehler--Heine asymptotics in the continuous case are expressed in terms of hypergeometric functions of the form \begin{gather*} \pFq{0}{q}{-}{\alpha_1, \alpha_2, \ldots, \alpha_q}{xt}.
\end{gather*} The latter functions in the form of Meijer's $G$-functions appear in an entirely different bispectral problem~-- for which both variables $x$ and $n$ are continuous, see~\cite{BHY2}, where they are called generalized Bessel functions. It seems that this is not a mere coincidence but could be exploited further. In particular, the already known results for the Bessel bispectral functions and their Darboux transformations could be used as a model in the study of our $d$-orthogonal polynomials with the Bochner property. Apart from Darboux transformations, some other possible directions of continuations of the present studies might include the so-called linearization problem (Clebsch--Gordan coefficients) and the problem of ``connection coefficients'' for the GGH polynomials. The asymptotic formulas from the present paper seem promising in the study of the zeros of the $d$-orthogonal polynomials with the Bochner property. Some other classical issues could be studied as well, such as finding the measures and the corresponding Pearson equations for them. In another project we intend to pursue the connections of the polynomial systems and the corresponding measures with integrable systems such as bi-graded Toda hierarchy and KP-hierarchy. Some of the well known matrix models originate from the special solutions of these hierarchies, corresponding to these polynomial systems or their measures, e.g. the generalized Kontsevich--Penner model, Brezin--Gross--Witten model, etc., see \cite{Al2,Al1, MMS} as well as the Kontsevich~\cite{Kont} model itself. We finally point out that some of these polynomial systems have been studied and used for many years. Apart from the Gould--Hopper polynomials we can mention the Konhauser--Toscano polynomials, whose special cases have been studied and used as early as in 1951 in connection with the penetration and diffusion of the $X$-rays, see~\cite{SF}. These polynomials found more recent applications in the random matrix theory where they have been used in the so-called Muttalib--Borodin biorthogonal ensembles. The latter appeared in the studies of disordered conductors in the metallic regime cf.~\cite{Bor,FW, Mu, Zh}. Also some of the continuous families describe products of Ginibre random matrices, cf.~\cite{AIK, KZh} (again by making use of the hypergeometric and Meijer's G-functions representations). Based on the rich mathematical properties of the $d$-orthogonal polynomials from this paper and in~\cite{Ho3} we hope that their study will be useful also in other problems. However here we do not pursue direct application but rather to give a unified approach to all these important special cases, scattered in the literature, see, e.g., \cite{BCBR1,BCD2, BCZ,GVZ, GH, VZh}. The paper is organized as follows. In order to make it independent of~\cite{Ho3}, in Section~\ref{pre} we recall all definitions and statements needed in the main part of the paper. We also give a proof of Hahn's property. Section~\ref{hyp} is the central one; here we derive different formulas for the GGH polynomials in terms of the hypergeometric functions, including some well-known. However our approach is novel; it is based entirely on the algebraic construction from \cite{Ho3} and it emphasizes the common origin of all formulas, old and new. Next, in Section~\ref{genf} using the hypergeometric representations, we derive the generating functions for the GGH polynomials. In Section~\ref{MHA} we prove some results of the Mehler--Heine type asymptotics for GGH polynomials. 
We finish the paper with a number of examples, see Section~\ref{exa}. Besides pure illustrations some of them provide new interpretations, see, e.g., certain cases of ``matching polynomials of graphs''. \section{Preliminaries} \label{pre} To make the present paper self-contained below we briefly recall some of the required notions and results obtained in~\cite{Ho3}. Given a field ${\mathcal F}$ of characteristic zero, consider the Weyl algebra $W_1$ with coefficients in ${\mathcal F}$ spanned by two generators $Y$, $Z$ subject to the relation $[Z, Y] = 1$, where $[Z, Y]:=ZY - YZ$ is the standard commutator. Let us introduce some subalgebras of~$W_1$. Set $H = YZ$. Fixing a nonzero polyno\-mial~$R(H)$, put $G = R(H)Z$. The first subalgebra~${\mathcal B}_1\subset W_1$ is, by definition, generated by the elements~$H$,~$G$,~$Y$. These elements satisfy the following relations: \begin{gather*} [H, Y]= Y, \qquad [H, G] = - G, \qquad [G, Y]= R(H)(H+ 1) - H R(H-1). \end{gather*} For any polynomial $q(G)$ without a constant term\footnote{The constant term contributes only to multiplication of the polynomials, defined below in \eqref{poly}, by a constant. On the other hand, if it vanishes, the formulas and the arguments become simpler.}, define the automorphism of ${\mathcal B}_1$ given by the operator $\sigma_q = e^{\operatorname{ad}_{q(G)}}$. The images of the generators of ${\mathcal B}_1$ under the automorphism $\sigma_q$ are given below. \begin{Lemma}\label{lemma-aut} In the above notation, \begin{gather*} \sigma_q(G) = G,\qquad \sigma_q(H) = H + q'(G)G, \qquad \sigma_q Y = Y + \sum\limits_{j=0}^{ld+l-1}\gamma_j(H)G^j. \end{gather*} for some polynomials $\gamma_j(H)$. \end{Lemma} To move further, we need to introduce an auxiliary algebra ${\mathcal R}_2$ over ${\mathcal F}$ defined by the gene\-ra\-tors~$T$, $T^{-1}$, $\hat{n}$ subject to the relations: \begin{gather*} T\cdot T^{-1} = T^{-1} \cdot T=1, \qquad [T, \hat{n}] = T, \qquad \big[T^{-1}, \hat{n}\big] = -T^{-1}. \end{gather*} One can easily check that $\big[T, \hat{n} T^{-1}\big] = 1$ which implies that the operators~$T$, $\hat{n} T^{-1}$ determine a~realization of the Weyl algebra~$W_1$. We can now introduce another non-commutative algebra ${\mathcal B}_2$ as follows. First we define an anti-homomorphism~$b$, i.e., a map $b\colon {\mathcal B}_1 \to {\mathcal R}_2$ satisfying $b(m_1 \cdot m_2) = b(m_2) \cdot b(m_1)$, for each $m_1, m_2 \in {\mathcal B}_1$ given by \begin{gather*} b(Y)= T,\qquad b(H) = \hat{n},\qquad b(G) = \hat{n} T^{-1}R(\hat{n}). \end{gather*} By definition, the algebra ${\mathcal B}_2$ is the image $ b({\mathcal B}_1)$. With this definition, $b\colon {\mathcal B}_1 \to {\mathcal B}_2$ is an anti-isomorphism and, in particular, $b^{-1} \colon {\mathcal B}_2 \to {\mathcal B}_1$ is well defined. Observe that we can represent the algebra ${\mathcal B}_1$ on the space ${\mathbb C}[x]$ of polynomials in one variable by realizing $Y$ as the operator~$x$ of multiplication by~$x$, and $Z$ as the operator of differentia\-tion~$\partial_x$. Consider the polynomial system $\psi(x, n) = x^n$. Then the action of the operators $H$, $G$ on ${\mathbb C}[x]$ is given by $H \to x\partial_x$, $G\to R(x\partial_x) \partial_x$. In the same way, we can represent in ${\mathbb C}[x]$ the algebra~${\mathcal B}_2$ by realizing~$T$ and~$T^{-1}$ as the shift operators acting on a function $f(n)$ as $T^{\pm}f(n) = f(n\pm 1)$. Finally, $\hat{n}$ acts on ${\mathbb C}[x]$ by multiplication by the number~$n$. 
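For orientation, here is a small illustration of the last of these relations (this special case is not needed in the sequel): in the Laguerre-type case $R(H) = H+\alpha+1$, so that $G = (x\partial_x +\alpha +1)\partial_x$ in the realization above, one finds
\begin{gather*}
[G, Y]= R(H)(H+ 1) - H R(H-1) = (H+\alpha+1)(H+1) - H(H+\alpha) = 2H +\alpha +1,
\end{gather*}
which is easily checked on monomials: $[G,x]\,x^n = (n+1)R(n)x^n - nR(n-1)x^n = (2n+\alpha+1)x^n$.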
Using this notation, we get \begin{Lemma}\label{bisp} \begin{gather*} G \psi(x,n) = \hat{n}R(\hat{n}-1) T^{-1} \psi(x,n),\qquad H \psi(x,n) = \hat{n} \psi(x,n),\qquad x \psi(x,n) = T \psi(x,n). \end{gather*} \end{Lemma} Furthermore, using the operator $q(G)$, we can define another polynomial system $\{P^q_n(x)\}$ as \begin{gather} \label{poly} P_n^q(x) = e^{q(G)}\psi(x,n)= \sum_{j=0}^{\infty}\frac{q(G)^j\psi(x,n)}{j!}. \end{gather} Notice that, in fact, the above series is always finite since the operator $q(G)$ reduces the degree of any polynomial it is applied to. Denote by $L$ the operator $\sigma_q(H)$ and put $d=\deg R$, $l=\deg q$. \begin{Theorem} \label{any} In the above notation, the polynomials $P_n^q(x)$ have the following properties: \begin{enumerate}\itemsep=0pt \item[$(i)$] They are the eigenfunctions of the differential operator \begin{gather*} L := q'(G)G + x\partial \end{gather*} with the eigenvalues $\lambda(n) =n$. \item[$(ii)$] They satisfy a recurrence relation of the form \begin{gather*} xP^q_n(x) =P^q_{n+1}(x) + \sum_{j=0}^{ld +l -1}\gamma_j(n)P^q_{n-j}(x). \end{gather*} \item[$(iii)$] They possess the Hahn's property, i.e., their derivatives are of the same class with a new $ \hat{G} = R(H+1)\partial_x$. \end{enumerate} \end{Theorem} \begin{proof} Proof of (i) and (ii) can be found in~\cite{Ho3}. The proof of (iii) is simple and goes as follows. We have \begin{gather*} \partial_x P_n(x) = \sum_{j=0}^{\infty} \frac{\partial_xq(G)^j x^n}{j!}. \end{gather*} Notice that $\partial_x G =\partial_x R(H)\partial_x = R(H+1)\partial_x^2$. Hence $\partial_x q(G)^j x^n = n\, q(\hat{G})^j x^{n-1}$. This shows that the system $Q_{n-1}(x) = P^{'}_n(x)/n$, $n=1, 2, \ldots$, is the system corresponding to the opera\-tor~$q(\hat{G})$. \end{proof} Notice that, similarly to the above, we have earlier realized the abstract construction of ${\mathcal B}_1$ but in terms of difference operators instead of differential ones, acting again on the space of polynomials~${\mathbb C}[x]$~\cite{Ho3}. Then we define the operators $\tau_{\pm}$ acting on $f \in {\mathbb C}[x]$ by the shift of the argument $\tau_{\pm 1}f(x) = f(x\pm 1)$. The operator~$x$ acts on~$f(x)$ as multiplication by~$x$. We also need the notation \begin{gather*} \Delta = \tau_{+1} -1,\qquad \nabla = \tau_{-1} - 1, \qquad H = - x\nabla.\end{gather*} Finally, put $g = x - H$. For this realization we use the following polynomial system $\psi(x, n) = (-1)^n(-x)_n = x(x-1)\cdots(x-n+1)$. Here we use the Pochhammer symbol \begin{gather} \label{Pocch} (a)_j = a(a+1)\cdots (a+j-1), \qquad (a)_0 =1. \end{gather} \begin{Lemma}\label{dis-bi} The following identities hold: \begin{gather*} H\psi(x,n) = n\psi(x,n), \qquad g\psi(x,n) = T \psi(x,n), \qquad \Delta\psi(x,n) = n T^{-1}\psi(x,n). \end{gather*} \end{Lemma} Exactly as in the continuous case we define the operator $G = R(H)\Delta$. Let $q(G)$ be a~poly\-nomial without a constant term. Then we can define the automorphism $\sigma_q = e^{\operatorname{ad}_{q(G)}}$. Also introduce the new polynomial system $\{P_n^q(x)\}$ given by \begin{gather*} P_n^q(x) = e^{q(G)}\psi(x,n). \end{gather*} In \cite{Ho3}, among other things, we proved the following properties of $\big\{P_n^q(x)\big\}$. \begin{Theorem}\label{any-dis} The polynomials $P_n^q(x)$ have the following properties: \begin{enumerate}\itemsep=0pt \item[$(i)$] They are eigenfunctions of the difference operator \begin{gather*} L := q'(G)G - x\nabla. 
\end{gather*} \item[$(ii)$] They satisfy a recurrence relation of the form \begin{gather*} xP^q_n(x) =P^q_{n+1} + \sum_{j=0}^{m}\gamma_j(n)P^q_{n-j}, \end{gather*} where $m=l$ for $d=0$ and $m=ld+l-1$ for $d>0$. \item[$(iii)$] They possess the Hahn's property, i.e., their polynomial system \begin{gather*} Q_n(x) \stackrel{def}{=} \frac{\Delta P_{n+1}}{n+1},\qquad n=0,1,\ldots, \end{gather*} corresponds to $q(\hat{G})$ with a new $ \hat{G} = R(H+1)\Delta$. \end{enumerate} \end{Theorem} \begin{proof} For (i) and (ii) see \cite{Ho3}. The proof of (iii) is similar to the one of Theorem~\ref{any}, using the relation $[\Delta, H] = \Delta$. \end{proof} \section{Hypergeometric representations} \label{hyp} In this section which is central for the present paper, we derive formulas for some of the families of $d$-orthogonal polynomials introduced in~\cite{Ho3} and Section~\ref{pre} in terms of the generalized hypergeometric functions. These families are defined via the operators $q(G) = \rho G^l$, $l \geq 1$. This property is very important and, in particular, it makes our polynomial systems similar to the classical orthogonal polynomials. Notice that the system of Laguerre polynomials $\big\{L^{(\alpha)}_n(x)\big\}$ corresponds to $G= (x \partial_x +\alpha +1) \partial_x$ and $l=1$, $\rho=1$. The system of Hermite polynomials corresponds to $\partial_x$ and $q(G) = -G^2/2$. Let us recall the definition of generalized hypergeometric series, cf.~\cite{NIST}. Given a pair of nonnegative integers $(p, q)$, let $\alpha_1, \ldots, \alpha_p$ and $\beta_1, \ldots, \beta_q$ be complex constants and let~$x$ be a~complex variable. (The parameters $\beta_i$ are assumed to be different from non-positive integers.) The series \begin{gather} \label{def-hyp} \pFq{p}{q}{\alpha_1, \ldots, \alpha_p}{\beta_1, \ldots, \beta_q}{x} \stackrel{\rm def}{:=} \sum\limits_{j=0}^{\infty}\frac{(\alpha_1)_j\cdots (\alpha_p)_j}{(\beta_1)_j\cdots(\beta_q)_j}\frac{x^j}{j!}, \end{gather} where $(\alpha)_n$ is the Pochhammer symbol~\eqref{Pocch} (also the symbol of the rising factorial), is called a~{\it generalized hypergeometric series}. When it is convergent in some open set its analytic continuation is called generalized hypergeometric function. This function satisfies the differential equation \begin{gather*} \left\{\partial_x \prod_{j=1}^{q}( x\partial_x + \beta_j -1) - \prod_{k=1}^{p}( x\partial_x + \alpha_k) \right\} \pFq{p}{q}{\alpha_1, \ldots, \alpha_p}{\beta_1, \ldots, \beta_q}{x} = 0. \end{gather*} We have to point out that the above function does not always exist. However, when one of the parameters $\alpha_i$ is a non-positive integer, the series terminates and the function becomes a~polynomial. In this paper we mainly deal with generalized hypergeometric polynomials. In the remaining cases the series will be obviously convergent, due to the fact that $p<q$ in the gene\-ra\-ting functions and in the Mehler--Heine asymptotic formulas. (For more details consult~\cite{NIST}.) From now on we will omit the word ``generalized'' since there will be no danger of confusion. Notice however that the expression ``the hypergeometric function'' usually refers to $\pFqS{2}{1}{a,b}{c}{x}$. \subsection{Notation}\label{section3.1} In what follows we will introduce some notation that will be used to formulate and prove the results. First we will use the shorthand notation \begin{gather*} \pFq{p}{q}{(\alpha_p)}{(\beta_q)}{x} \end{gather*} for the hypergeometric series \eqref{def-hyp}. 
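For instance, in this notation the generalized Laguerre polynomial is the terminating series
\begin{gather*}
L_n^{(\alpha)}(x) = \frac{(\alpha+1)_n}{n!}\, \pFq{1}{1}{-n}{\alpha+1}{x},
\end{gather*}
a standard special case (see, e.g.,~\cite{NIST}) that will reappear several times below: the upper parameter $-n$ makes the series terminate after $n+1$ terms.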
Let us introduce $\alpha_{d+1} = 0$ in an effort to make the notation less clumsy. By $(\alpha_{d+1})$ we denote the vector of parameters $( \alpha_1, \ldots,\alpha_d, \alpha_{d+1})$. Recall that the notation $\Delta(l; \lambda)$ abbreviates the vector \begin{gather*} \left( \frac{\lambda}{l}, \frac{\lambda +1}{l}, \ldots , \frac{\lambda +l -1}{l}\right),\qquad l \in {\mathbb N}, \end{gather*} of $l$ parameters. We will also combine the latter symbol with $( \alpha_{d+1})$ to write $(\Delta(l; \alpha_{d+1}))$ for \begin{gather*} \frac{\alpha_1}{l}, \ldots , \frac{\alpha_1 +l - 1}{l}, \ldots , \frac{\alpha_{d+1}}{l}, \ldots, \frac{\alpha_{d+1} +l - 1}{l}, \end{gather*} in the expressions for the hypergeometric functions. We assume that $\rho \neq 0$. Let us put $\eta = l^{d+1}\rho$. Let us fix $i\in \{0, 1, \ldots, l-1\}$ and consider the polynomials with $n = ml + i$, $ m = 0, 1, \ldots$. Then the parameters $\Delta(l; -n-\alpha_{d+1})$ can be presented as follows \begin{gather*} \Delta(l; -n-\alpha_{d+1}) = \left(-m - \frac{i + \alpha_1}{l}, -m - \frac{i-1 + \alpha_1}{l}, \ldots, -m - \frac{i + \alpha_1 - l+1}{l},\ldots,\right.\\ \left. \hphantom{\Delta(l; -n-\alpha_{d+1}) =}{} -m - \frac{i + \alpha_{d+1}}{l}, -m - \frac{i-1 + \alpha_{d+1}}{l}, \ldots, -m - \frac{i + \alpha_{d+1} - l+1}{l} \right). \end{gather*} Let us introduce the sets \begin{gather*} S_k(i) = \left\{ \frac{\alpha_k +i-r}{l} +1, \ r= 0,\ldots, l-1\right\}, \qquad k=1, \ldots, d+1 \end{gather*} and $ S(i) = \cup_{k=1}^{d+1} S_k(i) $. By $\hat{S}(i)$ we denote the set $S(i)\setminus\{1\}$, where $1$ is the element corresponding to $k=d+1$, $r=i$. The elements of $S (i)$ will be denoted by $S_{\beta}(i)$, $\beta= 1,\ldots, (d+1)l$. By~$I$ we denote the set of indices~$\beta$ which correspond to~$\hat{S}(i)$. Notice that~$I$ depends on~$i$. We also introduce a notation for the product of Pochhammer symbols: \begin{gather*} [\alpha_p]_n \stackrel{\rm def}{=} \prod_{k=1}^{p}(\alpha_k)_n. \end{gather*} Most of the notation is taken from \cite{Is, KLS,LCS, SM}. \subsection{Continuous GGH polynomials} \label{c-hyp} Some of the polynomial systems of the previous section have well-known representations in terms of generalized hypergeometric functions~\cite{BCD2}, where the authors take the formulas as their definition. Below we will derive the representation from~\cite{BCD2} on the basis of the constructions from~\cite{Ho3}. Our proof will help us to find hypergeometric representations for the other systems studied here that are not treated elsewhere. The discrete $d$-orthogonal polynomials will be treated in the same manner. We start with the case when the automorphism $\sigma$ is defined by $G = R(H)\partial_x$ where $R(H)$ is a polynomial. Before that, let us factor the polynomial $R(H)$ into $R(H) = \rho\prod\limits_{j=1}^{d}(H +\alpha_j +1)$, $\rho \in{\mathbb C}$, $\rho \neq 0$. Here some of the numbers $\alpha_j$ can be equal. In what follows we will drop the dependence of the $d$-orthogonal polynomials on $R$. \begin{Theorem} \label{1hyp} The polynomials $P_n(x)$ have the following hypergeometric representations: \begin{gather} P_n(x) = x^n\, \pFq{d+1}{0}{-n ,(-n -\alpha_d)}{-}{(-1)^{d+1}\rho x^{-1}}, \label{hyp11}\\ P_n(x) = \rho^n [\alpha_d +1]_n\, \pFq{1}{d}{-n}{(\alpha_d +1)}{-\frac{x}{\rho}}. \label{hyp12} \end{gather} For the second formula the coefficients $\alpha_j$ have to be different from negative integers or zero. \end{Theorem} \begin{proof} Notice that \begin{gather*} G x^n = n R(H)x^{n-1} = nR(n-1)x^{n-1}. 
\end{gather*} By induction we find that for any $j \in {\mathbb N}$ \begin{gather*} G^jx^n = \prod_{s=0}^{j-1}(n-s)R(n -1 -s)x^{n-j}. \end{gather*} In terms of the Pochhammer symbol we can write the last formula as \begin{gather*} G^jx^n = (-1)^{j(d+1)}(-n)_j\rho^j[-n - \alpha_d]_jx^{n-j}. \end{gather*} The last formula allows to express the polynomials $P_n(x)$ as \begin{gather} P_n(x) = x^n \sum_{j=0}^n\frac{ (-1)^{j(d+1)}(-n)_j [-n -\alpha_d ]_j \rho^jx^{-j}}{j!}\nonumber\\ \hphantom{P_n(x)}{} = x^n \pFq{d+1}{0}{-n,(-n -\alpha_d)}{-}{(-1)^{d+1}\rho x^{-1}}.\label{2hyp} \end{gather} To obtain the second expression for the polynomials $P_n(x)$, we transform the coefficients in the first line of \eqref{2hyp} using the formula \begin{gather}\label{trans} (-n -\mu)_j = (-1)^j \frac{(\mu +1)_{n}}{(\mu +1)_{n-j}}. \end{gather} Substitute this expression for the parameter values $\mu = 0$, $\alpha_1, \ldots, \alpha_d$ into the sum~\eqref{2hyp}, defi\-ning~$P_n(x)$. As a result, we obtain \begin{gather*} P_n(x) =\rho^n [\alpha_d+1]_n\sum_{j=0}^n \frac{n!}{j!} \frac{ 1}{[\alpha_d +1]_{n-j}} \frac{ \rho^{-n+j} x^{n-j}}{(n-j)!}. \end{gather*} Write the expression $n!/j!$ as $(-1)^{n-j} (-n)_{n-j}$. After changing the summation index $j \rightarrow s = n-j$, we find \begin{gather*} P_n(x) = \rho^n [\alpha_d +1]_n \, \pFq{1}{d}{-n}{(\alpha_d +1)}{-\frac{x}{\rho}}. \end{gather*} Of course we need to impose the condition $\alpha_k \neq - 1, -2, \ldots$. \end{proof} \begin{Remark} \label{BCD} (1) This expression coincides with the corresponding one in~\cite{BCD2}. The form of the roots of $R$ was chosen for this purpose as well as to obtain the hypergeometric formula of the generalized Laguerre polynomials~$L^{(\alpha)}_n(x)$. (2) Formula~\eqref{2hyp} is valid for all values of the coefficients~$\alpha_j$. However if some of the coefficients~$\alpha_j$ is are negative integers, the polynomial system becomes $x^n$ for $n \geq -\alpha_j$ and therefore in such a case it forms a finite system of $d$-orthogonal polynomials. \end{Remark} Let us consider the polynomials obtained by the automorphisms $\sigma_q$, where $q(G)$ is some polynomial without constant term. We recall that they are given by \begin{gather*} P_n^q(x): = e^{q(G)}x^n = x^n + \sum_{k=1}^{\infty}\frac{q^k(G) x^n}{k!}. \end{gather*} I don't know if all these $d$-orthogonal polynomials have representations in terms of generalized hypergeometric functions or other special functions. However in the case when $q(G) = \rho G^l$ they have. We are going to present the formula. Let us fix $q(G) = \rho G^l$. Finally we will drop the dependence on~$q(G)$ as there is no danger of confusion. \begin{Theorem}\label{ch} For $q(G) = \rho G^l$ the polynomials $P_n(x)$ have the following representation in hypergeometric functions: \begin{gather}\label{Gl} P_n(x) = x^n\, \pFq{dl+l}{0}{ (\Delta(l; -n-\alpha_d))} {-}{\left(\frac{(-1)^{d+1}\eta}{x}\right)^l}. \end{gather} \end{Theorem} \begin{proof} Again we use the formula for $P_n(x)$: \begin{gather*} P_n(x) = \sum_{j=0}^{\infty} \frac{G^{lj} x^n }{j!}. \end{gather*} We know that \begin{gather*} G^{lj} x^n = \prod_{s=0}^{lj-1}(n-s)R(n -1 -s)x^{n-lj}. \end{gather*} Using that $\alpha_{d+1} =0$ we present the coefficient at $x^{n-lj}$ of $G^{lj}x^n$ in the form \begin{gather} \label{Glj} \eta^{lj} (-1)^{lj(d+1)} \prod_{r= 0}^{l-1} \left[\frac{-n-\alpha_{d+1}+r}{l}\right]_j. 
\end{gather} Then the polynomials $P_n(x)$ are given by \begin{gather*} P_n(x) = x^n + \sum_{j=1}^n\frac{(G^l)^j x^n}{j!} = x^n \sum_{j=0}^n\eta^{lj} (-1)^{lj(d+1)} \prod_{r= 0}^{l-1} \left[\frac{-n-\alpha_{d+1}+r}{l}\right]_j\frac{x^{-lj}}{j!} \nonumber\\ \hphantom{P_n(x)}{} = x^n \sum_{j=0}^{n} \prod_{r=0}^{l-1} \left[\frac{-n-\alpha_{d+1}+r}{l}\right]_j \left(\frac{(-1)^{d+1}\eta}{x}\right)^{lj}\frac{1}{j!}. \end{gather*} This proves \eqref{Gl}. \end{proof} \begin{Remark}\label{her}The formula for the $d$-orthogonal polynomials, corresponding to $q(G)= \rho G^l$, in the above theorem resembles the representation of Hermite polynomials in hypergeometric functions, see, e.g.,~\cite{NIST}. Notice that we don't need to sum up to~$n$ but only up to $\big\lceil{\frac{n}{l}}\big\rceil $ as the next terms are~0. \end{Remark} As in the case $l=1$ we are going to find a second representation. \begin{Theorem}\label{th-new-h}The polynomials $P_n(x)$, where $n= ml +i$, have the following hypergeometric representation \begin{gather} \label{new-h} P_n(x) =\eta^{ml} x^i \prod_{\beta \in I}\big(S_{\beta}(i)\big)_m \,\pFq{1}{ld +l-1}{-m}{\big(\hat{S}(i)\big)}{- \left[\frac{x}{\eta}\right]^l}. \end{gather} \end{Theorem} \begin{proof} Notice that \eqref{Gl} can be written in the form \begin{gather*} P_n(x) = x^{ml +i} \,\pFq{dl+l}{0}{ (-S(i) +1 -m)} {-}{\left(\frac{(-1)^{d+1}\eta}{x}\right)^l}. \end{gather*} Let us put $ y = \big(\frac{x}{\eta}\big)^{l} $ and consider the polynomials $P_m^{(i)}(y)$ defined by $P_n(x) = x^i \eta^{ml}P_m^{(i)}(y)$. We use that one of the elements of the set $S(i) $ is~$1$, which shows that they have the representation \begin{gather*} P_m^{(i)}(y) = y^{m} \,\pFq{dl+l}{0}{-m, (-\hat{S}(i) +1 -m)} {-}{\frac{(-1)^{ld+l}}{y}}. \end{gather*} This is exactly in the form of~\eqref{hyp11}. By the second representation \eqref{hyp12} of Theorem~\ref{1hyp}, this means that they can be written as \begin{gather*} P_m^{(i)}(y)= \prod_{\beta \in I}(S_{\beta}(i))_m \,\pFq{1}{ld +l-1}{-m}{\big(\hat{S}(i)\big)}{- y}. \end{gather*} Returning to the polynomials $P_n(x)$ we obtain the formula~\eqref{new-h}. \end{proof} \begin{Corollary}\label{new-hypF2} The polynomials $P_n(x)$ have also the representation \begin{gather*} P_n(x)= \eta^{ml} \prod_{\beta \in I }(S_{\beta}(i))_m \,\pFq{2}{ld +l}{-m, 1}{(S(i))}{- \left[\frac{x}{\eta}\right]^l}. \end{gather*} \end{Corollary} \begin{proof} Use that $S(i) = \hat{S}(i) \cup \{1\}$. Then the general formula \begin{gather*} \pFq{p}{q}{(\alpha_p)}{(\beta_q)}{x} =\pFq{p+1}{q+1}{(\alpha_p), 1}{(\beta_q), 1}{x} \end{gather*} applied to \eqref{new-h} gives the result. \end{proof} We see that the GGH polynomial system $P_n(x)$ can be split into~$l$ families of $d$-orthogonal polynomials with Bochner's property exactly as Hermite polynomials split into~2 families in terms of Laguerre polynomials. More precisely \begin{Corollary}\label{lag-typ} The polynomial system $P_n(x)$ consists of $l$ families $P_n(x) = C_i x^i P_m^{(i)}(y)$, $i=0, \ldots, l-1$, where \begin{gather*} y = \left(\frac{x}{\eta}\right)^{l}, \qquad C_i = \eta^{ml} \prod_{\beta \in I} (S_{\beta}(i) )_m. \end{gather*} The family $P_m^{(i)}(x)$ is the system of $d$-orthogonal polynomials with Bochner's property corresponding to $G= \prod\limits_{\beta \in I}(H +S_{\beta})\partial$. \end{Corollary} While the above statement does not need a proof, it is worth pointing out that it is a precise formulation of Remark~7.4 in~\cite{Ho3}. 
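As a quick consistency check of this splitting (the simplest possible case, included only as an illustration), take $R(H)=1$, i.e., $G=\partial_x$, and $q(G)=G^2$, so that $d=0$, $l=2$ and $\eta =2$. Then $\hat{S}(0)=\big\{\tfrac12\big\}$, $\hat{S}(1)=\big\{\tfrac32\big\}$, and \eqref{new-h} gives
\begin{gather*}
P_{2m}(x) = 4^m\big(\tfrac12\big)_m\, \pFq{1}{1}{-m}{\tfrac12}{-\tfrac{x^2}{4}},\qquad
P_{2m+1}(x) = 4^m\big(\tfrac32\big)_m\, x\, \pFq{1}{1}{-m}{\tfrac32}{-\tfrac{x^2}{4}},
\end{gather*}
i.e., two Laguerre-type families with parameters $\mp\tfrac12$, in agreement with the Hermite--Laguerre splitting recalled in the introduction (here $q(G)=+G^2$, which accounts for the rescaled argument). One checks directly that both expressions agree with $e^{\partial_x^2}x^n = \sum_{j}\frac{n!}{(n-2j)!\,j!}\,x^{n-2j}$ for $n=2m$ and $n=2m+1$, respectively.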
In \cite[Example~7.3]{Ho3} we demonstrated that such a splitting of the family exists for the simplest case of $q(G) = G^2$, where $G = (x\partial_x + \alpha +1)\partial_x$ without using hypergeometric functions. \subsection{Discrete GGH Polynomials} The discrete polynomial systems also have representation in terms of generalized hypergeometric functions. Below we will derive them using the approach that was exploited for continuous $d$-orthogonal polynomials. These representations are new except for the cases of Charlier and Meixner polynomials. We again factor the polynomial $R(H)$ into $R(H) = \rho\prod\limits_{j=1}^{m}(H+\alpha_j +1)$. \begin{Theorem}\label{hyp-d}The discrete $d$-orthogonal polynomials $P_n(x)$ have the following hypergeometric representations: \begin{gather} P_n(x) = (-1)^n(-x)_n \,\pFq{d+1}{1}{-n , (-n -\alpha_d )}{ x-n+1}{(-1)^{d+1}\rho },\label{2for} \end{gather} and \begin{gather} P_n(x) = [\alpha_d+1]_n \,\pFq{2}{d}{-n, -x }{ (\alpha_d +1)}{\rho }.\label{4hyp} \end{gather} \end{Theorem} \begin{proof} First we transform the formula for $P_n(x)$ to get rid of the difference operators. We are going to exploit again the formula \begin{gather*} P_n(x)= \sum\limits_{j=0}^{\infty}\frac{G^j\psi(x,n)}{j!} \qquad \text{with} \quad \psi(x,n) = (-1)^n (-x)_n. \end{gather*} Using that \begin{gather*} G \psi(x,n) = n R(n-1)\psi(x, n-1), \end{gather*} we obtain \begin{gather*} G^j \psi(x,n) = \rho^j (-1)^{n-j}(-x)_{n-j} (-1)^j (-n)_{j} (-1)^j [-\alpha_d - n]_{j}. \end{gather*} Let us use the formula \begin{gather} \label{trans1} (-1)^{n-j}(-x)_{n-j} = \frac{(-x)_n (-1)^n}{(x-n +1)_j}. \end{gather} We obtain that \begin{gather*} G^j \psi(x,n) = (-1)^n(-x)_n \frac{ \rho^j (-1)^{dj +j} (-n)_{j} [-\alpha_d - n]_{j}}{(x-n +1)_j}. \end{gather*} For the polynomials $P_n(x)$ this gives the expression \begin{gather*} P_n(x)= (-1)^n (-x)_n \sum_{j=0}^n\frac{\rho^j (-1)^{(d+1)j} (-n)_j [-\alpha_d - n]_{j}}{(x-n +1)_j}{(x-n+1)_j j!}, \end{gather*} which is \eqref{2for}. The second hypergeometric expression can be obtained as in the continuous case. Using again~\eqref{trans} we find \begin{gather*} P_n(x)= \prod_{k=1}^m (\alpha_k+1)_n \sum_{j=0}^n \frac{n! }{(n-j)!} \frac{(-1)^{n-j}\rho^j(-x)_{n-j}}{j! [-\alpha_d - n]_{n-j} }{(x-n +1)_j}. \end{gather*} After standard manipulations that we used for the continuous $d$-orthogonal polynomials we come to the second formula~\eqref{4hyp}. \end{proof} We can obtain hypergeometric representations for the class of discrete $d$-orthogonal polynomials, constructed via $q(G)$, where $G = R(H)\Delta$. Again we will treat the case when $q(G) = \rho G^l$. Let us put \begin{gather*} \eta_1 = \frac{(-1)^{(d+1)l}\eta}{l} .\end{gather*} \begin{Theorem}\label{dh} In the case when $q(G) = \rho G^l$ the polynomials $P_n(x)$ have the following representation in hypergeometric functions \begin{gather} \label{hyp-d1} P_n(x) = (-1)^n(-x)_n \,\pFq{dl+l}{l}{\Delta(l; -n), (\Delta(l; -n-\alpha))} {\Delta(l; x-n +1)}{ \eta_1^l}. \end{gather} \end{Theorem} \begin{proof} The proof needs a few changes in comparison with the continuous case but otherwise is straightforward. We are going to use again the series defining the polynomials \begin{gather*} P_n(x) = \sum_{j=0}^{\infty}\frac{G^{jl} \psi(x,n)}{j!}. \end{gather*} A formula, similar to the one in the continuous case holds \begin{gather*} G^{lj} (-1)^n(-x)_n = \prod_{s=0}^{lj-1} (n-s)R(n- 1 -s)(-x)_{n-lj}(-1)^{n-lj}. 
\end{gather*} For the term $(-x)_{n-lj}$ we use \eqref{trans1} with $lj$ instead of $j$ \begin{gather*} (-1)^{n-lj} (-x)_{n-lj} = \frac{(-1)^{n}(-x)_n}{(x-n +1)_{lj}}. \end{gather*} The factor $(x-n +1)_{lj}$ can be presented as indicated in Section~\ref{c-hyp} \begin{gather*} (x-n +1)_{lj} = l^{lj}\prod_{r=0}^{l-1}\left(\frac{x-n +r +1}{l}\right)_{j}. \end{gather*} This gives \begin{gather} \label{n-lj} (-1)^{n-lj}(-x)_{n-lj} = \frac{(-1)^n(-x)_n}{ l^{lj}\prod\limits_{r=0}^{l-1}\left(\frac{x-n +r +1}{l}\right)_j}. \end{gather} Using \eqref{Glj} for $\prod\limits_{s=0}^{lj-1} (n-s)R(n- 1-s) $ and expressing the last factor by~\eqref{n-lj} in the sum for the polynomials we obtain the desired formula~\eqref{hyp-d1}. \end{proof} Again we can find a second formula for the $d$-orthogonal polynomials, corresponding to \smash{$q(G)= \rho G^l$}. We use the notations~\eqref{c-hyp} from Section~\ref{section3.1}. With this notation we have \begin{Theorem} \label{new-dh} The polynomials $P_n(x)$, where $n= ml +i$ have the following hypergeometric representation \begin{gather} \label{new-d} P_{ml +i}(x) = (-1)^{mdl +1} C(i) (-x)_i\prod_{k=1}^{ld +l-1} \,\pFq{l}{ld +l -1}{-m, \Delta(l; -x+i -1)}{ \big(\hat{S}(i)\big)}{-\eta_1^{-l}}, \end{gather} where $C(i)$ was defined in Corollary~{\rm \ref{lag-typ}}. \end{Theorem} \begin{proof} We start with the formula \begin{gather*} P_n(x) = \sum_{j=0}^{\infty} \frac{\prod\limits_{s=0}^{lj-1} (n-s)R(n- 1 -s) (-1)^{n-lj}(-x)_{n-lj}}{j!} . \end{gather*} We transform the expression \begin{gather*} \prod_{s=0}^{lj-1} (n-s)R(n-1-s) \end{gather*} using \eqref{Glj} to obtain \begin{gather*} (-1)^{lj}\eta^{lj} (-m)_j\prod_{\beta \in I} (-m -S_{\beta}(i))_j. \end{gather*} We further transform the last expression using \eqref{trans} into \begin{gather} \label{prod} (-1)^{l(d+1)j}\eta^{lj}\frac{ (-1)^j m!}{(m-j)!}\frac{\prod\limits_{\beta \in I} (S_{\beta}(i))_m}{\prod\limits_{\beta \in I} (S_{\beta}(i))_{m-j}}. \end{gather} Next we transform the factor $(-x)_{n-lj}$ as follows. First \begin{gather*} (-1)^{n-jl}(-x)_{n-lj} = (-1)^{i} (-x)_i (-1)^{ml - jl}(-x+i)_{lm-lj}. \end{gather*} Then we present the last factor in the right-hand side of the last formula as \begin{gather*} (-x+i)_{lm-lj} = l^{(m-j)l}\prod_{r=0}^{l-1}\left(\frac{-x+i -r}{l}\right)_{m-j}. \end{gather*} At the end plugging the last formula and \eqref{prod} into the sum for $P_n(x)$ and changing the summation index $j \to m-j$ we obtain \begin{gather*} P_{ml +i} = C(i) (-x)_i \sum_{j=0}^{m} \frac{(-m)_j \prod\limits_{r=0}^{l-1}\left(\frac{-x +i -r}{l}\right)_j}{\prod\limits_{\beta \in I} (S_{\beta})_j} \frac{\big[{-}\eta_1^{-l}\big]^j}{j!}, \end{gather*} which is \eqref{new-d}. \end{proof} \section{Generating functions} \label{genf} In this section we will find generating functions for all $d$-orthogonal polynomials for which we have earlier obtained hypergeometric representations. Generating functions for the continuous $d$-orthogonal polynomials, corresponding to $q(G) = G$, can be found in~\cite{BCD2}. \subsection{Continuous GGH Polynomials} \label{contGH} In what follows we use a different normalization of the $d$-orthogonal polynomials. Namely, we introduce the polynomials \begin{gather} \label{pure} Q_n(x) := x^{i} \,\pFq{1}{ld +l-1}{-m}{\big(\hat{S}(i)\big)}{\left[\frac{x}{(-1)^{d+1}\eta\rho}\right]^l}. \end{gather} They differ from the polynomials $P_n(x)$, given by~\eqref{new-h} by a multiplicative constant. 
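For orientation we record the simplest case: when $l=1$ one has $\eta = \rho$, $i=0$, $m=n$ and $\hat{S}(0)=\{\alpha_1+1,\ldots ,\alpha_d+1\}$, so that \eqref{new-h} reduces to the representation \eqref{hyp12}, and, with the normalization adopted below, $Q_n(x)$ takes the form $\pFq{1}{d}{-n}{(\alpha_d +1)}{-x}$ which reappears in Corollary~\ref{l=1} and in Theorem~\ref{1MH}.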
We also assume that $((-1)^{d+1}\eta)^l = -1$ which can be achieved by rescaling of $x$ together with multiplication by a suitable constant. Our first result is as follows. \begin{Theorem}\label{gen-c} For a given positive integer $i$, the function $\Phi_i(x,t)$ defined as \begin{gather} \label{gen2-c} \Phi_i(x, t) := (tx)^i e^{t^l}\, \pFq{0}{ld+l-1}{-}{\big(\hat{S}(i)\big) }{(xt)^l}, \end{gather} generates the polynomial system $\{Q_{lm +i}(x)\}_{m=0}^\infty$ by means of the formula \begin{gather*} \Phi_i(x, t)=\sum\limits_{m=0}^{\infty} \frac{ t^{ml +i}}{m!} Q_{ml +i}(x). \end{gather*} \end{Theorem} \begin{proof} Consider the series \begin{gather*} \sum\limits_{m=0}^{\infty} \frac{ t^{ml +i}}{m!} Q_{ml +i}(x)= t^ix^i\sum\limits_{m=0}^{\infty} \frac{ t^{ml}}{m!} \,\pFq{1}{ld +l-1}{-m}{\big(\hat{S}(i)\big) }{-x^l}. \end{gather*} One can present the sum in the right-hand side of the latter equation in the form of a double series \begin{gather*} \sum\limits_{m=0}^{\infty} \frac{ t^{ml +i}}{m!} Q_{ml +i}(x) = \sum\limits_{m=0}^{\infty} \frac{ t^{ml}}{m!} \sum\limits_{j=0}^{m} \frac{ (-m)_j }{\prod\limits_{\beta \in I} (S_{\beta}(i))_j} \frac{\big({-}x^l\big)^j}{j!}. \end{gather*} Changing the order of summation in the double series, we obtain \begin{gather*} \sum\limits_{j=0}^{\infty} \frac{ \big({-}x^l\big)^j }{\prod\limits_{\beta \in I} (S_{\beta}(i))_j j!} \sum\limits_{m=j}^{\infty} \frac{(-m)_j t^{ml}}{m!}. \end{gather*} It is easy to see that \begin{gather*} \frac{(-m)_j} {m!} = \frac{(-1)^j}{(m-j)!}. \end{gather*} Hence after introducing a new index $s = m-j$ of summation, the double series becomes \begin{gather*} \sum\limits_{j=0}^{\infty} \frac{ (xt)^{lj} }{\prod\limits_{\beta \in I}(S_{ \beta}(i))_j j!} \sum\limits_{s=0}^{\infty} \frac{t^{sl}}{s!}. \end{gather*} This gives for the double sum \begin{gather*} \pFq{0}{ld+l-1}{-}{ \big(\hat{S}(i)\big) }{(xt)^l} e^{t^l}, \end{gather*} which implies \eqref{gen2-c} for the function $\Phi_i(x,t)$. \end{proof} \begin{Remark}\label{GBess} We notice that the hypergeometric functions without upper parameters appear in a quite different bispectral problem. In \cite{BHY2} we defined the functions $\Psi_{\beta}(x,z)$, which are solutions of an equation of the form \begin{gather*} x^{-N} (\theta - \beta_1)\cdots (\theta - \beta_N) \Psi_{\beta}(x,z) = z^N\Psi_{\beta}(x,z), \end{gather*} where $\theta = x\partial_x$ and $\beta_j \in {\mathbb C}$. We called them generalized Bessel functions. As one of the referees kindly informed me these functions have been studied by P.~Delerue in~\cite{Del}, and are called hyper-Bessel functions. They are expressed in terms of the hypergeometric functions without upper parameters (in~\cite{BHY2} we used Meijer's $G$-functions). Through these functions we were able to find non-trivial bispectral operators of any rank. It is interesting to understand if this is a~mere coincidence or the reasons are deeper. I hope that such a connection exists and in this case it would be useful in the studies of Darboux transformation of the generalized Gould--Hopper polynomials. Even in the case of $l=1$ (for which the same formula was found by different methods in~\cite{BCD2}, see also below Corollary~\ref{l=1}) the connection deserves attention. \end{Remark} From the above formulas we can write a generating function for the entire family $Q_n$. Let us define the function \begin{gather*} \Phi(x,t) = \sum\limits_{i=0}^{l-1} \Phi_i(x, t) = e^{t^l}\sum_{i=0}^{l-1} (xt)^i \cdot \pFq{0}{ld+l-1}{-}{\big(\hat{S}(i)\big)}{(xt)^l}. 
\end{gather*} \begin{Corollary}\label{gen-nc} The function $\Phi(x,t) $ is a generating function for the polynomials $Q_n(x)/ \lceil{n/l} \rceil !$: \begin{gather*} \sum\limits_{n=0}^{\infty} \frac{t^n}{ \lceil{n/l} \rceil !} Q_n(x) = \Phi(x,t). \end{gather*} \end{Corollary} The proof is obvious and we omit it. \begin{Remark}\label{Bren} Polynomial systems that have a generating function of the form \begin{gather*} \sum_{n=0}^{\infty} P_n(x) t^n = A(t)B(xt). \end{gather*} are called Brenke polynomials, see, e.g.,~\cite{BCBR1}. The last corollary shows that the GGH polynomials are Brenke polynomials. \end{Remark} It deserves to write separately the formula for the case $l=1$. \begin{Corollary}[\cite{BCD2}]\label{l=1} When $l=1$ we have \begin{gather*} \sum\limits_{n=0}^{\infty} \frac{t^n}{n!} Q_n(x) = \pFq{0}{d}{-}{(\alpha_d +1)}{xt} e^{t}. \end{gather*} \end{Corollary} We are going to obtain a second formula based on a theorem from \cite{ Sr1}. Let us formulate the corresponding result explicitly in a slightly less general form that suffices for our purposes. \begin{Proposition}\label{pr-sri} Let $a \in {\mathbb C}$, $-a \notin {\mathbb N}$ and let $\alpha_1, \ldots, \alpha_p$, $\beta_1, \ldots, \beta_q$ be complex numbers such that the hypergeometric function \begin{gather*} \pFq{p+1}{q +1}{-n, (\alpha_p)}{a +1, (\beta_q)}{x} \end{gather*} be well defined. Then the following formula holds \begin{gather} \sum\limits_{n=0}^{\infty} \binom{a +n}{n}\, \pFq{p+1}{q+1}{-n, (\alpha_p)}{a+1, (\beta_q)}{x}t^n = \frac{1}{(1-t)^{a+1}} \, \pFq{p}{q}{ (\alpha_p)}{(\beta_q)}{\frac{xt}{1-t}}. \label{sri} \end{gather} \end{Proposition} See \cite{Sr1} for a simple proof. We again use the $d$-orthogonal polynomials $Q_n(x)$ from \eqref{pure} as well as the convention $\eta = 1$. \begin{Theorem}\label{gen-c+} The function $G_i$ given by \begin{gather*} G_i(x, t) = \frac{(tx)^i}{1-t^l} \, \pFq{1}{ld+l-1}{1}{\big(\hat{S}(i)\big) }{\frac{(xt)^l}{1-t^l}}, \end{gather*} generates the polynomials $Q_{lm +i}(x)$, $m= 0, 1, \ldots$ in the following way \begin{gather*} G_i(x, t) = \sum\limits_{m=0}^{\infty} t^{ml +i} Q_{ml +i}(x). \end{gather*} \end{Theorem} \begin{proof} Let us multiply the polynomials $Q_{lm +i}$ by $t^{ml+i}$ and sum. Thus we obtain \begin{gather*} \sum\limits_{m=0}^{\infty} t^{ml +i} Q_{ml +i}(x)= t^ix^i\sum\limits_{m=0}^{\infty} t^{ml} \,\pFq{2}{ld +l}{-m, 1}{ \big(\hat{S}(i) \big),1}{x^l}. \end{gather*} Here we have used that \begin{gather*} \pFq{p}{q}{ (\alpha_p)}{(\beta_q)}{x} = \pFq{p+1}{q+1}{ (\alpha_p), \mu}{(\beta_q) , \mu }{x}, \qquad \mu \neq 0. \end{gather*} We apply the above cited formula \eqref{sri} from~\cite{Sr1} with $a=0$ to obtain \begin{gather*} \sum\limits_{m=0}^{\infty} t^{ml+i} Q_{ml +i}(x) = \frac{(tx)^i}{1-t^l} \, \pFq{1}{ld+l-1}{1}{ \big(\hat{S}(i)\big)}{\frac{(xt)^l}{1-t^l}}.\tag*{\qed} \end{gather*}\renewcommand{\qed}{} \end{proof} From this theorem we can write a generating function for all polynomials $Q_n(x)$. \begin{Corollary}\label{ggen1} A generating function for $Q_n(x)$ is given by \begin{gather*} \sum\limits_{n=0}^{\infty} t^n Q_n(x) = \sum\limits_{i=0}^{l-1} \frac{(tx)^i}{1-t^l} \,\pFq{1}{ld+l-1}{1}{ \big(\hat{S}(i)\big)}{\frac{(xt)^l}{1-t^l}}. \end{gather*} \end{Corollary} \begin{proof} Just sum up the generating functions $\Phi_i(x,t)$ for $i = 0, \ldots, l-1$ and replace $n$ by $ml +i$. \end{proof} Notice that the coefficients $\hat{S}(i)$ depend on $i$, which makes it difficult to obtain a better formula in general. 
However when $l=1$ the above expression gives \begin{Corollary}\label{ggen2} The $d$-orthogonal polynomials defined in terms of $q(G) = \rho G$ have the following generating function \begin{gather*} \sum\limits_{n=0}^{\infty} t^n P_n(x) = \frac{1}{1-t} \,\pFq{1}{d}{1}{(\alpha_d +1)}{\frac{x t }{1-t}}. \end{gather*} \end{Corollary} \subsection{Discrete GGH polynomials} In the discrete case there is nothing special. We follow the arguments for the continuous case. However we will keep the coefficient $\rho$. We again put $n = ml +i$ and fix~$i$. We use the following modification of the polynomials \eqref{new-d} \begin{gather*} Q_n(x) = (-1)^i (-x)_i\, \pFq{1+l}{ld +l-1}{-m, \Delta(l; -x+i -1)}{ \big(\hat{S}(i)\big)}{\eta_1^l}, \end{gather*} which differ from $P_n(x)$ only by a multiplicative constant. \begin{Theorem} The function $\Phi_i$ given by \begin{gather*} \Phi_i(x, t) = (-1)^j (-x)_i t^i e^{t^l} \, \pFq{l}{ld +l-1}{ \Delta(l; -x+i)}{ \big(\hat{S}(i)\big)}{ -(\eta_1t)^l}. \end{gather*} is a generating function for the polynomials $Q_{lm +i}(y)$ in the sense that \begin{gather*} \sum\limits_{m=0}^{\infty} \frac{t^{ml +i}}{m!}Q_{ml +i}(x) = \Phi_i(x,t). \end{gather*} \end{Theorem} \begin{proof} We write the defining series as \begin{gather*} (-1)^i (-x)_i\sum\limits_{m=0}^{\infty} \frac{t^{ml +i}}{m!} \sum\limits_{j=0}^{m} \frac{(-m)_j ( \Delta(l; -x+i))_j \eta_1^{jl} }{\prod\limits_{\beta \in I} (S_{\beta}(i) )_jj!}. \end{gather*} In the right-hand side we make the following transformations. We first change the order of the summation and then introduce a new summation variable $m \to s = m-j$. We obtain \begin{gather*} (-1)^i(-x)_i t^i \sum\limits_{j=0}^{m} \frac{ ( \Delta(l; -x+i))_j \big[\eta_1^{l}]^j }{ \prod\limits_{\beta \in I} (S_{\beta}(i) )_jj!} \sum\limits_{s=0}^{\infty} \frac{(-s -j)_jt^{(s+j)l} }{(s+j)!}. \end{gather*} Notice that \begin{gather*} \frac{(-s -j)_j}{(s+j)!} = \frac{(-1)^j}{s!}. \end{gather*} This gives \begin{gather*} \Phi_i(x, t) =(-1)^i(-x)_i t^i \sum\limits_{j=0}^{m} \frac{ \eta_1^{lj} (-1)^j t^{lj }}{\prod\limits_{\beta \in I} (S_{\beta}(i) )_jj!} \sum\limits_{s=0}^{\infty} \frac{t^{sl} }{s!}, \end{gather*} which is exactly the desired formula. \end{proof} As in the continuous case we are going to derive a second formula. \begin{Theorem} The function $G_i$ given by \begin{gather*} G_i(x, t) = \frac{t^i(-1)^i(-x)_i}{1-t} \,\pFq{1 +l}{ld +l-1}{ \Delta(l; -x+i), 1}{\big(\hat{S}(i)\big)}{-\left[\frac{(-1)^{d}}{\rho}\right]^l} \end{gather*} generates the polynomials $Q_{lm +i}(y)$ in the sense that \begin{gather*} G_i(x, t) = \sum\limits_{m=0}^{\infty} t^mQ_{ml+ i}(x). \end{gather*} \end{Theorem} \begin{proof} We are going to use again Proposition~\ref{pr-sri}. It is obvious that we need to put $a=0$. We have \begin{gather*} \sum\limits_{m=0}^{\infty} t^mQ_{ml+ i}(x) = (-1)^i (-x)_i \sum\limits_{m=0}^{\infty} t^m \;\pFq{2+l}{ld +l}{-m, \Delta(l; -x+i -1), 1}{ \big(\hat{S}(i)\big), 1 }{\eta_1^l}. \end{gather*} Then~\eqref{sri} gives \begin{gather*} (-1)^i (-x)_i \sum\limits_{m=0}^{\infty} t^{ml+ i} \;\pFq{2+l}{ld +l}{-m, \Delta(l; -x+i -1), 1}{\big(\hat{S}(i)\big), 1}{\eta_1^l} \\ \qquad{} =\frac{(-1)^i(-x)_i t^i}{1- t^l} \,\pFq{1+l}{ld +l-1}{ \Delta(l; -x+i -1), 1}{ \big(\hat{S}(i)\big)}{\frac{\eta_1^l}{1-t^l}}.\tag*{\qed} \end{gather*}\renewcommand{\qed}{} \end{proof} As a trivial corollary again we can write a generating function for all polynomials $Q_n(x)$. 
\begin{Corollary}\label{dgen} A generating function for $Q_n(x)$ is given by \begin{gather*} \sum\limits_{n=0}^{\infty} t^n Q_n(x) = \sum\limits_{i=0}^{l-1} \frac{(-1)^i(-x)_i t^i}{1- t^l} \, \pFq{1+l}{ld +l-1}{\Delta(l; -x+i -1), 1}{\big(\hat{S}(i)\big) }{\frac{\eta_1^l}{1-t^l}}. \end{gather*} \end{Corollary} Finally for $l=1$ we get a better formula \begin{Corollary}\label{ggen3} The polynomials defined in terms of $q(G) = \rho G$ have the following generating function \begin{gather*} \sum\limits_{n=0}^{\infty} t^n Q_n(x) = \frac{1}{1-t} \,\pFq{2}{d}{ x, 1}{(\alpha_d+1)}{ \frac{t\rho}{1-t}}. \end{gather*} \end{Corollary} \section{Mehler--Heine type formulas} \label{MHA} In this section, we are going to consider only the continuous $d$-orthogonal polynomials and, without loss of generality, we will assume that $\rho l^{d+1} = 1$. (This assumption will make the formulas simpler.) There are many ways to write down the Mehler--Heine type formulas depending on the normalization of the polynomials of which we choose only the simplest one. Let us start with the case of continuous $d$-orthogonal polynomials corresponding to $q(G) = G$ and use the polynomials $Q_n(x)$ given by~\eqref{pure}. \begin{Theorem}\label{1MH} For the $d$-orthogonal polynomials obtained from $q(G) = G$, the following Mehler--Heine type formula holds \begin{gather} \lim\limits_{n\to \infty}Q_n(x/n) = \pFq{0}{d}{-}{(\alpha_d +1)}{ x}.\label{MH1} \end{gather} \end{Theorem} \begin{proof} Notice that in the case $l=1$, equation \eqref{sri} can be written as \begin{gather*} Q_n(x) = \pFq{1}{d}{-n}{(\alpha_d+1)}{-x}. \end{gather*} Then we use the formula \begin{gather*} \lim\limits_{\lambda \to \infty}\pFq{p +1 }{q}{(\alpha_p), a\lambda}{(\beta_q)}{ \frac{x}{\lambda}} = \pFq{p}{q}{(\alpha_p)}{(\beta_q)}{a x}, \end{gather*} with $\lambda = n$, $a =-1$, see, e.g., \cite[p.~5]{KLS}. This immediately gives~\eqref{MH1}. \end{proof} \begin{Remark}\label{MH} This theorem is proved in \cite{VAss}. We present it here as an illustration of the results of the present paper. Also the Mehler--Heine formula for general GGP follows from it. As one of the referees kindly pointed to me, the polynomials $Q_n(x)$ considered here all are Jensen polynomials. This means that they are associated to an entire function $\varphi(x) = \sum\limits_{n=0}^{\infty} \gamma_n \frac{x^n}{n!}$ in the following way \begin{gather*} Q_n(x) = \sum_{j=0}^{n} \binom{n}{j} \gamma_j x^j. \end{gather*} Also the function $e^t \varphi(xt)$ is their generating function: \begin{gather*} e^t \varphi(xt) = \sum_{n=0}^{\infty} Q_n(x) \frac{t^n}{n!}, \end{gather*} see, e.g., \cite{CC} for a contemporary reference to properties of Jensen polynomials that we refer to here. In our case \begin{gather*} \varphi(x) = \pFq{0}{d}{-}{(\alpha_d +1)}{ x}. \end{gather*} By the properties of the Jensen polynomials we have \begin{gather*} \lim\limits_{n \to \infty} Q_n(x/n) = \varphi(x) = \pFq{0}{d}{-}{(\alpha_d +1)}{ x} \end{gather*} locally uniformly as was found by Jensen himself in 1913. Observe that the hypergeometric function \begin{gather*} \varphi(x) = \pFq{0}{d}{-}{(\alpha_d+1)}{x} \end{gather*} appears once again naturally as in the formulas for the generating functions for the polynomial system $\{Q_n(x)\}$. In both cases this is connected to the fact that they are Jensen polynomials. \end{Remark} Now consider the polynomials obtained by the automorphisms $\sigma_q$, where $q(G)$ is some polynomial. 
We recall that they are given by \begin{gather*} P_n(x): = e^{q(G)}x^n = x^n + \sum_{k=1}^{\infty}\frac{q^k(G) x^n}{k!}. \end{gather*} We will restrict ourselves to the case $q(G)= G^l$ for which the corresponding hypergeometric representation was obtained in Section~\ref{hyp}. Presenting $n$ as $n = ml + i$ and using~\eqref{pure} for the polynomials~$Q_n(x)$, we get \begin{gather*} Q_n(x) = x^i \,\pFq{1}{ld +l-1}{-m}{ \big(\hat{S}(i)\big)}{\big[(-1)^{d+1}x\big]^l}. \end{gather*} \begin{Theorem}\label{2MH} For the $d$-orthogonal polynomials obtained via the automorphisms $\sigma_q$ with\linebreak \smash{$q(G) = G^l$}, the following Mehler--Heine type formula holds \begin{gather} \lim\limits_{m\to \infty} m^{i/l} Q_n\big(x/m^{1/l}\big) = x^i \, \pFq{0}{ld +l-1}{-}{\big(\hat{S}(i)\big)}{\big[(-1)^{d+1}x\big]^{l}}.\label{MH2} \end{gather} \end{Theorem} \begin{proof} We will use the hypergeometric representation~\eqref{new-h}. Consider the expression \linebreak $Q_n\big(x/m^{1/l}\big)$ which gives \begin{gather*} Q_{ml +i}\big(x/m^{1/l}\big) = \frac{x^i}{m^{i/l}}\, \pFq{1}{ld +l-1}{-m}{\big(\hat{S}(i)\big)}{\frac{\big[(-1)^{d+1}x\big]^l}{m}}. \end{gather*} The latter formula can be rewritten in the form \begin{gather*} m^{i/l} Q_{ml +i}\big(x/m^{1/l}\big) = x^i\,\pFq{1}{ld +l-1}{-m}{\big(\hat{S}(i)\big)}{\frac{\big[(-1)^{d+1}x\big]^l}{m}}. \end{gather*} Then formula \eqref{MH2} follows from \eqref{MH1}. \end{proof} It is worth noticing that the asymptotics depends on the remainder $n$ $({\rm mod}~l)$ which indicates that there is probably no general asymptotic formula, but it might exist for the subsequence with the same value of the remainder $n$ $({\rm mod}~l)$. This phenomenon is well-known in the case of Hermite polynomials, where the even-indexed and the odd-indexed polynomials have different asymptotics, see, e.g.,~\cite{AS}. Notice that as we explained in Remark~\ref{GBess} the function \begin{gather*} \pFq{0}{ld +l-1}{-}{\big(\hat{S}(i)\big)}{\big[(-1)^{d+1}x\big]^l} \end{gather*} is also a generalized Bessel function in the sense of~\cite{BHY2}. \section{Examples} \label{exa} \begin{Example}[Gould--Hopper polynomials]\label{GH} Consider the simplest case $R(H) \in {\mathbb C}$, i.e., $G = \partial$. Then for $q = \tau G^l$, using equation~\eqref{Gl} we obtain the polynomial system \begin{gather*} P_n(x) = x^n\,\pFq{l}{0}{\dfrac{-n}{l}, \ldots, \dfrac{-n+l-1}{l}}{-}{\tau\left(\frac{-l}{x}\right)^l}. \end{gather*} These polynomials coincide with the well-known Gould--Hopper polynomials $g^l_n(x, \tau) $, cf.~\cite{GH,LCS}. According to our scheme they are the eigenfunctions of the differential operator \begin{gather*} L = l\tau\partial^{l} + x\partial \end{gather*} and satisfy the recurrence relation \begin{gather*} xg^l_n(x,\tau)= g^l_{n+1}(x,\tau) - \tau l n(n-1)\cdots (n-l +2) g^l_{n-l+1}(x,\tau). \end{gather*} Using the second form of hypergeometric representation \eqref{new-h} we can also express them as \begin{gather*} g_{ml +i}^l(x) =x^i\pFq{1}{l-1}{-m }{\big(\hat{S}(i)\big)}{-\tau\left(\frac{x}{l}\right)^l}. \end{gather*} Observe that for $l=2$, these polynomials coincide (up to rescaling) with the classical Hermite polynomials. Eventually the Gould--Hopper polynomials turned out to be quite useful in quantum mechanics, integrable systems (Novikov--Vesselov equation), combinatorics, etc., see, e.g., \cite{Cha,DLMTC, VL}. The cases with $G = R(H)\partial$ with arbitrary $R$ can be considered as generalizations of the Gould--Hopper polynomials. 
In this situation we use $q(G) = G^l$, where $G = R(H)\partial$, $R$ being a~polynomial of an arbitrary degree. The corresponding expression for these polynomials provided by~\eqref{Gl} is as follows \begin{gather*} P_n(x) = x^n \,\pFq{dl+l}{0}{\Delta(l; -n) , (\Delta(l; -n-\alpha))} {-}{\left(\frac{(-l)^{d+1}\rho}{x}\right)^l}. \end{gather*} We can explicitly write discrete analogs of the (generalized) Gould--Hopper polynomials. Namely, \begin{gather*} P_n(x) = (x)_n \,\pFq{dl+l}{l}{\Delta(l; -n), (\Delta(l; -n-\alpha))} {\Delta(l; x-n)}{ \big((-1)^{(d+1)}l^d\rho\big)^l}. \end{gather*} The most straightforward analog, which one might call the {\it discrete Gould--Hopper polynomials}, corresponds to $G = \tau \Delta^l$ ($d = 0$). These polynomials have the following hypergeometric representation \begin{gather*} P_n(x) = (x)_n \,\pFq{l}{l}{\Delta(l; -n)} {\Delta(l; x-n)}{(-l\rho)^l}. \end{gather*} Having in mind the existing applications of the Gould--Hopper polynomials it is worth checking whether their generalized versions have similar or other applications. \end{Example} \begin{Example}[Konhauser--Toscano polynomials]\label{Kon-Tos} In \cite{Kon} Konhauser has defined two families of polynomials denoted by $Y^{\alpha}_n(x;l)$ and $Z^{\alpha}_n(x;l)$, $n = 0, \ldots$, where $\alpha \in {\mathbb R}$, $l \in {\mathbb N}$. The polynomials~$Z^{\alpha}_n(x;l)$ are in fact polynomials in~$x^l$. The polynomials~$Y^{\alpha}_n(x;l)$ are polynomials in the original variable~$x$. These two families are biorthogonal with respect to the weight function corresponding to the Laguerre polynomials, i.e., \begin{gather*} \int_{0}^{\infty}x^{\alpha}e^{-x}Y^{\alpha}_n(x;l) Z^{\alpha}_m(x;l){\rm d}x = h_m\delta_{n,m} \end{gather*} with $h_m \neq 0$. The polynomials $Z^{\alpha}_n(x;l)$ were introduced earlier by Toscano, see~\cite{Tos}. Their hypergeometric representation \begin{gather*} Z^{\alpha}_n(x;l) = \dbinom{\alpha + ln}{ln} \frac{(ln)!}{n!} \, \pFq{1}{l}{-n}{\Delta(l;\alpha+1)}{(x/l)^l}, \end{gather*} found in \cite{LCS} shows that they can be constructed using the methods of the present paper. Let us consider $G= R(H)\partial$, withl $R(H) = \prod\limits^{l}_{s=1}\big(H+\frac{\alpha +s}{l}\big)$. Then \begin{gather*} Z_n^{\alpha}(x; l) = P_n\big((x/l)^l\big). \end{gather*} In the case $l=2$ the polynomials were discovered much earlier by L.V.~Spencer and U.~Fano~\cite{SF} in their studies of the $X$-rays diffusion. \end{Example} \begin{Remark}\label{Rkon} In \cite{Kon} Konhauser has proven that the polynomials $Y^{\alpha}_n(x;l)$ are the eigenfunctions of a differential operator of order $l+1$ and that they satisfy a $(l+2)$-recurrence relation of the form \begin{gather*} x^l Y^{\alpha}_n(x;l) = \sum\limits_{j=-1}^{l} \gamma_j(n) Y^{\alpha}_{n+j}(x;l). \end{gather*} The polynomials $Z^{\alpha}_n(x;l)$ satisfy a relation of the form \begin{gather*} x^l Z^{\alpha}_n(x;l) = \sum\limits_{j=-1}^{l} \beta_j(n) Z^{\alpha}_{n-j}(x;l). \end{gather*} (This relation follows from the definition $Z_n(x;l) := P^R_n\big((x/l)^l\big)$ and the properties of \linebreak $P^R_n\big((x/l)^l\big)$.) As mentioned earlier the Konhauser--Toscano polynomials have applications to random mat\-rix theory. The so-called Borodin--Muttalib ensembles~\cite{Bor, Mu} are based on their biorthogonality. \end{Remark} In fact the polynomials $ Y^{\alpha}_n(x;l)$, $n=0, 1,\ldots, l-1$ are closely related with the functionals, which define the $d$-orthogonal polynomials $P_n(x)$. 
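As a quick check of this identification (included only for illustration), for $l=1$ the quoted representation gives
\begin{gather*}
Z^{\alpha}_n(x;1) = \binom{\alpha + n}{n}\, \pFq{1}{1}{-n}{\alpha+1}{x} = L^{(\alpha)}_n(x),
\end{gather*}
so the biorthogonality above reduces to the ordinary orthogonality of the Laguerre polynomials, as it should.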
It is quite interesting to define discrete analogs for the above Konhauser polynomials. \begin{Example}[matching polynomials of graphs]\label{graph} Polynomial systems considered in this example are taken from \cite{AEMR} and they are relevant for the so-called chemical graph theory. We need some notions from the graph theory, see, e.g., \cite{AEMR, Di, Farr} and the references therein. Let $K$ be a connected graph with $n$ vertices. Following~\cite{RMA} we define the higher Hosoya number~$p_r(K, j)$ as the number of ways one can select $j$ non-incident paths of length~$r$ in~$K$. Using the Hosoya numbers, the higher-order matching polynomial $M_r(K)$ of $K$ is defined by the relation \begin{gather*} M_r(K) := \sum\limits_{j=0}^{ \lceil{\frac{n}{r+1}}\rceil } (-1)^jp_r(K, j)x^{n-(r+1)j}, \end{gather*} see \cite{AEMR,Farr}. When $K = K_n$ is the complete graph on $n$ vertices (i.e., each pair of vertices is connected by an edge) the corresponding polynomials were explicitly computed in~\cite{AEMR, Farr}. Using combinatorial arguments it was shown that these polynomials are given by \begin{gather*} M_{r} (K_n) = \sum\limits_{j=0}^{n} (-1)^j\frac{n!x^{n- (r+1)j}}{(n- (r+1)j)! j!2^j }. \end{gather*} If we compare this expression with \begin{gather*} P_n = e^{-G^{r+1}/2}x^n, \end{gather*} where $G = \partial_x$, we see that they coincide. Hence we obtain a hypergeometric representation \begin{gather*} M_r(K_n) = x^n \pFq{r+1}{0}{\Delta(r+1; -n)}{-}{ \frac{(-1)^r(r+1)^{r+1}}{2x^{r+1}}}. \end{gather*} (This representation was earlier found in \cite{AEMR}.) We see that the matching polynomials of complete graphs are the eigenfunctions of a linear differential operator and that they satisfy an $(r+2)$-term recurrence relation which can be useful in the studies of these polynomials. The case $r = 1$ deserves a special attention since it corresponds to the Hermite polynomials as was observed long ago, e.g., in~\cite{AEMR, Gut}. The corresponding coefficients~$p_1(K,j)$ are the original Hosoya numbers~\cite{Hos}. Let us consider another example of matching polynomials, this time of complete bipartite graphs $K_{n,m}$, $n \geq m$ with $n+m$ vertices. (Recall that~$K_{n,m}$ is a graph whose vertices are split into two nonintersecting sets $V_n$ and $V_m$ with $n$ and $m$ elements resp. and every vertex in $V_n$ is connected to every vertex in~$V_m$. We consider the case when~$r$ is odd. Then~\cite{AEMR, Farr} contain the formula \begin{gather*} M_r(K_{n, m}) = \sum\limits_{j=0}^{n} \binom{n}{jr} \binom{m}{jr}\frac{(-1)^j((jr)!)^2}{j!} x^{n- 2rj}, \end{gather*} which can easily be transformed into \begin{gather*} M_r(K_{n,m}) = x^{n+m}\, \pFq{r+1}{0}{\Delta(r+1; -n), \Delta(r+1; -m) }{-}{ \frac{-(r+1)^{r+1}}{(2x)^{r+1}}}. \end{gather*} Set $m = n - M$, $M\geq 0$. Using this notation, we see that \begin{gather*} M_r(K_{n,m}) = x^{2n - M}\, \pFq{r+1}{0}{\Delta(r+1; -n), \Delta(r+1; - n +M ) }{-}{ \frac{-(r+1)^{r+1}}{(2x)^{r+1}}}. \end{gather*} In other words, $M_r(K_{n,m}) $ coincide with $x^{n-M}P_n(x)$, where the polynomials $P_n(x)$ are constructed via $q(G) = -G^{r+1}/2$ and $G = (x\partial - M)\partial$. While the hypergeometric representation is known, the properties of these polynomials listed in Theorem~\ref{any} are new. In particular, the differential equation and the recurrence relations they satisfy seem to be new. 
Notice that for $r=1$ (i.e., when all the paths are edges), these polynomials coincide with $L^{(M)}_n\big(x^2\big)$, where $L^{(\alpha)}_n(y)$ are the generalized Laguerre polynomials. \end{Example} The above result suggests a conjecture about the matching polynomials of complete $k$-partite graphs, i.e., graphs whose vertices can be colored into~$k$ distinct colors, so that the two endpoints of every edge have different colors. By a complete $k$-partite graphs we mean that any two vertices with different colors are connected by an edge, see more details in~\cite{CZ}. Namely, consider all graphs with $N=n_1 + \cdots + n_k$ vertices. Denote the corresponding $k$-partite graph by $K_{(n)}$. \begin{Conjecture}\label{n-part} For odd $r$, the matching polynomials $M_r(K_{(n)})$ are given by \begin{gather*} M_r(K_{(n)}) = x^N \,\pFq{r+1}{0}{\Delta(r+1; -n_1)\ldots \Delta(r+1; -n_k) }{-}{ \frac{-(r+1)^{r+1}}{(2x)^{r+1}}}. \end{gather*} \end{Conjecture} More examples can be found in the cited papers. \LastPageEnding \end{document}
\begin{document} \title{Planar {Tur\'{a}n} Numbers of Cycles:\\ A Counterexample} \date{} \author{Daniel W. Cranston\thanks{ Department of Computer Science, Virginia Commonwealth University, Richmond, VA, USA; \texttt{[email protected]} } \and Bernard Lidick\'{y}\thanks{Department of Mathematics, Iowa State University, Ames, IA, USA; \newline \texttt{[email protected]}. Research of this author is partially supported by NSF grant DMS-1855653.} \and Xiaonan Liu\thanks{School of Mathematics, Georgia Institute of Technology, Atlanta, GA, USA; \texttt{[email protected]}} \and Abhinav Shantanam \thanks{Department of Mathematics, Simon Fraser University, Burnaby, BC, Canada; \texttt{[email protected]}} } \maketitle \begin{abstract} The planar {Tur\'{a}n} number $\textrm{ex}_{\mathcal{P}}(C_{\ell},n)$ is the largest number of edges in an $n$-vertex planar graph with no $\ell$-cycle. For $\ell\in \{3,4,5,6\}$, upper bounds on $\textrm{ex}_{\mathcal{P}}(C_{\ell},n)$ are known that hold with equality infinitely often. Ghosh, Gy\"{o}ri, Martin, Paulos, and Xiao [arxiv:2004.14094] conjectured an upper bound on $\textrm{ex}_{\mathcal{P}}(C_{\ell},n)$ for every $\ell\ge 7$ and $n$ sufficiently large. We disprove this conjecture for every $\ell\ge 11$. We also propose two revised versions of the conjecture. \end{abstract} \section{Introduction} The {Tur\'{a}n} number $\textrm{ex}(n,H)$ for a graph $H$ is the maximum number of edges in an $n$-vertex graph with no copy of $H$ as a subgraph. {Tur\'{a}n} famously showed that $\textrm{ex}(n,K_{\ell})\le (1-\frac1{\ell-1})\frac{n^2}2$; for example, see \cite[Chapter 32]{PFTB}. The Erd\H{o}s--Stone Theorem~\cite[Exercise 10.38]{lovasz-problems-book} generalizes this result, by asymptotically determining $\textrm{ex}(n,H)$ for every non-bipartite graph $H$: $\textrm{ex}(n,H)=(1-\frac1{\chi(H)-1})\frac{n^2}2+o(n^2)$; here $\chi(H)$ is the chromatic number of $H$. Dowden~\cite{dowden} considered the problem when restricting to $n$-vertex graphs that are planar. The \emph{planar {Tur\'{a}n} number} $\textrm{ex}_{\mathcal{P}}(n,H)$\aaside{$\textrm{ex}_{\mathcal{P}}(n,H)$}{-4mm} for a graph $H$ is the maximum number of edges in an $n$-vertex planar graph with no copy of $H$ as a subgraph (not necessarily induced). This parameter has been investigated for various graphs $H$ in~\cite{LSS2} and~\cite{FZW}; but here we focus mainly on cycles. It is well-known that if $G$ is an $n$-vertex planar graph with no triangle, then $G$ has at most $2n-4$ edges; further, this bound is achieved by every planar graph with each face of length 4. Thus, $\textrm{ex}_{\mathcal{P}}(n,C_3)=2n-4$ for all $n\ge 4$. Dowden~\cite{dowden} proved that $\textrm{ex}_{\mathcal{P}}(n,C_4)\le\frac{15(n-2)}7$ for all $n\ge 4$ and $\textrm{ex}_{\mathcal{P}}(n,C_5)\le\frac{12n-33}5$ for all $n\ge 11$. He also gave constructions showing that both of these bounds are sharp infinitely often. For each $k\in \{4,5\}$, form $\Theta_k$ from $C_k$ by adding a chord of the cycle. Lan, Shi, and Song~\cite{LSS} showed that $\textrm{ex}_{\mathcal{P}}(n,\Theta_4)\le \frac{12(n-2)}5$ for all $n\ge 4$, that $\textrm{ex}_{\mathcal{P}}(n,\Theta_5)\le \frac{5(n-2)}2$ for all $n\ge 5$, and that $\textrm{ex}_{\mathcal{P}}(n,C_6)\le \frac{18(n-2)}7$ for all $n\ge 7$. The bounds for $\Theta_4$ and $\Theta_5$ are sharp infinitely often. 
However, the bound for $C_6$ was strengthened by Ghosh, Gy\"{o}ri, Martin, Paulos, and Xiao~\cite{GGMPX}, who showed that $\textrm{ex}_{\mathcal{P}}(n,C_6)\le \frac{5n-14}2$ for all $n\ge 18$. They also showed that this bound is sharp infinitely often. In the same paper, Ghosh et al. conjectured a bound on $\textrm{ex}_{\mathcal{P}}(n,C_{\ell})$ for each $\ell\ge 7$ and each sufficiently large $n$. In this note, we disprove their conjecture. \begin{conj}[\cite{GGMPX}; now disproved] \label{main-conj} For each $\ell\ge 7$, for $n$ sufficiently large, if $G$ is an $n$-vertex planar graph with no copy of $C_{\ell}$, then $e(G)\le \frac{3(\ell-1)}{\ell}n-\frac{6(\ell+1)}{\ell}$. That is, $\textrm{ex}_{\mathcal{P}}(n,C_{\ell})\le \frac{3(\ell-1)}{\ell}n-\frac{6(\ell+1)}{\ell}$. \end{conj} In fact, we disprove the conjecture in a strong way. \begin{thm} \label{over-thm} For each $\ell\ge 11$ and each $n$ sufficiently large (as a function of $\ell$), we have $\textrm{ex}_{\mathcal{P}}(n,C_{\ell}) > \frac{3(\ell-1)}{\ell}n-\frac{6(\ell+1)}{\ell}$. Furthermore, if there exists a function $s:\mathbb{Z}^+\to \mathbb{Z}^+$ such that $\textrm{ex}_{\mathcal{P}}(n,C_{\ell})\le \frac{3(s(\ell)-1)}{s(\ell)}n$ for all $\ell$ and all $n$ sufficiently large (as a function of $\ell$), then $s(\ell)=\Omega(\ell^{\lg_23})$. \end{thm} We prove the first statement of Theorem~\ref{over-thm} in Section~\ref{first-sec}, and sketch a proof of the second statement in Section~\ref{second-sec}. Our constructions modify that outlined by Ghosh et al.~\cite{GGMPX}. The main building blocks, which we call \emph{gadgets}, are triangulations, in which every cycle has length less than $\ell$. Clearly, a set of vertex-disjoint gadgets will have no $C_{\ell}$. To increase the average degree, we can identify vertices on the outer faces of these gadgets as long as we avoid creating cycles. We can also allow ourselves to create cycles among the gadgets as long as each created cycle has length more than $\ell$. So we must find the way to do this most efficiently. Our notation is standard, but for completeness we record a few things here. We let $e(G)$ and $n(G)$ denote the numbers of edges and vertices in a graph $G$. We write $C_{\ell}$ for a cycle \mbox{of length $\ell$.} \section{Disproving the Conjecture: a First Construction} \label{first-sec} To disprove Conjecture~\ref{main-conj}, we start with a planar graph in which each face has length $\ell+1$ (and each cycle has length at least $\ell+1$), and then we ``substitute'' a gadget for each vertex. As a first step, we construct the densest planar graphs with a given girth $g$, for each fixed $g\ge 6$. We will also need that our dense graphs have maximum degree 3, as we require in the following definition. \begin{defn} If $G$ is a planar graph of girth $g$ with each vertex of degree 2 or 3, and $e(G)=\frac{g}{g-2}(n-2)$, then $G$ is a \emph{dense graph of girth $g$}\aaside{dense graph}{-5mm}. \end{defn} An easy counting argument shows that if $G$ is an $n$-vertex dense graph of girth $g$, where $n=(5g-10)\frac{k}{2}-g+4$ (for some positive even integer $k$), then $G$ has $10k-8$ vertices of degree 3 and all other vertices of degree 2. \begin{lem} Fix an integer $g\ge 3$. If $G$ is an $n$-vertex planar graph with girth $g$, then $e(G)\le \frac{g}{g-2}(n-2)$. For each $g\ge 6$, this bound holds with equality infinitely often; specifically, it holds with equality if $k$ is a positive even integer and $n=(5g-10)\frac{k}{2}-g+4$. 
In fact, for each such $k$ and $n$, there exists a graph $G$ that attains this bound and that has every vertex of degree 2 or 3. \label{dense-lem} \end{lem} \begin{proof}[Proof of Lemma~\ref{dense-lem}.] Let $G$ be an $n$-vertex planar graph with girth $g$. Denote by $n$, $e$, and $f$ the numbers of vertices, edges, and faces in $G$. Every face boundary contains a cycle,\footnote{To see this, form $G'$ from $G$ by deleting all cut-edges. Since each component of $G'$ is 2-connected, each face boundary \emph{is} either a cycle or a disjoint union of cycles (if $G'$ is disconnected). Note that each face boundary in $G$ contains all edges of a face boundary in $G'$.} so every face boundary has length at least $g$. Thus, $2e\ge gf$. Substituting into Euler's formula and simplifying gives the desired bound: $e\le \frac{g}{g-2}(n-2)$. Now we construct graphs for which the bound holds with equality. Before giving our full construction, we sketch a simpler construction which has the desired properties except that it has maximum degree 6 (rather than each degree being 2 or 3, as we require). Begin with a 4-connected $n$-vertex planar triangulation with maximum degree 6. We will find a set $M$ of edges such that every triangular face contains exactly one edge in $M$. To see that such a set exists, we consider the planar dual $G^*$. Since $G$ is a triangulation and 2-connected, $G^*$ is $3$-regular. By Tutte's Theorem, $G^*$ contains a perfect matching $M^*$ (in fact, this was proved earlier by Petersen). The set $M$ of edges in $G$ corresponding to the edges of $M^*$ in $G^*$ has the desired property: each triangle of $G$ contains exactly one edge of $M$. To get the desired graph $G'$ with each face of length $g$, we replace each edge of $G$ not in $M$ with a path of length $\lfloor (g+1)/3\rfloor$ and replace each edge of $G$ in $M$ with a path of length $g-2\lfloor (g+1)/3\rfloor$. Now each face of $G'$ has length $2\lfloor (g+1)/3\rfloor+(g-2\lfloor (g+1)/3\rfloor)=g$. Thus, for $G'$ the inequality $2e(G')\ge gf(G')$ in the initial paragraph holds with equality. So $e(G')=\frac{g}{g-2}(n(G')-2)$. Since each non-facial cycle of $G$ has length at least 4, each non-facial cycle of $G'$ has length at least $g$. Now we show how to also guarantee that each vertex of $G'$ has degree 2 or 3. The construction is similar, except that it starts from a particular planar graph $G$ with every face of length 6 and every vertex of degree 2 or 3. Again, we find a subset $M$ of edges such that each face of $G$ contains exactly one edge of $M$. To form $G'$ from $G$, we replace each edge not in $M$ with a path of length $\lfloor(g+1)/6\rfloor$ and we replace each edge in $M$ with a path of length $g-5\lfloor(g+1)/6\rfloor$. Thus, each face of $G'$ has length exactly $5\lfloor(g+1)/6\rfloor + (g-5\lfloor(g+1)/6\rfloor)=g$. It will turn out that each non-facial cycle of $G$ has either (i) length at least 10 or (ii) length at least 8 and at least one edge in $M$. The corresponding non-facial cycle in $G'$ thus has length at least $g$. In Case (ii) this follows from the calculation in the previous paragraph. In Case (i), when $g \ge 10$ this holds because $10 \lfloor(g+1)/6\rfloor \ge 10 (g-4)/6 \ge g$. So consider Case (i) when $g\le 9$. Since each path in $G'$ replacing an edge in $G$ has length at least 1, each non-facial cycle in $G'$ has length at least 10, which is at least $g$ since $g\le 9$.
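(For concreteness, here is the Case (ii) calculation in its simplest instance, recorded only as a sanity check: a non-facial cycle of $G$ with length 8 and exactly one edge in $M$ becomes a cycle of $G'$ with length $$ 7\left\lfloor \frac{g+1}{6}\right\rfloor+\left(g-5\left\lfloor \frac{g+1}{6}\right\rfloor\right)=g+2\left\lfloor \frac{g+1}{6}\right\rfloor\ge g. $$ Note also that every replacement path indeed has positive length, since $g-5\lfloor(g+1)/6\rfloor\ge (g-5)/6>0$ for $g\ge 6$.)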
Thus, what remains is to construct our graph $G$, specify the set of edges $M$, and check that each non-facial cycle in $G$ either has length at least 10 or has length at least 8 and includes an edge in $M$. \begin{figure} \caption{The planar graph $G_k$ has $10k-2$ vertices, $15k-6$ edges, and every face of length 6. Every vertex of $G_k$ has degree 2 or 3 and every non-facial cycle either (i) has length at least 10 or (ii) has length at least 8 and includes a blue edge. The set of blue edges intersects every face exactly once.} \end{figure} We construct an infinite family of planar graphs $G_k$ on $10k-2$ vertices, with $5k-2$ faces (each of length 6), and with all vertices of degree 2 or 3; here $k$ is an arbitrary positive even integer. Figure~1 shows $G_k$. (By Euler's formula, each $G_k$ has 6 vertices of degree 2 and $10k-8$ vertices of degree 3.) Each of $k$ ``diagonal columns'' contains 10 vertices, except for the first and last, which each contain one vertex fewer. We write $v_{i,j}$ to denote the $j$th vertex down from the top in column $i$, except that we start column 1 with $v_{1,2}$. So $V=\{v_{i,j}~|~1\le i\le k,~1\le j\le 10, (i,j)\notin\{(1,1),(k,10)\}\}$. The edge set consists of the boundary cycles of $4(k-1)$ 6-faces in the hexagonal grid, $k-1$ ``curved edges'' $v_{i,1}v_{i-1,10}$, when $2\le i\le k$, as well as two ``end edges'' $v_{1,2}v_{1,7}$ and $v_{k,4}v_{k,9}$. The matching $M$ contains $v_{i,4}v_{i+1,3}$ and $v_{i,8}v_{i+1,7}$ when $1\le i\le k-1$, edge $v_{i,1}v_{i-1,10}$ for each odd $i\ge 3$ if $k\ge 4$, and the end edges $v_{1,2}v_{1,7}$ and $v_{k,4}v_{k,9}$. It is easy to check that the only vertices with degree 2 are $v_{1,3}, v_{1,5}, v_{1,9}, v_{k,2}, v_{k,6}, v_{k,8}$; the remaining $10k-8$ vertices all have degree 3. We now show that every non-facial cycle has either (i) length at least 10 or (ii) length at least 8 and at least one edge in $M$. We denote by $C_2, C_3, \ldots, C_{5k-5}$ the facial cycles that do not use any end-edge. Informally, $C_2$ is the ``top left'' of these (containing $v_{1, 2}$), and subscripts increase as we move down the first diagonal and then wrap around toroidally with the facial cycle containing $v_{1, 10}$ and two curved edges (see Figure~1), and continue on to the facial cycle containing $v_{k, 9}$. Formally, each of these is $C_m$, where $X$ denotes its vertex set and $m := \max\{j/2: v_{i, j} \in X\} + 5\cdot \min\{i-1: v_{i, j} \in X\} + (|\{i: v_{i, j} \in X\}|-2)$. The facial cycles containing the left end-edge are $C_0$ and $C_1$, and those containing the right end-edge are $C_{5k-4}$ and $C_{5k-3}$. Note that the edge-set of any non-facial cycle $C$ is the symmetric difference of the edge-sets of the facial cycles ``inside'' (or ``outside'') of $C$. Consider first a non-facial cycle $C$ that does not contain any end-edge. Pick the side of $C$ that does not contain the right end-edge; take the symmetric difference of the edge-sets of the facial cycles on this side incrementally, in order of increasing subscripts. The symmetric difference of the first two facial cycles has size at least 10 and this size never decreases. Now consider the non-facial cycles that contain exactly one end-edge; by (rotational) symmetry, assume it is the left end-edge.
For these cycles, take the symmetric difference incrementally as above for the side not containing the right end-edge; the symmetric difference of the first two facial cycles has size at least 8 and again this size never decreases. Finally, consider a non-facial cycle $C$ that contains both end-edges. Now take the symmetric difference incrementally as above for the side of $C$ that includes $C_1$; the size of the symmetric difference is now initially at least 8, and never decreases until the final facial cycle ($C_{5k-4}$ or $C_{5k-3}$) is added and the symmetric difference is complete. The final facial cycle $C'$ may reduce the size of the symmetric difference by at most 4, but the final symmetric difference still has size at least 12 (due to the position of $C'$ relative to $C_1$, and the fact that $k \geq 2$). To finish the proof, we should verify that $|V(G')|=(5g-10)\frac{k}{2}-g+4$, as claimed. By construction, each vertex of $G'$ has degree 2 or 3. Each vertex with degree 3 in $G'$ also has degree 3 in $G$, and we have exactly $10k-8$ of these. Let $n$, $e$, and $f$ denote the numbers of vertices, edges, and faces in $G'$. Now summing degrees gives $$ 3(10k-8)+ 2 (n-(10k-8))=2e=gf=\frac{g}{g-2} (2n-4), $$ where the last two equalities hold as at the start of the proof. Thus, $n=(5g-10)\frac{k}{2}-g+4$. \end{proof} \begin{defn} \label{sub-defn} Let $G$ be a 2-connected plane graph, with every vertex of degree 2 or 3. Let $B$ be a plane graph with 3 vertices specified on its outer face. To \EmphE{substitute $B$ into $G$}{-4mm} we do the following. Subdivide every edge of $G$. For each vertex $v$ in $G$, delete $v$ from the subdivided graph and identify $d(v)$ vertices on the outer face of a copy of $B$ with the neighbors of $v$ in the subdivided graph. \end{defn} Now we consider the result of substituting $B$ into $G$, as in Definition~\ref{sub-defn}. \begin{lem} \label{lem:construction} Let $G$ be a plane graph; denote by $n_2$ and $n_3$ the numbers of vertices with degree 2 and 3 in $G$. Let $B$ be a plane graph with $n_B$ vertices and $e_B$ edges, and with 3 vertices specified on its outer face. Form $G'$ by substituting $B$ into $G$. Now $e(G') = (n_2+n_3)e_B$ and $n(G') = n_2(n_B-1)+n_3(n_B-3/2)$. Further, if $G$ has no cycle of length $\ell$ or shorter, and $B$ has no cycle of length $\ell$, then $G'$ has no cycle of length $\ell$. \end{lem} \begin{proof} Each vertex in $G$ gives rise to an edge-disjoint copy of $B$ in $G'$; thus $e(G')=(n_2+n_3)e_B$. Each vertex of degree 2 in $G$ contributes $n_B-1$ vertices to $G'$, since exactly two of its vertices lie in two copies of $B$ in $G'$ (and all other vertices lie in one copy of $B$). Similarly, each vertex of degree 3 in $G$ contributes $n_B-3/2$ vertices to $G'$. Finally, assume $G$ and $B$ satisfy the hypotheses on the lengths of their cycles. Now consider a cycle $C'$ in $G'$. If $C'$ is contained entirely in one copy of $B$, then $C'$ has length not equal to $\ell$. If $C'$ visits two or more copies of $B$, then $C'$ maps to a cycle $C$ in $G$ with length no longer than the length of $C'$. Since each cycle in $G$ has length longer than $\ell$, we are done. \end{proof} Now suppose that we plan to substitute some plane graph $B$ into a dense planar graph of girth $\ell+1$. Which $B$ should we choose? Since $B$ must not contain any $\ell$-cycle, a natural choice is a triangulation of order $\ell-1$. Indeed, every such $B$ yields a graph that attains the bound in Conjecture~\ref{main-conj}.
This is Corollary~\ref{obs1}, which follows from our next lemma. \begin{lem} \label{obs} Let $G$ be a dense graph of girth $\ell+1$. Form $G'$ by substituting into $G$ a plane graph $B$ with 3 vertices specified on its outer face. Now $e(G')=\frac{e_B(\ell-1)}{(n_B-1)(\ell-1)-2} \left(n(G')-\frac{2(\ell+1)}{\ell-1}\right)$, where $e_B=e(B)$ and $n_B=n(B)$. \end{lem} \begin{proof} Let $G$ be a dense graph of girth $\ell+1$ on $n$ vertices, and let $n_2$ and $n_3$ denote, respectively, its numbers of vertices with degree $2$ and $3$. Recall from Lemma~\ref{dense-lem} (with $g=\ell+1$) that $n=(5\ell-5)\frac{k}{2}-\ell+3$ for some even integer $k$, that $n_3=10k-8$, and that $n_2=n-n_3$. Lemma~\ref{lem:construction} implies that $e(G')=(n_2+n_3)e_B=ne_B$ and that $n(G') = n_2(n_B-1)+n_3(n_B-3/2) = (n-n_3)(n_B-1)+n_3(n_B-3/2) =n(n_B-1)-n_3/2$. Now we show that $e(G')=\frac{e_B(\ell-1)}{(n_B-1)(\ell-1)-2} (n(G')-\frac{2(\ell+1)}{\ell-1})$. The final equality comes from substituting for $n_3$ and simplifying (using that $n=(5\ell-5)\frac{k}2-\ell+3$). \begin{align*} \frac{e(G')}{n(G')-\frac{2(\ell+1)}{\ell-1}} &= \frac{n e_B (\ell-1)}{(n(n_B-1)-n_3/2)(\ell-1)-2(\ell+1)}\\ &=\frac{e_B(\ell-1)}{(n_B-1)(\ell-1)-\frac{n_3(\ell-1)+4(\ell+1)}{2n}}\\ &=\frac{e_B(\ell-1)}{(n_B-1)(\ell-1)-2}. \end{align*} \par \end{proof} \begin{cor} \label{obs1} The bound in Conjecture~\ref{main-conj} holds with equality for each graph formed by substituting a triangulation on $\ell-1$ vertices into a dense graph of girth $\ell+1$. \end{cor} \begin{proof} This follows from the above lemma when $B$ is a plane triangulation on $\ell-1$ vertices, so $n_B=\ell-1$ and $e_B=3(\ell-1)-6=3\ell-9$. We get \begin{align*} \frac{e_B(\ell-1)}{(n_B-1)(\ell-1)-2}&= \frac{3(\ell-3)(\ell-1)}{(\ell-2)(\ell-1)-2}\\ &=\frac{3(\ell-3)(\ell-1)}{\ell^2-3\ell+2-2}\\ &=\frac{3(\ell-1)}{\ell}. \end{align*} \par \end{proof} To beat the bound of Conjecture~\ref{main-conj}, it will suffice to instead substitute into a dense graph of girth $\ell+1$ any triangulation with order larger than $\ell-1$, as long as it has each cycle of length at most $\ell-1$. This is because the conjectured average degree is less than 6, and is attained by substituting a triangulation of order $\ell-1$, as shown in Corollary~\ref{obs1}. However, the average degree of a triangulation tends to 6 (from below) as its order grows. For each $\ell\in \{3,\ldots,10\}$, every triangulation on $\ell$ vertices is Hamiltonian, i.e., it contains an $\ell$-cycle. But for each $\ell\ge 11$, there exists a triangulation on $\ell$ vertices with no $\ell$-cycle; this is a consequence of Lemma~\ref{non-ham-lem}, which we prove next. (In fact, much more is true, as we show in Section~\ref{denser-sec}.) \begin{lem} \label{non-ham-lem} For every integer $t\ge 5$, there exist a plane triangulation with $3t-4$ vertices and each cycle of length at most $2t$, and a plane triangulation with $3t-3$ vertices and each cycle of length at most $2t+1$. \end{lem} \begin{proof} We start with a plane triangulation on $t$ vertices. First we add into the interior of each face a new vertex, making it adjacent to each vertex on the face. Let $A$ denote the set of vertices in the original triangulation, and let $B$ denote the set of added vertices. Since $|A|=t$ and $|B|=2t-4$, the resulting graph $G_1$ has order $3t-4$. Further, $B$ is an independent set. Thus, on every cycle $C$, at least half of the vertices must be from $A$. Hence, $C$ has length at most $2|A|=2t$.
Now we obtain $G_2$ by adding a single vertex inside some face of $G_1$. It is easy to check that $G_2$ is a $(3t-3)$-vertex triangulation with each cycle of length at most $2t+1$. \end{proof} We have already outlined the proof of our main result. We let $B$ be a plane triangulation with no $\ell$-cycle, and with order at least $\ell$, as guaranteed by Lemma~\ref{non-ham-lem}. We simply substitute $B$ into a dense graph of girth $\ell+1$. For completeness, we include more details in the proof of Theorem~\ref{main1-thm}. \begin{thm} \label{main1-thm} For each $\ell\ge 11$, Conjecture~\ref{main-conj} is false. In particular, whenever $k$ is a positive even integer: if $\ell\ge 11$ and $\ell$ is odd, then $\textrm{ex}P(n,C_{\ell})\ge \frac{9(\ell-5)(\ell-1)}{(3\ell-13)(\ell-1)-4} \left(n-\frac{2(\ell+1)}{\ell-1}\right)$ for $n=((5\ell-5)\frac{k}{2}-\ell+3)(\frac{3(\ell-1)}{2}-5)-(5k-4)$; and if $\ell\ge 11$ and $\ell$ is even, then $\textrm{ex}P(n,C_{\ell})\geq \frac{3(3\ell-16)(\ell-1)}{(3\ell-14)(\ell-1)-4} \left(n-\frac{2(\ell+1)}{\ell-1}\right)$ for $n=((5\ell-5)\frac{k}{2}-\ell+3)(3(\frac{\ell}{2}-1)-4)-(5k-4)$. \end{thm} \begin{proof} Let $a_1:= \frac{9(\ell-5)(\ell-1)}{(3\ell-13)(\ell-1)-4}$ and $a_2:=\frac{3(3\ell-16)(\ell-1)}{(3\ell-14)(\ell-1)-4}$. Since $\ell \geq 11$, easy algebra implies that $a_i>\frac{3(\ell-1)}{\ell}$, for each $i\in\{1,2\}$. Thus, $a_i(n-\frac{2(\ell+1)}{\ell-1}) > \frac{3(\ell-1)}{\ell}\left(n-\frac{2(\ell+1)}{\ell-1}\right) = \frac{3(\ell-1)}{\ell}n-\frac{6(\ell+1)}{\ell}$ for each $i\in\{1,2\}$. So it suffices to show that $\textrm{ex}P(n,C_{\ell})\geq a_1(n-\frac{2(\ell+1)}{\ell-1})$ when $\ell \ge 11$ and $\ell$ is odd; and that $\textrm{ex}P(n,C_{\ell})\geq a_2(n-\frac{2(\ell+1)}{\ell-1})$ when $\ell \ge 11$ and $\ell$ is even (for the claimed values of $n$). Let $G$ be a dense graph of girth $\ell+1$. Recall that $n(G)=(5\ell-5)\frac{k}{2} - \ell+3$ for some even integer $k$, and that $G$ has $10k-8$ vertices of degree $3$; let $n_3:=10k-8$. When $\ell \ge 11$ and $\ell$ is odd, let $t_1:=\frac{\ell-1}{2}$ and $n_{B_1}:=3t_1-4=\frac{3(\ell-1)}{2}-4$. We have $t_1\ge 5$; so by Lemma~\ref{non-ham-lem}, there exists a plane triangulation $B_1$ with $n_{B_1}$ vertices and with each cycle of length at most $2t_1=\ell-1$. By Euler's formula, $e_{B_1}=e(B_1)=3(3t_1-4)-6=9t_1-18=9(\frac{\ell-1}{2}-2)$. Form $G_1'$ by substituting $B_1$ into $G$. Lemma~\ref{lem:construction} implies that $G_1'$ is a plane graph with no cycle of length $\ell$, and that $n(G_1')=n(G)(n_{B_1}-1)-n_3/2=((5\ell-5)\frac{k}{2}-\ell+3)(\frac{3(\ell-1)}{2}-5)-(5k-4)$. By Lemma~\ref{obs}, we have \begin{align*} e(G_1') &=\frac{e_{B_1}(\ell-1)}{(n_{B_1}-1)(\ell-1)-2}\left(n(G_1')-\frac{2(\ell+1)}{\ell-1}\right)\\ &=\frac{9(\ell-5)(\ell-1)}{(3\ell-13)(\ell-1)-4} \left(n(G_1')-\frac{2(\ell+1)}{\ell-1}\right)\\ &=a_1\left(n(G_1')-\frac{2(\ell+1)}{\ell-1}\right). \end{align*} Hence, if $\ell \ge 11$ and $\ell$ is odd, then whenever $k$ is positive and even and $n=((5\ell-5)\frac{k}{2}-\ell+3)(\frac{3(\ell-1)}{2}-5)-(5k-4)$, we have $\textrm{ex}P(n, C_{\ell})\ge a_1\left(n- \frac{2(\ell+1)}{\ell-1}\right)>\frac{3(\ell-1)}{\ell}n- \frac{6(\ell+1)}{\ell}.$ Now suppose $\ell \ge 11$ and $\ell$ is even. Let $t_2:=\frac{\ell}{2}-1$ and $n_{B_2}:=3t_2-3=\frac{3\ell}{2}-6$. Form $G_2'$ by substituting $B_2$ into $G$, where $B_2$ is a plane triangulation with $n_{B_2}$ vertices and each cycle of $B_2$ has length at most $2t_2+1=\ell-1$. (The existence of $B_2$ is guaranteed by Lemma~\ref{non-ham-lem}.)
By Euler's formula, $e_{B_2}=e(B_2)=\frac{9\ell}{2}-24$. Similarly, it follows from Lemma~\ref{lem:construction} that $G_2'$ is a plane graph with no cycle of length $\ell$, and that $n(G_2')= n(G)(n_{B_2}-1)-n_3/2 = ((5\ell-5)\frac{k}{2}-\ell+3)(\frac{3\ell}{2}-7) - (5k-4)$. Lemma~\ref{obs} implies that \begin{align*} e(G_2') &=\frac{e_{B_2}(\ell-1)}{(n_{B_2}-1)(\ell-1)-2}\left(n(G_2') -\frac{2(\ell+1)}{\ell-1}\right)\\ &=\frac{3(3\ell-16)(\ell-1)}{(3\ell-14)(\ell-1)-4} \left(n(G_2')-\frac{2(\ell+1)}{\ell-1}\right)\\ &=a_2\left(n(G_2')-\frac{2(\ell+1)}{\ell-1}\right)>\frac{3(\ell-1)}{\ell}n(G_2')- \frac{6(\ell+1)}{\ell}. \end{align*} This completes our proof. \end{proof} Now for each $\ell\ge 11$, we extend the construction in Theorem~\ref{main1-thm} to all sufficiently large $n$ (which will prove the first sentence of Theorem~\ref{over-thm}). Our general idea is to build a counterexample with order $n'$, larger than $n$, and delete vertices to get a counterexample of order precisely $n$. To see that this works, note that we can substitute different gadgets for different vertices in a sparse planar graph of girth $\ell+1$. As long as each gadget has more than $\ell$ vertices, we will beat the bound in Conjecture~\ref{main-conj}. In fact, we still beat the bound if a bounded number of gadgets have exactly $\ell$ vertices, and all other gadgets have more vertices (this is only needed in the case that $\ell\in \{11,12\}$, since that is when the gadget has precisely $\ell$ vertices). So we follow the construction in Theorem~\ref{main1-thm}, and then repeatedly remove vertices of degree 3 (that lie in $B$ in Lemma~\ref{non-ham-lem}). We can remove up to $t-4$ of these from each gadget. And the increase to the order of $G'$ when we increase $k$ in Theorem~\ref{main1-thm} is less than $(5g-10)(3t-5)$. So it suffices that the number of vertices in the sparse planar graph $G$ is greater than $\lceil(5g-10)(3t-5)/(t-4)\rceil$, which is at most $50(g-2)$. This proves the first sentence of Theorem~\ref{over-thm}. \section{Denser Constructions and a Revised Conjecture} \label{second-sec} \label{denser-sec} In this short section, we construct counterexamples to Conjecture~\ref{main-conj} that are asymptotically much denser than those in the previous section. We also propose two revised versions of Conjecture~\ref{main-conj}. By iterating the idea in Lemma~\ref{non-ham-lem}, Moon and Moser~\cite{moon-moser} constructed planar triangulations where the length of the longest cycle is sublinear in the order. These triangulations will serve as the gadgets in our denser constructions. \begin{thm}[\cite{moon-moser}] \label{MM-thm} For each positive integer $k$ there exists a 3-connected plane triangulation $G_k$ with $n(G_k)=\frac{3^{k+1}+5}2$ and with longest cycle of length less than $\frac72n(G_k)^{\log_32}$. \end{thm} \begin{cor} \label{MM-cor} There exists a positive real $D_1$ such that for all integers $\ell\ge 6$ there exists a plane triangulation $G_{\ell}$ with $n(G_{\ell})\ge D_1\ell^{\log_23}$ such that $G_{\ell}$ has no cycle of length at least $\ell$. \end{cor} Chen and Yu~\cite{chen-yu} showed that Theorem~\ref{MM-thm} is essentially best possible. \begin{thm}[\cite{chen-yu}] \label{CY-thm} There exists a positive real $D_2$ such that every 3-connected $n$-vertex planar graph contains a cycle of length at least $D_2n^{\log_32}$. \end{thm} We briefly sketch the Moon--Moser construction, which proves Theorem~\ref{MM-thm}. For a more detailed analysis, we recommend Section 2 of~\cite{chen-yu}.
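Before giving the sketch, we record one way to deduce Corollary~\ref{MM-cor} from Theorem~\ref{MM-thm}; we treat only large $\ell$, say $\ell\ge 12$, since the finitely many remaining values of $\ell$ merely force a smaller constant $D_1$ (using $K_4$ as the triangulation, say). Given such an $\ell$, let $k$ be the largest index with $\frac72 n(G_k)^{\log_32}\le \ell$; then the longest cycle of $G_k$ has length less than $\ell$, so $G_k$ has no cycle of length at least $\ell$. By the maximality of $k$ and since $n(G_{k+1})=\frac{3^{k+2}+5}{2}<3\,n(G_k)$, we get $$ \left(\frac{2\ell}{7}\right)^{\log_23}<n(G_{k+1})<3\,n(G_k), \qquad\text{so}\qquad n(G_k)>\frac13\left(\frac{2}{7}\right)^{\log_23}\ell^{\log_23}, $$ and we may take $D_1=\frac13\left(\frac27\right)^{\log_23}$.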
Start with a planar drawing of $K_4$, which we call $T_1$. To form $T_{i+1}$ from $T_i$, add a new vertex $v_f$ inside each face $f$ (other than the outer face), making $v_f$ adjacent to each of the three vertices on the boundary of $f$; see Figure~\ref{fig:Ti}. It is easy to check that the order of $T_i$ is $3+ (1+3+\ldots+3^{i-1})\approx \frac{3^i}2$. \begin{figure} \caption{Triangulations $T_1$, $T_2$, and $T_3$.} \label{fig:Ti} \end{figure} To bound the length of the longest cycle in $T_i$, we note that the vertices added when forming $T_j$ from $T_{j-1}$ form an independent set, for each $j$. Thus, for any cycle in $T_i$, at most half of the vertices were added at the final step. Of those added earlier, at most half were added at the penultimate step, etc. So the length of a longest cycle grows roughly by a factor of 2 at each step (while the order of $T_i$ grows roughly by a factor of 3). To prove the second statement of Theorem~\ref{over-thm}, we substitute into a sparse planar graph of girth $\ell+1$ a gadget with no cycle of length $\ell$, as guaranteed by Corollary~\ref{MM-cor}. We suspect this construction is extremal. So we conclude with the following two conjectures, which are each best possible. \begin{conj} \label{conj1} Fix $\ell\ge 7$, let $G$ be a dense graph of girth $\ell+1$, and let $B$ be an $n$-vertex planar triangulation with no $\ell$-cycle, where $B$ is chosen to maximize $n$. If $G'$ is formed by substituting $B$ into $G$ and $n':=|V(G')|$, then $\textrm{ex}P(n',C_{\ell})=|E(G')|$. \end{conj} Proving Conjecture~\ref{conj1} seems plausible for some small values of $\ell$. But proving it in general seems difficult. So we also pose the following weaker conjecture. Note that Conjecture~\ref{conj2} would be immediately implied by Conjecture~\ref{conj1} (together with Theorem~\ref{CY-thm}). \begin{conj} \label{conj2} There exists a constant $D$ such that for all $\ell$ and for all sufficiently large $n$ we have $\textrm{ex}P(n,C_{\ell})\le \frac{3(D\ell^{\log_23}-1)}{D\ell^{\log_23}}n$. \end{conj} \section*{Acknowledgments} Most research in this paper took place at the 2021 Graduate Research Workshop in Combinatorics. We heartily thank the organizers. We also thank Caroline Bang, Florian Pfender, and Alexandra Wesolek for early discussions on this problem. \end{document}
\begin{document} \title[Asymptotic Properties of Unbounded Quadrature Domains]{Asymptotic Properties of Unbounded Quadrature Domains in the Plane} \author[L. Karp]{Lavi Karp*} \address{ Department of Mathematics\\ ORT Braude College\\ P.O. Box 78, 21982 Karmiel\\ Israel} \email{[email protected]} \thanks{*Research supported by ORT Braude College's Research Authority} \subjclass[2010]{Primary 31A35, 30C20; Secondary 35R35} \keywords{Quadrature domains, asymptotic curve, Cauchy transform, contact surfaces, null quadrature domains, conformal mapping, free boundaries} \begin{abstract} We prove that if $\Omega$ is a simply connected quadrature domain of a distribution with compact support and the infinity point belongs to the boundary, then the boundary has an asymptotic curve that is a straight line or a parabola or an infinite ray. In other words, such quadrature domains in the plane are perturbations of null quadrature domains. \end{abstract} \maketitle \section{Introduction} A domain $\Omega$ in the complex plane $ {\mathord{\mathbb C}}$ is called a \textit{quadrature domain} (QD) of a measure $\mu$ (or a distribution) for the class of analytic functions, if $\mu_{\mid_{{\mathord{\mathbb C}} \setminus \Omega}} = 0$ and \begin{equation} \label{eq:q-i} \int_\Omega f dA= \mu(f) \quad \text{for all}\ f\in AL^1(\Omega). \end{equation} Here $AL^1(\Omega)$ is the space of all analytic and integrable functions in $\Omega$ and $dA$ the area measure. We may also consider quadrature domains for the class of harmonic functions. In that case the space $AL^1(\Omega)$ is replaced by $HL^1(\Omega)$, the set of all harmonic and integrable functions in $\Omega$, and the measure $\mu$ is real. Any QD for the class of harmonic functions is also a QD for the class of analytic functions. A particular class of unbounded QDs is the family of \textit{null quadrature domains}, that is, domains $\Omega$ for which \begin{equation*} \int_\Omega f dA=0, \quad\text{for all}\ f\in AL^1(\Omega). \end{equation*} This class comprises half-planes, the exteriors of parabolas, ellipses and strips, and the complement of any set which contains more than three points and lies in a straight line \cite{Sakai_81}. The main result of the present paper asserts that if $\Omega$ is a simply connected QD of a measure with compact support and with an unbounded boundary, then $\Omega$ is asymptotically like a null QD. More precisely, the boundary of $\Omega$, $\partial\Omega$, has an asymptotic curve that is either a straight line or a parabola or an infinite ray. In terms of free boundary problems this result has the following interpretation. Let $\chi_\Omega$ denote the characteristic function of $\Omega$ and assume there is a solution to the overdetermined problem \begin{equation*} \left\{\begin{array}{ll} \Delta u=\chi_\Omega-g \ \ \text{in}\ & {\mathord{\mathbb R}}^2\\ u=|\nabla u|=0 \ \ \text{on}\ & {\mathord{\mathbb R}}^2\setminus \Omega, \end{array}\right. \end{equation*} where $g$ has compact support in $\Omega$, and that the boundary of $\Omega$ is unbounded. Then $\partial\Omega$ has an asymptotic curve as described above. Unbounded QDs in the two dimensional plane were studied by Sakai \cite{Sakai_82, Sakai_93}, Shapiro \cite{Shapiro_87}, and recently by Lee and Makarov \cite{Lee_Makarov_13}. Sakai showed that for a given non--negative measure with compact support there is an unbounded QD that contains a given null QD \cite[Ch. 11]{Sakai_82}.
He used variational methods that are also available in higher dimensions. Shapiro proposed the use of an inversion in order to characterize unbounded quadrature domains \cite{Shapiro_87}. This idea was accomplished by Sakai in \cite{Sakai_93}. However, the asymptotic behavior of the boundary were not considered in those papers. Our results are inspired by the study of \textit{contact surfaces}, and in particular by the works of Strakhov \cite{Strakhov_74, Strakhov_74_2}, since in those works the asymptotic line appears naturally. These problems arise in geophysics and in the two dimensional plane they have the following formulation. Assume that an unbounded Jordan curve $\Gamma$ separates an infinite strip into two domains with different constant densities. Assume also that the strip is parallel to the $x$--axis. Strakhov showed that if $\Gamma$ has an asymptotic line ${\rm Im}(z)=h$ ($z=x+iy$), then the gravitational fields can be computed by the Cauchy integral \begin{equation} \label{eq:Cauchy-integral} \int_\Gamma \sigma\frac{\bar w- w+2ih}{w-z}dw, \end{equation} where $\sigma $ is the difference between the two densities and $z$ lies above the strip. The question is whether the shape of $\Gamma$ can be determined by the Cauchy integral when ${\rm Im}(z)$ is large. This type of problem is embedded in the frame of \textit{inverse problems in potential theory}, in which one aims to determinate the shape of a body from the measurements of its Newtonian potential far away from the body itself \cite{Gardiner_Sjodin_08, Isakov_93, Ivanov_56_2, Novikov_1938, Zalcman_87}. Quadrature domains are closely related to these problems \cite{Gustafsson_90, margulis_95}, for example, in the two dimensional plane the Schwarz function has a significant role in all these types of problems \cite{Aharonov_Shapiro_76, Davis_74, shapiro_92, tsirulskiy_63}. An essential tool of the proof is the relation between the conformal mapping in lower half--plane and the Cauchy transform of the measure $\mu$ of the QD (formula (\ref{eq:2.21})). Conformal mappings have been frequently incorporated in all these types of problems. The main feature of these sorts of results is to asserts that a conformal map from the unit disk to a domain $\Omega$ is rational if and only if $\Omega$ is a QD of a combination of Dirac measures and their derivatives \cite{Aharonov_Shapiro_76, Davis_74, Gustafsson_83}, or equivalently, the complex derivative of the external logarithmic potential of $\chi_\Omega$ is a rational function \cite{Ivanov_56_2, Strakhov_74_2, tsirulskiy_63}. Strakhov used the specific form of the conformal mapping in the lower half-plane (see (\ref{eq:6.12}) and (\ref{eq:2.14})) in order to show that contact surfaces are highly non-unique \cite{Strakhov_74_2}. Moreover, he constructed a continuous family of third order algebraic curves such that the Cauchy integral (\ref{eq:Cauchy-integral}) has the same value for all the curves in the family when ${\rm Im}(z)$ is large. In the context of QDs, Strakhov's example provides an explicit family of unbounded domains, and each one of them is a QD of the same Dirac measure. Furthermore, the domains in the family converge to a union of a disk and a half--plane, in other words, a union of a disk and a null QD. We will extend Strakhov's example to other types of null QDs, that is, we will construct families of unbounded QDs of a fixed Dirac measure, and such that their boundary has a parabola, or an infinite ray, as an asymptotic curve. 
We believe that the structure of unbounded QDs in higher dimension is similar to the two dimensional. However, the corresponding theory in higher dimensions is very restricted, and most of the problems are open. For example, in \cite{karp_shahgholian_00} it is proved that if $\Omega\subset{\mathord{\mathbb R}}^n$ is an unbounded QD of a measure with compact support and the complement of $\Omega$ is not too thin at infinity, then inversion of the boundary is a $C^1$ surface near the origin. However, this does not imply that the boundary has an asymptotic plane. For further properties of unbounded QDs in higher dimensions see \cite{karp_margulis_96, sakai_09}. The plan of the paper is the following: In the next section we shall first establish the relation between rational conformal mappings in the lower half--plane and quadrature identities. Having established that, the main result follows easily. Section \ref{sec:contact} deals with contact surfaces. Although Strakhov's model is published in \cite{Strakhov_74}, we shall present its basic idea here, and in particular, we will emphasize its connections to unbounded QDs. In Section \ref{sec:example} we shall construct examples of families of unbounded QDs of a fixed Dirac measure, and in each example the boundary has a different type of an asymptotic curve. Throughout this paper $f^\ast$ stands for $\bar{f}(\bar z)$ and $\chi_\Omega$ is the characteristic function of a set $\Omega$. We also may assume that $\Omega$ is the interior of its closure whenever it is a QD (see cf. \cite[Corollary 2.15]{karp_margulis_96}). \section{Conformal mappings and quadrature domains } Conformal mappings have been used extensively in QDs and the inverse problem of potential theory. A fundamental property is the relations between the conformal mapping, quadrature identity (\ref{eq:q-i}) and the Cauchy transform. To be more specific, a conformal map from the unit disk to a domain $\Omega$ is rational if and only if $\Omega$ is a QD of a combination of Dirac measures and their derivatives \cite{Aharonov_Shapiro_76, Davis_74, Gustafsson_83}, or equivalently, the Cauchy transform of the measure $\chi_\Omega$ is a rational function outside $\Omega$ \cite{Ivanov_56_2, tsirulskiy_63}. Theorem \ref{thm:2} below comprises similar statements. It was proved in the context of contact surfaces under the assumption that the boundary has an asymptotic line \cite{fedorova_tsirulskiy, Strakhov_74_2}, and for QDs by Shapiro under the assumption that the Schwarz function $S(z)$ tends to infinity as $z$ goes to infinity \cite{Shapiro_87}. Shapiro used that assumption in order to apply an inversion and to reduce the problem to the one of bounded QDs. This assumption was affirmed later by Sakai in \cite{Sakai_93}. Since the proof of Theorem \ref{thm:2} is the core for the main result we prove it here. We also slightly extend these previous results, and in addition, our proof is based upon the generalized Cauchy transform. Let $z$ and $\zeta$ be points in the complex plane. For a measure $\mu$, we denote by ${\mathord{\mathcal C}}^\mu$ the Cauchy transform of $\mu$, \begin{equation} \label{eq:Cauchy transform} {\mathord{\mathcal C}}^\mu(z):=\dfrac{1}{\pi}\int\dfrac{d\mu(\zeta)}{\zeta-z}. \end{equation} Whenever $\sigma\in L^\infty({\mathord{\mathbb C}})$ and $\mu=\sigma\chi_DdA$ we will denote the Cauchy transform by $C^{\sigma D}$. 
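As a simple illustration of these notions, consider the unit disk $\mathbb{D}=\{|z|<1\}$, which we will meet again as a limiting case in Section \ref{sec:example}. By the area mean value property, every $f\in AL^1(\mathbb{D})$ satisfies \begin{equation*} \int_{\mathbb{D}} f\, dA=\pi f(0), \end{equation*} that is, $\mathbb{D}$ is a (bounded) QD of the measure $\pi\delta_0$. Applying this identity to $f(\zeta)=\frac{1}{\pi(\zeta-z)}$ with $|z|>1$ gives \begin{equation*} {\mathord{\mathcal C}}^{\mathbb{D}}(z)=\frac{1}{\pi}\int_{\mathbb{D}}\frac{dA(\zeta)}{\zeta-z}=-\frac{1}{z}, \end{equation*} so the Cauchy transform of $\chi_{\mathbb{D}}$ is indeed a rational function outside $\mathbb{D}$, in accordance with the equivalences recalled above.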
The Cauchy transform satisfies the differential equation $\partial_{\bar z}\mathcal{C}^\mu=-\mu$ in the distributional sense, and it is well defined whenever $\mu$ has a compact support. But it may not converge when the measure has arbitrary support. In order to overcome this we use the following device that was first implemented by Bers \cite{Bers_65} and later by Sakai \cite{Sakai_81, Sakai_93}. We modify the Cauchy kernel $(\pi(\zeta-z))^{-1}$ by \begin{equation} \label{eq:6.20} \mathcal{K}(\zeta,z,a,b)=\dfrac{1}{\pi}\left(\dfrac{1}{\zeta-z}+\dfrac{z-b}{ (b-a)(\zeta-a) }+\dfrac{z-a}{ (a-b)(\zeta-b) } \right), \quad a\neq b. \end{equation} Since $\mathcal{K}(\zeta,z,a,b)=O(|\zeta|^{-3})$ for large $|\zeta|$, the integral \begin{equation} \label{eq:6.21} \mathcal{C}^g_{\mathcal{K}}(z)=\int \mathcal{K}(\zeta,z,a,b)g(\zeta)dA(\zeta), \end{equation} converges for any $g\in L^\infty({\mathord{\mathbb C}})$. The transformation (\ref{eq:6.21}) is called a {\it generalized Cauchy transform} of $g$. Obviously if $a,b\not\in{\rm supp}(g)$, then $\partial_{\bar{z}}\mathcal{C}^g_\mathcal{K}=-g$. Let $\mathbb{H}_{\pm}$ denote the upper/lower half-plane respectively and consider a conformal map $\psi$ from $\mathbb{H}_{-}$ onto a domain $\Omega$ of the form \begin{equation} \label{eq:6.12} z= \psi(w)=q(w)+\varphi(w), \end{equation} where $q$ is a quadratic polynomial, \begin{equation} \label{eq:2.14} \varphi(w) = \sum_{k=1}^m\sum_{j=0}^{m_k}\dfrac{a_{kj}}{(w-b_k)^{j+1}} +\sum_{k=1}^n c_k\int_{d_k}^{d_{k+1}} \dfrac{1}{w-s}ds, \qquad \{b_1,\ldots,b_m, d_1,\ldots,d_{n+1}\}\subset \mathbb{H}_+ \end{equation} and $\{a_{kj}\}$, $\{c_k\}$ are complex numbers. The integrals are along piecewise straight lines connecting $d_k$ to $d_{k+1}$ and do not pass through the points $b_k$. We assume that the points $\{b_k\}_{k=1}^m, \{d_k\}_{k=1}^{n+1}$ are distinct. For given points $\{\beta_1,\ldots,\beta_m\}$ and $\{\delta_1,\ldots,\delta_{n+1}\}$, we define a distribution $T$ as follows: \begin{equation} \label{eq:6.14} T\left(\phi\right)= \sum_{k=1}^m\sum_{j=0}^{m_k} \alpha_{kj}\partial_z^j\phi(\beta_k) +\sum_{k=1}^n \bar{c}_k \int_{\delta_k}^{\delta_{k+1}}\phi(s)ds, \end{equation} where the integrals are along piecewise straight lines connecting $\delta_k$ to $\delta_{k+1}$, and $\alpha_{kj}$ and $c_k$ are constants. The test function $\phi$ belongs to $C^\infty(D)$, for any open set $D$ such that $\mathop{\mathrm{supp}}(T)\subset D$. \begin{thm} \label{thm:2} Let $\Omega$ be a simply connected domain with an unbounded boundary $\partial\Omega$. Then the following are equivalent: \begin{enumerate} \item[(a)] The domain $\Omega$ is a QD of a distribution $T$ of the form (\ref{eq:6.14}), that is, \begin{equation} \label{eq:q_i:2} \int_\Omega f dA= T(f) \qquad \text{for all} \ f\in AL^1(\Omega) \tag{\ref{eq:q-i}}. \end{equation} \item[(b)] There is a conformal map $\psi$ from $\mathbb{H}_-$ onto $\Omega$ of the form (\ref{eq:6.12})--(\ref{eq:2.14}). \end{enumerate} \end{thm} \begin{rem} The relations between the coefficients of the conformal mapping $\psi$ and the distribution $T$ are as follows: $\beta_k=\psi(\bar{b}_k)$, $\delta_k=\psi(\bar{d}_k)$ and $\{\alpha_{kj}\}$ depend on $\{a_{kj}\}$ through equation (\ref{eq:2.9}) below. \end{rem} \noindent \textit{Proof} (of Theorem \ref{thm:2}). Suppose first that $\Omega=\psi(\mathbb{H}_-)$, where $\psi$ is a conformal mapping of the form (\ref{eq:6.12})--(\ref{eq:2.14}).
Since the class consisting of holomorphic functions $f$ in a neighborhood of $\Omega$ such that $ f(z)=O(|z|^{-k})$ at infinity is dense in $AL^1(\Omega)$ for any integer $k\geq 3$ (see \cite{hayman_karp_shapiro, Shapiro_87}), we may prove the identity (\ref{eq:6.14}) for $f$ that is holomorphic in a neighborhood of $\Omega$ and has arbitrary polynomial decay at infinity. Let $B_r$ be a ball with radius $r$ and center at the origin, and let $\Omega_r$ be the image of $\mathbb{H}_-\cap B_r$ under the map $\psi$. Then by Green's theorem, \begin{equation*} \begin{split} \int\limits_{\Omega_r} f dA &=\frac{1}{2i} \int\limits_{\mathop{\partial}ial\Omega_r} \bar zf(z)dz=\frac{1}{2i}\int\limits_{-r}^r \overline{\psi(t)}f(\psi(t))\psi'(t)dt \\ & + \frac{1}{2i}\int\limits_{\{|w|=r, {\rm Im}(w)<0\}} \overline{\psi(w)}f(\psi(w))\psi'(w)dw. \end{split} \end{equation*} Noting that $\psi$ has at most a quadratic growth at infinity and $f$ has arbitrary polynomial decay, the second integral of the right hand side tends to zero as $r$ goes to infinity. Hence \begin{equation} \label{eq:2.7} \int\limits_{\Omega} f dA =\frac{1}{2i}\int\limits_{-\infty}^\infty \overline{\psi(t)}f(\psi(t))\psi'(t)dt. \end{equation} Let $\overline{\psi(\bar w)}=\psi^\ast(w)$. Since $\psi^\ast(t)=\overline{\psi}(t)$ for $t\in{\mathord{\mathbb R}}$, we may replace $\overline{\psi}$ by $\psi^\ast$ in (\ref{eq:2.7}). Note $\psi^\ast=q^\ast+\varphi^\ast$ is holomorphic outside the reflection of the singularities of $\varphi$, and since $f$ has polynomial decay at infinity we have that \begin{equation*} \lim_{r\to \infty}\int\limits_{\{|w|=r, {\rm Im}(w)<0\}} {\psi^\ast(w)}f(\psi(w))\psi'(w)dw=0. \end{equation*} Hence, we can replace the line integral in (\ref{eq:2.7}) by several line integrals around the singularities of $\varphi^*$ in $\mathbb{H}_-$. Let $C_\rho(\bar{b}_k)$ be a small circle around $\bar{b}_k$, then \begin{equation} \label{eq:2.9} \begin{split} & \dfrac{1}{2i} \int\limits_{C_\rho(\bar{b}_k)} \left(q^*(w)+\varphi^*(w)\right)f(\psi(w))\psi'(w) dw \\ = & \dfrac{1}{2i}\sum_{j=0}^{m_k} \int\limits_{C_\rho(\bar{b}_k)}\left( \dfrac{\bar{a}_{kj}}{(w-\bar{b}_k)^{j+1}}\right)f(\psi(w))\psi'(w) dw \\ = & \pi \sum_{j=0}^{m_k} \bar{a}_{kj}{\rm Res}\left(\dfrac{(f \circ\psi)\psi'}{(w-\bar{b}_{k})^{j+1}}, \bar{b}_k\right) =: \sum_{j=0}^{m_k} \alpha_{kj}\mathop{\partial}ial_z^{j}f(\beta_k), \end{split} \end{equation} where $\beta_k=\psi(\bar{b}_k)$. Let $\gamma$ be a closed curve in $\mathbb{H}_-$ around the polygonal curve connecting $\bar{d}_1$ to $\bar{d}_{n+1}$, and such that it does not surround any of the points $\bar{b}_i, i=1,\ldots,m$. Then \begin{equation*} \begin{split} & \dfrac{1}{2i} \int\limits_{\gamma} \left(q^*(w)+\varphi^*(w)\right)f(\psi(w))\psi'(w) dw \\ = & \dfrac{1}{2i} \int\limits_{\gamma}\left(\sum_{k=1}^n \bar{c}_k\int_{\bar{d}_k}^{\bar{d}_{k+1}}\dfrac{1}{w-s }ds\right)f(\psi(w))\psi'(w) dw\\ = & \dfrac{1}{2i} \sum_{k=1}^n \bar{c}_k\int_{\bar{d}_k}^{\bar{d}_{k+1}}\left(\int\limits_{\gamma}\dfrac{1}{w-s }f(\psi(w))\psi'(w) dw\right)ds\\ = &\pi\sum_{k=1}^n \bar{c}_k\int_{\bar{d}_k}^{\bar{d}_{k+1}}f(\psi(s))\psi'(s) ds = \pi\sum_{k=1}^n \bar{c}_k\int_{\delta_k}^{\delta_{k+1}}f(\tau) d\tau, \end{split} \end{equation*} where $\delta_k=\psi(\bar{d}_k)$. Summing all the integrals around the singularities of $\varphi^\ast$ in $\mathbb{H}_-$, we obtain the quadrature identity (\ref{eq:q_i:2}) with $T$ as in (\ref{eq:6.14}). 
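To illustrate (\ref{eq:2.9}), consider the simplest case $m=1$, $m_1=0$, $n=0$ and $\varphi(w)=\frac{a}{w-ib}$ with $a,b>0$; this is precisely the situation of Example \ref{ex:1} in Section \ref{sec:example}. Then $\varphi^\ast(w)=\frac{a}{w+ib}$ has a single simple pole at $\bar b_1=-ib$, and since $\psi'(w)=1-\frac{a}{(w-ib)^2}$ and $(-2ib)^2=-4b^2$, the residue computation (\ref{eq:2.9}) gives \begin{equation*} \int_\Omega f\, dA=\pi a\,\psi'(-ib)f\left(\psi(-ib)\right)=\pi\left(a+\frac{a^2}{4b^2}\right)f\left(\psi(-ib)\right). \end{equation*} Thus the two conditions $\psi(-ib)=0$ and $a+\frac{a^2}{4b^2}=1$, which form the system (\ref{eq:3.2b}) below, make $\Omega$ a QD of $\pi\delta_0$.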
For the converse assertion we follow the method of Aharonov and Shapiro \cite{Aharonov_Shapiro_76} with some obvious modifications, and we shall also use Sakai's regularity results \cite{Sakai_91, Sakai_93} to overcome the difficulty near the infinity point. Calculating the generalized Cauchy transform of the distribution $T$, we find that \begin{equation} \label{eq:6.22} \mathcal{C}_{\mathcal{K}}^T(z)=\dfrac{1}{\pi}\sum_{k=1}^m\sum_{j=0}^{m_k} \frac{(-1)^jj!\alpha_{kj}}{\left(z-\beta_k\right)^{j+1}}+\dfrac{1}{\pi}\sum_{k=1}^n \bar{c}_k\int_{\delta_k}^{\delta_{k+1}}\dfrac{1}{s-z}ds+L(z), \end{equation} where $L(z)$ is a linear function. Let $a,b\not\in\Omega$. Then for any $z\not\in \Omega$ the modified Cauchy kernel $\mathcal{K}(\cdot,z,a,b)$ belongs to $AL^1(\Omega)$. Hence, the quadrature identity (\ref{eq:q-i}) implies that $\mathcal{C}_{\mathcal{K}}^{\Omega}(z)=\mathcal{C}_{\mathcal{K}}^T(z)$ for $z\not\in\Omega$. Since $\mathcal{C}_\mathcal{K}^T(z)$ is analytic for $z$ outside the support of $T$, and $\mathop{\mathrm{supp}}(T)\Subset\Omega$, \begin{equation*} S(z)=\bar{z}+\mathcal{C}_{\mathcal{K}}^{\Omega}(z)-\mathcal{C}_{\mathcal{K}}^T(z) \end{equation*} is the Schwarz function of the boundary $\partial\Omega$ in $\Omega$. That is, $S(z)=\bar z$ on $\partial\Omega$ and it is holomorphic for $z\in \Omega$ near $\partial\Omega$. Let now $\psi$ be a conformal map from $\mathbb{H}_-$ to $\Omega$ and set \begin{equation} \label{eq:6.23} F(w)=\left\{\begin{array}{ll} S(\psi(w)), \ &w\in \mathbb{H}_-\\ \psi^*(w), & w\in \mathbb{H}_+ \end{array}\right.. \end{equation} Since $S(\psi(t))=\overline{\psi(t)}=\psi^\ast(t)$ for $t\in {\mathord{\mathbb R}}$, the function $F$ is analytically continuable to a single-valued function across the real line (here we rely on Sakai's regularity result that implies that the boundary is a union of piecewise analytic arcs \cite{Sakai_91}). The crucial point is the analytic continuation near infinity. In order to overcome this difficulty we use an inversion: we set $\Omega_{\rm inv}=\{z: 1/{z}\in\Omega\}$ and $\partial\Omega_{\rm inv}=\{z:1/z\in\partial\Omega\}$. Then it follows from Sakai's regularity result \cite[Corollary 2.6]{Sakai_93}, that \begin{equation*} S_{\rm inv}(z):=\left\{\begin{array}{ll}\frac{1}{S({z}^{-1})}, \ & z\neq0\\ 0,\ & z=0\end{array}\right. \end{equation*} is the Schwarz function of $\partial\Omega_{\rm inv}$. Let $\Psi(\zeta):= 1/\psi(-\zeta^{-1})$ and set \begin{equation*} \widetilde{F}(\zeta)=\left\{\begin{array}{ll} S_{\rm inv}(\Psi(\zeta)), \ &\zeta\in \mathbb{H}_-\\ \Psi^*(\zeta), & \zeta\in \mathbb{H}_+ \end{array}\right.. \end{equation*} Then the function $\Psi(\zeta)$ maps $\mathbb{H}_-$ conformally onto $\Omega_{\rm inv}$ and $ S_{\rm inv}(\Psi(t))=\overline{\Psi(t)}$ for $t$ in $ {\mathord{\mathbb R}}$. Hence $\widetilde{F}$ has an analytic continuation across the real line, and in particular, in a neighborhood of the origin. Thus $F$ in (\ref{eq:6.23}) is holomorphic in a neighborhood of infinity. Let $\{\bar{b}_1,\ldots,\bar{b}_m, \bar{d}_1,\ldots,\bar{d}_{n+1}\}\subset \mathbb{H}_-$ be the pre-image of $\{\beta_1,\ldots,\beta_m\}$ and $\{\delta_1,\ldots,\delta_{n+1}\}$ under $\psi$.
From (\ref{eq:6.22}) and (\ref{eq:6.23}) we see that $F'(w)$ is a single-valued meromorphic function in the Riemann sphere with poles of order higher than or equal to two at the points $\{\bar{b}_1,\ldots,\bar{b}_m\}$, and of order one at the points $\{\bar{d}_1,\ldots,\bar{d}_{n+1}\}$. Since $\psi$ is univalent in $\mathbb{H}_-$, $F$ has polynomial growth at infinity, and hence $F$ is a meromorphic function in the entire plane of the form \begin{equation} \label{eq:6.24} F(w)=q^\ast(w)+\sum_{k=1}^n\sum_{j=0}^{m_k} \dfrac{\bar{a}_{kj}}{(w-\bar{b}_k)^{j+1}}+\sum_{k=1}^{n+1} \gamma_k\log(w-\bar{d}_k), \end{equation} where $q$ is a polynomial and $a_{kj},\gamma_k$ are constants. From (\ref{eq:6.23}) we see that $F$ is a single--valued function and hence $\sum_{k=1}^{n+1} \gamma_k\log(w-\bar{d}_k)$ is also a single-valued. This implies that $\sum_{k=1}^{n+1}\gamma_k=0$, and \begin{equation} \label{eq:2.13} \sum_{k=1}^{n+1} \gamma_k\log(w-\bar{d}_k)=\sum_{k=1}^{n} \bar{c}_k\int_{\bar{d}_k}^{\bar{d}_{k+1}}\dfrac{1}{w-s}ds \end{equation} for some constants $\{c_k\}$ (see Remark \ref{remark:1} below). Setting \begin{equation*} \varphi^*(w)=\sum_{k=1}^m\sum_{j=0}^{m_k} \dfrac{\bar{a}_{kj}}{(w-\bar{b}_k)^{j+1}}+\sum_{k=1}^{n} \bar{c}_k\int_{\bar{d}_k}^{\bar{d}_{k+1}}\dfrac{1}{w-s}ds, \end{equation*} we see from (\ref{eq:6.23}), (\ref{eq:6.24}) and (\ref{eq:2.13}) that \begin{equation*} \psi(w)=q(w)+\varphi(w), \end{equation*} where $\varphi$ is given by (\ref{eq:2.14}). It remains to prove that the degree of $q$ does not exceed two. Let $N$ be the degree of $q$, then we may write $\psi(w)=Aw^Ng(w)$, where $g$ is holomorphic in $\mathbb{H}_-$ and $g(\infty)=1$. Therefore $\Psi(\zeta):= 1/\psi(-\zeta^{-1})={(-\zeta)^N}{\left(A g(-\zeta^{-1})\right)}^{-1}$. Now, according to Sakai's regularity result \cite{Sakai_91, Sakai_93}, there are three possibilities: Either $\mathop{\partial}ial\Omega_{\rm inv}$ is a smooth analytic curve near the origin, or $\mathop{\partial}ial\Omega_{\rm inv}$ is a union of two tangential analytic arcs in a neighborhood of the origin, or $\mathop{\partial}ial\Omega_{\rm inv}$ has cusp singularity at the origin. In the first and second cases, $\Psi(t)$ is a smooth curve for $t\in (-\epsilon,\epsilon)$ and some positive $\epsilon$. Then clearly $N=1$. In the third one $\Psi$ has a Taylor expansion $\kappa\left(\zeta^2+a_3 \zeta^3+\cdots\right)$, for $\zeta\in\mathbb{H}_-\cap \{|\zeta|<\epsilon\}$ and $\kappa\in{\mathord{\mathbb C}}$. Then obviously $N=2$. So we conclude that \begin{equation} \label{eq:2.15} q(w)=A_2w^2+A_1w+A_0 \end{equation} for some complex numbers $A_2,A_1$ and $A_0$, and this completes the proof. {$\square$} The boundary of $\Omega$, $\mathop{\partial}ial\Omega$, has a parametric representation $\{\psi(t)=q(t)+\varphi(t): t\in{\mathord{\mathbb R}}\}$. Since $\lim_{t\to\pm\infty}\varphi(t)=0$, $\mathop{\partial}ial\Omega$ has the asymptotic of the curve $\{q(t): t\in {\mathord{\mathbb R}}\}$. Hence there are three possibilities of asymptotic curves: In case $A_2=0$ in (\ref{eq:2.15}), then $\mathop{\partial}ial\Omega$ has an asymptotic of a straight line. Or else, we may write \begin{equation*} q(w)=A_2\left(w^2+\frac{A_1}{A_2}w+\frac{A_0}{A_2}\right). \end{equation*} So if ${\rm Im}({A_1}/{A_2}){\mathord{|||}}eq 0$, then $\mathop{\partial}ial\Omega$ has an asymptotic of a parabola, otherwise it has an asymptotic of a ray. This phenomenon of the asymptotic behavior can be extended for unbounded QDs for any distribution with compact support. 
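The dichotomy between a parabola and a ray can be made explicit by completing the square. Assume $A_2\neq0$ and write $A_1/A_2=u+iv$ with $u,v\in{\mathord{\mathbb R}}$. For $t\in{\mathord{\mathbb R}}$, setting $s=t+u/2$, \begin{equation*} \frac{q(t)-A_0+\frac{A_1^2}{4A_2}}{A_2}=\left(s+\frac{iv}{2}\right)^2=\left(s^2-\frac{v^2}{4}\right)+ivs. \end{equation*} If $v\neq0$, the right hand side traces the parabola ${\rm Re}\,\zeta=({\rm Im}\,\zeta)^2/v^2-v^2/4$ as $s$ runs over ${\mathord{\mathbb R}}$, while if $v=0$ it traces the ray $[0,\infty)$. Since multiplication by $A_2$ and translation by $A_0-\frac{A_1^2}{4A_2}$ are affine maps of the plane, the curve $\{q(t):t\in{\mathord{\mathbb R}}\}$ is, accordingly, a parabola or an infinite ray.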
\begin{thm} Let $\Omega$ be a simply connected quadrature domain of a distribution $T$ with compact support in $\Omega$. If $\infty\in\partial\Omega$, then the boundary $\partial \Omega$ has an asymptotic curve that is either a straight line or a parabola or an infinite ray. \end{thm} \begin{proof} We may assume that the distribution $T$ can be represented by a smooth function $\rho$ with compact support in $\Omega$, that is, $T(\phi)=\int \phi(z)\rho(z)dA$ (see e.g. \cite[Lemma 4.3]{shapiro_92}). Since $\mathcal{C}_{\mathcal{K}}^\Omega(z)=\mathcal{C}_{\mathcal{K}}^{\rho}(z)$ for $z\in {\mathord{\mathbb C}}\setminus \Omega$, \begin{equation*} S(z)=\bar z+\mathcal{C}_{\mathcal{K}}^\Omega(z)-\mathcal{C}_{\mathcal{K}}^{\rho}(z) \end{equation*} is the Schwarz function of $\partial \Omega$ when $z\in\Omega$. Similarly to the above proof, we denote by $\psi$ the conformal map from the lower half-plane $\mathbb{H}_-$ onto $\Omega$ and define $F$ as in (\ref{eq:6.23}). Obviously $F(w)$ is holomorphic in the upper half-plane $\mathbb{H}_+$, while for $w\in \mathbb{H}_-$ we have that \begin{equation} \begin{split} \partial_{\bar w}F(w) & =\overline{\partial_{w}\psi(w)} +\left\{ -\chi_\Omega\left(\psi(w)\right)+\rho\left(\psi(w)\right)\right\}\overline{\partial_{w}\psi(w)} \\ & =\rho\left(\psi(w)\right)\overline{\partial_{w}\psi(w)}. \end{split} \end{equation} Hence $F$ is holomorphic in $\mathbb{H}_-$ apart from the preimage of ${\rm supp}(\rho)$. Since $S(\psi(t))=\overline{\psi(t)}=\psi^\ast(t)$ for $t\in {\mathord{\mathbb R}}$, $F$ has a holomorphic continuation through the real line. We now set $\mu=\left(\rho\circ \psi\right)\overline{{\partial_{w}\psi}}$. Then $\left\{\mathcal{C}^\mu(w)- \left(\mathcal{C}^{\rho} \left(\psi(w)\right)\right)\right\}$ is a holomorphic function in the lower half--plane. This means that the Cauchy transform $\mathcal{C}^\mu$ captures the singularities of $F$. Using inversion as in the proof of Theorem \ref{thm:2}, we conclude that $F$ is holomorphic near the infinity point, and hence in the Riemann sphere excluding the support of $\mu$. Therefore \begin{equation*} F(w)=q^\ast(w) + \mathcal{C}^\mu(w), \end{equation*} where $q$ is a polynomial. So by (\ref{eq:6.23}), \begin{equation*} \psi(w)=q(w)+\left(\mathcal{C}^\mu(w)\right)^\ast. \end{equation*} Applying again Sakai's regularity results \cite{Sakai_91, Sakai_93} in a similar manner as we did in the previous proof, we obtain that the degree of $q$ is at most two. Let $K\subset\mathbb{H}_-$ denote the support of $\rho\circ\psi$, which is the preimage of $\mathop{\mathrm{supp}}(\rho)$. Then obviously ${\rm supp}(\mu)=K$, \begin{equation*} \left(\mathcal{C}^\mu(w)\right)^\ast=\frac{1}{\pi}\overline{\int \frac{\mu(\zeta)}{\bar w-\zeta}dA(\zeta) }=\frac{1}{\pi}\int \frac{\overline{\mu}(\bar\zeta)}{ w-\zeta}dA(\zeta)=\mathcal{C}^{\mu^\ast}(w), \end{equation*} and ${\mathop{\mathrm{supp}}}(\mu^\ast)=K^\ast\subset\mathbb{H}_+$, where $K^\ast$ is the mirror image of $K$. Thus the conformal map $\psi$ is holomorphic in $\mathbb{H}_-$ and satisfies the equation \begin{equation} \label{eq:2.21} \psi(w)=q(w)+\frac{1}{\pi}\int\frac{\mu^\ast \left(\zeta\right)}{w-\zeta}dA(\zeta). \end{equation} The boundary $\partial\Omega$ has a parametric representation $\{\psi(t): t\in{\mathord{\mathbb R}}\}$.
So since $\mathop{\mathrm{supp}}(\mu^\ast)=K^\ast$ is compact, we obtain from (\ref{eq:2.21}) that \begin{equation*} \lim_{t\to\pm\infty}\left\{\psi(t)-q(t)\right\}=0. \end{equation*} Thus the asymptotic curve is determined by the coefficients of $q$ in (\ref{eq:2.15}): A straight line if $A_2=0$; otherwise, a parabola when ${\rm Im}({A_1}/{A_2})\neq 0$ and an infinite ray when ${\rm Im}({A_1}/{A_2})= 0$. \end{proof} \begin{rem} \label{remark:1} Here is a simple argument showing that $\sum_{k=1}^{n+1}\gamma_k\log(w-d_k)$ is single-valued outside the piecewise straight line connecting the points $\{d_1,\ldots,d_{n+1}\}$ if and only if $\sum \gamma_k =0$. Let $\sigma=\sum_{k=1}^{n+1}\gamma_k$. Then \begin{equation*} \begin{split} &\sum_{k=1}^{n+1}\gamma_k\log(w-d_k)\\ = & \sum_{k=1}^{n} \gamma_k\log(w-d_k)+\left\{-(\gamma_1+\cdots+\gamma_n)+\sigma\right\}\log(w-d_{n+1})\\ =&\gamma_1\int_{d_1}^{d_2}\dfrac{1}{w-s} ds+(\gamma_1+\gamma_2) \int_{d_2}^{d_3}\dfrac{1}{w-s}ds \\& +\cdots+(\gamma_1+\cdots+\gamma_n)\int_{d_n}^{ d_{n+1}}\dfrac{1}{w-s}ds + \sigma \log(w-d_{n+1}). \end{split} \end{equation*} Since the integrals are holomorphic off the curve, while $ \log(w-d_{n+1})$ is not single-valued in any neighborhood of $d_{n+1}$, we see that $\sum_{k=1}^{n+1}\gamma_k\log(w-d_k)$ is single-valued if and only if $\sigma=0$. \end{rem} \section{Contact surfaces and quadrature domains} \label{sec:contact} We present here Strakhov's model of two dimensional contact surfaces \cite{Strakhov_74} and show its relation to unbounded QDs. Let $\Gamma$ be a Jordan curve (the contact curve) in the strip $\{h_1<{\rm Im}(\zeta)<h_2\}$ and let $\{\zeta(t): t\in{\mathord{\mathbb R}}\}$ be its parametric representation. We assume that $\Gamma$ has an asymptotic line, that is, $\lim_{t\to\pm\infty}{\rm Im}(\zeta(t))=h$. The curve $\Gamma$ separates the strip into two layers: $D_1$ below $\Gamma$ and with a constant density $\sigma_1$, and $D_2$ above the curve and with a density $\sigma_2$. Let \begin{equation} U^\mu(z)=-\frac{1}{2\pi}\int \log|\zeta-z|d\mu(\zeta), \end{equation} be the logarithmic potential of a measure $\mu$, and whenever $\mu=\sigma\chi_D dA$ for some set $D$ we denote it by $U^{\sigma D}$. Note that if $D\subset \{h_1\leq{\rm Im}(\zeta)\leq h_2\}$, then the potential $U^{\sigma D}$ may not converge. However, the gravitational field in the direction perpendicular to the strip \begin{equation*} \partial_y U^{\sigma D}(x,y)=\frac{1}{2\pi}\int_D \sigma\frac{y-y'}{(x-x')^2+(y-y')^2}dx'dy' \end{equation*} always exists. Let $D_+$, $D_-$ be the sets in between $\Gamma$ and its asymptotic line $\{{\rm Im}(\zeta)=h\}$, that is, $D_+=D_2\cap\{ h<{\rm Im}(\zeta)\}$ and $D_-=D_1\cap\{ h>{\rm Im}(\zeta)\}$. Note that whenever $D$ is a strip $\{a\leq{\rm Im}(\zeta)\leq b\}$, then $\partial_y U^{\sigma D}$ is a constant for ${\rm Im}(z)>b$ (see e.g. \cite{karp_margulis_96}). Using this property we find that \begin{equation} \partial_y U^{\sigma_2D_2}(z)+\partial_y U^{\sigma_1D_1}(z)=\text{const.}-\partial_y U^{\sigma D_+}(z) - \partial_y U^{-\sigma D_-}(z), \end{equation} when ${\rm Im}(z)>h_2$ and where $\sigma=\sigma_2-\sigma_1$.
If in addition, $|{\rm Im}(\zeta(t))-h|=O(|t|^{-\alpha})$ for large $|t|$ and $\alpha>0$, then the gravitational fields $\partial_xU^{\sigma D_\pm}$ converge, and hence \begin{equation} \begin{split} 2\left(\partial_xU^{\sigma D_+}(z)-i\partial_yU^{\sigma D_+}(z) +\partial_xU^{-\sigma D_-}(z)-i\partial_yU^{-\sigma D_-}(z)\right)\\ =\frac{1}{\pi}\int_{D_+}\frac{\sigma}{\zeta-z}dA+\frac{1}{\pi}\int_{D_-}\frac{-\sigma}{\zeta-z}dA= {\mathord{\mathcal C}}^{\sigma D_+}(z)+{\mathord{\mathcal C}}^{-\sigma D_-}(z). \end{split} \end{equation} Applying Green's theorem to the function $\sigma\frac{\bar{\zeta}-\zeta+2ih}{\zeta-z}$ in each component of $D_+$ and $D_-$, and noting that this function vanishes on the asymptotic line $\{{\rm Im}(\zeta)=h\}$, we get that \begin{equation} \label{eq:cauchy:2} F(z):=\frac{1}{2\pi i}\int_\Gamma \sigma\frac{\bar{\zeta}-\zeta+2ih}{\zeta-z}d\zeta ={\mathord{\mathcal C}}^{\sigma D_+}(z)+{\mathord{\mathcal C}}^{-\sigma D_-}(z), \qquad {\rm Im}(z)>h_2. \end{equation} Thus up to a constant term, the Cauchy integral determines the gravitational fields. We shall see that the Schwarz function of $\Gamma$ governs the complex gravitational field $F(z)$ in (\ref{eq:cauchy:2}). Let $\Omega$ be the domain below $\Gamma$ and let $S(\zeta)$ be the Schwarz function of $\Gamma$ and assume that the singularities of the Schwarz function in $\Omega$ have compact support. Then \begin{equation} \label{eq:3.11} \int_{\Omega\cap \{|\zeta|=R\}}\sigma\frac{S(\zeta)-\zeta+2ih}{\zeta-z}d\zeta+ \int_{\Gamma\cap B_R}\sigma\frac{S(\zeta)-\zeta+2ih}{\zeta-z}d\zeta =\sigma\int_\gamma\frac{S(\zeta)}{\zeta-z}d\zeta, \end{equation} where $\gamma$ is a closed curve around the singularities of the Schwarz function in $\Omega$ and $R$ is a sufficiently large positive number. Assume further that $\Omega$ is the image of a conformal mapping $\psi(w)=w +ih +\varphi(w)$ in the lower half-plane, then \begin{equation*} S(\zeta)=\psi^{-1}(\zeta)-ih+\varphi^\ast\left(\psi^{-1}(\zeta)\right) \end{equation*} is the Schwarz function of $\Gamma$. Therefore, if $\varphi$ is given by (\ref{eq:2.14}), then $S(\zeta)-\zeta+2ih=o(|\zeta|)$, as $|\zeta|\to \infty$, and this implies that the first integral of the left hand side of (\ref{eq:3.11}) tends to zero as $R\to \infty$. So we conclude that \begin{equation*} F(z)=\frac{1}{2\pi i}\int_\Gamma \sigma\frac{{\bar\zeta}-\zeta+2ih}{\zeta-z}d\zeta=\frac{1}{2\pi i}\int_\Gamma \sigma\frac{S({\zeta})-\zeta+2ih}{\zeta-z}d\zeta =\frac{\sigma}{2\pi i}\int_\gamma\frac{S(\zeta)}{\zeta-z}d\zeta. \end{equation*} On the other hand, by (\ref{eq:2.7}), \begin{equation*} \int_\Omega f dA=\frac{1}{2i}\int_\Gamma S(z)f(z)dz=\frac{1}{2i}\int_\gamma S(\zeta)f(\zeta)d\zeta. \end{equation*} Thus the singularities of the Schwarz function control the complex potential $F(z)$ as well as the distribution $T$ of the quadrature identity (\ref{eq:q-i}). The Cauchy integral (\ref{eq:cauchy:2}) of the curve $\Gamma$ is a rational function if and only if the domain below it is a QD of a combination of Dirac measures and their derivatives. \section{Examples of non-uniqueness} \label{sec:example} In this section we use the specific form of the conformal map (\ref{eq:6.12}) and present examples of a continuous family of domains such that each one of them is a QD of the same measure. In Examples \ref{ex:1} and \ref{ex2} the families will converge to a union of a disk and a null QD.
The idea of these constructions is due to Strakhov \cite{Strakhov_74_2}, where he used the conformal mapping (\ref{eq:6.12}) with $q$ a linear polynomial (see also \cite{fedorova_tsirulskiy}). In the particular case where the perturbation $\varphi$ in (\ref{eq:6.12}) has a single simple pole, Strakhov computed the boundary explicitly and showed that it is a third order algebraic curve. For the convenience of the reader we present his example here. \begin{exmp}[Strakhov] \label{ex:1} We consider a conformal map of the form (\ref{eq:6.12}), \begin{equation*} \label{eq:3.1} z=\psi(w) = w +ih +\frac{a}{w-ib}, \quad a, b>0,\ h\in{\mathord{\mathbb R}}. \end{equation*} The parameters $a$, $b$ and $h$ are chosen so that $\psi(-ib)=0$ and so that the residue of the Schwarz function at zero is one. From equations (\ref{eq:2.7}) and (\ref{eq:2.9}), and the above expression for $\psi$, we have that \begin{equation} \label{eq:3.2b} \left\{\begin{array}{l} h=b-\frac{a}{2b}\\ a+\frac{a^2}{4b^2}=1 \end{array}\right.. \end{equation} Since there are three parameters, one of them is free. Thus for an appropriate choice of the parameters $a, b$ and $h$, there is a one-parameter family of conformal mappings, which we will denote by $\psi_b$, such that $\Omega_{b}:=\psi_b(\mathbb{H}_-)$ becomes a QD for $\pi\delta_0$, where $\delta_0$ is the Dirac measure at $0$. The boundary of $\Omega_{b}$ is given by \begin{equation*} \Gamma_{b}=\left\{z(t)=t+ \frac{at}{t^2+b^2} + i\left(h+ \frac{ab}{t^2+b^2}\right): t\in{\mathord{\mathbb R}}\right\}. \end{equation*} Using polar coordinates one may show that $\Gamma_{b}$ has the implicit representation \begin{equation} \label{eq:3.2} \{(y+r-\alpha)\left(x^2+(y+r)^2\right)=2r(y+r)^2,\quad \alpha,r>0\}, \end{equation} where $\alpha=b$, $r=\frac{a}{2b}$ and $h=\alpha-r$. This is part of a larger family of third order curves called {\it Conchoids of de Sluze}. From (\ref{eq:3.2b}) we see that $\alpha=\frac{1}{2}(\frac{1}{r}-r)$. Thus $\alpha\to 0$ when $r\to 1$, and from the implicit representation (\ref{eq:3.2}) it is clear that $\Gamma_{b}$ converges to the union of the circle $\{x^2+y^2=1\}$ and the line $\{y=-1\}$. Therefore, as $(a,b)\to(0,0)$ and (\ref{eq:3.2b}) is fulfilled, $\Omega_{b}$ converges to the union of the unit disk and the null QD $\{y<-1\}$. \begin{figure} \caption{A family of Conchoids of de Sluze} \label{fig:conchoid6} \end{figure} \end{exmp} \begin{rem} In a similar manner we can fix the nodes and the coefficients of the quadrature identity and construct a rational conformal mapping of the form (\ref{eq:6.12})--(\ref{eq:2.14}). This will result in a system of algebraic equations that has one free parameter, and hence provides examples of families of domains such that each one of them is a QD of the same distribution. \end{rem} \begin{exmp} \label{ex2} Following Strakhov's example we construct a family of QDs where a parabola is the asymptote of the boundary. We consider a map \begin{equation} z=\psi(w)= 2w+iw^2+ih+\frac{a}{w-ib}, \quad w\in \mathbb{H}_-,\ a,b>0,\ h\in{\mathord{\mathbb R}}. \end{equation} Assuming for a moment that the map $\psi$ is univalent, and requiring that $\psi(-ib)=0$ and that the Schwarz function has residue one at the origin, we get the following two equations \begin{equation} \label{eq:3.6b} \left\{\begin{array}{l} h=2b+b^2-\frac{a}{2b}\\ 2a+2ab +\frac{a^2}{4b^2}=1 \end{array}\right..
\end{equation} Since the algebraic equations (\ref{eq:3.6b}) has one parameter free, there is a family of conformal mappings $\psi_a$ such that $\Omega_{a}=\psi_a(\mathbb{H}_-)$ is a QD of the unit Dirac measure. Our aim is to let $a,b$ tend to zero in a such manner that $\Omega_{a}$ will converge. From the second equation of (\ref{eq:3.6b}) we see that the condition \begin{equation} \label{eq:3.6} \lim_{a,b\to0}\frac{a}{b}=2 \end{equation} is demanded. In order to assure that the map $\psi_a$ is univalent, we will show that for an appropriate choice of small parameters $a$ and $b$ it maps the real line onto a curve without closed loops. To see this we let $z(t)=X(t)+iY(t)$, where \begin{equation} \label{eq:3.4} X(t)= 2t + \frac{at}{t^2+b^2}, \qquad Y(t)=t^2+h+\frac{ab}{t^2+b^2}, \end{equation} be the parametric presentation of the boundary of $\Omega_a$, and we shall and analyze the critical points of these functions. Furthermore, since $X$ is an odd function and $Y$ is an even, it suffices to examine the critical points only for positive $t$. Computing the derivatives \begin{equation} \label{eq:3.5} \frac{dX}{dt}(t)=\frac{2(t^2+b^2)^2+a(b^2-t^2)}{(t^2+b^2)^2}, \qquad \frac{dY}{dt}(t)=2t-\frac{2abt}{(t^2+b^2)^2}, \end{equation} we find that the function $Y$ has critical points when $t^2=b\sqrt{\frac{a}{b}}-b^2$ and $b\sqrt{\frac{a}{b}}-b^2\geq0$. The critical points of $X$ satisfy the equation $t^2=\frac{1}{4}\left(\pm\sqrt{a^2-16ab^2}+a-4b^2\right)$. This equation has two positive roots when $a$ and $b$ are small. Now, the contour $z(t)=X(t)+iY(t)$ will have a closed loop for positive $t$ if and only if the critical point of $Y$ is in between the two critical points of $X$. The largest root is approximately $\frac{1}{4}\left(2a-12b^2\right)$ , and since by (\ref{eq:3.5}) $a\simeq 2b$, it is less than $b\sqrt{\frac{a}{b}}-b^2 $ for small $b$. So we conclude that the curve $z(t)$ has no closed loops when $a$ and $b$ are sufficiently small. Having showed that $\psi_a$ is univalent, we turn now to compute the limit of the boundary of $\Omega_{a}$, $\Gamma_{a}=\psi_a({\mathord{\mathbb R}})$, as $a$ and $b$ tend to zero. However, unlike Example \ref{ex:1}, we do not know the explicit representation of the boundary. Therefore we make the variable change \begin{equation*} \cos\theta=\frac{1}{\sqrt{(t/b)^2+1}}, \qquad \sin\theta=\frac{t/b}{\sqrt{(t/b)^2+1}}, \quad -\frac{\pi}{2}<\theta<\frac{\pi}{2}. \end{equation*} Then (\ref{eq:3.4}) becomes \begin{equation} \label{eq:3.7} X(\theta)= 2b\tan\theta + \frac{a}{b}\sin\theta\cos\theta, \qquad Y(\theta)=b^2\tan^2\theta+h + \frac{a}{b}\cos^2\theta. \end{equation} By (\ref{eq:3.6b}) and (\ref{eq:3.6}), $\lim_{a,b\to0} h=-1$. Therefore, for any $\epsilon>0$ and $\theta\in [-\frac{\pi}{2}+\epsilon,\frac{\pi}{2}-\epsilon]$, the curve (\ref{eq:3.7}) tends to \begin{equation*} X(\theta)= 2\sin\theta\cos\theta = \sin 2\theta, \qquad Y(\theta)=2\cos^2\theta-1=\cos 2\theta. \end{equation*} On the other hand, we see that for $t^2\geq b$ \begin{equation*} |X(t)-2t|=\frac{a|t|}{t^2+b^2}=\frac{a}{|t|}\frac{1}{1+\left(\frac{b}{t} \right)^2}\leq \left(\frac{a}{b}\right)\sqrt{b}\to 0 \end{equation*} and \begin{equation*} |Y(t)-(t^2-1)|\leq |h+1|+\frac{a}{1+b}\to 0, \end{equation*} as $(a,b)\to (0,0)$ and equations (\ref{eq:3.6b}) hold. Thus the family of curves $\Gamma_{a}$ tends to a union of the unit circle and the parabola $y+1=(x/2)^2$. 
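For the record, the two limits used above are elementary: by (\ref{eq:3.6b}) and (\ref{eq:3.6}),
\begin{equation*}
\lim_{a,b\to0}h=\lim_{a,b\to0}\Big(2b+b^2-\frac{1}{2}\,\frac{a}{b}\Big)=-1,
\end{equation*}
and eliminating the parameter from the limiting parametrization $X(t)=2t$, $Y(t)=t^2-1$ gives $Y+1=\left(X/2\right)^2$, which is the parabola stated above.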
The family of simply connected QDs converges to \begin{math} B_1(0)\cup\left\{y<\left({x}/{2}\right)^2-1\right\}, \end{math} that is, a union of a null QD and a disk. \begin{figure} \caption{A family converging to a parabola and a circle} \label{fig:parabola4} \end{figure} \end{exmp} The existence of a QD for a positive measure and with an infinite ray as an asymptote of the boundary is not evident. It cannot be constructed by Sakai's variational method \cite[Ch. 11]{Sakai_82}, since this method requires that the complement of the attached null QD has non--empty interior. Nevertheless, by using ideas similar to those in Examples \ref{ex:1} and \ref{ex2} we are able to construct a family of QDs for a positive Dirac measure at zero and with the positive $x$--axis as the asymptote of the boundary. In contrast with the previous two examples, the boundary of the family cannot converge to a union of a circle and a ray, since this contradicts the regularity of the Schwarz function \cite{Sakai_91}. \begin{exmp} We consider a conformal map \begin{equation} \label{eq:3.8} z=\psi(w)=w^2+h-\frac{ia}{w-ib}, \quad b>0, \ a,h\in{\mathord{\mathbb R}} \end{equation} from $\mathbb{H}_-$ to the domain $\Omega$. Then the requirements that $\psi(-ib)=0$, that the Schwarz function has residue one at $z=0$, and that the origin belongs to the image of $\psi$, lead to the following relations: \begin{subequations} \label{eq:3.9} \begin{align} \label{eq:3.9a} & h =b^2-\frac{a}{2b}& \\ \label{eq:3.9b} &8b^3a-4b^2+a^2=0&\\ &h+\frac{a}{b}=b^2+\frac{a}{2b}>0.& \end{align} \end{subequations} For $a<0$ relations (\ref{eq:3.9}) cannot hold. When $a>0$, we find by computing the minimum of the third degree polynomial in (\ref{eq:3.9b}) that it has positive roots only when $a\leq\sqrt[4]{4/27}$. For that range of $a$ there are two types of QDs. For both types we need to check that the map (\ref{eq:3.8}) is conformal. We do this by checking the conditions which guarantee that the real line is mapped in a one to one manner onto the curve \begin{equation} \label{eq:3.10} X(t)= t^2+h + \frac{ab}{t^2+b^2}, \qquad Y(t)=\frac{-at}{t^2+b^2}, \ t\in{\mathord{\mathbb R}}. \end{equation} The first type is when the function $X$ has critical points for $t\neq0$; this occurs when $t^2=\sqrt{ab}-b^2$, which implies that $b^3<a$. Then the curve (\ref{eq:3.10}) will not have a closed loop if the critical points of $Y$ appear ``after'' the critical points of $X$, which means that $b^2\geq \sqrt{ab}-b^2$. Thus in that case we have to require that the largest root of (\ref{eq:3.9b}) satisfies the condition $b^3<a<4b^3$. The second type is where the function $X$ is monotone for $t\gtrless0$. In that case the largest root of (\ref{eq:3.9b}) needs to satisfy the condition $b^3>a$. Note that if $a<1/3$, then the largest root is greater than one and hence this condition is satisfied. \begin{figure} \caption{Two types of QDs for the Dirac measure and a ray as an asymptote} \label{fig:ray5} \end{figure} \end{exmp} \section{Summary} By means of conformal mappings we established the asymptotic behavior of the boundary of unbounded quadrature domains in the plane when the point at infinity belongs to the boundary. Although this tool is not available in higher dimensions, we hope the present paper will stimulate further investigations of unbounded quadrature domains in space.
The specific form of the conformal mapping from the lower half-plane enables the construction of families of quadrature domains of the Dirac measure at a given point that possess a given type of asymptote. \vskip 10mm \noindent \textbf{Acknowledgement.} I would like to thank Avmir Margulis for many valuable talks and enlightening comments. I am also grateful to the anonymous referee for his/her constructive comments, which definitely contributed to the improvement of the manuscript. \end{document}
\begin{document} \partial_{x_j} ef \partial_{x_j} { \partial_{x_j} } \partial_{x_j} ef{\mathbb{N}}{{\mathbb{N}}} \partial_{x_j} ef{\mathbb{Z}}{{\mathbb{Z}}} \partial_{x_j} ef{\mathbb{R}}{{\mathbb{R}}} \newcommand{\E}[0]{ \varepsilon} \newcommand{\la}[0]{ \lambda} \newcommand{\s}[0]{ \mathcal{S}} \newcommand{\AO}[1]{\| #1 \| } \newcommand{\BO}[2]{ \left( #1 , #2 \right) } \newcommand{\CO}[2]{ \left\langle #1 , #2 \right\rangle} \newcommand{\R}[0]{ {\mathbb{R}}\cup \{\infty \} } \newcommand{\co}[1]{ #1^{\prime}} \newcommand{\p}[0]{ p^{\prime}} \newcommand{\m}[1]{ \mathcal{ #1 }} \newcommand{ \W}[0]{ \mathcal{W}} \newcommand{ \A}[1]{ \left\| #1 \right\|_H } \newcommand{\B}[2]{ \left( #1 , #2 \right)_H } \newcommand{\C}[2]{ \left\langle #1 , #2 \right\rangle_{ H^* , H } } \newcommand{ H^1 \left( \Omega \right)N}[1]{ \| #1 \|_{ H^1} } \newcommand{ \Om }{ \Omega} \newcommand{ \pOm}{\partial \Omega} \newcommand{ \mathcal{D} \left( \Omega \right)}{ \mathcal{D} \left( \Omega \right)} \newcommand{ \mathcal{D} \left( \Omega \right)P}{ \mathcal{D}^{\prime} \left( \Omega \right) } \newcommand{ \mathcal{D} \left( \Omega \right)PP}[2]{ \left\langle #1 , #2 \right\rangle_{ \mathcal{D}^{\prime}, \mathcal{D} }} \newcommand{\PHH}[2]{ \left\langle #1 , #2 \right\rangle_{ \left(H^1 \right)^* , H^1 } } \newcommand{\PHO}[2]{ \left\langle #1 , #2 \right\rangle_{ H^{-1} , H_0^1 }} \newcommand{ H^1 \left( \Omega \right)}{ H^1 \left( \Omega \right)} \newcommand{ H^1 \left( \Omega \right)O}{ H_0^1 \left( \Omega \right) } \newcommand{C_c^\infty\left(\Omega \right) }{C_c^\infty\left(\Omega \right) } \newcommand{\N}[1]{ \left\| #1\right\|_{ H_0^1 } } \newcommand{\IN}[2]{ \left(#1,#2\right)_{ H_0^1} } \newcommand{\INI}[2]{ \left( #1 ,#2 \right)_ { H^1}} \newcommand{ H^1 \left( \Omega \right)^* }{ H^1 \left( \Omega \right)^* } \newcommand{ H^{-1} \left( \Omega \right) }{ H^{-1} \left( \Omega \right) } \newcommand{\HS}[1]{ \| #1 \|_{H^*}} \newcommand{\HSI}[2]{ \left( #1 , #2 \right)_{ H^*}} \newcommand{ W_0^{1,p}}{ W_0^{1,p}} \newcommand{\w}[1]{ \| #1 \|_{W_0^{1,p}}} \newcommand{(W_0^{1,p})^*}{(W_0^{1,p})^*} \newcommand{ \overline{\Omega}}{ \overline{\Omega}} \title{Stability of entire solutions to supercritical elliptic problems involving advection} \author{Craig Cowan\\ {\it\small Department of Mathematical Sciences}\\ {\it\small University of Alabama in Huntsville}\\ {\it\small 258A Shelby Center}\\ \it\small Huntsville, AL 35899 \\ {\it\small [email protected]} } \maketitle \begin{abstract} We examine the equation given by \begin{equation} \label{eq_abstract} - \mathcal{D} \left( \Omega \right)elta u + a(x) \cdot \nabla u = u^p \qquad \mbox{in $ {\mathbb{R}}^N$,} \end{equation} where $p>1$ and $ a(x)$ is a smooth vector field satisfying some decay conditions. We show that for $ p < p_c$, the Joseph-Lundgren exponent, that there is no positive stable solution of (\ref{eq_abstract}) provided one imposes a smallness condition on $a$ along with a divergence free condition. In the other direction we show that for $ N \ge 4$ and $ p > \frac{N-1}{N-3}$ there exists a positive solution of (\ref{eq_abstract}) provided $a$ satisfies a smallness condition. For $ p>p_c$ we show the existence of a positive stable solution of (\ref{eq_abstract}) provided $a$ is divergence free and satisfies a smallness condition. \end{abstract} \noindent {\it \footnotesize 2010 Mathematics Subject Classification}. {\scriptsize }\\ {\it \footnotesize Key words: Entire solutions, Liouville theorems, Stability, Advection}. 
\section{Introduction and results} In this article we are interested in the existence versus nonexistence of positive stable solutions of \begin{equation} \label{eq} - \Delta u + a(x) \cdot \nabla u = u^p \qquad \mbox{in $ {\mathbb{R}}^N$,} \end{equation} where $p>1$ and $ a(x)$ is a smooth vector field satisfying some decay conditions. We now define the notion of stability, and for this we prefer to work on a general domain. \begin{dfn} Let $ u $ denote a nonnegative smooth solution of (\ref{eq}) in an open set $\Omega \subset {\mathbb{R}}^N$. We say $ u$ is a stable solution of (\ref{eq}) in $ \Omega$ provided there is some smooth positive function $ E$ such that \begin{equation} \label{linearized} - \Delta E + a(x) \cdot \nabla E \ge p u^{p-1} E \qquad \mbox{in $ \Omega$.} \end{equation} \end{dfn} We begin by recalling some facts in the case where $a(x)=0$. There has been much work done on the existence and nonexistence of positive classical solutions of \begin{equation} \label{lane_class} - \Delta u = u^p, \qquad \mbox{in $ {\mathbb{R}}^N$.} \end{equation} For $ N \ge 3$ there exists a critical value of $p$, given by $ p_S= \frac{N+2}{N-2}$, such that for $ 1 <p < p_S$ there is no positive classical solution of (\ref{lane_class}) and for $ p > p_S$ there exist positive classical solutions; see \cite{Caf,chen,gidas,Gidas}. By definition we call a nonnegative solution $u$ of (\ref{lane_class}) stable if \begin{equation} \label{stable} \int p u^{p-1} \phi^2 \le \int |\nabla \phi|^2 \qquad \forall \phi \in C_c^\infty({\mathbb{R}}^N), \end{equation} which is nothing more than the stability of $u$ in the sense of (\ref{linearized}), after using a variational principle. The additional requirement that the solution be stable drastically alters the existence versus nonexistence results. It is known that there is a new critical exponent, the so-called Joseph-Lundgren exponent $p_{c}$, such that for all $ 1 <p < p_{c}$ there is no positive stable solution of (\ref{lane_class}) and for $ p>p_{c} $ there exist positive stable solutions of (\ref{lane_class}). The value of $p_c$ is given by \begin{equation*} p_c= \left\{ \begin{array}{lr} \frac{ (N-2)^2-4N+8\sqrt{N-1}}{(N-2)(N-10)} & \qquad N \ge 11 \\ \infty & \qquad 3 \le N \le 10. \end{array} \right. \end{equation*} The first implicit appearance of $p_c$ was in the work \cite{Joseph_lundgren}, where the equation $ - \Delta u = \lambda (u+1)^p$ was examined on the unit ball in $ {\mathbb{R}}^N$ with zero Dirichlet boundary conditions. The exponent $p_c$ first explicitly appeared in the works \cite{Wang_solo, Gui_Ni_Wang}, where the stability of radial solutions to a parabolic version of (\ref{lane_class}) was examined. Their results easily imply the existence of a positive radial stable solution of (\ref{lane_class}) when $ p >p_c$ and the nonexistence of positive radial stable solutions in the case of $ p < p_c$. More recently there has been interest in finite Morse index solutions of either (\ref{lane_class}) or the generalized version given by \begin{equation} \label{far} - \Delta u = |u|^{p-1} u, \qquad \mbox{in $ {\mathbb{R}}^N$.} \end{equation} In \cite{farina} the finite Morse index solutions of (\ref{far}) were completely classified, and again the critical exponent $p_c$ was involved.
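To fix ideas we include a sample value (our own numerical illustration, not taken from the references): for $N=11$ the formula above gives
\begin{equation*}
p_c=\frac{(11-2)^2-4\cdot 11+8\sqrt{10}}{(11-2)(11-10)}=\frac{37+8\sqrt{10}}{9}\approx 6.92,
\end{equation*}
which is well above the Sobolev exponent $p_S=\frac{13}{9}\approx 1.44$ in this dimension.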
For results regarding singular nonlinearities, general nonlinearities, or quasilinear equation see \cite{ces,e1,e2,egg,zz,cabreent}. In the work \cite{Cowan_fazly} the nonexistence of nontrivial solutions of \[ -div( \omega_1 \nabla u) = \omega_2 u^p \qquad \mbox{ in $ {\mathbb{R}}^N$},\] was examined where $ \omega_i$ are some nonnegative functions. In the special case where $ \omega_1=\omega_2$ this equation reduces to \begin{equation} \label{var} - \mathcal{D} \left( \Omega \right)elta u + \nabla \gamma(x) \cdot \nabla u = u^p \qquad \mbox{in $ {\mathbb{R}}^N$}, \end{equation} where $ \gamma$ is a scalar function. Even though (\ref{var}) and (\ref{eq}) are similar a major difference is that (\ref{var}) is variational in nature; critical points of \[ E(u)= \frac{1}{2} \int e^{-\gamma} |\nabla u|^2 - \frac{1}{p+1} \int e^{-\gamma} |u|^{p+1},\] are solutions of (\ref{var}). This variational structure of (\ref{var}) allows one to prove various nonexistence results for (\ref{var}) by slightly modifying the nonexistence proofs used in proving similar results for $- \mathcal{D} \left( \Omega \right)elta u = u^p $ in $ {\mathbb{R}}^N$. This approach will generally not work for (\ref{eq}) since in general there will not be a variational structure. In \cite{advection} the regularity of the extremal solution, $u^*$, associated with problems of the form \begin{eqnarray*} \left\{ \begin{array}{lcl} - \mathcal{D} \left( \Omega \right)elta u +a(x) \cdot \nabla u &=& \lambda f(u) \qquad \mbox{ in } \Omega \\ u &=& 0 \qquad \qquad \quad \mbox{ on } \pOm, \end{array}\right. \end{eqnarray*} was examined for various nonlinearities $f$. Here $a(x)$ was an arbitrary smooth advection and the main difficulty was to to utilize the stability of $u^*$ in a meaningful way. As mentioned earlier, this is not a problem when $a(x)$ is the gradient of a scalar function. The main tool used was the generalized Hardy inequality from \cite{craig}. This same approach was extended to more general nonlinearities in \cite{advect_2}. We now list our results. \begin{thm} \label{smallness} Suppose $ 3 \le N \le 10$ or $ N \ge 11$ and $ 1 < p < p_c$. Suppose $a(x)$ is a smooth divergence free vector field satisfying $ | a(x) | \le \frac{C}{|x|+1}$ with $0<C$ sufficiently small. Then there is no positive stable solution of (\ref{eq}). \end{thm} The next result gives a decay estimate in the case of $ p < p_c$. We are including this result since it may allow one to use a Lane-Emden type of change of variables to obtain a nonexistence result without a smallness condition on the advection. \begin{thm} \label{decay} Suppose $ \frac{N+2}{N-2} <p <p_c$, $a(x)$ is a smooth divergence free vector field with $ |a(x)| \le \frac{C}{|x|+1}$ and $ |a| \in L^N({\mathbb{R}}^N)$. Then any positive stable solution $u$ of (\ref{eq}) satisfies \begin{equation} \label{atinf} \lim_{|x| \rightarrow \infty} |x|^\frac{2}{p-1} u(x)=0. \end{equation} \end{thm} The approach to solve Theorem \ref{smallness} will be to combine the methods used in \cite{farina} with the techniques from \cite{advection} which relied on generalized Hardy inequalities from \cite{craig}. The same approach will be used in the proof of Theorem \ref{decay} with an added scaling argument. Our final result gives an existence result. \begin{thm} \label{existence} \begin{enumerate} \item Suppose $ N \ge 4$, $ p > \frac{N+1}{N-3}$ and $ a(x)$ is some smooth vector field with $ |a(x)| \le \frac{C}{|x|+1}$. If $ 0<C$ is sufficiently small there exists a positive solution of (\ref{eq}). 
\item Suppose $ N \ge 11$, $ p > p_c$ and let $ a(x)$ denote some smooth divergence free vector field with $ |a(x)| \le \frac{C}{|x|+1}$. For $ 0 <C$ sufficiently small (\ref{eq}) has a positive stable solution. \end{enumerate} \end{thm} The idea of the proof will be to look for a solution $u$ as a perturbation of the positive radial solution $w$ of $- \mathcal{D} \left( \Omega \right)elta w=w^p$ in $ {\mathbb{R}}^N$ with $ w(0)=1$. See the beginning of Section \ref{existsec} for details on $w$. The framework we will use to prove the existence of a positive solution will be the approach developed in \cite{davila}. Their interest was in the existence of positive solutions of $ - \mathcal{D} \left( \Omega \right)elta u = u^p$ in $ \Omega \subset {\mathbb{R}}^N $ an exterior domain with zero Dirichlet boundary conditions. \\ \noindent \textbf{Open Problem.} It would be interesting to see if these smallness conditions on $a(x)$ can be removed, possibly at the expense of adding some additional decay requirements. \section{Nonexistence proofs} \begin{remark} A computation shows that $ p <p_c$ is equivalent to the condition \begin{equation} \label{cond} \frac{N}{2} < 1 + \frac{2p}{p-1} + \frac{2}{p-1} \sqrt{p^2-p}. \end{equation} For our nonexistence results it will be easier to deal with (\ref{cond}). \end{remark} Theorem \ref{smallness} and Theorem \ref{decay} will depend on the following energy estimate, which we state for a general domain. \begin{prop} \label{prop_1} Suppose $u$ is a smooth positive stable solution of (\ref{eq}) and $a(x)$ is smooth divergence free vector field. Then for all $ 1 \le T$, $0 < \beta <1$, $0<\E$, $0< \partial_{x_j} elta$, $ \frac{1}{2}<t$ and $ 0 \le \psi \in C_c^\infty(\Omega)$ we have \begin{eqnarray} \label{first} \left( \beta p - \frac{Tt^2}{2t-1} \right) \int u^{2t+p-1} \psi^2 &+& \beta (1-\beta-\E) \int \frac{ | \nabla E|^2}{E^2} u^{2t} \psi^2 \nonumber \\ &&+ (T-1) \int | \nabla (u^t \psi)|^2 \nonumber \\ & \le & \left( \frac{\beta}{4 \E} + \frac{T t \partial_{x_j} elta}{2t-1} \right) \int | a|^2 u^{2t} \psi^2 \nonumber \\ &&+ \left( T + \frac{Tt}{4 \partial_{x_j} elta(2t-1)} \right) \int u^{2t} | \nabla \psi|^2 \nonumber \\ &&+ \frac{T |t-1|}{2(2t-1)} \int u^{2t} | \mathcal{D} \left( \Omega \right)elta \psi^2|. \end{eqnarray} \end{prop} Define the following parameters \[ t_-(p)= p - \sqrt{p^2-p} \quad \mbox{and} \quad \quad t_+(p)= p+ \sqrt{p^2-p}.\] A computation shows that for $ t_-(p) <t<t_+(p)$ we have $ p- \frac{t^2}{2t-1}>0$. This restriction on $t$ will be related to the restrictions on $t$ we must impose if one wants to obtain an estimate from Proposition \ref{prop_1}. \noindent \textbf{Proof of Proposition \ref{prop_1}.} Suppose $u$ is a smooth positive stable solution of (\ref{eq}) in $\Omega$ and let $E>0$ satisfy (\ref{linearized}). From \cite{craig} we have the following generalized Hardy inequality \begin{equation} \label{hardy_mine} \beta \int \frac{- \mathcal{D} \left( \Omega \right)elta E}{E} \phi^2 + (\beta - \beta^2) \int \frac{| \nabla E|^2}{E^2} \phi^2 \le \int | \nabla \phi|^2, \qquad \forall \phi \in C_c^\infty(\Omega), \end{equation} for all $ \beta {\mathbb{R}}$. 
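Although we simply quote (\ref{hardy_mine}) from \cite{craig}, it may help the reader to recall the short argument behind it (a sketch, for smooth $E>0$ and $\phi\in C_c^\infty(\Omega)$): expanding $0\le\int\big|\nabla\phi-\beta\frac{\nabla E}{E}\phi\big|^2$ and integrating by parts in the cross term,
\begin{eqnarray*}
0&\le&\int|\nabla\phi|^2-2\beta\int\frac{\nabla E}{E}\cdot\nabla\phi\,\phi+\beta^2\int\frac{|\nabla E|^2}{E^2}\phi^2\\
&=&\int|\nabla\phi|^2+\beta\int{\rm div}\Big(\frac{\nabla E}{E}\Big)\phi^2+\beta^2\int\frac{|\nabla E|^2}{E^2}\phi^2\\
&=&\int|\nabla\phi|^2+\beta\int\frac{\Delta E}{E}\phi^2-(\beta-\beta^2)\int\frac{|\nabla E|^2}{E^2}\phi^2,
\end{eqnarray*}
which rearranges to (\ref{hardy_mine}).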
Adding $ T \int | \nabla \phi|^2$ to both sides of the inequality, using the fact that $E$ satisfies (\ref{linearized}) and taking $\phi= u^t \psi$ where $ \psi \in C_c^\infty(\Omega)$ gives \begin{eqnarray*} \beta p \int u^{p-1} u^{2t+p-1} \psi^2 - \beta \int \frac{a \cdot \nabla E}{E} u^{2t} \psi^2 && \\ + (\beta - \beta^2) \int \frac{| \nabla E|^2}{E^2} u^{2t} \psi^2 + (T-1) \int | \nabla (u^t \psi)|^2 && \\ && \le T \int | \nabla (u^t \psi)|^2. \end{eqnarray*} Note that the right side expands as \[ T t^2 \int u^{2t-2} | \nabla u|^2 \psi^2 + 2tT \int u^{2t-1} \psi \nabla u \cdot \nabla \psi + T \int u^{2t} | \nabla \psi|^2.\] We now wish to eliminate the term $ \int u^{2t-2} | \nabla u|^2 \psi^2$ from the inequality. To do this we multiply (\ref{eq}) by $ u^{2t-1} \psi^2$ and integrate over $ \Omega$ to arrive at \begin{eqnarray*} (2t-1) \int u^{2t-2} | \nabla u|^2 \psi^2 &=& \int u^{p+2t-1} \psi^2 - \int a\cdot \nabla u u^{2t-1} \psi^2 \\ && - 2 \int \nabla u \cdot \nabla \psi u^{2t-1} \psi. \end{eqnarray*} Using this equality we replace the desired term in the inequality to arrive at an inequality of the form \begin{eqnarray} \label{arr} \left( \beta p - \frac{T t^2}{2t-1} \right) \int u^{2t+p-1} \psi^2 + \beta(1-\beta) \int \frac{ | \nabla E|^2}{E^2} u^{2t} \psi^2 && \nonumber \\ +(T-1) \int | \nabla (u^t \psi)|^2 & \le & T \int u^{2t} | \nabla \psi|^2 \nonumber \\ && + \sum_{k=1}^3 I_k \end{eqnarray} where \[ I_1 = \left( 2Tt - \frac{2T t^2}{2t-1} \right) \int u^{2t-1} \psi \nabla u \cdot \nabla \psi,\] \[ I_2 = - \frac{T t^2}{2t-1} \int a(x) \cdot \nabla u u^{2t-1} \psi^2,\] \[ I_3 = \beta \int \frac{ a(x) \cdot \nabla E}{E} u^{2t} \psi^2.\] An integration by parts shows that \[ I_1= \frac{T(1-t)}{2(2t-1)} \int u^{2t} \mathcal{D} \left( \Omega \right)elta (\psi^2).\] An integration by parts shows that \[ |I_2| \le \frac{Tt}{2t-1} \int |a| \psi | \nabla \psi| u^{2t},\] and an application of Young's inequality shows this is less than or equal \[ \frac{Tt \partial_{x_j} elta}{2t-1} \int |a|^2 \psi^2 u^{2t} + \frac{Tt}{(2t-1) 4 \partial_{x_j} elta} | \nabla \psi|^2 u^{2t}.\] An application of Young's inequality shows that \[ |I_3| \le \beta \E \int \frac{ | \nabla E|^2}{E^2} u^{2t} \psi^2 + \frac{\beta}{4 \E} \int |a|^2 u^{2t} \psi^2.\] Using these upper bounds in (\ref{arr}) and regrouping gives the desired result. $\Box$ \noindent \textbf{Proof of Theorem \ref{smallness}.} We assume that $u$ is a positive stable solution of (\ref{eq}). Firstly note that \[ \int |a|^2 u^{2t} \psi^2 \le C^2 \int \frac{u^{2t} \psi^2}{|x|^2},\] after considering the conditions on $a$. Also note by Hardy's inequality we have \[ \int | \nabla (u^t \psi)|^2 \ge C_N \int \frac{u^{2t} \psi^2}{|x|^2},\] where $ C_N= \frac{(N-2)^2}{4}$. Putting these into (\ref{first}) gives \begin{eqnarray} \label{second} \left( \beta p - \frac{Tt^2}{2t-1} \right) \int u^{2t+p-1} \psi^2 && \nonumber \\ + \beta (1-\beta-\E) \int \frac{ | \nabla E|^2}{E^2} u^{2t} \psi^2 \nonumber \\ + C_1 \int \frac{u^{2t} \psi^2}{|x|^2} & \le & C_2 \int u^{2t} \left( | \nabla \psi|^2 + | \mathcal{D} \left( \Omega \right)elta (\psi^2)| \right) \end{eqnarray} where \[ C_1= (T-1)C_N - C^2 \left( \frac{\beta}{4 \E} + \frac{T t \partial_{x_j} elta}{2t-1} \right),\] and $ C_2= C_2(T,t, \partial_{x_j} elta)$. Note that for each $ t_-(p) <t< t_+(p)$ we have $ \beta p - \frac{Tt^2}{2t-1}>0$ provided $ \beta <1$ and $ T>1$ are chosen sufficiently close to $1$. We now pick $ \E>0$ small enough such that $ 1-\beta -\E >0$. 
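For completeness we record the elementary computation behind this choice of $t$, already alluded to after the statement of Proposition \ref{prop_1}: for $t>\frac{1}{2}$,
\begin{equation*}
p-\frac{t^2}{2t-1}>0\iff t^2-2pt+p<0\iff t_-(p)<t<t_+(p),
\end{equation*}
since $t_\pm(p)=p\pm\sqrt{p^2-p}$ are exactly the roots of $t^2-2pt+p$; by continuity the strict inequality $\beta p-\frac{Tt^2}{2t-1}>0$ then persists for $\beta<1$ and $T>1$ sufficiently close to $1$.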
We now assume $C>0$ is sufficiently small such that $ C_1 \ge 0$. We then arrive at an estimate of the form \begin{equation} \label{ten} \left( \beta p - \frac{Tt^2}{2t-1} \right) \int u^{2t+p-1} \psi^2 \le C_2 \int u^{2t} \left( | \nabla \psi|^2 + | \mathcal{D} \left( \Omega \right)elta (\psi^2)| \right), \end{equation} for all $ \psi \in C_c^\infty({\mathbb{R}}^N)$. We now assume that $ \phi$ is a smooth cut-off function with, $ 0 \le \phi \le 1$, $ \phi=1$ in $ B_R$ and compactly supported in $ B_{2R}$ such that $ | \nabla \phi| \le \frac{C}{R}$ and $ | \mathcal{D} \left( \Omega \right)elta \phi| \le \frac{C}{R^2}$ where $ C$ is independent of $R$. Putting $\psi=\phi^m$ where $ m$ is a large integer into (\ref{ten}) gives \[ \left( \beta p - \frac{Tt^2}{2t-1} \right) \int u^{2t+p-1} \phi^{2m} \le C_2 C_m \int u^{2t} \phi^{2m-2} \left( | \nabla \phi|^2 + | \mathcal{D} \left( \Omega \right)elta \phi| \right),\] where $C_m$ depends only on $m$. We now apply H\"older's inequality to see the right hand side of this inequality is bounded above by \[ C_2 C_m \left( \int u^{2t+p-1} \phi^\frac{(m-1) (2t+p-1)}{t} dx \right)^\frac{2t}{2t+p-1} \left( \int ( | \nabla \phi|^2 + | \mathcal{D} \left( \Omega \right)elta \phi|)^\frac{2t+p-1}{p-1} dx \right)^\frac{p-1}{2t+p-1}.\] Now note that for sufficiently large $m$ we have that $\frac{(m-1) (2t+p-1)}{t} > 2m$ and hence we can replace the first term on the right hand side of the inequality with \[\left( \int u^{2t+p-1} \phi^{2m} dx \right)^\frac{2t}{2t+p-1},\] which allows one to cancel terms to arrive at \[ \left( \beta p - \frac{Tt^2}{2t-1} \right)^\frac{2t+p-1}{p-1} \int u^{2t+p-1} \phi^{2m} \le \tilde{C}_m \int \left( | \nabla \phi|^2 + | \mathcal{D} \left( \Omega \right)elta \phi| \right)^\frac{2t+p-1}{p-1}.\] We now take into account the support of $ \phi$ and how $ \phi$ scales to arrive at \[ \int_{B_R} u^{2t+p-1} \le C_0 R^{ N-2- \frac{2(2t+p-1)}{p-1}},\] where $C_0$ depends on the various parameters but is independent of $ R$. Now provided $N-2- \frac{2(2t+p-1)}{p-1}<0$ we can send $ R \rightarrow \infty$ to arrive at a contradiction. Now note we can pick a $t \in (t_-(p),t_+(p))$ such that this exponent is negative provided \[ \frac{N(p-1)}{2} < 2 \left( p + \sqrt{p^2-p} \right)+p-1,\] which is precisely (\ref{cond}). $ \Box$ \textbf{Proof of Theorem \ref{decay}.} Suppose $ 0 < u$ is a smooth stable solution of (\ref{eq}) and $E>0$ solves (\ref{linearized}). Let $ |x_k | \rightarrow \infty$ and set $ r_k:= \frac{|x_k|}{4}$. By passing to a subsequence we can assume that $ \{ B(x_k,r_k): k \ge 1 \}$ is a disjoint family of balls. We now define the rescaled functions \[ u_k(x)= r_k^\frac{2}{p-1} u(x_k + r_k x), \qquad a_k(x)=r_k a(x_k + r_k x), \quad E_k(x)= E(x_k+r_k x),\] and we restrict $|x|<2$. Then equation (\ref{eq}) and (\ref{linearized}) are satisfied on $B_2$ with $ u_k,a_k,E_k$ replacing $u,a,E$. Note that $ a_k(x)$ is a sequence of smooth divergence free vector fields which satisfy the bound $ |a_k(x)| \le C$ for all $ |x| <2$. From this we see the term involving $a_k$ in (\ref{first}) will be a lower order term as far as powers of $u$ are concerned and hence will cause no issues. With the conditions on $N$ and $p$ there is some $ t_-(p)<t<t_+(p)$ such that $2t+p-1 > \frac{N}{2}(p-1)>0$ and by taking $T=1$ (we can take $T=1$ since the advection term is lower order) and $ \beta<1$ sufficiently close to $1$ we can assume $\beta p - \frac{ t^2}{2t-1}>0$. 
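For the reader's convenience, here is a quick check (a direct computation under the scaling above) that the rescaled functions satisfy the same problem: since $\nabla u_k(x)=r_k^{\frac{2}{p-1}+1}\nabla u(x_k+r_kx)$ and $\frac{2}{p-1}+2=\frac{2p}{p-1}$, we have for $|x|<2$
\begin{equation*}
-\Delta u_k(x)+a_k(x)\cdot\nabla u_k(x)=r_k^{\frac{2}{p-1}+2}\left(-\Delta u+a\cdot\nabla u\right)(x_k+r_kx)=r_k^{\frac{2p}{p-1}}u^p(x_k+r_kx)=u_k(x)^p,
\end{equation*}
and the same scaling applied to (\ref{linearized}) shows that $E_k>0$ is an admissible function in the definition of stability for $u_k$ on $B_2$.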
Let $ 0 \le \phi \le 1$ be compactly supported in $B_2$ with $ \phi=1$ on $B_1$ and put $ \psi=\phi^m$, where $m$ a large integer, into (\ref{first}) where now $u,a,E$ are given by $ u_k,a_k,E_k$. Arguing as in the proof of Theorem \ref{smallness} one can obtain a bound of the form \[ \int_{B_1} u_k^{2t+p-1} \le C_0 ,\] where $ C_0$ depends on the various parameters but is independent of $k$. Now note that $u_k>0$ is a sequence of smooth positive solutions of \[ - \mathcal{D} \left( \Omega \right)elta u_k + a_k (x) \cdot \nabla u_k = C_k(x) u_k \qquad \mbox{ in } B_2,\] where $ C_k(x)=u_k^{p-1}$. The above integral estimate shows that $C_k$ is bounded in $L^q(B_1)$ for some $ q> \frac{N}{2}$. We can now apply a Harnack inequality from \cite{harnack} to see that \begin{equation} \label{second} \sup_{B_\frac{1}{2}} u_k \le C \inf_{B_{\frac{1}{2}}} u_k. \end{equation} If we can show that $\inf_{B_\frac{1}{2}} u_k \rightarrow 0$ then one has $ \sup_{B_\frac{1}{2}} \rightarrow 0$ and in particular this gives \[ |x_k|^\frac{2}{p-1} u(x_k) \le 4^\frac{2}{p-1} \sup_{B_\frac{1}{2}} u_k \rightarrow 0\] which gives us the desired decay estimate. To show $\inf_{B_\frac{1}{2}} u_k \rightarrow 0$ we will show \[ \int_{B_1} u_k^\frac{(p-1)N}{2} \rightarrow 0.\] Using a change of variables shows that \[ \int_{B_1} u_k^\frac{(p-1)N}{2} = \int_{B(x_k,r_k)} u^\frac{(p-1)N}{2},\] and if we show that $ u \in L^\frac{(p-1)N}{2}({\mathbb{R}}^N)$ then we'd have the desired result since \[ \int_{{\mathbb{R}}^N} u^\frac{(p-1)N}{2} \ge \sum_{k=1}^\infty \int_{B(x_k,r_k)} u^\frac{(p-1)N}{2}. \] Towards this we now set $ t= \frac{(p-1)(N-2)}{4}$ and note that the condition on $N$ and $p$ imply that $ t_-(p)<t<t_+(p)$. We now pick $ \beta <1$ but sufficiently close such that $ \beta p - \frac{t^2}{2t-1}>0$ and pick $ \E>0$ sufficiently small such that $ 1-\beta -\E>0$. Let $ \phi$ be the smooth cut-off function from the proof of Theorem \ref{smallness}, which is equal to $1$ in $B_R$ and compactly supported in $B_{2R}$. We now put $ \psi=\phi^m$, where $m$ is a large integer, into (\ref{first}) taking $T=1$, to arrive at inequality of the form \begin{equation} \label{hundred} \int u^{2t+p-1} \phi^{2m} \le C_0 \int |a|^2 u^{2t} \phi^{2m} + C_0 \int u^{2t} \phi^{2m-2} \left( | \nabla \phi|^2+ | \mathcal{D} \left( \Omega \right)elta \phi| \right). \end{equation} We now let $ \tau$ be such that $ 2 t \tau = 2t+p-1$ and let $ \tau'$ denote the conjugate index of $ \tau$. Applying H\"older's inequality to the right hand side of (\ref{hundred}) and arguing as in the proof of Theorem \ref{smallness} we arrive at an inequality, for sufficiently large $m$, of the form \[ \int u^{2t+p-1} \phi^{2m} \le C_0 \int_{B_{2R}} |a|^{2 \tau'} + C_0 \int \left( | \nabla \phi|^2 + | \mathcal{D} \left( \Omega \right)elta \phi| \right)^{\tau'},\] where $C_0$ is a constant which depends on the various parameters but is independent of $ R$. A computation shows that $ \tau'= \frac{N}{2}$ and $ 2t+p-1= \frac{N}{2}(p-1)$. Using these explicit values and the scaling of $ \phi$ we arrive at \[ \int_{B_R} u^{ \frac{N(p-1)}{2}} \le C_0 \int_{B_{2R}} |a|^N + C_0,\] and from this we obtain the desired bound on $u$ after recalling that $ |a| \in L^N({\mathbb{R}}^N)$. $ \Box$ \section{Existence proofs} \label{existsec} \noindent \textbf{The positive radial solution.}\\ For $ p > \frac{N+2}{N-2}$ let $w=w(r)$ denote the positive radial decreasing solution of $ - \mathcal{D} \left( \Omega \right)elta w = w^p$ in $ {\mathbb{R}}^N$ with $ w(0)=1$. 
Asymptotics of $ w$ as $ r \rightarrow \infty$ are given by \[ w(r)= \beta^\frac{1}{p-1} r^\frac{-2}{p-1}(1+o(1)),\] where \[ \beta=\beta(p,N)= \frac{2}{p-1} \left( N-2- \frac{2}{p-1} \right).\] In the case where $ p>p_c$ the refined asymptotics are given by \[ w(r)= \beta^\frac{1}{p-1} r^\frac{-2}{p-1} + \frac{a_1}{r^{\mu_0^-}} + o \left( \frac{1}{r^{\mu_0^-}}\right),\] where $ a_1 <0$ and $ \mu_0^- > \frac{2}{p-1}$; see \cite{Gui_Ni_Wang}. \\ We begin by analysing the radial solution $w$ as defined above. Let $ v(r)=\beta^\frac{1}{p-1} r^\frac{-2}{p-1}$ where $ \beta$ is defined as above. \begin{lemma} \label{compar} Suppose $ p >p_c$, $v(r)=\beta^\frac{1}{p-1} r^\frac{-2}{p-1}$ and $ \beta$ is defined as in the definition of $w$. \begin{enumerate} \item Then $ v \ge w$ in $ {\mathbb{R}}^N$. \item There is some $ \E>0$ such that \begin{equation} \label{superstable} \int (p+\E) w^{p-1} \phi^2 \le \int | \nabla \phi|^2 \qquad \forall \phi \in C_c^\infty({\mathbb{R}}^N). \end{equation} \end{enumerate} \end{lemma} \begin{proof} 1) Note that $ v(r) >w(r)$ for large $ r$ and small $ r$. Towards a contradiction we assume that there is $ 0<r_0 < r_1$ such that $ w(r) > v(r) $ for all $ r_0 <r<r_1$ with $ w=v$ at $ r=r_0,r_1$. A computation shows that for $ p >p_c$ there is some $ \E>0$ such that $ (p+\E)\beta \le \frac{(N-2)^2}{4}$ and then from Hardy's inequality we obtain \begin{equation} \label{thi} \int (p+\E) v^{p-1} \phi^2 \le \int | \nabla \phi|^2 \qquad \forall \phi \in C_c^\infty({\mathbb{R}}^N). \end{equation} From this we see that $ v$ is a stable singular solution of $ - \mathcal{D} \left( \Omega \right)elta v = v^p$ in $ {\mathbb{R}}^N$ and in particular its a stable solution of \[ - \mathcal{D} \left( \Omega \right)elta v = v^p \mbox{ in } r_0 <r<r_1 \qquad \mbox{ with $ v=w$ on $ r=r_0,r_1$.}\] It is possible to use the stability of $v$ to show that $v$ is the minimal solution of this equation with the given prescribed boundary conditions. This fact relies on the strict convexity of the nonlinearity. Noting that $w$ satisfies the same equation with the prescribed boundary conditions one must have $ v \le w$ on $ r_0 <r<r_1$ since $v$ is a minimal solution. This gives us the desired contradiction. \\ 2) The result is immediate after combining the pointwise comparison between $w$ and $v$ and using (\ref{thi}). \end{proof} For the remainder $ w$ always refers to the above radial solution and $L$ to the linear operator $ L(\phi)=- \mathcal{D} \left( \Omega \right)elta \phi - p w^{p-1} \phi$. We now define the various function spaces. For $ \sigma >0$ but small, define \[ \| \phi\|_{\tilde{X}_\sigma} := \sup_{|x| \le 1} |x|^\sigma |\phi(x)| + \sup_{|x| \ge 1} |x|^\frac{2}{p-1} | \phi(x)|,\] and \[ \| f\|_{Y_\sigma} := \sup_{|x| \le 1} |x|^{\sigma+2} |f(x)| + \sup_{|x| \ge 1} |x|^{\frac{2}{p-1}+2} |f(x)|.\] Let $ \tilde{X}_\sigma$ and $ Y_\sigma$ denote the completions of $ C_c^\infty({\mathbb{R}}^N \backslash \{0\})$ under the appropriate norms. \\ The following linear estimate is from \cite{davila} and is a key starting point for their work. They also obtain results in the case of $ \frac{N+2}{N-2} <p < \frac{N+1}{N-3}$ in \cite{davila} and also in another of their works \cite{davila_fast}. This case is harder to deal with but luckily are main interest is in the case of $ p >p_c$ which allows us to avoid the harder case. \begin{thm*} \cite{davila} Suppose $ N \ge 4$ and $ p> \frac{N+1}{N-3}$. 
There exists some small $ \sigma >0$ such that for any $ f \in Y_\sigma$ there exists some $ \phi \in \tilde{X}_\sigma $ such that $L(\phi)=f$ in $ {\mathbb{R}}^N$. Moreover the linear map $ T: Y_\sigma \rightarrow \tilde{X}_\sigma$, given by $ T(f)=\phi$, is continuous. \end{thm*} For our approach we won't work directly with $\tilde{X}_\sigma$ but instead work with a slight variant that allows us to handle the advection term. So towards this define the norm \begin{eqnarray*} \| \phi\|_{X_\sigma}&:=& \sup_{|x| \le 1} \left( |x|^\sigma | \phi(x)| + |x|^{\sigma +1} | \nabla \phi(x)| \right) \\ && + \sup_{|x|\ge 1} \left( |x|^\frac{2}{p-1} | \phi(x)| + |x|^{ \frac{2}{p-1}+1} | \nabla \phi(x)| \right) \end{eqnarray*} and let $ X_\sigma$ denote the completion of $ C_c^\infty({\mathbb{R}}^N \backslash \{0\})$ with respect to this norm. \begin{lemma} \label{homo} Suppose $ N \ge 4$ and $ p> \frac{N+1}{N-3}$. For sufficiently small $ \sigma>0$ and for all $ f \in Y_\sigma$ there exists some $ \phi \in X_\sigma$ such that $L(\phi)=f$ in $ {\mathbb{R}}^N$. Moreover the linear map $ T:Y_\sigma \rightarrow X_\sigma$ defined by $ T(f)=\phi$ is continuous. \end{lemma} \noindent \textbf{Proof of Lemma \ref{homo}.} Suppose $ f \in Y_\sigma$ and let $ \phi \in \tilde{X}_\sigma$ be such that $L(\phi)=f$ in $ {\mathbb{R}}^N$. Then there exists some $ C>0$, independent of $f$ and $ \phi$, such that $ \| \phi\|_{\tilde{X}_\sigma} \le C \| f\|_{Y_\sigma}$. Our goal is to now show there is some $ C_1>0$, independent of $ f$ and $ \phi$, such that $ \| \phi\|_{X_\sigma} \le C_1 \| f\|_{Y_\sigma}$ and this will complete the proof. Define the re-scaled functions $ \phi_m(x)= \phi(x_m + r_m x)$ where $ |x_m|>0$, $ r_m= \frac{|x_m|}{4}$ for $ |x| <1$. Note that \[ - \mathcal{D} \left( \Omega \right)elta \phi_m(x)= p r_m^2 w(x_m+r_m x)^{p-1} \phi(x_m+r_m x) + r_m^2 f(_m+ r_m x)=:g_m(x), \] for all $ x \in B_1$. We now obtain some estimates on $\phi_m$ using the following result, which is just an elliptic regularity result coupled with the Sobolev imbedding theorem: for $ t>N$ there is some $ C_t$ such that \begin{equation} \label{ell_reg} \sup_{B_\frac{1}{2}} | \nabla \phi_m(x)| \le C_t \left( \int_{|x|<1} | \mathcal{D} \left( \Omega \right)elta \phi_m(x)|^t dx \right)^\frac{1}{t} + C_t \int_{|x|< 1} | \phi_m(x)| dx. \end{equation} We now assume we are in the case of $ |x_m| \ge 1$. Using the fact that $ f \in Y_\sigma$ and $ \phi \in \tilde{X}_\sigma$ one sees that $ |x_m|^\frac{2}{p-1} |g_m(x)| \le C$ for all $ |x| <1$ and $m$. Putting these estimates into (\ref{ell_reg}) gives $ \sup_{B_\frac{1}{2}} | \nabla \phi_m(x)| \le C |x_m|^\frac{-2}{p-1}$ and from this we see that \[ |x_m|^{\frac{2}{p-1}+1} | \nabla \phi(x_m)| \le C_1.\] The case of $ |x_m| \le 1$ is handled as above. Combining these results gives us the desired bound. $ \Box$ \textbf{Proof of Theorem \ref{existence}, 1).} To solve (\ref{eq}) we first consider solving the related problem given by \begin{equation} \label{absol} - \mathcal{D} \left( \Omega \right)elta u + a(x) \cdot \nabla u = |u|^p \qquad \mbox{ in } {\mathbb{R}}^N. \end{equation} To do this we perturb off the radial solution $w$ of the advection free problem. So we look for a solution of the form $ u=\phi+w$. 
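To make the reduction explicit: writing $u=w+\phi$ in (\ref{absol}) and using $-\Delta w=w^p$ gives
\begin{equation*}
-\Delta\phi+a\cdot\nabla w+a\cdot\nabla\phi=|w+\phi|^p-w^p \qquad \mbox{in } {\mathbb{R}}^N,
\end{equation*}
and subtracting $pw^{p-1}\phi$ from both sides isolates the linearized operator $L(\phi)=-\Delta\phi-pw^{p-1}\phi$ on the left.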
So we need to find a solution $ \phi$ of \begin{equation} \label{absol_1} L(\phi) = - a \cdot \nabla w - a \cdot \nabla \phi + |w+\phi|^p - p w^{p-1} \phi - w^p \qquad \mbox{ in $ {\mathbb{R}}^N$}, \end{equation} where $ L(\phi)=- \mathcal{D} \left( \Omega \right)elta \phi - p w^{p-1} \phi$. Letting $T$ be defined as in Lemma \ref{homo} we are looking for a $ \phi \in X_\sigma$ such that \begin{equation} \label{ma} \phi= -T(a \cdot \nabla w) - T(a \cdot \nabla \phi) + T( |w+\phi|^p - p w^{p-1} \phi - w^p). \end{equation} To find such a $\phi$ we define $ J(\phi)$ to be the mapping on the right hand side of (\ref{ma}) and we will now show that for a suitable $R$ that $J$ is a contraction mapping on the closed ball $B_R$, centered at the origin, in $X_\sigma$. We will then argue that $u=w+\phi$ is positive. We begin by showing $J$ is into $B_R$. In what follows $C$ can depend on $p,a,w $ but not on $x,R,\phi$ and $ \sigma$ provided $ \sigma$ is small. Let $ R>0$ and let $ \phi \in B_R$. Then note that that there is some $ C>0$ such that \begin{equation} \label{last} \|J(\phi)\|_{X_\sigma} \le C \|a \cdot \nabla w\|_{Y_\sigma} + C \| a \cdot \nabla \phi\|_{Y_\sigma} + C\| |w+\phi|^p- pw^{p-1} \phi - w^p\|_{Y_\sigma}. \end{equation} We now estimate the terms on the right hand side. \begin{eqnarray*} \| a \cdot \nabla w\|_{Y_\sigma} & \le & \sup_{|x| \le 1} |a(x)| |x| \sup_{|x| \le 1} |x|^{1+\sigma} | \nabla w(x)| \\ && + \sup_{|x| \ge 1} |x||a(x)| \sup_{|x| \ge 1} |x|^{ \frac{2}{p-1}+1} | \nabla w(x)| \\ & \le & ( \sup_{x} |a(x)||x|) \| w\|_{X_\sigma} \\ \end{eqnarray*} The same argument shows that \[ \| a \cdot \nabla \phi\|_{Y_\sigma} \le ( \sup_{x} |x| |a(x)|) \| \phi \|_{X_\sigma}.\] We now approximate the last term in (\ref{last}). For this we need the following real analysis result. There exists some $ C=C_p$ such that for all numbers $ w>0$ and $ \phi \in {\mathbb{R}}$ we have \[ \big| |w+\phi|^p- p w^{p-1} \phi - w^p \big| \le C \left( w^{p-2} \phi^2 + | \phi|^p \right).\] Set $ \Gamma= |w+\phi|^p-p w^{p-1} \phi - w^p$. Then one sees \begin{eqnarray*} \| \Gamma \|_{Y_\sigma} & \le & C \sup_{|x| \le 1} |x|^{\sigma +2} \left( w^{p-2} \phi^2 + | \phi|^p \right) \\ && + C \sup_{|x| \ge 1} |x|^{\frac{2}{p-1}+2} \left( w^{p-2} \phi^2 + | \phi|^p \right) \\ & := & C I_1 + C I_2. \end{eqnarray*} Then note that \begin{eqnarray*} I_1 & = & \sup_{|x| \le 1} \left( |x|^{2 - \sigma} w^{p-2} \left( |x|^\sigma \phi(x) \right)^2 + |x|^{ \sigma+2-\sigma p} \left( | \phi(x)| |x|^\sigma \right)^p \right) \\ & \le & \sup_{|x| \le 1} \left(|x|^{2-\sigma} w^{p-2} \| \phi\|_{X_\sigma}^2 + |x|^{\sigma +2 - \sigma p} \| \phi\|_{X_\sigma}^p \right) \\ & \le & C \| \phi\|_{X_\sigma}^2 + C \| \phi\|_{X_\sigma}^p \end{eqnarray*} for sufficiently small $ \sigma >0$. One can similarly show that \begin{eqnarray*} I_2 &\le& \sup_{|x| \ge 1} \left( |x|^\frac{2}{p-1} w \right)^{p-2} \| \phi\|_{X_\sigma}^2 + \| \phi\|_{X_\sigma}^p \\ & \le & C \| \phi\|_{X_\sigma}^2 + \| \phi\|_{X_\sigma}^p. \end{eqnarray*} So combining these results we arrive at \begin{eqnarray} \label{poo} \|J(\phi)\|_{X_\sigma} & \le & C \sup_x |x| |a(x)| + C \sup_x |x| |a(x)| \| \phi\|_{X_\sigma} \nonumber \\ && + C \| \phi\|_{X_\sigma}^2 + C \| \phi\|_{X_\sigma}^p. \end{eqnarray} Before choosing $R$ we examine the condition on $J$ to be a contraction on $B_R$. 
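As a quick sanity check of the elementary inequality invoked above (a sketch under the stated assumptions on $p$): for $p=2$ it is immediate, since
\begin{equation*}
|w+\phi|^2-2w\phi-w^2=\phi^2,
\end{equation*}
while for general $p>1$ one argues separately on $\{|\phi|\le w/2\}$, where a second order Taylor expansion of $s\mapsto|s|^p$ about $w$ applies, and on $\{|\phi|>w/2\}$, where each of the three terms on the left is bounded by a constant multiple of $|\phi|^p$.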
First note there is some $ C=C_p$ such that for all numbers $ w >0$ and $ \hat{\phi}, \phi \in {\mathbb{R}}$ one has \begin{equation} \label{contr} \Big| | \hat{\phi} +w|^p - | \phi + w|^p - p w^{p-1}(\hat{\phi}-\phi) \big| \le C M | \hat{\phi}-\phi| \end{equation} where \[ M= w^{p-2} \left( | \hat{\phi}| + | \phi| \right) + | \hat{\phi}|^{p-1} | + | \phi|^{p-1} .\] Let $ \hat{\phi}, \phi \in B_R$. Then \[ J(\hat{\phi})- J(\phi)= -T( a \cdot \nabla ( \hat{\phi}- \phi)) + T( |w+\hat{\phi}|^p- |w+\phi|^p - p w^{p-2} (\hat{\phi}- \phi)),\] and so \begin{eqnarray*} \| J(\hat{\phi})- J(\phi) \|_{X_\sigma} & \le & C \| a \cdot \nabla ( \hat{\phi}- \phi) \|_{Y_\sigma} \\ &&+ C \| |w+\hat{\phi}|^p- |w+\phi|^p- p w^{p-1}( \hat{\phi}-\phi)\|_{Y_\sigma} \\ & =:& C J_1 + C J_2 \end{eqnarray*} Arguing as above one easily sees that $ J_1 \le \sup_x( |x||a(x)|) \| \hat{\phi}- \phi\|_{X_\sigma}$. Using (\ref{contr}) we see that \[ J_2 \le C \sup_{|x| \le 1} |x|^{2+\sigma} M | \hat{\phi}-\phi|+ C \sup_{|x| \ge 1} |x|^{\frac{2}{p-1}+2} M | \hat{\phi}-\phi| =: CJ_3+ C J_4.\] We now compute the various terms in $J_3$ and $J_4$. \begin{eqnarray*} \sup_{|x| \le 1} |x|^{2+\sigma} w^{p-1} | \hat{\phi}| | \hat{\phi} - \phi| & \le & \sup_{|x| \le 1} ( |x|^{2-\sigma} w^{p-2} ) \| \hat{\phi}\|_{X_\sigma} \| \hat{\phi}- \phi\|_{X_\sigma}. \end{eqnarray*} Also we have \begin{eqnarray*} \sup_{|x| \le 1} |x|^{2+\sigma} | \hat{\phi}|^{p-1} | \hat{\phi} - \phi| & \le & \sup_{|x| \le 1} |x|^{2-\sigma-\sigma(p-1)} \| \hat{\phi}\|_{X_\sigma}^{p-1} \| \hat{\phi}- \phi\|_{X_\sigma}, \\ & \le & \| \hat{\phi}\|_{X_\sigma}^{p-1} \| \hat{\phi}- \phi\|_{X_\sigma}, \end{eqnarray*} for small enough $ \sigma>0$. Combining these results we obtain \begin{eqnarray*} J_3 &\le& \left( \sup_{|x| \le 1} |x|^{2-\sigma} w^{p-2} 2R + 2 R^{p-1} \right) \| \hat{\phi}- \phi\|_{X_\sigma} \\ & \le & \left( C R + 2 R^{p-1} \right) \| \hat{\phi}- \phi\|_{X_\sigma}. \end{eqnarray*} One can argue in a similar fashion to show \begin{eqnarray*} J_4 & \le & \sup_{|x| \ge 1} \left( |x|^\frac{2}{p-1} w \right)^{p-2} ( \| \hat{\phi}\|_{X_\sigma} + \| \phi\|_{X_\sigma} ) \| \hat{\phi}-\phi\|_{X_\sigma} \\ && + \left( \| \hat{\phi} \|_{X_\sigma}^{p-1} + \| {\phi} \|_{X_\sigma}^{p-1} \right)\| \hat{\phi}-\phi\|_{X_\sigma} \\ & \le & \left( C R + 2 R^{p-1} \right) \| \hat{\phi}-\phi\|_{X_\sigma}. \end{eqnarray*} Combining the results we obtain an inequality of the form \begin{equation} \label{contraction99} \| J(\hat{\phi})-J(\phi)\|_{X_\sigma} \le C \left( \sup_x |x| |a(x)| + R + R^{p-1} \right) \| \hat{\phi}- \phi\|_{X_\sigma}. \end{equation} We now pick $R$ and put conditions on $a$. Fix $ R$ sufficiently small such that $ CR^2 + CR^p \le \frac{R}{10}$ and such that $ CR + CR^{p-1} < \frac{1}{2}$. Now impose a smallness condition on $ a$ such that $ C \sup_x |x| |a(x)| + C \sup_x |x| |a(x)|R \le \frac{R}{10}$ and $ C \sup_x |x| |a(x)| < \frac{1}{4}$. These conditions are sufficient to show that $J$ is a contraction mapping on $B_R$ in $ X_\sigma$ and hence by the Contraction Mapping Principle there is some $ \phi \in B_R$ such that $J(\phi)=\phi$, which was the desired result. So we have $\phi \in B_R$ such that \begin{equation} \label{almost} - \mathcal{D} \left( \Omega \right)elta ( w + \phi ) + a \cdot \nabla ( w+ \phi) = |w+ \phi|^p \qquad \mbox{in } {\mathbb{R}}^N. 
\end{equation} By taking $ R>0$ smaller, which imposes a further smallness condition on $a$, we can assume that \begin{equation} \label{close} \sup_{|x| \ge 1} |x|^\frac{2}{p-1} | \phi(x)| \le \frac{1}{10} \inf_{|y| \ge 1} |y|^\frac{2}{p-1}w(y). \end{equation} Using this one sees that $ \phi + w >0$ on $ |x| \ge 1$. Note there are some possible regularity issues for $ \phi$ near the origin. But taking $ \sigma >0$ small enough and applying elliptic regularity theory, along with a bootstrap, one sees that $ \phi$ is at least $ C^{2,\alpha}$ in a ball around the origin for some $ \alpha >0$. One can now apply the maximum principle to see that $ u=w+\phi$ is a positive solution of (\ref{eq}). $\Box$ \textbf{Proof of Theorem \ref{existence}, 2).} First note that a computation shows that $ p_c >\frac{N+1}{N-3}$. For $ R>0$ sufficiently small there is some $u_R>0$ which satisfies (\ref{eq}) and as $R$ gets small one imposes smallness conditions on $a$. For $ m \ge 2$ an integer let $ E=E_{m,R}>0$ denote the first eigenfunction of $L(E):=- \mathcal{D} \left( \Omega \right)elta E + a \cdot \nabla E - p u_R^{p-1} E $ on the ball $ B_m$ with $E=0$ on $ \partial B_m$ and let $ \mu_{m,R}$ denote the first eigenvalue. We now multiply the equation for $E$ by $E$ and integrate over $B_m$. Using the fact that $a$ is divergence free (this is only spot we utilize this fact) one sees, after a suitable $L^2$ normalization of $E$, that \[ \int | \nabla E|^2 = \int p u_R^{p-1} E^2 + \mu_{m,R}.\] We now extend $E$ outside $B_m$ by setting it to be zero and we use the fact that $w$ satisfies (\ref{superstable}) to arrive at \[ (p+\E) \int w^{p-1} E^2 \le \int p u_R^{p-1} E^2 + \mu_{m,R},\] for some fixed $ \E>0$. Note that $ u_R \rightarrow w$ in $ X_\sigma$ as $ R \rightarrow 0$ and so we can argue as in (\ref{close}), that for any $ \partial_{x_j} elta >0$ we can pick $R$ small enough such that $ u_R(x) \le ( \partial_{x_j} elta +1) w(x)$ for all $ |x| \ge 1$. Using elliptic regularity and Sobolev imbedding one sees that the restriction of $u_R$ to the unit ball converges to the restriction of $w$ uniformly. And so we can assume that $ u_R^{p-1} \le w^{p-1} + \partial_{x_j} elta$ for $ |x| \le 1$ for small enough $R$. Using this estimates and breaking the integrals into the regions $ |x| \ge 1$ and $ |x| \le 1$ one arrives at \[ \left( (p+\E) -p(1+ \partial_{x_j} elta)^{p-1} \right) \int_{|x| \ge 1} w^{p-1} E^2 +(\E- p \partial_{x_j} elta) \int_{|x| \le 1} w^{p-1} E^2 \le \mu_{m,R},\] for sufficiently small $ R$. Now by taking $ \partial_{x_j} elta >0$ small enough one sees that for fixed $ R$ small enough we have $ \mu_{m,R} \ge 0$ for all $ m \ge 2$. We now fix this $R$ and let $ u=u_R$, $ E_m=E_{m,R} $ and $ \mu_m= \mu_{m,R}$. So we have that $ E_m >0$ satisfies \begin{eqnarray*} \left\{ \begin{array}{lcl} - \mathcal{D} \left( \Omega \right)elta E_m +a(x) \cdot \nabla E_m &=& p u^{p-1} E_m + \mu_m E_m \qquad \mbox{ in } B_m \\ E_m &=& 0 \qquad \qquad \qquad \quad \quad \mbox{ on } \partial B_m. \end{array}\right. \end{eqnarray*} Lets assume that $ \mu_m \rightarrow 0$. By suitably scaling $E_m$ we can assume that $ E_m(0)=1$. Now fix $ k \ge 1$ an integer and let $ m \ge k+2$. Now note that $E_m$ satisfies the same equation on $ B_{k+1}$ and hence by Harnacks inequality there is some $C_k>0$ such that \[ \sup_{B_k} E_m \le C_k \inf_{B_k} E_m \le C_k,\] for all $ m \ge k+2$. 
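For clarity we record the computation where the divergence free hypothesis enters (the only such spot, as noted above): since $E$ vanishes on $\partial B_m$,
\begin{equation*}
\int_{B_m}(a\cdot\nabla E)\,E=\frac{1}{2}\int_{B_m}a\cdot\nabla(E^2)=-\frac{1}{2}\int_{B_m}({\rm div}\,a)\,E^2=0,
\end{equation*}
so the advection term drops out of the quadratic form when the eigenvalue equation is multiplied by $E$ and integrated over $B_m$.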
Using elliptic regularity one can show that $ E_m$ is bounded in $ C^{1,\alpha}(B_k)$ and by a diagonal argument there is some subsequence of $E_m$, which we still denote by $ E_m$, which converges to some $ E \ge 0$ locally in $C^{1,\beta}$ for some $ \beta>0$ and $E(0)=1$. One can then argue that $ E$ satisfies \[ - \mathcal{D} \left( \Omega \right)elta E + a(x) \cdot \nabla E = p u^{p-1} E \qquad \mbox{in $ {\mathbb{R}}^N$,} \] and then we can apply the strong maximum principle to see that $ E>0$. This shows that $u$ is a stable solution of (\ref{eq}) which was the desired result. We now show $ \mu_m \rightarrow 0$. We begin by putting $ E_m$, which we $L^2$ normalize, into (\ref{hardy_mine}) with $ \beta = \frac{1}{2}$ to arrive at \[ \mu_m \int \phi^2 + \frac{1}{2} \int \frac{|\nabla E_m|^2}{E_m^2} \phi^2 \le 2 \int | \nabla \phi |^2 + \int \frac{ a \cdot \nabla E_m}{E_m} \phi^2,\] for all $ \phi \in C_c^\infty(B_m)$. We now use Young's inequality to arrive at \begin{eqnarray*} \mu_m \int \phi^2 + \frac{1}{2} \int \frac{| \nabla E_m|^2}{E_m^2} \phi^2 & \le & 2 \int | \nabla \phi^2| \\ && +\E \int \frac{| \nabla E_m|^2}{E_m^2} \phi^2 \\ && + \frac{1}{4\E} \int | a|^2 \phi^2. \end{eqnarray*} By taking $ \E>0$ small enough and re-grouping terms and by using the fact that $ |a(x)| \le \frac{C^2}{|x|^2}$ along with Hardy's inequality, one can obtain \[ \mu_m \int \phi^2 \le C \int | \nabla \phi|^2 \qquad \forall \phi \in C_c^\infty(B_m),\] where $C$ is independent of $m$. From this we can conclude that $ \limsup_m \mu_m \le 0$ but we already have $ \mu_m \ge 0$ and hence we have the desired result. $\Box$ \end{document}
\begin{document} \maketitle \newtheorem{pro}{Proposition}[section] \newtheorem{lem}[pro]{Lemma} \newtheorem{thm}[pro]{Theorem} \newtheorem{de}[pro]{Definition} \newtheorem{co}[pro]{Comment} \newtheorem{no}[pro]{Notation} \newtheorem{vb}[pro]{Example} \newtheorem{vbn}[pro]{Examples} \newtheorem{gev}[pro]{Corollary} \newtheorem{vrg}[pro]{Question} \newtheorem{rem}[pro]{Remark} \begin{abstract} After shortly reviewing the fundamentals of approach theory as introduced by R. Lowen in 1989, we show that this theory is intimately related with the well-known Wasserstein metric on the space of probability measures with a finite first moment on a complete and separable metric space. More precisely, we introduce a canonical approach structure, called the contractive approach structure, and prove that it is metrized by the Wasserstein metric. The key ingredients of the proof of this result are Dini's Theorem, Ascoli's Theorem, and the fact that the class of real-valued contractions on a metric space has some nice stability properties. We then combine the obtained result with Prokhorov's Theorem to establish inequalities between the relative Hausdorff measure of non-compactness for the Wasserstein metric and a canonical measure of non-uniform integrability. \end{abstract} \section{Introduction} Approach spaces were introduced in 1989 by R. Lowen as a unification of metric spaces and topological spaces (\cite{L89},\cite{Lo},\cite{L15},\cite{SV16}). Instead of quantifying the properties of a space $X$ by means of one metric, an approach space is determined by assigning to each point $x \in X$ a collection $\mathcal{A}_x$ of $[0,\infty]$-valued maps on $X$ which are interpreted as local distances based at $x$. By imposing suitable axioms on the collections $\mathcal{A}_x$, we obtain a structure on $X$ which, as in the case of metric spaces, allows us to deal with quantitative concepts such as asymptotic radius and center (\cite{AMS82},\cite{B85},\cite{L81},\cite{L01}) and Hausdorff measure of non-compactness (\cite{BG80},\cite{L88}) as used in functional analysis, especially in various areas of approximation theory (\cite{AMS82}), fixed-point theory (\cite{GK90}), operator theory (\cite{AKPRS92},\cite{PS88}), and Banach space geometry (\cite{DB86},\cite{KV07},\cite{WW96}). However, unlike metric spaces, approach spaces share many of the `structurally good' properties of topological spaces, such as e.g. the easy and canonical formation of product spaces. The structural flexibility of approach theory entails the existence of canonical approach spaces in branches of mathematical analysis such as functional analysis (\cite{LS00},\cite{SV03},\cite{LV04},\cite{SV04},\cite{SV06},\cite{SV07}), hyperspace theory (\cite{LS96},\cite{LS98},\cite{LS00'}), domain theory (\cite{CDL11},\cite{CDL14},\cite{CDS14}), and probability theory and statistics (\cite{BLV11},\cite{BLV13},\cite{BLV16}). A careful study of these approach spaces has resulted in new insights and applications in these branches. In this paper, we will use approach theory to study the relative Hausdorff measure of non-compactness for the well-known Wasserstein metric (\cite{V03}) on the space of probability measures with a finite first moment on a separable and complete metric space. The paper is structured as follows. A brief overview of the basic notions of approach theory is given in Section \ref{sec:AppThy}. For a deep and complete treatment of the topic, we refer the reader to (\cite{Lo}) and (\cite{L15}). 
In Section \ref{sec:WMAP}, we show that the Wasserstein metric is intimately related to approach theory. We introduce a canonical approach structure, called the contractive approach structure, on the set of probability measures with a finite first moment on a separable and complete metric space, and we prove that it is metrizable by the Wasserstein metric. The main ingredients of the proof consist of Dini's theorem, Ascoli's Theorem, and the fact that the class of contractions of a metric space into the real numbers has nice stability properties. The relative Hausdorff measure of non-compactness for the Wasserstein metric is investigated in Section \ref{sec:HMWM}. Approach theory, and more precisely Theorem \ref{thm:AkW} obtained in Section \ref{sec:WMAP}, and Prokhorov's Theorem will turn out to be the essential tools to obtain in Theorem \ref{thm:muUImuH} inequalities between the Hausdorff measure of noncompactness for the Wasserstein metric and a canonical measure of non-uniform integrability. \section{Approach theory}\label{sec:AppThy} \subsection{Approach spaces} Let $X$ be a non-empty set and $[0,\infty]^X$ the collection of maps of $X$ into $[0,\infty]$. For maps $\phi_1,\phi_2 \in [0,\infty]^X$ we shall always interpret their maximum and minimum pointwise, e.g. $\max\{\phi_1,\phi_2\}(x) = \max\{\phi_1(x),\phi_2(x)\}$. A subcollection $\mathcal{A}_0 \subset [0,\infty]^X$ is said to be {\em upwards-directed} iff for all $\phi_1, \phi_2 \in \mathcal{A}_0$ there exists $\phi \in \mathcal{A}_0$ such that $\max\{\phi_1,\phi_2\} \leq \phi$. A {\em functional ideal} on $X$ is an upwards-directed collection $\mathcal{A}_0 \subset [0,\infty]^X$ such that for each $\phi \in [0,\infty]^X$ $$\left(\forall \varepsilon > 0, \forall \omega < \infty, \exists \phi_0 \in \mathcal{A}_0: \min\{\phi,\omega\} \leq \phi_0 + \varepsilon\right) \Rightarrow \phi \in \mathcal{A}_0.$$ Notice that functional ideals are closed under the formation of finite maxima. An {\em approach structure} on $X$ is an assignment $\mathcal{A}$ of a functional ideal $\mathcal{A}_x$ on $X$ to each point $x \in X$ such that for each $x \in X$ and each $\phi \in \mathcal{A}_x$ \begin{itemize} \item[(A1)] $\phi(x) = 0,$ \item[(A2)] $\forall \varepsilon > 0, \forall \omega < \infty, \exists \left(\phi_x\right)_x \in \Pi_{x \in X} \mathcal{A}_x, \forall y,z \in X :$ \begin{displaymath} \min\{\phi(y),\omega\} \leq \phi_x(z) + \phi_z(y) + \varepsilon. \end{displaymath} \end{itemize} If $\mathcal{A}$ is an approach structure on $X$, then $(X,\mathcal{A})$ is called an {\em approach space}. In an approach space $(X,\mathcal{A})$, a map $\phi \in \mathcal{A}_x$ is interpreted as a local distance based at $x$. A {\em basis for an approach structure $\mathcal{A}$} on $X$ is an assignment $\mathcal{B}$ of a collection $\mathcal{B}_{x} \subset \mathcal{A}_x$ to each point $x \in X$ such that for each $x \in X$ \begin{displaymath} \forall \phi \in \mathcal{A}_x, \forall \varepsilon > 0, \forall \omega < \infty, \exists \psi \in \mathcal{B}_{x} : \min\{\phi,\omega\} \leq \psi + \varepsilon. \end{displaymath} We will also say that $\mathcal{B}$ {\em generates} $\mathcal{A}$. The following result provides a common method to introduce approach structures on a set. \begin{pro}\label{pro:ApproachGems} Let $\mathcal{B}$ be an assignment of a non-empty collection $\mathcal{B}_x \subset [0,\infty]^X$ to each point $x \in X$. 
Then there exists a unique approach structure $\mathcal{A}$ on $X$ such that $\mathcal{B}$ is a basis for $\mathcal{A}$ if and only if each $\mathcal{B}_{x}$ is upwards-directed and for each $x \in X$ and $\psi \in \mathcal{B}_{x}$ \begin{itemize} \item[(BA1)] $\psi(x) = 0,$ \item[(BA2)] $\forall \varepsilon > 0, \forall \omega < \infty, \exists \left(\psi_x\right)_x \in \Pi_{x \in X} \mathcal{B}_{x}, \forall y,z \in X : $ \begin{displaymath} \min\{\psi(y),\omega\} \leq \psi_x(z) + \psi_z(y) + \varepsilon. \end{displaymath} \end{itemize} \end{pro} Recall that a {\em metric} on $X$ is a map $m$ of $X \times X$ into $\mathbb{R}^+$ such that \begin{itemize} \item[(M1)] $\forall x \in X : m(x,x) = 0,$ \item[(M2)] $\forall x,y \in X : m(x,y) = m(y,x),$ \item[(M3)] $\forall x,y,z \in X : m(x,y) \leq m(x,z) + m(z,y).$ \end{itemize} Let $m$ be a metric on $X$ and assign to each point $x \in X$ the collection \begin{displaymath} \mathcal{B}_{m,x} = \{m(x,\cdot)\}, \end{displaymath} where $m(x,\cdot)$ stands for the map \begin{displaymath} m(x,\cdot) : X \rightarrow \mathbb{R}^+ : y \mapsto m(x,y). \end{displaymath} Then it follows from Proposition \ref{pro:ApproachGems} that there exists a unique approach structure $\mathcal{A}_m$ on $X$ such that $\left(\mathcal{B}_{m,x}\right)_{x \in X}$ is a basis for $\mathcal{A}_m$. It is not hard to establish that \begin{displaymath} \mathcal{A}_{m,x} = \left\{\phi \in \left(\mathbb{R}^+\right)^X \mid \phi \leq m(x,\cdot)\right\}. \end{displaymath} We call $\mathcal{A}_m$ the {\em approach structure underlying $m$}\index{underlying!approach structure of a metric}. An approach structure $\mathcal{A}$ is said to be {\em metrizable}\index{metrizable approach structure} iff there exists a metric $m$ such that $\mathcal{A} = \mathcal{A}_{m}$. Approach spaces form the appropriate axiomatic setting for a quantitative analysis based on numerical indices measuring to what extent topological properties (such as convergence and compactness) fail to be valid. In the next subsections, we give a brief outline of how such an analysis can be developed. Throughout, $X = \left(X,\mathcal{A}\right)$ will be an approach space. \subsection{The associated topology} For $x \in X$, $\phi \in \mathcal{A}_x$, and $\varepsilon > 0$, we define the {\em $\phi$-ball with center $x$ and radius $\varepsilon$} as the set $$B_{\phi}(x,\varepsilon) = \left\{y \in X \mid \phi(y) < \varepsilon\right\}.$$ More loosely, we will also refer to the latter set as a ball with center $x$ or a ball with radius $\varepsilon$. Consider a point $x \in X$ and a set $A \subset X$. We say that $x$ belongs to the {\em closure of $A$}\index{closure} iff each ball with center $x$ contains a point of $A$, and to the {\em interior of $A$}\index{interior} iff $A$ contains a ball with center $x$. We denote the closure of $A$ as $\overline{A}$, and the interior of $A$ as $A^\circ$. We call a set $A \subset X$ {\em closed}\index{closed set} iff $\overline{A} = A$, and {\em open}\index{open set} iff $A^\circ = A$. It is not hard to establish that the collection of closed sets in $X$ contains $\emptyset$ and $X$ and is closed under the formation of finite unions and arbitrary intersections. Furthermore, a set is open if and only if its complement is closed. In particular, the collection of open sets in $X$ contains $\emptyset$ and $X$ and is closed under the formation of finite intersections and arbitrary unions. 
We infer that the open sets in $X$ define a topology, $\mathcal{T}_{\mathcal{A}}$, which will be called the {\em topology associated with} $\mathcal{A}$. If $\mathcal{A} = \mathcal{A}_m$ for a metric $m$, then the topology associated with $\mathcal{A}$ coincides with the usual topology derived from $m$. In the remainder of this section, each topological notion (such as e.g. convergence and compactness) should be interpreted in the topological space $(X,\mathcal{T}_{\mathcal{A}})$. \subsection{The associated quasimetric} A {\em quasimetric} on $X$ is a map $d$ of $X \times X$ into $\mathbb{R}^+$ which satisfies the properties (M1) and (M3) of a metric. We call $X$ {\em locally bounded} iff for all $(x,y) \in X \times X$ there exists a constant $C > 0$ such that $\phi(y) \leq C$ for all $\phi \in \mathcal{A}_x$. If $X$ is locally bounded, put, for $x,y \in X$, $$d_{\mathcal{A}}(x,y) = \sup_{\phi \in \mathcal{A}_x}\phi(y).$$ One easily verifies that $d_{\mathcal{A}}$ is the smallest quasimetric on $X$ with the property that $\phi \leq d_{\mathcal{A}}(x,\cdot)$ for all $x \in X$ and $\phi \in \mathcal{A}_x$. We call $d_{\mathcal{A}}$ the {\em quasimetric associated with} $\mathcal{A}$. If $\mathcal{A} = \mathcal{A}_m$ for a metric $m$, then $d_{\mathcal{A}} = m$. Where needed, we shall assume that $X$ is locally bounded. \subsection{Convergence} Throughout the paper, we will make use of the convergence theory of nets. For the basic facts about this theory, we refer the reader to \cite{W70}. Consider a net $\left(x_\eta\right)_\eta$ in $X$, a point $x \in X$, and $\varepsilon > 0$. We say that $\left(x_\eta\right)_\eta$ is {\em $\varepsilon$-convergent to $x$} iff each ball $B$ with center $x$ and radius $\varepsilon$ eventually contains $(x_\eta)_\eta$. We define the {\em limit operator of $\left(x_\eta\right)_\eta$ at $x$} as $$\lambda\left(x_\eta \rightarrow x\right) = \inf\left\{\alpha > 0 \mid (x_\eta)_\eta \text{ is $\alpha$-convergent to } x\right\}.$$ Notice that the latter number is zero if and only if $x_\eta \rightarrow x$. If $\mathcal{B}$ is a basis for $\mathcal{A}$, then one easily verifies that $$\lambda(x_\eta \rightarrow x) = \sup_{\phi \in \mathcal{B}_{x}} \limsup_{\eta} \phi(x_\eta).$$ Therefore, if $\mathcal{A} = \mathcal{A}_m$ for a metric $m$, \begin{equation} \lambda(x_\eta \rightarrow x) = \limsup_{\eta} m(x,x_\eta).\label{eq:LimOpM} \end{equation} Notice that if $(x_\eta)_\eta$ is convergent (in the topological space $(X,\mathcal{T}_\mathcal{A})$), then $\myinf_{x \in X} \lambda(x_\eta \rightarrow x) = 0$. We call $X$ {\em complete} iff the converse holds, i.e. if each net $(x_\eta)_\eta$ with the property $\myinf_{x \in X} \lambda(x_\eta \rightarrow x) = 0$ is convergent. If $\mathcal{A} = \mathcal{A}_m$ for a metric $m$, then $\mathcal{A}$ is complete if and only if $m$ is complete in the classical sense. \subsection{Compactness}\label{subsec:compactness} If $\left(\Phi = \left(\phi_x\right)_x\right) \in \Pi_{x \in X} \mathcal{A}_x,$ then a set $B \subset X$ is called a {\em $\Phi$-ball} iff there exist $x \in X$ and $\alpha > 0$ such that $B = B_{\phi_x}(x,\alpha)$. Let $A \subset X$. We say that a collection $\mathcal{V}$ of sets $V \subset X$ {\em covers} $A$ iff $A \subset \cup_{V \in \mathcal{V}} V$. 
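To keep the metric case in mind, we record an elementary observation (immediate from the description of $\mathcal{A}_{m,x}$ given above): in a metric approach space $(X,\mathcal{A}_m)$ every $\phi_x \in \mathcal{A}_{m,x}$ satisfies $\phi_x \leq m(x,\cdot)$, so each $\Phi$-ball $B_{\phi_x}(x,\alpha)$ contains the ordinary metric ball $\left\{y \in X \mid m(x,y) < \alpha\right\}$. In particular, a finite cover of a set by metric balls with radius $\alpha$ is automatically a finite cover by $\Phi$-balls with radius $\alpha$, whatever the choice of $\Phi$.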
We call $A$ {\em$\varepsilon$-relatively compact} iff it holds for each $\Phi \in \Pi_{x \in X} \mathcal{A}_x$ that $A$ can be covered with finitely many $\Phi$-balls with radius $\varepsilon$, and we define the {\em relative measure of non-compactness of $A$} as $$\mu_{rc}(A) = \inf \{\alpha > 0 \mid \textrm{$A$ is $\alpha$-relatively compact}\}.$$ If $\mathcal{B}$ is a basis for $\mathcal{A}$, then $$\mu_{rc}(A) = \sup_{\left(\phi_x\right)_x \in \Pi_{x \in X} \mathcal{B}_{x}} \myinf_{\substack{Y \subset X\\\textrm{finite}}} \sup_{a \in A} \myinf_{y \in Y} \phi_y(a).$$ Therefore, if $\mathcal{A} = \mathcal{A}_m$ for a metric $m$, $$\mu_{rc}(A) = \myinf_{\substack{Y \subset X\\\textrm{finite}}} \sup_{a \in A} \myinf_{y \in Y} m(y,a),$$ which is known as the {\em relative Hausdorff measure of non-compactness for $m$} (\cite{BG80},\cite{L88},\cite{WW96}). The following result links the relative measure of non-compactness to the limit operator. \begin{thm}\label{thm:compactness} For $A \subset X$, \begin{displaymath} \mu_{rc}(A) = \sup_{(x_\eta)_\eta} \myinf_{(x_{h(\eta^\prime)})_{\eta^\prime}} \myinf_{x \in X} \lambda(x_{h(\eta^\prime)} \rightarrow x), \end{displaymath} the supremum taken over all nets $(x_\eta)_\eta$ in $A$, and the first infimum over all subnets $(x_{h(\eta^\prime)})_{\eta^\prime}$ of $(x_\eta)_\eta$. Furthermore, if $A$ is relatively compact, then $\mu_{rc}(A) = 0$, and the converse holds if $X$ is complete. \end{thm} \subsection{Contractions} Let $f$ be a map of $X$ into an approach space $Y = \left(Y, \mathcal{A}^\prime\right)$ and $x \in X$. We say that $f$ is {\em contractive at $x$} iff $$\forall \phi \in \mathcal{A}_{f(x)}^\prime : \phi \circ f \in \mathcal{A}_x,$$ and that $f$ is a {\em contraction} iff it is contractive at every $x \in X$. The following result characterizes the contractive property in terms of the limit operator. \begin{thm}\label{pro:CharacContraction} A map $f : X \rightarrow Y$ is contractive at $x$ if and only if for every net $\left(x_\eta\right)_\eta$ in $X$ $$\lambda\left(f(x_\eta) \rightarrow f(x)\right) \leq \lambda\left(x_\eta \rightarrow x\right).$$ \end{thm} One easily derives from Theorem \ref{pro:CharacContraction} that contractions between approach spaces are continuous, and that, if $\mathcal{A} = \mathcal{A}_m$ for a metric $m$ and $\mathcal{A}^\prime = \mathcal{A}_{m^\prime}$ for a metric $m^\prime$, a map $f : X \rightarrow Y$ is contractive if and only if it is contractive in the metric sense, i.e. $m^\prime(f(x),f(y)) \leq m(x,y)$ for all $x,y \in X$. \subsection{Weak approach structures} Let $\mathcal{A}^\prime$ be an approach structure on $X$. We say that {\em $\mathcal{A}^\prime$ is weaker than $\mathcal{A}$} or, equivalently, that {\em $\mathcal{A}$ is stronger than $\mathcal{A}^\prime$} iff the identity map \begin{displaymath} 1_X : \left(X,\mathcal{A}\right) \rightarrow \left(X,\mathcal{A}^\prime\right) : x \mapsto x \end{displaymath} is a contraction. Notice that $\mathcal{A}^\prime$ is weaker than $\mathcal{A}$ if and only if $\mathcal{A}_x^\prime \subset \mathcal{A}_x$ for each $x \in X$. 
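As a simple illustration, if $m_1$ and $m_2$ are metrics on $X$ with $m_1 \leq m_2$ pointwise, then $\mathcal{A}_{m_1,x} = \left\{\phi \in \left(\mathbb{R}^+\right)^X \mid \phi \leq m_1(x,\cdot)\right\} \subset \left\{\phi \in \left(\mathbb{R}^+\right)^X \mid \phi \leq m_2(x,\cdot)\right\} = \mathcal{A}_{m_2,x}$ for each $x \in X$, so $\mathcal{A}_{m_1}$ is weaker than $\mathcal{A}_{m_2}$; this corresponds to the fact that the identity map $(X,m_2) \rightarrow (X,m_1)$ is contractive in the metric sense.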
It easily follows from Theorem \ref{pro:CharacContraction} that \begin{thm}\label{thm:characweak} $\mathcal{A}^\prime$ is weaker than $\mathcal{A}$ if and only if, for each net $\left(x_\eta\right)_\eta$ in $X$ and each point $x \in X$, $$\lambda_{\mathcal{A}^\prime}(x_\eta \rightarrow x) \leq \lambda_{\mathcal{A}}(x_\eta \rightarrow x).$$ In particular, $\mathcal{A}^\prime = \mathcal{A}$ if and only if, for each net $\left(x_\eta\right)_\eta$ in $X$ and each point $x \in X$, $$\lambda_{\mathcal{A}^\prime}(x_\eta \rightarrow x) = \lambda_{\mathcal{A}}(x_\eta \rightarrow x).$$ \end{thm} Many interesting approach structures arise naturally in various areas of mathematical analysis. This is mainly due to the fact that approach spaces were designed in such a way that they allow for a lot of `structural flexibility'. This metaprinciple is captured by the following theorem (\cite{Lo}). \begin{thm}\label{thm:StructureThmApp} Consider a set $Y$ and an indexed collection of maps $$\left(f_i : Y \rightarrow Y_i\right)_{i \in I},$$ where each $Y_i = \left(Y_i,\mathcal{A}_i\right)$ is an approach space. Then there exists a weakest approach structure $\mathcal{A}_{w}$ on $Y$ with the property that each map $$f_i: \left(Y,\mathcal{A}_{w}\right) \rightarrow Y_i$$ is contractive. Moreover, putting for $y \in Y$ $$\mathcal{B}_{y} = \left\{\max_{k \in K} \phi_k \circ f_k \mid K \subset I \textrm{ finite}, \forall k \in K : \phi_k \in \mathcal{A}_{k,f_k(y)}\right\},$$ it follows that $\left(\mathcal{B}_y\right)_{y \in Y}$ is a basis for $\mathcal{A}_w$. Finally, $\mathcal{A}_w$ is characterized by the property that it holds for every map $$f : Z \rightarrow \left(Y,\mathcal{A}_w\right),$$ where $Z$ is an approach space, that $f$ is contractive if and only if each map $$f_i \circ f : Z \rightarrow Y_i$$ is contractive. \end{thm} We call the approach structure $\mathcal{A}_w$ in Theorem \ref{thm:StructureThmApp} the {\em weak approach structure for the collection of maps} $$\left(f_i : Y \rightarrow Y_i\right)_{i \in I}.$$ Let $\lambda_i$ stand for the limit operator in the space $Y_i$ and $\lambda_w$ for the limit operator in the space $\left(Y,\mathcal{A}_w\right)$. Then \begin{gev}\label{gev:InAppLimOp} For a net $\left(y_\eta\right)_\eta$ in $Y$ and a point $y \in Y$, $$\lambda_w(y_\eta \rightarrow y) = \sup_{i \in I} \lambda_i(f_i\left(y_\eta\right) \rightarrow f_i\left(y\right)).$$ In particular, $$y_\eta \rightarrow y \textrm{ in $\left(Y,\mathcal{A}_w\right)$ }\Leftrightarrow \forall i \in I : f_i\left(y_\eta\right) \rightarrow f_i(y) \textrm{ in $Y_i$}.$$ \end{gev} It is important to notice that the weak approach structure for a collection of maps of a set into metrizable approach spaces in general fails to be metrizable. \section{The Wasserstein metric and approach theory}\label{sec:WMAP} Let $S = (S,d)$ be a separable and complete metric space, $\mathcal{C}_b(S,\mathbb{R})$ the collection of all maps $f$ of $S$ into $\mathbb{R}$ which are bounded and continuous, and $\mathcal{P}(S)$ the set of all Borel probability measures on $S$. Recall that a net $(P_\eta)_\eta$ in $\mathcal{P}(S)$ is said to {\em converge weakly} to $P \in \mathcal{P}(S)$ iff $$\forall f \in \mathcal{C}_b(S,\mathbb{R}) : \int_S f dP_\eta \rightarrow\int_S f dP.$$ We write $P_\eta \stackrel{w}{\rightarrow} P$ to indicate that $(P_\eta)_\eta$ converges weakly to $P$. It is well known that weak convergence corresponds to convergence with respect to the {\em weak topology} on $\mathcal{P}(S)$ (\cite{B99},\cite{P67}).
The latter is separable and completely metrizable. Furthermore, let $\kappa(S,\mathbb{R})$ be the collection of all maps $f$ of $S$ into $\mathbb{R}$ which are contractive, i.e. for which the inequality $$\left|f(x) - f(y)\right| \leq d(x,y)$$ holds for all $x, y \in S$, and let $\mathcal{P}^1(S)$ be the set of all $P \in \mathcal{P}(S)$ with a finite first moment, i.e. for which $$\int_S d(a,\cdot) dP < \infty$$ for a certain (and thus, by the triangle inequality, for all) $a \in S$, where $d(a,\cdot)$ stands for the map $$d(a,\cdot) : S \rightarrow \mathbb{R}^+ : x \mapsto d(a,x).$$ Note that for $f \in \kappa(S,\mathbb{R})$, $P \in \mathcal{P}^1(S)$, and $a \in S$, $$\int_S \left|f\right| dP \leq \left|f(a)\right| + \int_S d(a,\cdot) dP < \infty,$$ from which we infer that $\int_S f dP$ is well-defined. The {\em Wasserstein metric} on $\mathcal{P}^1(S)$, see e.g. \cite{V03}, is defined by the formula \begin{equation*} W(P,Q) = \myinf_{\pi} \int_{S \times S} d(x,y) d\pi(x,y), \end{equation*} where the infimum runs over all Borel probability measures $\pi$ on $S \times S$ with first marginal $P$ and second marginal $Q$. By Kantorovich duality theory (\cite{V03},\cite{E10},\cite{E11}), the alternative formula \begin{equation} W(P,Q) = \sup_{f \in \kappa(S,\mathbb{R})} \left|\int_S f dP - \int_S f dQ\right|\label{eq:defWass} \end{equation} holds. Furthermore, if $S = \mathbb{R}$, then \begin{equation*} W(P,Q) = \int_{-\infty}^\infty \left|F_P(x) - F_Q(x)\right| dx \end{equation*} with $F_P$ (respectively $F_Q$) the cumulative distribution function of $P$ (respectively $Q$). The topology underlying the Wasserstein metric is stronger than the weak topology. More precisely, for a net $(P_\eta)_\eta$ in $\mathcal{P}^1(S)$ and $P \in \mathcal{P}^1(S)$, the following are equivalent (\cite{V03}): \begin{enumerate} \item $W(P,P_\eta) \rightarrow 0,$ \item $P_\eta \stackrel{w}{\rightarrow} P$ and $\forall a \in S : \int_S d(a,\cdot) dP_\eta \rightarrow \int_S d(a,\cdot) dP$. \end{enumerate} Also, the Wasserstein metric is separable and complete, see e.g. \cite{B08}. \begin{pro} Let $\mathcal{A}_W$ be the underlying approach structure of the Wasserstein metric $W$. Then, in the approach space $(\mathcal{P}^1(S),\mathcal{A}_W)$, the limit operator of a net $(P_\eta)_\eta$ at a point $P$ is given by $$\lambda_W(P_\eta \rightarrow P) = \limsup_{\eta} \sup_{f \in \kappa(S,\mathbb{R})} \left|\int f dP - \int f dP_\eta\right|.$$ \end{pro} \begin{proof} This follows from (\ref{eq:LimOpM}) and (\ref{eq:defWass}). \end{proof} Inspired by formula (\ref{eq:defWass}), we introduce the {\em contractive approach structure} $\mathcal{A}_\kappa$ on $\mathcal{P}^1(S)$ as the weak approach structure for the collection of maps $$\left(\mathcal{P}^1(S) \rightarrow \mathbb{R} : P \mapsto \int f dP\right)_{f \in \kappa(S,\mathbb{R})},$$ where, of course, the approach structure underlying the Euclidean metric is considered on $\mathbb{R}$. \begin{pro}\label{pro:limopkappa} In the approach space $(\mathcal{P}^1(S),\mathcal{A}_\kappa)$, the limit operator of a net $(P_\eta)_\eta$ at a point $P$ is given by $$\lambda_\kappa(P_\eta \rightarrow P) = \sup_{f \in \kappa(S,\mathbb{R})} \limsup_{\eta} \left|\int f dP - \int f dP_\eta\right|.$$ \end{pro} \begin{proof} This follows from the definition of $\mathcal{A}_\kappa$ and Corollary \ref{gev:InAppLimOp}. \end{proof} The main goal of this section is to establish the following result.
\begin{thm}\label{thm:AkW} The Wasserstein metric metrizes the contractive approach structure, which, by Theorem \ref{thm:characweak}, is equivalent with the assertion that, for each net $(P_\eta)_\eta$ in $\mathcal{P}^1(S)$ and each $P \in \mathcal{P}^1(S)$, $$\lambda_\kappa(P_\eta \rightarrow P) = \lambda_{W}(P_\eta \rightarrow P),$$ or, more explicitly, \begin{equation*} \sup_{f \in \kappa(S,\mathbb{R})} \limsup_{\eta} \left|\int f dP - \int f dP_\eta\right| = \limsup_{\eta} \sup_{f \in \kappa(S,\mathbb{R})} \left|\int f dP - \int f dP_\eta\right|. \end{equation*} \end{thm} For the proof of Theorem \ref{thm:AkW}, the authors were inspired by \cite{E10} and \cite{E11}, in particular Section 3 in \cite{E10}. We first give four lemmas. Fix $a \in S$ and let $\kappa_a(S,\mathbb{R})$ be the set of those $f \in \kappa(S,\mathbb{R})$ for which $f(a) = 0$. \begin{lem}\label{lem:AbsBecomesNormal} For $P,Q \in \mathcal{P}^1(S)$, \begin{equation} W(P,Q) = \sup_{f \in \kappa_a(S,\mathbb{R})} \left(\int f dP - \int f dQ\right),\label{eq:NewFormW} \end{equation} and, for a net $(P_\eta)_\eta$ in $\mathcal{P}^1(S)$ and $P \in \mathcal{P}^1(S)$, \begin{equation} \lambda_\kappa(P_\eta \rightarrow P) = \sup_{f \in \kappa_a(S,\mathbb{R})} \limsup_{\eta} \left(\int f dP_\eta - \int f dP\right).\label{eq:NewFormLambdaKappa} \end{equation} \end{lem} \begin{proof} For $g \in \kappa(S,\mathbb{R})$, put $$g_a = g - g(a).$$ Then $g_a \in \kappa_a(S,\mathbb{R})$ and $$\int g dP - \int g dQ = \int g_a dP - \int g_a dQ.$$ Thus, from (\ref{eq:defWass}) we now learn that $$W(P,Q) = \sup_{f \in \kappa_a(S,\mathbb{R})} \left|\int f dP - \int f dQ\right|.$$ But then (\ref{eq:NewFormW}) follows from the fact that, for any $f : S \rightarrow \mathbb{R}$, $f \in \kappa_a(S,\mathbb{R})$ if and only if $-f \in \kappa_a(S,\mathbb{R})$. By the same observations, (\ref{eq:NewFormLambdaKappa}) is established. \end{proof} \begin{lem}\label{lem:seqsup} For each $\varepsilon > 0$, there exists a net $(f_\eta)_{\eta \in D}$ in $\kappa_a(S,\mathbb{R})$, a directed set $D'$, and a monotonically increasing and cofinal map $h : D' \rightarrow D$ such that \begin{equation*} \lambda_{W}(P_\eta \rightarrow P) - \varepsilon \leq \lim_{\eta^\prime} \left(\int f_{h(\eta')} dP - \int f_{h(\eta')} dP_{h(\eta')}\right) \leq \lambda_{W}(P_\eta \rightarrow P). \end{equation*} \end{lem} \begin{proof} By Lemma \ref{lem:AbsBecomesNormal}, choose, for each $\eta \in D$, $f_\eta \in \kappa_a(S,\mathbb{R})$ such that $$W(P,P_\eta) - \varepsilon \leq \int f_\eta dP - \int f_\eta dP_\eta \leq W(P,P_\eta) .$$ Then $$\lambda_{W}(P_\eta \rightarrow P) - \varepsilon \leq \limsup_{\eta} \left(\int f_\eta dP - \int f_\eta dP_\eta \right) \leq \lambda_{W}(P_\eta \rightarrow P).$$ Since $\limsup_{\eta} \left(\int f_\eta dP - \int f_\eta dP_\eta \right)$ is the largest accumulation point of the net $\left(\int f_\eta dP - \int f_\eta dP_\eta \right)_{\eta \in D}$, we can find $D^\prime$ and $h$ with the desired properties. \end{proof} The following lemma is well-known, but we include a proof for the sake of completeness. \begin{lem}\label{lem:supcontractive} Let $(f_i)_{i \in I}$ be a collection of contractions of $S$ into $\mathbb{R}$ such that for each $x \in S$ the set $\left\{f_{i}(x) \mid i \in I\right\}$ is bounded. Then the maps $\sup_{i \in I} f_i$ and $\myinf_{i \in I} f_i$ are also contractions. \end{lem} \begin{proof} Fix $x, y \in S$. 
For each $i \in I$, $f_i$ being a contraction, $$f_i(x) \leq f_i(y) + d(x,y),$$ whence $$\sup_{i \in I} f_i(x) \leq \sup_{i \in I} f_i(y) + d(x,y),$$ and, reversing the role of $x$ and $y$, also $$\sup_{i \in I} f_i(y) \leq \sup_{i \in I} f_i(x) + d(x,y),$$ from which we conclude that $\sup_{i \in I} f_i$ is a contraction. Analogously, one proves that $\myinf_{i \in I} f_i$ is a contraction. \end{proof} \begin{lem}\label{lem:DiniApp} Let $K \subset S$ be compact, $(g_\eta)_{\eta}$ a net of contractions of $K$ into $\mathbb{R}$, and $g$ a contraction of $K$ into $\mathbb{R}$. Suppose that $$\mylim_{\eta} \sup_{x \in K} \left|g(x) - g_\eta(x)\right| = 0.$$ Then the net $(g_\eta)_{\eta}$ is uniformly bounded on $K$, and, for each $\eta$, the maps $\sup_{\zeta \succeq \eta} g_\zeta$ and $\myinf_{\zeta \succeq \eta} g_\zeta$ are contractions. Furthermore, \begin{equation*} \mylim_{\eta} \sup_{x \in K} \left|\left(\sup_{\zeta \succeq \eta} g_\zeta\right)(x) - \left(\myinf_{\zeta \succeq \eta} g_\zeta \right)(x)\right| = 0. \end{equation*} \end{lem} \begin{proof} By Lemma \ref{lem:supcontractive}, for each $ \eta$, the maps $\sup_{\zeta \succeq \eta} g_\zeta$ and $\myinf_{\zeta \succeq \eta} g_\zeta$ are contractions. Furthermore, for each $x \in K$, $g(x) = \lim_{\eta}g_\eta(x),$ whence $$g(x) = \limsup_\eta g_\eta(x) = \liminf_\eta g_\eta(x). $$ In particular, the net $\left(\sup_{\zeta \succeq \eta} g_\zeta\right)_\eta$ is monotonically decreasing to $g$, and the net $\left(\myinf_{\zeta \succeq \eta} g_\zeta\right)_\eta$ is monotonically increasing to $g$. Therefore, the net $\left(\sup_{\zeta \succeq \eta} g_\zeta - \myinf_{\zeta \succeq \eta} g_\zeta\right)_\eta$ is monotonically decreasing to $0$. Now the proof is finished by Dini's Theorem (\cite{K75},\cite{KR01}). \end{proof} We now provide the proof of Theorem \ref{thm:AkW}. \begin{proof}[Proof of Theorem \ref{thm:AkW}] Fix a net $(P_\eta)_{\eta \in D}$ in $\mathcal{P}^1(S)$ and $P \in \mathcal{P}^1(S)$. One readily sees that $$\lambda_\kappa(P_\eta \rightarrow P) \leq \lambda_W(P_\eta \rightarrow P).$$ We now establish the reverse inequality, which will finish the proof. Fix $\varepsilon > 0$. According to Lemma \ref{lem:seqsup}, choose a net $(f_\eta)_{\eta \in D}$ in $\kappa_a(S,\mathbb{R})$, a directed set $D'$, and a monotonically increasing and cofinal map $h : D' \rightarrow D$ such that \begin{equation} \lambda_{W}(P_\eta \rightarrow P) - \varepsilon \leq \lim_{\eta'} \left(\int f_{h(\eta')} dP - \int f_{h(\eta')} dP_{h(\eta')}\right) \leq \lambda_{W}(P_\eta \rightarrow P).\label{eq:LWStarSeq} \end{equation} Furthermore, finite Borel measures on a separable and completely metrizable topological space being tight (\cite{P67}), take a compact set $K \subset S$ such that \begin{equation} \int_{S \setminus K} d(a,\cdot) dP < \varepsilon,\label{eq:outsideK} \end{equation} and assume without loss of generality that $a \in K$.
By Ascoli's Theorem (\cite{W70}), the collection $\kappa_a(K,\mathbb{R})$ is uniformly compact, which allows us to fix $f \in \kappa_a(K,\mathbb{R})$, a directed set $D''$, and a monotonically increasing and cofinal map $h' : D'' \rightarrow D'$ such that, putting \begin{equation} k = h \circ h^\prime,\label{eq:defk} \end{equation} \begin{equation} \mylim_{\eta^{\prime \prime}} \sup_{x \in K} \left|f(x) - f_{k(\eta'')}(x)\right| = 0.\label{eq:AscApp} \end{equation} Notice that, for all $\eta^{\prime \prime} \in D^{\prime \prime}$, since $f_{k(\eta^{\prime \prime})} \in \kappa_a(S,\mathbb{R})$, $\left|f_{k(\eta^{\prime \prime})}\right| \leq d(a,\cdot)$, whence, by Lemma \ref{lem:supcontractive}, $\sup_{\zeta^{\prime \prime} \succeq {\eta''}} f_{k(\zeta^{\prime \prime})}$ and $\myinf_{\zeta'' \succeq \eta''} f_{k(\zeta'')}$ also belong to $\kappa_a(S,\mathbb{R})$. Furthermore, by (\ref{eq:AscApp}) and Lemma \ref{lem:DiniApp}, \begin{equation*} \mylim_{\eta^{\prime \prime}} \sup_{x \in K} \left|\left(\sup_{\zeta'' \succeq \eta''} f_{k(\zeta'')}\right)(x) - \left(\myinf_{\zeta'' \succeq \eta''} f_{k(\zeta'')}\right)(x)\right| = 0, \end{equation*} so that we can fix $\eta^{\prime \prime}_0 \in D^{\prime \prime}$ such that \begin{equation} \sup_{x \in K} \left|\left(\sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}\right)(x) - \left(\myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}\right)(x)\right| < \varepsilon.\label{eq:unifcvcon} \end{equation} For each $\eta^{\prime \prime} \in D^{\prime \prime}$ with $\eta^{\prime \prime} \succeq \eta^{\prime \prime}_0$, \begin{eqnarray} \lefteqn{\left(\int f_{k(\eta'')} dP - \int f_{k(\eta'')} dP_{k(\eta'')}\right)}\label{eq:startbigineq}\\ &\leq& \int \sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} dP - \int \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} dP_{k(\eta'')}\nonumber\\ &=& \int \left(\sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} - \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}\right) dP\nonumber\\ && + \left( \int \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} dP - \int \myinf_{\zeta''\succeq \eta''_0} f_{k(\zeta'')} dP_{k(\eta'')}\right)\nonumber\\ &=& \int_{K} \left(\sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} - \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}\right) dP\nonumber\\ && + \int_{S \setminus K} \left(\sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} - \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}\right) dP\nonumber\\ && + \left( \int \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} dP - \int \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} dP_{k(\eta'')}\right).\nonumber \end{eqnarray} By (\ref{eq:unifcvcon}), \begin{equation} \int_{K} \left(\sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} - \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}\right) dP < \varepsilon.\label{eq:onK} \end{equation} Since $\sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}$ and $\myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}$ belong to $\kappa_a(S,\mathbb{R})$, and using (\ref{eq:outsideK}), \begin{equation} \int_{S \setminus K} \left(\sup_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} - \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}\right) dP \leq 2 \int_{S \setminus K} d(a,\cdot) dP < 2 \varepsilon.\label{eq:AppOutsideK} \end{equation} Plugging in (\ref{eq:onK}) and (\ref{eq:AppOutsideK}) in (\ref{eq:startbigineq}), taking superior limits, and again using the fact that $\myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')}$ belongs to $\kappa_a(S,\mathbb{R})$, \begin{eqnarray} \lefteqn{\limsup_{\eta^{\prime \prime}} \left(\int f_{k(\eta'')} dP - \int 
f_{k(\eta'')} dP_{k(\eta'')}\right)} \label{eq:AlmostDone}\\ &\leq&3 \varepsilon + \limsup_{\eta^{\prime \prime}} \left( \int \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} dP - \int \myinf_{\zeta'' \succeq \eta''_0} f_{k(\zeta'')} dP_{k(\eta'')}\right)\nonumber\\ &\leq& 3 \varepsilon + \lambda_\kappa(P_{k(\eta'')} \rightarrow P)\nonumber\\ &\leq& 3 \varepsilon + \lambda_\kappa(P_\eta \rightarrow P).\nonumber \end{eqnarray} Using (\ref{eq:defk}) and the fact that a subnet of a convergent net converges to the same limit point, \begin{eqnarray*} \lefteqn{\limsup_{\eta^{\prime \prime}} \left(\int f_{k(\eta'')} dP - \int f_{k(\eta'')} dP_{k(\eta'')}\right)}\\ && = \lim_{\eta' } \left(\int f_{h(\eta')} dP - \int f_{h(\eta')} dP_{h(\eta')}\right), \end{eqnarray*} which, by (\ref{eq:LWStarSeq}) and (\ref{eq:AlmostDone}), yields \begin{equation*} \lambda_{W}(P_\eta \rightarrow P) \leq \lambda_{\kappa}(P_{\eta} \rightarrow P) + 4\varepsilon. \end{equation*} This finishes the proof by arbitrariness of $\varepsilon > 0$. \end{proof} \section{The relative Hausdorff measure of non-compactness for the Wasserstein metric}\label{sec:HMWM} In a complete metric space $(X,m)$, the {\em relative Hausdorff measure of non-compactness} of a set $A \subset X$ (\cite{BG80},\cite{WW96}) is given by \begin{equation*} \mu_{\text{\upshape{H}},m}(A) = \myinf_{\substack{Y \subset X\\\textrm{finite}}} \sup_{a \in A} \myinf_{y \in Y} m(y,a), \end{equation*} and this measure coincides with the relative measure of non-compactness for the approach structure underlying $m$ (Subsection \ref{subsec:compactness}). One readily verifies that $A$ is $m$-bounded if and only if $\mu_{\textrm{\upshape{H}},m}(A) < \infty$. Furthermore, by Theorem \ref{thm:compactness}, $A \subset X$ is $m$-relatively compact if and only if $\mu_{\textrm{\upshape{H}},m}(A) = 0$. The relative (Hausdorff) measure of non-compactness of a set of probability measures was studied for the weak approach structure in \cite{BLV11}, for the continuity approach structure in \cite{B16}, and for the parametrized Prokhorov metric in \cite{B16I}. Here we are interested in finding a meaningful expression for the relative Hausdorff measure of non-compactness for the Wasserstein metric. More precisely, keeping the notation from the previous section, we will study, for a set $\Gamma \subset \mathcal{P}^1(S)$, the number \begin{equation} \mu_{\text{\upshape H},W}(\Gamma) = \myinf_{\substack{\Phi \subset \mathcal{P}^1(S)\\\textrm{finite}}} \sup_{P \in \Gamma} \myinf_{Q \in \Phi} W(P,Q).\label{eq:HMWdef} \end{equation} In doing so, we will make use of the following corollary of Theorem \ref{thm:AkW}. \begin{thm} For $\Gamma \subset \mathcal{P}^1(S)$, \begin{equation} \mu_{\text{\upshape H},W}(\Gamma) = \sup_{(P_\eta)_\eta} \myinf_{(P_{h(\eta^\prime)})_{\eta^\prime}} \myinf_{P \in \mathcal{P}^1(S)} \sup_{f \in \kappa(S,\mathbb{R})} \limsup_{\eta^\prime} \left|\int f dP - \int f dP_{h(\eta')}\right|,\label{eq:ExprHMW} \end{equation} the supremum taken over all nets $(P_\eta)_\eta$ in $\Gamma$, and the first infimum over all subnets $(P_{h(\eta^\prime)})_{\eta^\prime}$ of $(P_\eta)_\eta$. \end{thm} \begin{proof} By Theorem \ref{thm:AkW}, the contractive approach structure $\mathcal{A}_\kappa$ is metrized by the Wasserstein metric $W$. 
Therefore, $\mu_{\text{\upshape{H}},W}$ coincides with the relative measure of non-compactness for $\mathcal{A}_\kappa$, which, by Theorem \ref{thm:compactness} and Proposition \ref{pro:limopkappa}, coincides with the right-hand side of (\ref{eq:ExprHMW}). \end{proof} We say that a collection $\Gamma \subset \mathcal{P}^1(S)$ is {\em uniformly integrable} iff there exists $a \in S$ such that for each $\varepsilon > 0$ there exists a bounded set $B \subset S$ such that for each $P \in \Gamma$ $$\int_{S \setminus B} d(a,\cdot) dP < \varepsilon,$$ and we define the {\em measure of non-uniform integrability} by $$\mu_{\text{\upshape{UI}}}(\Gamma) = \myinf_{a \in S} \myinf_{B} \sup_{P \in \Gamma} \int_{S \setminus B} d(a,\cdot) dP,$$ the second infimum taken over all bounded sets $B \subset S$. Of course, $\mu_{\text{\upshape{UI}}}(\Gamma) = 0$ if and only if $\Gamma$ is uniformly integrable. For $a \in S$ and $R \in \mathbb{R}^+_0$, let $B(a,R)$ stand for the {\em open ball with center $a$ and radius $R$}, that is $$B(a,R) = \{x \in S \mid d(a,x) < R\},$$ and $B^\star(a,R)$ for the {\em closed ball with center $a$ and radius $R$}, that is $$B^\star(a,R) = \{x \in S \mid d(a,x) \leq R\}.$$ \begin{lem}\label{lem:Sioen} Fix $a \in S$ and $R \in \mathbb{R}^+_0$. Define the map $\phi_{a,R} : S \rightarrow \mathbb{R}$ by \begin{displaymath} \phi_{a,R}(x) = \left\{\begin{array}{clrr} \left(1 - \frac{R}{d(a,x)}\right)^+ &\textrm{ if }& x \neq a\\ 0 &\textrm{ if }& x = a \end{array}\right.. \end{displaymath} Then, for each $0 < \varepsilon < 1$, \begin{equation*} (1 - \varepsilon) 1_{S \setminus B^\star(a,R/\varepsilon)} \leq \phi_{a,R} \leq 1_{S \setminus B^\star(a,R)}, \end{equation*} and the map $\psi_{a,R} : S \rightarrow \mathbb{R}$, defined by $$\psi_{a,R}(x) = \phi_{a,R}(x) d(a,x),$$ is a contraction. \end{lem} \begin{proof} This follows by straightforward verification. \end{proof} \begin{lem}\label{lem:fRgR} Let $a \in S$ and $f \in \kappa_a(S,\mathbb{R})$. Put, for each $R \in \mathbb{R}^+_0$, $$f_R = (f \wedge R) \vee (-R) \text{ and } g_R = f - f_R.$$ Then $f_R \rightarrow f$ pointwise as $R \rightarrow \infty$, and, for each $R \in \mathbb{R}^+_0$, the following assertions hold. \begin{itemize} \item[(a)] $f_R \in \mathcal{C}_b(S,\mathbb{R}),$ \item[(b)] $g_R \in \kappa_a(S,\mathbb{R}),$ \item[(c)] $f = f_R + g_R,$ \item[(d)] $\left|f_R\right| \leq \left|f\right|,$ \item[(e)] $g_R = g_R 1_{S \setminus B^\star(a,R)}.$ \end{itemize} \end{lem} \begin{proof} (a), (c), and (d) are trivial. To prove (b), notice that \begin{displaymath} f_R(x) = \left\{\begin{array}{clrrr} - R &\textrm{ if }& f(x) < -R\\ f(x) &\textrm{ if }& -R \leq f(x) \leq R\\ R &\textrm{ if }& R < f(x) \end{array}\right. \end{displaymath} and \begin{equation} g_R(x) = \left\{\begin{array}{clrrr} f(x) + R &\textrm{ if }& f(x) < -R\\ 0 &\textrm{ if }& -R \leq f(x) \leq R\\ f(x) - R &\textrm{ if }& R < f(x) \end{array}\right..\label{eq:gR} \end{equation} Take $x, y \in S$.
If $f(x) < - R$ and $- R \leq f(y) \leq R$, then, $f$ being a contraction, $$-d(x,y) \leq f(x) - f(y) \leq f(x) + R = g_R(x) - g_R(y) = f(x) + R < 0,$$ from which we conclude that $$\left|g_R(x) - g_R(y)\right| \leq d(x,y).$$ If $f(x) < - R$ and $f(y) > R$, then, $f$ being a contraction, \begin{eqnarray*} 0 < f(y) - f(x) - 2R &=& g_R(y) - g_R(x)\\ &=& f(y) - f(x) - 2R < f(y) - f(x) \leq d(x,y), \end{eqnarray*} from which we again conclude that $$\left|g_R(x) - g_R(y)\right| \leq d(x,y).$$ The other cases are dealt with analogously, which finishes the proof of (b). To establish (e), observe that, since $f \in \kappa_a(S,\mathbb{R})$, $\left|f\right| \leq d(a,\cdot)$. Therefore, if $x \in B^\star(a,R)$, then $-R \leq f(x) \leq R$, whence, by (\ref{eq:gR}), $g_R(x) = 0$. This proves (e). \end{proof} We will now prove the main result of this section. Recall that a set $\Gamma \subset \mathcal{P}(S)$ is said to be {\em tight} iff for each $\varepsilon > 0$ there exists a compact set $K \subset S$ such that $P(S \setminus K) < \epsilon$ for each $P \in \Gamma$. \begin{thm}\label{thm:muUImuH} For $\Gamma \subset \mathcal{P}^1(S)$, \begin{equation} \mu_{\text{\upshape{UI}}}(\Gamma) \leq \mu_{\text{\upshape{H}},W}(\Gamma).\label{eq:lbW} \end{equation} Moreover, the inequality in (\ref{eq:lbW}) becomes an equality if $\Gamma$ is tight. \end{thm} \begin{proof} Suppose that, for $\alpha > 0$, \begin{equation} \mu_{\text{\upshape{H}},W}(\Gamma) < \alpha,\label{eq:hwalpha} \end{equation} and fix $a \in S$ and $0 < \varepsilon < 1$. By (\ref{eq:defWass}) and (\ref{eq:HMWdef}), there exists a finite set $\Phi \subset \mathcal{P}^1(S)$ such that for each $P \in \Gamma$ there exists $Q \in \Phi$ with the property that \begin{equation} \sup_{f \in \kappa(S,\mathbb{R})} \left|\int f dP - \int f dQ\right| < \alpha.\label{eq:GammaFinite} \end{equation} Since $\Phi$ is finite, we can choose $R > 0$ such that \begin{equation} \forall Q \in \Phi : \int_{S \setminus B^\star(a,R)} d(a,\cdot) dQ < \varepsilon.\label{eq:leqepsilon} \end{equation} Furthermore, according to Lemma \ref{lem:Sioen}, fix $\phi_{a,R} : S \rightarrow \mathbb{R}$ such that \begin{equation} (1 - \varepsilon) 1_{S \setminus B^\star(a,R/\varepsilon)} \leq \phi_{a,R} \leq 1_{S \setminus B^\star(a,R)}\label{eq:phibetween} \end{equation} and \begin{equation} \psi_{a,R} = \phi_{a,R} d(a,\cdot) \in \kappa(S,\mathbb{R}).\label{eq:contractivemap} \end{equation} Fix $P \in \Gamma$. Take $Q \in \Phi$ such that (\ref{eq:GammaFinite}) holds. Then, by (\ref{eq:phibetween}), (\ref{eq:contractivemap}), and (\ref{eq:leqepsilon}), \begin{eqnarray*} \lefteqn{(1 - \varepsilon) \int_{S \setminus B^\star(a,R/\varepsilon)} d(a,\cdot) dP}\\ &\leq& \int_S \phi_{a,R} d(a,\cdot) dP\\ &<& \int_S \phi_{a,R} d(a,\cdot) dQ + \alpha\\ &\leq& \int_{S \setminus B^\star(a,R)} d(a,\cdot) dQ + \alpha\\ &<& \varepsilon + \alpha, \end{eqnarray*} which proves that $$\mu_{\text{\upshape{UI}}}(\Gamma) < (\varepsilon + \alpha)/(1 - \varepsilon).$$ Since $\alpha$ was arbitrarily chosen such that (\ref{eq:hwalpha}) holds, we have shown (\ref{eq:lbW}). Now assume that $\Gamma$ is tight. 
We will prove that \begin{equation} \mu_{\text{\upshape{H}},W}(\Gamma) \leq \mu_{\text{\upshape{UI}}}(\Gamma).\label{eq:HWUnderUI} \end{equation} Suppose that, for $\alpha > 0$, \begin{equation} \mu_{\text{\upshape{UI}}}(\Gamma) < \alpha,\label{eq:UIleqAlpha} \end{equation} that is, there exist $a \in S$ and a bounded set $B \subset S$ such that \begin{equation} \forall P \in \Gamma : \int_{S \setminus B} d(a,\cdot) dP < \alpha.\label{eq:onBleqalpha} \end{equation} Take an arbitrary net $(P_\eta)_\eta$ in $\Gamma$. Since $\Gamma$ is tight, it is, by Prokhorov's Theorem (\cite{P67},\cite{B99}), weakly relatively compact. We thus find a subnet $(P_{h(\eta')})_{\eta'}$ and $P \in \mathcal{P}(S)$ such that $P_{h(\eta')} \stackrel{w}{\rightarrow} P$. We first show that $P \in \mathcal{P}^1(S)$. For $R \in \mathbb{R}^+$, \begin{equation} \int_S \left(d(a,\cdot) \wedge R\right) dP = \lim_{\eta^\prime} \int_S \left(d(a,\cdot) \wedge R\right) dP_{h(\eta^\prime)} \leq \liminf_{\eta^\prime} \int_S d(a,\cdot) dP_{h(\eta^\prime)}.\label{eq:FinMom1} \end{equation} Furthermore, by the fact that $B$ is bounded and (\ref{eq:onBleqalpha}), for each $\eta^\prime$, \begin{equation} \int_S d(a,\cdot) dP_{h(\eta^\prime)} = \int_{B} d(a,\cdot) dP_{h(\eta^\prime)} + \int_{S \setminus B} d(a,\cdot) dP_{h(\eta^\prime)} \leq \sup_{x \in B} d(a,x) + \alpha.\label{eq:FinMom2} \end{equation} Letting $R \uparrow \infty$ and using the Monotone Convergence Theorem, (\ref{eq:FinMom1}) and (\ref{eq:FinMom2}) yield $$\int_S d(a,\cdot) dP \leq \sup_{x \in B} d(a,x) + \alpha,$$ and we conclude that $P \in \mathcal{P}^1(S)$. We now establish that \begin{equation} \lambda_\kappa(P_{h(\eta')} \rightarrow P) < \alpha.\label{eq:LambdaKappaUnderAlpha} \end{equation} Fix $f \in \kappa_a(S,\mathbb{R})$ and put, for each $R \in \mathbb{R}^+$, $$f_R = (f \wedge R) \vee (-R) \text{ and } g_R = f - f_R.$$ Then $f_R \rightarrow f$ pointwise as $R \rightarrow \infty$, and the assertions (a)--(e) in Lemma \ref{lem:fRgR} hold. By (b), (c), and (e) in Lemma \ref{lem:fRgR}, for all $R \in \mathbb{R}^+_0$ and $\eta'$, \begin{eqnarray*} \int f dP_{h(\eta')} &=& \int f_R dP_{h(\eta')} + \int g_R dP_{h(\eta')}\\ &\leq& \int f_R dP_{h(\eta')} + \int_{S \setminus B^\star(a,R)} d(a,\cdot) dP_{h(\eta')}, \end{eqnarray*} which, by the fact that $P_{h(\eta^\prime)} \stackrel{w}{\rightarrow} P$ and (a) in Lemma \ref{lem:fRgR}, yields \begin{equation*} \limsup_{\eta^\prime} \int f dP_{h(\eta')} \leq \int f_R dP + \limsup_{\eta^\prime} \int_{S \setminus B^\star(a,R)} d(a,\cdot) dP_{h(\eta')}. \end{equation*} Letting $R \uparrow \infty$ and using (d) in Lemma \ref{lem:fRgR}, the Dominated Convergence Theorem, and (\ref{eq:onBleqalpha}), we infer \begin{equation*} \limsup_{\eta^\prime} \int f dP_{h(\eta')} \leq \int f dP + \alpha, \end{equation*} which, by (\ref{eq:NewFormLambdaKappa}), gives (\ref{eq:LambdaKappaUnderAlpha}). Now it follows from (\ref{eq:ExprHMW}) that $$\mu_{\text{\upshape{H}},W}(\Gamma) \leq \alpha.$$ Since $\alpha$ was arbitrarily chosen such that (\ref{eq:UIleqAlpha}) holds, we have shown (\ref{eq:HWUnderUI}). \end{proof} We will give an example which shows that the tightness condition in Theorem \ref{thm:muUImuH} cannot be omitted. 
For $a \in S$, let $\delta_a$ be the Dirac measure on $S$ putting all its mass on $a$. \begin{lem}\label{lem:WonDeltax} For $a,b \in S$, $$W(\delta_a,\delta_b) = d(a,b).$$ \end{lem} \begin{proof} We have $$W(\delta_a,\delta_b) = \sup_{f \in \kappa(S,\mathbb{R})}\left|f(a) - f(b)\right| \leq d(a,b).$$ Moreover, for $f = d(a,\cdot)$, $$\left|f(a) - f(b)\right| = d(a,b),$$ which proves the desired equality. \end{proof} \begin{lem}\label{lem:ConWPDeltaa} Let $P \in \mathcal{P}^1(S)$, $a\in S$, $M \in \mathbb{R}^+_0$, and $0 < \varepsilon \leq M$. Then $$P(B(a,M)) < \epsilon/M \Rightarrow W(P,\delta_a) > M - \epsilon.$$ \end{lem} \begin{proof} Suppose that $$W(P,\delta_a) \leq M - \epsilon.$$ Then \begin{eqnarray*} M P(S \setminus B(a,M)) &\leq& \int_{S \setminus B(a,M)} d(a,\cdot) dP\\ &\leq& \int d(a,\cdot) dP\\ &=& \left|\int d(a,\cdot) dP - \int d(a,\cdot) d\delta_a\right|\\ &\leq& M - \epsilon, \end{eqnarray*} whence $$P(B(a,M)) \geq \epsilon/M,$$ which finishes the proof. \end{proof} Let $\mathcal{C}$ be the space of continuous maps of the compact interval $[0,1]$ into $\mathbb{R}$, equipped with the supremum norm $\|\cdot\|_\infty$, defined by $$\|x\|_\infty = \sup_{t \in [0,1]} \left|x(t)\right|.$$ Then $\mathcal{C}$ is a separable Banach space. \begin{thm} For each $M \in \mathbb{R}^+_0$ there exists $\Gamma \subset \mathcal{P}^1(\mathcal{C})$ such that $$\mu_{\text{\upshape UI}}(\Gamma) = 0 \text{ and } \mu_{\text{\upshape{H}},W}(\Gamma) = M.$$ \end{thm} \begin{proof} Define, for $n \in \mathbb{N}_0$, \begin{displaymath} x_n(t) = \left\{\begin{array}{clrrr} M &\textrm{ if }& 0 \leq t \leq 1 - \frac{1}{n}\\ - 2 M n (n+1) t + 2 M n^2 - M &\textrm{ if }& 1 - \frac{1}{n}\leq t \leq 1 - \frac{1}{n+1}\\ -M &\textrm{ if }& 1 - \frac{1}{n+1} \leq t \leq 1 \end{array}\right.. \end{displaymath} Then, for all $m, n \in \mathbb{N}_0$, \begin{equation} \|x_n \|_\infty = M\label{eq:nxnleqM} \end{equation} and \begin{equation} m \neq n \Rightarrow \|x_m - x_n\|_\infty = 2 M.\label{eq:pwdisjoint} \end{equation} Put $$\Gamma = \{\delta_{x_n} \mid n \in \mathbb{N}_0\}.$$ By (\ref{eq:nxnleqM}), $$\mu_{\text{\upshape{UI}}}(\Gamma) = 0,$$ and, by Lemma \ref{lem:WonDeltax}, for each $n \in \mathbb{N}_0$, $$W(\delta_{x_n},\delta_0) = \|x_n\|_\infty = M,$$ whence, by (\ref{eq:HMWdef}), \begin{equation} \mu_{\text{\upshape{H}},W}(\Gamma) \leq M.\label{eq:muWleqM} \end{equation} Now, for $0 < \epsilon \leq M$, fix a finite collection $\Phi \subset \mathcal{P}^1(\mathcal{C})$. By (\ref{eq:pwdisjoint}), $$n \neq m \Rightarrow B_{\mathcal{C}}(x_n,M) \cap B_{\mathcal{C}}(x_m,M) = \emptyset.$$ Thus $$\cup_{k \geq n} B_{\mathcal{C}}(x_k,M) \downarrow \emptyset \text{ as } n \rightarrow \infty,$$ whence there exists $n_0$ such that, for all $Q \in \Phi$, $$Q(B_{\mathcal{C}}(x_{n_0}, M)) < \epsilon/M,$$ and by Lemma \ref{lem:ConWPDeltaa} we infer that $$W(Q,\delta_{x_{n_0}}) > M - \epsilon.$$ From (\ref{eq:HMWdef}) we now deduce that $$\mu_{\text{\upshape H},W}(\Gamma) \geq M - \epsilon,$$ which, combined with (\ref{eq:muWleqM}), gives $$\mu_{\text{\upshape{H}},W}(\Gamma) = M.$$ This finishes the proof. \end{proof} \end{document}
\begin{document} \title{Entwined modules over representations of categories } \begin{abstract} We introduce a theory of modules over a representation of a small category taking values in entwining structures over a semiperfect coalgebra. This takes forward the aim of developing categories of entwined modules to the same extent as that of module categories as well as the philosophy of Mitchell of working with rings with several objects. The representations are motivated by work of Estrada and Virili, who developed a theory of modules over a representation taking values in small preadditive categories, which were then studied in the same spirit as sheaves of modules over a scheme. We also describe, by means of Frobenius and separable functors, how our theory relates to that of modules over the underlying representation taking values in small $K$-linear categories. \end{abstract} {\emph{{\bf\emph{MSC(2020) Subject Classification:} } 16T15, 18E05, 18E10}} { \emph{{\bf \emph{Keywords:}} Rings with several objects, entwined modules, separable functors, Frobenius pairs}} \section{Introduction} The purpose of this paper is to study modules over representations of a small category taking values in spaces that behave like quotients of categorified fiber bundles. Let $H$ be a Hopf algebra having a coaction $\rho:A\longrightarrow A\otimes H$ on an algebra $A$ such that $A$ becomes an $H$-comodule algebra. Let $B$ denote the algebra of coinvariants of this coaction. Suppose that the inclusion $B\hookrightarrow A$ is faithfully flat and the canonical morphism \begin{equation*} can: A\otimes_BA\longrightarrow A\otimes H \qquad x\otimes y\mapsto x\cdot \rho(y) \end{equation*} is an isomorphism. This datum is the algebraic counterpart of a principal fiber bundle given by the quotient of an affine algebraic group scheme acting freely on an affine scheme over a field $K$ (see, for instance, \cite{MF}, \cite{Schn}). If $H$ has bijective antipode, then modules over the algebra $B$ of coinvariants may be recovered as ``$(A,H)$-Hopf modules'' (see Schneider \cite{Schn}). These $(A,H)$-Hopf modules may be rolled into the more general concept of modules over an `entwining structure' consisting of an algebra $R$, a coalgebra $C$ and a morphism $\psi:C\otimes R\longrightarrow R\otimes C$ satisfying certain conditions. Entwining structures were introduced by Brzezi\'{n}ski and Majid \cite{BrMj}. It was soon realized (see Brzezi\'{n}ski \cite{Brx1}) that entwining structures provide a single formalism that unifies relative Hopf modules, Doi-Hopf modules, Yetter-Drinfeld modules and several other concepts such as coalgebra Galois extensions. As pointed out in Brzezi\'{n}ski \cite{Brx2}, an entwining structure $(R,C,\psi)$ behaves like a single bialgebra, or more generally a comodule algebra over a bialgebra. Accordingly, the investigation of entwining structures as well as the modules over them has emerged as an object of study in its own right (see, for instance, \cite{Abu}, \cite{BBR0}, \cite{BBR}, \cite{Brx1}, \cite{Brx3}, \cite{BuTa2}, \cite{BuTa1}, \cite{CaDe}, \cite{HP}, \cite{Jia}, \cite{Schb}). We consider an entwining structure consisting of a small $K$-linear category $\mathcal R$, a coalgebra $C$ and a family of morphisms \begin{equation*} \psi=\{\psi_{rs}:C\otimes \mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes C\}_{r,s\in \mathcal R} \end{equation*} satisfying certain conditions (see Definition \ref{entcatx}). 
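In particular, when $\mathcal R$ has a single object $\ast$, the morphism space $\mathcal R(\ast,\ast)$ is just an ordinary $K$-algebra and the family $\psi$ reduces to a single map $C\otimes \mathcal R(\ast,\ast)\longrightarrow \mathcal R(\ast,\ast)\otimes C$, so that one recovers an entwining structure $(R,C,\psi)$ in the sense of Brzezi\'{n}ski and Majid recalled above.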
This is in keeping with the general philosophy of Mitchell \cite{Mit}, where a small $K$-linear category is viewed as a $K$-algebra with several objects. In fact, we consider the category $\mathscr Ent$ of such entwining structures. When the coalgebra $C$ is fixed, we have the subcategory $\mathscr Ent_C$. Given an entwining structure $(\mathcal R,C,\psi)$, we have a category $\mathbf M^C_{\mathcal R}(\psi)$ of modules over it (see our earlier work in \cite{BBR}). These entwined modules over $(\mathcal R,C,\psi)$ may be seen as modules over a certain categorical quotient space of $\mathcal R$, which need not exist in an explicit sense, but is studied only through its category of modules. We work with representations $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ of a small category $\mathscr X$ taking values in $\mathscr Ent_C$, where $C$ is a fixed coalgebra. This is motivated by the work of Estrada and Virili \cite{EV}, who introduced a theory of modules over a representation $\mathscr A:\mathscr X\longrightarrow Add$, where $Add$ is the category of small preadditive categories. The modules over $\mathscr A:\mathscr X\longrightarrow Add$ were studied in the spirit of sheaves of modules over a scheme, or more generally, a ringed space. By considering small preadditive categories, the authors in \cite{EV} also intended to take Mitchell's idea one step forward: from replacing rings with small preadditive categories to replacing ring representations by representations taking values in small preadditive categories. In this paper, we develop a theory of modules over a representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ taking values in entwining structures. We also describe, by means of Frobenius and separable functors, how this theory relates to that of modules over the underlying representation taking values in small $K$-linear categories. This paper has two parts. In the first part, we introduce and develop the properties of the category $Mod^C-\mathscr R$ of modules over $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. For this, we have to combine techniques on comodules along with adapting the methods of Estrada and Virili \cite{EV}. When $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is a flat representation (see Section 6), we also consider the subcategory $Cart^C-\mathscr R$ of cartesian entwined modules over $\mathscr R$. In the analogy with sheaves of modules over a scheme, the cartesian objects may be seen as similar to quasi-coherent sheaves. Let $\mathscr Lin$ be the category of small $K$-linear categories. In the second part, we consider the underlying representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C \longrightarrow \mathscr Lin$, which we continue to denote by $\mathscr R$. Accordingly, we have a category $Mod-\mathscr R$ of modules over $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C \longrightarrow \mathscr Lin$ in the sense of Estrada and Virili \cite{EV}. We study the relation between $Mod^C-\mathscr R$ and $Mod-\mathscr R$ by describing Frobenius and separability conditions for a pair of adjoint functors between them (see Section 7) \begin{equation*} \mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R\qquad\qquad\qquad \mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R \end{equation*} Here, the left adjoint $\mathscr F$ may be thought of as an `extension of scalars' and the right adjoint $\mathscr G$ as a `restriction of scalars.'
The idea is as follows: as mentioned before, modules over an entwining structure $(\mathcal R,C,\psi)$ may be seen as modules over a certain categorical quotient space of $\mathcal R$, which behaves like a subcategory of $\mathcal R$. Again, this ``subcategory'' of $\mathcal R$ need not exist in an explicit sense, but is studied only through the category of modules $\mathbf M^C_{\mathcal R}(\psi)$. Accordingly, a representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ taking values in $\mathscr Ent_C$ may be thought of as a subfunctor of the underlying representation $ \mathscr R:\mathscr X\longrightarrow \mathscr Ent_C \longrightarrow \mathscr Lin$. We want to understand the properties of the inclusion of this ``subfunctor'': in particular, whether it behaves like a separable, split or Frobenius extension of rings. We recall here (see \cite[Theorem 1.2]{uni}) that if $R\longrightarrow S$ is an extension of rings, these properties may be expressed in terms of the functors $F:Mod-R\longrightarrow Mod-S$ (extension of scalars) and $G:Mod-S\longrightarrow Mod-R$ (restriction of scalars) as follows \begin{equation*} \mbox{ \begin{tabular}{ccc} $R\longrightarrow S$ split extension & $\qquad\qquad \Leftrightarrow\qquad\qquad$ & $F:Mod-R\longrightarrow Mod-S$ separable\\ $R\longrightarrow S$ separable extension & $\qquad\qquad \Leftrightarrow\qquad\qquad$ & $G:Mod-S\longrightarrow Mod-R$ separable\\ $R\longrightarrow S$ Frobenius extension & $\qquad\qquad \Leftrightarrow\qquad\qquad$ & $(F,G)$ Frobenius pair of functors\\ \end{tabular}} \end{equation*} We now describe the paper in more detail. Throughout, we let $K$ be a field. We begin in Section 2 by describing the categories of entwining structures and entwined modules. For a morphism $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ of entwining structures, we describe `extension of scalars' and `restriction of scalars' on categories of entwined modules. Our first result is as follows. \begin{Thrm}\label{resulta} (see \ref{P2.2}, \ref{P2.3} and \ref{T2.5}) Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures. (1) There is a functor $(\alpha,\gamma)^\ast : \mathbf M_{\mathcal R}^C(\psi)\longrightarrow \mathbf M_{\mathcal S}^D(\psi')$ of extension of scalars. (2) Suppose that the coalgebra map $\gamma:C\longrightarrow D$ is also a monomorphism of vector spaces. Then, there is a functor $(\alpha,\gamma)_\ast : \mathbf M_{\mathcal S}^D(\psi')\longrightarrow \mathbf M_{\mathcal R}^C(\psi)$ of restriction of scalars. Further, there is an adjunction of functors which is given by natural isomorphisms \begin{equation*} \mathbf M_{\mathcal S}^D(\psi')((\alpha,\gamma)^*\mathcal M,\mathcal N)=\mathbf M_{\mathcal R}^C(\psi)(\mathcal M,(\alpha,\gamma)_*\mathcal N) \end{equation*} for any $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$ and $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$. \end{Thrm} In Section 3, we give conditions for the category $\mathbf M^C_{\mathcal R}(\psi)$ of modules over an entwining structure $(\mathcal R,C,\psi)$ to have projective generators. We recall that a $K$-coalgebra $C$ is said to be right semiperfect if the category of right $C$-comodules has enough projectives. \begin{Thrm}\label{resultb} (see \ref{T3.5}) Let $(\mathcal R,C,\psi)$ be an entwining structure and let $C$ be a right semiperfect $K$-coalgebra.
Then, the category $\mathbf M_{\mathcal R}^C(\psi)$ of entwined modules is a Grothendieck category with a set of projective generators. \end{Thrm} In Section 4, we fix a coalgebra $C$. We introduce the category $Mod^C-\mathscr R$ of modules over a representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$, which is our main object of study. Our first purpose is to show that $Mod^C-\mathscr R$ is a Grothendieck category. \begin{Thrm}\label{resultc} (see \ref{T4.9}) Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr R:\mathscr X\longrightarrow\mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. Then, the category $Mod^C-\mathscr R$ of entwined modules over $\mathscr R$ is a Grothendieck category. \end{Thrm} Given $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$, we have an entwining structure $(\mathscr R_x,C,\psi_x)$ for each $x\in \mathscr X$. Our next aim is to give conditions for $Mod^C-\mathscr R$ to have projective generators. For this, we will construct an extension functor $ex_x^C$ and an evaluation functor $ev_x^C$ relating the categories $Mod^C-\mathscr R$ and $\mathbf M_{\mathscr R_x}^C(\psi_x)$ at each $x\in \mathscr X$. \begin{Thrm}\label{resultd} (see \ref{P5.3} and \ref{T5.5}) Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. (1) For each $x\in \mathscr X$, there is an extension functor $ex_x^C:\mathbf M_{\mathscr R_x}^C\longrightarrow Mod^C-\mathscr R$ which is left adjoint to an evaluation functor $ev_x^C:Mod^C-\mathscr R\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)$. (2) The family $\{\mbox{$ex_x^C(V\otimes H_r)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$, $V\in Proj^f(C)$}\}$ is a set of projective generators for $Mod^C-\mathscr R$, where $Proj^f(C)$ is the set of isomorphism classes of finite dimensional projective $C$-comodules. \end{Thrm} We introduce the category of cartesian entwined modules in Section 6. Here, we will assume that $\mathscr X$ is a poset and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is a flat representation, i.e., for any morphism $\alpha:x\longrightarrow y$ in $\mathscr X$, the functor $\alpha^\ast:=\mathscr R_\alpha^\ast:\mathbf M^C_{\mathscr R_x}(\psi_x) \longrightarrow \mathbf M^C_{\mathscr R_y}(\psi_y)$ is exact. We then apply induction on $\mathbb N\times Mor(\mathscr X)$ to show that any cartesian entwined module may be expressed as a sum of submodules whose cardinality is $\leq \kappa :=sup\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\}$. \begin{Thrm}\label{resulte} (see \ref{T6.10}) Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. Suppose that $\mathscr R$ is flat. Then, $Cart^C-\mathscr R$ is a Grothendieck category. \end{Thrm} In the next three sections, we study separability and Frobenius conditions for functors relating $Mod^C-\mathscr R$ to the category $Mod-\mathscr R$ of modules over the underlying representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C\longrightarrow \mathscr Lin$. For this, we have to adapt the techniques from \cite{uni} as well as our earlier work in \cite{BBR}. 
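We briefly recall the relevant terminology on functors. A functor $F:\mathcal A\longrightarrow \mathcal B$ is said to be separable if the natural transformation $\mathcal A(-,-)\longrightarrow \mathcal B(F(-),F(-))$ induced by $F$ admits a natural retraction, and $(F,G)$ is said to be a Frobenius pair if $G$ is at the same time a left and a right adjoint of $F$. If $F$ has a right adjoint $G$, with unit $\eta:1\longrightarrow GF$ and counit $\epsilon:FG\longrightarrow 1$, a well-known criterion of Rafael asserts that $F$ is separable if and only if there exists $\nu\in Nat(GF,1)$ with $\nu\circ \eta=1$, while $G$ is separable if and only if there exists $\zeta\in Nat(1,FG)$ with $\epsilon\circ \zeta=1$. This explains the role played below by the spaces $Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$, which are identified with $V_1$ and $W_1$ respectively.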
For more on Frobenius and separability conditions for Doi-Hopf modules and modules over entwining structures of algebras, we refer the reader to \cite{Brx5}, \cite{X13}, \cite{X14}, \cite{X15}. At each $x\in \mathscr X$, we have functors $\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}$ and $\mathscr G_x:\mathbf M_{\mathscr R_x} \longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)$ which combine to give functors $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ and $\mathscr G: Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ respectively. We will also need to consider a space $V_1$ of elements $\theta=\{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{x\in \mathscr X,r\in \mathscr R_x}$ and a space $W_1$ of elements $\eta=\{\eta_x(s,r):\mathscr R_x(s,r)\longrightarrow \mathscr R_x(s,r)\otimes C\}_{x\in \mathscr X,r,s\in \mathscr R_x}$ satisfying certain conditions (see Sections 7 and 8). \begin{Thrm}\label{resultf} (see \ref{P7.2}, \ref{P7.25} and \ref{P7.6}) Let $\mathscr X$ be a poset, $C$ be a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. (1) The forgetful functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ has a right adjoint $\mathscr G: Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$. (2) A natural transformation $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ corresponds to a collection of natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ such that for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and object $\mathscr M\in Mod^C-\mathscr R$, we have $\mathscr M_\alpha\circ \upsilon_x(\mathscr M_x)=\alpha_\ast\upsilon_y(\mathscr M_y)\circ \mathscr G_x\mathscr F_x(\mathscr M_\alpha) $. (3) The space $Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ is isomorphic to $V_1$. \end{Thrm} The main results in Sections 7 and 8 give necessary and sufficient conditions for the forgetful functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ and its right adjoint $\mathscr G: Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ to be separable. In Section 9, we give necessary and sufficient conditions for $(\mathscr F,\mathscr G)$ to be a Frobenius pair, i.e., $\mathscr G$ is both a left and a right adjoint of $\mathscr F$. \begin{Thrm}\label{resultg} (see \ref{T7.7}, \ref{P7.8} and \ref{P7.9}) Let $\mathscr X$ be a partially ordered set. Let $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. (1) The functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable if and only if there exists $\theta\in V_1$ such that $ \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r $ for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. (2) Suppose additionally that the representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, we have \begin{itemize} \item[(a)] The functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ restricts to a functor $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$. Moreover, $\mathscr F^c$ has a right adjoint $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. \item[(b)] Suppose there exists $\theta\in V_1$ such that $ \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r $ for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. 
Then, $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$ is separable. \end{itemize} \end{Thrm} \begin{Thrm}\label{resulth} (see \ref{Pro8.2} and \ref{T8.3}) Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. (1) The spaces $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ and $ W_1$ are isomorphic. (2) The functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ is separable if and only if there exists $\eta\in W_1$ such that $ id=(id\otimes \varepsilon_C)\circ \eta_x(s,r) $ for each $x\in \mathscr X$ and $s$, $r\in \mathscr R_x$. \end{Thrm} \begin{Thrm}\label{resulti} (see \ref{T9.1}, \ref{P9.4}, \ref{C9.5}) Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. (1) $(\mathscr F,\mathscr G)$ is a Frobenius pair if and only if there exist $\theta\in V_1$ and $\eta\in W_1$ such that $ \varepsilon_C(d)f=\sum \widehat{f}\circ \theta_x(r)(c_f\otimes d)$ and $\varepsilon_C(d)f=\sum \widehat{f_{\psi_x}} \circ \theta_x(r)(d^{\psi_x}\otimes c_f) $ for every $x\in \mathscr X$, $r\in \mathscr R_x$, $f\in \mathscr R_x(r,s)$ and $d\in C$, where $\eta_x(r,s)(f)=\widehat{f}\otimes c_f$. (2) Suppose additionally that the representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ restricts to a functor $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. Further, $(\mathscr F^c,\mathscr G^c)$ is a Frobenius pair of adjoint functors between $Cart^C-\mathscr R$ and $Cart-\mathscr R$. \end{Thrm} We conclude in Section 10 by giving examples of how to construct entwined representations and describe modules over them. In particular, we show how to construct entwined representations using $B$-comodule categories, where $B$ is a bialgebra. \section{Category of entwining structures} Let $K$ be a field and let $Vect_K$ be the category of vector spaces over $K$. Let $\mathcal R$ be a small $K$-linear category. The category of right $\mathcal R$-modules will be denoted by $\mathbf M_{\mathcal R}$. For any object $r\in \mathcal R$, we denote by $H_r:\mathcal R^{op}\longrightarrow Vect_K$ the right $\mathcal R$-module represented by $r$ and by $_rH:\mathcal R\longrightarrow Vect_K$ the left $\mathcal R$-module represented by $r$. Given a $K$-coalgebra $C$, the category of right $C$-comodules will be denoted by $Comod-C$. \begin{defn}\label{entcatx} (see \cite[$\S$ 2]{BBR}) Let $\mathcal R$ be a small $K$-linear category and $C$ be a $K$-coalgebra. An entwining structure $(\mathcal R,C,\psi)$ over $K$ is a collection of $K$-linear morphisms \begin{equation*} \psi=\{\psi_{rs}:C\otimes \mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes C\}_{r,s\in \mathcal R} \end{equation*} satisfying the following conditions \begin{equation} \begin{array}{c} (gf)_\psi \otimes c^\psi = g_\psi f_\psi \otimes {c^\psi}^\psi \qquad \varepsilon_C(c^\psi)(f_\psi) = \varepsilon_C(c)f \\ f_\psi \otimes \Delta_C(c^\psi) = {f_\psi}_\psi \otimes {c_1}^\psi \otimes {c_2}^\psi \qquad \psi(c \otimes id_r)= id_r \otimes c \\ \end{array} \end{equation} for each $f \in \mathcal{R}(r,s)$, $g \in \mathcal{R}(s,t)$ and $c \in C$. Here, we have suppressed the summation and written $\psi(c\otimes f)$ simply as $f_\psi\otimes c^\psi$. 
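For orientation, we record the one object case, which is meant only as an illustration and plays no formal role in what follows: if $\mathcal R$ has a single object $\ast$ with $A=\mathcal R(\ast,\ast)$, the conditions above read
\begin{equation*}
\begin{array}{c}
(ab)_\psi \otimes c^\psi = a_\psi b_\psi \otimes {c^\psi}^\psi \qquad \varepsilon_C(c^\psi)a_\psi = \varepsilon_C(c)a \\
a_\psi \otimes \Delta_C(c^\psi) = {a_\psi}_\psi \otimes {c_1}^\psi \otimes {c_2}^\psi \qquad \psi(c \otimes 1_A)= 1_A \otimes c \\
\end{array}
\end{equation*}
for $a$, $b\in A$ and $c\in C$, which is the usual notion of an entwining of the $K$-algebra $A$ with the coalgebra $C$.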
A morphism $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ of entwining structures consists of a functor $\alpha:\mathcal{R}\longrightarrow \mathcal{S}$ and a counital coalgebra map $\gamma: C\longrightarrow D$ such that $\alpha({f}_{\psi})\otimes \gamma({c}^{\psi}) = \alpha(f)_{\psi'} \otimes \gamma(c)^{\psi'}$ for any $c\otimes f \in C\otimes \mathcal{R}(r,s)$, where $r,s \in \mathcal R$. We will denote by $\mathscr Ent$ the category of entwining structures over $K$. \end{defn} If $\mathcal M$ is a right $\mathcal R$-module, $m\in \mathcal M(r)$ and $f\in \mathcal R(s,r)$, the element $\mathcal M(f)(m)\in \mathcal M(s)$ will often be denoted by $mf$. If $\alpha:\mathcal{R}\longrightarrow \mathcal{S}$ is a functor of small $K$-linear categories, there is an obvious functor $\alpha_*: \mathbf M_{\mathcal S} \longrightarrow \mathbf M_{\mathcal R}$ of restriction of scalars. For the sake of convenience, we briefly recall here the well known extension of scalars $\alpha^*:\mathbf M_{\mathcal R}\longrightarrow \mathbf M_{\mathcal S}$. For $\mathcal M\in \mathbf M_{\mathcal R}$, the module $\alpha^*(\mathcal M)\in \mathbf M_{\mathcal S}$ is determined by setting \begin{equation}\label{ke2.2} \alpha^*(\mathcal M)(s):=\left(\underset{r\in \mathcal R}{\bigoplus}\mathcal M(r)\otimes \mathcal S(s,\alpha(r))\right)/V \end{equation} for $s\in \mathcal S$, where $V$ is the subspace generated by elements of the form \begin{equation} (m'\otimes \alpha(g)f)- (m'g\otimes f) \end{equation} for $m'\in \mathcal M(r')$, $g\in \mathcal R(r,r')$, $f\in \mathcal S(s,\alpha(r))$ and $r$, $r'\in \mathcal R$. On the other hand, if $\gamma : C\longrightarrow D$ is a morphism of coalgebras and $N$ is a right $C$-comodule, there is an obvious corestriction of scalars $\gamma^*:Comod-C\longrightarrow Comod-D$. The functor $\gamma^*$ has a well known right adjoint $\gamma_*:Comod-D\longrightarrow Comod-C$, known as the coinduction functor, given by the cotensor product $N\mapsto N\Box_D C$ (see, for instance, \cite[$\S$ 11.10]{Wibook}). In general, we recall that the cotensor product $N\Box_DN'$ of a right $D$-comodule $(N,\rho:N\longrightarrow N\otimes D)$ with a left $D$-comodule $(N',\rho':N'\longrightarrow D\otimes N')$ is given by the equalizer \begin{equation} N\Box_DN':=Eq\left(N\otimes N'\doublerightarrow{\rho\otimes id}{id\otimes \rho'} N\otimes D\otimes N'\right) \end{equation} In other words, an element $\sum n_i\otimes n'_i\in N\otimes N'$ lies in $N\Box_DN'$ if and only if $\sum n_{i0}\otimes n_{i1}\otimes n'_i=\sum n_i\otimes n'_{i0}\otimes n'_{i1}$. However, we will continue to suppress the summation and write an element of $N\Box_DN'$ simply as $n\otimes n'$. We will now consider modules over an entwining structure $(\mathcal R,C,\psi)$. \begin{defn}\label{D2.2} (see \cite[Definition 2.2]{BBR}) Let $\mathcal{M}$ be a right $\mathcal{R}$-module with a given right $C$-comodule structure $\rho_{\mathcal M(s)}:\mathcal M(s)\longrightarrow \mathcal M(s)\otimes C$ on $\mathcal{M}(s)$ for each $s\in \mathcal{R}$.
Then, $\mathcal{M}$ is said to be an entwined module over $(\mathcal{R},C,\psi)$ if the following compatibility condition holds: \begin{equation}\label{comp 2} \rho_{\mathcal{M}(s)}(mf)= \big(mf\big)_0 \otimes \big(mf\big)_{1}=m_0f_\psi\otimes {m_1}^\psi \end{equation} for every $f \in \mathcal{R}(s,r)$ and $m \in \mathcal{M}(r).$ A morphism $\eta:\mathcal M\longrightarrow \mathcal N$ of entwined modules is a morphism $\eta:\mathcal M\longrightarrow \mathcal N$ in $\mathbf M_{\mathcal R}$ such that $\eta(r):\mathcal M(r)\longrightarrow\mathcal N(r)$ is $C$-colinear for each $r\in\mathcal R$. The category of entwined modules over $(\mathcal R,C,\psi)$ will be denoted by $\mathbf M_{\mathcal R}^C(\psi)$. \end{defn} \begin{thm} \label{P2.2} Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures. Then, there is a functor $(\alpha,\gamma)^\ast : \mathbf M_{\mathcal R}^C(\psi)\longrightarrow \mathbf M_{\mathcal S}^D(\psi')$. \end{thm} \begin{proof} We take $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$. Then, $\mathcal M\in \mathbf M_{\mathcal R}$ and we consider $\mathcal N:=\alpha^*(\mathcal M)\in \mathbf M_{\mathcal S}$. For $s\in \mathcal S$, we consider an element $m\otimes f\in \mathcal N(s)$, where $m\in \mathcal M(r)$ and $f\in \mathcal S(s,\alpha(r))$ for some $r\in \mathcal R$. We claim that the morphism \begin{equation}\label{eq2.6} \rho_{\mathcal N(s)}:\mathcal N(s)\longrightarrow \mathcal N(s)\otimes D \qquad (m\otimes f)\mapsto (m\otimes f)_0\otimes (m\otimes f)_1:=(m_0\otimes f_{\psi'})\otimes \gamma(m_1)^{\psi'} \end{equation} makes $\mathcal N(s)$ a right $D$-comodule. Here, the association $m\mapsto m_0\otimes m_1$ comes from the $C$-comodule structure $\rho_{\mathcal M(r)}:\mathcal M(r)\longrightarrow \mathcal M(r)\otimes C$ of $\mathcal M(r)$. First, we show that $\rho_{\mathcal N(s)}$ is well defined. For this, we consider $m'\in\mathcal M(r')$, $g\in \mathcal R(r,r')$ and $f\in \mathcal S(s, \alpha(r))$. We have \begin{equation} \begin{array}{ll} (m'g\otimes f)_0\otimes (m'g\otimes f)_1 & = ((m'g)_0\otimes f_{\psi'})\otimes \gamma((m'g)_1)^{\psi'}\\ & = (m'_0g_\psi\otimes f_{\psi'})\otimes \gamma(m'^\psi_1)^{\psi'}\\ &= (m'_0\otimes \alpha(g_\psi)f_{\psi'})\otimes \gamma(m'^\psi_1)^{\psi'}\\ &= (m'_0\otimes \alpha(g)_{\psi'}f_{\psi'})\otimes \gamma(m'_1)^{\psi'\psi'}\\ &= (m'_0\otimes (\alpha(g)f)_{\psi'})\otimes \gamma(m'_1)^{\psi'}\\ \end{array} \end{equation} The last expression is exactly $(m'\otimes \alpha(g)f)_0\otimes (m'\otimes \alpha(g)f)_1$, so that $\rho_{\mathcal N(s)}$ is compatible with the relations appearing in \eqref{ke2.2} and is therefore well defined on $\mathcal N(s)$. From the properties of entwining structures, it may be easily verified that the structure maps in \eqref{eq2.6} are coassociative and counital, giving a right $D$-comodule structure on $\mathcal N(s)$. We now consider $f'\in \mathcal S(s',s)$. Then, we have \begin{equation} \begin{array}{ll} (m\otimes ff')_0\otimes (m\otimes ff')_1 & =(m_0\otimes (ff')_{\psi'})\otimes \gamma(m_1)^{\psi'}\\ &=(m_0\otimes f_{\psi'})f'_{\psi'}\otimes \gamma(m_1)^{\psi'\psi'}\\ & = (m\otimes f)_0f'_{\psi'}\otimes (m\otimes f)_1^{\psi'}\\ \end{array} \end{equation} This shows that $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$. \end{proof} \begin{thm} \label{P2.3} Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures. Suppose additionally that $\gamma:C\longrightarrow D$ is a monomorphism of vector spaces. Then, there is a functor $(\alpha,\gamma)_\ast : \mathbf M_{\mathcal S}^D(\psi')\longrightarrow \mathbf M_{\mathcal R}^C(\psi)$.
\end{thm} \begin{proof} We take $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$ and set $\mathcal M(r):=\mathcal N(\alpha(r))\Box_DC$ for each $r\in \mathcal R$. For $f\in \mathcal R(r',r)$, we define \begin{equation}\label{eq2.9} \mathcal M(f):\mathcal M(r)\longrightarrow \mathcal M(r')\qquad n\otimes c\mapsto (n\otimes c)\cdot f :=n\alpha(f_\psi)\otimes c^\psi \end{equation} To show that this morphism is well defined, we need to check that $\mathcal M(f)(n\otimes c)\in \mathcal M(r')=\mathcal N(\alpha(r'))\Box_DC$. Since $n\otimes c\in \mathcal N(\alpha(r))\Box_DC$, we know that \begin{equation}\label{eq2.10} n_0\otimes n_1\otimes c= n\otimes \gamma(c_1)\otimes c_2 \end{equation} In particular, it follows that \begin{equation}\label{eq2.11} n\otimes c\otimes f\in Eq\left( \begin{CD} \begin{tikzcd} \mathcal N(\alpha(r))\otimes C\otimes \mathcal R(r',r) \ar[d,xshift = 5pt]\ar[d,xshift=-5pt]\\ \mathcal N(\alpha(r))\otimes D\otimes C\otimes \mathcal R(r',r) \\ \end{tikzcd}\\ @Vid\otimes id\otimes \psi VV\\ \mathcal N(\alpha(r))\otimes D\otimes \mathcal R(r',r)\otimes C \\ @Vid\otimes id \otimes \alpha \otimes id VV\\ \mathcal N(\alpha(r))\otimes D\otimes \mathcal S(\alpha(r'),\alpha(r))\otimes C \\ @Vid\otimes \psi'\otimes idVV\\ \mathcal N(\alpha(r))\otimes\mathcal S(\alpha(r'),\alpha(r))\otimes D\otimes C\\ @VVV\\ \mathcal N(\alpha(r'))\otimes D\otimes C\\ \end{CD}\right) \end{equation} From \eqref{eq2.11}, it follows that \begin{equation}\label{eq2.12} n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi=n\alpha(f_\psi)_{\psi'}\otimes \gamma(c_1)^{\psi'}\otimes {c_2}^{\psi} \end{equation} Applying \eqref{eq2.10} and \eqref{eq2.12}, we now see that \begin{equation}\label{eq2.13} \begin{array}{ll} (n\alpha(f_\psi))_0\otimes (n\alpha(f_\psi))_1 \otimes c^\psi & =n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi \\ &=n\alpha(f_\psi)_{\psi'}\otimes \gamma(c_1)^{\psi'}\otimes {c_2}^{\psi}\\ &=n\alpha(f_{\psi\psi})\otimes \gamma({c_1}^{\psi})\otimes {c_2}^{\psi}\\ & =n\alpha(f_\psi)\otimes \gamma({c^{\psi}}_1)\otimes {c^{\psi}}_2\\ \end{array} \end{equation} From the definition, we may easily verify that the structure maps in \eqref{eq2.9} make $\mathcal M$ into a right $\mathcal R$-module. To show that $\mathcal M$ is entwined, it remains to check that \begin{equation}\label{eq2.14} n\alpha(f_\psi)\otimes {c^\psi}_1\otimes {c^\psi}_2=((n\otimes c)\cdot f)_0\otimes ((n\otimes c)\cdot f)_1= (n\otimes c)_0\cdot f_\psi\otimes (n\otimes c)_1^\psi=n\alpha(f_{\psi\psi})\otimes c_1^\psi\otimes c_2^\psi \end{equation} in $\mathcal N(\alpha(r'))\otimes C\otimes C$. Since $\gamma: C\longrightarrow D$ is a monomorphism and all tensor products are taken over the field $K$, it suffices to show that \begin{equation}\label{eq2.15} n\alpha(f_\psi)\otimes \gamma({c^\psi}_1)\otimes {c^\psi}_2=n\alpha(f_{\psi\psi})\otimes \gamma( {c_1}^\psi)\otimes {c_2}^\psi\in \mathcal N(\alpha(r'))\otimes D\otimes C \end{equation} Using \eqref{eq2.13} and the fact that $(\alpha,\gamma)$ is a morphism of entwining structures, the right hand side of \eqref{eq2.15} becomes \begin{equation}\label{eq2.16} n\alpha(f_{\psi\psi})\otimes \gamma( {c_1}^\psi)\otimes {c_2}^\psi=n\alpha(f_\psi)_{\psi'}\otimes \gamma(c_1)^{\psi'}\otimes {c_2}^{\psi}=n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi \end{equation} From \eqref{eq2.13}, we already know that $n\alpha(f_\psi)\otimes c^\psi\in \mathcal N(\alpha(r'))\Box_DC$. 
As such, we have \begin{equation}\label{eq2.17} n\alpha(f_\psi)\otimes \gamma({c^\psi}_1)\otimes {c^\psi}_2=(n\alpha(f_\psi))_0\otimes (n\alpha(f_\psi))_1\otimes c^\psi=n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi \end{equation} where the second equality follows from \eqref{eq2.13}. From \eqref{eq2.16} and \eqref{eq2.17}, the result of \eqref{eq2.15} is now clear. \end{proof} \begin{Thm}\label{T2.5} Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures such that $\gamma:C\longrightarrow D$ is a monomorphism of vector spaces. Then, there is an adjunction of functors \begin{equation} \mathbf M_{\mathcal S}^D(\psi')((\alpha,\gamma)^*\mathcal M,\mathcal N)=\mathbf M_{\mathcal R}^C(\psi)(\mathcal M,(\alpha,\gamma)_*\mathcal N) \end{equation} for $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$ and $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$. \end{Thm} \begin{proof} We consider a morphism $\eta :(\alpha,\gamma)^*\mathcal M\longrightarrow\mathcal N$ in $\mathbf M_{\mathcal S}^D(\psi')$. Then, $\eta$ corresponds to a morphism $\eta:\alpha^*\mathcal M\longrightarrow \mathcal N$ in $\mathbf M_{\mathcal S}$ such that $\eta(s):\alpha^*\mathcal M(s)\longrightarrow \mathcal N(s)$ is $D$-colinear for each $s\in \mathcal S$. Accordingly, we have $\eta':\mathcal M\longrightarrow \mathcal N$ in $\mathbf M_{\mathcal R}$ such that $\eta'(r):\mathcal M(r)\longrightarrow \mathcal N(\alpha(r))$ is $D$-colinear for each $r\in \mathcal R$. Here, $\mathcal M(r)$ is treated as a $D$-comodule via corestriction of scalars. Therefore, we have morphisms $\eta''(r):\mathcal M(r)\longrightarrow \mathcal N(\alpha(r))\Box_DC$ of $C$-comodules for each $r\in \mathcal R$. Together, these determine a morphism $\mathcal M\longrightarrow (\alpha,\gamma)_*\mathcal N$ in $\mathbf M_{\mathcal R}$. These arguments can be easily reversed and hence the result. \end{proof} \section{Projective generators and entwined modules} Let $(\mathcal R,C,\psi)$ be an entwining structure. In \cite[Proposition 2.9]{BBR}, it was shown that the category $\mathbf M_{\mathcal R}^C(\psi)$ of entwined modules is a Grothendieck category. In this section, we will refine this result to give conditions for $\mathbf M_{\mathcal R}^C(\psi)$ to have a collection of projective generators. \begin{lem}\label{L3.1} Let $\mathcal G$ be a Grothendieck category. Fix a set of generators $\{ G_k\}_{k\in K}$ for $\mathcal G$. Let $Z\in \mathcal G$ be an object. Let $i_X:X\hookrightarrow Z$, $i_Y:Y\hookrightarrow Z$ be two subobjects of $Z$ such that for any $k\in K$ and any morphism $f_k:G_k\longrightarrow X$, there exists $g_k:G_k\longrightarrow Y$ such that $i_Y\circ g_k=i_X\circ f_k$. Then, $i_X:X\hookrightarrow Z$ factors through $i_Y:Y\hookrightarrow Z$, i.e., $X$ is a subobject of $Y$. \end{lem} \begin{proof} Since $\{ G_k\}_{k\in K}$ is a set of generators for $\mathcal G$, we can choose (see \cite[Proposition 1.9.1]{Tohoku}) an epimorphism $f:\underset{j\in J}{\bigoplus}\textrm{ }G_j\longrightarrow X$, corresponding to a collection of maps $f_j:G_j\longrightarrow X$, with each $G_j$ a generator from the collection $\{ G_k\}_{k\in K}$. Accordingly, we can choose morphisms $g_j:G_j\longrightarrow Y$ such that $i_Y\circ g_j=i_X\circ f_j$ for each $j\in J$. Together, these $\{g_j\}_{j\in J}$ determine a morphism $g:\underset{j\in J}{\bigoplus}\textrm{ }G_j\longrightarrow Y$ satisfying $i_Y\circ g=i_X\circ f$.
Since $i_X$, $i_Y$ are monomorphisms and $f$ is an epimorphism, we have \begin{equation} X=Im(i_X)=Im(i_X\circ f)=Im(i_Y\circ g)=Im(i_Y|Im(g))\subseteq Im(i_Y)=Y \end{equation} \end{proof} \begin{lem}\label{L3.2} Let $\mathcal G$ be a Grothendieck category having a set of projective generators $\{ G_k\}_{k\in K}$. Let $f:X\longrightarrow Y$ be a morphism in $\mathcal G$. Let $i:X'\hookrightarrow X$ and $j:Y'\hookrightarrow Y$ be monomorphisms. Suppose that for any $k\in K$ and any morphism $f_k:G_k\longrightarrow X'$, there exists a morphism $g_k:G_k\longrightarrow Y'$ such that $f\circ i\circ f_k=j\circ g_k:G_k \longrightarrow Y$. Then, there exists $f':X'\longrightarrow Y'$ such that $j\circ f'=f\circ i$. \end{lem} \begin{proof} It suffices to show that $Im(f\circ i)\subseteq Y'$. We choose any $k\in K$ and a morphism $h_k:G_k\longrightarrow Im(f\circ i)\hookrightarrow Y$. Since $G_k$ is projective, we can choose $f_k:G_k\longrightarrow X'$ such that $f\circ i\circ f_k=h_k$. By assumption, we can now find $g_k:G_k\longrightarrow Y'$ such that $f\circ i\circ f_k=j\circ g_k:G_k \longrightarrow Y$. In particular, $j\circ g_k=h_k$. Applying Lemma \ref{L3.1}, we obtain $Im(f\circ i)\subseteq Y'$. \end{proof} \begin{lem}\label{L3.3} Let $(\mathcal R,C,\psi)$ be an entwining structure. Let $V$ be a right $C$-comodule. Then, for any $r\in \mathcal R$, the module $V\otimes H_r$ given by \begin{equation} \begin{array}{c} (V\otimes H_r)(r')=V\otimes \mathcal R(r',r)\\ (V\otimes H_r)(f):(V\otimes H_r)(r')\longrightarrow (V\otimes H_r)(r'')\qquad v\otimes g\mapsto v\otimes gf \end{array} \end{equation} for $r'\in \mathcal R$, $f\in \mathcal R(r'',r')$ is an entwined module in $\mathbf M^C_{\mathcal R}(\psi)$. Here, the right $C$-comodule structure on $(V\otimes H_r)(r')$ is given by taking $v\otimes g$ to $v_0\otimes g_\psi\otimes v^\psi_1$. \end{lem} \begin{proof} See \cite[Lemma 2.5]{BBR}. \end{proof} For the rest of this section, we will assume that the coalgebra $C$ is such that the category $Comod-C$ of right $C$-comodules has enough projective objects. In other words, the coalgebra $C$ is right semiperfect (see \cite[Definition 3.2.4]{book3}). \begin{thm}\label{P3.4} Let $(\mathcal R,C,\psi)$ be an entwining structure with $C$ a right semiperfect coalgebra. Let $V$ be a projective right $C$-comodule. Then, for any $r\in \mathcal R$, the module $V\otimes H_r$ is a projective object of $\mathbf M_{\mathcal R}^C(\psi)$. \end{thm} \begin{proof} We begin with a morphism $\zeta: V\otimes H_r\longrightarrow \mathcal M$ and an epimorphism $\eta:\mathcal N\longrightarrow \mathcal M$ in $\mathbf M_{\mathcal R}^C(\psi)$. In particular, we consider the composition \begin{equation}\label{eq3.3e} V\longrightarrow V\otimes H_r(r)\longrightarrow \mathcal M(r) \qquad v\mapsto v\otimes id_r\mapsto \zeta(r)(v\otimes id_r) \end{equation} which is a morphism in $Comod-C$. Since $V$ is projective, we can lift the map in \eqref{eq3.3e} to a map $T:V\longrightarrow \mathcal N(r)$ in $Comod-C$ such that $\eta(r)(T(v))=\zeta(r)(v\otimes id_r)$ for each $v\in V$. We now define $\xi:V\otimes H_r\longrightarrow \mathcal N$ by setting for each $s\in \mathcal R$ \begin{equation}\label{eq3.4e} \xi(s):V\otimes H_r(s)\longrightarrow \mathcal N(s)\qquad v\otimes g\mapsto \mathcal N(g)(T(v)) \end{equation} We first check that $\xi:V\otimes H_r\longrightarrow \mathcal N$ is a morphism in $\mathbf M_{\mathcal R}$.
Given $g'\in \mathcal R(s',s)$, we have \begin{equation}\label{pf3.5} \mathcal N(g')(\xi(s)(v\otimes g))=\mathcal N(gg')(T(v))=\xi(s')(v\otimes gg')=\xi(s')((V\otimes H_r)(g')(v\otimes g)) \end{equation} We also have, for $v\otimes g\in V\otimes H_r(s)$, \begin{equation} \begin{array}{ll} \xi(s)(v\otimes g)_0\otimes \xi(s)(v\otimes g)_1 & =\mathcal N(g)(T(v))_0\otimes \mathcal N(g)(T(v))_1\\ & = T(v)_0g_\psi\otimes T(v)_1^\psi \\ &= T(v_0)g_\psi \otimes v_1^\psi\\ & =\mathcal N(g_\psi)(T(v_0))\otimes v_1^\psi=(\xi(s)\otimes id_C)(v_0\otimes g_\psi\otimes v_1^\psi)\\ \end{array} \end{equation} This shows that $\xi(s):V\otimes H_r(s)\longrightarrow \mathcal N(s)$ is a morphism in $Comod-C$. Together with \eqref{pf3.5}, it follows that $\xi:V\otimes H_r\longrightarrow \mathcal N$ is a morphism in $\mathbf M_{\mathcal R}^C(\psi)$. Finally, we see that for $v\otimes g\in V\otimes H_r(s)$, we have \begin{equation} \begin{array}{ll} (\eta(s)\circ \xi(s))(v\otimes g)& =\eta(s)(\mathcal N(g)(T(v)))\\ &= \mathcal M(g)(\eta(r)(T(v)))\\ & = \mathcal M(g)(\zeta(r)(v\otimes id_r))\\ & =\zeta(s)((V\otimes H_r)(g)(v\otimes id_r))\\ &= \zeta(s)(v\otimes g)\\ \end{array} \end{equation} This gives us $\eta\circ \xi=\zeta:V\otimes H_r\longrightarrow \mathcal M$. Hence the result. \end{proof} \begin{Thm}\label{T3.5} Let $(\mathcal R,C,\psi)$ be an entwining structure and let $C$ be a right semiperfect $K$-coalgebra. Then, the category $\mathbf M_{\mathcal R}^C(\psi)$ of entwined modules is a Grothendieck category with a set of projective generators. \end{Thm} \begin{proof} From \cite[Proposition 2.9]{BBR}, we know that $\mathbf M_{\mathcal R}^C(\psi)$ is a Grothendieck category. Let $\mathcal M$ be an object of $\mathbf M_{\mathcal R}^C(\psi)$. From the proof of \cite[Proposition 2.9]{BBR}, we know that there exists an epimorphism \begin{equation} \eta' : \underset{i\in I}{\bigoplus}V'_i\otimes H_{r_i}\longrightarrow \mathcal M \end{equation} where each $r_i\in \mathcal R$ and each $V'_i$ is a finite dimensional $C$-comodule. Since $Comod-C$ has enough projectives, it follows from \cite[Corollary 2.4.21]{book3} that we can choose for each $V'_i$ an epimorphism $V_i\longrightarrow V'_i$ in $Comod-C$ such that $V_i$ is a finite dimensional projective in $Comod-C$. This induces an epimorphism \begin{equation} \eta : \underset{i\in I}{\bigoplus}V_i\otimes H_{r_i}\longrightarrow \mathcal M \end{equation} The collection $\{V\otimes H_r\}$ now gives a set of projective generators for $\mathbf M_{\mathcal R}^C(\psi)$, where $r\in \mathcal R$ and $V$ ranges over (isomorphism classes of) finite dimensional projective $C$-comodules. \end{proof} \section{Modules over an entwined representation} We fix a $K$-coalgebra $C$ which is right semiperfect. We consider the category $\mathscr Ent_C$ whose objects are entwining structures $(\mathcal R,C,\psi)$. A morphism in $\mathscr Ent_C$ is a map $(\alpha,id):(\mathcal R,C,\psi)\longrightarrow (\mathcal R',C,\psi')$ of entwining structures, which we will denote simply by $\alpha$. From Section 2, it follows that we have adjoint functors \begin{equation}\label{eq4.1} \begin{array}{c} \alpha^\ast=(\alpha,id_C)^\ast: \mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M^C_{\mathcal R'}(\psi')\qquad \alpha_\ast=(\alpha,id_C)_\ast :\mathbf M^C_{\mathcal R'}(\psi')\longrightarrow \mathbf M^C_{\mathcal R}(\psi)\\ \end{array} \end{equation} We note in particular that the functors $\alpha_\ast=(\alpha,id_C)_\ast$ are exact. 
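This exactness can be seen directly, and we record the observation only as a remark: since the coalgebra component of such a morphism is the identity on $C$, the counit property of the cotensor product yields natural isomorphisms
\begin{equation*}
\left((\alpha,id_C)_\ast\mathcal N\right)(r)=\mathcal N(\alpha(r))\Box_CC\cong \mathcal N(\alpha(r))
\end{equation*}
for any $\mathcal N\in \mathbf M^C_{\mathcal R'}(\psi')$ and $r\in \mathcal R$, so that $\alpha_\ast$ is computed pointwise by restriction of scalars along $\alpha$.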
In fact, the functors $\alpha_\ast$ preserve both limits and colimits. \begin{defn}\label{D4.1} Let $\mathscr X$ be a small category. Let $C$ be a right semiperfect coalgebra over the field $K$. By an entwined $C$-representation of a small category, we will mean a functor $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. In particular, for each object $x\in \mathscr X$, we have an entwining structure $(\mathscr R_x,C,\psi_x)$. Given a morphism $\alpha : x\longrightarrow y$ in $\mathscr X$, we have a morphism $\mathscr R_\alpha=(\mathscr R_\alpha,id_C):(\mathscr R_x,C,\psi_x) \longrightarrow (\mathscr R_y,C,\psi_y)$ of entwining structures. \end{defn} By abuse of notation, if $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is an entwined $C$-representation, we will write \begin{equation} \alpha^\ast=\mathscr R_\alpha^\ast: \mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow \mathbf M_{\mathscr R_y}^C(\psi_y) \qquad \alpha_\ast=\mathscr R_{\alpha\ast}: \mathbf M_{\mathscr R_y}^C(\psi_y)\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x) \end{equation} for any morphism $\alpha:x\longrightarrow y$ in $\mathscr X$. Also by abuse of notation, if $f:r'\longrightarrow r$ is a morphism in $\mathscr R_x$, we will often denote $\mathscr R_\alpha(f):\mathscr R_\alpha(r')\longrightarrow \mathscr R_\alpha(r)$ in $\mathscr R_y$ simply as $\alpha(f):\alpha(r')\longrightarrow \alpha(r)$. We will now consider modules over an entwined $C$-representation. \begin{defn}\label{D4.2} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. An entwined module $\mathscr M$ over $\mathscr R$ will consist of the following data: (1) For each object $x\in \mathscr X$, an entwined module $\mathscr M_x\in \mathbf M_{\mathscr R_x}^C(\psi_x)$. (2) For each morphism $\alpha : x\longrightarrow y$ in $\mathscr X$, a morphism $\mathscr M_\alpha: \mathscr M_x\longrightarrow \alpha_\ast\mathscr M_y$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$ (equivalently, a morphism $\mathscr M^\alpha: \alpha^\ast\mathscr M_x\longrightarrow \mathscr M_y$ in $\mathbf M_{\mathscr R_y}^C(\psi_y)$). Further, we suppose that $\mathscr M_{id_x}=id_{\mathscr M_x}$ for each $x\in \mathscr X$ and that for any composable morphisms $x\overset{\alpha}{\longrightarrow}y\overset{\beta}{\longrightarrow}z$, we have $\alpha_\ast(\mathscr M_\beta)\circ \mathscr M_\alpha=\mathscr M_{\beta\alpha}:\mathscr M_x\longrightarrow \alpha_\ast\mathscr M_y\longrightarrow \alpha_\ast\beta_\ast\mathscr M_z=(\beta\alpha)_\ast\mathscr M_z$. The latter condition may be expressed in either of two equivalent ways \begin{equation}\label{4.25d} \mathscr M_{\beta\alpha}=\alpha_\ast(\mathscr M_\beta)\circ \mathscr M_\alpha\qquad\Leftrightarrow\qquad\mathscr M^{\beta\alpha}=\mathscr M^\beta\circ \beta^\ast(\mathscr M^\alpha) \end{equation} A morphism $\eta:\mathscr M\longrightarrow \mathscr N$ of entwined modules over $\mathscr R$ consists of morphisms $\eta_x:\mathscr M_x\longrightarrow \mathscr N_x$ in each $\mathbf M^C_{\mathscr R_x}(\psi_x)$ such that the following diagram commutes \begin{equation} \begin{CD} \mathscr M_x @>\eta_x>> \mathscr N_x\\ @V\mathscr M_\alpha VV @VV\mathscr N_\alpha V \\ \alpha_\ast\mathscr M_y @>\alpha_\ast\eta_y>> \alpha_\ast\mathscr N_y \\ \end{CD} \end{equation} for each $\alpha:x\longrightarrow y$ in $\mathscr X$. The category of entwined modules over $\mathscr R$ will be denoted by $Mod^C-\mathscr R$.
\end{defn} \begin{thm}\label{P4.3} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. Then, $Mod^C-\mathscr R$ is an abelian category. \end{thm} \begin{proof} Let $\eta:\mathscr M\longrightarrow \mathscr N$ be a morphism in $Mod^C-\mathscr R$. We define the kernel and cokernel of $\eta$ by setting \begin{equation} Ker(\eta)_x:=Ker(\eta_x:\mathscr M_x\longrightarrow \mathscr N_x)\qquad Cok(\eta)_x:=Cok(\eta_x:\mathscr M_x\longrightarrow \mathscr N_x) \end{equation} for each $x\in \mathscr X$. For $\alpha:x\longrightarrow y$ in $\mathscr X$, the morphisms $Ker(\eta)_\alpha$ and $Cok(\eta)_\alpha$ are induced in the obvious manner, using the fact that $\alpha_\ast:\mathbf M^C_{\mathscr R_y}(\psi_y)\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)$ is exact. From this, it is also clear that $Cok(Ker(\eta)\hookrightarrow \mathscr M)=Ker(\mathscr N\twoheadrightarrow Cok(\eta))$. \end{proof} We now let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$ and let $\mathscr M$ be an entwined module over $\mathscr R$. We consider some $x\in \mathscr X$ and a morphism \begin{equation} \eta: V\otimes H_r\longrightarrow \mathscr M_x \end{equation} in $\mathbf M_{\mathscr R_x}^C(\psi_x)$, where $V$ is a finite dimensional projective in $Comod-C$ and $r\in \mathscr R_x$. For each $y\in \mathscr X$, we now set $\mathscr N_y\subseteq \mathscr M_y$ to be the image of the family of maps \begin{equation}\label{eq4.6} \begin{array}{ll} \mathscr N_y&=Im\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\begin{CD}\beta^\ast (V\otimes H_r)@>\bigoplus \beta^\ast\eta>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right)\\ &=\underset{\beta\in \mathscr X(x,y)}{\sum}\textrm{ }Im\left(\begin{CD}\beta^\ast (V\otimes H_r)@>\beta^\ast\eta>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right)\\ \end{array} \end{equation} We denote by $\iota_y$ the inclusion $\iota_y:\mathscr N_y\hookrightarrow\mathscr M_y$. For each $\beta\in \mathscr X(x,y)$, we denote by $\eta'_\beta:\beta^\ast (V\otimes H_r) \longrightarrow \mathscr N_y$ the canonical morphism induced from \eqref{eq4.6}. \begin{lem}\label{L4.4} For any $\alpha\in \mathscr X(y,z)$, $\beta\in \mathscr X(x,y)$, the following composition \begin{equation} \begin{CD} \beta^\ast (V\otimes H_r)@>\eta'_\beta>> \mathscr N_y @>\iota_y>> \mathscr M_y @>\mathscr M_\alpha>> \alpha_\ast\mathscr M_z \end{CD} \end{equation} factors through $\alpha_\ast(\iota_z):\alpha_\ast\mathscr N_z\longrightarrow\alpha_\ast\mathscr M_z$. \end{lem} \begin{proof} Since $(\alpha^\ast,\alpha_\ast)$ is an adjoint pair, it suffices to show that the composition \begin{equation} \begin{CD} \alpha^\ast\beta^\ast (V\otimes H_r)@>\alpha^\ast(\eta'_\beta)>> \alpha^\ast\mathscr N_y @>\alpha^\ast(\iota_y)>> \alpha^\ast\mathscr M_y @>\mathscr M^\alpha>> \mathscr M_z \end{CD} \end{equation} factors through $\iota_z:\mathscr N_z\longrightarrow \mathscr M_z$.
By definition, we know that the composition $\beta^\ast (V\otimes H_r)\xrightarrow{\eta'_\beta}\mathscr N_y\xrightarrow{\iota_y}\mathscr M_y$ factors through $\beta^\ast\mathscr M_x$, i.e., we have \begin{equation} \iota_y\circ \eta'_\beta=\mathscr M^\beta\circ \beta^\ast\eta \end{equation} Applying $\alpha^\ast$, composing with $\mathscr M^\alpha$ and using \eqref{4.25d}, we get \begin{equation} \mathscr M^\alpha\circ \alpha^\ast(\iota_y)\circ \alpha^\ast(\eta'_\beta)=\mathscr M^\alpha\circ \alpha^\ast(\mathscr M^\beta)\circ \alpha^\ast(\beta^\ast\eta)=\mathscr M^{\alpha\beta}\circ \alpha^\ast\beta^\ast\eta \end{equation} From the definition in \eqref{eq4.6}, it is now clear that the composition $\mathscr M^\alpha\circ \alpha^\ast(\iota_y)\circ \alpha^\ast(\eta'_\beta)=\mathscr M^{\alpha\beta}\circ \alpha^\ast\beta^\ast\eta$ factors through $\iota_z:\mathscr N_z\longrightarrow \mathscr M_z$ as $\mathscr M^\alpha\circ \alpha^\ast(\iota_y)\circ \alpha^\ast(\eta'_\beta)=\iota_z\circ \eta'_{\alpha\beta}$. \end{proof} \begin{thm}\label{P4.5} For any $\alpha\in \mathscr X(y,z)$, the morphism $\mathscr M_\alpha:\mathscr M_y\longrightarrow \alpha_\ast\mathscr M_z$ restricts to a morphism $\mathscr N_\alpha:\mathscr N_y\longrightarrow \alpha_\ast\mathscr N_z$, giving us a commutative diagram \begin{equation}\label{cd4} \begin{CD} \mathscr M_y @>\mathscr M_\alpha>>\alpha_\ast\mathscr M_z\\ @A\iota_yAA @AA\alpha_\ast(\iota_z)A \\ \mathscr N_y @>\mathscr N_\alpha >> \alpha_\ast\mathscr N_z\\ \end{CD} \end{equation} \end{thm} \begin{proof} We already know that $\iota_z:\mathscr N_z\longrightarrow \mathscr M_z$ is a monomorphism. Since $\alpha_\ast$ is a right adjoint, it follows that $\alpha_\ast(\iota_z)$ is also a monomorphism. Since $C$ is right semiperfect, we know from Theorem \ref{T3.5} that $\mathbf M^C_{\mathscr R_y}(\psi_y)$ is a Grothendieck category with projective generators $\{G_k\}_{k\in K}$. Using Lemma \ref{L3.2}, it suffices to show that for any $k\in K$ and any morphism $\xi_k : G_k \longrightarrow \mathscr N_y$, there exists $\xi'_k:G_k\longrightarrow \alpha_\ast\mathscr N_z$ such that $\alpha_\ast(\iota_z)\circ \xi'_k=\mathscr M_\alpha\circ \iota_y\circ \xi_k$. From \eqref{eq4.6}, we have an epimorphism \begin{equation}\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\eta'_\beta:\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast(V\otimes H_r)\longrightarrow \mathscr N_y \end{equation} Since $G_k$ is projective, we can lift $\xi_k : G_k \longrightarrow \mathscr N_y$ to a morphism $\xi''_k:G_k\longrightarrow \underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast(V\otimes H_r)$ such that \begin{equation}\xi_k=\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\eta'_\beta\right)\circ \xi''_k \end{equation} From Lemma \ref{L4.4}, we know that $\mathscr M_\alpha\circ \iota_y\circ \eta'_\beta$ factors through $\alpha_\ast(\iota_z):\alpha_\ast\mathscr N_z\longrightarrow\alpha_\ast\mathscr M_z$ for each $\beta\in \mathscr X(x,y)$. The result is now clear. \end{proof} Using the adjointness of $(\alpha^\ast,\alpha_\ast)$, we can also obtain a morphism $\mathscr N^\alpha:\alpha^\ast\mathscr N_y\longrightarrow \mathscr N_z$ for each $\alpha\in \mathscr X(y,z)$, corresponding to the morphism $\mathscr N_\alpha:\mathscr N_y\longrightarrow\alpha_\ast\mathscr N_z$ in \eqref{cd4}.
The objects $\{\mathscr N_y\in \mathbf M_{\mathscr R_y}^C(\psi_y)\}_{y\in \mathscr X}$, together with the morphisms $\{\mathscr N_\alpha\}_{\alpha\in Mor(\mathscr X)}$ determine an object of $Mod^C-\mathscr R$ that we denote by $\mathscr N$. Additionally, Proposition \ref{P4.5} shows that we have an inclusion $\iota:\mathscr N\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$. Before we proceed further, we will describe the object $\mathscr N$ in a few more ways. \begin{lem}\label{P4.6} Let $\eta'_1:V\otimes H_r\longrightarrow \mathscr N_x$ be the canonical morphism corresponding to the identity map in $\mathscr X(x,x)$. Then, for any $y\in \mathscr X$, we have \begin{equation} \mathscr N_y=Im\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\begin{CD}\beta^\ast (V\otimes H_r)@>\bigoplus\beta^\ast\eta'_1>>\beta^\ast \mathscr N_x@>\mathscr N^\beta>>\mathscr N_y\end{CD}\right) \end{equation} \end{lem} \begin{proof} For any $\beta\in\mathscr X(x,y)$, we consider the commutative diagram \begin{equation}\label{cd4.16} \begin{CD} \beta^\ast(V\otimes H_r)@>\beta^\ast\eta'_1>>\beta^\ast\mathscr N_x @>\mathscr N^\beta>>\mathscr N_y\\ @. @V\beta^\ast(\iota_x)VV @VV\iota_yV\\ @. \beta^\ast\mathscr M_x @>\mathscr M^\beta>> \mathscr M_y\\ \end{CD} \end{equation} By definition, we know that $\iota_x\circ \eta'_1=\eta$, which gives $\beta^\ast(\iota_x)\circ \beta^\ast(\eta'_1)=\beta^\ast(\eta)$. Composing with $\mathscr M^\beta$, we get \begin{equation} Im(\mathscr M^\beta\circ \beta^\ast(\eta))=Im(\mathscr M^\beta\circ \beta^\ast(\iota_x)\circ \beta^\ast(\eta'_1))=Im(\iota_y\circ \mathscr N^\beta\circ \beta^\ast\eta'_1)\cong Im(\mathscr N^\beta\circ \beta^\ast\eta'_1) \end{equation} where the last isomorphism follows from the fact that $\iota_y$ is monic. The result is now clear from the definition in \eqref{eq4.6}. \end{proof} \begin{lem}\label{L4.7} For any $y\in \mathscr X$, we have \begin{equation}\label{ny} \mathscr N_y=\underset{\beta\in \mathscr X(x,y)}{\sum}\textrm{ }Im\left(\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right) \end{equation} \end{lem} \begin{proof} For the sake of convenience, we set \begin{equation*} \mathscr N'_y:=\underset{\beta\in \mathscr X(x,y)}{\sum}\textrm{ }Im\left(\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right) \end{equation*} From the commutative diagram in \eqref{cd4.16}, we see that each of the morphisms $\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}$ factors through the subobject $\mathscr N_y\subseteq \mathscr M_y$. Hence, $\mathscr N_y'\subseteq \mathscr N_y$. On the other hand, it is clear that \begin{equation*} Im\left(\begin{CD}\beta^\ast(V\otimes H_r)@>\beta^\ast\eta_1'>>\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right)\subseteq Im\left(\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right) \end{equation*} Applying Lemma \ref{P4.6}, it is now clear that $\mathscr N_y\subseteq \mathscr N_y'$. This proves the result. \end{proof} We now make a few conventions : if $\mathcal M$ is a module over a small $K$-linear category $\mathcal R$, we denote by $el(\mathcal M)$ the union $\underset{r\in \mathcal R}{\bigcup}\textrm{ }\mathcal M(r)$. 
The cardinality of $el(\mathcal M)$ will be denoted by $|\mathcal M|$. If $\mathscr M$ is a module over an entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$, we denote by $el_{\mathscr X}(\mathscr M)$ the union $\underset{x\in \mathscr X}{\bigcup}\textrm{ }el(\mathscr M_x)$. The cardinality of $el_{\mathscr X}(\mathscr M)$ will be denoted by $|\mathscr M|$. It is evident that if $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N$ is either a quotient or a subobject of $\mathscr M$, then $|\mathscr N|\leq |\mathscr M|$. We now define the following cardinality \begin{equation} \kappa =sup\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\} \end{equation} We observe that $|\beta^\ast(V\otimes H_r)|\leq\kappa$, where $V$ is any finite dimensional $C$-comodule and $\beta\in \mathscr X(x,y)$. \begin{lem}\label{L4.8} We have $|\mathscr N|\leq \kappa$. \end{lem} \begin{proof} We choose $y\in \mathscr X$. From Lemma \ref{P4.6}, we have \begin{equation} \mathscr N_y=Im\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\begin{CD}\beta^\ast (V\otimes H_r)@>\bigoplus\beta^\ast\eta'_1>>\beta^\ast \mathscr N_x@>\mathscr N^\beta>>\mathscr N_y\end{CD}\right) \end{equation} Since $\mathscr N_y$ is an epimorphic image of $\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast (V\otimes H_r)$, we have \begin{equation}|\mathscr N_y|\leq | \underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast (V\otimes H_r)|\leq \kappa \end{equation} It follows that $ |\mathscr N|=\underset{y\in \mathscr X}{\sum}\textrm{ }|\mathscr N_y|\leq \kappa $. \end{proof} \begin{Thm}\label{T4.9} Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr R:\mathscr X\longrightarrow\mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. Then, the category $Mod^C-\mathscr R$ of entwined modules over $\mathscr R$ is a Grothendieck category. \end{Thm} \begin{proof} Since filtered colimits and finite limits in $Mod^C-\mathscr R$ are computed pointwise, it is clear that they commute with each other. We now consider an object $\mathscr M$ in $Mod^C-\mathscr R$ and an element $m\in el_{\mathscr X}(\mathscr M)$. Then, $m\in \mathscr M_x(r)$ for some $x\in \mathscr X$ and $r\in \mathscr R_x$. By \cite[Lemma 2.8]{BBR}, we can find a finite dimensional $C$-subcomodule $V'\subseteq \mathscr M_x(r)$ containing $m$ and a morphism $\eta': V'\otimes H_r\longrightarrow \mathscr M_x$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$ such that $\eta'(r)(m\otimes id_r)=m$. Since $C$ is semiperfect, we can choose a finite dimensional projective $V$ in $Comod-C$ along with an epimorphism $V\longrightarrow V'$. This induces a morphism $\eta:V\otimes H_r\longrightarrow \mathscr M_x$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$. Corresponding to $\eta$, we now define the subobject $\mathscr N\subseteq \mathscr M$ as in \eqref{eq4.6}. It is clear that $m\in el_{\mathscr X}(\mathscr N)$. By Lemma \ref{L4.8}, we know that $|\mathscr N|\leq \kappa$. We now consider the set of isomorphism classes of objects in $Mod^C-\mathscr R$ having cardinality $\leq \kappa$. From the above, it is clear that any object in $Mod^C-\mathscr R$ may be expressed as a sum of such objects. By choosing one object from each such isomorphism class, we obtain a set of generators for $Mod^C-\mathscr R$.
\end{proof} \section{Entwined representations of a poset and projective generators} In this section, the small category $\mathscr X$ will always be a partially ordered set. If $x\leq y$ in $\mathscr X$, we will say that there is a single morphism $x\longrightarrow y$ in $\mathscr X$. We continue with $C$ being a right semiperfect coalgebra over the field $K$ and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ being an entwined $C$-representation of $\mathscr X$. From Theorem \ref{T4.9}, we know that $Mod^C-\mathscr R$ is a Grothendieck category. In this section, we will show that $Mod^C-\mathscr R$ has projective generators. For this, we will construct a pair of adjoint functors \begin{equation}\label{adjexev} ex_x^C:\mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow Mod^C-\mathscr R \qquad ev_x^C:Mod^C-\mathscr R\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x) \end{equation} for each $x\in \mathscr X$. \begin{lem}\label{L5.1} Let $\mathscr X$ be a poset. Fix $x\in \mathscr X$. Then, there is a functor $ex_x^C:\mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow Mod^C-\mathscr R$ defined by setting \begin{equation} ex_x^C(\mathcal M)_y:=\left\{ \begin{array}{ll} \alpha^\ast\mathcal M & \mbox{if $\alpha\in \mathscr X(x,y)$}\\ 0 & \mbox{if $\mathscr X(x,y)=\phi$}\\ \end{array}\right. \end{equation} for each $y\in \mathscr X$. \end{lem} \begin{proof} It is immediate that each $ex_x^C(\mathcal M)_y\in \mathbf M_{\mathscr R_y}^C(\psi_y)$. We consider $\beta:y\longrightarrow y'$ in $\mathscr X$. If $x\not\leq y$, we have $0=ex_x^C(\mathcal M)^\beta:0=\beta^\ast ex_x^C(\mathcal M)_y\longrightarrow ex_x^C(\mathcal M)_{y'}$ in $\mathbf M_{\mathscr R_{y'}}^C(\psi_{y'})$. Otherwise, we consider $\alpha:x\longrightarrow y$ and $\alpha':x\longrightarrow y'$. Then, we have \begin{equation*} id=ex_x^C(\mathcal M)^\beta :\beta^\ast ex_x^C(\mathcal M)_y=\beta^\ast\alpha^\ast \mathcal M\longrightarrow \alpha'^\ast\mathcal M=ex_x^C(\mathcal M)_{y'} \end{equation*} which follows from the fact that $\beta\circ \alpha=\alpha'$. Given composable morphisms $\beta$, $\gamma$ in $\mathscr X$, it is now clear from the definitions that $ex_x^C(\mathcal M)^{\gamma\beta}=ex_x^C(\mathcal M)^\gamma\circ \gamma^\ast(ex_x^C(\mathcal M)^\beta)$. \end{proof} \begin{lem}\label{L5.2} Let $\mathscr X$ be a poset. Fix $x\in \mathscr X$. Then, there is a functor \begin{equation} ev_x^C:Mod^C-\mathscr R\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)\qquad \mathscr M\mapsto \mathscr M_x \end{equation} Additionally, $ev_x^C$ is exact. \end{lem} \begin{proof} It is immediate that $ev_x^C$ is a functor. Since finite limits and finite colimits in $Mod^C-\mathscr R$ are computed pointwise, it follows that $ev_x^C$ is exact. \end{proof} \begin{thm}\label{P5.3}Let $\mathscr X$ be a poset. Fix $x\in \mathscr X$. Then, $(ex_x^C,ev_x^C)$ is a pair of adjoint functors. \end{thm} \begin{proof} For any $\mathcal M\in \mathbf M_{\mathscr R_x}^C(\psi_x)$ and $\mathscr N\in Mod^C-\mathscr R$, we will show that \begin{equation} Mod^C-\mathscr R(ex_x^C(\mathcal M),\mathscr N)\cong \mathbf M_{\mathscr R_x}^C(\psi_x)(\mathcal M,ev_x^C(\mathscr N)) \end{equation} We begin with a morphism $f:\mathcal M\longrightarrow \mathscr N_x$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$. 
Corresponding to $f$, we define $\eta^f:ex_x^C(\mathcal M)\longrightarrow\mathscr N$ in $Mod^C-\mathscr R$ by setting \begin{equation} \eta^f_y:ex^C_x(\mathcal M)_y=\alpha^\ast\mathcal M\xrightarrow{\alpha^\ast f}\alpha^\ast\mathscr N_x\xrightarrow{\mathscr N^\alpha}\mathscr N_y \end{equation} whenever $x\leq y$ and $\alpha\in \mathscr X(x,y)$. Otherwise, we set $0=\eta^f_y:0=ex^C_x(\mathcal M)_y\longrightarrow \mathscr N_y$. For $\beta:y\longrightarrow y'$ in $\mathscr X$, we have to show that the following diagram is commutative. \begin{equation}\label{5.6cd} \begin{CD} \beta^\ast ex_x^C(\mathcal M)_y @>\beta^\ast\eta^f_y>> \beta^\ast\mathscr N_y \\ @Vex_x^C(\mathcal M)^\beta VV @VV\mathscr N^\beta V \\ ex_x^C(\mathcal M)_{y'} @>\eta^f_{y'}>> \mathscr N_{y'} \\ \end{CD} \end{equation} If $x\not\leq y$, then $ex_x^C(\mathcal M)_y=0$ and the diagram commutes. Otherwise, we consider $\alpha:x\longrightarrow y$ and $\alpha'=\beta\circ \alpha:x\longrightarrow y'$. Then, \eqref{5.6cd} reduces to the commutative diagram \begin{equation} \begin{CD} \beta^\ast \alpha^\ast\mathcal M @>\beta^\ast(\mathscr N^\alpha\circ \alpha^\ast f)>> \beta^\ast\mathscr N_y \\ @VidVV @VV\mathscr N^\beta V \\ \beta^\ast\alpha^\ast\mathcal M=\alpha'^\ast\mathcal M @>\mathscr N^{\alpha'}\circ \alpha'^\ast(f)=\mathscr N^\beta\circ\beta^\ast(\mathscr N^\alpha)\circ \beta^\ast\alpha^\ast f>>\mathscr N_{y'} \\ \end{CD} \end{equation} Conversely, we take $\eta:ex_x^C(\mathcal M)\longrightarrow \mathscr N$ in $Mod^C-\mathscr R$. In particular, this determines $f^\eta=\eta_x:\mathcal M\longrightarrow \mathscr N_x$ in $ \mathbf M_{\mathscr R_x}^C(\psi_x)$. It may be easily verified that these two associations are inverse to each other. This proves the result. \end{proof} \begin{cor}\label{C5.4} The functor $ex_x^C:\mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow Mod^C-\mathscr R $ preserves projectives. \end{cor} \begin{proof} From Proposition \ref{P5.3}, we know that $(ex_x^C,ev_x^C)$ is a pair of adjoint functors. From Lemma \ref{L5.2}, we know that the right adjoint $ev_x^C$ is exact. It follows therefore that its left adjoint $ex_x^C$ preserves projective objects. \end{proof} \begin{Thm}\label{T5.5} Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. Then, $Mod^C-\mathscr R$ has projective generators. \end{Thm} \begin{proof} We denote by $Proj^f(C)$ the set of isomorphism classes of finite dimensional projective $C$-comodules. We will show that the family \begin{equation}\mathcal G=\{\mbox{$ex_x^C(V\otimes H_r)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$, $V\in Proj^f(C)$}\} \end{equation} is a set of projective generators for $Mod^C-\mathscr R$. From Proposition \ref{P3.4}, we know that $V\otimes H_r$ is projective in $\mathbf M_{\mathscr R_x}^C(\psi_x)$, where $r\in \mathscr R_x$ and $V\in Proj^f(C)$. It now follows from Corollary \ref{C5.4} that each $ex_x^C(V\otimes H_r)$ is projective in $Mod^C-\mathscr R$. It remains to show that $\mathcal G$ is a set of generators for $Mod^C-\mathscr R$. For this, we consider a monomorphism $\iota:\mathscr N\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $\mathscr N\subsetneq\mathscr M$. 
Since kernels and cokernels in $Mod^C-\mathscr R$ are taken pointwise, it follows that there is some $x\in \mathscr X$ such that $\iota_x:\mathscr N_x\hookrightarrow\mathscr M_x$ is a monomorphism with $\mathscr N_x\subsetneq\mathscr M_x$. From the proof of Theorem \ref{T3.5}, we know that $\{V\otimes H_r\}_{r\in \mathscr R_x,V\in Proj^f(C)}$ is a set of generators for $\mathbf M^C_{\mathscr R_x}(\psi_x)$. Accordingly, we can choose a morphism $f:V\otimes H_r\longrightarrow \mathscr M_x$ with $r\in \mathscr R_x$ and $V\in Proj^f(C)$ such that $f$ does not factor through $ev_x^C(\iota)=\iota_x:\mathscr N_x\hookrightarrow\mathscr M_x$. Applying the adjunction $(ex^C_x,ev_x^C)$, we now obtain a morphism $\eta:ex_x^C(V\otimes H_r)\longrightarrow \mathscr M$ corresponding to $f$, which does not factor through $\iota: \mathscr N\longrightarrow \mathscr M$. It now follows (see, for instance, \cite[$\S$ 1.9]{Tohoku}) that the family $\mathcal G$ is a set of generators for $Mod^C-\mathscr R$. \end{proof} \section{Cartesian modules over entwined representations} We continue with $\mathscr X$ being a poset, $C$ being a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ being an entwined $C$-representation of $\mathscr X$. In this section, we will introduce the category of cartesian modules over $\mathscr R$. Given a morphism $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ in $\mathscr Ent_C$, we already know that the left adjoint $\alpha^\ast$ is right exact. We will say that $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ is flat if $\alpha^\ast: \mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M^C_{\mathcal S}(\psi')$ is exact. Accordingly, we will say that an entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat if $\alpha^\ast=\mathscr R_\alpha^\ast:\mathbf M^C_{\mathscr R_x}(\psi_x) \longrightarrow \mathbf M^C_{\mathscr R_y}(\psi_y)$ is exact for each $\alpha:x\longrightarrow y$ in $\mathscr X$. \begin{defn}\label{D6.1} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. Suppose that $\mathscr R$ is flat. Let $\mathscr M$ be an entwined module over $\mathscr R$. We will say that $\mathscr M$ is cartesian if for each $\alpha :x\longrightarrow y$ in $\mathscr X$, the morphism $\mathscr M^\alpha:\alpha^\ast\mathscr M_x\longrightarrow\mathscr M_y$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$ is an isomorphism. We will denote by $Cart^C-\mathscr R$ the full subcategory of $Mod^C-\mathscr R$ consisting of cartesian modules. \end{defn} It is clear that $Cart^C-\mathscr R$ is an abelian category, with filtered colimits and finite limits coming from $Mod^C-\mathscr R$. We will now give conditions so that $Cart^C-\mathscr R$ is a Grothendieck category. For this, we will need some intermediate results. First, we recall (see, for instance, \cite{AR}) that an object $M$ in a Grothendieck category $\mathcal A$ is said to be finitely generated if the functor ${\mathcal A}(M,\_\_)$ satisfies \begin{equation} \underset{i\in I}{\varinjlim}\textrm{ } {\mathcal A}(M,M_i)={\mathcal A}(M,\underset{i\in I}{\varinjlim}\textrm{ } M_i) \end{equation} where $\{M_i\}_{i\in I}$ is any filtered system of objects in $\mathcal A$ connected by monomorphisms. \begin{thm}\label{P6.1} Let $(\mathcal R,C,\psi)$ be an entwining structure with $C$ a right semiperfect coalgebra. Let $V$ be a finite dimensional projective right $C$-comodule.
Then, for any $r\in \mathcal R$, the module $V\otimes H_r$ is a finitely generated projective object in $\mathbf M_{\mathcal R}^C(\psi)$. \end{thm} \begin{proof} From Proposition \ref{P3.4}, we already know that $V\otimes H_r$ is a projective object in $\mathbf M_{\mathcal R}^C(\psi)$. To show that it is finitely generated, we consider a filtered system $\{\mathcal M_i\}_{i\in I}$ of objects in $\mathbf M_{\mathcal R}^C(\psi)$ connected by monomorphisms and set $\mathcal M:=\underset{i\in I}{\varinjlim}\textrm{ } \mathcal M_i$. Since $\mathbf M_{\mathcal R}^C(\psi)$ is a Grothendieck category, we note that we have an inclusion $\eta_i:\mathcal M_i\hookrightarrow \mathcal M$ for each $i\in I$. We now take a morphism $\zeta:V\otimes H_r\longrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$. We choose a basis $\{v_1,...,v_n\}$ for $V$. For each $1\leq k\leq n$, we now have a morphism in $\mathbf M_{\mathcal R}$ given by \begin{equation} \zeta_k:H_r\longrightarrow V\otimes H_r \qquad H_r(s)=\mathcal R(s,r)\ni f\mapsto v_k\otimes f\in (V\otimes H_r)(s) \end{equation} Then, each composition $\zeta\circ \zeta_k:H_r\longrightarrow\mathcal M$ is a morphism in $\mathbf M_{\mathcal R}$. Since $H_r$ is a finitely generated object in $\mathbf M_{\mathcal R}$, we can now choose $j\in I$ such that every $\zeta\circ \zeta_k$ factors through $\eta_j:\mathcal M_j\hookrightarrow \mathcal M$. We now construct the following pullback diagram in $\mathbf M^C_{\mathcal R}(\psi)$ \begin{equation}\label{eq6.3} \begin{CD} \mathcal N @>>> \mathcal M_j \\ @V\iota VV @VV\eta_jV \\ V\otimes H_r @>\zeta>> \mathcal M\\ \end{CD} \end{equation} Then, $\iota:\mathcal N\longrightarrow V\otimes H_r$ is a monomorphism in $\mathbf M^C_{\mathcal R}(\psi)$. From the construction of finite limits in $\mathbf M^C_{\mathcal R}(\psi)$, it follows that for each $s\in \mathcal R$, we have a pullback diagram in $Vect_K$ \begin{equation}\label{eq6.4} \begin{CD} \mathcal N(s) @>>> \mathcal M_j(s) \\ @V\iota(s) VV @VV\eta_j(s)V \\ (V\otimes H_r)(s) @>\zeta(s)>> \mathcal M(s)\\ \end{CD} \end{equation} By assumption, we know that $\zeta(s)(v_k\otimes f)\in Im(\eta_j(s))$ for any basis element $v_k$ and any $f\in H_r(s)$. It follows that $Im(\zeta(s))\subseteq Im(\eta_j(s))$ and hence the pullback $\mathcal N(s)=(V\otimes H_r)(s)$. In other words, $\mathcal N=V\otimes H_r$. The result is now clear. \end{proof} \begin{lem}\label{L6.3} Let $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a flat morphism in $\mathscr Ent_C$. Let $\mathcal M\in \mathbf M^C_{\mathcal R}(\psi)$. (a) There exists a family $\{r_i\}_{i\in I}$ of objects of $\mathcal R$ and a family $\{V_i\}_{i\in I}$ of finite dimensional projective $C$-comodules such that there is an epimorphism in $\mathbf M^C_{\mathcal S}(\psi')$ \begin{equation} \eta: \underset{i\in I}{\bigoplus}\textrm{ } (V_i\otimes H_{\alpha(r_i)})\longrightarrow \alpha^\ast\mathcal M \end{equation} (b) Let $s\in \mathcal S$ and let $W$ be a finite dimensional projective in $Comod-C$. Let $\zeta:W\otimes H_s\longrightarrow \alpha^\ast\mathcal M$ be a morphism in $\mathbf M^C_{\mathcal S}(\psi')$. Then, there exists a finite set $\{r_1,...,r_n\}$ of objects of $\mathcal R$, a finite family $\{V_1,...,V_n\}$ of finite dimensional projective $C$-comodules and a morphism $\eta'':\underset{k=1}{\overset{n}{\bigoplus}}V_k\otimes H_{r_k}\longrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ such that $\zeta$ factors through $\alpha^\ast\eta''$.
\end{lem} \begin{proof} (a) From the proof of Theorem \ref{T3.5}, we know that there exists an epimorphism in $\mathbf M^C_{\mathcal R}(\psi)$ \begin{equation} \eta' : \underset{i\in I}{\bigoplus}V_i\otimes H_{r_i}\longrightarrow \mathcal M \end{equation} where each $r_i\in \mathcal R$ and each $V_i$ is a finite dimensional projective $C$-comodule. Since $\alpha^\ast:\mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M^C_{\mathcal S}(\psi')$ is a left adjoint, it induces an epimorphism $\alpha^\ast(\eta')$ in $\mathbf M^C_{\mathcal S}(\psi')$. From the definition in \eqref{ke2.2} and the construction in Proposition \ref{P2.2}, it is clear that $\alpha^\ast(V_i\otimes H_{r_i})=V_i\otimes \alpha^\ast H_{r_i}=V_i\otimes H_{\alpha(r_i)}$. This proves (a). (b) We consider the epimorphism $\alpha^\ast\eta'=\eta: \underset{i\in I}{\bigoplus}\textrm{ } (V_i\otimes H_{\alpha(r_i)})\longrightarrow \alpha^\ast\mathcal M$ constructed in (a). From Proposition \ref{P6.1}, we know that $W\otimes H_s$ is a finitely generated projective object in $\mathbf M^C_{\mathcal S}(\psi')$. As such $\zeta:W\otimes H_s\longrightarrow \alpha^\ast\mathcal M$ can be lifted to a morphism $\zeta': W\otimes H_s\longrightarrow \underset{i\in I}{\bigoplus}\textrm{ } (V_i\otimes H_{\alpha(r_i)})$ and $\zeta'$ factors through a finite direct sum of objects from the family $\{V_i\otimes H_{\alpha(r_i)}\}_{i\in I}$. The result is now clear. \end{proof} \begin{lem}\label{L6.4} Let $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a flat morphism in $\mathscr Ent_C$. Let $\kappa_1$ be any cardinal such that \begin{equation}\kappa_1\geq max\{ \mbox{$\mathbb N$, $|Mor(\mathcal R)|$, $|C|$, $|K|$}\} \end{equation} Let $\mathcal M\in \mathbf M^C_{\mathcal R}(\psi)$ and let $A\subseteq el(\alpha^\ast\mathcal M)$ be a set of elements such that $|A|\leq \kappa_1$. Then, there is a submodule $\mathcal N\hookrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ with $|\mathcal N|\leq \kappa_1$ such that $A\subseteq el(\alpha^\ast\mathcal N)$. \end{lem} \begin{proof} We consider some element $a\in A\subseteq el(\alpha^\ast\mathcal M)$. Then, we can choose a morphism $\zeta^a:W^a\otimes H_{s^a} \longrightarrow \alpha^\ast\mathcal M$ in $\mathbf M_{\mathcal S}^C(\psi')$ such that $a\in el(Im(\zeta^a))$, where $s^a\in \mathcal S$ and $W^a$ is a finite dimensional projective in $Comod-C$. Using Lemma \ref{L6.3}(b), we can now choose a finite set $\{r^a_1,...,r^a_{n^a}\}$ of objects of $\mathcal R$, a finite family $\{V^a_1,...,V^a_{n^a}\}$ of finite dimensional projective $C$-comodules and a morphism $\eta^{a''}:\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{r_k^a}\longrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ such that $\zeta^a$ factors through $\alpha^\ast\eta^{a''}$. We now set \begin{equation} \mathcal N:=Im\left(\eta'':=\underset{a\in A}{\bigoplus}\eta^{a''}:\underset{a\in A}{\bigoplus}\textrm{ }\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{r_k^a}\longrightarrow \mathcal M\right) \end{equation} Since $\alpha$ is flat and $\alpha^\ast$ is a left adjoint, we obtain \begin{equation} \alpha^\ast\mathcal N=Im\left(\alpha^\ast\eta''=\underset{a\in A}{\bigoplus}\alpha^\ast\eta^{a''}:\underset{a\in A}{\bigoplus}\textrm{ }\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{\alpha(r_k^a)}\longrightarrow\alpha^\ast \mathcal M\right) \end{equation} Since each $a\in el(Im(\zeta^a))$ and $\zeta^a$ factors through $\alpha^\ast\eta^{a''}$, we get $A\subseteq el(\alpha^\ast\mathcal N)$. 
It remains to show that $|\mathcal N|\leq \kappa_1$. Since $\mathcal N$ is a quotient of $\underset{a\in A}{\bigoplus}\textrm{ }\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{r_k^a}$ and $|A|\leq \kappa_1$, it suffices to show that each $|V_k^a\otimes H_{r_k^a}|\leq \kappa_1$. This is clear from the definition of $\kappa_1$, using the fact that each $V_k^a$ is finite dimensional. \end{proof} \begin{rem}\label{Rem6.5} \emph{By considering $\alpha=id$ in Lemma \ref{L6.4}, we obtain the following simple consequence: if $A\subseteq el(\mathcal M)$ is any subset with $|A|\leq \kappa_1$, there is a submodule $\mathcal N\hookrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ with $|\mathcal N|\leq \kappa_1$ such that $A\subseteq el(\mathcal N)$. } \end{rem} \begin{lem}\label{L6.6} Let $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a flat morphism in $\mathscr Ent_C$ and let $\mathcal M\in \mathbf M^C_{\mathcal R}(\psi)$. Let $\kappa_2$ be any cardinal such that $\kappa_2 \geq max\{ \mbox{$\mathbb N$, $|Mor(\mathcal R)|$, $|Mor(\mathcal S)|$, $|C|$, $|K|$}\}$ and let $A\subseteq el(\mathcal M)$ and $B\subseteq el(\alpha^\ast\mathcal M)$ be subsets with $|A|$, $|B|\leq \kappa_2$. Then, there exists a submodule $\mathcal N\subseteq \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ such that (1) $|\mathcal N|\leq \kappa_2$, $|\alpha^\ast\mathcal N|\leq \kappa_2$ (2) $A\subseteq el(\mathcal N)$ and $B\subseteq el(\alpha^\ast\mathcal N)$. \end{lem} \begin{proof} Applying Lemma \ref{L6.4} (and Remark \ref{Rem6.5}), we obtain submodules $\mathcal N_1$, $\mathcal N_2\subseteq \mathcal M$ such that (1) $|\mathcal N_1|, |\mathcal N_2|\leq \kappa_2$ (2) $A\subseteq el(\mathcal N_1)$, $B\subseteq el(\alpha^\ast\mathcal N_2)$. We set $\mathcal N:=(\mathcal N_1+\mathcal N_2)\subseteq \mathcal M$. Then, $(\mathcal N_1+\mathcal N_2)$ is a quotient of $\mathcal N_1\oplus\mathcal N_2$ and hence $|\mathcal N|\leq \kappa_2$. Also, it is clear that $A\subseteq el(\mathcal N_1) \subseteq el(\mathcal N)$. Since $\alpha$ is flat, we get $B\subseteq el(\alpha^\ast\mathcal N_2)\subseteq el(\alpha^\ast\mathcal N)$. It remains to show that $|\alpha^\ast\mathcal N|\leq \kappa_2$. By the definition in \eqref{ke2.2}, we know that $\alpha^*(\mathcal N)(s)$ is a quotient of \begin{equation}\label{ke6.9} \left(\underset{r\in \mathcal R}{\bigoplus}\mathcal N(r)\otimes \mathcal S(s,\alpha(r))\right) \end{equation} for each $s\in \mathcal S$. Since $\kappa_2 \geq |Mor(\mathcal R)|, |Mor(\mathcal S)|$, it follows from \eqref{ke6.9} that $|\alpha^*(\mathcal N)(s)|\leq\kappa_2$. Again since $\kappa_2 \geq |Mor(\mathcal S)|$, we get $|\alpha^\ast\mathcal N|\leq \kappa_2$. \end{proof} We will now show that $Cart^C-\mathscr R$ has a generator when $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is a flat representation of the poset $\mathscr X$. This will be done using induction on $\mathbb N\times Mor(\mathscr X)$ in a manner similar to the proof of \cite[Proposition 3.25]{EV}. As in Section 4, we set \begin{equation} \kappa =sup\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\} \end{equation} Let $\mathscr M$ be a cartesian module over $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. We now consider an element $m\in el_{\mathscr X}(\mathscr M)$. Suppose that $m\in \mathscr M_x(r)$ for some $x\in \mathscr X$ and $r\in \mathscr R_x$. 
As in the proof of Theorem \ref{T4.9}, we fix a finite dimensional projective $C$-comodule $V$ and a morphism $\eta: V\otimes H_r \longrightarrow \mathscr M_x$ in $\mathbf M^C_{\mathscr R_x}(\psi_x)$ such that $m$ is an element of the image of $\eta$. Corresponding to $\eta$, we define $\mathscr N\subseteq \mathscr M$ as in \eqref{eq4.6}. It is clear that $m\in el_{\mathscr X}(\mathscr N)$. By Lemma \ref{L4.8}, we know that $|\mathscr N|\leq \kappa$. Next, we choose a well ordering of the set $Mor(\mathscr X)$ and consider the induced lexicographic ordering of $\mathbb N\times Mor(\mathscr X)$. Corresponding to each pair $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, we will now define a subobject $\mathscr P(n,\alpha) \hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ satisfying the following conditions. (1) $m\in el_{\mathscr X}(\mathscr P(1,\alpha_0))$, where $\alpha_0$ is the least element of $Mor(\mathscr X)$. (2) $\mathscr P(n,\alpha)\subseteq \mathscr P(m,\beta)$, whenever $(n,\alpha)\leq (m,\beta)$ in $\mathbb N\times Mor(\mathscr X)$ (3) For each $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, the morphism $\mathscr P(n,\alpha)^\alpha:\alpha^\ast \mathscr P(n,\alpha)_y \longrightarrow \mathscr P(n,\alpha)_z$ is an isomorphism in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. (4) $|\mathscr P(n,\alpha)|\leq \kappa$. For $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, we start the process of constructing the module $\mathscr P(n,\alpha)$ as follows: we set \begin{equation}\label{6.11dp} A^0_0(w):=\left\{ \begin{array}{ll} \mathscr N_w & \mbox{if $n=1$ and $\alpha=\alpha_0$}\\ \underset{(m,\beta)<(n,\alpha)}{\bigcup}\textrm{ }\mathscr P(m,\beta)_w& \mbox{otherwise} \\ \end{array}\right. \end{equation} for each $w\in \mathscr X$. It is clear that each $A^0_0(w)\subseteq el(\mathscr M_w)$ and $|A^0_0(w)|\leq \kappa$. Since $\mathscr M$ is cartesian, we know that $\alpha^\ast\mathscr M_y=\mathscr M_z$. Since $\alpha:(\mathscr R_y,C,\psi_y)\longrightarrow (\mathscr R_z,C,\psi_z)$ is flat in $\mathscr Ent_C$, we use Lemma \ref{L6.6} with $A^0_0(y)\subseteq el(\mathscr M_y)$ and $A^0_0(z)\subseteq el(\alpha^\ast\mathscr M_y)= el( \mathscr M_z)$ to obtain $A^0_1(y)\hookrightarrow \mathscr M_y$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$ such that \begin{equation}\label{card6.12} |A^0_1(y)|\leq \kappa \qquad |\alpha^\ast A^0_1(y)|\leq \kappa \qquad A^0_0(y) \subseteq el(A^0_1(y))\qquad A^0_0(z)\subseteq el(\alpha^\ast A^0_1(y)) \end{equation} We now set $A^0_1(z):=\alpha^\ast A^0_1(y)$. Then, \eqref{card6.12} can be rewritten as \begin{equation}\label{card6.13} |A^0_1(y)|\leq \kappa \qquad |A^0_1(z)|\leq \kappa \qquad A^0_0(y) \subseteq el(A^0_1(y))\qquad A^0_0(z)\subseteq el(A^0_1(z)) \end{equation} We observe here that since $\mathscr X$ is a poset, then $y=z$ implies $\alpha:y\longrightarrow z$ is the identity and hence $A^0_1(y)= A^0_1(z)$. For any $w\ne y,z$ in $\mathscr X$, we set $A^0_1(w)=A^0_0(w)$. Combining with \eqref{card6.13}, we have $A^0_0(w)\subseteq A^0_1(w)$ for every $w\in \mathscr X$ and each $|A^0_1(w)|\leq \kappa$. \begin{lem}\label{L6.61} Let $B\subseteq el_{\mathscr X}(\mathscr M)$ with $|B|\leq \kappa$. Then, there is a submodule $\mathscr Q\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $B\subseteq el_{\mathscr X}(\mathscr Q)$ and $|\mathscr Q|\leq \kappa$. 
\end{lem} \begin{proof} For any $m\in B\subseteq el_{\mathscr X}(\mathscr M)$ we can choose, as in the proof of Theorem \ref{T4.9}, a subobject $\mathscr Q_m\subseteq \mathscr M$ such that $m\in el_{\mathscr X}(\mathscr Q_m)$ and $|\mathscr Q_m|\leq \kappa$. Then, we set $\mathscr Q:=\underset{m\in B}{\sum} \mathscr Q_m$. In particular, $\mathscr Q$ is a quotient of $ \underset{m\in B}{\bigoplus} \mathscr Q_m$. Since $|B|\leq \kappa$, the result follows. \end{proof} Using Lemma \ref{L6.61}, we now choose a submodule $\mathscr Q^0(n,\alpha)\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $\underset{w\in \mathscr X}{\bigcup}\textrm{ }A^0_1(w)\subseteq el_{\mathscr X}(\mathscr Q^0(n,\alpha))$ and $|\mathscr Q^0(n,\alpha)|\leq \kappa$. In particular, $A^0_1(w)\subseteq \mathscr Q^0(n,\alpha)_w$ for each $w\in \mathscr X$. We now iterate this construction. Suppose we have constructed a submodule $\mathscr Q^l(n,\alpha)\hookrightarrow \mathscr M$ for every $l\leq m$ such that $\underset{w\in \mathscr X}{\bigcup}\textrm{ }A^l_1(w)\subseteq el_{\mathscr X}(\mathscr Q^l(n,\alpha))$ and $|\mathscr Q^l(n,\alpha)|\leq \kappa$. Then, we set $A^{m+1}_0(w):=\mathscr Q^m(n,\alpha)_w$ for each $w\in \mathscr X$. We then use Lemma \ref{L6.6} with $A^{m+1}_0(y)\subseteq el(\mathscr M_y)$ and $A^{m+1}_0(z)\subseteq el(\alpha^\ast\mathscr M_y)= el( \mathscr M_z)$ to obtain $A^{m+1}_1(y)\hookrightarrow \mathscr M_y$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$ such that \begin{equation}\label{card6.14} |A^{m+1}_1(y)|\leq \kappa \qquad |\alpha^\ast A^{m+1}_1(y)|\leq \kappa \qquad A^{m+1}_0(y) \subseteq el(A^{m+1}_1(y))\qquad A^{m+1}_0(z)\subseteq el(\alpha^\ast A^{m+1}_1(y)) \end{equation} We now set $A^{m+1}_1(z):=\alpha^\ast A^{m+1}_1(y)$. Then, \eqref{card6.14} can be rewritten as \begin{equation}\label{card6.15} |A^{m+1}_1(y)|\leq \kappa \qquad |A^{m+1}_1(z)|\leq \kappa \qquad A^{m+1}_0(y) \subseteq el(A^{m+1}_1(y))\qquad A^{m+1}_0(z)\subseteq el(A^{m+1}_1(z)) \end{equation} For any $w\ne y,z$ in $\mathscr X$, we set $A^{m+1}_1(w)=A^{m+1}_0(w)$. Combining with \eqref{card6.15}, we have $A^{m+1}_0(w)\subseteq A^{m+1}_1(w)$ for every $w\in \mathscr X$ and each $|A^{m+1}_1(w)|\leq \kappa$. Using Lemma \ref{L6.61}, we now choose a submodule $\mathscr Q^{m+1}(n,\alpha)\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $\underset{w\in \mathscr X}{\bigcup}\textrm{ }A^{m+1}_1(w)\subseteq el_{\mathscr X}(\mathscr Q^{m+1}(n,\alpha))$ and $|\mathscr Q^{m+1}(n,\alpha)|\leq \kappa$. In particular, $A^{m+1}_1(w)\subseteq \mathscr Q^{m+1}(n,\alpha)_w$ for each $w\in \mathscr X$. Finally, we set \begin{equation}\label{6.16ep} \mathscr P(n,\alpha):=\underset{m\geq 0}{\varinjlim}\textrm{ }\mathscr Q^m(n,\alpha) \end{equation} in $Mod^C-\mathscr R$. \begin{lem}\label{L6.62} The family $\{\mbox{$\mathscr P(n,\alpha)$ $\vert$ $(n,\alpha)\in \mathbb N\times Mor(\mathscr X)$}\}$ satisfies the following conditions. (1) $m\in el_{\mathscr X}(\mathscr P(1,\alpha_0))$, where $\alpha_0$ is the least element of $Mor(\mathscr X)$. (2) $\mathscr P(n,\alpha)\subseteq \mathscr P(m,\beta)$, whenever $(n,\alpha)\leq (m,\beta)$ in $\mathbb N\times Mor(\mathscr X)$ (3) For each $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, the morphism $\mathscr P(n,\alpha)^\alpha:\alpha^\ast \mathscr P(n,\alpha)_y \longrightarrow \mathscr P(n,\alpha)_z$ is an isomorphism in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. (4) $|\mathscr P(n,\alpha)|\leq \kappa$. 
\end{lem} \begin{proof} The conditions (1) and (2) are immediate from the definition in \eqref{6.11dp}. The condition (4) follows from \eqref{6.16ep} and the fact that each $|\mathscr Q^{m+1}(n,\alpha)|\leq \kappa$. To prove (3), we notice that $\mathscr P(n,\alpha)_y$ may be expressed as the filtered union \begin{equation} A^0_1(y)\hookrightarrow \mathscr Q^0(n,\alpha)_y\hookrightarrow A^1_1(y)\hookrightarrow \mathscr Q^1(n,\alpha)_y\hookrightarrow \dots \hookrightarrow A^{m+1}_1(y)\hookrightarrow\mathscr Q^{m+1}(n,\alpha)_y\hookrightarrow ... \end{equation} of objects in $\mathbf M^C_{\mathscr R_y}(\psi_y)$. Since $\alpha^\ast$ is exact and a left adjoint, we can express $\alpha^\ast\mathscr P(n,\alpha)_y$ as the filtered union \begin{equation} \label{c6.18} \alpha^\ast A^0_1(y)\hookrightarrow \alpha^\ast\mathscr Q^0(n,\alpha)_y\hookrightarrow \alpha^\ast A^1_1(y)\hookrightarrow \alpha^\ast\mathscr Q^1(n,\alpha)_y\hookrightarrow \dots \hookrightarrow \alpha^\ast A^{m+1}_1(y)\hookrightarrow\alpha^\ast\mathscr Q^{m+1}(n,\alpha)_y\hookrightarrow ... \end{equation} in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. Similarly, $\mathscr P(n,\alpha)_z$ may be expressed as the filtered union \begin{equation} \label{c6.19} A^0_1(z)\hookrightarrow \mathscr Q^0(n,\alpha)_z\hookrightarrow A^1_1(z)\hookrightarrow \mathscr Q^1(n,\alpha)_z\hookrightarrow \dots \hookrightarrow A^{m+1}_1(z)\hookrightarrow\mathscr Q^{m+1}(n,\alpha)_z\hookrightarrow ... \end{equation} in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. By definition, we know that $A^m_1(z)=\alpha^\ast A^m_1(y)$ for each $m\geq 0$. From \eqref{c6.18} and \eqref{c6.19}, it is clear that the filtered colimit of the isomorphisms $\alpha^\ast A^m_1(y)=A^m_1(z)$ induces an isomorphism $\mathscr P(n,\alpha)^\alpha:\alpha^\ast \mathscr P(n,\alpha)_y \longrightarrow \mathscr P(n,\alpha)_z$. \end{proof} \begin{lem}\label{L6.7} Let $\mathscr M$ be a cartesian module over a flat representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. Choose $m\in el_{\mathscr X}(\mathscr M)$. Let $\kappa =max\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\}$. Then, there is a cartesian submodule $\mathscr P\subseteq \mathscr M$ with $m\in el_{\mathscr X}(\mathscr P)$ such that $|\mathscr P|\leq \kappa$. \end{lem} \begin{proof} It is clear that $\mathbb N\times Mor(\mathscr X)$ with the lexicographic ordering is filtered. We set \begin{equation} \mathscr P:=\underset{(n,\alpha)\in \mathbb N\times Mor(\mathscr X)}{\bigcup}\textrm{ }\mathscr P(n,\alpha)\subseteq \mathscr M \end{equation} in $Mod^C-\mathscr R$. It is immediate that $m\in el_{\mathscr X}(\mathscr P)$. Since each $|\mathscr P(n,\alpha)|\leq \kappa$, it is clear that $|\mathscr P|\leq \kappa$. We now consider a morphism $\beta:z\longrightarrow w$ in $\mathscr X$. Then, the family $\{(m,\beta)\}_{m\geq 1}$ is cofinal in $\mathbb N\times Mor(\mathscr X)$ and hence it follows that \begin{equation} \mathscr P:=\underset{m\geq 1}{\varinjlim}\textrm{ }\mathscr P(m,\beta) \end{equation} Since each $\mathscr P(m,\beta)^\beta:\beta^\ast \mathscr P(m,\beta)_z \longrightarrow \mathscr P(m,\beta)_w$ is an isomorphism, the filtered colimit $\mathscr P^\beta:\beta^\ast\mathscr P_z\longrightarrow \mathscr P_w$ is an isomorphism. \end{proof} \begin{Thm}\label{T6.10} Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. 
Suppose that $\mathscr R$ is flat. Then, $Cart^C-\mathscr R$ is a Grothendieck category. \end{Thm} \begin{proof} It is already clear that $Cart^C-\mathscr R$ satisfies the (AB5) condition. From Lemma \ref{L6.7}, it is clear that any $\mathscr M\in Cart^C-\mathscr R$ can be expressed as a sum of a family $\{\mathscr P_m\}_{m\in el_{\mathscr X}(\mathscr M)}$ of cartesian submodules such that each $|\mathscr P_m| \leq \kappa$. As such, isomorphism classes of cartesian modules $\mathscr P$ with $|\mathscr P|\leq \kappa$ form a family of generators for $Cart^C-\mathscr R$. \end{proof} \section{Separability of the forgetful functor} Let $(\mathcal R,C,\psi)$ be an entwining structure. We consider the forgetful functor $\mathcal F:\mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M_{\mathcal R}$. By \cite[Lemma 2.4 \& Lemma 3.1]{BBR}, we know that $\mathcal F$ has a right adjoint $\mathcal G:\mathbf M_{\mathcal R}\longrightarrow \mathbf M^C_{\mathcal R}(\psi)$ given by setting $\mathcal G( \mathcal N):=\mathcal N\otimes C$, i.e. $\mathcal G(\mathcal N)(r):=\mathcal N(r)\otimes C$ for each $r\in \mathcal R$. The right $\mathcal R$-module structure on $\mathcal G(\mathcal N)$ is given by $(n\otimes c)\cdot f:=nf_\psi\otimes c^\psi$ for $f\in \mathcal R(r',r)$, $n\in \mathcal N(r)$ and $c\in C$. We continue with $\mathscr X$ being a poset, $C$ being a right semiperfect coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. We denote by $\mathscr Lin$ the category of small $K$-linear categories. Then, for each $x\in \mathscr X$, we may replace the entwining structure $(\mathscr R_x,C,\psi_x)$ by the $K$-linear category $\mathscr R_x$ to obtain a functor that we continue to denote by $\mathscr R:\mathscr X\longrightarrow \mathscr Lin$. We consider modules over $\mathscr R:\mathscr X\longrightarrow \mathscr Lin$ in the sense of Estrada and Virili \cite[Definition 3.6]{EV} and denote their category by $Mod-\mathscr R$. Explicitly, an object $\mathscr N$ in $Mod-\mathscr R$ consists of a module $\mathscr N_x\in \mathbf M_{\mathscr R_x}$ for each $x\in \mathscr X$ as well as compatible morphisms $\mathscr N_\alpha:\mathscr N_x\longrightarrow \alpha_\ast\mathscr N_y$ (equivalently $\mathscr N^\alpha:\alpha^\ast\mathscr N_x\longrightarrow \mathscr N_y$) for each $\alpha:x\longrightarrow y$ in $\mathscr X$. The module $\mathscr N$ is said to be cartesian if each $\mathscr N^\alpha:\alpha^\ast\mathscr N_x\longrightarrow \mathscr N_y$ is an isomorphism. We denote by $Cart-\mathscr R$ the full subcategory of cartesian modules on $\mathscr R$. For each $x\in \mathscr X$, we have a forgetful functor $\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}$ having right adjoint $\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)$. 
From the proofs of Propositions \ref{P2.2} and \ref{P2.3}, it is clear that we have commutative diagrams \begin{equation}\label{cd7.1} \begin{CD} \mathbf M^C_{\mathscr R_y}(\psi_y) @>\alpha_\ast >> \mathbf M^C_{\mathscr R_x}(\psi_x)\\ @V\mathscr F_yVV @VV\mathscr F_xV \\ \mathbf M_{\mathscr R_y}@>\alpha_\ast >> \mathbf M_{\mathscr R_x}\\ \end{CD} \qquad \begin{CD} \mathbf M^C_{\mathscr R_x}(\psi_x) @>\alpha^\ast >> \mathbf M^C_{\mathscr R_y}(\psi_y)\\ @V\mathscr F_xVV @VV\mathscr F_yV \\ \mathbf M_{\mathscr R_x}@>\alpha^\ast >> \mathbf M_{\mathscr R_y}\\ \end{CD}\qquad \begin{CD} \mathbf M_{\mathscr R_y} @>\alpha_\ast >> \mathbf M_{\mathscr R_x}\\ @V\mathscr G_yVV @VV\mathscr G_xV \\ \mathbf M_{\mathscr R_y}^C(\psi_y)@>\alpha_\ast >> \mathbf M^C_{\mathscr R_x}(\psi_x)\\ \end{CD} \end{equation} for each $\alpha:x\longrightarrow y$ in $\mathscr X$. \begin{thm} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the collection $\{\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}\}_{x\in \mathscr X}$ (resp. the collection $\{\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ ) together defines a functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ (resp. a functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$). \end{thm} \begin{proof} We consider $\mathscr M\in Mod^C-\mathscr R$ and set $\mathscr F(\mathscr M)_x:=\mathscr F_x(\mathscr M_x)\in \mathbf M_{\mathscr R_x}$. For a morphism $\alpha:x\longrightarrow y$, we obtain from \eqref{cd7.1} a morphism $\mathscr F(\mathscr M)_\alpha:=\mathscr F_x(\mathscr M_\alpha):\mathscr F_x(\mathscr M_x) \longrightarrow \mathscr F_x(\alpha_\ast\mathscr M_y)=\alpha_\ast\mathscr F_y(\mathscr M_y)$. This shows that $\mathscr F(\mathscr M)$ is an object of $Mod-\mathscr R$. Similarly, it follows from \eqref{cd7.1} that for any $\mathscr N\in Mod-\mathscr R$, we have $\mathscr G(\mathscr N)\in Mod^C-\mathscr R$ obtained by setting $\mathscr G(\mathscr N)_x:=\mathscr G_x(\mathscr N_x) =\mathscr N_x\otimes C$. \end{proof} \begin{thm}\label{P7.2} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ has a right adjoint, given by $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$. \end{thm} \begin{proof} We consider $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N\in Mod-\mathscr R$ along with a morphism $\eta:\mathscr F(\mathscr M)\longrightarrow \mathscr N$ in $Mod-\mathscr R$. We will show how to construct a morphism $\zeta:\mathscr M\longrightarrow \mathscr G(\mathscr N)$ in $Mod^C-\mathscr R$ corresponding to $\eta$. For each $x\in \mathscr X$, we consider $\eta_x:\mathscr F(\mathscr M)_x=\mathscr F_x(\mathscr M_x)\longrightarrow \mathscr N_x$ in $\mathbf M_{\mathscr R_x}$. By \cite[Lemma 3.1]{BBR}, we already know that $(\mathscr F_x,\mathscr G_x)$ is a pair of adjoint functors, which gives us $\mathbf M_{\mathscr R_x}(\mathscr F_x(\mathscr M_x),\mathscr N_x)\cong \mathbf M^C_{\mathscr R_x}( \mathscr M_x,\mathscr G_x(\mathscr N_x))$. Accordingly, we define $\zeta_x:\mathscr M_x\longrightarrow \mathscr G_x(\mathscr N_x)=\mathscr N_x\otimes C$ by setting $\zeta_x(m'):=\eta_x(r)(m'_0)\otimes m'_1$ for $m'\in \mathscr M_x(r)$, $r\in \mathscr R_x$. 
We now consider the diagrams \begin{equation}\label{cd7.2} \begin{CD} \mathscr F_x(\mathscr M_x) @>\eta_x>> \mathscr N_x \\ @V\mathscr F_x(\mathscr M_\alpha)VV @VV\mathscr N_\alpha V\\ \alpha_\ast\mathscr F_y(\mathscr M_y) @>\alpha_\ast(\eta_y)>> \alpha_\ast\mathscr N_y\\ \end{CD} \qquad \Rightarrow \qquad \begin{CD} \mathscr M_x @>\zeta_x>> \mathscr G_x(\mathscr N_x)\\ @V\mathscr M_\alpha VV @VV\mathscr G_x(\mathscr N_\alpha)V\\ \alpha_\ast\mathscr M_y @>\alpha_\ast(\zeta_y)>> \alpha_\ast \mathscr G_y(\mathscr N_y) \\ \end{CD} \end{equation} The left hand side diagram in \eqref{cd7.2} is commutative because $\eta:\mathscr F(\mathscr M)\longrightarrow \mathscr N$ is a morphism in $Mod-\mathscr R$. In order to prove that we have a morphism $\zeta:\mathscr M\longrightarrow \mathscr G(\mathscr N)$ in $Mod^C-\mathscr R$, it suffices to show that this implies the commutativity of the right hand side diagram in \eqref{cd7.2}. We consider $m\in el(\mathscr M_x)$. Then, we have $\mathscr G_x(\mathscr N_\alpha)(\zeta_x(m))=\mathscr N_\alpha(\eta_x(m_0))\otimes m_1$. On the other hand, we have $\alpha_\ast(\zeta_y)(\mathscr M_\alpha(m))=\eta_y((\mathscr M_\alpha(m))_0)\otimes (\mathscr M_\alpha(m))_1$. Since $\mathscr M_\alpha$ is $C$-colinear, we have $(\mathscr M_\alpha(m))_0 \otimes (\mathscr M_\alpha(m))_1=\mathscr M_\alpha(m_0)\otimes m_1$. It follows that $\alpha_\ast(\zeta_y)(\mathscr M_\alpha(m))=\eta_y(\mathscr M_\alpha(m_0))\otimes m_1$. From the left hand side commutative diagram in \eqref{cd7.2}, we get $\eta_y(\mathscr M_\alpha(m_0))=\mathscr N_\alpha(\eta_x(m_0))$, which shows that the right hand diagram in \eqref{cd7.2} is commutative. Similarly, we may show that a morphism $\zeta':\mathscr M\longrightarrow \mathscr G(\mathscr N)$ in $Mod^C-\mathscr R$ induces a morphism $\eta':\mathscr F(\mathscr M)\longrightarrow \mathscr N$ in $Mod-\mathscr R$ and that these two associations are inverse to each other. This proves the result. \end{proof} We now recall that a functor $F:\mathcal A\longrightarrow \mathcal B$ is said to be separable if the natural transformation $\mathcal A(\_\_,\_\_)\longrightarrow \mathcal B(F(\_\_),F(\_\_))$ is a split monomorphism (see \cite{NBO}, \cite{Raf}). If $F$ has a right adjoint $G:\mathcal B\longrightarrow \mathcal A$, then $F$ is separable if and only if there exists a natural transformation $\upsilon \in Nat(GF,1_{\mathcal A})$ satisfying $\upsilon\circ \mu=1_{\mathcal A}$, where $\mu$ is the unit of the adjunction (see \cite[Theorem 1.2]{Raf}). We now consider the forgetful functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ as well as its right adjoint $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ constructed in Proposition \ref{P7.2}. We will need an alternate description for the natural transformations $\mathscr G\mathscr F\longrightarrow 1_{Mod^C-\mathscr R}$. 
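Before doing so, we record an explicit description of the unit of this adjunction, obtained by unwinding the construction in the proof of Proposition \ref{P7.2}; this is the form in which the unit is used in the proof of Theorem \ref{T7.7} below. For any $\mathscr M\in Mod^C-\mathscr R$, $x\in \mathscr X$ and $r\in \mathscr R_x$, the unit $\mu$ of the adjunction $(\mathscr F,\mathscr G)$ is given pointwise by the coactions
\begin{equation}
\mu(\mathscr M)_x(r):\mathscr M_x(r)\longrightarrow (\mathscr G\mathscr F(\mathscr M))_x(r)=\mathscr M_x(r)\otimes C\qquad m\mapsto m_0\otimes m_1
\end{equation}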
\begin{thm}\label{P7.25} A natural transformation $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ corresponds to a collection of natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ such that for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and object $\mathscr M\in Mod^C-\mathscr R$, we have a commutative diagram \begin{equation}\label{cd7.3} \begin{CD} \mathscr G_x\mathscr F_x(\mathscr M_x) @>\upsilon_x(\mathscr M_x)>> \mathscr M_x\\ @V\mathscr G_x\mathscr F_x(\mathscr M_\alpha) VV @VV\mathscr M_\alpha V \\ \alpha_\ast\mathscr G_y\mathscr F_y(\mathscr M_y) @>\alpha_\ast\upsilon_y(\mathscr M_y)>> \alpha_\ast\mathscr M_y\\ \end{CD} \end{equation} in $\mathbf M^C_{\mathscr R_x}(\psi_x)$. \end{thm} \begin{proof} We consider $\upsilon \in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. For $x\in \mathscr X$, we define the natural transformation $\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})$ by setting \begin{equation}\label{eq7.35b} \upsilon_x(\mathcal M):=\upsilon(ex^C_x(\mathcal M))_x:\mathscr G_x\mathscr F_x(\mathcal M)=\mathscr G_x\mathscr F_x((ex^C_x(\mathcal M))_x)\longrightarrow (ex_x^C(\mathcal M))_x=\mathcal M\end{equation} for $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$. We now consider $\mathscr M\in Mod^C-\mathscr R$. For $\alpha:x\longrightarrow y$ in $\mathscr X$, the morphism $\upsilon(\mathscr M):\mathscr G\mathscr F(\mathscr M)\longrightarrow \mathscr M$ in $Mod^C-\mathscr R$ leads to a commutative diagram \begin{equation}\label{7.36b} \begin{CD} (\mathscr G\mathscr F(\mathscr M))_x = \mathscr G_x\mathscr F_x(\mathscr M_x) @>\upsilon(\mathscr M)_x>> \mathscr M_x \\ @V\mathscr G_x\mathscr F_x(\mathscr M_\alpha) VV @VV\mathscr M_\alpha V \\ \alpha_\ast(\mathscr G\mathscr F(\mathscr M))_y=\alpha_\ast\mathscr G_y\mathscr F_y(\mathscr M_y) @> \alpha_\ast(\upsilon(\mathscr M)_y)>> \alpha_\ast\mathscr M_y\\ \end{CD} \end{equation} We now claim that $\upsilon(\mathscr M)_x=(\upsilon(ex^C_x(\mathscr M_x)))_x=\upsilon_x(\mathscr M_x)$ for each $x\in \mathscr X$. For this, we consider the canonical morphism $\zeta: ex_x^C(\mathscr M_x)=ex_x^C(ev_x^C(\mathscr M))\longrightarrow \mathscr M$ in $Mod^C-\mathscr R$ corresponding to the adjoint pair $(ex_x^C,ev_x^C)$ in Proposition \ref{P5.3}. It is clear that $ev_x^C(\zeta)=id$. Then, we have commutative diagrams \begin{equation}\label{7.37b} \begin{array}{ccc} \begin{CD} \mathscr G\mathscr F(ex^C_x(\mathscr M_x)) @>\upsilon(ex^C_x(\mathscr M_x))>> ex_x^C(\mathscr M_x)\\ @V\mathscr G\mathscr F(\zeta)VV @VV\zeta V\\ \mathscr G\mathscr F(\mathscr M) @>\upsilon(\mathscr M)>> \mathscr M \\ \end{CD} & \qquad \Rightarrow \qquad & \begin{CD} \mathscr G_x\mathscr F_x(\mathscr M_x) @>(\upsilon(ex^C_x(\mathscr M_x)))_x>> \mathscr M_x\\ @Vid VV @VVid V\\ \mathscr G_x\mathscr F_x(\mathscr M_x) @>\upsilon(\mathscr M)_x>> \mathscr M_x \\ \end{CD} \\ \end{array} \end{equation} This proves that $\upsilon(\mathscr M)_x=(\upsilon(ex^C_x(\mathscr M_x)))_x=\upsilon_x(\mathscr M_x)$ for each $x\in \mathscr X$. The commutativity of the diagram \eqref{cd7.3} now follows from \eqref{7.36b}. 
Conversely, given a collection of natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ satisfying \eqref{cd7.3} for each $\mathscr M \in Mod^C-\mathscr R$, we get $\upsilon(\mathscr M):\mathscr G\mathscr F(\mathscr M)\longrightarrow \mathscr M$ in $Mod^C-\mathscr R$ by setting $\upsilon(\mathscr M)_x= \upsilon_x(\mathscr M_x)$ for each $x\in \mathscr X$. From \eqref{cd7.3}, it is clear that $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. \end{proof} More explicitly, the diagram in \eqref{cd7.3} shows that for each $\alpha:x\longrightarrow y$ in $\mathscr X$ and $r\in \mathscr R_x$, we have a commutative diagram \begin{equation}\label{cd7.4} \begin{CD} \mathscr M_x(r)\otimes C=(\mathscr G_x\mathscr F_x(\mathscr M_x))(r) @>(\upsilon_x(\mathscr M_x))(r)>> \mathscr M_x(r)\\ @V(\mathscr G_x\mathscr F_x(\mathscr M_\alpha))(r) VV @VV\mathscr M_\alpha(r) V \\ \mathscr M_y(\alpha(r))\otimes C=(\mathscr G_y\mathscr F_y(\mathscr M_y))(\alpha(r))=(\alpha_\ast\mathscr G_y\mathscr F_y(\mathscr M_y))(r) @>(\alpha_\ast\upsilon_y(\mathscr M_y))(r)>=(\upsilon_y(\mathscr M_y))(\alpha(r))> (\alpha_\ast\mathscr M_y)(r)=\mathscr M_y(\alpha(r))\\ \end{CD} \end{equation} We note that all morphisms in \eqref{cd7.4} are $C$-colinear. We now give another interpretation of the space $ Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. For this, we consider a collection $\theta:=\{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{x\in \mathscr X,r\in \mathscr R_x}$ of $K$-linear maps satisfying the following conditions. (1) Fix $x\in \mathscr X$ and $r\in \mathscr R_x$. Then, for $c$, $d\in C$, we have \begin{equation}\label{theta1} \theta_x(r)(c\otimes d_1)\otimes d_2=(\theta_x(r)(c_2\otimes d))_{\psi_x}\otimes {c_1}^{\psi_x} \end{equation} (2) Fix $x\in \mathscr X$ and $c$, $d\in C$. Then, for $f:s\longrightarrow r$ in $\mathscr R_x$, we have \begin{equation}\label{theta2} (\theta_x(r)(c\otimes d))\circ f=f_{{\psi_x}_{\psi_x}}\circ (\theta_x(s)(c^{\psi_x}\otimes d^{\psi_x})) \end{equation} (3) Fix $c$, $d\in C$. Then, for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and $r\in \mathscr R_x$, we have \begin{equation}\label{theta3} \alpha(\theta_x(r)(c\otimes d))=\theta_y(\alpha(r))(c\otimes d) \end{equation} The space of all such $\theta$ will be denoted by $V_1$. \begin{thm}\label{P7.3} Let $\theta\in V_1$. Then, $\theta$ induces a natural transformation $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$, such that for each $x\in \mathscr X$, $\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})$ is given by \begin{equation}\label{eq7.8} \upsilon_x(\mathcal M):\mathcal M\otimes C\longrightarrow \mathcal M\qquad (m\otimes c)\mapsto \mathcal M(\theta_x(r)(m_1\otimes c))(m_0) \end{equation} for any $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$, $r\in \mathscr R_x$, $m\in \mathcal M(r)$ and $c\in C$. \end{thm} \begin{proof} From \cite[Proposition 3.6]{BBR}, it follows that each $\upsilon_x$ as defined in \eqref{eq7.8} by the collection $\theta_x:= \{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{r\in \mathscr R_x}$ gives a natural transformation $\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})$. To prove the result, it therefore suffices to show the commutativity of the diagram \eqref{cd7.4} for any $\mathscr M\in Mod^C-\mathscr R$. 
Accordingly, for $\alpha: x\longrightarrow y$ in $\mathscr X$ and $r\in \mathscr R_x$, we have \begin{equation}\label{eq7.9} ((\mathscr M_\alpha(r))\circ (\upsilon_x(\mathscr M_x))(r))(m\otimes c)=(\mathscr M_\alpha(r))(\mathscr M_x(\theta_x(r)(m_1\otimes c))(m_0)) \end{equation} for $m\otimes c\in \mathscr M_x(r)\otimes C$. On the other hand, we have \begin{equation}\label{eq7.10} \begin{array}{ll} (((\upsilon_y(\mathscr M_y))(\alpha(r)))\circ ((\mathscr G_x\mathscr F_x(\mathscr M_\alpha))(r)))(m\otimes c)&=\mathscr M_y(\theta_y(\alpha(r))(\mathscr M_\alpha(m)_1\otimes c ))(\mathscr M_\alpha(r)(m))_0\\ &=\mathscr M_y(\theta_y(\alpha(r))(m_1\otimes c))(\mathscr M_\alpha(r)(m_0))\\ &=\mathscr M_y(\alpha(\theta_x(r)(m_1\otimes c)))(\mathscr M_\alpha(r)(m_0))\\ \end{array} \end{equation} The second equality in \eqref{eq7.10} follows from the $C$-colinearity of $\mathscr M_\alpha(r)$ and the third equality follows by applying condition \eqref{theta3}. We now notice that for any $f\in \mathscr R_x(r,r)$, we have a commutative diagram \begin{equation}\label{cd7.11} \begin{CD} \mathscr M_x(r) @>\mathscr M_\alpha(r)>> \mathscr M_y(\alpha(r))\\ @V\mathscr M_x(f)VV @VV\mathscr M_y(\alpha(f))V \\ \mathscr M_x(r) @>\mathscr M_\alpha(r)>> \mathscr M_y(\alpha(r))\\ \end{CD} \end{equation} Applying \eqref{cd7.11} to $f=\theta_x(r)(m_1\otimes c)\in \mathscr R_x(r,r)$, we obtain from \eqref{eq7.10} that \begin{equation}\label{eq7.12} (((\upsilon_y(\mathscr M_y))(\alpha(r)))\circ ((\mathscr G_x\mathscr F_x(\mathscr M_\alpha))(r)))(m\otimes c)=(\mathscr M_\alpha(r))(\mathscr M_x(\theta_x(r)(m_1\otimes c))(m_0)) \end{equation} This proves the result. \end{proof} Fix $x\in \mathscr X$ and $r\in \mathscr R_x$. We now set \begin{equation} \mathscr H_y^{(x,r)}:=\left\{\begin{array}{ll} \mathscr R_y(\_\_,\alpha(r)) \otimes C & \mbox{if $\alpha:x\longrightarrow y$} \\ 0 & \mbox{if $x\not\leq y$}\\ \end{array}\right. \end{equation} for each $y\in \mathscr X$. \begin{lem}\label{L7.4} For each $x\in \mathscr X$ and $r\in \mathscr R_x$, the collection $\mathscr H^{(x,r)}:=\{\mathscr H_y^{(x,r)}\}_{y\in \mathscr X}$ determines an object of $Mod^C-\mathscr R$. \end{lem} \begin{proof} For each $y\in \mathscr X$, it follows by \cite[Lemma 2.4]{BBR} that $\mathscr H_y^{(x,r)}$ is an object of $\mathbf M^C_{\mathscr R_y}(\psi_y)$. We consider $\beta:y\longrightarrow z$ in $\mathscr X$ and suppose we have $\alpha:x\longrightarrow y$, i.e., $x\leq y$. Then, for $r'\in \mathscr R_y$, we have an obvious morphism \begin{equation}\beta(\_\_)\otimes C: \mathscr H_y^{(x,r)}(r')= \mathscr R_y(r',\alpha(r))\otimes C\longrightarrow \beta_\ast(\mathscr R_z(\_\_,\beta\alpha(r))\otimes C)(r')=\mathscr R_z(\beta(r'),\beta\alpha(r))\otimes C \end{equation} which is $C$-colinear. 
To prove that $\mathscr H_y^{(x,r)}\longrightarrow \beta_\ast\mathscr H_z^{(x,r)}$ is a morphism in $\mathbf M^C_{\mathscr R_y}( \psi_y)$, it remains to show that for any $g:r''\longrightarrow r'$ in $\mathscr R_y$, the following diagram commutes \begin{equation}\label{cd7.15} \begin{CD} \mathscr R_y(r',\alpha(r))\otimes C @>\cdot g>> \mathscr R_y(r'',\alpha(r))\otimes C\\ @V\beta(\_\_)\otimes CVV @VV\beta(\_\_)\otimes CV \\ \mathscr R_z(\beta(r'),\beta\alpha(r))\otimes C @>\cdot \beta(g)>>\mathscr R_z(\beta(r''),\beta\alpha(r))\otimes C \\ \end{CD} \end{equation} For $f\otimes c\in \mathscr R_y(r',\alpha(r))\otimes C$, we have \begin{equation*} (\beta(\_\_)\otimes C)((f\otimes c)\cdot g)=(\beta(\_\_)\otimes C)(fg_{\psi_y}\otimes c^{\psi_y})=\beta(f)\beta(g_{\psi_y})\otimes c^{\psi_y}=\beta(f)\beta(g)_{\psi_z} \otimes c^{\psi_z}=(\beta(f)\otimes c)\cdot \beta(g) \end{equation*} This shows that \eqref{cd7.15} is commutative. Finally, if $x\not\leq y$, then $0=\mathscr H_y^{(x,r)}\longrightarrow \beta_\ast\mathscr H_z^{(x,r)}$ is obviously a morphism in $\mathbf M^C_{\mathscr R_y}( \psi_y)$. This proves the result. \end{proof} \begin{thm}\label{P7.5} Let $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. For each $x\in \mathscr X$ and $r\in \mathscr R_x$, define $\theta_x(r): C\otimes C\longrightarrow \mathscr R_x(r,r)$ by setting \begin{equation}\label{eq7.16} \theta_x(r)(c\otimes d):=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr H^{(x,r)}_x)(r)))(id_r\otimes c\otimes d) \end{equation} for $c$, $d\in C$. Then, the collection $\theta:=\{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{x\in \mathscr X,r\in \mathscr R_x}$ is an element of $V_1$. \end{thm} \begin{proof} From the definition in \eqref{eq7.16}, we have explicitly that \begin{equation}\label{eq7.17} \theta_x(r)(c\otimes d)=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr R_x(\_\_,r)\otimes C)(r)))(id_r\otimes c\otimes d) \end{equation} Then, it follows from \cite[Proposition 3.5]{BBR} that $\theta_x(r)$ satisfies the conditions in \eqref{theta1} and \eqref{theta2}. It remains to verify the condition \eqref{theta3}. For this we take $\alpha:x\longrightarrow y$ in $\mathscr X$ and consider the commutative diagram \begin{equation}\label{cd7.18} \begin{CD} \mathscr R_x(r',r)\otimes C\otimes C @>\upsilon_x(\mathscr H^{(x,r)}_x)(r')>> \mathscr R_x(r',r)\otimes C @>id\otimes \varepsilon_C>> \mathscr R_x(r',r)\\ @V\alpha(\_\_)\otimes C\otimes CVV @VV\alpha(\_\_)\otimes CV @VV\alpha(\_\_)V\\ \mathscr R_y(\alpha(r'),\alpha(r))\otimes C\otimes C @>\upsilon_y(\mathscr H^{(x,r)}_y)(\alpha(r'))>>\mathscr R_y(\alpha(r'),\alpha(r))\otimes C @>id\otimes \varepsilon_C>>\mathscr R_y(\alpha(r'),\alpha(r))\\ \end{CD} \end{equation} for any $r,r'\in \mathscr R_x$. Since $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$, the commutativity of the left hand side square in \eqref{cd7.18} follows from \eqref{cd7.4}. It is clear that the right hand square in \eqref{cd7.18} is commutative. We notice that $\mathscr H_y^{(y,\alpha(r))}=\mathscr H_y^{(x,r)}$ in $\mathbf M_{\mathscr R_y}^C(\psi_y)$. Applying \eqref{cd7.18} with $r'=r\in\mathscr R_x$ and $id_r\otimes c\otimes d\in \mathscr R_x(r,r)\otimes C\otimes C$, it follows from \eqref{eq7.17} that $\alpha(\theta_x(r)(c\otimes d))=\theta_y(\alpha(r))(c\otimes d)$. This proves \eqref{theta3}. \end{proof} \begin{thm}\label{P7.6} $Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ is isomorphic to $V_1$. 
\end{thm} \begin{proof} From Proposition \ref{P7.3} and Proposition \ref{P7.5}, we see that we have maps $\psi:V_1\longrightarrow Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\phi:Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})\longrightarrow V_1$ in opposite directions. We consider $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. By Proposition \ref{P7.5}, $\upsilon$ induces an element $\theta\in V_1$. Applying Proposition \ref{P7.3}, $\theta$ induces an element in $Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$, which we denote by $\upsilon'$. Then, $\upsilon$ and $\upsilon'$ are determined respectively by natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ and $\{\upsilon'_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ satisfying compatibility conditions as in \eqref{cd7.3}. From \cite[Proposition 3.7]{BBR}, it follows that $\upsilon'_x=\upsilon_x$ for each $x\in \mathscr X$. Hence, $\upsilon'=\upsilon$ and $\psi\circ \phi=id$. Similarly, we can show that $\phi\circ \psi=id$. \end{proof} \begin{Thm}\label{T7.7} Let $\mathscr X$ be a partially ordered set. Let $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable if and only if there exists $\theta\in V_1$ such that \begin{equation}\label{eq7.19d} \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r \end{equation} for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. \end{Thm} \begin{proof} We suppose that $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable. As mentioned before, this implies that there exists $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ such that $\upsilon\circ \mu=1_{Mod^C-\mathscr R}$, where $\mu$ is the unit of the adjunction $(\mathscr F,\mathscr G)$. We set $\theta=\phi(\upsilon)$, where $\phi:Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})\longrightarrow V_1$ is the isomorphism described in the proof of Proposition \ref{P7.6}. In particular, for every $x\in \mathscr X$, $r\in\mathscr R_x$, we have $\upsilon(\mathscr H^{(x,r)})\circ \mu(\mathscr H^{(x,r)})=id$. From \eqref{eq7.16}, it now follows that for every $c\in C$, we have \begin{equation} \begin{array}{ll} \theta_x(r)(c_1\otimes c_2)&=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr H^{(x,r)}_x)(r)))(id_r\otimes c_1\otimes c_2)\\ &=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr H^{(x,r)}_x)(r))\circ \mu(\mathscr H^{(x,r)})_x(r))(id_r\otimes c)\\ &=(id\otimes \varepsilon_C)(id_r\otimes c)=\varepsilon_C(c)\cdot id_r\\ \end{array} \end{equation} Conversely, suppose that there exists $\theta\in V_1$ satisfying the condition in \eqref{eq7.19d}. We set $\upsilon:=\psi(\theta)$, where $\psi:V_1\longrightarrow Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ is the other isomorphism described in the proof of Proposition \ref{P7.6}. We consider $\mathscr M\in Mod^C-\mathscr R$. By \eqref{eq7.8}, we know that \begin{equation}\label{eq7.21} \upsilon_x(\mathscr M_x):\mathscr M_x\otimes C\longrightarrow \mathscr M_x\qquad (m\otimes c)\mapsto \mathscr M_x(\theta_x(r)(m_1\otimes c))(m_0) \end{equation} for any $x\in \mathscr X$, $r\in \mathscr R_x$, $m\in \mathscr M_x(r)$ and $c\in C$. We claim that $\upsilon\circ \mu=1_{Mod^C-\mathscr R}$. 
For this, we see that \begin{equation} \begin{array}{ll} ((\upsilon(\mathscr M)\circ \mu(\mathscr M))_x(r))(m)&=(\upsilon_x(\mathscr M_x)(r))(m_0\otimes m_1) \\ &= \mathscr M_x(\theta_x(r)(m_{01}\otimes m_1))(m_{00})\\ &= \mathscr M_x(\theta_x(r)(m_{11}\otimes m_{12}))(m_{0})\\ &= \varepsilon_C(m_1)m_0=m\\ \end{array} \end{equation} Here, the third equality uses the coassociativity of the coaction, while the final line follows from \eqref{eq7.19d} together with the counit property of $C$. This proves the result. \end{proof} We now turn to cartesian modules over entwined $C$-representations. For this, we assume additionally that $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, it follows from Theorem \ref{T6.10} that $Cart^C-\mathscr R$ is a Grothendieck category. In particular, by taking $C=K$, we note that $Cart-\mathscr R$ is also a Grothendieck category. \begin{thm}\label{P7.8} Let $\mathscr X$ be a poset, $C$ be a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation that is also flat. Then, the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ restricts to a functor $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$. Additionally, $\mathscr F^c$ has a right adjoint $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. \end{thm} \begin{proof} We consider $\mathscr M\in Cart^C-\mathscr R$. We claim that $\mathscr F(\mathscr M)\in Mod-\mathscr R$ actually lies in the subcategory $Cart-\mathscr R$. Indeed, for $\alpha:x\longrightarrow y$ in $\mathscr X$, we have $\mathscr F(\mathscr M)_\alpha:\mathscr F_x(\mathscr M_x)=\mathscr M_x\longrightarrow\alpha_\ast\mathscr M_y=\alpha_\ast \mathscr F_y(\mathscr M_y)$ in $\mathbf M_{\mathscr R_x}$. By adjunction, this corresponds to a morphism $\alpha^\ast\mathscr M_x\longrightarrow \mathscr M_y$ in $\mathbf M_{\mathscr R_y}$. But since $\mathscr M\in Cart^C-\mathscr R$, we already know that $\alpha^\ast\mathscr M_x\longrightarrow \mathscr M_y$ is an isomorphism. Hence, $\mathscr F^c(\mathscr M):= \mathscr F(\mathscr M)\in Cart-\mathscr R$. We also notice that $Cart^C-\mathscr R$ is closed under taking colimits in $Mod^C-\mathscr R$. Then $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$ preserves colimits and we know from Theorem \ref{T6.10} that both $Cart^C-\mathscr R$ and $Cart-\mathscr R$ are Grothendieck categories. It now follows from \cite[Proposition 8.3.27]{KSch} that $\mathscr F^c$ has a right adjoint. \end{proof} \begin{thm}\label{P7.9} Let $\mathscr X$ be a poset, $C$ be a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation that is also flat. Suppose there exists $\theta\in V_1$ such that \begin{equation}\label{eq7.19} \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r \end{equation} for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. Then, $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$ is separable. \end{thm} \begin{proof} From Theorem \ref{T7.7}, it follows that $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable. In other words, for any $\mathscr M$, $\mathscr N\in Mod^C-\mathscr R$, the canonical morphism $Mod^C-\mathscr R(\mathscr M,\mathscr N)\longrightarrow Mod-\mathscr R(\mathscr F(\mathscr M),\mathscr F(\mathscr N))$ is a split monomorphism. Since $Cart^C-\mathscr R$ and $Cart-\mathscr R$ are full subcategories of $Mod^C-\mathscr R$ and $Mod-\mathscr R$ respectively and $\mathscr F^c$ is a restriction of $\mathscr F$, the result follows.
\end{proof} \section{Separability of the functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$} We continue with $\mathscr X$ being a poset, $C$ being a right semiperfect coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. In this section, we will give conditions for the right adjoint $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ to be separable. Putting $C=K$ in Proposition \ref{P5.3}, we see that for each $x\in \mathscr X$, there is a functor $ex_x:\mathbf M_{\mathscr R_x}\longrightarrow Mod-\mathscr R$ having right adjoint $ev_x:Mod-\mathscr R\longrightarrow \mathbf M_{\mathscr R_x}$. In a manner similar to Proposition \ref{P7.25}, we now can show that a natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ consists of a collection of natural transformations $\{\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)\}_{x\in \mathscr X}$ such that for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and any $\mathscr N\in Mod-\mathscr R$, we have the following commutative diagram \begin{equation}\label{eq8.1} \begin{CD} \mathscr N_x @>\omega_x(\mathscr N_x)>> \mathscr F_x\mathscr G_x(\mathscr N_x)\\ @V\mathscr N_\alpha VV @VV\mathscr F_x\mathscr G_x(\mathscr N_\alpha)V \\ \alpha_\ast\mathscr N_y @>\alpha_\ast\omega_y(\mathscr N_y)>> \alpha_\ast\mathscr F_y\mathscr G_y(\mathscr N_y)\\ \end{CD} \end{equation} Here, $\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)$ is determined by setting \begin{equation}\label{Yeq7.35b} \omega_x(\mathcal N):=\omega(ex_x(\mathcal N))_x: (ex_x(\mathcal N))_x=\mathcal N\longrightarrow \mathscr F_x\mathscr G_x(\mathcal N)=\mathscr F_x\mathscr G_x((ex_x(\mathcal N))_x)\end{equation} for $\mathcal N\in \mathbf M_{\mathscr R_x}$. As in the proof of Proposition \ref{P7.25}, we can also show that \begin{equation}\label{Ye8.15} \omega_x(\mathscr N_x)=\omega(ex_x(\mathscr N_x))_x=\omega(\mathscr N)_x \end{equation} for any $\mathscr N\in Mod-\mathscr R$ and $x\in \mathscr X$. More explicitly, for each $x\in \mathscr X$ and $r\in \mathscr R_x$, we have a commutative diagram \begin{equation}\label{eq8.2} \begin{CD} \mathscr N_x(r) @>(\omega_x(\mathscr N_x))(r)>> (\mathscr F_x\mathscr G_x(\mathscr N_x))(r)=\mathscr N_x(r)\otimes C\\ @V\mathscr N_\alpha(r)VV @VV(\mathscr F_x\mathscr G_x(\mathscr N_\alpha))(r)V\\ \mathscr N_y(\alpha(r))=(\alpha_\ast\mathscr N_y)(r)@>(\alpha_\ast\omega_y(\mathscr N_y))(r)>=\omega_y(\mathscr N_y)(\alpha(r))> ( \alpha_\ast\mathscr F_y\mathscr G_y(\mathscr N_y))(r)=\mathscr N_y(\alpha(r))\otimes C \\ \end{CD} \end{equation} We will now give another interpretation for the space $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$. For this, we consider a collection $\eta=\{\eta_x(s,r):H_r^x(s)=\mathscr R_x(s,r)\longrightarrow H_r^x(s)\otimes C=\mathscr R_x(s,r)\otimes C:f\mapsto \hat{f}\otimes c_f\}_{x\in \mathscr X,r,s\in \mathscr R_x}$ of $K$-linear maps satisfying the following conditions: (1) Fix $x\in \mathscr X$. 
Then, for $s' \xrightarrow{h} s \xrightarrow{f} r \xrightarrow{g} r'$ in $\mathscr R_x$, we have \begin{equation}\label{8.3eta} \eta_x(s',r')(gfh)=\sum \widehat{gfh}\otimes c_{gfh}=g\hat{f}h_{\psi_x}\otimes c_f^{\psi_x}\in \mathscr R_x(s',r')\otimes C \end{equation} (2) For $\alpha:x\longrightarrow y$ in $\mathscr X$ and $f\in \mathscr R_x(s,r)$ we have \begin{equation}\label{8.4eta} \alpha(\hat{f})\otimes c_f=\widehat{\alpha(f)}\otimes c_{\alpha(f)} \in \mathscr R_y(\alpha(s),\alpha(r))\otimes C \end{equation} The space of all such $\eta$ will be denoted by $W_1$. We note that condition (1) is equivalent to saying that for each $x\in \mathscr X$, the element $\eta_x=\{\eta_x(s,r):\mathscr R_x(s,r)\longrightarrow \mathscr R_x(s,r)\otimes C:f\mapsto \hat{f}\otimes c_f\}_{r,s\in \mathscr R_x}\in Nat(H^x,H^x\otimes C)$, i.e., $\eta_x$ is a morphism in the category of $\mathscr R_x$-bimodules (functors $\mathscr R_x^{op} \otimes \mathscr R_x\longrightarrow Vect_K$). Here $H^x$ is the canonical $\mathscr R_x$-bimodule that takes a pair of objects $(s,r)\in Ob(\mathscr R_x^{op} \otimes \mathscr R_x)$ to $\mathscr R_x(s,r)$. Further, $H^x\otimes C$ is the $\mathscr R_x$-bimodule defined by setting \begin{equation} (H^x\otimes C)(s,r)=\mathscr R_x(s,r)\otimes C\qquad (H^x\otimes C)(h,g)(f\otimes c)=gfh_{\psi_x}\otimes c^{\psi_x} \end{equation} for $s' \xrightarrow{h} s \xrightarrow{f} r \xrightarrow{g} r'$ in $\mathscr R_x$ and $c\in C$. \begin{lem}\label{Lem8.1} There is a canonical morphism $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)\longrightarrow W_1$. \end{lem} \begin{proof} As mentioned above, any $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ corresponds to a collection of natural transformations $\{\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)\}_{x\in \mathscr X}$ satisfying \eqref{eq8.1}. From the proof of \cite[Proposition 3.10]{BBR}, we know that each $\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)$ corresponds to $\eta_x\in Nat(H^x,H^x\otimes C)$ determined by setting \begin{equation}\label{eqr8.6} \eta_x(s,r):H_r^x(s)=\mathscr R_x(s,r)\longrightarrow H_r^x(s)\otimes C=\mathscr R_x(s,r)\otimes C \qquad \eta_x(s,r):=\omega_x(H^x_r)(s) \end{equation} for $r$, $s\in \mathscr R_x$. Here, $H^x_r$ is the right $\mathscr R_x$-module $H_r^x:=\mathscr R_x(\_\_,r):\mathscr R_x^{op}\longrightarrow Vect_K$. We now consider $\alpha:x\longrightarrow y$ in $\mathscr X$ and some $f\in \mathscr R_x(s,r)$. By applying Lemma \ref{L5.1} with $C=K$, we have $ex_x(H^x_r)\in Mod-\mathscr R$ which satisfies $(ex_x(H^x_r))_y=\alpha^\ast H^x_r=H^y_{\alpha(r)}$. Setting $\mathscr N=ex_x(H^x_r)$ in \eqref{eq8.2}, we have \begin{equation}\label{eq8.7} \begin{CD} \mathscr N_x(s)=H^x_r(s) @>(\omega_x(H^x_r))(s)>=\eta_x(s,r)> (\mathscr F_x\mathscr G_x(\mathscr N_x))(s)=H^x_r(s)\otimes C\\ @V\mathscr N_\alpha(s)VV @VV(\mathscr F_x\mathscr G_x(\mathscr N_\alpha))(s)V\\ \mathscr N_y(\alpha(s))=H^y_{\alpha(r)}(\alpha(s))@>\eta_y(\alpha(s),\alpha(r))>=\omega_y(H^y_{\alpha(r)})(\alpha(s))>\mathscr N_y(\alpha(s))\otimes C =H^y_{\alpha(r)}(\alpha(s)) \otimes C\\ \end{CD} \end{equation} It follows that the collection $\eta_x(s,r)$ satisfies condition \eqref{8.4eta}. This proves the result. \end{proof} \begin{thm}\label{Pro8.2} The spaces $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ and $ W_1$ are isomorphic. \end{thm} \begin{proof} We consider an element $\eta\in W_1$.
As mentioned before, this gives a collection $\{\eta_x\in Nat(H^x,H^x\otimes C)\}_{x\in\mathscr X}$ satisfying the compatibility condition in \eqref{8.4eta}. From the proof of \cite[Proposition 3.10]{BBR}, it follows that each $\eta_x$ corresponds to a natural transformation $\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)$ which satisfies $\omega_x(H^x_r)(s)=\eta_x(s,r)$ for $r$, $s\in \mathscr R_x$. We claim that the collection $\{\omega_x\}_{x\in \mathscr X}$ satisfies the compatibility condition in \eqref{eq8.1} for each $\mathscr N\in Mod-\mathscr R$, thus determining an element $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$. We start with $\mathscr N=ex_x(H^x_r)$ for some $x\in \mathscr X$ and $r\in \mathscr R_x$. We consider a morphism $\alpha:y\longrightarrow z$ in $\mathscr X$. If $x\not\leq y$, then $\mathscr N_y=0$ and the condition in \eqref{eq8.1} is trivially satisfied. Otherwise, let $\beta:x\longrightarrow y$ in $\mathscr X$ and set $s=\beta(r)$. In particular, $\mathscr N_y=\beta^\ast H^x_r=H^y_{\beta(r)}=H^y_s$ and $\mathscr N_z=H^z_{\alpha\beta(r)}=H^z_{\alpha(s)}$. Applying the condition \eqref{8.4eta}, we see that the following diagram is commutative for any $s'\in \mathscr R_y$: \begin{equation} \begin{CD} \mathscr R_y(s',s)=\mathscr N_y(s')@>\eta_y(s',s)>=\omega_y(\mathscr N_y)(s')> \mathscr N_y(s')\otimes C=\mathscr R_y(s',s)\otimes C\\ @V\mathscr N_\alpha(s')VV @VV(\mathscr F_y\mathscr G_y(\mathscr N_\alpha))(s')V \\ \mathscr R_z(\alpha(s'),\alpha(s))=\mathscr N_z(\alpha(s'))@>\eta_z(\alpha(s'),\alpha(s))>=\omega_z(\mathscr N_z)(\alpha(s'))> \mathscr N_z(\alpha(s'))\otimes C=\mathscr R_z(\alpha(s'), \alpha(s))\otimes C \\ \end{CD} \end{equation} In other words, the condition in \eqref{eq8.1} is satisfied for $\mathscr N=ex_x(H^x_r)$. From Theorem \ref{T5.5}, we know that the collection \begin{equation}\label{8gen} \{\mbox{$ex_x(H_r^x)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$}\} \end{equation} is a set of generators for $Mod-\mathscr R$. Accordingly, for any $\mathscr N'\in Mod-\mathscr R$, we can choose an epimorphism $\phi:\mathscr N\longrightarrow \mathscr N'$ where $\mathscr N$ is a direct sum of copies of objects in \eqref{8gen}. 
Then, $\mathscr N$ satisfies \eqref{eq8.1} and we have commutative diagrams \begin{equation} \begin{CD} \mathscr N_y @>\mathscr N_\alpha>> \alpha_\ast\mathscr N_z @>\alpha_\ast\omega_z(\mathscr N_z)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N_z) \\ @V\phi_yVV @V\alpha_\ast\phi_zVV @VV\alpha_\ast\mathscr F_z\mathscr G_z(\phi_z)V \\ \mathscr N'_y @>\mathscr N'_\alpha>> \alpha_\ast\mathscr N'_z @>\alpha_\ast\omega_z(\mathscr N'_z)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N'_z) \\ \end{CD} \end{equation} \begin{equation} \begin{array}{c} \begin{CD} \mathscr N_y @>\omega_y(\mathscr N_y)>> \mathscr F_y\mathscr G_y(\mathscr N_y) @>\mathscr F_y\mathscr G_y(\mathscr N_\alpha)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N_z) \\ @V\phi_yVV @V\mathscr F_y\mathscr G_y(\phi_y)VV @VV\alpha_\ast\mathscr F_z\mathscr G_z(\phi_z)V \\ \mathscr N'_y @>\omega_y(\mathscr N'_y)>> \mathscr F_y\mathscr G_y(\mathscr N'_y) @>\mathscr F_y\mathscr G_y(\mathscr N'_\alpha)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N'_z) \\ \end{CD}\qquad \qquad \begin{CD} \mathscr N_y @>\omega_y(\mathscr N_y)>> \mathscr F_y\mathscr G_y(\mathscr N_y)\\ @V\mathscr N_\alpha VV @VV\mathscr F_y\mathscr G_y(\mathscr N_\alpha)V \\ \alpha_\ast\mathscr N_z @>\alpha_\ast\omega_z(\mathscr N_z)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N_z)\\ \end{CD}\\ \end{array} \end{equation} for any $\alpha:y\longrightarrow z$ in $\mathscr X$. Since $\phi_y:\mathscr N_y\longrightarrow \mathscr N'_y$ is an epimorphism, it follows that $\mathscr N'$ also satisfies the condition in \eqref{eq8.1}. This gives a morphism $W_1\longrightarrow Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$. It may be verified that this is inverse to the morphism $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)\longrightarrow W_1$ in Lemma \ref{Lem8.1}, which proves the result. \end{proof} We will now give conditions for the functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ to be separable. Since $\mathscr G$ has a left adjoint, it follows (see \cite[Theorem 1.2]{Raf}) that $\mathscr G$ is separable if and only if there exists a natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ such that $\nu\circ \omega=1_{Mod-\mathscr R}$, where $\nu$ is the counit of the adjunction. \begin{Thm}\label{T8.3} Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ is separable if and only if there exists $\eta\in W_1$ such that \begin{equation}\label{cond8.3} \begin{CD} id=(id\otimes \varepsilon_C)\circ \eta_x(s,r):\mathscr R_x(s,r)@>\eta_x(s,r)>> \mathscr R_x(s,r)\otimes C@>(id\otimes \varepsilon_C)>> \mathscr R_x(s,r)\end{CD} \end{equation} for each $x\in \mathscr X$ and $s$, $r\in \mathscr R_x$. \end{Thm} \begin{proof} First, we suppose that $\mathscr G$ is separable, i.e., there exists a natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ such that $\nu\circ \omega=1_{Mod-\mathscr R}$. Using Proposition \ref{Pro8.2}, we consider $\eta\in W_1$ corresponding to $\omega$. 
By definition, the counit $\nu$ of the adjunction $(\mathscr F,\mathscr G)$ is described as follows: for any $\mathscr N\in Mod-\mathscr R$, we have \begin{equation} \nu(\mathscr N)_x(s):\mathscr N_x(s)\otimes C\longrightarrow \mathscr N_x(s) \qquad n\otimes c\mapsto n\varepsilon_C(c) \end{equation} for each $x\in \mathscr X$, $s\in \mathscr R_x$. We choose $x\in\mathscr X$, $r\in \mathscr R_x$ and set $\mathscr N=ex_x(H^x_r)$. Since $\nu\circ \omega=1_{Mod-\mathscr R}$, it now follows from \eqref{eqr8.6} that \begin{equation}\label{cond8.13} id=\nu(ex_x(H^x_r))_x(s)\circ \omega(ex_x(H^x_r))_x(s)=(id\otimes \varepsilon_C)\circ \omega_x(H^x_r)(s)=(id\otimes \varepsilon_C)\circ \eta_x(s,r) \end{equation} Conversely, suppose that we have $\eta\in W_1$ such that the condition in \eqref{cond8.3} is satisfied. Using the isomorphism in Proposition \ref{Pro8.2}, we obtain the natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ corresponding to $\eta$. Then, it is clear from \eqref{cond8.13} that $\nu(\mathscr N)\circ \omega(\mathscr N)=id$ for $\mathscr N=ex_x(H^x_r)$. Since $\{\mbox{$ex_x(H_r^x)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$}\}$ is a set of generators for $Mod-\mathscr R$, it follows that for any $\mathscr N'\in Mod-\mathscr R$, there is an epimorphism $\phi:\mathscr N\longrightarrow \mathscr N'$ such that $\nu(\mathscr N)\circ \omega(\mathscr N)=id$. We now consider the commutative diagram \begin{equation}\label{cd8.14} \begin{CD} \mathscr N @>\omega(\mathscr N)>> \mathscr F\mathscr G(\mathscr N) @>\nu(\mathscr N)>> \mathscr N \\ @V\phi VV @V\mathscr F\mathscr G(\phi) VV @VV\phi V\\ \mathscr N' @>\omega(\mathscr N')>> \mathscr F\mathscr G(\mathscr N') @>\nu(\mathscr N')>> \mathscr N' \\ \end{CD} \end{equation} Since the upper horizontal composition in \eqref{cd8.14} is the identity and $\phi$ is an epimorphism, it follows that $\nu(\mathscr N')\circ \omega(\mathscr N')=id$. This proves the result. \end{proof} \section{$(\mathscr F,\mathscr G)$ as a Frobenius pair} In Sections 7 and 8, we have given conditions for the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ and its right adjoint $\mathscr G:Mod-\mathscr R \longrightarrow Mod^C-\mathscr R$ to be separable. In this section, we will give necessary and sufficient conditions for $(\mathscr F,\mathscr G)$ to be a Frobenius pair, i.e., $\mathscr G$ is both a right and a left adjoint of $\mathscr F$. First, we note that it follows from the characterization of Frobenius pairs (see for instance, \cite[$\S$ 1]{uni}) that $(\mathscr F,\mathscr G)$ is a Frobenius pair if and only if there exist $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F \mathscr G)$ such that \begin{equation}\label{eq9.1} \mathscr F(\upsilon(\mathscr M))\circ\omega(\mathscr F(\mathscr M))= id_{\mathscr F(\mathscr M)}\qquad \upsilon(\mathscr G(\mathscr N))\circ \mathscr G(\omega(\mathscr N))=id_{\mathscr G(\mathscr N)} \end{equation} for any $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N\in Mod-\mathscr R$. 
Equivalently, for each $x\in \mathscr X$, we must have \begin{equation}\label{eq9.15} \begin{array}{c} (\mathscr F(\upsilon(\mathscr M)))_x\textrm{ }\circ\textrm{ }\omega(\mathscr F(\mathscr M))_x= \mathscr F_x(\upsilon_x(\mathscr M_x))\textrm{ }\circ\textrm{ }\omega_x(\mathscr F_x(\mathscr M_x))= id_{\mathscr F_x(\mathscr M_x)}\\ \upsilon(\mathscr G(\mathscr N))_x\textrm{ }\circ\textrm{ } \mathscr G(\omega(\mathscr N))_x=\upsilon_x(\mathscr G_x(\mathscr N_x))\textrm{ }\circ\textrm{ }\mathscr G_x(\omega_x(\mathscr N_x))=id_{\mathscr G_x(\mathscr N_x)} \\ \end{array} \end{equation} for any $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N\in Mod-\mathscr R$. \begin{Thm}\label{T9.1} Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Let $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ be the forgetful functor and $\mathscr G:Mod-\mathscr R \longrightarrow Mod^C-\mathscr R$ its right adjoint. Then, $(\mathscr F,\mathscr G)$ is a Frobenius pair if and only if there exist $\theta\in V_1$ and $\eta\in W_1$ such that \begin{equation}\label{eq9.2} \varepsilon_C(d)f=\sum \widehat{f}\circ \theta_x(r)(c_f\otimes d)\qquad \varepsilon_C(d)f=\sum \widehat{f_{\psi_x}} \circ \theta_x(r)(d^{\psi_x}\otimes c_f) \end{equation} for every $x\in \mathscr X$, $r\in \mathscr R_x$, $f\in \mathscr R_x(r,s)$ and $d\in C$, where $\eta_x(r,s)(f)=\widehat{f}\otimes c_f$. \end{Thm} \begin{proof} We suppose there exist $\theta\in V_1$ and $\eta\in W_1$ satisfying \eqref{eq9.2} and consider $\mathscr M\in Mod^C-\mathscr R$, $\mathscr N\in Mod-\mathscr R$. Using the isomorphisms in Proposition \ref{P7.6} and Proposition \ref{Pro8.2}, we obtain $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F \mathscr G)$ corresponding to $\theta$ and $\eta$ respectively. For fixed $x\in \mathscr X$, it follows that $\theta_x=\{\theta_x(r):C\otimes C \longrightarrow \mathscr R_x(r,r)\}_{r\in \mathscr R_x}$ and the $\mathscr R_x$-bimodule morphism $\eta_x\in Nat(H^x,H^x\otimes C)$ satisfy the conditions in \cite[Theorem 3.14]{BBR}. Hence, we have \begin{equation}\label{eq9.4} \mathscr F_x(\upsilon_x(\mathcal M))\textrm{ }\circ\textrm{ }\omega_x(\mathscr F_x(\mathcal M))= id_{\mathscr F_x(\mathcal M)}\qquad \upsilon_x(\mathscr G_x(\mathcal N))\textrm{ }\circ\textrm{ }\mathscr G_x(\omega_x(\mathcal N))=id_{\mathscr G_x(\mathcal N)} \end{equation} for any $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$ and $\mathcal N\in \mathbf M_{\mathscr R_x}$. In particular, \eqref{eq9.15} holds for $\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)$ and $\mathscr N_x\in \mathbf M_{\mathscr R_x}$. Conversely, suppose that $(\mathscr F,\mathscr G)$ is a Frobenius pair. Then, there exist $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F \mathscr G)$ satisfying \eqref{eq9.15} for each $x\in \mathscr X$. Again using the isomorphisms in Proposition \ref{P7.6} and Proposition \ref{Pro8.2}, we obtain corresponding $\theta\in V_1$ and $\eta\in W_1$. We now consider $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$ and $\mathcal N\in \mathbf M_{\mathscr R_x}$. 
Applying \eqref{eq9.15} with $\mathscr M=ex^C_x(\mathcal M)$ and $\mathscr N=ex_x(\mathcal N)$, we have \begin{equation} \mathscr F_x(\upsilon_x(\mathcal M))\textrm{ }\circ\textrm{ }\omega_x(\mathscr F_x(\mathcal M))= id_{\mathscr F_x(\mathcal M)}\qquad \upsilon_x(\mathscr G_x(\mathcal N))\textrm{ }\circ\textrm{ }\mathscr G_x(\omega_x(\mathcal N))=id_{\mathscr G_x(\mathcal N)} \end{equation} It now follows from \cite[Theorem 3.14]{BBR} that $\theta_x=\{\theta_x(r):C\otimes C \longrightarrow \mathscr R_x(r,r)\}_{r\in \mathscr R_x}$ and the $\mathscr R_x$-bimodule morphism $\eta_x\in Nat(H^x,H^x\otimes C)$ satisfy \eqref{eq9.2}. This proves the result. \end{proof} \begin{cor}\label{C9.2} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Then, for each $x\in \mathscr X$, $(\mathscr F_x,\mathscr G_x)$ is a Frobenius pair of adjoint functors. \end{cor} \begin{proof} This is immediate from \eqref{eq9.4}. \end{proof} We consider $\alpha:x\longrightarrow y$ in $\mathscr X$. In \eqref{cd7.1}, we observed directly that the functors $\{\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}\}_{x\in \mathscr X}$ commute with both $\alpha^\ast$ and $\alpha_\ast$, while the functors $\{\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ commute only with $\alpha_\ast$. We will now give a sufficient condition for the functors $\{\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ to commute with $\alpha^\ast$. \begin{lem}\label{L9.3} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Then, for any $\alpha:x\longrightarrow y$ in $\mathscr X$, we have a commutative diagram \begin{equation}\label{eq9.6} \begin{CD} \mathbf M _{\mathscr R_x}@>\alpha^\ast >> \mathbf M_{\mathscr R_y}\\ @V\mathscr G_xVV @VV\mathscr G_yV \\ \mathbf M^C_{\mathscr R_x}(\psi_x)@>\alpha^\ast >> \mathbf M^C_{\mathscr R_y}(\psi_y)\\ \end{CD} \end{equation} \end{lem} \begin{proof} For $\mathcal M\in \mathbf M _{\mathscr R_x}$, we will show that $\mathscr G_y\alpha^\ast(\mathcal M)=\alpha^\ast\mathscr G_x(\mathcal M)\in \mathbf M^C_{\mathscr R_y}(\psi_y)$. From Corollary \ref{C9.2} we know that each $(\mathscr F_x,\mathscr G_x)$ is a Frobenius pair of adjoint functors. Using this fact and the commutative diagrams in \eqref{cd7.1}, we now have that for any $\mathcal N\in \mathbf M^C_{\mathscr R_y}(\psi_y)$: \begin{equation} \mathbf M^C_{\mathscr R_y}(\psi_y)(\mathscr G_y\alpha^\ast(\mathcal M),\mathcal N)=\mathbf M_{\mathscr R_x}(\mathcal M,\alpha_\ast\mathscr F_y(\mathcal N))=\mathbf M_{\mathscr R_x}(\mathcal M,\mathscr F_x\alpha_\ast(\mathcal N))=\mathbf M^C_{\mathscr R_y}(\psi_y)(\alpha^\ast\mathscr G_x(\mathcal M),\mathcal N) \end{equation} \end{proof} \begin{thm}\label{P9.4} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Suppose that $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ restricts to a functor $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. \end{thm} \begin{proof} For any $\mathscr N\in Cart-\mathscr R$, we claim that $\mathscr G(\mathscr N)\in Mod^C-\mathscr R$ actually lies in $Cart^C-\mathscr R$. 
By definition of $\mathscr G$, we have for any $\alpha:x\longrightarrow y$, a morphism $\mathscr G(\mathscr N)_\alpha=\mathscr G_x(\mathscr N_\alpha):\mathscr G_x(\mathscr N_x)\longrightarrow \mathscr G_x(\alpha_\ast(\mathscr N_y))=\alpha_\ast(\mathscr G_y(\mathscr N_y))$ in $\mathbf M^C_{\mathscr R_x}(\psi_x)$ which corresponds to a morphism $\mathscr G(\mathscr N)^\alpha:\alpha^\ast(\mathscr G_x(\mathscr N_x))\longrightarrow \mathscr G_y(\mathscr N_y)$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$. Since $(\mathscr F,\mathscr G)$ is a Frobenius pair, it follows from Lemma \ref{L9.3} that $\mathscr G_y\alpha^\ast(\mathscr N_x)=\alpha^\ast\mathscr G_x(\mathscr N_x)\in \mathbf M^C_{\mathscr R_y}(\psi_y)$. Since $\mathcal N$ is cartesian, we know that $\alpha^\ast\mathscr N_x$ is isomorphic to $\mathscr N_y$ and hence $\mathscr G(\mathscr N)^\alpha=\mathscr G_y(\mathscr N^\alpha)$ is an isomorphism. \end{proof} \begin{cor}\label{C9.5} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Suppose that $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, $(\mathscr F^c,\mathscr G^c)$ is a Frobenius pair of adjoint functors between $Cart^C-\mathscr R$ and $Cart-\mathscr R$. \end{cor} \begin{proof} From Proposition \ref{P7.8}, we know that $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ restricts to a functor $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$. From Proposition \ref{P9.4}, we know that $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ restricts to a functor $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$ on the full subcategories of cartesian modules. Since $\mathscr G$ is both right and left adjoint to $\mathscr F$, it is clear that $\mathscr G^c$ is both right and left adjoint to $\mathscr F^c$. \end{proof} \section{Constructing entwined representations} In this final section, we will give examples of how to construct entwined representations and describe modules over them. Let $(\mathcal R,C,\psi)$ be an entwining structure. Then, we consider the $K$-linear category $(C,\mathcal R)_\psi$ defined as follows \begin{equation} Ob((C,\mathcal R)_\psi)=Ob(\mathcal R)\qquad (C,\mathcal R)_\psi(s,r):=Hom_K(C,\mathcal R(s,r)) \end{equation} for $s$, $r\in \mathcal R$. The composition in $(C,\mathcal R)_\psi$ is as follows: given $\phi:C\longrightarrow \mathcal R(s,r)$ and $\phi':C\longrightarrow \mathcal R(t,s)$ respectively in $(C,\mathcal R)_\psi(s,r)$ and $(C,\mathcal R)_\psi(t,s)$, we set \begin{equation} \phi\ast\phi': C\longrightarrow \mathcal R(t,r)\qquad c\mapsto \sum \phi(c_2)_\psi\circ \phi'(c_1^\psi) \end{equation} \begin{lem}\label{L99.1} Let $(\mathcal R,C,\psi)$ be an entwining structure. Then, there is a canonical functor $P_\psi: \mathbf M_{\mathcal R}^C(\psi)\longrightarrow \mathbf M_{(C,\mathcal R)_\psi}$. \end{lem} \begin{proof} We consider $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$. We will define $\mathcal N=P_\psi(\mathcal M)\in \mathbf M_{(C,\mathcal R)_\psi}$ by setting $\mathcal N(r):=\mathcal M(r)$ for each $r\in (C,\mathcal R)$. Given $\phi:C\longrightarrow \mathcal R(s,r)$ in $(C,\mathcal R)_\psi(s,r)$, we define $m\ast \phi\in \mathcal N(s)=\mathcal M(s)$ by setting $m\ast \phi= \sum m_0\phi(m_1)$. Here, $\rho_{\mathcal M(r)}(m)=\sum m_0\otimes m_1$ is the right $C$-comodule structure on $\mathcal M(r)$. 
For $\phi':C\longrightarrow \mathcal R(t,s)$ in $(C,\mathcal R)_\psi(t,s)$, we now have \begin{equation} \begin{array}{ll} m\ast (\phi\ast\phi') & = \sum m_0(\phi\ast \phi')(m_1) =\sum m_0\phi(m_{12})_\psi\phi'(m_{11}^\psi)=\sum m_0\phi(m_{2})_\psi\phi'(m_{1}^\psi) \\ &\\ & \\ (m\ast \phi)\ast \phi' & =\sum (m\ast \phi)_0\phi'((m\ast \phi)_1 ) =\sum (m_0 \phi(m_1))_0\phi'((m_0 \phi(m_1))_1 ) \\ &=\sum (m_{00}\phi(m_1)_\psi)\phi'(m_{01}^\psi)=\sum m_0\phi(m_{2})_\psi\phi'(m_{1}^\psi) \\ \end{array} \end{equation} This proves the result. \end{proof} \begin{lem}\label{L99.2} Let $(\alpha,id):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a morphism of entwining structures. Then, $P_{\psi}\circ (\alpha,id)_\ast=\alpha_\ast \circ P_{\psi'}:\mathbf M_{\mathcal S}^C(\psi')\longrightarrow \mathbf M_{(C,\mathcal R)}$. \end{lem} \begin{proof} We begin with $\mathcal N\in \mathbf M_{\mathcal S}^C(\psi')$. From the construction in Lemma \ref{L99.1}, it is clear that for any $r\in (C,\mathcal R)_\psi$, we have $(P_{\psi}\circ (\alpha,id)_\ast)(\mathcal N)(r)=(\alpha_\ast \circ P_{\psi'})(\mathcal N)(r)=\mathcal N(\alpha(r))$. We set $\mathcal N_1:=(P_{\psi}\circ (\alpha,id)_\ast)(\mathcal N)$ and $\mathcal N_2:=(\alpha_\ast \circ P_{\psi'})(\mathcal N)$ and consider $n\in \mathcal N_1(r)=\mathcal N_2(r)$ as well as $\phi:C\longrightarrow \mathcal R(s,r)$ in $(C,\mathcal R)_\psi(s,r)$. Then, in both $\mathcal N_1(s)$ and $\mathcal N_2(s)$, we have $n\ast \phi=\sum n_0 \alpha(\phi(n_1))$. This proves the result. \end{proof} Now let $\mathscr X$ be a small category and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ an entwined $C$-representation. By replacing each entwining structure $(\mathscr R_x,C,\psi_x)$ with the category $(C,\mathscr R_x)_{\psi_x}$, we obtain an induced representation $(C,\mathscr R)_\psi:\mathscr X\longrightarrow \mathscr Lin$ (we recall that $\mathscr Lin$ is the category of small $K$-linear categories). \begin{thm}\label{P99.3} There is a canonical functor $Mod^C-\mathscr R\longrightarrow Mod-(C,\mathscr R)_\psi$. \end{thm} \begin{proof} By definition, an object $\mathscr M\in Mod^C-\mathscr R$ consists of a collection $\{\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ and for each $\alpha:x\longrightarrow y$ in $\mathscr X$, a morphism $\mathscr M_\alpha:\mathscr M_x\longrightarrow \alpha_\ast\mathscr M_y$ in $\mathbf M^C_{\mathscr R_x}(\psi_x)$. Applying the functors $P_{\psi_x}:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{(C,\mathscr R_x)_{\psi_x}}$ for $x\in \mathscr X$ and using Lemma \ref{L99.2}, the result is now clear. \end{proof} Now let $C$ be finitely generated as a $K$-vector space and let $C^\ast$ denote its $K$-linear dual. Then, the canonical map $C^\ast\otimes V\longrightarrow Hom_K(C,V)$ is an isomorphism for any vector space $V$. For an entwining structure $(\mathcal R,C,\psi)$, the category $(C,\mathcal R)_\psi$ can now be rewritten as $(C^\ast\otimes \mathcal R)_\psi$ where $(C^\ast\otimes \mathcal R)_\psi (s,r)=C^\ast\otimes \mathcal R(s,r)$ for $s$, $r\in Ob((C^\ast\otimes \mathcal R)_\psi)=Ob(\mathcal R)$. Given $c^\ast\otimes f\in C^\ast\otimes \mathcal R(s,r)$ and $d^\ast\otimes g\in C^\ast\otimes \mathcal R(t,s)$, the composition in $(C^\ast\otimes \mathcal R)_\psi$ is expressed as \begin{equation}\label{eq99.47} (c^\ast\otimes f)\circ (d^\ast \otimes g):C\longrightarrow \mathcal R(t,r)\qquad x\mapsto \sum c^\ast(x_2)d^\ast (x_1^\psi)(f_\psi\circ g) \end{equation} for $x\in C$. 
It is important to note that when $f$ and $g$ are identity maps, the composition in \eqref{eq99.47} simplifies to \begin{equation}\label{eq99.48} (c^\ast\otimes id_r)\circ (d^\ast \otimes id_r):C\longrightarrow \mathcal R(r,r)\qquad x\mapsto \sum c^\ast(x_2)d^\ast (x_1)id_r \end{equation} In other words, for the canonical morphism $C^\ast\longrightarrow C^\ast\otimes\mathcal R(r,r)$ given by $c^\ast\mapsto c^\ast\otimes id_r$ to be a morphism of algebras, we must use the opposite of the usual convolution product on $C^\ast$. Similarly, given an entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ with $C$ finitely generated as a $K$-vector space, we can replace the induced representation $(C,\mathscr R)_\psi:\mathscr X\longrightarrow \mathscr Lin$ by $(C^\ast\otimes\mathscr R)_\psi$. Then, $Mod-(C,\mathscr R)_\psi$ may be replaced by $Mod-(C^\ast\otimes \mathscr R)_\psi$. \begin{thm}\label{P99.4} Let $\mathscr X$ be a small category and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ an entwined $C$-representation. Suppose that $C$ is finitely generated as a $K$-vector space. Then, the categories $Mod^C-\mathscr R$ and $Mod-(C^\ast\otimes \mathscr R)_\psi$ are equivalent. \end{thm} \begin{proof} By Proposition \ref{P99.3}, we already know that any object in $Mod^C-\mathscr R$ may be equipped with a $(C^\ast\otimes\mathscr R)_\psi$-module structure. For the converse, we consider some $\mathscr M\in Mod-(C^\ast\otimes\mathscr R)_\psi$ and choose some $x\in \mathscr X$. We make $\mathscr M_x$ into an $\mathscr R_x$-module as follows: for $f\in \mathscr R_x(s,r)$ and $m\in \mathscr M_x(r)$, we set $mf\in \mathscr M_x(s)$ to be $mf:=m(\varepsilon_C\otimes f)$. By considering the canonical morphism $C^\ast\longrightarrow C^\ast\otimes\mathscr R_x(r,r)$, it follows that the right $(C^\ast\otimes \mathscr R_x)_{\psi_x}(r,r)$-module $\mathscr M_x(r)$ carries a right $C^\ast$-module structure. As observed in \eqref{eq99.48}, here the product on $C^\ast$ happens to be the opposite of the usual convolution product. Hence, the right $C^\ast$-module structure on $\mathscr M_x(r)$ leads to a left $C^\ast$-module structure on $\mathscr M_x(r)$ when $C^\ast$ is equipped with the usual product. Since $C$ is finite dimensional, it is well known (see, for instance, \cite[$\S$ 2.2]{book3}) that we have an induced right $C$-comodule structure on $\mathscr M_x(r)$. It may be verified by direct computation that $\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)$. Finally, for a morphism $\alpha: x\longrightarrow y$ in $\mathscr X$, the map $\mathscr M_\alpha:\mathscr M_x \longrightarrow \alpha_\ast\mathscr M_y$ in $\mathbf M_{(C^\ast\otimes \mathscr R_x)_{\psi_x}}$ induces a morphism in $\mathbf M^C_{\mathscr R_x}(\psi_x)$. Hence, $\mathscr M\in Mod-(C^\ast\otimes\mathscr R)_\psi$ may be treated as an object of $Mod^C-\mathscr R$. It may be directly verified that this structure is the inverse of the one defined by Proposition \ref{P99.3}. \end{proof} Finally, we will give an example of constructing entwined representations starting from $B$-comodule categories, where $B$ is a bialgebra. So let $B$ be a bialgebra over $K$, having multiplication $\mu_B$, unit map $u_B$ as well as comultiplication $\Delta_B$ and counit map $\varepsilon_B$. Then, the notion of a ``$B$-comodule category,'' which behaves like a $B$-comodule algebra with many objects, is implicit in the literature. \begin{defn}\label{D100.1} Let $B$ be a $K$-bialgebra.
We will say that a small $K$-linear category $\mathcal R$ is a right $B$-comodule category if it satisfies the following conditions: (i) For any $r$, $s\in \mathcal R$, there is a coaction $\rho=\rho(r,s):\mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes B$, $f\mapsto \sum f_0\otimes f_1$, making $\mathcal R(r,s)$ a right $B$-comodule. Further, $\rho(id_r)=id_r\otimes 1_B$ for each $r\in \mathcal R$. (ii) For $f\in \mathcal R(r,s)$ and $g\in \mathcal R(s,t)$, we have \begin{equation}\label{eq100.1} \rho(g\circ f)=(g\circ f)_0\otimes (g\circ f)_1= (g_0\circ f_0)\otimes (g_1f_1) \end{equation} We have suppressed the summation signs in \eqref{eq100.1}. We will always refer to a right $B$-comodule category as a co-$B$-category. We will only consider those $K$-linear functors between co-$B$-categories whose action on morphisms is $B$-colinear. Together, the co-$B$-categories form a new category, which we will denote by $Cat^B$. \end{defn} \begin{lem}\label{L100.2} Let $B$ be a bialgebra over $K$. Let $\mathcal R$ be a co-$B$-category and let $C$ be a right $B$-module coalgebra. The collection $\psi:=\psi_{\mathcal R}=\{\psi_{rs}:C\otimes \mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes C\}_{r,s \in \mathcal R}$ defined by setting \begin{equation} \psi_{rs}(c\otimes f)=f_\psi\otimes c^\psi=f_0\otimes cf_1 \qquad f\in \mathcal R(r,s), \textrm{ }c\in C \end{equation} makes $(\mathcal R,C,\psi)$ an entwining structure. \end{lem} \begin{proof} We consider morphisms $f$, $g$ in $\mathcal R$ so that $gf$ is defined. Then, for $c\in C$, we see that \begin{equation} \begin{array}{c} (gf)_\psi\otimes c^\psi=(gf)_0\otimes c(gf)_1=(g_0f_0)\otimes c(g_1f_1)=g_\psi f_\psi\otimes c^{\psi\psi}\\ f_\psi\otimes \Delta_C(c^\psi)=f_0\otimes \Delta_C(cf_1)=f_0\otimes c_1f_1\otimes c_2f_2=f_{00}\otimes c_1f_{01}\otimes c_2f_1=f_{\psi\psi}\otimes c_1^\psi\otimes c_2^\psi\\ \varepsilon_C(c^\psi)f_\psi=\varepsilon_C(c)\varepsilon_B(f_1)f_0=\varepsilon_C(c)f\qquad \psi(c\otimes id_r)=id_r\otimes c1_B\\ \end{array} \end{equation} This proves the result. \end{proof} \begin{thm}\label{P100.3} Let $B$ be a $K$-bialgebra and let $C$ be a right $B$-module coalgebra. If $\mathscr X$ is a small category, a functor $\mathscr R':\mathscr X\longrightarrow Cat^B$ induces an entwined $C$-representation of $\mathscr X$ \begin{equation} \mathscr R:\mathscr X\longrightarrow \mathscr Ent_C\qquad x\mapsto (\mathscr R_x,C,\psi_x):=(\mathscr R'_x,C,\psi_{\mathscr R'_x}) \end{equation} \end{thm} \begin{proof} It may be easily verified that the entwining structures constructed in Lemma \ref{L100.2} are functorial with respect to $B$-colinear functors between $B$-comodule categories. This proves the result. \end{proof} We now consider a representation $\mathscr R':\mathscr X\longrightarrow Cat^B$ as in Proposition \ref{P100.3} and the corresponding entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. By considering the underlying $K$-linear category of any co-$B$-category, we obtain an induced representation that we continue to denote by $\mathscr R':\mathscr X\longrightarrow Cat^B\longrightarrow \mathscr Lin$. We conclude by showing how entwined modules over $\mathscr R$ are related to modules over $\mathscr R'$ in the sense of Estrada and Virili \cite{EV}. \begin{thm} \label{P100.4} Let $B$ be a $K$-bialgebra and let $C$ be a right $B$-module coalgebra. 
Let $\mathscr X$ be a small category, $\mathscr R':\mathscr X\longrightarrow Cat^B$ a functor and let $\mathscr R:\mathscr X\longrightarrow\mathscr Ent_C$ be the corresponding entwined $C$-representation. Then, a module $\mathscr M$ over $\mathscr R$ consists of the following data: (1) A module $\mathscr M$ over the induced representation $\mathscr R':\mathscr X\longrightarrow Cat^B\longrightarrow \mathscr Lin$. (2) For each $x\in \mathscr X$ and $r\in \mathscr R_x$ a right $C$-comodule structure $\rho_r^x:\mathscr M_x(r)\longrightarrow \mathscr M_x(r)\otimes C$ such that \begin{equation*} \rho_{s}^x(mf)= \big(mf\big)_0 \otimes \big(mf\big)_{1}=m_0f_0\otimes {m_1}f_1 \end{equation*} for every $f \in \mathscr{R}_x(s,r)$ and $m \in \mathscr{M}_x(r).$ (3) For each morphism $\alpha:x\longrightarrow y$ in $\mathscr X$, the morphism $\mathscr M_\alpha(r):\mathscr M_x(r)\longrightarrow (\alpha_\ast\mathscr M_y)(r)$ is $C$-colinear for each $r\in \mathscr R_x$. \end{thm} \begin{proof} We consider a datum as described by the three conditions above. The conditions (1) and (2) ensure that each $\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)$. For each $x\in \mathscr X$, there is a forgetful functor $\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}$. Let $\alpha:x\longrightarrow y$ be a morphism in $\mathscr X$. From \eqref{cd7.1}, we know that $(\alpha,id)_\ast: \mathbf M_{\mathscr R_y}^C(\psi_y) \longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)$ and $\alpha_\ast: \mathbf M_{\mathscr R_y} \longrightarrow \mathbf M_{\mathscr R_x}$ are well behaved with respect to these forgetful functors. For each $r\in \mathscr R_x$, if $\mathscr M_\alpha(r):\mathscr M_x(r)\longrightarrow (\alpha_\ast\mathscr M_y)(r)$ is also $C$-colinear, it follows that $\mathscr M_\alpha$ is a morphism in $\mathbf M_{\mathscr R_x}^C(\psi_x)$. The result is now clear. \end{proof} \small \begin{bibdiv} \begin{biblist} \bib{Abu}{article}{ author={Abuhlail, J. Y.}, title={Dual entwining structures and dual entwined modules}, journal={Algebr. Represent. Theory}, volume={8}, date={2005}, number={2}, pages={275--295}, } \bib{AR}{book}{ author={Ad\'{a}mek, J.}, author={Rosick\'{y}, J.}, title={Locally presentable and accessible categories}, series={London Mathematical Society Lecture Note Series}, volume={189}, publisher={Cambridge University Press, Cambridge}, date={1994}, pages={xiv+316}, } \bib{BBR0}{article}{ author={Balodi, M.}, author={Banerjee, A.}, author={Ray, S.}, title={Cohomology of modules over $H$-categories and co-$H$-categories}, journal={Canad. J. Math.}, volume={72}, date={2020}, number={5}, pages={1352--1385}, } \bib{BBR}{article}{ author={Balodi, M.}, author={Banerjee, A.}, author={Ray, S.}, title={Entwined modules over linear categories and Galois extensions}, journal={Israel J. Math.}, volume={241}, date={2021}, number={2}, pages={623--692}, } \bib{Brx1}{article}{ author={Brzezi\'{n}ski, T.}, title={On modules associated to coalgebra Galois extensions}, journal={J. Algebra}, volume={215}, date={1999}, number={1}, pages={290--317}, } \bib{Brx5}{article}{ author={Brzezi\'{n}ski, T.}, title={Frobenius properties and Maschke-type theorems for entwined modules}, journal={Proc. Amer. Math. Soc.}, volume={128}, date={2000}, number={8}, pages={2261--2270}, } \bib{Brx2}{article}{ author={Brzezi\'{n}ski, T.}, title={The cohomology structure of an algebra entwined with a coalgebra}, journal={J. 
Algebra}, volume={235}, date={2001}, number={1}, pages={176--202}, } \bib{Brx3}{article}{ author={Brzezi\'{n}ski, T.}, title={The structure of corings: induction functors, Maschke-type theorem, and Frobenius and Galois-type properties}, journal={Algebr. Represent. Theory}, volume={5}, date={2002}, number={4}, pages={389--410}, } \bib{BrMj}{article}{ author={Brzezi\'{n}ski, T.}, author={Majid, S.}, title={Coalgebra bundles}, journal={Comm. Math. Phys.}, volume={191}, date={1998}, number={2}, pages={467--492}, } \bib{uni}{article}{ author={Brzezi\'{n}ski, T.}, author={Caenepeel, S.}, author={Militaru, G.}, author={Zhu, S.}, title={Frobenius and Maschke type theorems for Doi-Hopf modules and entwined modules revisited: a unified approach}, conference={ title={Ring theory and algebraic geometry}, address={Le\'{o}n}, date={1999}, }, book={ series={Lecture Notes in Pure and Appl. Math.}, volume={221}, publisher={Dekker, New York}, }, date={2001}, pages={1--31}, } \bib{Wibook}{book}{ author={Brzezinski, T.}, author={Wisbauer, R.}, title={Corings and comodules}, series={London Mathematical Society Lecture Note Series}, volume={309}, publisher={Cambridge University Press, Cambridge}, date={2003}, pages={xii+476}, } \bib{BuTa2}{article}{ author={Bulacu, D.}, author={Caenepeel, S.}, author={Torrecillas, B.}, title={Frobenius and separable functors for the category of entwined modules over cowreaths, II: applications}, journal={J. Algebra}, volume={515}, date={2018}, pages={236--277}, } \bib{BuTa1}{article}{ author={Bulacu, D.}, author={Caenepeel, S.}, author={Torrecillas, B.} title={Frobenius and Separable Functors for the Category of Entwined Modules over Cowreaths, I: General Theory}, journal={Algebra and Representation Theory}, volume={23}, date={2020}, pages={1119-1157}, } \bib{CaDe}{article}{ author={Caenepeel, S.}, author={De Groot, E.}, title={Modules over weak entwining structures}, conference={ title={New trends in Hopf algebra theory}, address={La Falda}, date={1999}, }, book={ series={Contemp. Math.}, volume={267}, publisher={Amer. Math. Soc., Providence, RI}, }, date={2000}, pages={31--54}, } \bib{X13}{article}{ author={Caenepeel, S.}, author={Militaru, G.}, author={Ion, Bogdan}, author={Zhu, Shenglin}, title={Separable functors for the category of Doi-Hopf modules, applications}, journal={Adv. Math.}, volume={145}, date={1999}, number={2}, pages={239--290}, } \bib{X14}{article}{ author={Caenepeel, S.}, author={Militaru, G.}, author={Zhu, Shenglin}, title={A Maschke type theorem for Doi-Hopf modules and applications}, journal={J. Algebra}, volume={187}, date={1997}, number={2}, pages={388--412}, issn={0021-8693}, review={\MR{1430990}}, doi={10.1006/jabr.1996.6794}, } \bib{X15}{article}{ author={Caenepeel, S.}, author={Militaru, G.}, author={Zhu, S.}, title={Doi-Hopf modules, Yetter-Drinfel\cprime d modules and Frobenius type properties}, journal={Trans. Amer. Math. Soc.}, volume={349}, date={1997}, number={11}, pages={4311--4342}, } \bib{book3}{book}{ author={D\u{a}sc\u{a}lescu, S.}, author={N\u{a}st\u{a}sescu, C.}, author={Raianu, \c{S}.}, title={Hopf algebras}, series={Monographs and Textbooks in Pure and Applied Mathematics}, volume={235}, note={An introduction}, publisher={Marcel Dekker, Inc., New York}, date={2001}, pages={x+401}, } \bib{EV}{article}{ author={Estrada, S.}, author={Virili, S.}, title={Cartesian modules over representations of small categories}, journal={Adv. 
Math.}, volume={310}, date={2017}, pages={557--609}, } \bib{Tohoku}{article}{ author={Grothendieck, A.}, title={Sur quelques points d'alg\`ebre homologique}, language={French}, journal={Tohoku Math. J. (2)}, volume={9}, date={1957}, pages={119--221}, } \bib{HP}{article}{ author={Hobst, D.}, author={Pareigis, B.}, title={Double quantum groups}, journal={J. Algebra}, volume={242}, date={2001}, number={2}, pages={460--494}, } \bib{Jia}{article}{ author={Jia, L.}, title={The sovereign structure on categories of entwined modules}, journal={J. Pure Appl. Algebra}, volume={221}, date={2017}, number={4}, pages={867--874}, } \bib{KSch}{book}{ author={Kashiwara, M.}, author={Schapira, P.}, title={Categories and sheaves}, series={Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, volume={332}, publisher={Springer-Verlag, Berlin}, date={2006}, pages={x+497}, } \bib{Mit}{article}{ author={Mitchell, B.}, title={Rings with several objects}, journal={Advances in Math.}, volume={8}, date={1972}, pages={1--161}, } \bib{MF}{book}{ author={Mumford, D.}, author={Fogarty, J.}, title={Geometric invariant theory}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas]}, volume={34}, edition={2}, publisher={Springer-Verlag, Berlin}, date={1982}, pages={xii+220}, } \bib{NBO}{article}{ author={N\u{a}st\u{a}sescu, C.}, author={Van den Bergh, M.}, author={Van Oystaeyen, F.}, title={Separable functors applied to graded rings}, journal={J. Algebra}, volume={123}, date={1989}, number={2}, pages={397--413}, } \bib{Raf}{article}{ author={Rafael, M. D.}, title={Separable functors revisited}, journal={Comm. Algebra}, volume={18}, date={1990}, number={5}, pages={1445--1459}, } \bib{Schb}{article}{ author={Schauenburg, Peter}, title={Doi-Koppinen Hopf modules versus entwined modules}, journal={New York J. Math.}, volume={6}, date={2000}, pages={325--329}, } \bib{Schn}{article}{ author={Schneider, H.-J}, title={Principal homogeneous spaces for arbitrary Hopf algebras}, journal={Israel J. Math.}, volume={72}, date={1990}, number={1-2}, pages={167--195}, } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \title{Entanglement-assisted detection of fading targets via correlation-to-coherence conversion} \author{Xin Chen} \affiliation{ Department of Electrical and Computer Engineering, University of Arizona, Tucson, Arizona 85721, USA } \author{Quntao Zhuang} \email{[email protected]} \affiliation{ Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, USA } \affiliation{ Department of Electrical and Computer Engineering, University of Arizona, Tucson, Arizona 85721, USA } \affiliation{ James C. Wyant College of Optical Sciences, University of Arizona, Tucson, Arizona 85721, USA } \begin{abstract} Quantum illumination utilizes an entanglement-enhanced sensing system to outperform classical illumination in detecting a suspected target, despite the entanglement-breaking loss and noise. However, a practical and optimal receiver design that fulfils the quantum advantage has been a long-standing open problem. Recently, [arXiv:2207.06609] proposed the correlation-to-displacement (``C$\veryshortrightarrow$D'') conversion module to enable an optimal receiver design that greatly reduces the complexity of the previously known optimal receiver [Phys. Rev. Lett. {\bf 118}, 040801 (2017)]. There, the analyses of the conversion module assume an ideal target with a known reflectivity and a fixed return phase. In practical applications, however, targets often induce a random return phase; moreover, their reflectivities can fluctuate according to a Rayleigh distribution. In this work, we extend the analyses of the C$\veryshortrightarrow$D module to realistic targets and show that the entanglement advantage is maintained, albeit reduced. In particular, the conversion module allows exact and efficient performance evaluation despite the non-Gaussian nature of the quantum channel involved. \end{abstract} \date{\today} \maketitle \section{Introduction} Quantum entanglement enables a performance boost in a wide range of optical sensing tasks, such as phase sensing~\cite{Escher_2011,gagatsos2017bounding}, target detection and ranging~\cite{Lloyd2008,tan2008quantum,zhuang2017optimum,zhuang2021quantum,zhuang2022ultimate}, loss sensing~\cite{sarovar2006optimal,venzl2007,monras2007,adesso2009,monras2010,monras2011,Nair_2011,nair2016,nair2018}, noise sensing~\cite{pirandola2017ultimate,shi2022ultimate} and gain sensing~\cite{nair2022optimal}. Despite the variety of these applications, the sensing processes can often be modeled as bosonic Gaussian channels~\cite{weedbrook2012gaussian}, which preserve the Gaussian form of input Wigner functions. The Gaussian nature of the quantum channel enables efficient exact evaluation of the sensing precision, especially when the source is also Gaussian~\cite{Pirandola2008,banchi2020quantum}. Moreover, the structure of the Kraus operators of the bosonic Gaussian channel also allows the proof that Gaussian probes are optimal among all possible input states~\cite{Escher_2011,nair2020fundamental,nair2018,nair2022optimal,shi2022ultimate}. Taking target detection as an example, the transceiver-to-receiver path in the presence of a distant target can be modeled as a Gaussian thermal-loss channel with low transmissivity; when the target is absent, the thermal-loss channel degrades to its zero-transmissivity limit.
In a quantum illumination (QI) protocol with the common Gaussian entangled source of two-mode squeezed vacuum, the error probability performance limit can be obtained via the efficiently calculable quantum Chernoff bound (QCB)~\cite{Audenaert2007,Pirandola2008}, which enables the surprising discovery of a six-decibel error-exponent advantage over classical illumination (CI) despite loss and noise~\cite{tan2008quantum}. Things become challenging when non-Gaussian elements are inevitably involved. To begin with, although the channel and source are Gaussian, receivers based on only Gaussian operations (e.g., optical-parametric amplification and phase conjugation) are only able to achieve half of the error-exponent advantage~\cite{Guha2009}. The previously proposed optimal receiver design relies on complex non-Gaussian operations that forbid exact performance evaluations~\cite{zhuang2017optimum,zhuang2017fading}. Moreover, a practical target detection scenario involves fading targets, where the random phase noise and fluctuating reflectivity make the quantum channel non-Gaussian. The non-Gaussian nature of the problem makes it difficult to evaluate entanglement's advantage in detecting fading targets. In this paper, we utilize the recently proposed correlation-to-displacement (``${\rm C}\veryshortrightarrow {\rm D}$'') conversion module~\cite{shi2022} to evaluate entanglement's advantage in a practical QI target detection scenario with fading targets. The conversion module reduces multi-mode correlated-state detection to single-mode coherent-state detection, enabling an optimal receiver design and efficient performance evaluation even when non-Gaussian elements are involved. Our results show that when there is only correlated phase noise across the probes, the error probability still decays exponentially with the number of probes. Entanglement's error-exponent advantage is still six decibels when the signal brightness is extremely small, but degrades as the brightness increases. Such robustness resembles previous findings in the communication case~\cite{zhuang2021quantum-enabled}. In the presence of transmissivity fluctuations of the Rayleigh type, however, the error probability decays only polynomially with the number of probes, and the advantage from entanglement is small, despite being non-zero. \section{Model for fading target detection} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{Fig0.pdf} \caption{Concept of entanglement-assisted target detection. The target surface can be rough, causing fading effects.} \label{sch1} \end{figure} As shown in Fig.~\ref{sch1}, in an entanglement-assisted QI target detection scenario, the probe signal is entangled with an ancilla. The signal is reflected by a stationary target in a highly lossy and noisy environment before being detected. A properly structured receiver is required to measure the received signal and the ancilla in order to boost the sensing precision over CI. In the ideal case of a known phase and a fixed target reflectivity, this process can be modeled as an overall phase-shift thermal-loss channel $\Phi_{\kappa, \theta}$~\cite{weedbrook2012gaussian}, with $\kappa$ being the transmissivity and $\theta$ being the phase shift (as shown in Fig.~\ref{Sch2}).
For an input mode described by the annihilation operator $\hat{a}_{\rm S}$, the received mode is \begin{equation} \hat{a}_{\rm R}=e^{i\theta}\sqrt{\kappa}\hat{a}_{\rm S}+\sqrt{1-\kappa}\hat{a}_{\rm B}, \label{input_output_main} \end{equation} where the mode $\hat{a}_{\rm B}$ is in a thermal state with mean photon number $N_{\rm E}$ to model the noise. To model a realistic setting, we consider a target with a time-independent $P_K(\cdot)$-distributed random reflectivity and $P_\Theta(\cdot)$-distributed random phase shift. This leads to the overall quantum channel \begin{equation} \bar{\Phi}=\int {\rm d\theta}{\rm d\kappa} P_\Theta(\theta)P_K(\kappa) \Phi_{\kappa,\theta}. \end{equation} The target detection hypothesis testing problem is therefore a quantum channel discrimination problem between the channel $\bar{\Phi}$ (fading target present) and a pure noise channel $\Phi_{0,0}$ (target absent). \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{FIG18.pdf} \caption{Schematic illustration of the C$\veryshortrightarrow$D conversion module.} \label{Sch2} \end{figure} To benefit from entanglement in QI, we consider $M$ signal-idler pairs $\{\hat{a}_{S_m},\hat{a}_{I_m}\}_{m=1}^M$, where each pair is in a two-mode squeezed-vacuum (TMSV) state with the wave function \begin{equation} \ket{\phi}_{S_m I_m}=\sum_{n=0}^\infty \sqrt{\frac{N_{{\rm S}}^{n}}{(N_{{\rm S}}+1)^{n+1}}} \ket{n}_{S_m}\ket{n}_{I_m}. \label{eq:state_TMSV} \end{equation} Here $\ket{n}$ is the number state and $N_{{\rm S}}$ is the mean photon number of the signal (or idler) mode. When the target is present, after the channel $\bar{\Phi}$, the density operator of the return and idler field is \begin{equation} \hat{\rho}_{\rm RI}=\int {\rm d\theta}{\rm d\kappa} P_\Theta(\theta)P_K(\kappa)\hat{\rho}_{\rm RI}(\theta,\kappa). \end{equation} Here the state $\hat{\rho}_{\rm RI}(\theta,\kappa)$ describes the $M$ return-idler pairs $\{\hat{a}_{R_m},\hat{a}_{I_m}\}_{m=1}^M$ from the channel $\Phi_{\kappa,\theta}$, each maintaining a phase-sensitive cross-correlation $ \expval{\hat{a}_{R_m} \hat{a}_{I_m}}=e^{i \theta} C_p $ with the amplitude $C_p\equiv \sqrt{\kappa N_{{\rm S}} \left( N_{{\rm S}}+1\right)}$. \section{Analyses of the correlation-to-displacement conversion module} As shown in Fig.~\ref{Sch2}, in a ${\rm C}\veryshortrightarrow {\rm D}$ conversion module~\cite{shi2022}, we perform heterodyne measurement on each return mode and retain the idlers for further information processing. In general, the measurement can be described by positive operator-valued measure (POVM) elements $\hat{E}^\dagger_{{\bm x}}\hat{E}_{{\bm x}}$ satisfying the completeness relation $\int {d}^{2M}{{\bm x}} \,\hat{E}^\dagger_{{\bm x}}\hat{E}_{{\bm x}} =\hat{I}$, where the overall measurement result across the $M$ returns is ${\bm x}=(x_1, \cdots, x_M)^T$, with each $x_m$ complex. The corresponding probability of obtaining the measurement result $\bm X=\bm x$ is given by \begin{align} P_{\bm X}(\bm x)&={\rm Tr}(\hat{\rho}_{\rm RI}\hat{E}^\dagger_{\bm x}\hat{E}_{\bm x} ) \\ &=\int {\rm d\theta}{\rm d\kappa} P_\Theta(\theta)P_K(\kappa)P_{\bm X|\Theta,K}(\bm x|\theta,\kappa), \end{align} with $P_{{\bm X}|\Theta,K}({\bm x}|\theta,\kappa)={\rm Tr}(\hat{\rho}_{\rm RI}(\theta,\kappa)\hat{E}^\dagger_{\bm x}\hat{E}_{\bm x} )$ as the conditional probability when the channel is $\Phi_{\kappa,\theta}$.
For a given fixed phase and reflectivity, the distribution has been solved in Ref.~\cite{shi2022} as a complex Gaussian distribution with variance $2\sigma_\kappa^2=\kappa N_{{\rm S}}+(1-\kappa)N_{\rm E}+1$, i.e., \begin{equation} P_{{\bm X}|\Theta,K}({\bm x}|\theta,\kappa) = g(|\bm x|,\sigma_\kappa), \label{eq:p_M} \end{equation} where we define $g(x,\sigma)= e^{-x^2/2\sigma^2}/(2\pi\sigma^2)^M$. Note that $P_{{\bm X}|\Theta,K}({\bm x}|\theta,\kappa)$ does not depend on the phase shift $\theta$; therefore, we obtain the unconditional distribution of the measurement result as \begin{equation} P_{\bm X}({\bm x})=\int {\rm d\kappa} P_{K,{\bm X}}(\kappa,{\bm x}), \label{PM_def} \end{equation} with $P_{K,{\bm X}}(\kappa,{\bm x})\equiv P_K(\kappa)P_{{\bm X}|\Theta,K}({\bm x}|\theta,\kappa)=P_K(\kappa) g(|\bm x|,\sigma_\kappa)$. At the same time, the conditional distribution can be obtained as \begin{align} P_{K|\bm X}(\kappa|\bm x)=\frac{P_K(\kappa) g(|\bm x|,\sigma_\kappa)}{\int {\rm d}\kappa P_K(\kappa) g(|\bm x|,\sigma_\kappa)}\equiv f(\kappa,|\bm x|), \label{f_func} \end{align} which is only a function of the modulus $|\bm x|$ and $\kappa$. Conditioned on the measurement result of the return modes, the signal-idler joint state is projected to \begin{align} &\hat{\rho}_{\rm RI}'({\bm x})=\frac{\hat{E}_{\bm x}\hat{\rho}_{\rm RI}\hat{E}^\dagger_{\bm x}}{P_{\bm X}({\bm x})} \\ &=\int {\rm d\theta}{\rm d\kappa}\frac{ P_\Theta(\theta)P_{K,{\bm X}}(\kappa,{\bm x})}{P_{\bm X}(\bm x)} \hat{\rho}_{\rm RI}(\theta,\kappa|{\bm x}), \end{align} where the conditional state \begin{equation} \hat{\rho}_{\rm RI}(\theta,\kappa|{\bm x})=\frac{\hat{E}_{\bm x} \hat{\rho}_{\rm RI}(\theta,\kappa)\hat{E}^\dagger_{\bm x} }{P_{{\bm X}|\Theta,K}({\bm x}|\theta,\kappa)} \end{equation} is identical to the return state after the heterodyne detection when the target has a fixed phase shift $\theta$ and reflectivity $\kappa$~\cite{shi2022}. Therefore, the idler modes of $\hat{\rho}_{\rm RI}(\theta,\kappa|{\bm x})$ are in a product of displaced thermal states, \begin{equation} {\rm Tr}_R[\hat{\rho}_{\rm RI}(\theta,\kappa|{\bm x})]=\otimes_m \hat{\rho}_{d_m, E_\kappa}. \end{equation} The complex displacement of the idler conditioned on the measurement result is $ d_m=\mu_\kappa{\rm e}^{{\rm i}\theta}{\bm x}_m^{*}, $ with \begin{equation} \mu_\kappa=\frac{\sqrt{\kappa N_{{\rm S}}(N_{{\rm S}}+1)}}{[\kappa N_{{\rm S}}+(1-\kappa)N_{\rm E}+1]}, \end{equation} and the thermal noise mean photon number \begin{equation} E_\kappa=\frac{(1-\kappa)(1+N_{\rm E})N_{{\rm S}}}{[\kappa N_{{\rm S}}+(1-\kappa)N_{\rm E}+1]}. \end{equation} Conditioned on the phase $\theta$ and reflectivity $\kappa$, one can apply the beamsplitter array strategy of Ref.~\cite{shi2022} on the idler modes, with the weights of the beamsplitters properly chosen based on the heterodyne detection result (independent of $\theta$ or $\kappa$), producing a one-mode displaced thermal state with the complex displacement $ d=\sum_m \omega_m d_m=\mu_\kappa{\rm e}^{{\rm i}\theta}|{\bm x}|, $ where the weight $\omega_m={\bm x}_m/|{\bm x}|$ is independent of $\kappa,\theta$. The mean photon number of the displaced thermal state is still $E_\kappa$. Considering the phase shift and reflectivity distributions, the unconditional output state of the single output mode is \begin{equation} \hat{\rho}_{\rm I}({\bm x})=\int {\rm d\kappa} P_{K|{\bm X}}(\kappa|{\bm x}) \hat{\rho}_{\rm I,\kappa}({\bm x}),
\label{rho_I} \end{equation} where the conditional state \begin{equation} \hat{\rho}_{\rm I,\kappa}({\bm x})\equiv \int {\rm d\theta} \,P_\Theta(\theta)\hat{\rho}_{\mu_\kappa{\rm e}^{{\rm i}\theta}|{\bm x}|, E_\kappa}. \label{rho_I_k} \end{equation} Note that when the phase is uniformly random in $[0,2\pi)$, $\hat{\rho}_{\rm I,\kappa}({\bm x})$ is photon-number diagonal (see Appendix~\ref{diagonal}). Similar to Eq.~(3) of Ref.~\cite{shi2022}, the error probability performance limit of QI based on the ${\rm C}\veryshortrightarrow {\rm D}$ conversion module is therefore \begin{align} P_{\rm C\veryshortrightarrow D}&=\int {\rm d}^{2M}{\bm x} P_{\bm X}(\bm x)P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{\rm I}\left({\bm x}\right)\right]. \label{HEL_general} \end{align} Noticing that the state $\hat{\rho}_{\rm I}({\bm x})$ and the distribution $P_{\bm X}(\bm x)$ are only functions of the amplitude $|\bm x|$, and making use of Eqs.~\eqref{f_func} and~\eqref{eq:p_M} explicitly, we can further simplify the result by integrating out $2M-1$ degrees of freedom to obtain \begin{align} P_{\rm C\veryshortrightarrow D} = \int {\rm d}x P_X(x) P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{\rm I}\left(x\right)\right]. \label{HEL_general_simple} \end{align} Here \begin{align} P_X(x)=\frac{2\pi^M}{\Gamma(M)}\int {\rm d}\kappa P_{K}(\kappa) x^{2M-1}\, g(x,\sigma_\kappa) \label{Px} \end{align} is the distribution of the modulus of the measurement result $\bm x$, and the corresponding conditional state is \begin{equation} \hat{\rho}_{\rm I}(x)=\int {\rm d\theta}{\rm d\kappa} P_\Theta(\theta)f(\kappa,x) \hat{\rho}_{\mu_\kappa{\rm e}^{{\rm i}\theta}x, E_\kappa}. \label{rho_I_x} \end{equation} \section{Performance for random phase model (known reflectivity)} \subsection{Evaluating the performance of the conversion module} To understand the effect of phase noise, we begin with the scenario of a uniformly distributed phase shift and a fixed known reflectivity $\kappa$. Therefore, the phase noise distribution is $P_\Theta(\theta)=1/2\pi$ and the reflectivity distribution is a delta function, $P_K(\kappa')=\delta(\kappa'-\kappa)$. Consequently, $\hat{\rho}_{\rm I}=\hat{\rho}_{\rm I,\kappa}$ in Eq.~\eqref{rho_I} is diagonal in the number basis regardless of the target's presence or absence. Therefore, photon counting is the optimal measurement and the error probability performance limit can be analytically solved from Eq.~\eqref{HEL_general_simple} and Eq.~\eqref{Px}, \begin{equation} P_{\rm C\veryshortrightarrow D} =\int{\rm d}y_{\kappa}P_{\chi^2}^{(2M)}(y_{\kappa})P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{{\rm I},\kappa}\left(\sigma_\kappa\sqrt{y_{\kappa}}\right)\right], \label{HEL} \end{equation} where $P_{\chi^2}^{(2M)}(\cdot)$ is the $\chi^2$ distribution of $2M$ degrees of freedom and we have changed the variable $x$ in Eq.~\eqref{HEL_general_simple} to $y_{\kappa}= x^2/\sigma_\kappa^2$.
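For the reader's convenience, the change of variables behind Eq.~\eqref{HEL} can be spelled out. With the delta-distributed reflectivity, Eq.~\eqref{Px} reduces to $P_X(x)=\frac{2\pi^M}{\Gamma(M)}x^{2M-1}g(x,\sigma_\kappa)$, and substituting $y_{\kappa}=x^2/\sigma_\kappa^2$ gives \begin{equation} P_X(x)\,{\rm d}x=\frac{x^{2M-1}e^{-x^2/2\sigma_\kappa^2}}{2^{M-1}\Gamma(M)\,\sigma_\kappa^{2M}}\,{\rm d}x=\frac{y_{\kappa}^{M-1}e^{-y_{\kappa}/2}}{2^{M}\Gamma(M)}\,{\rm d}y_{\kappa}=P_{\chi^2}^{(2M)}(y_{\kappa})\,{\rm d}y_{\kappa}, \end{equation} i.e., $y_{\kappa}$ indeed follows a $\chi^2$ distribution with $2M$ degrees of freedom.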
At the same time, we can explicitly solve \begin{equation} P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{{\rm I},\kappa}\left(\sigma_\kappa\sqrt{y_{\kappa}}\right)\right]=\left[1-\sum_{n: \gamma_{n,\kappa}\left(y_\kappa\right)>0}\gamma_{n,\kappa}\left(y_\kappa\right)\right]/2, \end{equation} where we have defined (see Appendix~\ref{diagonal}) \begin{align} \gamma_{n,\kappa}(y_{\kappa})&=\frac{N_{{\rm S}}^n}{(1+N_{{\rm S}})^{n+1}} \nonumber \\ &-\frac{E^n}{(1+E)^{1+n}}{\rm e}^{-\xi_\kappa y_{\kappa}/E} {_1\Tilde{F}_1\Big[n+1,1,\frac{\xi_\kappa y_{\kappa}}{E(1+E)}\Big]}, \label{gamma_y} \end{align} and the summation runs over all $n$ with $\gamma_{n,\kappa}\left(y_\kappa\right)>0$. Here $E\equiv E_\kappa$, $_1\Tilde{F}_1$ is the regularized confluent hypergeometric function \cite{shi2020practical}, and \begin{equation} \xi_\kappa=\mu_\kappa^2\sigma_\kappa^2=\frac{\kappa N_{{\rm S}}(N_{{\rm S}}+1)}{2[\kappa N_{{\rm S}}+(1-\kappa)N_{\rm E}+1]}. \label{xi_k} \end{equation} Moreover, since $M\gg1$, the $\chi^2$ distribution in Eq.~\eqref{HEL} can be approximated as a delta function, and we arrive at the analytical result \begin{align} P_{\rm C\veryshortrightarrow D}&\approx P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{{\rm I},\kappa}\left(\sigma_\kappa\sqrt{2M}\right)\right] \label{conHel_PH} \\ &=\left[1-\sum_{n: \gamma_{n,\kappa}\left(2M\right)>0}\gamma_{n,\kappa}\left(2M\right)\right]/2. \label{conHel} \end{align} We numerically verified that the above expression agrees with the exact result with negligible error in all the parameter regions relevant to this paper (see Appendix \ref{largeM}). \begin{figure} \centering \includegraphics[width=0.8\linewidth]{FIG25.pdf} \caption{(a) Optimal decision threshold $N$ for the photon count. (b) Error probability versus the number of copies $M$ with $N_{{\rm S}}=0.001$, $N_{\rm E}=20$ and $\kappa=0.01$. The abrupt changes of $P_{\rm C\veryshortrightarrow D}$ happen when the optimal decision threshold in (a) changes. The dashed lines from dark to light represent the error probabilities corresponding to the decision thresholds $N=0, 1, 2$, respectively, according to the approximation in Eq.~(\ref{app1}). The dotted lines from dark to light indicate the exact error probabilities for the same thresholds calculated with Eq.~(\ref{app2}). } \label{fig1} \end{figure} In Fig.~\ref{fig1}(b), we plot the QI performance $P_{\rm C\veryshortrightarrow D}$ as the red curve for the same parameter choice as in Refs.~\cite{shi2022,Zhuang2017}. We see abrupt changes in the error probability as the number of modes $M$ increases, due to the integer summation in Eq.~\eqref{conHel}. To better understand the performance, we consider a threshold decision strategy, where one compares the measured photon number against a threshold $N$: target presence is declared if and only if the photon number is larger than $N$. From Eq.~\eqref{conHel}, the error probability of such a threshold decision is \begin{equation} P_{{\rm C\veryshortrightarrow D},\kappa}^{N}=\frac{1}{2}\left[1-\sum_{n=0}^{N}\gamma_{n,\kappa}\left(2M\right)\right]. \label{app2} \end{equation} We plot $P_{{\rm C\veryshortrightarrow D},\kappa}^{N}$ as the dotted lines for different values of $N$, and they agree with $P_{\rm C\veryshortrightarrow D}$ (solid red curve) within each continuous sector. The abrupt changes of $P_{\rm C\veryshortrightarrow D}$ also correspond well with the changes in the optimal decision threshold $\argmin_N{P_{{\rm C\veryshortrightarrow D},\kappa}^{N}}$ in Fig.~\ref{fig1}(a).
\begin{figure*} \centering \includegraphics[width=\linewidth]{FIG36.pdf} \caption{The error performance for the uniform-phase, known-reflectivity model with the parameters $N_{{\rm S}}=0.001$, $N_{\rm E}=20$ and $\kappa=0.01$. Note that some lines are plotted only over a limited range of the horizontal axis due to numerical precision. (a) The error probabilities and bounds thereof as a function of $M$. The red line indicates the error probability limit of the C$\veryshortrightarrow$D conversion. The black dashed line is the asymptote of this probability. The blue and green lines are the QCB (upper bound) and the NG lower bound, respectively. The black line indicates the optimum CI's error probability. (b) The asymptotic behaviors of $-\ln{P_{\rm E}}/M$ (whose asymptotic limits are the error exponents), with $P_{\rm E}$ being the error performance of the C$\veryshortrightarrow$D conversion module (red), the asymptote (black, dashed) and the QCB (blue, dashed), respectively. The orange line indicates the error exponent of the asymptote. (c) The error exponents of the asymptote and the QCB, normalised by the error exponent of CI. When $N_{\rm S}\to 1^-$, the error exponent of the asymptote deviates from the QCB because the low-brightness condition no longer holds. The red dotted line indicates the 6 dB advantage over CI.} \label{fig2} \end{figure*} Having understood the performance enabled by the conversion module, we now compare the QI error probability $P_{\rm C\veryshortrightarrow D}$ of Eq.~\eqref{conHel} with that of CI to show entanglement's advantage. In CI with coherent-state probes, due to the uniform random phase noise, the received state is photon-number diagonal, and the Helstrom limit can be efficiently evaluated (see Appendix~\ref{CI}). As Fig.~\ref{fig1} already contains many lines, we replot $P_{\rm C\veryshortrightarrow D}$ (red solid) in Fig.~\ref{fig2}(a) alongside the error probability of CI (black solid), showing an orders-of-magnitude advantage. In particular, the curves indicate that QI and CI still have different error exponents despite the fully random phase noise, as we will confirm in the next subsection with asymptotic analyses. \subsection{Asymptotic results and error exponent} To better understand the QI performance, and in particular the error exponent in the presence of the random phase noise, we explore asymptotic solutions of $P_{\rm C\veryshortrightarrow D}$. Considering Eqs.~\eqref{conHel_PH} and~\eqref{rho_I_k} in the low-brightness ($N_{{\rm S}}\ll 1$) and low-reflectivity ($\kappa\ll 1$) limit, we can approximate the displaced thermal state in Eq.~\eqref{rho_I_k} as a coherent state and $\hat{\rho}_{0,N_{{\rm S}}}$ as a vacuum state. Therefore, Eq.~\eqref{gamma_y} can be approximated as $ \gamma_{n,\kappa}(y_{\kappa})=\delta_{n,0}-e^{-\xi_\kappa y_{\kappa}}(\xi_\kappa y_{\kappa})^{n}/n!, $ where $\delta_{n,0}$ is the Kronecker delta function. Given a threshold $N$, from Eq.~\eqref{app2}, the error probability of the C$\veryshortrightarrow$D conversion module in the large-$M$ limit is \begin{align} P_{{\rm C\veryshortrightarrow D},\kappa}^{N} \approx\frac{1}{2}\sum_{n=0}^N p_n, \mbox{\ with $p_n=e^{-2\xi_{\kappa} M}(2\xi_{\kappa} M)^{n}/n!$,} \label{app1} \end{align} and the minimum error probability is $P_{\rm C\veryshortrightarrow D}=\min_N P_{{\rm C\veryshortrightarrow D},\kappa}^{N}$.
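As a rough numerical illustration, for the parameters quoted in Fig.~\ref{fig1} ($N_{{\rm S}}=0.001$, $N_{\rm E}=20$, $\kappa=0.01$), Eq.~\eqref{xi_k} gives \begin{equation} 2\xi_{\kappa}=\frac{\kappa N_{{\rm S}}(N_{{\rm S}}+1)}{\kappa N_{{\rm S}}+(1-\kappa)N_{\rm E}+1}\approx 4.8\times10^{-7}, \end{equation} so the suppression factor $e^{-2\xi_{\kappa} M}$ in Eq.~\eqref{app1} becomes significant only once the number of modes reaches $M\sim 1/(2\xi_{\kappa})\approx 2\times10^{6}$, which explains the very large mode numbers required in this parameter regime.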
When the photon number threshold is $N=0$, this is just the error probability of the Kennedy receiver, $P_{{\rm C\veryshortrightarrow D},\kappa}^{0}=(1/2)e^{-2\xi_{\kappa} M}$~\cite{Kennedy_1972}. The dashed lines in Fig.~\ref{fig1}(b) show the approximated error probabilities for the decision thresholds $N=0,1,2$, respectively. We see a good recovery of $P_{\rm C\veryshortrightarrow D}$ (solid red curve) in each continuous sector, which allows us to proceed with the asymptotic analyses. Next, we obtain the asymptotic optimal decision threshold. Considering Eq.~\eqref{conHel_PH}, we now treat $\hat{\rho}_{0,N_{{\rm S}}}$ as a thermal state again. Its density matrix is diagonal, with elements $p'_n=N_{{\rm S}}^n/(1+N_{{\rm S}})^{n+1}$. The optimal threshold is determined by solving $p_N=p'_N$, where $p_N$ is defined in Eq.~\eqref{app1}; we obtain \begin{equation} N\approx \frac{2\xi_{\kappa} M}{\epsilon}, \label{Ny} \end{equation} where $\epsilon=-W_{-1}(-N_{{\rm S}}/e)\gg 1$ and $W_{-1}$ is the Lambert $W$ function. The approximation holds when $M\gg 1$. An asymptote of the Helstrom limit, $P_{\rm C\veryshortrightarrow D}^{\rm ASY}$, can be obtained by substituting Eq.~(\ref{Ny}) into Eq.~(\ref{app1}), and its error exponent can be obtained as (see Appendix~\ref{asym}) \begin{equation} r_{\rm C\veryshortrightarrow D}^{\rm ASY}=\lim_{M\to\infty}\tilde{r}_{\rm C\veryshortrightarrow D}^{\rm ASY}(M) =[1-\ln(e\epsilon)/\epsilon]2\xi_{\kappa}, \label{rCD} \end{equation} where we defined the finite-$M$ exponent \begin{equation} \tilde{r}_{\rm C\veryshortrightarrow D}^{\rm ASY}\equiv -\ln P_{\rm C\veryshortrightarrow D}^{\rm ASY}(M)/M. \end{equation} We evaluate $P_{\rm C\veryshortrightarrow D}^{\rm ASY}$ and plot it as the black dashed curve in Fig.~\ref{fig2}(a). Indeed, we see good agreement with $P_{\rm C\veryshortrightarrow D}$ of Eq.~\eqref{conHel} (red solid). To understand the error exponent, we plot the logarithmic quantity $-\ln P_{\rm E}/M$ in units of $2\xi_\kappa$ [see Eq.~\eqref{xi_k}] versus the number of modes $M$ in Fig.~\ref{fig2}(b). As expected, $\tilde{r}_{\rm C\veryshortrightarrow D}^{\rm ASY}$ (black dashed) approaches $r_{\rm C\veryshortrightarrow D}^{\rm ASY}$ (orange solid) in the large-$M$ limit. The exact result $P_{\rm C\veryshortrightarrow D}$ (red solid) agrees well with $\tilde{r}_{\rm C\veryshortrightarrow D}^{\rm ASY}$; however, its evaluation is limited to rather small $M$ due to numerical precision constraints. With the error exponent $r_{\rm C\veryshortrightarrow D}^{\rm ASY}$ in hand, we can now compare with the error exponent of CI, $r_{\rm CI}=\lim_{M\to\infty}-\ln \left(P_{\rm CI}\right)/M$ (see Appendix~\ref{CI} for the calculation of $P_{\rm CI}$), to understand the quantum advantage in the error exponent at different signal brightnesses. As shown in Fig.~\ref{fig2}(c), $r_{\rm C\veryshortrightarrow D}^{\rm ASY}$ (orange solid) is always larger than $r_{\rm CI}$, confirming the quantum advantage; moreover, the error-exponent ratio approaches six decibels (indicated by the red dotted line) as $N_S$ approaches zero, although the rate of convergence is very slow. This can be confirmed analytically from Eq.~\eqref{rCD} via \begin{equation} \lim_{N_S\to 0} r_{\rm C\veryshortrightarrow D}^{\rm ASY}=2\xi_{\kappa} \simeq \kappa N_S/N_E. \end{equation} As we have $r_{\rm CI}\lesssim \kappa N_S/(4N_E)$, there is indeed a six-decibel advantage of QI over CI.
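Spelled out, the ratio of the two error exponents in this limit is \begin{equation} \frac{r_{\rm C\veryshortrightarrow D}^{\rm ASY}}{r_{\rm CI}}\gtrsim\frac{\kappa N_S/N_E}{\kappa N_S/(4N_E)}=4, \qquad 10\log_{10}4\approx 6.0~{\rm dB}, \end{equation} which is the six-decibel error-exponent advantage quoted above.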
From the numerical results as well as the asymptotic analyses, we see that in the weak-signal limit, phase noise essentially does not change the error exponent compared to the case without phase noise~\cite{tan2008quantum,shi2022}. \subsection{Upper and lower bounds} Finally, we provide an additional comparison of the QI performance with upper and lower bounds. We obtain an upper bound from the asymptotically tight QCB~\cite{Audenaert2007,Pirandola2008} and a lower bound from the Nair-Gu (NG) bound~\cite{nair2020fundamental}. Given any two quantum states $\hat{\rho}_0,\hat{\rho}_1$, the QCB $P_{\rm QCB}(\hat{\rho}_0,\hat{\rho}_1)=(1/2){\rm inf}_{s\in[0,1]}Q_s$, where $Q_s={\rm Tr}(\hat{\rho}_0^s\hat{\rho}_1^{1-s})$, is an asymptotically tight upper bound for the Helstrom limit $P_{\rm H}\left[\hat{\rho}_0,\hat{\rho}_1\right]$. Therefore, for the uniform phase and known reflectivity model, we can apply the QCB to the Helstrom limit $P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{{\rm I},\kappa}\left(\sigma_\kappa\sqrt{y_{\kappa}}\right)\right]$ in Eq.~\eqref{conHel_PH} to obtain the upper bound \begin{align} &P_{\rm C\veryshortrightarrow D}\le P_{\rm QCB,U} \equiv \frac{{\rm inf}_{s\in[0,1]}{\rm Tr}[\hat{\rho}_{0,N_{{\rm S}}}^s\hat{\rho}_{{\rm I},\kappa}^{1-s}(\sigma_\kappa\sqrt{2M})]}{2}. \end{align} Here both $\hat{\rho}_{0,N_{{\rm S}}}$ and $\hat{\rho}_{{\rm I},\kappa}$ are diagonal in the number-state basis and can therefore be efficiently evaluated. Nair and Gu derived a lower bound on the error probability of QI target detection assisted by an arbitrary form of entanglement~\cite{nair2020fundamental}. As this is a lower bound in the ideal case, it also holds as a lower bound in the presence of additional noise. For $M$ probes with mean photon number $N_{{\rm S}}$, we have \begin{equation} P_{\rm C\veryshortrightarrow D} \geq P_{\rm NG}=\frac{1}{4}e^{-\beta M N_{{\rm S}}}, \end{equation} where $\beta=-\ln[1-\kappa/(N_E(1-\kappa)+1)]$. We plot the upper bound $P_{\rm QCB,U}$ (blue dashed) and the lower bound $P_{\rm NG}$ (green solid) in Fig.~\ref{fig2}(a). Meanwhile, we also plot the QCB error exponent $r_{\rm QCB}\equiv\lim_{M\to\infty}-\ln P_{\rm QCB,U}/M$ and $\tilde{r}_{\rm QCB}\equiv -\ln P_{\rm QCB,U}/M$ in Fig.~\ref{fig2}(b) and (c). Indeed, we see that the QCB verifies our previous asymptotic evaluations. \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{FIG59.pdf} \caption{The error performance for the Rayleigh-fading model. The parameters are $N_{{\rm S}}=0.001$, $N_{\rm E}=20$ and $\bar{\kappa}=0.01$. Comparison of the achievable error performance given by Eq.~(\ref{heli}) (red), the lower bound given by Eq.~(\ref{LB}) (purple, dashed), the optimum error probability for CI (black) and the error performance of the SFG receiver (blue, dotted).} \label{fig5} \end{figure} \section{Performance for Rayleigh-fading Model} With the performance degradation from phase noise well understood, we now consider Rayleigh-fading targets, where the target has a Rayleigh-distributed reflectivity in addition to a uniform random phase, i.e., \begin{equation} P_K(\kappa)= e^{-\kappa/\bar{\kappa}}/\bar{\kappa}, \label{PDFK} \end{equation} with $\bar{\kappa}$ being the average reflectivity of the target. Note that the above distribution is truncated so that $\kappa\in[0,1]$.
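As a small illustrative aside (our own sketch; the grid and function names are hypothetical), the truncated distribution can be handled numerically by renormalising an exponential weight on a grid of $\kappa\in[0,1]$; such weights enter the reflectivity averages of the next subsections:
\begin{verbatim}
import numpy as np

def reflectivity_weights(kappa_grid, kappa_bar):
    """Truncated exponential law of Eq. (PDFK) on kappa in [0, 1],
    renormalised so that the cut-off distribution integrates to one."""
    w = np.exp(-kappa_grid / kappa_bar) / kappa_bar
    return w / np.trapz(w, kappa_grid)

kappa_grid = np.linspace(0.0, 1.0, 2001)
w = reflectivity_weights(kappa_grid, kappa_bar=0.01)
\end{verbatim}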
As Eqs.~(\ref{rho_I}) and (\ref{HEL_general}) are now difficult to evaluate numerically, to understand the QI performance for Rayleigh-fading targets we consider lower bounds and achievable performance (upper bounds). \subsection{Lower bound} Applying the concavity of the Helstrom limit (see Lemma 1 in \cite{zhuang2017fading}) to Eqs.~(\ref{rho_I}) and (\ref{HEL_general}), we have \begin{align} &P_{\rm C\veryshortrightarrow D} \geq P_{\rm E,LB}\equiv \int {\rm d}^{2M}{\bm x} {\rm d\kappa} P_{K,{\bm X}}(\kappa,{\bm x})P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{\rm I,\kappa}(\bm x)\right]\nonumber \\ &=\int{\rm d}\kappa {\rm d}y_{\kappa} \frac{1}{\bar{\kappa}}e^{-\kappa/\bar{\kappa}}P_{\chi^2}^{(2M)}(y_{\kappa}) P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{{\rm I},\kappa}\left(\sigma_\kappa\sqrt{y_{\kappa}}\right)\right]\nonumber \\ &\approx\int {\rm d}\kappa \frac{1}{\bar{\kappa}}e^{-\kappa/\bar{\kappa}} P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\hat{\rho}_{{\rm I},\kappa}(\sqrt{2M}\sigma_{\kappa})\right]. \label{LB} \end{align} In the last step, we have taken the approximation in the $M\gg 1$ limit, similar to Eq.~\eqref{conHel_PH}. Now Eq.~\eqref{LB} can be evaluated via an approach similar to Eq.~\eqref{HEL}. \subsection{Achievable performance} We then explore an achievable performance of the ${\rm C \veryshortrightarrow D}$ conversion module for the Rayleigh-fading model. Given the heterodyne measurement results $\bm x$ on the return, we perform direct photon counting on the idler output of the conversion module, in state $\hat{\rho}_{\rm I}({\bm x})$, and then apply a threshold decision strategy with a fixed threshold independent of $\bm x$. With the decision threshold optimized, the error probability can be expressed as \begin{align} &P_{\rm C\veryshortrightarrow D}= P_{\rm H}\bigg[\hat{\rho}_{0,N_{{\rm S}}},\int {\rm d}^{2M}{\bm x}P_{\bm X}(\bm x)\hat{\rho}_{\rm I}\left({\bm x}\right)\bigg]\nonumber \\ &=P_{\rm H}\bigg[\hat{\rho}_{0,N_{{\rm S}}}, \int{\rm d}\kappa {\rm d}y_{\kappa} \frac{1}{\bar{\kappa}}e^{-\kappa/\bar{\kappa}}P_{\chi^2}^{(2M)}(y_{\kappa})\hat{\rho}_{{\rm I},\kappa}\left(\sigma_\kappa\sqrt{y_{\kappa}}\right)\bigg] \nonumber \\ &\approx P_{\rm H}\left[\hat{\rho}_{0,N_{{\rm S}}},\int {\rm d}\kappa \frac{1}{\bar{\kappa}} e^{-\kappa/\bar{\kappa}}\hat{\rho}_{{\rm I},\kappa}(\sqrt{2M}\sigma_{\kappa})\right], \label{heli} \end{align} where in the last step the measurement distribution is approximated as a delta function in the large-$M$ limit. Fig.~\ref{fig5} plots the achievable performance $P_{\rm C\veryshortrightarrow D}$ (red solid), the lower bound $P_{\rm E,LB}$ (purple dashed) and the optimum CI error probability (black solid, see Appendix~\ref{CI}) versus the number of modes. We see that the quantum advantage over CI persists for the Rayleigh-fading model, although it is further reduced compared with the random phase model. The plot also compares our results with the QI detection of Rayleigh-fading targets with SFG reception, $P_{\rm SFG}$~\cite{zhuang2017fading} (blue dotted), where the error probability decays only polynomially with the number of modes. Indeed, we find that the achievable result $P_{\rm C\veryshortrightarrow D}$ of the conversion module agrees fairly well with $P_{\rm SFG}$.
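To make the structure of Eqs.~\eqref{LB} and~\eqref{heli} concrete, the following sketch (our own; the helper names are hypothetical) evaluates the Helstrom limit for states diagonal in the number basis, together with the reflectivity average using the weights sketched above:
\begin{verbatim}
import numpy as np

def helstrom_diag(lam0, lam1):
    """Helstrom limit for two states diagonal in the same number basis:
    (1/2) * (1 - sum of positive parts of lam0 - lam1)."""
    gamma = lam0 - lam1
    return 0.5 * (1.0 - gamma[gamma > 0].sum())

def averaged_state(rho_of_kappa, w, kappa_grid):
    """kappa-average of the conditional diagonal states, as in Eq. (heli);
    rho_of_kappa has shape (len(kappa_grid), dim)."""
    return np.trapz(w[:, None] * rho_of_kappa, kappa_grid, axis=0)

# Eq. (heli): helstrom_diag(lam0, averaged_state(rho_of_kappa, w, kappa_grid))
# Eq. (LB):   np.trapz(w * np.array([helstrom_diag(lam0, r)
#                                    for r in rho_of_kappa]), kappa_grid)
\end{verbatim}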
While the SFG results require an approximate solution of a complex quantum nonlinear optical process, the conversion module's achievable performance is almost exact and requires little computational effort. \section{CONCLUSIONS} We study the entanglement-assisted target detection performance of the recently proposed correlation-to-displacement conversion module in the more practical scenario of random phase noise and reflectivity fluctuation. The results show that, in the scenario of only random phase noise, this module still affords a six-decibel error-exponent advantage over the optimum classical illumination when the signal brightness is small. When Rayleigh fading of the reflectivity is also considered, the advantage is much smaller, although still non-zero. \begin{acknowledgements} This project is supported by the NSF CAREER Award CCF-2142882, NSF OIA-2134830 and NSF OIA-2040575. QZ also acknowledges support from Defense Advanced Research Projects Agency (DARPA) under Young Faculty Award (YFA) Grant No. N660012014029, National Science Foundation (NSF) Engineering Research Center for Quantum Networks Grant No. 1941583. \end{acknowledgements} \appendix \section{Proof that $\hat{\rho}_{\rm I}$ is diagonal under uniform phase rotation} \label{diagonal} A phase rotation $\hat{a}\to e^{-i\theta}\hat{a}$ on mode $\hat{a}$ is described by the unitary $\hat{R}(\theta)=\exp\left[-i\theta \hat{a}^\dagger\hat{a}\right]$. Under a uniform random phase, any single-mode input state becomes number-state diagonal, because \begin{align} \expval{m|\int {\rm d}\theta\, \hat{R}(\theta)\hat{\rho}\hat{R}^\dagger(\theta)|n}&=\int {\rm d}\theta\, e^{-i\theta(m-n) }\expval{m|\hat{\rho}|n} \nonumber \\ &\propto \delta_{mn}\expval{n|\hat{\rho}|n}, \end{align} where we utilized the fact that $\hat{R}(\theta)\ket{n}=e^{-in\theta}\ket{n}$. In the case of a displaced thermal state, we have~\cite{shi2020practical} \begin{align} &\bra{m}\hat{\rho}_{{\rm I},\kappa}(x)\ket{n}=\int {\rm d\theta}\frac{1}{2\pi}\bra{m}\hat{\rho}_{\mu_\kappa x{\rm e}^{{\rm i}\theta}, E_\kappa}\ket{n}\nonumber \\ &=\frac{\delta_{m,n}E^n}{(1+E)^{1+n}}{\rm e}^{-|\mu_\kappa x|^2/E} {_1\Tilde{F}_1\Big[n+1,1,\frac{|\mu_\kappa x|^2}{E(1+E)}\Big]}. \end{align} \section{Optimum performance limit of classical illumination} \label{CI} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{FIG43.pdf} \caption{Comparison of the error performances for CI evaluated by Eq.~(\ref{PCI}) (solid) and Eq.~(\ref{PROC}) (dotted), respectively. $P_{\rm CI,U}$ and $\tilde{P}_{\rm CI,U}$ are for the random phase model. $P_{\rm CI,R}$ and $\tilde{P}_{\rm CI,R}$ are for the Rayleigh-fading model.} \label{figCI} \end{figure} For comparison, the Helstrom limit of classical illumination (CI) is calculated with a coherent-state transmitter. If $\kappa$ and $\theta$ are fixed, the returned mode is in a displaced thermal state $\hat{\rho}_{\sqrt{\kappa N_S},(1-\kappa) N_E}$. When $\kappa$ and $\theta$ are random variables, the output state is then $\int {\rm d\theta}{\rm d\kappa} P_\Theta(\theta)P_K(\kappa)\hat{\rho}_{\sqrt{\kappa N_S},(1-\kappa) N_E}$.
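As an illustrative aside (our own sketch, not part of the proof), the number-basis diagonal elements written above can be evaluated directly, since the regularised confluent hypergeometric function with second argument $1$ reduces to the ordinary ${}_1F_1$; these diagonal weights are what enter the Helstrom evaluation below:
\begin{verbatim}
import numpy as np
from scipy.special import hyp1f1

def displaced_thermal_diag(n, amp_sq, E):
    """Diagonal element <n|rho|n> of a phase-averaged displaced thermal
    state with |mu_kappa x|^2 = amp_sq and thermal occupation E;
    Gamma(1) = 1, so the regularised 1F1 equals hyp1f1."""
    z = amp_sq / (E * (1.0 + E))
    return (E**n / (1.0 + E)**(n + 1)) * np.exp(-amp_sq / E) * hyp1f1(n + 1, 1, z)
\end{verbatim}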
Therefore, the performance limit is \begin{align} P_{\rm CI}=\frac{1}{2} \left(1-\sum_{n:\gamma_{n,{\rm CI}}>0}\gamma_{n,{\rm CI}}\right), \label{PCI} \end{align} where the summation includes all the positive values of \begin{align} \gamma_{n,{\rm CI}}&=\frac{N_E^n}{(1+N_E)^{n+1}}-\int {\rm d}\kappa P_K(\kappa)\frac{E^{\prime n}}{(1+E')^{1+n}}\nonumber\\&\times{\rm e}^{-M\kappa N_{{\rm S}}/E'} {_1\Tilde{F}_1\Big[n+1,1,\frac{M\kappa N_{{\rm S}}}{E'(1+E')}\Big]}. \end{align} Here $E'=(1-\kappa) N_E$. To double-check the result, we calculate the performance limit with another method~\cite{zhuang2017fading}: \begin{align} P_{\rm CI}={\rm min}_{P_{\rm F}^{\rm CI}}\left[P_{\rm F}^{\rm CI}/2+(1-P_{\rm D}^{\rm CI})/2\right] \label{PROC} \end{align} and compare the results. Here the conditional false-alarm probability $P_{\rm F}^{\rm CI}$ denotes the probability that a target is declared present when no target is present, and the conditional detection probability $P_{\rm D}^{\rm CI}$ denotes the probability that a target is declared present when a target is present. The relation between $P_{\rm F}^{\rm CI}$ and $P_{\rm D}^{\rm CI}$ is referred to as the receiver operating characteristic (ROC). The ROC for the CI detection of the uniform-phase and known-reflectivity targets is $P_{\rm D}^{\rm CI}=Q(\sqrt{2\kappa M N_S/E'},\sqrt{-2{\rm ln}P_{\rm F}^{\rm CI}})$, where $Q(a,b)$ is the Marcum $Q$ function; the CI ROC for the Rayleigh-fading targets is $P_{\rm D}^{\rm CI}=(P_{\rm F}^{\rm CI})^{1/(1+M\bar{\kappa}N_{\rm S}/E')}$~\cite{van2001detection3}. Fig.~\ref{figCI} shows that the results calculated with the two methods are consistent. \section{Large-$M$ approximation} \label{largeM} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{FIG64.pdf} \caption{Comparison of the exact (red) and approximated (black, dotted) performance limit for the random phase model in the large-$M$ limit.} \label{app} \end{figure} Fig.~\ref{app} shows the exact result of Eq.~\eqref{HEL} and the approximated performance limit of Eq.~\eqref{conHel} for $M$ in the range of $10^6$ to $6\times10^7$. The maximal deviation for the data we have is $0.25\%$, which occurs at $M=10^6$. This approximation is also used in the calculation of the QCB performance for the random phase model and of the error performance for the Rayleigh-fading model in the large-$M$ regime. \section{The asymptote of the error probability for the random phase model} \label{asym} Consider the scenario of a uniformly distributed phase shift and a fixed, known reflectivity. In the asymptotic limit of low brightness $N_{{\rm S}}\ll 1$ and low reflectivity $\kappa\ll 1$, the optimal decision threshold $N$ is determined by solving \begin{equation} \frac{N_{{\rm S}}^N}{(1+N_{{\rm S}})^{N+1}}=e^{-2\xi_{\kappa} M}(2\xi_{\kappa} M)^{N}/N!, \end{equation} which leads to the solution \begin{equation} N\approx-W_{-1}^{-1}\left[-N_{{\rm S}}\left(\frac{1}{1+N_{{\rm S}}}\right)^{1+1/N}\frac{(2\pi N)^{1/N}}{e}\right]2\xi_{\kappa} M. \label{dec} \end{equation} In the derivation above, we have used Stirling's approximation $N!\approx\sqrt{2\pi N}(N/e)^N$. When $N\gg 1$ and $N_S\ll 1$, Eq.~(\ref{Ny}) is obtained by further approximation.
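As a consistency check (an illustrative sketch of our own; the function names are hypothetical), the crossing $p_N=p'_N$ can also be located numerically and compared with the asymptotic threshold of Eq.~(\ref{Ny}):
\begin{verbatim}
import numpy as np
from scipy.stats import poisson
from scipy.special import lambertw

def threshold_numeric(xi_kappa, M, N_S, N_max=200000):
    """First N at which the Poisson weight p_N of Eq. (app1) exceeds the
    thermal weight N_S^N/(1+N_S)^(N+1), evaluated on a log scale."""
    N = np.arange(N_max)
    log_p = poisson.logpmf(N, 2.0 * xi_kappa * M)
    log_th = N * np.log(N_S) - (N + 1) * np.log1p(N_S)
    return int(np.argmax(log_p >= log_th))

def threshold_asy(xi_kappa, M, N_S):
    """Asymptotic threshold of Eq. (Ny)."""
    eps = -np.real(lambertw(-N_S / np.e, k=-1))
    return 2.0 * xi_kappa * M / eps
\end{verbatim}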
Substituting Eq.~(\ref{Ny}) into Eq.~(\ref{app1}), an asymptote of $P_{\rm C\veryshortrightarrow D}$ is obtained, \begin{align} P_{\rm C\veryshortrightarrow D}^{\rm ASY}&\approx\frac{1}{2} e^{-2\xi_{\kappa} M}(2\xi_{\kappa} M)^{N}/N!\nonumber\\&\approx \frac{1}{2} e^{-2\xi_{\kappa} M}\frac{(2\xi_{\kappa} M)^{N}}{\sqrt{2\pi N}(N/e)^N}\nonumber\\&\approx \frac{1}{2} e^{-2\xi_{\kappa} M}\frac{(2\xi_{\kappa} M)^{2\xi_{\kappa} M/\epsilon}}{\sqrt{4\pi \xi_{\kappa} M/\epsilon}\,\big(2\xi_{\kappa} M/(\epsilon e)\big)^{2\xi_{\kappa} M/\epsilon}}. \label{pny} \end{align} The approximation in the first line holds because $2\xi_{\kappa} M/N\gg 1$. The finite-$M$ error exponent is \begin{align} \tilde{r}_{\rm C\veryshortrightarrow D}^{\rm ASY}&(M)=-\frac{1}{M}{\rm ln}P_{\rm C\veryshortrightarrow D}^{\rm ASY}\nonumber\\&\approx\frac{1}{M}\Big[\Big(1-\frac{{\rm ln}(e\epsilon)}{\epsilon}\Big)2\xi_{\kappa} M+ \frac{1}{2}{\rm ln}(2\xi_{\kappa} M)+{\rm ln}\big(2\sqrt{2\pi/\epsilon}\big)\Big]. \label{asy1} \end{align} \begin{thebibliography}{32} \bibitem{Escher_2011} B.~M. Escher, R.~L. de~Matos~Filho, and L. Davidovich, General framework for estimating the ultimate precision limit in noisy quantum-enhanced metrology, Nat. Phys. \textbf{7}, 406 (2011).
\bibitem{gagatsos2017bounding} C.~N. Gagatsos, B.~A. Bash, S. Guha, and A. Datta, Bounding the quantum limits of precision for phase estimation with loss and thermal noise, Phys. Rev. A \textbf{96}, 062306 (2017).
\bibitem{Lloyd2008} S. Lloyd, Enhanced sensitivity of photodetection via quantum illumination, Science \textbf{321}, 1463 (2008).
\bibitem{tan2008quantum} S.-H. Tan, B.~I. Erkmen, V. Giovannetti, S. Guha, S. Lloyd, L. Maccone, S. Pirandola, and J.~H. Shapiro, Quantum illumination with Gaussian states, Phys. Rev. Lett. \textbf{101}, 253601 (2008).
\bibitem{zhuang2017optimum} Q. Zhuang, Z. Zhang, and J.~H. Shapiro, Optimum mixed-state discrimination for noisy entanglement-enhanced sensing, Phys. Rev. Lett. \textbf{118}, 040801 (2017).
\bibitem{zhuang2021quantum} Q. Zhuang, Quantum ranging with Gaussian entanglement, Phys. Rev. Lett. \textbf{126}, 240501 (2021).
\bibitem{zhuang2022ultimate} Q. Zhuang and J.~H. Shapiro, Ultimate accuracy limit of quantum pulse-compression ranging, Phys. Rev. Lett. \textbf{128}, 010501 (2022).
\bibitem{sarovar2006optimal} M. Sarovar and G. Milburn, Optimal estimation of one-parameter quantum channels, J. Phys. A: Math. Gen. \textbf{39}, 8487 (2006).
\bibitem{venzl2007} H. Venzl and M. Freyberger, Quantum estimation of a damping constant, Phys. Rev. A \textbf{75}, 042322 (2007).
\bibitem{monras2007} A. Monras and M.~G.~A. Paris, Optimal quantum estimation of loss in bosonic channels, Phys. Rev. Lett. \textbf{98}, 160401 (2007).
\bibitem{adesso2009} G. Adesso, F. Dell'Anno, S. De~Siena, F. Illuminati, and L.~A.~M. Souza, Optimal estimation of losses at the ultimate quantum limit with non-Gaussian states, Phys. Rev. A \textbf{79}, 040305 (2009).
\bibitem{monras2010} A. Monras and F. Illuminati, Information geometry of Gaussian channels, Phys. Rev. A \textbf{81}, 062326 (2010).
\bibitem{monras2011} A. Monras and F. Illuminati, Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels, Phys. Rev. A \textbf{83}, 012315 (2011).
\bibitem{Nair_2011} R. Nair, Discriminating quantum-optical beam-splitter channels with number-diagonal signal states: Applications to quantum reading and target detection, Phys. Rev. A \textbf{84}, 032312 (2011).
\bibitem{nair2016} R. Nair and M. Tsang, Far-field superresolution of thermal electromagnetic sources at the quantum limit, Phys. Rev. Lett. \textbf{117}, 190801 (2016).
\bibitem{nair2018} R. Nair, Quantum-limited loss sensing: Multiparameter estimation and Bures distance between loss channels, Phys. Rev. Lett. \textbf{121}, 230801 (2018).
\bibitem{pirandola2017ultimate} S. Pirandola and C. Lupo, Ultimate precision of adaptive noise estimation, Phys. Rev. Lett. \textbf{118}, 100502 (2017).
\bibitem{shi2022ultimate} H. Shi and Q. Zhuang, Ultimate precision limit of noise sensing and dark matter search, arXiv:2208.13712 (2022).
\bibitem{nair2022optimal} R. Nair, G.~Y. Tham, and M. Gu, Optimal gain sensing of quantum-limited phase-insensitive amplifiers, Phys. Rev. Lett. \textbf{128}, 180506 (2022).
\bibitem{weedbrook2012gaussian} C. Weedbrook, S. Pirandola, R. Garc{\'\i}a-Patr{\'o}n, N.~J. Cerf, T.~C. Ralph, J.~H. Shapiro, and S. Lloyd, Gaussian quantum information, Rev. Mod. Phys. \textbf{84}, 621 (2012).
\bibitem{Pirandola2008} S. Pirandola and S. Lloyd, Computable bounds for the discrimination of Gaussian states, Phys. Rev. A \textbf{78}, 012331 (2008).
\bibitem{banchi2020quantum} L. Banchi, Q. Zhuang, and S. Pirandola, Quantum-enhanced barcode decoding and pattern recognition, Phys. Rev. Applied \textbf{14}, 064026 (2020).
\bibitem{nair2020fundamental} R. Nair and M. Gu, Fundamental limits of quantum illumination, Optica \textbf{7}, 771 (2020).
\bibitem{Audenaert2007} K.~M.~R. Audenaert, J. Calsamiglia, R. Mu\~noz Tapia, E. Bagan, L. Masanes, A. Acin, and F. Verstraete, Discriminating states: The quantum Chernoff bound, Phys. Rev. Lett. \textbf{98}, 160501 (2007).
\bibitem{Guha2009} S. Guha and B.~I. Erkmen, Gaussian-state quantum-illumination receivers for target detection, Phys. Rev. A \textbf{80}, 052310 (2009).
\bibitem{zhuang2017fading} Q. Zhuang, Z. Zhang, and J.~H. Shapiro, Quantum illumination for enhanced detection of Rayleigh-fading targets, Phys. Rev. A \textbf{96}, 020302 (2017).
\bibitem{shi2022} H. Shi, B. Zhang, and Q. Zhuang, Fulfilling entanglement's benefit via converting correlation to coherence, arXiv:2207.06609 (2022).
\bibitem{zhuang2021quantum-enabled} Q. Zhuang, Quantum-enabled communication without a phase reference, Phys. Rev. Lett. \textbf{126}, 060502 (2021).
\bibitem{shi2020practical} H. Shi, Z. Zhang, and Q. Zhuang, Practical route to entanglement-assisted communication over noisy bosonic channels, Phys. Rev. Applied \textbf{13}, 034029 (2020).
\bibitem{Zhuang2017} Q. Zhuang, Z. Zhang, and J.~H. Shapiro, Optimum mixed-state discrimination for noisy entanglement-enhanced sensing, Phys. Rev. Lett. \textbf{118}, 040801 (2017).
\bibitem{Kennedy_1972} R.~S. Kennedy, Technical Report (Research Laboratory of Electronics, Massachusetts Institute of Technology, 1972).
\bibitem{van2001detection3} H.~L. Van~Trees, Detection, Estimation, and Modulation Theory, Part III: Radar--Sonar Signal Processing and Gaussian Signals in Noise (2001).
\end{thebibliography} \end{document}
\begin{document} \twocolumn[{ \draft \widetext \title {Early times in tunneling } \author{Gast\'on Garc\'{\i}a-Calder\'on} \address{{ \it Instituto de F\'{\i}sica, Universidad Nacional Aut\'onoma de M\'exico\\ Apartado Postal 20-364, 01000 M\'exico, D.F., M\'exico}} \author{Jorge Villavicencio} \address{\it { Facultad de Ciencias, Universidad Aut\'onoma de Baja California\\ Apartado Postal 1880, Ensenada, B.C., M\'exico}} \date{20 June 2000} \mediumtext \begin{abstract} Exact analytical solutions of the time-dependent Schr\"odinger equation with the initial condition of an incident cutoff wave are used to investigate the traversal time for tunneling. The probability density starts from a vanishing value along the tunneling and transmitted regions of the potential. At the barrier width it exhibits, at early times, a distribution of traversal times that typically has a peak $\tau_p$ and a width $\Delta \tau$. Numerical results for other tunneling times, such as the phase-delay time, fall within $\Delta \tau$. The B\"uttiker traversal time is the closest to $\tau_p$. Our results resemble calculations based on Feynman paths if their noisy behaviour is ignored. \end{abstract} \pacs {PACS numbers: 03.65.Bz, 03.65.Ca, 73.40.Gk} \maketitle }] \narrowtext Quantum tunneling, which refers to the possibility that a particle traverses a classically forbidden region, constitutes one of the paradigms of quantum mechanics. In the energy domain, where one solves the stationary Schr\"odinger equation at a fixed energy $E$, tunneling is well understood. In the time domain, however, there are aspects still open to investigation. Recent technological achievements, such as the possibility of constructing artificial quantum structures at nanometric scales\cite{qs} or the manipulation of individual atoms\cite{corral}, have stimulated work on time-dependent tunneling at both applied and fundamental levels. A problem that has remained controversial and the subject of a great deal of attention over the years is the tunneling time problem\cite{traversal}, which can be stated as the question: how long does it take a particle to traverse a classically forbidden region? In time-dependent tunneling, many works attempting to answer the above question consider the numerical analysis of the time-dependent Schr\"odinger equation with the initial condition of a Gaussian wave packet\cite{hartman,collins,muga}. A common feature of the majority of these approaches is that the initial wave packet extends through all space. As a consequence, the initial state, although manipulated to reduce its value along the tunneling and transmitted regions as much as possible, contaminates the tunneling process from the beginning, and hence a long-time analysis of the solutions is usually required. The above situation may be circumvented by considering cutoff wave initial conditions\cite{mm,stevens83,morettipra92,muga96,gcr97}. In this work we consider analytic time-dependent solutions to the Schr\"{o}dinger equation with the initial condition at $\tau=0$ of an incident cutoff wave, to investigate the traversal time for tunneling through a potential barrier. The problem may be visualized as a {\it gedanken experiment} consisting of a shutter, situated at $x=0$, that separates a beam of particles from a potential barrier of height $V_0$ located in the region $0\leq x\leq L$. At $\tau=0$ the shutter is opened. The probability density rises initially from a vanishing value and evolves with time through $x > 0$.
At the barrier edge $x=L$, the probability density at time $\tau$ yields the probability of finding the particle after a time $\tau$ has elapsed. Since initially there is no particle along the tunneling region, detecting the particle at the barrier edge at time $\tau$ provides a measure of its traversal time through the tunneling region. The transient behavior of the time-dependent solution at early times and at distances close to the interaction region plays a significant role in our approach. Other formulations, based on the stationary solutions of the Schr\"{o}dinger equation\cite{hartman,smith}, refer to asymptotically long times at large distances and hence ignore transient effects. These approaches provide a single value for the traversal time. In contrast, our approach leads to a distribution of traversal times, as in works based on the Feynman path integral method\cite{sokolovski,fertig,yamada}, though as indicated below both approaches differ in important aspects. In a recent paper we have obtained the time-dependent solution to the Schr\"{o}dinger equation for tunneling through an arbitrary potential of finite range with the initial condition of a cutoff plane wave of momentum $k$. The solution may be written as a term proportional to the free solution plus a contribution involving an infinite sum of resonance terms associated with the $S-$matrix poles of the potential\cite{gcr97}. Our approach is based on the Laplace transform technique and considers some analytical properties of the outgoing wave propagator. Some decades ago Moshinsky considered the free case solution to the above problem\cite{mm}. Moshinsky showed that the probability density, for a fixed value of the distance $x_0$ as a function of $t$, exhibits a transient regime that he named diffraction in time. Recently, observations of that phenomenon have been reported\cite{exp}. For the sake of simplicity in our approach, as Moshinsky also did, we consider the instantaneous removal of the shutter. This may be seen as a kind of `sudden approximation' to a shutter opening with finite velocity, where the treatment becomes more involved\cite{gahler}. As shown below, the terms depending on the $S-$matrix poles provide a novel transient behavior that may dominate the early times in the tunneling process. The plane wave cutoff initial condition discussed in Refs. \cite{mm,gcr97} refers to a shutter that acts as a perfect absorber (no reflected wave). One can also envisage a shutter that acts as a perfect reflector. In such a case the initial wave may be written as \begin{equation} \psi _s(x,k,\tau =0)=\left\{ \begin{array}{cc} e^{ikx}-e^{-ikx}, & x<0 \\ 0, & x>0. \end{array} \right. \label{2a} \end{equation} One can then proceed along lines similar to those discussed in Ref.\ \cite {gcr97} to derive the time-dependent solution $\psi _s(x,k,\tau )$ of the Schr\"{o}dinger equation for a potential $V(x)$ that vanishes outside the region $0\leq x\leq L$. The solution along the internal region reads \begin{eqnarray} \psi _s(x,k,\tau ) &=&\phi (x,k)M(0,k,\tau )-\phi (x,-k)M(0,-k,\tau) \nonumber \\ &&-\sum_n^\infty \phi _n(x)M(0,k_n,\tau ),\,\,\,\,(0\leq x\leq L) \label{3b} \end{eqnarray} where $\phi (x,k)$ refers to the stationary solution and $\phi_n(x)=2iku_n(0)u_n(x)/(k^2-k_n^2)$ is given in terms of the resonant (Gamow) states $\{u_n(x)\}$ and complex poles $\{k_n\}$ of the problem\cite{gcr97,gcp76}.
Similarly, the transmitted solution\cite{foot} becomes \begin{eqnarray} \psi _s(x,k,\tau ) &=&T(k)M(x,k,\tau )-T(-k)M(x,-k,\tau ) \nonumber \\ &&-\sum_n^\infty T_nM(x,k_n,\tau ),\,\,\,\,(x\geq L) \label{3c} \end{eqnarray} where $T(k)$ and $T(-k)$ are transmission amplitudes, and $T_n=2iku_n(0)u_n(L){\rm exp}(-ik_nL)/(k^2-k_n^2)$. In the above two equations the functions $M(x,k,\tau )$ and $M(x,k_n,\tau)$ are defined as \begin{equation} M(x,q,\tau)=\frac 12{\rm e}^{(imx^2/2\hbar \tau )} {\rm e}^{y_q^2}{\rm erfc} (y_q), \label{4} \end{equation} where the argument $y_q$ is given by \begin{equation} y_q\equiv {\rm e}^{-i\pi /4}\left( \frac m{2\hbar \tau} \right) ^{1/2}\left[x-\frac{\hbar q}m\tau \right]. \label{5} \end{equation} In Eqs. (\ref{4}) and (\ref{5}) $q$ stands either for $k$ or $k_n$, where the index $n$ refers to a given complex pole. Poles are located on the third and fourth quadrants of the complex $k$-plane. The solution for the free case with the reflecting initial condition is $\psi _s^0(x,k,\tau )=M(x,k,\tau )-M(x,-k,\tau )$. From the analysis given in Ref.\ \cite{gcr97} one can see that the above exact solutions satisfy the corresponding initial conditions, i.e., they vanish exactly for $x>0$. It is also shown in Ref.\ \cite{gcr97} that at very long times the terms $M(x,k_n,\tau )$ that appear in the above equations vanish. The same occurs for $M(x,-k,\tau )$, while, as first shown in Ref. \cite{mm}, $M(x,k,\tau )$ tends to the stationary solution. Hence, at long times, each of the above exact solutions goes into the corresponding stationary solution, namely, along the internal region $\psi (x,\tau)=\phi (x,k){\rm exp}(-iE\tau /\hbar )$ and along the external region $\psi (x,\tau )=T(k){\rm exp}(ikx) {\rm exp}(-iE\tau /\hbar )$. Note that at early times and short distances there is a competition between the contribution of the free-type terms ($M$ functions depending on $k$) and the pole terms ($M$ functions depending on either $k_n$ or $k_{-n}$) in Eqs. (\ref{3b}) and (\ref{3c}). As exemplified below, depending on the potential parameters one may have the predominance of one or the other type of terms. Note also that the initial state is not strictly monochromatic (it extends from $-\infty $ to $0$) and hence it has a distribution of components around $k$ in momentum space. One could construct an initial cutoff wavepacket as a linear combination of cutoff waves. However, since we compare below with definitions of tunneling times involving plane waves, wavepackets will not be considered here. Besides, in general they involve non-negligible momentum components above the barrier potential and hence obscure the dynamics of tunneling. In order to apply the above ideas, we consider a model that has been used extensively for the tunneling time problem, namely, the rectangular barrier potential, characterized by a height $V_0$ in the region $0\leq x\leq L$. The shutter is located at $x=0$. In order to calculate Eqs.\ (\ref{3b}) and (\ref{3c}) for the initial condition (\ref{2a}), in addition to the parameters $V_0$, $L$, and the incident energy $E=\hbar ^2k^2/2m$, we need to determine the complex poles $\{k_n\}$ and resonant states $\{u_n(x)\}$. It is well known that for a finite range potential there is an infinite number of poles. The $S-$matrix poles for the rectangular barrier potential may be obtained from the corresponding transmission amplitude $T(k)=4kq{\rm exp}(-ikL)/J(k)$, where $q=[k^2-k_0^2]^{1/2}$ with $k_0^2=2mV_0/\hbar ^2$.
They correspond to the zeros of $J(k)$ in the $k-$plane, namely, \begin{equation} J(k)=(q+k)^2{\rm exp}(-iqL)-(q-k)^2{\rm exp}(iqL)=0. \label{6} \end{equation} We follow a well-established method to obtain the solutions to the above equation\cite{gcr97,nussenzveig}. The resonant states of the problem satisfy the time-independent Schr\"{o}dinger equation with outgoing boundary conditions\cite{gcr97}. They read \begin{equation} u_n(x)=C_n\left[ {\rm e}^{iq_nx}+b_n{\rm e}^{-iq_nx}\right] ,\,\,\,(0\leq x\leq L) \label{7} \end{equation} where $b_n=(q_n+k_n)/(q_n-k_n)$ and $C_n$ may be obtained from the normalization condition\cite{gcr97}, \begin{equation} \int_0^Lu_n^2(x)dx+i{\frac{u_n^2(0)+u_n^2(L)}{2k_n}}=1. \label{7a} \end{equation} Note that both the complex poles and the resonant states are a function of $V_0$ and $L$ and hence are a property of the system. To exemplify the time evolution of the probability density we consider the set of parameters $V_0=0.711\,eV$, $L=10\,nm$, $E=0.1422\,eV$, and $m^{*}=0.067\,m_e$, inspired by semiconductor quantum structures\cite{qs}. Our choice of parameters guarantees that most momentum components of the initial state tunnel through the potential. Figure \ref{fig1} shows a plot of $|\psi(L,\tau )|^2$, calculated at the barrier edge $x=L$, as a function of time in units of the free passage time $\tau_f=mL/(\hbar k)=11.56\,fs$. We have used Eq.\ (\ref{3c}), though Eq.\ (\ref{3b}) yields the same result. The time-dependent solution is normalized by $|T(k)|^2=5.332\times 10^{-9}$. One sees that as soon as $\tau \neq 0$ the probability density starts to grow. As discussed elsewhere\cite{gcrv99}, this is due to the non-relativistic character of the description. Einstein causality may be fulfilled by cutting off the contributions to the probability density at times smaller than $\tau _0=L/c$. In our example $\tau_0=0.033\,fs$, or $\tau_0/\tau_f =0.0028$, too small to be appreciated in Fig. \ref{fig1}. At early times one sees a time domain resonance structure. Thereafter the probability density approaches essentially its asymptotic value. We found that the resonant sum is the relevant contribution to the time domain resonance, since that of the free-type term is quite small and varies smoothly with time. In the transmitted region, $x>L$, not shown here, the time domain resonance becomes a propagating structure, as follows from Eq.\ (\ref{3c}). The time domain resonance corresponds to a transient effect, and as it propagates through the transmitted region it becomes smaller and smaller. Asymptotically, at large distances and times, it becomes very small while the free-type term becomes the dominant contribution, with its wavefront propagating with velocity $v=\hbar k/m$. Calculations using the absorbing initial condition exhibit a similar time domain resonance. Hence a linear combination of reflecting and absorbing initial conditions should also exhibit it. The time domain resonance represents a distribution of traversal times. The corresponding peak represents the largest probability of finding the tunneling particle at the barrier edge $x=L$. In our example, as shown in Fig. \ref{fig1}, the time domain resonance peaks at $\tau _p=5.326\,fs$, faster than the free passage time across the same distance of $10$ $nm$, that is, $\tau _p/\tau _f=0.46$. Note that the distribution is quite asymmetric. Although the first resonance term of the solution provides the main contribution, convergence of the series usually requires summing up to $100$ terms.
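As a purely illustrative numerical aside (not part of the original calculation; the scaling $m=\hbar=1$ is our own choice), the $M$ functions of Eqs.~(\ref{4}) and (\ref{5}) can be evaluated stably by noting that ${\rm e}^{y^2}{\rm erfc}(y)$ equals the Faddeeva function $w(iy)$:
\begin{verbatim}
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z) = exp(-z^2) erfc(-i z)

def moshinsky_M(x, q, tau, m=1.0, hbar=1.0):
    """M(x, q, tau) of Eqs. (4)-(5); q may be complex (resonance poles k_n).
    exp(y^2)*erfc(y) is computed as wofz(1j*y) to avoid overflow."""
    y = (np.exp(-1j * np.pi / 4) * np.sqrt(m / (2 * hbar * tau))
         * (x - hbar * q * tau / m))
    return 0.5 * np.exp(1j * m * x**2 / (2 * hbar * tau)) * wofz(1j * y)
\end{verbatim}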
The inset displays the probability density from $\tau /\tau _f=2$ up to $\tau /\tau _f=20$. One sees a small structure around $\tau /\tau _f=3$ and then the probability density decreases very fast towards unity, the stationary regime. The main range of traversal times occurs around the peak value $\tau_p$. We define the width of the distribution, $\Delta \tau$, by the rule of the half-width at half-maximum. This yields $\Delta \tau = 13.48\,fs$, or $\Delta \tau/\tau_f = 1.16$. The resonance is broad, since $\Delta \tau \approx 2 \tau_p$. We have found that for fixed $V_0$ and $E$, and a decreasing $L$, the width diminishes. The same occurs for fixed $E$ and $L$, and an increasing $V_0$. Systematically, however, $\Delta \tau > \tau_p$. For the sake of comparison, the arrows in Fig. \ref{fig1} indicate the values calculated for a number of definitions of tunneling times existing in the literature for the rectangular barrier potential\cite{buttiker}, such as the Larmor time of Ba\'{z} and Rybachenko, $\tau _{LM}$; the semi-classical or B\"{u}ttiker-Landauer time, $\tau _{BL}$; the B\"{u}ttiker traversal time, $\tau _B$; and the phase-delay time, $\tau_D$ \cite{eqs}. All of them fall within the broad range of values given by $\Delta \tau$. Note, however, that the B\"{u}ttiker traversal time $\tau _B$ is the closest to $\tau_p$. As shown below, we have found this situation extensively in our numerical calculations. Also, since the barrier is opaque, $\tau_B$ is close to $\tau_{BL}$. We refer briefly to approaches to the tunneling time problem based on the Feynman path integral method\cite{sokolovski,fertig,yamada}. For plane waves and a rectangular barrier potential, Fertig\cite{fertig} has derived an expression, $C(\tau)$, that gives the probability amplitude that a particle remains a time $\tau $ in a region (Eq. (3) of Ref. \cite{fertig}). Recently Yamada\cite{yamada} has plotted $G(\tau )=|C(\tau )|^2$ versus $\tau $ (Fig. 2 of Ref.\cite{yamada}). His parameters are the same as in our Fig. \ref{fig1}, {\it i.e.}, $V_0/E=5$ and $kL=5$. Our calculation for $|\psi(L,\tau )|^2$ resembles the average shape of $G(\tau )$, provided its noisy behavior is ignored. Note, however, that the meaning of the two quantities is different. As indicated by Yamada, $G(\tau )$ refers to a `residence time'\cite{residence}, whereas our approach corresponds to a `passage' or traversal time\cite{passage}. In Fig. \ref{fig2} we plot $\tau _p$ (solid squares) for different values of the opacity $\alpha =k_0L$, with $k_0=[2mV_0]^{1/2}/\hbar $. Keeping $V_0$ fixed and varying $L$ defines $\alpha (L)$. We can then identify two regimes: one in the range $2\leq \alpha (L)\leq 5$, the tunneling regime, where $\tau _p$ remains almost constant as $\alpha (L)$ increases, and another, the opaque regime, with $\alpha (L)>5$, where we find that $\tau _p$ increases linearly. The first behaviour above is related to the first top-barrier S-matrix pole and the second one to the components of the incident wave that go above the barrier. There is still another regime, not shown in Fig. \ref{fig2}, where $\alpha (L)<1$, which corresponds to very shallow or very thin barriers, or both, and will not be considered here. There the free-type terms in Eq.\ (\ref{3c}) dominate over the resonant contribution. For comparison we plot the B\"{u}ttiker traversal time $\tau _B$ (hollow circles). We see that $\tau _B$ remains rather close to $\tau _p$. Note, however, that $\tau_B$ behaves linearly in the whole range.
This different qualitative behaviour as a function of $L$ between the two times deserves further study. The inset in Fig. \ref{fig2} exhibits a similar comparison for the opacity $\alpha (V_0)$, with $L$ fixed and varying $V_0$. Here we observe that $\tau _B$ remains quite close to $\tau _p$ in the whole range. Regarding the phase-delay time $\tau_D$, its predictions usually fall within the width $\Delta \tau$. For fixed $V_0$ and $E$, $\tau_D$ as a function of $L$ exhibits a qualitatively different behaviour from that of Fig. \ref{fig1} (see Fig. 5 in Ref. \cite{hartman}). To end, we stress that the largest probability of finding the tunneling particle at the barrier width, given by $\tau_p$, is sensitive to variations of both the barrier width $L$ and the height $V_0$, and also that the B\"uttiker traversal time is found to be very close to the value of $\tau_p$, though we find qualitative differences between them as a function of $L$. G. G-C. thanks M. Moshinsky for useful discussions and acknowledges support of DGAPA-UNAM under grant IN116398. We also acknowledge partial financial support of Conacyt under contract no. 431100-5-32082E. \begin{references} \bibitem{qs} E. E. Mendez, in {\it Physics and Applications of Quantum Wells and Superlattices}, edited by E. E. Mendez and K. Von Klitzing (Plenum, New York, 1987) p. 159. \bibitem{corral} M. F. Crommie, C. P. Lutz, and D. M. Eigler, Science {\bf 262}, 218 (1993). \bibitem{traversal} E. H. Hauge and J. A. Stovneng, Rev. Mod. Phys. {\bf 61}, 917 (1989); R. Landauer and Th. Martin, {\it ibid.} {\bf 66}, 217 (1994). \bibitem{hartman} T. E. Hartman, J. Appl. Phys. {\bf 33}, 3427 (1962). \bibitem{smith} F. Smith, Phys. Rev. {\bf 118}, 349 (1960). \bibitem{sokolovski} See for example: D. Sokolovski, S. Brouard, and J. N. L. Connor, Phys. Rev. A {\bf 50}, 1240 (1994). \bibitem{fertig} H. A. Fertig, Phys. Rev. Lett. {\bf 65}, 2321 (1990); Phys. Rev. B {\bf 47}, 1346 (1993). \bibitem{yamada} N. Yamada, Phys. Rev. Lett. {\bf 83}, 3350 (1999). \bibitem{collins} S. Collins, D. Lowe, and J. R. Barker, J. Phys. C {\bf 20}, 6213 (1987). \bibitem{muga} V. Delgado and J. G. Muga, Ann. Phys. (N.Y.) {\bf 248}, 122 (1996). \bibitem{mm} M. Moshinsky, Phys. Rev. {\bf 88}, 625 (1952). \bibitem{stevens83} K. W. H. Stevens, J. Phys. C {\bf 16}, 3649 (1983). \bibitem{morettipra92} P. Moretti, Phys. Rev. A {\bf 46}, 1233 (1992). \bibitem{muga96} S. Brouard and J. G. Muga, Phys. Rev. A {\bf 54}, 3055 (1996). \bibitem{foot} For the rectangular barrier, an expression analogous to Eq. (3) may be derived without using resonant states, G. Garc\'{\i }a-Calder\'{o}n, J. L. Mateos and M. Moshinsky (unpublished). \bibitem{gcr97} G. Garc\'{\i }a-Calder\'{o}n and A. Rubio, Phys. Rev. A {\bf 55}, 3361 (1997). \bibitem{exp} P. Szriftgiser, D. Gu\'{e}ry-Odelin, M. Arndt, and J. Dalibard, Phys. Rev. Lett. {\bf 77}, 4 (1996); Th. Hils, J. Felber, R. G\"{a}hler, W. Gl\"{a}ser, R. Golub, K. Habicht, and P. Wille, Phys. Rev. A {\bf 58}, 4784 (1998). \bibitem{gahler} R. G\"{a}hler and R. Golub, Z. Phys. B {\bf 56}, 5 (1984). \bibitem{gcp76} G. Garc\'{\i }a-Calder\'{o}n and R. E. Peierls, Nucl. Phys. A {\bf 265}, 443 (1976). \bibitem{gcrv99} G. Garc\'{\i }a-Calder\'{o}n, A. Rubio and J. Villavicencio, Phys. Rev. A {\bf 59}, 1758 (1999). \bibitem{nussenzveig} H. M. Nussenzveig, Nucl. Phys. {\bf 11}, 499 (1957). \bibitem{buttiker} M. B\"{u}ttiker, Phys. Rev. B {\bf 27}, 6178 (1983). \bibitem{eqs} See, respectively, Eqs. (1.4), (1.7), (3.12) and (3.2) of Ref. \cite{buttiker}.
\bibitem{residence} The `residence time' involves the integral of the probability density over the internal region of the interaction. This definition does not distinguish whether particles are finally transmitted or reflected, and hence it is not appropriate for describing traversal times. \bibitem{passage} This difference is relevant because Yamada has argued, using a `weak decoherence condition', that for `residence time'-type quantities a probability distribution of tunneling times is not definable (see Ref. \cite{yamada}). \end{references} \begin{figure} \caption{Plot of $|\psi(L,\tau)|^2$ at the barrier edge $x=L=10\,\mathrm{nm}$ as a function of time in units of the free passage time $\tau_f$. The inset shows $|\psi(L,\tau)|^2$ at larger times. The arrows indicate the values of the Larmor time LM, the semi-classical time BL, the B\"uttiker traversal time B, and the phase-delay time D. See text.} \label{fig1} \end{figure} \begin{figure} \caption{Plot of the exact time-domain resonance peak $\tau_p$ (solid squares) versus the opacity $\alpha(L)$ ($V_0$ fixed). For comparison we plot the B\"uttiker traversal time $\tau_B$ (hollow circles). The inset shows a similar calculation versus the opacity $\alpha (V_0)$ ($L$ fixed). See text.} \label{fig2} \end{figure} \end{document}
\begin{document} \title{A low complexity algorithm for non-monotonically evolving fronts\thanks{Submitted on Friday, September 12th, 2014.} \slugger{sisc}{xxxx}{xx}{x}{x--x} \begin{abstract} A new algorithm is proposed to describe the propagation of fronts advected in the normal direction with prescribed speed function $F$. The assumptions on $F$ are that it does not depend on the front itself, but can depend on space and time. Moreover, it can vanish and change sign. To solve this problem the Level-Set Method [Osher, Sethian; 1988] is widely used, and the Generalized Fast Marching Method [Carlini et al.\mbox{}; 2008] has recently been introduced. The novelty of our method is that its overall computational complexity is predicted to be comparable to that of the Fast Marching Method [Sethian; 1996], [Vladimirsky; 2006] in most instances. This latter algorithm is $\mathcal{O}(N^{n}\log N^{n})$ if the computational domain comprises $N^{n}$ points. Our strategy is to use it in regions where the speed is bounded away from zero, and to switch to a different formalism when $F \approx 0$. To this end, a collection of so-called \emph{sideways} partial differential equations is introduced. Their solutions locally describe the evolving front and depend on both space and time. The well-posedness of those equations and their geometric properties are addressed. We then propose a convergent and stable discretization of those PDEs. Those alternative representations are used to augment the standard Fast Marching Method. The resulting algorithm is presented together with a thorough discussion of its features. The accuracy of the scheme is tested when $F$ depends on both space and time. Each example yields an $\mathcal{O}(1/N)$ global truncation error. We conclude with a discussion of the advantages and limitations of our method. \end{abstract} \begin{keywords} front propagation, Hamilton-Jacobi equations, fast marching method, level-set method, optimal control, viscosity solutions. \end{keywords} \begin{AMS} 65M06, 65M22, 65H99, 65N06, 65N12, 65N22. \end{AMS} \pagestyle{myheadings} \thispagestyle{plain} \markboth{A. TCHENG, J.-C. NAVE}{A low complexity algorithm for evolving fronts} \section{Introduction} \label{sec:Introduction} The design of robust numerical schemes describing front propagation has been a subject of active research for several decades. The need for such schemes is felt across many areas of applied sciences: geometric optics \cite{OsherTsai}, optimal control \cite{FalconeMin,TakeiTsai}, lithography \cite{AdalSethian1,AdalSethian2,AdalSethian3}, shape recognition \cite{ShapeRecog1,ShapeFromShading}, dendritic growth \cite{Dendritic1,Dendritic2}, gas and fluid dynamics \cite{TriplePoint,GasDynamics,LevelSetFluids}, combustion \cite{Combustion}, etc. Depending on the problem at hand, various issues may arise. Consider the following two interface propagation phenomena: a fire propagating through a forest, and a large evolving population of bacteria in a Petri dish. In either case, space can be divided into distinct regions: burnt vs.~unburnt, and populated vs.~unpopulated. The boundaries between those regions form fronts that evolve in time. Those examples differ from one another in that a fire front can only propagate \emph{monotonically}, whereas bacteria may advance or recede, depending on the stimuli present in their environment. This distinction led to different approaches when modelling those evolutions.
Monotone propagation can be recast into a `static' problem, as opposed to non-monotone evolution, which is intrinsically time-dependent. As a result, efficient single-pass algorithms for monotone propagation have been developed. In contrast, accurate algorithms for non-monotonically evolving fronts require a larger number of computations. In this paper, we propose a model that reconciles the advantages of previous methods: we accurately describe non-monotone front evolution with an algorithm that performs a low number of operations. One of the early means of accurately propagating fronts was to use the Level-Set Method (LSM) \cite{OsherSethian}. This implicit approach embeds the front as the zero-level-set of an auxiliary function $\phi$. In the above example, $\phi$ could be negative in regions occupied by bacteria, and positive in other regions. Each contour of this level-set function is then evolved under the given speed function $F$, which guarantees that the front itself moves properly. The robustness and simplicity of the first-order discretization of this problem made it popular. Additionally, this approach can handle a very wide class of speed functions, including those that change sign. However, describing the evolution of an $(n-1)$-dimensional front in $\mathbb{R}^{n}$ requires solving for a function of $n+1$ variables, since $\phi$ depends on space as well as time. Moreover, in order for the solution to remain accurate, it is often desirable to enforce the signed distance property $|\nabla \phi | \approx 1$ in a neighbourhood of the front. There exists a vast literature on lowering the computational complexity of the LSM, cf.\mbox{} \cite{AdalSethian,ReInit1,sethian1999level,OsherFedkiw}, and on maintaining the accuracy of the solution, cf.\mbox{} \cite{TsaiRedistancing,chopp2001some,ReInit1,SussmanFatemi,Reinit2}. Nevertheless, those features are incorporated at the expense of the simplicity and the efficiency of the original LSM. The Fast Marching Method (FMM) \cite{Sethian,Tsitsi} constitutes the second significant advance in the field. This approach requires the speed function to be bounded away from zero, and to be only space-dependent. Under those conditions, the FMM builds the `first arrival time' function $\psi$ such that to every point $\vec{\mathbf{x}}$ in space is associated the value $t=\psi(\vec{\mathbf{x}})$ at which the front reaches $\vec{\mathbf{x}}$, cf.\mbox{} \cite{Sethian,SethianFMM,SethianBookVariational,sethian1999level}. In the context of fire propagation, $\psi$ records the time at which each parcel of land burns. The use of a Dijkstra-like data structure \cite{Dijkstra} renders this scheme very efficient. A variant of this algorithm known as the Fast Sweeping Method runs in $\mathcal{O}(N^{n})$ complexity \cite{zhao2005fast} when the computational domain comprises $N^{n}$ points. Recently, Carlini et al. \cite{GFMM} proposed a Generalized FMM (GFMM) that is able to handle vanishing speeds. This algorithm is supported by theoretical results on its convergence in the class of viscosity solutions. The examples presented are found to accurately propagate the fronts subject to a wide range of speed functions. However, when $F$ depends on time, the GFMM no longer makes use of a Dijkstra-like data structure. Its overall complexity is expected to revert to that of the LSM in such instances. In light of this previous work, it is desirable to design an algorithm able to handle speed functions that change sign, while retaining the efficiency of the FMM.
This is the main purpose of this article. Note that if $F$ changes sign, a point $\vec{\mathbf{x}}$ in space may be reached by the front several times. This implies that the arrival time can no longer be described as a function depending solely on space. However, it is still possible to locally describe it as the graph of a function. Consider the set $\mathcal{M} := \{ (\vec{\mathbf{x}},t) : \vec{\mathbf{x}} $ belongs to the front at time $t \}$. The set $\mathcal{M}$ consists of the surface traced out by the fronts as they evolve through space and time. If $\mathcal{M}$ embeds as a $C^{k}$-manifold of dimension $n$ in $\mathbb{R}^{n} \times (0,T)$, then by definition, each point $(\vec{\mathbf{x}},t)\in \mathcal{M}$ belongs to a neighbourhood that is locally the image of a $C^{k}$-function of $n$ variables. The fact that under mild assumptions $\mathcal{M}$ is a compact subset of $\mathbb{R}^{n} \times (0,T)$ guarantees that we only need a finite number of neighbourhoods to cover $\mathcal{M}$, or equivalently, a finite number of functions to parametrize $\mathcal{M}$. The images of those functions -- which possibly depend on time as well as space -- provide local representations of the set $\mathcal{M}$. Our approach makes use of those other representations whenever the purely spatial one is not available -- e.g.,~ when $n=2$ and $\mathcal{M}$ cannot be locally described by the standard first arrival time function $\{ t = \psi(x,y) \}$, we may describe it as $\{ x = \tilde{\psi}(y,t) \}$ or $\{ y = \bar{\psi}(x,t) \}$. To this end, we introduce \emph{sideways} PDEs solved by those $C^{k}$-functions. We illustrate in detail how they relate to previous work, argue that they are well-posed, and show that their solution does provide a local description of $\mathcal{M}$. Moreover, we provide a scheme to discretize them, prove that it converges to the correct viscosity solution, and show that it is stable. In practice, the proposed algorithm amounts to augmenting the FMM to be able to describe $\mathcal{M}$ near those points $(\vec{\mathbf{x}},t)$ where $F(\vec{\mathbf{x}},t)=0$. The fact that different representations are used to build different parts of $\mathcal{M}$ implies that those pieces need to be woven together along their overlapping parts, to form a single codimension one subset of $\mathbb{R}^{n} \times (0,T)$. This is done by storing the $(n+1)$-dimensional normal associated to each point and by using interpolation. To illustrate the overall method, examples are presented where an $\mathcal{O}(1/N)$ global truncation error is achieved. Those tests all feature speed functions that vanish, and possibly depend on time. Since the algorithm always approximates a function of $n$-variables, the dimensionality of the problem is never raised, unlike what happens in the LSM. As a result, the computational complexity is expected to be comparable to that of the FMM. \paragraph{Outline of the article} This paper is organized as follows. We state the problem we are addressing in \S \ref{sec:Preliminaries}. We also present the LSM and the FMM, before providing a simple example to motivate our method. The case where $F$ is bounded away from zero and depends on time is addressed in \S \ref{subsec:tFMM}. The \emph{sideways} PDEs we use in regions where $F \approx 0$ are introduced in \S \ref{subsec:FrontFunction}. A discussion of their properties is provided along with a convergent and stable scheme to discretize them. 
We explain how the different formalisms can be woven into a single method in \S \ref{sec:Weaving}. The pseudo-codes are given and discussed in \S \ref{sec:AlgoDiscussion}. We predict the complexity and accuracy of the overall method in \S \ref{sec:Complexity} and \S \ref{sec:Accuracy}. Four examples are then covered in detail in \S \ref{sec:Accuracy}. Those assess the global behaviour and the accuracy of the scheme. The advantages and weaknesses of our approach are discussed in \S \ref{sec:Discussion}, where an additional example is covered to address the limitations of the method. We conclude in \S \ref{sec:Conclusions}. \section{Preliminaries} \label{sec:Preliminaries} \subsection{Problem statement} \label{subsec:Statement} Let $\mathcal{C}_{0} \subset \mathbb{R}^{n}$ be a closed subset with no boundary. Assume it is an orientable manifold of codimension one, with a well-defined unique outer normal $\hat{\mathfrak{n}}_{0} (\vec{\mathbf{x}})$. Suppose $\mathcal{C}_{0}$ is advected in time, and denote the resulting subset of $\mathbb{R}^{n}$ at time $t$ by $\mathcal{C}_{t}$. We want to describe $\mathcal{C}_{t}$ for $0<t<T$ in the case where each point $\vec{\mathbf{x}} \in \mathcal{C}_{t}$ is advected under the velocity \begin{eqnarray} \vec{v} = \vec{v}(\vec{\mathbf{x}},t) = F(\vec{\mathbf{x}},t) \hat{\mathfrak{n}}(\vec{\mathbf{x}},t) \end{eqnarray} i.e., with the prescribed speed function $F=F(\vec{\mathbf{x}},t)$, in the direction of the outward normal to $\mathcal{C}_{t}$, $\hat{\mathfrak{n}} = \hat{\mathfrak{n}}(\vec{\mathbf{x}},t)$. \subsection{Assumptions} \label{subsec:Assumptions} In addition to the assumptions already stated, in the rest of this paper we assume that the following hold. The initial set $\mathcal{C}_{0}$ is known exactly, and is assumed to be $C^{2}$ in the sense that if it is given as the image of a map, e.g.,~ $\vec{\gamma}: S^{n-1} \longrightarrow \mathbb{R}^{n}$, then $\vec{\gamma} \in C^{2}(S^{n-1})$. The speed $F=F(\vec{\mathbf{x}},t)$ is known exactly for all $(\vec{\mathbf{x}},t)$. Unless otherwise specified, it is allowed to vanish and change sign. It does not depend on the curve itself, or any of its derivatives. For simplicity, we also make the following strong assumption: the map $F : \mathbb{R}^{n} \times (0,T) \longrightarrow \mathbb{R} $ is analytic. In particular, this implies that the subset defined as $\mathcal{F} := \{ (\vec{\mathbf{x}},t) : F(\vec{\mathbf{x}},t)=0 \}$ is closed and has codimension one in $\mathbb{R}^{n}\times [0,T]$. We let $K$ be the Lipschitz constant of $F$. Together, those assumptions guarantee that for any given $t\in (0,T)$, there exists a well-defined normal $\hat{\mathfrak{n}} = \hat{\mathfrak{n}}(\vec{\mathbf{x}},t)$ almost everywhere along $\mathcal{C}_{t}$. \subsection{Previous Work} \label{subsec:PreviousWork} For completeness we briefly go over two of the methods mentioned in the introduction. Considering that the set $\mathbb{R}^{n} \setminus \mathcal{C}_{t}$ consists of two connected components, we define $\mathcal{A}_{t}$ to be the bounded one. \subsubsection{The Level-Set Method} \label{subsubsec:LSM} This approach was introduced by Osher \& Sethian in \cite{OsherSethian}. Their idea is to embed the curve $\mathcal{C}_{t}$ as the zero-level-set of a function $\phi: \mathbb{R}^{n} \times [0,T] \rightarrow \mathbb{R}$, i.e., $\mathcal{C}_{t} = \{ \vec{\mathbf{x}} : \phi(\vec{\mathbf{x}},t) = 0 \}$.
In this setting, the outward normal $\hat{\mathfrak{n}}(\vec{\mathbf{x}},t)$ is $\frac{\nabla \phi}{|\nabla \phi|}$. The Level-Set Equation is derived from linear advection $\phi_{t} + \vec{v}(\vec{\mathbf{x}},t) \cdot \nabla \phi = 0$ to yield the following Initial Value Problem (IVP): \begin{eqnarray} \label{eq:LSE} \left\{ \begin{array}{rcll} \phi_{t}+F|\nabla\phi| &=& 0 & \quad \mathrm{on}~ \mathbb{R}^{n} \times (0,T) \\ \phi(\vec{\mathbf{x}},0) &=& \phi_{0}(\vec{\mathbf{x}}) & \quad \mathrm{on}~ \mathbb{R}^{n} \times \{ 0 \} \end{array} \right. \end{eqnarray} where $\phi_{0}(\vec{\mathbf{x}})$ is such that $\{ \vec{\mathbf{x}} : \phi_{0}(\vec{\mathbf{x}}) =0 \} = \mathcal{C}_{0}$. This method enjoys many desirable properties that have been studied in a variety of contexts \cite{EvansSpruck1,EvansSpruck2,EvansSpruck3,EvansSpruck4,OsherFedkiw,sethian1999level}. One of the most prominent is that topological changes are accurately handled, and do not require special treatment. In \cite{OsherSethian}, the authors propose various discretizations of this evolution on a spatial domain that comprises $N^{n}$ points. The resulting method has complexity $\mathcal{O}(N^{n})$ at each time step, due to the fact that all the contours of the level-set function are advected. To lower this high computational cost, it is possible to work only within a neighbourhood of the zero-level-set: This yields the Narrow Band LSM \cite{AdalSethian}. To be able to render the curve $\mathcal{C}_{t}$ accurately, it is desirable to preserve the signed distance property $|\nabla \phi| \approx 1$. To this end, the reinitialization method has been studied extensively \cite{ReInit1,Reinit2,SussmanFatemi,TsaiRedistancing}. Early versions of this method tend to displace the zero-level-set, yielding inaccuracies in the final $\mathcal{C}_{T}$. Moreover, they usually involve a large number of computations. \subsubsection{The Fast Marching Method} \label{subsubsec:FMM} The Fast Marching Method was independently proposed by Sethian \cite{Sethian} \& Tsitsiklis \cite{Tsitsi}. Strongly rooted in control theory, it requires that $F=F(\vec{\mathbf{x}}) \geq \delta >0$ on $\mathbb{R}^{n}$. Under those conditions, the FMM solves the following Eikonal equation, whose unknown is the time $\psi:\mathbb{R}^{n} \mapsto \mathbb{R}$ at which each point is reached by the curve \begin{eqnarray} \label{eq:EikonalNoTime} \left\{ \begin{array}{rcll} |\nabla\psi| &=& \frac{1}{F} & \quad \mathrm{on}~ \mathcal{A}^{c}_{0} \setminus \mathcal{C}_{0} \\ \psi(\vec{\mathbf{x}}) &=& 0 & \quad \mathrm{on}~ \mathcal{C}_{0} \end{array} \right. \end{eqnarray} The FMM makes use of a Narrow Band to advance the front in a manner that enforces the characteristic structure of the PDE into the solution. See \cite{Sethian,SethianFMM,SethianBookVariational,sethian1999level} and \cite{FalconeMin} for details. Recent improvements of this method include on the one hand the work of Zhao \cite{zhao2005fast}, who further lowered the complexity of the algorithm to develop the Fast Sweeping Method. On the other hand, Vladimirsky relaxed the restrictions on the speed by allowing it to be time-dependent. We discuss this latter method in \S \ref{subsec:tFMM}. \subsection{Motivation} \label{subsec:Motivation} We first present a simple example to motivate the need for an augmented FMM. 
Consider the initial curve $\mathcal{C}_{0} = \{ \vec{\mathbf{x}} : x^{2}+y^{2} = r_{0}^{2} \} \subset \mathbb{R}^{2}$ and the time-dependent speed $F(t) = 1-ct$, where $c$ and $r_{0}$ are positive constants. Let $\phi_{0}(\vec{\mathbf{x}})$ be the signed distance function $ \phi_{0}(\vec{\mathbf{x}}) = \sqrt{x^{2}+y^{2}}- r_{0} =: r(\vec{\mathbf{x}}) - r_{0}$. The exact solution to the IVP (\ref{eq:LSE}) is then $\phi(x,y,t) = r(\vec{\mathbf{x}}) -\left( r_{0} - \left( c \, t^{2}/2-t\right) \right)$. The evolution of the curve can be formally split into two parts: \textbf{(1)} For $t \in [0,\frac{1}{c}]$, the circle expands until it reaches the maximal radius $R = r_{0}+\frac{1}{2c}$. \textbf{(2)} For $t \in (\frac{1}{c}, T]$, where $T=\frac{1}{c}\left( 1 + \sqrt{1+2cr_{0}}\right)$, the circle contracts until it collapses to the point $(0,0)$ at time $T$. \begin{figure} \caption{\textsc{Chart decomposition of $\mathcal{M}$.}} \label{fig:RugbyDraft} \end{figure} Consider the following atlas $\mathscr{A}$ to describe the resulting $C^{0}$-manifold $\mathcal{M}$ featured in Figure \ref{fig:RugbyDraft}. Let $\mathcal{U}:=\mathbb{R} \times [0,T]$. Then $\mathscr{A}=\cup^{3}_{i=1} \{ ( \psi_{i,\pm}, \mathcal{W}_{i,\pm}) \}$ where the real-valued functions $ \psi_{i,\pm}$ are defined as: \begin{eqnarray} \begin{array}{ll} \psi_{1,-}: \mathcal{U} \longrightarrow [- R,0] & \qquad \psi_{1,+}: \mathcal{U} \longrightarrow [0,R] \\ \psi_{2,-}: \mathcal{U} \longrightarrow [- R,0] & \qquad \psi_{2,+}: \mathcal{U} \longrightarrow [0,R] \\ \psi_{3,-}: \mathbb{R}^{2} \longrightarrow [0,\frac{1}{c}) & \qquad \psi_{3,+}: \mathbb{R}^{2} \longrightarrow (\frac{1}{c},T] \end{array} \end{eqnarray} and \begin{eqnarray} \psi_{1,\pm}(y,t) &=& \pm \sqrt{\left( r_{0}- c\, t^{2}/2+t \right)^{2} - y^{2}} \\ \psi_{2,\pm}(x,t) &=& \pm \sqrt{\left( r_{0}- c\, t^{2}/2+t \right)^{2} - x^{2}} \\ \psi_{3,\pm}(x,y) &=& \frac{1}{c}\left( 1 \pm \sqrt{1-2c(r(\vec{\mathbf{x}}) -r_{0})}\right) \end{eqnarray} We also define the sets $\mathcal{W}_{i,\pm}$ as the real part of the image of the functions $\psi_{i,\pm}$. Those sets are featured in Figure \ref{fig:RugbyDraft}. The functions $\psi_{3,\pm}$ can be verified to be the unique classical solutions to: \begin{eqnarray} \left\{ \begin{array}{rcll} |\nabla \psi_{3,-}(\vec{\mathbf{x}})| &=& \frac{1}{F(\psi_{3,-}(\vec{\mathbf{x}}))} & \quad \mathrm{on}~ \mathcal{U}_{3,-} \\ \psi_{3,-}(\vec{\mathbf{x}}) &=& 0 & \quad \mathrm{on}~ \mathcal{C}_{0} \end{array} \right. \end{eqnarray} \begin{eqnarray} \left\{ \begin{array}{rcll} |\nabla \psi_{3,+}(\vec{\mathbf{x}})| &=& - \frac{1}{F(\psi_{3,+}(\vec{\mathbf{x}}))} & \quad \mathrm{on}~ \mathcal{U}_{3,+} \\ \psi_{3,+}(\vec{\mathbf{x}}) &=& \frac{1}{c} & \quad \mathrm{on}~ \mathcal{C}_{1/c} \end{array} \right. \end{eqnarray} where $\mathcal{U}_{3,-} = \{ \vec{\mathbf{x}} : r_{0}< r(\vec{\mathbf{x}}) < R \}$ and $\mathcal{U}_{3,+} = \{ \vec{\mathbf{x}} : 0 \leq r(\vec{\mathbf{x}}) < R \}$. Together, the graphs of $\psi_{3,-}$ and $\psi_{3,+}$ describe all of $\mathcal{M}$ but the circle of radius $R$ reached at time $t=\frac{1}{c}$. On the other hand this circle lies in the union of the images of $\psi_{1,\pm}$ and $\psi_{2,\pm}$. Those functions are the unique classical solutions to \begin{eqnarray} \left\{ \begin{array}{cl} \mp (\psi_{1,\pm})_{t} + F(t) \sqrt{1+(\psi_{1,\pm})^{2}_{y}} = 0 & ~ \mathrm{on}~ \mathbb{R} \times (0,T] \\ \psi_{1,\pm}(y,0) = \pm \sqrt{r^{2}_{0}-y^{2}} & ~ \mathrm{on}~ \mathbb{R} \times \{ 0 \} \end{array} \right.
\end{eqnarray} \begin{eqnarray} \left\{ \begin{array}{cl} \mp (\psi_{2,\pm})_{t} + F(t) \sqrt{(\psi_{2,\pm})^{2}_{x}+1} = 0 & ~ \mathrm{on}~ \mathbb{R} \times (0,T] \\ \psi_{2,\pm}(x,0) = \pm \sqrt{r^{2}_{0}-x^{2}} & ~ \mathrm{on}~ \mathbb{R} \times \{ 0 \} \end{array} \right. \end{eqnarray} This suggests the following procedure to build $\mathcal{M}$: \textbf{(1)} First, solve for $\psi_{3,-}$. \textbf{(Inter.)} Then solve for $\psi_{1,\pm}$ and $\psi_{2,\pm}$ restricted to $[-R,R] \times [\frac{1}{c}-\epsilon,\frac{1}{c}+\epsilon]$ for some $\epsilon>0$. \textbf{(2)} Finally, solve for $\psi_{3,+}$. Some questions immediately come to mind. Criteria to decide when to move from (1) to the intermediate step must be chosen. Similarly, knowing which equation to solve within the intermediate step is a concern. The practical aspects of how a code reconciles the results of those steps need to be addressed carefully. We discuss all of these issues, and, as a result, turn the above formal idea into an efficient algorithm that constructs $\mathcal{M}$. \subsection{Notation} \label{subsec:NotationI} To lighten the notation, we will now work in the setting where $n=2$. All the results discussed extend to arbitrary $n$. \paragraph{Continuous setting} We use the letter $\psi$ to denote functions whose image locally describes $\mathcal{M}$. Suppose $\psi : \mathcal{U} \rightarrow \mathbb{R}$ with $\psi : (y,t) \mapsto \psi(y,t)=x$. We introduce the following subset of $\mathbb{R}^{2}$: \begin{eqnarray} \Gamma_{t} &:=& \{ (x,y) \in \mathbb{R}^{2} : \psi(y,t)=x, (y,t) \in \mathcal{U} \} \end{eqnarray} See Figure \ref{fig:NotationPartI} for an illustration. We distinguish between $\hat{\mathfrak{n}}(\vec{\mathbf{x}},t)$, the two-dimensional outward normal to $\mathcal{C}_{t}$ at $\vec{\mathbf{x}}$, and $\hat{n}(\vec{\mathbf{x}},t)$, the three-dimensional outward normal to $\mathcal{M}$ at $(\vec{\mathbf{x}},t)$. \paragraph{Discrete setting} The spatial grids have fixed meshsize $\Delta x = \Delta y =: h$. We use \begin{eqnarray} x_{i}=i \cdot h \quad y_{j}=j \cdot h \quad t^{k}=k \cdot \Delta t \qquad (i,j,k) \in \mathbb{Z}\times \mathbb{Z} \times \left( \mathbb{N}\cup \{0 \} \right) \end{eqnarray} to denote discrete values of space and time. We usually make no distinction between the continuous functions $\psi$ and their discrete approximations, except in \S \ref{subsec:FrontFunction}. We will be using indices consistently, so that $\psi_{ij}$ can be understood as $\psi(x_{i},y_{j})$ and $\psi^{k}_{i}$ as $\psi(x_{i},t^{k})$. Nevertheless, we will explicitly mention which representation is used. If a point $p$ belongs to $\mathcal{M}$, then it may be described by one or more of the following three expressions: \begin{eqnarray} p^{k}_{j} = (\psi^{k}_{j},y_{j},t^{k}) \qquad p^{k}_{i} = (x_{i},\psi^{k}_{i},t^{k}) \qquad p_{ij} = (x_{i},y_{j},\psi_{ij}) \end{eqnarray} \begin{figure} \caption{The subset $\Gamma_{0.55}$.} \label{fig:NotationPartI} \end{figure} \section{A FMM for time-dependent speeds: The $t$-FMM} \label{subsec:tFMM} We first address the problem stated in \S \ref{subsec:Statement} under the following restriction: \begin{eqnarray} F=F(\vec{\mathbf{x}},t)\geq \delta >0 \qquad \quad \forall ~ (\vec{\mathbf{x}},t) \in \mathbb{R}^{2} \times [0, T] \end{eqnarray} Allowing the speed to depend on time yields a non-autonomous control problem. In \cite{Vlad}, the author studies this min-time-from-the-boundary problem in the context of anisotropic front propagation.
In our context, the main result of \cite{Vlad} may be formulated as follows: The value function $\psi$ for this control problem satisfies the following Hamilton-Jacobi-Bellman equation: \begin{eqnarray} H(\nabla \psi, \psi, \vec{\mathbf{x}}) := || \nabla \psi(\vec{\mathbf{x}}) || F\left( \vec{\mathbf{x}}, \psi(\vec{\mathbf{x}}) \right) = 1 \end{eqnarray} The implementation of the resulting boundary-value problem: \begin{eqnarray} \left\{ \begin{array}{rcll} || \nabla \psi(\vec{\mathbf{x}}) || &=& \frac{1}{ F\left( \vec{\mathbf{x}} , \psi(\vec{\mathbf{x}}) \right) } \leq \frac{1}{\delta} & \qquad \mathrm{on}~ \mathcal{A}^{c}_{0} \setminus \mathcal{C}_{0} \\ \psi(\vec{\mathbf{x}}) &=& 0 & \qquad \mathrm{on}~ \mathcal{C}_{0} \end{array} \right. \end{eqnarray} closely mimics that of the classical FMM. The only step that requires modifications is the one where a tentative value is assigned to each point in the Narrow Band. Following \cite{Vlad}, this step is adjusted as follows. Let $\vec{\mathbf{x}}_{ij}=(x_{i},y_{j})$. Without loss of generality, assume that $\vec{\mathbf{x}}_{i-1,j}$ and $\vec{\mathbf{x}}_{i,j+1}$ are Accepted neighbours of $\vec{\mathbf{x}}_{ij}$. Consider a straight line lying in Quadrant II and ending at $\vec{\mathbf{x}}_{ij}$, and suppose it intersects the line joining $\vec{\mathbf{x}}_{i-1,j}$ and $\vec{\mathbf{x}}_{i,j+1}$ at the point $\tilde{\mathbf{x}}$. See Figure \ref{fig:Quadrant}. Then: $\tilde{\mathbf{x}} = \xi \vec{\mathbf{x}}_{i-1,j}+(1-\xi)\vec{\mathbf{x}}_{i,j+1}$ for some $\xi \in [0,1]$. Letting $\vec{v} = \vec{\mathbf{x}}_{ij}-\tilde{\mathbf{x}}$, we get $|\vec{v}| = \sqrt{\xi^{2}+(1-\xi)^{2}}~h$. Associate the following value to Quadrant II: \begin{eqnarray} \label{eq:VladMinimization} \psi_{\mathrm{II}} = \min_{\xi\in [0,1]} \left\{ \psi(\tilde{\mathbf{x}}) + \sqrt{\xi^{2}+(1-\xi)^{2}}~ \frac{~h}{F(\vec{\mathbf{x}}_{ij},\psi(\tilde{\mathbf{x}}))} \right\} \end{eqnarray} Proceeding similarly in the other quadrants yields the values $\psi_{\mathrm{I}}$, $\psi_{\mathrm{III}}$ and $\psi_{\mathrm{IV}}$. The tentative value assigned to $\psi_{ij}$ is then $\psi_{ij} = \min \{ \psi_{\mathrm{I}}, ~ \psi_{\mathrm{II}},~ \psi_{\mathrm{III}},~ \psi_{\mathrm{IV}} \}$. Note that in two dimensions the minimization problem (\ref{eq:VladMinimization}) may be solved using a direct method; see Appendix \ref{app:tFMM}. This method converges to the correct viscosity solution, and is globally $1^{\mathrm{st}}$ order \cite{OUM1,OUM2,Vlad}. Its complexity is $\mathcal{O}(N^{n}\log N^{n})$. In subsequent sections of this paper, we will refer to this modified FMM as the `$t$-FMM'. The results presented in this section yield Algorithm \ref{AlgotFMM} given in \S \ref{sec:AlgoDiscussion}. Finally, in the general case $|F| \geq \delta > 0$, the PDE we wish to solve is $|| \nabla \psi(\vec{\mathbf{x}}) || \, |F\left( \vec{\mathbf{x}}, \psi(\vec{\mathbf{x}}) \right)| = 1$. \begin{figure} \caption{The case where the characteristic comes from Quadrant II.} \label{fig:Quadrant} \end{figure} \section{A local description of the evolving front: The sideways representation} \label{subsec:FrontFunction} An option to study the evolution of propagating curves or surfaces is to represent the front as a function that depends on time, e.g.,~ $y=Y(x,t)$ \cite{SethianFMM}. Although successful at describing the evolution locally, this approach fails to capture the global properties of the front. Nevertheless, we believe that this approach can be used near regions where $F$ vanishes.
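Before developing this local description, we pause to give a concrete, minimal sketch of the quadrant update (\ref{eq:VladMinimization}) of the $t$-FMM introduced in the previous section. In the sketch below (written in Python), the minimization over $\xi$ is carried out by a brute-force scan rather than by the direct method of Appendix \ref{app:tFMM}; the function name, the sampling resolution and the handling of non-Accepted neighbours are illustrative choices, not part of the method itself.
\begin{verbatim}
import numpy as np

def quadrant_value(psi_u, psi_v, x_ij, h, F, n_xi=101):
    # psi_u, psi_v : Accepted values at the two neighbours spanning the quadrant
    #                (e.g. psi_{i-1,j} and psi_{i,j+1} for Quadrant II)
    # x_ij         : spatial coordinates (x_i, y_j) of the point being updated
    # F            : speed function, called as F(x, y, t)
    best = np.inf
    for xi in np.linspace(0.0, 1.0, n_xi):
        psi_tilde = xi * psi_u + (1.0 - xi) * psi_v   # value interpolated at x_tilde
        if not np.isfinite(psi_tilde):
            continue                                  # a neighbour is not Accepted
        speed = abs(F(x_ij[0], x_ij[1], psi_tilde))
        if speed == 0.0:
            continue                                  # F ~ 0 is handled by the sideways PDEs
        best = min(best, psi_tilde
                   + np.sqrt(xi**2 + (1.0 - xi)**2) * h / speed)
    return best
\end{verbatim}
The tentative value $\psi_{ij}$ is then the minimum of the four quadrant values, as in Algorithm \ref{AlgotFMM}.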
\subsection{Heuristics} \label{subsec:Heuristics} We first present an argument in the smooth setting. Consider the solution $\phi$ to IVP (\ref{eq:LSE}). Suppose $\phi \in C^{1}(\vec{\mathbf{x}}_{0},t_{0})$ and $\phi(\vec{\mathbf{x}}_{0},t_{0})=0$. Assume furthermore that $\phi_{x}(\vec{\mathbf{x}}_{0},t_{0}) \neq 0$, so that the mapping is locally invertible. From the Implicit Function Theorem there exist open neighbourhoods $(\vec{\mathbf{x}}_{0},t_{0}) \in \mathcal{V}$ and $\mathcal{U} \subset \mathbb{R} \times [0,T]$, as well as a function \begin{eqnarray} \psi: \mathcal{U} \longrightarrow \mathbb{R} ~, \quad \psi: (y,t) \mapsto x=\psi(y,t) ~, \quad \psi \in C^{1}(\mathcal{U}) ~, \quad (\psi(y,t),y,t) \in \mathcal{V} \end{eqnarray} satisfying $\phi(\psi(y,t),y,t) = 0$ $\forall ~ (y,t) \in \mathcal{U}$. Taking full derivatives of $\phi$ with respect to $y$ and $t$, and using the fact that in $\mathcal{V}$, $\phi$ satisfies the LSE pointwise gives: \begin{eqnarray} (-\phi_{x}\psi_{t})+F\sqrt{\phi^{2}_{x} + (-\phi_{x}\psi_{y})^{2}} = 0 \quad \Longleftrightarrow \quad -\psi_{t} \pm F\sqrt{1 + \psi^{2}_{y}} = 0 \end{eqnarray} where $\phi_{x}$ and $F$ are evaluated at $(x,y,t)=(\psi(y,t),y,t)$. The sign used in the last equation depends on $\phi_{x} = \pm \sqrt{\phi^{2}_{x}}$. We let $a:= -\mathrm{sign}(\phi_{x}(\vec{\mathbf{x}}_{0},t_{0}))$. Now, let $\psi$ satisfy the following Initial Value Problem: \begin{eqnarray} \label{eq:Sideways} \left\{ \begin{array}{cl} \psi_{t} + a F(\psi,y,t) \sqrt{1+\psi^{2}_{y}} = 0 & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times (t_{0},T) \right)\\ \psi(y,t_{0}) = \psi_{0}(y) & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times \{ t_{0} \} \right) \end{array} \right. \end{eqnarray} where $\psi_{0}$ is chosen such that $\phi(\psi_{0}(y),y,t_{0})=0$. Then for all $t \in (t_{0},T)$ the set $\Gamma_{t}$ locally describes the curve at time $t$, i.e., $\Gamma_{t} = \mathcal{C}_{t}\cap \mathcal{V}$. We now investigate the case where $\mathcal{M}$ is merely $C^{0}$. For simplicity, we work with $t_{0}=0$. \paragraph{Remark} Applying the same argument assuming $\phi_{t}(\vec{\mathbf{x}}_{0},t_{0}) \neq 0$ allows one to formally relate the LSE to the Eikonal equation \cite{OsherSethian}: \begin{eqnarray} \phi_{t}+F(x,y,\psi) \sqrt{(-\phi_{t}\psi_{x})^2 + (-\phi_{t}\psi_{y})^2} = 0 \quad \Longleftrightarrow \quad ||\nabla \psi|| = \frac{-\mathrm{sign}({\phi}_{t})}{F(x,y,\psi)} \end{eqnarray} But since by the LSE we have $a:= -\mathrm{sign}(\phi_{t}) = \mathrm{sign}\left(F(x,y,\psi)\right)$, this simplifies to $||\nabla \psi|| = \frac{1}{|F(x,y,\psi)|}$. \subsection{Theory} Equation (\ref{eq:Sideways}) is a Cauchy problem of the form \begin{eqnarray} \label{eq:Cauchy} \left\{ \begin{array}{cl} \psi_{t} +H(y,t,\psi,\psi_{y}) = 0 & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times (0,T) \right) \\ \psi(y,0) = \psi_{0}(y) & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times \{ 0 \} \right) \end{array} \right. \end{eqnarray} where the Hamiltonian $H:\mathbb{R}^{1} \times (0,T) \times \mathbb{R} \times \mathbb{R}^{1} \rightarrow \mathbb{R}$ is defined as $H(y,t,\psi,\psi_{y}) = aF(\psi,y,t)\sqrt{1+\psi^{2}_{y}} $. The function $\psi_{0}$ is defined such that for all $y \in \mathcal{U} \cap \left( \mathbb{R} \times \{ 0 \} \right)$ we have $(\psi_{0}(y),y) \in \mathcal{C}_{0}$. 
We resort to the rich theory of viscosity solutions of Hamilton-Jacobi equations to study various properties of this problem \cite{bardi2008optimal, BarlesExistence, MR732102, UserGuideViscosity, CrandallLions, evans2010partial, koike2004beginner, souganidisExistence, subbotin1994generalized}. We first address the well-posedness of the PDE. It is a simple matter to verify that the assumptions on $H$ required to apply Theorem 1.1 in \cite{souganidis1985approximation} hold in our context.\footnote{With the exception of (H3) in \cite{souganidis1985approximation}. However, it may be modified to get $\gamma_{R,P} \in \mathbb{R}$ if $p\in B_{N}(0,P)$ for some $P>0$.} This yields \begin{theorem}[Existence \& Uniqueness] \label{thm:ExistenceUniqueness} There exists a unique viscosity solution $\psi$ to problem (\ref{eq:Cauchy}). \end{theorem} We next verify that (\ref{eq:Cauchy}) does have the geometric interpretation advertised in the previous section. \begin{theorem}[$\Gamma_{t}$ locally describes $\mathcal{C}_{t}$] The set $\Gamma_{t}$ enjoys the following property: $\Gamma_{t} = \mathcal{C}_{t} \cap \mathcal{V}$. \end{theorem} \begin{proof} Consider IVP (\ref{eq:LSE}) again: \begin{eqnarray} \left\{ \begin{array}{rcll} \phi_{t}+F|\nabla\phi| &=& 0 & \quad \mathrm{on}~ \mathbb{R}^{2} \times (0,T) \\ \phi(\vec{\mathbf{x}},0) &=& \phi_{0}(\vec{\mathbf{x}}) & \quad \mathrm{on}~ \mathbb{R}^{2} \times \{ 0 \} \end{array} \right. \end{eqnarray} Since it is known that $\vec{\mathbf{x}} \in \mathcal{C}_{t} \cap \mathcal{V}$ if and only if $\phi(\vec{\mathbf{x}},t)=0$, we may prove the theorem by showing that $\vec{\mathbf{x}} \in \Gamma_{t}$ if and only if $\phi(\vec{\mathbf{x}},t)=0$. \fbox{$\Longrightarrow$} We argue by contradiction. Suppose the set $\mathcal{T} = \{ T>t>0 : \exists \vec{\mathbf{x}} \in \Gamma_{t} \mathrm{~s.t.~} \phi(\vec{\mathbf{x}},t) \neq 0 \}$ is not empty and define $t^{\ast} = \inf \mathcal{T}$. Since $\phi$ is continuous, $\mathcal{T}$ is open and $t^{\ast} \not \in \mathcal{T}$. Therefore, for all $\vec{\mathbf{x}}^{\ast} \in \Gamma_{t^{\ast}}$, $\phi(\vec{\mathbf{x}}^{\ast},t^{\ast})=0$, but for any $\epsilon>0$ sufficiently small, there exists $\vec{\mathbf{x}}_{\epsilon} \in \Gamma_{t^{\ast}+\epsilon}$ such that $\phi(\vec{\mathbf{x}}_{\epsilon},t^{\ast}+\epsilon) \neq 0$. If $\mathcal{M}$ is differentiable at $(\vec{\mathbf{x}}^{\ast},t^{\ast})$, this contradicts the argument presented in \S \ref{subsec:Heuristics}: the Implicit Function Theorem guarantees that the set $\mathcal{V}$ is open. If $\mathcal{M}$ is not differentiable at $(\vec{\mathbf{x}}^{\ast},t^{\ast})$, then fix $\epsilon$ and for $\delta >0$ consider $\vec{\mathbf{x}}^{0} \in \Gamma_{t^{\ast}+\epsilon}$ such that $\|\vec{\mathbf{x}}_{\epsilon}-\vec{\mathbf{x}}^{0} \| \leq \delta$ and $\mathcal{M}$ is differentiable at $(\vec{\mathbf{x}}^{0},t^{\ast}+\epsilon)$. For any $\delta$, such a point can be found since for any $T>t^{\ast}+\epsilon>0$ the singularities of $\Gamma_{t^{\ast}+\epsilon}$ are sets of measure zero.\footnote{This follows directly from the fact that Problem (\ref{eq:Cauchy}) is a first order Hamilton-Jacobi equation.} Again, the Implicit Function Theorem guarantees that there is a neighbourhood $\tilde{\mathcal{V}}$ of $(\vec{\mathbf{x}}^{0},t^{\ast}+\epsilon)$ where $\phi(\vec{\mathbf{x}},t)=0$ for any $\vec{\mathbf{x}} \in \Gamma_{t} \cap \tilde{\mathcal{V}}$.
Considering the sequence $\delta_{n} = \frac{1}{n}$, $n \in \mathbb{N}$, and the corresponding sequence $\{ \vec{\mathbf{x}}^{n} \}^{\infty}_{n=1}$, we arrive at the conclusion that $\phi(\vec{\mathbf{x}}_{\epsilon},t^{\ast}+\epsilon) \neq 0$ contradicts the continuity of $\phi$. \fbox{$\Longleftarrow$} Assume that there exists $(\vec{\mathbf{x}},t) \in \mathcal{V}$ such that $\phi(\vec{\mathbf{x}},t)=0$, but there is no $y$ such that $\vec{\mathbf{x}}=(\psi(y,t),y) \in \Gamma_{t}$. We re-use the arguments given in the proof of \fbox{$\Longrightarrow$}: If $\mathcal{M}$ is differentiable at $(\vec{\mathbf{x}},t)$ then this contradicts the argument in \S \ref{subsec:Heuristics}. If $\mathcal{M}$ is not differentiable at $(\vec{\mathbf{x}},t)$, then we can find a sequence $\vec{\mathbf{x}}^{n} \in \Gamma_{t}$ converging to $\vec{\mathbf{x}}$ such that $\phi(\vec{\mathbf{x}}^{n},t)=0$, and obtain the contradiction that $\psi$ is not continuous. \qquad \end{proof} \subsection{Generalizations} \label{subsec:Generalizations} More generally, the above arguments can be applied to yield that there exist neighbourhoods $(\vec{\mathbf{x}}_{0},t_{0}) \in \mathcal{V}$ and $\mathcal{U} \subset \mathbb{R} \times [0,T]$, as well as a unique function $\psi: \mathcal{U} \longrightarrow \mathbb{R}$, $\psi: (z,t) \mapsto w=\psi(z,t)$ with $(w \cos(\theta)+z,w \sin(\theta)+z,t) \in \mathcal{V}$ satisfying \begin{eqnarray} \label{eq:Skewed} \left\{ \begin{array}{cl} \psi_{t} + a F \left( w \cos(\theta)+z,w \sin(\theta)+z,t \right) \sqrt{\psi^{2}_{z}+1} = 0 & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times (0,T) \right) \\ \psi(z,0) = \psi_{0}(z) & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times \{ 0 \} \right) \end{array} \right. \end{eqnarray} in the viscosity sense. Here $\theta$ is the polar angle of $\vec{\mathbf{x}}_{0}$, $a = -\mathrm{sign} ( \vec{\mathbf{x}}_{0} \cdot \hat{\mathfrak{n}}(\vec{\mathbf{x}}_{0},t_{0}))$ and $\psi_{0}$ is chosen such that for all $z \in \mathcal{U} \cap \left( \mathbb{R} \times \{ 0 \} \right)$, we have $(\psi_{0}(z) \cos(\theta)+z,\psi_{0}(z) \sin(\theta)+z) \in \mathcal{C}_{0}$. When $\theta = 0$ we recover Problem (\ref{eq:Sideways}), whereas when $\theta = \pi/2$, we get that $\psi: (x,t) \mapsto y=\psi(x,t)$ with $(x,\psi(x,t),t) \in \mathcal{V}$ satisfies: \begin{eqnarray} \label{xtproblem} \left\{ \begin{array}{cl} \psi_{t} + a F(x,\psi,t) \sqrt{\psi^{2}_{x}+1} = 0 & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times (0,T) \right) \\ \psi(x,0) = \psi_{0}(x) & ~ \mathrm{on}~ \mathcal{U} \cap \left( \mathbb{R} \times \{ 0 \} \right) \end{array} \right. \end{eqnarray} in the viscosity sense. In subsequent sections, we will refer to Problems (\ref{eq:Sideways}) and (\ref{xtproblem}) as the $yt$- and $xt$-representations of $\mathcal{M}$, whereas Problem (\ref{eq:Skewed}) will be the \emph{skewed} representation. Those problems provide \emph{sideways} representations of the evolving front. For clarity, remarks pertaining to those will usually be made for the special case of Problem (\ref{eq:Sideways}). \subsection{Discretization} \label{subsubsec:Discretization} Finite-difference schemes for problems such as (\ref{eq:Cauchy}) have been discussed in \cite{crandall1984two, CrandallTartar, souganidis1985approximation}. Based on these works, we propose the following discretization for Equation (\ref{eq:Sideways}). In this subsection only, we will distinguish between the continuous function $\psi$ and its discrete approximation, which we denote by $\chi$.
The spatial derivative $\chi_{y}$ must be computed in an upwind fashion. To this end, we introduce the one-sided operators \begin{eqnarray} D^{+}_{l}\chi^{r} := \frac{\chi^{r}_{l+1} - \chi^{r}_{l}}{h} \qquad \qquad D^{-}_{l}\chi^{r} := \frac{\chi^{r}_{l} - \chi^{r}_{l-1}}{h} \end{eqnarray} and suggest the scheme \begin{eqnarray} \chi^{r+1}_{l} = \chi^{r}_{l} - a \cdot \Delta t \cdot F(\chi^{r}_{l},y_{l},t^{r}) \cdot \sqrt{1+ \mathrm{upw}(\chi^{r},l,r,\alpha)} \end{eqnarray} where \begin{eqnarray} \mathrm{upw}(\chi^{r},l,r,\alpha) &:=& \max \{ \alpha,0 \} \left( \min \left\{ D^{+}_{l}\chi^{r},0 \right\}^{2} + \max \left\{ D^{-}_{l}\chi^{r},0 \right\}^{2} \right) \nonumber \\ &~& -\min \{ 0, \alpha \} \left( \max \left\{ D^{+}_{l}\chi^{r},0 \right\}^{2} + \min \left\{ D^{-}_{l}\chi^{r},0 \right\}^{2} \right) \end{eqnarray} The constant $\alpha$ acts as a switch and is defined as $\alpha = \mathrm{sign} \left( aF(\chi^{r}_{l},y_{l},t^{r} ) \right)$. \begin{proposition}{(Convergence.)} \label{claim:convergence} Let $M$ be defined as the local bound on $F$, i.e., $M^{r}_{l} = \sup_{(x,y,t) \in B(p^{r}_{l},2h)}\{ |F(x,y,t)| \}$, where $p^{r}_{l} = (\chi^{r}_{l},y_{l},t^{r})$. Assume that $\max \left\{ | D^{+}_{l}\chi^{r} |, | D^{-}_{l}\chi^{r} | \right\} \leq P$ for all $l \in L$ and $0\leq r \leq R$. Suppose $\Delta t$ satisfies \begin{eqnarray} \label{CFLbound} M^{r}_{l} \cdot \Delta t \leq \frac{h}{2P} \end{eqnarray} Then the above scheme is such that $\chi \rightarrow \psi$ as $h$ and $\Delta t \rightarrow 0$, with rate \begin{eqnarray} \| \chi - \psi \|_{\infty} \leq c \sqrt{\Delta t} \end{eqnarray} for all $l$, where the constant $c$ depends on $\| \psi_{0} \|$, $\| D\psi_{0} \|$, the numerical Hamiltonian $g$, and $R\Delta t$ where $0 \leq r \leq R$. \end{proposition} \begin{proof} We proceed by showing that the scheme is \textsl{monotone} and \textsl{consistent} in the sense of \cite{souganidis1985approximation}. The results then follow from Theorem 3.1 of that same paper. The scheme can be rewritten as \begin{eqnarray} \chi^{r+1}_{l} = \chi^{r}_{l} - \Delta t \cdot g \left(y_{l},t^{r},\chi^{r}_{l},D^{+}_{l}\chi^{r},D^{-}_{l}\chi^{r} \right) \end{eqnarray} where the numerical Hamiltonian $g$ is easily verified to be consistent, i.e., \begin{eqnarray} g \left(y,t, s, \delta , \delta \right) = H(y,t,s,\delta) \qquad \forall (y,t) \in \mathcal{U},~ s \in \mathbb{R},~|\delta|<P \end{eqnarray} We verify monotonicity by showing that the function \begin{eqnarray} G(\chi^{r}_{l-1},\chi^{r}_{l},\chi^{r}_{l+1}) = \chi^{r}_{l} - a \cdot \Delta t \cdot F(u,y_{l},t^{r}) \cdot \sqrt{1+ \mathrm{upw}(\chi^{r},l,r,\alpha)} \end{eqnarray} is a non-decreasing function of each of its arguments, for fixed $u$, $y_{l}$ and $t^{r}$. We only treat the case $\alpha >0$, since the other case is symmetric. Writing $F=F(u,y_{l},t^{r})$ for short gives \begin{eqnarray} G(b,c,d) = \left\{ \begin{array}{ll} c - a \Delta t ~ F \sqrt{1+\left( \frac{d-c}{h} \right)^{2}} & \mathrm{if}~ d-c<0, ~ c-b<0 \\ c - a \Delta t ~ F \sqrt{1+\left( \frac{c-b}{h} \right)^{2}} & \mathrm{if}~ d-c>0, ~ c-b>0 \\ c - a \Delta t ~ F \sqrt{1+\left( \frac{d-c}{h} \right)^{2}+\left( \frac{c-b}{h} \right)^{2}} & \mathrm{if}~ d-c<0, ~ c-b>0 \\ c - a \Delta t ~ F & \mathrm{if}~ d-c>0, ~ c-b<0 \end{array} \right.
\end{eqnarray} For the first case: $G_{b}$, $G_{d} \geq 0$ are trivial to check while $G_{c} \geq 0$ only if \begin{eqnarray} 1 \geq \left( F^{2} \left( \frac{\Delta t}{h} \right)^{2}-1 \right) \left( - \frac{d-c}{h}\right)^{2} \quad \Longleftarrow \quad \frac{\sqrt{1+P^{2}}}{P} \geq M^{r}_{l} \frac{\Delta t}{h} \end{eqnarray} Case 2 yields the same condition, whereas Case 3 gives the more restrictive one present in the assumption of the claim. Case 4 is trivial. \qquad \end{proof} \begin{proposition}{(Stability.)} \label{claim:stability} The above scheme is stable, provided that \begin{eqnarray} \label{UpperBounds} \Delta t < \min \left\{ \frac{h}{2PM^{r}_{l}} ~,~ \frac{P-2}{KP\sqrt{1+2P^{2}}} ~,~ \frac{2}{P\delta} \right\} \end{eqnarray} for some $\delta >0$. The constant $P$ is such that $\max \left\{ | D^{+}_{l}\chi^{r} |, | D^{-}_{l}\chi^{r} | \right\} \leq P$ for all $l \in L$ and $0\leq r \leq R$. \end{proposition} \begin{proof} Applying Theorem 7 of \cite{Oberman} to our scheme, it is possible to show that for $h$ small enough, the explicit Euler map defined as \begin{eqnarray} S^{l}_{\Delta t}(\chi) = \chi_{l} - a\Delta t \cdot F(\chi_{l},y_{l},t) \sqrt{1+\mathrm{upw}(\chi^{r},l,r,\alpha)} \end{eqnarray} is a strict contraction in $\ell_{\infty}$. Bounding $S^{l}_{\Delta t}(\chi) - S^{l}_{\Delta t}(\tau) $ from below (resp.\mbox{~}above) yields the $2^{\mathrm{nd}}$ (resp.\mbox{~}$3^{\mathrm{rd}}$) bound in (\ref{UpperBounds}). \end{proof} When defining `upw', we implicitly assumed that both $\chi^{r}_{l+1}$ and $\chi^{r}_{l-1}$ were known. In the instance where one of those values is not known, we set $\chi^{r+1}_{l}$ to $+\infty$. Indeed, no value can be assigned to $\chi^{r+1}_{l}$ since it is not possible to infer where the characteristic going through the point $p^{r}_{l}=(\chi^{r}_{l},y_{l},t^{r})$ comes from. \paragraph{Remark} Assuming $P=\mathcal{O}(1/h)$, we may revisit the bounds on $\Delta t$ given in (\ref{UpperBounds}). The first bound is not very restrictive, even though it scales like $\mathcal{O}(h^{2})$. Indeed $F\approx 0$ implies that $M^{r}_{l}$ should always be small. The bounds imposed by stability are $\mathcal{O}(h)$, which agrees with the usual CFL number of an advection problem. \section{Weaving the representations} \label{sec:Weaving} Both approaches just discussed in \S \ref{subsec:tFMM} and \S \ref{subsec:FrontFunction} provide methods that locally build the manifold $\mathcal{M}$. We now address the question of when to use a specific representation. \subsection{The Sign Test} \label{subsec:TheSignTest} Since the approach presented in \S \ref{subsec:tFMM} relies on the assumption that the speed is bounded away from 0, the sign of $F$ is monitored throughout the algorithm. In particular, whenever a point in $(x_{i},y_{j})$ is assigned a value $\psi_{ij}$ using the $(t)$-FMM, the Sign Test is performed as follows. Suppose the point $p_{i-1,j}=(x_{i-1},y_{j},\psi_{i-1,j})$ was used in the computation of $\psi_{ij}$. Considering the line in $xyt$-space joining the point $p_{i-1,j}$ and $p_{ij}$, we check the number of times $d$ that the speed changes sign along this line. If $d=0$, the algorithm can keep running the ($t$-)FMM: The pair $(p_{i-1,j},p_{ij})$ is said to \textsl{pass the Sign Test}. If $d=1$, we should change representation: The pair $(p_{i-1,j},p_{ij})$ \textsl{fails the Sign Test}. If $d>1$, the grid has to be refined. 
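To make the Sign Test concrete, a minimal sketch (in Python) is given below. It samples the speed along the segment in $xyt$-space joining the two triplets and counts the number of sign changes; the sampling resolution \texttt{n\_samples} is an assumption made purely for illustration, since the text does not prescribe how $d$ is to be evaluated.
\begin{verbatim}
import numpy as np

def sign_test(p_from, p_to, F, n_samples=50):
    # p_from, p_to : triplets (x, y, psi) in xyt-space
    # F            : speed function, called as F(x, y, t)
    # Returns d, the number of sign changes of F along the segment:
    #   d == 0 : the pair passes the Sign Test
    #   d == 1 : the pair fails the Sign Test (change representation)
    #   d  > 1 : the grid has to be refined
    p0, p1 = np.asarray(p_from, float), np.asarray(p_to, float)
    s = np.linspace(0.0, 1.0, n_samples)
    pts = np.outer(1.0 - s, p0) + np.outer(s, p1)   # points along the segment
    signs = np.sign([F(x, y, t) for x, y, t in pts])
    signs = signs[signs != 0]                       # exact zeros are ignored
    return int(np.sum(signs[1:] != signs[:-1]))
\end{verbatim}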
\subsection[Interpolation]{Conversion of data: Interpolation} \label{subsec:Interpolation} Suppose that the pair $(p_{i-1,j},p_{ij})$ just failed the Sign Test discussed in \S \ref{subsec:TheSignTest}. Then the algorithm must change representation. Without loss of generality, let us suppose that the algorithm switches from the $xy$- to the $yt$-representation. This means the manifold is locally sampled by points of the form $p_{lm} = (x_{l},y_{m},t)$, where $l\in L \subset I$ and $m \in M \subset J$. The $yt$-representation requires points of the form $p^{r}_{m} = (x,y_{m},t^{r})$, where $r \in R \subset K$. See Figure \ref{fig:ConversionOfData}. \begin{figure} \caption{Converting the data using interpolation. (a) Some data in the $xy$-representation. See text.} \label{fig:ConversionOfData} \end{figure} \subsection{Computing the outward normal} \label{subsec:OutwardNormal} Computing the outward normal $\hat{n}$ accurately at each point sampling $\mathcal{M}$ is a crucial component of the algorithm. In regions where the level-set function $\phi$ is $C^{1}$, we have $\hat{n} = \frac{(\phi_{x},\phi_{y},\phi_{t})}{|(\phi_{x},\phi_{y},\phi_{t})|}$. We use the Implicit Function Theorem: If $\psi(y,t)=x$ satisfies $\phi(\psi(y,t),y,t)=0$, then $\phi_{y} = -\phi_{x}\psi_{y}$ and $\phi_{t} = -\phi_{x}\psi_{t}$. Since $\phi_{x} \neq 0$, we set $\vec{n} = (+\mathrm{sign}(\phi_{x}),\psi_{y},\psi_{t})$ and $\hat{n} = \vec{n}/|\vec{n}|$. We keep track of the normal associated to each point by defining the function \begin{eqnarray} \mathrm{\textsl{Norm}~}:\mathbb{R}^{2} \times \mathbb{R}^{+}\longrightarrow S^{2} \qquad \qquad \mathrm{\textsl{Norm}~}(p_{ij})=\hat{n}(p_{ij}) \end{eqnarray} \subsection{The Orientation Test} \label{subsec:TheOrientationTest} Whenever a point is computed, the algorithm determines the orientation of the outward normal at this point. As explained in \S \ref{subsec:FrontFunction}, this can be done based on the sign of $\hat{n}_{3}$, the time component of $\hat{n}$. We define \begin{eqnarray} \mathrm{\textsl{Orient}3~}:\mathbb{R}^{2} \times \mathbb{R}^{+}\longrightarrow \{ -1, +1 \} \qquad \qquad \mathrm{\textsl{Orient}3~}(p_{ij})= - \mathrm{sign}\left( \hat{n}_{3} \right) \end{eqnarray} The algorithm requires finding which points $p_{ab}$ in a neighbourhood of $p_{ij}$ have the same orientation as $p_{ij}$. This is done using the Orientation Test. A pair $(p_{ij},p_{ab})$ is said to \textsl{pass the Orientation Test} if $\mathrm{\textsl{Orient}3~}(p_{ij})=\mathrm{\textsl{Orient}3~}(p_{ab})$, and to fail it otherwise. \section{Algorithms \& Discussion} \label{sec:AlgoDiscussion} We introduce some notation before giving the details of the algorithms. We make use of four lists. \texttt{Accepted} ~and \texttt{Narrow Band} ~are lists of triplets, e.g.,~ $p_{ij} = (x_{i},y_{j},\psi_{ij})$. \texttt{Pile} ~and \texttt{Far Away} ~are lists of coordinates, e.g.,~ $(x_{i},y_{j})$.
We define the space and time projection operators as follows: if $p_{ij} = (x_{i},y_{j},\psi_{ij})$, then \begin{eqnarray} \pi_{s}: \mathbb{R}^{2} \times \mathbb{R}^{+} \longrightarrow \mathbb{R}^{2} &\qquad& \pi_{s}(p_{ij}) = (x_{i},y_{j}) \\ \pi_{t}: \mathbb{R}^{2} \times \mathbb{R}^{+} \longrightarrow \mathbb{R}^{+} &\qquad& \pi_{t}(p_{ij}) = \psi_{ij} \end{eqnarray} The following function will be used: \begin{eqnarray} \mathrm{\textsl{Grid}}:\mathbb{R}^{2}\longrightarrow \mathbb{R}^{+} \qquad \mathrm{\textsl{Grid}}:(x_{i},y_{j}) \longrightarrow \psi_{ij} \end{eqnarray} The set of coordinates $N((x_{i},y_{j})) = \{ (x_{a},y_{b}) : |(i,j)-(a,b)|=1 \}$ consists of the nearest neighbours of $(x_{i},y_{j})$. We use Table \ref{tab:Neighbourhoods} to define two sets of triplets: $\mathrm{NeighEik}((x_{i},y_{j}))$ and $\mathrm{NeighSide}(p_{\alpha \beta})$. The first one is used to compute the value $\psi_{ij}$ in Algorithms \ref{AlgoFMM} and \ref{AlgotFMM}. Similarly the second set is used in Algorithm \ref{AlgoSidewaysPDE}, where the relevant component of $\hat{n}(p_{\alpha \beta})$, the normal at $p_{\alpha\beta}$, is denoted by $\eta_{i}$. We are now ready to present the main algorithms. \begin{table} \footnotesize \begin{center} \begin{tabular}{|c|c|c|} \hline ~ & $\mathcal{S} = \mathrm{NeighEik}((x_{i},y_{j}))$ & $\mathcal{S} =\mathrm{NeighSide}(p_{\alpha\beta})$ \\ \hline \hline $p_{ab} =(x_{a},y_{b},\psi_{ab})$ & \textbullet~ $(x_{a},y_{b}) \in N((x_{i},y_{j}))$ & \textbullet~$(x_{a},y_{b}) \in \{ (x_{l},y_{m}) : l \in L , m \in M \}$ \\ belongs to $\mathcal{S}$ & \textbullet~ $p_{ab} \in$ \texttt{Accepted} & \textbullet~ $p_{ab} \in$ \texttt{Accepted} \\ if it satisfies & \textbullet~ \textsl{Grid}$(x_{a},y_{b}) = \psi_{ab}$ & \textbullet~ $\mathrm{sign}(\eta_{i}) = \mathrm{sign}(\hat{n}_{i})$ \\ ~ & \textbullet~ $(p_{\alpha \beta},p_{ab})$ passes the Orient.\mbox{}Test & where $\hat{n} = $\textsl{Norm}~$(p_{ab})$ \\ \hline \end{tabular} \caption{Definitions of two sets used in Algorithms \ref{AlgotFMM}, \ref{AlgoSidewaysPDE} and \ref{AlgoFMM}} \label{tab:Neighbourhoods} \end{center} \end{table} \begin{algorithm} \caption{Main Loop}\label{AlgoMainLoop} \begin{algorithmic}[1] \While{\texttt{Narrow Band} $\neq \emptyset$} \Statex \Procedure{Accept a point}{} \State $\psi_{\alpha \beta} \gets \min \{ \pi_{t}(p_{ij}) : p_{ij} \in$ \texttt{Narrow Band} $ \}$ \State \textsl{Grid} $(x_{\alpha},y_{\beta}) \gets \psi_{\alpha \beta}$, ~~$p_{\alpha \beta} \gets (x_{\alpha},y_{\beta},\psi_{\alpha \beta})$ \State remove $p_{\alpha \beta}$ from \texttt{Narrow Band} $\qquad$ add $p_{\alpha \beta}$ to \texttt{Accepted} \If{$(x_{\alpha},y_{\beta}) \in$ \texttt{Far Away} } \State remove $(x_{\alpha},y_{\beta})$ from \texttt{Far Away} \EndIf \EndProcedure \Statex \If{$\psi_{\alpha \beta}<T$} \Procedure{Update Pile}{} \ForAll{$(x_{a},y_{b}) \in N((x_{\alpha},y_{\beta}))$} \State $\vec{v} \gets (x_{a},y_{b}) - (x_{\alpha},y_{\beta})$ \If{$\mathrm{sign}(\vec{v} \cdot \hat{\mathfrak{n}}(p_{\alpha \beta})) = \mathrm{sign}(F(p_{\alpha,\beta}))$ or 0} \If{\textsl{Grid}$(x_{a},y_{b}) =\psi_{ab} < +\infty$} \State $p_{ab} \gets (x_{a},y_{b},\psi_{ab})$ \If{\textsl{Orient}3~$(p_{ab}) \neq$\textsl{Orient}3~$(p_{\alpha \beta})$ } \State add $(x_{a},y_{b})$ to \texttt{Pile} \EndIf \ElsIf{$(x_{a},y_{b}) \in$ \texttt{Far Away}} \State add $(x_{a},y_{b})$ to \texttt{Pile} \EndIf \EndIf \EndFor \EndProcedure \EndIf \Statex \Procedure{Update the Narrow Band}{} \ForAll{$(x_{i},y_{j}) \in$ \texttt{Pile}} \State compute 
$\psi_{ij}$ and $\hat{n}_{ij}$ using Algo. \ref{AlgoFMM} if $F=F(\vec{\mathbf{x}})$ or Algo. \ref{AlgotFMM} if $F=F(\vec{\mathbf{x}},t)$ \State $p_{ij} \gets (x_{i},y_{j},\psi_{ij})$, ~~\textsl{Norm}~$(p_{ij}) \gets \hat{n}(p_{ij})$ \State remove $(x_{i},y_{j})$ from \texttt{Pile}. \ForAll{$p \in \mathrm{NeighEik}((x_{i},y_{j}))$} \State perform the Sign Test for the pair $(p_{ij},p)$ \EndFor \If{at least one pair fails the Sign Test} \State proceed to Algo. \ref{AlgoSideways}, which returns $(k,l,\psi_{kl})$, FAIL and $\hat{n}$ \State $p_{ij} \gets (x_{k},y_{l},\psi_{kl})$, ~~\textsl{Norm}~$(p_{ij}) \gets \hat{n}$, ~~ $i \gets k$,~~ $j \gets l$ \EndIf \State\textsl{Orient}3~$(p_{ij}) \gets - \mathrm{sign}(\hat{n}_{3}(p_{ij}))$ \If{FAIL$==0$} \If{$\exists~q_{ij} \in $ \texttt{Narrow Band} ~with $\pi_{s}(q_{ij}) = \pi_{s}(p_{ij})$} \State remove $q_{ij}$ from \texttt{Narrow Band} \EndIf \State add $p_{ij}$ to \texttt{Narrow Band} \EndIf \EndFor \EndProcedure \EndWhile \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Solve $|\nabla \psi(x,y)| = \frac{1}{|F(x,y,\psi)|}$}\label{AlgotFMM} \begin{algorithmic} \State $u_{\pm} \gets \pi_{t}(p_{i\pm1j})$ if $p_{i\pm1j} \in \mathrm{NeighEik}((x_{i},y_{j}))$, $+\infty$ otherwise. \State $v_{\pm} \gets \pi_{t}(p_{ij\pm1})$ if $p_{ij\pm1} \in \mathrm{NeighEik}((x_{i},y_{j}))$, $+\infty$ otherwise. \State $\Theta \gets [0,0,0,0]$ \For{Quadrant=1\ldots 4} \If{Quadrant=1} \State $\psi_{v} \gets v_{+}$,~ $\psi_{u} \gets u_{+}$,~ $\tau_{v} \gets \frac{h}{|F(x_{i},y_{j+1},\psi_{v})|}$,~ $\tau_{u} \gets \frac{h}{|F(x_{i+1},y_{j},\psi_{u})|}$,~ \EndIf \State (and similarly for other quadrants) \If{$(\psi_{v}=+\infty)$ and $(\psi_{u}=+\infty)$} \State $\theta \gets +\infty$,~~ \Else \State $\theta \gets \min_{\xi \in [0,1]} \{ \xi \psi_{v} + (1-\xi)\psi_{u} + \sqrt{ \xi^2+(1- \xi)^2} ~\left( \xi\tau_{v}+(1- \xi)\tau_{u} \right) \}$ \State (see Appendix \ref{app:tFMM} for details) \EndIf \State $\Theta$(Quadrant)$\gets \theta$,~~ \EndFor \State $\psi_{ij} \gets \min(\Theta)$,~~ $Q \gets \mathrm{argmin}(\Theta)$ \end{algorithmic} \end{algorithm} \subsection{Algorithm \ref{AlgoMainLoop}, Main loop} \label{subsec:AlgoMainLoop} All steps of the main loop can be checked to be such that if $F=F(x,y) \geq \delta >0$, $\forall (x,y) \in \mathbb{R}^{2}$, it reduces to the classical FMM. The sideways formulations are only used when $F\approx 0$. The first procedure, `Accept a point' is identical to the acceptance procedure in the standard FMM \cite{Sethian}, and we therefore omit to discuss it. For clarity, the point accepted during this step is labelled as $p_{\alpha \beta} = (x_{\alpha},y_{\beta},\psi_{\alpha \beta})$ in the rest of the discussion. \subsubsection{Update Pile} \label{subsubsec:AlgoUpdatePile} This step is only performed if $\psi_{\alpha \beta}$ is below a certain predefined time $T$ to ensure that \texttt{Narrow Band} ~is eventually empty. At this stage the algorithm needs to decide whether a nearest neighbour $(x_{a},y_{b})$ of $p_{\alpha \beta}$ should be put in \texttt{Pile}. To this end three criteria are used: the position, status and orientation of that neighbour. Simply put, \texttt{line 12} has the following effect: If $F(p_{\alpha \beta})>0$ and the considered neighbour lies inside the curve $\mathcal{C}_{\psi_{\alpha \beta}}$, then the pair $(x_{a},y_{b})$ is not added to \texttt{Pile}. Next the status of this nearest neighbour is considered. 
If the pair $(x_{a},y_{b})$ was traversed by the curve in the past, then it is only added to \texttt{Pile} ~if $p_{ab}:= (x_{a},y_{b},$\textsl{Grid}$(x_{a},y_{b}))$ and $p_{\alpha \beta}$ have different orientations (\texttt{lines 13-16}). Indeed, a point in the plane can only be traversed twice if the speed has changed sign in the meantime. If $(x_{a},y_{b})$ is still in \texttt{Far Away}, then it is automatically added to \texttt{Pile} ~(\texttt{lines 17-18}). \paragraph{Remark} The presence of the `if $\psi_{\alpha \beta}<T$' in \texttt{line 8} is in contrast with the standard FMM, where it is proved that since $F\geq \delta >0$, all characteristics exit the domain in finite time. In this context, the size of the computational domain determines $T$. \subsubsection{Update the Narrow Band} \label{subsubsec:AlgoUpdateNB} This procedure assigns tentative values to the points in \texttt{Pile} ~using either the standard FMM (see Appendix \ref{subsec:AlgoFMM}) or Algorithm \ref{AlgotFMM}, depending on whether $F$ is time-dependent. Since $\psi$ only solves the Eikonal equation in regions where $|F| \geq \delta >0$, the first lines of those algorithms ensure that the points involved in the computation of $\psi_{ij}$ all lie in one such region. The steps outlined in \texttt{lines 24-28} represent the main modification to the standard FMM algorithm. The Sign Test is performed to check if the value returned by Algorithm \ref{AlgoFMM} or \ref{AlgotFMM} is valid. If it is not, then Algorithm \ref{AlgoSideways} is called. Using a sideways representation, it attempts to return a point $(x_{k},y_{l},\psi_{kl}) \in \mathcal{C}_{\psi_{kl}}$. If it manages to do so, note that as explained in \S \ref{subsubsec:AlgoGetSideways}, the triplet returned may not be $(x_{i},y_{j},\psi_{ij})$, which is why $i$ and $j$ are relabelled in \texttt{line 28}. As in the standard FMM, if there already is a point in \texttt{Narrow Band} ~with the same spatial coordinates $(x_{i},y_{j})$, then it is automatically removed from that list. The triplet $(x_{i},y_{j},\psi_{ij})$ is added to \texttt{Narrow Band}. In the event that Algorithm \ref{AlgoSideways} fails, no new point is added to \texttt{Narrow Band}. \subsection{Algorithm \ref{AlgoSideways}, Sideways representation} \label{subsec:AlgoSolveSideways} This algorithm is called by the main loop when the speed $F$ is close to zero. \subsubsection{Determine representation} \label{subsubsec:AlgoDetermineRep} In order to work locally, the first step of this procedure defines a square of side length at most $2sh$ for some $s \in \mathbb{N}$ as the new computational grid. Then the representation is chosen based on the normal at $p_{\alpha \beta}$. \subsubsection{Initialization} \label{subsubsec:AlgoInitializationSideways} This is the step where data are converted, as was mentioned in \S \ref{subsec:Interpolation}. The set NeighSide$(p_{\alpha \beta})$ is found; this ensures that the orientation of the points used next is compatible with the current representation. We take time to explain what we mean in \texttt{line 12} in detail. It is ideal to build the sideways grid in such a way that the triplet $p_{\alpha \beta}$ is represented exactly on this grid. For example, if data are being converted to the $yt$-representation, then there should be $\tilde{l} \in L$ and $\tilde{r} \in R$ such that $(y_{\tilde{l}},t^{\tilde{r}}) = (y_{\beta},\psi_{\alpha\beta})$.
The function $\psi_{1}:(y,t) \mapsto x$ then satisfies $\psi_{1}(y_{\tilde{l}},t^{\tilde{r}})=x_{\alpha}$, and $(x_{\alpha},y_{\beta},\psi_{\alpha\beta}) = (\psi_{1}(y_{\tilde{l}},t^{\tilde{r}}),y_{\tilde{l}},t^{\tilde{r}})$. This avoids rediscovering the point $p_{\alpha\beta}$ in the procedure `Get $(x_{k},y_{l},\psi_{kl})$' discussed in \S \ref{subsubsec:AlgoGetSideways}. Assigning values to the sideways grid in \texttt{line 13} is an interpolation problem. See Figure \ref{fig:ConversionOfData} (c). \subsubsection{Main loop} \label{subsubsec:AlgoMainLoopSideways} The sideways PDE can now be solved. As mentioned in \S \ref{subsubsec:Discretization}, if either $\psi^{r-1}_{l-1}$ or $\psi^{r-1}_{l+1}$ are set to $+\infty$, then Algorithm \ref{AlgoSidewaysPDE} sets $\psi^{r}_{l}$ to $+\infty$. As depicted on Figure \ref{fig:ConversionOfData} (d), this has the effect of shrinking the size of the set where the PDE is solved: At most $s$ time steps can be taken before all the boundary information available has been used up. When the speed depends on time, we believe that using adaptive time stepping increases the success rate of Algorithm \ref{AlgoSideways}. We pick a small $\Delta t$ as long as the speed has not changed sign. This makes the scheme more accurate, thereby increasing the chances of assigning a value to $(x_{i},y_{j})$. Once $F$ changes sign, a large $\Delta t$ is chosen to increase the likelihood of assigning a value to $(x_{\alpha},y_{\beta})$. \subsubsection{Get $(x_{k},y_{l},p_{kl})$} \label{subsubsec:AlgoGetSideways} Deciding which value is returned by the algorithm is delicate and may be summarized as follows: By default, the algorithm always tries to assign a value to the pair in the \texttt{Narrow Band} ~(\texttt{lines 22-25}). If this is not possible, then it tries to assign a new value to the pair $(x_{\alpha},y_{\beta}) = \pi_{s}(p_{\alpha \beta})$ (\texttt{lines 26-29}). If this cannot be done either, then this representation failed. The algorithm must attempt using another representation which is chosen based on the ones already attempted. \textit{When the $1^{\mathrm{st}}$ attempt fails.} Suppose the $xt$-representation failed, then the algorithm attempts to use the $yt$-representation. \textit{When the $2^{\mathrm{nd}}$ attempt fails.} Then the scheme resorts to the skewed representation. \textit{When the $3^{\mathrm{rd}}$ attempt fails.} If the skewed representation also fails, then Algorithm \ref{AlgoSideways} fails entirely. Note that this is expected to happen if $(x_{i},y_{j})$ and $(x_{\alpha},y_{\beta}) \not \in \mathcal{C}_{t}$ for any $t\in (p_{\alpha \beta},T)$. See Example 2 in \S \ref{sec:Accuracy}. \paragraph{Remark} In practice, after each iteration of the for loop \texttt{line 17}, we check if either $(x_{i},y_{j})$ or $(x_{\alpha},y_{\beta})$ has been traversed by the curve. If not, then the for loop keeps going. 
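\paragraph{Illustration} To fix ideas, the following Python sketch performs one explicit step of the update used in Algorithm \ref{AlgoSidewaysPDE}, together with the adaptive time-step heuristic just described. It is only an illustrative sketch under our own assumptions: the function names are ours, and the one-sided difference selected by the sign of $aF$ is our reading of the upwind term $\mathrm{upw}(\psi^{r-1},l,r,\alpha)$; it is not the implementation used by the authors.
\begin{verbatim}
import numpy as np

def sideways_step(psi_prev, y, t_prev, a, F, h, dt):
    # One explicit step of  psi_t + a*F*sqrt(1 + psi_y^2) = 0  (sketch).
    # psi_prev holds psi^{r-1}_l on the sideways grid; np.inf marks unknown values.
    # a is the orientation sign, F a callable F(psi, y, t).
    psi_next = np.full_like(psi_prev, np.inf)
    for l in range(1, len(psi_prev) - 1):
        # the paper checks the two neighbours; we also guard the centre value
        if np.isinf(psi_prev[l - 1]) or np.isinf(psi_prev[l + 1]) or np.isinf(psi_prev[l]):
            continue
        speed = F(psi_prev[l], y[l], t_prev)
        alpha = np.sign(a * speed)
        # upwind one-sided difference chosen by the sign of a*F (our assumption)
        if alpha >= 0:
            dpsi = (psi_prev[l] - psi_prev[l - 1]) / h
        else:
            dpsi = (psi_prev[l + 1] - psi_prev[l]) / h
        psi_next[l] = psi_prev[l] - a * dt * speed * np.sqrt(1.0 + dpsi ** 2)
    return psi_next

def adaptive_dt(speed_sign_changed, h, r1=1.0 / 3.0, r2=2.0):
    # small steps while F keeps its sign, larger ones afterwards
    return (r2 if speed_sign_changed else r1) * h
\end{verbatim}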
\begin{algorithm} \caption{Sideways representation}\label{AlgoSideways} \begin{algorithmic}[1] \Procedure{Determine representation}{} \State $s \in \mathbb{N}$ is picked, ~~$\vec{v} \gets (-s,-s+1,\ldots,s-1,s)$, ~~ $L \gets i+\vec{v}$, ~~ $M \gets j+\vec{v}$ \State $L \gets L \cap I$, ~~ $M \gets M \cap J$ \If{$|\hat{n}_{1}(p_{\alpha \beta})|>|\hat{n}_{2}(p_{\alpha \beta})|$} \State use $yt$-representation: $z \gets x$, ~~ $a \gets -\mathrm{sign}(\hat{n}_{1}(p_{\alpha \beta}))$ \Else \State use $xt$-representation: $z \gets y$, ~~ $a \gets -\mathrm{sign}(\hat{n}_{2}(p_{\alpha \beta}))$ \EndIf \EndProcedure \State Attempt $ \gets 1$ \While{Attempt$>0$} \Procedure{Initialization}{} \State get $\mathrm{NeighSide}(p_{\alpha \beta})$ \State the sideways grid $(z_{l},t^{r})$, $l \in L$, $r\in R$ is built \State $\textsl{Grid}(z_{l},t^{r}) \gets \psi^{r}_{l}$ using interpolation and $\mathrm{NeighSide}(p_{\alpha \beta})$ where possible. \State $\textsl{Grid}(z_{l},t^{r}) \gets +\infty$ where interpolation cannot be used. \EndProcedure \Procedure{Main loop}{} \If{$a \neq 0$} \For{$n=1:R_{\max}$} \State $\Delta t$ is determined \For{$l=2:L_{\max}-1$} \State compute $\psi^{r}_{l}$ using Algo. \ref{AlgoSidewaysPDE}. \EndFor \EndFor \EndIf \EndProcedure \Procedure{Get $(x_{k}$, $y_{l},\psi_{kl})$}{} \If{$(x_{i},y_{j})$ is traversed by the curve} \State $\psi_{ij}$ is computed using interpolation \State $\psi_{kl} \gets \psi_{ij}$, $x_{k} \gets x_{i}$, $y_{l} \gets y_{j}$, $\hat{n}(\psi_{kl})$ is computed \State Attempt $ \gets 0$, FAIL $\gets 0$ \ElsIf{$(x_{\alpha},y_{\beta})$ is traversed by the curve} \State $\psi_{\alpha \beta}$ is computed using interpolation \State $\psi_{kl} \gets \psi_{\alpha \beta}$, $x_{k} \gets x_{\alpha}$, $y_{l} \gets y_{\beta}$, $\hat{n}(\psi_{kl})$ is computed \State Attempt $ \gets 0$, FAIL $\gets 0$ \Else{~This sideways representation failed.} \If{Attempt =1} \If{in $xt$-representation} \State use $yt$-representation: $z \gets x$, ~~$a \gets -\mathrm{sign}(\hat{n}_{1}(p_{\alpha \beta}))$ \EndIf \If{in $yt$-representation} \State use $xt$-representation: $z \gets y$, ~~$a \gets -\mathrm{sign}(\hat{n}_{2}(p_{\alpha \beta}))$ \EndIf \State Attempt = Attempt +1 \ElsIf{Attempt=2} \State use the skewed representation: $z \gets w$, ~~$a \gets -\mathrm{sign}((x_{\alpha}$, $y_{\beta})\cdot \hat{\mathfrak{n}}(p_{\alpha \beta}))$ \State Attempt = Attempt +1 \ElsIf{Attempt=3} \State Point is not reached before $T$. ~~$\psi_{kl} \gets +\infty$, $x_{k} \gets +\infty$, $y_{l} \gets +\infty$ \State Attempt $ \gets 0$, FAIL $\gets 1$ \EndIf \EndIf \EndProcedure \EndWhile \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Solve $\psi_{t} + a F(\psi,y,t)\sqrt{1+\psi^{2}_{y}} = 0$}\label{AlgoSidewaysPDE} \begin{algorithmic}[0] \If{$(\psi^{r-1}_{l-1}<+\infty)$ \& $(\psi^{r-1}_{l+1}<+\infty)$} \State $\alpha \gets$ sign$(aF(\psi^{r-1}_{l},y_{l},t^{r-1}))$ \State $\psi^{r}_{l} \gets \psi^{r-1}_{l} - a \cdot \Delta t \cdot F(\psi^{r-1}_{l},y_{l},t^{r-1}) \cdot \sqrt{1+ \mathrm{upw}(\psi^{r-1},l,r,\alpha)} $ \Else \State $\psi^{r}_{l} \gets +\infty$ \EndIf \end{algorithmic} \end{algorithm} \subsection{General remarks} \label{subsec:AlgoGeneralRemarks} \subsubsection{Data structure} \label{subsubsec:DataStructure} One of the main differences with the standard FMM is the way we keep track of the various properties associated to each point. The fact that a point $(x_{\alpha},y_{\beta})$ on the plane may be traversed by the curve more than once requires a slightly richer data structure. 
For example, the functions \textsl{Norm}~ and \textsl{Orient}3~ have to be defined over triplets rather than over $\mathbb{R}^{2}$. On the other hand, the lists \texttt{Pile}~ and \texttt{Far Away}~ still consist of coordinates. Note that when the code ends, \texttt{Narrow Band}~ is empty whereas \texttt{Far Away}~ may still contain points. The \texttt{Accepted}~ list may contain multiple triplets sharing the same spatial coordinates. In order to keep track of what the most `up-to-date' value associated with $(x_{a},y_{b})$ is, we make use of \textsl{Grid}. Indeed, this function enjoys the following property: If there are distinct points $p_{ij},~q_{ij} \in$ \texttt{Accepted}~ such that $\pi_{s}(p_{ij})=\pi_{s}(q_{ij})$, then \textsl{Grid}$(x_{i},y_{j}) = \max \{ \pi_{t}(p_{ij}),\pi_{t}(q_{ij}) \}$. Viewed as a set, \textsl{Grid}$(\pi_{s}($\texttt{Accepted}$))$ is the upper semi-continuous envelope of $\mathcal{M}$. \subsubsection{Recovering the curve from $\mathcal{M}$} \label{subsubsec:RecoveringTheCurve} The set \texttt{Accepted}~ provides a discrete sampling of $\mathcal{M}$. Using this point cloud, and possibly the normal $\hat{n}$ to $\mathcal{M}$ at each point, a continuous representation of $\mathcal{M}$ can be obtained. See for example \cite{Amenta,SurfaceFromPointCloud,SurfaceFromPointCloud2,Hoppe,MarchingCubes}, and \cite{Zhao}. Given a time $t \in (0,T)$, a contouring algorithm can then be used to find $\mathcal{C}_{t}$ (see \cite{MarchingCubes}). \subsubsection{Resolution} \label{subsubsec:Resolution} By construction, the density of points sampling $\mathcal{M}$ is expected to be lower in regions where $F \approx 0$. A remedy to this situation is to also record the points computed in the sideways representations . \section{Complexity of the method} \label{sec:Complexity} We derive some estimates for the computational time of the method when $n=2$, i.e.,~ two spatial dimensions. Consider a spatial grid of $N^2$ points with meshsize $h$. Let $\Delta t \sim h$, and define $N^{\ast}$ to be the number of gridpoints traversed by $\mathcal{C}_{t}$ when $0<t<T$. (i.e.,~ if a given gridpoint $(x_{i},y_{j})$ is traversed twice, say at times $t_{1}$ and $t_{2}$ where $0<t_{1}<t_{2}<T$, then this contributes $+2$ to $N^{\ast}$.) By construction, the computational time depends on the size of the set $\mathcal{F}_{\mathcal{M}} := \mathcal{F} \cap \mathcal{M}$. Indeed, Algorithm \ref{AlgoSideways} is only called when Algorithm \ref{AlgoMainLoop} fails, which occurs whenever an accepted point computed by Algorithm \ref{AlgoMainLoop} is within a spatial distance $h$ of $\mathcal{F}_{\mathcal{M}}$. Let the number of points computed by Algorithm \ref{AlgoSideways} be $\tilde{N}$. Since the complexity of Algorithm \ref{AlgoMainLoop} is well-known \cite{Sethian}, let us focus on estimating the complexity of a single call to Algorithm \ref{AlgoSideways}. On the square of side $2s$, the Narrow Band forms a one-dimensional subset. Using interpolation to convert the points in a neighbourhood of this set takes $\mathcal{O}(s)$ operations. Algorithm \ref{AlgoSidewaysPDE} makes at most $s^{2}$ operations. Those two steps are performed at most three times. We formally argue that the parameters of the algorithm can be chosen such that this \emph{worse case} complexity is not achieved. The procedure mentioned in the remark of \S \ref{subsubsec:AlgoGetSideways} can be used to prevent Algorithm \ref{AlgoSidewaysPDE} from making unnecessary computations. 
In \S \ref{subsubsec:AlgoMainLoopSideways}, we explain how using adaptive time-stepping increases the success rate of Algorithm \ref{AlgoSideways}. Moreover, as $N$ increases, the time distance between the accepted point computed by Algorithm \ref{AlgoMainLoop} and $\mathcal{F}_{\mathcal{M}}$ decreases, which in turn makes Algorithm \ref{AlgoSideways} more successful on average. Altogether, this suggests that the number of attempts taken by Algorithm \ref{AlgoSideways} tends to one for almost all points; this is confirmed by the examples presented in the next section. As a result, the complexity of Algorithm \ref{AlgoSideways} tends to $\mathcal{O}(s)$ for large $N$. Given the assumption that $F$ is analytic, we expect $N^{\ast}-\tilde{N} = \mathcal{O}(N^{2})$ and $\tilde{N} = \mathcal{O}(N)$. In practice, the number of points in the local grid $s$ can be chosen as $kN$ for $k \ll 1$. The overall complexity can therefore be estimated as: \begin{eqnarray} \mathcal{O} (N^{2} \log (N^{2})) + \mathcal{O}(N) \times \mathcal{O}(kN) = \underbrace{\mathcal{O} (N^{2} \log (N^{2}))}_{(t)\mathrm{-FMM}} + \underbrace{\mathcal{O}(kN^{2})}_{\mathrm{augmented~part}} \end{eqnarray} Note that in the instance where $\mathcal{F}=\emptyset$, we recover the usual complexity of the FMM, namely $ \mathcal{O} (N^{2} \log (N^{2}))$. \section{Numerical Tests} \label{sec:Accuracy} In this section, we illustrate how the method works with a variety of examples. We first discuss the methodology used to assess the convergence of the algorithms, and briefly summarize which features and results are expected. We then present the examples. More details are provided in Appendix \ref{app:Details}. \subsection{Error measurement} \label{subsec:ErrorMeasurement} To assess the convergence of our algorithm, we compute the error associated to each point $p_{ij}$ returned by our scheme. \paragraph{Method 1: $E_{ij}$} Suppose that an exact solution to the Level-Set Equation (\ref{eq:LSE}), $\phi(x,y,t)>0$ is known, with the property that $|\nabla \phi|=1$ for all $t$. Then evaluating $\phi$ at $p_{ij}=(x_{i},y_{j},\psi_{ij})$ returns the distance to the curve $\mathcal{C}_{\psi_{ij}}$. We define $E_{ij} = |\phi(p_{ij})|$. This method is used for all examples except Example 4 when $F<0$. \paragraph{Method 2: $G_{ij}$} If an exact solution is not available, we get a numerical solution accurate enough to be considered exact. To this end, the Level-Set Equation is solved on a very fine grid using $2^{\mathrm{nd}}$ order stencils in space, and RK2 in time. At each time step, the zero-contour of $\phi$ is found and sampled. The resulting list of points $\mathcal{B}$ provides a discrete approximation of $\mathcal{M}$. The error associated to $p_{ij}$ is defined as the smallest three-dimensional distance to this exact cloud of points, i.e.,~ $G_{ij} = \min_{q \in \mathcal{B}} \{ |p_{ij}-q| \}$. This method is used for Example 4, when $F<0$. \subsection{Tests performed} \label{subsec:TestsPerformed} \paragraph{Accuracy of Algorithm \ref{AlgoSidewaysPDE}} In \S \ref{subsubsec:Discretization}, it is mentioned that the sideways method we propose converges with at least $\mathcal{O}(h^{1/2})$ accuracy. To verify this, we pick a domain $\mathcal{U}$, initialize say $x=\psi(y_{m},t^{0})$ with exact data for some initial time $t^{0}$, and run Algorithm \ref{AlgoSidewaysPDE} for different gridsizes. The result is a subset of $\mathcal{M}$, encoded as a list of points of the form $p^{r}_{m}=(\psi^{r}_{m},y_{m},t^{r})$. 
An error is associated to each point $p^{r}_{m}$ such that $\psi^{r}_{m}<\infty$ using either Method 1 or 2, i.e., $E^{r}_{m} = | \psi_{\mathrm{exact}}(y_{m},t^{r}) - \psi^{r}_{m}|$ or $G^{r}_{m} = \min_{q \in \mathcal{B}} \{ |p^{r}_{m}-q| \}$. A two-dimensional $L_{1}$ norm is then used to report the results in Figure \ref{fig:SidewaysConvergence}, e.g.,~ $L_{1} =h^{2} \cdot \sum_{m \in M} \sum_{r \in R} E^{r}_{m}$. \paragraph{Accuracy of the full scheme} When testing the accuracy of the full scheme, we distinguish between different regions of the resulting set \texttt{Accepted}. When studying a region computed by the $(t)$-FMM, a two-dimensional $L_{1}$ norm is used: $L_{1} =h^{2} \cdot \sum_{i \in I} \sum_{j \in J} E_{ij}$. Note that our assumptions on $F$ imply that the points computed using the sideways representations form one-dimensional sets of $\mathbb{R}^{2}\times [0,T]$. Consequently, a one-dimensional $L_{1}$ norm is used to study those points: $L_{1} =h \cdot \sum_{i \in I} \sum_{j \in J} E_{ij}$. The global error (computed using all the points in \texttt{Accepted}) is a two-dimensional $L_{1}$ norm. It may be interpreted as an approximation of the volume enclosed by the exact and the approximated surfaces. We report the $L_{\infty}$ error qualitatively, through the black \& white representations of the set \texttt{Accepted}. Those figures are obtained by computing the relative error at each point, i.e., if $L_{\infty} = \max_{i \in I, ~ j \in J} \{ E_{ij} \}$, then $e_{ij} = E_{ij} / L_{\infty}$; and then shading the point accordingly: The darker a point, the larger its relative error $e_{ij}$. \subsection{Expectations} By assumption, as $h \rightarrow 0$, the 1st order $(t)$-FMM scheme is used almost everywhere. This should reflect in the global error: It should follow the same trend as the $(t)$-FMM . Moreover, we expect the call to Algorithm \ref{AlgoSideways} to increase the constant of convergence. A question that we address is the extent to which this degrades the local and global accuracy. We investigate the behaviour of the scheme in the presence of shocks \& rarefactions in Example 4, as well as in \S \ref{sec:Discussion}. \\ In all examples but the fourth one, the initial curve $\mathcal{C}_{0}$ is the circle centred at the origin, with radius $r_{0}=1/4$. In all tests, data are initialized with exact values. \subsection{Example 1: $F=F(t)= 1-e^{10t-1}$} \label{subsec:Example1} The main purpose of this example is to illustrate the basic ideas at play in the method. The speed is such that we expect the circle to first expand up to time $t=0.1$ and then contract until it collapses to the origin. We first assess the order of convergence of the method for the sideways representation. The results reported on Figure \ref{fig:SidewaysConvergence} clearly indicate that it is $\mathcal{O}(h)$. This is higher than the $\mathcal{O}(h^{1/2})$ rate that was predicted in \S \ref{subsubsec:Discretization}. When the entire code is run, the set of \texttt{Accepted} ~points is presented on Figure \ref{fig:Ex1Surface} (a)-(b). One-dimensional optimization is used for those points traversed by a characteristic that is almost aligned with one of the spatial axis. We note that the sideways points are computed in the $yt$- (resp.\mbox{~}$xt$-)representation when $\hat{\mathfrak{n}}$ aligns better with the $x$- (resp.\mbox{~}$y$-)axis. As expected, the sampling of the surface is sparser near the plane $t = 0.1$. 
Remark that in this example, none of the sideways points were computed in the skewed representation. The global convergence results are presented in Figure \ref{fig:Ex1Surface}, (d). We distinguish between the bottom part of the surface, the top part, and those points computed using the sideways representation. On the one hand, the results pertaining to the bottom part allow us to conclude that the $t$-FMM is $\mathcal{O}(h)$, as predicted in \S \ref{subsec:tFMM}. On the other hand, we can study the effect of the call to Algorithm \ref{AlgoSideways} on the behaviour of the scheme. Indeed, although the $t$-FMM also converges with $\mathcal{O}(h)$ when used to build the top part of the surface, it does so with a larger constant. We conclude that changing representation does deteriorate the accuracy of the sampling but only to a mild extent. To gain a better understanding of where the loss of accuracy from the bottom to the top part stems from, the relative $L_{\infty}$ error $e_{ij}$ associated to each point can be viewed on Figure \ref{fig:Ex1Surface} (c). Those points computed using one-dimensional optimization in the $t$-FMM, just after $t=0.1$ bear the largest errors. Two reasons can explain this: Some of those points are clearly traversed by characteristics that are not aligned with the spatial axes. Nevertheless, the scheme resorts to one-dimensional optimization to assign them values, for lack of a better method. Indeed, when those points are put in \texttt{Pile}, there are not enough neighbours with negative orientation available to use two-dimensional optimization. We also suspect the constant of convergence of the $t$-FMM to depend on $\delta$ where $|F|\geq \delta >0$. In practice, this method is found to perform poorly when using points $p_{ij}$ such that $F(p_{ij}) \approx 0$. \paragraph{Remark} Those outliers do not degrade the accuracy of the method, even locally. This is because by design, Fast Marching Methods assign values to the points in \texttt{Pile} ~using only those neighbours with a smaller value. As a result, those outliers are not used in any of the calculations of the values of their neighbours. In practice, it is found that they eventually become isolated points of the \texttt{Narrow Band} ~ before getting accepted. \begin{figure} \caption{Convergence results for the sideways scheme, using (left) Method 1 (right) Method 2.} \label{fig:SidewaysConvergence} \end{figure} \begin{figure} \caption{Example 1: (a) - (b) Different perpectives of the set \texttt{Accepted} \label{fig:Ex1Surface} \end{figure} \subsection{Example 2: $F=F(x)=x$} The given speed is such that the curve remains a circle whose radius grows while its center shifts to the right. Our method adequately handles this case as a single problem, although the speed changes sign across the $y$-axis. As expected, Algorithm \ref{AlgoSideways} fails near the points $(0,0.25)$ and $(0,-0.25)$, as shown on Figure \ref{fig:Ex3Surface} (a). The sideways scheme was tested both in the $xt$- and the $yt$-charts, and was found to be $1^{\mathrm{st}}$ order in each case (Figure \ref{fig:SidewaysConvergence}). The results for the full scheme show that it converges with $\mathcal{O}(h)$ accuracy everywhere (Figure \ref{fig:Ex3Surface} (b)). Let us bring up that a bi-directional FMM was proposed in \cite{chopp2009another} to solve a related problem. 
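\paragraph{Illustration} The convergence rates quoted in these examples can be extracted from the error measurements of \S \ref{subsec:ErrorMeasurement} and \S \ref{subsec:TestsPerformed}; the short Python sketch below evaluates the discrete $L_{1}$ norms and the observed order between two successive resolutions. The log-ratio estimate of the order is a standard choice on our part and is not spelled out in the text.
\begin{verbatim}
import numpy as np

def l1_error_2d(E, h):
    # two-dimensional L1 norm: h^2 * sum_ij E_ij ((t)-FMM regions, global error)
    return h ** 2 * np.sum(E)

def l1_error_1d(E, h):
    # one-dimensional L1 norm: h * sum E_ij (points computed in the sideways charts)
    return h * np.sum(E)

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    # log-ratio estimate of the convergence order between two runs
    return np.log(err_coarse / err_fine) / np.log(h_coarse / h_fine)
\end{verbatim}
Halving $h$ and observing the $L_{1}$ error drop by a factor close to two then corresponds to the $\mathcal{O}(h)$ behaviour reported above.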
\begin{figure} \caption{Example 2: (a) The set \texttt{Accepted} \label{fig:Ex3Surface} \end{figure} \subsection{Example 3: $F=F(x,y,t)$} (See Appendix \ref{app:Example4} for details about $F$.) This example differs significantly from the previous ones in that the set $\mathcal{F}$ no longer consists of planes. The exact solution $\mathcal{C}_{t}$ is a circle that only grows at first, and then starts moving in the positive $x$-direction. Our method is observed to perform very well on this example; we present the resulting surface and the first order convergence results on Figures \ref{fig:SidewaysConvergence} \& \ref{fig:Ex4Conv}. \begin{figure} \caption{Example 3: (a) The set \texttt{Accepted} \label{fig:Ex4Conv} \end{figure} \subsection{Example 4: Two merging circles} This example tests the ability of the scheme to capture topological changes. The initial codimension-one manifold consists of two disjoint circles of radius $r_{0}=1/4$, with centres at $(-.3,0)$ and $(.3,0)$. The speed is such that the circles first expand, until they touch and merge. Then the speed changes sign, which makes the curve shrink until it pinches off and splits into two distinct curves. The set \texttt{Accepted} ~is presented in Figure \ref{fig:Ex5Surface} (a). The accuracy of the sideways scheme is investigated on a domain that comprises the shock when $F>0$, and the rarefaction when $F<0$. First order convergence is obtained in each case (Figure \ref{fig:SidewaysConvergence}). The full scheme also shows $1^{\mathrm{st}}$ order convergence (Figure \ref{fig:Ex5Surface} (b)). The convergence of the sideways points and the top part is a little shy of first order, but this can be attributed to the measurement method. Those results demonstrate how robust the overall scheme is. Note that a similar example was tackled in \cite{GFMM}, with a speed $F$ that depended linearly on time. \begin{figure} \caption{Example 4: (a) The set \texttt{Accepted} \label{fig:Ex5Surface} \end{figure} \section{Discussion} \label{sec:Discussion} In the light of the examples presented in the previous section, we address the limitations, weaknesses and advantages of the algorithm. We illustrate one of the main limitations of the scheme with a final example. The speed is chosen such that the initial circle immediately develops a kink along the $x$-axis at time $t=0$. Its subsequent shape resembles that of an almond slowly turning in the counterclockwise direction while expanding. The sign of the speed changes, forcing the curve to contract while retaining its slanted shape. See Appendix \ref{subsec:AlmondExample} for details. The most prominent feature of this example is that, as is depicted on Figure \ref{fig:Lemon}, the shock is not a straight line. Remark that the speed $F$ does not satisfy the assumptions of this paper outlined in \S \ref{subsec:Assumptions}: It is only a $C^{0}$ function of $\mathbb{R}^{2} \times [0,T]$. The surface that results from running the algorithm at high resolution is shown on Figure \ref{fig:Ex6Surface}. The shock is clearly visible, and has the expected figure-eight shape. Nevertheless some points `escape' through the shock when the speed changes sign, and start out two new fronts that keep on expanding. The problem stems from the procedure `Update Pile' in Algorithm \ref{AlgoMainLoop}. In order to decide which points go in \texttt{Pile}, the code distinguishes between the inside and the outside of the curve using the normal $\hat{\mathfrak{n}}$ (cf. \texttt{line 12} of Algorithm \ref{AlgoMainLoop}).
Consider what happens along the shock, where $\hat{\mathfrak{n}}$ has a discontinuity. So long as the expansion is outwards, this does not cause problems. But when the direction of propagation reverses, some points that should stay in \texttt{Far Away} ~are moved into \texttt{Pile}. A possible remedy to this issue is to approximate the normal cone along the shock. This additional information could be used as an updating criterion. \begin{figure} \caption{$\mathcal{M} \label{fig:Lemon} \end{figure} \begin{figure} \caption{The almond example: The set \texttt{Accepted} \label{fig:Ex6Surface} \end{figure} On a much more general note, the gluing mechanism between the two formalisms heavily relies on an accurate computation of the normal. In practice, we found that the algorithm is rather sensitive to the accuracy of this quantity. Another weakness of the method is that, as it stands, Algorithm \ref{AlgoSideways} may fail when it is not supposed to: even though $(x_{i},y_{j})$ or $(x_{\alpha},y_{\beta})$ belongs to $\mathcal{C}_{t}$ for some $t \in (0,T)$, the algorithm does not assign any value to either of those coordinates. Two situations make such a scenario possible: (1) the time steps taken are too small, or (2) too little information obtained from interpolation is available. Recall from Proposition \ref{claim:convergence} that the CFL condition prevents large $\Delta t$. Case (2) can occur if $s\in \mathbb{N}$, the number of points in the local grid in Algorithm \ref{AlgoSideways}, is too small. However, if $s$ is large, Algorithm \ref{AlgoSideways} may not be able to carry out the step outlined in \texttt{line 13}. This happens if the points in NeighSide$(p_{\alpha\beta})$ sample more than one connected component of the set $\{ p\in \mathcal{M} : \pi_{s}(p) \in [x_{i-s},x_{i+s}] \times [y_{j-s},y_{j+s}] \}$. See Figure \ref{fig:sTooLarge} for an illustration. However, choosing $s$ systematically so as to prevent this situation seems difficult. Ultimately $h$ and $s$ depend on measurable quantities such as the Lipschitz constants of $F$ and its derivatives, as well as the local curvatures of $\Gamma_{t}$. Nevertheless the way those parameters are intertwined and should be chosen is a question we wish to address in future work. \begin{figure} \caption{Illustration of what happens if $s$ is chosen too large. Data need to be converted to the $xt$-representation, but the set NeighSide$(p_{\alpha\beta} \label{fig:sTooLarge} \end{figure} The fact that our method is a rather mild modification of the standard FMM has obvious benefits. As featured in all the examples, the sideways representations need only be used to compute a relatively small number of points sampling $\mathcal{M}$. This allows us to safely predict that the computational complexity of the algorithm is lower than that of pre-existing algorithms used to tackle this problem, such as the LSM or the GFMM. Nonetheless, it is hard at this point to make more precise complexity statements. \section{Conclusion} \label{sec:Conclusions} Our aim was to devise an algorithm with low complexity able to describe the non-linear evolution of codimension one manifolds subject to a space- \& time-dependent speed function that changes sign. To this end, we illustrated how pre-existing methods can be combined to achieve this goal. The fact that we always dealt with explicit representations of the manifold implied that the dimensionality of the problem was never raised.
The resulting algorithm was found to have a global truncation error of $\mathcal{O}(h)$. We tested it against a number of examples, some of which cannot be found in the current literature. The algorithm is found to be robust and accurate in all the tests presented. Regarding the complexity of the method, a legitimate concern is to clearly quantify how the success rate of Algorithm \ref{AlgoSideways} depends on the various parameters involved, as well as the speed function $F$ and the manifold $\mathcal{M}$. Once this is done, more precise statements about the runtime of the algorithm can be made and tested. Overall, the present work thoroughly introduces a new algorithm, along with proofs of convergence and stability, as well as sturdy numerical results. We believe that the main idea on which it relies -- i.e.,~ to change representation based on the speed function $F$ -- may be extended and improved in many ways that shall be explored. \appendix \section[Quartic involved in the $t$-FMM]{A direct method to compute $\psi_{\mathrm{II}}$ in the $t$-FMM, in 2D} \label{app:tFMM} We provide a direct method for solving the minimization problem appearing in Equation (\ref{eq:VladMinimization}), in two dimensions. Introducing $\tau(y)=\frac{h}{|F(\vec{\mathbf{x}}_{ij},\psi(y))|}$, we first use linear interpolation to simplify the quantity we wish to minimize: \begin{eqnarray} &~& \psi(\tilde{\mathbf{x}}) + \sqrt{\xi^{2}+(1-\xi)^{2}}~ \frac{~h}{|F(\vec{\mathbf{x}}_{ij},\psi(\tilde{\mathbf{x}}))|} ~=~ \psi(\tilde{\mathbf{x}}) + \sqrt{\xi^{2}+(1-\xi)^{2}}~ \tau(\tilde{\mathbf{x}}) \nonumber \\ &\approx& \xi \psi(\vec{\mathbf{x}}_{i-1,j})+(1-\xi) \psi(\vec{\mathbf{x}}_{i,j+1}) + \sqrt{\xi^{2}+(1-\xi)^{2}}~ \left( \xi \tau(\vec{\mathbf{x}}_{i-1,j})+(1-\xi) \tau(\vec{\mathbf{x}}_{i,j+1}) \right) \nonumber \\ &=:& f(\xi) \end{eqnarray} Minimizing $f$ over $\xi \in (0,1)$ amounts to finding the roots of $0 = c_{4}\lambda^{4} + c_{3}\lambda^{3} + c_{2}\lambda^{2} + c_{1}\lambda + c_{0}$ where $\lambda \in (0,1)$ is such that $f'(\lambda)=0$. This quartic can be solved either directly with closed formulas, or with Newton's method --- we use the latter. For each root $r_{i} \in (0,1)$ the corresponding value of $\psi$ is computed as $\psi_{\mathrm{II},r_{i}}=f(r_{i})$. If $\psi_{\mathrm{II},r_{i}}< \psi(\vec{\mathbf{x}}_{i-1,j})$ or $\psi_{\mathrm{II},r_{i}}<\psi(\vec{\mathbf{x}}_{i,j+1})$, then $\psi_{\mathrm{II},r_{i}}$ is discarded. Values arising from minimization in one dimension are also computed as $\psi_{\mathrm{II},0} = \psi(\vec{\mathbf{x}}_{i,j+1}) + \tau(\vec{\mathbf{x}}_{i,j+1})$ and $\psi_{\mathrm{II},1} = \psi(\vec{\mathbf{x}}_{i-1,j}) + \tau(\vec{\mathbf{x}}_{i-1,j})$. The global minimum is found by comparing all those values. \section{Algorithm \ref{AlgoFMM}, standard FMM} \label{subsec:AlgoFMM} We revisit the standard Fast Marching Method algorithm, using some of the notation we have introduced. \begin{algorithm}[h] \caption{Solve $|\nabla \psi(x,y)| = \frac{1}{|F(x,y)|}$}\label{AlgoFMM} \begin{algorithmic} \State $u_{\pm} \gets \pi_{t}(p_{i\pm1j})$ if $p_{i\pm1j} \in \mathrm{NeighEik}((x_{i},y_{j}))$, $+\infty$ otherwise. \State $v_{\pm} \gets \pi_{t}(p_{ij\pm1})$ if $p_{ij\pm1} \in \mathrm{NeighEik}((x_{i},y_{j}))$, $+\infty$ otherwise. 
\State $u \gets \min(u_{-},u_{+})$, $\qquad$ $v \gets \min(v_{-},v_{+})$, ~ \If{$\max(u,v)-\min(u,v)<\frac{h}{|F(x_{i},y_{j})|}$} \State $\psi_{ij}=\frac{1}{2} \left( (u+v)+\sqrt{2 \left( \frac{h}{F(x_{i},y_{j})}\right)^2-(u-v)^2} \right)$ \Else \State $\psi_{ij} = \min(u,v)+\frac{h}{|F(x_{i},y_{j})|}$ \EndIf \end{algorithmic} \end{algorithm} \section{Implementation details for the examples} \label{app:Details} \subsection{Solvers used} \label{subsec:SolversUsed} We give some details about the examples presented in \S \ref{sec:Accuracy}. All tests were performed using \textsc{Matlab}$^\circledR$ \cite{Matlab}. In particular, finding the minimum value in the Narrow Band is done using the command \verb|min|. Whenever a value $\psi_{ij}$ is computed by the $(t)$-FMM, the normal $\hat{n}_{ij}$ is approximated using the one-sided derivatives involving the points used in the computation of $\psi_{ij}$. For example: if two-dimensional optimization was used in Quadrant III to obtain $\psi_{ij}$, then \begin{eqnarray} \vec{v} = \left( \frac{\psi_{ij}-\psi_{i-1,j}}{h}, \frac{\psi_{ij}-\psi_{i,j-1}}{h}, -\mathrm{\textsl{Orient}3~}(p_{ij}) \right) \quad \mathrm{and} \quad \hat{n}(p_{ij}) = \frac{\vec{v}}{|\vec{v}|} \end{eqnarray} Within Algorithm \ref{AlgoSideways}, we approximate the normal as follows. For clarity, say the points $p^{k}_{j} = (\psi^{k}_{j},y_{j},t^{k})$ and $p^{k-1}_{j} = (\psi^{k-1}_{j},y_{j},t^{k-1})$ computed in the $yt$-representation with $x$-orientation $a$ were used to obtain $p_{ij} = (x_{i},y_{j},\psi_{ij})$. Then \begin{eqnarray} \vec{v} = \left( -a, a ~\frac{\psi^{k-1}_{j+1}-\psi^{k-1}_{j-1}}{2h}, a ~ \frac{\psi^{k}_{j}-\psi^{k-1}_{j}}{dt} \right) \quad \mathrm{and} \quad \hat{n}(p_{ij}) = \frac{\vec{v}}{|\vec{v}|} \end{eqnarray} Note that this is not an approximation of the true normal at $p_{ij}$, which is $( -a, -\phi_{x} \psi_{y},$ $ -\phi_{x} \psi_{t} )$. However, the only two salient pieces of information we need from $\hat{n}$ are the sign of $\hat{n}_{3}$ and the direction of $\hat{\mathfrak{n}}$. The two-dimensional normal is simply obtained from $\hat{n}$ as $\hat{\mathfrak{n}} = \frac{(\hat{n}_{1},\hat{n}_{2})}{|(\hat{n}_{1},\hat{n}_{2})|}$. \subsection{Choice of parameters} In all examples, the number of points in each dimension is $N+1$, and the spatial grid spacings are equal: $h=dx=dy$. The size of the local grid in Algorithm \ref{AlgoSideways} was set to be $s= \lfloor \frac{N}{3} \rfloor$. As discussed in \S \ref{subsubsec:AlgoMainLoopSideways}, we use adaptive time-stepping in those examples where $F$ depends on time. In the fine part, before the time where $F=0$, we set $\Delta t = r_{1} h$. Past that time, we let $\Delta t = r_{2} h$. To assess the convergence of the sideways methods, a $yt$-grid with spacings $h$ and $\Delta t = h/2$ was built. Remark that the exact normal $\hat{n}$ was assigned to the points as they were accepted in all the examples, except Example 1 where it was computed as explained in \S \ref{subsec:SolversUsed}. \subsection{Example 1} The exact solution to the Level-Set Equation is $\phi(x,y,t) = \sqrt{x^{2}+y^{2}} -R(t)$ where $R(t) = \left( r_{0} - \frac{e^{10t}-1}{10e}+t \right)$. Domain: $[-.321,.319]^{2}$. $T_{F}=0.3$. $xt$- and $yt$-rep.: $r_{1} = 1/3$, $r_{2} = 2$. Skewed rep.: $r_{1} = r_{2} = 1$. Domain for convergence of Algo. \ref{AlgoSidewaysPDE}: $(y,t) \in [-0.25,0.25]\times [0,0.3]$.
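\paragraph{Illustration} For concreteness, a minimal Python sketch of the Method-1 error evaluation for Example 1, based on the exact solution quoted above, could read as follows; the variable names are ours.
\begin{verbatim}
import numpy as np

R0 = 0.25  # initial radius r_0

def R(t):
    # radius of the exact circle in Example 1
    return R0 - (np.exp(10.0 * t) - 1.0) / (10.0 * np.e) + t

def phi_exact(x, y, t):
    # exact level-set solution for Example 1: signed distance to C_t
    return np.sqrt(x ** 2 + y ** 2) - R(t)

def method1_error(x_i, y_j, psi_ij):
    # E_ij = |phi(p_ij)| evaluated at the accepted triplet (x_i, y_j, psi_ij)
    return abs(phi_exact(x_i, y_j, psi_ij))
\end{verbatim}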
\subsection{Example 2} The signed distance function to the curve $\mathcal{C}_{t}$ is given as $\phi(x,y,t)=\sqrt{(x-x_{c}(t))^{2}+y^{2}}-r(t)$ where $x_{c}(t) = r_{0}\sinh t$ and $r(t) = r_{0}\cosh t$. Note that $\phi$ does not solve the Level-Set Equation. Domain: $[-1.01,0.99]^2$. $T_{F}=1$. $xt$- and $yt$-rep.: $r_{1} = 1/3$, $r_{2} = 2$. Skewed rep.: $r_{1} = 1/3$, $r_{2} = 5$. Domain for convergence of Algo. \ref{AlgoSidewaysPDE}: $(y,t) \in [-0.25,0.25]\times [0,1]$ and $(x,t) \in [-0.25,0.25]\times [0,1]$. \subsection{Example 3} \label{app:Example4} The exact solution to the Level-Set Equation is $\phi(x,y,t) = \sqrt{(x-g t)^2+y^2}-\left( r_{0}+c t \right)$ where $b=10$, $c=1/2$ and $g(t) = \arctan \left(b(t-0.5)\right) + \frac{\pi}{2}$. The speed is \begin{eqnarray} F = \frac{(x-gt)(g't+g)}{\sqrt{(x-gt)^{2}+y^{2}}} + c ~ \Longrightarrow ~ F \approx \left\{ \begin{array}{ll} c & \mathrm{for~} t \mathrm{~small} \\ \frac{(x-\pi t)\pi}{\sqrt{(x-\pi t)^{2}+y^{2}}} + c & \mathrm{for~} t \mathrm{~large} \\ \end{array} \right. \end{eqnarray} We expect the circle to first expand (when $t$ is small), and then expand while moving to the right with speed $\pi$ (when $t$ is large). Domain: $[-1.51,+1.49]^{2}$. $T_{F}=0.5$. $xt$- and $yt$-rep.: $r_{1} = 1/3$, $r_{2} = 2$. Skewed rep.: $r_{1} = 1/3$, $r_{2} = 5$. Domain for convergence of Algo. \ref{AlgoSidewaysPDE}: $(y,t) \in [-0.25,0.25]\times [0,0.5]$. \subsection{Example 4} The set $\mathcal{C}_{0}$ consists of two disjoint circles of radius $r_{0}=0.25$, with centres at $(-0.3,0)$ and $(0.3,0)$. The speed is $F = 1-e^{2t-1}$. The circles touch along the $y$-axis when $t\approx 0.08$. When $t<0.5$ the exact solution to the Level-Set Equation is $\phi(x,y,t) = \min \left\{ \sqrt{(x+0.3)^{2}+y^{2}} - R(t), \sqrt{(x-0.3)^{2}+y^{2}} - R(t) \right\} $ where $R(t)= r_{0} - \frac{e^{2t}-1}{2e}+t $. Domain: $[-1.5+ 0.01e,+1.5+ 0.01e]^{2}$. $T_{F}=1.2$. $xt$- and $yt$-rep.: $r_{1} = 1/3$, $r_{2} = 2$. Skewed rep.: $r_{1} = 1/3$, $r_{2} = 5$. Domain for convergence of Algo. \ref{AlgoSidewaysPDE}: $(y,t) \in [-0.5,0.5]\times [0.2,0.5]$ and $(y,t) \in [-0.5,0.5]\times [0.5,.52]$. \subsection{The Almond example} \label{subsec:AlmondExample} The exact solution to the Level-Set Equation is \begin{eqnarray} \phi(x,y,t) &=& \left( \sqrt{x^{2}+y^{2}}-r_{0}+\frac{e^{ct}-1}{ce}-t(1+C) \right)+ \frac{t|xt-y|}{\sqrt{1+t^{2}}} \\ &=:& \tilde{\phi}(x,y,t) + g(x,y,t) \end{eqnarray} The constants are set to be: $r_0 = 1/4$, $c = 1$, and $C = .65$. The function $\phi$ is made up of two parts: $\tilde{\phi}$ is qualitatively the same as in Example 1. Domain: $[-0.5,0.5]^2$. $T_{F}=1.9$. $xt$- and $yt$-rep.: $r_{1} = 1/3$, $r_{2} =2$. Skewed rep.: $r_{1} = 1/2$, $r_{2} = 6$. \\ \textbf{Acknowledgements} The authors wish to thank Prof.\mbox{~}A.Oberman for helpful discussions. The second author would like to thank the organizers of the 2011 BIRS workshop ``Advancing numerical methods for viscosity solutions and applications'', Profs.\mbox{~}Falcone, Ferretti, Mitchell, \& Zhao for stimulating discussions which eventually lead to the present work. {} \end{document}
\begin{document} \title{Assessing the significance of fidelity as a figure of merit in quantum state reconstruction of discrete and continuous variable systems} \author{Antonio Mandarino} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \author{Matteo Bina} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \author{Carmen Porto} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \author{Simone Cialdi} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano, I-20133 Milan, Italy} \author{Stefano Olivares} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano, I-20133 Milan, Italy} \author{Matteo G. A. Paris} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano, I-20133 Milan, Italy} \date{\today} \begin{abstract} We experimentally address the significance of fidelity as a figure of merit in quantum state reconstruction of discrete (DV) and continuous variable (CV) quantum optical systems. In particular, we analyze the use of fidelity in quantum homodyne tomography of CV states and maximum-likelihood polarization tomography of DV ones, focussing attention on nonclassicality, entanglement and quantum discord as a function of fidelity to a target state. Our findings show that high values of fidelity, despite well quantifying geometrical proximity in the Hilbert space, may be obtained for states displaying opposite physical properties, e.g. quantum or semiclassical features. In particular, we analyze in detail the quantum-to-classical transition for squeezed thermal states of a single-mode optical system and for Werner states of a two-photon polarization qubit system. \end{abstract} \pacs{03.65.Ta, 42.50.Dv} \maketitle \section{Introduction} In quantum technology, it is very common to summarize the results of a reconstruction technique, either full quantum tomography \cite{revt1,LNP649,revt2,guhne} or some partial reconstruction scheme \cite{jay57,buz98,oli07,zam05,kb03,olom03}, by the use of fidelity \cite{Uhl,fuc99}. Once the information about the state of a system has been extracted from a set of experimental data, the fidelity between the reconstructed state and a given target state is calculated. Fidelity takes values in the interval $[0, 1]$. High values such as $0.9$ or $0.99$ are thus considered as evidence certifying that the reconstructed and the target states i) are very close to each other in the Hilbert space, and ii) share nearly identical physical properties. In this framework, quantum resources of the prepared state are often benchmarked with those of the target one, e.g. to assess the performance of a teleportation scheme \cite{ban04,cav04}. \par The two statements above may appear rather intuitive, with the second one following from the first one. On the other hand, it has been suggested that the use of fidelity may be misleading in several situations involving either discrete or continuous variable systems \cite{Dod12,AgFid1,AgFid2,Benedetti}.
The main goal of the present paper is to experimentally confirm the first statement and, at the same time, to provide neat examples where the second one is clearly proved wrong. \par Given two quantum states described by density matrices $\hat{\rho}_1$ and $\hat{\rho}_2$, the fidelity between them is defined as \cite{Uhl} \begin{equation} \label{Fidelity} F(\hat{\rho}_1, \hat{\rho}_2) =\hbox{Tr}\left[\sqrt{ \sqrt{\hat{\rho}_1 } \hat{\rho}_2 \sqrt{\hat{\rho}_1 }}\right]^2\,.\end{equation} Fidelity is not a proper distance in the Hilbert space. However, it can be easily linked to a distance, and in turn to a metric over the manifold of density matrices. In fact, the Bures distance between two states is defined as $$D_B(\hat{\rho}_1,\hat{\rho}_2)=\sqrt{2[ 1-\sqrt{F(\hat{\rho}_1,\hat{\rho}_2)}]}\,.$$ Fidelity also provides an upper and a lower bound to the trace distance, namely \cite{fuc99}: $$1-\sqrt{F(\hat{\rho}_1,\hat{\rho}_2)}\leq \frac12 || \hat{\rho}_1-\hat{\rho}_2||_1\leq \sqrt{1-F(\hat{\rho}_1,\hat{\rho}_2)}\,.$$ These relationships ensure that higher values of fidelity correspond to geometrical proximity of the two states in the Hilbert space. However, they do not seem straightforwardly related to the physical properties of the two states. In turn, it has been pointed out \cite{Dod12,AgFid1,AgFid2,Benedetti} that a pair of states that appear very close to each other in terms of fidelity may be very far in terms of physical resources. Relevant examples may be found with bipartite systems of either qubits or CV Gaussian states, where pairs composed of one entangled and one separable state may have a (very) high fidelity to each other. Besides, for single-mode CV states high values of fidelity may be achieved by pairs including one state with a classical analogue and a genuinely quantum state of the field. \par In this paper, we address the problem experimentally and analyze in detail the significance of fidelity as a figure of merit to assess the properties of tomographically reconstructed states. We address both discrete and continuous variable systems using quantum homodyne tomography to reconstruct CV states and maximum-likelihood polarization tomography for DV ones. In particular, we experimentally address two relevant examples: i) the reconstruction of squeezed thermal states of a single-mode radiation field, analyzing in detail the quantum-to-classical transition; and ii) the reconstruction of noisy Werner states of a two-qubit polarization system, inspecting the amount of non-classical correlations. Our results clearly show that high values of fidelity, despite well quantifying geometrical closeness between states in the Hilbert space, may be obtained for quantum states displaying very different physical properties, e.g. quantum resources. \par The paper is structured as follows. Sect.~\ref{s:STS} is devoted to continuous variables: we first describe the experimental generation of single-mode squeezed thermal states using a seeded optical amplifier, as well as the homodyne technique employed for tomography. We then present experimental results, illustrating in detail the significance of the fidelity of the reconstructed state to a target one and its non-classicality. In Sect.~\ref{s:Werner} we illustrate the experimental setup for generating two-qubit states of Werner type and the method of maximum-likelihood estimation for tomography.
We then present experimental results, analysing the significance of the fidelity of the reconstructed states to the target Werner ones in assessing their non-classical correlations, either entanglement or quantum discord. Sect.~\ref{s:conclusions} closes the paper with some concluding remarks. \section{Single-mode Gaussian states}\label{s:STS} \begin{figure}[t] \subfigure[]{\includegraphics[width=0.99\columnwidth]{Fig1a.pdf}} \subfigure[]{\includegraphics[width=0.85\columnwidth]{Fig1b.pdf}} \caption{(Color online) (a) Schematic diagram of the experimental setup to generate squeezed thermal states. See text for details. (b) Enlarged picture of the optical systems MOD1 and MOD2, used to generate the OPO input signals. The optical field is prepared with circular polarization by setting the fast axis of the $\lambda/4$ waveplate at an angle of $45^\circ$ with respect to the incident \textit{p}-polarization, and then is passed through a KDP crystal whose axes are oriented at $45^\circ$. The PBS selects only the horizontal component of the output beam, which is sent into a LiNbO$_3$ crystal whose extraordinary axis is horizontal. } \label{f:schema} \end{figure} In this section we deal with the generation and the characterization of squeezed thermal states (STS) of a single-mode radiation field, i.e. states of the form \begin{equation}\label{STS} \hat{\rho}= \hat{S}(r)\hat{\nu} (n_{\text{th}})\hat{S}^\dag(r)\,, \end{equation} where $\hat{S}(r)=\exp \big \{ \frac12 r \big [(\hat{a}^{\dag})^2 - \hat{a}^2 \big ] \big \}$ is the squeezing operator, with $r\in\mathbb{R}$, $\hat{\nu} (n_{\text{th}})= n_{\text{th}}^{\hat{a}^\dag \hat{a}}/(1+n_{\text{th}})^{\hat{a}^\dag \hat{a}+1}$ is a thermal state with average photon number $n_{\text{th}}$ and $[\hat{a},\hat{a}^\dag]=1$, $\hat{a}$ and $\hat{a}^\dag$ being field operators. Upon defining the quadrature operators \begin{equation}\label{Quadrature} \hat{x}_{\theta} \equiv \hat{a}\, {\rm e}^{- i \theta } + \hat{a}^\dag \, {\rm e}^{ i \theta }\,, \end{equation} with $\theta \in [0,\pi]$, the STS are fully characterized by their first and second moments \begin{subequations}\begin{align} \label{Quad_ave}\langle \hat{x}_\theta \rangle &= 0 \qquad \forall \theta\\ \label{Quad_var} \langle \Delta \hat{x}_\theta^2 \rangle &= (1 + 2 n_{\text{th}})({\rm e}^{2r}\cos^2\theta + {\rm e}^{-2r}\sin^2\theta)\,, \end{align} \end{subequations} where $\langle\cdots\rangle\equiv \hbox{Tr}[\hat{\rho}\,\cdots]$. In terms of the canonical operators $\hat{x}\equiv\hat{x}_0$ and $\hat{p}\equiv\hat{x}_{\pi/2}$, the covariance matrix (CM) of a STS reads \begin{equation}\label{CM} \sigma=\begin{pmatrix} \langle \Delta \hat{x}^2 \rangle & 0 \\ 0 & \langle \Delta \hat{p}^2 \rangle \end{pmatrix}=\begin{pmatrix} s/\mu & 0\\ 0 & 1/(\mu s) \end{pmatrix}, \end{equation} where $\mu= {\rm Tr}[\hat{\rho}^2]= (2n_{\text{th}}+1)^{-1}$ is the purity of the state $\hat{\rho}$ and $s\equiv {\rm e}^{2r}$ is the squeezing factor. A STS is nonclassical, i.e. it corresponds to a singular Glauber P-function, whenever the conditions $s<\mu$ or $s > \mu^{-1}$ are satisfied.
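\par To make the parametrization above concrete, the following minimal Python sketch (with names of our own choosing) builds the CM (\ref{CM}) from $n_{\text{th}}$ and $r$ and tests the nonclassicality condition:
\begin{verbatim}
import numpy as np

def sts_covariance(n_th, r):
    # CM, purity and squeezing factor of S(r) nu(n_th) S(r)^dag (sketch)
    mu = 1.0 / (2.0 * n_th + 1.0)   # purity
    s = np.exp(2.0 * r)             # squeezing factor
    sigma = np.diag([s / mu, 1.0 / (mu * s)])
    return sigma, mu, s

def is_nonclassical(mu, s):
    # singular Glauber P-function iff s < mu or s > 1/mu
    return (s < mu) or (s > 1.0 / mu)
\end{verbatim}
For $r<0$, as in the experiment described below, nonclassicality corresponds to squeezing of the position quadrature below the shot-noise level.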
The total energy of a STS is given by \begin{equation}\label{Ntot} N_{\text{tot}}=\langle \hat{a}^\dag \hat{a}\rangle =n_{\text{th}}+n_{\text{s}}+2 n_{\text{th}} n_{\text{s}}\,, \end{equation} where $n_s=\sinh^2 r$ is the number of squeezing photons and $n_{\text{th}}$ is the thermal contribution to energy. \par According to Eq.~(\ref{Ntot}), it is possible to find a suitable parametrization of the single-mode STS CM (\ref{CM}) in terms of the different energy contributions \begin{subequations}\label{FitVariance}\begin{align} \langle \Delta \hat{x}^2 \rangle&= \left(1 + 2 \frac{N_{\text{tot}} - n_{\rm s}}{2 n_{\rm s} + 1 }\right) (1 + 2 n_s - 2 \sqrt{n_{\rm s} + n_{\rm s}^2})\\ \langle \Delta \hat{p}^2 \rangle&= \left(1 + 2 \frac{N_{\text{tot}} - n_{\rm s}}{2 n_{\rm s} + 1 }\right)\frac{1}{(1 + 2 n_{\rm s} - 2 \sqrt{n_{\rm s} + n_{\rm s}^2})}, \end{align}\end{subequations} from which the linear behavior of the variances as a function of the total energy $N_{\text{tot}}$ is apparent. \par The fidelity between two STS is given by \cite{MarianFidelity} \begin{equation}\label{FidGaussian} F(\sigma_1, \sigma_2) =\frac{1}{\sqrt{\Delta + \delta} - \sqrt{\delta}}, \end{equation} where $\Delta = \frac{1}{4} \det[\sigma_1 + \sigma_2 ]$ and $\delta= \frac{1}{4} \prod_{i=1,2} (\det \sigma_i - 1)$. \subsection{Experimental setup} \begin{table*}[t!] \caption{\label{t:StateExp} Characterization, via homodyne tomography, of the $m=14$ experimental STS in terms of the position and momentum variances, total energy, squeezing factor and purity. The STS display squeezing in position and anti-squeezing in momentum coordinates ($r<0$).} \begin{ruledtabular} \begin{tabular}{c | c c c c c} state \#& $\langle \Delta \hat{x}^2 \rangle$ & $\langle \Delta \hat{p}^2 \rangle$ & $\langle \hat{a}^\dag \hat{a} \rangle$ & $s_{\text{exp}}$ & $\mu_{\text{exp}}$ \\[1mm] \hline 1 & $0.48\pm 0.03$ & $3.15\pm 0.09$ & $0.41\pm 0.02$ & $0.39\pm 0.01$ & $0.81\pm 0.03$ \\[.8mm] 2 & $0.67\pm 0.04$ & $3.33\pm 0.09$ & $0.50\pm 0.02$ & $0.45\pm 0.01$ & $0.67\pm 0.02$ \\[.8mm] 3 & $0.62\pm 0.04$ & $3.77\pm 0.11$ & $0.60\pm 0.02$ & $0.40\pm 0.02$ & $0.66\pm 0.02$ \\[.8mm] 4 & $0.69\pm 0.05$ & $3.94\pm 0.11$ & $0.66\pm 0.02$ & $0.41\pm 0.02$ & $0.61\pm 0.02$ \\[.8mm] 5 & $0.70\pm 0.05$ & $4.51\pm 0.12$ & $0.80\pm 0.03$ & $0.39\pm 0.02$ & $0.56\pm 0.02$ \\[.8mm] 6 & $0.77\pm 0.05$ & $4.54\pm 0.13$ & $0.83\pm 0.03$ & $0.41\pm 0.02$ & $0.54\pm 0.02$ \\[.8mm] 7 & $0.77\pm 0.05$ & $4.60\pm 0.13$ & $0.84\pm 0.03$ & $0.41\pm 0.02$ & $0.53\pm 0.02$ \\[.8mm] 8 & $0.93\pm 0.06$ & $5.00\pm 0.14$ & $0.98\pm 0.03$ & $0.43\pm 0.02$ & $0.46\pm 0.02$ \\[.8mm] 9 & $0.95\pm 0.06$ & $5.36\pm 0.15$ & $1.08\pm 0.03$ & $0.42\pm 0.01$ & $0.44\pm 0.02$ \\[.8mm] 10 & $0.93\pm 0.07$ & $5.56\pm 0.15$ & $1.12\pm 0.03$ & $0.41\pm 0.02$ & $0.44\pm 0.02$ \\[.8mm] 11 & $1.00\pm 0.07$ & $5.80\pm 0.17$ & $1.20\pm 0.03$ & $0.42\pm 0.02$ & $0.42\pm 0.02$ \\[.8mm] 12 & $1.13\pm 0.07$ & $5.87\pm 0.16$ & $1.25\pm 0.03$ & $0.44\pm 0.02$ & $0.39\pm 0.01$ \\[.8mm] 13 & $1.11\pm 0.08$ & $6.33\pm 0.18$ & $1.36\pm 0.04$ & $0.42\pm 0.02$ & $0.38\pm 0.01$ \\[.8mm] 14 & $1.30\pm 0.08$ & $6.16\pm 0.18$ & $1.36\pm 0.04$ & $0.46\pm 0.02$ & $0.35\pm 0.01$ \\[.8mm] \end{tabular} \end{ruledtabular} \end{table*} In order to generate STS we employ the experimental setup schematically depicted in Fig.~\ref{f:schema} (a).
It consists of three stages: laser, signal generator (SG) and homodyne detector (HD). Our source is a home-made internally frequency doubled Nd:YAG laser. It is based on a four-mirror ring cavity and the active medium is a cylindrical Nd:YAG crystal (diameter 2 mm and length 60 mm) radially pumped by three arrays of water-cooled laser diodes @ 808 nm. The crystal for the frequency doubling is a 10 mm periodically poled MgO:LiNbO$_3$ (PPLN) crystal, thermally stabilized ($\sim$70$^\circ$C). Inside the cavity is placed an optical diode that consists of a half-wave plate (HWP), a Faraday rotator (15$^\circ$) and a Brewster plate (BP) in order to obtain single-mode operation. \par The laser output $@$ 532~nm is used as the pump for an optical parametric oscillator (OPO) while the output at 1064~nm is split into two beams by using a polarizing beam splitter (PBS): one is used as the local oscillator (LO) for the homodyne detector and the other as the input for the OPO. The OPO cavity is linear with a free spectral range (FSR) of 3300~MHz, the output mirror has a reflectivity of 92\% and the rear mirror 99\%. A phase modulator (PM) generates a signal at a frequency of 110~MHz (HF) used for the active stabilization of the OPO cavity via the Pound-Drever-Hall (PDH) technique \cite{PDH,SBrec}: the beam reflected from the cavity is detected (D) and used to generate the error signal of the PDH apparatus. This error signal drives a piezo connected to the rear mirror of the OPO cavity to actively control its length. \par The homodyne detector (HD) consists of a 50:50 beam splitter, two low-noise detectors and a differential amplifier based on an LMH6624 operational amplifier. The visibility of the interferometer is about 98\%. To remove the low-frequency signal we use a high-pass filter @~500~kHz and then the signal is sent to the demodulation apparatus. The information about the signal, which is at a frequency $\Omega$ of about 3~MHz, is retrieved by using an electronic apparatus which consists of a phase shifter, a mixer and a low-pass filter @~300~kHz. The LO phase is spanned between 0 and $2\pi$ thanks to a piezo-mounted mirror linearly driven by a ramp generator (RG). \par Our goal is to study a single-mode squeezed thermal state and therefore we have to generate a thermal seed to be injected into the OPO. The density matrix of a thermal state in the Glauber representation reads \begin{equation}\label{State_exp} \hat{\nu}_{\text{exp}}(n_{\rm th})=\int_0^\infty d\lvert\alpha\rvert \frac{2|\alpha|}{n_{\rm th}}e^{-\frac{\lvert\alpha\rvert^2}{n_{\rm th}}} \int_0^{2\pi}\frac{d\phi}{2\pi}\lvert\,\lvert\alpha\rvert e^{i\phi} \rangle \langle\lvert\alpha\rvert e^{i\phi}\rvert\,, \end{equation} i.e., it can be viewed as a mixture of coherent states with phase $\phi$ uniformly distributed over the range $0$ to $2\pi$, and a given amplitude $\lvert\alpha\rvert$ distribution. Therefore, we have to generate a rapid sequence of coherent states with $\lvert\alpha\rvert$ and $\phi$ randomly selected from these distributions. \par Our strategy is to exploit the combined effect of the two optical systems (MOD1 and MOD2 in Fig.~\ref{f:schema} (a)) described in Ref.~\cite{bachor} and sketched in more detail in Fig.~\ref{f:schema} (b). MOD1 generates a coherent state with phase $0$, while MOD2 generates a coherent state with phase $\frac{\pi}{2}$. By matching these coherent states with properly chosen amplitudes, it is possible to generate an arbitrary coherent state.
In order to control this process via PC, MOD1 and MOD2 are driven by two identical electronic circuits which consist of a phase shifter and a mixer. The PC processes the $\lvert\alpha\rvert$ and $\phi$ values of the coherent state which we want to generate, and converts them into the voltage signals which are sent to the mixer together with the sinusoidally varying signals at frequency $\Omega$ in order to obtain the right modulation signals on MOD1 and MOD2. \par Finally, in order to obtain the desired thermal state, the PC generates random $\lvert\alpha\rvert$ and $\phi$ values according to their specified distributions (see Eq. (\ref{State_exp})) and converts them into two simultaneous trains of voltage values which are sent to the crystals in a time window of 70 ms with a repetition rate of 100 kHz. Generation and acquisition operations are synchronized in the same time window at the same sampling rate. Therefore we collect 7000 homodyne data points $\{(\theta_k,x_k)\}$, LO phase and quadrature value, respectively. The sampling is triggered by a signal generated by the RG to ensure the synchronization between the acquisition process and the scanning of the LO with $\theta_k \in [0,2\pi]$. \par Notice that seeding the OPO is a crucial step to observe the quantum-to-classical transition with STS. As a matter of fact, without seeding the OPO, the output signal is a squeezed vacuum state, which is then degraded to a STS with a nonzero thermal component by propagation in a lossy channel. However, STS obtained in this way are always non-classical for any value of the loss and the squeezing parameters \cite{rossi:04,fop:05,oli:rev}. \subsection{Homodyne Tomography} \begin{figure} \includegraphics[width=0.9\columnwidth]{Fig2.pdf} \caption{(Color online) Tomographic reconstruction of the variances of the squeezed quadrature $\hat{x}$ (red dots) and of the anti-squeezed quadrature $\hat{p}$ (green dots) as a function of the total energy $N_{\text{tot}}$, for the $m=14$ experimental STS. Dashed lines represent linear fits of the experimental data (see Eqs. (\ref{FitVariance})), from which we obtain the number of squeezed photons $n_{\rm s} \simeq 0.2$. The black dotted horizontal line is the shot-noise level at $\langle \Delta \hat{x}^2 \rangle= \langle \Delta \hat{p}^2 \rangle=1$.} \label{f:ThSqVariance} \end{figure} We perform state reconstruction of single-mode CV systems by quantum homodyne tomography, i.e. by collecting homodyne data at different LO phases and applying the pattern functions method \cite{revt1}. This technique allows one to obtain the expectation value of any observable $\hat{O}$ on a given state $\hat{\rho}$ starting from a set of homodyne data $\{(\theta_k , x_k)\}$, $x_k$ being the $k$-th outcome from the measurement of the quadrature (\ref{Quadrature}) at phase $\theta_k$, with $k=1,\ldots,M$.
Upon exploiting the Glauber representation of operators in polar coordinates, the average value of a generic observable $\hat{O}$ may be rewritten as
\begin{equation}\label{O_ave}
\ave{\hat{O}}= \int_0^\pi \frac{d\theta}{\pi} \int_{-\infty}^{+\infty} dx\, p(x, \theta) \mathcal{R}[\hat{O}] (x, \theta),
\end{equation}
where $p(x, \theta)= \bra{x_\theta} \hat{\rho} \ket{x_\theta}$ is the distribution of quadrature outcomes, with $\{\ket{x_\theta}\}$ the set of eigenvectors of $\hat{x}_\theta$, and $\mathcal{R}[\hat{O}] (x, \theta )= \int_{-\infty}^{+\infty} dy |y| \hbox{Tr}[\hat{O}e^{i y (\hat{x}_\theta- x)}]$ is the estimator of the operator ensemble average $\ave{\hat{O}}$. For large samples $M\gg 1$, the integral (\ref{O_ave}) can be recast in the discrete form
\begin{equation}\label{O_ave_discrete}
\ave{\hat{O}}\simeq \frac{1}{M}\sum_{k=1}^M \mathcal{R}[\hat{O}] (x_k, \theta_k)\,.
\end{equation}
The uncertainty of the estimated value $\ave{\hat{O}}$ is ruled by the central limit theorem and scales as $1/\sqrt{M}$, namely
\begin{equation}\label{Precision_O}
\delta \langle \hat{O} \rangle=\frac{1}{\sqrt{M}}\sqrt{\sum_{k=1}^M\frac{\big[\mathcal{R}[\hat{O}] (x_k, \theta_k)\big]^2 - \langle \hat{O} \rangle^2}{M}}.
\end{equation}
In order to properly characterize a single-mode prepared in a Gaussian STS, we need to estimate the first two moments of the quadrature operator $\hat{x}_\phi$ and reconstruct the first-moment vector and the CM, as well as the total energy $\hat{a}^\dag \hat{a}$ of the state. We thus need the following estimators \cite{revt1}:
\begin{subequations}\begin{align}
\mathcal{R}[\hat{x}_\phi] (x, \theta) &= 2 x \cos(\theta-\phi)\\
\mathcal{R}[\hat{x}_\phi^2] (x, \theta) &= (x^2-1)\Big\{1+2\cos[2(\theta-\phi)]\Big \}+1\\
\mathcal{R}[\hat{a}^\dag \hat{a}] (x, \theta) &=\frac12 \left (x^2-1\right ).
\end{align}\end{subequations}
In this way it is possible to compute the average value $\langle \hat{O} \rangle$ and the fluctuations $\langle \Delta\hat{O}^2 \rangle\equiv \langle \hat{O}^2 \rangle-\langle \hat{O} \rangle^2$ for the observables of interest, together with the corresponding uncertainties (\ref{Precision_O}).
\par We collect $M=7000$ homodyne data $\{(x_k, \theta_k)\}$ for each state and address the quantum-to-classical transition by generating $m=14$ STS with increasing thermal component, as the squeezing is fixed by the geometry of the experimental setup. For all the generated states, we tested the compatibility with the typical form of the STS, i.e. null first-moment vector (\ref{Quad_ave}) and diagonal CM (\ref{CM}). We characterized these states (see Table \ref{t:StateExp}) in terms of the position $\langle \Delta \hat{x}^2 \rangle$ and momentum $ \langle \Delta \hat{p}^2 \rangle$ variances, the total energy $N_{\text{tot}}\equiv\langle \hat{a}^\dag \hat{a} \rangle$, together with the squeezing factor $s_{\text{exp}}$ and the purity $\mu_{\text{exp}}$. As already mentioned at the beginning of this section, the shot-noise threshold is set at $ \langle \Delta \hat{x}^2 \rangle= \langle \Delta \hat{p}^2 \rangle=1$, below which the state of the detected single-mode radiation displays genuine quantum squeezing. The generated STS display squeezing in the position quadrature and anti-squeezing in the momentum quadrature (i.e. we have a real and negative squeezing parameter $r<0$).
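As a concrete illustration of Eqs.~(\ref{O_ave_discrete}) and (\ref{Precision_O}), the quadrature moments and the total energy can be estimated from the homodyne sample $\{(\theta_k,x_k)\}$ with a few lines of Python (an illustrative sketch, not the analysis code used for the results reported below):
\begin{verbatim}
import numpy as np

def tomo_average(R):
    """Discrete tomographic average (O_ave_discrete) and its
    uncertainty (Precision_O) for sampled estimator values R(x_k, theta_k)."""
    M = len(R)
    mean = R.mean()
    err = np.sqrt(np.sum((R**2 - mean**2) / M)) / np.sqrt(M)
    return mean, err

def quadrature_moments(x, theta, phi=0.0):
    """First two moments of x_phi and the mean photon number from homodyne data."""
    R_x  = 2.0 * x * np.cos(theta - phi)
    R_x2 = (x**2 - 1.0) * (1.0 + 2.0 * np.cos(2.0 * (theta - phi))) + 1.0
    R_n  = 0.5 * (x**2 - 1.0)
    (m1, dm1), (m2, dm2), (n, dn) = (tomo_average(R) for R in (R_x, R_x2, R_n))
    return {"mean": (m1, dm1), "variance": m2 - m1**2, "N_tot": (n, dn)}
\end{verbatim}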
In Fig.~\ref{f:ThSqVariance} we show the position and momentum variances as a function of the total energy for the $m=14$ experimentally generated STS. A linear fit, following Eq. (\ref{FitVariance}), provides the value of the number of squeezed photons $n_{\rm s}\simeq 0.2$, which corresponds to $\sim 3.7~ \text{dB}$ of squeezing. Figure~\ref{f:ThSqVariance} makes apparent the capability of the experimental setup to generate STS on demand by seeding the OPO with a controlled number of thermal photons and, in turn, to monitor the quantum-to-classical transition of a single-mode Gaussian state of light.
\subsection{Fidelity}
\label{s:CVfidelity}
In order to perform the uncertainty budget, to discuss the statistical distribution of relevant quantities, and to assess the statistical significance of fidelity, we generate $N_{\text{MC}}=10^3$ Monte Carlo replica data samples (see Appendix \ref{s:MC}) for each experimental state. Resampled (raw) homodyne data are drawn from Gaussian distributions using the experimental values of Table~\ref{t:StateExp} to build the average values (\ref{Quad_ave}) and the variances (\ref{Quad_var}) of the distributions. For all the $m=14$ STS we apply homodyne tomography and analyze the distribution of the reconstructed states in the neighbourhood of the experimental target state. Results are shown in Fig.~\ref{f:ThsqBorder} and Fig.~\ref{f:ThSq09}, using the squeezing-purity plane $\{s,\mu\}$ representation. Figure~\ref{f:ThsqBorder} focuses on three specific states (number 7, 9 and 13 of Table \ref{t:StateExp}) which are close to the quantum-classical boundary. Target states correspond to black points whereas the ovoidal regions denote the set of states having fidelity $F>0.995$ to the target. The darker, stripe-like regions within each balloon correspond to states satisfying the additional constraint of having fluctuations of the total energy (\ref{Ntot}) at most within one standard deviation.
\begin{figure}[h!]
\includegraphics[width=0.99\columnwidth]{Fig3.pdf}
\caption{(Color online) Statistical distribution of reconstructed STS in the squeezing-purity plane $\{s,\mu\}$. Data come from $N_{\text{MC}}$ Monte Carlo resampled data sets for STS (see text and Appendix \ref{s:MC}). From left to right, we have distributions for three experimental STS (state number $7, 9$ and $13$ of Table~\ref{t:StateExp}), shown as black points with the corresponding error bars. The triangular-like region $s>\mu$ contains states with a classical analogue. The whole set of reconstructed states is contained in the ovoidal regions, i.e. all states have fidelity $F>0.995$ to the corresponding target state. The stripe-like regions are obtained by adding a constraint on the total energy, i.e. $N_{\text{tot}}^{(\text{exp})}-\delta N_{\text{tot}}^{(\text{exp})} < \langle \hat{a}^\dag\hat{a} \rangle < N_{\text{tot}}^{(\text{exp})}+\delta N_{\text{tot}}^{(\text{exp})}$.}
\label{f:ThsqBorder}
\end{figure}
\par As is apparent from the plot, the distribution of STS is concentrated within those stripes. On the other hand, although the distributions are very sharp in terms of fidelity to their targets (recall that the balloons contain states with fidelity $F\geq 0.995$ to the target), their physical properties may be very different. This fact is clearly illustrated by looking at nonclassicality, i.e. by checking whether the Glauber $P$-function of the state is regular (this happens if $s>\mu$, corresponding to a triangular region in Fig.
\ref{f:ThsqBorder}) or singular: states with very high fidelity to a classical or a nonclassical target may not share this property with the target itself.
\par This effect may not be particularly surprising for target states at the border of the classicality region, even for high values of fidelity. On the other hand, the point becomes far more relevant if values of fidelity commonly used in experiments are considered. In Fig.~\ref{f:ThSq09} we show the balloons of states having fidelity $F\geq0.9$ or $F\geq0.95$ to a nonclassical target STS. As is apparent from the plot, {\em all} the generated STS are contained in the balloons, irrespective of their nonclassicality. The compatibility region may be considerably reduced in size by adding an energy constraint but, nonetheless, a large number of states may still fall in the region of classicality.
\par Overall, we conclude that fidelity is not a significant figure of merit to assess the nonclassicality of STS and should not be employed to benchmark a generation scheme or certify quantum resources for a given protocol.
\begin{figure}[h!]
\includegraphics[width=0.99\columnwidth]{Fig4.pdf}
\caption{(Color online) Statistical distribution of reconstructed STS in the squeezing-purity plane $\{s,\mu\}$ for all the experimental target states in Table \ref{t:StateExp} (black points). The two balloons include states having fidelity to a nonclassical target STS (with $s=0.41$ and $\mu=0.53$) larger than $F>0.90$ (outer balloon) or $F>0.95$ (inner balloon), respectively, i.e. values commonly recognized as regions of {\em high fidelity}. The size of the compatibility regions may be reduced by adding energy constraints (as discussed in Fig.~\ref{f:ThsqBorder}). A significant number of states may still display opposite classicality properties compared to the target.}
\label{f:ThSq09}
\end{figure}
\section{Two-qubit systems}
\label{s:Werner}
In this Section we deal with discrete two-qubit systems. In particular we focus on two-photon polarization states $\ket{HH}$, $\ket{HV}$, $\ket{VH}$ and $\ket{VV}$, and address the reconstruction of statistical mixtures belonging to the class of Werner states:
\begin{equation}\label{Werner}
\hat{\rho}^{(w)}=p\,|\Psi^-\rangle\langle \Psi^-| +\frac{1-p}{4} \hat{\mathbb{I}}_4,
\end{equation}
where $\hat{\mathbb{I}}_4$ is the identity operator in the 4-dimensional Hilbert space of two qubits and $\ket{\Psi^-}$ is one of the maximally entangled Bell states
\begin{equation}\begin{split}
\ket{\Phi^\pm}=\frac{\ket{HH} \pm\ket{VV}}{\sqrt{2}} \quad\mbox{and}\quad \ket{\Psi^\pm}=\frac{\ket{HV} \pm\ket{VH}}{\sqrt{2}}.
\end{split}\end{equation}
The parameter $-1/3\leq p\leq 1$ tunes the mixture (\ref{Werner}) from the maximally mixed state $\hat{\mathbb{I}}_4/4$ for $p=0$ to the maximally entangled Bell state $\ket{\Psi^-}$ for $p=1$. In between, the quantum-to-classical transition is located at $p =1/3$, with entangled states satisfying $p>1/3$ and separable ones $p\leq 1/3$.
\par
\begin{figure}[h!]
\includegraphics[width=0.95\columnwidth]{Fig5.pdf}
\caption{(Color online) Schematic diagram of the experimental setup. A linearly polarized cw laser diode at 405 nm pumps a pair of BBO crystals cut for type-I downconversion. The horizontal and vertical amplitudes of the photon pairs are balanced by a half-wave plate set along the pump path (HWP1), whereas an additional BBO crystal (TD) is placed on the pump path to compensate for the temporal delay.
The amplitude modulator (AM) consists of a half-wave plate and a polarizing beam splitter. Signal and idler beams travel through the SLM, which provides purification of the generated states. A half-wave plate (HWP2) is inserted on the signal path in order to generate the state $\hat{\rho}_\lambda$ (see the text), whereas a quarter-wave plate, a half-wave plate, and a polarizer (sectors T1 and T2) are used for the tomographic reconstruction. Finally the beams are detected by detectors D1 and D2 and sent to single-photon counting modules (CC).}
\label{f:discreto}
\end{figure}
\subsection{Experimental generation of Werner states}
A schematic diagram of the experimental setup is sketched in Fig.~\ref{f:discreto}. Photon pairs are generated by type-I downconversion from a pair of beta barium borate (BBO) crystals, in a non-collinear configuration, pumped with a linearly polarized cw 405 nm laser diode, whose effective power on the generating crystals is about 10 mW. The experimental apparatus has already been described in detail in Refs.~\cite{generazione,generazione2}. Here a half-wave plate (HWP2 in Fig.~\ref{f:discreto}) has been inserted in front of detector D1 to perform the $\ket{\Phi^-} \rightarrow \ket{\Psi^-}$ transformation. A programmable one-dimensional spatial light modulator (SLM) is placed on the path of signal and idler in order to control the visibility of the generated states. The SLM provides the setup with great flexibility, allowing the experimenter to choose and set the visibility of the generated states \cite{SLM,SLM2}. Finally, photons are focused into two multimode fibers and sent to single-photon counting modules (CC).
\par Our experimental apparatus allows us to mix two types of Bell states at a time, either $\ket{\Psi^\pm}$ or $\ket{\Phi^\pm}$. In order to obtain a Werner state (\ref{Werner}) we generate the polarization-entangled state $\hat{\rho}_\lambda= \lambda\ketbra{\Psi^-}{\Psi^-}+(1-\lambda)\ketbra{\Psi^+}{\Psi^+}$ and the mixed state $\hat{\rho}_{\rm mix}=\left( \ketbra{\Phi^+}{\Phi^+}+\ketbra{\Phi^-}{\Phi^-} \right)/2 $ \cite{generazione,visibilita}. Werner states may be obtained by suitably mixing these two states, $\hat{\rho}^{(w)}=f_1\hat{\rho}_\lambda+f_2\hat{\rho}_{\rm mix}$, with proper probabilities, given by $f_1=\frac{1+p}{2}$, $f_2=\frac{1-p}{2}$ and $\lambda=\frac{2p}{p+1}$. The mixed state $\hat{\rho}_{\rm mix}$ is obtained using the same scheme as in Fig.~\ref{f:discreto}, upon removing HWP2 from the signal path and setting the SLM in order to get $\lambda\simeq 0$. The frequencies $f_1$ and $f_2$ are tuned by changing the power of the pump beam with an amplitude modulator (AM). The full range of Werner states may be explored.
\par
\begin{table*}[t!]
\caption{\label{t:Werner} Statistical analysis of the tomography of $N_{\text{MC}}=10^3$ two-qubit states, having fidelities $F(\overline{\hat{\rho}_k},\hat{\rho}_k^{(w)})$ with target Werner states of parameter $p_{k}^{(w)}$. The average values of the least eigenvalue $e_m(\hat{\rho}^{(\tau)})$ and of the quantum discord $D(\hat{\rho})$ are reported for both the distributions of tomographic states and of the approximated Werner states.
}
\begin{ruledtabular}
\begin{tabular}{c | c c c c }
 & state 1 & state 2 & state 3 & state 4 \\[1mm]
\hline
$p_{k}^{(w)}$ & $0.32\pm0.04$ & $0.35\pm0.04$ & $0.28\pm0.04$ & $0.44\pm0.05$ \\[.8mm]
$F(\overline{\hat{\rho}_k},\hat{\rho}_k^{(w)})$ & $0.985^{+ 0.006}_{- 0.01}$ & $0.988^{+ 0.005}_{- 0.01}$ & $0.987^{+ 0.006}_{- 0.01}$ & $0.985^{+ 0.007}_{- 0.02}$ \\[.8mm]
$e_m(\overline{\hat{\rho}_k}^{(\tau)})$ & $0.01\pm0.03$ & $-0.01\pm0.03$ & $0.04\pm0.03$ & $-0.07\pm 0.03$ \\[.8mm]
$e_m([\hat{\rho}_k^{(w)}]^{(\tau)})$ & $0.01\pm0.03$ & $-0.01\pm0.03$ & $0.04\pm0.03$ & $-0.08\pm 0.04$ \\[.8mm]
$D(\overline{\hat{\rho}_k})$ & $0.08\pm0.02$ & $0.10\pm0.02$ & $0.06\pm0.02$ & $0.14 \pm 0.02$ \\[.8mm]
$D(\hat{\rho}_k^{(w)})$ & $0.11\pm0.03$ & $0.14\pm0.03$ & $0.06\pm0.02$ & $0.21 \pm 0.04$ \\[.8mm]
\end{tabular}
\end{ruledtabular}
\end{table*}
The tomographic reconstruction is performed by measuring 16 projective and independent observables in the two-qubit Hilbert space, namely $P_j=\ketbra{\psi_j}{\psi_j}$ (with $j= 1, \ldots,16$). Different settings of the apparatus, obtained by combining a quarter-wave plate, a half-wave plate and a polarizer (sectors T1 and T2 in Fig.~\ref{f:discreto}), are employed \cite{kb00,jam01}. Each of the 16 measurements corresponds to 30 acquisitions, in a time window of 1~s, of coincidence photon counts $\{n_j \}_{j=1}^{16}$, i.e. the outcomes of the projectors $P_j$.
\subsection{Tomography with MLE}
The density matrix of the two-qubit states generated in the experiment has been reconstructed using the maximum-likelihood (MLE) tomographic protocol \cite{kb00,jam01}. This scheme adopts a suitable parametrization of the density matrix, namely $\hat{\rho}(\mathbb{T})= T^{\dag}T / \hbox{Tr}[T^{\dag}T]$, where $T$ is a complex lower triangular matrix and $\mathbb{T}=\{t_j\}_{j=1}^{16}$ is the set of 16 parameters characterizing the density matrix. In this way it is ensured that $\hat{\rho}$ is positive and Hermitian (Cholesky decomposition). The MLE protocol allows one to recover the set $\mathbb{T}$ by means of a constrained optimization procedure with Lagrange multipliers, which accounts for the normalization condition $\hbox{Tr}[\hat{\rho}]=1$, involving the set of data coming from the 16 experimental measurements. In particular, the logarithmic likelihood functional to be minimized reads
\begin{equation}
\mathcal{L}(\mathbb{T})= \sum _{j=1}^{16} \frac{ [\,\mathcal{N}\bra {\psi_{j}} \hat{\rho}(\mathbb{T}) \ket {\psi_{j}} - n_j \,]^2}{2\, \mathcal{N} \bra {\psi_{j}} \hat{\rho}(\mathbb{T}) \ket {\psi_{j}} },
\end{equation}
where $\mathcal{N} = \sum _{j=1}^{4} n_j $ is a constant proportional to the total number of acquisitions.
\par We experimentally generated $N_{\text{exp}}=4$ two-qubit states not too far from the border between separable and entangled states (see Table~\ref{t:Werner}). MLE quantum tomography shows that the reconstructed density matrices do not display the typical X-shape of an ideal Werner state (\ref{Werner}) with real-valued elements. A possible route to extract the desired Werner state is based on the maximization of the fidelity between the experimental state and a generic Werner state (\ref{Werner}). This procedure sounds reasonable and, in principle, may allow one to assess the quantum resources contained in the generated state, as well as to exploit them in order to accomplish quantum tasks.
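Before turning to the limits of this fidelity-based inference, we sketch the parametrization for concreteness: the construction of $\hat{\rho}(\mathbb{T})$ and of the functional $\mathcal{L}(\mathbb{T})$ can be written in a few lines of Python (illustrative only, not our reconstruction code; the minimization itself is left to a generic optimizer):
\begin{verbatim}
import numpy as np

def rho_from_params(t):
    """rho(T) = T^dag T / Tr[T^dag T], with T a 4x4 complex lower-triangular
    matrix built from 16 real parameters (4 diagonal, 6 complex off-diagonal)."""
    t = np.asarray(t, dtype=float)
    T = np.zeros((4, 4), dtype=complex)
    T[np.diag_indices(4)] = t[:4]
    rows, cols = np.tril_indices(4, k=-1)
    T[rows, cols] = t[4:10] + 1j * t[10:16]
    rho = T.conj().T @ T
    return rho / np.trace(rho).real

def likelihood_functional(t, projectors, counts):
    """L(T): sum_j [N <psi_j|rho|psi_j> - n_j]^2 / (2 N <psi_j|rho|psi_j>)."""
    rho = rho_from_params(t)
    N = sum(counts[:4])                                  # normalization constant
    probs = [max(np.real(np.trace(P @ rho)), 1e-12) for P in projectors]
    return sum((N * p - n)**2 / (2.0 * N * p) for p, n in zip(probs, counts))
\end{verbatim}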
On the other hand, we will show in the following that a fidelity-based inference is in general misleading and should be avoided in assessing the true quantum properties of the experimentally generated state.
\subsection{Fidelity}
\label{s:fidelityqubit}
In order to perform the statistical analysis of the data and evaluate uncertainties we have resampled the photon-count data to obtain $N_{\text{MC}}=10^3$ repeated samples for each of the $N_{\text{exp}}=4$ experimental states (see Appendix \ref{s:MC} for details). The significance of fidelity may be assessed by comparing two possible strategies to reconstruct the quantum properties of the generated states and their distribution. In the first strategy we evaluate properties from the reconstructed states $\hat{\rho}_k^i$ ($k=1,\ldots,N_{\text{exp}}$ and $i=1,\ldots,N_{\text{MC}}$) as obtained by the MLE tomography, whereas in the second one we look for the closest Werner state, in terms of fidelity, for each reconstructed state and analyze the properties of this class of Werner states $[\hat{\rho}_k^i]^{(w)}$.
\par The first method is based only on tomographic data and provides an average MLE two-qubit state $\overline{\hat{\rho}_k}=\sum_{i=1}^{N_{\text{MC}}}\hat{\rho}_k^i/N_{\text{MC}}$, which optimizes the likelihood of the experimental data. This average state may then be employed to infer a Werner target state $\hat{\rho}^{(w)}_k$, of the form given in Eq. (\ref{Werner}), via a maximization of the fidelity $F(\overline{\hat{\rho}_k},\hat{\rho}_k^{(w)})$. Upon adopting the second strategy, we obtain a distribution of approximated Werner states, with an average state compatible, at least in principle, with the Werner target state of the first strategy. The parameters $p_{k}^{(w)}$ characterizing the Werner target states are reported in Table~\ref{t:Werner}.
\par In the following, we analyze how some properties of the quantum states distribute around the target states in terms of fidelity, depending on which of the two strategies has been adopted. In particular we consider the amount of quantum correlations of two-qubit states, as quantified by entanglement and quantum discord.
\begin{figure}
\subfigure[]{\includegraphics[width=0.4\textwidth]{Fig6a.pdf}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{Fig6b.pdf}}
\caption{(Color online) (a) Distribution of $e_m([\hat{\rho}_k^i]^{(\tau)})$ for $10^3$ resampled states as a function of the Werner parameter $p$. The $N_{\text{exp}}=4$ average states $\overline{\hat{\rho}_k}$ are highlighted with black dots and error bars, matching the theoretical curve $e_m(\hat{\rho}^{(w)})=(1-3p)/4$ (dashed black line) corresponding to the minimum eigenvalue of the partially transposed ideal Werner state (\ref{Werner}). Moreover, it is evident that many states may cross the boundary between entangled and separable states. (b) Distribution of $e_m([\hat{\rho}_4^i]^{(\tau)})$ for $10^3$ simulations of the target state $\overline{\hat{\rho}_4}$ as a function of the fidelity $F(\hat{\rho}_4^i,\hat{\rho}_4^{(w)})$ (green dots). The same distribution as a function of $F([\hat{\rho}_4^i]^{(w)},\hat{\rho}_4^{(w)})$ (orange dots) matches the theoretical parametric curve (dashed black curve) obtained by evaluating $F$ and $e_m$ of the approximated Werner states.
The values of entanglement for the average state $\overline{\hat{\rho}_4}$ and for the target Werner state $\hat{\rho}_4^{(w)}$ are compatible (black dots and error bars).}
\label{f:negativity}
\end{figure}
\subsubsection{Entanglement}
The separability of two-qubit systems is established by the Peres-Horodecki criterion \cite{per,hor}: a quantum state of two qubits $\hat{\rho}$ is separable if and only if the partially transposed density matrix is positive, i.e. $\hat{\rho}^{(\tau)}\geq 0$. Thus, it is possible to study entanglement or separability properties by evaluating the eigenvalues of the partially transposed density matrix. We compute the minimum of these eigenvalues
\begin{equation}
e_m(\hat{\rho}^\tau)\equiv \text{min} \{\lambda_n^\tau\}_{n=1}^4,
\end{equation}
for both the considered strategies, i.e. for the distributions of resampled states $\hat{\rho}_k^i$ and of the approximated Werner states $[\hat{\rho}_k^i]^{(w)}$. For a Werner state the minimum eigenvalue, which may assume negative values, is given by $e_m([\hat{\rho}^{(w)}]^\tau)=(1-3p)/4$.
\par In Fig.~\ref{f:negativity}~(a) we plot the distribution of $e_m(\hat{\rho}^\tau)$ as a function of the Werner parameter $p$ for all the $N_{\text{MC}}=10^3$ resampled states, with average tomographic states $\overline{\hat{\rho}_k}$. We note that the average states lie along the theoretical curve for a Werner state and that the resampled states follow the same prediction (see Table~\ref{t:Werner}). Nonetheless, some states generated from a separable experimental state fall in the entangled region, and vice versa. This first observation reveals that statistical fluctuations in an experiment may still produce quantum states with properties radically different from the expected ones, such as separability and entanglement.
\begin{figure}
\subfigure[]{\includegraphics[width=0.4\textwidth]{Fig7a.pdf}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{Fig7b.pdf}}
\caption{(Color online) (a) Distribution of $D(\hat{\rho}_k^i)$ for $10^3$ resampled states as a function of the Werner parameter $p$. The $N_{\text{exp}}=4$ average states $\overline{\hat{\rho}_k}$ are highlighted with black dots and error bars. The theoretical curve $D(\hat{\rho}^{(w)})$ (dashed black line), relative to the discord of the ideal Werner state (\ref{Werner}), is systematically higher than the discord distribution of the tomographic states $\hat{\rho}_k^i$. (b) Distribution of $D(\hat{\rho}_4^i)$ for $10^3$ simulations of the target state $\overline{\hat{\rho}_4}$ as a function of the fidelity $F(\hat{\rho}_4^i,\hat{\rho}_4^{(w)})$ (green dots). The same distribution as a function of $F([\hat{\rho}_4^i]^{(w)},\hat{\rho}_4^{(w)})$ (orange dots) matches the theoretical parametric curve (dashed black curve) obtained by evaluating $F$ and $D$ of the approximated Werner states. The values of the quantum discord for the average state $\overline{\hat{\rho}_4}$ and for the target Werner state $\hat{\rho}_4^{(w)}$ are not compatible (black dots and error bars).}
\label{f:discord}
\end{figure}
In Fig.~\ref{f:negativity}~(b), we focus on the most entangled state $\overline{\hat{\rho}_4}$ and compare the distribution of the resampled states and of the corresponding Werner states, in terms of $e_m(\hat{\rho}^\tau)$ and fidelity with the target state $\hat{\rho}_4^{(w)}$.
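The quantity $e_m(\hat{\rho}^\tau)$ itself is straightforward to evaluate numerically; a minimal Python sketch (illustrative only, not our analysis code) is
\begin{verbatim}
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transposition over the second qubit of a two-qubit state."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)          # r[i, j, k, l] = <ij|rho|kl>
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def e_min(rho):
    """e_m(rho^tau): minimum eigenvalue of the partially transposed state."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min()

# consistency check on the ideal Werner state: e_min equals (1 - 3p)/4
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)   # (|HV> - |VH>)/sqrt(2)
for p in (0.2, 1.0 / 3.0, 0.6):
    rho_w = p * np.outer(psi_minus, psi_minus) + (1.0 - p) / 4.0 * np.eye(4)
    print(p, e_min(rho_w), (1.0 - 3.0 * p) / 4.0)
\end{verbatim}
The comparison of Fig.~\ref{f:negativity}~(b) is then carried out on these values for the resampled states and for their Werner approximations.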
In this way we can highlight the differences between the two possible strategies for data analysis, i.e. between evaluating the properties directly from the tomographic states and evaluating them from the approximated Werner states. The second strategy compels the resampled states to follow the single-parameter Werner form (\ref{Werner}) and thus forces the distribution of the least eigenvalue $e_m([\hat{\rho}_k^{(w)}]^{(\tau)})$ to follow the theoretical prediction (dashed black curve in the plot). In this way, we obtain a distribution of Werner states having, obviously, very high values of fidelity with the target (Werner) state. On the other hand, tomographic states display lower values of fidelity with the target state, but $e_m(\hat{\rho}^\tau)$ is evaluated directly from the tomographic density matrices. From the statistical analysis, we conclude that the estimated value of $e_m(\hat{\rho}^\tau)$ is compatible within errors for both the adopted strategies. We will see in the following that these two strategies may lead to different and non-compatible results for another property of quantum states, the quantum discord.
\subsubsection{Quantum discord}
Another widely adopted measure of the amount of quantum correlations in a state $\hat{\rho}$ is the quantum discord \cite{disc:1,disc:2}, which is defined starting from two definitions of the mutual information, i.e. of the total amount of correlations of $\hat{\rho}$, that are equivalent in the classical domain:
\begin{subequations}\label{MI}\begin{align}
\mathcal{I}(\hat{\rho})&=S(\hat{\rho}_A)+S(\hat{\rho}_B)-S(\hat{\rho}),\label{MI1}\\
\mathcal{J}_A(\hat{\rho})&=S(\hat{\rho}_B)-\min\sum_j \pi_j S(\hat{\rho}_{B|j}),\label{MI2}
\end{align}\end{subequations}
where $\hat{\rho}_A$ ($\hat{\rho}_B$) is the reduced density matrix of $\hat{\rho}$ for subsystem $A$ ($B$) and $S(\hat{\rho})=-\text{Tr}[\hat{\rho}\log_2(\hat{\rho})]$ is the von Neumann entropy. While the first definition (\ref{MI1}) is based only on the von Neumann entropy, the second one in Eq.~(\ref{MI2}) accounts for all the classical correlations that can be detected by local projective measurements only on a subsystem. Here $\hat{\rho}_{B|j}=\text{Tr}_{\text{A}}[\hat{\Pi}_j\hat{\rho}\,\hat{\Pi}_j]/\pi_j$ is the reduced state of $B$ conditioned on the outcome $j$ of the projective measurement $\{\hat{\Pi}_j\}$ on $A$, with probability $\pi_j=\text{Tr}[\hat{\Pi}_j\hat{\rho}\,\hat{\Pi}_j]$, and the minimum in Eq.~(\ref{MI2}) is taken over all the possible $\{\hat{\Pi}_j\}$. A similar definition applies for local measurements on subsystem $B$. The quantum discord is then evaluated as the residual information stemming from the difference of the two definitions in Eqs. (\ref{MI}), which has a purely quantum character:
\begin{equation}\label{discord}
D(\hat{\rho})\equiv \mathcal{I}(\hat{\rho}) - \mathcal{J}_A(\hat{\rho}).
\end{equation}
For a Werner state (\ref{Werner}) the quantum discord can be analytically evaluated:
\begin{equation}\label{discordW}\begin{split}
D(\hat{\rho}^{(w)})=&\frac{1+3p}{4}\log_2(1+3p)+\frac{1-p}{4}\log_2(1-p)\\
&-\frac{1+p}{2}\log_2(1+p).
\end{split}\end{equation}
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{Fig8.pdf}
\caption{The histogram shows the distribution of $F(\hat{\rho}_3^i,\hat{\rho}_4^{(w)})$, i.e. the fidelity between a set of separable states and an entangled target state.
The data are well fitted by a $\beta$-distribution, centered around $F\approx 0.97$.}
\label{f:histogram}
\end{figure}
\par In Fig.~\ref{f:discord}(a) we plot the quantum discord $D(\hat{\rho}_k^i)$ for all the $N_{\text{MC}}=10^3$ resampled states, with average tomographic states $\overline{\hat{\rho}_k}$, as a function of the Werner parameter $p$. The theoretical behavior of Eq.~(\ref{discordW}), represented in the figure by a dashed curve, highlights the large discrepancy with the quantum discord computed for the tomographic states. The approximation to Werner states leads to a systematic overestimation of the quantum discord for the two-qubit states (see also Table~\ref{t:Werner}). We can reinforce this result by looking at the distribution of the quantum discord relative to the single set of states $\hat{\rho}_4^i$ only, as a function of the fidelity with the corresponding target Werner state $\hat{\rho}_4^{(w)}$ (see Fig.~\ref{f:discord}(b)). Most of the states are contained in a region of high values of fidelity, thus suggesting that the approximation to an average Werner state could be correct. Nonetheless, if we consider the distribution of the approximated Werner states, we observe that the values of fidelity increase, but the value of the quantum discord of the target Werner state $\hat{\rho}_4^{(w)}$ lies outside the limits of compatibility with the average quantum discord $D(\overline{\hat{\rho}_4})$. This suggests that the second strategy, employing the approximation to Werner states, turns out to be too drastic, as it does not properly account for the actual tomographic reconstruction of the density matrix.
\par We can conclude that, even though high values of fidelity between a target state and a tomographic state are achieved, the properties of the two can be very different. As an extreme case, we can look at the distribution of the fidelity between the most classical states we generated, $\hat{\rho}_3^i$, and, at the opposite extreme, the most entangled one as the target state, namely $\hat{\rho}_4^{(w)}$. The probability density histogram in Fig.~\ref{f:histogram} shows that the two kinds of states appear compatible at a fidelity level of $F\approx 0.97$, even though they possess clearly different properties.
\section{Conclusions}
\label{s:conclusions}
In conclusion, we have addressed quantum state reconstruction for DV and CV quantum optical systems and experimentally analyzed the significance of fidelity as a figure of merit to assess the properties of the reconstructed state. State reconstruction, in the two cases, has been performed adopting homodyne and MLE tomography techniques. One of the most natural ways to link the tomographic results to the target states, i.e. the quantum states supposed to be generated by the designed experimental setup, is the evaluation of fidelity. In order to study the relation between fidelity and the experimental states, we performed a statistical analysis using Monte Carlo sampling of each experiment, regenerating sets of $N_{\text{MC}}=10^3$ data samples and analyzing the distribution of some of their main properties as a function of fidelity.
\par In the CV framework, we employed a thermal-state seeded OPO cavity, an experimental configuration which allows us to generate STS on demand. The accurate control of the thermal and squeezing components of the apparatus allows us to address the quantum-to-classical transition for these states.
Our results show that, even for high values of fidelity and imposing energy constraints, one may find states which are close in terms of fidelity but which, however, do not share the same quantum/classical properties.
\par In the DV context, we experimentally obtained pairs of polarized photons from type-I downconversion and conveniently generated Werner mixed states. This one-parameter family of states allowed us to analyze the non-classical properties of two-qubit states in terms of entanglement/separability and to evaluate the amount of quantum correlations by means of the quantum discord. We found that a fidelity-based approximation of the tomographic states by Werner states may lead to an overestimation of, e.g., the quantum discord. Moreover, high values of fidelity may occur between two very different states in terms of their separability properties.
\par Overall, we conclude that while fidelity is a good measure of geometrical proximity in the Hilbert space, it should not be used as the sole benchmark to certify quantum properties, which should rather be estimated tomographically in a direct way.
\section*{Acknowledgments}
This work has been supported by UniMI through the UNIMI14 grant 15-6-3008000-609 and the H2020 Transition Grant 15-6-3008000-625, and by EU through the H2020 Project QuProCS (Grant Agreement 641277).
\appendix
\section{Evaluation of uncertainties by Monte Carlo resampling}
\label{s:MC}
In order to avoid the limitations of finite samples and the influence of systematic unpredictable errors that could be present in an experiment, we evaluate uncertainties by Monte Carlo resampling of data, according to standard metrological prescriptions \cite{JCGM} valid for any statistical model having a single output quantity and input quantities with arbitrary distributions. Here we provide a brief summary of the main assumptions and principles.
\par The measured quantities of interest $X_i$ are random variables distributed according to a given probability density function (PDF) $\mathcal{G}(X_i)$. In particular, we assume normal distributions characterized by mean value $\langle x_i \rangle$ and standard deviation $\delta x_i$. Monte Carlo evaluation of uncertainties is based on sampling random outcomes from $\mathcal{G}(X_i)$ according to experimental data, which themselves fix the average values and the standard deviations. In particular, as described in the main text, the considered experimental measurements, for CV systems, correspond to homodyne detection of the radiation field quadratures, whereas for DV systems we perform coincidence photon counting measurements of polarized photons. Starting from experimental results, we generate $N_{\text{MC}}=10^3$ resampled replicas of the experiments, thus building a significant sample for the statistical analysis.
\begin{thebibliography}{99}
\bibitem{revt1} G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Adv. Imag. Electr. Phys. {\bf 128}, 205 (2003).
\bibitem{LNP649} M.~G.~A.~Paris and J.~\v Reh\' a\v cek, {\em Quantum State Estimation}, Lect. Notes Phys. {\bf 649} (2004).
\bibitem{revt2} A. I. Lvovsky and M. G. Raymer, Rev. Mod. Phys. {\bf 81}, 299 (2009).
\bibitem{guhne} C. Schwemmer, L. Knips, D. Richart, H. Weinfurter, T. Moroder, M. Kleinmann, and O. G\"uhne, Phys. Rev. Lett. {\bf 114}, 080403 (2015).
\bibitem{jay57} E.~T.~Jaynes, Phys. Rev. {\bf 106}, 620 (1957); {\em ibidem} {\bf 108}, 171 (1957).
\bibitem{buz98} V.~Bu\v zek, R. Derka, G. Adam, and P. L. Knight, Ann. Phys. {\bf 266}, 454 (1998).
\bibitem{oli07} S.
Olivares, and M. G. A. Paris, Phys. Rev. A {\bf 76}, 042120 (2007).
\bibitem{zam05} G. Zambra, A. Andreoni, M. Bondani, M. Gramegna, M. Genovese, G. Brida, A. Rossi, and M. G. A. Paris, Phys. Rev. Lett. {\bf 95}, 063602 (2005).
\bibitem{kb03} K. Banaszek, and I. A. Walmsley, Opt. Lett. {\bf 28}, 52 (2003).
\bibitem{olom03} J. $\check{\rm R}$eh$\acute{\rm a}\check{\rm c}$ek, Z. Hradil, O. Haderka, J. Pe{\v{r}}ina Jr, M. Hamar, Phys. Rev. A {\bf 67}, 061801(R) (2003).
\bibitem{Uhl} A. Uhlmann, Rep. Math. Phys. {\bf 9}, 273 (1976).
\bibitem{fuc99} C. A. Fuchs and J. van de Graaf, IEEE Trans. Inf. Theory {\bf 45}, 1216 (1999).
\bibitem{ban04} M. Ban, Phys. Rev. A {\bf 69}, 054304 (2004).
\bibitem{cav04} C. M. Caves, and K. Wodkiewicz, Phys. Rev. Lett. {\bf 93}, 040506 (2004).
\bibitem{Dod12} V. Dodonov, J. Phys. A {\bf 45}, 032002 (2012).
\bibitem{AgFid1} M. Bina, A. Mandarino, S. Olivares, and M. G. A. Paris, Phys. Rev. A {\bf 89}, 012305 (2014).
\bibitem{AgFid2} A. Mandarino, M. Bina, S. Olivares, and M. G. A. Paris, Int. J. Q. Inf. {\bf 12}, 1461015 (2014).
\bibitem{Benedetti} C. Benedetti, A. P. Shurupov, M. G. A. Paris, G. Brida, and M. Genovese, Phys. Rev. A {\bf 87}, 052136 (2013).
\bibitem{MarianFidelity} P. Marian, and T. A. Marian, Phys. Rev. A {\bf 86}, 022340 (2012).
\bibitem{PDH} R. W. P. Drever, J. L. Hall, F. V. Kowalski, J. Hough, G. M. Ford, A. J. Munley, H. Ward, Appl. Phys. B {\bf 31}, 97 (1983).
\bibitem{SBrec} S. Cialdi, C. Porto, D. Cipriani, S. Olivares, M. G. A. Paris, preprint arXiv:1505.03903 [quant-ph], Phys. Rev. A, in press.
\bibitem{bachor} H. A. Bachor, and T. C. Ralph, \emph{A Guide to Experiments in Quantum Optics} (Wiley-VCH, Weinheim, 2004).
\bibitem{rossi:04} A. R. Rossi, S. Olivares, and M. G. A. Paris, J. Mod. Opt. {\bf 51}, 1057 (2004).
\bibitem{fop:05} A. Ferraro, S. Olivares, and M. G. A. Paris, {\it Gaussian States in Quantum Information} (Bibliopolis, Napoli, 2005).
\bibitem{oli:rev} S. Olivares, Eur. Phys. J. Special Topics {\bf 203}, 3 (2012).
\bibitem{generazione} S. Cialdi, F. Castelli, I. Boscolo, and M. G. A. Paris, Appl. Opt. {\bf 47}, 1832 (2008).
\bibitem{generazione2} S. Cialdi, F. Castelli, and M. G. A. Paris, J. Mod. Opt. {\bf 56}, 215 (2009).
\bibitem{SLM} S. Cialdi, D. Brivio, and M. G. A. Paris, Appl. Phys. Lett. {\bf 97}, 041108 (2010).
\bibitem{SLM2} S. Cialdi, D. Brivio, A. Tabacchini, A. M. Kadhim, and M. G. A. Paris, Opt. Lett. {\bf 37}, 3951 (2012).
\bibitem{visibilita} S. Cialdi, D. Brivio, E. Tesio, and M. G. A. Paris, Phys. Rev. A {\bf 84}, 043817 (2011).
\bibitem{kb00} K. Banaszek, G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Phys. Rev. A {\bf 61}, 010304(R) (1999).
\bibitem{jam01} D. F. V. James, P. G. Kwiat, W. J. Munro, and A. G. White, Phys. Rev. A {\bf 64}, 052312 (2001).
\bibitem{per} A. Peres, Phys. Rev. Lett. {\bf 77}, 1413 (1996).
\bibitem{hor} M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A {\bf 223}, 1 (1996).
\bibitem{disc:1} H. Ollivier, and W. H. Zurek, Phys. Rev. Lett. {\bf 88}, 017901 (2001).
\bibitem{disc:2} L. Henderson, and V. Vedral, J. Phys. A {\bf 34}, 6899 (2001).
\bibitem{JCGM} BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP, and OIML, \textit{Evaluation of Measurement Data - Supplement 1 to the Guide to the Expression of Uncertainty in Measurement - Propagation of Distributions Using a Monte Carlo Method} (JCGM 101, 2008), http://www.bipm.org/utils/common/documents/jcgm/JCGM\_101\_2008\_E.pdf .
\end{thebibliography}
\end{document}
\begin{document} \title{Experimental quantum key distribution with simulated ground-to-satellite photon losses and processing limitations} \author{Jean-Philippe Bourgoin} \email{[email protected]} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \author{Nikolay Gigov} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \author{Brendon~L. Higgins} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \author{Zhizhong Yan} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \affiliation{Centre for Ultrahigh Bandwidth Devices for Optical Systems (CUDOS) \& MQ Photonics Research Centre, Department of Physics \& Astronomy, Macquarie University, Sydney, NSW 2109, Australia} \author{Evan Meyer-Scott} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \author{Amir~K. Khandani} \affiliation{Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \author{Norbert L{\"u}tkenhaus} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \author{Thomas Jennewein} \email{[email protected]} \affiliation{Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \affiliation{Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada} \begin{abstract} Quantum key distribution (QKD) has the potential to improve communications security by offering cryptographic keys whose security relies on the fundamental properties of quantum physics. The use of a trusted quantum receiver on an orbiting satellite is the most practical near-term solution to the challenge of achieving long-distance (global-scale) QKD, currently limited to a few hundred kilometers on the ground. This scenario presents unique challenges, such as high photon losses and restricted classical data transmission and processing power due to the limitations of a typical satellite platform. Here we demonstrate the feasibility of such a system by implementing a QKD protocol, with optical transmission and full post-processing, in the high-loss regime using minimized computing hardware at the receiver. Employing weak coherent pulses with decoy states, we demonstrate the production of secure key bits at up to \SI{56.5}{\dB} of photon loss. We further illustrate the feasibility of a satellite uplink by generating secure key while experimentally emulating the varying channel losses predicted for realistic low-Earth-orbit satellite passes at \SI{600}{\km} altitude. With a \SI{76}{\mega\Hz} source and including finite-size analysis, we extract 3374~bits of secure key from the best pass. 
We also illustrate the potential benefit of combining multiple passes together: while one suboptimal ``upper-quartile'' pass produces no finite-sized key with our source, the combination of three such passes allows us to extract 165~bits of secure key. Alternatively, we find that by increasing the signal rate to \SI{300}{\MHz} it would be possible to extract \num{21570}~bits of secure finite-sized key in just a single upper-quartile pass. \end{abstract} \maketitle \section{Introduction} Quantum key distribution (QKD) offers communications security without reliance on computational presumptions by taking advantage of fundamental properties of quantum mechanics~\cite{Bennett1984, Scarani2009}. Despite reaching maturity that supports commercial implementation~\cite{idQuantique, MagiQ}, QKD has yet to achieve widespread use, in large part owing to distance limitations, on the order of \SI{200}{\km}, inherent to lossy terrestrial transmissions~\cite{TNZHHTY07, Ursin2007, Schmitt07, Stucki_NJP250k, Liu_10, YHHJIJ12, korzh2014provably}. Quantum repeaters~\cite{Briegel1998} promise to overcome this shortcoming, but they require high-fidelity quantum memories which are still in the fundamental research stage~\cite{SAABDGHJKMNPRDRSSSTWWWWY10, Sangouard2011} and are not yet viable for real-world application. Alternatively, quantum links to orbiting satellites can be implemented using existing technologies~\cite{Gilbert2000, Nordholt02, Ursin2009, Bonato_NJP_09, Bourgoin2013}. One near-term approach to satellite-based QKD is where a satellite, acting as a trusted node, performs two consecutive quantum key exchanges with two different ground stations. A combination of the two keys is then publicly revealed, allowing one ground station to extract the other's key, giving both locations a shared key in a way that no other party (except for the satellite) can surreptitiously intercept. This approach may be implemented with either uplink (photons sent from ground station transmitter to satellite receiver) or downlink (photons sent from satellite transmitter to ground station receiver). The feasibility of both of these has been extensively studied~\cite{BHLMNP00, Rarity_NJP_02, Bonato_NJP_09, Bourgoin2013}. While a downlink benefits from lower transmission losses, allowing higher key rates, an uplink offers the advantage of a simpler satellite design, easier pointing, reduced on-board data collection requirements, and source flexibility, which makes an uplink the preferred scenario for scientific study~\cite{MeyerScott2011, Bourgoin2013}. A significant challenge in the uplink scenario is operating with the high photon loss experienced (\SIrange{40}{60}{\dB}). Previous work demonstrated that key extraction is possible, in principle, beyond \SI{50}{\dB} of loss in the infinite key limit~\cite{MeyerScott2011}. However, this work did not perform all of the steps necessary to implement the QKD protocol and produce a secure key. (Indeed, experimental QKD demonstrations routinely go no further than calculate the expected length of the secure key based on observed parameters.) Here we experimentally demonstrate key extraction at various transmission loss levels, up to \SI{56.5}{\dB}, while including all the QKD processing steps required to finally extract a secure key. We also examine the effect of finite statistics, and assess the time required to achieve near-asymptotic key rates in the high-loss environment. 
Further, we show the feasibility of ground--satellite QKD by experimentally recreating the varying losses of three realistic uplink satellite passes. Our apparatus has the two parties involved in the high-loss QKD transmission operating independently, each party having separate event time-taggers, global positioning system (GPS) receivers, and classical processing mediated by a classical communication channel. Because we focus on a future satellite implementation, computational requirements are also a key aspect. The system we have developed attempts to reduce, as much as possible, these requirements at the receiver. We analyze the complexity of the classical processing functions, and demonstrate operation on low-power embedded hardware. We show that the requirements are feasible, making our overall design suitable for a satellite payload.
This paper is organized as follows. \Cref{sec:QKD} details the steps of the QKD protocol and the approaches we have taken to optimize it for a satellite uplink. \Cref{sec:Apparatus} describes the experimental apparatus we constructed to perform our demonstrations. \Cref{sec:Results} presents the results in two parts: \cref{sec:requirements} shows the results of our computational analysis, while \cref{sec:QKDresults} shows the results of our experimental QKD demonstration. We close with discussion and conclusion in \cref{sec:Conclusion}.
\section{Implementing QKD with limited resources}\label{sec:QKD}
\subsection{BB84 with decoy states}
The seminal QKD protocol, BB84~\cite{Bennett1984}, encodes information in the polarization states of single photons. Ideally, at each time-step Alice randomly selects one of four polarizations in two bases---horizontal (H), vertical (V), diagonal (D), or anti-diagonal (A)---and sends a photon with this polarization to Bob. Bob randomly selects a basis, H/V or D/A, and measures the photon to obtain one of four outcomes. This procedure occurs for many time-steps, and after revealing the bases used, Alice and Bob ``sift'' their events, discarding those which have mismatched bases. By defining H and D to correspond to a bit value of 0, and V and A to correspond to a bit value of 1, Alice and Bob retain a common string of random bits---the \emph{sifted} key~\cite{Scarani2009}. Practical implementations have extra complications: photons are lost in transmission, imperfections in photon source and detection devices introduce errors (which must be corrected), and weak coherent pulse sources, which are often used in place of true single-photon sources, exhibit potentially insecure multi-photon emission events. Decoy-state protocols~\cite{Ma2005}, with error correction and privacy amplification as classical post-processing steps, have been developed to overcome these issues.
Theoretical QKD security proofs provide equations for the secure key rate based on experimentally measurable parameters such as the quantum bit error ratio (QBER), background noise counts, and decoy parameters. These allow us to make a statement about the security of the \emph{final} key after the quantum transmission is complete. Most importantly, these equations determine the amount of privacy amplification required to be able to claim $\epsilon$ security~\cite{Renner_phd, TLGR12}. Once the post-processing is complete and the final key deemed secure, it can then be used in a classical encryption protocol such as the one-time pad.
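To make the sifting step described above concrete, a minimal Python sketch is shown here (illustrative only; in our system sifting operates on matched time-tagged detection events rather than on pre-aligned lists, and the labels below are ours):
\begin{verbatim}
import numpy as np

BIT   = {"H": 0, "V": 1, "D": 0, "A": 1}      # H, D -> 0 ; V, A -> 1
BASIS = {"H": "HV", "V": "HV", "D": "DA", "A": "DA"}

def sift(alice_states, bob_bases, bob_bits):
    """Keep only events where Alice's preparation basis matches Bob's
    measurement basis, and return the two sifted keys."""
    alice_key, bob_key = [], []
    for state, basis, bit in zip(alice_states, bob_bases, bob_bits):
        if BASIS[state] == basis:
            alice_key.append(BIT[state])
            bob_key.append(bit)
    return np.array(alice_key), np.array(bob_key)
\end{verbatim}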
We implement the vacuum+weak decoy-state protocol~\cite{Ma2005}, in which Alice randomly emits signal states with average photon number $\mu$, or decoy states that are either vacuum or have an average photon number $\nu < \mu$. In our implementation (\cref{fig.system_schematic}), Alice employs polarization and intensity modulation~\cite{YMBHGMHJ13} to prepare a random sequence of BB84 polarization encodings which are 92\% signal and 8\% decoy states. Our average photon numbers are $\mu \approx 0.5$ and $\nu \approx 0.05$, which are near the optimal for this protocol. (Further details of the apparatus are given in \cref{sec:Apparatus}.) \begin{figure} \caption{Schematic overview of our high-loss QKD apparatus. The source at Alice produces weak coherent pulses with wavelength \SI{532} \label{fig.system_schematic} \end{figure} The lower bound for the final asymptotic secure key rate per laser pulse is~\cite{Ma2005} \begin{equation}\label{eq:SecureRate} R_\infty = q K_\mu \left\{ - Q_\mu f_\text{EC} H_2(E_\mu) + Q_1^\text{L} \left[ 1 - H_2({E_1^\text{U}}) \right] \right\}. \end{equation} Here, $q$ is a basis reconciliation factor ($\sfrac{1}{2}$ for BB84), $K_\mu$ is the fraction of pulses that are signal states, $f_\text{EC}$ is the efficiency parameter of the error correction algorithm, $H_2$ is the binary entropy function, $Q_{\mu/\nu}$ is the gain for signal/decoy states (the ratio of number of photons detected by Bob to number of pulses sent by Alice), $E_{\mu/\nu}$ is the QBER for signal/decoy states (ascertained in the error correction process), and $Q_1^\text{L}$ and $E_1^\text{U}$ are the lower bound of the gain and the upper bound of the QBER, respectively, for single-photon pulses. The single-photon gain lower bound $Q_1^\text{L}$ is calculated as \begin{equation}\label{eq:Q1L} Q_1^\text{L} = \frac{\mu^2 e^{-\mu}}{\mu\nu - \nu^2}\left(Q_\nu e^\nu - Q_\mu e^\mu \frac{\nu^2}{\mu^2} - \frac{\mu^2 - \nu^2}{\mu^2 }Y_0\right) \end{equation} where $Y_0$ is the vacuum yield, determined by the cumulative probability of detector dark counts and background noise within the coincidence window. The single-photon QBER upper bound $E_1^\text{U}$ is calculated as~\cite{Ma2005, CaiScarani} \begin{align} E_1^{\text{U},\mu} & = \frac{E_\mu Q_\mu}{Q_1^\text{L}} - \frac{E_0 Y_0}{Q_1^\text{L} e^\mu} \\ E_1^{\text{U},\nu} & = \frac{E_\nu Q_\nu e^\nu - E_0 Y_0}{\nu Q_1^\text{L} }\mu e^{-\mu} \\ E_1^\text{U} & = \min\left\{E_1^{\text{U},\mu}, E_1^{\text{U},\nu}\right\} \label{eq:e1U} \end{align} where $E_0$ is the vacuum error rate ($\sfrac{1}{2}$ in a perfect apparatus). In the present work, the parameters in equations (\ref{eq:SecureRate})--(\ref{eq:e1U}) are determined from experimental data to obtain the asymptotic lower bound for the secret key rate per laser pulse, $R_\infty$. To obtain the secure key rate in bits per second, $R_\infty$ is multiplied by the pulse rate of the WCP source. Proper generation of a secure key needs to incorporate the effects of statistical fluctuations due to finite-sized experimental data~\cite{LPDFSDYPS13}. To account for this we use the common heuristic of adding or subtracting $10\sigma$ variation from the experimental parameters in such a way as to minimize the key rate~\cite{Sun2009}. (A recently proposed method may allow to account for statistical fluctuation in a more rigorous fashion~\cite{curty2014finite}.) 
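For reference, Eqs.~(\ref{eq:SecureRate})--(\ref{eq:e1U}) translate directly into a short routine. The Python sketch below is purely illustrative (our post-processing software is not written in Python), and the default parameter values merely echo the representative numbers quoted above:
\begin{verbatim}
import numpy as np

def h2(x):
    """Binary entropy function H_2(x)."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x*np.log2(x) - (1-x)*np.log2(1-x)

def asymptotic_key_rate(Q_mu, E_mu, Q_nu, E_nu, Y0,
                        mu=0.5, nu=0.05, q=0.5, K_mu=0.92, f_EC=1.2, E0=0.5):
    """Asymptotic lower bound R_infinity, with the decoy bounds Q_1^L and E_1^U."""
    Q1_L = (mu**2 * np.exp(-mu) / (mu*nu - nu**2)) * (
        Q_nu*np.exp(nu) - Q_mu*np.exp(mu)*nu**2/mu**2 - (mu**2 - nu**2)/mu**2 * Y0)
    E1_mu = E_mu*Q_mu/Q1_L - E0*Y0/(Q1_L*np.exp(mu))
    E1_nu = (E_nu*Q_nu*np.exp(nu) - E0*Y0) / (nu*Q1_L) * mu*np.exp(-mu)
    E1_U = min(E1_mu, E1_nu)
    return q * K_mu * (-Q_mu*f_EC*h2(E_mu) + Q1_L*(1.0 - h2(E1_U)))
\end{verbatim}
Multiplying the returned value by the source pulse rate gives the asymptotic secure key rate in bits per second, as described above.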
Finite-size security effects are captured~\cite{ScarRenner08} by the security parameter $\Delta$, resulting in a key rate lower bound
\begin{align}\label{eq:FiniteSizeRate}
R &= q K_\mu \Big\{ - Q_\mu f_\text{EC} H_2(E_\mu) \nonumber\\
&\quad\quad + Q_1^\text{L} \left[ 1 - H_2({E_1^\text{U}}) \right] - Q_\mu \Delta / N_\mu \Big\}
\end{align}
where $\Delta = 7\sqrt{N_\mu \log_2[2/(\bar\epsilon - \bar\epsilon')]} - 2\log_2 [2(\epsilon - \epsilon_\text{EC} - \bar\epsilon)]$, $\epsilon_\text{EC}$ is the error correction silent failure probability (we use $10^{-10}$), $N_\mu$ is the raw key size, and $\bar\epsilon$ and $\bar\epsilon'$ are numerically optimized for $R$, constrained by $\epsilon - \epsilon_\text{EC} > \bar\epsilon > \bar\epsilon' \geq 0$.
\subsection{Distribution of post-processing tasks}
The design of our classical post-processing software follows the principle that Alice should perform as many of the computationally intensive tasks as possible, as the ground station can be made rich in computing resources, compared to the limited capacity of a satellite payload. In our system, Alice, being the source of the optical signal over the high-loss link, is responsible for high-rate data readout. She also performs timing analysis (to match Bob's classically transmitted time-tagged photon detection events to her time-tagged source events) and basis sifting, afterwards sending simplified coincidence information back to Bob. We also choose a one-way error correction algorithm based on low-density parity-check codes~\cite{MN97}, in which Alice performs the computationally expensive decoding algorithm while Bob only runs a linear algorithm to compute his syndromes (see \cref{sec:LDPC}). This scheme has the additional advantage of having low classical communication overhead. Finally, both parties perform a Toeplitz-matrix-based~\cite{Krawczyk94} privacy amplification routine suitable for low-power hardware implementation (see \cref{sec:PA}).
We separate Bob's software into two components: a driving control environment, and an embedded processing component. The driving control component is responsible for all platform-dependent tasks, e.g.\ loading time-tagger operating system drivers, configuring time-taggers, reading out time-tags and displaying live statistics. To be suitable for integration into a yet-to-be-designed satellite, Bob's embedded processing component is implemented in a platform-agnostic way using a portable low-level language (C). It is executed as a separate process on an x86-64 desktop computer, or on a low-power ARM development board, and performs the bulk of Bob's necessary processing tasks. Because Bob's embedded component runs in a standalone process, its usage of computing resources can be accurately monitored. Moreover, the driving control component records the bandwidth used for classical communication. This design allows us to make an accurate assessment of the classical post-processing requirements and guide our analysis of the computing requirements of Bob's part of the QKD protocol.
\subsection{Time synchronization and basis sifting}
Several practical issues complicate the process of determining which time-tagged photon detections correspond to particular source events, including initial clock synchronization, drift over time, and variation in photon time-of-flight. For our apparatus, an extra complication is that our data acquisition hardware is not capable of operating at the high pulse frequency of the laser source (see \cref{sec:Apparatus}).
To reduce the heavy load on the time-tagger at the source, only a subset of the laser's output pulses are time-tagged. With Alice utilizing a predefined (known only to Alice) randomized sequence of pulse states, we assume that the laser's period is stable on the order of \si{\micro\s}, and interpolate to reconstruct the transmitted states for timing analysis. The predefined sequence is not a requirement of the time-tagger but is necessary due to limitations of the modulation electronics at the source, which require a preloaded sequence. To reduce clock drift, we align the time-tagging units' internal clocks to a \SI{10}{\MHz} time-base signal provided by a GPS receiver at each site. Initial synchronization is achieved with the one pulse per second (\SI{1}{PPS}) signal provided by the GPS receivers. Position data is supplied with these signals, which can be used in conjunction with time data to estimate the distance between Alice and Bob, and hence, the time-of-flight of the photons between the source and the receiver. In our system, photon detections are tagged with a resolution of \SI{156.25}{\ps}, but the \SI{1}{PPS} signals are accurate to ${\approx}\SI{100}{\ns}$. Additional analysis is required to identify corresponding emission and detection events to within a desired coincidence window of about \SIrange{0.1}{1.3}{\ns}. The algorithm to achieve this synchronization utilizes the timing information from Bob's time-tags, Alice's transmitted photon states, Alice's and Bob's GPS timing and position data, as well as a small subset (${\approx}\SI{5}{\percent}$) of Bob's measured outcomes. Alice employs a histogram-based optimizing coincidence search within a predefined time span (about \SI{100}{\ns}). The subset of Bob's revealed measured outcomes (which are discarded from the final key) are also used to estimate the QBER to commence the error correction stage of the QKD protocol. Once coincident events have been identified and noncoincident events removed, Alice performs basis sifting of her raw key to produce her sifted key and transmits a list of indices to Bob, which he utilizes to equivalently sift (for both time and basis, simultaneously) his photon detection events. \subsection{Error correction}\label{sec:LDPC} Low-density parity-check (LDPC) codes are highly suitable for satellite-based QKD due to the low communication overhead required and the inherent asymmetry in the computational complexity at each site. First, Alice prepares an $M \times N$ irregular~\cite{Hu_peg} parity-check matrix, where $N$ is the sifted key block size, and $M$ is based on the QBER estimate obtained during timing analysis. We use progressive-edge growth software~\cite{Hu_peg} (modified from~\cite{PEG_SW}), employing known optimal degree distribution profiles~\cite{El_2009, Mateo_phd}, to generate the parity-check matrix. Alice then transmits the matrix in a compact form to Bob over a classical channel. For each $N$-bit block of his sifted key, Bob runs an efficient linear algorithm to compute a syndrome using this matrix, and transmits it to Alice. Alice then attempts to reconcile her sifted key assuming that Bob's sifted key is ``correct'' (it remains unchanged throughout this process). For each block of sifted key, Alice's goal is to resolve Bob's key vector $\mathbf{x}$, based on her key vector $\mathbf{y}$, Bob's syndrome $\mathbf{s}$, the parity-check matrix, and the estimated QBER. 
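Bob's share of this exchange is only the linear syndrome computation; a minimal dense-matrix Python sketch is given below (illustrative only---the deployed code is written in C, and a practical implementation would store the parity-check matrix in sparse form):
\begin{verbatim}
import numpy as np

def syndrome(H, x):
    """Bob's syndrome s = H x (mod 2), for an M x N parity-check matrix H
    and an N-bit sifted key block x of 0/1 integers."""
    return (H @ x) % 2

# illustrative sizes and content only; a real LDPC matrix is sparse, not random dense
rng = np.random.default_rng(1)
N, M = 4096, 1024
H = rng.integers(0, 2, size=(M, N))
x = rng.integers(0, 2, size=N)       # Bob's sifted key block
s = syndrome(H, x)                   # transmitted to Alice over the classical channel
\end{verbatim}
The computationally demanding part of the reconciliation is carried out on Alice's side.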
To accomplish this reconciliation task, Alice employs \emph{belief propagation}, an iterative message-passing decoding algorithm, also known as the sum-product algorithm~\cite{Pea04, Lucio-Martinez_NJP_09}. Our sum-product LDPC decoder is written in C\# and is based on that found in~\cite{Chan_masc}. Upon success, Alice and Bob both possess the $N$-bit error-corrected key block $\mathbf{k}_\text{EC} = \mathbf{x}$ and obtain the exact QBER for the quantum transmission, $E_\mu$. By Shannon's channel coding theorem~\cite{Shannon1948} applied to the binary symmetric channel~\cite{CoverIT}, we can deduce a closed-form estimate of the appropriate size of the LDPC matrix based on the (estimated, denoted by a tilde) QBER~\cite{El_2011}: $M = N f_\text{EC} H_2(\tilde{E}_\mu)$. The decoding step may yet terminate unsuccessfully with a given matrix and key block; the probability of this decreases as $M$ (and thus $f_\text{EC}$) is increased. In the case of such a termination, we may either discard the key block or retry the algorithm with an augmented matrix containing all the rows of the previous matrix, similar to the ``nested'' LDPC codes proposed in~\cite{Rav_NestedLDPC}. In a satellite mission, the choice can be based on the availability of the classical communication channel. Our implementation exhibits efficiencies ($f_\text{EC}$) ranging from 1.1 to 1.5. The silent failure probability of the belief propagation procedure, i.e., the probability that the process terminates successfully but one or more uncorrected bits remain, is not well characterized in the existing literature. While we have not observed any silent failures during our testing, we cannot be certain that $\epsilon_\text{EC} = 10^{-10}$ is achieved. To ensure such certainty, one could calculate, reveal, and compare a fingerprint hash of $\mathbf{x}$ and $\mathbf{k}_\text{EC}$. (Such an approach using 128-bit MD5 sums~\footnote{Although MD5 is not cryptographically secure as methods to modify a file while preserving its hash value are known, this does not assist Eve's effort to reconstruct the \emph{same} key as Alice and Bob.}, for example, yields a collision probability, and thus a silent failure probability, of order $2^{-128} \approx 3\times10^{-39}$.) To account for the revealed bits, the final key length would need to be reduced by the same (constant) number of bits. Because the necessity of these extra steps and the specific method of implementation are unclear, we do not perform these steps here. \subsection{Privacy amplification}\label{sec:PA} The error-corrected key block $\mathbf{k}_\text{EC}$ is only partially secure, as some information may have leaked to an eavesdropper (Eve): we attribute the observed QBER to Eve's interaction with the signal, and all parity information communicated during error correction is known to Eve as it was transmitted over a public channel. Privacy amplification is employed to create a new, final key $\mathbf{k}_\text{F}$ on which Eve no longer holds more than a negligible amount of information. The procedure consists of applying a \emph{two-universal hash function}~\cite{Scarani2009, Tsurumaru_11_PA} to $\mathbf{k}_\text{EC}$ to produce a provably secure key block $\mathbf{k}_\text{F}$ of length $L < N$ (recall $N$ is the sifted key block size). $L$ is obtained by multiplying $R$ of \cref{eq:FiniteSizeRate} by the number of pulses sent.
For mitigating the nonlinear length reduction due to finite-size effects, $N$ should be kept above a certain value, typically ${\sim}10^5$, as finite-size effects heavily impact keys with lower $N$. This value is taken into consideration when selecting a hash function. Privacy amplification is a symmetric operation which needs to be performed by both Alice and Bob. The choice of hash function dictates the computational complexity of the process and the amount of classical communication required. In our implementation, the privacy amplification procedure loosely follows the methodology outlined in~\cite{Tsurumaru_11_PA}---however, we have made some alterations to their model and developed a different matrix multiplication procedure suitable for efficient implementation in hardware. Briefly, we employ the Toeplitz matrix~\cite{Gray_2005} construction implemented using a shift register. A Toeplitz matrix has constant descending left-to-right diagonal elements. An $L \times N$ Toeplitz matrix can be written as \begin{equation} T_\mathbf{r} = \begin{bmatrix} r_L & r_{L+1} & \cdots & & & & \cdots & r_{N+L-1} \\ r_{L-1} & r_L & r_{L+1} & \cdots & & & \cdots & r_{N+L-2} \\ \vdots & & \ddots & & & & & \vdots \\ r_2 & \cdots & r_{L-1} & r_L & r_{L+1} & \cdots & \cdots & r_{N+1}\\ r_1 & r_2 & \cdots & r_{L-1} & r_{L} & r_{L+1} & \cdots & r_{N} \end{bmatrix}. \end{equation} A Toeplitz matrix is a two-universal hash function~\cite{Krawczyk94}. Note that a Toeplitz matrix $T_\mathbf{r}$ is completely defined by the $(N+L-1)$-bit vector $\mathbf{r} = (r_1, r_2, \dots, r_{N+L-1})$, thus its storage and transmission requirements are considerably reduced. Further, an $L \times N$ matrix of the form $U_\mathbf{r} = (I_{L} | T_\mathbf{r})$, i.e.\ a concatenation of an $L$-dimensional identity matrix $I_{L}$ and an $L \times (N-L)$ Toeplitz matrix $T_\mathbf{r}$, is also a two-universal hash function as we require, but requires only $N-1$ bits to define~\cite{Golub_matrix, Hayashi_11_PA}. Following error correction, Alice generates such a matrix by constructing a random binary string $\mathbf{r} = (r_1, r_2, \dots, r_{N-1})$ of length $N-1$, and then transmits $\mathbf{r}$ to Bob over the classical channel. Alice and Bob then use $\mathbf{r}$ and a shift register to apply the hash matrix $U_\mathbf{r}$, computing the final secure key, $\mathbf{k}_\text{F} = U_\mathbf{r} \mathbf{k}_\text{EC}$. In our implementation, the identity portion of each row of $U_\mathbf{r}$ uses no space and can be accounted for with a simple logical AND operation. We represent $T_\mathbf{r}$ as an $(N-L)$-bit logical shift register. Initially, the shift register contains the last $N-L$ bits of $\mathbf{r}$, $(r_{L}, r_{L+1}, \dots, r_{N-1})$. The remaining bits from $\mathbf{r}$ are used as input for the shift register. In this way, we conserve memory by never needing to store full matrices. The logical shift register is broken up into multiple 32-bit blocks, each of which is designed to fit inside a register on a processing unit. The register size of 32 bits is chosen for the support of multiple platforms, including our low-power ARM test board. 64-bit platforms are also available, and with single instruction, multiple data (SIMD) extensions, commonplace in contemporary desktop processors, the register could be 256 bits or larger. After privacy amplification, Alice and Bob are left with a secure key of $L$ bits which can then be used to encrypt data transmitted on a classical channel through, e.g., one-time pad. 
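The shift-register hashing described above can be rendered as the following simplified Python sketch; it operates on unpacked bit lists for clarity, whereas the actual implementation packs the register into 32-bit words, and the function name and toy sizes here are ours.
\begin{verbatim}
# Apply U_r = (I_L | T_r) to an N-bit error-corrected key block using a
# shift register, producing the L-bit final key.  Bit lists are
# 0-indexed, so r[i] corresponds to r_{i+1} in the text.

def privacy_amplify(r, k_ec, L):
    N = len(k_ec)
    assert len(r) == N - 1
    # The register initially holds the last N-L bits of r, i.e. the top
    # Toeplitz row (r_L, ..., r_{N-1}).
    reg = list(r[L - 1:])
    k_f = []
    for i in range(L):
        toeplitz_bit = sum(a & b for a, b in zip(reg, k_ec[L:])) % 2
        k_f.append(k_ec[i] ^ toeplitz_bit)  # identity part XOR Toeplitz part
        if i < L - 1:
            # Next row: shift right, feeding in the next seed bit.
            reg = [r[L - 2 - i]] + reg[:-1]
    return k_f

# Toy usage (real blocks have N of order 10^5 or more).
r    = [1, 0, 1, 1, 0, 0, 1]      # N - 1 = 7 seed bits
k_ec = [1, 1, 0, 1, 0, 1, 0, 1]   # N = 8 error-corrected key bits
print(privacy_amplify(r, k_ec, L=3))   # -> [1, 0, 0]
\end{verbatim}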
We assume here that channel authentication is performed separately, possibly using some of the secure key (reducing the length available for encryption). \section{Apparatus}\label{sec:Apparatus} Our QKD system, shown in \cref{fig.system_schematic}, consists of a weak coherent pulse (WCP) source, a variable-loss free-space channel, and a compact four-outcome (two passively chosen measurement bases) quantum receiver. The source utilizes up-conversion (sum frequency generation) from two orthogonally oriented type-I periodically poled potassium titanyl phosphate (PPKTP) crystals to produce photon pulses at \SI{532}{\nm} wavelength from a mode-locked Ti:sapphire laser at \SI{810}{\nm}, operating at a rate of \SI{76}{\MHz}, and a continuous-wave laser at \SI{1550}{\nm}. Diagonally polarized \SI{810}{\nm} laser pulses are combined with polarization- and intensity-modulated \SI{1550}{\nm} laser light (controlled by efficient telecom waveguide modulators~\cite{YMBHGMHJ13}) to generate \SI{532}{\nm} pulses possessing the short pulse width and high repetition rate of the \SI{810}{\nm} laser, as well as the intensity and polarization of the \SI{1550}{\nm} light. Phase randomization between pulses, necessary to ensure security, is provided by the short coherence time of the \SI{1550}{\nm} laser (less than the pulsing period of the \SI{810}{\nm} laser, but much more than the pulse duration). Birefringent wedges precompensate the \SI{810}{\nm} light for temporal walk-off in the PPKTP crystals. The photon pulses produced are coupled into single-mode fiber. A fiber splitter sends ${\approx}\SI{0.3}{\percent}$ of photons to a thick-silicon avalanche photodiode (Excelitas SPCM-AQ4C) to measure the average photon number per pulse. The remaining photons are sent to a free-space quantum channel consisting of a bare fiber output followed by a 3-inch-diameter lens on a longitudinal translation stage. The loss is adjusted by varying the position of the lens, changing the amount of light directed into the receiver by making the beam more or less divergent. The quantum receiver, \cref{fig.receiver_schematic}, is built using Thorlabs' cage system. The receiving telescope consists of a \SI{5}{\cm} diameter, \SI{25}{\cm} focal length collection lens followed by a \SI{6.5}{\mm} diameter, \SI{11}{\mm} focal length collimating lens. Passive measurement basis choice is implemented by coupling polarization discrimination apparatuses to two orthogonal outputs of a pentaprism beam-splitter, each of which transmits approximately \SI{47.5}{\percent} of the injected power. (A pentaprism was chosen for potential application in future experiments---a 50:50 beam-splitter would also suffice as we do not here use the pentaprism's third output.) Polarization analysis is done by a \SI{5}{\mm} cubic polarizing beam-splitter (PBS) in each arm, directing photons to one of four detector assemblies. Measurement in the diagonal basis is obtained by physically orienting (to \SI{45}{\degree}) the PBS and detectors around the beam path, relative to the other PBS (which defines the rectilinear basis). Following each analysis PBS, a second PBS, oriented at \SI{90}{\degree}, is used to suppress erroneous optical signal in the reflected path (owing to device imperfection). Each detector assembly contains a spatial-filtering shield and a \SI{2}{\nm} band-pass filter to suppress background noise. 
Photons are focused by \SI{6}{\cm} focal length lenses and detected by thin-Si avalanche photodiodes from Micro Photon Devices, which feature good detection efficiency (${\approx}\SI{50}{\percent}$), low dark counts (${\approx}\SI{20}{cps}$) and low jitter (${\leq}\SI{50}{\ps}$). Temporal filtering with a narrow (${\sim}\SI{1}{\ns}$) time window allows us to accept signal photons while rejecting remaining background and dark counts with high fidelity. The background yield $Y_0$ is estimated by counting photon detections between pulses. Though this approach is known to be insecure, it suffices for our proof-of-concept demonstration. Data are acquired by two time-tagger units, and processed by two x86-64 computers (and, when testing algorithmic performance, an ARM board) on a local-area network (LAN). Each time-tagger unit is connected to \SI{10}{\MHz} and \SI{1}{PPS} signals coming from a GPS receiver. The signals from the source are connected to Alice's time-tagger and the four outputs of Bob's detectors are connected to Bob's time-tagger. \begin{figure} \caption{High-loss QKD receiver. Photons are captured by the telescope and pass through motorized-rotating half- and quarter-wave plates, correcting unwanted polarization rotations. The pentaprism beam-splitter provides a passive basis choice between rectilinear and diagonal polarization bases. A polarizing beam-splitter (PBS) in each basis arm discriminates H/V and D/A polarized photons, the latter by physically orienting the PBS and its detectors \SI{45}{\degree} around the beam path relative to the other arm.} \label{fig.receiver_schematic} \end{figure} Our receiver also includes an arbitrary polarization rotation assembly, consisting of quarter-wave plates (QWPs) before and after a half-wave plate (HWP) mounted in motorized rotation stages, allowing the compensation of any unitary polarization change in the channel. We have developed an automated polarization alignment protocol which characterizes the effect of the channel on the known QKD polarization states by measuring the quantum signal directly; this characterization is sufficient to determine an optimal compensation, which is then implemented by the arbitrary polarization rotation assembly. \section{Results}\label{sec:Results} \subsection{Post-processing resource requirements}\label{sec:requirements} For a satellite uplink scenario, optical signals are sent from the ground when the satellite orbits over an optical ground station, while classical communication is performed (possibly at a later time) when the satellite orbits over one or more radio frequency (RF) ground stations. Hence, the satellite system must store all time-tags accumulated during the optical station flyover, performing all classical steps of the QKD protocol during an RF station flyover when a classical communication link is present. Specifically, Bob must store: time-tags, measurement bases, photon detections (bit values), and data defining the error correction LDPC matrix and privacy amplification Toeplitz matrix. Our time-tagging hardware produces 64-bit time-tags. To save on the limited memory and classical communications bandwidth available to a satellite, it is possible to reduce that number significantly, at the expense of additional computation steps. One simple scheme is to store the full time-tag only at the beginning of every second of data collection, together with additional information provided by the GPS receiver (which outputs a data packet every second). In this way, the space required to store the time-tag and measurement outcome information is 40~bits per detection event.
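One encoding consistent with this scheme is sketched below in Python (for illustration only; the split into a 38-bit time offset, in units of the \SI{156.25}{\ps} resolution, and a 2-bit detector index is our assumption, as only the 40-bit total is specified above).
\begin{verbatim}
# Delta-encode 64-bit time-tags against the full tag stored at the last
# 1 PPS boundary, packing each detection into a 40-bit record.

RESOLUTION_PS = 156.25   # time-tagger resolution
OFFSET_BITS   = 38       # hypothetical field widths (38 + 2 = 40 bits)
DETECTOR_BITS = 2

def pack_event(tag_ps, second_start_ps, detector):
    offset = round((tag_ps - second_start_ps) / RESOLUTION_PS)
    assert 0 <= offset < (1 << OFFSET_BITS) and 0 <= detector < 4
    return (offset << DETECTOR_BITS) | detector

def unpack_event(record, second_start_ps):
    detector = record & ((1 << DETECTOR_BITS) - 1)
    offset = record >> DETECTOR_BITS
    return second_start_ps + offset * RESOLUTION_PS, detector

# Toy usage: one detection 1.234 ms into the current second, detector 2.
start = 5_000_000_000_000.0   # full 64-bit tag at the 1 PPS boundary [ps]
rec = pack_event(start + 1_234_000_000.0, start, detector=2)
print(rec.bit_length() <= 40, unpack_event(rec, start))
\end{verbatim}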
To further save memory and classical communications bandwidth, the sparse LDPC matrix for error correction can be efficiently transmitted and stored as an adjacency list where only the indices of each non-zero element in each row are recorded. If the decoding step fails, we must then retry with a larger matrix (or discard the block), implying an increase of $f_\text{EC}$, i.e., worse efficiency. The embedded processing component of the satellite-side software is tested on an inexpensive (${\approx}\$150$), low-power (\SI{2}{\W}) Freescale i.MX53 QSB single-board computer featuring a \SI{1}{\GHz} single-core ARM processor and \SI{1}{GiB} of RAM. The measured performance, \cref{tab:cpustats}, illustrates successful operation within reasonable resource constraints. We have found that in our system the limiting factor for Bob is the privacy amplification step, which requires a relatively long processing time. For all other processes, the limiting factor is not Bob's computational power, but rather Alice's, and the \SI{100}{\mega\bit} Ethernet link between them. A future implementation could have a far more powerful computer at Alice than what we have used for our demonstration. \begin{table}[t] \caption{Measured performance of the satellite-side QKD process running on a Freescale i.MX53 embedded ARM board processing 300~seconds of QKD data (\SI{28.8}{\dB} loss data with rate-limiting applied; see text). Here, privacy amplification is applied without incorporating finite-size effects which reduce secure key length, giving us upper bounds on resource usage. As expected, processing time scales quadratically with the photon detection rate---a least-squares quadratic fit gives a coefficient of determination $R^2 = 0.9992$.} \centering \begin{tabular}{S[table-format=5]S[table-format=5]S[table-format=1.2]S[table-format=4.1]S[table-format=2.2]} \hline\hline {\textbf{Detection}} & {\textbf{Sifted key}} & {\textbf{QBER}} & {\textbf{Processing}} & {\textbf{RAM used}} \\ {\textbf{rate [Hz]}} & {\textbf{rate [Hz]}} & {\textbf{[\%]}} & {\textbf{time [s]}} & {\textbf{[Mbyte]}} \\ \hline 500 & 229 & 3.43 & 0.5 & 11.19 \\ 1000 & 457 & 3.44 & 1.1 & 20.42 \\ 5000 & 2326 & 3.53 & 14.5 & 70.09 \\ 10000 & 4647 & 3.57 & 56.3 & 71.70 \\ 20000 & 9286 & 3.55 & 772.1 & 75.47 \\ 30000 & 13924 & 3.54 & 2013.1 & 79.38 \\ 41887 & 19428 & 3.54 & 3969.4 & 84.52 \\ \hline\hline \end{tabular} \label{tab:cpustats} \end{table} For computational requirements analysis, we collect experimental data for 300~seconds at a receiver detection rate of about \SI{42}{\kilo\Hz}. Each one-second chunk of this data is then truncated to produce various effective detection rates. \Cref{tab:cpustats} shows detailed memory and CPU usage for the embedded processing component. Privacy amplification complexity is asymptotically quadratic in the block size $N$ due to the matrix multiplication process, while all other post-processing steps behave linearly. Hence, we expect the processing time of the QKD post-processing to overall scale quadratically with the detections, as is observed. Note that we do not expect the number of detections to exceed \num{1e7} over a single satellite pass using feasible quantum sources~\cite{Bourgoin2013}. 
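For completeness, evaluating \cref{eq:FiniteSizeRate} itself contributes negligibly to the processing load; a minimal numerical sketch in Python is given below. The parameter values are illustrative placeholders only (of roughly the magnitudes encountered in our measurements), and the numerical optimization of $\bar\epsilon$ and $\bar\epsilon'$ is omitted.
\begin{verbatim}
# Evaluate the finite-size key rate lower bound R defined in the text.
from math import log2, sqrt

def H2(x):
    """Binary entropy function."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x*log2(x) - (1 - x)*log2(1 - x)

def Delta(N_mu, eps, eps_EC, eps_bar, eps_bar_prime):
    return (7 * sqrt(N_mu * log2(2 / (eps_bar - eps_bar_prime)))
            - 2 * log2(2 * (eps - eps_EC - eps_bar)))

def finite_size_rate(q, K_mu, Q_mu, f_EC, E_mu, Q1_L, E1_U, N_mu,
                     eps, eps_EC, eps_bar, eps_bar_prime):
    d = Delta(N_mu, eps, eps_EC, eps_bar, eps_bar_prime)
    return q * K_mu * (-Q_mu * f_EC * H2(E_mu)
                       + Q1_L * (1 - H2(E1_U))
                       - Q_mu * d / N_mu)

# Placeholder evaluation; eps_bar and eps_bar_prime must satisfy
# eps - eps_EC > eps_bar > eps_bar_prime >= 0 and would normally be
# optimized numerically to maximize the rate.
print(finite_size_rate(q=0.5, K_mu=0.75, Q_mu=5.7e-4, f_EC=1.41,
                       E_mu=0.035, Q1_L=3.7e-4, E1_U=0.053, N_mu=1.1e7,
                       eps=1e-9, eps_EC=1e-10,
                       eps_bar=5e-10, eps_bar_prime=1e-10))
\end{verbatim}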
\subsection{Experimental secure key extraction}\label{sec:QKDresults} \begin{figure} \caption{Measured raw key rate, background detection rate and QBER obtained in different loss regimes, with a source pulsing at \SI{76}{\MHz}.} \label{fig.QBER_raw_rate} \end{figure} We perform the experimental demonstration for losses ranging from 28.8 to almost \SI{60}{\dB}, determined from the photon detection rate (corrected for background) with respect to the transmitted optical power. The loss therefore includes both channel loss (variable) and receiver efficiency (fixed \SI{1.5}{\dB} for receiver optics and \SI{3}{\dB} for detector efficiency). The temporal filter window width is adjusted to improve the secure key rate for each value of loss, and ranges from \SI{1.3}{\ns} at low loss to \SI{0.1}{\ns} at high loss. The measured QBER of signal states ranges from \SIrange{1.94}{6.06}{\%}, with the raw key rate (total detections within the temporal filter window, per second) ranging from \SI{38211}{\Hz} at \SI{28.8}{\dB} to \SI{44.2}{\Hz} at \SI{56.5}{\dB}, while the background detection rate correspondingly ranges from \SI{151}{\Hz} down to \SI{2.38}{\Hz} (see \cref{fig.QBER_raw_rate}). \begin{figure} \caption{Secure key rate (lower bound) in the infinite limit for data measured in different loss regimes. The secure key rate tends to decrease as the loss increases, with some fluctuation about the trend due to variations in the source tuning and channel parameters throughout the data collection campaign. Loss includes both channel loss and receiver efficiency.} \label{fig.fixed_loss} \end{figure} Our experimental results incorporate the full error correction and privacy amplification post-processing. To limit computational time we artificially restricted the error correction block size to \num{600000} (with the sifted key split into the necessary number of blocks). Privacy amplification was implemented over the full sifted-key length of error-corrected key bits in order to minimize finite-size effects. We achieve error correction efficiencies between 1.12 and 1.50 (with better efficiencies at higher QBER, as predicted by~\cite{TMPE14}) and privacy amplification to $\epsilon = 10^{-9}$ security. The extracted secure key rate is shown in \cref{fig.fixed_loss} for asymptotic extrapolations to the infinite limit of key length (\cref{eq:SecureRate}). At the highest loss, \SI{56.5}{\dB}, our system is able to extract \SI{0.5}{\bit/\s} of secure key in the asymptotic limit. This is comparable to the result of a previous high-loss demonstration~\cite{MeyerScott2011} which reached \SI{2}{\bit/\s} at \SI{57}{\dB} (the achievable rate there being inferred without implementing the complete QKD protocol). We note that the key rate can be readily improved by employing a faster source: QKD WCP sources have been demonstrated in the \si{\GHz} range~\cite{TDLFY06, TNZHHTY07}, more than an order of magnitude above our \SI{76}{\MHz} source rate. \begin{figure} \caption{Finite-sized secure key rate as a fraction of the corresponding asymptotic secure key rate, given the total number of laser pulses transmitted. Curves are shown for experimental parameters corresponding to each of the loss conditions demonstrated, as indicated by labels beside each curve. Crosses indicate the value that was reached in the experiment. Lowest losses only require around $10^{10}$ pulses to exceed \SI{80}{\percent} of the asymptotic rate, while higher losses require substantially more.} \label{fig.finite_time} \end{figure} Given that the apparatus remains sufficiently stable, the particular finite duration over which data are collected in this experiment is arbitrary.
For our results, each data run lasts \SIrange{5}{10}{min}. With such times, when incorporating finite-size statistics (\cref{eq:FiniteSizeRate}) we find positive secure key rates for points up to \SI{45.6}{\dB}. For higher losses, the statistics are insufficient to produce nonzero key under the condition of $10\sigma$ worst-case variation. Because the detrimental finite-size effects stem from the limited number of photon counts, a faster source would also mitigate them. Based on the measured experimental parameters, we can extrapolate the raw key rates and determine the achievable finite-size secure key rate if the apparatus were run for longer times or if multiple runs under the same conditions were concatenated. \Cref{fig.finite_time} shows the ratio of the asymptotic key rate that can be achieved by the finite secure key ($R/R_\infty$), for a given total number of pulses transmitted, for each experimental loss condition we examine. For the lowest losses, only about $10^{10}$ pulses are necessary to reach over \SI{80}{\percent} of the asymptotic key rate, equating to a few minutes of collection time with our \SI{76}{\MHz} source. More time is required for higher losses: several weeks of continuous collection at \SI{76}{\MHz}, for the highest losses. Interestingly, we find that the \SI{34.9}{\dB} data produces nonzero secure key sooner than the \SI{28.8}{\dB} data, owing to the relatively high QBER at \SI{28.8}{\dB}. Our results consistently show a significantly higher decoy QBER compared to the signal QBER. This was caused by the intensity modulator, which was found to produce a slight polarization shift dependent on the applied modulation. As a result, the two intensity levels have slightly different polarizations before being polarization modulated, leading to a difference in the optimal alignment for the two intensity levels. This polarization shift could be corrected by the addition of a polarizer after the intensity modulator, eliminating the polarization difference between the two states. Although this difference does not invalidate our proof-of-concept demonstration, removing it is crucial in a secure implementation as it leads to distinguishability between signal and decoy states which could be used by an eavesdropper to gain information. Removing this difference may also improve final key rates as it would reduce the decoy QBER without affecting the signal QBER. Our system alignment is optimized for the signal QBER. \begin{figure} \caption{Experimentally measured loss over the ${\approx}\SI{45}{\minute}$ varying-loss run, together with the smoothed (\SI{29}{\s} moving-median) loss used to select data blocks for the simulated satellite passes (see text).} \label{fig.varying_loss} \end{figure} The results presented here are comparable to the regime of a satellite uplink, where the usable part of a pass is expected to exhibit losses typically between \SIrange{40}{55}{\dB}~\cite{Bourgoin2013}, and help support the conclusion that our approach is suitable for eventual satellite implantation (though a faster source may be advised). To more closely examine the feasibility of our approach in the regime of a satellite uplink for QKD, we simulate several satellite passes by varying the position of the quantum-channel lens (thus varying the loss) during an experimental run. We do this once, the total run lasting approximately 45~minutes, with the loss changing smoothly from $\approx$\SI{63}{\dB} to a minimum of $\approx$\SI{28}{\dB} after about 21~minutes, and then back to $\approx$\SI{62}{\dB} over the remaining time.
The data accumulated are segmented into \SI{1}{\s} blocks, with the measured loss for each second over the duration of the experiment shown in \cref{fig.varying_loss}. We redistribute selected \SI{1}{\s} blocks of raw key data in such a way that we obtain data sets that reproduce the statistics expected for real satellite uplink orbits~\cite{Bourgoin2013}. The passes considered are the best, upper-quartile and median passes (in terms of contact time) over a hypothetical ground station located at \SI{45}{\degree} latitude, for one year of a \SI{600}{\km} circular Sun-synchronous low Earth orbit. The predicted losses are based on uplink at a wavelength of \SI{785}{\nm}, with a receiver diameter of \SI{30}{\cm}, a \SI{2}{\micro\radian} pointing error and a rural sea-level atmosphere. The differences with our system (which has \SI{532}{\nm} wavelength and \SI{5}{\cm} receiver diameter) are necessary to mitigate the increased geometric losses over the long-distance link of a satellite (requiring a larger receiver diameter) and the effect of atmospheric turbulence and transmission (reduced at \SI{785}{\nm} compared to \SI{532}{\nm}). Both our \SI{532}{\nm} system and the expected \SI{785}{\nm} system utilize the same Si avalanche photodiode technology. Analyzing our experimental data possessing these theoretical losses is therefore a valid proof-of-concept demonstration. The experimental data are smoothed by taking the median of a moving window of \SI{29}{\s} width, with the result illustrated in \cref{fig.varying_loss}. We use these smoothed data to select \SI{1}{\s} experimental data blocks to include in our analysis for each orbit by progressively scanning (from the center, in either direction) in \SI{1}{\s} steps for the next \SI{1}{\s} data block that possesses smoothed loss matching or exceeding the theoretical orbit loss prediction. By selecting experimental data at points where the smoothed loss is matched to theoretical link predictions, we ensure that the data we sample are not biased by normal fluctuations in measured loss. \begin{figure} \caption{QBER and total loss of data sets reconstructed from measured data (shown in \cref{fig.finite_time}) for each representative satellite pass (see text).} \label{fig.loss_curve} \end{figure} \Cref{fig.loss_curve} shows the three relevant losses (the theoretically predicted loss, the smoothed loss value at the sampled point, and the experimentally measured loss from the sampled point), together with the estimated QBER for each representative pass. The measured losses of the sampled experimental data closely match the trend of the theoretical prediction, while maintaining realistic fluctuations. At higher losses the per-second QBER estimate has significant fluctuations due to the reduced sample size. Performing the post-processing steps on these data sets and incorporating finite-sized statistics, we are able to extract a \SI{3374}{\bit} secure key from the best pass, out of a total of \SI{544056}{\bit} raw key (\num{643521} detection events) with an average of \SI{3.1}{\%} QBER in the signal. This result shows that even with our modest \SI{76}{\MHz} source a positive key rate can feasibly be generated from one pass (albeit a good one) of a typical low Earth orbit satellite receiver. In comparison, the upper-quartile pass receives \SI{279317}{\bit} raw key (\num{348896} detections) with an average of \SI{3.5}{\%} QBER, but this is insufficient to produce nonzero secure key with finite-sized effects considered (the asymptotic secure key is \SI{17916}{\bit}).
Similarly, the median pass with \SI{43375}{\bit} raw key (\num{82470} detections) and average \SI{4.4}{\%} QBER also cannot extract nonzero finite-sized secure key (asymptotic, \SI{877}{\bit}). Improvements to the source could mitigate finite-sized statistical effects. By adjusting the photon and pulse count parameters, we can predict the performance of a \SI{400}{\MHz} source that produces $\sfrac{3}{4}$ signal (i.e.\ a \SI{300}{\MHz} signal rate, as per~\cite{Bourgoin2013}), $\sfrac{1}{8}$ decoy, and $\sfrac{1}{8}$ vacuum pulses. With other measured parameters left unchanged, \SI{21570}{\bit} secure key could be extracted from a single upper-quartile pass. This is directly comparable to the estimation of~\cite{Bourgoin2013}---under better conditions (e.g.\ $E_\nu$ assumed equal to $E_\mu$, intrinsic source QBER of 1\%, $\nu=0.1$, $f_\text{EC}=1.22$, and background estimate used only in calculation of $E_\mu$ while setting $Y_0=0$), a \SI{111.3}{\kilo\bit} secure key was predicted. Alternatively, finite-size effects could be reduced by combining the measurements of multiple passes. For example, we are able to combine the measurements of three upper-quartile passes---each independently unable to produce positive finite key---and thereby extract \SI{165}{\bit} of secure key with finite-sized statistics. (Significantly more median passes, around 215 combined, would be required to yield positive finite-size secure key.) This might be a useful method to extract longer secure keys from the results of multiple marginal or individually unfruitful satellite passes. \begin{table*}[t] \caption{Experimentally measured quantities for various loss conditions, including constant and varying losses simulating a satellite pass. Loss includes both channel loss and receiver efficiency. Except for rates extrapolated to a \SI{300}{\MHz} signal rate as indicated, all parameters are based on measurements with our \SI{76}{\MHz} pulsing laser source. Values here are incorporated into \cref{eq:SecureRate} and \cref{eq:FiniteSizeRate} to determine the appropriate size of the privacy amplification matrix (\cref{sec:PA}), and thus the final secure key length. 
Where necessary for the finite-size heuristic, uncertainties ($1\sigma$) are also given.} \centering \begin{tabular}{l|ccccccc|ccc} \hline\hline {\textbf{Loss [dB]}} & {\textbf{28.8}} & {\textbf{34.9}} & {\textbf{40.1}} & {\textbf{45.6}} & {\textbf{50.3}} & {\textbf{52.1}} & {\textbf{56.5}} & {\textbf{Best}}& {\textbf{Upper-quartile}} & {\textbf{Median}} \\ \hline Duration [s] & 289 & 606 & 599 & 593 & 606 & 682 & 257 & 390 & 365 & 297 \\ Mean detection rate [\si{Hz}] & \num{41926} & \num{10464} & 3349 & 1167 & 427 & 301 & 186 & 1650 & 956 & 278 \\ Signal detections ($N_\mu$) [${\times}10^3$] & \num{11043} & 5739 & 1746 & 556 & 184 & 116 & 11.4 & 544 & 279 & 43.4 \\ Decoy detections ($N_\nu$) [${\times}10^3$] & 82.5 & 57.2 & 16.1 & 6.72 & 1.98 & 1.19 & 0.156 & 5.63 & 2.88 & 0.489 \\ Vacuum detections ($N_0$) [${\times}10^3$] & 43.8 & 34.2 & 14.4 & 7.75 & 5.30 & 4.19 & 0.612 & 4.64 & 3.32 & 1.50 \\ Signal photon number ($\mu$) & 0.506 & 0.490 & 0.507 & 0.579 & 0.534 & 0.503 & 0.581 & 0.505 & 0.507 & 0.512 \\ Decoy photon number ($\nu$) & 0.0392 & 0.0419 & 0.0515 & 0.0723 & 0.0517 & 0.0486 & 0.0592 & 0.0568 & 0.0571 & 0.0507 \\ Signal QBER ($E_\mu$) [\%] & 3.54 & 1.94 & 2.53 & 2.84 & 4.85 & 6.06 & 5.98 & 3.12 & 3.46 & 4.35 \\ \quad Uncertainty ($\sigma$) [${\times}10^{-2}$\,\%] & 0.566 & 0.581 & 1.20 & 2.26 & 5.14 & 7.24 & 22.4 & 2.15 & 3.52 & 10.0 \\ Decoy QBER ($E_\nu$) [\%] & 38.8 & 13.0 & 19.0 & 7.28 & 11.5 & 14.9 & 23.9 & 14.1 & 14.2 & 17.7 \\ \quad Uncertainty ($\sigma$) [${\times}10^{-1}$\,\%] & 2.17 & 1.51 & 3.44 & 3.29 & 7.62 & 11.2 & 39.1 & 4.55 & 7.04 & 19.0 \\ Signal vacuum QBER ($E_0^\mu$) [\%] & 50.8 & 52.0 & 50.4 & 50.6 & 50.3 & 50.4 & 50.5 & 50.6 & 50.7 & 50.7 \\ Decoy vacuum QBER ($E_0^\nu$) [\%] & 42.0 & 32.6 & 42.5 & 44.7 & 47.4 & 47.2 & 48.2 & 45.9 & 45.5 & 44.6 \\ Signal gain ($Q_\mu$) [${\times}10^{-6}$] & 568 & 136 & 41.8 & 13.5 & 4.34 & 2.43 & 0.634 & 20.1 & 11.0 & 2.10 \\ \quad Uncertainty ($\sigma$) [${\times}10^{-8}$] & 17.1 & 5.67 & 3.16 & 1.81 & 1.01 & 0.715 & 0.595 & 3.00 & 2.08 & 1.01 \\ Decoy gain ($Q_\nu$) [${\times}10^{-7}$] & 496 & 158 & 45.0 & 19.0 & 5.48 & 2.92 & 1.02 & 24.3 & 13.3 & 2.77 \\ \quad Uncertainty ($\sigma$) [${\times}10^{-8}$] & 17.3 & 6.61 & 3.55 & 2.32 & 1.23 & 0.848 & 0.817 & 3.54 & 2.47 & 1.25 \\ Single photon gain ($Q_1^\text{L}$) [${\times}10^{-6}$] & 370 & 111 & 24.5 & 7.70 & 2.65 & 1.31 & 0.400 & 12.1 & 6.35 & 1.10 \\ Single photon QBER ($E_1^\text{U}$) [\%] & 5.26 & 2.16 & 3.93 & 4.21 & 2.75 & 3.59 & 7.27 & 4.80 & 5.42 & 6.51 \\ Vacuum yield ($Y_0$) [${\times}10^{-7}$] & 20.6 & 7.39 & 3.14 & 1.71 & 1.15 & 0.806 & 0.312 & 1.56 & 1.19 & 0.665 \\ \quad Uncertainty ($\sigma$) [${\times}10^{-9}$] & 9.85 & 4.00 & 2.62 & 1.94 & 1.58 & 1.25 & 1.26 & 2.41 & 2.07 & 1.72 \\ Error correction efficiency ($f_\text{EC}$) & 1.41 & 1.50 & 1.40 & 1.35 & 1.17 & 1.12 & 1.13 & 1.26 & 1.223 & 1.15 \\ Raw rate [bits/s] & \num{38211} & \num{9470} & \num{2915} & 938 & 303 & 169 & 44.2 & \num{1395} & 765 & 146 \\ Sifted rate [bits/s] & \num{19298} & \num{3802} & \num{1447} & 469 & 150 & 84.1 & 21.7 & 694 & 379 & 71.6 \\ Secure rate (asymptotic) [bits/s] & \num{2684} & \num{1761} & 285 & 79 & 24.5 & 3.95 & 0.510 & 120 & 49.1 & 2.96 \\ Secure rate (finite-size) [bits/s] & \num{1935} & \num{1539} & 152 & 5.39 & -- & -- & -- & 8.65 & -- & -- \\ \quad At \SI{300}{\MHz}, projected [bits/s] & \num{12806} & \num{8683} & 1190 & 234 & -- & -- & -- & 372 & 59.1 & -- \\ Total finite-size key [kbits] & 559 & 932 & 91.2 & 3.20 & -- & -- & -- & 3.37 & -- & -- \\ \quad At 
\SI{300}{\MHz}, projected [kbits] & \num{3701} & \num{5262} & 713 & 138 & -- & -- & -- & 145 & 21.6 & -- \\ \hline \end{tabular} \label{tab:Measured_quantities} \end{table*} The quantities measured in the experiment are summarized, in \cref{tab:Measured_quantities}, for each of the fixed loss cases and for each of the three varying-loss satellite-pass simulations. These are the values we use in \cref{eq:SecureRate} and \cref{eq:FiniteSizeRate} to determine the secure key length. \section{Discussion and conclusion}\label{sec:Conclusion} We have demonstrated the feasibility of satellite QKD using a quantum optical uplink by successfully performing QKD at losses up to \SI{56.5}{\dB} in the laboratory, with reduced computational requirements at the receiver, compatible with those that can be achieved on a satellite platform. We have improved over a previous high-loss demonstration~\cite{MeyerScott2011} by implementing complete QKD protocols, including twin-basis measurements, error correction and privacy amplification. We have also considered the effect of statistical fluctuations on the finite key length and have shown, by successfully performing full QKD and by extrapolation with varying losses that match those that would be experienced during representative passes of a satellite, that such a system is viable. Several improvements to our system are possible as a next step, improving the key rate and moving our system towards being immediately deployable. One necessary modification to ensure secure QKD would be to employ a truly random source at Alice, rather than a fixed-length repeating pseudorandom sequence. Suitable high-speed electronics to implement this in tandem with a source with increased pulse rate could provide true security while significantly improving key rates above those reported here. While increased rates would necessitate more processing at the receiver, our analysis of computational requirements shows that detection rates could be increased by an order of magnitude or more over the demonstrated best-pass rate with processing at the receiver remaining feasible. A particularly important challenge of satellite QKD yet to be addressed is the varying time of flight due to the changing distance between the satellite and ground station during a pass. For a \SI{600}{\km} orbit the distance between the satellite and ground station will vary by up to \SI{7}{\km/\s}~\cite{Bourgoin2013}, leading to a time of flight varying by up to \SI{23}{\micro\s} per second. Correcting for such variation as part of the timing analysis is straightforward, in principle, but is beyond the scope of the present work. Notably, though not of the same magnitude, varying time of flight due to relative motion has been demonstrated in the context of QKD for moving transmitters~\cite{Nauerth2012, Wang2013} and, very recently, a moving receiver~\cite{Bourgoin2015}. Additionally, our theoretical prediction of loss for a satellite pass is based on a \SI{30}{\cm} diameter receiver at \SI{785}{\nm}, while our system uses a \SI{5}{\cm} receiver and operates at \SI{532}{\nm}. These differences do not affect the proof-of-concept demonstrated in this paper nor the basic design of our apparatus as the operating principles are the same in either case. 
However, the optimal parameters will need to be satisfied to ensure success of a satellite uplink---the increased telescope diameter is necessary to reduce the geometric losses, and the \SI{785}{\nm} wavelength is necessary to provide the best balance between diffraction, atmospheric absorption and turbulence, and detector efficiency~\cite{Bourgoin2013}. Together with a sufficiently accurate pointing mechanism, these engineering challenges for implementing a quantum receiver satellite payload are manifestly achievable in the near term. \section{Acknowledgments} We thank Chris Erven for valuable input and NSERC, Canadian Space Agency, CFI, CIFAR, Industry Canada, FedDev Ontario and Ontario Research Fund for funding. B.L.H. acknowledges support from NSERC Banting Postdoctoral Fellowships. \begin{thebibliography}{54} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Bennett}\ and\ \citenamefont {Brassard}(1984)}]{Bennett1984} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Bennett}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {{Proceedings of the IEEE International Conference on Computers, Systems, and Signal Processing}}}}\ (\bibinfo {address} {Bangalore, India},\ \bibinfo {year} {1984})\ pp.\ \bibinfo {pages} {175--179}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scarani}\ \emph {et~al.}(2009)\citenamefont {Scarani}, \citenamefont {Bechmann-Pasquinucci}, \citenamefont {Cerf}, \citenamefont {Du\u{s}ek}, \citenamefont {L{\"u}tkenhaus},\ and\ \citenamefont {Peev}}]{Scarani2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Scarani}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bechmann-Pasquinucci}}, \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Cerf}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Du\u{s}ek}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {L{\"u}tkenhaus}}, \ and\ \bibinfo {author} {\bibfnamefont 
{M.}~\bibnamefont {Peev}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{Rev. Mod. Phys.}}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {1301} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{idQuantique}}(2015)}]{idQuantique} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibnamefont {{idQuantique}}},\ }\href {http://www.idquantique.com/} {}\bibinfo {howpublished} {\url{http://www.idquantique.com/}} (\bibinfo {year} {2015})\BibitemShut {NoStop} \bibitem [{\citenamefont {{MagiQ Technologies}}(2015)}]{MagiQ} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibnamefont {{MagiQ Technologies}}},\ }\href {http://www.magiqtech.com/} {}\bibinfo {howpublished} {\url{http://www.magiqtech.com/}} (\bibinfo {year} {2015})\BibitemShut {NoStop} \bibitem [{\citenamefont {Takesue}\ \emph {et~al.}(2007)\citenamefont {Takesue}, \citenamefont {Nam}, \citenamefont {Zhang}, \citenamefont {Hadfield}, \citenamefont {Honjo}, \citenamefont {Tamaki},\ and\ \citenamefont {Yamamoto}}]{TNZHHTY07} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Takesue}}, \bibinfo {author} {\bibfnamefont {S.~W.}\ \bibnamefont {Nam}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {R.~H.}\ \bibnamefont {Hadfield}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Honjo}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Tamaki}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yamamoto}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Photonics}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {343} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ursin}\ \emph {et~al.}(2007)\citenamefont {Ursin}, \citenamefont {Tiefenbacher}, \citenamefont {Schmitt-Manderbach}, \citenamefont {Weier}, \citenamefont {Scheidl}, \citenamefont {Lindenthal}, \citenamefont {Blauensteiner}, \citenamefont {Jennewein}, \citenamefont {Perdigues}, \citenamefont {Trojek}, \citenamefont {Oemer}, \citenamefont {Fuerst}, \citenamefont {Meyenburg}, \citenamefont {Rarity}, \citenamefont {Sodnik}, \citenamefont {Barbieri}, \citenamefont {Weinfurter},\ and\ \citenamefont {Zeilinger}}]{Ursin2007} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ursin}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Tiefenbacher}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schmitt-Manderbach}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weier}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Scheidl}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lindenthal}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Blauensteiner}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Perdigues}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Trojek}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Oemer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fuerst}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Meyenburg}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Rarity}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Sodnik}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Barbieri}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{Nature Physics}}\ }\textbf {\bibinfo 
{volume} {3}},\ \bibinfo {pages} {481} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schmitt-Manderbach}\ \emph {et~al.}(2007)\citenamefont {Schmitt-Manderbach}, \citenamefont {Weier}, \citenamefont {F{\"u}rst}, \citenamefont {Ursin}, \citenamefont {Tiefenbacher}, \citenamefont {Scheidl}, \citenamefont {Perdigues}, \citenamefont {Sodnik}, \citenamefont {Kurtsiefer}, \citenamefont {Rarity}, \citenamefont {Zeilinger},\ and\ \citenamefont {Weinfurter}}]{Schmitt07} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schmitt-Manderbach}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weier}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {F{\"u}rst}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ursin}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Tiefenbacher}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Scheidl}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Perdigues}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Sodnik}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kurtsiefer}}, \bibinfo {author} {\bibfnamefont {J.~G.}\ \bibnamefont {Rarity}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}},\ }\href {\doibase 10.1103/PhysRevLett.98.010504} {\bibfield {journal} {\bibinfo {journal} {{Phys. Rev. Lett.}}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {010504} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stucki}\ \emph {et~al.}(2009)\citenamefont {Stucki}, \citenamefont {Walenta}, \citenamefont {Vannel}, \citenamefont {Thew}, \citenamefont {Gisin}, \citenamefont {Zbinden}, \citenamefont {Gray}, \citenamefont {Towery},\ and\ \citenamefont {Ten}}]{Stucki_NJP250k} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Stucki}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Walenta}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Vannel}}, \bibinfo {author} {\bibfnamefont {R.~T.}\ \bibnamefont {Thew}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gray}}, \bibinfo {author} {\bibfnamefont {C.~R.}\ \bibnamefont {Towery}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ten}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{New J. 
Phys.}}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {075003} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ \emph {et~al.}(2010)\citenamefont {Liu}, \citenamefont {Chen}, \citenamefont {Wang}, \citenamefont {Cai}, \citenamefont {Wan}, \citenamefont {Chen}, \citenamefont {Wang}, \citenamefont {Liu}, \citenamefont {Liang}, \citenamefont {Yang}, \citenamefont {Peng}, \citenamefont {Chen}, \citenamefont {Chen},\ and\ \citenamefont {Pan}}]{Liu_10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {T.-Y.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {W.-Q.}\ \bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wan}}, \bibinfo {author} {\bibfnamefont {L.-K.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {J.-H.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {S.-B.}\ \bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Z.-B.}\ \bibnamefont {Chen}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase 10.1364/OE.18.008587} {\bibfield {journal} {\bibinfo {journal} {{Opt. Exp.}}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {8587} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yan}\ \emph {et~al.}(2012)\citenamefont {Yan}, \citenamefont {Hamel}, \citenamefont {Heinrichs}, \citenamefont {Jiang}, \citenamefont {Itzler},\ and\ \citenamefont {Jennewein}}]{YHHJIJ12} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Yan}}, \bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Hamel}}, \bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont {Heinrichs}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Itzler}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{Rev. Sci. 
Intrum.}}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {073105} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Korzh}\ \emph {et~al.}(2015)\citenamefont {Korzh}, \citenamefont {Lim}, \citenamefont {Houlmann}, \citenamefont {Gisin}, \citenamefont {Li}, \citenamefont {Nolan}, \citenamefont {Sanguinetti}, \citenamefont {Thew},\ and\ \citenamefont {Zbinden}}]{korzh2014provably} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Korzh}}, \bibinfo {author} {\bibfnamefont {C.~C.~W.}\ \bibnamefont {Lim}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Houlmann}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Nolan}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Sanguinetti}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Thew}}, \ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{Nature Photonics}}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {163} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Briegel}\ \emph {et~al.}(1998)\citenamefont {Briegel}, \citenamefont {D{\"u}r}, \citenamefont {Cirac},\ and\ \citenamefont {Zoller}}]{Briegel1998} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-J.}\ \bibnamefont {Briegel}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {D{\"u}r}}, \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {{Phys. Rev. Lett.}}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {5932} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Simon}\ \emph {et~al.}(2010)\citenamefont {Simon}, \citenamefont {Afzelius}, \citenamefont {Appel}, \citenamefont {Boyer de~la Giroday}, \citenamefont {Dewhurst}, \citenamefont {Gisin}, \citenamefont {Hu}, \citenamefont {Jelezko}, \citenamefont {Kr{\"o}ll}, \citenamefont {M{\"u}ller}, \citenamefont {Nunn}, \citenamefont {Polzik}, \citenamefont {Rarity}, \citenamefont {De~Riedmatten}, \citenamefont {Rosenfeld}, \citenamefont {Shields}, \citenamefont {Sk{\"o}ld}, \citenamefont {Stevenson}, \citenamefont {Thew}, \citenamefont {Walmsley}, \citenamefont {Weber}, \citenamefont {Weinfurter}, \citenamefont {Wrachtrup},\ and\ \citenamefont {Young}}]{SAABDGHJKMNPRDRSSSTWWWWY10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Simon}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Afzelius}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Appel}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Boyer de~la Giroday}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Dewhurst}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {C.~Y.}\ \bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Jelezko}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kr{\"o}ll}}, \bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont {M{\"u}ller}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Nunn}}, \bibinfo {author} {\bibfnamefont {E.~S.}\ \bibnamefont {Polzik}}, \bibinfo {author} {\bibfnamefont {J.~G.}\ \bibnamefont {Rarity}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {De~Riedmatten}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont 
\end{thebibliography}
\end{document}
\begin{document} \title{Solutions to Linear Bimatrix Equations with Applications to Pole Assignment of Complex-Valued Linear Systems} \author{Bin Zhou\thanks{Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin, 150001, China. Email: [email protected], [email protected].}} \date{} \maketitle \begin{abstract} We study in this paper solutions to several kinds of linear bimatrix equations arising from pole assignment and stability analysis of complex-valued linear systems, which have several potential applications in control theory and, in particular, can be used to model second-order linear systems in a very compact manner. These linear bimatrix equations include generalized Sylvester bimatrix equations, Sylvester bimatrix equations, Stein bimatrix equations, and Lyapunov bimatrix equations. Complete and explicit solutions are provided in terms of the bimatrices that are coefficients of the equations/systems. The obtained solutions are then used to solve the full state feedback pole assignment problem for complex-valued linear systems. For a special case of complex-valued linear systems, the so-called antilinear system, the solutions are also used to solve the so-called anti-preserving (the closed-loop system is still an antilinear system) and normalization (the closed-loop system is a normal linear system) problems. Second-order linear systems, particularly the spacecraft rendezvous control system, are used to demonstrate the obtained theoretical results. \textbf{Keywords:} Linear bimatrix equations; Complex-valued linear systems; Pole assignment; Second-order linear systems; Spacecraft rendezvous. \end{abstract} \section{Introduction} In this paper we continue to study complex-valued linear systems introduced in \cite{zhou17arxiv}. Complex-valued linear systems refer to linear systems whose state evolution depends on both the state and its conjugate (see Subsection \ref{sec2.1} for a detailed introduction). There are several reasons for studying this class of linear systems \cite{zhou17arxiv}; for example, they are naturally encountered in linear dynamical quantum systems theory, and they can be used to model any real-valued linear system with lower dimensions (see Subsection \ref{sec3.3} for a detailed development). Analysis and design of complex-valued linear systems have been studied in our earlier paper \cite{zhou17arxiv}, where some fundamental problems such as state response, controllability, observability, stability, pole assignment, linear quadratic regulation, and state observer design were solved. The conditions and/or methods obtained there are based on bimatrices associated with the complex-valued linear system, which is mathematically appealing. The pole assignment problem for complex-valued linear systems was solved in \cite{zhou17arxiv} by using coefficients of the so-called real-representation system, for which any pole assignment algorithm for normal linear systems can be applied. In this paper, we will continue to study the pole assignment problem for complex-valued linear systems by establishing a different method. Our new solution is based on solving the so-called (generalized) Sylvester bimatrix equation whose coefficients are bimatrices associated with the complex-valued linear system. Our study on linear bimatrix equations and their applications in pole assignment has been inspired by early work for normal linear systems.
For example, the (normal) Sylvester matrix equation was utilized to solve the pole assignment problem for normal linear systems in \cite{bs82scl}, and the generalized Sylvester matrix equation was used in \cite{duan93tac,duan96tac,duan15book,dz06tac} and \cite{zd06scl} to solve the (parametric) pole assignment problem for normal linear systems, descriptor linear systems, and even high-order linear systems. We will show that the pole assignment problem for a complex-valued linear system has a solution if and only if the associated (generalized) Sylvester bimatrix equation has a nonsingular solution. Thus the main task of this paper is to provide complete and explicit solutions to the homogeneous (generalized) Sylvester bimatrix equation. The solutions we provide have quite elegant expressions that use the original coefficient bimatrices and a right-coprime factorization (in the bimatrix framework) of the system. We also provide solutions to non-homogeneous Sylvester bimatrix equations and Stein bimatrix equations, which include the Lyapunov bimatrix equation as a special case. We are particularly interested in pole assignment for the so-called antilinear system studied recently in \cite{wdl13aucc,wqls16jfi, wz17book} and \cite{wzls15iet}. By our approach we first provide closed-form solutions to the associated (generalized) Sylvester bimatrix equations, and then consider two different problems, namely, the anti-preserving problem, which ensures that the closed-loop system is still (or equivalent to) an antilinear system, and the normalization problem, which guarantees that the closed-loop system is (equivalent to) a normal linear system. The anti-preserving problem was first studied in \cite{wz17book}. However, we can provide complete solutions that use full state feedback rather than only the normal state feedback used in \cite{wz17book}. We discovered that the anti-preserving problem is meaningful only for discrete-time antilinear systems (as studied in \cite{wz17book}) since any continuous-time antilinear system cannot be asymptotically stable. However, the normalization problem is valid for both continuous-time and discrete-time antilinear systems, and seems more interesting as the closed-loop system is (equivalent to) a normal linear system that is easier to handle. \textbf{Notation}: For a matrix $A\in \mathbf{C}^{n\times m}$, we use $A^{\#},$ $A^{\mathrm{T}},$ $A^{\mathrm{H}},$ $\mathrm{rank}\left( A\right) ,$ $\left \vert A\right \vert ,$ $\left \Vert A\right \Vert ,$ $\lambda \left( A\right) ,$ $\rho \left( A\right) ,$ $\mu \left( A\right) ,$ $\operatorname{Re}\left( A\right) $ and $\operatorname{Im}\left( A\right) $ to denote respectively its conjugate, transpose, conjugate transpose, rank, determinant (when $n=m$), norm, eigenvalue set (when $n=m$), spectral radius (when $n=m$), spectral abscissa ($\max_{s\in \lambda(A)}\{ \operatorname{Re} \{s\} \}$), real part and imaginary part. The notation $0_{n\times m}$ refers to an $n\times m$ zero matrix. For two integers $p,q$ with $p\leq q,$ denote $\mathbf{I}\left[ p,q\right] =\{p,p+1,\cdots,q\}.$ Let $\mathbf{Z}^{+}=\{0,1,2,\cdots \}$, $\mathbf{R}^{+}=[0,\infty),\mathbf{R}=\mathbf{R}^{+}\cup \{-\mathbf{R}^{+}\},\mathbf{Z}=\mathbf{Z}^{+}\cup \{-\mathbf{Z}^{+}\}$, and $\mathrm{j}$ the imaginary unit.
For a series of matrices $A_{i},i\in \mathbf{I}\left[ 1,l\right] ,$ $\mathrm{diag}\{A_{1},A_{2} ,\cdots,A_{l}\}$ denotes a diagonal matrix whose diagonal elements are $A_{i},i\in \mathbf{I}\left[ 1,l\right] .$ \section{\label{sec2}Motivation and Preliminaries} \subsection{\label{sec2.1}Complex-Valued Linear Systems} To introduce complex valued linear systems we recall the bimatrix $\left \{ A_{1},A_{2}\right \} \in \{ \mathbf{C}^{n\times m},\mathbf{C}^{n\times m}\}$ given in \cite{zhou17arxiv}, where it was defined in such a manner that, for any $x\in \mathbf{C}^{m},$ \[ y=\left \{ A_{1},A_{2}\right \} x\triangleq A_{1}x+A_{2}^{\#}x^{\#}, \] which defines a linear mapping over the field of real numbers \cite{zhou17arxiv}. Further properties of the bimatrix can be found in \cite{zhou17arxiv}. With the notion of bimatrix, we continue to study the following complex-valued linear system \cite{zhou17arxiv} (without output equation) \begin{equation} x^{+}=\left \{ A_{1},A_{2}\right \} x+\left \{ B_{1},B_{2}\right \} u, \label{sys} \end{equation} where $A_{i}\in \mathbf{C}^{n\times n},B_{i}\in \mathbf{C}^{n\times m},i=1,2,$ are known coefficients, $x=x(t)$ is the state, $u=u(t)$ is the control, and $x^{+}(t)$ denotes $x\left( t+1\right) $ if $t\in \mathbf{Z}^{+}$ (namely, discrete-time systems) and denotes $\dot{x}(t)$ if $t\in \mathbf{R}^{+}$ (namely, continuous-time systems). Throughout this paper, the dependence of variables on $t$ will be suppressed unless necessary. The initial condition is set to be $x\left( 0\right) =x_{0}\in \mathbf{C}^{n}$ \cite{zhou17arxiv}. There are several reasons for studying linear systems in the form of (\ref{sys}). Readers are encouraged to refer to \cite{zhou17arxiv} for details, while an explicit application of (\ref{sys}) to second-order linear system will be shown in detail in the next subsection. If $A_{2}\ $and $B_{2}$ are null matrices, then system (\ref{sys}) becomes \begin{equation} x^{+}=A_{1}x+B_{1}u, \label{normal} \end{equation} which is the normal linear system that has been well studied during the past half century \cite{kfa69book, rugh96book}. If $A_{1}$ and $B_{1}$ are set as zeros, then (\ref{sys}) reduces to the so-called antilinear system \begin{equation} x^{+}=A_{2}^{\#}x^{\#}+B_{2}^{\#}u^{\#}, \label{antilinear} \end{equation} which was initially studied in \cite{wdl13aucc} and \cite{wzls15iet}. In our recent paper \cite{zhou17arxiv} we have carried out a comprehensive study on the analysis and design of the complex-valued linear system (\ref{sys}), including state response, controllability, observability, stability, stabilization, pole assignment, linear quadratic regulation, and state observer design. The obtained results will reduce to classical ones when they are applied on the normal linear system (\ref{normal}), and will reduce to and/or improve the existing results when they are applied on the antilinear system (\ref{antilinear}). 
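To fix ideas, the following minimal numerical sketch (assuming NumPy; the matrices are randomly generated and purely illustrative, and the helper name \texttt{bimatrix\_apply} is ad hoc) evaluates the bimatrix action $y=\{A_{1},A_{2}\}x=A_{1}x+A_{2}^{\#}x^{\#}$ and confirms that it is additive and homogeneous with respect to real scalars but, in general, not with respect to complex scalars, which is why (\ref{sys}) is regarded as a linear system over the field of real numbers.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 3
A1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def bimatrix_apply(P1, P2, x):
    # y = {P1, P2} x = P1 x + conj(P2) conj(x)
    return P1 @ x + P2.conj() @ x.conj()

x1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
a, b = 0.7, -1.3   # real scalars

lhs = bimatrix_apply(A1, A2, a * x1 + b * x2)
rhs = a * bimatrix_apply(A1, A2, x1) + b * bimatrix_apply(A1, A2, x2)
print(np.allclose(lhs, rhs))          # True: linear over the reals

c = 1j                                 # a complex scalar
print(np.allclose(bimatrix_apply(A1, A2, c * x1),
                  c * bimatrix_apply(A1, A2, x1)))   # False in general
\end{verbatim}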
Our study is mainly based on the properties of the bimatrix $\{P_{1},P_{2}\} \in \{ \mathbf{C}^{n\times m},\mathbf{C}^{n\times m}\}$, especially its real representation \begin{equation} \left \{ P_{1},P_{2}\right \} _{\circ}\triangleq \left[ \begin{array} [c]{cc} \mathrm{Re}\left( P_{1}+P_{2}\right) & -\mathrm{Im}\left( P_{1}+P_{2}\right) \\ \mathrm{Im}\left( P_{1}-P_{2}\right) & \mathrm{Re}\left( P_{1}-P_{2}\right) \end{array} \right] \in \mathbf{R}^{2n\times2m}, \label{real} \end{equation} and complex-lifting \begin{equation} \left \{ P_{1},P_{2}\right \} _{\diamond}\triangleq \left[ \begin{array} [c]{cc} P_{1} & P_{2}^{\#}\\ P_{2} & P_{1}^{\#} \end{array} \right] \in \mathbf{C}^{2n\times2m}. \label{lifting1} \end{equation} Particularly, we have shown that, for stabilization and pole assignment of system (\ref{sys}), the so-called full state feedback \begin{equation} u=\left \{ K_{1},K_{2}\right \} x, \label{eqfeedback} \end{equation} is necessary \cite{zhou17arxiv}, and, generally, the well-known normal linear feedback \begin{equation} u=K_{1}x, \label{normalfeedback} \end{equation} is valid only for the antilinear system (\ref{antilinear}) when $t\in \mathbf{Z}^{+}.$ In this paper, we continue to study the complex-valued linear system (\ref{sys}). We are interested in the particular problem of pole assignment of this class of systems by the full state feedback (\ref{eqfeedback}). We will show that solutions to the pole assignment problem can be completely characterized by solutions to a class of generalized Sylvester bimatrix equations. Our study is clearly motivated by the existing work on pole assignment of the normal linear system, for which it is well known that solutions to the associated pole assignment problem can be characterized by solutions to some (generalized) Sylvester matrix equations \cite{bs82scl,duan93tac,duan15book}. We will provide complete solutions to such a type of generalized Sylvester bimatrix equations and will show that the obtained results include the existing ones for both the normal linear system (\ref{normal}) and the antilinear system (\ref{antilinear}) as special cases. From this point of view, we have built a quite general framework for pole assignment of linear systems. \subsection{\label{sec3.3}Second-Order Linear Systems} In this subsection, we use a second-order linear system model to demonstrate the purpose of studying the complex-valued linear system (\ref{sys}). Consider the second-order linear system \begin{equation} M\ddot{\xi}+D\dot{\xi}+K\xi=Gv, \label{second} \end{equation} where $M,D,K\in \mathbf{R}^{n\times n}$ and $G\in \mathbf{R}^{n\times q}$ are given matrices, $\xi$ is the state (often denoting the displacements of the object to be controlled), and $v$ is the control. Let the initial condition be $\xi \left( 0\right) =\xi_{10}\in \mathbf{R}^{n}$ and $\dot{\xi}\left( 0\right) =\xi_{20}\in \mathbf{R}^{n}.$ For simplicity, we only consider the continuous-time case without output equation and assume that $M$ is nonsingular. We further assume, without loss of generality, that $q=2m,$ since, otherwise, we set $G=[G,0_{n\times1}]$. Denote $G=[G_{1},G_{2}],G_{i}\in \mathbf{R}^{n\times m}$ and $v=[v_{1}^{\mathrm{T}},v_{2}^{\mathrm{T}}]^{\mathrm{T}},v_{i}\in \mathbf{R}^{m},i=1,2.$ Second-order linear systems can be used to describe many physical systems, for example, the mass-spring system \cite{kailath80book} and the spacecraft rendezvous control system \cite{cw60jas}.
To describe the second-order linear system (\ref{second}) as a complex-valued linear system, we first write it equivalently as \begin{equation} \left[ \begin{array} [c]{c} \dot{\xi}\\ \ddot{\xi} \end{array} \right] =\left[ \begin{array} [c]{cc} 0_{n\times n} & I_{n}\\ -M^{-1}K & -M^{-1}D \end{array} \right] \left[ \begin{array} [c]{c} \xi \\ \dot{\xi} \end{array} \right] +\left[ \begin{array} [c]{cc} 0_{n\times m} & 0_{n\times m}\\ G_{1} & G_{2} \end{array} \right] \left[ \begin{array} [c]{c} v_{1}\\ v_{2} \end{array} \right] . \label{second1} \end{equation} If we choose \begin{equation} x=\xi+\mathrm{j}\dot{\xi},\;u=v_{1}+\mathrm{j}v_{2}, \label{eqxu} \end{equation} then system (\ref{second1}) can be equivalently rewritten as (\ref{sys}) where the initial condition is $x_{0}=\xi _{10}+\mathrm{j}\xi_{20},$ and $A_{i}\in \mathbf{C}^{n\times n},B_{i} \in \mathbf{C}^{n\times m},i=1,2,$ are given by \begin{equation} \left \{ \begin{array} [c]{l} A_{1}=-\frac{1}{2}M^{-1}D-\frac{\mathrm{j}}{2}\left( I_{n}+M^{-1}K\right) ,\\ A_{2}=\frac{1}{2}M^{-1}D-\frac{\mathrm{j}}{2}\left( I_{n}-M^{-1}K\right) ,\\ B_{1}=\frac{1}{2}G_{2}+\frac{\mathrm{j}}{2}G_{1},\\ B_{2}=-\frac{1}{2}G_{2}-\frac{\mathrm{j}}{2}G_{1}=-B_{1}. \end{array} \right. \label{a12b12} \end{equation} \begin{remark} We have another method to describe system (\ref{second}) as (\ref{sys}). Write (\ref{second}) as \begin{equation} \left[ \begin{array} [c]{c} \dot{\xi}\\ \ddot{\xi} \end{array} \right] =\left[ \begin{array} [c]{cc} 0_{n\times n} & I_{n}\\ -M^{-1}K & -M^{-1}D \end{array} \right] \left[ \begin{array} [c]{c} \xi \\ \dot{\xi} \end{array} \right] +\left[ \begin{array} [c]{cc} 0_{n\times q} & 0_{n\times q}\\ G & 0_{n\times q} \end{array} \right] \left[ \begin{array} [c]{c} v\\ w \end{array} \right] , \label{eq92} \end{equation} where $w\in \mathbf{C}^{q}$ is a temp variable. Then, similar to (\ref{second1} ), by setting $x$ as in (\ref{eqxu}) and $u=v+\mathrm{j}w,$ (\ref{eq92}) can be written as (\ref{sys}), where $A_{i},i=1,2,$ are given by (\ref{a12b12}) and \[ B_{1}=\frac{\mathrm{j}}{2}G,\quad B_{2}=-\frac{\mathrm{j}}{2}G=-B_{1}. \] This method does not require that $q$ is an even number, but however leads to higher dimensions of the inputs than (\ref{a12b12}). \end{remark} The corresponding complex-valued linear system model (\ref{sys}) for (\ref{second}) seems more convenient to use than the augmented normal linear system model (\ref{second1}) since it possesses the same dimension as the original system (\ref{second}). We also mention that, as has been made clear in \cite{zhou17arxiv}, the full state feedback (\ref{eqfeedback}) for the associated complex-valued linear system (\ref{sys}) can be equivalently written as \begin{equation} v=\left[ \begin{array} [c]{c} v_{1}\\ v_{2} \end{array} \right] =\left \{ K_{1},K_{2}\right \} _{\circ}\left[ \begin{array} [c]{c} \xi \\ \dot{\xi} \end{array} \right] , \label{eqv} \end{equation} where $\left \{ K_{1},K_{2}\right \} _{\circ}$ is a real matrix. Therefore, the full state feedback (\ref{eqfeedback}) can be implemented physically. 
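To make the construction (\ref{a12b12}) concrete, the following minimal numerical sketch (assuming NumPy; the data $M,D,K,G$ are randomly generated and purely illustrative) builds $A_{1},A_{2},B_{1},B_{2}$ according to (\ref{a12b12}) and verifies that, with $x=\xi+\mathrm{j}\dot{\xi}$ and $u=v_{1}+\mathrm{j}v_{2}$, the right-hand side $\{A_{1},A_{2}\}x+\{B_{1},B_{2}\}u$ of (\ref{sys}) reproduces the stacked right-hand side of (\ref{second1}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
M = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsingular for this sketch
D = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
G = rng.standard_normal((n, 2 * m))
G1, G2 = G[:, :m], G[:, m:]
Mi = np.linalg.inv(M)

# Coefficients of the complex-valued model, as in the text
A1 = -0.5 * Mi @ D - 0.5j * (np.eye(n) + Mi @ K)
A2 =  0.5 * Mi @ D - 0.5j * (np.eye(n) - Mi @ K)
B1 =  0.5 * G2 + 0.5j * G1
B2 = -B1

# Random state and input of the second-order system
xi, xid = rng.standard_normal(n), rng.standard_normal(n)
v1, v2 = rng.standard_normal(m), rng.standard_normal(m)

# Right-hand side of the stacked first-order model
top = xid
bot = -Mi @ K @ xi - Mi @ D @ xid + G1 @ v1 + G2 @ v2

# Right-hand side of the complex-valued model
x, u = xi + 1j * xid, v1 + 1j * v2
xdot = A1 @ x + A2.conj() @ x.conj() + B1 @ u + B2.conj() @ u.conj()

print(np.allclose(xdot, top + 1j * bot))   # True
\end{verbatim}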
\subsection{Derivation of Linear Bimatrix Equations} The problem of pole assignment for the complex-valued linear system (\ref{sys}) by the full state feedback (\ref{eqfeedback}) can be stated as finding the bimatrix $\left \{ K_{1},K_{2}\right \} \in \{ \mathbf{C}^{m\times n},\mathbf{C}^{m\times n}\}$ such that the resulting closed-loop system \begin{equation} x^{+}=\left( \left \{ A_{1},A_{2}\right \} +\left \{ B_{1},B_{2}\right \} \left \{ K_{1},K_{2}\right \} \right) x, \label{closed2} \end{equation} possesses a desired eigenvalue set $\mathit{\Gamma}$ that is symmetric with respect to the real axis. Since $\mathit{\Gamma}$ is symmetric with respect to the real axis, there is a real matrix $F\in \mathbf{R}^{2n\times2n}$ such that $\lambda \left( F\right) =\mathit{\Gamma}$. \begin{lemma} The pole assignment problem is solvable if and only if there exists a nonsingular solution $\{X_{1},X_{2}\} \in \{ \mathbf{C}^{n\times n},\mathbf{C}^{n\times n}\}$ to the following generalized Sylvester bimatrix equation \begin{equation} \left \{ A_{1},A_{2}\right \} \left \{ X_{1},X_{2}\right \} +\left \{ B_{1},B_{2}\right \} \left \{ Y_{1},Y_{2}\right \} =\left \{ X_{1},X_{2}\right \} \left \{ F_{1},F_{2}\right \} , \label{syl} \end{equation} where $\left \{ F_{1},F_{2}\right \} \in \{ \mathbf{C}^{n\times n},\mathbf{C}^{n\times n}\}$ is the unique bimatrix satisfying \begin{equation} F=\left \{ F_{1},F_{2}\right \} _{\circ}. \label{eqff} \end{equation} In this case, the feedback gain bimatrix $\left \{ K_{1},K_{2}\right \} $ is determined by \begin{equation} \left \{ K_{1},K_{2}\right \} =\left \{ Y_{1},Y_{2}\right \} \left \{ X_{1},X_{2}\right \} ^{-1}. \label{eqsylk} \end{equation} \end{lemma} We give a remark regarding the closed-loop system (\ref{closed2}). \begin{remark} By the state transformation $y=\left \{ X_{1},X_{2}\right \} ^{-1}x,$ we have from (\ref{syl}) and (\ref{eqsylk}) that the closed-loop system (\ref{closed2}) is equivalent to \begin{align} y^{+} & =\left \{ X_{1},X_{2}\right \} ^{-1}x^{+}\nonumber \\ & =\left \{ X_{1},X_{2}\right \} ^{-1}\left( \left \{ A_{1},A_{2}\right \} +\left \{ B_{1},B_{2}\right \} \left \{ K_{1},K_{2}\right \} \right) x\nonumber \\ & =\left \{ X_{1},X_{2}\right \} ^{-1}\left( \left \{ A_{1},A_{2}\right \} +\left \{ B_{1},B_{2}\right \} \left \{ K_{1},K_{2}\right \} \right) \left \{ X_{1},X_{2}\right \} y\nonumber \\ & =\left \{ X_{1},X_{2}\right \} ^{-1}\left( \left \{ A_{1},A_{2}\right \} \left \{ X_{1},X_{2}\right \} +\left \{ B_{1},B_{2}\right \} \left \{ Y_{1},Y_{2}\right \} \right) y\nonumber \\ & =\left \{ X_{1},X_{2}\right \} ^{-1}\left \{ X_{1},X_{2}\right \} \left \{ F_{1},F_{2}\right \} y\nonumber \\ & =\left \{ F_{1},F_{2}\right \} y. \label{closed1} \end{align} Clearly, if we want the closed-loop system to be equivalent to a normal linear system, then we should set $F_{2}=0_{n\times n},$ and if we want it to be equivalent to an antilinear system, then we should set $F_{1}=0_{n\times n}.$ \end{remark} Therefore, to solve the pole assignment problem for system (\ref{sys}), the main task is to find complete solutions to the generalized Sylvester bimatrix equation (\ref{syl}), which is one of the main tasks of this paper and will be studied in Sections \ref{sec3}-\ref{sec4}. The generalized Sylvester bimatrix equation (\ref{syl}) is homogeneous and thus its solution is non-unique.
As done in pole assignment for normal linear systems \cite{bs82scl}, sometimes we may first prescribe $\{Y_{1},Y_{2}\}$ and then seek the (unique) solution for $\{X_{1},X_{2}\}$ (if it exists). In this case, (\ref{syl}) becomes the following non-homogeneous one: \begin{equation} \left \{ A_{1},A_{2}\right \} \left \{ X_{1},X_{2}\right \} -\left \{ X_{1},X_{2}\right \} \left \{ F_{1},F_{2}\right \} =\left \{ C_{1},C_{2}\right \} , \label{bimatrixsyl} \end{equation} where $\{C_{1},C_{2}\} \triangleq-\left \{ B_{1},B_{2}\right \} \left \{ Y_{1},Y_{2}\right \} $ is known. This equation is referred to as the Sylvester bimatrix equation, and will be studied in Section \ref{sec5}. The non-homogeneous equation (\ref{bimatrixsyl}) also appears in the stability analysis of the complex-valued linear system (\ref{sys}) with $t\in \mathbf{R}^{+}$, which is asymptotically stable if and only if \cite{zhou17arxiv} \begin{equation} \left \{ A_{1},A_{2}\right \} ^{\mathrm{H}}\left \{ P_{1},P_{2}\right \} +\left \{ P_{1},P_{2}\right \} \left \{ A_{1},A_{2}\right \} =-\left \{ Q_{1},Q_{2}\right \} , \label{bilya} \end{equation} has a (unique) solution $\left \{ P_{1},P_{2}\right \} >0$ for any given $\left \{ Q_{1},Q_{2}\right \} >0.$ Clearly, (\ref{bilya}) is in the form of (\ref{bimatrixsyl}), and will be referred to as the Lyapunov bimatrix equation. When we study stability of the complex-valued linear system (\ref{sys}) with $t\in \mathbf{Z}^{+},$ the discrete-time Lyapunov bimatrix equation \begin{equation} \left \{ P_{1},P_{2}\right \} =\left \{ A_{1},A_{2}\right \} ^{\mathrm{H}}\left \{ P_{1},P_{2}\right \} \left \{ A_{1},A_{2}\right \} +\left \{ Q_{1},Q_{2}\right \} , \label{bidilya} \end{equation} is encountered. It is shown in \cite{zhou17arxiv} that stability of (\ref{sys}) with $t\in \mathbf{Z}^{+}$ is equivalent to the existence of a (unique) solution $\left \{ P_{1},P_{2}\right \} >0$ to (\ref{bidilya}) for any given $\left \{ Q_{1},Q_{2}\right \} >0.$ Equation (\ref{bidilya}) is a special case of \begin{equation} \left \{ X_{1},X_{2}\right \} =\left \{ A_{1},A_{2}\right \} \left \{ X_{1},X_{2}\right \} \left \{ F_{1},F_{2}\right \} +\left \{ C_{1},C_{2}\right \} , \label{bistein} \end{equation} which is referred to as the Stein bimatrix equation, and will also be studied in Section \ref{sec5}. Although in the above the bimatrix $\left \{ F_{1},F_{2}\right \} $ has the same dimension as $\left \{ A_{1},A_{2}\right \} ,$ this is not necessary. Hence, without loss of generality, hereafter we assume, unless otherwise specified, that $\left \{ F_{1},F_{2}\right \} \in \{ \mathbf{C}^{p\times p},\mathbf{C}^{p\times p}\},$ where $p$ is any positive integer. \section{\label{sec3}Solutions to Generalized Sylvester Bimatrix Equations} In this section we study solutions to the generalized Sylvester bimatrix equation (\ref{syl}) and will also consider the special case in which its coefficients are determined by the normal linear system (\ref{normal}). Hereafter we assume that system (\ref{sys}) (or $(\{A_{1},A_{2}\},\{B_{1},B_{2}\})$) is controllable (see \cite{zhou17arxiv} for the definition of and a criterion for the controllability of system (\ref{sys})).
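Before turning to the closed-form solutions that are the subject of this section, we note in passing that the non-homogeneous equation (\ref{bimatrixsyl}) introduced above can already be solved numerically through the real representation (\ref{real}): since, by (\ref{real}) and the definition of the bimatrix action, $\{P_{1},P_{2}\}_{\circ}$ is precisely the matrix of the real-linear map $x\mapsto\{P_{1},P_{2}\}x$ in the coordinates $[\operatorname{Re}(x)^{\mathrm{T}},\operatorname{Im}(x)^{\mathrm{T}}]^{\mathrm{T}}$, sums and products of bimatrices are carried to sums and products of their real representations, and (\ref{bimatrixsyl}) is therefore equivalent to the ordinary real Sylvester equation $\{A_{1},A_{2}\}_{\circ}\{X_{1},X_{2}\}_{\circ}-\{X_{1},X_{2}\}_{\circ}\{F_{1},F_{2}\}_{\circ}=\{C_{1},C_{2}\}_{\circ}$. The following minimal sketch (assuming NumPy and SciPy; the helper names are ad hoc, the data are randomly generated, and this numerical route is only an illustration, not the closed-form approach pursued in the sequel) solves this real equation and maps the solution back to a bimatrix.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
n = 3
def rnd(shape): return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
A1, A2, F1, F2, C1, C2 = (rnd((n, n)) for _ in range(6))

def real_rep(P1, P2):
    # real representation {P1, P2}_o
    return np.block([[(P1 + P2).real, -(P1 + P2).imag],
                     [(P1 - P2).imag,  (P1 - P2).real]])

def from_real_rep(R, n, m):
    # inverse of the (bijective) map {P1, P2} -> {P1, P2}_o
    W, X = R[:n, :m], R[:n, m:]
    Y, Z = R[n:, :m], R[n:, m:]
    return 0.5 * ((W + Z) + 1j * (Y - X)), 0.5 * ((W - Z) - 1j * (X + Y))

def apply(P1, P2, x): return P1 @ x + P2.conj() @ x.conj()

# Real Sylvester equation  Ao Xo + Xo (-Fo) = Co
Xo = solve_sylvester(real_rep(A1, A2), -real_rep(F1, F2), real_rep(C1, C2))
X1, X2 = from_real_rep(Xo, n, n)

# Check the bimatrix equation on a random test vector
x = rnd(n)
lhs = apply(A1, A2, apply(X1, X2, x)) - apply(X1, X2, apply(F1, F2, x))
print(np.allclose(lhs, apply(C1, C2, x)))   # True (generically)
\end{verbatim}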
\subsection{General Solutions} Two polynomial bimatrices $\left \{ N_{1}(s),N_{2}(s)\right \} \in \{ \mathbf{C}^{n\times m},\mathbf{C}^{n\times m}\}$ and $\left \{ D_{1} (s),D_{2}(s)\right \} \in \{ \mathbf{C}^{m\times m},\mathbf{C}^{m\times m}\}$ are said to be right-coprime if \begin{equation} 2m=\mathrm{rank}\left \{ \left[ \begin{array} [c]{c} N_{1}\left( s\right) \\ D_{1}\left( s\right) \end{array} \right] ,\left[ \begin{array} [c]{c} N_{2}\left( s\right) \\ D_{2}\left( s\right) \end{array} \right] \right \} ,\forall s\in \mathbf{C,} \label{coprime1} \end{equation} where, and hereafter, $s$ should be treated as a \textit{real parameter} when computing its conjugate, namely, $s=s^{\#}$. We then can present the following explicit solutions to the generalized Sylvester bimatrix equation (\ref{syl}). \begin{theorem} \label{th1}Let $\left \{ N_{1}(s),N_{2}(s)\right \} \in \{ \mathbf{C}^{n\times m},\mathbf{C}^{n\times m}\}$ and $\left \{ D_{1}(s),D_{2}(s)\right \} \in \{ \mathbf{C}^{m\times m},\mathbf{C}^{m\times m}\}$ be two polynomial bimatrices such that \begin{equation} \left \{ sI_{n}-A_{1},-A_{2}\right \} \left \{ N_{1}(s),N_{2}(s)\right \} =\left \{ B_{1},B_{2}\right \} \left \{ D_{1}(s),D_{2}(s)\right \} , \label{coprime} \end{equation} and $\left \{ N_{1}(s),N_{2}(s)\right \} $ and $\left \{ D_{1}(s),D_{2} (s)\right \} $ are right-coprime. Then complete solutions to (\ref{syl}) are given by \begin{equation} \left \{ \begin{array} [c]{rl} \left \{ X_{1},X_{2}\right \} & =\sum \limits_{i=0}^{\omega}\left \{ N_{1i},N_{2i}\right \} \left \{ Z_{1},Z_{2}\right \} \left \{ F_{1} ,F_{2}\right \} ^{i},\\ \left \{ Y_{1},Y_{2}\right \} & =\sum \limits_{i=0}^{\omega}\left \{ D_{1i},D_{2i}\right \} \left \{ Z_{1},Z_{2}\right \} \left \{ F_{1} ,F_{2}\right \} ^{i}, \end{array} \right. \label{xysolution} \end{equation} where $\{Z_{1},Z_{2}\} \in \{ \mathbf{C}^{m\times p},\mathbf{C}^{m\times p}\}$ is an arbitrarily bimatrix and \begin{equation} \left[ \begin{array} [c]{c} N_{j}(s)\\ D_{j}(s) \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{ji}\\ D_{ji} \end{array} \right] s^{i},\;j=1,2,\; \omega \in \mathbf{Z}^{+}. \label{nidi} \end{equation} \end{theorem} Notice that the polynomial bimatrix equations (\ref{coprime} ) can be written as coupled polynomial matrix equations \begin{equation} \left \{ \begin{array} [c]{rl} \left( sI_{n}-A_{1}\right) N_{1}(s)-A_{2}^{\#}N_{2}(s) & =B_{1} D_{1}(s)+B_{2}^{\#}D_{2}(s),\\ \left( sI_{n}-A_{1}\right) N_{2}^{\#}(s)-A_{2}^{\#}N_{1}^{\#}(s) & =B_{1}D_{2}^{\#}(s)+B_{2}^{\#}D_{1}^{\#}(s), \end{array} \right. \label{eqcoprime_coupled} \end{equation} and, the generalized Sylvester bimatrix equation (\ref{syl}) can also be expressed equivalently as coupled matrix equations \begin{equation} \left \{ \begin{array} [c]{rl} A_{1}X_{1}+A_{2}^{\#}X_{2}+B_{1}Y_{1}+B_{2}^{\#}Y_{2} & =X_{1}F_{1}+X_{2} ^{\#}F_{2},\\ A_{1}X_{2}^{\#}+A_{2}^{\#}X_{1}^{\#}+B_{1}Y_{2}^{\#}+B_{2}^{\#}Y_{1}^{\#} & =X_{1}F_{2}^{\#}+X_{2}^{\#}F_{1}^{\#}. \end{array} \right. \label{sylcoupled} \end{equation} In the next subsection, we show how to transform these coupled equations into equivalent decoupled ones. \subsection{Decoupling of Coupled Equations} We first show that the coupled polynomial matrix equations in (\ref{eqcoprime_coupled}) can be decoupled. 
\begin{lemma} \label{lm1}The coupled polynomial matrix equations (\ref{eqcoprime_coupled}) with unknowns $(N_{1}(s),N_{2}(s),D_{1}(s),D_{2}(s))$ are solvable if and only if the following decoupled matrix equations \begin{equation} \left \{ \begin{array} [c]{rl} \left( sI_{n}-A_{1}\right) N_{+}(s)-A_{2}^{\#}N_{+}^{\#}(s) & =B_{1} D_{+}(s)+B_{2}^{\#}D_{+}^{\#}(s),\\ \left( sI_{n}-A_{1}\right) N_{-}(s)+A_{2}^{\#}N_{-}^{\#}(s) & =B_{1} D_{-}(s)-B_{2}^{\#}D_{-}^{\#}(s), \end{array} \right. \label{eqcoprime_decoupled} \end{equation} with unknowns $(N_{+}(s),N_{-}(s),D_{+}(s),D_{-}(s))$ are solvable. Moreover, $(N_{1}(s),N_{2}(s),D_{1}(s),D_{2}(s))$ and $(N_{+}(s)$, $N_{-}(s),D_{+} (s),D_{-}(s))$ are one-to-one according to \begin{equation} \left \{ \begin{array} [c]{ll} N_{1}(s)=\frac{1}{2}\left( N_{+}(s)+N_{-}(s)\right) , & D_{1}(s)=\frac{1} {2}\left( D_{+}(s)+D_{-}(s)\right) ,\\ N_{2}(s)=\frac{1}{2}\left( N_{+}(s)-N_{-}(s)\right) ^{\#}, & D_{2} (s)=\frac{1}{2}\left( D_{+}(s)-D_{-}(s)\right) ^{\#}. \end{array} \right. \label{eqnpnm} \end{equation} Furthermore, $\left \{ N_{1}(s),N_{2}(s)\right \} $ and $\{D_{1} (s),D_{2}(s)\}$ are right-coprime if and only if \begin{equation} \mathrm{rank}\left[ \begin{array} [c]{cc} N_{+}(s) & -N_{-}(s)\\ D_{+}(s) & -D_{-}(s)\\ N_{+}^{\#}(s) & N_{-}^{\#}(s)\\ D_{+}^{\#}(s) & D_{-}^{\#}(s) \end{array} \right] =2m,\; \forall s\in \mathbf{C}. \label{eqcoprime5} \end{equation} \end{lemma} By this lemma, the two matrix pairs $\left( N_{+}(s),D_{+}(s)\right) $ and $\left( N_{-}(s),D_{-}(s)\right) $ can be solved separately, which is useful in computation. In fact, we need only to solve the first equation since the second one can be solved in a similar way by setting $A_{2}\mapsto-A_{2}$ and $B_{2}\mapsto-B_{2}.$ General solutions to (\ref{eqcoprime_decoupled}) and (\ref{eqcoprime5}) will be investigated in a separate paper. Similar to Lemma \ref{lm1}, the coupled matrix equations (\ref{sylcoupled}) can also be decoupled in some cases, as shown below. \begin{lemma} \label{lm2}Assume that there exist two real matrices $F_{ii}\in \mathbf{R} ^{p\times p},i=1,2,$ such that \begin{equation} F\triangleq \{F_{1},F_{2}\}_{\circ}=\mathrm{diag}\{F_{11},F_{22}\}, \label{eqf} \end{equation} namely, $F_{1}$ and $F_{2}$ are also real matrices given by (see (\ref{real})) \begin{equation} F_{1}=\frac{1}{2}\left( F_{11}+F_{22}\right) ,\;F_{2}=\frac{1}{2}\left( F_{11}-F_{22}\right) . \label{eqf1f2} \end{equation} Then the associated coupled matrix equations (\ref{sylcoupled}) are solvable with unknowns $(X_{1},X_{2},Y_{1},Y_{2})$ if and only if the decoupled matrix equations \begin{equation} \left \{ \begin{array} [c]{rl} A_{1}X_{+}+A_{2}^{\#}X_{+}^{\#}+B_{1}Y_{+}+B_{2}^{\#}Y_{+}^{\#} & =X_{+}\left( F_{1}+F_{2}\right) ,\\ A_{1}X_{-}-A_{2}^{\#}X_{-}^{\#}+B_{1}Y_{-}-B_{2}^{\#}Y_{-}^{\#} & =X_{-}\left( F_{1}-F_{2}\right) , \end{array} \right. \label{eqsylde} \end{equation} are solvable with unknowns $(X_{+},Y_{+})$ and $(X_{-},Y_{-}).$ Moreover, $(X_{1},X_{2},Y_{1},Y_{2})\ $and $(X_{+},Y_{+},X_{-},Y_{-})$ are one-to-one according to \begin{equation} \left \{ \begin{array} [c]{ll} X_{1}=\frac{1}{2}\left( X_{+}+X_{-}\right) , & Y_{1}=\frac{1}{2}\left( Y_{+}+Y_{-}\right) ,\\ X_{2}=\frac{1}{2}\left( X_{+}-X_{-}\right) ^{\#}, & Y_{2}=\frac{1}{2}\left( Y_{+}-Y_{-}\right) ^{\#}. \end{array} \right. 
\label{eqxpxm} \end{equation} \end{lemma} The assumption (\ref{eqf}) is not restrictive if only asymptotic stability of the closed-loop system is concerned since, in view of (\ref{eqff}), we can always find real matrices $F_{1}\ $and $F_{2}$ such that $F$ is asymptotically stable. Combining Lemmas \ref{lm1} and \ref{lm2} gives the following result. \begin{theorem} Assume that there exist two real matrices $F_{ii}\in \mathbf{R}^{p\times p},i=1,2$ satisfying (\ref{eqf}) and $\left( F_{1},F_{2}\right) $ is given\ by (\ref{eqf1f2}). Let $(N_{+}(s),D_{+}(s))$ and $(N_{-}(s),D_{-}(s))$ satisfy respectively the first and second equations of (\ref{eqcoprime_decoupled}) and such that (\ref{eqcoprime5}). Denote \begin{equation} \left[ \begin{array} [c]{c} N_{\pm}(s)\\ D_{\pm}(s) \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{\pm i}\\ D_{\pm i} \end{array} \right] s^{i},\; \omega \in \mathbf{Z}^{+}. \label{ndpmpm} \end{equation} Then complete solutions to (\ref{eqsylde}) are given by \begin{equation} \left[ \begin{array} [c]{c} X_{\pm}\\ Y_{\pm} \end{array} \right] =\frac{1}{2}\sum \limits_{i=0}^{\omega}\left( \left[ \begin{array} [c]{c} N_{\pm i}\\ D_{\pm i} \end{array} \right] \left( Z_{\pm}+Z_{\pm}^{\#}\right) +\left[ \begin{array} [c]{c} N_{\mp i}\\ D_{\mp i} \end{array} \right] \left( Z_{\pm}-Z_{\pm}^{\#}\right) \right) \left( F_{1}\pm F_{2}\right) ^{i}, \label{eqxypm} \end{equation} where $\left( Z_{+},Z_{-}\right) $ and $\left( Z_{1},Z_{2}\right) $ are one-to-one according to \begin{equation} Z_{\pm}=Z_{1}\pm Z_{2}^{\#}. \label{eqzpm} \end{equation} \end{theorem} It follows that, though $\left( X_{+},Y_{+}\right) $ and $\left( X_{-},Y_{-}\right) $ are decoupled in (\ref{eqsylde}), and $(N_{+} (s),D_{+}(s))$ and $(N_{-}(s),D_{-}(s))$ are also decoupled in (\ref{eqcoprime_decoupled}), $\left( X_{+},Y_{+}\right) $ (and $\left( X_{-},Y_{-}\right) $) depends on both $(N_{+}(s),D_{+}(s))$ and $(N_{-}(s),D_{-}(s)).$ If we let $Z_{\pm}\in \mathbf{R}^{m\times p},$ then they are decoupled as \[ \left[ \begin{array} [c]{c} X_{\pm}\\ Y_{\pm} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{\pm i}\\ D_{\pm i} \end{array} \right] Z_{\pm}\left( F_{1}\pm F_{2}\right) ^{i}, \] which is appealing in mathematics. \subsection{\label{sec4.3}Solutions for Normal Linear Systems} When considering the generalized Sylvester bimatrix equation (\ref{syl}) for the normal linear system (\ref{normal}), we may want the closed-loop system to be (similar to) a normal system as well (see Remark \ref{rm4} later for a different situation). Thus, according to (\ref{closed1}), we should choose $F_{2}=0_{n\times n}.$ Consequently, the generalized Sylvester bimatrix equation (\ref{syl}) or the coupled matrix equation (\ref{sylcoupled}) becomes \begin{equation} \left \{ \begin{array} [c]{rl} A_{1}X_{1}+B_{1}Y_{1} & =X_{1}F_{1},\\ A_{1}X_{2}^{\#}+B_{1}Y_{2}^{\#} & =X_{2}^{\#}F_{1}^{\#}, \end{array} \right. \label{sylnormal} \end{equation} where $F_{1}$ is such that $y^{+}=F_{1}y$ is asymptotically stable. These two equations in (\ref{sylnormal}) are exactly the same one \begin{equation} A_{1}X_{0}+B_{1}Y_{0}=X_{0}F_{0}, \label{sylvester} \end{equation} where $F_{0}=F_{1}$ and $F_{1}^{\#}.$ Equation (\ref{sylvester}) is known as the generalized Sylvester matrix equation and has been extensively used and studied in the literature for pole assignment of the normal linear system (\ref{normal}) \cite{duan93tac,duan15book,zd06scl}. 
In this case, the two polynomial matrix equations (\ref{eqcoprime_decoupled}) reduce further to the single one \begin{equation} \left( sI_{n}-A_{1}\right) N_{0}(s)=B_{1}D_{0}\left( s\right) , \label{normalcoprime} \end{equation} that has been investigated in the literature, for example, \cite{duan93tac} and \cite{zdl09iet}. We can simply choose \begin{equation} \left \{ \begin{array} [c]{ll} N_{1}(s)=N_{0}(s), & N_{2}(s)=0_{n\times m},\\ D_{1}(s)=D_{0}(s),\; & D_{2}(s)=0_{m\times m}, \end{array} \right. \label{eq82} \end{equation} to satisfy (\ref{eqcoprime_coupled}). Thus (\ref{coprime1}) is equivalent to, for all $s\in \mathbf{C},$ \[ 2m=\mathrm{rank}\left[ \begin{array} [c]{cc} N_{1}\left( s\right) & 0_{n\times m}\\ D_{1}\left( s\right) & 0_{m\times m}\\ 0_{n\times m} & N_{1}^{\#}\left( s\right) \\ 0_{m\times m} & D_{1}^{\#}\left( s\right) \end{array} \right] =2\mathrm{rank}\left[ \begin{array} [c]{c} N_{0}(s)\\ D_{0}(s) \end{array} \right] , \] which implies that $\left \{ N_{1}(s),N_{2}(s)\right \} $ and $\{D_{1} (s),D_{2}(s)\}$ are right-coprime if and only if \[ \mathrm{rank}\left[ \begin{array} [c]{c} N_{0}(s)\\ D_{0}(s) \end{array} \right] =m,\; \forall s\in \mathbf{C}, \] namely, $N_{0}\left( s\right) $ and $D_{0}\left( s\right) $ are right-coprime in the normal sense \cite{kailath80book}. In this case, by denoting \[ \left[ \begin{array} [c]{c} N_{0}(s)\\ D_{0}(s) \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{0i}\\ D_{0i} \end{array} \right] s^{i}, \] complete solutions to (\ref{sylvester}) are given by \cite{zd06scl} \begin{equation} \left[ \begin{array} [c]{c} X_{0}\\ Y_{0} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{0i}\\ D_{0i} \end{array} \right] Z_{0}F_{0}^{i}, \label{eq91} \end{equation} where $Z_{0}\in \mathbf{C}^{m\times p}$ is any matrix. On the other hand, in view of (\ref{eq82}), we can apply solution (\ref{xysolution}) on equation (\ref{sylnormal}) (setting $A_{2}=0_{n\times n},B_{2}=0_{n\times m} ,F_{2}=0_{p\times p}$ and using (\ref{eq82})) to obtain \begin{equation} \left \{ \begin{array} [c]{l} \left[ \begin{array} [c]{c} X_{1}\\ Y_{1} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{0i}\\ D_{0i} \end{array} \right] Z_{1}F_{1}^{i},\\ \left[ \begin{array} [c]{c} X_{2}\\ Y_{2} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{0i}\\ D_{0i} \end{array} \right] Z_{2}F_{1}^{\#i}, \end{array} \right. \label{eqxyxy} \end{equation} where $Z_{i}\in \mathbf{C}^{m\times p},i=1,2,$ are any matrices. These two expressions coincide exactly with (\ref{eq91}). \begin{remark} \label{rm9}Notice that, if we choose $Z_{2}=0_{m\times n},$ then we have from (\ref{eqxyxy}) that $X_{2}=0_{n\times n}$ and $Y_{2}=0_{m\times n}$. Then, if $Z_{1}$ is properly chosen such that $X_{1}$ is nonsingular, by (\ref{eqsylk} ), the feedback gain bimatrix is given by \begin{align} \left \{ K_{1},K_{2}\right \} & =\left \{ Y_{1},0_{m\times n}\right \} \left \{ X_{1},0_{n\times n}\right \} ^{-1}\nonumber \\ & =\left \{ Y_{1},0_{m\times n}\right \} \left \{ X_{1}^{-1},0_{n\times n}\right \} \nonumber \\ & =\left \{ Y_{1}X_{1}^{-1},0_{m\times n}\right \} , \label{eq89} \end{align} which means that the resulting controller is just the normal linear state feedback (\ref{normalfeedback}). However, if $Z_{2}\neq0_{m\times n},$ then $X_{2}\neq0_{n\times n}$ and $Y_{2}\neq0_{m\times n},$ which in turn implies that the resulting controller is the full state feedback (\ref{eqfeedback}). 
Yet in both cases the closed-loop system is equivalent to a normal linear system (see (\ref{closed1})). \end{remark} \section{\label{sec4}Solutions for Antilinear Systems} In this section, we carry out a careful study of pole assignment for the antilinear system (\ref{antilinear}) and present explicit solutions to the associated Sylvester bimatrix equations (\ref{syl}) in different special cases. \subsection{General Solutions} We first provide a method for computing the right-coprime factorization (\ref{coprime}) or (\ref{eqcoprime_decoupled}) associated with the antilinear system (\ref{antilinear}). \begin{lemma} \label{coro10}Consider the antilinear system (\ref{antilinear}). Let $\left( N_{0}(s),D_{0}\left( s\right) \right) \in(\mathbf{C}^{n\times m} ,\mathbf{C}^{m\times m})$ satisfy \begin{equation} sN_{0}^{\#}(s)-A_{2}N_{0}(s)=B_{2}D_{0}(s). \label{anticoprime} \end{equation} Then the polynomial matrices $N_{+}(s),D_{+}(s),N_{-}(s)$ and $D_{-}(s)$ satisfying (\ref{eqcoprime_decoupled}) can be chosen as \begin{equation} \left \{ \begin{array} [c]{ll} N_{+}\left( s\right) =N_{0}\left( s\right) , & D_{+}\left( s\right) =D_{0}\left( s\right) ,\\ N_{-}\left( s\right) =N_{0}\left( -s\right) , & D_{-}\left( s\right) =D_{0}\left( -s\right) , \end{array} \right. \label{eq76} \end{equation} or equivalently, $\{N_{1}(s),N_{2}(s)\}$ and $\{D_{1}(s),D_{2}(s)\}$ satisfying (\ref{coprime}) can be chosen as \begin{equation} \left \{ \begin{array} [c]{ll} N_{1}\left( s\right) =\frac{1}{2}\left( N_{0}\left( s\right) +N_{0}\left( -s\right) \right) , & D_{1}(s)=\frac{1}{2}\left( D_{0}(s)+D_{0}(-s)\right) ,\\ N_{2}\left( s\right) =\frac{1}{2}\left( N_{0}\left( s\right) -N_{0}\left( -s\right) \right) ^{\#}, & D_{2}(s)=\frac{1}{2}\left( D_{0}(s)-D_{0}(-s)\right) ^{\#}. \end{array} \right. \label{eqcoprime6} \end{equation} Moreover, $\{N_{1}(s),N_{2}(s)\}$ and $\{D_{1}(s),D_{2}(s)\}$ are right-coprime if and only if \begin{equation} \mathrm{rank}\left[ \begin{array} [c]{cc} N_{0}\left( s\right) & -N_{0}\left( -s\right) \\ D_{0}\left( s\right) & -D_{0}\left( -s\right) \\ N_{0}^{\#}\left( s\right) & N_{0}^{\#}\left( -s\right) \\ D_{0}^{\#}\left( s\right) & D_{0}^{\#}\left( -s\right) \end{array} \right] =2m,\; \forall s\in \mathbf{C}. \label{eq79} \end{equation} \end{lemma} A polynomial matrix pair $\left( N_{0}(s),D_{0}\left( s\right) \right) $ satisfying (\ref{eq79}) may be called anti-right-coprime, and the equation (\ref{anticoprime}) can be referred to as the anti-right-coprime factorization of the antilinear system (\ref{antilinear}), which can be studied by using the approach in \cite{zdl09iet}. The Sylvester bimatrix equation (\ref{syl}) or the decoupled matrix equations (\ref{eqsylde}) associated with the antilinear system (\ref{antilinear}) is equivalent to \begin{equation} \left \{ \begin{array} [c]{cc} A_{2}^{\#}X_{+}^{\#}+B_{2}^{\#}Y_{+}^{\#} & =X_{+}\left( F_{1}+F_{2}\right) ,\\ -A_{2}^{\#}X_{-}^{\#}-B_{2}^{\#}Y_{-}^{\#} & =X_{-}\left( F_{1}-F_{2}\right) . \end{array} \right. \label{eqsyl2} \end{equation} These two equations also take the same form, although their coefficients are different. \begin{corollary} \label{coro14}Assume that there exist two real matrices $F_{ii}\in \mathbf{R}^{p\times p}$, $i=1,2$, satisfying (\ref{eqf}), and let $\left( F_{1},F_{2}\right) $ be given by (\ref{eqf1f2}). Let $\left( N_{0}(s),D_{0}\left( s\right) \right) \in(\mathbf{C}^{n\times m},\mathbf{C}^{m\times m})$ satisfy (\ref{anticoprime}) and (\ref{eq79}).
Denote \begin{equation} \left[ \begin{array} [c]{c} N_{0}\left( s\right) \\ D_{0}\left( s\right) \end{array} \right] =\sum \limits_{i=0}^{\omega}\left[ \begin{array} [c]{c} N_{0,i}\\ D_{0,i} \end{array} \right] s^{i}\triangleq \sum \limits_{i=0}^{2\varpi}\left[ \begin{array} [c]{c} N_{0,i}\\ D_{0,i} \end{array} \right] s^{i}. \label{eqn0d0} \end{equation} Then complete solutions to (\ref{eqsyl2}) are given by \begin{equation} \left[ \begin{array} [c]{c} X_{\pm}\\ Y_{\pm} \end{array} \right] =\sum \limits_{i=0}^{\varpi}\left[ \begin{array} [c]{c} N_{0,2i}\\ D_{0,2i} \end{array} \right] Z_{\pm}\left( F_{1}\pm F_{2}\right) ^{2i}\pm \sum \limits_{i=0}^{\varpi-1}\left[ \begin{array} [c]{c} N_{0,2i+1}\\ D_{0,2i+1} \end{array} \right] Z_{\pm}^{\#}\left( F_{1}\pm F_{2}\right) ^{2i+1}, \label{eq99} \end{equation} where $Z_{\pm}$ are determined by (\ref{eqzpm}). \end{corollary} \subsection{Normalization of Antilinear Systems} We now consider the special case where $F_{2}=0_{n\times n}.$ In this case, by (\ref{closed1}), the closed-loop system (\ref{closed2}) is equivalent to $y^{+}=F_{1}y,$ which is a normal linear system and is asymptotically stable if and only if $F_{1}$ is asymptotically stable ($\mu \left( F_{1}\right) <0$ when $t\in \mathbf{R}^{+}$ and $\rho \left( F_{1}\right) <1$ when $t\in \mathbf{Z}^{+}).$ In this case, the generalized Sylvester bimatrix equation (\ref{syl}) or the coupled matrix equations (\ref{sylcoupled}) become \begin{equation} \left \{ \begin{array} [c]{rl} A_{2}^{\#}X_{2}+B_{2}^{\#}Y_{2} & =X_{1}F_{1},\\ A_{2}^{\#}X_{1}^{\#}+B_{2}^{\#}Y_{1}^{\#} & =X_{2}^{\#}F_{1}^{\#}. \end{array} \right. \label{conjsyl2} \end{equation} Hence, in this case, if (\ref{conjsyl2}) has a nonsingular solution $\{X_{1},X_{2}\},$ then the full state feedback (\ref{eqfeedback}) and (\ref{eqsylk}) will render the closed-loop system equivalent to a normal linear system. We call this procedure the \textit{normalization} of the antilinear system (\ref{antilinear}). \begin{corollary} Let $F_{2}=0_{p\times p}$ and $\left( N_{0}(s),D_{0}\left( s\right) \right) \in(\mathbf{C}^{n\times m},\mathbf{C}^{m\times m})$ satisfy (\ref{anticoprime}), (\ref{eq79}) and (\ref{eqn0d0}). Then complete solutions to the first equation of (\ref{conjsyl2}) are given by \begin{equation} \left[ \begin{array} [c]{c} X_{1}\\ Y_{1} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left \{ \begin{array} [c]{ll} \left[ \begin{array} [c]{c} N_{0,i}\\ D_{0,i} \end{array} \right] Z_{1}F_{1}^{i}, & i\text{ is even}\\ \left[ \begin{array} [c]{c} N_{0,i}\\ D_{0,i} \end{array} \right] Z_{2}F_{1}^{i}, & i\text{ is odd,} \end{array} \right. \label{x1y1a} \end{equation} and complete solutions to the second equation of (\ref{conjsyl2}) are given by \begin{equation} \left[ \begin{array} [c]{c} X_{2}\\ Y_{2} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left \{ \begin{array} [c]{ll} \left[ \begin{array} [c]{c} N_{0,i}^{\#}\\ D_{0,i}^{\#} \end{array} \right] Z_{2}F_{1}^{i}, & i\text{ is even}\\ \left[ \begin{array} [c]{c} N_{0,i}^{\#}\\ D_{0,i}^{\#} \end{array} \right] Z_{1}F_{1}^{i}, & i\text{ is odd,} \end{array} \right. \label{x2y2a} \end{equation} \newline where $Z_{i}\in \mathbf{C}^{m\times p},i=1,2$ are arbitrary matrices.
\end{corollary} \subsection{Anti-Preserving of Antilinear Systems} We next consider the special case where $F_{1}=0_{n\times n}.$ In this case, by (\ref{closed1}), the closed-loop system (\ref{closed2}) is equivalent to $y^{+}=F_{2}^{\#}y^{\#},$ which is still an antilinear system and is asymptotically stable if and only if $t\in \mathbf{Z}^{+}$ and \cite{zhou17arxiv} \begin{equation} \rho \left( F_{2}F_{2}^{\#}\right) <1. \label{eqf2rho} \end{equation} In this case, the generalized Sylvester bimatrix equation (\ref{syl}) or the coupled matrix equations (\ref{sylcoupled}) become \begin{equation} \left \{ \begin{array} [c]{rl} A_{2}^{\#}X_{1}^{\#}+B_{2}^{\#}Y_{1}^{\#} & =X_{1}F_{2}^{\#},\\ A_{2}^{\#}X_{2}+B_{2}^{\#}Y_{2} & =X_{2}^{\#}F_{2}, \end{array} \right. \label{conjsyl1} \end{equation} which are decoupled. Hence, in this case, if (\ref{conjsyl1}) has a nonsingular solution $\{X_{1},X_{2}\},$ then the full state feedback (\ref{eqfeedback}) and (\ref{eqsylk}) will render the closed-loop system (equivalent to) an antilinear system as well. We call this procedure the \textit{anti-preserving} of the antilinear system (\ref{antilinear}). \begin{corollary} \label{coro15}Let $F_{1}=0_{p\times p}$ and $\left( N_{0}(s),D_{0}\left( s\right) \right) \in(\mathbf{C}^{n\times m},\mathbf{C}^{m\times m})$ satisfy (\ref{anticoprime}), (\ref{eq79}) and (\ref{eqn0d0}). Then complete solutions to the first equation of (\ref{conjsyl1}) are given by \begin{equation} \left[ \begin{array} [c]{c} X_{1}\\ Y_{1} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left \{ \begin{array} [c]{ll} \left[ \begin{array} [c]{c} N_{0,i}\\ D_{0,i} \end{array} \right] Z_{1}\left( F_{2}^{\#}F_{2}\right) ^{\frac{i}{2}}, & i\text{ is even}\\ \left[ \begin{array} [c]{c} N_{0,i}\\ D_{0,i} \end{array} \right] Z_{1}^{\#}F_{2}\left( F_{2}^{\#}F_{2}\right) ^{\frac{i-1}{2}}, & i\text{ is odd,} \end{array} \right. \label{eqx1y1} \end{equation} and complete solutions to the second equation of (\ref{conjsyl1}) are given by \begin{equation} \left[ \begin{array} [c]{c} X_{2}\\ Y_{2} \end{array} \right] =\sum \limits_{i=0}^{\omega}\left \{ \begin{array} [c]{ll} \left[ \begin{array} [c]{c} N_{0,i}^{\#}\\ D_{0,i}^{\#} \end{array} \right] Z_{2}\left( F_{2}^{\#}F_{2}\right) ^{\frac{i}{2}}, & i\text{ is even}\\ \left[ \begin{array} [c]{c} N_{0,i}^{\#}\\ D_{0,i}^{\#} \end{array} \right] Z_{2}^{\#}F_{2}\left( F_{2}^{\#}F_{2}\right) ^{\frac{i-1}{2}}, & i\text{ is odd,} \end{array} \right. \label{eqx2y2} \end{equation} \newline where $Z_{i}\in \mathbf{C}^{m\times p},i=1,2$ are arbitrary matrices. \end{corollary} The solution $\left( X_{1},Y_{1}\right) $ given in Corollary \ref{coro15} coincides with that obtained in \cite{wz17book}. We emphasize that the solutions in Corollary \ref{coro15} can \textit{only be used to design discrete-time antilinear systems}. In contrast, the coupled equations (\ref{eqsyl2}) and (\ref{conjsyl2}) can be used to design both continuous-time and discrete-time antilinear systems. \begin{remark} If we choose $Z_{2}=0_{m\times n}$ and $Z_{1}$ such that $X_{1}$ is nonsingular, then, by (\ref{eqsylk}), we also have (\ref{eq89}), namely, the resulting controller is the normal state feedback (\ref{normalfeedback}). This case is just the one studied in \cite{wz17book} and our controller is exactly the one obtained there (yet the rank condition (\ref{eq79}) was not available in \cite{wz17book}, where a different concept was adopted).
However, similar to Remark \ref{rm9}, if we choose $Z_{2}\neq0_{m\times n}$ such that $X_{2}\neq0_{n\times n}$ and/or $Y_{2}\neq0_{m\times n}$, then the resulting controller is the full state feedback (\ref{eqfeedback}). \end{remark} \begin{remark} \label{rm4}In Subsection \ref{sec4.3} we have assumed that $F_{2}=0_{n\times n}$ for the normal linear system (\ref{normal}), namely, the closed-loop system is equivalent to a normal linear system. However, similar to the discussion in this subsection, for the normal linear system (\ref{normal}), we can also set $F_{1}=0_{n\times n}$ such that the closed-loop system is equivalent to an antilinear system. We may call this procedure the anti-linearization of the normal linear system (\ref{normal}). The corresponding coupled matrix equations and their complete solutions can be easily stated and are omitted for brevity. \end{remark} \section{\label{sec5}Sylvester and Stein Bimatrix Equations} In this section we study solutions to Sylvester and Stein bimatrix equations. Since we frequently deal with the normal linear system (\ref{normal}) and the antilinear system (\ref{antilinear}), for easy reference, we make the following assumptions. \begin{assumption} \label{ass1}$A_{2}=0_{n\times n},F_{2}=0_{p\times p}$ and $C_{2}=0_{n\times p},$ namely, $\{A_{1},A_{2}\}=\{A_{1},0_{n\times n}\},\{F_{1},F_{2}\}=\{F_{1},0_{p\times p}\}$ and $\{C_{1},C_{2}\}=\{C_{1},0_{n\times p}\}.$ \end{assumption} \begin{assumption} \label{ass2}$A_{1}=0_{n\times n},F_{1}=0_{p\times p}$ and $C_{1}=0_{n\times p},$ namely, $\{A_{1},A_{2}\}=\{0_{n\times n},A_{2}\},\{F_{1},F_{2}\}=\{0_{p\times p},F_{2}\}$ and $\{C_{1},C_{2}\}=\{0_{n\times p},C_{2}\}.$ \end{assumption} \subsection{\label{sec5.1}The Sylvester Bimatrix Equation} In this subsection we discuss the Sylvester bimatrix equation (\ref{bimatrixsyl}), which can also be written as the coupled matrix equations \begin{equation} \left \{ \begin{array} [c]{cl} C_{1} & =A_{1}X_{1}+A_{2}^{\#}X_{2}-(X_{1}F_{1}+X_{2}^{\#}F_{2}),\\ C_{2} & =A_{1}^{\#}X_{2}+A_{2}X_{1}-(X_{1}^{\#}F_{2}+X_{2}F_{1}). \end{array} \right. \label{eqc1c2} \end{equation} \begin{proposition} \label{pp1}The Sylvester bimatrix equation (\ref{bimatrixsyl}) has a unique solution if and only if \begin{equation} \lambda \left \{ A_{1},A_{2}\right \} \cap \lambda \left \{ F_{1},F_{2}\right \} =\varnothing. \label{eqeigjoint} \end{equation} In this case, the unique solution is given by \begin{equation} \left \{ X_{1},X_{2}\right \} =\left( \sum \limits_{k=0}^{2p}\beta_{k}\left \{ A_{1},A_{2}\right \} ^{k}\right) ^{-1}\sum \limits_{k=1}^{2p}\beta_{k}\left \{ D_{1}(k),D_{2}(k)\right \} , \label{equniquesylsolu} \end{equation} where $\beta \left( s\right) =s^{2p}+\beta_{2p-1}s^{2p-1}+\cdots+\beta_{1}s+\beta_{0}\ $is the characteristic polynomial of $\left \{ F_{1},F_{2}\right \} ,$ and, for $k\geq1,$ \begin{equation} \left \{ D_{1}(k),D_{2}(k)\right \} =\sum \limits_{i=0}^{k-1}\left \{ A_{1},A_{2}\right \} ^{i}\left \{ C_{1},C_{2}\right \} \left \{ F_{1},F_{2}\right \} ^{k-1-i}.
\label{eqd1kd2k} \end{equation} \end{proposition} Notice that the bimatrix series $\left \{ D_{1}(k),D_{2}(k)\right \} $ in (\ref{eqd1kd2k}) can also be defined in a recursive way: \begin{equation} \left \{ \begin{array} [c]{rl} \left \{ D_{1}(k+1),D_{2}(k+1)\right \} & =\left \{ A_{1},A_{2}\right \} \left \{ D_{1}(k),D_{2}(k)\right \} +\left \{ C_{1},C_{2}\right \} \left \{ F_{1},F_{2}\right \} ^{k}\\ & =\left \{ A_{1},A_{2}\right \} ^{k}\left \{ C_{1},C_{2}\right \} +\left \{ D_{1}(k),D_{2}(k)\right \} \left \{ F_{1},F_{2}\right \} ,\;k\geq1,\\ \left \{ D_{1}(1),D_{2}(1)\right \} & =\left \{ C_{1},C_{2}\right \} . \end{array} \right. \label{eq43} \end{equation} \begin{corollary} \label{ppsyl}Assume that \begin{equation} \mu \left \{ A_{1},A_{2}\right \} +\mu \left \{ F_{1},F_{2}\right \} <0. \label{eqmu1mu2} \end{equation} Then the Sylvester bimatrix equation (\ref{bimatrixsyl}) has a unique solution given by \begin{equation} \left \{ X_{1},X_{2}\right \} =\int_{0}^{\infty}\mathrm{e}^{t\left \{ A_{1},A_{2}\right \} }\left \{ C_{1},C_{2}\right \} \mathrm{e}^{t\left \{ F_{1},F_{2}\right \} }\mathrm{d}t. \label{eq44} \end{equation} Particularly, if the complex-valued linear system (\ref{sys}) with $t\in \mathbf{R}^{+}$ is asymptotically stable, then the solution to the Lyapunov bimatrix equation (\ref{bilya}) is given by \begin{equation} \left \{ P_{1},P_{2}\right \} =\int_{0}^{\infty}\mathrm{e}^{t\left \{ A_{1},A_{2}\right \} ^{\mathrm{H}}}\left \{ Q_{1},Q_{2}\right \} \mathrm{e}^{t\left \{ A_{1},A_{2}\right \} }\mathrm{d}t. \label{slya} \end{equation} \end{corollary} We now take a look at the following well-known Sylvester matrix equation \begin{equation} A_{1}X-XF_{1}=C_{1},\label{syleq} \end{equation} which was first studied by J. J. Sylvester \cite{sylvester84} and later by several authors (for example, \cite{Hartwig72siam,jameson68siam}). The following corollary reveals the relationship between the Sylvester bimatrix equation (\ref{bimatrixsyl}) and the Sylvester matrix equation (\ref{syleq}). \begin{corollary} \label{normalsyl}The Sylvester matrix equation (\ref{syleq}) is solvable if and only if the Sylvester bimatrix equation (\ref{bimatrixsyl}) under Assumption \ref{ass1} is solvable. Particularly, \begin{enumerate} \item If $\left \{ X_{1},X_{2}\right \} $ is a solution to (\ref{bimatrixsyl}), then $X=X_{1}$ is a solution to (\ref{syleq}), and if $X$ is a solution to (\ref{syleq}), then $\{X_{1},X_{2}\}=\left \{ X,0_{n\times p}\right \} $ is a solution to (\ref{bimatrixsyl}). \item If (\ref{bimatrixsyl}) has a unique solution $\left \{ X_{1},X_{2}\right \} ,$ then $X_{2}=0_{n\times p}$ and (\ref{syleq}) also has a unique solution $X=X_{1}.$ If (\ref{syleq}) has a unique solution $X,$ then (\ref{bimatrixsyl}) has a unique solution (given by $\left \{ X,0_{n\times p}\right \} $) if one of the following two conditions is satisfied \begin{align} s & \in \lambda \left( A_{1}\right) \Longrightarrow s^{\#}\in \lambda \left( A_{1}\right) ,\label{eq27a}\\ s & \in \lambda \left( F_{1}\right) \Longrightarrow s^{\#}\in \lambda \left( F_{1}\right) . \label{eq27b} \end{align} \end{enumerate} \end{corollary} Notice that equation (\ref{eq27a}) (equation (\ref{eq27b})) is satisfied if $A_{1}$ ($F_{1}$) is a real matrix. We can check that, under Assumption \ref{ass1}, the unique solution (\ref{equniquesylsolu}) to (\ref{bimatrixsyl}) coincides exactly with the unique solution to (\ref{syleq}) obtained in \cite{jameson68siam}. We omit the details to save space.
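As a quick numerical companion to (\ref{syleq}) (our own illustrative sketch, not taken from the cited works), a standard library routine applies once the equation is rewritten as $A_{1}X+X(-F_{1})=C_{1}$; the matrices below are arbitrary choices with disjoint spectra, so the solution is unique.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

# A1 X - X F1 = C1  is  A1 X + X B = C1  with  B = -F1
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]])   # spectrum {-1, -2}
F1 = np.array([[1.0, 0.0], [0.0, 2.0]])     # spectrum {1, 2}, disjoint from that of A1
C1 = np.eye(2)
X = solve_sylvester(A1, -F1, C1)
print(np.allclose(A1 @ X - X @ F1, C1))     # True: X is the unique solution
\end{verbatim}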
\begin{remark} \label{rm2}Under Assumption \ref{ass1}, it follows from $\lambda \{A_{1},0_{n\times n}\}=\lambda(A_{1})\cup \lambda(A_{1}^{\#})$ that (\ref{eqmu1mu2}) is equivalent to \begin{equation} \mu \left( A_{1}\right) +\mu \left( F_{1}\right) <0. \label{eq64} \end{equation} On the other hand, we have $\mathrm{e}^{t\left \{ A_{1},A_{2}\right \} }=\{ \mathrm{e}^{A_{1}t},0_{n\times n}\}$ and $\mathrm{e}^{t\left \{ F_{1} ,F_{2}\right \} }=\{ \mathrm{e}^{F_{1}t},0_{p\times p}\}.$ Then applying Corollary \ref{ppsyl} on (\ref{syleq}) gives the well-known result \cite{Lancaster70siam}: if (\ref{eq64}) is satisfied, the Sylvester matrix equation (\ref{syleq}) has a unique solution given by $X=\int_{0}^{\infty }\mathrm{e}^{tA_{1}}C_{1}\mathrm{e}^{tF_{1}}\mathrm{d}t$. \end{remark} We next discuss the so-called conjugate-Sylvester matrix equation \begin{equation} C_{2}=A_{2}^{\#}X-X^{\#}F_{2}, \label{sylconj} \end{equation} which was firstly investigated in \cite{bhh87ctmt,bhh88siam} and then in \cite{wz17book}. \begin{corollary} \label{conjsyl}The conjugate-Sylvester matrix equation (\ref{sylconj}) is solvable if and only if the Sylvester bimatrix equation (\ref{bimatrixsyl}) under Assumption \ref{ass2}$\ $is solvable. Particularly, \begin{enumerate} \item If $\left \{ X_{1},X_{2}\right \} $ is a solution to (\ref{bimatrixsyl} ), then $X=X_{1}$ is a solution to (\ref{sylconj}), and if $X$ is a solution to (\ref{sylconj}), then $\left \{ X_{1},X_{2}\right \} =\left \{ X,0_{n\times p}\right \} $ is a solution to (\ref{bimatrixsyl}). \item (\ref{bimatrixsyl}) has a unique solution (denoted by $\left \{ X_{1},X_{2}\right \} $) if and only if (\ref{sylconj}) has a unique solution (denoted by $X$). Moreover, $X_{2}=0_{n\times p}$ and $X=X_{1}$ with \begin{equation} X_{1}=\gamma^{-1}\left( A_{2}^{\#}A_{2}\right) \left( \sum \limits_{k=1} ^{p}\gamma_{k}\sum \limits_{i=0}^{k-1}\left( A_{2}^{\#}A_{2}\right) ^{i}\left( C_{2}^{\#}F_{2}+A_{2}^{\#}C_{2}\right) \left( F_{2}^{\#} F_{2}\right) ^{k-1-i}\right) , \label{eqx2} \end{equation} where $\gamma \left( s\right) =s^{p}+\gamma_{p-1}s^{p-1}+\cdots+\gamma _{1}s+\gamma_{0}$ is a polynomial with real coefficients, defined by \begin{equation} \gamma \left( s\right) =\left \vert sI_{p}-F_{2}^{\#}F_{2}\right \vert . \label{eqg} \end{equation} \end{enumerate} \end{corollary} We notice that (\ref{eqx2}) coincides with the solution obtained in \cite{wz17book}. Moreover, Item 2 of Corollary \ref{conjsyl} seems better than that of Corollary \ref{normalsyl} since it provides an \textquotedblleft if and only if\textquotedblright \ condition. \begin{remark} \label{rm3}An explanation of the solution (\ref{eqx2}) is given as follows. We have \begin{align*} \left \{ D_{1}(2),D_{2}(2)\right \} & =\left \{ A_{2}^{\#}A_{2},0_{n\times n}\right \} \left \{ X_{1},X_{2}\right \} -\left \{ X_{1},X_{2}\right \} \left \{ F_{2}^{\#}F_{2},0_{p\times p}\right \} \\ & =\left \{ A_{2}^{\#}A_{2}X_{1}-X_{1}F_{2}^{\#}F_{2},A_{2}A_{2}^{\#} X_{2}-X_{2}F_{2}^{\#}F_{2}\right \} \\ & =\left \{ C_{2}^{\#}F_{2}+A_{2}^{\#}C_{2},0_{n\times p}\right \}. \end{align*} Therefore, if (\ref{bimatrixsyl}) has a unique solution, then $X_{2}=0_{n\times p}$ and $X_{1}$ satisfies \begin{equation} A_{2}^{\#}A_{2}X_{1}-X_{1}F_{2}^{\#}F_{2}=C_{2}^{\#}F_{2}+A_{2}^{\#}C_{2}. \label{eq51} \end{equation} The closed-form solution to (\ref{eq51}) is just (\ref{eqx2}) by using the result in \cite{jameson68siam}. 
\end{remark} \subsection{The Stein Bimatrix Equation} We now study the Stein bimatrix equation (\ref{bistein}), which is equivalent to the coupled matrix equations \begin{equation} \left \{ \begin{array} [c]{cc} X_{1} & =A_{1}X_{1}F_{1}+A_{2}^{\#}X_{2}F_{1}+A_{1}X_{2}^{\#}F_{2}+A_{2} ^{\#}X_{1}^{\#}F_{2}+C_{1},\\ X_{2} & =A_{1}^{\#}X_{1}^{\#}F_{2}+A_{2}X_{2}^{\#}F_{2}+A_{1}^{\#}X_{2} F_{1}+A_{2}X_{1}F_{1}+C_{2}. \end{array} \right. \label{steincoupled} \end{equation} \begin{proposition} \label{ppstein}The Stein bimatrix equation (\ref{bistein}) has a unique solution if and only if \begin{equation} \lambda_{i}\left \{ A_{1},A_{2}\right \} \lambda_{j}\left \{ F_{1} ,F_{2}\right \} \neq1,\forall i,j. \label{eq20} \end{equation} In this case, the unique solution is given by \begin{equation} \left \{ X_{1},X_{2}\right \} =\left( \sum \limits_{k=0}^{2p}\beta_{k}\left \{ A_{1},A_{2}\right \} ^{2p-i}\right) ^{-1}\sum \limits_{k=1}^{2p}\beta _{k}\left \{ A_{1},A_{2}\right \} ^{2p-k}\left \{ D_{1}(k),D_{2}(k)\right \} , \label{eq21} \end{equation} where $\beta \left( s\right) =\beta^{2p}+\beta_{2p-1}s^{2p-1}+\cdots +\beta_{1}s+\beta_{0}$ is the characteristic polynomial of $\left \{ F_{1},F_{2}\right \} ,$ and, for $k\geq1,$ \begin{equation} \left \{ D_{1}(k),D_{2}(k)\right \} =\sum \limits_{i=0}^{k-1}\left \{ A_{1},A_{2}\right \} ^{i}\left \{ C_{1},C_{2}\right \} \left \{ F_{1} ,F_{2}\right \} ^{i}. \label{eq22} \end{equation} \end{proposition} Notice that, similar to (\ref{eq43}), the bimatrix $\left \{ D_{1} (k),D_{2}(k)\right \} $ in (\ref{eq22}) can also be defined in a recursive way: \begin{equation} \left \{ \begin{array} [c]{rl} \left \{ D_{1}(k+1),D_{2}(k+1)\right \} & =\left \{ A_{1},A_{2}\right \} \left \{ D_{1}(k),D_{2}(k)\right \} \left \{ F_{1},F_{2}\right \} ,\;k\geq1,\\ \left \{ D_{1}(1),D_{2}(1)\right \} & =\left \{ C_{1},C_{2}\right \} . \end{array} \right. \label{eqdd} \end{equation} We can also state the following corollary that parallels to corollary \ref{ppsyl}. \begin{corollary} \label{ppstein2}Assume that \begin{equation} \rho \left \{ A_{1},A_{2}\right \} \rho \left \{ F_{1},F_{2}\right \} <1, \label{eqrho} \end{equation} and $\left \{ D_{1}(k),D_{2}(k)\right \} $ is defined by (\ref{eq22}). Then the Stein bimatrix equation (\ref{bistein}) has a unique solution given by \begin{equation} \left \{ X_{1},X_{2}\right \} =\sum \limits_{k=0}^{\infty}\left \{ A_{1} ,A_{2}\right \} ^{k}\left \{ C_{1},C_{2}\right \} \left \{ F_{1} ,F_{2}\right \} ^{k}. \label{eq29} \end{equation} Particularly, if the complex-valued linear system (\ref{sys}) with $t\in \mathbf{Z}^{+}$ is asymptotically stable, then the solution to the discrete-time Lyapunov bimatrix equation (\ref{bidilya}) is given by \begin{equation} \left \{ P_{1},P_{2}\right \} =\sum \limits_{k=0}^{\infty}\left( \left \{ A_{1},A_{2}\right \} ^{\mathrm{H}}\right) ^{k}\left \{ Q_{1},Q_{2}\right \} \left \{ A_{1},A_{2}\right \} ^{k}. \label{dlyasolu} \end{equation} \end{corollary} We now check the following well-known Stein matrix equation \begin{equation} X=A_{1}XF_{1}+C_{1}, \label{stein} \end{equation} which has been widely studied in the literature (see, for example, \cite{jw03laa} and \cite{zld11laa}). The relationship between (\ref{bistein}) and (\ref{stein}) can be made clear as follows. \begin{corollary} \label{normalstein}The Stein matrix equation (\ref{stein}) is solvable if and only if the Stein bimatrix equation (\ref{bistein}) under Assumption \ref{ass1} is solvable. 
Particularly, \begin{enumerate} \item If $\left \{ X_{1},X_{2}\right \} $ is a solution to (\ref{bistein}), then $X=X_{1}$ is a solution to (\ref{stein}), and if $X$ is a solution to (\ref{stein}), then $\{X_{1},X_{2}\}=\left \{ X,0_{n\times p}\right \} $ is a solution to (\ref{bistein}). \item If (\ref{bistein}) has a unique solution $\left \{ X_{1},X_{2}\right \} ,$ then $X_{2}=0_{n\times p}$ and (\ref{stein}) also has a unique solution $X=X_{1}.$ If (\ref{stein}) has a unique solution $X,$ then (\ref{bistein}) has a unique solution (given by $\left \{ X,0_{n\times p}\right \} $) if one of the two conditions (\ref{eq27a})-(\ref{eq27b}) is satisfied. \end{enumerate} \end{corollary} Under Assumption \ref{ass1}, one may check that the unique solution (\ref{eq21}) to (\ref{bistein}) coincides with the unique solution to (\ref{stein}) obtained in \cite{jw03laa}. The details are omitted. \begin{remark} Similar to Remark \ref{rm2}, under Assumption \ref{ass1}, we can show that (\ref{eqrho}) is equivalent to \begin{equation} \rho \left( A_{1}\right) \rho \left( F_{1}\right) <1. \label{eqrho1} \end{equation} On the other hand, we have from (\ref{eqdd}) that $D_{2}(k)=0_{n\times p}$ and $D_{1}(k+1)=A_{1}D_{1}\left( k\right) F_{1}$ with $D_{1}\left( 1\right) =C_{1}.$ Then applying Corollary \ref{ppstein2} on (\ref{stein}) gives the well-known result (see, for example, \cite{zld11laa}): if (\ref{eqrho1}) is satisfied, the Stein matrix equation (\ref{stein}) has a unique solution given by $X= \sum \nolimits_{k=0}^{\infty} A_{1}^{k}C_{1}F_{1}^{k}.$ \end{remark} We next discuss the so-called conjugate-Stein matrix equation \begin{equation} X=A_{2}X^{\#}F_{2}+C_{2}, \label{conj_stein} \end{equation} which was firstly investigated in \cite{jw03laa} and was also studied in several other papers, for example, \cite{wz17book} and \cite{zld11laa}, where it was equivalently transformed into a normal Stein matrix equation in the form of (\ref{stein}). The following corollary reveals the relationship between (\ref{conj_stein}) and (\ref{bistein}). \begin{corollary} \label{conjstein}The conjugate-Stein matrix equation (\ref{conj_stein}) is solvable if and only if the Stein bimatrix equation (\ref{bistein}) under Assumption \ref{ass2} is solvable. Particularly, \begin{enumerate} \item If $\left \{ X_{1},X_{2}\right \} $ is a solution to (\ref{bistein}), then $X=X_{2}$ is a solution to (\ref{conj_stein}), and if $X$ is a solution to (\ref{conj_stein}), then $\left \{ X_{1},X_{2}\right \} =\left \{ 0_{n\times p},X\right \} $ is a solution to (\ref{bistein}). \item (\ref{bistein}) has a unique solution (denoted by $\left \{ X_{1} ,X_{2}\right \} $) if and only if (\ref{conj_stein}) has a unique solution (denoted by $X$). Moreover, $X_{1}=0_{n\times p}\ $and $X=X_{2}$ with \begin{equation} X_{2}=\left( \sum \limits_{k=0}^{p}\gamma_{k}(A_{2}A_{2}^{\#})^{p-k}\right) ^{-1}\left( \sum \limits_{k=1}^{p}\gamma_{k}(A_{2}A_{2}^{\#})^{p-1}\left( C_{2}+A_{2}C_{2}^{\#}F_{2}\right) \left( F_{2}^{\#}F_{2}\right) ^{k-1}\right) . \label{eqx22} \end{equation} \end{enumerate} \end{corollary} Similar to the situation in Subsection \ref{sec5.1}, Item 2 of Corollary \ref{conjstein} is better than that of Corollary \ref{normalstein} since it reveals an \textquotedblleft if and only if\textquotedblright \ relation. Notice that (\ref{eqx22}) coincides with the solution obtained in \cite{jw03laa}. \begin{remark} \label{rm5}Parallel to Remark \ref{rm3}, we give an explanation on (\ref{eqx22}). 
We have \begin{align*} \left \{ X_{1},X_{2}\right \} & =\left \{ A_{1},A_{2}\right \} ^{2}\left \{ X_{1},X_{2}\right \} \left \{ F_{1},F_{2}\right \} ^{2}+\left \{ D_{1}(2),D_{2}(2)\right \} \\ & =\left \{ A_{2}^{\#}A_{2},0_{n\times n}\right \} \left \{ X_{1},X_{2}\right \} \left \{ F_{2}^{\#}F_{2},0_{p\times p}\right \} +\left \{ 0_{n\times p},C_{2}+A_{2}C_{2}^{\#}F_{2}\right \} \\ & =\left \{ A_{2}^{\#}A_{2}X_{1}F_{2}^{\#}F_{2},A_{2}A_{2}^{\#}X_{2}F_{2}^{\#}F_{2}+C_{2}+A_{2}C_{2}^{\#}F_{2}\right \} . \end{align*} Therefore, if (\ref{bistein}) has a unique solution, we have $X_{1}=0_{n\times p}$ and $X_{2}$ satisfying \begin{equation} X_{2}=A_{2}A_{2}^{\#}X_{2}F_{2}^{\#}F_{2}+C_{2}+A_{2}C_{2}^{\#}F_{2}, \label{eq54} \end{equation} whose unique solution is exactly (\ref{eqx22}) by using the result in \cite{jw03laa}. Moreover, (\ref{eqrho}) is equivalent to \begin{equation} \rho \left( A_{2}A_{2}^{\#}\right) \rho \left( F_{2}^{\#}F_{2}\right) <1. \label{eq50} \end{equation} Hence, under this condition, the unique solution to (\ref{eq54}) can also be expressed by (see (72) in \cite{zld11laa}): \[ X_{2}=\sum \limits_{k=0}^{\infty}\left( A_{2}A_{2}^{\#}\right) ^{k}\left( C_{2}+A_{2}C_{2}^{\#}F_{2}\right) \left( F_{2}^{\#}F_{2}\right) ^{k}. \] \end{remark} \section{\label{sec6}Example: Design of the Spacecraft Rendezvous System} We use the spacecraft rendezvous system model to illustrate the effectiveness of the proposed methods. The linearized equation of the spacecraft rendezvous control system is known as the C-W equation \cite{cw60jas} \begin{equation} \left \{ \begin{array} [c]{l} \ddot{\xi}_{1}=2\omega \dot{\xi}_{2}+3\omega^{2}\xi_{1}+a_{1},\\ \ddot{\xi}_{2}=-2\omega \dot{\xi}_{1}+a_{2},\\ \ddot{\xi}_{3}=-\omega^{2}\xi_{3}+a_{3}, \end{array} \right. \label{cw} \end{equation} where $\xi_{i},i=1,2,3,$ are relative positions of the chase spacecraft with respect to the target, $a_{1},a_{2}$ and $a_{3}$ are the control accelerations that the thrusts generate in the three directions, and $\omega$ is the orbit rate of the target orbit, which is a known constant. For more information about this model, see \cite{cw60jas} and the references therein. Notice that (\ref{cw}) is a fully actuated system. To demonstrate the design in a general case, we assume that $a_{1}=0$, since, otherwise, the design is trivial as $a_{i}$ can be easily designed to cancel all of the open-loop dynamics. Notice that in this case the system is still controllable. System (\ref{cw}) is exactly in the form of (\ref{second}) with $\xi=[\xi_{1},\xi_{2},\xi_{3}]^{\mathrm{T}},$ $v=[a_{2},a_{3}]^{\mathrm{T}},n=3$, $q=2$ and $m=1$. Then, according to the development in Subsection \ref{sec3.3}, this system can be written as the complex-valued linear system (\ref{sys}), where $A_{i},B_{i},i=1,2,$ are computed according to (\ref{a12b12}) as \begin{align*} A_{1} & =\left[ \begin{array} [c]{ccc} \frac{3\omega^{2}\mathrm{j}}{2}-\frac{\mathrm{j}}{2} & \omega & 0\\ -\omega & -\frac{\mathrm{j}}{2} & 0\\ 0 & 0 & -\frac{\omega^{2}\mathrm{j}}{2}-\frac{\mathrm{j}}{2} \end{array} \right] ,\;B_{1}=\left[ \begin{array} [c]{c} 0\\ \frac{\mathrm{j}}{2}\\ \frac{1}{2} \end{array} \right] ,\\ A_{2} & =\left[ \begin{array} [c]{ccc} -\frac{3\omega^{2}\mathrm{j}}{2}-\frac{\mathrm{j}}{2} & -\omega & 0\\ \omega & -\frac{\mathrm{j}}{2} & 0\\ 0 & 0 & \frac{\omega^{2}\mathrm{j}}{2}-\frac{\mathrm{j}}{2} \end{array} \right] ,\;B_{2}=\left[ \begin{array} [c]{c} 0\\ -\frac{\mathrm{j}}{2}\\ -\frac{1}{2} \end{array} \right] .
\end{align*} The eigenvalue set of the open-loop system is known as $\{0,0,\pm \omega \mathrm{j},\pm \omega \mathrm{j}\}$ \cite{cw60jas}. We are going to design a full state feedback (\ref{eqfeedback}) such that the closed-loop system possesses an eigenvalue set that is a shift of the open-loop system along the real axis, say, $\mathit{\Gamma}=\{-\gamma,-\gamma,-\gamma \pm \omega \mathrm{j},-\gamma \pm \omega \mathrm{j}\},$ where $\gamma>0$ is a real constant. Thus we can choose \[ F=\mathrm{diag}\left \{ \left[ \begin{array} [c]{cc} 0 & 1\\ 0 & 0 \end{array} \right] ,\left[ \begin{array} [c]{cc} 0 & \omega \\ -\omega & 0 \end{array} \right] ,\left[ \begin{array} [c]{cc} 0 & \omega \\ -\omega & 0 \end{array} \right] \right \} -\gamma I_{6}, \] and the unique bimatrix $\{F_{1},F_{2}\}$ satisfying (\ref{eqff}) can be obtained as \[ F_{1}=\left[ \begin{array} [c]{ccc} -\gamma & \frac{1}{2} & -\frac{\omega \mathrm{j}}{2}\\ 0 & -\gamma & \frac{\omega}{2}\\ -\frac{\omega \mathrm{j}}{2} & -\frac{\omega}{2} & -\gamma \end{array} \right] ,\;F_{2}=\left[ \begin{array} [c]{ccc} 0 & \frac{1}{2} & \frac{\omega \mathrm{j}}{2}\\ 0 & 0 & -\frac{\omega}{2}\\ -\frac{\omega \mathrm{j}}{2} & \frac{\omega}{2} & 0 \end{array} \right] . \] The right-coprime bimatrix polynomials $\{N_{1}(s),N_{2}(s)\}$ and $\{D_{1}(s),D_{2}(s)\}$ satisfying (\ref{coprime}) can be computed as \[ \left \{ \begin{array} [c]{rl} N_{1}\left( s\right) & =\left[ \begin{array} [c]{c} \omega s\left( 1+\mathrm{j}s\right) \\ -\frac{\left( 1+\mathrm{j}s\right) \left( 3\omega^{2}-s^{2}\right) }{2}\\ \frac{s}{2}-\frac{\mathrm{j}}{2} \end{array} \right] \triangleq \sum \limits_{i=0}^{3}N_{1i}s^{i},\\ N_{2}\left( s\right) & =\left[ \begin{array} [c]{c} -\omega s\left( -1+\mathrm{j}s\right) \\ \frac{\mathrm{j}\left( 3\omega^{2}-s^{2}\right) \left( s+\mathrm{j}\right) }{2}\\ -\frac{s}{2}-\frac{\mathrm{j}}{2} \end{array} \right] \triangleq \sum \limits_{i=0}^{3}N_{2i}s^{i}, \end{array} \right. \] and \[ \left \{ \begin{array} [c]{rl} D_{1}\left( s\right) & =\frac{1}{2}\left( s^{2}+1\right) \left( \omega^{2}+s^{2}\right) \triangleq \sum \limits_{i=0}^{4}D_{1i}s^{i},\\ D_{2}\left( s\right) & =\frac{1}{2}\left( s^{2}-1\right) \left( \omega^{2}+s^{2}\right) \triangleq \sum \limits_{i=0}^{4}D_{2i}s^{i}. \end{array} \right. \] Then, according to Theorem \ref{th1}, complete solutions to the associated generalized Sylvester bimatrix equation (\ref{syl}) are given by \[ \left \{ \begin{array} [c]{rl} \left \{ X_{1},X_{2}\right \} & =\sum \limits_{i=0}^{3}\left \{ N_{1i} ,N_{2i}\right \} \left \{ Z_{1},Z_{2}\right \} \left \{ F_{1},F_{2}\right \} ^{i},\\ \left \{ Y_{1},Y_{2}\right \} & =\sum \limits_{i=0}^{4}\left \{ D_{1i} ,D_{2i}\right \} \left \{ Z_{1},Z_{2}\right \} \left \{ F_{1},F_{2}\right \} ^{i}, \end{array} \right. \] where $Z_{i}\in \mathbf{C}^{1\times3},i=1,2,$ are arbitrary matrices. 
Particularly, if we choose \[ Z_{1}=\left[ \begin{array} [c]{ccc} 1+\mathrm{j} & 0 & 0 \end{array} \right] ,\;Z_{2}=\left[ \begin{array} [c]{ccc} 0 & 0 & 1 \end{array} \right] , \] the state feedback gain bimatrix $\{K_{1},K_{2}\}$ can be obtained according to (\ref{eqsylk}) as (we omit displaying the variables $\{X_{1},X_{2}\}$ and $\{Y_{1},Y_{2}\}$) \[ \left \{ \begin{array} [c]{rl} K_{1} & =\left[ \begin{array} [c]{ccc} \frac{k_{11}}{12\omega^{3}} & \frac{k_{12}}{6\omega^{2}} & -\frac{\gamma \left( 2+\gamma \mathrm{j}\right) }{2} \end{array} \right] ,\\ K_{2} & =\left[ \begin{array} [c]{ccc} -\frac{k_{21}}{12\omega^{3}} & \frac{k_{22}}{6\omega^{2}} & \frac{\gamma \left( 2+\gamma \mathrm{j}\right) }{2} \end{array} \right] , \end{array} \right. \] where \[ \left \{ \begin{array} [c]{l} k_{11}=\gamma^{4}\mathrm{j}-12\gamma^{3}\omega^{2}+19\gamma^{2}\omega^{2}\mathrm{j}+\gamma^{2}-42\gamma \omega^{4}+6\gamma \omega^{2}\mathrm{j}+4\omega^{2},\\ k_{21}=-\gamma^{4}\mathrm{j}+12\gamma^{3}\omega^{2}-19\gamma^{2}\omega^{2}\mathrm{j}+\gamma^{2}+42\gamma \omega^{4}+6\gamma \omega^{2}\mathrm{j}+4\omega^{2},\\ k_{12}=\gamma^{4}+\gamma^{2}\omega^{2}-\gamma^{2}\mathrm{j}+12\gamma \omega^{2}\mathrm{j}-\omega^{2}\mathrm{j},\\ k_{22}=\gamma^{4}+\gamma^{2}\omega^{2}+\gamma^{2}\mathrm{j}+12\gamma \omega^{2}\mathrm{j}+\omega^{2}\mathrm{j}. \end{array} \right. \] Finally, the resulting controller can be implemented according to (\ref{eqv}), which is physically realizable. \section{\label{sec7}Conclusion} This paper has studied several kinds of linear bimatrix equations, whose coefficients are bimatrices that were introduced in our earlier studies. These equations arise from full state feedback pole assignment and stability analysis of complex-valued linear systems. Explicit solutions to these linear bimatrix equations are established. In particular, explicit solutions are provided for the case where the coefficients of the bimatrix equations are determined by the so-called antilinear systems. The explicit solutions are then used to solve the pole assignment problem for complex-valued linear systems, in particular for second-order linear systems, which can be easily converted into complex-valued linear systems. The spacecraft rendezvous control system is used to demonstrate the obtained theoretical results. The results in this paper can be readily extended to linear bimatrix equations associated with complex-valued descriptor linear systems and high-order complex-valued linear systems. \end{document}
\begin{document} \title{It\^o\rq{}s formula for finite variation L\'evy processes} \begin{abstract} Extending It\^o's formula to non-smooth functions is important both in theory and in applications. One of the fairly general extensions of the formula, known as Meyer-It\^o, applies to one dimensional semimartingales and convex functions. There are also satisfactory generalizations of It\^o's formula for diffusion processes where the Meyer-It\^o assumptions are weakened even further. We study a version of It\^o\rq{}s formula for multi-dimensional finite variation L\'evy processes assuming that the underlying function is continuous and admits weak derivatives. We also discuss some applications of this extension, particularly in finance. \end{abstract} \textbf{Keywords: It\^o\rq{}s formula, Finite variation L\'evy process, Weak derivative, PIDE} \thispagestyle{empty} \pagenumbering{arabic} \setcounter{equation}{0} \setlength{\baselineskip}{1.3\baselineskip} \section{Introduction} In order to motivate our study, we consider the following Partial Integro-Differential Equation (PIDE): \begin{align}\label{eq:pide} \frac{\partial P}{\partial t}(t,x)&+rx\frac{\partial P}{\partial x}(t,x)+\frac{\sigma^2x^2}{2}\frac{\partial^2 P}{\partial x^2}(t,x)-rP(t,x)\nonumber\\ &+\int v(dy)\left(P(t,xe^y)-P(t,x)-x(e^y-1)\frac{\partial P}{\partial x}(t,x)\right)=0,\nonumber\\ & \hspace{-1.25cm}P(T,x)=(x-K)^+ \text{, for all $x\in(0,D)$,}\nonumber\\ &\hspace{-1.25cm} P(t,x)=0 \text{, for all $x\geq D$, and all $t\in[0,T],$} \end{align} where $D>K>0$, $r>0$, and $T>0$ are constants, and $v$ is the L\'evy measure of a L\'evy process $X$ with characteristic triplet $(\sigma^2,v,\gamma)$. Furthermore, it is assumed that $\left(e^{X_t}\right)_{0\leq t\leq T}$ is a martingale with respect to the natural filtration generated by $X$ and a risk-neutral probability measure. Finding the solution of this PIDE (or similar ones) is of particular interest in different applied fields. For instance, under some circumstances the solution of PIDE \eqref{eq:pide} can be identified as the price of a financial security. As will be seen below, It\^o's formula is a key element in this procedure. More precisely, assume that the risk-neutral evolution of an asset is modeled by $S_t=S_0e^{rt+X_t}$, where $r$ and $X$ are the same as above, such that $\left(e^{-rt}S_t\right)_{0\leq t\leq T}$ is a martingale under the risk-neutral probability measure. Suppose that we are interested in pricing a barrier option with maturity $T$, strike price $K$, barrier $D>K$, and the payoff $\max(S_T-K,0)1_{\{\max_{0\leq t\leq T}S_t<D\}}$. If $\sigma>0$, then using It\^o\rq{}s formula one can show that there is a $C^{1,2}$ solution of PIDE \eqref{eq:pide} which is in fact the price of this barrier option, given by \begin{equation}\label{eq:purposed} P(t,x)=e^{-r(T-t)}\mathbb E[H(S_{T\wedge\tau_D})|S_t=x], \end{equation} where $\mathbb{E}$ is the expectation under the risk-neutral measure, $H(x):=(x-K)^+1_{\{x<D\}}$, and $\tau_D:=\inf\{s\geq t;\, S_s\geq D\}$; see Proposition 12.2 of \cite{cont}. Equation \eqref{eq:purposed} is in fact the Feynman-Kac representation of the solution of PIDE \eqref{eq:pide}, which can be calculated numerically through simulation techniques. Note that the condition $\sigma>0$ is crucial for this argument to work, since it guarantees that the proposed solution \eqref{eq:purposed} is smooth and hence It\^o's formula is applicable.
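To make the Feynman-Kac representation \eqref{eq:purposed} concrete, the following small simulation sketch (our own illustration, not part of the cited results) estimates the up-and-out call price by Monte Carlo. Purely for illustration, $X$ is taken to be a Merton jump-diffusion with $\sigma>0$, and all numerical values are arbitrary; since the barrier is monitored only on a discrete time grid, the estimate is a biased approximation of the continuously monitored price.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
S0, K, D, r, T = 100.0, 100.0, 130.0, 0.03, 1.0      # illustrative values
sigma, lam, muJ, sigJ = 0.2, 0.5, -0.1, 0.15         # Merton jump-diffusion parameters
n_steps, n_paths = 250, 100_000
dt = T / n_steps
kappa = np.exp(muJ + 0.5 * sigJ ** 2) - 1.0          # E[e^J] - 1
drift = -0.5 * sigma ** 2 - lam * kappa              # keeps e^{X_t} a martingale

X = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)                 # barrier D not yet hit
for k in range(1, n_steps + 1):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    nJ = rng.poisson(lam * dt, n_paths)              # number of jumps in this step
    J = rng.normal(muJ * nJ, sigJ * np.sqrt(nJ))     # summed jump sizes of the step
    X += drift * dt + sigma * dW + J
    S = S0 * np.exp(r * k * dt + X)
    alive &= S < D                                   # up-and-out: knocked out once S >= D

payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
print("Monte Carlo estimate of P(0, S0):", np.exp(-r * T) * payoff.mean())
\end{verbatim}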
However, in the case of pure jump L\'evy processes, i.e. when $\sigma=0$, the smoothness is not obvious and it can fail. The situation is more complicated for American options, where the smoothness of the proposed solution is not known even in the presence of a non-zero volatility; see Chapter 12 of \cite{cont} for more detail. For example, Theorem 7.2 of \cite{boy} shows that the smoothness of the proposed solution in the case of American options fails for tempered stable L\'evy processes with finite variation. One purpose of this work is to fix this kind of problem for models using finite variation L\'evy processes. For this class of processes, under some conditions, we obtain an It\^o formula that works well with non-smooth continuous functions. In particular, this can provide a solution to PIDE \eqref{eq:pide} when $\sigma=0$ and $X$ is a finite variation L\'evy process. This problem is investigated at the end of this paper. We continue with some literature review. A version of It\^o's formula is obtained in \cite{aebi}, where the underlying process is a continuous semimartingale with a special structure. In that paper, the first and second order derivatives of the function are defined in the sense of distributions and satisfy some local integrability conditions. \cite{follmer2} discuss an extension of the formula to a one-dimensional standard Brownian motion and an absolutely continuous function with a locally square integrable derivative. This result was further extended by \cite{follmer1} to a multi-dimensional Brownian motion. Following the idea of \cite{follmer2}, an extension of It\^o's formula is proved in \cite{bardina1} for a one-dimensional diffusion process whose law has a density satisfying certain integrability conditions. In their work, it is assumed that the underlying function $f=f(t,x)$ is absolutely continuous in $x$ with a locally square integrable derivative satisfying a mild form of continuity in time $t$. In all the above works, the sample paths of the underlying processes are continuous. Concerning discontinuous processes, Theorem 70, Chapter IV of \cite{protter} (known as Meyer-It\^o's formula) provides a fairly general extension of It\^o's formula to semimartingales and one dimensional convex functions. Compared to Theorem 70, Chapter IV of \cite{protter}, our extension applies to finite variation L\'evy processes and continuous functions that admit weak derivatives. Therefore it generalizes Meyer-It\^o's formula for finite variation L\'evy processes. In addition, the function is allowed to be multi-dimensional and time-dependent. Besides the motivation provided at the beginning and the theoretical interest in extending It\^o's formula for these processes, it is also argued in \cite{geman} that the evolution of asset prices is better modeled by finite variation processes with infinite activity\footnote{A L\'evy process $X$ in $\mathbb{R}^d$ is of infinite activity if there are infinitely many jumps on any finite time interval, i.e. $v(\mathbb{R}^d)=\infty$, where $v$ is the L\'evy measure of $X$.}. The structure of the paper is as follows. The theoretical background, in particular some fundamental results from real and functional analysis, is reviewed in Section \ref{sec:pd}. Section \ref{sec:dakt} concentrates on hypotheses and key tools. The main result is proved in Section \ref{sec:mr}. Finally, the paper ends with some examples and conclusions.
\section{Preliminaries and Definitions}\label{sec:pd} In this section, we recall a few results from real and functional analysis (basically distribution theory) that will be used later. We begin with some definitions. In what follows, $\mathbb{R}$ is the set of real numbers; $U\subset\mathbb{R}^d$ is a nonempty open set, $d\geq 1$; $|.|$ and $\norm{.}_d$ are respectively the one-dimensional and d-dimensional Euclidean norms; and $m$ is the Lebesgue measure. For simplicity, regardless of the dimension of the space, the Lebesgue measure is always denoted by $m$. \begin{Def} A point $x\in U\subset\mathbb{R}^d$ is a Lebesgue point of a function $f:U\longmapsto\mathbb{R}$ if $$\lim_{r\rightarrow0^+}\frac{1}{m(B_r(x))}\integral{B_r(x)}{}{|f(y)-f(x)|}dy=0,$$ where $B_r(x)=\set{y\in\mathbb{R}^d}{\norm{y-x}_d<r}$ and the limit is taken for those $r$ small enough to guarantee that $B_r(x)$ is a subset of $U$. \end{Def} \begin{Def} The set of all Lebesgue points of $f:U\longmapsto\mathbb{R}$ is denoted by $L_f$ and it is called the Lebesgue set. \end{Def} \begin{Def} A family $\{E_r\}_{r>0}$ of Borel subsets of $U$ is said to shrink nicely to $x\in U$ if the following two conditions hold \begin{itemize} \item $E_r\subset B_r(x)\subset U$ for each $r$, \item there is a constant $\alpha > 0$, independent of $r$, such that $m(E_r)> \alpha m(B_r(x))$. \end{itemize} \end{Def} \begin{theorem}\label{theorem:ldt} \textbf{The Lebesgue Differentiation Theorem.} Suppose that $f\in L_{loc}^1(U)$ and $supp(f)\subset U$. Then we have \begin{itemize} \item $m(U-L_f)=0$, \item for every $x$ in the Lebesgue set of $f$, in particular for almost every $x$ in $U$, we have $$\lim_{r\rightarrow0^+}\frac{1}{m(E_r)}\integral{E_r}{}{|f(y)-f(x)|}dy=0,$$ where $\{E_r\}_{r>0}$ is a family of Borel subsets of $U\subset\mathbb{R}^d$ that shrinks nicely to $x$. \end{itemize} \end{theorem} For a proof of this theorem in the case of $U=\mathbb{R}^d$, see Theorem 3.21 of \cite{folland}. The generalization to an open set $U\subset\mathbb{R}^d$ is straightforward. Note that, following the Lebesgue Differentiation Theorem, we have $\lim_{r\rightarrow0^+}\frac{1}{m(E_r)}\integral{E_r}{}{f(y)}dy=f(x),$ where $f$ and $E_r$ are the same as in the above theorem. Therefore, this can be thought of as a generalization of the fundamental theorem of calculus. In general, determining the Lebesgue points of a function is not an easy task. The next lemma gives a partial answer to this challenge; the proof is simple and hence omitted. \begin{lemma}\label{lemma:fcont} If $f\in L_{loc}^1(U)$, $U\subset\mathbb{R}^d$, and $f$ is continuous at $x\in U$, then $x\in L_f.$ \end{lemma} \begin{Def}\label{def:convolution} If $g:\mathbb{R}^d\longmapsto\mathbb{R}$ and $f:\mathbb{R}^d\longmapsto\mathbb{R}$ are measurable functions, then the convolution $g*f:\mathbb{R}^d\longmapsto\mathbb{R}$ is defined by $(g*f)(x):=\integral{\mathbb{R}^d}{}{g(x-y)f(y)}dy,$ provided that for every $x$ in $\mathbb{R}^d$, the integral is well defined. \end{Def} Some basic properties of convolution can be found in standard text books such as \cite{folland} or \cite{brezis}. The next lemma provides a simple sufficient condition for the existence of the convolution. \begin{lemma}\label{lemma:well defined} Let $f\in L_{loc}^p(U)$, $p\geq1$, and $supp(f)\subset U$. Suppose that $g:\mathbb{R}^d\longmapsto\mathbb{R}$ is bounded and compactly supported.
Then $g*f1_U$, where $f1_U(x)=\left\{ \begin{array}{ll} f(x), & \hbox{$x\in U;$} \\ 0, & \hbox{$x\notin U,$} \end{array} \right.$, is well defined on $\mathbb{R}^d$, i.e. the integral $\integral{U}{}{g(x-y)f(y)}dy$ is finite for all $x$ in $\mathbb{R}^d$. \end{lemma} Let $\eta$ be any function in $C_c^{\infty}(\mathbb{R}^d)$ satisfying the following conditions $$\eta\geq0,\qquad \integral{\mathbb{R}^d}{}{\eta(x)}dx=1,\qquad supp(\eta)=\overline{B_1(0)}.$$ For any $\epsilon>0$, define $\eta^{\epsilon}(x)=\frac{1}{\epsilon^d}\eta(\frac{x}{\epsilon})$; then clearly we have $$\eta^{\epsilon}\in C_c^{\infty}(\mathbb{R}^d),\qquad \integral{\mathbb{R}^d}{}{\eta^{\epsilon}(x)}dx=1,\qquad supp(\eta^{\epsilon})=\overline{B_{\epsilon}(0)}.$$ The next definition provides an example of such a function. \begin{Def} Let $$\eta(x)=\left\{ \begin{array}{ll} ce^{\frac{-1}{1-\norm{x}_d^2}}, & \hbox{$\norm{x}_d<1;$} \\ 0, & \hbox{$\norm{x}_d\geq1,$} \end{array} \right.$$ and take $c$ such that $\integral{\mathbb{R}^d}{}{\eta(x)}dx=1$. Then $\eta^\epsilon$ is called the standard mollifier. \end{Def} Our discussion does not depend on a specific choice of $\eta^\epsilon$. However, if necessary, the reader can always consider the standard mollifier. Suppose that $f\in L_{loc}^p(U)$, $p\geq1$, and for every $\epsilon>0$, let $f^\epsilon:\mathbb{R}^d\longmapsto\mathbb{R}$ be defined by $$f^\epsilon(x):=(\eta^\epsilon*f1_U)(x)=\integral{U}{}{\eta^\epsilon(x-y)f(y)}dy.$$ For a fixed $x$ and $\epsilon$ small enough (depending on $x$), $\overline{B_\epsilon(x)}\subset U$ and so $f^\epsilon(x)$ exists. However, if $supp(f)\subset U$, then, since $f\in L_{loc}^p(U)$, $p\geq1$, by Lemma \ref{lemma:well defined}, $f^\epsilon$ is well defined on $\mathbb{R}^d$ for all $\epsilon>0$. The following theorem is a well-known classical result in the theory of distributions. Parts (1) and (2) can be found in Section 4.4 of \cite{brezis}, and part (3) is a consequence of Theorem \ref{theorem:ldt}. \begin{theorem}\label{theorem:convsmooth} Assume that $f\in L_{loc}^p(U)$, $p\geq1$, $supp(f)\subset U$, and $\epsilon>0$. Then \begin{enumerate} \item $f^\epsilon\in C^{\infty}(\mathbb{R}^d)$ and $\partial^\alpha f^\epsilon=\partial^\alpha\eta^\epsilon*f1_U$, \item $f^\epsilon\longrightarrow f1_U$ in $L_{loc}^p(\mathbb{R}^d)$ as $\epsilon\rightarrow0^+$, \item $f^\epsilon\longrightarrow f1_U$ pointwise on $L_{f1_U}$ as $\epsilon\rightarrow0^+$, hence $f^\epsilon\longrightarrow f$ pointwise on $L_f$ as $\epsilon\rightarrow0^+$. \end{enumerate} \end{theorem} Note that part (3) of Theorem \ref{theorem:convsmooth} implies that $f^\epsilon\longrightarrow f$ Lebesgue almost everywhere on $U$. Let $\mathbb N_0$ be the set of non-negative integers and $\mathbb N_0^d=\set{(\alpha_1,\alpha_2,...,\alpha_d)}{\alpha_i\in\mathbb N_0, i=1,2,...,d}.$ An element of the set $\mathbb N_0^d$ is called a multi-index. In our extended version of It\^o's formula, instead of classical strong differentiability, we apply weak differentiability, which is defined below. \begin{Def}\label{def:weak1} Suppose that $\alpha\in\mathbb N_0^d$ is a multi-index.
We say that a function $f\in L_{loc}^1(U)$, $U\subset\mathbb{R}^d$, is weakly differentiable, with $\alpha$th weak derivative denoted by $\partial^\alpha f\in L_{loc}^1(U)$, if $$\integral{U}{}{(\partial^\alpha f(x))\phi(x)}dx=(-1)^{|\alpha|}\integral{U}{}{f(x)(\partial^\alpha\phi(x))}dx,\; \text{for all}\;\phi\in C_c^\infty(U),$$ where $|\alpha|=\sum_{i=1}^d\alpha_i$, and the functions $\phi\in C_c^\infty(U)$ are called test functions. \end{Def} By applying Theorem \ref{theorem:convsmooth} and simple properties of weak derivatives, we can get the following theorem. \begin{theorem}\label{theorem:convweak} Let $f\in L_{loc}^1(U)$ and $supp(f)\subset U$. Assume further that $f$ admits the weak derivative $\partial^\alpha f\in L_{loc}^1(U)$. Then: \begin{enumerate} \item $f^\epsilon\in C^\infty(\mathbb{R}^d)$, and $\partial^\alpha(f^\epsilon)=\eta^\epsilon*(\partial^\alpha f)$ on $U$, \item $\partial^\alpha(f^\epsilon)\longrightarrow\partial^\alpha f$ in $L_{loc}^1(U)$ as $\epsilon\rightarrow0^+$, \item $\partial^\alpha (f^\epsilon)\longrightarrow \partial^\alpha f$ pointwise on $L_{\partial^\alpha f}$ as $\epsilon\rightarrow 0^+$. \end{enumerate} \end{theorem} \begin{remark}\label{remark:part1} Note that part (1) of Theorems \ref{theorem:convsmooth} and \ref{theorem:convweak} still holds if we replace $\eta^\epsilon$ by a test function. \end{remark} Though it is very simple, the next lemma is a key point in our discussion. \begin{lemma}\label{lemma:key} Assume that $f\in L_{loc}^1(U)$ has the weak derivative $\partial^\alpha f\in L_{loc}^1(U).$ Suppose that $\phi\in C_c^\infty(\mathbb{R}^d)$ is a test function with support $K$ such that $\phi(x)\geq0$ for all $x\in\mathbb{R}^d$ and $\integral{\mathbb{R}^d}{}{\phi(x)}dx=1$. Then for every $x\in\mathbb{R}^d$ we have $$|\partial^\alpha(f*\phi)(x)|\leq\sup_{z\in U\cap\Lambda(x)}|\partial^\alpha f(z)|,$$ where $\Lambda(x)=\set{y\in\mathbb{R}^d}{x-y\in K}$. \end{lemma} \begin{proof} By using Remark \ref{remark:part1} we get $$\partial^\alpha(f*\phi)(x)=(\phi*1_U\partial^\alpha f)(x)=\integral{U}{}{\phi(x-y)\partial^\alpha f(y)}dy=\integral{U\cap\Lambda(x)}{}{\phi(x-y)\partial^\alpha f(y)}dy.$$ Using this equation and the following inequalities, we get the result \begin{align*} |\partial^\alpha (f*\phi)(x)| & \leq \sup_{z\in U\cap\Lambda(x)}|\partial^\alpha f(z)|\integral{U\cap\Lambda(x)}{}{\phi(x-y)} dy\\ &\leq \sup_{z\in U\cap\Lambda(x)}|\partial^\alpha f(z)|\integral{\mathbb{R}^d}{}{\phi(x)}dx=\sup_{z\in U\cap\Lambda(x)}|\partial^\alpha f(z)|. \end{align*} \end{proof} \begin{remark} Note that the value of the right-hand side of the inequality in Lemma \ref{lemma:key} can be infinite. \end{remark} \section{Discussion of Assumptions and Key Tools}\label{sec:dakt} When applying the classical It\^o formula to smooth functions $f:[0,\infty)\times U\longmapsto\mathbb{R}$, $U\subset\mathbb{R}^d$, differentiability at $t=0$ is understood in the sense of the right-hand derivative.
Note that since the Lebesgue measure of $\{0\}\times U$ is zero, the weak derivatives of $f$ can be defined similarly to Definition \ref{def:weak1}. Assume that $f:[0,\infty)\times U\longmapsto\mathbb{R}$ is a Lebesgue measurable function. In accordance with Definition \ref{def:weak1}, we say that $f\in L_{loc}^1([0,\infty)\times U)$ has weak derivatives $\partial^\alpha f\in L_{loc}^1([0,\infty)\times U)$ if \begin{equation}\label{eq:weak2} \integral{[0,\infty)\times U}{}{(\partial^\alpha f(x))\phi(x)}dx=(-1)^{|\alpha|}\integral{[0,\infty)\times U}{}{f(x)(\partial^\alpha\phi(x))}dx,\; \text{for all}\;\phi\in C_c^\infty([0,\infty)\times U). \end{equation} Note that since a test function $\phi$ is smooth, its derivatives at $t=0$ are understood as right-hand derivatives. The results of Section \ref{sec:pd} are stated for open subsets of $\mathbb{R}^d.$ However, $[0,\infty)\times U$ is not an open set. So, as a first step, we fix this problem by introducing an extended version of $f$. Suppose that the function $f:[0,\infty)\times U\longmapsto\mathbb{R}$ is continuous on $[0,\infty)\times U$. This function can be continuously extended to a new function $\tilde f:\mathbb{R}\times U\longmapsto\mathbb{R}$: \begin{equation}\label{eq:weakex} \tilde f(t,x)=\left\{ \begin{array}{ll} f(t,x), & \hbox{$(t,x)\in [0,\infty)\times U$;} \\ f(-t,x), & \hbox{$(t,x)\in (-\infty,0)\times U.$} \end{array} \right. \end{equation} Now assume in addition that $f\in L_{loc}^1([0,\infty)\times U)$ and that it is weakly differentiable in the sense of equation \eqref{eq:weak2}. Then one can easily show that $\tilde f\in L_{loc}^1(\mathbb{R}\times U)$ and that it is weakly differentiable on the open set $\mathbb{R}\times U$ in the sense of Definition \ref{def:weak1}. The weak derivatives of $\tilde f$ can be stated explicitly in terms of the weak derivatives of $f$. For instance, in the case $d=1$, one can easily check that $$\frac{\partial\tilde f}{\partial t}(t,x)=\left\{ \begin{array}{ll} \frac{\partial f}{\partial t}(t,x), & \hbox{$(t,x)\in [0,\infty)\times U$;} \\ -\frac{\partial f}{\partial t}(-t,x), & \hbox{$(t,x)\in (-\infty,0)\times U,$} \end{array} \right.$$ and $$\frac{\partial\tilde f}{\partial x}(t,x)=\left\{ \begin{array}{ll} \frac{\partial f}{\partial x}(t,x), & \hbox{$(t,x)\in [0,\infty)\times U$;} \\ \frac{\partial f}{\partial x}(-t,x), & \hbox{$(t,x)\in (-\infty,0)\times U,$} \end{array} \right.$$ where $\frac{\partial f}{\partial t}(t,x)$ and $\frac{\partial f}{\partial x}(t,x)$ are the weak derivatives of $f$ in the sense of equation \eqref{eq:weak2}. Assume that $(\Omega,\mathfrak F,\mathbb{P})$ is a complete probability space. Let $X=(X_t)_{t\geq0}$, $X_t:\Omega\longmapsto U$, $U\subset\mathbb{R}^d$, be a c\`adl\`ag stochastic process defined on this space. In any extension of It\^o's formula, it is important to somehow measure the amount of time that the process spends in certain regions of the domain. In particular, this is crucial for those points at which the function is not smooth. For instance, in the case of the Meyer-It\^o formula (see Theorem 70, Chapter IV of \cite{protter}), this is done through local times.
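As a small numerical aside (our own sketch, not part of the development that follows, and with all numerical choices ad hoc), the mollification results of Section \ref{sec:pd} can be visualised in one dimension: mollifying $f(x)=|x|$ with a bump function, the derivatives of the smooth approximations converge to the weak derivative $\mathrm{sign}(x)$ away from the origin and remain bounded by $\sup|f'|=1$, in line with Lemma \ref{lemma:key}.
\begin{verbatim}
import numpy as np

def bump(u):
    # unnormalized C_c^infty profile supported in (-1, 1)
    out = np.zeros_like(u)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]
f = np.abs(x)                                       # weak derivative is sign(x)

for eps in (0.5, 0.1, 0.02):
    eta = bump(x / eps)
    eta /= eta.sum() * dx                           # samples of eta^eps, integral one
    f_eps = np.convolve(f, eta, mode="same") * dx   # approximates (eta^eps * f)(x)
    d_eps = np.gradient(f_eps, dx)
    interior = np.abs(x) < 1.0                      # stay away from grid-boundary artefacts
    away = interior & (np.abs(x) > 0.2)
    err = np.max(np.abs(d_eps[away] - np.sign(x[away])))
    print(f"eps={eps}: max deviation from sign(x) away from 0: {err:.3f}, "
          f"max|derivative|: {np.abs(d_eps[interior]).max():.3f}")
\end{verbatim}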
In the next proposition, we discuss a similar tool which is a key result in our extension. The proposition is provided for a certain class of processes explained below. \begin{assump}\label{assump:main} Suppose that $X:[0,\infty)\times\Omega\longmapsto U$ is a c\`adl\`ag stochastic process defined on the complete probability space $(\Omega,\mathfrak F,\mathbb{P})$ that satisfies the following condition: if $A\subset U$ is a Borel measurable set such that $m(A)=0$, where $m$ is the Lebesgue measure, then for all $s\in\mathbb{R}^+$, $\mathbb{P}(X_s\in A)=0$. In other words, for all $s\in\mathbb{R}^+$, the measure $\mu_s$ on $U$ defined by $\mu_s(A)=\mathbb{P}(X_s\in A)$ is absolutely continuous with respect to the Lebesgue measure. \end{assump} \begin{prop}\label{prop:main} Assume that the process $X$ satisfies Assumption \ref{assump:main}. Let $A\subset [0,\infty)\times U$ be any Lebesgue measurable set such that $m(A)=0$. Then for all $t\geq0$ we have $$\mathbb{P}\set{\omega\in\Omega}{m(\set{s\in[0,t]}{(s,X_s(\omega))\in A})=0}=1.$$ In particular, this implicitly implies that for almost all $\omega\in\Omega$, the set $\set{s\in[0,t]}{(s,X_s(\omega))\in A}$ is Lebesgue measurable for all $t\geq0$. \end{prop} \begin{proof} First assume that $A$ is a Borel measurable set. Define the process $Y:[0,\infty)\times\Omega\longmapsto[0,\infty)\times U$ by $Y(s,\omega)=(s,X_s(\omega))$. The process $Y$ is c\`adl\`ag and by Proposition 1.21 of \cite{jacod}, $Y$ is $\mathfrak B_{[0,\infty)}\times\mathfrak F$ measurable, where $\mathfrak B_{[0,\infty)}$ is the Borel $\sigma$-algebra on $[0,\infty)$ and $\mathfrak F$ is the $\sigma$-algebra on $\Omega$. Hence $Y^{-1}(A)$ belongs to $\mathfrak B_{[0,\infty)}\times\mathfrak F$ and so $\llbracket0,t\rrbracket\cap Y^{-1}(A)$ is in $\mathfrak B_{[0,\infty)}\times\mathfrak F\subset\mathcal L\times\mathfrak F$, where $\llbracket0,t\rrbracket=[0,t]\times\Omega$, and $\mathcal L$ is the Lebesgue $\sigma$-algebra on $[0,\infty)$. Therefore the function $f:[0,\infty)\times\Omega\longmapsto\mathbb{R}$ defined by $f:=1_{\llbracket0,t\rrbracket\cap Y^{-1}(A)}$ belongs to $L^1(m\times\mathbb{P})$. From the Fubini-Tonelli Theorem, see Theorem 2.37 of \cite{folland}, it follows that $f_\omega$ defined by $f_\omega:=f(\cdot,\omega)$ is in $L^1(m)$ for almost all $\omega$. So for a fixed $\omega$, $\llbracket0,t\rrbracket\cap Y^{-1}(A)$ is Lebesgue measurable, and $m\set{s\in[0,t]}{(s,X_s(\omega))\in A}$ is well defined for almost all $\omega\in\Omega$. Moreover, let $Z(\omega):=\integral{}{}{f_\omega}dm=m\set{s\in[0,t]}{\left(s,X_s(\omega)\right)\in A}$; then again by the Fubini-Tonelli Theorem $Z$ is a random variable and $Z\in L^1(\mathbb{P})$; furthermore, we can calculate its expectation \begin{align*} \expect{Z} &= \int\int f_\omega\;dm\;d\mathbb{P}=\integral{0}{t}{\integral{}{}{f_s}d\mathbb{P}}ds \\ &= \integral{0}{t}{\expect{1_{\{(s,X_s)\in A\}}}}ds. \end{align*} Note that for a fixed $s$, $1_{\{(s,X_s)\in A\}}=1_{\{X_s\in A_s\}}$, where $A_s=\set{y\in\mathbb{R}^d}{(s,y)\in A}$ is Borel measurable, hence we obtain \begin{equation}\label{eq:mainprop1} \expect{Z}=\integral{0}{t}{\mathbb{P}(X_s\in A_s)}ds. \end{equation} The set $A$ is Borel measurable and hence Lebesgue measurable as well. By Theorem 2.36 of \cite{folland} the function $s\longmapsto m(A_s)$ is Lebesgue measurable and $m(A)=\integral{[0,\infty)}{}{m(A_s)}ds$. By the proposition's assumption $m(A)=0$, which implies that $m(A_s)=0$ for Lebesgue almost all $s\geq0$, i.e.
there exists a set $N\subset[0,\infty)$ such that $m(N)=0$ and if $s\notin N$ then $m(A_s)=0.$ From equation \eqref{eq:mainprop1} and Assumption \ref{assump:main}, we get $$\expect{Z}=\integral{[0,t]\cap N^c}{}{\mathbb{P}(X_s\in A_s)}ds=\integral{[0,t]\cap\set{s}{m(A_s)=0}}{}{\mathbb{P}(X_s\in A_s)}ds=0.$$ The random variable $Z$ is non-negative and $\expect{Z}=0$, hence $Z=0$, $\mathbb{P}$-almost surely, which means that for almost all $\omega\in\Omega$, $m(\set{s\in[0,t]}{(s,X_s(\omega))\in A})=0$. This completes the proof when $A$ is Borel measurable. Next, suppose that $A$ is a Lebesgue measurable set. Then $A=A^{'}\cup A^{''}$, $A^{''}\subset B$, where $A^{'}$ and $B$ are Borel measurable and $m(B)=0$. Now the result follows from the previous part, and the facts that $m(A)=0$ and the probability space is complete. \end{proof} Note that if $A=[0,t]\times B$, where $B\subset\mathbb{R}^d$ is a Borel set, then $m(\set{s\in[0,t]}{(s,X_s)\in A})$ is the amount of time that the process $X$ spends in the Borel set $B$. So under Assumption \ref{assump:main}, Proposition \ref{prop:main} shows that this amount of time is almost surely zero whenever $B$ has Lebesgue measure zero. We would like to point out that this measure can be quite different from local times. For instance, let $X$ be a standard Brownian motion; then by Proposition \ref{prop:main}, $m\set{s\in[0,t]}{X_s=a}=0$, $\mathbb{P}$-almost surely for all real numbers $a$, whereas the local time of a Brownian motion at the level $a$ is not zero. This is also because of the fact that, as a measure, the local time of a Brownian motion is singular with respect to the Lebesgue measure. \section{The Main Result}\label{sec:mr} In this section, we state and prove our main result. First, we mention that the result holds for a finite variation L\'evy process that satisfies Assumption \ref{assump:main}. This assumption is not valid for a compound Poisson process $X$, as $\mathbb{P}[X_t=0]>0$ for $t>0$, and therefore the measure $\mu_t$ defined in Assumption \ref{assump:main} is not absolutely continuous with respect to the Lebesgue measure, see Remark 27.3 of \cite{sato}. However, based on Theorem 27.7 of \cite{sato}, Assumption \ref{assump:main} is always satisfied for a finite variation L\'evy process with infinite activity, if its L\'evy measure is absolutely continuous with respect to the Lebesgue measure. For simplicity we present the theorem for the case $d=1$; however, there is no restriction on extending the result to general $d$. \begin{theorem}\label{theorem:main} Assume that $f:[0,\infty)\times U\longmapsto\mathbb{R}$ is a continuous function on $[0,\infty)\times U$ such that $f\in L_{loc}^1([0,\infty)\times U)$, $supp(f)\subset[0,\infty)\times U$, and $U$ is an open set of $\mathbb{R}$. Let the weak derivatives $\frac{\partial f}{\partial s},\frac{\partial f}{\partial x}\in L_{loc}^1([0,\infty)\times U)$ be locally bounded and defined by equation \eqref{eq:weak2}. Suppose that $X$ is a finite variation L\'evy process satisfying Assumption \ref{assump:main} such that for all $t\geq0$, $X_t$ and $X_{t^-}$ are in $U$.
Then \begin{align*} f(t,X_t) = f(0,X_0)&+\integral{0}{t}{\frac{\partial f}{\partial s}(s,X_s)}ds+\gamma\integral{0}{t}{\frac{\partial f}{\partial x}(s,X_s)}ds \\ &+ \iint\limits_{[0,t]\times \mathbb{R}}\parenth{f(s,X_{s^-}+x)-f(s,X_{s^-})}\;J_X(ds\times dx), \end{align*} where $J_X$ and $\gamma$ are respectively the Poisson random measure and the drift coefficient of the process $X$ admitting the following representation: $X_t=\gamma t+\integral{[0,t]\times \mathbb{R}}{}{x}J_X(ds\times dx)$. \end{theorem} \begin{proof} Assume that $\tilde f$ is the extension of the function $f$ to $\mathbb{R}\times U$ given by equation \eqref{eq:weakex}; note that $supp(\tilde f)\subset\mathbb{R}\times U$. Let $\phi_n=\eta^{\frac{1}{n}}$ and $f_n(t,x):=(\phi_n*\tilde f1_{\mathbb{R}\times U})(t,x)$, where $(t,x)\in\mathbb{R}^2$, $n\geq1$, and $\eta^{\frac{1}{n}}$ is defined in Section \ref{sec:pd}. Since $\tilde f\in L_{loc}^1(\mathbb{R}\times U)$, by Theorem \ref{theorem:convweak}, $f_n\in C^\infty(\mathbb{R}\times\mathbb{R})$ for all $n\geq1$. Hence from It\^o's formula, see Theorem 4.2 of \cite{kyprianou}, we have \begin{align*} f_n(t,X_t) = f_n(0,X_0)&+\integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)}ds+\gamma\integral{0}{t}{\frac{\partial f_n}{\partial x}(s,X_s)}ds \\ &+ \iint\limits_{[0,t]\times \mathbb{R}}\parenth{f_n(s,X_{s^-}+x)-f_n(s,X_{s^-})}\;J_X(ds\times dx). \end{align*} The rest of the proof is divided into five steps: \textbf{Step 1.} Since $\tilde f$ is a continuous function, by Lemma \ref{lemma:fcont} $L_{\tilde f}=\mathbb{R}\times U$. On the other hand, for all $t\geq0$, $X_t$ is in $U$ and so by Theorem \ref{theorem:convsmooth}, $f_n(t,X_t)\longrightarrow\tilde f(t,X_t)$, for all $\omega\in\Omega$ and $t\in\mathbb{R}$. In particular $f_n(0,X_0)\longrightarrow\tilde f(0,X_0)$. Also note that for $t\geq0$, $\tilde f(t,X_t)=f(t,X_t)$ by the definition of $\tilde f$. \textbf{Step 2.} From Theorem \ref{theorem:convweak}, if $(s,X_s)\in L_{\frac{\partial\tilde f}{\partial s}}$, then we have $$\frac{\partial f_n}{\partial s}(s,X_s)\longrightarrow\frac{\partial\tilde f}{\partial s}(s,X_s).$$ Let $L_1=\mathbb{R}\times U-L_{\frac{\partial\tilde f}{\partial s}}$, then \begin{align*} \integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)}ds &= \integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)1_{\{(s,X_s)\notin L_1\}}}ds+\integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)1_{\{(s,X_s)\in L_1\}}}ds. \end{align*} By Theorem \ref{theorem:ldt}, $m(L_1)=0$, therefore by Proposition \ref{prop:main}, $m\set{s\in[0,t]}{(s,X_s)\in L_1}=0$, $\mathbb{P}$-almost surely. Hence, because of the properties of the Lebesgue integral, for each fixed $t$ the integral $$\integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)1_{\{(s,X_s)\in L_1\}}}ds=\integral{[0,t]\cap\{s:\;(s,X_s)\in L_1\}}{}{\frac{\partial f_n}{\partial s}(s,X_s)}ds,$$ is zero $\mathbb{P}$-almost surely.
Therefore for a fixed $t$, $$\integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)}ds=\integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)1_{\{(s,X_s)\notin L_1\}}}ds,\;\mathbb{P}-\;\text{almost surely}.$$ By Lemma \ref{lemma:key}, for all $(s,x)\in\mathbb{R}^2$, $|\frac{\partial f_n}{\partial s}(s,x)|\leq\sup_{z\in (\mathbb{R}\times U)\cap\Lambda(s,x)}|\frac{\partial\tilde f}{\partial s}(z)|\leq \sup_{z\in \Lambda(s,x)}|\frac{\partial\tilde f}{\partial s}(z)|$, where $\Lambda(s,x)=\set{y\in\mathbb{R}^2}{(s,x)-y\in K}$, and $K=supp(\phi_n)=\overline{B_{\frac{1}{n}}(0)}\subset\overline{B_1(0)}$, which yields $$|\frac{\partial f_n}{\partial s}(s,X_s)|\leq\sup_{z\in\Lambda(s,X_s)}|\frac{\partial\tilde f}{\partial s}(z)|,\;0\leq s\leq t.$$ For a fixed $\omega\in\Omega$, $\Lambda(s,X_s)$ is bounded, because $X$ is bounded on $[0,t]$ (due to being a c\`adl\`ag process). Therefore for a fixed $\omega\in\Omega$ and $s\in[0,t]$, one can find an upper bound for $|\frac{\partial f_n}{\partial s}(s,X_s)|$ that depends only on $\omega$, $t$, and the minimum and maximum of $\frac{\partial\tilde f}{\partial s}(s,X_s)$ on $[0,t]$. This upper bound is finite because the weak derivatives of $f$ are locally bounded by the assumption of the theorem, and so the weak derivatives of $\tilde f$ must be locally bounded too. Therefore, one can apply the Lebesgue Dominated Convergence theorem and obtain: $$\lim_{n\rightarrow\infty}\integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)}ds=\integral{0}{t}{\lim_{n\rightarrow\infty}\frac{\partial f_n}{\partial s}(s,X_s)1_{\{(s,X_s)\notin L_1\}}}ds,\;\mathbb{P}-\;\text{almost surely}.$$ By Theorem \ref{theorem:convweak}, this is $\mathbb{P}$-almost surely equal to $\integral{0}{t}{\frac{\partial\tilde f}{\partial s}(s,X_s)1_{\{(s,X_s)\notin L_1\}}}ds$. Since $\mathbb{P}$-almost surely, $m\set{s\in[0,t]}{(s,X_s)\in L_1}=0$, and for each $s\in[0,t]$, $\frac{\partial\tilde f}{\partial s}(s,X_s)=\frac{\partial f}{\partial s}(s,X_s)$, we have $$\lim_{n\rightarrow\infty}\integral{0}{t}{\frac{\partial f_n}{\partial s}(s,X_s)}ds=\integral{0}{t}{\frac{\partial f}{\partial s}(s,X_s)}ds,\;\mathbb{P}-\;\text{almost surely}.$$ \textbf{Step 3.} Similar to Step 2, one can prove that $$\lim_{n\rightarrow\infty}\integral{0}{t}{\frac{\partial f_n}{\partial x}(s,X_s)}ds=\integral{0}{t}{\frac{\partial f}{\partial x}(s,X_s)}ds,\;\mathbb{P}-\;\text{almost surely}.$$ \textbf{Step 4.} Let $I_n=\iint\limits_{[0,t]\times\mathbb{R}}\parenth{f_n(s,X_{s^-}+x)-f_n(s,X_{s^-})}\;J_X(ds\times dx)$. By using the mean value theorem we have $|f_n(s,X_{s^-}+x)-f_n(s,X_{s^-})|=|\frac{\partial f_n}{\partial x}(s,C)|\,|x|$, where $C$ is a random variable between $X_{s^-}$ and $X_{s^-}+x$. By applying Lemma \ref{lemma:key} and the same procedure as in Step 2, we can show that $|f_n(s,X_{s^-}+x)-f_n(s,X_{s^-})|\leq C^{'}|x|$, where $C^{'}$ is a finite random variable that does not depend on $s$, $x$, or $n$.
On the other hand, since $X$ is a finite variation L\'evy process, we also have that $\integral{[0,t]\times\mathbb{R}}{}{|x|}J_X{(ds\times dx)}<\infty$, $\mathbb{P}$-almost surely. Therefore, by applying the Lebesgue Dominated Convergence theorem, one can interchange the limit and the integral in the expression $I_n$ as $n$ goes to infinity. Since $L_{\tilde f}=\mathbb{R}\times U\supseteq[0,t]\times U$, and for all $s\geq0$, $X_s$ and $X_{s^-}$ are in $U$, by part three of Theorem \ref{theorem:convsmooth}, we get \begin{align*} \lim_{n\rightarrow\infty}I_n &= \iint\limits_{[0,t]\times\mathbb{R}}\parenth{\tilde f(s,X_{s^-}+x)-\tilde f(s,X_{s^-})}\;J_X(ds\times dx) \\ &=\iint\limits_{[0,t]\times\mathbb{R}}\parenth{f(s,X_{s^-}+x)-f(s,X_{s^-})}\;J_X(ds\times dx). \end{align*} \textbf{Step 5.} From Steps 1, 2, 3, 4, for a fixed $t\geq0$, we have $\mathbb{P}$-almost surely the following identity \begin{align}\label{eq:step5} f(t,X_t) = f(0,X_0)&+\integral{0}{t}{\frac{\partial f}{\partial s}(s,X_s)}ds+\gamma\integral{0}{t}{\frac{\partial f}{\partial x}(s,X_s)}ds\nonumber \\ &+ \iint\limits_{[0,t]\times\mathbb{R}}\parenth{f(s,X_{s^-}+x)-f(s,X_{s^-})}\;J_X(ds\times dx). \end{align} The process $X$ is c\`adl\`ag, so the left-hand side and the right-hand side of the above equality are well defined processes. Therefore, so far we have shown that the two sides of the above equation (when considered as processes) are in fact modifications of each other. Now we prove that as processes the left-hand side and the right-hand side are indeed indistinguishable. \begin{enumerate} \item First note that since $f$ is continuous on $[0,\infty)\times U$, the process $(f(t,X_t))_{t\geq0}$ is c\`adl\`ag. \item The function $\frac{\partial f}{\partial s}$ is Borel measurable and for a fixed $\omega\in\Omega$, $(X_s)_{0\leq s\leq t}$ is also Borel measurable. Hence for a fixed $\omega$, $\frac{\partial f}{\partial s}(s,X_s)$ is Borel measurable. So it is also Lebesgue measurable and by the fundamental theorem of Lebesgue integral calculus $t\longmapsto\integral{0}{t}{\frac{\partial f}{\partial s}(s,X_s)}ds$ is uniformly continuous in $t$. Note that in Step 2, we actually showed that $\frac{\partial\tilde f}{\partial s}(s,X_s)$ is Lebesgue integrable and on $[0,t]$, $\frac{\partial\tilde f}{\partial s}(s,X_s)=\frac{\partial f}{\partial s}(s,X_s)$. \item Similarly to the previous case, $t\longmapsto\integral{0}{t}{\frac{\partial f}{\partial x}(s,X_s)}ds$ is also continuous in $t$. \item Let $Z_t:=\iint\limits_{[0,t]\times\mathbb{R}}\parenth{f(s,X_{s^-}+x)-f(s,X_{s^-})}\;J_X(ds\times dx)$. For all $s\geq0$, $X_s$ and $X_{s^-}$ are in $U$, therefore $Z_t=\sum_{0\leq s\leq t}\parenth{f(s,X_s)-f(s,X_{s^-})}$. If the function $f$ is $C^{1,1}$, then obviously the process $Z=(Z_t)_{t\geq0}$ is right continuous.
However, since here $f$ is not necessarily smooth, to show the right continuity of $Z$ we argue as follows: \begin{align*} \lim_{h\rightarrow0^+}|Z_{t+h}-Z_t| & = \lim_{h\to0^+}|\sum_{t<s\leq t+h}\parenth{f(s,X_s)-f(s,X_{s^-})}| \\ &\leq\lim_{h\rightarrow0^+}\sum_{t<s\leq t+h}|f(s,X_s)-f(s,X_{s^-})|\\ &=\lim_{h\rightarrow0^+}\sum_{t<s\leq t+h}|\lim_{n\rightarrow\infty}\left(f_n(s,X_s)-f_n(s,X_{s^-})\right)|\\ &\leq\lim_{h\rightarrow0^+}\sum_{t<s\leq t+h}C^{''}|\Delta X_s|, \end{align*} where, similar to Step 4, one can show that $C^{''}$ is a finite random variable that does not depend on $s$, $h$, or $n$, so we obtain $$\lim_{h\rightarrow0^+}|Z_{t+h}-Z_t|\leq C^{''}\lim_{h\rightarrow0^+}\sum_{t<s\leq t+h}|\Delta X_s|=0,\;\mathbb{P}-\;\text{almost surely}.$$ This shows that the process $Z$ is right continuous. Thus the left-hand side and the right-hand side of equation \eqref{eq:step5}, when considered as processes, are right continuous, and we already know that they are also modifications of each other. By Theorem 4, Chapter I of \cite{protter}, we conclude that the left-hand side and the right-hand side of this equation define two processes that are indistinguishable. This proves our theorem. \end{enumerate} \end{proof} The next example shows that even in the one dimensional case, there are simple functions for which the Meyer-It\^o formula is not applicable but Theorem \ref{theorem:main} can be used. \begin{example} Assume that $X:[0,\infty)\times\Omega\longmapsto\mathbb{R}$ is a finite variation L\'evy process that satisfies Assumption \ref{assump:main}. Let the function $f:\mathbb{R}\longmapsto\mathbb{R}$ be defined by $$ f(x)=\left\{ \begin{array}{ll} x^2\sin(\frac{1}{x}), & \hbox{$x\neq0$;} \\ 0, & \hbox{$x=0.$} \end{array} \right.$$ This function is continuous, but its derivative is not continuous at the origin. So the classical It\^o formula cannot be applied. Moreover, one can show that $f$ cannot be written as the difference of two convex functions, and hence the Meyer-It\^o formula (Theorem 70, Chapter IV of \cite{protter}) is not applicable either. However, $f$ is weakly differentiable, its weak derivative is locally bounded, and therefore Theorem \ref{theorem:main} is in force. \end{example} \begin{example} Let the function $f$ and the process $X$ be as in Theorem \ref{theorem:main}. In addition, we equip the probability space $(\Omega,\mathfrak F,\mathbb{P})$ with the natural filtration $\mathbb F^X=\{\mathcal F_t; t\geq0\}$ generated by the history of $X$, i.e. for each $t\geq0$, $\mathcal F_t$ is the sigma algebra generated by $\{X_s; s\leq t\}$ and all the null sets of $\mathfrak F$. Since $X$ is a finite variation L\'evy process, similar to Step 4 of Theorem \ref{theorem:main}, one can show that for every $t\geq0$, $\iint\limits_{[0,t]\times\mathbb{R}}\left|f(s,X_{s^-}+x)-f(s,X_{s^-})\right|\;ds\times v(dx)<C\int_\mathbb{R} |x|\;v(dx)<\infty,$ $\mathbb{P}$-almost surely, where $C$ is a random variable that does not depend on $s$ and $x$.
Then we have the following decomposition: $f(t,X_t)=f(0,X_0)+M_t+\int_0^t\mathcal A f(s,X_s)\;ds$, where $M$ is a local martingale with respect to $\mathbb F^X$ given by $M_t=\iint\limits_{[0,t]\times\mathbb{R}}\parenth{f(s,X_{s^-}+x)-f(s,X_{s^-})}\;\tilde J_X(ds\times dx)$, $\tilde J_X(ds\times dx)=J_X(ds\times dx)-ds\times v(dx)$, and \begin{align*} \mathcal A f(s,X_s) = \frac{\partial f}{\partial s}(s,X_s)+\gamma\frac{\partial f}{\partial x}(s,X_s) + \int_\mathbb{R}\parenth{f(s,X_{s^-}+x)-f(s,X_{s^-})}\;v(dx). \end{align*} In other words, this shows that the process $(f(t,X_t))_{t\geq0}$ is a special semimartingale. \end{example} In the next lemma, we get back to the motivation provided in the introduction. This lemma also highlights applications of Theorem \ref{theorem:main} in Feynman-Kac representations. Compared to similar results, for instance \cite{rong}, this representation is valid in the absence of diffusion terms. In addition, the assumptions on the underlying function are less restrictive. \begin{lemma} Suppose that $X$ is a finite variation L\'evy process that satisfies Assumption \ref{assump:main} for $U=\mathbb{R}$. Let the function $P=P(t,x)$, defined by equation \eqref{eq:purposed}, admit $L_{loc}^1([0,T]\times(0,\infty))$-weak derivatives which are locally bounded. Then using Theorem \ref{theorem:main} and following the same procedure as Proposition 12.2 of \citet{cont} (or \cite{rong}), one can show that $P=P(t,x)$ is the solution of PIDE \eqref{eq:pide}. \end{lemma} \section{Conclusions} A version of It\^o's formula is studied for finite variation L\'evy processes, with a straightforward extension to the multi-dimensional case, and for time-dependent functions that are only required to be weakly differentiable. The formula can be particularly useful for functions that are continuous and piecewise smooth. Possible applications of the formula were motivated by a financial example. The two main assumptions are that the process has finite variation and that the weak derivatives of the function are locally bounded. The extension of the formula to pure jump semimartingales, using the theory of distributions (in functional analysis), is an interesting direction for future work. \section{Acknowledgments} The first author gratefully acknowledges partial financial support from the Vienna Science and Technology Fund (WWTF) under grant MA09-005 at Vienna University of Technology. Also, the authors are thankful to an anonymous referee for his/her constructive comments. \end{document}
\begin{document} \title{Generalized Small Cancellation Presentations for Automatic Groups} \author{Robert H. Gilman} \address{Department of Mathematical Sciences\\Stevens Institute of Technology} \email{[email protected]} \thanks{Partially supported by NSF grant 1318716} \subjclass[2010]{20F05, 20F65} \keywords{small cancellation, automatic group, pregroup} \date{June 28, 2014} \begin{abstract} By a result of Gersten and Short finite presentations satisfying the usual non-metric small cancellation conditions present biautomatic groups. We show that in the case in which all pieces have length one, a generalization of the C(3)-T(6) condition yields a larger collection of biautomatic groups. \end{abstract} \maketitle \newcommand{\tto}{\buildrel * \over \rightarrow } \newcommand{\edge}[1]{\buildrel #1 \over \rightarrow } \newcommand{\abs}[1]{\vert #1\vert} \newcommand{\Abs}[1]{\Vert #1 \Vert} \newcommand{\set}[1]{\{ #1 \}} \newcommand{\subgroup}[1]{\langle #1 \rangle} \newcommand{\ovr}[1]{\overline #1} \section{Introduction} Gersten and Short show in~\cite{GS1} that groups satisfying any of the usual non-metric small cancellation conditions C(6), or C(4)-T(4), or C(3)-T(6) are biautomatic. Most of their argument with respect to C(3)-T(6) applies verbatim to a more general situation and yields small cancellation type presentations for a larger class of automatic groups. These presentations are essentially finite prees (in the sense of Frank Rimlinger~\cite{Ri}) which satisfy two axioms introduced by Reinhold Baer~\cite{Ba}. A pree, defined below, is a set with a partial multiplication satisfying certain axioms. An example is an amalgam of two groups; the multiplication is defined for pairs of elements within each group but is not defined for pairs which are not within one group or the other. Amalgams obey an additional axiom which makes them pregroups as defined by John Stallings~\cite{St1}. See the recent survey~\cite{GL2012} for an introduction to prees and their axioms. The axioms we employ, A(4) and A(5), are given in Section~\ref{prees}. Their relation to the usual small cancellation conditions is discussed at the beginning of Section~\ref{cancellation}, where we develop a variation of small cancellation theory. There are many such variations, e.g.~\cite{J1, J2, J3, J4, MW1, MW2, O}. The one which seems closest to ours is by Uri Weiss~\cite{W}, who shows that V(6) groups with all pieces of length 1 are biautomatic. V(6) denotes a generalization of the C(6) and C(4)-T(4) conditions. Section 4 contains the proof of our main theorem, Theorem~\ref{A45}, which says that the universal groups of finite prees satisfying Axioms A(4) and A(5) are biautomatic. As every finite group satisfies these two axioms and is its own universal group, the class of groups covered by Theorem~\ref{A45} includes all finite groups. Finite C(3)-T(6) groups, on the other hand, are all cyclic~\cite{EH}. It would be interesting to have other examples of groups from Theorem~\ref{A45} which are not C(3)-T(6). \section{Prees}\label{prees} \begin{definition}\label{pree} A pree is a set equipped with a partial multiplication which affords an identity, denoted $1$, two-sided inverses, and the following form of the associative law. \end{definition} It is not hard to show that inverses are unique in a pree.
Further if $ab=c$, then $b = a^{-1} c$ etc. It follows that products $ab=c$ may be thought of as triangles. More precisely the six products obtained by reading the edge labels around the boundary of the triangle on the left-hand side of Figure~\ref{triangles} in either direction and starting at any vertex are defined if any one of them is. As usual an edge read against its orientation contributes the inverse of its label. \begin{figure} \caption{The product $ab=c$ (left) and the associative law (right).}\label{triangles} \end{figure} \begin{definition}[The associative law]\label{associative} If $ab$ and $bc$ are defined, then $(ab)c$ is defined if and only if $a(bc)$ is; and when they are defined, $(ab)c=a(bc)$. \end{definition} The associative law is equivalent to a geometric condition, namely that if three triangles fit around a common vertex as in the right-hand side of Figure~\ref{triangles}, then the perimeter is also a valid triangle in $P$. \begin{definition} The universal group of a pree $P$ is the group $U(P)$ with generators $P$ and relators $abc^{-1}$ for every product $ab=c$ defined in $P$. We write $\overline a$ for the image of $a$ in $U(P)$. Elements of $U(P)$ are represented by words over the alphabet $P$. Note that $1$ is a letter in this alphabet. \end{definition} \begin{definition}\label{reducible} A word over $P$ is reducible if the product of two successive letters is defined in $P$. Otherwise the word is called irreducible or reduced. \end{definition} It is straightforward to use Tietze transformations to change any finite presentation into one given by a finite partial multiplication table, and with more Tietze transformations one can ensure that this multiplication table satisfies the conditions of a pree. Thus every finitely presented group is the universal group of a finite pree, and the pree affords a finite presentation for the group. \begin{theorem}\label{embedding} It is undecidable whether or not the canonical morphism $\pi: P \to U(P)$ from a pree $P$ to its universal group is an embedding. \end{theorem} This theorem is a special case of a result of Trevor Evans~\cite{Ev} which says that if the embedding problem is solvable for a class of finite partial algebras, then the word problem is solvable for the corresponding class of algebras. Now we present the additional axioms we need. There are two of them, and they have the geometric incarnations given in Figure~\ref{spree}. \begin{figure} \caption{Axioms A(4) and A(5) with $b_2b_3$ defined.}\label{spree} \end{figure} \begin{description} \item [Axiom A(4)] If the products $a_1^{-1} a_2=b_1$, $a_2^{-1} a_3=b_2$, $a_3^{-1} a_4=b_3$, and $a_4^{-1} a_1=b_4$ are defined, then at least one of the products $b_ib_{i+1}$ (indices read modulo 4) is defined. \item [Axiom A(5)] If the products $a_1^{-1} a_2=b_1$, $a_2^{-1} a_3=b_2$, $a_3^{-1} a_4=b_3$, $a_4^{-1} a_5=b_4$ and $a_5^{-1} a_1=b_5$ are defined, then at least one of the products $b_ib_{i+1}$ (indices read modulo 5) is defined. \end{description} \section{Small Cancellation Theory}\label{cancellation} We require a simple extension of small cancellation theory for diagrams constructed from triangles. All relations in the presentation $P$ for $U(P)$ have length 3, and there are no pieces of length 2. Indeed $ab$ can occur in at most one relator, namely the relator $abc^{-1}$ corresponding to a product $ab=c$ defined in $P$. Thus the small cancellation condition C(3) holds.
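As an illustration of these notions, the following Python sketch encodes a small pree by its partial multiplication and checks the associative law together with Axioms A(4) and A(5) by brute force. It uses the seven-element subset of $\mathbb{Z}\times\mathbb{Z}$ that appears as an example in Section~\ref{biautomaticity}, under our own assumption that a product is defined exactly when the componentwise sum again lies in the set; the helper names are ours and hypothetical.
\begin{verbatim}
# Brute-force check of the pree axioms and of Axioms A(4), A(5) for the
# seven-element subset of Z x Z used as an example later in the paper.
# Assumption: the product a*b is defined exactly when a+b lies in P again.
from itertools import product as cartesian

P = {(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (-1, -1)}
ONE = (0, 0)

def mul(a, b):
    """Partial multiplication: return a+b if it lies in P, else None."""
    c = (a[0] + b[0], a[1] + b[1])
    return c if c in P else None

def inv(a):
    return (-a[0], -a[1])

# identity and two-sided inverses
assert all(mul(ONE, a) == a and mul(a, ONE) == a for a in P)
assert all(inv(a) in P and mul(a, inv(a)) == ONE for a in P)

# the pree form of the associative law
for a, b, c in cartesian(P, repeat=3):
    ab, bc = mul(a, b), mul(b, c)
    if ab is not None and bc is not None:
        left, right = mul(ab, c), mul(a, bc)
        assert (left is None) == (right is None)
        if left is not None:
            assert left == right

def quotients(nodes):
    """b_i = a_i^{-1} a_{i+1}, read cyclically; None if undefined."""
    k = len(nodes)
    return [mul(inv(nodes[i]), nodes[(i + 1) % k]) for i in range(k)]

# Axioms A(4) and A(5): report violations instead of assuming the outcome
violations = {4: 0, 5: 0}
for k in (4, 5):
    for nodes in cartesian(P, repeat=k):
        bs = quotients(nodes)
        if all(b is not None for b in bs):
            if not any(mul(bs[i], bs[(i + 1) % k]) is not None for i in range(k)):
                violations[k] += 1
print("violations of A(4):", violations[4], "violations of A(5):", violations[5])
\end{verbatim}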
Similarly the condition T(6) says that the perimeter of any diagram formed by fitting 2, 3, 4 or 5 triangles around a common vertex (as in Figure~\ref{spree} and the right-hand side of Figure~\ref{triangles}) contains a subword $aa^{-1}$ for some $a\in P$. Axioms A(4) and A(5) on the other hand together with the Associative law enforce the weaker condition that the perimeter is reducible. \begin{definition} A diagram is a planar directed labeled graph whose boundary is a simple closed curve and whose faces are triangles each with 3 distinct vertices. The label of the boundary of each face is a relator in $P$. The number of triangular faces of a diagram is its area. The boundary word, determined up to cyclic permutation and inverse, is the label of the boundary path. Boundaries of triangles are sometimes called perimeters. For brevity boundary words are sometimes themselves called boundaries. \end{definition} The label of a path in a diagram is the product of the labels of edges except that an edge traversed against its orientation contributes the inverse of its label. We may change the orientation of an edge as long as we invert its label. To remove a triangle with one or more edges on the boundary we remove the edges on the boundary except we keep any vertices which began with degree more than 2. \begin{lemma} If $D$ is a diagram with area greater than one, there is a triangle in $D$ with one or more edges on the boundary such that removing that triangle results in a diagram. \end{lemma} \begin{proof} If there is a triangle with two edges on the boundary, then because the boundary is a simple closed curve, the common vertex has degree 2, and the triangle may be removed. Likewise a triangle with only one edge on the boundary and the opposite vertex in the interior of the diagram may be removed. Suppose then that all triangles meeting the boundary have exactly one edge and its opposing vertex on the boundary. For each such triangle the complement of the diagram is divided into two parts. Since the triangle has two edges not on the boundary, both parts have positive area. Pick a triangle such that one of the complementary parts has minimum possible area. Since that area is positive, it must contain another triangle meeting the boundary. But then that triangle would give a complementary part of smaller area. Thus this last case cannot arise. \end{proof} The preceding lemma has the following consequence. \begin{lemma}\label{recursive} Every diagram is generated by starting with a single triangle and attaching subsequent triangles in the following way. A triangle is attached to a diagram by identifying one or two edges of the triangle with an edge or two successive edges respectively of the boundary of the diagram so that identified edges have the same label and are oriented in the same direction or have inverse labels and are oriented in opposite directions. In addition two vertices of a triangle may not be identified when it is attached. \end{lemma} \begin{lemma}\label{boundary} If a diagram has two triangles with the same vertices, then it has an internal vertex of degree two. \end{lemma} \begin{proof} Let $D$ be a diagram. If $D$ has area 1, there is nothing to prove. Otherwise $D$ is formed by attaching a triangle $T$ to a diagram $D'$ of smaller area. By induction we may assume $D'$ does not have two triangles with the same vertices. Consequently the vertices of $T$ are also the vertices of some triangle $T'$ of $D'$. 
But then these vertices must be the vertices of a path of length 2 on the boundary of $D'$. It follows that the middle vertex is an internal vertex of degree 2 in $D$. \end{proof} \begin{definition} A minimal diagram is one which has minimum area among all diagrams with the same boundary word (up to cyclic permutation and inverse). \end{definition} \begin{lemma}\label{minimal} Let $D$ be a minimal diagram with boundary $w$. If $D$ has an internal vertex of degree 2, then $D$ consists of two triangles joined along two edges and $w=aa^{-1}$. \end{lemma} \begin{proof} It is easy to see that no vertex of any diagram has degree 1. If $p$ is an internal vertex of degree 2, then $p$ is a vertex of two triangles both of which contain the two edges incident to $p$. Hence both edges opposite $p$ have label $c=a^{-1} b$. Unless both of these edges are on the boundary of $D$, they can be combined to make a diagram of smaller area. If they are on the boundary, they must be the whole boundary because the boundary of $D$ is a simple closed curve. Thus $D$ consists of two triangles joined along two edges, and $w=cc^{-1}$. \end{proof} \begin{lemma}\label{le:curvature} Consider any diagram, and let $d(p)$ denote the degree of a vertex $p$. Then \begin{equation} \label{eq:curvature} \sum^{\circ} \left(4-d(p)\right) = 6 + \sum^{\bullet} \left(d(p)-6\right) \end{equation} where the first sum is over all boundary vertices and the second is over all internal ones. \end{lemma} \begin{proof} Argue by induction on area. \end{proof} Let $L(P)$ be the language of all labels of boundaries of diagrams. Each diagram determines multiple labels as we may start at any vertex on the perimeter and proceed clockwise or counterclockwise. Notice that because we do not identify vertices when building a diagram by attaching triangles, the boundary of every diagram has length at least two. \begin{lemma} $L(P)$ is the set of all words $w$ over the alphabet $P$ of length at least 2 such that $\overline w$ is the identity in $U(P)$. \end{lemma} \begin{proof} $U(P)$ is the quotient of the free semigroup $S$ over $P$ by the congruence $\sim$ with generators $ab\sim c$ for all products $ab=c$ defined in $P$. Say that $v, w\in S$ are {\em neighbors} if one is obtained from the other by replacing a subword $ab$ with $c$. Observe that if $v$ and $w$ are neighbors of length at least 2, and $D$ is a diagram for $w$, then a diagram for $v$ can be obtained by attaching a triangle to $D$. Suppose $|w|\ge 2$ and $w$ defines the identity in $U(P)$. Then there is a sequence of neighbors \[ w=u_1, u_2, \ldots, abc^{-1} \] where $ab=c$ is any product defined in $P$. As all elements of the sequence \[ w, 1w, 1u_1, 1u_2, \ldots, 1abc^{-1}, abc^{-1}\] have length at least 2, it follows that there is a diagram for $w$. Conversely if $w$ is a perimeter label for a diagram $D$, then by removing triangles we can reduce $D$ to a single triangle $T$. The corresponding sequence of neighbors begins with $w$ and ends with a label of the perimeter of $T$. It follows that $\overline w = 1$. \end{proof} \section{Biautomaticity}\label{biautomaticity} We consider a fixed pree $P$ satisfying Axioms A(4) and A(5) and show that its universal group $U(P)$ is biautomatic when $P$ is finite. Much of our argument follows that in~\cite{GS1}. \begin{lemma}\label{d6} If $D$ is a minimal diagram without internal vertices of degree 2, then all internal vertices have degree at least 6.
\end{lemma} \begin{proof} If an internal vertex $p$ has degree 3, then by the associative law from Definition~\ref{associative} we can remove $p$ and all incident edges and still have a diagram over $P$. Likewise if $p$ has degree 4 or 5, then Axioms A(4) and A(5) guarantee that one can discard $p$ and construct a diagram of smaller area by adding one or two internal edges. We must check that each new triangle created by the addition of edges has all vertices distinct. But if it did not, $D$ would have two triangles sharing the same three vertices, contrary to Lemma~\ref{boundary}. See Figure~\ref{spree}. \end{proof} \begin{lemma} \label{galleries} Let $w$ be a word of length at least 3 defining the identity in $U(P)$, and let $D$ be a diagram of minimal area with boundary $w$. Let $\delta_2$ and $\delta_3$ be the number of vertices of degree 2 and 3 respectively on the boundary of $D$, and let $\delta_5$ be the number of degree greater than 4, then $2\delta_2 + \delta_3 \ge 6 + \delta_5$. Further, if equality holds, then all internal vertices have degree 6. \end{lemma} \begin{proof} Each vertex of degree 2 contributes 2 to the left-hand side of the Equation~\ref{eq:curvature} in Lemma~\ref{le:curvature}. Likewise each vertex of degree 3 contributes 1, and vertices of degree greater than 4 contribute -1 or less. As all internal vertices have degree at least 6 by Lemmas~\ref{minimal} and~\ref{d6}, Equation~\ref{eq:curvature} yields the desired result. \end{proof} \begin{lemma} \label{45} Words of length 4 or 5 which define the identity in $U(P)$ are reducible. \end{lemma} \begin{proof} Let $w$ be such a word and apply Lemma~\ref{galleries} to a minimal diagram for $w$. If $\delta_2\ge 2$, then some boundary vertex in the interior of $w$ has degree 2, and we are done. The only other possibility is $\abs w = 5$, $\delta_2=1$, and $\delta_3=4$. But it is easy to see that there is no diagram with these parameters. \end{proof} \begin{theorem}\label{A45} If $P$ is a pree satisfying Axioms A(4) and A(5), then $P$ embeds in $U(P)$, and the multiplication in $P$ is induced by the multiplication in $U(P)$. \end{theorem} \begin{proof} The theorem is proved by the same small cancellation argument used in~\cite{GS1}. Let $w$ be a word of length at least two over $P$ which defines the identity in $U(P)$ and consider a diagram $D$ of minimum area for $w$. We must show that if $w=ab$, then $b=a^{-1}$; and if $w=abc$, then $ab=c^{-1}$ in $P$. By Lemmas~\ref{minimal} and~\ref{d6} we may assume that $w=abc$ and all internal vertices of $D$ have degree at least 6. By Lemma~\ref{galleries} there are either three boundary vertices of degree 2 or at least 4 boundary vertices. It follows that $D$ is a triangle. \end{proof} The proof of the next theorem is given in a sequence of lemmas. It is useful to keep in mind the following example $$ P=\set{(0,0),(0,1), (0,-1), (1, 0), (-1,0), (1,1), (-1,-1) } $$ with $U(P)=Z\times Z$. The partial multiplication in $P$ is inherited from the usual addition in $Z\times Z$. \begin{theorem}If $P$ is a finite pree satisfying Axioms A(4) and A(5), then $U(P)$ is biautomatic. \end{theorem} First we observe that Lemma~\ref{galleries} implies that in any diagram $D$ of minimum area there must be a number of intervals along the boundary consisting of two vertices of degree 2 or 3 either adjacent to each other or separated by vertices of degree 4. As in~\cite{GS1} we refer to these intervals as galleries. See Figure~\ref{gallery}. \begin{figure} \caption{Some galleries. 
\label{gallery} \label{gallery} \end{figure} Note that some of the vertices in this figure might be identified in $D$. In other words there is a morphism of diagrams from the gallery as illustrated to $D$. Under this morphism the labeled vertices have distinct images and their degrees are preserved. The minimum number of galleries varies as in Table~\ref{g} depending on the value of $\delta_2$. \begin{table} \begin{tabular}{ccc} $\delta_2$ & $\delta_3$ & Galleries\\ \hline $\ge 3$ &$\ge 0$& 3 \\ 2 & $\ge 2$ & 4 \\ 1 & $\ge 4$ & 5 \\ 0 & $\ge 6$ & 6 \\ \end{tabular} \caption{Effect of $\delta_2$ on $\delta_3$ and the minimum number of galleries. \label{g}} \end{table} Now we augment Definition~\ref{reducible}. \begin{definition} A word $w=a_1\cdots a_n$ over $P$ is irreducible if for all $i$ the product $a_ia_{i+1}$ is not defined in $P$, and it is strongly irreducible if in addition it cannot be shortened by attaching to it a gallery of type $34^k3$ as in Figure~\ref{gallery} and replacing $abc$ by $gh$ or $abcdef$ by $ghijk$ etc. Reductions of this latter type are called strong reductions. \end{definition} Both reductions and strong reductions shorten a word's length by 1. There are also reductions corresponding to other types of galleries. Reductions corresponding to galleries of type $24^k3$ shorten the length of a word by 2. Since galleries are diagrams over $P$, it is clear that the above reductions do not change the element of $U(P)$ represented by a word, over $P$. Thus every word can be reduced to a strongly irreducible word representing the same group element. \begin{lemma}\label{geodesic} A word $w$ over $P$ is the label of a geodesic in $U(P)$ if and only if it is strongly reduced and not the word $1$. Further the set of strongly irreducible words is a regular language. \end{lemma} \begin{proof} Clearly the set of irreducible words not equal to $1$ is regular. Further it is straightforward to construct a finite automaton over the alphabet $P\times P \cup P\times \set{\$}$ which accepts a pair of words $(w,v\$)$ if and only if $w$ and $v$ are words over $P$ and $v$ is obtained from $w$ from a single reduction corresponding to attaching a gallery to $w$. Here $\$$ is a padding symbol not in $P$. The triangles of $P$ can serve as the vertices of a suitable automaton. It follows by standard techniques that the set of all words over $P$ which are strongly reducible forms a regular language. Hence its complement is regular and the intersection of the complement with the irreducible words not equal to $1$ is regular too. Thus the second assertion of the lemma holds. Clearly geodesics are strongly irreducible lest there be a shorter word denoting the same group element. Suppose that $w$ is strongly irreducible but not geodesic. In particular $w \ne aa^{-1}$. Also $\abs w\ne 1$ by Theorem~\ref{A45}, so $\abs w \ge 3$. By our hypothesis there must be a shorter word $v$ representing the same group element as $w$. Thus there will be a diagram $D$ of minimum area with boundary label $wv^{-1}$. Pick $v$ so that $D$ has smallest possible area. Consider the possibilities in Table~\ref{g}. There cannot be 3 vertices of degree 2, because then at least one of them would be in the interior of $w$ or $v$ contradicting the fact that both are strongly irreducible. Likewise there can be no gallery all of whose vertices of degree 3 or 4 lie in the interior of $w$ or in the interior of $v$. Thus there are 4 galleries and 2 vertices of degree 2. 
The only way they all fit into $D$ is if $\abs v \ge 1$, and the two boundary vertices separating the boundary segments with labels $w$ and $v$ are of degree 2 and are each part of two galleries. Each gallery has a vertex of degree 2 or 3 at its other end. Since these vertices are in the interior of $w$ or $v$, they must be of degree 3. In particular $\abs v \ge 2$. Let $p$ and $q$ be the vertices separating $w$ and $v$ so that the boundary of $D$ consists of a path with label $w$ from $p$ to $q$ and a path with label $v$ also from $p$ to $q$. Let $w=a_1a_2\cdots a_m$, and $v=b_1b_2\cdots b_n$, and denote by $r$ the boundary vertex between $a_1$ and $a_2$. By the previous paragraph there is a gallery of type $24^k3$ attached to the boundary segment with label $a_1^{-1} b_1\cdots b_{k+2}$. This gallery affords a reduction of $a_1^{-1} b_1 \cdots b_{k+2}$ to $u=c_1\cdots c_{k+1}$. It follows that the path from $p$ to $q$ with label $a_1c_1\cdots c_{k+1}b_{k+3}\cdots b_n$ is a geodesic. If this path is $w$, then we are done. Otherwise there are paths from $r$ to $q$ with labels $a_2\cdots a_m$ and $c_1\cdots c_{k+1}b_{k+3}\cdots b_n$. The first label is clearly strongly irreducible, and the second is a geodesic. By induction on length, $m-1=n-1$. \end{proof} In order to show that $U(P)$ is biautomatic, we must find a set $L$ of words over $P$ which maps onto $U(P)$ and such that for some constant $K$ two paths which begin and end a distance at most one apart and which have labels in $L$, $K$-synchronously fellow travel. $L$ will consist of some geodesics, but not all of them. At this point in~\cite{GS1} use is made of the C(3)-T(6) small cancellation hypothesis, which need not hold in our situation. We must proceed differently. \begin{figure} \caption{Defining $L$}\label{maximal} \end{figure} \begin{definition} \label{combing} Let $L$ be the set of strongly reduced words $w=a_1\cdots a_n$ over $P$ with the property that whenever there is a valid diagram of the type illustrated in Figure~\ref{maximal} with $k\ge 3$, then the product $bc$ is defined. \end{definition} \begin{lemma} $L$ is a regular set which maps onto $U(P)$. \end{lemma} \begin{proof} The set of words which admit a valid diagram as in Figure~\ref{maximal} such that $bc$ is not defined is clearly regular. It follows from this observation and from Lemma~\ref{geodesic} that $L$ is regular. To show that $L$ maps onto $U(P)$ let $g$ be any element of $U(P)$ and pick a geodesic $w=a_1\cdots a_n$ from $1$ to $g$ with the property that the number of words $ef$ representing the same group element as $a_1a_2$ is maximal, and subject to that the number representing the same group element as $a_3a_4$ is maximal, and so forth. Suppose $w$ admits a diagram as in Figure~\ref{maximal} for which $bc$ is not defined. Let $ef$ be a word representing the same group element as $a_{2i+1}a_{2i+2}$. As $efc^{-1} b^{-1} a^{-1}$ represents the identity in $U(P)$, it is reducible by Lemma~\ref{45}. On the other hand $b^{-1} a^{-1}$ and $ef$ are subwords of geodesics and so are irreducible. Further $bc$ is irreducible by assumption. Thus $fc^{-1}=f'$ for some $f'\in P$. We conclude that $ef'$ represents the same group element as $ab$. But no $ef'$ obtained as above can have $e=a$ and $f'=b$. For if so, $1=efc^{-1} b^{-1} a^{-1} = afc^{-1} b^{-1} a^{-1}$ would imply that $bc=f$ in $P$. Thus the geodesic obtained by substituting $ab\cdots d$ for $a_{2i+1}\cdots a_{2i+k}$ in $w$ contradicts the choice of $w$.
\end{proof} It remains to show that for some constant $K$ two paths which begin and end a distance at most one apart and which have labels in $L$, $K$-synchronously fellow travel. Suppose $w = a_1\cdots a_n$ and $v = b_1\cdots b_m$ are labels of two such paths. Let $g_i$ be the group element reached by $a_1\cdots a_i$, and define $h_i$ likewise with respect to $v$. We claim that for each $i$ there exists a $j$ such that $\rho(g_i,h_j)$, the distance in $U(P)$ between $g_i$ and $h_j$, is at most 2; and likewise with $i$ and $j$ reversed. Since $w$ and $v$ are both geodesics beginning a distance at most one apart, it follows by a straightforward argument that $\abs {i-j}\le 3$; and hence that $K=5$ suffices. We use induction on $m+n$. If $m$ and $n$ are both at most 3, then the desired conclusion is immediate. Thus we assume $m\ge 4$ or $n\ge 4$. Since $w$ and $v$ are geodesics, it follows that $m\ge 2$ and $n\ge 2$. Because of the way $L$ is defined, it suffices to show that $\rho(g_2,h_2)\le 1$. By hypothesis there are letters $a,b\in P$ such that $awb^{-1} v^{-1}$ represents the identity in $U(P)$. Note that $a=1$ and $b=1$ are possible. Consider a corresponding diagram $D$ of minimum area with vertices labeled as in Figure~\ref{rectangle}. \begin{figure} \caption{Two geodesics}\label{rectangle} \end{figure} \begin{figure} \caption{The case $\delta_2=1$.}\label{D2333} \end{figure} If $g_n$ is joined to $h_{m-1}$ or $h_m$ to $g_{n-1}$ in $D$, then we are done by induction on $m+n$. Thus we may assume that only $g_0$ or $h_0$ can have degree 2. Hence either $\delta_2=1$ and there are at least 5 galleries, or $\delta_2=0$ and there are at least 6 galleries. As the vertices supporting a gallery cannot all lie on any one side, 6 galleries is the maximum there can be. Consequently the inequality in Lemma~\ref{galleries} is an equality, and all internal vertices must have degree 6. Suppose $\delta_2=1$. By symmetry we may assume $g_0$ has degree 2. Because there are at least 5 galleries, none attached to a single side, it must be that the other corner vertices all have degree 3. The situation is illustrated in Figure~\ref{D2333}. Now consider the subdiagram $D'$ with corners $g_1$, $g_n$, $h_1$, $h_m$. Of course $D'$ is a diagram of minimum area for its boundary. If either $g_1$ or $h_1$ has degree 2 with respect to this subdiagram, then there is an edge from $g_2$ to $h_2$, and we are done. The alternative is that the corners of $D'$ have degree 3, there are 6 galleries, and all internal vertices of $D'$ have degree 6. See Figure~\ref{D2333P} where the degree of $g_2$ is 3 or 4 depending on whether or not $p=g_3$ and likewise for $h_2$ and $q$. \begin{figure} \caption{The case $\delta_2=1$ continued.}\label{D2333P} \end{figure} We know that the vertices $h_1,h_2, \ldots$ are part of a gallery in $D'$. That is, in $D'$, $h_1$ and $h_k$ have degree 3 for some $k \ge 2$ and all intervening boundary vertices have degree 4. But now Definition~\ref{combing} implies that we can remove the edge from $h_1$ to $r$ and replace it with an edge from $g_1$ to $h_2$. With this change we obtain a diagram of minimum area with an internal vertex of degree 5 in contradiction to Lemma~\ref{minimal}. It remains to consider the case in which $D$ has $\delta_2=0$. Here we begin with the situation of Figure~\ref{D3333} and use the preceding argument. \begin{figure} \caption{The case $\delta_2=0$.}\label{D3333} \end{figure}
If $q \ne h_2$, then we replace the edge from $h_1$ to $q$ by one from $r$ to $h_2$, thereby decreasing the degree of $q$ to 5 and obtaining a contradiction. We obtain a similar contradiction if $p \ne g_2$. Finally, if $q=h_2$ and $p=g_2$, then we are done. \end{document}
\begin{document} \title{Bijections in de Bruijn Graphs} \begin{abstract} A T-net of order $m$ is a graph with $m$ nodes and $2m$ directed edges, where every node has indegree and outdegree equal to $2$. (A well known example of T-nets are de Bruijn graphs.) Given a T-net $N$ of order $m$, there is the so called "doubling" process that creates a T-net $N^*$ from $N$ with $2m$ nodes and $4m$ edges. Let $\vert X\vert$ denote the number of Eulerian cycles in a graph $X$. It is known that $\vert N^*\vert=2^{m-1}\vert N\vert$. In this paper we present a new proof of this identity. Moreover we prove that $\vert N\vert\leq 2^{m-1}$.\\ Let $\Theta(X)$ denote the set of all Eulerian cycles in a graph $X$ and $S(n)$ the set of all binary sequences of length $n$. Exploiting the new proof we construct a bijection $\Theta(N)\times S(m-1)\rightarrow \Theta(N^*)$, which allows us to solve one of Stanley's open questions: we find a bijection between de Bruijn sequences of order $n$ and $S(2^{n-1})$. \end{abstract} \section{Introduction} In 1894, A. de Rivi\`{e}re formulated a question about existence of circular arrangements of $2^n$ zeros and ones in such a way that every word of length $n$ appears exactly once, \cite{RIVIERE}. Let $B_0(n)$ denote the set of all such arrangements. (we apply the convention that the elements of $B_0(n)$ are binary sequences that start with $n$ zeros). The question was solved in the same year by C. Flye Sainte-Marie, \cite{MARIE}, together with presenting a formula for counting these arrangements: $\vert B_0(n)\vert =2^{2^{n-1}-n}$. However the paper was then forgotten. The topic became well known through the paper of N.G. de Bruijn, who proved the same formula for the size of $B_0(n)$, \cite{DEBRUIJN}. Some time after, the paper of C. Flye Sainte-Marie was rediscovered by Stanley, and it turned out that both proofs were principally the same, \cite{DEBRUIJN2}. The proof uses a relation between $B_0(n)$ and the set of Eulerian cycles in a certain type of T-nets: A T-net $N$ of order $m$ is defined as a graph with $m$ nodes and $2m$ directed edges, where every node has indegree and outdegree equal to $2$ (a T-net is often referred as a balanced digraph with indegree and outdegree of nodes equal to $2$, see for example \cite{STANLEY_AC}). N.G. de Bruijn defined a doubled T-net $N^*$ of $N$. A doubled T-net $N^*$ of $N$ is a T-net such that: \begin{itemize} \item each node of $N^*$ corresponds to an edge of $N$ \item two nodes in $N^*$ are connected by an edge if their corresponding edges in $N$ are incident and the ending node of one edge is the starting node of the second edge. \end{itemize} \begin{remark} We call two edges to be incident if they share at least one common node; the orientation of edges does not matter. \end{remark} As a result $N^*$ has $2m$ nodes and $4m$ edges, see an example on Figure \ref{fg_doubling_simple}. (A doubled T-net of $N$ is known as well as a line graph of $N$, \cite{KISHORE}.) \begin{figure} \caption{A doubling of a de Bruijn graph: $N$ and $N^*$} \label{fg_doubling_simple} \end{figure} Let $\Theta(X)$ be the set of all Eulerian cycles in $X$ and let $\vert X \vert = \vert \Theta(X)\vert $ denote the number of Eulerian cycles in $X$, where $X$ is a graph. It was proved inductively that $\vert N^*\vert=2^{m-1}\vert N\vert$. Moreover N.G. de Bruijn constructed a T-net (nowadays called a "de Bruijn graph") whose Eulerian cycles are in bijection with the elements of $B_0(n)$. 
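To make the doubling construction and the identity $\vert N^*\vert=2^{m-1}\vert N\vert$ concrete, here is a short Python sketch of ours (not part of the original arguments) that builds the doubled T-net of a given edge list and counts Eulerian cycles by exhaustive search on small de Bruijn graphs. Cycles are counted up to cyclic rotation by fixing the first edge, and the simple counter assumes the graph has no parallel edges, which holds for these examples.
\begin{verbatim}
# Doubled T-net (line graph) construction and brute-force Eulerian cycle counts.

def de_bruijn_graph(n):
    """Directed edges of H_n for n >= 2: nodes are binary words of length n-1."""
    nodes = [format(i, "0{}b".format(n - 1)) for i in range(2 ** (n - 1))]
    return [(w, w[1:] + b) for w in nodes for b in "01"]

def double(edges):
    """Doubled T-net N*: one node per edge of N, an edge e->f when e ends where f starts."""
    return [(i, j) for i, (u, v) in enumerate(edges)
                   for j, (x, y) in enumerate(edges) if v == x]

def count_eulerian_cycles(edges):
    """Number of Eulerian cycles (up to rotation), forced to start with edge 0."""
    out = {}
    for idx, (u, v) in enumerate(edges):
        out.setdefault(u, []).append(idx)
    used = [False] * len(edges)

    def walk(node, remaining):
        if remaining == 0:
            return 1 if node == edges[0][0] else 0
        total = 0
        for idx in out.get(node, []):
            if not used[idx]:
                used[idx] = True
                total += walk(edges[idx][1], remaining - 1)
                used[idx] = False
        return total

    used[0] = True
    return walk(edges[0][1], len(edges) - 1)

H2 = de_bruijn_graph(2)
H3 = double(H2)                       # doubling H_2 gives (a copy of) H_3
m2 = len(set(u for u, _ in H2))       # number of nodes of H_2
assert count_eulerian_cycles(H3) == 2 ** (m2 - 1) * count_eulerian_cycles(H2)
H4 = double(H3)
m3 = len(set(u for u, _ in H3))
assert count_eulerian_cycles(H4) == 2 ** (m3 - 1) * count_eulerian_cycles(H3)
print("identity |N*| = 2^(m-1)|N| verified for N = H_2 and N = H_3")
\end{verbatim}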
A de Bruijn graph $H_n$ of order $n$ is a T-net of order $2^{n-1}$, whose nodes correspond to the binary words of length $n-1$. A node $s_1s_2\dots s_{n-1}$ has two outgoing edges to the nodes $s_2\dots s_{n-1}0$ and $s_2\dots s_{n-1}1$. It follows that a node $s_1s_2\dots s_{n-1}$ has two incoming edges from nodes $0s_1s_2\dots s_{n-2}$ and $1s_1s_2\dots s_{n-2}$. Given an edge $e$ going from the node $s_1s_2\dots s_{n-1}$ to the node $s_2\dots s_{n-1}s_n$, the edge $e$ corresponds to the word $s_1s_2\dots s_{n-1}s_n$ of length $n$, which implies the natural bijection between Eulerian cycles $\Theta(H_n)$ and binary sequences $B_0(n)$, \cite{DEBRUIJN}. That is why we will write $B_0(n)\equiv \Theta(H_n)$. De Bruijn graphs have found several interesting applications, among others in networking, \cite{BAKER}, and bioinformatics, \cite{PAVEL}, \cite{ZERBINO}. The important property of de Bruijn graphs is that a doubled T-net of a de Bruijn graph of order $n$ is a de Bruijn graph of order $n+1$, see an example in Figure \ref{fg_doubling_simple} of the de Bruijn graph of order $3$ ($H_3=N$) and of order $4$ ($H_4=N^*)$. Since $\vert B_0(2)\vert=1$ ($B_0(2)=\{0011\}$) it has been derived that $\vert B_0(n)\vert =2^{2^{n-1}-n}$, \cite{BAKER}, \cite{DEBRUIJN}, \cite{DEBRUIJN2}. There is also another proof using matrix representation of graphs, \cite{STANLEY_AC}. Yet it was an open question of Stanley, \cite{STANLEY_OP}, \cite{STANLEY_AC}, if there was a bijective proof: \begin{quote} Let $B(n)$ be the set of all binary de Bruijn sequences of order $n$, and let $S(n)$ be the set of all binary sequences of length $n$. Find an explicit bijection $B(n) \times B(n)\rightarrow S(2^n)$. \end{quote} This open question was solved in 2009, \cite{KISHORE}, \cite{STANLEY_AC}. \begin{remark} In the open question of Stanley, $B(n)$ denotes the de Bruijn sequences that do not necessarily start with $n$ zeros as in the case of $B_0$. $B(n)$ contains all $2^n$ ``circular rotations'' of all sequences from $B_0(n)$; formally, given $s=s_1s_2\dots s_{2^n}\in B_0(n)$, then $s_is_{i+1}\dots s_{2^n}s_1s_2\dots s_{i-1}\in B(n)$, where $1\leq i \leq 2^n$. It is easy to see that all these $2^n$ ``circular rotations'' are distinct binary sequences. It follows that $\vert B(n)\vert=2^n\vert B_0(n)\vert$. Hence it is enough to find a bijection $B_0(n)\rightarrow S(2^{n-1}-n)$ to solve this open question. \end{remark} In this paper we present a new proof of the identity $\vert N^*\vert=2^{m-1}\vert N\vert$, which allows us to prove that $\vert N\vert\leq 2^{m-1}$, to construct a bijection $\nu : \Theta(N)\times S(m-1)\rightarrow \Theta(N^*)$, and consequently to present another solution to Stanley's open question: We define $\rho_2(\epsilon)=0011$ (recall that $B_0(2)=\{0011\}$) and let $\rho_{n} : S(2^{n-1}-n) \rightarrow B_0(n)$ be a map defined as $\rho_n(s)=\nu(\rho_{n-1}(\dot s),\ddot s)$, where $\epsilon$ is the binary sequence of length $0$, $n>2$, $s=\dot s\ddot s$, $\dot s \in S(2^{n-2}-(n-1))$, and $\ddot s \in S(2^{n-2}-1)$. \begin{proposition} The map $\rho_n$ is a bijection. \end{proposition} \begin{proof} Note that $\dot s \in S(2^{n-2}-(n-1))=S(2^{(n-1)-1}-(n-1))$, which is exactly the domain of $\rho_{n-1}$; thus $\dot s$ is a valid input for the function $\rho_{n-1}$ and $\rho_{n-1}(\dot s)\in B_0(n-1)\equiv \Theta(H_{n-1})$. In addition, $H_{n-1}$ has $m=2^{n-2}$ nodes and $\ddot s\in S(2^{n-2}-1)$ has length $m-1$, hence it makes sense to define $\rho_n(s)=\nu(\rho_{n-1}(\dot s),\ddot s)$.
Because $\nu$ is a bijection, see Proposition \ref{pr_bij_doublig_tnet}, it is easy to see by induction on $n$ that $\rho_n$ is a bijection as well. \end{proof} \begin{remark} Less formally said, the bijection $\rho_n(s)$ splits the binary sequence $s$ into two subsequences $\dot s$ and $\ddot s$. Then the bijection $\rho_{n-1}$ is applied to $\dot s$, the result of which is a de Bruijn sequence $p$ from $B_0(n-1)$ (and thus an Eulerian cycle in $H_{n-1}$). Then the bijection $\nu$ is applied to $p$ and $\ddot s$. The result is a de Bruijn sequence from $B_0(n)$. \end{remark} \section{A double and quadruple of a T-net} \noindent Let $Y$ be a set of graphs; we define $\Theta(Y)=\bigcup_{X\in Y}\Theta(X)$ (the union of sets of Eulerian cycles in graphs from $Y$) and $\vert Y\vert =\sum_{X\in Y}\vert X\vert$ (the sum of the numbers of Eulerian cycles). Let $U(X)$ denote the set of nodes of a graph $X$. \begin{figure} \caption{A node replacing by $4$ nodes and $4$ edges} \label{fg_node_replace} \end{figure} We present a new way of constructing a doubled T-net, which will enable us to show a new non-inductive proof of the identity $\vert N^{*}\vert=2^{m-1}\vert N\vert$ and to prove $\vert N\vert\leq 2^{m-1}$. \begin{figure} \caption{A removing black edges and fusion of nodes} \label{fg_node_fusion} \end{figure} We introduce a quadruple of $N$ denoted by $\hat N$: The quadruple $\hat N$ arises from $N$ by replacing every node $a\in U(N)$ by 4 nodes and 4 edges as depicted on the Figure \ref{fg_node_replace}. Let $\Gamma(a)$ denote the set of these 4 nodes and $\Pi(a)$ denote the set of these 4 edges that have replaced the node $a$. The edges from $\Pi(a)$ are in blue color on the figures and we will distinguish blue and black edges as follows: In a graph containing at least one blue edge, we define an Eulerian cycle to be a cycle that traverses all blue edges exactly once and all black edges exactly twice, see Figure \ref{fg_doubling}. \begin{figure} \caption{An example of $N$, $\hat N$, and $N^*$} \label{fg_doubling} \end{figure} \begin{remark} Note that a quadruple $\hat N$ is not a T-net, since the indegree and outdegree are not always equal to $2$. But since the black edges can be traversed twice, we can consider them as parallel edges (two edges that are incident to the same two nodes). Then it would be possible to regard $\hat N$ as a T-net. \end{remark} By removing black edges and "fusing" their incident nodes into one node in $\hat N$ (as depicted on Figure \ref{fg_node_fusion}), we obtain a doubled T-net $N^{*}$ of $N$. And the reverse process yields $\hat N$ from $N^*$: turn all edges from black to blue and then replace every node by two nodes connected by one black edge, where one node has two outgoing blue edges and one incoming black edge and the second node two incoming edges and one outgoing black edge. Thus we have a natural bijection between Eulerian cycles in $\hat N$ and $N^*$. See an example on Figure \ref{fg_doubling}. \begin{remark} If all edges in a graph are in one color, then it makes no difference if they are black or blue. An Eulerian cycle traverses in that case just once every edge. \end{remark} \begin{figure} \caption{Edges replacement. Case I} \label{fg_edges_repl_a} \end{figure} \begin{figure} \caption{Edges replacement. Case II} \label{fg_edges_repl_b} \end{figure} Fix an order on nodes $U(N)$. As a result we have a bijection $\phi: \{1,2,\dots ,m\} \rightarrow U(N)$. 
Given $i\in \{1,2,\dots ,m\}$, let us denote the edges from $\Pi(\phi(i))$ by $t,u,v,z$, in such a way that $t$ and $v$ are not incident edges; it follows that $u$ and $z$ are not incident as well. Let $W_0=\{\hat N\}$; we define $W_i=\{\dot w,\ddot w\mid w\in W_{i-1}\}$, where $i\in\{1,2,\dots ,m\}$ and $\dot w$, $\ddot w$ are defined as follows: We construct the graph $\dot w$ by removing the edges $t,v$ from $w$ and by changing the color of $u,z$ from blue to black (thus allowing the edges $u,z$ to be traversed twice). Similarly we construct $\ddot w$ from $w$ by removing the edges $u,z$ and by changing the color of $t,v$ from blue to black, where $t,u,v,z\in \Pi(\phi(i))$. The crucial observation is: \begin{proposition} \label{pr_half_of_euler_paths} Let $w\in W_i$, where $i\in\{0,1,\dots ,m-2\}$. Then $\vert w \vert= 2\vert \dot w\vert + 2\vert \ddot w\vert$. \end{proposition} \begin{remark} The following proof is almost identical to the one in \cite{DEBRUIJN}, where the author constructed two graphs $d_1, d_2$ from a graph $d$ and proved that $\vert d\vert = 2\vert d_1\vert + 2\vert d_2\vert$. \end{remark} \begin{proof} Given an Eulerian cycle $g$ in $w$, split $g$ into four paths $A,B,C,D$ and the edges $t,u,v,z \in \Pi(\phi(i))$. We will count the number of Eulerian cycles in $\dot w, \ddot w$ that are composed from all 4 paths $A,B,C,D$ and that differ only in their connections on the edges $t,u,v,z$. Exploiting N.G. de Bruijn's notation, all possible cases are depicted in Figures \ref{fg_edges_repl_a} and \ref{fg_edges_repl_b}. \begin{itemize} \item In case I, the graph $w$ contains $4$ Eulerian cycles: AtBzDuCv, AtCuBzDv, AtCvDuBz, AzDuBtCv; whereas the graphs $\dot w$ and $\ddot w$ together have $2$ Eulerian cycles: AzDuCuBz and AtBtCvDv. Thus $\vert w\vert=4$ and $\vert \dot w\vert$+$\vert \ddot w\vert=2$. \item In case II, the graph $w$ contains $4$ Eulerian cycles: AtCuDvBz, AtDuCvBz, AzBtCuDv, AzBtDuCv; whereas the graph $\ddot w$ has $2$ Eulerian cycles: AtCvBtDv, AtDvBtCv. The graph $\dot w$ is disconnected and therefore $\dot w$ has $0$ Eulerian cycles. Thus $\vert w\vert=4$ and $\vert \dot w\vert$+$\vert \ddot w\vert=2$. In case II, it is possible that $A=B$ or $C=D$. In such a case, $\vert w\vert=2$ and $\vert \dot w\vert$+$\vert \ddot w\vert=1$. \end{itemize} This ends the proof. \end{proof} \noindent We define $\Delta = \{w \mid w\in W_m \mbox{ and $w$ is connected}\}$. Figure \ref{fg_tree} shows an example of all iterations and the construction of the graphs in $\Delta$ from the graph $\hat N$, where $N$ is a de Bruijn graph of order $3$. The order of the nodes of $N$ is $00<10<01<11$. Most of the disconnected graphs are omitted. \begin{figure} \caption{Constructing the set $\Delta$ from $\hat N$} \label{fg_tree} \end{figure} \begin{remark} In the previous proof in case II, it can happen that $A=B$ or $C=D$. Note that in the iteration step $i=m$ (when constructing $W_m$ from $W_{m-1}$) it holds that $A=B$ and $C=D$, because all nodes have indegree and outdegree equal to $1$ with the exception of the nodes in $\Gamma(\phi(m))$. Hence $\vert W_{m-1}\vert = \vert W_{m}\vert$. It follows as well that every connected graph $w\in W_{m-1}$ has exactly one Eulerian cycle. That is why in Proposition \ref{pr_half_of_euler_paths} we consider $i\in\{0,1,\dots ,m-2\}$. \end{remark} \begin{corollary} \label{cor_rel_iterated_sets} $\vert W_{i-1}\vert = 2\vert W_i\vert$ and $\vert W_{m-1}\vert = \vert W_m\vert$, where $i\in \{1,2,\dots ,m-1\}$.
\end{corollary} \begin{proposition} $2^{m-1}\vert \Delta \vert=\vert N^{*}\vert = \vert \hat N \vert$. \end{proposition} \begin{proof} The only graphs in $W_m$ that contain an Eulerian cycle are the connected ones, that is, the graphs from $\Delta$. On the other hand, every graph $w\in \Delta$ contains exactly one Eulerian cycle, since every node has indegree and outdegree equal to $1$. The proposition then follows from Corollary \ref{cor_rel_iterated_sets}, because $\vert \hat N\vert=\vert W_0\vert$ (recall that $W_0=\{\hat N\}$). \end{proof} \begin{proposition} \label{pr_bij_n_delta} There are bijections between $\Theta(N)$ and $\Theta(\Delta)$, and between $\Theta(W_{m-1})$ and $\Theta(W_{m})$. \end{proposition} \begin{proof} Given a connected graph $w\in W_{m-1}$, just one of the graphs $\dot w$ and $\ddot w$ is connected. Let us say it is $\ddot w$, the graph that keeps the edges $t,v$; the case when $\dot w$ is the connected one is analogous. Recall that there is exactly one Eulerian cycle $AtCuCvAz$ in $w$ ($A=B$ and $C=D$, see Figure \ref{fg_edges_repl_b}). Then $AtCv$ is the only Eulerian cycle in $\ddot w\in \Delta \subset W_{m}$. This gives a bijection between $\Theta(W_{m-1})$ and $\Theta(W_{m})=\Theta(\Delta)$. Let $\bar p = p_1p_2\dots p_{4m}$ be the only Eulerian cycle in $w \in \Delta$, where the $p_i$ are edges of $w$. Without loss of generality suppose that $p_1\in \Pi(a)$ for some $a\in U(N)$ (that is, $p_1$ is a blue edge in $\hat N$). It follows that all $p_i$ with $i$ odd are blue edges in $\hat N$ and all $p_i$ with $i$ even are edges from $N$ (they are black edges in $\hat N$); in consequence the path $p=p_2p_4\dots p_{4m}$ is an Eulerian cycle in $N$. The conversion of the Eulerian cycle in $w$ into the Eulerian cycle $p$ in $N$ is schematically depicted in Figure \ref{fg_node_replacement_to_simple}. Thus we have a bijection between $\Theta(N)$ and $\Theta(\Delta)$. This ends the proof. \end{proof} \begin{figure} \caption{Converting an Eulerian cycle from $\Delta$ into an Eulerian cycle in $N$} \label{fg_node_replacement_to_simple} \end{figure} \begin{corollary} Let $N$ be a T-net of order $m$. Then $\vert N\vert\leq 2^{m-1}$. \end{corollary} \begin{proof} The set $W_{m-1}$ contains $2^{m-1}$ graphs, and every connected graph $w\in W_{m-1}$ has exactly one Eulerian cycle; hence $\vert W_{m-1}\vert\leq 2^{m-1}$. The result then follows from $\vert W_{m-1}\vert=\vert W_{m}\vert$, $\Delta\subseteq W_{m}$, and the bijection between $\Theta(N)$ and $\Theta(\Delta)$. \end{proof} \section{Bijection of binary sequences and de Bruijn sequences} Given $i\in \{1,2,\dots ,m\}$, in the previous section we agreed that the edges from $\Pi(\phi(i))$ are denoted by $t,u,v,z$, in such a way that $t$ and $v$ are not incident edges (and consequently that $u$ and $z$ are not incident as well). For this section we need these edges to be ordered, hence let us suppose that $t<u<v<z$. This will allow us to identify the edges "uniquely". Let us look again at Figure \ref{fg_edges_repl_a}. We can identify the path $A$ as the path between the incident nodes of the edge $z$ that does not contain the edges $t,u,v$. In a similar way we can identify $B,C,D$. In Figure \ref{fg_edges_repl_b} we cannot distinguish $A$ from $B$ and $C$ from $D$ only by the edges $t,u,v,z$. If $A\not =B$, then let $\delta$ be the first node where $A$ and $B$ differ. The node $\delta$ has two outgoing blue edges, let us say they are $t,z$. We use this difference to distinguish $A$ and $B$. Let us define $A$ to be the path that follows the edge $t$ from $\delta$ and $B$ the path that follows the edge $z$ from $\delta$. Again, in a similar way we can distinguish $C$ from $D$.
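To make the identification of the paths $A,B,C,D$ concrete, here is a small Python sketch (an illustration under our own conventions, not the procedure used in the sequel): it cuts an Eulerian cycle, given as a cyclic list of edge labels, at the four distinguished edges and returns the arc that follows each of them; the helper name \texttt{split} is ours.
\begin{verbatim}
# Illustration only: cut an Eulerian cycle (a cyclic list of edge labels) at
# the four distinguished edges t, u, v, z and return the arc after each one.
def split(cycle, special):
    n = len(cycle)
    pos = sorted(i for i, edge in enumerate(cycle) if edge in special)
    arcs = {}
    for a, b in zip(pos, pos[1:] + [pos[0] + n]):
        arcs[cycle[a]] = [cycle[j % n] for j in range(a + 1, b)]
    return arcs

# The first case-I cycle from the case analysis above:
print(split(list("AtBzDuCv"), set("tuvz")))
# {'t': ['B'], 'z': ['D'], 'u': ['C'], 'v': ['A']}
\end{verbatim}
In this toy example every arc is a single label; in general the arcs are the paths $A,B,C,D$.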
Hence let us suppose we have an "algorithm" that splits an Eulerian cycle $p\in \Theta(W_i)$ into the paths $A,B,C,D$ and edges $t,u,v,z \in \Pi(\phi(i))$ for given $N,i$ (recall that the nodes of $N$ are ordered and thus $i$ determines the node $\phi(i)\in U(N)$). We introduce the function $\omega_{N,i}: (p,\alpha)\rightarrow \Theta(W_{i-1})$, where \begin{itemize} \item $N$ is a T-net of order $m$ \item $i\in \{1,\dots,m-1\}$ \item $p\in \Theta(W_i)$ \item $\alpha\in \{0,1\}$ \end{itemize} \begin{remark} Less formally said, the function $\omega$ transform an Eulerian cycle $p\in \Theta(W_{i})$ into an Eulerian cycle $\bar p\in \Theta(W_{i-1})$ for given $N,i,\alpha$. \end{remark} \noindent Given $N$ and $i$, we define for the case I (Figure \ref{fg_edges_repl_a}):\\ $\omega_{N,i}(AzDuCuBz,0)=AtBzDuCv$\\ $\omega_{N,i}(AzDuCuBz,1)=AtCuBzDv$\\ $\omega_{N,i}(AtBtCvDv,0)=AtCvDuBz$\\ $\omega_{N,i}(AtBtCvDv,1)=AzDuBtCv$\\ For the case II (Figure \ref{fg_edges_repl_b}), where $A\not=B$ and $C\not =D$:\\ $\omega_{N,i}(AtCvBtDv,0)=AtCuDvBz$\\ $\omega_{N,i}(AtCvBtDv,1)=AzBtCuDv$\\ $\omega_{N,i}(AtDvBtCv,0)=AtDuCvBz$\\ $\omega_{N,i}(AtDvBtCv,1)=AzBtDuCv$\\ For the case II where $A=B$ and $C\not =D$:\\ $\omega_{N,i}(AtCvAtDv,0)=AtCuDvAz$\\ $\omega_{N,i}(AtCvAtDv,1)=AtDuCvAz$\\ For the case II where $A\not =B$ and $C=D$:\\ $\omega_{N,i}(AtCvBtCv,0)=AtCuCvBz$\\ $\omega_{N,i}(AtCvBtCv,1)=AzBtCuCv$\\ Now, when we fixed an order on edges at the beginning of this section, it is necessary to distinguish another possibility in the case II, namely the paths $A,B$ can be paths between incident nodes of the edge $t$ that do not contain edges $u,v,z$ and $C,D$ can be paths between incident nodes of the edge $v$ that do not contain edges $t,u,z$. Obviously, in this case it is possible to define $\omega$ in a similar way. To save some space we do not present an explicit definition. \begin{remark} The previous definition of $\omega_{N,i}(p,\alpha)$ can be modified with regard to the reader's needs, including the way of recognition of paths $A,B,C,D$. It matters only that $\omega_{N,i}$ is injective. Our definition is just one possible way. \end{remark} \begin{remark} To understand correctly the definition of $\omega$, recall that when comparing two Eulerian cycles, it does not matter which edge is written as the first one. For example the paths $AtCuDvAz$ and $AzAtCuDv$ are an identical Eulerian cycle. \end{remark} Let $S(n)$ denote the set of all binary sequences of length $n$. \begin{proposition} \label{pr_bij_doublig_tnet} Let $N$ be a T-net of order $m$, $s=s_1s_2\dots s_{m-1} \in S(m-1)$ be a binary sequence, and $p\in \Theta(N)$. We define $p=p^{m-1}$ and $p^{i-1}=\omega_{N,i}(p^{i},s_i)$, where $i \in \{1,2,\dots ,m-1\}$. Then the map $\nu : \Theta(N)\times S(m-1)\rightarrow \Theta(N^*)$ defined as $\nu(p,s)=p^{0}$ is a bijection. \end{proposition} \begin{proof} Recall that there is a bijection between $\Theta(N)$ and $\Theta(W_{m-1})$, see Proposition \ref{pr_bij_n_delta}; hence we can suppose that $p\in W_{m-1}$.\\ The definition of the function $\omega$ implies that $\omega_{N,i}(p, \alpha)=\omega_{N,i}(\bar p, \bar \alpha)$ if and only if $p=\bar p$ and $\alpha=\bar \alpha$. It follows that $\nu$ is injective. In addition we proved that $\vert N\vert=\vert W_{m-1}\vert$ and that $2^{m-1}\vert N\vert=\vert \hat N\vert=\vert W_0\vert$. In consequence $\nu$ is surjective and thus bijective. \end{proof} \end{document}
\begin{document} \newcommand{\mathcal{J}}{\mathcal{J}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{I}}{\mathcal{I}} \newcommand{\mathcal{R}}{\mathcal{R}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{T}}{\mathcal{T}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\mathcal{Q}}{\mathcal{Q}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{F}F}{\mathbb{F}} \newcommand{\mathbb{H}}{\mathbb{H}} \newcommand{\mathcal{C}C}{\mathbb{C}} \newcommand{\mathbb{S}}{\mathbb{S}} \newcommand{\mathcal{T}S}{\widetilde{\mathbb{S}}} \newcommand{\mathcal{R}R}{\mathbb{R}} \newcommand{\mathbb{O}}{\mathbb{O}} \newcommand{\mathcal{T}O}{\widetilde{\mathbb{O}}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathcal{A}AA}{\mathbb{A}} \newcommand{\overline}{\overlineerline} \newcommand{\widehat}{\widehat} \newcommand{\epsilon}{\epsilonpsilon} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{examples}[theorem]{Examples} \newtheorem{question}[theorem]{Question} \title[Locally complex and Cayley-Dickson algebras] {On locally complex algebras and low-dimensional Cayley-Dickson algebras } \thanks{ 2010 {\epsilonm Math. Subj. Class.} 17A35, 17A45, 17A70, 17D05.} \thanks{Supported by the Slovenian Research Agency (program No. P1-0288). } \author{Matej Bre\v sar, Peter \v Semrl, \v Spela \v Spenko } \address{Matej Bre\v sar, Faculty of Mathematics and Physics, University of Ljubljana, and Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia} \address{Peter \v Semrl and \v Spela \v Spenko, Faculty of Mathematics and Physics, University of Ljubljana, Slovenia} \epsilonmail{[email protected]} \epsilonmail{[email protected]} \epsilonmail{[email protected] } \begin{abstract}The paper begins with short proofs of classical theorems by Frobenius and (resp.) Zorn on associative and (resp.) alternative real division algebras. These theorems characterize the first three (resp. four) Cayley-Dickson algebras. Then we introduce and study the class of real unital nonassociative algebras in which the subalgebra generated by any nonscalar element is isomorphic to $\mathcal{C}C$. We call them {\epsilonm locally complex algebras}. In particular, we describe all such algebras that have dimension at most $4$. Our main motivation, however, for introducing locally complex algebras is that this concept makes it possible for us to extend Frobenius' and Zorn's theorems in a way that it also involves the fifth Cayley-Dickson algebra, the sedenions. \epsilonnd{abstract} \maketitle \section{Introduction} The real number field $\mathcal{R}R$, the complex number field $\mathcal{C}C$, and the division agebra of real quaternions $\mathbb{H}$ are classical examples of associative real division algebras. In 1878 Frobenius \cite{F} proved that in the finite dimensional context they are also the only examples. Assuming alternativity instead of associativity, there is another example: $\mathbb{O}$, the division algebra of octonions. It turns out that this is the only additional example. This result is attributed to Zorn \cite{Z}. 
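As a quick numerical illustration of these statements (it plays no role in the proofs given in Section 3), the following Python sketch checks on random elements that the quaternions are associative and that every nonzero element has a multiplicative inverse, namely the conjugate divided by the norm; the multiplication rule coded below is the usual Hamilton product, and the helper names are ours.
\begin{verbatim}
# Illustration only: the quaternions form an associative division algebra;
# every nonzero x has the inverse conj(x) / |x|^2 (Hamilton product below).
import random

def qmul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(x):
    a, b, c, d = x
    return (a, -b, -c, -d)

close = lambda p, q: all(abs(s - t) < 1e-9 for s, t in zip(p, q))
rnd = lambda: tuple(random.uniform(-1, 1) for _ in range(4))

x, y, z = rnd(), rnd(), rnd()
assert close(qmul(qmul(x, y), z), qmul(x, qmul(y, z)))   # associativity
n = sum(t * t for t in x)                                # n(x) = x conj(x)
assert close(qmul(x, tuple(t / n for t in qconj(x))), (1, 0, 0, 0))
\end{verbatim}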
In Section 3 we give short and self-contained proofs of these classical theorems by Frobenius and Zorn. Both proofs are based on the same idea. In fact, the proof of Zorn's theorem is a continuation of the proof of Frobenius' theorem. The proofs are constructive, it appears like $\mathbb{H}$ and $\mathbb{O}$ are met "unintentionally". Our proofs of Frobenius' and Zorn's theorems were discovered by accident, when examining the class of real unital algebras with the following property: the subalgebra generated by any element different from a scalar multiple of $1$ is isomorphic to $\mathcal{C}C$. These algebras, which we call {\epsilonm locally complex}, will be first considered in Section \ref{loccom}. In particular, we will classify all locally complex algebras of dimension at most 4. Unlike real division algebras which exist only in dimensions $1,2,4$, and $8$ \cite{BM, Ker}, locally complex algebras exist in abundance in any dimension. However, among alternative (and hence also associative) finite dimensional real algebras, the concepts of division algebras and locally complex algebras coincide. Frobenius' and Zorn's theorems can be therefore equivalently stated so that one replaces "division" by "locally complex" in the formulation. This observation paves the way for continuing in the direction of these two theorems. The algebras $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, and $\mathbb{O}$ are the first four (real) algebras formed in the Cayley-Dickson process. The next one is the $16$-dimensional algebra $\mathbb{S}$ of (real) {\epsilonm sedenions}. It is the first algebra in this process that is neither a division nor an alternative algebra. Although it is therefore somewhat less attractive than its famous predecessors, $\mathbb{S}$ has recently gained a considerable attention. Over the last years it was considered in several papers by algebraists as well as by mathematical physicists \cite{Baez, Biss, BH, CM, ChanDj, Im, Kuwata, Moreno}. To the best of our knowledge, however, there are no results that characterize $\mathbb{S}$ through its abstract algebraic properties. Moreover, one might get an impression when looking at some of these papers that such characterizations are not really expected (for example, see the introduction in \cite{Biss}). One of the goals of this paper is to show that actually they can be established. In Section \ref{Secsuper} we consider locally complex algebras that are simultaneously superalgebras with the property that all their homogeneous elements satisfy the alternativity conditions (see \epsilonqref{alt} below). Our main result says that besides the obvious examples, i.e., $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, $\mathbb{O}$, and $\mathbb{S}$, there are exactly two more algebras having these properties, one in dimension $8$ and another one in dimension $16$. As corollaries we get three characterizations of $\mathbb{S}$: the first one is based on the existence of special elements satisfying a version of the alternativity condition, the second one is based on the properties of zero divisors, and the third one is based on the structure of subalgebras. Let us remark that among the papers listed above, the one by Calderon and Martin \cite{CM} is philosophically the closest one to our paper since it also considers superalgebras. However, the two papers do not seem to have any overlap. On the other hand, in our final results on sedenions we were influenced by the papers \cite{Biss, ChanDj, Moreno}. 
\section{Preliminaries}\label{Prel} The purpose of this section is to recall some definitions and elementary properties of the notions needed in subsequent sections. Let $A$ be a nonassociative algebra over a field. In this paper we will be actually interested only in the case where this field is $\mathcal{R}R$, although some parts, like the following definitions and comments, make sense in a more general setting. Recall that $A$ is said to be a {\epsilonm division algebra} if for every nonzero $a\in A$, $x\mapsto ax$ and $x\mapsto xa$ are bijective maps from $A$ onto $A$. If $A$ is finite dimensional, then this is clearly equivalent to the condition that $A$ has no zero divisors. If $A$ is associative, then it is a division algebra if and only if it is unital (i.e., it has a unity $1$) and every nonzero element in $A$ has a multiplicative inverse. For general algebras this is not true. The real {\epsilonm Cayley-Dickson} algebras $\mathcal{A}AA_n$, $n\ge 0$, are (nonassociative) real algebras with involution $\ast$, defined recursively as follows: $\mathcal{A}AA_0 =\mathcal{R}R$ with trivial involution $a^* =a$, and $\mathcal{A}AA_n$ is the vector space $\mathcal{A}AA_{n-1}\times \mathcal{A}AA_{n-1}$ endowed with multiplication and involution defined by $$ (a,b)(c,d) = (ac - d^*b, da + bc^*), $$ $$ (a,b)^* = (a^*,-b). $$ It is easy to see that $\mathcal{A}AA_n$ is unital (in fact, the unity of $\mathcal{A}AA_n$ is $(1,0)$ where $1$ is the unity of $\mathcal{A}AA_{n-1})$, $x+x^*$ and $xx^*=x^*x$ are scalar multiplies of $1$ for every $x\in \mathcal{A}AA_n$, and $\dim \mathcal{A}AA_n = 2^n$. Next, it is clear that $\mathcal{A}AA_1 = \mathcal{C}C$, and one easily notices that $\mathcal{A}AA_2 =\mathbb{H}$, the {\epsilonm quaternions}. The next algebra in this process is $\mathcal{A}AA_3 =\mathbb{O}$, the {\epsilonm octonions}. For an excellent survey on octonions we refer the reader to \cite{Baez}. Let us record here just a few basic properties of $\mathbb{O}$. First of all, $\mathbb{O}$ is an $8$-dimensional division algebra. Denoting its basis by $\{1,e_1,\ldots,e_7\}$, the multiplication in $\mathbb{O}$ is determined by the following table: \begin{small} \begin{center} \begin{tabular}{|r|r|r|r|r|r|r|r|} \hline & $e_1$ & $e_2$ & $e_3$ & $e_4$ & $e_5$ & $e_6$ & $e_7$\\\hline $e_1$ & $-1$ & $e_3$ & $-e_2$ & $e_5$ & $-e_4$ & $-e_7$ & $e_6$\\\hline $e_2$ & $-e_3$ & $-1$ & $e_1$ & $e_6$ & $e_7$ & $-e_4$ & $-e_5$\\\hline $e_3$ & $e_2$ & $-e_1$ & $-1$ & $e_7$ & $-e_6$ & $e_5$ & $-e_4$\\\hline $e_4$ & $-e_5$ & $-e_6$ & $-e_7$ & $-1$ & $e_1$ & $e_2$ & $e_3$\\\hline $e_5$ & $e_4$ & $-e_7$ & $e_6$ & $-e_1$ & $-1$ & $-e_3$ & $e_2$\\\hline $e_6$ & $e_7$ & $e_4$ & $-e_5$ & $-e_2$ & $e_3$ & $-1$ & $-e_1$\\\hline $ e_7$ & $-e_6$ & $e_5$ & $e_4$ & $-e_3$ & $-e_2$ & $e_1$ & $-1$\\\hline \epsilonnd{tabular} \epsilonnd{center} \epsilonnd{small} Note that the linear span of $1,e_1,e_2,e_3$ is a subalgebra of $\mathbb{O}$ isomorphic to $\mathbb{H}$. It is well known that $\mathbb{O}$ is a division algebra which is not associative. However, it is "almost" associative - namely, it is alternative. Recall that an algebra $A$ is said to be {\epsilonm alternative} if \begin{equation}\label{alt} x^2 y = x (xy)\quad\mbox{and}\quad yx^2 = (yx)x \epsilonnd{equation} holds for all $x,y\in A$. Incidentally, Artin's theorem says that this is equivalent to the condition that any two elements generate an associative subalgebra \cite[p. 36]{ZSSS}. 
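The Cayley-Dickson recursion is straightforward to implement, and small computations of this kind can be used to double-check the table above. The following Python sketch is an illustration only: the basis element $e_i$ is encoded as the $i$-th coordinate vector of length $2^n$ (so that $e_4$ corresponds to the pair $(0,1)$ in the octonions), and the helper names \texttt{cd\_mul} and \texttt{e} are ours.
\begin{verbatim}
# Illustration only: the Cayley-Dickson product on coordinate vectors of
# length 2^n; index 0 is the unity and index i encodes the basis element e_i.
import random

def cd_mul(x, y):
    n = len(x)
    if n == 1:
        return [x[0] * y[0]]
    h = n // 2
    conj = lambda z: [z[0]] + [-t for t in z[1:]]       # (a,b)* = (a*, -b)
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    ac, db = cd_mul(a, c), cd_mul(conj(d), b)           # (a,b)(c,d) =
    da, bc = cd_mul(d, a), cd_mul(b, conj(c))           # (ac - d*b, da + bc*)
    return [p - q for p, q in zip(ac, db)] + [p + q for p, q in zip(da, bc)]

def e(i, n):
    return [1 if j == i else 0 for j in range(2 ** n)]

# A few entries of the octonion table (n = 3):
assert cd_mul(e(1, 3), e(2, 3)) == e(3, 3)              # e1 e2 = e3
assert cd_mul(e(1, 3), e(4, 3)) == e(5, 3)              # e1 e4 = e5
assert cd_mul(e(2, 3), e(4, 3)) == e(6, 3)              # e2 e4 = e6
assert cd_mul(e(3, 3), e(4, 3)) == e(7, 3)              # e3 e4 = e7
# Alternative but not associative:
assert cd_mul(cd_mul(e(1, 3), e(2, 3)), e(4, 3)) == e(7, 3)
assert cd_mul(e(1, 3), cd_mul(e(2, 3), e(4, 3))) == [-t for t in e(7, 3)]
x = [random.randint(-3, 3) for _ in range(8)]
y = [random.randint(-3, 3) for _ in range(8)]
assert cd_mul(cd_mul(x, x), y) == cd_mul(x, cd_mul(x, y))   # x^2 y = x (x y)
\end{verbatim}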
We shall need the identities from \epsilonqref{alt} in their linearized forms: \begin{equation} \label{eq2} (xz+zx)y=x(zy)+z(xy),\,\, y(xz+zx)=(yx)z+(yz)x. \epsilonnd{equation} Let us also record the so-called middle Moufang identity which, as one easily checks (see, e.g., \cite[p. 35]{ZSSS}), holds in every alternative algebra: \begin{equation} \label{eq3} (xy)(zx) = x(yz)x. \epsilonnd{equation} With regard to the right-hand side of \epsilonqref{eq3} it should be pointed out that alternative algebras are flexible, i.e., $x(yx) = (xy)x$ holds (after all, this follows from Artin's theorem), and therefore there is a convention to write $xyx$ instead of $(xy)x$ or $x(yx)$. The next algebra obtained by the Cayley-Dickson process is the $16$-dimensional algebra $\mathcal{A}AA_4=\mathbb{S}$, the {\epsilonm sedenions}. Let $\{1,e_1,\ldots,e_{15}\}$ be a basis of $\mathbb{S}$. This is the multiplication table for $\mathbb{S}$: \begin{center} \begin{tiny} \begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|} \hline & $e_1$ & $e_2$ & $e_3$ & $e_4$ & $e_5$ & $e_6$ & $e_7$ & $e_8$ & $e_9$ & $e_{10}$ & $e_{11}$ & $e_{12}$ & $e_{13}$ & $e_{14}$ & $e_{15}$\\\hline $e_1$ & $-1$ & $e_3$ & $-e_2$ & $e_5$ & $-e_4$ & $-e_7$ & $e_6$ & $e_9$ & $-e_8$ & $-e_{11}$ & $e_{10}$ & $-e_{13}$ & $e_{12}$ & $e_{15}$ & $-e_{14}$\\\hline $e_2$ & $-e_3$ & $-1$ & $e_1$ & $e_6$ & $e_7$ & $-e_4$ & $-e_5$ & $e_{10}$ & $e_{11}$ & $-e_8$ & $-e_9$ & $-e_{14}$ & $-e_{15}$ & $e_{12}$ & $e_{13}$\\\hline $e_3$ & $e_2$ & $-e_1$ & $-1$ & $e_7$ & $-e_6$ & $e_5$ & $-e_4$ & $e_{11}$ & $-e_{10}$ & $e_9$ & $-e_8$ & $-e_{15}$& $e_{14}$ & $-e_{13}$ &$e_{12}$\\\hline $e_4$ &$-e_5$ & $-e_6$ & $-e_7$ & $-1$ & $e_1$ & $e_2$ & $e_3$ & $e_{12}$ & $e_{13}$ & $e_{14}$ &$e_{15}$ & $-e_8$ & $-e_9$ & $-e_{10}$ & $-e_{11}$\\\hline $e_5$ & $e_4$ & $-e_7$ & $e_6$ &$-e_1$ & $-1$ & $-e_3$ & $e_2$ & $e_{13}$ & $-e_{12}$ & $e_{15}$ & $-e_{14}$ & $e_9$ &$-e_8$ & $e_{11}$ & $-e_{10}$\\\hline $e_6$ & $e_7$ & $e_4$ & $-e_5$ & $-e_2$ & $e_3$ & $-1$ &$-e_1$ & $e_{14}$ & $-e_{15}$ & $-e_{12}$ & $e_{13}$ & $e_{10}$ & $-e_{11}$ & $-e_8$ & $e_9$\\\hline $e_7$ & $-e_6$ & $e_5$ & $e_4$ & $-e_3$ & $-e_2$ & $e_1$ & $-1$ & $e_{15}$ &$e_{14}$ & $-e_{13}$ & $-e_{12}$ & $e_{11}$ & $e_{10}$ & $-e_9$ & $-e_8$\\\hline $e_8$ & $-e_9$ &$-e_{10}$ & $-e_{11}$ & $-e_{12}$ & $-e_{13}$ & $-e_{14}$ & $-e_{15}$ & $-1$ & $e_1$ & $e_2$ &$e_3$ & $e_4$ & $e_5$ & $e_6$ & $e_7$\\\hline $e_9$ & $e_8$ & $-e_{11}$ & $e_{10}$ & $-e_{13}$ &$e_{12}$ & $e_{15}$ & $-e_{14}$ & $-e_1$ & $-1$ & $-e_3$ & $e_2$ & $-e_5$ & $e_4$ & $e_7$ & $-e_6$\\\hline $e_{10}$ & $e_{11}$ & $e_8$ & $-e_9$ & $-e_{14}$ & $-e_{15}$ & $e_{12}$ & $e_{13}$ &$-e_2$ & $e_3$ & $-1$ & $-e_1$ & $-e_6$ & $-e_7$ & $e_4$ & $e_5$\\\hline $e_{11}$ &$-e_{10}$ & $e_9$ & $e_8$ & $-e_{15}$ & $e_{14}$ & $-e_{13}$ & $e_{12}$ & $-e_3$ & $-e_2$ &$e_1$ & $-1$ & $-e_7$ & $e_6$ & $-e_5$ & $e_4$\\\hline $e_{12}$ & $e_{13}$ & $e_{14}$ & $e_{15}$ &$e_8$ & $-e_9$ & $-e_{10}$ & $-e_{11}$ & $-e_4$ & $e_5$ & $e_6$ & $e_7$ & $-1$ & $-e_1$ &$-e_2$ & $-e_3$\\\hline $e_{13}$ & $-e_{12}$ & $e_{15}$ & $-e_{14}$ & $e_9$ & $e_8$ & $e_{11}$ & $-e_{10}$ & $-e_5$ & $-e_4$ & $e_7$ & $-e_6$ & $e_1$ & $-1$ & $e_3$ & $-e_2$\\\hline $e_{14}$ & $-e_{15}$ & $-e_{12}$ & $e_{13}$ & $e_{10}$ & $-e_{11}$ & $e_8$ & $e_9$ &$-e_6$ & $-e_7$ & $-e_4$ & $e_5$ & $e_2$ & $-e_3$ & $-1$ & $e_1$\\\hline $e_{15}$ & $e_{14}$ &$-e_{13}$ & $-e_{12}$ & $e_{11}$ & $e_{10}$ & $-e_9$ & $e_8$ & $-e_7$ & $e_6$ & $-e_5$ & $-e_4$ & $e_3$ & $e_2$ & $-e_1$& $-1$\\\hline \epsilonnd{tabular} \epsilonnd{tiny} 
\epsilonnd{center} The sedenions have zero divisors and they are not an alternative algebra. Anyhow, we shall see that they are close enough to alternative division algebras, so that these approximate properties are "almost" characteristic for $\mathbb{S}$. Let us recall the definition of another notion needed for dealing with these properties. An algebra $A$ is said to be a {\epsilonm superalgebra} if it is $\mathbb{Z}_2$-graded, i.e., there exist linear subspaces $A_i$, $i\in \mathbb Z_2$, such that $A=A_0\oplus A_1$ and $A_iA_j\subseteq A_{i+j}$ for all $i,j\in \mathbb{Z}_2$. We call $A_0$ an {\epsilonm even} and $A_1$ an {\epsilonm odd} part of $A$. Elements in $A_0\cup A_1$ are said to be {\epsilonm homogeneous}. Note that if $A$ is unital, then $1\in A_0$. Cayley-Dickson algebras possess a natural superalgebra structure. Indeed, $A=\mathcal{A}AA_n$ becomes a superalgebra by defining $A_0 = \mathcal{A}AA_{n-1}\times 0$ and $A_1 = 0\times \mathcal{A}AA_{n-1}$. This simple observation is the concept behind the contents of Section \ref{Secsuper}. The algebras $\mathcal{A}AA_n$, $n\ge 4$, are not alternative, but at least they have certain nonscalar elements that share many properties with elements in alternative algebras: these are scalar multiples of the element $e=(0,1)$, where $1$ is of course the unity of $\mathcal{A}AA_{n-1}$ (see e.g. \cite[Section 5]{Biss}). Let us point out only one property that is sufficient for our purposes: $e$ satisfies $x^2 e = x(xe)$ for all $x\in\mathcal{A}AA_n$. This can be easily verified. Moreover, this property is "almost" characteristic for $e$: only elements in the linear span of $1$ and $e$ satisfy this identity for every $x$ \cite[Lemma 1.2]{AS} (the authors are thankful to Alberto Elduque for drawing their attention to this result). Now, let us call an element $a$ in an arbitrary nonassociative algebra $A$ an {\epsilonm alter-scalar} if $a$ is not a scalar and satisfies $x^2 a = x(xa)$ holds for all $x\in A$. (A similar, but not exactly the same notion of a strongly alternative element was defined in \cite{Moreno2}. There is also a standard notion of an alternative element defined through the condition $a^2x = a(ax)$ for every $x$, but this is too weak for our goals). What is important for us is that $\mathbb{S}$ contains alter-scalars. With respect to the notation introduced above, these are nonzero scalar multiplies of $e_8$. Thus, the standard basis of $\mathbb{S}$ has an element that is in some sense "better" than the others. This does not seem to be the case with the preceding Cayley-Dickson algebras. Next we recall that an algebra $A$ is said to be {\epsilonm quadratic} if it is unital and the elements $1,x,x^2$ are linearly dependent for every $x\in A$. Thus, for every $x\in A$ there exist $t(x),n(x) \in \mathcal{R}R$ such that $x^{2} - t(x)x + n(x)=0$. Obviously, $t(x)$ and $n(x)$ are uniquely determined if $x\notin \mathcal{R}R$. Setting $t(\lambda) = 2\lambda$ and $n(\lambda)= \lambda^2$ for $\lambda\in\mathcal{R}R$, we can then consider $t$ and $n$ as maps from $A$ into $\mathcal{R}R$ (the reason for this definition is that in this way $t$ becomes a linear functional, but we shall not need this). We call $t(x)$ and $n(x)$ the {\epsilonm trace} and the {\epsilonm norm} of $x$, respectively. For some elementary properties of quadratic algebras, a characterization of quadratic alternative algebras, and further references we refer to \cite{Eld}. From $x^2 - (x+x^*)x + x^*x =0$ we see that all algebras $\mathcal{A}AA_n$ are quadratic. 
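Both features, the zero divisors and the special role of $e_8$, are easy to confirm by direct computation. The following Python sketch is again an illustration only; it repeats the Cayley-Dickson product from the previous sketch so as to be self-contained, checks one pair of zero divisors (this particular pair is our choice of example, it is not singled out in the text), and verifies the identities $x^2e_8=x(xe_8)$ and $x^2-t(x)x+n(x)=0$ on random integer sedenions.
\begin{verbatim}
# Illustration only (self-contained copy of the Cayley-Dickson product):
# zero divisors in the sedenions, x^2 e8 = x (x e8), and quadraticity.
import random

def cd_mul(x, y):
    n = len(x)
    if n == 1:
        return [x[0] * y[0]]
    h = n // 2
    conj = lambda z: [z[0]] + [-t for t in z[1:]]
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    ac, db = cd_mul(a, c), cd_mul(conj(d), b)
    da, bc = cd_mul(d, a), cd_mul(b, conj(c))
    return [p - q for p, q in zip(ac, db)] + [p + q for p, q in zip(da, bc)]

def e(i):
    return [1 if j == i else 0 for j in range(16)]

add = lambda x, y: [s + t for s, t in zip(x, y)]
sub = lambda x, y: [s - t for s, t in zip(x, y)]

# A pair of zero divisors in the sedenions (our choice of example):
assert cd_mul(add(e(3), e(10)), sub(e(6), e(15))) == [0] * 16

for _ in range(100):
    x = [random.randint(-5, 5) for _ in range(16)]
    xx = cd_mul(x, x)
    # e8 is an alter-scalar: x^2 e8 = x (x e8).
    assert cd_mul(xx, e(8)) == cd_mul(x, cd_mul(x, e(8)))
    # quadraticity: x^2 - t(x) x + n(x) = 0, t(x) = 2 x_0, n(x) = sum x_i^2.
    t, nrm = 2 * x[0], sum(s * s for s in x)
    assert all(xx[i] - t * x[i] + (nrm if i == 0 else 0) == 0 for i in range(16))
\end{verbatim}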
Further, every real division algebra $A$ that is algebraic and power-associative (this means that every subalgebra generated by one element is associative) is automatically quadratic. Indeed, if $x\in A$ then there exists a nonzero polynomial $f(X)\in\mathcal{R}R[X]$ such that $f(x)=0$. Writing $f(X)$ as the product of linear and quadratic polynomials in $\mathcal{R}R[X]$ it follows that $p(x)=0$ for some $p(X)\in\mathcal{R}R[X]$ of degree $1$ or $2$. In particular, algebraic alternative (and hence associative) real division algebras are quadratic. Finally, if $A$ is a real unital algebra, i.e., an algebra over $\mathcal{R}R$ with unity $1$, then we shall follow a standard convention and identify $\mathcal{R}R$ with $\mathcal{R}R 1$; thus we shall write $\lambda$ for $\lambda 1$, where $\lambda\in\mathcal{R}R$. \section{Frobenius' and Zorn's theorems} Our first lemma is well known. It describes one of the basic properties of quadratic algebras. We give the proof for the sake of completness. \begin{lemma} \label{L0} Let $A$ be a quadratic real algebra. Then $U=\{u\in A\setminus{\mathcal{R}R}\,|\,u^2 \in \mathcal{R}R\}\cup\{0\}$ is a linear subspace of $A$, $uv+vu\in \mathcal{R}R$ for all $u,v\in U$, and $A=\mathcal{R}R\oplus U$. \epsilonnd{lemma} \begin{proof} Obviously, $U$ is closed under scalar multiplication. We have to show that $u,v\in U$ implies $u+v\in U$. If $u,v,1$ are linearly dependent, then one easily notices that already $u$ and $v$ are dependent, and the result follows. Thus, let $u,v,1$ be independent. We have $(u+v)^2 + (u-v)^2=2u^2 + 2v^2 \in \mathcal{R}R$. On the other hand, as $A$ is quadratic there exist $\lambda,\mu\in\mathcal{R}R$ such that $(u+v)^2 - \lambda(u+v)\in\mathcal{R}R$ and $(u-v)^2 - \mu(u-v)\in\mathcal{R}R$, and hence $ \lambda(u+v) + \mu(u-v)\in \mathcal{R}R$. However, the independence of $1,u,v$ implies $\lambda + \mu = \lambda -\mu =0$, so that $\lambda=\mu=0$. This proves that $u\pm v\in U$. Thus $U$ is indeed a subspace of $A$. Accordingly, $u v + v u = (u+v)^{2} - u^{2} - v^{2}\in\mathcal{R}R$ for all $u,v\in U$. Finally, if $a\in A\setminus{\mathcal{R}R}$, then $a^2 - \nu a\in\mathcal{R}R$ for some $\nu\in \mathcal{R}R$, and therefore $u=a -\frac{\nu}{2}\in U$; thus, $a = \frac{\nu}{2} + u\in \mathcal{R}R\oplus U$. \epsilonnd{proof} \begin{remark} \label{Rem1} If $A$ is additionally a division algebra, then every nonzero $u\in U$ can be written as $u=\alpha v$ with $\alpha\in \mathcal{R}R$ and $v^2 =-1$. Indeed, since $u^2\in\mathcal{R}R$ and since $u^2$ cannot be $\ge 0$ (otherwise $(u-\alpha)(u+\alpha) = u^2 - \alpha^2$ would be $0$ for some $\alpha\in\mathcal{R}R$) we have $u^2=- \alpha^2$ with $0\ne \alpha\in\mathcal{R}R$. Thus, $v = \alpha^{-1}u$ is a desired element. \epsilonnd{remark} Note that by $\langle u,v\rangle = - \frac{1}{2}(uv +vu)$ one defines an inner product on $U$ if $A$ is a division algebra. The next lemma therefore deals with nothing but the Gram-Schmidt process. Nevertheless, we give the proof. \begin{lemma} \label{L1} Let $A$ be a quadratic real division algebra, and let $U$ be as in Lemma \ref{L0}. Suppose $e_1,\ldots,e_k\in U$ are such that $e_i^{2} = -1$ for all $i\le k$ and $e_ie_j =- e_j e_i$ for all $i,j\le k$, $i\ne j$. If $U$ is not equal to the linear span of $e_1,\ldots,e_k$, then there exists $e_{k+1}\in U$ such that $e_{k+1}^{2} = -1$ and $e_i e_{k+1} =- e_{k+1} e_i$ for all $i\le k$. 
\epsilonnd{lemma} \begin{proof} Pick $u\in U$ that is not contained in the linear span of $e_1,\ldots,e_k$, and set $\alpha_i = \frac{1}{2}(ue_i + e_i u) \in \mathcal{R}R$ (by Lemma \ref{L0}). Note that $v = u + \alpha_1 e_1+\ldots +\alpha_k e_k$ satisfies $e_i v = -v e_i$ for all $i\le k$. Let $e_{k+1}$ be a scalar multiple of $v$ such that $e_{k+1}^{2} = -1$ (Remark \ref{Rem1}). Then $e_{k+1}$ has all desired properties. \epsilonnd{proof} \begin{theorem} \label{TF} {\bf (Frobenius' theorem)} An algebraic associative real division algebra $A$ is isomorphic to $\mathcal{R}R$, $\mathcal{C}C$, or $\mathbb{H}$. \epsilonnd{theorem} \begin{proof} As pointed out at the end of Section \ref{Prel}, $A$ is quadratic. We may assume that $n=\dim A\ge 2$. By Remark \ref{Rem1} we can fix $i\in A$ such that $i^2=-1$. Thus, $A\cong\mathcal{C}C$ if $n=2$. Let $n > 2$. By Lemma \ref{L1} there is $j\in A$ such that $j^2=-1$ and $ij=-ji$. Set $k=ij$. Now one immediately checks that $k^2 =-1$, $ki = j =-ik$, $jk=i=-kj$, and $i,j,k$ are linearly independent. Therefore $A$ contains a subalgebra isomorphic to $\mathbb{H}$. It remains to show that $n$ is not $>4$. If it was, then by Lemma \ref{L1} there would exist $e\in A$ such that $e\ne 0$, $ei = -ie$, $ej = - je$, and $ek=-ke$. However, from the first two identities we infer $ eij= -iej=ije$; since $ij=k$, this contradicts the third identity. \epsilonnd{proof} In standard graduate algebra textbooks one can find different proofs of Frobenius' theorem. In some of them the advanced theory is used, but there are also such that use only elementary tools, e.g., \cite{Her} and \cite{Lam}. The proof in \cite{Her} is actually based on similar ideas than our proof, but it is considerably lengthier. The one in \cite{Lam} (which is based on \cite{Pal}) is different, and also short. We believe that our proof, consisting of four simple steps (Lemma \ref{L0}, Remark \ref{Rem1}, Lemma \ref{L1}, and the final proof), should be easily understandable to undergraduate students. Some of these steps, especially both lemmas, are of independent interest. We now switch to the proof of Zorn's theorem. We need a simple lemma: \begin{lemma} \label{L2} Let $A$ be an alternative algebra, and let $e_1,\ldots,e_k\in A$ be such that $e_ie_j\in\{e_1,\ldots,e_k\}$ whenever $i\ne j$. If $w\in A$ is such that $e_iw=-we_i$ for every $i$, then $(e_ie_j)w=-e_i(e_jw)$ and $w(e_ie_j)=-(we_i)e_j$ whenever $i\neq j$. \epsilonnd{lemma} \begin{proof} Just set $x=e_i$, $y=e_j$, and $z=w$ in \epsilonqref{eq2}, and the result follows. \epsilonnd{proof} \begin{theorem} \label{TZ} {\bf (Zorn's theorem)} An algebraic alternative real division algebra $A$ is isomorphic to $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, or $\mathbb{O}$. \epsilonnd{theorem} \begin{proof} Since a subalgebra generated by two elements is associative, the first part of the proof of Theorem \ref{TF} remains unchanged in the present context. We may therefore assume that $A$ contains a copy of $\mathbb{H}$ and that $n=\dim A > 4$. Let us just change the notation and write $e_1 = i$, $e_2 = j$, and $e_3=k$. By Lemma \ref{L1} there exists $e_4\in A$ such that $e_4^2=-1$ and $e_4e_i=-e_ie_4$ for $i=1,2,3$. Now define $e_5=e_1e_4$, $e_6=e_2e_4$, $e_7=e_3e_4$. Using the alternativity and anticommutativity relations we see that $$ e_5^2=e_6^2=e_7^2=-1, $$ $$ e_1e_5 = -e_5e_1 = e_2e_6=-e_6e_2 = e_3e_7=-e_7e_3= -e_4, $$ $$ e_4e_5 = -e_5e_4 = e_1,\,\, e_4e_6 = -e_6e_4 = e_2,\,\, e_4 e_7 = -e_7e_4 = e_3. 
$$ Further, using \epsilonqref{eq3} we obtain $$ e_5e_6 = -e_6e_5 = -e_3 ,\,\, e_6e_7 = -e_7e_6 = -e_1,\,\, e_7e_5 = -e_5e_7 = -e_2. $$ Finally, use Lemma \ref{L2} with $k=3$ and $w=e_4$, and note that the resulting identites yield the rest of the multiplication table. It is easy to see that $1,e_1,\ldots,e_7$ are linearly independent. Indeed, by taking squares we first see that $\sum_{i=1}^7 \lambda_ie_i$ cannot be a nonzero scalar; if $\sum_{i=1}^7 \lambda_ie_i =0$, then after multiplying this relation with $e_i$ we get $\lambda_i=0$. Thus, we have showed that $A$ contains $\mathbb{O}$. It remains to show that $n=8$. Suppose $n > 8$. Then, by Lemma \ref{L1}, there exists $f\in A$ such that $f\ne 0$ and $fe_i=-e_if$, $1\le i\le 7$. Lemma \ref{L2} tells us that $f$ also satisfies $(e_ie_j)f=-e_i(e_jf)$ and $f(e_ie_j)=-(fe_i)e_j$ for $i\neq j$. Accordingly, \begin{equation} \label{espe} e_1(e_2(e_4f))=-e_1((e_2e_4)f)= -e_1(e_6f) = (e_1e_6)f =- e_7f. \epsilonnd{equation} Note that for $1\le i\le 3$ we have $$ e_i(e_4f)=-(e_ie_4)f=f(e_ie_4)=-f(e_4e_i)=(fe_4)e_i=-(e_4f)e_i. $$ This makes it possible for us to apply Lemma \ref{L2} for $k=3$ and $w=e_4f$. In particular this gives $(e_1e_2)(e_4f)=-e_1(e_2(e_4f))$. Consequently, $$ e_1(e_2(e_4f)) = -e_3(e_4f) = (e_3e_4)f= e_7f, $$ contradicting \epsilonqref{espe}. \epsilonnd{proof} \begin{remark} \label{Rdalje} From the first part of the proof we see that if an alternative (not necessarily a division) real algebra $A$ contains a copy of $\mathbb{H}$ and $\dim A > 4$, then it also contains a copy of $\mathbb{O}$. \epsilonnd{remark} Classical versions of Frobenius' and Zorn's theorems deal with finite dimensional algebras rather than with (slightly more general) algebraic ones. Our method, however, yields these more general versions for free. But actually we shall need the more general version of Zorn's theorem in Section \ref{Secsuper}. We cannot claim that any of the arguments given in this section is entirely original. After finding these proofs we have realized, when searching the literature, that many of these ideas appear in different texts. But to the best of our knowledge nobody has compiled these arguments in the same way that leads to short and direct proofs of theorems by Frobenius and Zorn. Therefore we hope and believe that this section is of some value. \section{Locally complex algebras} \label{loccom} As already mentioned, we define a {\epsilonm locally complex algebra} as a real unital algebra $A$ such that every $a\in A\setminus{\mathcal{R}R}$ generates a subalgebra isomorphic to $\mathcal{C}C$. A locally complex algebra $A$ is obviously quadratic. We can therefore consider the trace $t(a)$ and the norm $n(a)$ of each $a\in A$. \begin{lemma}\label{Tlc} The following conditions are equivalent for a real unital algebra $A$: \begin{enumerate} \item[(i)] $A$ is locally complex; \item[(ii)] every $0\ne a\in A$ has a multiplicative inverse lying in $\mathcal{R}R a + \mathcal{R}R$; \item[(iii)] $A$ is quadratic and $A$ has no nontrivial idempotents or square-zero elements; \item[(iv)] $A$ is quadratic and $n(a)> 0$ for every $0\ne a\in A$. \epsilonnd{enumerate} Moreover, if $2\le \dim A = n < \infty$, then {\rm (i)-(iv)} are equivalent to \begin{enumerate} \item[(v)] $A$ has a basis $\{1,e_1,\ldots, e_{n-1}\}$ such that $e_i^{2} = -1$ for all $i$ and $e_i e_j =- e_j e_i$ for all $i\ne j$. \epsilonnd{enumerate} \epsilonnd{lemma} \begin{proof} It is easy to see that (i)$\Longrightarrow$ (ii) and (ii)$\Longrightarrow$ (iii). 
Suppose $A$ is quadratic and $n(a) \le 0$ for some $0\ne a\in A$. Then $a\notin\mathcal{R}R$. Therefore also $b=a - \frac{t(a)}{2} \notin \mathcal{R}R$. Note that $b^2 \ge 0$. If $b^2 =0$, then $A$ has a nontrivial nilpotent. If $b^2 > 0$, i.e., $b^2 = \alpha^2$ for some $0\ne \alpha\in\mathcal{R}R$, then $e=\frac{1}{2}(1-\alpha^{-1}b)$ is a nontrivial idempotent in $A$. Thus, (iii)$\Longrightarrow$ (iv). The proof of (iv)$\Longrightarrow$ (ii) is also straightforward. Therefore (ii)-(iv) are equivalent. Now assume (ii)-(iv) and pick $a\in A\setminus{\mathcal{R}R}$. Then $b= a - \frac{t(a)}{2}$ satisfies $b^{2} \in\mathcal{R}R$. Just as in the argument above we see that $b^{2}$ cannot be $\ge 0$. Hence $b^{2} = -\alpha^2$ for some $\alpha\in\mathcal{R}R\setminus{\{0\}}$, and so $i = \alpha^{-1}b$ satisfies $i^{2} = -1$. This yields (i). Finally, assume $2\le \dim A = n < \infty$. The implication (i)-(iv) $\Longrightarrow$ (v) follows from (the proof of) Lemma \ref{L1}. Assuming (v) and writing $a\in A$ as $a = \lambda_0 +\sum_{i=1}^{n-1}\lambda_i e_i$, we see that $a^{2} - t(a)a + n(a)=0$ with $t(a)=2\lambda_0$ and $n(a) = \sum_{i=0}^{n-1}\lambda_i^2$. Thus, (iv) holds. \epsilonnd{proof} We can now list various examples of locally complex algebras. \begin{example} \label{ex0} A quadratic real division algebra is locally complex. \epsilonnd{example} \begin{example} \label{ex1} Let $J_n$ be an $n$-dimensional real vector space, and let $\{1,e_1,\ldots,e_{n-1}\}$ be its basis. Define a multiplication in $J_n$ so that $1$ is of course the unity, and the others are multiplied according to $e_i e_j = -\delta_{ij}$. Then $J_n$ is a locally complex algebra and simultaneously a Jordan algebra. Another way of representing $J_n$ is by identifying it with $\mathcal{R}R\times \mathcal{R}R^{n-1}$, and defining multiplication by $(\lambda,u) (\mu , v) = (\lambda\mu - \langle u,v\rangle, \lambda v + \mu u)$, where $\langle \,.\,,\,.\,\rangle$ denotes the standard inner product on $\mathcal{R}R^{n-1}$. \epsilonnd{example} \begin{example} \label{ex2} A real unital algebra $A$ is said to be {\epsilonm nicely normed} if there exists a linear map $\ast:A\to A$ such that $a^{**} =a$, $(a b)^* = b^* a^*$ for all $a,b\in A$, and $a+a^*\in\mathcal{R}R$, $a a^* = a^* a > 0$ for all $0\ne a\in A$ (cf. \cite[p.\,154]{Baez}). These algebras form an important subclass of locally complex algebras. Namely, every element $a$ in such an algebra $A$ satisfies $a^{2} - t(a)a + n(a)=0$ with $t(a)=a+a^*$ and $n(a) = a a^*$, so that $A$ is indeed locally complex. Note that $U=\{u\in A\setminus{\mathcal{R}R}\,|\,u^2 \in \mathcal{R}R\}\cup\{0\} = \{u\in A\,|\,u^* = -u\}$. In particular, the Cayley-Dickson algebras $\mathcal{A}AA_n$ are nicely normed, and hence locally complex. \epsilonnd{example} From Lemma \ref{Tlc} we can deduce the following characterization of finite dimensional nicely normed algebras. \begin{corollary} \label{Cnn} let $A$ be a real unital algebra. If $2\le \dim A = n < \infty$, then the following conditions are equivalent: \begin{enumerate} \item[(i)] $A$ is nicely normed; \item[(ii)] $A$ has a basis $\{1,e_1,\ldots, e_{n-1}\}$ such that $e_i^{2} = -1$ for all $i$ and $e_i e_j =- e_j e_i \in {\rm span}\{e_1,\ldots,e_{n-1}\}$ for all $i\ne j$. \epsilonnd{enumerate} \epsilonnd{corollary} \begin{proof} Assume (i). By Lemma \ref{Tlc}\,(v) $A$ has a basis $\{1,e_1,\ldots, e_{n-1}\}$ that has all desired properties except that we do not know yet that $e_i e_j \in {\rm span}\{e_1,\ldots,e_{n-1}\}$. 
In view of the observation in Example \ref{ex2} we have ${\rm span}\{e_1,\ldots,e_{n-1}\} = U = \{u\in A\,|\,u^* = -u\}$. Therefore, if $i\ne j$, $(e_i e_j)^\ast = e_j^\ast e_i^\ast = e_j e_i = -e_i e_j$, and hence $e_i e_j\in U$. Conversely, if (ii) holds, then we can define $\ast$ according to $1^* = 1$ and $e_{i}^* = -e_{i}$, and one easily checks that this makes $A$ a nicely normed algebra. \epsilonnd{proof} If $A$ is a {\epsilonm commutative} finite dimensional locally complex algebra, then the $e_i$'s from (v) in Lemma \ref{Tlc} must satisfy $e_ie_j = 0$ if $i\ne j$. This can be interpreted as follows. \begin{corollary}\label{Cjor} Let $A$ be a locally complex algebra with $2\le \dim A = n < \infty$. Then $A$ is commutative if and only if $A\cong J_n$. \epsilonnd{corollary} Let $A$ be an alternative real algebra. If $A$ is an algebraic division algebra, then it is quadratic, and hence , as already mentioned, locally complex. Conversely, if $A$ is locally complex, then by Lemma \ref{Tlc}\,(ii) for every $0\ne a\in A$ there exist $\lambda,\mu\in \mathcal{R}R$ such that $a(\lambda a + \mu) =1$. Since $A$ is alternative it follows that for every $y\in A$ the equation $ax=y$ has the solution $x= (\lambda a+ \mu)y$. Similarly one solves the equation $xa = y$. Therefore $A$ is an algebraic division algebra. Accordingly, Frobenius' and Zorn's theorem can be equivalently stated as follows. \begin{theorem}\label{TFZ} {\bf (Frobenius' and Zorn's theorems)} An associative locally complex algebra is isomorphic to $\mathcal{R}R$, $\mathcal{C}C$, or $\mathbb{H}$. An alternative locally complex algebra is isomorphic to $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, or $\mathbb{O}$. \epsilonnd{theorem} As already mentioned in the introduction, this version of Frobenius' and Zorn's theorems indicates the direction in which these theorems can be generalized. We shall deal with this in the next section. In the rest of this section we will classify locally complex algebras up to dimesion 4. Clearly, $\mathcal{R}R$ and $\mathcal{C}C$ are, up to an isomorphism, the only locally complex algebras of dimension $\le 2$. We fix some notation. The members of $\mathcal{R}R \times \mathcal{R}R^2$ will be denoted by $(\lambda ,x) = (\lambda , x_1 , x_2)$ and the members of $\mathcal{R}R \times \mathcal{R}R^3$ by $(\lambda ,x) = (\lambda, x_1 , x_2 , x_3)$. For each (ordered) pair $x,y \in \mathcal{R}R^2$ we denote by $|x\ y|$ the $2\times 2$ determinant $ \left| \begin{array}{ccc} x_1 & y_1 \\ x_2 & y_2 \epsilonnd{array} \right|$. The symbol $x\times y$ stands for the usual vector product (cross product) of $x,y \in \mathcal{R}R^3$, while $(x,y,z)$ denotes the scalar triple product $(x,y,z) = \langle x\times y , z \rangle$, $x,y,z \in \mathcal{R}R^3$. Let $t,s$ be nonnegative real numbers. We denote by $A_{t,s}$ the 3-dimensional algebra $A_{t,s}= \mathcal{R}R \times \mathcal{R}R^2$ with the multiplication given by $$ (\lambda , x) \, (\mu , y) = (\lambda \mu - \langle x,y \rangle + t |x\ y| ,\lambda y + \mu x + s |x\ y|e_1 ), $$ where $e_1 = (1,0)\in \mathcal{R}R^2$. It follows from Lemma \ref{Tlc}\,(v) that $A_{t,s}$ is a locally complex algebra. We will show that each 3-dimensional locally complex algebra $A$ is isomorphic to $A_{t,s}$ for some $(t,s)\in [0,\infty) \times [0, \infty)$ and that $A_{t,s}$ and $A_{t', s'}$ are not isomorphic whenever $(t,s)\not= (t' , s')$. In short, we have the following classification theorem for 3-dimensional locally complex algebras. 
\begin{theorem}\label{bv1} The map $(t,s) \mapsto A_{t,s}$, $t,s \ge 0$, induces a bijection between $[0,\infty) \times [0, \infty)$ and isomorphism classes of 3-dimensional locally complex algebras. \epsilonnd{theorem} \begin{proof} We first show that each 3-dimensional locally complex algebra $A$ is isomorphic to $A_{t,s}$ for some $(t,s)\in [0,\infty) \times [0, \infty)$. It is a straightforward consequence of Lemma \ref{Tlc}\,(v) that $A$ is isomorphic to $\mathcal{R}R \times \mathcal{R}R^2$ with the multiplication given by $$ (\lambda , x) \, (\mu , y) = (\lambda \mu - \langle x,y \rangle ,\lambda y + \mu x ) + |x\ y| (t,z) $$ for some $(t,z)\in \mathcal{R}R \times \mathcal{R}R^2$. So, we may, and we will assume that $A$ is this algebra. We have two possibilities; either $t\ge 0$, or $t<0$. Let us consider only the second one; the case when $t\ge 0$ can be handled in a similar, but simpler way. Set $s= \| z\|$. There exists an orthogonal $2\times 2$ matrix $Q$ such that $Qz=-se_1$ and $\det Q = -1$. Observe that $|Qx \ Qy| = (\det Q) |x\ y| = - |x\ y|$ and $\langle Qx , Qy \rangle = \langle x,y \rangle$, $x,y \in \mathcal{R}R^2$. We claim that the map $\varphi : A \to A_{|t|,s}$ given by $\varphi (\lambda , x) = (\lambda , Qx)$, $(\lambda , x) \in \mathcal{R}R \times \mathcal{R}R^2$, is an isomorphism. Clearly, it is linear and bijective. Moreover, we have $$ \varphi( (\lambda, x)\, (\mu, y)) = \varphi ((\lambda \mu - \langle x,y \rangle + t|x\ y|,\lambda y + \mu x + |x\ y|z )) $$ $$ = (\lambda \mu - \langle x,y \rangle + t|x\ y|,\lambda Qy + \mu Qx - s|x\ y|e_1 ). $$ On the other hand, $$ \varphi(\lambda, x)\, \varphi(\mu, y)= (\lambda , Qx)\, (\mu, Qy) $$ $$ = (\lambda \mu - \langle Qx,Qy \rangle + |t|\ |Qx\ Qy| ,\lambda Qy + \mu Qx + s |Qx\ Qy|e_1 ) $$ $$ = (\lambda \mu - \langle x,y \rangle + t |x\ y| ,\lambda Qy + \mu Qx - s |x\ y|e_1 ). $$ Hence, $\varphi$ is an isomorphism. It remains to show that if $A_{t,s}$ and $A_{t', s'}$ are isomorphic for some $(t,s),(t' , s')\in [0,\infty)\times [0, \infty)$, then $(t,s)= (t' , s')$. So, let $\varphi : A_{t,s} \to A_{t',s'}$ be an isomorphism. Then $\varphi$ is linear and unital. In particular, $\varphi (\lambda, 0) = (\lambda, 0)$ for every $\lambda \in \mathcal{R}R$. Furthermore, we have $$ \{ (0,x) \in A_{t,s} \, | \, x\in \mathcal{R}R^2 \} = \{ u \in A_{t,s}\, | \, u^2 \in \mathcal{R}R \ \, {\rm and}\ \, u\not\in \mathcal{R}R \}\cup\{0\}. $$ It follows that $$ \varphi (\lambda , x) = (\lambda , Qx ) $$ for some linear map $Q : \mathcal{R}R^2 \to \mathcal{R}R^2$. From $$ (\lambda^2 - \| Qx\|^2 , 2\lambda Qx) = (\lambda , Qx)^2 = (\varphi (\lambda , x))^2 $$ $$ = \varphi ((\lambda, x)^2) = \varphi (\lambda^2 - \| x\|^2 , 2 \lambda x) = (\lambda^2 - \|x\|^2 , 2 \lambda Qx) $$ we get that $\| Qx\|^2 = \| x\|^2$ for every $x\in \mathcal{R}R^2$. Thus, $Q$ is orthogonal. The equation $$ \varphi ((\lambda, x)\, (\mu, y)) = \varphi(\lambda, x)\, \varphi (\mu , y) $$ can be rewritten as $$ (\lambda \mu - \langle x,y \rangle + t |x\ y| ,\lambda Qy + \mu Qx + s |x\ y| Qe_1 ) $$ $$ = (\lambda \mu - \langle x,y \rangle + t' (\det Q) \, |x\ y| ,\lambda Qy + \mu Qx + s' (\det Q) \, |x\ y|e_1 ). $$ We conclude that $t=t' \det Q$ and $sQe_1 = s' (\det Q) e_1$. Applying the fact that $| \det Q| =1$ and $\| Qe_1 \| = \| e_1 \| =1$ we get $| t| = |t'|$ and $|s | = |s'|$. As all $t,t',s,s'$ are nonnegative, we have $t=t'$ and $s=s'$, as desired. 
\epsilonnd{proof} It follows directly from Corollary \ref{Cnn} that $A_{t,s}$ is nicely normed if and only if $t=0$. So, the above statement shows that there is a natural bijection between $[0, \infty)$ and isomorphism classes of 3-dimensional nicely normed algebras. The next result owes a lot to the paper \cite{Die} classifying 4-dimensional real quadratic division algebras. Our approach covers a more general class of real algebras. It is self-contained and completely elementary using just simple linear algebra tools. We identify linear maps on $\mathcal{R}R^3$ with $3\times 3$ real matrices. Let $M_3$ denote the set of all $3\times 3$ real matrices. For $(T,u),(T',u') \in M_3 \times \mathcal{R}R^3$ we write $(T,u) \sim (T',u')$ if and only if there exists an orthogonal $3\times 3$ matrix $Q$ such that $T' = (\det Q) QTQ^T$ and $u'=(\det Q) Q u$. It is clear that $\sim$ is an equivalence relation on $M_3 \times \mathcal{R}R^3$. The set of equivalence classes will be denoted by $(M_3 \times \mathcal{R}R^3)/\sim$. For $T\in M_3$ and $u\in \mathcal{R}R^3$ we denote by $A_{T,u}$ the 4-dimensional algebra $A_{T,u}= \mathcal{R}R \times \mathcal{R}R^3$ with the multiplication given by $$ (\lambda , x) \, (\mu , y) = (\lambda \mu - \langle x,y \rangle + (x,y,u) ,\lambda y + \mu x + T(x\times y) ). $$ As in the 3-dimensional case one can easily verify that $A_{T,u}$ is a locally complex algebra. We will show that each 4-dimensional locally complex algebra $A$ is isomorphic to $A_{T,u}$ for some $(T,u)\in M_3 \times \mathcal{R}R^3$ and that $A_{T,u}$ and $A_{T', u'}$ are isomorphic if and only if $(T,u) \sim (T' , u')$. In other words, we will prove the following. \begin{theorem}\label{bv2} The map $(T,u) \mapsto A_{T,u}$, $T\in M_3$, $u\in \mathcal{R}R^3$, induces a bijection between $(M_3 \times \mathcal{R}R^3)/\sim$ and isomorphism classes of 4-dimensional locally complex algebras. \epsilonnd{theorem} \begin{proof} We will first show that each $4$-dimensional locally complex algebra $A$ is isomorphic to $A_{T,u}$ for some $(T,u)\in M_3 \times \mathcal{R}R^3$. It is a straightforward consequence of Lemma \ref{Tlc}\,(v) that $A$ is isomorphic to $\mathcal{R}R \times \mathcal{R}R^3$ with the multiplication given by $$ (\lambda , x) \, (\mu , y) = (\lambda \mu - \langle x,y \rangle ,\lambda y + \mu x ) + S(x_1 y_2 - x_2 y_1 , x_1 y_3 - x_3 y_1 , x_2 y_3 - x_3 y_2 ) $$ for some linear map $S : \mathcal{R}R^3 \to \mathcal{R}R \times \mathcal{R}R^3$. Observe that $S : \mathcal{R}R^3 \to \mathcal{R}R \times \mathcal{R}R^3$ can be decomposed into a direct sum of a linear functional on $\mathcal{R}R^3$ and an endomorphism on $\mathcal{R}R^3$. Recall that every linear functional on $\mathcal{R}R^3$ can be represented in a unique way as an inner product with a fixed vector in $\mathcal{R}R^3$. Finally, observe that the coordinates of the vector $(x_1 y_2 - x_2 y_1 , x_1 y_3 - x_3 y_1 , x_2 y_3 - x_3 y_2 )$ are up to a permutation and a multiplication by $\pm 1$ the coordinates of the vector product $x \times y$. Thus, $A$ is isomorphic to $\mathcal{R}R \times \mathcal{R}R^3$ with the multiplication given by $$ (\lambda , x) \, (\mu , y) = (\lambda \mu - \langle x,y \rangle + (x,y,u) ,\lambda y + \mu x + T(x \times y)) $$ for some $u\in \mathcal{R}R^3$ and some endomorphism $T$ of $\mathcal{R}R^3$. Hence, $A$ is isomorphic to $A_{T,u}$, as desired. Assume now that $A_{T,u}$ and $A_{T', u'}$ are isomorphic for some $(T,u),(T' , u')\in M_3\times \mathcal{R}R^3$. We have to show that $(T,u)\sim (T' , u')$. 
So, let $\varphi : A_{T,u} \to A_{T',u'}$ be an isomorphism. Exactly in the same way as in the 3-dimensional case we show that $$ \varphi (\lambda , x) = (\lambda , Qx ) $$ for some orthogonal $3\times 3$ matrix $Q$. The equation $$ \varphi ((\lambda, x)\, (\mu, y)) = \varphi(\lambda, x)\, \varphi (\mu , y) $$ can be rewritten as $$ (\lambda \mu - \langle x,y \rangle + (x,y,u) ,\lambda Qy + \mu Qx + QT(x\times y) ) $$ $$ = (\lambda \mu - \langle x,y \rangle + (Qx,Qy,u') ,\lambda Qy + \mu Qx + T' (Qx \times Qy) ). $$ We conclude that $$ (x,y,u)= (Qx, Qy , u') $$ and $$ QT(x\times y) = T' (Qx \times Qy) $$ for all $x,y \in \mathcal{R}R^3$. As $Q$ is orthogonal we have $Q(x\times y) = (\det Q ) (Qx \times Qy)$, and consequently, $$ (x,y,u) = (\det Q)\, (x,y, Q^T u') \ \ \ {\rm and}\ \ \ QT(x\times y) = (\det Q)\, T'Q (x\times y),\ \ \ x,y \in \mathcal{R}R^3. $$ It follows that $u' = (\det Q) Q u$ and $T' = (\det Q) QTQ^T$, as desired. Finally, if $(T,u) \sim (T' ,u')$ for some $T,T' \in M_3$ and $u,u' \in \mathcal{R}R^3$ then there exists an orthogonal $3\times 3$ matrix $Q$ such that $T' = (\det Q) QTQ^T$ and $u'=(\det Q) Q u$. It is then straightforward to check that the map $\varphi : A_{T,u} \to A_{T' , u'}$ defined by $\varphi (\lambda, x) = (\lambda , Qx)$, $(\lambda, x) \in A_{T,u}$, is an isomorphism. \epsilonnd{proof} It is rather easy to verify that $A_{T,u}$ is nicely normed if and only if $u=0$. We will next show that $A_{T,u}$ is a division algebra if and only if $\langle Tx , x \rangle \not=0$ for each nonzero $x\in \mathcal{R}R^3$ (that is, the quadratic form $q(x) = \langle Tx , x \rangle$ is either positive definite, or negative definite). Indeed, assume first that $A_{T,u}$ is not a division algebra. Then $$ (\lambda \mu - \langle x,y \rangle + (x,y,u) ,\lambda y + \mu x + T(x\times y) ) = 0 $$ for some nonzero $(\lambda , x) , (\mu , y) \in A_{T,u}$. In particular, $$ T(x\times y) = -\lambda y -\mu x. $$ Set $z = x \times y$. We have $z\not=0$, since otherwise $x$ and $y$ are linearly dependent and therefore \begin{itemize} \item either $\lambda = 0$ and then $\langle x,y \rangle = 0$ and $\mu x=0$ which further yields that $(\lambda , x) = 0$ or $(\mu , y ) = 0$, a contradiction; or \item $\mu=0$ which yields a contradiction in exactly the same way; or \item $\lambda\not=0$ and $\mu\not=0$ and then $y = -\mu \lambda^{-1} x$ and $\lambda \mu = \langle x, y \rangle$ yield $0< \lambda^2 = -\langle x , x \rangle \le 0$, a contradiction. \epsilonnd{itemize} Hence, $z\not=0$ and because $z$ is orthogonal to both $x$ and $y$ we have $\langle Tz , z \rangle =0$. To prove the other direction we assume that there exists $z\in \mathcal{R}R^3$ with $\| z \| =1$ and $\langle Tz, z \rangle =0$. Then $Tz = -tw$ for some real number $t$ and some $w\in \mathcal{R}R^3$ with $w\perp z$ and $\| w \| =1$. There is a unique $v\in \mathcal{R}R^3$ such that $z = w \times v$ and $v\perp w$. Set $s=-(w, v, u)$. Then $(0,w)$ and $(t, v- sw)$ are nonzero elements of $A_{T,u}$ whose product is equal to zero. Hence, $A_{T,u}$ is not a division algebra, as desired. Following Dieterich's idea \cite{Die} we will now disscuss a geometric interpretation of the classification of 4-dimensional locally complex algebras. Let us start with a simple observation concerning $3\times3$ skew-symmetric matrices. 
If $x,y \in \mathcal{R}R^3$ are any two vectors such that $x\times y = (c_1 , c_2 , c_3)$, then $$ R= \left[ \begin{array}{ccc} 0 & c_3 & -c_2 \\ -c_3 & 0 & c_1 \\ c_2 & -c_1 & 0 \epsilonnd{array} \right] = xy^T - yx^T, $$ where $x$ and $y$ are represented as $3\times 1$ matrices. If $Q$ is any orthogonal matrix, then $QRQ^T = (Qx)(Qy)^T - (Qy)(Qx)^T$. As $Qx \times Qy = (\det Q)\, Q(x\times y)$, we have $$ Q \left[ \begin{array}{ccc} 0 & c_3 & -c_2 \\ -c_3 & 0 & c_1 \\ c_2 & -c_1 & 0 \epsilonnd{array} \right] Q^T = \left[ \begin{array}{ccc} 0 & d_3 & -d_2 \\ -d_3 & 0 & d_1 \\ d_2 & -d_1 & 0 \epsilonnd{array} \right], $$ where $$ \left[ \begin{array}{ccc} d_1 \\ d_2 \\ d_3 \epsilonnd{array} \right] = (\det Q) \, Q \left[ \begin{array}{ccc} c_1 \\ c_2 \\ c_3 \epsilonnd{array} \right]. $$ If we choose $Q\in SO(3)$ such that $$ \left[ \begin{array}{ccc} 0 \\ 0 \\ \sqrt{c_{1}^2 + c_{2}^2 + c_{3}^2} \epsilonnd{array} \right] = Q \left[ \begin{array}{ccc} c_1 \\ c_2 \\ c_3 \epsilonnd{array} \right], $$ then $$ QRQ^T = \left[ \begin{array}{ccc} 0 & d & 0 \\ -d & 0 & 0 \\ 0 & 0 & 0 \epsilonnd{array} \right], $$ where $d= \sqrt{c_{1}^2 + c_{2}^2 + c_{3}^2}$. In particular, $d = \| R \|$. Any $3\times 3$ matrix $T$ can be uniquely decomposed into its symmetric and skew-symmetric part, $T=P+R$, $P= (1/2)(T + T^T)$, $R = (1/2)(T - T^T)$. If $T' = (\det Q) QTQ^T$ and $T' = P' +R'$ with $P'$ symmetric and $R'$ skew-symmetric, then $P' = (\det Q) QPQ^T$ and $R' = (\det Q) QRQ^T$. We will say that $A_{T,u}$ is of rank 3,2,1,0, respectively, if the symmetric part $P$ of $T$ is of rank 3,2,1,0, respectively. By the previous remark, two isomorphic algebras $A_{T,u}$ have the same rank. Let us start with algebras $A_{T,u}$ of rank 3. We have two possibilities: either all eigenvalues of $P= T+T^T$ have the same sign, or $P$ has both positive and negative eigenvalues. In the first case we will say that $A_{T,u}$ is an ellipsoid locally complex algebra of dimension 4, while in the second case we call $A_{T,u}$ a hyperboloid locally complex algebra of dimension 4. As we are interested in isomorphism classes we can use the fact that $A_{T,u}$ is isomorphic to $A_{-T,u}$ to restrict our attention to the case when all the eigenvalues of $P$ are positive (the ellipsoid case) or to the case when two eigenvalues of $P$ are positive and one is negative (the hyperboloid case). Once we have done this restriction two algebras $A_{T,u}$ and $A_{T' , u'}$ of the above types are isomorphic if and only if $T' = QTQ^T$ and $u' = Qu$ for some $Q\in SO(3)$. To consider isomorphism classes of hyperboloid locally complex algebras of dimension 4 (a 4-dimensional locally complex algebra is hyperboloid if it is isomorphic to some hyperboloid algebra $A_{T,u}$) we set $\tau = \{ \delta \in \mathcal{R}R^3 \, | \, \delta_1 \ge \delta_2 > 0 > \delta_3 \}$ and $\kappa = \tau \times \mathcal{R}R^3 \times \mathcal{R}R^3$. The elements of $\kappa$ will be called configurations. Each configuration consists of a hyperboloid $H_\delta = \{ x\in \mathcal{R}R^3 \, | \, \langle \mathcal{D}elta_\delta x,x \rangle =1\}$ (a hyperboloid in principal axis form) and a pair of points. Here, $\mathcal{D}elta_\delta$ is the diagonal matrix with the diagonal entries: $\delta_1 , \delta_2 , \delta_3$. The symmetry group of the hyperboloid $H_\delta$ is defined to be $G_\delta = \{ Q \in SO(3) \, | \, Q\mathcal{D}elta_\delta Q^T = \mathcal{D}elta_\delta \}$ (the requirement that $\det Q = 1$ tells that we allow only symmetries that preserve the orientation). 
Note that this symmetry group consists of 4 elements whenever $\delta_1 > \delta_2$. Namely, in this case the symmetry group consists of the identity and all diagonal matrices with two eigenvalues -1 and one eigenvalue 1. The symmetry group is infinite if and only if the hyperboloid $H_\delta$ is circular, that is, $\delta_1 = \delta_2$. Two configurations $(\delta , u, c)$ and $(\delta' , u' ,c')$ are said to be equivalent, $(\delta , u, c) \epsilonquiv (\delta' , u' ,c')$, if and only if their hyperboloids coincide and their pairs of points lie in the same orbit under the operation of the symmetry group of the hyperboloid, that is, if and only if $\delta = \delta'$ and $(u' , c') = (Qu, Qc)$ for some $Q\in G_\delta$. We denote by $\kappa/ \epsilonquiv$ the set of equivalence classes of $\kappa$. We have a natural bijection between $\kappa/\epsilonquiv$ and the set of equivalence classes of hyperboloid locally complex algebras of dimension 4. Indeed, the bijection is induced by the map $$ (\delta , u, c) \mapsto A_{\mathcal{D}elta_\delta + R_c,u} $$ where $$ \mathcal{D}elta_\delta + R_c = \left[ \begin{array}{ccc} \delta_1 & c_3 & -c_2 \\ -c_3 & \delta_2 & c_1 \\ c_2 & -c_1 & \delta_3 \epsilonnd{array} \right]. $$ Clearly, $A_{\mathcal{D}elta_\delta + R_c,u}$ is a hyperboloid locally complex algebra. We have to show that each hyperboloid algebra $A_{T, v}$ is isomorphic to some $A_{\mathcal{D}elta_\delta + R_c,u}$ and that $A_{\mathcal{D}elta_\delta + R_c,u}$ and $A_{\mathcal{D}elta_{\delta'} + R_{c'} ,u'}$ are isomorphic if and only if $(\delta , u, c) \epsilonquiv (\delta' , u' ,c')$. The second statement is trivial. To verify the first one we write $T=P+R$ with $P$ symmetric with two positive eigenvalues and $R$ skew-symmetric. Then there exists $Q\in SO(3)$ such that $QPQ^T = \mathcal{D}elta_\delta$ for some $\delta \in \tau$. We have $QRQ^T = R_c$ for some $c\in \mathcal{R}R^3$. Set $u=Qv$ to complete the proof. In a similar fashion we can consider isomorphism classes of ellipsoid locally complex algebras of dimension 4. Note that a locally complex algebra $A_{T,u}$ is a division algebra if and only if it is an ellipsoid algebra. As above we can consider configurations which consist of an ellipsoid in principal axis form and a pair of points. To each such configuration there corresponds a 4-dimensional real division algebra and this correspondence induces a bijection between the equivalence classes of configurations (the equivalence being defined via the symmetry group of the ellipsoid) and the isomorphism classes of 4-dimensional real quadratic division algebras. We omit the details that can be found in \cite{Die}. It is clear that locally complex algebras of rank 2 are either elliptic cylinder algebras or hyperbolic cylinder algebras. We leave the details to the reader. In the same way one can classify also isomorphism classes of locally complex algebras of rank 1. Let us conclude with the detailed disscussion on 4-dimensional locally complex algebras of rank 0. By $e_3$ we denote $e_3 = (0,0,1) \in \mathcal{R}R^3$. We define an equivalence relation on the set $[0, \infty) \times \mathcal{R}R^3$ as follows: $(d,u), (d',u')\in [0, \infty) \times \mathcal{R}R^3$ are said to be equivalent, $(d,u) \epsilonquiv (d', u')$, if either \begin{itemize} \item $d=d' =0$ and $\| u \| = \| u' \|$; or \item $d=d' >0$, $\| u \| = \| u '\|$, and $\langle u, e_3 \rangle = \langle u' , e_3 \rangle $. 
\epsilonnd{itemize} Note that the equivalence class of $(d,u)\in [0 , \infty) \times \mathcal{R}R^3$ with $d>0$ contains infinitely many elements if $u$ and $e_3$ are linearly independent, and is a singleton when $u$ is a scalar multiple of $e_3$. There is a natural bijection between the isomorphism classes of 4-dimensional locally complex algebras of rank 0 and the set $([0, \infty) \times \mathcal{R}R^3)/\epsilonquiv$. The bijection is induced by the map from $[0, \infty) \times \mathcal{R}R^3$ which maps the pair $(d, u)$, $d\ge 0$, $u\in \mathcal{R}R^3$, into $A_{T_d , u}$ with $$ T_d= \left[ \begin{array}{ccc} 0 & d & 0 \\ -d & 0 & 0 \\ 0 & 0 & 0 \epsilonnd{array} \right]. $$ Obviously, $A_{T_d , u}$ is a locally complex algebra of rank 0 and one can easily verify that each 4-dimensional locally complex algebra of rank 0 is isomorphic to some $A_{T_d , u}$. It remains to show that $A_{T_d , u}$ and $A_{T_{d'}, u'}$ are isomorphic if and only if $(d,u) \epsilonquiv (d' , u')$. So, assume that $A_{T_d , u}$ and $A_{T_{d'}, u'}$ are isomorphic for some $(d,u), (d' , u') \in [0, \infty) \times \mathcal{R}R^3$. Then there exists an orthogonal matrix $Q$ such that $T_{d'} = (\det Q) QT_d Q^T $ and $u' = (\det Q) Qu$. In particular, $d' = \| T_{d'} \| = \| T_d \| = d$ and $\| u' \| = \| u \|$. If $d=0$, then $d' = 0$, and hence, $(d,u) \epsilonquiv (d' , u')$ in this special case. Therefore we may assume that $d=d' >0$. From $T_{d'} = (\det Q) QT_d Q^T$ we conclude that $Qe_3 = (\det Q) e_3$. Consequently, $$ \langle u' , e_3 \rangle = \langle (\det Q) Qu, (\det Q) Qe_3 \rangle = \langle u, e_3 \rangle . $$ To prove the converse we assume that $(d,u) \epsilonquiv (d' , u')$. We have one of the two possibilities and we will consider just the second one. So, assume that $d=d' >0$, $\| u \| = \| u '\|$, and $\langle u, e_3 \rangle = \langle u' , e_3 \rangle $. Then there exists an orthogonal matrix $Q$ such that $Qe_3 = e_3$ and $Qu = u'$. The orthogonal complement of $e_3$ and $u$ is one-dimensional (if $e_3$ and $u$ are linearly independent) or two-dimensional (if $e_3$ and $u$ are linearly dependent). We have a freedom to choose the action of $Q$ on the orthogonal complement of $e_3$ and $u$ (of course, up to the requirement that $Q$ is an orthogonal matrix). In particular, we can choose $Q$ in such a way that $\det Q =1$. It follows that $T_{d'} = QT_d Q^T $ and $u' = Qu$, as desired. \section{Super-alternative locally complex algebras} \label{Secsuper} Let us call an algebra $A$ a {\epsilonm super-alternative algebra} if it is $\mathbb Z_2$-graded, $A = A_0\oplus A_1$, and the alternativity conditions \epsilonqref{alt} hold for all its homogeneous elements. Equivalently, \begin{equation}\label{alte} u^2 x = u(ux),\,\, xu^2 = (xu)u \quad\mbox{for all $u\in A_i$, $i\in \mathbb{Z}_2$, $x\in A$,} \epsilonnd{equation} or, in the linearized form, \begin{eqnarray}\nonumber &&(uv + vu) x = u(vx) + v(ux),\\ \label{alte0}&&x(uv+vu) = (xu)v + (xv)u \quad\mbox{for all $u,v\in A_i$, $i\in \mathbb{Z}_2$ , $x\in A$.} \epsilonnd{eqnarray} The notion of a super-alternative algebra should not be confused with the notion of an {\epsilonm alternative superalgebra}. The latter is defined through the alternativity of the Grassmann envelope of $A$. It turns out that nontrivial examples of alternative superalgebras exist only very exceptionally: prime alternative superalgebras of characteristic different from $2$ and $3$ are either associative or their odd part is zero \cite{SZ}. 
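Conditions \epsilonqref{alte} and \epsilonqref{alte0} are easy to test mechanically once an algebra is given by structure constants. The following short Python sketch is included only as an illustration and is not part of any proof; it checks \epsilonqref{alte} on random homogeneous elements for $\mathbb{H}$ with the grading $A_0={\rm span}(1,i)$, $A_1={\rm span}(j,k)$, and replacing the encoded multiplication table by any other one gives the same kind of check that is used below (by computer) for the algebras $\mathcal{T}O$ and $\mathcal{T}S$.
\begin{verbatim}
import numpy as np

# Quaternion structure constants for the basis (1, i, j, k).
# T[a][b] encodes e_a * e_b as a signed 1-based basis index.
T = [[ 1,  2,  3,  4],
     [ 2, -1,  4, -3],
     [ 3, -4, -1,  2],
     [ 4,  3, -2, -1]]

def mult(x, y):
    """Multiply two elements given as coefficient vectors of length 4."""
    z = np.zeros(4)
    for a in range(4):
        for b in range(4):
            s = T[a][b]
            z[abs(s) - 1] += np.sign(s) * x[a] * y[b]
    return z

rng = np.random.default_rng(0)
even, odd = [0, 1], [2, 3]   # A_0 = span(1, i),  A_1 = span(j, k)
for part in (even, odd):
    for _ in range(100):
        u = np.zeros(4)
        u[part] = rng.normal(size=len(part))   # homogeneous element u
        x = rng.normal(size=4)                 # arbitrary element x
        uu = mult(u, u)
        assert np.allclose(mult(uu, x), mult(u, mult(u, x)))  # u^2 x = u(ux)
        assert np.allclose(mult(x, uu), mult(mult(x, u), u))  # x u^2 = (xu)u
print("super-alternativity conditions hold on the sampled elements")
\end{verbatim}
Of course, for $\mathbb{H}$ the test must succeed because $\mathbb{H}$ is associative; the point is only that the same routine applies verbatim to any multiplication table.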
As we shall see, super-alternative algebras are easier to find. Throughout this section {\epsilonm $A$ will be a super-alternative locally complex algebra}. Our goal is to classify all such algebras $A$. Obvious examples are $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, and $\mathbb{O}$, as we can always take the trivial $\mathbb{Z}_2$-grading (the odd part is $0$). Further, one can check by a straightforward calculation that if $\mathcal{A}AA_{n-1}$ is an alternative algebra, then every $u\in (\mathcal{A}AA_{n-1}\times 0)\cup (0\times \mathcal{A}AA_{n-1})$ satisfies \epsilonqref{alte} for every $x\in \mathcal{A}AA_n$. Therefore, $\mathcal{C}C$, $\mathbb{H}$, $\mathbb{O}$, and $\mathbb{S}$ are super-alternative algebras with respect to the natural $\mathbb{Z}_2$-grading mentioned in Section \ref{Prel}. Of course, the important information for us in this context is that $\mathbb{S}$ is also a super-alternative locally complex algebra. As we shall see, besides $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, $\mathbb{O}$ and $\mathbb{S}$ only two more algebras must be added to the complete list of such algebras. We continue by recording several simple but useful observations. First, the following special case of \epsilonqref{alte0} will often be used: (a) If $u,v\in A_i$, $i\in\mathbb{Z}_2$, are such that $uv+vu=0$, then $u(vx) =- v(ux)$ and $(xu)v=-(xv)u$ for all $x\in A$. If $v\in A_1$, then $v^2 \in A_0$; on the other hand, $v^2 = \lambda v +\mu$ for some $\lambda,\mu\in \mathcal{R}R$. Since $v\notin A_0$, we must have $\lambda =0$ and hence $v^2 =\mu\in\mathcal{R}R$. Since $A$ is locally complex, it follows that $\mu< 0$ if $v\ne 0$. Thus, we have (b) If $0\ne v\in A_1$, then there is $\alpha\in \mathcal{R}R$ such that $(\alpha v)^2 = -1$. Let $u\in A_0$ and $v\in A_1$ be such that $u^2= v^2=-1$. Using Lemma \ref{L0} we have $uv + vu \in\mathcal{R}R\cap A_1 =0$. Therefore $v(uv) = - v(vu) = -v^2 u = u$. Next, $(uv)v = uv^2 = -u$. Similarly we see that $(uv)u = - u(uv) = v$. Finally, using (a) we get $(uv)(uv) = -(uv)(vu) = v((uv)u) = v^2 = -1$. We have proved: (c) If $u\in A_0$ and $v\in A_1$ are such that $u^2= v^2=-1$, then $uv = -vu$, $v(uv) = - (uv)v = u$, $(uv)u = - u(uv) = v$, and $(uv)^2 = -1$. Let $u$ be a homogeneous element and suppose that $ux=0$ for some $x\in A$. If $u\ne 0$, then by multiplying this identity from the left by $u-t(u)$ it follows from \epsilonqref{alte} that $n(u)x=0$, and hence $x=0$. Similarly, $xu=0$ implies $x=0$ if $u\ne 0$. Thus: (d) Nonzero homogeneous elements are not zero divisors. It is clear that our conditions on $A$ imply that $A_0$ is a locally complex alternative algebra. Theorem \ref{TFZ} therefore tells us that $A_0$ is isomorphic to $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, or $\mathbb{O}$. If $A_1=0$, then we get the desired conclusion that $A=A_0$ is one of the algebras from the expected list. Without loss of generality we may therefore assume that $A_1\ne 0$. Given $0\ne u\in A_1$, it follows from (d) that $x\mapsto ux$ is an injective linear map from $A_0$ into $A_1$; the same rule defines an injective linear map from $A_1$ into $A_0$. We may therefore conclude that (e) $\dim A_0 = \dim A_1$. In particular, we now know that a super-alternative locally complex algebra must be finite dimensional. Moreover, its dimension can be only $1$, $2$, $4$, $8$, or $16$. We shall now consider separately each of the four possibilities concerning $A_0$. \begin{lemma} \label{case1} If $A_0 \cong \mathcal{R}R$, then $A\cong\mathcal{C}C$.
\epsilonnd{lemma} \begin{proof} By (b) there is $i\in A_1$ with $i^2=-1$, and hence $A\cong\mathcal{C}C$ by (e). \epsilonnd{proof} \begin{lemma} \label{case2} If $A_0 \cong \mathcal{C}C$, then $A\cong\mathbb{H}$. \epsilonnd{lemma} \begin{proof} We have $A_0 = \mathcal{R}R \oplus \mathcal{R}R i$ with $i^2=-1$. By (b) we may pick $j\in A_1$ such that $j^2=-1$. Setting $k=ij\in A_1$ it follows from (c) that $A$ contains a copy of $\mathbb{H}$. However, in view of (e) we actually have $A\cong\mathbb{H}$. \epsilonnd{proof} Let us now introduce another (an unexpected one for us) example of a super-alternative locally complex algebra. Let $\mathcal{T}O$ be the $8$-dimensional algebra with basis $\{1,f_1,\ldots,f_7\}$ and multiplication table \begin{center} \begin{small} \begin{tabular}{|r|r|r|r|r|r|r|r|} \hline & $f_1$ & $f_2$ & $f_3$ & $f_4$ & $f_5$ & $f_6$ & $f_7$\\\hline $f_1$ & $-1$ & $f_3$ & $-f_2$ & $f_5$ & $-f_4$ & $f_7$ & $-f_6$\\\hline $f_2$ & $-f_3$ & $-1$ & $f_1$ & $f_6$ & $-f_7$ & $-f_4$ & $f_5$\\\hline $f_3$ & $f_2$ & $-f_1$ & $-1$ & $f_7$ & $f_6$ & $-f_5$ & $-f_4$\\\hline $f_4$ & $-f_5$ & $-f_6$ & $-f_7$ & $-1$ & $f_1$ & $f_2$ & $f_3$\\\hline $f_5$ & $f_4$ & $f_7$ & $-f_6$ & $-f_1$ & $-1$ & $f_3$ & $-f_2$\\\hline $f_6$ & $-f_7$ & $f_4$ & $f_5$ & $-f_2$ & $-f_3$ & $-1$ & $f_1$\\\hline $ f_7$ & $f_6$ & $-f_5$ & $f_4$ & $-f_3$ & $f_2$ & $-f_1$ & $-1$\\\hline \epsilonnd{tabular} \epsilonnd{small} \epsilonnd{center} \begin{lemma} \label{lto} $\mathcal{T}O$ is a super-alternative locally complex algebra with zero divisors and without alter-scalar elements (and hence $\mathcal{T}O\not\cong\mathbb{O}$). \epsilonnd{lemma} \begin{proof} The fact that $\mathcal{T}O$ is locally complex follows from Lemma \ref{Tlc}\,(v). Let $\mathcal{T}O_0$ be the linear span of $1,f_1,f_2,f_3$, and let $\mathcal{T}O_1$ be the linear span of $f_4,f_5,f_6,f_7$. Then $\mathcal{T}O$ becomes a superalgebra with the even part $\mathcal{T}O_0\cong\mathbb{H}$. From the way we shall arrive at $\mathcal{T}O$ in the next proof it is not really surprising that $\mathcal{T}O$ is super-alternative. But we used Mathematica for the actual checking that this is indeed true. Note that $(f_1-f_4)(f_3 - f_6) =0$, so that $\mathcal{T}O$ has zero divisors. Let $a\in \mathcal{T}O$ be such that $x^2 a = x(xa)$ for all $x\in \mathcal{T}O$. From $(f_i + f_j)^2a= (f_i+f_j)((f_i + f_j)a)$, together with $f_i(f_ia) = f_j(f_ja) = -a$, it follows that $f_i(f_ja) + f_j(f_i a) = 0$ whenever $i\ne j$. Writing $a = \lambda_0 +\sum_{k=1}^7 \lambda_k f_k$ we thus have \begin{equation}\label{fijk} \sum_{k=1}^7 \lambda_k \mathcal{B}igl(f_i(f_j f_k) + f_j(f_i f_k)\mathcal{B}igr) = 0 \quad\mbox{whenever $i\ne j$.} \epsilonnd{equation} Chosing $i=1$ and $j=4$ it follows that $\lambda_2=\lambda_3 =\lambda_6 =\lambda_7=0$. Chosing, for example, $i=2$ and $j=7$ we further get $\lambda_1 =\lambda_4=0$, and chosing $i=3$ and $j=4$ finally leads to $\lambda_5=0$. Therefore $a =\lambda_0$ is a scalar. \epsilonnd{proof} \begin{lemma} \label{case3} If $A_0 \cong \mathbb{H}$, then $A\cong\mathbb{O}$ or $A\cong\mathcal{T}O$. \epsilonnd{lemma} \begin{proof} Let $\{1,i,j,k\}$ be a basis of $A_0$ where these elements have the usual meaning. Pick $f\in A_1$ with $f^2 = -1$. Then $f$ anticommutes with $i,j,k$ by (c). It is clear that $\{f,if,jf,kf\}$ is a basis of $A_1$. We claim that all elements in this basis pairwise anticommute. It is easy to see that $f$ anticommutes with each of $if,jf,kf$. 
Using (a) repeatedly we obtain $(if)(jf)=-(i(jf))f=(j(if))f = -(jf)(if)$. Other identities can be checked analogously. Since $i(jf)\in A_1$, we have \begin{equation}\label{lambde} i(jf) = \lambda_1 f + \lambda_2 if + \lambda_3 jf +\lambda_4 kf \epsilonnd{equation} for some $\lambda_i\in \mathcal{R}R$. From (a) we infer that $(i(jf))f = - (if)(jf)$. Similarly, using (a) and (c) we get $$ f(i(jf)) = -f((jf)i) = (jf)(fi) = -(jf)(if) = (if)(jf). $$ The last two identities show that $i(jf)$ anticommutes with $f$. Consequently, anticommuting \epsilonqref{lambde} with $f$ it follows that $\lambda_1=0$. A similar arguing shows that $i(jf)$ anticommutes with both $if$ and $jf$, which leads to $\lambda_2 = \lambda_3=0$. Note that (c) implies that the squares of both $kf$ and $i(jf)$ are equal $-1$. But then $\lambda_4^2 =1$, i.e., $\lambda_4 = 1$ or $\lambda_4=-1$. If $\lambda_4=1$, i.e., $i(jf) = kf$, then we set $f_1=i$, $f_2=j$, $f_3=k$, $f_4=f$, $f_5=if$, $f_6=jf$, and $f_7=kf$. Using the information we have, it is now just a matter of a routine calculation to verify that $A\cong\mathcal{T}O$. Since we know that $\mathbb{O}$ is a super-alternative locally complex algebra, the other possibility $\lambda_4=-1$ can lead only to $A\cong \mathbb{O}$. \epsilonnd{proof} The 16-dimensional analogue of $\mathcal{T}O$ is the algebra which we denote by $\mathcal{T}S$ and define as follows: if $\{1,f_1,\ldots,f_{15}\}$ is its basis, then the multiplication table is \begin{center} \begin{tiny} \begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|} \hline & $f_1$& $f_2$& $f_3$& $f_4$& $f_5$& $f_6$& $f_7$& $f_8$& $f_9$&$f_{10}$& $f_{11}$& $f_{12}$& $f_{13}$& $f_{14}$& $f_{15}$\\\hline $f_1$& $-1$& $f_3$&$-f_2$& $f_5$& $-f_4$& $-f_7$& $f_6$& $f_9$& $-f_8$& $-f_{11}$& $f_{10}$&$-f_{13}$& $f_{12}$& $-f_{15}$& $f_{14}$\\\hline $f_2$& $-f_3$& $-1$& $f_1$& $f_6$&$f_7$& $-f_4$& $-f_5$& $f_{10}$& $f_{11}$& $-f_8$& $-f_9$& $-f_{14}$& $f_{15}$& $f_{12}$& $-f_{13}$\\\hline $f_3$& $f_2$& $-f_1$& $-1$& $f_7$& $-f_6$& $f_5$& $-f_4$&$f_{11}$& $-f_{10}$& $f_9$& $-f_8$& $f_{15}$& $f_{14}$& $-f_{13}$& $-f_{12}$\\\hline $f_4$&$-f_5$& $-f_6$& $-f_7$& $-1$& $f_1$& $f_2$& $f_3$& $f_{12}$& $f_{13}$& $f_{14}$& $-f_{15}$& $-f_8$& $-f_9$& $-f_{10}$& $f_{11}$\\\hline $f_5$& $f_4$& $-f_7$& $f_6$&$-f_1$& $-1$& $-f_3$& $f_2$& $f_{13}$& $-f_{12}$& $-f_{15}$& $-f_{14}$& $f_9$&$-f_8$& $f_{11}$& $f_{10}$\\\hline $f_6$& $f_7$& $f_4$& $-f_5$& $-f_2$& $f_3$& $-1$& $-f_1$& $f_{14}$& $f_{15}$& $-f_{12}$& $f_{13}$& $f_{10}$& $-f_{11}$& $-f_8$& $-f_9$\\\hline $f_7$& $-f_6$& $f_5$& $f_4$& $-f_3$& $-f_2$& $f_1$& $-1$& $f_{15}$& $-f_{14}$& $f_{13}$& $f_{12}$& $-f_{11}$& $-f_{10}$& $f_9$& $-f_8$\\\hline $f_8$& $-f_9$& $-f_{10}$& $-f_{11}$& $-f_{12}$& $-f_{13}$& $-f_{14}$& $-f_{15}$& $-1$&$f_1$& $f_2$& $f_3$& $f_4$& $f_5$& $f_6$& $f_7$\\\hline $f_9$& $f_8$& $-f_{11}$& $f_{10}$& $-f_{13}$& $f_{12}$& $-f_{15}$& $f_{14}$& $-f_1$& $-1$& $-f_3$& $f_2$& $-f_5$& $f_4$& $-f_7$& $f_6$\\\hline $f_{10}$& $f_{11}$& $f_8$& $-f_9$& $-f_{14}$& $f_{15}$& $f_{12}$& $-f_{13}$& $-f_2$& $f_3$& $-1$& $-f_1$& $-f_6$& $f_7$& $f_4$& $-f_5$\\\hline $f_{11}$& $-f_{10}$& $f_9$& $f_8$& $f_{15}$& $f_{14}$& $-f_{13}$& $-f_{12}$& $-f_3$& $-f_2$& $f_1$& $-1$& $f_7$& $f_6$& $-f_5$& $-f_4$\\\hline $f_{12}$& $f_{13}$& $f_{14}$& $-f_{15}$& $f_8$& $-f_9$& $-f_{10}$& $f_{11}$& $-f_4$& $f_5$& $f_6$& $-f_7$& $-1$& $-f_1$& $-f_2$& $f_3$\\\hline $f_{13}$& $-f_{12}$& $-f_{15}$& $-f_{14}$& $f_9$& $f_8$& $f_{11}$& $f_{10}$& $-f_5$& $-f_4$& $-f_7$& $-f_6$& $f_1$& $-1$& $f_3$& $f_2$\\\hline $f_{14}$& 
$f_{15}$& $-f_{12}$& $f_{13}$& $f_{10}$& $-f_{11}$& $f_8$& $-f_9$& $-f_6$& $f_7$& $-f_4$& $f_5$& $f_2$& $-f_3$& $-1$& $-f_1$\\\hline $f_{15}$& $-f_{14}$& $f_{13}$& $f_{12}$& $-f_{11}$& $-f_{10}$& $f_9$& $f_8$& $-f_7$& $-f_6$& $f_5$& $f_4$&$-f_3$& $-f_2$& $f_1$& $-1$\\\hline \epsilonnd{tabular} \epsilonnd{tiny} \epsilonnd{center} The proof of the next lemma is similar to that of Lemma \ref{lto}. Therefore we omit details. \begin{lemma} \label{lts} $\mathcal{T}S$ is a super-alternative locally complex algebra without alter-scalar elements (and hence $\mathcal{T}S\not\cong\mathbb{S}$). \epsilonnd{lemma} The final lemma is similar to Lemma \ref{case3}, but the proof is somewhat more complicated. One of the problems that we have to face in this proof is that we do not have a complete freedom in the selection of an element playing the role of $f$ from the proof of Lemma \ref{case3}. While $f$ was an arbitrary element in $A_1$ with square $-1$, now we shall have to find a special one. \begin{lemma} \label{case4} If $A_0 \cong \mathbb{O}$, then $A\cong\mathbb{S}$ or $A\cong\mathcal{T}S$. \epsilonnd{lemma} \begin{proof} Let $\{1,e_1,\ldots,e_7\}$ be a basis of $A_0$ whose multiplication table is given in Section \ref{Prel}. We begin with three claims needed for future reference. {\sc Claim 1}: Let $i,j\in\{1,2,\ldots,7\}$, $i\ne j$. If $p\in A_1$, then $q = p + (e_i e_j)(e_i (e_jp))$ satisfies $(e_ie_j)q = - e_i(e_jq)$. Indeed, by \epsilonqref{alte} we have $(e_ie_j)q = (e_ie_j)p - e_i(e_jp)$, while using (a) and \epsilonqref{alte} we get \begin{align*} e_i(e_jq) &= e_i(e_jp) + e_i(e_j((e_i e_j)(e_i (e_jp)))) = e_i(e_jp) - e_i((e_ie_j)(e_j (e_i (e_jp))))\\ &= e_i(e_jp) + (e_ie_j)(e_i(e_j (e_i (e_jp))) = e_i(e_jp) - (e_ie_j)(e_j(e_i (e_i (e_jp)))\\ &= e_i(e_jp) + (e_ie_j)(e_j(e_j p)) = e_i(e_jp) - (e_ie_j)p, \epsilonnd{align*} so that $(e_ie_j)q = - e_i(e_jq)$. {\sc Claim 2}: Let $i,j,k\in\{1,2,\ldots,7\}$ be such that $e_i,e_j,e_ie_j,e_k$ are linearly independent, and let $s\in A_1$ be such that $(e_ie_j)s =- e_i(e_j s)$. Then $t= s + (e_ie_k)(e_i(e_ks))$ also satisfies $(e_ie_j)t =- e_i(e_jt)$. (Let us add that (a) implies $t= s + (e_ke_i)(e_k(e_is))$, and that $(e_ie_j)z =- e_i(e_j z)$ is equivalent to $(e_je_i)z =- e_j(e_i z)$; the order of indices is thus irrelevant.) Indeed, by now already familiar arguing we have \begin{align*} (e_ie_j)t &= (e_ie_j)s + (e_ie_j)((e_ie_k)(e_i(e_ks))) = (e_ie_j)s - (e_ie_k)((e_ie_j)(e_i(e_ks)))\\ &= (e_ie_j)s + (e_ie_k)(e_i((e_ie_j)(e_ks))) = (e_ie_j)s - (e_ie_k)(e_i(e_k((e_ie_j)s)))\\ &= -\bigl(e_i(e_js) - (e_ie_k)(e_i(e_k(e_i(e_js))))\bigr) = -\bigl( e_i(e_js) + (e_ie_k)(e_k(e_i(e_i(e_js))))\bigr)\\ &= -\bigl( e_i(e_js) - (e_ie_k)(e_k(e_js))\bigr) = -\bigl( e_i(e_js) + e_i(e_i ((e_ie_k)(e_k(e_js))))\bigr)\\ &= -\bigl( e_i(e_js) - e_i((e_ie_k) (e_i(e_k(e_js))))\bigr) = -\bigl( e_i(e_js) + e_i((e_ie_k) (e_i(e_j(e_ks))))\bigr)\\ &=-\bigl( e_i(e_js) - e_i((e_ie_k) (e_j(e_i(e_ks))))\bigr) = -\bigl( e_i(e_js) + e_i(e_j((e_ie_k)(e_i(e_ks))))\bigr)\\ &=- e_i(e_jt). \epsilonnd{align*} {\sc Claim 3}: Let $i,j,k\in\{1,2,\ldots,7\}$, $i\ne j$, and let $\epsilon\in \mathcal{R}R$ and $w\in A_1$ be such that $(e_ie_j)w =\epsilonpsilon e_i(e_j w)$. Set $u = e_kw$. If $k\in \{i,j\}$, then $(e_ie_j)u =\epsilonpsilon e_i(e_j u)$, and if $k\notin \{i,j\}$, then $(e_ie_j)u =-\epsilonpsilon e_i(e_j u)$. If $k\in \{i,j\}$, then we may assume $k = j$ without loss of generality. 
We have $$ (e_i e_j)(u) = (e_i e_j)(e_ jw)= - e_j ((e_i e_j) w) = -\epsilon e_j (e_i (e_jw)) = \epsilon e_i (e_j u). $$ If $k\notin \{i,j\}$, then we have \begin{align*} &(e_i e_j)(u) = (e_i e_j)(e_ kw)= - e_k ((e_i e_j) w) \\ =& -\epsilon e_k (e_i (e_jw)) =\epsilon e_i(e_k(e_j w)) =- \epsilon e_i (e_j u). \epsilonnd{align*} After establishing these auxiliary claims, we now begin the actual proof by picking a nonzero $u\in A_1$. As mentioned above, an arbitrary chosen $u$ may not be the right choice, so we have to "remedy" it. Let $v' = u + (e_1e_2)(e_1(e_2u))\in A_1$. By Claim 1, $v'$ satisfies $(e_1e_2)v'=-e_1(e_2v')$. If $v'=0$, then we have $(e_1e_2)u=e_1(e_2u)$. But then $v'' = e_3u$ satisfies $(e_1e_2)v''=-e_1(e_2v'')$ by Claim 3. Thus, in any case there is a nonzero $v\in A_1$ such that $$(e_1e_2)v=- e_1(e_2v).$$ Now consider $w' = v + (e_1 e_4)(e_1(e_4v))$. By Claim 1 we have $(e_1e_4)w' = - e_1(e_4w')$, and by Claim 2 we have $(e_1e_2)w'=- e_1(e_2w')$. If $w'=0$, then $(e_1e_4)v= e_1(e_4v)$. But then $w'' = e_2v$ satisfies $(e_1e_2)w''=- e_1(e_2w'')$ and $(e_1e_4)w'' = - e_1(e_4w'')$. Thus, there exists a nonzero $w\in A_1$ satisfying $$(e_1e_2)w=-e_1(e_2w),\,\, (e_1e_4)w=- e_1(e_4w).$$ We now repeat the same procedure with respect to $e_2$ and $e_4$. That is, we introduce $x'= w + (e_2e_4)(e_2(e_4w))$, and apply Claims 1 and 2 to conclude that $(e_1e_2)x'=-e_1(e_2x')$, $(e_1e_4)x'=- e_1(e_4x')$, and $(e_2e_4)x'=- e_2(e_4x')$. If $x'=0$, then $(e_2e_4)w = e_2(e_4w)$, and therefore Claim 3 tells us that $(e_1e_2)x''=-e_1(e_2x'')$, $(e_1e_4)x''=- e_1(e_4x'')$, and $(e_2e_4)x''=- e_2(e_4x'')$, where $x'' = e_1 w$. In any case we have found a a nonzero $x\in A_1$ satisfying $$(e_1e_2)x=- e_1(e_2x),\,\, (e_1e_4)x=- e_1(e_4x),\,\,(e_2e_4)x=- e_2(e_4x).$$ Considering $y' = x + (e_3e_4)(e_3(e_4x))$ we see from Claim 2 that $(e_1e_4)y'=- e_1(e_4y')$ and $(e_2e_4)y'=- e_2(e_4y')$, while apparently we cannot conclude that also $(e_1e_2)y'=- e_1(e_2y')$. However, multiplying $(e_1e_2)x=- e_1(e_2x)$ from the left by $e_1$ we get $e_1( (e_1e_2)x)= e_2x$, which can be written as $e_1(e_3x) = - (e_1 e_3)x$. Therefore Claim 2 yields $e_1(e_3y') = - (e_1 e_3)y'$. Multiplying this from the left by $e_1$ we arrive at the desired identity $(e_1e_2)y'=- e_1(e_2y')$. Also, $(e_3e_4)y'=- e_3(e_4y')$ holds by Claim 1. We still have to deal with the case where $y'=0$, i.e., $(e_3e_4)x= e_3(e_4x)$. The usual reasoning now does not work, since we do not have "enough room" to apply Claim 3. Thus, the final conclusion is that there exists a nonzero $y\in A_1$ such that $$ (e_1e_2)y=-e_1(e_2y),\,\,(e_1e_4)y=- e_1(e_4y),\,\,(e_2e_4)y=- e_2(e_4y), \,\,(e_3e_4)y=\pm e_3(e_4y). $$ In view of (b) we may assume without loss of generality that $y^2=-1$. Let us first consider the case where $(e_3e_4)y=e_3(e_4y)$. We set $f_8 = y$ and $f_i = e_i$, $f_{i+8}= f_i f_8$, $i=1,\ldots,7$. By standard calculations one can now verify that $A\cong \mathcal{T}S$; checking all details is lengthy and tedious, but straigtforward. The other possibility where $(e_3e_4)y=-e_3(e_4y)$ of course leads to $A\cong \mathbb{S}$. \epsilonnd{proof} All lemmas together yield our main result. \begin{theorem}\label{MT} A super-alternative locally complex algebra is isomorphic to $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, $\mathbb{O}$, $\mathcal{T}O$, $\mathbb{S}$, or $\mathcal{T}S$. \epsilonnd{theorem} \begin{remark}\label{rema1} In the course of the proof we did not use the assumption that \epsilonqref{alte} holds for all $u,x\in A_1$. 
Therefore we can replace the super-alternativity assumption by a slightly milder one. \epsilonnd{remark} This list reduces to Cayley-Dickson algebras under the additional assumption that there exist alter-scalar elements. \begin{corollary}\label{MTc1} A super-alternative locally complex algebra containing alter-scalar elements is isomorphic to $\mathcal{R}R$, $\mathcal{C}C$, $\mathbb{H}$, $\mathbb{O}$, or $\mathbb{S}$. \epsilonnd{corollary} \begin{corollary}\label{MTc2} A super-alternative locally complex algebra which contains alter-scalar elements, but is not alternative, is isomorphic to $\mathbb{S}$. \epsilonnd{corollary} Let $A$ be an algebra, and let $x\in A$. The {\epsilonm annihilator} of $x$ is the space Ann$(x) = \{y\in A\,|\, xy =0\}$. If $A =\mathcal{A}AA_n$ is a Cayley-Dickson algebra, then the dimension of Ann$(x)$ is a multiple of $4$ \cite{Biss, Moreno}. Moreover, if $A = \mathcal{A}AA_4 =\mathbb{S}$, then the dimension of Ann$(x)$ is exactly $4$ for every zero divisor $x$ in $A$ \cite[Section 12]{Biss}. The algebras $\mathcal{T}O$ and $\mathcal{T}S$ do not have this property. It is easy to check that $x =f_1-f_4\in \mathcal{T}O$ has the $2$-dimensional annihilator spanned by $f_2 + f_7$ and $f_3 - f_6$. Further, the dimension of the annihilator of $x = f_3 + f_{12}\in\mathcal{T}S$ is $6$; it is spanned by $f_1 + f_{14}$, $f_2 - f_{13}$, $f_{4} + f_{11}$, $f_{5} + f_{10}$, $f_6-f_9$, and $f_7-f_8$. Thus, we have \begin{corollary}\label{MTc3} Let $A$ be a super-alternative locally complex algebra which is not a division algebra. If the dimension of {\rm Ann}$(x)$ is $4$ for every zero divisor in $A$, then $A\cong \mathbb{S}$. \epsilonnd{corollary} One can check that $$ 1\mapsto 1,\,\, e_1\mapsto f_1,\,\, e_2\mapsto f_2,\,\, e_3\mapsto f_3,\,\, e_4\mapsto f_{12},\,\, e_5\mapsto -f_{13},\,\, e_6\mapsto-f_{14},\,\, e_7\mapsto -f_{15} $$ defines an embedding of $\mathcal{T}O$ into $\mathbb{S}$. Thus, both $\mathbb{O}$ and $\mathcal{T}O$ can be viewed as subalgebras of $\mathbb{S}$. Chan and \mathcal{D}J okovi\' c proved that $\mathbb{S}$ has $6$-dimensional subalgebras, which, however, are not contained in $8$-dimensional subalgebras of $\mathbb{S}$ \cite[Corollary 3.6, Theorem 8.1]{ChanDj}. Accordingly, $\mathbb{O}$ and $\mathcal{T}O$ do not have $6$-dimensional subalgebras. Further, $\mathbb{S}$ does not contain $5$-dimensional subalgebras \cite[Proposition 4.4]{ChanDj}. This does not hold for $\mathcal{T}S$. For example, the linear span of $1$, $f_{1}+f_{14}$, $f_{3}-f_{12}$, $f_{6}-f_{9}$, and $f_{7}-f_{8}$ is a $5$-dimensional subalgebra of $\mathcal{T}S$. Combining all these we get our final corollary. \begin{corollary}\label{MTc4} Let $A$ be a super-alternative locally complex algebra. If $A$ has $6$-dimensional subalgebras, but does not have $5$-dimensional subalgebras, then $A\cong \mathbb{S}$. \epsilonnd{corollary} {\bf Acknowledgement}. The authors are grateful to the referee for careful reading of the paper and the resulting useful remarks. \begin{thebibliography}{99} \bibitem{Baez}J.\,C. Baez, The octonions, {\epsilonm Bull. Amer. Math. Soc.} {\bf 39} (2002), 145-205. \bibitem{Biss}D.\,K. Biss, D. Dugger, D.\,C. Isaksen, Large annihilators in Cayley-Dickson algebras, {\epsilonm Comm. Algebra} {\bf 36} (2008), 632-664. \bibitem{BM} R. Bott, J. Milnor, On the parallelizability of the spheres, {\epsilonm Bull. Amer. Math. Soc.} {\bf 64} (1958), 87-89. \bibitem{BH} M. Bremner, I. Hentzel, Identities for algebras obtained from the Cayley-Dickson process, {\epsilonm Comm. 
Algebra} {\bf 29} (2001), 3523-3534. \bibitem{CM}A.\,J. Calderon Martin, C. Martin Gonzalez, Two-graded absolute valued algebras, {\epsilonm J. Algebra} {\bf 292} (2005), 492-515. \bibitem{ChanDj} K.-C. Chan, D. \v Z. \mathcal{D}J okovi\' c, Conjugacy classes of subalgebras of the real sedenions, {\epsilonm Canad. Math. Bull.} {\bf 49} (2006), 492-507. \bibitem{Die} E. Dieterich, Real quadratic division algebras, {\epsilonm Comm. Algebra} {\bf 28} (2000), 941-947. \bibitem{AS} P. Eakin, A. Sathaye, On automorphisms and derivations of Cayley-Dickson algebras, {\epsilonm J. Algebra} {\bf 129} (1990), 263-278. \bibitem{Eld}A. Elduque, Quadratic alternative algebras, {\epsilonm J. Math. Physics} {\bf 31} (1990), 1-5. \bibitem{F} F.\,G. Frobenius, \" Uber lineare Substitutionen und bilineare Formen, {\epsilonm J. Reine Angew. Math.} {\bf 84} (1878) 1-63. \bibitem{Her}I.\,N. Herstein, {\epsilonm Topics in algebra}, John Wiley and Sons, 1975. \bibitem{Im} K. Imaeda, M. Imaeda, Sedenions: algebra and analysis, {\epsilonm Appl. Math. Comp.} {\bf 115} (2000), 77-88. \bibitem{Ker} M. Kervaire, Non-parallelizability of the $n$ sphere for $n > 7$, {\epsilonm Proc. Nat. Acad. Sci. USA} {\bf 44} (1958), 280-283. \bibitem{Kuwata} S. Kuwata, Born-Infeld Lagrangian using Cayley-Dickson algebras, {\epsilonm Internat. J. Modern Physics A} {\bf 19} (2004), 1525-1548. \bibitem{Lam} T.\,Y. Lam, {\epsilonm A first course in noncommutative rings}, Springer, 1991. \bibitem{Moreno}G. Moreno, The zero divisors of the Cayley-Dickson algebras over the real numbers, {\epsilonm Bol. Soc. Mat. Mexicana} {\bf 4} (1998), 13-28. \bibitem{Moreno2}G. Moreno, Alternative elements in the Cayley-Dickson algebras, {\epsilonm Topics in mathematical physics, general relativity and cosmology in honor of Jerzy Plebañski}, 333-346, World Sci. Publ., Hackensack, NJ, 2006. \bibitem{Pal} R.\,S. Palais, The classification of real division algebras, {\epsilonm Amer. Math. Monthly} {\bf 75} (1968), 366-368. \bibitem{SZ}E.\,I. Zelmanov, I.\,P. Shestakov, Prime alternative superalgebras and nilpotence of the radical of a free alternative algebra, {\epsilonm Izv. Akad. Nauk SSSR Ser. Mat.} {\bf 54} (1990), 676-693; English transl. in {\epsilonm Math. USSR Izv.} {\bf 37}(1991), 19-36. \bibitem{ZSSS} K.\,A. Zhevlakov, A.\,M. Slinko, I.\,P. Shestakov, A.\,I. Shirshov, {\epsilonm Rings that are nearly associative}, Academic Press, 1982. \bibitem{Z}M. Zorn, Theorie der alternativen Ringe, {\epsilonm Abhandlungen Hamburg} {\bf 8} (1930), 123-147. \epsilonnd{thebibliography} \epsilonnd{document}
\begin{document} \title{A group-theoretical classification of three-tone and four-tone harmonic chords} \author{Jason K.C. Polak} \date{\today} \maketitle \begin{abstract} We classify three-tone and four-tone chords based on subgroups of the symmetric group acting on chords contained within a twelve-tone scale. The actions are inversion, major-minor duality, and augmented-diminished duality. These actions correspond to elements of symmetric groups, and also correspond directly to intuitive concepts in the harmony theory of music. We produce a graph of how these actions relate different seventh chords that suggests a concept of distance in the theory of harmony. \end{abstract} \tableofcontents \section{Introduction} Early on in music theory we learn of the harmonic triads: major, minor, augmented, and diminished. Later on we find out about four-note chords such as seventh chords. We wish to describe a classification of these types of chords using the action of the finite symmetric groups. We represent notes by a number in the set $\mathbb{Z}/12 = \{0, 1,2,\dots,10,11\}$. Under this scheme, for example, $0$ represents $C$, $1$ represents $C\sharp$, $2$ represents $D$, and so on. We consider only pitch classes modulo the octave. We describe the sounding of simultaneous notes by an ordered increasing list of integers in $\mathbb{Z}/12$ surrounded by parentheses. For example, a major second interval $M2$ would be represented by $(0,2)$, and a major chord would be represented by $(0,4,7)$. We denote the set of all simultaneous $k$-notes by $N_k$. So $(0,4,7)\in N_3$. Each ordered list describing a set of tones gives rise to a partition of $12$. Consider, for example, the major chord $(0,4,7)$. The differences between these notes are $4, 3, 5$, the last distance of $5$ being the distance from $7$ to $12$. In general, if $(0,a_1,\dots,a_{k-1})$ is a $k$-tone chord then it is associated to a partition of $12$ via the map \begin{align*} (0,a_1,a_2,a_3,\dots,a_{k-1})\mapsto [a_1,a_2-a_1,a_3-a_2,\dots,12-a_{k-1}]. \end{align*} We denote the set of all such partitions of $12$ by $P_{12}$, and we consider partitions unordered. To write a partition, we use square brackets, so that the partition associated to $(0,4,7)$ is the partition $[3,4,5]$. Since we consider all partitions unordered, we usually write the numbers of a partition in increasing order. Under this scheme, different intervals or chords may correspond to the same partition. For example, the partition $[3,4,5]$ is also associated to $(0, 0+3, 0 +3 +5) = (0,3,8)$. To summarize, we can say that to each simultaneous playing of notes, we can assign a partition of twelve. Each partition may give rise to more than one way of playing a set of simultaneous notes, but since each way of doing so corresponds to a different way of adding up the partition of twelve, we can apply elements of the appropriate symmetric group to obtain every possible set of simultaneous notes. This is the basis of the classification scheme we introduce. We also use this classification scheme to create a graph showing how each chord is related with regard to each of the introduced group actions. The method of studying group actions on chords goes back to \cite{riemann1919handbuch}. Our method of relating chords is similar to the one used in \cite{crans2009musical}, but the authors of that paper are not concerned with pitch classes or with classifying chords by partitions.
They also consider triads, whereas the main focus of this work is four-tone harmonic chords or \emph{seventh} chords. The work of \cite{cannas2017group} is focused on seventh chords, but again is more concerned with absolute pitches. Also, unlike in these previous interesting studies, we are more concerned with deriving the basic harmonic three-tone and four-tone chords axiomatically and with representing this relationship graphically. \section{Three-tone harmonic chords} As a warmup for three-tone chords, we consider the following definition, which gives us the usual three-tone harmonic chords. \begin{definition} A harmonic three-tone chord is any chord whose partition does not contain any $x \leq 2$. \end{definition} Therefore, the major chord $(0,4,7)$, whose partition is $[3,4,5]$, is an example of a harmonic chord since it does not contain $1$ or $2$. We will now see that this definition encompasses the usual harmonic chords that we learn in beginning music theory. In order to classify harmonic three-tone chords, we first list all partitions of twelve that do not have any $x\leq 2$ in them. There are three of these: $[3,3,6], [3,4,5],$ and $[4,4,4]$. Each may correspond to many chords; however, to list all such chords we just need to find one chord and apply elements of the symmetric group $S_3$ to it. We start with $[3,4,5]$. One such corresponding chord is $(0,4,7)$, the major chord. Here are the possibilities we are starting with: \begin{enumerate} \item $(0,4,7) \mapsto [3,4,5]$, the major chord. \item $(0,3,6) \mapsto [3,3,6]$, the diminished chord. \item $(0,4,8) \mapsto [4,4,4]$, the augmented chord. \end{enumerate} We now consider two operations on chords. The first is inversion, named after the corresponding inversion in music theory. It is the function \begin{align*} i_k: N_k&\longrightarrow N_k\\ (0,a_1,\dots,a_{k-1})&\longmapsto (0, a_2-a_1,\dots, a_{k-1}-a_1, 12-a_1). \end{align*} When no confusion is possible, we denote $i_k$ by $i$. Despite its name, inversion does not have order two except when $k=2$, but we keep the terminology to be consistent with music theory. In fact, it is easily seen that $i_k$ has order $k$; that is, $i_k^k = {\rm id}$, where ${\rm id}$ is the identity operator. There is a second operator, which we call major-minor duality, denoted by $d_k$: \begin{align*} d_k: N_k&\longrightarrow N_k\\ (0,a_1,\dots,a_{k-1})&\longmapsto (0,12-a_{k-1},12-a_{k-2},\dots,12-a_1). \end{align*} Again, when $k$ is clear, we denote $d_k$ by $d$. The duality operator satisfies $d^2 = {\rm id}$. In this way, we get an action of the dihedral group $D_k$ on $N_k$. Recall that the dihedral group on $k$ vertices is a group of order $2k$ and is given by generators and relations in the presentation \begin{align*} D_k = \langle~r,s ~|~ r^k, s^2, rs = sr^{-1}~\rangle. \end{align*} The action of $D_k$ on $N_k$ is that of $r$ acting as $i_k$ and $s$ acting as $d_k$. Let us now return to $k = 3$, the situation of three-tone chords. In this case, $D_3 = S_3$, where in general $S_k$ is the full symmetric group on $k$ symbols. We now recall the chords we listed corresponding to all partitions satisfying the definition of a harmonic three-tone chord. By substituting these chords into the functions of inversion and duality, we will obtain all harmonic three-tone chords by definition. We have for inversion: \begin{enumerate} \item The major chord: under inversion, we obtain the chords \begin{align*} &\{ (0,4,7), i(0,4,7) = (0,3,8), i(0,3,8) = (0,5,9)\}\\ =&\{ (0,4,7), (0,3,8), (0,5,9)\}.
\end{align*} by successive application of $i = i_3$. In music theory, these correspond to the root, first inversion, and second inversion respectively of the major chord. Applying the duality operator to the above chords gives: \begin{align*} &\{ d(0,4,7) = (0,5,8), d(0,3,8) = (0,4,9), d(0,5,9) = (0,3,7)\}\\ =&\{ (0,5,8), (0,4,9), (0,3,7)\}. \end{align*} We recognize this as the familiar minor chords and its inversions. In fact, we see from this that through duality, the root, first, and second inversions of the major chord correspond to the second, first, and root positions of the minor chord respectively. This follows from the relation property $di^k = i^{-k}d$ satisfied by the dihedral group. Because the major and minor chords are dual, the $d$ operator could be thought of as major-minor duality. \item The diminished chord: under inversion, we obtain the chords: \begin{align*} \{ (0,3,6), i(0,3,6) = (0,3,9), i(0,3,9) = (0,6,9)\} \end{align*} by successive applications of $i$. These are the root, first inversion, and second inversion of the diminished chord. Applying the duality operator gives $d(0,3,6) = (0,6,9)$. Therefore, we conclude that the duality operator transposes the root and second inversion of the diminished chord and fixes the first inversion. \item The augmented chord: under inversion, we find that \begin{align*} i(0,4,8) = (0,4,8). \end{align*} So, under inversion, the augmented chord is stable. In some sense it is this fixed property of the augmented chord that actually demands resolution. Not only that, but $d(0,4,8) = (0,4,8)$. Therefore, the augmented chord is both inversion stable and major-minor dual. \end{enumerate} \section{Four-tone harmonic chords} Four-tone harmonic chords are more complex than three-tone harmonic chords. Recall that we have defined a three-tone harmonic chord to be one whose partition does not contain $1$ or $2$. On the other hand, it makes less sense to define a four-tone harmonic chord in exactly the same way, since that would exclude seventh chords. Instead, we use the following. \begin{definition}\label{defn:fourtoneharmonic} A four-tone harmonic chord is a four-tone chord whose partition contains at most one $x\leq 2$. \end{definition} We have already described two operators $i_k$ and $d_k$, called inversion and duality, that will operate on the set $N_k$, the set of $k$-tone chords. However, in the case of $k=4$, the symmetric group $S_4$ has $24$ elements, which is larger than the dihedral group $D_4$ of eight elements. Can we obtain any additional actions on $N_4$, and if so, do those actions contain any new operators of harmonic significance? We will answer this question now. To begin, we again list all partition of $12$ of length $4$ that satisfy Definition~\ref{defn:fourtoneharmonic} of a four-tone harmonic chord along with one example of each type. To keep the verbiage simple for seventh chords, we describe seventh chords first by their three-tone harmonic chord and the type of seventh added to them as follows: \begin{enumerate} \item $(0,4,7,11)\mapsto [4, 4, 3, 1]$, the major-major seventh (also known as the major seventh chord) \item $(0,4,7,10) \mapsto [4, 3, 3, 2]$, the major-minor seventh chord (also known as the dominant seventh chord) \item $(0,3,6,9) \mapsto [3, 3, 3, 3]$, the diminished-diminished seventh chord (also known as the fully diminished seventh chord) \end{enumerate} We can calculate how many chords we will find before we actually find them. For the permutations $[4,4,3,1]$ and $[4,3,3,2]$ there are $4!/2! 
= 12$ possible chords, and there is only one possible chord for $[3,3,3,3]$. Therefore, there are $25$ four-tone harmonic chords. As before, we write down all the possible chords we can obtain from these using the inversion and duality operators. Let us first discuss the major-major seventh. Its inversions are the orbit of the major-major seventh element $(0,4,7,11)$ under the inversion operator. Explicitly: \begin{align*} \{ (0,4,7,11), (0,3,7,8), (0,4,5,9), (0,1,5,8)\}. \end{align*} Something else which is quite intriguing is that $d(0,4,7,11) = (0,1,5,8)$. That is, $i^3 = d$ when restricted to the orbit of the major-major seventh under $i$. In other words, adding the major seventh interval to the major chord has stabilized the duality between major and minor, so that the dual of the major-major seventh orbit is again the major-major seventh orbit. The harmonic significance is that the inversions of the major-major seventh pair up as follows under duality: \begin{align*} (0,4,7,11)&\leftrightarrow (0,1,5,8)\\ (0,3,7,8)&\leftrightarrow (0,4,5,9). \end{align*} This pairing of inversions means that in harmony, inversions of the major-major seventh according to this pairing also function as major-minor tension. This relationship was not present in three-tone chord harmony. The more prevalent major-minor seventh chord, often functioning as a dominant seventh chord, behaves differently. We still have its inversions: \begin{align*} \{ (0,4,7,10), (0,3,6,8), (0,3,5,9), (0,2,6,9) \}. \end{align*} Applying the duality operator to this set gives the following chords and their inversions: \begin{align*} \{ (0,2,5,8), (0,3,6,10), (0,3,7,9), (0,4,6,9)\}. \end{align*} The root position of this set (the one which contains either a major or minor seventh interval from the lowest note) is $(0,3,6,10)$, which is the diminished-minor seventh chord. We have thus far only accounted for 13 of the 25 four-tone harmonic chords: the inversion orbits of the major-major, major-minor, and diminished-minor sevenths, together with the diminished-diminished seventh $(0,3,6,9)$. The reason we cannot reach the rest with inversion and duality alone is that the dihedral group $D_4$ only has $8$ elements, whereas $S_4$ has $24$ elements. Therefore, we need to write down a new type of operator on chords to obtain the remaining types of four-tone harmonic chords. We consider a new operator on four-tone chords called augmented-diminished duality: \begin{align*} a:N_4&\longrightarrow N_4\\ (0,a_1,a_2,a_3)&\longmapsto (0,a_1,a_1 + a_3 - a_2,a_3). \end{align*} The reader will verify that $a^2 = {\rm id}$, similarly to duality. We note that we can also specify chords of the form $(0,a_1,a_2,a_3)$ by an \emph{ordered partition}. That is, the chord $(0,a_1,a_2,a_3)$ corresponds to the \emph{ordered} partition $[a_1,a_2-a_1,a_3-a_2,12-a_3]$. The augmented-diminished operator $a$ then corresponds to the map \begin{align*} [a,b,c,d]\mapsto [a,c,b,d]. \end{align*} The augmented-diminished map is so-named because it transforms the major-major seventh inversions as follows: \begin{align*} \{ (0,4,7,11), (0,3,7,8), (0,4,5,9), (0,1,5,8)\}\\ \downarrow\\ \{ (0,4,8,11), (0,3,4,8), (0,4,8,9), (0,1,4,8)\}. \end{align*} That is, the augmented-diminished operator $a$ satisfies $a(0,4,7,11) = (0,4,8,11)$. It transforms the major-major seventh into an augmented-major seventh. We note that the second set is not a set of inversions, although it contains some inversion pairs. Namely, the second inversion of $(0,4,8,11)$ is $(0,3,4,8)$. What about the chord $(0,4,8,9)$? The inversions of this chord are: \begin{align*} \{ (0,3,7,11), (0,4,8,9), (0,4,5,8), (0,1,4,8)\}.
\end{align*} We see two chords that were in the augmentation of the inversions of the major-major seventh. The chord $(0,3,7,11)$ is a new seventh chord in our list, the minor-major seventh chord. On the other hand, the set of inversions of the augmented-major seventh chord is \begin{align*} \{ (0,4,8,11), (0,4,7,8), (0,3,4,8), (0,1,5,9)\}. \end{align*} Finally, if we take the diminished-minor chord $(0,3,6,10)$ and apply the augmented-diminished operator we get $(0,3,7,10)$, a minor-minor seventh chord. Its inversions are \begin{align*} \{ (0,3,7,10), (0,4,7,9), (0,3,5,8), (0,2,5,9)\}. \end{align*} This finishes our analysis of all four-tone harmonic chords. We have summarized these chords, classified into inversion orbits, in Table~\ref{tab:fourtoneInversionOrbit}. \begin{table}[h] \centering \begin{tabular}{|l | l|} \hline Chord type & Inversions \\ \hline\hline Major-Major & $\{ (0,4,7,11), (0,3,7,8), (0,4,5,9), (0,1,5,8)\}$ \\ \hline Minor-Major & $\{ (0,3,7,11), (0,4,8,9), (0,4,5,8), (0,1,4,8)\}$\\ Augmented-Major & $\{ (0,4,8,11), (0,4,7,8), (0,3,4,8), (0,1,5,9)\}$\\ \hline\hline Major-Minor & $\{ (0,4,7,10), (0,3,6,8), (0,3,5,9), (0,2,6,9) \}$\\ Diminished-Minor & $\{ (0,3,6,10), (0,3,7,9), (0,4,6,9), (0,2,5,8)\}$\\ \hline Minor-Minor & $\{ (0,3,7,10), (0,4,7,9), (0,3,5,8), (0,2,5,9)\}$\\ \hline\hline Diminished-Diminished & $\{ (0,3,6,9) \}$\\ \hline \end{tabular} \caption{A table of all four-tone harmonic chords, where each row corresponds to an orbit under the inversion map. The single horizontal lines in the table group together dual sets of inversions, that is, sets of inversions that are taken to each other under major-minor duality. The double lines separate the three orbits under the combined action of inversion, duality, and augmented-diminished duality, which also correspond to the three permutations that we originally used to derive all these chords.} \label{tab:fourtoneInversionOrbit} \end{table} \section{The chord graph} Recall that we have used the augmented-diminished operator \begin{align*} a:N_4&\longrightarrow N_4\\ (0,a_1,a_2,a_3)&\longmapsto (0,a_1,a_1 + a_3 - a_2,a_3). \end{align*} along with inversion and duality to derive all four-tone harmonic chords from the three permutations $[4,4,3,1], [4,3,3,2],$ and $[3,3,3,3]$. In this section we talk a little about the harmonic significance of the augmented-diminished operator.
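Since $i$, $d$, and $a$ are given by explicit formulas, the orbit computations above can also be checked by machine. The following Python sketch is only an illustration (the function names are ours, not standard); it implements the three operators and regenerates the $25$ chords of Table~\ref{tab:fourtoneInversionOrbit} from the three chords $(0,4,7,11)$, $(0,4,7,10)$, and $(0,3,6,9)$.
\begin{verbatim}
def i(c):  # inversion: (0,a1,a2,a3) -> (0,a2-a1,a3-a1,12-a1)
    _, a1, a2, a3 = c
    return (0, a2 - a1, a3 - a1, 12 - a1)

def d(c):  # major-minor duality: (0,a1,a2,a3) -> (0,12-a3,12-a2,12-a1)
    _, a1, a2, a3 = c
    return (0, 12 - a3, 12 - a2, 12 - a1)

def a(c):  # augmented-diminished duality: swaps the two middle steps
    _, a1, a2, a3 = c
    return (0, a1, a1 + a3 - a2, a3)

def orbit(chord, ops):
    seen, frontier = {chord}, [chord]
    while frontier:
        x = frontier.pop()
        for op in ops:
            y = op(x)
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

seeds = [(0, 4, 7, 11), (0, 4, 7, 10), (0, 3, 6, 9)]
chords = set()
for s in seeds:
    chords |= orbit(s, [i, d, a])
print(len(chords))                        # 25 four-tone harmonic chords
print(sorted(orbit((0, 4, 7, 11), [i])))  # inversions of the major-major seventh
\end{verbatim}
Running the sketch returns $25$ chords in total, and the $i$-orbit of $(0,4,7,11)$ is exactly the first row of Table~\ref{tab:fourtoneInversionOrbit}.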
In order to visualize more readily how the augmented-diminished operator acts, we examine the following directed graph showing the relations between all four-tone harmonic chords, where the relations are defined by the operators inversion, major-minor duality, and augmented-diminished duality:\\ \begin{center} {\small \begin{tikzcd} ~ & ~ & \m{mM}0\ar[r, "i"']\ar[rrr,"d"',leftrightarrow, bend left]\ar["a"',loop left, leftrightarrow,dotted]& \m{mM}1\ar[d, "i"]\ar[r,"d",leftrightarrow]&\m{AM}2\ar[r,"i"]& \m{AM}3\ar[d,"i"]\ar["a",loop right, dotted] & ~ & ~\\ ~ & ~ & \m{mM}3\ar[u,"i"]\ar[rrr,"d",leftrightarrow, bend right] & \m{mM}2\ar[l,"i"]\ar[r,"d",leftrightarrow]\ar[r,"a"', shift right=1.5ex, leftrightarrow,dotted]&\m{AM}1\ar[u,"i"] & \m{AM}0\ar[l,"i"] & ~ & ~\\ \m{MM}1\ar[d,"i"]\ar[rrrruu,"a", leftrightarrow, bend left=40, dotted, crossing over] \ar[d,"d", shift left=1.5ex, leftrightarrow] & \m{MM}0\ar[rrrru,"a",leftrightarrow, bend right, dotted]\ar[l,"i"']\ar[d,"d", shift left=1.5ex, leftrightarrow] & ~ & ~ & ~ & ~ & \m{mm}3\ar[d,"i"]\ar[ldd,"a",dotted, bend right, crossing over] \ar[d,"d",leftrightarrow,shift left=1.5ex] & \m{mm}2\ar[l,"i"]\ar[d,"d",leftrightarrow, shift left=1.5ex]\ar[dddlll,"a",leftrightarrow, dotted, bend left] \\ \m{MM}2\ar[r,"i"]\ar[rrruuu,"a",leftrightarrow, dotted, bend left] & \m{MM}3\ar[u,"i"]\ar[ruu,"a",dotted, bend right, crossing over] & ~ & ~ & ~ & ~ & \m{mm}0\ar[r,"i"]\ar[lllld,"a",leftrightarrow,bend right, dotted]& \m{mm}1\ar[u,"i"]\ar[lllldd,"a",leftrightarrow, bend left=40, dotted, crossing over] \\ ~ & ~ & \m{dm}0\ar[r,"i"]\ar[rrr,"d",leftrightarrow,bend left] & \m{dm}1\ar[r,"a",leftrightarrow, dotted, shift left=1.5ex]\ar[d,"i"]\ar[r,"d"',leftrightarrow] & \m{Mm}2\ar[r,"i"] & \m{Mm}3\ar[d,"i"] \\ ~ & ~ & \m{dm}3\ar[u,"i"]\ar[rrr,"d",leftrightarrow, bend right]\ar[loop left, "a"',dotted] & \m{dm}2\ar[l,"i"]\ar[r,"d",leftrightarrow] & \m{Mm}1\ar[u,"i"] & \m{Mm}0\ar[l,"i"]\ar[loop right, "a"', dotted] \end{tikzcd}}\end{center} We see from this diagram that there is a hierarchy of group actions that have different harmonic roles. Not shown in this diagram is the diminished-diminished chord, which is fixed under all these actions. The two connected components of this graph as we have seen correspond to two permutations: $[4,4,3,1]$ for the upper left component and $[4,3,3,2]$ for the lower right component. These two components are isomorphic as graphs, and there is a natural isomorphism which is obtained by the following map: \begin{align*} \m{MM}\longmapsto\m{mm}\\ \m{mM}\longmapsto\m{Mm}\\ \m{AM}\longmapsto\m{dm}. \end{align*} That is, this is the map that exchanges major with minor and augmented with diminished. If we attempt to extend this map to all chords, we find that the diminished-diminished chord is sent to the augmented-augmented chord, which is just an augmented triad together with the doubling of the root one octave higher. Leaving this aside for the moment, we see that there are four levels of harmonic similarity. At the most basic level, we have inversions. Then major-minor duality maps sets of inversions to sets of inversions. Augmentation is a little different, and highlights the difference between inversion sets. Finally, the isomorphism of the two components corresponding to two permutations provides a mirroring between two very different sets of chords. Returning to the augmented-diminished operator, we see that it actually highlights the difference between inversions. 
This is a new facet of four-tone harmonic chords that is not present in three-tone harmonic chords. With three-tone harmonic chords, each inversion of a major or minor triad provides variation for voice leading and some harmonic instability, especially for the second inversion. However, with four-tone harmonic chords, the addition of the seventh factor adds truly different harmonic functions to each inversion. For example, the root position of a minor-major chord is stable under the augmented-diminished operator, whereas the first inversion of a minor-major chord is sent to the second inversion of a major-major chord. This shows that one inversion can behave quite differently from another in music. \addcontentsline{toc}{section}{References} \end{document}
\begin{document} \title[All dihedral division algebras of degree five are cyclic] {\textbf{All dihedral division algebras of degree five are cyclic}} \author{ Eliyahu Matzri } \address{Department of Mathematics, Bar-Ilan University, Ramat-Gan, 52900, Israel} \email{[email protected]} \thanks{The author thanks his supervisors, L.H. Rowen and U. Vishne, for many interesting and motivating talks and for supporting this work through BSF grant no.~2004-083.} \mathcal{S}ubjclass{Primary 16K20, 12E15} \keywords{Central simple algebras, cyclic algebras} \begin{abstract} In \cite{AAB95} Rowen and Saltman proved that every division algebra which is split by a dihedral extension of degree $2n$ of the center, $n$ odd, is in fact cyclic. The proof requires roots of unity of order $n$ in the center. We show that for $n=5$, this assumption can be removed. It then follows that ${}_{5\!\!\!\:}\Br(F)$, the $5$-torsion part of the Brauer group, is generated by cyclic algebras, generalizing a result of Merkurjev \cite{AAC95} on the $2$ and $3$ torsion parts. \end{abstract} \maketitle \mathcal{S}ection{\textbf{Mathematical background}} We begin with basic notions needed for this work and refer the reader to \cite{RRB95} or \cite{NN59} for more details.\\ Let $R$ be a ring and let $\C(R)=\{r\in R \mid rx=xr \hbox{ \ } \forall x\in R \}$ denote the center of $R$. \begin{defn} A ring $R$ will be called a simple ring if $R$ has no non-trivial two-sided ideals. In particular $R$ is a division ring if every nonzero element is invertible. \end{defn} \mathcal{S}mallskip \begin{rem} Notice that if $R$ is simple, its center is naturally a field. \end{rem} \mathcal{S}mallskip \begin{defn} An $F$-algebra $R$ is called an $F$-central simple algebra if $R$ is simple with $\C(R)=F$ and $\dim_F(R)<\infty$. \end{defn} \begin{rem} Every $F$-central simple algebra $A$ has $\dim_F(A)=n^2$, and we define the degree of $A$, denoted $\deg(A)$, to be $n$. \end{rem} By Wedderburn's Theorem every $F$-central simple algebra is of the form $M_n(D)$, where $D$ is a division algebra with center $F$. \ \\ \mathcal{S}mallskip The Brauer group of a field $F$, denoted $\Br(F)$, is the set of isomorphism classes of $F$-central simple algebras modulo the following relation: two central simple algebras $A,B$ are equivalent if and only if there exist natural numbers $n,m$ such that $M_n(A)\cong M_m(B)$. \mathcal{S}mallskip \begin{prop} Let $D$ be an $F$-central division algebra of degree $n$, and $K$ a subfield of $D$, then $K$ is a maximal subfield if and only if $[K:F]=n$. \end{prop} \begin{defn} A crossed product is an $F$-central simple algebra $A$ of degree $n$ containing a commutative $F$-subalgebra $C$ Galois over $F$, with $[C:F]=n$. Note that if $A$ is a division algebra then $C$ is a maximal subfield of $A$. \end{defn} \mathcal{S}mallskip \begin{defn} Let $D$ be an $F$-central division algebra of degree $n$. We will say that $D$ is split by a group $G$ if $D$ contains a maximal subfield $K$ with Galois closure $E$ such that $\Gal(E/F)=G$. \end{defn} \mathcal{S}mallskip \begin{thm} Let $A$ be a crossed product where $K\mathcal{S}ubset A$ is a maximal subfield with Galois group $\Gal(K/F)=G$. 
Then $A$ has the following description: $A=\mathop \oplus \limits_{\mathcal{S}igma \in G}Kx_{\mathcal{S}igma} $ as a left $K$-vector space, and multiplication in $A$ is according to the rules: $$x_\mathcal{S}igma k=\mathcal{S}igma(k)x_\mathcal{S}igma \hbox{ \ } \forall k\in K$$ and $$x_\mathcal{S}igma x_\tau=c(\mathcal{S}igma,\tau)x_\tau x_\mathcal{S}igma$$ where $c\in \h ^2(G,K^{\times})$ is a $2$-cocycle. In this case $A$ is denoted $A=(K,G,c)$. \end{thm} \mathcal{S}mallskip \begin{rem} If $G=\left<\mathcal{S}igma\right>$ we can give a simpler representation of $A$ as follows:\\ $A=\mathop \oplus \limits_{i = 0}^{n - 1}Kx^i$ as a left $K$-vector space, where $n=\deg(A)=\left | G\right |$ and the multiplication is according to the rules: $$xk=\mathcal{S}igma(k)x \hbox{ \ } \forall k\in K$$ and $$x^ix^j=\left\{ \begin{array}{ll} x^{i+j}, & i+j<n \\ \beta x^{i+j-n}, & i+j \geq n \end{array} \right. $$ \\ In this case, $A$ is denoted as $A=(K,\mathcal{S}igma,\beta)$. \end{rem} \mathcal{S}mallskip \begin{rem} If $F$ contains a primitive $n$-th root of unity $\rho$, we can give an even simpler description of $A$ (since then $K=F[x\mid x^n=\alpha\in F]$) as follows: $$A=F[x,y\mid x^n=\alpha;y^n=\beta;xy=\rho_nyx]\hbox{ \ \ \ } \alpha,\beta\in F$$ \end{rem} \mathcal{S}mallskip \mathcal{S}ection{\textbf{Some preliminary results}} \mathcal{S}mallskip In this section we briefly repeat the arguments of Rowen and Saltman in \cite{AAB95} but we do not assume $F$ contains roots of unity. The situation we will be handling is the following: \\ $D/F$ is a central simple algebra of odd degree $n$ having a maximal subfield $K\mathcal{S}ubset D$ with Galois closure $E\mathcal{S}upset K\mathcal{S}upset F$, such that $$\Gal(E/F)=D_n=\left<\mathcal{S}igma, \tau : \mathcal{S}igma^n=\tau^2=1 \ ,\tau \mathcal{S}igma \tau = \mathcal{S}igma^{-1}\right>,$$ and $K=E^{\left<\tau\right>}$. Extending scalars to $E^{\left<\mathcal{S}igma\right>}$, we may view $E\mathcal{S}ubset D'=D\otimes E^{\left<\mathcal{S}igma\right>}$. Now $\Gal(E/E^{\left<\mathcal{S}igma\right>})=\left<\mathcal{S}igma\right>$, i.e. $D'$ is cyclic, so we have an element $\beta\in D'$ such that\\ $(1) \hbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\beta^{-1}x\beta=\mathcal{S}igma (x)\hbox{ \ \ \ } \forall x\in E.$\\ In particular $\beta^n \in E^{\left<\mathcal{S}igma\right>}$. Notice that $\tau$ can be extended to $D'=D\otimes E^{\left<\mathcal{S}igma\right>}$ by its action on $E^{\left<\mathcal{S}igma\right>}$, that is, we write $\tau$ instead of $1\otimes\tau$. \mathcal{S}mallskip \begin{lem} We may assume that $\tau(\beta)=\beta^{-1}$. \end{lem} \mathcal{S}mallskip \begin{proof} Applying $\tau$ to $(1)$ yields $$\tau(\beta)^{-1}\tau(x)\tau(\beta)=\mathcal{S}igma^{-1}(\tau(x)) \hbox{ , \ \ \ } \forall x\in E.$$ Now since $\tau$ is an automorphism of $E$, $\tau(x)$ runs over all elements of $E$, and thus $$\tau(\beta)^{-1}y\tau(\beta)=\mathcal{S}igma^{-1}(y) \hbox{ , \ \ \ } \forall y\in E$$ that is $\tau(\beta)$ acts on $E$ as $\mathcal{S}igma^{-1}$. Now define $\beta '=\beta^r \tau(\beta)^{-r},$ where $r=(n+1)/2,$ and compute that $\tau(\beta')=\beta'^{-1}$, and $\beta'$ acts on $E$ as $\mathcal{S}igma$. \end{proof} \mathcal{S}mallskip Let $P_t(X)=X^n+\mathcal{S}um_{i=1} ^{n} c_i(t)X^{n-i}$ denote the characteristic polynomial of $t\in D'$. Note that $c_1(t)=-\tr(t)$ and $c_n(t)=(-1)^n\N(t)$ where $tr(t)$ and $N(t)$ are the reduced trace and norm of $t$. 
\mathcal{S}mallskip \begin{lem}\label{t} Let $t=\beta^i e,$ for $e\in E$ and $0<i<n$, $i\neq 0$. Then $tr(t)=0$. \end{lem} \mathcal{S}mallskip \begin{proof} Let $d=\Gcd(i,n)$.\\ Clearly we have $t^{n/d}=\beta^{ni/d}\N_{\mathcal{S}igma^i}(e)\in E^{\left<\mathcal{S}igma^i\right>}$ where $\N_{\mathcal{S}igma^i}$ is the norm from $E$ to $E^{\left<\mathcal{S}igma^i\right>}$. Now $[E:E^{\left<\mathcal{S}igma^i\right>}]=n/d,$ implying $P(X)=X^{n/d}-\beta^{ni/d}\N_{\mathcal{S}igma^i}(e)$ is the characteristic polynomial of $t$, hence $\tr_{E/E^{\left<\mathcal{S}igma^i\right>}}(t)=0$ which implies $\tr_{E/F}(t)=0$. \end{proof} \mathcal{S}mallskip \begin{lem} Let $t=(\beta+\beta^{-1})e$ for $e\in E$. Then the coefficients of $P_t(X)$ satisfy $c_i(t)=0$ for every odd $0<i<n$. \end{lem} \mathcal{S}mallskip \begin{proof} Notice that for $i$ odd, $t^i$ is a sum of elements of the form $a\beta^s$ where $a\in E$ and $s$ odd, $-n< s < n$, so by \ref{t} and Newton's identities we are done in the characteristic zero case. For the general case, we refer the reader to \cite{AAB95} where the main idea is that you can form a model for this situation in the form of an Azumaya algebra and then use a specialization argument. \end{proof} \mathcal{S}mallskip \begin{cor}\label{b} There is an element $t\in D$ such that for every $e\in E$ (and so also for $k\in K\mathcal{S}ubset E$), $c_i=0$ for every odd $0<i<n$ in $P_{te}(X)$. \end{cor} \mathcal{S}mallskip \begin{proof} Since $D=D'^{\left<\tau\right>}$ we have $t=\beta+\beta^{-1}$ is the desired element. \end{proof} \mathcal{S}mallskip \begin{rem}\label{char} Notice that if $n=p$ is prime $\Char(F)=p$, the element $t=\beta+\beta^{-1}\in D$ we found satisfies $t^p\in F$ and $t\notin F$ and so by a theorem of Albert in the ``special results'' chapter of his seminal book \cite{dB22}, which is knows as Albert's cyclicity criterion, $D$ is cyclic (this is not a new result as J.P Tignol and P. Mammone did this for any field $F$ with $\Char(F)\mid n$ in \cite{Dev68} using the corestriction, but it shows that the proof of Rowen and Saltman also applies to this case). \end{rem} \mathcal{S}ection{\textbf{The case $n=5$ }} \mathcal{S}mallskip Now we would like to focus on the particular case where $n=5$. The main tool we will be using is the following proposition taken from [$3$, Proposition $2.2$]. \mathcal{S}mallskip \begin{prop}\label{deg3} Let $G(x_1,...,x_n)$ be a homogeneous form of degree $3$ defined over a field $F$. If $G$ has a solution, $\alpha \in K^{(n)}$, defined over a quadratic extension $K$ of $F$, then $G$ has a solution defined over $F$. \end{prop} \mathcal{S}mallskip \begin{proof} The proof in \cite{dB93} uses basic intersection theory which we will not use, instead we will give an algebraic proof (which is actually a translation of the proof in \cite{dB93}) which will enable us to find an explicit solution in section $3$. Since $[K:F]=2$ the solution $\alpha$ has the following form: $\alpha=(\alpha_1+\beta_1t,...,\alpha_n+\beta_nt)$ where $\alpha_i,\beta_i\in k$, and $t\in K$ such that $K=F[t]$. Now specialize $G(x_1,...,x_n)$ to $G(\alpha_1+\beta_1Z,...,\alpha_n+\beta_nZ)$, denoting it by $g(Z)$. Notice that the coefficient of $Z^3$ in $g(Z)$ is $G(\beta_1,...,\beta_n)$ hence if $G(\beta_1,...,\beta_n)=0$ we have a solution defined over $F$ else $g(Z)$ is a degree $3$ polynomial defined over $F$. Since $g(t)=0$ we get that $g(Z)=cm_t(Z)(Z-w)$, where $c=G(\beta_1,...,\beta_n)$ and $m_t(Z)$ is the minimal polynomial of $t$ over $F$. 
Now $c$, $g(Z)$ and $m_t(Z)$ are defined over $F$ hence $w$ is in $F$ and clearly $G(\alpha_1+\beta_1w,...,\alpha_n+\beta_nw)=g(w)=0$ so we have found a solution $\gamma=(\alpha_1+\beta_1w,...,\alpha_n+\beta_nw)\in F^n$. \end{proof} \mathcal{S}mallskip \begin{thm}\label{yyy} Let $D$ be a division algebra of degree $5$ split by the group $D_5$ then $D$ is cyclic. \end{thm} \mathcal{S}mallskip \begin{proof} In view of remark \ref{char}, we may assume $\Char(F)\neq5$. First we remark that by Albert's cyclicity criterion it is enough to find an element $t\in D-F$ such that $t^5\in F$, that is $c_i=0$ for every $0<i<n$. Now by \ref{b} we have $t\in D$ with the property $c_i(te)=0$ for every odd $0<i<n$ and $\forall e\in E$. Now since $P_{t^{-1}}(x)=-N(t)^{-1}P_t(x^{-1})x^5$ we have $c_i(et^{-1})=0$ for every even $0<i<n$ and $\forall e\in E$. Hence we are left with finding a solution for $c_1(et^{-1})=0$ (which is linear) and $c_3(et^{-1})=0$ (which is cubic) in the five dimensional vector space $Et^{-1}$. Define $V:=\{et^{-1}\in Et^{-1} \mid c_1(et^{-1})=0\}$, which is a four dimensional subspace of $Et^{-1}$. We have to find a solution for $c_3(v)=0$ in $V$. Let us add a fifth root of unity to $F$, which is either a quadratic extension or a chain of two quadratic extensions. After this extension we are in the case of Rowen and Saltman where they gave an explicit element whose fifth power is in $F$ which was $(v+v^{-1})t^{-1}$, where $v\in E$. This element is clearly in $V\otimes_F F[\rho_5]$. Now by \ref{deg3} since $c_3(v)$ is homogeneous of degree $3$, we have a solution after either one or two quadratic extensions. Thus, we have a solution before the extension and we are done. \end{proof} \mathcal{S}mallskip \begin{rem} If the fifth root of unity is in a quadratic extension of $F$, we know $D$ is cyclic by a theorem of Vishne [$10$, Theorem $13.6$] and D. Haile, M. A. Knus, M. Rost, J. P. Tignol \cite{Gam90}, so what actually is new is the last case of $[F[\rho]:F]=4$. \end{rem} \mathcal{S}ection{\textbf{A generic example}} Fixing $p$ let $K=F[\rho_p]$ and denote $\Gal(K/F)=\left<\tau\right>$. In [$6$, Theorem $2$] Merkurjev proves that ${}_{p\!\!\!\:}\Br(F)$ is generated by $F$-central simple algebras, $A$, of degree $p$ such that $A\otimes K \mathcal{S}imeq (\alpha,\beta)$ where $K[\mathcal{S}qrt[p]{\alpha}]$ is cyclic over $K$ Galois over $F$.\\ In \cite{Enf87} Vishne calls these algebras quasi-symbols and gives more details about them including generic examples. We will show that for $p=5$ these algebras are cyclic and conclude that ${}_{5\!\!\!\:}\Br(F)$ is generated by cyclic algebras. \mathcal{S}ubsection{\textbf{A generic Quasi-symbol of degree $5$ }} \ \\ \mathcal{S}mallskip For $p=5$ we have two possibilities for $[K:F]$. The first is $[K:F]=2$; in this case Vishne shows that every quasi-symbol is cyclic. The second case is $[K:F]=4$; in this case every quasi-symbol $A$ has one of the following forms (after extending scalars to $K$): \begin{enumerate} \item $A\otimes K=(\alpha,\beta)$, where $\alpha\in F$ and $\tau(\beta)\equiv\beta^2 \hbox{ \ }\pmod{K^{\times^5} }$. \item $A\otimes K=(\alpha,\beta)$, where $\tau(\alpha)=\alpha^{-1}$ and $\tau(\beta)\equiv\beta^{-2} \hbox{ \ }\pmod{K^{\times^5}}$. \end{enumerate} The first kind is known to be cyclic by [10, Theorem 10.3]. So we are left with the second kind for which Vishne gives the following generic construction which we will show is cyclic. 
Thus every quasi-symbol of degree $5$ is cyclic and hence, by [$6$, Theorem $2$] we conclude that ${}_{5\!\!\!\:}\Br(F)$ is generated by cyclic algebras. Let $k_0$ be a field of characteristic $\neq 5$ and $k=k_0[\rho]$ where $\rho$ is a fixed primitive fifth root of unity, $\Gal(k/k_0)=\left<\tau\right>$ where $\tau(\rho)=\rho^2$. Set $K=k(a,b,\eta)$ a transcendental extension and extend $\tau$ to $K$ by $$\tau(a)=a^{-1}, \hbox{ \ \ \ \ \ } \tau(b)=\eta^5b^{-2}, \hbox{ \ \ \ \ \ }\tau(\eta)=\eta^2b^{-1}.$$ Notice that we still have $\tau^5=1$. Define $F=K^{\left<\tau\right>}$ and $$D=(a,b)_K=K[x,y\mid x^5=a,\hbox{ \ \ } y^5=b,\hbox{ \ \ } yxy^{-1}=\rho x],$$ and extend $\tau$ to $D$ by $\tau(x)=x^{-1},\hbox{ \ \ }\tau(y)=\eta y^{-2}$. Notice that $\tau^2(\eta)=\eta^{-1}$ and $\tau^2(y)=y^{-1}$.\\ Now define $D_0=D^{\left<\tau\right>}$; $D_0/F$ is the generic quasi-symbol of degree $5$ of the second type. \begin{rem} Vishne's construction is much more general and we specialized it to the above case, for the general construction we refer the reader to \cite{Enf87}. \end{rem} \mathcal{S}mallskip \begin{prop} $D_0$ is split by $D_5$. \end{prop} \mathcal{S}mallskip \begin{proof} Notice that $\Gal (K[y]/F)=C_5 \rtimes C_4=\left<\mathcal{S}igma\right> \rtimes\left<\tau\right>$ and now we will see how $\tau$ acts on $\mathcal{S}igma$. Applying $\tau$ to $x^{-1}tx=\mathcal{S}igma(t)$, which holds for every $t\in K[y]$, yields $\tau(\mathcal{S}igma(t))=\tau(x^{-1})\tau(t)\tau(x)=x\tau(t)x^{-1}=\mathcal{S}igma^{-1}(\tau(t))$ and so we get $\tau\mathcal{S}igma\tau^{-1}=\mathcal{S}igma^{-1}$. Hence $\tau^2$ is a central element in $\Gal (K[y]/F)$ and it is clear that $E=K[y]^{\left<\tau^2\right>}\mathcal{S}ubset K[y]$ is Galois over $F$ with $\Gal(E/F)=D_5=\left<\mathcal{S}igma\right> \rtimes\left<\tau\right>$ and we are done. \end{proof} \mathcal{S}mallskip \begin{cor} $D_0$ is cyclic. \end{cor} \mathcal{S}mallskip In \cite{AAC95} Merkurjev proves the following theorem: \begin{thm} Let $F$ be a field. ${}_{n\!\!\!\:}\Br(F)$ is generated by cyclic algebras, for $n=2,3$. \end{thm} \mathcal{S}mallskip Now as a result of the above we can extend Merkurjev's theorem to $n=5$ and get \begin{thm} ${}_{5\!\!\!\:}\Br(F)$ is generated by cyclic algebras. \end{thm} \mathcal{S}mallskip \begin{proof} By section $8$ of \cite{Enf87} ${}_{5\!\!\!\:}\Br(F)$ is generated by quasi-symbols of degree $5$, and so we are done. \end{proof} \mathcal{S}ubsection{\textbf{Finding an explicit solution}} \ \\ \mathcal{S}mallskip Since the above example is a generic one, it would be nice to give an explicit element with fifth power in $F$, which is what we do now by going over the general proof. Let $P_t(X)=X^n+\mathcal{S}um_{i=1} ^{n} c_iX^{n-i}$ denote the characteristic polynomial of $t\in D_0$.\\ $V=(x+x^{-1})^{-1}K[y]^{\left<\tau\right>}$ is a $5$-dimensional $F$-subspace of $D_0$, satisfying $c_2(v)=c_4(v)=0$ for all $v\in V$; and we want to find a solution in $V$ for $tr(Z)=c_1((x+x^{-1})^{-1}Z)=0$ and $G(Z)=c_3((x+x^{-1})^{-1}Z)=0$. Extending scalars from $F$ to $F[\rho+\rho^{-1}]$, we have the solutions $Z_1=y+y^{-1}=\alpha+\beta(\rho+\rho^{-1})$ and $Z_2=\tau(Z_1)=\alpha+\beta\tau(\rho+\rho^{-1})=\alpha+\beta\tau(\rho^2+\rho^{-2})$ where $\alpha=(\alpha_1,...,\alpha_5), \beta =(\beta_1,...,\beta_5)\in K[y]^{\left<\tau\right>}$ so $\alpha_i,\beta_i\in F$. Now define the following line: $L=\{\alpha+\beta t\}=\{(\alpha_1+\beta_1t,...,\alpha_5+\beta_5t)\}$ defined over $F$. 
\mathcal{S}mallskip \begin{prop} For every $l\in L$ we have $tr(l)=0$. \end{prop} \mathcal{S}mallskip \begin{proof} By standard linear algebra, $L\cap \{tr(Z)=0\}$ is either one point or the whole line $L$; since $Z_1,Z_2\in L\cap \{tr(Z)=0\}$, we get $L\cap \{tr(Z)=0\}=L$ and we are done. \end{proof} \mathcal{S}mallskip Now let us study the variety $\{G(Z)=0\}\cap L$. First we need to compute $G(Z)$. In order to do that we use the representation of $D$ induced by right multiplication on\\ $D=K[y]+K[y]x+K[y]x^2+K[y]x^3+K[y]x^4$, namely $$x\longrightarrow \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & a \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ \end{array} \right)$$ $$m\in K[y]\longrightarrow \Diag(m,\mathcal{S}igma(m),\mathcal{S}igma^2(m),\mathcal{S}igma^3(m),\mathcal{S}igma^4(m))$$ Now the minimal polynomial of $x+x^{-1}$ is $$\lambda^5-5\lambda^3+5\lambda-(a+a^{-1})$$ hence $$(x+x^{-1})^{-1}=((x+x^{-1})^4-5(x+x^{-1})^2+5)(a+a^{-1})^{-1}=(a+a^{-1})^{-1}(x^4+x^{-4}-x^2-x^{-2}+1)$$ implying $$(x+x^{-1})^{-1}\longrightarrow (a+a^{-1})^{-1} \left( \begin{array}{ccccc} 1 & a & -1 & -a & 1 \\ a^{-1} & 1 & a & -1 & -a \\ -1 & a^{-1} & 1 & a & -1 \\ -a^{-1} & -1 & a^{-1} & 1 & a \\ 1 & -a^{-1} & -1 & a^{-1} & 1 \\ \end{array} \right)$$ Now when we compute the characteristic polynomial of $(x+x^{-1})^{-1}m$ we get that:\\ \ \\ $$c_3((x+x^{-1})^{-1}m)=(a+a^{-1})^{-1}(m\mathcal{S}igma(m)\mathcal{S}igma^2(m)+\mathcal{S}igma(m)\mathcal{S}igma^2(m)\mathcal{S}igma^3(m) +\mathcal{S}igma^2(m)\mathcal{S}igma^3(m)\mathcal{S}igma^4(m)+$$ $\mathcal{S}igma^3(m)\mathcal{S}igma^4(m)m+\mathcal{S}igma^4(m)m\mathcal{S}igma(m))=(a+a^{-1})^{-1}\tr_\mathcal{S}igma(m\mathcal{S}igma(m)\mathcal{S}igma^2(m)) $. \\ \ \\ Yielding $F(Z)=(a+a^{-1})^{-1}\tr_\mathcal{S}igma(Z\mathcal{S}igma(Z)\mathcal{S}igma^2(Z))$ \\Now clearly $\{F(Z)=0\}\cap L$ is defined over $F$ by the polynomial \\ $f(t)=F(\alpha+\beta t)=(a+a^{-1})^{-1}\tr_\mathcal{S}igma (\alpha+\beta t)\mathcal{S}igma(\alpha+\beta t)\mathcal{S}igma^2(\alpha+\beta t))= (a+a^{-1})^{-1}\tr_\mathcal{S}igma(\beta\mathcal{S}igma(\beta)\mathcal{S}igma^2(\beta)t^3+...)=F(\beta)t^3+...$\\ But we know two solutions for $f(t)$, namely $t_1=\rho+\rho^{-1}$ and $t_2=\rho^2+\rho^{-2}$, so we get $f(t)=F(\beta)(t-t_1)(t-t_2)(t-t_3)$. Now since $f(t)$ and $F(\beta)(t-t_1)(t-t_2)$ are defined over $F$, we get $t_3\in F$.\\ Explicitly $f(0)=-t_1t_2t_3F(\beta)$ implies $t_3=\frac{-f(0)}{t_1t_2F(\beta)}=\frac{f(0)}{F(\beta)}=\frac{F(\alpha)}{F(\beta)}$ is in $F$. Hence we get:\\ \begin{thm} The element $w=(x+x^{-1})^{-1}(\alpha+\beta \frac{F(\alpha)}{F(\beta)})\in D_0-F$ satisfies $w^5\in F$. \end{thm} \ \\ Now we are left with solving for $\alpha,\beta$ from the two equations: $$y+y^{-1}=\alpha+\beta(\rho+\rho^{-1})$$ $$\eta y^{-2}+\eta^{-1} y^2=\tau(y+y^{-1})=\alpha+\beta(\rho^2+\rho^{-2})$$ Hence \\ $$\beta=\frac{y+y^{-1}-\eta y^{-2}-\eta^{-1} y^2}{\rho+\rho^{-1}-\rho^2-\rho^{-2}}$$ $$\alpha=y+y^{-1}-\beta(\rho+\rho^{-1})$$ \ \\ \ \\ \ \\ \mathcal{S}ubsection{\textbf{The general case}} \ \\ \mathcal{S}mallskip We will now show that the above solution for the case of quasi-symbols, where we do decent from $F[\rho+\rho^{-1}]$ to $F$ is valid for the general case of $D_5=\left<\mathcal{S}igma, \tau : \mathcal{S}igma^5=\tau^2=1,\tau \mathcal{S}igma \tau = \mathcal{S}igma^{-1}\right>$ division algebras, where we need to decent from $F[\rho]\otimes E^{\left< \mathcal{S}igma \right>} $ to $F$. 
The situation is the following: we look for a solution to $c_3(t)=c_1(t)=0$ where $c_i(t)$ are as in section $3$ and $t\in (\beta+\beta^{-1})^{-1}E^{\left< \tau \right>}$. Let $\Gal(F[\rho]/F)=\left< \pi \right>$; hence $\Gal (E\otimes F[\rho]/F)=D_5\times \left< \pi \right>$ and so after extending scalars to $F[\rho]$ we want a solution in $(\beta+\beta^{-1})^{-1}(E\otimes F[\rho])^{\left< \tau \right>\times\left< \pi \right>}$, which will then be defined over $F$. \mathcal{S}mallskip \begin{prop} We may assume $v+v^{-1}\in (E\otimes F[\rho])^{\left< \tau \right>\times\left< \pi^2 \right>}$, for $v$ as in the proof of theorem \ref{yyy}. \end{prop} \mathcal{S}mallskip \begin{proof} Since $v=x^r\tau(x)^{-r}$, where $x$ is any eigenvector of $\mathcal{S}igma$ with eigenvalue $\rho$, we may write $x=\mathcal{S}um_{i=0}^4 \rho^{-i}\mathcal{S}igma^i(k)$ for $k\in E^{\left< \tau \right>\times \left< \pi \right>}$. Now $\tau(x)=\pi^2(x)$ and so $\tau(v)=\tau(x)^rx^{-r}=\pi^2(x)^rx^{-r}=\pi^2(x^r\pi^2(x)^{-r})=\pi^2(x^r\tau(x)^{-r})=\pi^2(v)$ implying $\tau(v+v^{-1})=v+v^{-1}$, hence $v+v^{-1}$ is in $(E\otimes F[\rho])^{\left< \tau \right>\times\left< \pi^2 \right>}$, as desired. \end{proof} \mathcal{S}mallskip Now it is clear that after extending scalars to $F[\rho+\rho^{-1}]$ we have the solution $(\beta+\beta^{-1})^{-1}(v+v^{-1})$ and so we are in the same situation as in the quasi-symbol case, hence the above solution is valid for the general case too . \end{document}
\begin{document} \title{Atom interferometry} \author{A. Miffre, M. Jacquey, M. B\"uchner, G. Tr\'enec and J. Vigu\'e} \address{Laboratoire Collisions, Agr\'egats, R\'eactivit\'e (UMR 5589 CNRS-UPS), \\ IRSAMC, Universit\'e Paul Sabatier Toulouse 3, 31062 Toulouse cedex 9, France \\ e-mail:~{\tt [email protected]}} \date{\today} \begin{abstract} In this paper, we present a brief overview of atom interferometry. This field of research has developed very rapidly since 1991. Atom and light wave interferometers present some similarities but there are very important differences in the tools used to manipulate these two types of waves. Moreover, the sensitivity of atomic waves and light waves to their environment is very different. Atom interferometry has already been used for a large variety of studies: measurements of atomic properties and of inertial effects (accelerations and rotations), new access to some fundamental constants, observation of quantum decoherence, etc. We review the techniques used for a coherent manipulation of atomic waves and the main applications of atom interferometers. PACS: 03.75.Dg, 39.20.+q Key words: interferometry, diffraction, matter wave, coherence, decoherence, inertial effects, Sagnac effect, high precision. \end{abstract} \maketitle \section{Introduction} Several atom interferometers \cite{carnal91,keith91,riehle91,kasevich91} gave their first signals in 1991 and atom interferometry has developed very rapidly since. Some review papers have already appeared \cite{adams94,baudon99} and the book "Atom interferometry" edited by Berman \cite{berman97} in 1997 represents an excellent introduction to this field. The present paper gives an overview of atom interferometry. After a comparison of atomic and light waves, we describe the sources of atomic waves and the tools used for their coherent manipulation. We then present the main types of interferometers and their applications. The non-linear effects due to atom-atom interactions (recently reviewed by Bongs and Sengstock \cite{bongs04}) are not discussed here. \section{Main new features of atom interferometry} The main differences between light waves and atomic waves are their dispersion relations and their group velocities. As we neglect here atom-atom interactions in the atomic wave, we may consider only single-atom plane waves described by a ket $\left| {\mathbf k}, i\right>$, where ${\mathbf k}$ is the wave vector and $i$ is the atom internal state which replaces the polarization vector ${\mathbf {\varepsilon}}$ of a light wave. The total energy $E_{tot} = \hbar \omega $ of such a state is the sum of the internal energy $E_i$ (including the rest mass energy $mc^2$) and the kinetic energy given, in the non-relativistic limit, by $E_{kin} = \hbar^2 {\mathbf k}^2/(2m)$: \begin{equation} \label{n0} \hbar \omega = E_i + \hbar^2 \frac{{\mathbf k}^2}{2m} \end{equation} \noindent from which we get the group velocity equal to the classical velocity ${\mathbf v}$: \begin{equation} \label{n1} \partial\omega /\partial {\mathbf k} = \hbar {\mathbf k}/m = {\mathbf v} \end{equation} \noindent The dependence of the group velocity on $k$ induces the well-known wave packet spreading: vacuum is dispersive for matter waves while it is not for light. 
From a practical point of view, a very important quantity is the de Broglie wavelength given by: \begin{equation} \label{n2} \lambda_{dB}= \frac{2\pi}{k} = \frac{h}{mv} \approx \frac{4\times 10^{-7}}{A v} \mbox{ meter} \end{equation} \noindent where $A$ is the mass number and $v$ is the velocity in meters per second. For thermal atoms or molecules with $ v \sim 10^3$ m/s, the de Broglie wavelength is $\lambda_{dB} \sim (0.4/A)$ nanometers. For cold and ultra-cold atoms, with velocities in the millimeter/second to meter/second range, the de Broglie wavelength may be comparable to $1$ micrometer or even larger. Finally, the sensitivity of atomic waves to inertial effects is considerably larger than the one of light waves and this is a consequence of their considerably smaller group velocity: during the time spent by an atom inside an interferometer, a rotation or an acceleration changes the lengths of the interfering paths, thus inducing a phase shift of the interference signals. \section{Main tools of atom interferometry} \subsection{Sources of atomic waves and detectors} The simplest source is a thermal atomic beam, either effusive or supersonic, this last type providing a narrower velocity distribution and, in both cases, the flux at the interferometer output is very small. Very efficient detectors are needed and most experiments with thermal atoms have been done either with alkali or with metastable atoms which can be very efficiently detected using surface ionization. Cold atoms give access to very long interaction times, which improves the ultimate resolution of a measurement. Therefore, many experiments use laser-cooled gases with the following scheme: an atomic trap is first loaded; after a final cooling step and optical pumping in a particular sub-level, the gas cloud is accelerated by laser beams and sent into the interferometer. The usual detection technique is then based on laser fluorescence, which allows the selective measurement of the populations of the ground state hyperfine levels. In most cases, the interference signal is to be found in the repartition of the population among these hyperfine levels. The possibility of measuring on each atomic cloud the populations of two hyperfine levels reduces the noise, down to the quantum projection limit \cite{itano93,santarelli99}, well below the fluctuations from shot to shot. Degenerate quantum gases, Bose Einstein condensate (BEC) or Fermi degenerate gases, can be used as sources for atom interferometers and the atom-laser beams extracted from BEC are analogous to laser beams in optics. For the high densities achieved in BEC, the atom-atom interactions are not negligible and, at the present state of the art, these sources are especially interesting for non-linear atom optics (as reviewed by \cite{bongs04}). Only some experiments, in which these non-linear aspects are weak, will be discussed here. \subsection{Coherent manipulation of atomic waves} \subsubsection{Diffraction by material structures} Diffraction of atoms by crystal surfaces, first observed by Stern and Estermann in 1929-1930 \cite{stern29,estermann30}, is used nowadays to study surface order and surface excitations. This diffraction process has not been used for an atom interferometer, because of the extreme requirements on surface quality and positioning. Diffraction by material slits is obviously possible and a Young's double slit experiment was realized by Carnal and Mlynek \cite{carnal91} in 1991. 
Diffraction by a grating made of nanowires is also possible and high quality gratings with periods down to $100$ nm can be made by nanolithography techniques \cite{ekstrom92,savas95} with areas close to $1$ mm$^2$. These gratings can diffract any atom, molecule or cluster \cite{schollkopf04}. Sch\"ollkopf and Toennies \cite{schollkopf94} have used diffraction of a supersonic beam as a mass selection process for weakly bound helium clusters. As discussed below, the atom-surface van der Waals interaction cannot be neglected. The use of gratings and nanostructures with cold atoms is not common, probably because of the strength of the atom-grating van der Waals interaction. However, Shimizu and co-workers have developed atom holograms in transmission \cite{morinaga96a,fujita00} and also in reflection \cite{shimizu02}, using the quantum reflection regime. \subsubsection{Diffraction by a laser standing wave} In 1933, Kapitza and Dirac \cite{kapitza33} proposed to diffract an electron beam by a standing light wave, in order to prove the existence of stimulated emission of radiation but this experiment was feasible only with lasers \cite{batelaan01}. In 1966, Altshuler et al. \cite{altshuler66} extended this idea to the diffraction of atoms: the diffraction probability was predicted to be considerably larger, especially if the laser frequency is close to a resonance transition. During a diffraction process (see Fig. 1), the atom absorbs a photon $\left| \omega, {\mathbf k} \right>$ going in one direction and makes a stimulated emission of a photon $\left| \omega, -{\mathbf k}\right>$ going in the opposite direction. If the two photons have the same polarization, this process is fully elastic, i. e. the initial and final internal states are identical. Conservation of energy and momentum is exactly fulfilled in the Bragg geometry, as shown in Fig. \ref{f1} and \ref{f2}. Usually, the laser standing wave has a finite spatial width (respectively a finite duration): the corresponding dispersion of the photon momentum around its mean value ${\mathbf k}$ (respectively of its frequency around its mean value $\omega$) relaxes the conservation laws. In the simplest case, laser diffraction can be described as a coherent evolution of the atom among two states differing by their momentum and this process is therefore a Rabi oscillation. \begin{figure} \caption{\label{f1} \label{f1} \end{figure} \begin{figure} \caption{\label{f2} \label{f2} \end{figure} A standing wave moving in the laboratory with a velocity $v$ can be produced by using two counter-propagating laser beams with different frequencies (the velocity being proportional to the frequency difference). This possibility is widely used to characterize the momentum distribution of ultra-cold atoms \cite{bongs04,rolston02}. The calculation of the diffraction probability has been the subject of many works, corresponding to limiting cases, such as the Raman-Nath case (thin grating) \cite{bernhardt81} or the Bragg case (thick grating and weak potential) \cite{giltner95a}. The various possible regimes are discussed in reference \cite{keller99}. Bloch states, which have been introduced by Letokhov and Minogin \cite{letokhov81} (see also \cite{castin91}) to describe the motion of the atom in a standing wave provide an unified treatment of atom diffraction \cite{champenois01}. The photon recoil effect was first observed in a saturated absorption spectroscopy experiment by Hall and Bord\'e \cite{hall76} in 1976. 
Resolved diffraction peaks were observed \cite{moskowitz83} in 1983 and Bragg scattering \cite{martin88} in 1988, both by Pritchard and co-workers. Siu Au Lee and co-workers \cite{giltner95a} were able to observe Bragg diffraction up to the sixth order and to build an interferometer operating with any diffraction order $p$ from $p=1$ to $3$ \cite{giltner95b}. This diffraction process can be generalized to several cases as explained by Bord\'e \cite{borde89,borde97}. We will discuss here only the case of Raman diffraction (see Fig. \ref{f1} and \ref{f2}). The atom has two ground state hyperfine sub-levels $\left| g_1 \right>$ and $\left| g_2 \right>$, with an energy splitting $\hbar \omega_{12}$. The laser standing wave is replaced by two counter-propagating waves of frequencies $\omega_{1}$ and $\omega_{2}$, with $\omega_{1}- \omega_{2}= \omega_{12}$. The diffraction corresponds to the absorption of a photon $\omega_{1}$ and the stimulated emission of a photon $\omega_{2}$, while the atom makes a Raman transition from state $\left|g_1\right>$ to state $\left|g_2 \right>$. The main advantage of such an inelastic diffraction process is that the transmitted and diffracted beams differ by their internal states. It is therefore very easy to detect selectively the direct and diffracted beams, but this diffraction process is coherent only if the laser beat note $(\omega_{1}- \omega_{2})$ is phase-locked on a stable oscillator. Finally, we have not discussed the limitations of this diffraction process. The laser frequencies are usually chosen close to resonance but not exactly at resonance, so that the probability of a spontaneous photon emission remains negligible. A different regime is based on adiabatic transfer and, then, the laser is exactly at resonance \cite{marte91}. In this case, the transfer of a very large number of photon momenta has been demonstrated \cite{weitz94}. Diffraction by laser has a very important advantage with respect to diffraction by material gratings: the diffraction amplitude can be rapidly modulated as a function of time and this possibility is widely used to build temporal interferometers. \subsubsection{Mirrors, traps and microtraps} A repulsive potential can be used to build mirrors for atomic waves: - the Earth gravitational potential reflects an atomic beam going upward, producing an atomic fountain. - laser evanescent waves with a positive detuning ($\omega > \omega_0$) have been used as mirrors \cite{aminoff93}. The atom feels the sum of the van der Waals attractive potential of the surface and the dipole repulsive potential due to the evanescent wave \cite{landragin96}. - periodic magnetic structures can produce a magnetic field which decreases exponentially far from the structure and such a field can be used as mirrors for cold atoms with a non vanishing magnetic moment. For slow atoms, the angular momentum projection $M$ is quantized on the magnetic field direction and follows adiabatically the field, thus creating an attractive or repulsive potential. This idea \cite{opat92,opat99} has been demonstrated by several experiments \cite{roach95,drndic99}. The coherence of the reflected wave depends on the mirror roughness. Several experiments \cite{savalli2002,esteve04} have tested the roughness of atomic mirrors: it is possible but difficult to produce very coherent reflection. 
Many atom traps (too numerous to be listed here) have been developed for the production of quantum degenerate gases: these traps rely either on magnetic forces or on the dipole potential in far off-resonance laser beams. In an excellent vacuum (near $10^{-10}$ mbar), the atom residence time in such a trap can be quite long, (of the order $100$ s), limited either by collisions with the residual gas or, for a dense trapped gas, by dimer formation by $3$-body collisions. The atom coherence time, which is more difficult to measure, is also sensitive to the fluctuations of the trapping potential position and depth. Miniature magnetic traps and waveguides are developed in order to build integrated atom optics on a chip. In such traps where the atom is very near a surface, new effects have been predicted by Henkel \cite{henkel99a,henkel99b,henkel01}: the low-frequency part of the thermal electromagnetic fields is considerably enhanced near a conducting surface and these fields may reduce the coherence lifetime in the trap. Some recent experiments have observed the reduction of the atom residence time in magnetic micro-traps when the atom-surface distance is reduced \cite{fortagh02,jones03,rekdal04}. Finally, in 2005, after several unsuccessful attempts, a Michelson atom interferometer has been operated on a chip \cite{wang05}. \section{Main types of atom interferometers} \subsection{Polarization interferometers versus separated beam interferometers} With light waves \cite{born75}, a distinction is classically made between polarization interferometers (made of a polarizer, a birefringent medium and an analyzer) and interferometers using division of wavefront or of amplitude (in which a light beam is split in two beams which recombine on the detector). Obviously, this distinction is more technical than fundamental. Among atom interferometers, all those using an inelastic diffraction process (Ramsey-Bord\'e or Raman process) have a mixed character: the wave follows two different paths but the beam-splitters have modified the atom internal state. With atoms, the equivalent of pure polarization interferometers can be found in the Ramsey \cite{ramsey50} or Sokolov \cite{sokolov73} experiments and in atomic clocks. The two paths followed by the atom differ essentially by the internal states of the atom and the momentum transfer, which is due to the absorption of a microwave photon, is very small although not completely vanishing \cite{wolf04}. Recent developments on polarization interferometers, done by the group of Baudon and coworkers, are reviewed in \cite{baudon99}. Here, we will concentrate on interferometers in which the two atomic paths are noticeably different. \subsection{Interferometer designs} With atomic waves, a complete equivalent of the Fabry-Perot or Michelson interferometers is not feasible and, if we except some Young's double slit type experiments, most interferometers are based on the Mach-Zehnder design, with a diffraction process replacing the mirrors and the beam-splitters. The high symmetry of the Mach-Zehnder interferometer is very helpful to minimize the sensitivity to defects but less symmetrical designs are also interesting (see Fig. \ref{f3}). The Mach-Zehnder design can be divided in various subtypes: \begin{figure} \caption{\label{f3} \label{f3} \end{figure} \begin{itemize} \item temporal or spatial interferometers. In the simplest temporal interferometers, the diffraction gratings are produced by pulsing the laser beams at time $t=0$, $t=T$ and $t=2T$. 
In spatial interferometers, three gratings, located at $z=0$, $z=L$ and $z=2L$, are successively crossed by the atoms. While spatial interferometers are very similar to their traditional optics counterparts, temporal interferometers, which are almost unknown in optics, are easy to build with atoms thanks to laser diffraction and, with cold atoms, temporal interferometers are the usual choice. In many cases, the phase shift to be measured varies with the time interval $T$ and an accurate knowledge of $T$ is needed for a high precision measurement. In a spatial interferometer, $T= L/v$ and the dispersion of the phase shift due to the velocity distribution limits the maximum observable phase shift and the accuracy of its measurement. Several techniques \cite{hammond95,roberts04} have been proposed to overcome this difficulty. \item among temporal interferometers, some of them are based on an echo-type technique, as discussed by Sleator and co-workers \cite{cahn97}. With this technique, the atomic source transverse velocity distribution may be larger than one photon recoil velocity, an advantage shared with the Talbot-Lau interferometers. \item the diffraction process itself can be used in the far-field regime (the Fraunhofer regime where the beams associated to the various diffraction orders do not overlap) or the near-field regime. Near-field diffraction is used in Talbot-Lau interferometers \cite{brezger03}, which is the only possible design with heavy molecules, because of their very small de Broglie wavelength: many such experiments have been done by Arndt, Zeilinger and co-workers \cite{brezger02,hackermuller03} but a Talbot-Lau interferometer has been developed with thermal atoms by the group of Clauser \cite{clauser94}. \item although most interferometers involve only two atomic paths, a multiple path interferometer has been built and operated by Weitz and co-workers \cite{weitz96}. \end{itemize} \section{Applications of atom interferometry} Let us now review the main applications of atom interferometry, illustrating each case by some experimental results. \subsection{Young double slit interferometers and related experiments} A Young's double slit experiment was realized by Carnal and Mlynek in 1991 using a supersonic beam of metastable helium \cite{carnal91}. A similar experiment was made in 1992 with ultra-cold metastable neon by Shimizu and co-workers \cite{shimizu92a}, who also observed the shift of the fringe position due to an inhomogeneous electric field \cite{shimizu92b}. Mlynek and co-workers have also used a charged wire to build a kind of Fresnel biprism interferometer \cite{nowak98,nowak99}. By modulating the laser power density of an evanescent wave mirror, Dalibard and co-workers were able to induce a phase modulation of an atomic wave \cite{steane95} and they used this process to build a temporal Young double slit interferometer \cite{szriftgiser96}. \subsection{Effects of an electric field} The electric polarizability $\alpha$ is interesting to measure by atom interferometry, because spectroscopy can measure only the polarizability difference between internal states. The phase shift due to an electric field ${\mathbf E}$ applied on the interferometer is given by: \begin{equation} \label{n3} \Delta \phi = \frac{2 \pi \epsilon_0 \alpha}{\hbar v} \oint {\mathbf E}^2(s) ds \end{equation} \noindent where the path integral is taken on a closed circuit following the two atomic paths inside the interferometer and $v$ is the atom velocity. 
It is necessary to apply an electric field on only one of the interfering beams and this is possible, if the collimation is sufficient, by using a capacitor with a thin electrode (a septum) inserted between the two beams. With such an experiment, Pritchard and co-workers \cite{ekstrom95} have obtained a very accurate measurement of the sodium atom polarizability. With a similar experiment, Toennies and co-workers have compared the electric polarisabilities of helium atom and dimer \cite{toennies03} and our group has measured the electric polarizability of lithium atom \cite{miffre05a,miffre05b}. The velocity dependence of the phase shift has been compensated by applying time dependent phase shifts by Pritchard and co-workers \cite{roberts04}. With an interferometer using inelastic diffraction, a homogeneous electric field applied on the two interfering beams induces a phase shift proportional to the polarizability difference: such an experiment was done on magnesium \cite{rieger93} and on calcium \cite{morinaga96b}. With a multiple beam interferometer, Weitz and co-workers have observed the tensorial character of the phase shift due to the AC Stark effect of a pulsed optical field \cite{weitz00}. The Aharonov-Casher phase \cite{aharonov84}, which results from the application of an electric field on an atom with an oriented magnetic moment, has been measured by several experiments \cite{sangster93,sangster95,zeiske95,yanagimachi02}. \subsection{Effect of a magnetic field} With paramagnetic atoms in a given $F, M_F$ hyperfine sublevel, the phase shift due to the magnetic field $B$ is given by : \begin{equation} \label{n4} \Delta \phi(F,M_F) = \frac{g_F \mu_B M_F}{\hbar v} \oint B(s) ds \end{equation} \noindent where the field is assumed to vary slowly enough to insure an adiabatic behaviour. The resulting phase shift $\Delta \phi(F,M_F)$ vanishes for a homogeneous field and is proportional to the magnetic field gradient. This gradient may be created by a current sheet circulating between the two atomic beams \cite{schmiedmayer94,schmiedmayer97} or more simply by a coil \cite{giltner96,miffre05} or a wire \cite{wang05} (in this last case, the gradient is pulsed). In any case, it is difficult to evaluate the field integral very accurately, so that these experiments cannot provide competitive measurements of the Zeeman effect. In the experiments \cite{schmiedmayer94,schmiedmayer97,giltner96,miffre05}, the fringe visibility ${\mathcal{V}}$, which was recorded as a function of the applied gradient, presents a series of revivals when all the $\Delta \phi(F,M_F)$ are multiples of $2\pi$. If the diffraction process is inelastic, a phase shift appears as soon as the magnetic moments in the two states are not equal. If the magnetic field is homogeneous and pulsed in time, the phase shift is non-dispersive (i.e. independent of the atom velocity) and this effect is a particular case of the scalar Aharonov-Bohm effect. This effect has been studied on sodium by Morinaga and co-workers \cite{shinohara02}. As a weak magnetic field induces a large phase shift, most experiments try to cancel the sensitivity to the magnetic field by pumping the atom in a $M_F= 0$ sub-level (which has only a quadratic Zeeman effect) and by keeping the magnetic field weak and homogeneous but non zero, as in atomic clocks. \subsection{Measurement of $h/M$ and of the fine structure constant $\alpha$} With a proper design (see Fig. 
\ref{f3}), the interference signal is sensitive to the photon recoil energy $\hbar \omega_{rec} = \hbar^2 k_L^2/(2 M)$. The associated phase shift, which may be very large, can be measured only with temporal interferometers operated with cold atoms. The knowledge of $\hbar/M$, combined with the Rydberg constant and mass ratios which are very accurately known, gives access to the fine structure constant $\alpha$. This new measurement is very interesting because it is almost completely independent of quantum electrodynamics (QED) calculations, while the best measurement of $\alpha$, deduced from the anomalous Land\'e factor of the electron \cite{mohr05}, rely on very complex QED calculations. The first demonstration was made on cesium by Chu and co-workers \cite{weiss93,weiss94} with an uncertainty of $0.1$ ppm on $\hbar/M$ but their result was lower than the accepted value by $0.85$ ppm. Many improvements have been made and, in 2002, this experiment has given a measurement of $\alpha$ with an accuracy of $7.4$ ppb \cite{wicht02}. Similar experiments have been started on rubidium \cite{cahn97} and hydrogen \cite{heupel02}. The contrast interferometer of Pritchard and co-workers \cite{gupta02}, which operates with a sodium BEC, has given a precision of $7$ ppm on $h/M_{Na}$, but the result differs by $200$ ppm from the accepted value, a difference attributed to the mean-field interaction in the condensate. A recent experiment \cite{lecoq05} by Aspect and co-workers with a rubidium BEC has observed a similar shift which has been quantitatively related to the mean-field interaction in the expanding BEC. Bloch oscillations of an atom in a standing light wave were first observed by the research groups of Salomon \cite{bendahan96} and Raizen \cite{wilkinson96} in 1996. Such an experiment can give a measurement of $h/M$ by measuring the transferred momentum through a velocity measurement. A first experiment on rubidium $^{87}Rb$ has been made by Biraben and co-workers \cite{battesti04}, with an uncertainty on $\alpha$ equal to $0.4$ ppm and this uncertainty has been recently reduced to $6.7$ ppb \cite{clade06}. \subsection{Measurement of inertial effects} The sensitivity of a matter wave interferometer to inertial effects is large \cite{anandan77,clauser88,borde89} as first illustrated by the observation of the effect of gravity on a neutron interferometer in 1975 \cite{colella75}. The classical aspects of the detection of inertial effects with an atomic beam have been discussed by the group of Zeilinger \cite{oberthaler96}. \subsubsection{Measurement of accelerations} The phase-shift due to an acceleration ${\mathbf a}$ is given by: \begin{equation} \label{n5} \Delta\phi ={\mathbf k}_{eff} \cdot {\mathbf a} T^2 \end{equation} \noindent where $T$ is the time between laser diffraction pulses and $\hbar k_{eff}$ the momentum transferred to the atom by the diffraction process. In the case of the acceleration of gravity ${\mathbf a} = {\mathbf g}$, minor corrections due to the gravity gradient must be taken into account \cite{wolf99,peters01}. The first measurement of ${\mathbf g}$ by atom interferometry was performed in 1991 by Kasevich and Chu \cite{kasevich91,kasevich92}, with a temporal interferometer with Raman diffraction and laser cooled atoms. In 1997, Sleator and co-workers \cite{cahn97} realized a preliminary measurement of $g$ with their echo interferometer. After a series of improvements, Chu and co-workers \cite{peters99,peters01} have obtained an uncertainty of $3\times 10^{-9}$ on $g$. 
A gravity gradiometer has been built by the research group of Kasevich \cite{snadden98,mcguirk02}, with an achieved sensitivity equal to $4\times 10^{-9}$ s$^{-2}$ and this apparatus has been used for a preliminary measurement of the gravitational constant $G$ \cite{kasevich02}. Tino and co-workers are presently building an atom interferometer dedicated to the measurement of $G$, aiming at a $100$ ppm accuracy \cite{tino02,fattori03}. An atom interferometer using amplitude gratings made of laser standing waves has been built by H\"ansch and co-workers and it has been used to test the equivalence principle with a $10^{-7}$ sensitivity \cite{fray04}. The period of Bloch oscillations in a laser standing wave in the vertical direction is directly related to $g$ and several experiments have used this property to measure $g$. Inguscio and co-workers \cite{roati04} have compared two realizations of this experiment, one with a bosonic atom $^{87}$Rb and one with a fermionic atom $^{40}$K, thus proving the superiority of noninteracting fermions for such an experiment. Biraben and co-workers \cite{clade05} have made a preliminary measurement of $g$ with a $10^{-6}$ accuracy and pointed out several interesting features of this technique. If an atom interferometer is placed in an homogeneous electric field, the atoms will be accelerated if their electric charge does not exactly vanish. Chu and co-workers \cite{young97} proposed to use the large sensitivity to accelerations of atom interferometers to test the charge neutrality of atoms. Following \cite{delhuille01}, it is possible to achieve a sensitivity near $10^{-21}q_e$ ($q_e$: electron charge), equal to the sensitivity achieved by experiments with neutrons \cite{baumann88} or molecules \cite{dylla73}. \subsubsection{Measurement of rotations: atom gyros} The sensitivity of an interferometer to rotations is due to the Sagnac effect. We can use equation (\ref{n4}) and replace the acceleration ${\mathbf a}$ by the Coriolis term. Classically, the phase-shift is given by: \begin{equation} \label{n6} \Delta\phi = \frac{4 \pi {\mathbf \Omega} \cdot {\mathbf A}}{\lambda_{dB} v} \end{equation} \noindent ${\mathbf \Omega}$ is the angular velocity of the interferometer, ${\mathbf A}$ is the area enclosed by the interferometer paths (normal to its surface) and $\lambda_{dB}$ is the de Broglie wavelength . As this phase shift is an inertial effect, it must be independent of the atom mass and this appears obviously if one writes: \begin{equation} \label{n7}\Delta\phi = 2 \Omega v T^2 k_{eff} = 2 \Omega L^2 k_{eff}/v \end{equation} \noindent where $v$ is the atom velocity, $T$ the time between diffraction pulses or $L$ the distance between diffraction gratings. The exact value of the area $A$ is not a simple question \cite{antoine03,antoine05}, because of the pulse duration or spatial width of the laser beams. The first atom interferometer gyrometer was built by Helmcke, Bord\'e and co-workers \cite{riehle91} in 1991: it was a spatial Ramsey-Bord\'e interferometer using a thermal calcium beam. A spatial interferometer with a thermal sodium beam and material gratings, built by Pritchard and co-workers \cite{lenef97}, has reached a sensitivity equal to $3\times 10^{-6}$ rad/s$\sqrt{Hz}$. A thermal cesium spatial interferometer with Raman diffraction by Kasevich and co-workers \cite{gustavson97,gustavson00} has reached a sensitivity equal to $6\times 10^{-10}$ rad/s$\sqrt{Hz}$. 
A temporal interferometer using Raman diffraction with a cold cesium fountain beam has been developed at BNM-SYRTE laboratory \cite{leduc04} and a somewhat similar apparatus aiming at a very high sensitivity needed for a direct detection of the Lense-Thirring effect (a frame-dragging effect predicted by general relativity \cite{ciufolini04}) is under development in a joint effort involving several European laboratories (HYPER project \cite{landragin01}). \subsection{Other interactions} Up to now, we have considered that the atoms inside the interferometers are isolated and submitted only to electromagnetic or inertial fields. In the following experiments, the atom interactions are more complex, usually with a stochastic or dissipative character. \subsubsection{Effect of a gas on atomic wave propagation: index of refraction or decoherence} Pritchard and co-workers have measured the index of refraction of various gases for sodium atomic waves \cite{schmiedmayer95,schmiedmayer97,roberts02}. Using a gas cell with a septum, a small gas density (corresponding to a gas pressure near $10^{-3}$ millibar) is introduced on one of the interfering beams (the atomic wave goes in and out of the cell through thin slits). The phase shift and attenuation of the transmitted wave are both detected on the interferometer signal. In the case of rare gases, the experimental results can be compared to calculations using the sodium-rare gas interaction potentials and the main features are well explained \cite{schmiedmayer95,champenois97,forrey96}. The presence of a gas in the interferometer can also induce a decoherence effect: this effect has been observed \cite{hornberger03a,hornberger03b,hackermuller03a} in the Talbot-Lau interferometer developed by Arndt, Zeilinger and co-workers. The C$_{70}$ molecules are very massive and a collision with an atom destroys the coherence of the spatial wavefunction, by transferring some momentum to the C$_{70}$ molecules, but the resulting deviation is small enough so that the molecules still arrive on the detector: as the coherence is lost, the fringe visibility decrease with increasing gas density. \subsubsection{Atom-surface van der Waals interaction} When an atom goes through the narrow slits of a grating, the atom-surface interaction is quite large and, even at thermal velocities, this interaction cannot be neglected. Each slit can be viewed as a cylindrical lens, with a velocity dependent focal length. This effect modifies the diffraction amplitudes and it has been studied carefully by the group of Toennies \cite{grisenti99}. Moreover, as predicted by our group \cite{champenois99}, the transmitted wave receives a phase shift which has been recently measured by Cronin and co-workers \cite{perreault05}. \subsubsection{Decoherence effects by spontaneous photon emission} If an atom is excited by a laser, a spontaneous emission of a photon occurs. This process transfers momentum to the atom and at the same time gives a signature of the presence of the atom. Such a process is therefore associated to a loss of coherence and the visibility of the interference signals is reduced when spontaneous photon emission occurs inside the interferometer. This effect has been first studied by Mlynek and co-workers on the laser diffraction pattern of a metastable helium beam \cite{pfau94}. 
Then, Pritchard and co-workers were able to study in great detail the loss of coherence due to the spontaneous emission of a variable number of photons as a function of the distance between the two interfering paths \cite{chapman95,kokorowski01}. Mei and Weitz \cite{mei01,mei01a} have extended this study, using their multiple path interferometer, and they have shown that, in some cases, decoherence can increase the fringe visibility. Finally, an experiment due to Arndt, Zeilinger and co-workers \cite{hackermuller04} investigates the same effect on C$_{70}$ molecules in their Talbot-Lau interferometer: the molecules, heated by a laser before entering the interferometer, emit infra-red photons and this emission induces decoherence. \subsubsection{Decoherence effects by gravitational waves and related topics} Several works \cite{percival97,benatti02,chiao03}, some being highly speculative, have discussed the detection of space-time fluctuations with atom interferometers. The particular case of the interaction of low-frequency gravitational waves with a matter-wave interferometer can be described without any approximation: a very intense background radiation of gravitational waves emitted by binary star systems is predicted by general relativity and its existence is commonly accepted. Because there is a very large gain of sensitivity for inertial effects when going from light wave to matter wave interferometers, the same gain was expected for the detection of gravitational waves. S. Reynaud and co-workers \cite{lamine02} have shown that this gain of sensitivity does not exist. \section{Conclusion} This paper has given an overview of atom interferometry, limited to interferometers in which the two atomic paths are spatially different. We have also described the main applications of this technique. We have quoted most of the pioneering works as well as a large fraction of recent papers, illustrating the present state of art, but many interesting papers have been omitted because of lack of space. Let us summarize the main messages of this paper: - atom interferometry has been rapidly expanding since 1991 and a wide variety of experiments have already been realized. These experiments give new access to the measurements of very different quantities (atomic properties, accelerations and rotations, fundamental constants $\alpha$ and $G$, quantum decoherence effects, etc). Even if this list is already large, new applications are still ahead! - the possibilities opened by lasers to cool and manipulate coherently atoms are extremely wide. In particular, the atom internal states, which replace the polarization states of light, give a considerably larger set of possibilities and this property explains why so many different types of experiments can be developed. - atom interferometry has already achieved an extraordinary sensitivity and many improvements are expected to provide further gains of sensitivity. Better sources of atomic waves and, in particular, the development of intense and continuous atom-lasers will provide extraordinary improvements. The atom-atom interactions, which give very impressive effects in quantum degenerate gases, will then play an important role which has no equivalent in traditional interferometry with photons. \end{document}
\begin{document} \title{Rainbow clique subdivisions} \begin{abstract} We show that for any integer $t \ge 2$, every properly edge colored $n$-vertex graph with average degree at least $(\ensuremath{\ell}og n)^{2+o(1)}$ contains a rainbow subdivision of a complete graph of size $t$. Note that this bound is within a log factor of the lower bound. This also implies a result on the rainbow Tur\'{a}n number of cycles. \end{abstract} \section{Introduction}\ensuremath{\ell}abel{sec:intro} Let $G$ be a graph. A \textit{subdivision} of $G$, denoted by $\mathsf{T}G$, is a graph obtained from $G$ by replacing each of its edges into internally vertex disjoint paths. Subdivisions play an important role in graph theory. One of the important results on subdivisions dated back to 1930s where Kuratowski \cite{Kur30} showed that a graph is not planar if and only if it contains a subdivision of a complete graph on five vertices or a subdivision of a complete bipartite graph with three vertices in each partition. Mader \cite{Mad67} initiated the study of the relation between the average degree of a graph and the size of its largest clique subdivisions. For integer $t > 0$, let $d(t)$ be the minimum number $d$ such that every graph with average degree at least $d$ contains a subdivision of a complete graph $K_t$. Mader \cite{Mad67} showed the existence of $d(t)$ in 1967. Mader \cite{Mad67}, and independently Erd\H{o}s and Hajnal \cite{EH69} conjectured that $d(t) = O(t^2)$. Subsequently, Mader \cite{Mad72} showed that $O(2^t)$ is an upper bound of $d(t)$. In 1990s, Koml\'os and Szemer\'edi \cite{K-Sz-1, K-Sz-2}, and independently, Bollob\'as and Thomassen \cite{BT98} confirmed this conjecture. As for lower bound, Jung \cite{Jung70} observed that disjoint union of complete regular bipartite graphs give the lower bound of $d(t) = \Omegaega(t^2)$. Hence, $d(t) = \Thetaeta(t^2)$. In order to achieve a subdivision of a complete graph of size linear to the average degree, some additional conditions, such as minimum girth conditions, are needed to eliminate the extremal examples. In fact, Mader \cite{Mad99} conjectured that every $C_4$-free graph of average degree $d$ contains a $\mathsf{TK}_{\Omegaega(d)}$. K\"uhn and Osthus \cite{KO02, KO06} proved that every graph with sufficiently large girth contains a $\mathsf{TK}_{\deltalta(G)+1}$. They \cite{KO04} also showed the existence of $\mathsf{TK}_{d/\ensuremath{\ell}og^{12} d}$ in every $C_4$-free graph of average degree $d$. In \cite{BLS15}, Balogh, Liu and Sharifzadeh proved Mader's conjecture assuming the graph is $C_6$-free. Liu and Montgomery \cite{liu2017proof} completely resolved this conjecture recently. For $\ell \in \mathbb{N}$, a \textit{balanced subdivision} of $G$, denoted by $\mathsf{T}G^{(\ell)}$, is a graph obtained from $G$ by replacing each of its edges into internally vertex disjoint paths of length exactly $\ell$. Thomassen \cite{Tho84, Tho85, thomassen} conjectured that for every constant $k \in \mathbb{N}$, there exists $d$ such that every graph with average degree at least $d$ contains a $\mathsf{TK}^{(\ell)}_k$ for some $\ell \in \mathbb{N}$. Liu and Montgomery \cite{liu2020proof} confirmed Thomassen's conjecture. More recently, the author \cite{Wan21} showed that in every graph with average degree at least $d$ there is a $\mathsf{TK}_{\Omegaega(d^c)}^{(\ell)}$ for every constant $0 < c < 1/2$, which is almost optimal. Note that $\ell$ is a polylogarithmic function of the number of vertices of the graph in these results. 
Balanced clique subdivisions have also been studied extensively when $\ell$ is restricted to be a constant (see \cite{AKS03, ES74, Erd71, FS11, Jan21, Jia11, JS12, KP88, Tom22}). In a graph with a proper edge coloring, we say that a subgraph is rainbow if all of its edges have distinct colors. A rainbow variant of the clique subdivision problem was considered by Jiang, Methuku and Yepremyan \cite{JMY21}. They proved that every properly edge colored graph on $n$ vertices with average degree at least $e^{c\sqrt{\log n}}$ contains a rainbow $\mathsf{TK}_t$. Later, this upper bound on the average degree was improved to $(\log n)^{60}$ by Jiang, Letzter, Methuku and Yepremyan \cite{JLMY21}. Recently, Tomon \cite{Tom22} showed that $(\log n)^{6+o(1)}$ suffices. In this paper, we prove the following. \begin{theorem} \label{main_clique} Let $t > 0$ be an integer. Suppose $G$ is a properly edge colored graph on $n$ vertices with average degree at least $(\log n)^{2+o(1)}$. Then $G$ contains a rainbow $\mathsf{TK}_t$. \end{theorem} We would like to point out that Theorem \ref{main_clique} is closely related to the study of rainbow Tur\'{a}n numbers. Let $H$ be a graph. The Tur\'{a}n number $ex(n,H)$ is the maximum number of edges that a graph on $n$ vertices without a copy of $H$ can have. Keevash, Mubayi, Sudakov and Verstra\"{e}te \cite{KMSV07} first introduced the following rainbow variant of the Tur\'{a}n number. The rainbow Tur\'{a}n number $ex^*(n,H)$ is the maximum number of edges that a properly edge colored graph on $n$ vertices without a rainbow copy of $H$ can have. In \cite{KMSV07}, Keevash, Mubayi, Sudakov and Verstra\"{e}te showed that $ex^*(n,H) = (1+o(1))ex(n,H)$ for non-bipartite $H$, thus determining the asymptotic value of $ex^*(n,H)$ via the Erd\H{o}s-Stone-Simonovits Theorem \cite{ES66, ES46}. When $H$ is bipartite, determining $ex^*(n,H)$ is harder. In particular, much attention has been drawn to the study of $ex^*(n,C_{2k})$, where $C_{2k}$ is a cycle of length $2k$ (see \cite{DLS13, Jan20, KMSV07}), and Janzer \cite{Jan20} determined $ex^*(n,C_{2k}) = \Theta(n^{1+1/k})$. It is well known that a graph on $n$ vertices without a cycle contains at most $n-1$ edges. It is natural to ask how many edges a properly edge colored graph on $n$ vertices without a rainbow cycle can have. Equivalently, letting $\mathcal{C}$ be the set of all cycles, it is interesting to determine $ex^*(n,\mathcal{C})$. Keevash, Mubayi, Sudakov and Verstra\"{e}te \cite{KMSV07} showed that $ex^*(n,\mathcal{C}) = O(n^{4/3})$ and asked for the correct order of magnitude. Later, Das, Lee and Sudakov \cite{DLS13} improved the bound to $ne^{(\log n)^{1/2+o(1)}}$. This was further improved by Janzer \cite{Jan20} to $O(n(\log n)^4)$. The best known upper bound is $n(\log n)^{2+o(1)}$, obtained recently by Tomon \cite{Tom22}. It is easy to see that Theorem \ref{main_clique} implies $ex^*(n,\mathcal{C}) \le n (\log n)^{2+o(1)}$ (for example, take $t=3$). \begin{cor} \label{main_cycle} Suppose $G$ is a properly edge colored graph on $n$ vertices with average degree at least $(\log n)^{2+o(1)}$. Then $G$ contains a rainbow cycle. \end{cor} We remark that both Theorem \ref{main_clique} and Corollary \ref{main_cycle} are within a $\log n$ factor of the lower bound because of the following example due to Keevash, Mubayi, Sudakov and Verstra\"{e}te \cite{KMSV07}.
Consider the $d$-dimensional hypercube $Q_d$: the vertices of $Q_d$ are all the subsets of $\{1,2,\ldots,d\}$, and the edges of $Q_d$ consist of all pairs of subsets of $[d]$ whose Hamming distance is exactly $1$. Let $f$ be the proper edge coloring of $Q_d$ such that $f( \{X,X\setminus \{i\} \}) = i$ for $X \subseteq [d]$ and $i \in X$. One can check that $Q_d$ with this edge coloring $f$ contains no rainbow cycle: any cycle in $Q_d$ crosses each coordinate direction an even number of times, so every color it uses appears at least twice. Moreover, the average degree of $Q_d$ is $d = \log n$. This implies $ex^*(n,\mathcal{C}) \ge nd/2 = \Omega( n \log n)$. The proof of Theorem \ref{main_clique} adopts the ideas in \cite{Tom22} together with some new ones. First we generalize the definition of $\alpha$-maximal graphs to $\omega$-maximal graphs (see Definition \ref{def:omega}). We show that log-maximal graphs have a good expansion property even after sampling the colors (see Lemma \ref{lem:expand}). By a sprinkling technique, every vertex in a log-maximal graph can reach more than half of the vertices via a rainbow path of logarithmic length avoiding a given set of vertices and colors (see Lemma \ref{lem:reachable}). This implies that any two vertices in a log-maximal graph can be connected by a rainbow path of small length upon removal of a set of vertices and colors of moderate size (see Lemma \ref{lem:diamter}). Finally, we complete the proof by a greedy argument. \subsection{Notations} \label{sec:notation} For an integer $n \ge 1$, let $[n] = \{1,2,\ldots,n\}$. Let $G$ be a graph. Let $V(G)$ and $E(G)$ be the vertex set and edge set of $G$, respectively. We define $v(G) = |V(G)|$ and $e(G) = |E(G)|$. For $X \subseteq V(G)$, we write $G - X$ for the induced subgraph $G[V(G) \backslash X]$. For $X, Y \subseteq V(G)$, we write $G[X,Y]$ for the bipartite subgraph of $G$ induced between the parts $X$ and $Y$. We define $e_G(X,Y) = |E(G[X,Y])|$. Let $d(G), \delta(G), \Delta(G)$ be the average degree, minimum degree and maximum degree of $G$, respectively. For $v \in V(G)$, let $d_G(v)$ denote the degree of $v$ in $G$. For $X \subseteq V(G)$, we denote by $N_G(X)$ the (external) neighborhood of $X$ in $G - X$. We omit the subscript if there is no confusion. We also omit floors and ceilings when they are not crucial. All logarithms are of base $2$. \section{Preliminaries} \label{sec:pre} We need the following definitions. \begin{defn} Let $G$ be a graph with a proper edge coloring $f: E(G) \rightarrow R$. Let $\phi: V(G) \rightarrow 2^{V(G) \cup R} $ be a mapping that assigns a set of (forbidden) vertices and colors to each vertex of $G$. For $X \subseteq V(G)$ and $Q \subseteq R$, the \textit{restricted external neighborhood of $X$ in $G$ with respect to the colors $Q$} is $$N_{Q,\phi}(X) := \{ y \in V(G) \backslash X : \exists x \in X,\ xy \in E(G),\ f(xy) \in Q \backslash \phi(x),\ y \not\in \phi(x) \}.$$ \end{defn} \begin{defn} Let $G$ be a graph with a proper edge coloring $f: E(G) \rightarrow R$. A \textit{rainbow $Q$-path} in $G$ is a path $v_1 v_2 \cdots v_k$ in $G$ such that $f(v_i v_{i+1}) \in Q$ for $i \in [k-1]$ and all $f(v_i v_{i+1})$ are distinct. \end{defn} We also need the following multiplicative version of Azuma's inequality (see \cite{probmethod}). \begin{lemma}\emph{(Multiplicative Azuma's inequality)} \label{azuma} Let $Z_0,\dots,Z_{n}$ be a martingale such that $|Z_{i}-Z_{i-1}|\leq c$ for $i=1,\dots,n$, and let $\mu=\mathbb{E}(Z_{0})$.
Then for $\delta\in [0,1]$, we have $$\mathbb{P}(Z_n \leq (1-\delta)\mu)\leq e^{-\delta^{2}\mu/3c}.$$ \end{lemma} \section{Rainbow clique subdivisions} \label{sec:clique} In this section, we prove Theorem \ref{main_clique}. \subsection{$\omega$-maximal graphs} We generalize the definition of $\alpha$-maximal graphs in \cite{Tom22} as follows. \begin{defn} \label{def:omega} Let $\omega: \mathbb{R} \rightarrow \mathbb{R}$ be a function. A graph $G$ is called \textit{$\omega$-maximal} if for every subgraph $H$ of $G$, we have $$\frac{d(H)}{\omega(v(H))} \le \frac{d(G)}{\omega(v(G))}.$$ \end{defn} It is easy to see that if $\omega(x) = x^{\alpha}$, then $\omega$-maximal graphs and the $\alpha$-maximal graphs defined in \cite{Tom22} are the same. Note that $\omega$-maximal graphs have relatively large minimum degree. \begin{lemma} \label{lem:maximal} Let $\omega: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ be an increasing function. Let $G$ be an $\omega$-maximal graph. Then $\delta(G) \ge d(G)/2$. \end{lemma} \begin{proof} Let $v \in V(G)$ be a vertex of degree $\delta(G)$ and let $H = G - \{v\}$. Since $G$ is $\omega$-maximal, we have $\frac{d(H)}{\omega(v(H))} \le \frac{d(G)}{\omega(v(G))}$, that is, $$\frac{d(G)v(G)-2\delta(G)}{(v(G) - 1) \omega(v(G) - 1) } \le \frac{d(G)}{\omega(v(G))}.$$ So we have $$\delta(G) \ge \frac{1}{2} d(G) \left( v(G) - \frac{(v(G)-1) \omega(v(G)-1)}{\omega (v(G))} \right) \ge \frac{1}{2}d(G).$$ \end{proof} In the rest of this section, we take $\omega(x) = \log x$. \subsection{Expansion in log-maximal graphs} We show that log-maximal graphs have a good expansion property even after sampling the colors. \begin{lemma} \label{lem:expand} Let $0 < p_c \le 1$ and $\lambda > 24^{2}$. Let $n > 0$ be a sufficiently large integer. Let $G$ be a graph on $n$ vertices with a proper edge coloring $f: E(G) \rightarrow R$ and $B \subseteq V(G)$ satisfying the following: \begin{itemize} \item[(i)] $G$ is log-maximal; \item[(ii)] $d := d(G) \ge \lambda (p_c)^{-1} \log n$; \item[(iii)] $\phi: V(G) \rightarrow 2^{V(G) \cup R}$ such that $|\phi(v)| \le \frac{d}{8 \log n} \log(\frac{2n}{3|B|})$ for all $v \in V(G)$; \item[(iv)] $ 2 \le |B| \le \frac{n}{2}$. \end{itemize} Let $Q \subseteq R$ be a random subset of colors such that each color is chosen with probability $p_c$ independently. Then with probability at least $1 - e^{-\Omega(\lambda^{1/2})}$, we have $$|N_{Q,\phi}(B)| \ge \min \left( \frac{|B|}{4}, \frac{|B|\log(\frac{2n}{3|B|})}{8\log|B|} \right).$$ \end{lemma} \begin{proof} Define the auxiliary graph $H$ as follows: $V(H) = B \cup N_G(B)$ and $E(H) = \{xy: x \in B, y \in N_G(B) \backslash \phi(x), f(xy) \in R \backslash \phi(x) \}$. Let $H_Q$ be the graph $H$ restricted to the random subset of colors $Q$, i.e.\ $V(H_Q) = B \cup N_G(B)$ and $E(H_Q) = \{xy: x \in B, y \in N_G(B) \backslash \phi(x), f(xy) \in Q \backslash \phi(x) \}$. Let $\Delta = \lambda^{1/2} p_c^{-1}$. Let $S = \{v \in N_H(B): |N_H(v) \cap B| \ge \Delta \}$ and $T = N_H(B) \backslash S$. We distinguish cases according to the number of edges between $B$ and $T$ in $G$.
\noindent \textbf{Case 1.} $e_G(B,T) \le \frac{d |B|}{4 \log n} \log(\frac{2n}{3|B|})$. First, we claim that $|S| \ge \min \{ \frac{|B|}{2}, \frac{|B|}{4\log|B|} \log(\frac{2n}{3|B|}) \}.$ To see this, we may assume that $|S| < |B|/2$, as otherwise the claim holds. Let $C = V(G) \backslash B$. Since $E(G) = E(G[B \cup S]) \cup E(G[C]) \cup E(G[B, T])$, we have \begin{equation*} \begin{split} {d(G[B \cup S])(|B| + |S|)}/{2} &=e(G[B \cup S]) \\ &\ge e(G) - e(G[C]) - e_G(B, T) \\ &= {dn}/{2} - {d(G[C])|C|}/{2} - e_G(B, T). \end{split} \end{equation*} As $G$ is log-maximal, we have $$\frac{d(G[B \cup S])}{\log (|B| + |S|)} \le \frac{d}{\log n}$$ and $$\frac{d(G[C])}{\log |C|} \le \frac{d}{\log n}.$$ Hence, $$\frac{d(|B| + |S|) \log(|B| + |S|) }{2\log n} \ge \frac{dn}{2} - \frac{d|C| \log |C|}{2\log n} - e_G(B, T).$$ Since $e_G(B,T) \le \frac{d |B|}{4 \log n} \log(\frac{2n}{3|B|})$, \begin{equation*} \begin{split} (|B| + |S|) \log(|B| + |S|) &\ge n \log n - |C| \log |C| - \frac{2\log n}{d} e_G(B, T) \\ &\ge |B| \log n + |C| (\log n - \log |C|) - \frac{2\log n}{d} \cdot \frac{d |B|}{4 \log n} \log(\frac{2n}{3|B|}) \\ &\ge |B| \log n - \frac{|B|}{2} \log(\frac{2n}{3|B|}). \end{split} \end{equation*} Since $|S| < |B|/2$, \begin{equation*} \begin{split} |S| \log(\frac{3|B|}{2}) &\ge |B| (\log n - \log(\frac{3|B|}{2}) ) - \frac{|B|}{2} \log(\frac{2n}{3|B|}) = \frac{|B|}{2} \log(\frac{2n}{3|B|}). \end{split} \end{equation*} Therefore, $$|S| \ge \frac{|B|}{2\log(\frac{3|B|}{2})} \log(\frac{2n}{3|B|}) > \frac{|B|}{4\log|B|} \log(\frac{2n}{3|B|}).$$ This completes the proof of the claim. Now let $W = N_{H_Q}(B) \cap S$. For every vertex $y \in S$, we have $$\mathbb{P}(y \in W) = 1 - (1-p_c)^{|N_H(y) \cap B|} \ge 1 - (1-p_c)^{\Delta} = 1 - (1-p_c)^{\lambda^{1/2} p_c^{-1}} \ge 1 - e^{-\lambda^{1/2}}.$$ Thus, $\mathbb{E}|W| \ge |S|(1 - e^{-\lambda^{1/2}})$. By Markov's inequality, $$\mathbb{P}\left(|W| \ge \min \left( \frac{|B|}{4}, \frac{|B|}{8\log|B|} \log(\frac{2n}{3|B|}) \right) \right) \ge \mathbb{P}(|W| \ge |S|/2) \ge 1 - 2 e^{-\lambda^{1/2}}.$$ This concludes the proof of Case 1. \noindent \textbf{Case 2.} $e_G(B,T) > \frac{d |B|}{4 \log n} \log(\frac{2n}{3|B|})$. Since $|\phi(v)| \le \frac{d}{8 \log n} \log(\frac{2n}{3|B|})$ for all $v \in V(G)$ by (iii), we have $$e_H(B,T) \ge e_G(B,T) - \sum_{v \in B} |\phi(v)| > \frac{d |B|}{4 \log n} \log(\frac{2n}{3|B|}) - \frac{d |B|}{8 \log n} \log(\frac{2n}{3|B|}) = \frac{d |B|}{8 \log n} \log(\frac{2n}{3|B|}).$$ Let $T_1 = \{v \in T: |N_H(v) \cap B| < p_c^{-1}/2 \}$ and $T_2 = T \backslash T_1$. If $e_H(B,T_1) \ge e_H(B,T_2)$, then let $W = N_{H_Q}(B) \cap T_1$.
For every vertex $y \in T_1$, we have $$\mathbb{P}(y \in W) = 1 - (1-p_c)^{|N_H(y) \cap B|} > 1 - \Bigl(1-\frac{p_c|N_H(y) \cap B|}{2}\Bigr)=\frac{p_c|N_H(y) \cap B|}{2}.$$ Then $$\mathbb{E}|W| = \sum_{y \in T_1} \mathbb{P}(y \in W) > \frac{p_c}{2} \sum_{y \in T_1} |N_H(y) \cap B| = \frac{p_c}{2} e_H(B,T_1) \ge \frac{p_c}{4} e_H(B,T) > \frac{dp_c |B|}{32 \log n} \log(\frac{2n}{3|B|}) .$$ Otherwise, $e_H(B,T_1) < e_H(B,T_2)$; let $W = N_{H_Q}(B) \cap T_2$. For every vertex $y \in T_2$, we have $$\mathbb{P}(y \in W) = 1 - (1-p_c)^{|N_H(y) \cap B|} > 1 - e^{-1/2} > 1/3.$$ Then $$\mathbb{E}|W| = \sum_{y \in T_2} \mathbb{P}(y \in W) > |T_2|/3 \ge \frac{e_H(B,T_2)}{3\Delta} \ge \frac{e_H(B,T)}{6\Delta} \ge \frac{d p_c |B|}{48\lambda^{1/2} \log n} \log(\frac{2n}{3|B|}).$$ In both cases, let $Z_i$ be the color-exposure martingale associated with $W$. Since $f$ is a proper coloring of $H$, $|Z_i - Z_{i-1}| \le |B|$ for $i \in [|R|]$. Hence, by Lemma \ref{azuma}, we have $$\mathbb{P}\Bigl(|W| \leq \frac{d p_c|B| }{96\lambda^{1/2} \log n} \log(\frac{2n}{3|B|}) \Bigr) \le\mathbb{P}(|W| \leq \mathbb{E}|W|/2 )\leq e^{-\mathbb{E}|W|/12|B|} \le e^{-\lambda^{1/2}/600}.$$ Therefore, with probability at least $1 - e^{-\Omega(\lambda^{1/2})}$, $$|W| \ge \frac{d p_c|B| }{96\lambda^{1/2} \log n} \log(\frac{2n}{3|B|}) \ge \min \left( \frac{|B|}{4}, \frac{|B|\log(\frac{2n}{3|B|})}{8\log|B|} \right), $$ since $d \ge \lambda (p_c)^{-1} \log n$ by (ii). This completes the proof of Case 2. \end{proof} We now show that every vertex in a log-maximal graph can reach many vertices by a rainbow path of moderate length. \begin{lemma} \label{lem:reachable} Let $0 < p_c \le 1$ and let $n > 0$ be a sufficiently large integer such that $p_c \ge 1/\log n$. Let $G$ be a graph on $n$ vertices with a proper edge coloring $f: E(G) \rightarrow R$ satisfying the following: \begin{itemize} \item[(i)] $G$ is log-maximal; \item[(ii)] $d := d(G) \ge \lambda^2 p_c^{-1} (\log n)^2$ where $\lambda \ge (\log \log n)^{10}$; \item[(iii)] $\phi_0 \subseteq V(G) \cup R$ with $|\phi_0| \le {d}/{16 \log n}$. \end{itemize} Let $Q \subseteq R$ be a random subset of colors such that each color is chosen with probability $p_c$ independently. Then for every $v \in V(G)$, with probability more than $1/2$, more than $n/2$ vertices of $G$ can be reached from $v$ by a rainbow $(Q \backslash \phi_0)$-path of length $O(\log n \cdot \log \log n)$ avoiding the (forbidden) vertices in $\phi_0$. \end{lemma} \begin{proof} Let $l = 32 \log n \cdot \log \log n $; so $l \le {d}/{32 \log n}$. We adopt the ``sprinkling'' technique. We sample colors with a different probability in each round so that the final distribution of colors is the same after the process ends. More precisely, we define $q_i$ for $i \in [l]$ as follows: $q_1 = p_c / 2$ and $q_i = q$ for $i \in [l] \backslash \{1\}$, where $1-p_c = (1-q_1)(1-q)^{l-1}$.
Thus, $q = \Theta(p_c/l) = \Theta(p_c / (\log n \cdot \log \log n))$. Fix $v \in V(G)$. Let $Q_i$ be a random sample of $Q$ such that each color is chosen with probability $q_i$ independently, for $i \in [l]$. We define $\phi_i$, $S_i$ and $B_i$ recursively as follows. $$\phi_0(v) := \phi_0, \quad \phi_0(x) := \emptyset \ \ \forall x \in V(G) \backslash \{v\}, $$ $$S_1 = B_1 := \{x \in N(v) \backslash \phi_0(v) : f(xv) \in Q_1 \backslash \phi_0(v) \}.$$ For $i \in [l]$, $B_i$ is a subset of the vertices that are reachable from $v$ by a rainbow $((Q_1 \cup \cdots \cup Q_i) \backslash \phi_0)$-path of length at most $i$ avoiding vertices in $\phi_0$. For each $x \in B_i$, let $P_{vx}^i$ be an arbitrary such path from $v$ to $x$ of length at most $i$. For $i \in [l-1]$, we define $\phi_i(x)$ to be the union of $\phi_0$ and the vertices and colors used in $P_{vx}^i$ for $x \in B_i$, and $\phi_i(x) = \emptyset$ for $x \not\in B_i$. For $i \in [l-1]$, define $$S_{i+1} := \{ y \in N(B_i) \backslash (B_i \cup \phi_0): \exists x \in B_i,\ y \not\in \phi_i(x),\ xy \in E(G),\ f(xy) \in Q_{i+1} \backslash \phi_i(x) \},$$ $$B_{i+1} := B_i \cup S_{i+1}.$$ We want to apply Lemma \ref{lem:expand} to $(G,B,p_c,\lambda)_{\ref{lem:expand}} = (G,B_i,q_i,\lambda)$ for every $i \in [l]$, and show that $B_i$ expands substantially. Note that $$d \ge \lambda^2 p_c^{-1} (\log n)^2 > \lambda q_i^{-1} \log n$$ and $$|\phi_i(x)| \le 2l + |\phi_0| \le 2 \cdot \frac{d }{32 \log n} + \frac{d}{16 \log n} = \frac{d}{8 \log n}.$$ Moreover, as $G$ is log-maximal, $\delta(G) \ge d/2$ by Lemma \ref{lem:maximal}. We have $$\mathbb{E}|B_1| \ge \delta(G) q_1 - |\phi_0| \ge \frac{dp_c}{4} - \frac{d}{16 \log n} > 8.$$ Hence, by Markov's inequality, with probability at least $3/4$, $|B_1| > 2$, and thus $|B_i| \ge |B_1| > 2$ for all $i \in [l]$. Therefore, by Lemma \ref{lem:expand}, with probability at least $1 - e^{-\Omega(\lambda^{1/2})}$, we have $$|S_{i+1}| \ge |N_{Q_{i+1},\phi_i}(B_i)| \ge \min \left( \frac{|B_i|}{4}, \frac{|B_i|\log(\frac{2n}{3|B_i|})}{8\log(|B_i|)} \right).$$ With probability $3/4 - l \cdot e^{-\Omega(\lambda^{1/2})} > 1/2$, this is true for all $i \in [l-1]$. If $|B_i| < ({2n}/{3})^{1/3}$, then $|S_{i+1}| \ge {|B_i|}/{4}$. Otherwise, $|S_{i+1}| \ge {|B_i|\log(\frac{2n}{3|B_i|})}/{8\log(|B_i|)}$. Let $r_1 \ge 1$ be the minimum integer such that $|B_{r_1}| \ge (\frac{2n}{3})^{1/3} $. It is easy to see that $r_1 \le {\log(2n/3)}/{3\log(5/4)} = O(\log n)$. For $r_1 \le i \le l$, let $\delta_i > 0$ be such that $|B_i| = (2n/3)^{1-\delta_i}$. Now let $r_1 \le r_2 \le l$ be the minimum integer such that $|B_{r_2}| > {n}/{2} = ({2n}/{3})^{1 - \log_{2n/3}(4/3)} $.
Thus, for $r_1 \le i < r_2$, $$|B_{i+1}| = |B_i| + |S_{i+1}| \ge |B_i| \left(1 + \frac{\delta_i \log(2n/3)}{8 (1-\delta_i) \log(2n/3)} \right) = |B_i| \left( 1 + \frac{\delta_i}{8(1-\delta_i)} \right) \ge |B_i| \cdot \frac{1}{1-\delta_i/8}.$$ Hence, $$1 - \delta_{i+1} \ge 1 - \delta_i + \log_{2n/3} \left( \frac{1}{1-\delta_i/8} \right),$$ that is, $$\delta_{i+1} \le \delta_i + \log_{2n/3} \left( 1-\delta_i/8 \right) = \delta_i + \frac{\log(1-\delta_i/8)}{\log (2n/3)} \le \delta_i \Bigl(1 - \frac{1}{8 \log (2n/3)}\Bigr).$$ By the definition of $r_2$, we have $\delta_{r_2-1} \ge \log_{2n/3}(4/3)$. Therefore, $r_2 \le r_1 + 10 \log n \cdot \log \log n \le l$. So $|B_l| > {n}/{2}$, which concludes the proof. \end{proof} As a corollary, we are able to show that any two vertices in a log-maximal graph on $n$ vertices can be connected by a rainbow path of length at most $O(\log n \cdot \log \log n)$ upon forbidding a moderate number of vertices and colors. In other words, its small-diameter property is robust. \begin{lemma} \label{lem:diamter} Let $n > 0$ be a sufficiently large integer. Let $G$ be a graph on $n$ vertices with a proper edge coloring $f: E(G) \rightarrow R$ satisfying the following: \begin{itemize} \item[(i)] $G$ is log-maximal; \item[(ii)] $d := d(G) \ge 4 \lambda^2 (\log n)^2$ where $\lambda \ge (\log \log n)^{10}$; \item[(iii)] $\phi_0 \subseteq V(G) \cup R$ with $|\phi_0| \le {d}/{16 \log n}$. \end{itemize} For any two vertices $u,v \in V(G)$, there exists a rainbow $(R \backslash \phi_0)$-path from $u$ to $v$ of length $O(\log n \cdot \log \log n)$ avoiding the vertices in $\phi_0$. \end{lemma} \begin{proof} Let $p_c = 1/2$ and let $(R_u, R_v)$ be a partition of $R$ such that each color appears in $R_u$ with probability $1/2$ independently. One can view $R_u$ (resp.\ $R_v$) as a random subset of the colors $R$ in which each color is chosen with probability $p_c$ independently. Let $B_u$ (resp.\ $B_v$) be the set of vertices of $G$ that can be reached from $u$ (resp.\ $v$) by a rainbow $(R_u \backslash \phi_0)$-path (resp.\ $(R_v \backslash \phi_0)$-path) of length $O(\log n \cdot \log \log n)$ avoiding the (forbidden) vertices in $\phi_0$. Applying Lemma \ref{lem:reachable} to $G,u$ with $p_c = 1/2$ (resp.\ to $G,v$ with $p_c = 1/2$), we see that with probability more than $1/2$, $|B_u| > n/2$ (resp.\ $|B_v| > n/2$). Therefore, with positive probability, there exists a partition $(R_u, R_v)$ of $R$ such that $|B_u| > n/2$ and $|B_v| > n/2$, and thus $B_u \cap B_v \ne \emptyset$. Let $w \in B_u \cap B_v$ and let $P_{uw}$ (resp.\ $P_{vw}$) be a rainbow $(R_u \backslash \phi_0)$-path (resp.\ $(R_v \backslash \phi_0)$-path) from $u$ (resp.\ $v$) to $w$ of length $O(\log n \cdot \log \log n)$ avoiding the (forbidden) vertices in $\phi_0$. Choose $w$ so that $|V(P_{uw})| + |V(P_{vw})|$ is minimized; then $P_{uw}$ and $P_{vw}$ meet only at $w$, and since they use disjoint sets of colors, $P_{uw} \cup P_{vw}$ is the desired rainbow path.
\end{proof} \subsection{Proof of Theorem \ref{main_clique}} \begin{proof} Let $\varepsilon > 0$ be an arbitrarily small real number. By passing to a subgraph, we may assume that $d(G) = (\log n)^{2+\varepsilon}$. Suppose $f: E(G) \rightarrow R$ is the proper edge coloring of $G$. Let $H$ be a log-maximal subgraph of $G$ and let $m = v(H)$. Then $$d(H) \ge \frac{\log m}{\log n} d(G) = \log m \cdot (\log n)^{1+\varepsilon} \ge (\log m)^{2+\varepsilon}$$ and $\delta(H) \ge d(H)/2 > (\log m)^{2+\varepsilon}/2$. Note that $m \ge d(H) \ge (\log n)^{1+\varepsilon}$; so $m$ is also sufficiently large. Let $v_1,v_2,\ldots,v_t$ be $t$ distinct vertices in $H$. Let $K \subseteq \binom{[t]}{2}$ be a maximal collection of pairs such that there exists a family of pairwise internally disjoint rainbow paths $\mathcal{P} = \{P_k : k \in K\}$ such that \stepcounter{propcounter} \begin{enumerate}[label = {\bfseries \Alph{propcounter}\arabic{enumi}}] \item For each $\{i,j\} \in K$, $P_{\{i,j\}}$ is a rainbow path of length $O(\log m \cdot \log \log m)$ from $v_i$ to $v_j$; \label{main-1} \item No color appears more than once among the edges of the paths in $\mathcal{P}$. \label{main-2} \end{enumerate} If $K = \binom{[t]}{2}$, then the graph formed by all the paths in $\mathcal{P}$ is a desired rainbow $\mathsf{TK}_t$. Hence, we may assume that there exist distinct $i, j \in [t]$ such that $\mathcal{P}$ contains no such path from $v_i$ to $v_j$. Let $\phi_0$ be the union of the vertices and colors in the paths in $\mathcal{P}$, except $v_i$ and $v_j$. Note that $|\phi_0| \le \binom{t}{2} \cdot O(\log m \cdot \log \log m) < {d(H)}/{16 \log m}$ and $d(H) \ge (\log m)^{2+\varepsilon} \ge 4 (\log \log m)^{20} (\log m)^2$. Applying Lemma \ref{lem:diamter} with $(G,\lambda,\phi_0)_{\ref{lem:diamter}} = (H,(\log \log m)^{10},\phi_0)$, we obtain a rainbow $(R \backslash \phi_0)$-path $P_{\{i,j\}}$ from $v_i$ to $v_j$ of length $O( \log m \cdot \log \log m)$ avoiding the vertices in $\phi_0$. Hence, $K \cup \{\{i,j\}\}$ and $\mathcal{P} \cup \{ P_{\{i,j\}} \}$ contradict the maximality of $K$. This completes the proof. \end{proof} \end{document}
\begin{document} \title{Optomechanical generation of a mechanical catlike state by phonon subtraction} \author{Itay Shomroni}\email{[email protected]} \author{Liu Qiu} \affiliation{Institute of Physics, \'Ecole Polytechnique F\'ed\'erale de Lausanne, CH-1015 Lausanne, Switzerland} \author{Tobias J. Kippenberg} \affiliation{Institute of Physics, \'Ecole Polytechnique F\'ed\'erale de Lausanne, CH-1015 Lausanne, Switzerland} \date{23 September 2019} \begin{abstract} We propose a scheme to prepare a macroscopic mechanical oscillator in a cat-like state, close to a coherent state superposition. The mechanical oscillator, coupled by radiation-pressure interaction to a field in an optical cavity, is first prepared close to a squeezed vacuum state using a reservoir engineering technique. The system is then probed using a short optical pulse tuned to the lower motional sideband of the cavity resonance, realizing a photon-phonon swap interaction. A photon number measurement of the photons emerging from the cavity then conditions a phonon-subtracted cat-like state with a negative Wigner distribution exhibiting separated peaks and multiple interference fringes. We show that this scheme is feasible using state-of-the-art photonic crystal optomechanical system. \end{abstract} \pacs{Valid PACS appear here} \maketitle \emph{Introduction.---} Since its inception, major questions in quantum mechanics have been whether and how the superposition principle applies to macroscopic objects, as embodied in the famous thought experiment of Schr\"odinger. Nowadays, superpositions of coherent states, also known as cat states, are routinely generated in microscopic systems such as ions~\cite{monroe1996,leibfried2005}, radiation in superconducting cavities~\cite{vlastakis2013}, optical photons~\cite{huang2015,ourjoumtsev2006,neergaard2006}, as well as atoms~\cite{omran2019} and hybrid atom-light systems~\cite{hacker2019}. Preparing such states in macroscopic systems has proved to be more difficult. It has mainly been considered within the framework of quantum optomechanics, where a macroscopic mechanical oscillator is coupled to an electromagnetic field in a cavity via radiation pressure interaction~\cite{aspelmeyer2014}. Recent advances in optomechanics include cooling the oscillator to its ground state~\cite{chan2011,qiu2019}, observation of quantum correlations between light and mechanics~\cite{purdy2013b,purdy2013,safavi-naeini2013,sudhir2017,safavi-naeini2012,sudhir2017}, and quantum nondemolition (QND) measurements~\cite{suh2014,shomroni2019} and squeezing~\cite{kronwald2013,wollman2015,lecocq2015,pirkkalainen2015,lei2016} of mechanical motion, conditional preparation of single-phonon and entangled mechanical states~\cite{galland2014,hong2017,riedinger2018,marinkovic2018}, continuous-variable steady-state mechanical entanglement~\cite{ockeloen-korppi2018}, and generation of acoustic Fock states through coupling with superconducting qubits~\cite{chu2017,chu2018,satzinger2018,sletten2019,arrangoiz-arriola2019}. Despite numerous theoretical proposals~\cite{mancini1997,bose1997,kleckner2008,paternostro2011,romero-isart2011,pepper2012,akram2013,tan2013,vanner2013,milburn2016,hoff2016,liao2016,abdi2016,clarke2018,li2018,davis2018}, however, the generation of macroscopic cat states has remained elusive. 
Such states are interesting not only for addressing fundamental questions in quantum theory~\cite{marshall2003,blencowe2013,nimmrichter2014,carlesso2019}, but also for efficient encoding of quantum information in continuous-variable systems~\cite{cochrane1999,leghtas2013,mirrahimi2014}. \begin{figure} \caption{Optomechanical scheme for generation of a mechanical catlike state. (a)~Illustration of a cavity-optomechanical system. A mechanical oscillator (frequency $\Omega_m$, energy dissipation rate $\Gamma_m$, displacement $\hat x$) forms part of an optical cavity (frequency $\omega_c$, energy dissipation rate $\kappa$). Cavity photons couple to the oscillator through radiation-pressure interaction, and the output light from the cavity is analyzed. (b)~Time-domain picture of the scheme. The oscillator is first prepared in a squeezed state by driving the cavity on the upper and lower motional sidebands. Then, a short pulse on the lower motional sideband drives an anti-Stokes photon-phonon scattering (beamsplitter interaction), subtracting phonons from the mechanical state, which can be analyzed after a variable wait time. (c,d) Frequency-domain pictures of the squeezing (c) and subtraction (d) stages. The Wigner distribution of the mechanical state is also shown. } \label{fig:scheme} \end{figure} Here we propose a scheme for preparation of a mechanical catlike state in quantum optomechanics that does not rely on nonlinear coupling~\cite{romero-isart2011,tan2013,akram2013} or on external generation and transfer of the nonclassical state~\cite{hoff2016,teh2018}. An established technique for the preparation of cat-like states is the subtraction of one or several quanta from a squeezed vacuum state~\cite{dakna1997,biswas2007}. This was demonstrated in optics by passing a squeezed vacuum beam through a high-transmission beam splitter. Conditioned on $m$ photons detected at the reflection port, an equal number of photons is subtracted from the transmitted beam, resulting in a catlike state~\cite{ourjoumtsev2006,neergaard2006}. In optomechanics, it was proposed to transfer such a state onto a mechanical oscillator~\cite{hoff2016}. Ref.~\onlinecite{milburn2016} studied nonclassical state generation through various combinations of pulsed position measurements or measurement-induced mechanical squeezing, with single-phonon subtraction or addition, using two optical modes. Our approach combines methods adopted from continuous quantum control with number-resolved photon counting, both available in a single sideband-resolved optical mode, to generate a phonon-subtracted squeezed mechanical state (Fig.~\ref{fig:scheme}). In optomechanics, it is possible to realize a beamsplitter-type interaction between a cavity photon (frequency $\omega_c$) and a mechanical excitation (a phonon of frequency $\Omega_m$) within the resolved-sideband regime $\Omega_m\gg\kappa$, where $\kappa$ is the cavity linewidth~\cite{aspelmeyer2014}. This interaction occurs when the cavity is driven at a frequency $\omega_l$ on the lower motional sideband, $\omega_l-\omega_c=-\Omega_m$, through cavity-enhanced anti-Stokes scattering of drive photons by the oscillator (Fig.~\ref{fig:scheme}d), and is the basis of sideband cooling of mechanical motion~\cite{aspelmeyer2014} and coherent photon-phonon swap~\cite{hofer2011,palomaki2013}.
In our scheme, shown in Fig.~\ref{fig:scheme}, the mechanical oscillator is first prepared close to a squeezed vacuum state~\cite{kronwald2013,wollman2015,lecocq2015,pirkkalainen2015,lei2016}, and then one or several phonons are swapped with photons which proceed to emerge from the cavity. Light from the cavity is optically filtered on the resonance frequency in order to detect only anti-Stokes scattered photons. Conditioned on subsequent number-resolved photon detection~\cite{mattioli2015}, a mechanical phonon-subtracted squeezed state is generated. The state can be subsequently analyzed by mechanical tomography, for example using single-quadrature QND measurements, demonstrated in the optical domain~\cite{shomroni2019}, or by state swap followed by homodyne detection~\cite{palomaki2013}. Optomechanical crystals~\cite{chan2012} are an especially promising platform for our scheme. They operate in the resolved sideband regime, and cooling to the ground state with strong driving has been demonstrated~\cite{qiu2019} (note that cooling and squeezing are here combined in a single step~\cite{kronwald2013}). Additionally, they can show extremely long coherence times of more than a second~\cite{maccabe2019}, making them attractive for studying nonclassical states of motion. We note that the individual components of our scheme have both been separately implemented. Squeezing was successfully demonstrated in several optomechanical systems~\cite{wollman2015,lecocq2015,pirkkalainen2015,lei2016}, and photon counting was applied to optomechanical crystals prepared in the ground state to generate single-phonon and entangled~\cite{cohen2015,riedinger2016,hong2017,riedinger2018,marinkovic2018} mechanical states. \emph{Squeezing of the mechanical state.---} The first stage of our protocol is squeezing of the mechanical oscillator by reservoir engineering~\cite{kronwald2013}. The optomechanical system in the resolved-sideband regime is driven with two tones tuned to the upper ($+$) and lower ($-$) mechanical sidebands, with coupling rates $g_\pm = g_0\sqrt{\bar n_\pm}$ (Fig.~\ref{fig:scheme}c), where $g_0$ is the single-photon optomechanical coupling rate and $\bar n_\pm$ is the mean intracavity photon number due to each drive. When $g_+=g_-$, a QND measurement of a single mechanical quadrature $\hat X_1 = (\hat b\dagg+\hat b)/\sqrt{2}$ is performed~\cite{caves1980,clerk2008}, with $\hat b$ being the phonon annihilation operator. When $g_->g_+$, however, both quadratures $\hat X_1$ and $\hat X_2=i(\hat b\dagg-\hat b)/\sqrt{2}$ are equally damped by the cavity field while the fluctuations associated with the damping are distributed unequally. This results in a squeezed thermal state characterized by a squeezing parameter $r$ and purity $\neff$, where $\tanh r = g_+/g_-$ and $\neff+\frac{1}{2}=\sqrt{\langle\Delta X_1^2\rangle\langle\Delta X_2^2\rangle}$. The advantage of this scheme is that it allows arbitrarily strong squeezing (limited by drive power), in particular exceeding the $3\unit{dB}$ limit of parametric driving. While Ref.~\onlinecite{kronwald2013} focused on maximum squeezing (minimum variance in one quadrature) for a given drive power characterized by the cooperativity~$\C=4g_-^2/\kappa\gamma$, this comes at the expense of increased $\neff$ (although for optimal squeezing, $\neff\rightarrow 0.2$ in the limit of high cooperativity~\cite{kronwald2013}). State purity, however, is important for engineering quantum states, and in this work we relax the demand for optimal squeezing in favor of purity. 
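For orientation, the squeezing levels considered below follow directly from these relations: relative to the corresponding unsqueezed state, the variance of the squeezed quadrature is reduced by the factor $e^{-2r}$, i.e.\ by $10\log_{10}e^{2r}\approx 8.7\,r$ dB, so that $r=0.5$ and $r=1$ correspond to $4.3\unit{dB}$ and $8.7\unit{dB}$ of squeezing and require drive ratios $g_+/g_-=\tanh r\approx 0.46$ and $0.76$, respectively.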
For a given cooling strength $\C$, there is a trade-off between the state purity $\neff$ and the amount of squeezing $r$ related to the imbalance of the drives~\cite{kronwald2013}. Figure~\ref{fig:purity}a shows the state purity $\neff$ vs.~the squeezing parameter $r$ for different cooperativities, and Fig.~\ref{fig:purity}b shows the required cooperativity vs.~the squeezing parameter for different purities. In Fig.~\ref{fig:purity}, we assume that the mechanical oscillator is coupled to a bath with mean thermal occupancy $\nth=2$. For conciseness, in this work we consider two working points, both with $\neff=0.02$: (1) $r=0.5$ ($4.3\unit{dB}$ squeezing), $\C\simeq 200$, compatible with recent high fidelity ground state cooling experiments in optomechanical crystals~\cite{qiu2019} and (2) $r=1$ ($8.7\unit{dB}$), $\C\simeq 1000$. Note that mechanical squeezing of $4.7\unit{dB}$ has been reported~\cite{lei2016}. Accordingly, we will assume in the following that the oscillator is prepared in the desired squeezed state. \begin{figure} \caption{State purity in optomechanical dissipative squeezing. (a)~State purity~$\neff$ vs.~squeezing parameter~$r$ for different cooperativities~$\C$. (b)~The cooperativity required to achieve a given purity vs.~$r$. The steady-state in optomechanical dissipative squeezing, in particular the variance of the squeezed quadrature but also the thermal component $\neff$, results from a trade-off between optical damping and ratio of the drives. The two working points with $\neff=0.02$ used in this work are indicated in both panels. } \label{fig:purity} \end{figure} \emph{Conditional phonon subtraction.---} Following the squeezing stage, we apply a weak pulse tuned to the lower motional sideband (Fig.~\ref{fig:scheme}d), realizing a beamsplitter interaction, $\hat H\s{int} = \hbar g(\hat a\dagg\hat b+\hat b\dagg\hat a)$, where $\hat a$ is the photon annihilation operator in a frame displaced by the mean cavity field $\bar a$, and $g=g_0\bar a$ is the coupling rate enhanced by $\bar a $~\cite{suppmat}. The relation between the mechanical mode $\hat b(t)$ and the cavity output field $\Aout(t)$ assuming weak coupling $g\ll\kappa$ is (see the appendix) \begin{equation} \begin{pmatrix}\Aout(t) \\ \hat b(t) \end{pmatrix} = \begin{pmatrix}\cos\theta & i\sin\theta \\ i\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix}\Ain(t)\\ \hat b(0) \end{pmatrix} \label{eq:beamsplitter1} \end{equation} where $\cos\theta\equiv e^{-\tilde g t}$ is the beamsplitter amplitude ``transmission'' with $\tilde g\equiv 2g^2/\kappa$ being the interaction strength and $\Ain(t)$ being the optical input in the second ``port'' of the beamsplitter, which is vacuum noise in the displaced frame. If the initial mechanical state is squeezed vacuum ($\neff=0$) $\hat S(r)\hat\rho_0\hat S\dagg(r)$ where $\hat\rho_0 = \lvert 0\rangle\langle 0\rvert$ and $\hat S(r) = e^{r(\hat b^2-\hat b\dagg{}^2)/2}$ is the squeezing operator, the final mechanical state $\hat\rho\s{out}^{(m)}$ conditioned on the detection of $m$ photons in the output field can be calculated analytically~\cite{dakna1997}. It is parametrized by $m$ and by $\cos^2\theta\tanh r$, with the initial squeezing degraded by the transmission $\cos^2\theta$ due to mixing with the optical vacuum noise $\Ain$. Increasing the transmission however also reduces the probability to herald $m$ subtracted phonons. 
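To get a sense of the heralding rates involved, note that for $\theta\ll 1$ the photon-counting probability derived in the appendix reduces, for $m=1$ and to leading order in $\theta$, to $P(1)\approx\sin^2\theta\,\langle\hat n\rangle$, with $\langle\hat n\rangle$ the mean phonon number of the initial state ($\langle\hat n\rangle=\sinh^2 r$ for squeezed vacuum; the small thermal contribution $\neff$ is neglected in this estimate). For the illustrative values $\theta=0.05$ and $r=0.5$ this gives $P(1)\sim 7\times 10^{-4}$ before detection losses, consistent with the heralding probabilities quoted in the experimental discussion below.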
Importantly, as we discuss below, non-negligible values of $\theta$ can be easily obtained with pulse durations satisfying $\kappa^{-1} \ll t\s{pulse} \ll (\nth\Gamma_m)^{-1}$, with $\nth\Gamma_m$ being the thermal decoherence rate; thus, we can neglect decoherence of the mechanical state during the pulse. The Wigner distribution of the conditioned mechanical state appears as two displaced peaks with an intermediate oscillating region, similar to a cat state. The peak separation increases with $m$ and initial squeezing~$r$~\cite{dakna1997}. Squeezed Fock and thermal states have also been treated in the literature~\cite{kim1989,marian1991,marian1992,marian1993,hu2010} but yield complicated expressions. Instead we solve numerically for the mechanical output state when the input is a squeezed thermal state, with parameters $m$, $r$, $\neff$, and $\theta$~\cite{suppmat}. As in Refs.~\mbox{\onlinecite{hoff2016,clarke2018}}, we characterize the quantum nature of the output state using two measures based on the Wigner distribution $W(x,p)$. The macroscopicity~\cite{lee2011} \begin{equation} \mathcal{I} = -\frac{\pi}{2}\int\!\!\!\int W(x,p)\biggl(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial p^2}+2\biggr)W(x,p)\,dx\,dp \end{equation} assesses $W(x,p)$ through the amplitude and frequency of its interference fringes, with higher values indicating higher nonclassicality. For any state with a given mean excitation number $\langle\hat n\rangle$, the maximum possible value of $\mathcal{I}$ is $\langle\hat n\rangle$. In particular, this maximum is attained both for cat states and for phonon-subtracted squeezed vacuum states, but also for squeezed vacuum~\footnote{$\mathcal{I}$ is maximized for any pure state that is orthogonal to a phonon-subtracted state of itself~\cite{lee2011}. Cat states, phonon-subtracted squeezed vacuum states, and squeezed vacuum all contain only odd or only even number states, and thus satisfy this condition.}. We also consider the Wigner negativity~\cite{kenfack2004} \begin{equation} \mathcal{N} = \frac{1}{2}\biggl(\int\!\!\!\int\lvert W(x,p)\rvert\,dx\,dp-1\biggr) \end{equation} which is simply the phase-space volume of the negative part of $W(x,p)$. Figure~\ref{fig:negativity}a,b shows these measures vs.~the squeezing parameter~$r$ for different initial mechanical state purities~$\neff$ and for detection of~$m=1, 2$ or~3 photons. As expected, the nonclassicality of the final mechanical state is degraded by initial state impurity, but can be increased by stronger squeezing. Figure~\ref{fig:negativity}a,b also shows that for highly impure initial states or very weak squeezing, more subtractions actually decrease nonclassicality. \begin{figure} \caption{Effect of initial state purity on the nonclassicality of phonon-subtracted squeezed thermal mechanical states. (a)--(b)~The output state macroscopicity and Wigner negativity vs.~squeezing parameter $r$ for state purities $\neff=0$ (blue), 0.02 (orange), and 0.1 (green) and for~$m=1$ (solid), 2~(dashed), and 3~(dash-dotted) subtracted phonons. Squeezed vacuum $\neff=0$ is shown for reference. (c)--(e) Wigner distributions for $r=1$, $\neff=0.02$, and $m=1,2,3$. The macroscopicity $\mathcal{I}$ is indicated in each panel. } \label{fig:negativity} \end{figure} Figure~\ref{fig:negativity}c--e shows the Wigner distributions for $r=1$, $\neff=0.02$, and $m=1,2,3$, indicating the achieved macroscopicity $\mathcal{I}$. Figure~\ref{fig:negativity} assumes no additional optical losses and thus gives maximum nonclassicality for the given parameters.
Even with optical losses, this maximum can be maintained by reducing the interaction strength at the expense of heralding probability, such that any photon lost will prevent heralding. In an actual experiment, a balance must be struck between constraints on experiment duration and decoherence of the mechanical state due to optical losses. We extend the previous analysis by including a beam splitter in the optical path to account for a finite optical detection efficiency~$\eta$. Figure~\ref{fig:losses}a,b shows the effect on the heralding probability and macroscopicity, respectively, in the case $\neff=0.02$ for $r=0.5,1$ and $m=1,2,3$. \begin{figure*} \caption{Effect of optical losses on the generated catlike mechanical state. (a)~Heralding probability (successful detection of $m$ photons) vs.~total optical detection efficiency $\eta$, shown for initial squeezing $r=0.5$ (blue) and $r=1$ (orange), and for $m=1$~(solid), $2$~(dashed), and $3$~(dash-dotted) subtracted phonons. This probability incorporates the weak optomechanical beam splitter interaction $\theta$, chosen as $\theta=0.05$ for the $m=1$ cases and $\theta=0.1$ for the $m=2$ cases. (b)~The macroscopicity $\mathcal{I}$ of the heralded state vs.~$\eta$. } \label{fig:losses} \end{figure*} \emph{Experimental realization.---} To estimate the experimental feasibility of our scheme we consider an optomechanical crystal, similar to that used in our recent QND~\cite{shomroni2019} and ground-state cooling~\cite{qiu2019} experiments, operating in the resolved sideband regime, with mechanical frequency $\Omega_m/2\pi=5.2\unit{GHz}$, intrinsic mechanical linewidth $\Gamma_m/2\pi=100\unit{kHz}$, and an optical cavity at telecommunication wavelengths with linewidth $\kappa/2\pi=1\unit{GHz}$, of which $\kappa\s{ex}/2\pi=800\unit{MHz}$ is output coupling and the rest intrinsic dissipation~\footnote{In other words, we use an overcoupled version of the system of Ref.~\onlinecite{qiu2019} with $\kappa_i/2\pi\simeq 200\unit{MHz}$ intrinsic cavity energy loss rate.}. We assume cryogenic operation at temperature $T\s{bath}=0.5\unit{K}$, yielding bath occupation $\nth=2$ and a thermal decoherence time of $(\nth\Gamma_m)^{-1}\approx 1\unit{\mu s}$, much longer than the cavity lifetime and mechanical period; thus we can safely neglect thermal decoherence in the analysis. Note that optomechanical crystals with decoherence times above $1\unit{s}$ have been demonstrated, albeit at mK temperatures~\cite{maccabe2019}. With a typical single-photon optomechanical coupling rate $g_0/2\pi\sim 1\unit{MHz}$, beamsplitter reflections of the order of a few percent as used here can be realized using, e.g., $10\unit{ns}$ pulses of low input power $\sim 10\unit{\mu W}$, much weaker than in typical cooling experiments. We assume a total detection efficiency $\eta=20\%$, which is feasible assuming almost 100\% outcoupling of light from the cavity into an optical fiber, as was demonstrated in Refs.~\onlinecite{tiecke2015,burek2017,magrini2018}; 80\% cavity efficiency $\kappa\s{ex}/\kappa$; and an additional 25\% efficiency due to the transmission of the optical filter and other components, and the photon-counter detection efficiency. This gives heralding probabilities in the range $10^{-4}$--$10^{-7}$ from Fig.~\ref{fig:losses}a. Squeezing of the initial thermal state occurs at a rate $\C\Gamma_m\approx 2\pi\times 20\unit{MHz}$ for $\C=200$, and thus reducing the initial occupancy $\nth=2$ to $\neff=0.02$ can be done on a timescale of $\sim 100\unit{ns}$. As noted above, the subtraction pulse duration can be $\sim 10\unit{ns}$.
We next assume that a tomography of the final mechanical state takes $\sim 100\unit{ns}$, given the mechanical period of $\sim 30\unit{ps}$. Overall we conservatively assume a repetition period of $\sim 10\unit{\mu s}$. Thus even with a heralding probability of $10^{-7}$, we expect 1 event every $100\unit{s}$, resulting in a feasible experiment duration of several hours. Note that similar photon-counting experiments were done on a time scale of $100\unit{hrs}$~\cite{hong2017}. Figure~\ref{fig:losses}c--f shows the mechanical Wigner distributions corresponding to $\eta=0.2$. For squeezing $r=0.5$, macroscopicities $\mathcal{I}\approx 1$ are obtained, similar to a single-phonon Fock state but with substantially different distributions (Fig.~\ref{fig:losses}c,d). For $r=1$, much higher values $\mathcal{I}\approx 4$ are possible (Fig.~\ref{fig:losses}e,f). \emph{Conclusion.---} We presented a scheme to prepare a macroscopic mechanical oscillator in a cat-like state by combining reservoir-engineering techniques, phonon-photon swap operations, and photon counting. A key feature of our scheme is its simplicity. It does not require preparation of nonclassical states of light, and is similar to methods used to generate macroscopic Fock states~\cite{galland2014,hong2017}, differing essentially in the squeezing step. We have used experimental parameters that are currently available in optomechanical crystals. While in this work we considered phonon subtraction from a squeezed state, phonon addition may equally well be performed, by applying a pulse tuned to the \textit{upper} motional sideband, providing additional avenues for generating nonclassical mechanical states~\cite{milburn2016,li2018}. Generation of such states will enable the study of quantum theory in macroscopic objects, and is a first step towards using highly coherent and scalable mechanical platforms for continuous-variable quantum information applications~\cite{cochrane1999}. \begin{acknowledgments} We thank D.~Malz, C.~Galland, and N.~J.~Engelsen for useful discussions and comments. This work was supported by the European Union's Horizon 2020 research and innovation programme under Grant Agreement No.~732894 (FET Proactive HOT). \end{acknowledgments} \appendix* \section{Theoretical details} We consider a standard optomechanical system with a single relevant optical mode with frequency $\omega_c$ and annihilation operator $\hat a$, and a single relevant mechanical mode with frequency $\Omega_m$ and annihilation operator $\hat b$. The two modes interact via the radiation pressure interaction with a single-photon coupling rate $g_0$. The Hamiltonian is given by~\cite{aspelmeyer2014} \begin{multline} \hat H = \hbar\omega_c\hat a\dagg\hat a + \hbar\Omega_m\hat b\dagg\hat b - \hbar g_0\hat a\dagg\hat a(\hat b\dagg+\hat b) \\ -i\hbar[\hat a\s{in}(t)\hat a\dagg - \hat a\s{in}(t)\dagg\hat a], \label{eq:Hamiltonian0} \end{multline} where $\hat a\s{in}(t) = e^{-i\omega_l t}(\bar a\s{in} + \delta\hat a\s{in})$ is a coherent drive at frequency $\omega_l$, separated into its mean and noise terms. For simplicity, we take the drive envelope to be constant. Following the standard procedure, we move to a frame rotating at the drive frequency and linearize the dynamics about the mean values $\hat a = \bar a + \delta\hat a$ and $\hat b = \bar b + \delta\hat b$.
This yields \begin{multline} H = -\hbar\Delta\delta\hat a\dagg\delta\hat a + \hbar\Omega_m\delta\hat b\dagg\delta\hat b - \hbar g(\delta\hat a+\delta\hat a\dagg)(\delta\hat b\dagg+\delta\hat b) \\ -i\hbar\bar a\s{in}(\delta\hat a\dagg - \delta\hat a), \label{eq:Hamiltonian1} \end{multline} where $\Delta=\omega_l-\omega_c$ and $g = g_0\bar a$. Including dissipation to the optical and mechanical baths as well as the adjoining input noises yields the Langevin equations \begin{subequations} \begin{align} \dot{\hat a} &= -(\kappa/2-i\Delta)\hat a - ig(\hat b\dagg +\hat b) + \sqrt{\kappa}\hat a\s{in},\\ \dot{\hat b} &= -(\Gamma_m/2+i\Omega_m)\hat b - ig(\hat a\dagg + \hat a) + \sqrt{\Gamma_m}\hat b\s{in}, \end{align} \end{subequations} where for brevity we have omitted the $\delta$ designation on the noise operators. When the drive is tuned to the lower mechanical sideband, $\Delta=-\Omega_m$, and in the good cavity limit, $\kappa\ll\Omega_m$, we can perform the rotating-wave approximation, leading to \begin{subequations} \begin{align} \dot{\hat a} &= -(\kappa/2-i\Delta)\hat a + ig\hat b + \sqrt{\kappa}\hat a\s{in},\\ \dot{\hat b} &= -(\Gamma_m/2+i\Omega_m)\hat b + ig\hat a + \sqrt{\Gamma_m}\hat b\s{in}, \end{align} \end{subequations} We will be interested in interactions occurring on a time scale much shorter than the mechanical dissipation, and hence we can neglect $\Gamma_m$. We further move into a frame rotating at the mechanical frequency to obtain \begin{subequations} \begin{align} \dot{\hat a} &= -\frac{\kappa}{2}\hat a + ig\hat b + \sqrt{\kappa}\hat a\s{in},\\ \dot{\hat b} &= ig\hat a, \end{align} \end{subequations} In the weak coupling limit $g\ll\kappa$, we can adiabatically eliminate the cavity dynamics, \begin{subequations} \begin{align} \hat a(t) &= i\frac{2g}{\kappa}\hat b + \frac{2}{\sqrt{\kappa}}\hat a\s{in}, \label{eq:A6a}\\ \hat b(t) &= e^{-\tilde g t}\hat b(0) + i\sqrt{2\tilde g}\,e^{-\tilde g t}\int_0^t dt' e^{\tilde g t'}\hat a\s{in}(t'), \label{eq:A6b} \end{align} \end{subequations} where we defined the coupling strength $\tilde g = 2g^2/\kappa$. Substituting the output field given by $\hat a\s{out} = -\hat a\s{in}+\sqrt{\kappa}\hat a$ in Eq.~\eqref{eq:A6a} yields \begin{equation} \hat a\s{out} = \hat a\s{in} + i\sqrt{2\tilde g}\hat b. \label{eq:A7} \end{equation} We introduce the temporal optical modes~\cite{hofer2011,galland2014} \begin{subequations} \begin{align} \Ain(t) &= \sqrt{\frac{2\tilde g}{e^{2\tilde g t}-1}} \int_0^t dt' e^{\tilde g t'}\hat a\s{in}(t'), \\ \Aout(t) &= \sqrt{\frac{2\tilde g}{1-e^{-2\tilde g t}}} \int_0^t dt' e^{-\tilde g t'}\hat a\s{out}(t'), \end{align} \end{subequations} which obey $[\hat A_i,\hat A_i\dagg]=1$, in Eqs.~\eqref{eq:A6b} and~\eqref{eq:A7} to yield \begin{subequations} \begin{align} \Aout(t) &= e^{-\tilde g t}\Ain(t) + i\sqrt{1-e^{-2\tilde g t}}\,\hat b(0),\\ \hat b(t) &= e^{-\tilde g t}\hat b(0) + i\sqrt{1-e^{-2\tilde g t}}\,\Ain(t). \end{align} \end{subequations} In other words, we realize a beam-splitter transformation between mechanical and optical modes \begin{equation} \begin{pmatrix} \Aout(t) \\ \hat b(t) \end{pmatrix} = \begin{pmatrix} \cos\theta & i\sin\theta \\ i\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \Ain(t)\\ \hat b(0) \end{pmatrix} \label{eq:beamsplitter} \end{equation} with $\cos\theta\equiv e^{-\tilde g t}$ and $\sin\theta\equiv\sqrt{1-e^{-2\tilde g t}}$. In our case, $\tilde gt\ll 1$ so $\theta\ll 1$. 
The unitary transformation~\eqref{eq:beamsplitter} entails~\cite{dakna1997,campos1989} $\Aout = U\dagg\Ain U$ and $\hat b(t) = U\dagg\hat b(0)U$ with $U=e^{i\frac{\pi}{2}\hat L_3}e^{-2i\theta\hat L_2}e^{-i\frac{\pi}{2}\hat L_3}$ and the ``angular momentum'' operators given by~\cite{yurke1986,ban1994,dakna1997} \begin{subequations} \begin{align} \hat L_2 &= \frac{1}{2i}[\Ain\dagg(t)\hat b(0)-\hat b\dagg(0)\Ain(t)],\\ \hat L_3 &= \frac{1}{2}[\Ain\dagg(t)\Ain(t)-\hat b\dagg(0)\hat b(0)]. \end{align} \end{subequations} Thus, in the Schr\"odinger picture, a system initially described by a density matrix $\hat\rho\s{in}$ will evolve according to $\hat\rho\s{out}=\hat U\hat\rho\s{in}\hat U\dagg$. The initial state of the systems is \begin{equation} \hat\rho\s{in} = \hat\rho\s{in}^M\otimes\lvert 0\rangle\langle 0\rvert, \end{equation} where $\hat\rho\s{in}^M$ is the mechanical input state and the cavity is in the vacuum state (hereafter all bras and kets refer to optical Hilbert space). In this case the output state is given by~\cite{ban1994,dakna1997} \begin{equation} \label{eq:outputstate} \begin{split} \hat\rho\s{out} &= \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} \biggl[ \frac{e^{-i(m-n)\pi/2}}{\sqrt{n!\,m!}}(-1)^{m+n}\lvert\tan\theta\rvert^{m+n} \\ &\quad\times \hat b^m \lvert\cos\theta\rvert^{\hat b\dagg\hat b} \hat\rho\s{in}^M \lvert\cos\theta\rvert^{\hat b\dagg\hat b} \hat b\dagg{}^n \otimes \lvert m\rangle\langle n\rvert\biggr]. \end{split} \end{equation} Conditioned on detection of $m$ output photons, the mechanical state is reduced in the usual way \begin{equation} \label{eq:reducedstate} \rho\s{out}^{(m)} = \frac{\langle m\rvert\hat\rho\s{out}\rvert m\rangle}{\mathrm{tr}_M(\langle m\rvert\hat\rho\s{out}\rvert m\rangle)} \end{equation} with probability \begin{equation} \label{eq:prbability} \begin{split} P(m) &= \mathrm{tr}_M(\langle m\rvert\hat\rho\s{out}\rvert m\rangle) \\ &= \sum_{n=m}^{\infty} \binom{n}{m}(\sin\theta)^{2m}(\cos\theta)^{2(n-m)} \langle n\vert\hat\rho\s{in}^M\vert n\rangle. \end{split} \end{equation} Equations~\eqref{eq:outputstate}, \eqref{eq:reducedstate} and~\eqref{eq:prbability} can be solved for an arbitrary mechanical input state $\hat\rho\s{in}^M$. This has been done for squeezed vacuum in Ref.~\onlinecite{dakna1997}. In our work we solve numerically for $\hat\rho\s{out}^{(m)}$ for a squeezed thermal input state. \end{document}
\begin{document} \preprint{} \title{Entanglement of Collectively Interacting Harmonic Chains: An Effective \\ Two-Dimensional System} \author{R.G. Unanyan$^{1,2}$, M. Fleischhauer$^{2}$, and D. Bru\ss $^{1}$} \affiliation{$^{1}$Institut f\"{u}r Theoretische Physik III, Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, D-40225 D\"{u}sseldorf, Germany\\ $^{2}$Fachbereich Physik, Technische Universit\"{a}t Kaiserslautern, 67663 Kaiserslautern, Germany} \begin{abstract} We study the ground-state entanglement of one-dimensional harmonic chains that are coupled to each other by a collective interaction as realized, e.g., in an anisotropic ion crystal. Due to the collective type of coupling, where each chain interacts with every other one in the same way, the total system shows critical behavior in the direction orthogonal to the chains while the isolated harmonic chains can be gapped and non-critical. We derive lower and, most importantly, upper bounds for the entanglement, quantified by the von Neumann entropy, between a compact block of oscillators and its environment. For sufficiently large size of the subsystems the bounds coincide and show that the area law for entanglement is violated by a logarithmic correction. \end{abstract} \maketitle Presently there is a growing interest in the interrelation between entanglement and ground-state properties of many-body lattice models. For a number of spin systems \cite{GVidal-PRL-2003} a strict correspondence between the absence of criticality, the presence of an energy gap, and an area law for the entanglement was established. The latter states that the entanglement of a compact subset of lattice sites with the rest of the system, measured by the von Neumann entropy, scales with the surface area of the subset. For critical spin systems it was shown that an additional logarithmic correction to the area law emerges. A similar relation between criticality and entanglement was suggested for harmonic lattice models \cite{Bombelli-PRD-1986,Srednicki-PRL-1993}. In \cite{Plenio-PRL-2005,Plenio-PRA-2006} an area law was established for harmonic lattice models in arbitrary dimensions with nearest-neighbor coupling which have a gapped spectrum. For finite-range couplings in one dimension a one-to-one correspondence between the validity of the area law and non-criticality was established in \cite{RUFM}, and logarithmic corrections were derived for critical systems. Although the relation between criticality and the entropy-area law seems rather universal, there are a number of examples where this relation does not hold \cite{Duer-PRL-2005,Plenio-PRA-2006}. Until now there is no general understanding of the conditions for the validity of an entropy area law, in particular in higher dimensions \cite{GVidal-PRL-2003,Plenio-PRL-2005,Plenio-PRA-2006,Eisert-condmat-2006}. In the present paper we discuss a specific gapless oscillator model with dimension larger than one, for which an exact asymptotic expression for the entropy can be obtained. Due to the collective nature of the interactions in one spatial direction the system is critical and thus a violation of the area law is expected. We here derive a lower and, most importantly, a tight upper bound for the entropy and in this way obtain an exact form of the correction term to the area law. Let us consider a set of parallel harmonic chains (see Fig.\ref{fig1}) each containing $n_{x}$ oscillators, with $n_{x}\rightarrow \infty $ in the thermodynamic limit.
We will refer to the direction parallel to the chains as the $x$-axis, and to the orthogonal direction as the $y$-axis. The number of parallel chains is denoted by $n_{y}$, again with $n_{y}\rightarrow \infty $ in the thermodynamic limit. The oscillators are described by the canonical variables $\left( q_{i},p_{i}\right) $, where $i=1,2,\ldots ,N$ ($N=n_{x}n_{y}$) is a collective index that labels the oscillator. We adopt the following notation: $i=1,\ldots ,n_{x}$ corresponds to the oscillators in the first chain with growing $x$ coordinate, $i=n_{x}+1,\ldots ,2n_{x}$ to the oscillators in the second chain, and so on. We consider a quadratic Hamiltonian of the form \begin{equation} H=\frac{1}{2}{\displaystyle\sum\limits_{i=1}^{N}}p_{i}^{2}+\frac{1}{2}{\displaystyle\sum\limits_{i,j=1}^{N}}V_{ij}q_{i}q_{j}, \label{Hamiltonian} \end{equation} with a coupling matrix $V$. We are interested only in a translationally invariant coupling, i.e. we assume that the matrix elements of $V$ depend only on the difference of the $x$ coordinates and the difference of the $y$ coordinates. Hence $V$ is a block Toeplitz matrix. For oscillator systems with a quadratic coupling of the form of eq.(\ref{Hamiltonian}) the ground state \begin{equation} \Psi _{0}\left( \mathbf{q}\right) \sim \exp \left( -\frac{1}{2}\left\langle \mathbf{q}\right\vert V^{1/2}\left\vert \mathbf{q}\right\rangle \right) \label{ground-state} \end{equation} and all its properties, such as the correlation length in position or momentum space, are determined by the square root of $V$, where $\mathbf{q} =(q_{1},q_{2},\dots ,q_{N})$ is the vector of position variables. The ground state can easily be determined if $V$ is the square of another matrix, which we assume to be again a Toeplitz matrix, \begin{equation} V=Z^{2}/n_{y}. \label{multiInteraction} \end{equation} The factor $1/n_{y}$ is chosen such that the matrix elements of $V$ remain finite in the limit $N\to \infty$. Assuming $Z$ to be a Toeplitz matrix guarantees that the coupling $V$ is a Toeplitz matrix as well. We furthermore consider $Z$ to be of the block-matrix form \begin{equation} Z=\left[ \begin{array}{cccccc} \Lambda & Q & Q & \cdots & \cdots & Q \\ Q & \Lambda & Q & \cdots & \cdots & Q \\ Q & Q & \Lambda & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & Q \\ Q & \cdots & \cdots & Q & Q & \Lambda \end{array} \right] . \label{Z matrix} \end{equation} The elements of $Z$ are $n_x\times n_x$ matrices and, according to eq.(\ref{ground-state}), characterize correlations. The diagonal elements of $Z$ describe correlations within one chain, i.e. in the $x$ direction; the off-diagonal elements describe correlations between the chains. $\Lambda $ and $Q$ are both assumed to be Toeplitz matrices of finite range, i.e. their matrix elements $\Lambda _{k}$ and $Q_{k}$, where $\Lambda _{k}\equiv\Lambda _{k=|i-j|}=\langle i|\Lambda |j\rangle , $ vanish exactly for $k\geq R$. The finite range of $\Lambda $ and $Q$ ensures that the interaction $V$ is of finite range within the chains, while the form of $Z$ implies that $V$ is constant orthogonal to the chains. We assume furthermore that $\Lambda $, $Q$ and $\Lambda -Q$ are positive definite matrices. A simple calculation shows that the spectrum of $V$ is degenerate and in the thermodynamic limit $n_{x},n_{y}\rightarrow \infty $ has only one non-zero eigenvalue. This means that the total Hamiltonian, Eq. (\ref{Hamiltonian}), is gapless. It should be noted however that the collective nature of the interactions is not sufficient for a gapless spectrum of the Hamiltonian.
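As an illustrative aside (this numerical check is not part of the original letter), the vanishing of the gap can be verified directly for small systems. The Python sketch below builds the block matrix $Z$ above from toy finite-range blocks $\Lambda$ and $Q$ (the concrete matrices and sizes are assumptions chosen only for illustration), forms $V=Z^2/n_y$, and shows that the lowest normal-mode frequency $\sqrt{\alpha_{\min}(V)}$ decreases like $1/\sqrt{n_y}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def build_Z(Lam, Q, ny):
    # Block matrix with Lam on the diagonal and Q in every off-diagonal block:
    # kron(I, Lam - Q) puts Lam - Q on the diagonal blocks, kron(J, Q) adds Q
    # to every block, so diagonal blocks become Lam and off-diagonal ones Q.
    return np.kron(np.eye(ny), Lam - Q) + np.kron(np.ones((ny, ny)), Q)

nx = 6
Lam = toeplitz([2.0, 0.3] + [0.0] * (nx - 2))  # finite-range intra-chain block
Q = 0.5 * np.eye(nx)                           # collective inter-chain block

for ny in (2, 8, 32, 128):
    Z = build_Z(Lam, Q, ny)
    V = Z @ Z / ny
    freqs = np.sqrt(np.linalg.eigvalsh(V))
    print(f"n_y = {ny:4d}   lowest normal-mode frequency = {freqs.min():.4f}")
# The lowest frequency scales like the smallest eigenvalue of (Lam - Q)
# divided by sqrt(n_y), i.e. the spectrum becomes gapless as n_y -> infinity.
\end{verbatim}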
Since all off-diagonal elements of $Z$ are identical, correlations between oscillators do not depend on their distance in $y$ direction and the total system is critical irrespective of the correlation properties within the chains. Thus one expects that the entropy area law is broken. In fact one can easily find a lower bound to the entropy by the following simple argument: Let us consider a partition of the set of $N$ oscillators into a compact sub-system I with $N_{0}=l_{x}l_{y}$ and a sub-system II with $ N-N_{0}$ oscillators (see Fig.\ref{fig1}). If we now consider harmonic chains in $y-$ rather than in $x-$ direction, the ``$y$''-chains couple to each other with finite-range interaction $\Lambda $ (see Fig.\ref{fig1} b). We thus have reason to assume that $S\ge l_{x}S_{0}$, where $S_{0}$ is the entropy of a single ``$y$''-chain. Since the coupling within the chain is now collective ($Q$), the ``$y$''-chain itself is critical and its entropy scales as $S_{0}\sim \ln l_{y}$. Thus $S\ge l_{x}\ln l_{y}$ which in the thermodynamic limit $\{l_x,l_y\}\to\infty$ is much larger than surface area $ 2(l_x+l_y)$. While it is easy to see that the area law is broken, it is non-trivial to find an upper bound to the entropy and the exact form of the correction term. This will be done in the following. \begin{figure} \caption{(a) Collectively interacting strings of harmonic oscillators with finite-range intra-chain coupling $\Lambda $ and collective inter-chain coupling $Q$. The grey area indicates the sub-system I of oscillators. (b) Alternative view: interacting strings with collective intra-chain coupling $Q $ and finite-range inter-chain coupling $\Lambda $. } \label{fig1} \end{figure} Using the spectral representation of $V$, the correlation matrices $V^{1/2}$ and $V^{-1/2}$ can be decomposed as \begin{equation} V^{1/2}=\left[ \left( \Lambda -Q\right) \otimes \mathbf{1}_{y}+n_{y}Q\otimes \mathcal{P}_{n_y,n_y}\right] /\sqrt{n_y}, \label{correlationSpace} \end{equation} and \begin{eqnarray} &&\quad V^{-1/2} = \Bigl\{( \Lambda -Q) ^{-1}\otimes \mathbf{1}_{y}+ \\ && +\bigl[ ( \Lambda -Q+n_{y}Q) ^{-1}-( \Lambda -Q) ^{-1} \bigr] \otimes \mathcal{P}_{n_y,n_y}\Bigr\}\sqrt{n_y}, \notag \end{eqnarray} where $\mathbf{1}_{y}$ is the unity matrix of size $n_{y}\times n_{y}$ and $ \mathcal{P}_{nm}=|P_{nm}\rangle\langle P_{nm}|$ is the projector onto the (in general non-normalized) vector \begin{equation*} \left\vert P_{nm}\right\rangle=\frac{1}{\sqrt{n}}\underset{m}{\Bigl( \underbrace{1,1,...1}}\Bigr) ^{T}. \end{equation*} Following Refs. \cite {Bombelli-PRD-1986,Srednicki-PRL-1993,Plenio-PRL-2005,Reznik}, the von-Neumann entropy or the entropy of entanglement of the two compact parts I and II can be calculated from a decomposition of $V^{1/2}$ into the two subsystems. To this end we express $V^{1/2}$ and $V^{-1/2}$ in a block form according to the two sub-systems by proper reordering of rows and columns \begin{equation} V^{-1/2}=\left[ \begin{array}{cc} A & B \\ B^{T} & C \end{array} \right] ,\qquad V^{1/2}=\left[ \begin{array}{cc} D & E \\ E^{T} & F \end{array} \right] . \label{BlockForm} \end{equation} Here $A$ and $D$ are $N_{0}\times N_{0}$ matrices describing correlations within sub-system I, $C$ and $F$ are $\left( N-N_{0}\right) \times \left( N-N_{0}\right) $ matrices describing correlations within sub-system II, and the matrices $B$ and $E$ describe the correlations between them. 
The entropy of entanglement is then given by the eigenvalues $\mu _{i}\geq 1$ of the matrix product $A\cdot D$ \cite{Plenio-PRL-2005}: \begin{eqnarray} S &=&{\displaystyle\sum\limits_{i=1}^{N_{0}}f}\left( \sqrt{\mu _{i}}\right) , \label{Entropy} \\ {f}\left( x\right) &=&\frac{x+1}{2}\ln \frac{x+1}{2}-\frac{x-1}{2}\ln \frac{ x-1}{2}. \end{eqnarray} Despite the simplicity of its form, expression (\ref{Entropy}) cannot be explicitly evaluated in general. Due to the special interaction matrix the eigenvalues can however be evaluated in the thermodynamic limit: From the spectral decomposition of $V^{1/2}$, eq.(\ref{correlationSpace}), one easily finds that the subsystem matrices read \begin{align} A& =\left[ A_{0}\otimes \mathbf{1}_{l_{y}}+(A_{1}-A_{0})\otimes \mathcal{P} _{n_{y},l_{y}}\right] \sqrt{n_{y}}, \label{partition} \\ D& =\left[ D_{0}\otimes \mathbf{1}_{l_{y}}+n_{y}D_{1}\otimes \mathcal{P} _{n_{y},l_{y}}\right] /\sqrt{n_{y}}, \notag \end{align} where $A_{0},A_{1}$ and $D_{0},D_{1}$ are $l_{x}\times l_{x}$ principal submatrices of $(\Lambda -Q)^{-1},\left( \Lambda -Q+n_{y}Q\right) ^{-1}$, and $(\Lambda -Q),Q$ respectively. For large $n_{y}$ one has \begin{equation} A\cdot D\approx \left( A_{0}\cdot D_{0}\right) \otimes \mathbf{1} _{l_{y}}+n_{y}\left( A_{0}\cdot D_{1}\right) \otimes \mathcal{P} _{n_{y},l_{y}}. \label{AD} \end{equation} Here we have used that $\mathcal{P}_{n_{y},l_{y}}^{2}=l_{y}/n_{y}\mathcal{P} _{n_{y},l_{y}}$ which scales as $1/n_{y}$ for fixed $l_{y}$ and is thus negligible in the thermodynamic limit. Furthermore $\mathcal{P}_{n_{y},l_{y}} $ has one nonzero eigenvalue ${l_{y}}/{n_{y}}$, which vanishes in the thermodynamic limit ($l_{y}$ fixed and $n_{y}\rightarrow \infty $), and $ \left( l_{y}-1\right) $ zero eigenvalues. Thus the $l_{x}l_{y}$ eigenvalues of $A\cdot D$ can be decomposed into two sets. The first set consists of the $l_{x}$ eigenvalues of $A_{0}\cdot D_{0}$ each of which occurs $(l_{y}-1)$ times: \begin{eqnarray} \mu _{1},\cdots ,\mu _{l_{y}-1} &=&\alpha _{1}\left( A_{0}\cdot D_{0}\right) , \notag \\ \mu _{l_{y}},\cdots ,\mu _{2(l_{y}-1)} &=&\alpha _{2}\left( A_{0}\cdot D_{0}\right) , \label{first} \\ &\vdots & \notag \\ \mu _{(l_{x}-1)(l_{y}-1)+1},\cdots ,\mu _{l_{x}(l_{y}-1)} &=&\alpha _{l_{x}}\left( A_{0}\cdot D_{0}\right) . \notag \end{eqnarray} Here and in the following $\alpha _{k}\left( X\right) $ denotes the $k$th eigenvalues of the matrix $X$. The total number of these eigenvalues is $ l_{x}(l_{y}-1)$. The second set consists of the $l_{x}$ eigenvalues of $ (A_{0}\cdot D_{0}+l_{y}\left( A_{0}\cdot D_{1}\right) )$ \begin{eqnarray} \mu _{k} &=&\alpha _{k}\left( A_{0}\cdot D_{0}+l_{y}\left( A_{0}\cdot D_{1}\right) \right) , \label{uppereigenvalues} \\ &&\qquad \text{for}\quad k=l_{x}(l_{y}-1)+1,...,l_{x}l_{y}. \notag \end{eqnarray} Expression (\ref{uppereigenvalues}) for the second set of eigenvalues can be simplified using Lidskii's theorem \cite{Lidskii} which states: Let $X$ and $ Y$ \ be $M$ -dimensional Hermitian matrices. Moreover let $\alpha _{k}\left( X\right) ,\alpha _{k}\left( Y\right) $ and $\alpha _{k}\left( X-Y\right) $ , $k=1,...,M$ be the eigenvalues of $X,Y$ and $X-Y$ respectively in ascending order $\{\alpha _{1}\left( X\right) \leq \alpha _{2}\left( X\right) \leq ...\leq \alpha _{M}\left( X\right)\} $. Then there exist numbers $w_{kj}\geq 0$, ($k,j=1,...,M$), such that $\sum_{k}w_{kj}=\sum_{j}w_{kj}=1$ and \begin{equation} \alpha _{k}\left( X\right) =\alpha _{k}\left( Y\right) +\sum_{j=1}^{M}w_{kj}\alpha _{j}\left( X-Y\right) . 
\label{Lidskii} \end{equation} Equation (\ref{Lidskii}) implies that for sufficiently large $l_{y}$ the eigenvalues of the matrix $A_{0}\cdot D_{0}+l_{y}\left( A_{0}\cdot D_{1}\right) $ are \begin{equation} \alpha _{k}\left( A_{0}\cdot D_{0}+l_{y}\left( A_{0}\cdot D_{1}\right) \right) \approx l_{y}\sum_{j=1}^{l_{x}}w_{kj}\alpha _{j}\left( A_{0}\cdot D_{1}\right) . \label{largelL} \end{equation} An \textit{upper} bound to the entropy can be found by evaluating the sums over the eigenvalues (\ref{first}) and (\ref{uppereigenvalues}) in eq.(\ref{Entropy}) separately \begin{eqnarray} S &=&S_{1}+S_{2} \\ &=&\sum_{j=1}^{l_{x}(l_{y}-1)}f\left( \sqrt{\mu _{j}}\right) +\sum_{j=l_{x}(l_{y}-1)+1}^{l_{x}l_{y}}f\left( \sqrt{\mu _{j}}\right) . \notag \end{eqnarray} Taking into account eq.(\ref{first}) one recognizes that $S_{1}$ is, apart from a prefactor $(l_{y}-1)$, formally equivalent to the von-Neumann entropy of a linear oscillator chain of length $l_{x}$ with interaction $\tilde{V}=(\Lambda -Q)^{2}$ \begin{equation} S_{1}=(l_{y}-1)\sum_{k=1}^{l_{x}}f\left( \sqrt{\alpha _{k}(A_{0}\cdot D_{0})}\right) . \end{equation} Since $\Lambda -Q$ was assumed to be strictly positive, the interaction $\tilde{V}$ has only nonzero eigenvalues and thus corresponds to a gapped oscillator chain. As shown in \cite{Plenio-PRL-2005},\cite{RUFM} the entropy of such a linear chain saturates in the thermodynamic limit, i.e. it becomes independent of the length $l_{x}$ of the chain. Thus we have in the thermodynamic limit \begin{equation} S_{1}\leq l_{y}c_{1}. \end{equation} To obtain an upper bound to $S_{2}$ we use the inequality ${f}\left( x\right) <1-\ln 2+\ln x$. This yields, with eq.(\ref{largelL}), \begin{equation} S_{2}<l_{x}(1-\ln 2)+\frac{1}{2}\sum_{k=1}^{l_{x}}\ln \left( l_{y}\sum_{j=1}^{l_{x}}w_{kj}\alpha _{j}(A_{0}\cdot D_{1})\right) . \end{equation} To further evaluate the last term we make use of the concavity of the logarithm together with the arithmetic-mean inequality \begin{eqnarray} &&\frac{1}{2}\sum_{k=1}^{l_{x}}\ln \left( l_{y}\sum_{j=1}^{l_{x}}w_{kj}\alpha _{j}(A_{0}\cdot D_{1})\right) \notag \\ &&\quad \leq \frac{l_{x}}{2}\ln \left( \frac{l_{y}}{l_{x}} \sum_{j=1}^{l_{x}}\sum_{k=1}^{l_{x}}w_{kj}\alpha _{j}(A_{0}\cdot D_{1})\right) \\ &&\quad =\frac{l_{x}}{2}\ln \left( \frac{l_{y}}{l_{x}}\sum_{j=1}^{l_{x}} \alpha _{j}(A_{0}\cdot D_{1})\right) , \notag \end{eqnarray} where we have used $\sum_{k}w_{kj}=1$ in the last step. We now have to evaluate the remaining logarithm. For this we make use of the fact that $\Lambda $ and $Q$ are regular (i.e. strictly positive) Toeplitz matrices. Because of this, their elements can be obtained from the non-negative spectral functions $\lambda \left( \theta \right) $ and $q\left( \theta \right) $ \cite{Szegoe}: $\Lambda _{k} =\frac{1}{2\pi }\int\limits_{0}^{2\pi }\lambda \left( \theta \right) \exp \left[ -ik\theta \right] d\theta$ , $Q_{k} =\frac{1}{2\pi }\int\limits_{0}^{2\pi }q\left( \theta \right) \exp \left[ -ik\theta \right] d\theta . $ Since we have assumed above that also $\Lambda -Q$ is strictly positive, the functions $\lambda \left( \theta \right) $, $q\left( \theta \right) $ are strictly positive and $\lambda \left( \theta \right) >q\left( \theta \right) $. In addition, we require that $\left( \lambda \left( \theta \right) -q\left( \theta \right) \right) ^{\pm 1}$ and $q\left( \theta \right) $ have bounded second derivatives.
As a consequence one finds (see \cite{Szegoe}, page 221) \begin{equation} \frac{1}{l_{x}}\left( \sum\limits_{j=1}^{l_{x}}\alpha _{j}\left( A_{0}\cdot D_{1}\right) \right) \approx \frac{1}{2\pi }\int\limits_{0}^{2\pi }\frac{ q\left( \theta \right) }{\lambda \left( \theta \right) -q\left( \theta \right) }d\theta \label{producteoplitz} \end{equation} which is a constant independent of $l_{x}$. Thus the desired upper bound to the entropy for sufficiently large $l_{x},l_{y}$ is \begin{equation} S\leq c_{1}l_{y}+c_{2}l_{x}+\frac{l_{x}}{2}\ln l_{y}, \label{finalupperbound} \end{equation} where $c_{1},c_{2}$ are constants independent of the size of the subsystem. A \textit{lower} bound to the entropy can be found from the inequality $f\left( x\right) \geq \ln x$. This yields, with eq.(\ref{largelL}), \begin{eqnarray} S &\geq &\frac{(l_{y}-1)}{2}{\sum\limits_{k=1}^{l_{x}}\ln }\left[ \alpha _{k}\left( A_{0}\cdot D_{0}\right) \right] \label{lower1} \\ &&+\frac{l_{x}}{2}{\ln }(l_{y})+\frac{1}{2}\sum\limits_{k=1}^{l_{x}}{\ln } \left( \sum_{j=1}^{l_{x}}w_{kj}\alpha _{j}\left( A_{0}\cdot D_{1}\right) \right) . \notag \end{eqnarray} Making use of Jensen's inequality for concave functions, $\ln \left( \sum_{j}t_{j}\alpha _{j}\right) \geq \sum_{j}t_{j}\ln (\alpha _{j})$, and of $\sum_{k}w_{kj}=1$, we find \begin{eqnarray} S &\geq &\frac{(l_{y}-1)}{2}{\sum\limits_{k=1}^{l_{x}}\ln }\left( \alpha _{k}\left( A_{0}\cdot D_{0}\right) \right) \label{lower2} \\ &&+\frac{l_{x}}{2}{\ln }(l_{y})+\frac{1}{2}\sum_{j=1}^{l_{x}}{\ln }\left( \alpha _{j}\left( A_{0}\cdot D_{1}\right) \right) . \notag \end{eqnarray} To evaluate the sums over the logarithms we employ Szeg\"{o}'s theorem \cite{Szegoe} for determinants of Toeplitz matrices $T$. The theorem states that, for sufficiently large $l_{x}$, \begin{equation*} \ln \left( \det \left( T\right) \right) \approx q_{0}l_{x}+\sum\limits_{k=1}^{\infty }k\left\vert q_{k}\right\vert ^{2}, \end{equation*} for a regular spectral function $q\left( \theta \right) $. Here the $q_{k}$ are the Fourier coefficients of $\ln q\left( \theta \right) $. Since moreover \begin{eqnarray} \sum_{j}\ln \left( \alpha _{j}(A_{0}\cdot D_{1})\right) &=&\ln \left( \prod_{j}\alpha _{j}(A_{0}\cdot D_{1})\right) \notag \\ &=&\ln \bigl[ \det (A_{0})\det (D_{1})\bigr] , \end{eqnarray} we eventually find the lower bound \begin{equation} S\geq a_{1}l_{x}+a_{2}l_{y}+\frac{l_{x}}{2}{\ln }(l_{y}). \label{lower} \end{equation} Here $a_{1},a_{2}$ are constants independent of the size of the subsystem and we have ignored an unimportant constant term. By combining the two estimates (\ref{finalupperbound}) and (\ref{lower}) one finds \begin{equation*} c_1\, l_y+c_{2}\, l_{x}+\frac{l_{x}}{2}\ln (l_{y})\, \geq\, S\, \geq \, a_{1}\, l_x +a_{2} \, l_y+\frac{l_{x}}{2}{\ln }(l_{y}). \label{Estimation} \end{equation*} Since both sides of this inequality have the same functional form, $S$ approaches for large $l_x, l_y$ the asymptotic value \begin{equation} S\approx \frac{l_{x}}{2}{\ln }(l_{y}),\qquad l_x, l_y \gg 1. \label{Logcorrections} \end{equation} This is the main result of our paper. It shows that the entropy area law is violated for a set of harmonic chains which by themselves have a gapped spectrum and are non-critical, but become gapless through a collective interaction between the chains. Both the upper and the lower bound to the entropy contain the same logarithmic correction term to the area law. A physical system that can be approximated by the model studied here is an anisotropic ion crystal.
In such a system the Coulomb interaction in the direction of the small lattice constant can, to first approximation, be considered as collective, while the interaction in the orthogonal direction is of finite range. In conclusion, we derived an exact asymptotic expression for the entanglement entropy of a critical system of interacting oscillators in more than one dimension. We found that, similar to one-dimensional systems \cite{RUFM}, the entanglement area law is violated by a logarithmic correction proportional to the surface area in the critical direction. To our knowledge the system of collectively interacting harmonic strings considered here, which is approximately realized e.g. in an anisotropic ion crystal, is the first nontrivial example of a critical two-dimensional system for which the correction to the area law can be calculated explicitly. This work was supported by the European Commission through the Integrated Project FET/QIPC "SCALA". \end{document}
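As an illustrative numerical aside (not part of the paper), the entropy formula (\ref{Entropy}) together with the block decomposition (\ref{BlockForm}) can be evaluated directly for small systems: one computes $V^{\pm 1/2}$, extracts the sub-blocks $A$ and $D$ belonging to a compact block of $l_x\times l_y$ oscillators, and sums $f(\sqrt{\mu_i})$ over the eigenvalues of $A\cdot D$. The Python sketch below does this for toy matrices $\Lambda$ and $Q$ (all parameter values are assumptions made here for illustration); at such small sizes the result only indicates the trend towards $S\approx\frac{l_x}{2}\ln l_y$.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz, sqrtm

def block_entropy(V, nx, lx, ly):
    # Entropy of a compact lx-by-ly block: S = sum_i f(sqrt(mu_i)), where the
    # mu_i are the eigenvalues of A.D and A, D are the sub-blocks of V^{-1/2}
    # and V^{1/2} on the block of oscillators.
    Vh = sqrtm(V).real
    Vmh = np.linalg.inv(Vh)
    idx = [y * nx + x for y in range(ly) for x in range(lx)]
    A = Vmh[np.ix_(idx, idx)]
    D = Vh[np.ix_(idx, idx)]
    mu = np.linalg.eigvals(A @ D).real
    x = np.sqrt(np.clip(mu, 1.0 + 1e-12, None))   # theory guarantees mu_i >= 1
    xp, xm = (x + 1.0) / 2.0, (x - 1.0) / 2.0
    return float(np.sum(xp * np.log(xp) - xm * np.log(xm)))

nx, ny, lx = 6, 40, 4                              # illustrative toy sizes
Lam = toeplitz([2.0, 0.3] + [0.0] * (nx - 2))      # finite-range intra-chain block
Q = 0.5 * np.eye(nx)                               # collective inter-chain block
Z = np.kron(np.eye(ny), Lam - Q) + np.kron(np.ones((ny, ny)), Q)
V = Z @ Z / ny
for ly in (4, 8, 16):
    S = block_entropy(V, nx, lx, ly)
    print(f"l_y = {ly:3d}   S = {S:7.3f}   (l_x/2) ln(l_y) = {lx/2*np.log(ly):7.3f}")
\end{verbatim}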
\begin{document} \author{Simon Barazer} \maketitle \begin{abstract} In this paper I study oriented metric ribbon graphs. I show that it is possible to decompose these graphs in a canonical way by performing surgery along appropriate multi-curves. This result provides a recursion scheme for the volumes of the moduli space of $4$-valent metric ribbon graphs, which can be interpreted as an oriented version of the topological recursion. I give applications to counting dessins d'enfants in a particular case. \end{abstract} \keywords{Metric ribbon graphs, $4$-valent ribbon graphs, geometric recursion, bipartite maps, measurable foliations, dessins d'enfants.} \tableofcontents \section{Introduction :} Ribbon graphs have been studied in many places in the literature; they are also called maps or fatgraphs. They can be seen as cell decompositions of a surface, or as combinatorial models of surfaces with boundaries if we remove an open disc in each face. A metric ribbon graph is a ribbon graph with a positive weight on each edge \cite{mulase1998ribbon},\cite{andersen2020kontsevich}. Metric ribbon graphs provide a cell decomposition of the moduli space and were used in \cite{kontsevich1992intersection} to prove the Witten conjecture. Ribbon graphs also appear naturally in the Feynman diagram expansion of correlators in matrix integral models \cite{eynard2015random}. In another setting they can be used to compute volumes of spaces of abelian and quadratic differentials \cite{delecroix2021masur}. They are objects of interest in this field because the volumes of the moduli space of metric ribbon graphs can be used to understand the topology of the spaces of quadratic and abelian differentials, and to compute statistics for the distribution of cylinders in periodic surfaces.\\ \paragraph{Volumes of the moduli space of metric ribbon graphs :} In this paper I investigate the geometry of metric ribbon graphs, using ideas close to the ones of \cite{andersen2020topological}. I study curves on ribbon graphs in order to decompose them into simple pieces. Results are known in the case of trivalent ribbon graphs: an analogue of the Mirzakhani--McShane formula can be used to prove a topological recursion scheme for the volumes \cite{andersen2020kontsevich}; outside this case very little is known about these volumes. For applications I will mainly focus on the case of $4$-valent graphs with a coherent orientation of the edges of the graph. These graphs are related to orientable foliations with poles, to abelian differentials with simple poles, and to bipartite maps. Each stratum of the moduli space of metric ribbon graphs has a piecewise linear structure with a lattice of integral points, which allows one to define a Lebesgue measure normalised by this lattice. I consider the volumes of the level sets of surfaces with fixed boundary lengths. In the case of oriented ribbon graphs these lengths are not independent. This dependence corresponds to a residue condition: each boundary comes with a sign, and the sum of the lengths weighted by these signs is zero. I therefore split the boundaries into two sets, the positive and the negative ones, and the volumes that I consider are functions of two sets of variables $(L^+,L^-)$ \begin{equation*} Z_{g,n^+,n^-}(L^+|L^-). \end{equation*} In general these volumes are not well defined for all values of $L^+,L^-$ which satisfy the residue condition. But in the case of four-valent graphs I will give the following result.
\begin{thm} \label{conti1} The volumes $Z_{g,n^+,n^-}(L^+|L^-)$ are continuous for all $L^+,L^-$ and are homogeneous piecewise polynomials of degree $4g-3+n^++n^-$. \end{thm} The piecewise polynomiality is inconvenient for direct use of these volumes. \paragraph{Surgeries on the graphs} In order to perform surgeries on our graphs I introduce the notion of admissible curves on a ribbon graph; these form a particular subset of homotopy classes of simple curves along which it is possible to define the twist flow and perform surgeries. Admissible curves are the ones that do not split a vertex, and they are related to quadratic differentials with double poles. On an oriented ribbon graph admissible curves admit an orientation, and this defines a structure of directed stable graph. In other words, there is a sign on each boundary of the surface obtained after cutting along the multi-curve, and the signs are opposite on two boundaries glued along a curve. By studying admissible curves I prove that it is possible to find a multi-curve that separates a given vertex from the rest of the surface, and this choice is in fact canonical. There is a two-form on each stratum of metric ribbon graphs which comes from the intersection pairing in cohomology. When there are vertices of even degree this form can degenerate, and then the space carries only a Poisson structure. The curves that are in the kernel of this form enjoy many interesting properties and are the ones used to decompose the surface. \begin{thm} \label{acyclic_decomp1} Let $R$ be an oriented metric ribbon graph with at least two vertices. For each vertex $v$ of $R$ there exists a unique admissible multi-curve $\Gamma_v^{+}$ such that \begin{itemize} \item The stable graph $\Go_v$ of $\Gamma_v^+$ contains a component $c_0$ which separates $v$ from the rest of the surface. \item All the curves in $\Gamma_v$ are boundaries of $c_0$. \item $c_0$ is glued along its negative boundaries. \end{itemize} \end{thm} From this theorem, in the orientable case, it is possible to find a decomposition of the graph such that each component contains a unique vertex. The main property of these stable graphs is that they are acyclic, and this is the reason for the rigidity of the theorem. One can find numerous decompositions of the surface into minimal surfaces, but very few of them correspond to acyclic stable graphs: acyclic multi-curves are very rigid in some sense. This theorem can also be used to remove vertices of even degree in an unorientable ribbon graph, but I do not give the details here. \paragraph{Recursion for the volumes :} By using this theorem and techniques introduced by M.~Mirzakhani \cite{mirzakhani2007simple}, I can perform surgeries over the moduli space of orientable ribbon graphs, and this gives a recursion for the volumes $Z_{g,n^+,n^-}(L^+|L^-)$. The form of the recursion is in some sense similar to the recursions of \cite{andersen2020kontsevich} and \cite{mirzakhani2007simple}, but one has to keep track of the signs of the boundaries. The recursion corresponds to removing a pair of pants which is glued along negative boundaries to positive boundaries.
In this case the gluings that I consider are the following (see figure \ref{fig_gluings}): \begin{enumerate} \item Removing a pair of pants that contains two positive boundaries $(i,j)$. \item Removing a pair of pants that contains a positive boundary $i$ and a negative boundary $j$. \item Removing a pair of pants with one positive boundary $i$ which is connected to the surface by two negative boundaries and does not separate the surface. \item Removing a pair of pants with one positive boundary $i$ which is connected to the surface by two negative boundaries and separates the surface into two components. \end{enumerate} Then, by applying theorem \ref{acyclic_decomp1}, I obtain the following theorem; here $[\,\cdot\,]_+$ denotes the positive part. This theorem allows one to compute the volumes inductively by a recursion scheme, without enumerating ribbon graphs. It is efficient for low values of $g$ and $n^\pm$, and a better understanding of the structure of the volumes could make this recursion more effective. \begin{thm} \label{rec4} For all values of the boundary lengths the volumes satisfy the recursion \begin{eqnarray*} (2g-2+n^++n^-)Z_{\small{g,n^+,n^-}}(L^+|L^-)&=& \sum_{i}\sum_{j} [L_i^+-L_j^-]~Z_{\small{g,n^+,n^--1}}([L_i^+-L_j^-]_+,L^+_{\{i\}^c}|L^-_{\{j\}^c})\\ &+&\frac{1}{2}\sum_{i\neq j} (L_i^+~+L_j^+)~Z_{\small{g,n^+-1,n^-}}(L_i^+ +L_j^+,L^+_{\{i,j\}^c}|L^-)\\ &+&\frac{1}{2}\sum_{i} \int_0^{L_i^+} Z_{\small{g-1,n^++1,n^-}}(x,L_i^+-x,L^+_{\{i\}^c}|L^-)~x(L_i^+-x)~ dx\\ &+&\frac{1}{2}\sum_{i}\sum_{\underset{I_1^{\pm} \sqcup I_2^{\pm}=I^{\pm}}{g_1+g_2=g}} x_1 x_2 Z_{\small{g_1,n^+_1+1,n^-_1}}(x_1,L^+_{I_1^+}|L^-_{I_1^-})~ Z_{\small{g_2,n^+_2+1,n^-_2}}(x_2,L^+_{I_2^+}|L^-_{I_2^-}) \end{eqnarray*} where we use the notation \begin{equation*} x_l = \sum_{i\in I^{-}_l}L_i^{-}-\sum_{i\in I^{+}_l}L_i^{+}, \end{equation*} and the sets $I^\pm$ correspond to $\{1,...,n^\pm\}$ with the positive boundary $i$ removed. \end{thm} \begin{rem} Passing to the Laplace transform leads to a recursion in terms of partial derivatives, and this seems to behave well. However, the fact that our functions are supported on an affine submanifold makes the computations different from the usual case.\\ \end{rem} \begin{figure} \caption{The different gluings of the recursion} \label{fig_gluings} \end{figure} \paragraph{Special case, cut-and-join equation and Grothendieck dessins d'enfants :} There is one case where the volumes and the recursion are particularly nice: the case of surfaces with only one negative boundary. In this case I can write the volumes as functions $F_{g,n^+}(L)$ of the positive boundaries only. The only gluings that are allowed are the ones of type $(1)$ and $(3)$; moreover, the fact that there is only one negative boundary does not provide enough ``space'' for discontinuities. \begin{thm} \label{rec4special} The functions $F_{g,n}$ are homogeneous polynomials of degree $4g-4+n$ and satisfy the following recursion \begin{eqnarray*} (2g-1+n)F_{g,n}(L)&=& \frac{1}{2}\sum_{i\neq j} (L_i~+L_j)~F_{g,n-1}(L_i +L_j,L_{\{i,j\}^c})\\ &+&\frac{1}{2}\sum_{i} \int_0^{L_i} F_{g-1,n+1}(x,L_i-x,L_{\{i\}^c})~x(L_i-x)~ dx\\ \end{eqnarray*} \end{thm} This recursion leads to a recursion for the coefficients of the polynomials. Since the polynomials are symmetric, the coefficients can be indexed by partitions $\mu=(0^{\mu(0)},1^{\mu(1)},...)$, and I can form a generating function $\phi$ given by \begin{equation*} \phi(s,t)=\sum_\mu \frac{s^{\frac{|\mu|+\#\mu}{2}}\prod_i i^{\mu(i)}t_i^{\mu(i)}}{\prod_i \mu(i)!} c(\mu).
\end{equation*} where $c(\mu)$ are the coefficients of the polynomials. Rewriting the recursion in terms of this series leads to the following equation, which is a cut-and-join equation. \begin{cor} \label{cutandjoin} The series $\phi(s,t)$ satisfies the following equation \begin{equation*} \frac{\partial \phi}{\partial s} = \sum_{i,j}(i+j)t_it_j \frac{\partial \phi}{\partial t_{i+j}} + \sum_{i,j}(i+1)(j+1)t_{i+j-3} \frac{\partial^2 \phi}{\partial t_{i}\partial t_{j}}+ t_0^2 \end{equation*} \end{cor} This equation is not surprising, because there is a bijection between orientable $4$-valent graphs and dessins d'enfants with maximal ramification over one point, simple ramification over another, and arbitrary ramification over the third. The last result therefore gives the cut-and-join equation for these Hurwitz numbers. \paragraph{Perspectives :} \begin{itemize} \item In a future work I plan to investigate the recursion in a more general context, without the restriction on the order of the vertices. The recursion still holds in this case, but it is explicit only for vertices of low order. \item In a future work I will expand the chapter on dessins d'enfants, which is a special case of a more general result. I will give the relation between this recursion and the usual topological recursion in this case. \end{itemize} \paragraph{Acknowledgements :} I am very grateful to my advisor Maxim Kontsevich and to Anton Zorich for their help and support in this personal work. I am also grateful to Elise Goujard and Vincent Delecroix for their interest in my work, their support and the invitations to the IMB in Bordeaux. Finally, I am grateful to the Renewquantum project and to all the participants of TR at Salento for inspiring discussions. \section{Topology of surfaces :} In this first section I give some classical definitions and results about surfaces and curves on them; the main reference is \cite{fathi2012thurston}. I introduce directed surfaces and directed multi-curves, which are important in the rest of this text. I also introduce foliations and quadratic/abelian differentials with poles and study their elementary properties, and I briefly give the relation between multi-arcs and foliations with poles. \subsection{Surfaces and decoration} In this paper I consider oriented topological surfaces of genus $g$ with $n$ boundaries. All the surfaces that I consider are stable, i.e. they have strictly negative Euler characteristic. Sometimes I also consider surfaces with labelled boundaries. If $M$ is a surface with boundaries I can consider three associated surfaces: \begin{itemize} \item the surface $M^{top}$ obtained by gluing a disc on each boundary, \item the surface $M^{\bullet}$ obtained by gluing a pointed disc on each boundary, \item the Schottky double $M^{db}$, which is the surface obtained by doubling $M$ along its boundary. \end{itemize} By abuse of notation I generally denote by $\partial M$ the set $\pi_0(\partial M)$. For a surface with boundaries, $\Mod(M)$ is the mapping class group of isotopy classes of homeomorphisms which act trivially on the boundaries $\partial M$. I do not add marked points on the boundaries, and hence I do not allow twists along boundary curves. \paragraph{Directed surfaces :} \label{orientboundaries} I introduce directed surfaces, which are surfaces with boundaries together with an orientation $\epsilon$ of the boundaries.
By orientation of the boundaries I means the data of a map \begin{equation*} \epsilon : \partial M \longrightarrow \{\pm 1\} \end{equation*} which is non constant, I will use the notation $\Mo$ for a couple $(M,\epsilon)$. The positive boundaries $\partial^+\Mo$ are in some sense the entries and the negative boundaries $\partial^-\Mo$ are the exit of our surface, I will denote $n^{\pm}$ the respective cardinality of these sets, which is non zero by assumption.\\ For directed labelled surfaces I assume that each sets $\partial^{\pm}\Mo$ is labelled from $1$ to $n^{\pm}$, and then a directed labelled surface is characterised by the triple $(g,n^+,n^-)$.\\ In what is next I will have positive weights on the boundaries which correspond to the lengths of the boundaries for a metric ribbon graph, the "flow" through the boundary for measured foliation or the absolute value of the residue for a quadratic differential. As there is entries and exits in the surface I assume than the sum of the entry is equal to the sum of the exits. I will then consider the convex cone \begin{equation} \label{LambdaMo} \Lambda_{\Mo}=\{L\in (\R_+^*)^{\partial M}|\sum_\beta \epsilon(\beta)L_\beta=0\}. \end{equation} When the boundaries are labelled I will denote the corresponding set $\Lambda_{n^{+},n^{-}}$. \begin{rem} \label{orientbord} There is the natural orientation of the boundaries according to the orientation of the surfaces, and $\epsilon$ is an orientation of the boundary curves by assuming than the negative boundaries are oriented in the opposite direction compared to the standard orientation. \end{rem} \paragraph{Decorations :} \label{decorations} We will briefly use some decoration on the surfaces which are partition $\mu=(1^{\mu(1)},2^{\mu(2)},...)$ such that \begin{equation*} |\mu| -2\#\mu= 4g-4+2n \end{equation*} where I use the notations \begin{equation*} |\mu|=\sum_i i\mu(i)~~~~~~\#\mu=\sum_i. \mu(i) \end{equation*} \subsection{Simple curves and stable graphs :} \paragraph{Curves and multi-curves :} \label{curvestopo} Let $M$ is a topological surface an essential simple curves is an isotopy class of simple closed curves which are non contractibles and do not retracts on a boundary \cite{fathi2012thurston}. A primitive multi-curve is a familly of disjoint non isotopic simple closed curves. We can generalise this notion by considering integral multi-curves. An integral multi-curve can be represented formally by a sum \begin{equation*} \Gamma=\sum_\gamma m_\gamma \gamma, \end{equation*} where the sum run over the set of essential curves, the weights are strictly positive integers and two curves in the support of the sum are non intersecting. \paragraph{Surgery along multi-curves and stable graphs } If $\Gamma$ is a multi-curve on $M$, It's always possible to cut $M$ along $\Gamma$. This create obtain a new topological surface $M_\Gamma$ with boundaries, which is well defined modulo isotopies. This procedure is encoded by a stable graphs. Such graph are defined by the following data's \begin{itemize} \item A set of vertices $X_0\G$ \item A family of topological stable surfaces $(\G(c))_{c\in X_0\G}$ with boundaries \item If $X\G= \sqcup_c \pi_0(\partial \G(c))$ then there in an involution \begin{equation*} j: X\G\longrightarrow X\G. \end{equation*} \end{itemize} These data's define a topological surface $M_\G$ by gluing the boundaries of $\sqcup_c \G(c)$ which are identified by the involution. The fixed points of the involution are the boundaries of $M_\G$. 
Let $X_1\G$ the set of orbits of order two under the involution, each elements of $X_1\G$ a curve on $M_\G$ and the union of these curves define a primitive multi-curve on $M_\G$. Reciprocally each multi-curve is associated to a stable graph $\G$ characterised by $X_0\G=\pi_0(M_\Gamma)$ and $\G(c)$ is the connected component $\M_\Gamma(c)$.\\ Two stable $\G_i$ graph are isomorphic if there is an homeomorphism $M_{\G_1}\longrightarrow M_{\G_2}$ that preserve the two decompositions. The set $\st(M)$ of stable graphs marked by $M$ is the then equivalence class $\phi:M\rightarrow M_\G$ modulo isomorphism $\alpha:M_{\G_1}\longrightarrow M_{\G_2}$ such that $\phi_2^{-1}\circ\alpha\circ \phi_1$ is an element of $\Mod(M)$.\\ Each stable graph come with a group of automorphisms $\Aut(\G)$ which are morphisms of graph $\phi:X\G \longrightarrow X\G$ which fix each legs commute with $j$ and such that for each $c\in X_0\G$ the surfaces $\G(c)$ and $\G(\phi(c))$ are homeomorphic.\\ \paragraph{Directed multi-curves :} \label{directedmulticurve} In the category of directed surfaces it's natural to introduce directed multi-curves. Let $(M,\epsilon)$ a directed surface an orientation $\epsilon_\Gamma$ on a multi-curve $\Gamma$ is an orientation on the boundary of $M_\Gamma$ such that \begin{itemize} \item Each component $(M_\Gamma(c),\epsilon_\Gamma(c))$ of $M_\Gamma$ is directed \item if $\beta$ is not a fixed point of $j$ then then $\epsilon_\Gamma(j(\beta))=-\epsilon_\Gamma(\beta)$, \item if $\beta\in \partial M$ then $\epsilon_\Gamma(\beta)=\epsilon(\beta)$. \end{itemize} \begin{rem} Using the remark \ref{orientbord} a directed multi-curve is equivalent to an orientation of each curves by taking the orientation of the positive boundary. \end{rem} \begin{rem} We will see later all the directed multi-curves are not relevant for us and there is possible degenerations.\\ \end{rem} \paragraph{Directed graphs and convex cone :} \label{directed_graphs} To a directed multi-curve we can associate a directed stable graph $\Go$. Directed stable graphs are defined in a similar way than directed multi-curves. Edges and leaves of such graph are oriented from $-$ to $+$. These directions represent flux through the directed surfaces. What I show later is that such graph can be used to decompose oriented foliations and ribbon graph in a canonical way.\\ If $\Go$ is an oriented stable graph then I consider convex cone \begin{equation} \label{conedirected_graph} \Lambda_{\Go} = \{L\in \prod_c \Lambda_{\Go(c)}|L(\beta)=L(j(\beta))\}. \end{equation} If $\Go$ is marked by $\Mo$ there is a map \begin{equation*} L_\partial : \Lambda_{\Go}\longrightarrow \Lambda_{\Mo} \end{equation*} and the image is a convex cone. For each $L$ in the image I will consider $\Lambda_{\Go}(L)$ the level set which is an affine submanifold. The space $ \Lambda_{\Go}$ is the intersection of $\R_+^{X\Go}$ with the subspace $T_{\Go}$ defined by the exact sequence \begin{equation*} 0\longrightarrow T_{\Go} \longrightarrow \R^{X\Go} \longrightarrow \R^{X_1\Go} \longrightarrow 0. \end{equation*} The dimension of $T_{\Go}$ is always equal to $\#X_1\G+\#\partial\G-\#X_0\G+1$ but it can happen that the dimension of $\Lambda_\Go$ is smaller than expected. This occurs when constraints induced by the orientation are incompatibles and force curve to be equal to zeros. This is due to the topology of the directed graph and such graph have no physical meaning I this paper. 
I say that a graph is not degenerate if the dimension of $\Lambda_{\Go}$ is $\#X_1\G+\#\partial\G-\#X_0\G+1$.\\ The spaces $\Lambda_{\Go},\Lambda_{\Go}(L)$ carries natural affine measures normalised by the set of integral points, we will denote them $d\sigma_{\Go}$ and $d\sigma_{\Go}(L)$. The measure $d\sigma_{\Go}(L)$ is the conditional measure of $d\sigma_{\Go}$ with respect to $d\sigma_{\Mo}$.\\ \paragraph{Topology of the graph :} \label{topo_directed_graph} If $\Go$ is a directed graph and $\gamma$ is an edges in $X_1\Go$ then there is a projection \begin{equation*} l_\gamma: \Lambda_{\Go}\longrightarrow \R_+. \end{equation*} According to the topology of the graph several things can happen \begin{itemize} \item The length function is equal to zero on $\Lambda_{\Go}$ and then the graph is degenerate \item The length is non zero but it is constant in $\Lambda_{\Go}(L)$ and then its a function of $L$ \item The length is non zero but it is bounded in $\Lambda_{\Go}(L)$ \item The length is non zero and unbounded in $\Lambda_{\Go}(L)$ and then its a function of $L$ \end{itemize} An absolute directed cycle in a directed graph is closed cycle which contain only edges oriented positively. A relative directed cycle is a directed path that goes from a positive to a negative boundary, absolute cycles and relative cycles define elements of $\Lambda_{\Go}$, and we can say that it's primitive if it can't be written as a sum of two cycles. We have the following proposition. \begin{prop} The extremal points of $\Lambda_{\Go}$ are the primitive cycles. \end{prop} \begin{proof} I don't give details here but I $u$ is an element of the cone and let $\gamma$ an edge in the support of $u$ i.e $u_\gamma>0$ . The fact that the sum of the entries and the exists is vanishing at each vertex of the graph imply that we can find an cycle $c$ that pass through this edge and with support contained in the one of $u$. This is a consequence of an exploration process in the graph. Moreover I can choose $c$ such that there is an edge with $u_{\gamma'}=c_{\gamma'}>0$ and then the support of $u-c$ is strictly contained in the one of $u$. By induction of the cardinal of the support it give the claim. \end{proof} From this proposition I can derive the following corollary. \begin{cor} \label{prop_topo_edges} The following characterisation is true \begin{itemize} \item An edge $\gamma$ is degenerated iff $\gamma$ is not contained in any cycle in $\Go$. \item The edge $\gamma$ is bounded iff $\gamma$ is not contained in any absolute cycle of $\Go$. \item The edge $\gamma$ is constant iff $\gamma$ separe the graph in two connected components \end{itemize} \end{cor} \paragraph{Acyclic graph and constant one :} \label{acyclic_graphs} I will use a lot a particular kind of directed stable graph which are the ones which are acyclic. This means than the graph does not contain any absolute cycle. In this case the orientation of the edges define a partial order relation on the vertices and this fact characterise acyclic graphs. From the result of proposition (\ref{prop_topo_edges})we see that all the acyclic graphs are non degenerate.\\ It will be convenient for use to label the component of an acyclic graph and a natural way to do it is to assume that the enumeration is compatible with the order relation.\\ It's possible to say that a graph is bounded if all the edges are bounded. From proposition \ref{prop_topo_edges} \begin{cor} \label{prop_acyclic_bounded} The graph $\Go$ is bounded iff it's acyclic. 
\end{cor} For an acyclic graph it make sense to compute the intregral of a continuous function over the set $\Lambda_{\Go}(L)$.\\ An other particular kind of directed graph are the one such that all the edges are constant. According to proposition \ref{prop_topo_edges} this means that all the edges separe the graph and then. \begin{cor} \label{prop_tree_constant} Constant graphs are directed trees stable. \end{cor} For such graph $\Go$ the length of an edge factor through $L$ and is the restriction of a linear function \begin{equation*} l_{\Go,\gamma}: \Lambda_{\Mo}\longrightarrow \R \end{equation*} A constant edge separate the graph in two connected components and if $I_1,I_2$ are the boundaries in these components we have \begin{equation*} l_{\Go,\gamma}=\left|\sum_{\beta\in I_l}\epsilon(\beta)l_\beta\right|. \end{equation*} which is linear on the cell. \begin{figure} \caption{Acyclic stable graph} \label{fig_acyclic} \end{figure} \subsection{Foliations and differentials with poles :} \paragraph{Definition :} For $M$ a surface with boundaries measurable foliations on $M$ can be define in several possible ways. Some references on the subject are for instance \cite{alessandrini2016horofunction},\cite{fathi2012thurston},\cite{gupta2016meromorphic}. Let $\MF(M)$ as the space of foliations on $M^{\bullet}$ such that the one form that locally define the foliation is given (locally) by the real part of a quadratic differential with a simple poles at the puncture. I assume that the quadratic differential have no simple poles in this case. These objects are considered up to isotopies and whitehead moves. It's possible to find a circle around each puncture of $M^{\bullet}$ such that the leaves of the foliation are all transverse or all tangent to the circle and cut the surface along these circles. This induce a foliation on $M$ which is "transverse" to the boundaries in some sense. When the foliation is tangent to the circle there is no canonical choice of circles. \\ In what follow I will be mainly interested into orientable foliations. A foliation is locally given by a closed one form but this choice is not canonical there is a sign ambiguity. A foliation is orientable if it have a trivial monodrony and in this case it can be represented globally by a closed one form.\\ The residue of an element $\lambda$ in $\MF(M)$ is the absolute value of the residue of a one form that locally represent $\lambda$ near the poles. In the case of orientable surface there is no sign ambiguities and then residues are reals. Stoke theorem then imply than the sum of the residues is equal to zeros. When none of the residues is vanishing the signs define an orientation of the boundaries of $M$. So it's possible to consider the subspace $\MF(\Mo)$ of oriented foliation on a directed surface $\Mo$. \paragraph{Multi-arcs and foliations :} \label{multiarc_fol} On a surface with boundaries we can consider arcs that rely two boundaries. Here I assume than these arcs are simple and non trivial in the sense that the surface obtained but cutting $M$ along the arc does not contain component's homeomorphic to a disc. A weighted multi-arc is the a sum \begin{equation*} \sum_a m_a a \end{equation*} of arcs which are pairwise non intersecting. I will denote $\MA_\R(M)$ the space of multi-arcs.\\ A foliation define in a natural way a weighted multi-arc by looking the leaf which intersect the boundaries. 
This procedure is the exploration of the surface from the boundaries and it give a map \begin{equation*} \MF(M)\longrightarrow \MA_\R(M)\cap \{0\}. \end{equation*} To construct this map I take a circle that surround each poles of the foliation with non vanishing residues. By choosing the circles close enough I can assume than the foliation intersect transversely these circles. Each circle define a contractible neighbourhood $U_y$ of a pole $y$ and if a leaf enter in such neighbourhood it cannot escape. Then the intersection of the singular leaf of the foliation and the circles is finite a finite set $X_0\lambda$ and each circle $C_y$ is divided in a finite number of intervals. We denote $X\lambda$ the set of intervals. If $x\in C_y$ is a point in one of these intervals, it's possible to consider the half leaf starting to $x$ in the direction opposite to $U_y$. By assumption this leaf does not hit any singularities. By an adaptation of Thurston recurrence lemma \cite{fathi2012thurston} it's then possible to show that such leaf must intersect an other circle $C_{y}$ at a point $T(x)$. The map $T$ is well defined on the union of the intervals and induce a map \begin{equation*} s_1: X\lambda \longrightarrow X\lambda \end{equation*} such that $T$ map $I$ to $s_1(I)$. The map $s_1$ is an involution. A leaf of the foliation that join $I$ and $s_1(I)$ define an arc on the surface. The intersection product of these arcs with the boundary curves is non trivial then the arcs are non trivial. By surgeries it's also possible to see that two arc are necessarily non homotopic. And then the foliation define a multi-arcs Moreover the transverse measure on $\lambda$ induce a measure on each intervals and the total mass give a weight $m_I(\lambda)$ on each interval $I$. The map $T$ preserve these measures and then $m_I(\lambda)$ define weight on the arc $a$ and this give the desired map.\\ A multi-arc define a partial foliation of the surface which can be extended by the Thurston enlargement procedure \cite{fathi2012thurston}. So the map is surjective but it's not injective in general. In some singular case the foliation can have interior leaves which can't be detected by an exploration from the boundaries.\\ We the foliation is orientable the trajectories can be oriented in a way such that they goes from the positive boundaries to the negative boundaries. A directed multi-arc on $\Mo$ is then a multi-arc such that all the arcs join a positive and a negative boundary. Then in this case the restriction induce a map \begin{equation*} \MF(\Mo)\longrightarrow \MA_\rightarrow(\Mo) \end{equation*} I assume that the residues are non vanishing. \\ \paragraph{Foliation with vanishing residues :} A particular subset $\MF_0(M)$ of foliations with poles are the one with zero residues. $\MF_0(M)$ contain a trivial element which is the Jenkin Strebel foliation it is periodic and all the non singular trajectories retract to a pole. This foliation will be denoted $0$, it's non trivial that it's unique up to Whitehead moves. A non trivial foliation contains leaves which are not homotopic to boundary curves. It's then possible to pinch the boundaries and obtain a foliation on the punctured surface $M^{\bullet}$, the foliation have a marked singularity on each puncture of $M^{\bullet}$ which is a conical singularity. 
\begin{prop} \label{prop_MF0} The contraction define a bijection \begin{equation*} \MF_0(M)\backslash \{0\}\longrightarrow \MF(M^{\bullet}) \end{equation*} \end{prop} The bijection is characterised by the fact that the two foliation have the same intersection number on the essential simple closed curves in $M^{\bullet}$. \paragraph{Stratum of abelian differential :} \label{stratum_ab_quad} If $\Mo$ is a directed surface I will consider the Teichmüller space of abelian differential with simple poles $\HT(\Mo)$ and such that the sign of the real part of the residue correspond to the sign of the boundaries of $\Mo$. I will also consider the subspace $\HT_0 (\Mo)$ of abelian differentials with real residues.\\ For a non directed surface I will consider the space $\QT(M)$ of quadratic differentials with double poles on $M^{\bullet}$ and $\QT_0(M)$ the subspace of surfaces with real residues.\\ If $\mu$ is a decoration on $M$ (paragraph \ref{decorations}). Then we will denote $\QT(M,\mu)$ the stratum of quadratic differentials with $\mu(i)$ zeros of order $i-2$ and in a similar way we denote $\HT(\Mo,\mu)$ the abelian differentials with $\mu(i)$ zeros of order $\frac{i-2}{2}$. \paragraph{Pair of transverse foliations :} \label{hubbard_masur} Two measurable foliations $\lambda_1,\lambda_2$ on a surface without boundaries are transverse iff for all essential simple curves they satisfy \begin{equation*} \iota(\lambda_1,\gamma)+\iota(\lambda_2,\gamma)>0. \end{equation*} Quadratic differentials define pair of transverse foliations and the two notions are in fact equivalent due to the Hubard-Masur theorem \cite{hubbard1979quadratic}.\\ Transversality have a straightforward generalisation for foliations with poles but I replace essential by non contractible which means that we include the boundary curves. As before a quadratic differentials with double poles define a pair of transverse foliations with poles. I do not give statement of the converse in general but a doubling argument allow to prove the following result by using the Hubard Masur theorem \begin{prop} \label{prop_hubbard_masur_poles} The subspace of pair of transverse foliations in $\MF(M)\times \MF_0(M)$ is identified with the space $\QT_0(M)$ of quadratic differential with real residues. \end{prop} \section{Metric ribbon graphs and their moduli spaces :} In this section I give classical definitions on ribbon graph and the relation with multi-arcs. I will define orientable ribbon graph and study some of their properties. Metric ribbon graph are the most important object of this text and I will give some classical definition and the relation with weighted multi arcs and foliation with poles. A simple construction that I call zippered rectangle construction allow to construct foliation from metric ribbon graphs and this construction will be used in the next section. Finally I will define the Teichmüller and moduli space of metric ribbon graphs. \subsection{Ribbon graphs} \paragraph{Combinatorial ribbon graph} \label{def_combi_rib} Following M.Kontsevich \cite{kontsevich1992intersection} a combinatorial ribbon graph $R$ is defined by the following way. Let $XR$ a set of oriented edges or half edges and let \begin{itemize} \item $s_1: XR\rightarrow XR$ an involution without fixed point, \item $s_0:XR \rightarrow XR$ a cyclic order which define the vertices of the graph. \end{itemize} The boundary permutation is defined as $s_2= s_1\circ s_0^{-1}$ and these data's satisfy $s_0s_1s_2=id$. 
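As an illustrative aside (this sketch is mine, not the author's), the triple $(XR,s_0,s_1)$ is easy to manipulate on a computer: vertices, edges and boundaries are the cycles of $s_0$, $s_1$ and $s_2$ (as made precise just below), the genus follows from the Euler characteristic, and orientability amounts to a two-colouring of the boundaries with opposite signs across every edge. The example graph in the Python sketch below is an assumption chosen only for illustration.
\begin{verbatim}
# Half-edges are labelled 0,...,2E-1; s0 and s1 are dictionaries encoding the
# permutations, and s2 = s1 o s0^{-1} is the boundary permutation.

def cycles(perm):
    """Decompose a permutation (given as a dict) into its cycles."""
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, e = [], start
        while e not in seen:
            seen.add(e)
            cyc.append(e)
            e = perm[e]
        out.append(tuple(cyc))
    return out

def boundary_perm(s0, s1):
    s0_inv = {v: k for k, v in s0.items()}
    return {e: s1[s0_inv[e]] for e in s0}

def is_orientable(s1, faces):
    """True iff the faces can be 2-coloured with opposite signs across every
    edge, i.e. iff a map eps with eps(s2 e) = eps(e) and eps(s1 e) = -eps(e)
    exists (the orientation condition described below). Assumes connectedness."""
    face_of = {e: i for i, f in enumerate(faces) for e in f}
    sign, stack = {0: +1}, [0]
    while stack:
        f = stack.pop()
        for e in faces[f]:
            g = face_of[s1[e]]
            if g not in sign:
                sign[g] = -sign[f]
                stack.append(g)
            elif sign[g] != -sign[f]:
                return False
    return len(sign) == len(faces)

# Example: one vertex of degree 4 with two loop edges (three boundaries, genus 0).
s0 = {0: 1, 1: 2, 2: 3, 3: 0}   # cyclic order at the single vertex
s1 = {0: 1, 1: 0, 2: 3, 3: 2}   # the two edges {0,1} and {2,3}
faces = cycles(boundary_perm(s0, s1))
V, E, F = len(cycles(s0)), len(cycles(s1)), len(faces)
genus = (2 - (V - E + F)) // 2
print("V, E, F =", V, E, F, " genus =", genus,
      " orientable =", is_orientable(s1, faces))
\end{verbatim}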
I denote $X_iR$ the i-cycles in $R$ i.e $X_0R$ is the set of vertices, $X_1R$ the unoriented edges and $X_2R$ are the faces or boundaries. We have \begin{equation*} X_iR=XR/\langle s_i\rangle. \end{equation*} For all $e\in XR$ I denote $[e]_i\in X_iR$ the projection and $\#[e]_i$ the cardinal of the orbit of $e$ under $s_i$.\\ For each ribbon graph I consider $\mu_R$ the decoration of $M$ that count the number of vertices of $R$ with a given degree. \begin{equation*} \mu_R(i)=\text{number of vertices of degree i} \end{equation*} \paragraph{Orientation of a ribbon graph} \label{orientation_coverRG} Let $R$ a ribbon graph, an orientation on $R$ is defined as a map \begin{equation*} \epsilon : XR\rightarrow \{\pm 1\}, \end{equation*} such that \begin{equation*} \epsilon\circ s_2=\epsilon~~~~~~\epsilon\circ s_1=-\epsilon. \end{equation*} I will say that $R$ is orientable if it admits an orientation and oriented if some orientation is fixed. I can use the notation $R^{\circ}=(R,\epsilon)$ for an oriented ribbon graph.\\ The orientations satisfy the following trivial properties \begin{itemize} \item if $R$ is connected there is at most two orientations because the group generated by $s_1,s_2$ act transitively on the set of oriented edges, \item if $R$ is orientable then $R$ have only vertex of even degree because $\epsilon\circ s_0=-\epsilon$. \item An orientation of the graph is a map from the set of half edges which is constant on the boundaries because we have $\epsilon\circ s_2=\epsilon$. Then it induce a map \begin{equation*} \epsilon : X_2R \longrightarrow \{\pm 1\}. \end{equation*} And then it define a partition of the boundaries $X_2R=X_2^+R\sqcup X_2^-R$ \item For each ribbon graph there is a natural double cover $\tilde{R}$ which is oriented and is ramified over the vertices of odd degree. \end{itemize} \begin{figure} \caption{An oriented ribbon graph} \label{fig_orientation_graph} \end{figure} \paragraph{Automorphisms :} \label{auto_rg} The data's $(XR,s_0,s_1)$ characterise the ribbon graph, two triples are equivalents iff there is a bijection between the sets of oriented edges that preserve the data's. An automorphism of the graph is a bijection $XR\rightarrow XR$ that preserve these data's and fix each boundary. We denote $\Aut(R)$ the group of automorphisms, as it's noticed in \cite{mulase1998ribbon} it can happen than an automorphism act trivially on the set of edges $X_1R$ but not on the half edge. Nevertheless as we consider orientable ribbon graph such automorphism necessarily reverse the orientation of the edges and then permute the boundaries. In this case the action of $\Aut(R)$ on the half edges is necessarily free. \paragraph{Zippered rectangles} \label{zip_rect} In this section I give simple construction called in the rest of the text zippered rectangles. I restrict to the case of orientable ribbon graph is this text but this construction is more general by taking the double cover. A ribbon graph is naturally associated to a surface with boundaries or a surface with a cellular decomposition by gluing rectangles.\\ Let $R$ a combinatorial ribbon graph and consider for all $e\in X^+R$ a rectangle \begin{equation*} R_e=[0,1]\times [-1,1]. 
\end{equation*} It's possible to glue these rectangles by using \begin{equation*} \{1\}\times [0,1]\subset R_e \rightarrow \{0\}\times [0,1] \subset R_{s_2e}~~,~~\{0\}\times [-1,0]\subset R_e \rightarrow \{1\}\times [-1,0] \subset R_{s_1s_2s_1e} \end{equation*} There is conical singularities at the points $(0,0),(1,0)\in R_e$ but they are removable.\\ I then obtain a surface with boundaries $M_R$ with an embedded graph on it's given given by the union of $[0,1]\times \{0\}$. The surface $M_R$ retract on the graph $R$. And there is an identification \begin{equation*} X_2R=\pi_0(\partial M_R), \end{equation*} then the orientation on $R$ induce an orientation of the boundaries of $M_R$.\\ \begin{rem} We can also construct the surface $M_R^{top}$ obtained by capping the boundaries of $M_R$ the graph $R$ induce a cell decomposition of the surface which is a map. The orientability is then equivalent to the fact that this map is bipartite, in the sense that face are labelled by $\pm 1$ and two adjacent faces have opposite signs. \end{rem} \paragraph{Embedded ribbon graphs :} \label{embedding_rib} If $M^{\circ}$ is a surface of type $(g,n^+,n^-)$ and $R$ an oriented ribbon graph, an embedding of $(R,\epsilon)$ is an isotopy class of homeomorphism $M_R \rightarrow M$ which preserve the orientation on the boundaries. We denote $\Rib(M^{\circ})$ the set of embedded ribbon graph. The mapping class group act on the set of embedded ribbon graphs and the quotient is the space $\rib(M)$ of combinatorial ribbon graph with the same topology than $M^{\circ}$ and marked boundaries. An embedded ribbon graph is generic if it contain only vertices of degree four and I will denote $\Rib^*(M^{\circ}),\rib^*(M^{\circ})$ the set of generic ribbon graph.\\ \paragraph{Ribbon graph and filling multi arcs :} \label{ribbon_filling_arc} Let $R$ an embedded ribbon graph on $M$ with no vertices of degree one or two, then each edge $e\in X_1R$ belong to two boundaries and there is a unique arc $e^*$ that rely these two boundaries and intersect only this edge. The union of all the $e^*$ form a multi-arc $A_R$ (figure \ref{figribarc}). When there is vertices of order one or two $A_R$ is still a multi arcs on the surface obtained by removing these points. I make this choice to be consistent with the non triviality of the arcs.\\ All the multi-arcs does not define a ribbon graph. In fact it's possible to cut the surface along multi-arc and the result is a family of surface $M_A$ with boundaries and corner at the boundaries. A multi-arc is filling iff $\iota(A,\gamma)>0$ for all $\gamma$ non contractible which is equivalent to the fact that all the components of $M_A$ are topological polygones with $2i$ sides $(i\ge 3)$. For a filling multi-arc I denote $\mu_A$ the partition such that $\mu_A(i)$ is the number of face with $2i$ sides.\\ The multi-arcs $A_R$ is always filling, the vertices of degree $i$ in $R$ correspond to faces with $2i$ sides in $A_R$. Reciprocally a multi-arc also define an embedded ribbon graph and then there is a bijection between embedded ribbon graph and filling multi-arcs that preserve the profile $\mu_R=\mu_{A_R}$ \\ \begin{figure} \caption{Ribbon graph and the associated multi-arc on a pair of pant.} \label{figribarc} \end{figure} A foliation is filling if it's intersection pairing with any non contractible simple closed curve is non zeros. In other word it's filling if up to Whitehead moves it's equivalent to a foliation without saddle connection. We can define similar notion for multi-arcs. 
\begin{prop} \label{prop_fillingfol_fillingarc} The set of filling foliations is identified with the set of filling weighted multi-arcs. \end{prop} Later I will give an explicit construction of the inverse of \begin{equation*} \lambda\longrightarrow A_\lambda. \end{equation*} \subsection{Metric ribbon graph, weighted arcs and measured foliations :} \paragraph{Metric ribbon graph :} Let $M^{\circ}$ be fixed; an oriented metric ribbon graph marked by $M^{\circ}$ is a pair $S=(R,m)$ where \begin{itemize} \item $R\in \Rib(M)$ is an embedded oriented ribbon graph, \item $m$ is a metric on $R$, which is a map \begin{equation*} m: X_1R \longrightarrow \R_{>0}, \end{equation*} assigning a positive length to each edge of the graph. \end{itemize} I denote by $\Met(R)=\R_{>0}^{X_1R}$ the set of metrics on $R$ and I use the notation $T_R=\R^{X_1R}$ for the tangent space of $\Met(R)$. When $R^{\circ}$ is oriented this gives a canonical identification \begin{equation*} T_R=H^1(M,X_0R,\R). \end{equation*} When the graph is four-valent the following formula holds (for a four-valent graph $2\#X_1R=4\#X_0R$, so the Euler characteristic $\#X_0R-\#X_1R+n=2-2g$ gives $\#X_1R=4g-4+2n$): \begin{equation*} \overline{d}_R:= \dim \Met(R)= 4g-4+2n. \end{equation*} \paragraph{Weighted multi-arcs} \label{weighted_multi_arcs} Metric ribbon graphs can be seen in a different way. In paragraph (\ref{ribbon_filling_arc}) I showed that we can associate to each ribbon graph $R$ a filling multi-arc $A_R$, and in this way obtain a bijection between filling multi-arcs and embedded ribbon graphs. In the same way, to a metric ribbon graph $S=(R,m)$ it is possible to associate the weighted multi-arc \begin{equation*} A_S=\underset{e\in X_1R}{\sum}m_e(S) e^*, \end{equation*} and then metric ribbon graphs are weighted filling multi-arcs. The two definitions have advantages in different situations. \paragraph{Lengths of the boundaries and orientability} \label{orient_boundaries} For all $\beta\in \pi_0(\partial M)$ and $S=(R,m)$ a metric ribbon graph on $M$, it is possible to define the length of $\beta$ as \begin{equation*} l_\beta(S)= \sum_{e \in XR,~[e]_2=\beta} m_{[e]_1}(S). \end{equation*} This defines a linear function \begin{equation*} l_\beta: \Met(R)\longrightarrow \R_{>0}. \end{equation*} In the language of multi-arcs the length of a boundary is just the pairing between the multi-arc and a curve homotopic to the boundary.\\ The fact that the dual of an oriented ribbon graph is bipartite implies the following condition on the boundary lengths, \begin{equation*} \sum_\beta \epsilon(\beta) l_\beta= 0, \end{equation*} for all metrics on $R$. Then the image of the map $L_\partial$ lies in the affine subspace $\Lambda_{M^{\circ}}$ (equation \ref{LambdaMo}).\\ The derivative $dl_\beta$ of $l_\beta$ is an element of $T_R^*$. I denote by $H_R$ the subspace generated by the linear forms $dl_\beta$ and by $K_R$ the annihilator of $H_R$, i.e. the subspace of $T_R$ defined by \begin{equation*} 0\longrightarrow K_R \longrightarrow T_R\longrightarrow H_R^* \longrightarrow 0. \end{equation*} The following proposition characterises orientability and, despite its simplicity, is very useful in practice. \begin{prop} \label{prop_dimorientability} Let $R$ be a ribbon graph, not necessarily orientable, with $n$ boundaries. Then the dimension of $H_R$ is \begin{itemize} \item $n$ if $R$ is not orientable, \item $n-1$ if $R$ is orientable, and the only relation is the one given by the orientation, \begin{equation*} \sum_\beta \epsilon(\beta)dl_\beta=0.
\end{equation*} \end{itemize} \end{prop} In the case of a four-valent oriented ribbon graph we then have \begin{equation*} d_R=\dim K_R= 4g-3+n. \end{equation*} \begin{proof} Let $\epsilon:X_2R\rightarrow \R$ be such that \begin{equation*} \sum_x \epsilon_x dl_x^{comb}=0. \end{equation*} We can consider a lift $\epsilon:XR\rightarrow \R$; we have $\epsilon(s_2e)=\epsilon(e)$ and \begin{equation*} \sum_{[e]_1\in X_1R}(\epsilon(e)+\epsilon(s_1e))dm_{[e]_1}=0. \end{equation*} Hence $\epsilon(e)=-\epsilon(s_1e)$, and up to a multiplicative constant the only relations are given by the orientations. If the graph is connected there is at most one orientation up to a sign, so the tangent map is either surjective, if the graph is not orientable, or has image of codimension one, if it is orientable.\\ \end{proof} \paragraph{Metric ribbon graphs and orientable foliations :} \label{foliation_ribbongraph} A metric ribbon graph defines in a natural way a foliation with poles, which is the real part of a quadratic differential. If $S=(R,m)$ is a metric ribbon graph then the Jenkins-Strebel differential $q_S(0)$ is defined on $R_e^{\bullet}$ by $m_e^2(dz)^2$. These forms can be glued and define a quadratic differential with double poles and real residues. The real part of this quadratic differential defines a foliation $\lambda_S$ which is locally given by $m_e|dx|$. This construction induces a map \begin{equation*} \text{Met}(R)\longrightarrow \MF(M) \end{equation*} and, using paragraph \ref{ribbon_filling_arc}, it gives a map \begin{equation*} \MA_\R^0(M)\longrightarrow \MF(M); \end{equation*} this map is the inverse of the map constructed in section \ref{multiarc_fol}. When the graph is oriented the foliation $\lambda_S$ is naturally oriented and it is the real part of an abelian differential $\alpha_S(0)$. Then the following proposition holds. \begin{prop} \label{prop_fillingfol_fillingarc} The set of filling foliations $\MF^0(M)$ is identified with the set of filling weighted multi-arcs $\MA_\R^0(M)$.\\ The set of filling oriented foliations $\MF^0(\Mo)$ is identified with $\MA_\R^0(\Mo)$. \end{prop} \subsection{Moduli space and volumes :} \paragraph{Construction of the moduli space :} The Teichmüller space $\Tc(M^{\circ})$ of oriented metric ribbon graphs on $M^{\circ}$ is the space of all embedded oriented metric ribbon graphs in $M^{\circ}$. It is the union of the disjoint cells \begin{equation*} \Tc(\Mo)=\underset{R^{\circ}\in \Rib(\Mo)}{\bigsqcup}\Met(R^{\circ}). \end{equation*} A ribbon graph $R$ can degenerate to another one $R'$ by contracting a set of edges which contains no loop; this induces a map \begin{equation*} \Met(R')\longrightarrow \Met(R) \end{equation*} and defines a structure of linear cell complex on $\Tc(M^{\circ})$. The complex $\Tc(M^{\circ})$ is naturally embedded into the larger complex $\Tc(M)$ of all metric ribbon graphs by forgetting the orientation. The degeneration of ribbon graphs preserves the orientation, so $\Tc(M^{\circ})$ is a closed subcomplex of codimension $2g-2+n$. The space $\Tc(M)$ can be used to obtain a cell decomposition of the decorated Teichmüller space $\T_{g,n}\times (\R_+)^n$ \cite{kontsevich1992intersection}. The top cells of $\Tc(M^{\circ})$ form the space of four-valent ribbon graphs, which will be denoted $\Tcs(M^{\circ})$.\\ There is a natural action of the mapping class group on $\Tc(M^{\circ})$ and the moduli space is the quotient $\Mc(M^{\circ})$.
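As a quick consistency check of the dimension statements above (this computation is not in the original text, it only repeats the Euler characteristic count used for $\overline{d}_R$): a top cell of $\Tc(M)$ corresponds to a trivalent graph, for which $3\#X_0R=2\#X_1R$ and $\#X_0R-\#X_1R+n=2-2g$, hence \begin{equation*} \dim \Tc(M)=\#X_1R=6g-6+3n, \end{equation*} while a top cell of $\Tc(M^{\circ})$ is four-valent with $\dim\Met(R)=4g-4+2n$, and the difference \begin{equation*} (6g-6+3n)-(4g-4+2n)=2g-2+n \end{equation*} is indeed the codimension stated above.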
The space $\M^{comb,*}(M^{\circ})$ is then the disjoint union of the quotients $\Aut(R^{\circ})\backslash \Met(R^{\circ})$ where $R^\circ$ run over the elements in $\rib(\Mo)$.\\ The lengths of boundaries are defined on the Teichmüller space and induce a map \begin{equation*} L_{\partial}: \Mc(M^{\circ}) \longrightarrow \Lambda_{\Mo} \end{equation*} and I will consider the level set $\Mc(M^{\circ},L)\subset \Mc(M^{\circ})$ which is a cell complex and locally an affine submanifold of codimension $n-1$. \paragraph{Combinatorial Teichmüller and foliations :} From the result of the last section there is injective map \begin{equation*} \T^{comb}(M)\longrightarrow \MA_\R(M)\longrightarrow \MF(M) \end{equation*} the image of the first is the set of filling multi-arcs and then \begin{equation*} \T^{comb}(M)= \MA_\R^0(M) \end{equation*} the space $\MA_\R(M)$ is a cell complex which is the closure of $ \T^{comb}(M)$. In a similar way \begin{equation*} \T^{comb}(\Mo)\longrightarrow \MA_\R(\Mo)\longrightarrow \MF(\Mo) \end{equation*} and \begin{equation*} \T^{comb}(\Mo)= \MA_\R^0(\Mo). \end{equation*} In both cases the spaces $\T^{comb,*}(M),\T^{comb,*}(\Mo)$ are identified with the subset $\MF^{comb,*}(M),\MF^{comb,*}(\Mo)$ of foliations with no saddle connection. \paragraph{Measures on the Teichmüller space of metric ribbon graphs :} \label{measurecombteich} The Teichmüller space of metric ribbon graphs posses a set of integral points $\T^{comb}_\Z(\Mo)\subset \T^{comb}(\Mo)$ of metric ribbon graphs such that the lengths of each edge is a positive integer. In each cell I have \begin{equation*} \Met_\Z(R)=\Met(R)\cap \Z^{X_1R}. \end{equation*} The measure $d\nu_{\Mo}$ is the affine measure normalised by the set of integral points. It's defined on each to cells of $\T^{comb}(\Mo)$ by \begin{equation*} d\nu_{\Mo}= \left| \bigwedge_{e\in X_1R} dm_e \right|. \end{equation*} By definition the set $\Tcs(\Mo)^c$ is negligible for this measure. For now there is no orientation on the cells and then $d\nu_{\Mo}$ is not defined by a volume form.\\ The measure $d\nu_{\Mo}$ is the counting measure for $\T^{comb}_\Z(\Mo)$, for all open set $U\subset \T^{comb}(\Mo)$ with a negligible boundary we have \begin{equation*} \nu_{\Mo}(U)=\lim_{t\rightarrow +\infty}\frac{\#t\cdot U\cap \T^{comb}_\Z(\Mo)}{t^{\overline{d}_{\Mo}}}. \end{equation*} For each $L$ there is a natural affine measure $d\nu_{\Mo}(L)$ on $\T^{comb}(\Mo,L)$. For all $S=(R,m)\in \Tcs(\Mo,L)$ the exponential map for the affine structure \begin{equation*} \exp_S : U \subset K_R \longrightarrow \Met(R) \end{equation*} In our case the exponential is just $(R,m)\rightarrow (R,m+u)$. The measure $d\nu_{\Mo}(L)$ on $\exp_S(U)$ is the Lebesgue measure normalised by the lattice of integral points $K_R(\Z)$ in $K_R$. If we choose an other base point for the exponential then the change of coordinates are translation and preserve the volume form so the measure $\nu_{\Mo}(L)$ is well defined.\\ \paragraph{Volumes of the moduli space :} For each $\Mo$ the moduli space $\Mc(\Mo)$ is also equipped by the affine measure $\nu_{\Mo}$. The action of the mapping class group act linearly and preserve the set of integral points and then the measure. This space have an infinite volume, the measure $dZ_{\Mo}$ on $\Lambda_{\Mo}$ is defined as the pushforward of $d\nu_{\Mo}$ under the map $L_{\partial}$ \begin{equation*} dZ_{\Mo}= L_{\partial M~*}~d\nu_{\Mo}. 
\end{equation*} This measure is characterised by the relation \begin{equation*} \int_{\Lambda_{\Mo}} f(L)~dZ_{\Mo} = \int_{\Mc(\Mo)} f( L_{\partial}(S))~d\nu_{\Mo}. \end{equation*} For each $L\in \Lambda_{\Mo}$ we can also consider the level set $\M^{comb,*}(\Mo,L)$ is equipped by it's affine measure $d\nu_{\Mo}(L)$. Again the action of the mapping class group is linear and preserve the lattice of integral points \begin{equation*} g: K_R(\Z) \longrightarrow K_{g\cdot R}(\Z). \end{equation*} We denote $Z_{\Mo}(L)$ the volume of this affine submanifold \begin{equation*} Z_{\Mo}(L) =\int_{\Mc(\Mo,L)}d\nu_{\Mo}(L) \end{equation*} The volume are naturally related to the measure $dZ_{\Mo}$ the measure can be decomposed as \begin{equation*} d\nu_{\Mo} =d\nu_{\Mo}(L)\times d\sigma_{\Mo} \end{equation*} and then we have the following lemma \begin{lem} The volumes $dZ_{\Mo}$ are absolutely continuous with respect to $d\sigma_{\Mo}$ and on $\Lambda_{\Mo}$ we have the relation \begin{equation*} \frac{dZ_{\Mo}}{d\sigma_{\Mo}}=Z_{\Mo}(L) \end{equation*} \end{lem} \section{Curves on orientable ribbon graphs :} In this section I study curves on ribbon graph, on a metric ribbon graph a curve have a unique combinatorial representation and it's possible to define it's combinatorial length. I will explain how to perform surgery along a curve on a ribbon graph. I will also introduce admissible curves and give several definitions. These curves will be used in the rest of the text and are the one which seems relevant when the graph have vertices of degree higher than three. I will give coordinate for admissible curves and foliations which use zippered rectangle construction. \subsection{Curves on ribbon graph :} \paragraph{Combinatorial representation :} Let $\gamma$ a curve on $M$ and $R$ an embedded ribbon graph. As the surface retract on the graph we can see that $\gamma$ define a "curve" on $R$ (figure \ref{curve1}). We can associate to an oriented curve a combinatorial model which is a sequence of oriented edges $e_0,...,e_r$, such that $e_0=e_r$ and $[e_{i+1}]_0=[s_1e_i]_0$ $\forall i$. Such curves is defined modulo the action of $\Z$ by shifting the sequence, but there is still several representations of an isotopy class of curves . The minimal representation is the one which minimise the number of edges in the sequence up to homotopy. To construct this minimum it's possible to process by induction or use the universal cover $R^\infty$ of the ribbon graph $R$ on $M$. The graph $R^\infty$ is an infinite tree. A representation of the curve lift to an infinite path $(e_i^\infty)$ on $R^\infty$. As the graph is a tree this path is represented by a unique minimal path that traverse each edge at most one time. The projection of this path is homotopic to the original path and minimise the combinatorial length.\\ Using this construction we have a bijection between homotopy class of curves on surfaces and minimal unoriented combinatorial curves on the ribbon graph. \begin{figure} \caption{Curve on a metric ribbon graph} \label{curve1} \end{figure} \paragraph{Length of a curve on a metric ribbon graph :} Let $S=(R,m)$ a metric ribbon graph then for all curve $\gamma$ we can define it's length by the following way. 
It suffices to fix an orientation and a minimal representation $(e_0,...,e_{r+1})$; the length is then defined in a straightforward way by \begin{equation*} l_\gamma(S)=\sum_{i=0}^{r}m_{[e_i]_1}(S). \end{equation*} The combinatorial length is thus the minimum of the lengths of the combinatorial paths homotopic to $\gamma$, computed with the metric. If $A_S$ is the multi-arc associated to $S$ then the length of a curve $\gamma$ is also given by the intersection pairing. Each edge $e$ of the graph is associated to an arc $e^*$ that joins the two boundaries $[e]_2,[s_1e]_2$; it is then possible to consider the Thurston intersection pairing of a curve with $e^*$ and we set \begin{equation} \label{y_coord} y_e(\gamma)=\iota(\gamma,e^*) . \end{equation} With these notations the length of $\gamma$ is given by the following formula \begin{equation*} l_\gamma(S)=\iota(A_S,\gamma)=\sum_{e\in X_1R} m_e(S)y_e(\gamma). \end{equation*} The proof is direct, because the minimal combinatorial representation of the path minimises the number of edges in the path and hence the number of intersections with $A_S$. Another way to see this is to look at the universal cover of the graph: the number of edges crossed by the minimal path is exactly the intersection number with $A_S$. \paragraph{Surgery along a curve :} Let $R$ be a ribbon graph on $M$ and $\Gamma$ a curve. We can define the ribbon graph $R_\Gamma$ obtained by cutting $R$ along $\Gamma$ (see figure (\ref{figribcut})). It is not easy to define it directly in terms of ribbon graphs, but it is straightforward for multi-arcs.\\ If $A$ is a multi-arc and $\Gamma$ a multi-curve then, up to isotopies, I can assume that they are in minimal position, which means that they minimise their number of intersections. It is then possible to cut the surface along the curve, and the result is a family of arcs on each connected component of $M_\Gamma$. There are possibly pairs of homotopic arcs, but I identify them, and this defines a multi-arc $A_\Gamma$ on $M_\Gamma$. If $A$ is filling then, using paragraph \ref{ribbon_filling_arc}, $A_\Gamma$ is filling too.\\ If $A_R$ is the multi-arc associated to $R$, the multi-arc $(A_R)_{\Gamma}$ is filling and hence is associated to a ribbon graph $R_\Gamma$ on $M_\Gamma$ such that $A_{R_\Gamma}=(A_R)_\Gamma$. The multi-arc $A_{R_\Gamma}$ is characterized by the condition \begin{equation*} \iota(A_\Gamma,\gamma)= \iota(A,\gamma) \end{equation*} for all $\gamma$ with $\iota(\gamma,\Gamma)=0$. If $R$ has a metric $m$ it induces weights on $A_R$, and then it is possible to obtain weights on $(A_R)_\Gamma$ by summing the weights of the homotopic arcs in $A\cap M_\Gamma$. This induces a linear map \begin{equation*} \cut_\Gamma:\Met(R)\longrightarrow \Met(R_\Gamma), \end{equation*} and this map is continuous at the intersection of two cells. I will also use the notation $S_\Gamma$ for $\cut_\Gamma(S)$ to be consistent with the other notations. The metric ribbon graph $S_\Gamma$ is characterised by \begin{equation*} l_\gamma(S_\Gamma)= l_\gamma(S) \end{equation*} for all $\gamma\in \Si(M_\Gamma,\partial M_\Gamma)$. \begin{figure} \caption{Ribbon graph cut along the curve of figure (\ref{curve1}).} \label{figribcut} \end{figure} \paragraph{Multi-curves on an oriented surface} \label{oriented_multicurve} In the case of an oriented ribbon graph it is natural to consider orientable multi-curves.
A curve is orientable if the orientation $\epsilon$ is constant along it's minimal representation and in this case It is possible to orient the curve such that $\epsilon=1$ when we travel along the curves (see figure \ref{}). The following lemma assert that the orientability is stable under surgeries along orientable curves and conversely if we glue oriented graph by identifying positive and negative boundaries the result is still oriented. \begin{lem} \label{orientablecurves} Let $\Gamma$ a multi-curve with stable graph $\G$ \begin{enumerate} \item If $R^{\circ}$ is an oriented ribbon graph such that $\Gamma$ is orientable then the graph $R_{\Gamma}$ posses a natural orientation $\epsilon_{\Gamma,R}$ which induce an orientation on $\G$. \item If $R$ is a metric ribbon graph such that $R_\Gamma$ is oriented and this orientation induce an orientation on $\G$. Then the surface $R$ is also oriented an this orientation is compatible with the orientation on $\Gamma$. \end{enumerate} \end{lem} \subsection{Admissible curves :} \label{admissiblecurves} \begin{figure} \caption{Admissible curve on a "lattice"} \label{figure_admiss_latt} \end{figure} \paragraph{Definition :} \label{definition} When the graph $R$ is not generic it may happen that a curve split a vertex into several vertices of lower degrees (see figure \ref{non_admin}). For each graph there is a partition $\mu_R=(\mu_R(i))_i$ such that $\mu_R(i)$ is a number of vertices of degree $i$. For all multi-curve there is a partition $\mu_{R_\Gamma(c)}$ for each component of $R_\Gamma$ obtained after cutting along the curve. The curve $\Gamma$ is then naturally decorated by $R$. Let $\mu_{R,\Gamma}$ the decoration on $M$ obtained by summing the decorations of each connected components of $M_\Gamma$. Then we have $\#\mu_{R,\Gamma}\ge \#\mu$ and the curve does not split any singularity iff the two decorations coincide. Then we put the following definition. \begin{Def} \label{def_admissible} A multi-curve $\Gamma$ is admissible on $R$ iff it does not split any vertices of the graph. Which is equivalent to $\mu_{R,\Gamma}=\mu$ or even $\#\mu_{R,\Gamma}=\#\mu_R$. \end{Def} Admissible curves on a metric ribbon graphs are intimately related to quadratic differential with poles and prescribed singularities. For a quadratic differential with poles it's possible to consider the decoration $\mu_q$ which count the zeros of order $i-2$. Then I give the following generalisation of admissibility for foliations. \begin{Def} \label{defadmissible2} A foliation $\lambda\in \MF(M)$ is admissible on $S$ if there is $q_S(\lambda)$ a quadratic differential with double poles such that \begin{equation*} \Re (q_S(\lambda))=S~~~~~\Im (q_\lambda(S)) =\lambda. \end{equation*} moreover I assume that it satisfy $\mu_q=\mu_S$. \end{Def} This definition coincide with the first one for multi-curves. \begin{lem} \label{integralptsZMF(R)} The integral foliations are the admissible multi-curves \begin{equation*} \MS_\Z(R^\circ)=\MF_0(R^\circ)\cap \MS_\Z(M). \end{equation*} \end{lem} \begin{proof} Let $\Gamma=\sum_\gamma m_\gamma \gamma \in \MS_\Z(M)$ and $S$ then the quadratic differential can be obtained in the following way. I consider the surface $S_\Gamma$, on this surface there is a Jenkin-Strebel $q_{S_\Gamma}(0)$ differential on each of its connected component. The trajectories of these differentials are periodic and surround a pole. 
It's possible to glue an horizontal cylinders of height $m_\gamma$ to the two boundaries $\gamma^1,\gamma^2$ associated to $\gamma\in \Gamma$. The result is Jenkin-Strebel differential $q$ such that \begin{equation*} \Re (q)=S~~~~~\Im (q) =\Gamma. \end{equation*} by uniqueness (proposition \ref{prop_hubbard_masur_poles}) $q=q_\Gamma(S)$ and by construction the profile of the quadratic differential is $\mu_{S,\Gamma}$. Then the curve is admissible iff $q_\Gamma(S)$ have $\mu=\mu_{S,\Gamma}=\mu_q$. \\ \end{proof} \begin{figure} \caption{A non admissible curve} \label{non_admin} \end{figure} \paragraph{Combinatorial representation of an admissible curve :} I present a way to describe admissible curves by their combinatorial representation. Let $R$ a ribbon graph, I introduce the two permutations \begin{equation*} s_2^+=s_2~~~~~~~~s_2^-=s_1s_2^{-1}s_1. \end{equation*} A admissible curve cannot turn around a vertex in some sense they can only turn around boundaries. \begin{lem} \label{lem_combi_admissible} A curve is admissible on $R$ iff it's simple and admit a representation of the form $e_0,...,e_r$ with \begin{equation*} e_{i+1}= s_2^{\pm} e_i. \end{equation*} and $\epsilon(e_0)=1$ \end{lem} And such representation is minimal and then necessarily unique up to a cyclic permutation and reversing the order. Then an admissible curve can be encoded by a starting edge and a finite word $\{\pm 1\}$.\\ \paragraph{Admissible multi-curves on an oriented ribbon graph :} If $R^{\circ}$ is oriented we have the following lemma which give the relation between admissible curve and orientable curves (see figure \ref{fig_orient_curve}). \begin{lem} Let $R^{\circ}$ an oriented ribbon graph then all the admissible curves are orientable. \begin{itemize} \item all the admissible curves are orientable, \item Admissible curves and orientable curves coincide iff the graph have only vertices of degree two or four. \end{itemize} \end{lem} \begin{proof} This is a consequence of the representation (lemma \ref{lem_combi_admissible}) because the two permutations $s^\pm$ preserve the orientation. \end{proof} \begin{proof} To prove the second part we need proposition \ref{prop_coord_x}. If we have a vertex $v$ of degree $2k>4$ it's possible to add an edge $e_0$ which split this vertex in two vertex of degree $2k_i\ge 4$. Let $R'$ this new ribbon graph, it's oriented and their is a map \begin{equation*} K_R \longrightarrow K_{R'} \end{equation*} By computing the dimension $\dim K_{R'}=\dim K_R+1$ and from proposition \ref{prop_coord_x} any curve in $K_{R'}(\Z)\backslash K_{R}(\Z)$ will be orientable and split the vertex $v$ on $R$. If there is only vertices of degree $4$ it's easy to see than an orientable curve is admissible. \end{proof} \begin{rem} In the orientable case the fact that the curve are canonically oriented have the following consequence. An admissible curve turn around the positive poles in the direct direction and it turn around the negatives poles in the indirect direction. Then an admissible curve orbit around a boundary during a time before leave it to an other boundary according to figure (\ref{figure_admiss_latt}). 
But admissibility is less restrictive than the zigzag condition of \cite{goncharov2021spectral}. \end{rem} A corollary of this result is the following. \begin{cor} A foliation $\lambda$ is orientable on $S^{\circ}$ iff there is an abelian differential $\alpha$ such that \begin{equation*} \Re \alpha = S^{\circ}~~~~~~~\Im \alpha= \lambda \end{equation*} and $\alpha$ has $\mu_S(i)$ zeros of order $\frac{i-2}{2}$. \end{cor} For the converse result we need proposition \ref{prop_coord_x}. \begin{figure} \caption{Admissible curves on an oriented graph are oriented} \label{fig_orient_curve} \end{figure} \subsection{Zippered rectangles and coordinates for admissible foliations :} \paragraph{Coordinates for $\MF(R)$ and $\MF_0(R)$} \label{coord_z} For all $R$ I denote by $\Q(R)$ the space of quadratic differentials marked by $M$ such that $\Re(q)\in \Met(R)$ and $\mu_q=\mu_R$; in other words it corresponds to the space of admissible foliations on a surface in $\Met(R)$. I also consider the subspace $\Q_0(R)$ of differentials with real residues. We also consider $\Hc_0(R)=\Hc(R)\cap \Hc_0\T(M)$, the abelian differentials with real residues.\\ \begin{prop} \label{prop_coord_x} For all $S$ the spaces $\MF(S),\MF_0(S)$ depend only on the ribbon graph and there are homeomorphisms which preserve the integral structure \begin{equation*} \Q(R)\longrightarrow \Met(R)\times T_R~~~~~~\Q_0(R)\longrightarrow \Met(R)\times K_R. \end{equation*} In particular there are homeomorphisms \begin{equation*} \MF(R)\simeq T_R ~~~~~~~\MF_0(R)\simeq K_R. \end{equation*} \end{prop} Assume that $R^{\circ}$ is oriented on $\Mo$; I consider in a similar way the spaces of abelian differentials $\Hc(R^{\circ}),\Hc_0(R^{\circ})$. As a corollary I obtain the following fact. \begin{cor} All the admissible foliations on $R$ are orientable, \begin{equation*} \MF(R)\subset\MF(\Mo), \end{equation*} the quadratic differential $q_S(x)$ is the square of an abelian differential $\alpha_S(x)$, and under the identifications \begin{equation*} K_R=H^1(M_R^{top},X_0R,\R)~~~~T_R=H^1(M_R,X_0R,\R) \end{equation*} the map corresponds to the period coordinates. \end{cor} Another important corollary is the following. \begin{cor} \label{coord_admissible_curves} The set of integral multi-curves $\MF_\Z(R)$ is identified with the lattice of integral points $K_R(\Z)$ under the period coordinates $x$. \end{cor} \begin{rem} In the case of an unorientable ribbon graph it is possible to construct a canonical double cover $\tilde{R}$ of $R$ which is orientable. The tangent space is naturally identified with the space \begin{equation*} H^1(M_{\tilde{R}},X_0\tilde{R},\R)^- \end{equation*} of one-forms anti-invariant under the Galois involution. In this case the map $x$ corresponds to the period map with values in this space. \end{rem} \begin{figure} \caption{The coordinates $x,y,z$} \label{coordfig} \end{figure} I now prove proposition \ref{prop_coord_x}. \begin{proof} I first construct the map, which is the period map of the imaginary foliation along the edges of the embedded graph $R$, \begin{equation*} \MF(S)\longrightarrow T_R; \end{equation*} we use the zippered rectangle construction \ref{zip_rect}, which is a decomposition of foliations with poles. For all $q$ and all $e\in XR$, there is a maximal embedded infinite rectangle $R^{\bullet}_e\rightarrow M^\bullet$ such that $\Re q$ is locally given by $|dx|$ on $R_e^{\bullet}$. Moreover I assume that the orientation of $[0,1]$ corresponds to the orientation of $e$.
$q$ have no singularities on the interior of $R_e^{\bullet}$ but the maximality imply that there is at least one singularity on each boundaries. As $q$ have the same profile than $R$, $\mu_q=\mu_R$ then $q$ have no vertical saddle connections there and then there is only one singularity on each boundary of $R_e^{\bullet}$. It's possible to choose a square root's $\alpha$ of $q$ on $R^{\bullet}_e$ such that $\Re q =dx$. After this choice I denote $x_e^-$ the singularity on the left boundary and $x_e^+$ the one on the right. Let $I_e\subset R^\cdot_e$ the horizontal maximal open interval oriented according $e$ such that the left extremity is $x_e^-$. I have an isomorphism $I_e=]0,m_e(S_q)[$, I can consider for all $x\in I_e$ the vertical flow $v_t$ such that $\Im \alpha (\partial v_t)=1$. It define a map \begin{eqnarray*} \phi_e : I_e\times \R &\longrightarrow& R_e^\bullet\\ (x,y)&\longrightarrow& v_y(x). \end{eqnarray*} If $(x,y)$ are complex coordinates then the pull back of $\alpha$ under this map is equal to $dz$. In the local coordinates given by $\phi_e$ I can define \begin{equation*} x_e^+=m_e(\alpha) + ix_e(\alpha). \end{equation*} which is the relative period of $\Im q$ along the edge $e$. There is no sign ambiguities because I assume than the real part is positive and then $x_{s_1e}=x_{e}$. This define an element of the tangent space $T_R$.\\ By construction the data $(S,x)$ are enough to recover $q$. It suffice to glue the rectangles $R_e^{\bullet}$ of length $m_e$ by performing a shear of parameter $x_e$ on the right boundary. There is no constraint on the parameter $(m,x)$ to perform the construction. I obtain in this way a riemann surface marqued by $M_R$ and the one form $(dz)^2$ on each rectangle induce a quadratic differential $q_S(x)$. The two construction are the inverse of each other then I can conclude that there is a bijection \begin{equation*} \MF(S)\longrightarrow T_R. \end{equation*} The imaginary part of the quadratic differential $\alpha_S(x)$ define a foliation $\lambda_R(x)$ which is does not depend of the metric on the edges, so the space $\MF(S)$ depend only of $R$.\\ To conclude \begin{equation*} \Res_\beta \lambda = \sum_{e\in X_1R} y_e(\beta)x_e(\lambda). \end{equation*} The RHS is $dl_\beta$ evaluated at $\sum_e x_e(\lambda)\partial_{m_e}\in T_R$. Then the elements of $\MF_0(S)$ correspond exactly to the vectors in $K_R$. \end{proof} \paragraph{Irreducible ribbon graphs :} \label{irreducible_graph} Irreducible ribbon graph are generalised pair of pants in some sense, they are interesting class of ribbon graph but I wont use them so much here. \begin{Def} A ribbon graph is irreducible iff $\MS_\Z(R)=0$ \end{Def} They are irreducible because we can reduce they topology by admissible surgeries. In some sense they are minimal objects in the category of surfaces with decoration. From the results of the last section I can derive the following fact. A ribbon graph is irreducible iff it satisfy \begin{equation*} K_R=\{0\}. \end{equation*} For such graph the length of each edge is an explicit function of the length of the boundaries. The irreducible ribbon graph can be classified, we have the following proposition which is given by computing the dimension of $K_R$. \begin{prop} A ribbon graph is irreducible iff it's of genus zeros and it satisfy one of the two following conditions \begin{itemize} \item it have only two vertices of odd degree, \item or it's orientable and have only one vertex. 
\end{itemize} \end{prop} \paragraph{Dual coordinates on $\MF(R,\partial R)$ and $\MF_0(R)$} I define $\MF(M,\partial M)$ as the space of formal sums \begin{equation*} \overline{\lambda}= \lambda + \sum_\beta h_\beta \beta \end{equation*} where $h\in \R_{+}^{\partial M}$. An element of this space is a foliation in $\MF_0(R)$ marked by the choice of a periodic, possibly singular, trajectory around each pole. For $R$ an embedded ribbon graph I define the subspace $\MF(R,\partial R)$ of admissible foliations on $R$. There is a natural inclusion of $\MF_0(R)$ in $\MF(R,\partial R)$ and a forgetful map \begin{equation*} \MF(R,\partial R)\longrightarrow \MF_0(R). \end{equation*} The ``coordinates'' $(y_e)$ of equation \ref{y_coord} are well defined for the elements of $\MF(R,\partial R)$ and more generally of $\MF(M,\partial M)$. But in general the map \begin{equation*} y: \MF(R,\partial R)\longrightarrow \R_+^{X_1R} \end{equation*} is neither injective nor surjective. I define other parameters in the following way. For each $e\in XR$ let $\gamma_e$ be the unoriented arc that joins $[e]_2$ and $[e]_0$, and I set \begin{equation*} z_e = \iota(\lambda,\gamma_e), \end{equation*} which is the distance between the singularity $[e]_0$ and the boundary $[e]_2$. These quantities satisfy the relations (see figure \ref{coordfig}) \begin{equation*} x_e(\lambda)=z_{e}(\lambda)-z_{s_2e}(\lambda)~~~~~~y_e(\lambda)=z_{s_0^{-1}e}(\lambda)+z_{e}(\lambda) \end{equation*} and the $(z_e)$ also satisfy the constraints \begin{equation} \label{constraints_z} z_{s_2s_1e}(\lambda)+z_{e}(\lambda)=z_{s_2e}(\lambda)+z_{s_1e}(\lambda). \end{equation} Let $W_R^+$ be the set of $(z_e)\in \R_+^{XR}$ that satisfy the last relations, and $W_R$ the same space but with coefficients in $\R$. Then we have the following fact. \begin{lem} \label{WRtraintrack} There is a train track $\tau_R$ such that $W_R^+$ is the set of weights on $\tau_R$. \end{lem} \begin{figure} \caption{The train track} \label{figtraintrack} \end{figure} I construct the train track in the following way (see figure \ref{figtraintrack}). To each oriented edge I associate a vertex $v_e$. The edges of $\tau_R$ are of two types: \begin{itemize} \item there is an edge $s_e$ for all $e\in X_1R$ that joins the two vertices labelled by the two orientations of $e$, \item there is an edge $s'_e$ for all $e\in XR$ that joins $v_{[e]_1}$ and $v_{[s_0e]_1} $. \end{itemize} Then $W_R^+$ is the set of positive weights on the train track $\tau_R$. The train track is non-degenerate, and we have $\dim W_R=\dim W_R^+=\#X_1R+1$.\\ Then the following proposition holds. \begin{prop} \label{prop_coord_z} The map \begin{eqnarray*} \MF(R,\partial R)&\longrightarrow& W_R^+\\ \lambda &\longrightarrow& z(\lambda) \end{eqnarray*} is a bijection and identifies $W_R^+(\Z)$ with $\MS_\Z(M,\partial M)$. \end{prop} \begin{rem} In the orientable case I can consider for each $e\in XR$ the class of $\gamma_e$ in $H_1(M_R^{top},X_0R\cup X_2R,\R)$. Then the space $W_R$ is identified with the cohomology $H^1(M_R^{top},X_0R\cup X_2R,\R)$ if we perform the change of variables $(\epsilon(e)z_e)$. \end{rem} \begin{cor} \label{cor_dl_nonzero} The one-form $dl_\lambda$ is equal to zero on $T_R$ iff the foliation $\lambda$ is trivial. \end{cor} As I will show later this is no longer true if we restrict to $K_R$. I now prove proposition \ref{prop_coord_z}; I restrict to abelian differentials for simplicity. \begin{proof} I use zippered rectangles as in the last section.
For all $(S,z)\in \Met(R)\times W^+_R$ let $x(z)$ the x-coordinates given by the last relation. From proposition \ref{prop_coord_x} I can construct an abelian differential $\alpha_S(x)$ on $M^{\bullet}$. As we have \begin{equation*} x_e=z_{s_2e}-z_{e} \end{equation*} the sum along a boundary is zeros and then $x(z)$ is in $K_R$ so $\Im \alpha_S(x)$ is in $\MF_0(R)$. For each $e$ we can consider the trajectory along $[e]_2$ which pass trough the point $(0,z_e)\in R_e^\cdot$, this is well defined due to the constraints \ref{constraints_z}. The horizontal foliation and the trajectory does not depend of the choice of $S\in \Met(R)$ and then we have the inverse map \begin{equation*} W_R^+\longrightarrow \MF(R,\partial R). \end{equation*} \end{proof} \subsection{Deformations of metric ribbon graphs, twist and horocyclic flow :} I use curve and foliations to study deformations of metric ribbon graph. As in \cite{andersen2020kontsevich} I consider twist flow along admissible curves, I rely this flow to the horocyclic flow on the space of quadratic differential with poles, where it's is much easier to understand it.\\ \paragraph{Combinatorial twist :} We give a first intuitive definition of the twist flow which is the same than the definition of the twist flow along a geodesic on an hyperbolic Riemann surface. Let $\gamma\in\Si(M)$ then after cutting $S$ along $\gamma$ there is two new boundaries $\gamma^1,\gamma^2$ in $M_\gamma$ that correspond to $\gamma$. If $S\in \Met(R)$, a point $x\in \gamma$ induce two points $x^i\in \gamma^i$. It is possible to glue the two boundaries $\gamma^1,\gamma^2$ of $S_\gamma$ by identifying these two points and the result is $S$. For $t$ small enough it's also possible to do translation of $x^2$ by a distance $-t$ according to the orientation of the boundary $\gamma^2$. This give a new point $x^2_t$ and then it's possible to glue $S_\gamma$ by identifying $x^1$ and $x^2_t$. The new surface is denoted $\phi^t_\gamma(S)$ and its a well defined metric ribbon graph for $t$ small enough which does not depend of the choice of the base point $x$ and the label on the two boundaries.\\ The flow $\phi^t_\gamma(S)$ is contained in $\Met(R)$ for $t$ small enough iff the curve $\gamma$ is admissible. When the curve is not admissible it split a vertex and then the twist flow will split this vertex for arbitrary small times so it's not included in $\Met(R)$.\\ For two disjoint curves $\gamma_1,\gamma_2$ there is the relation have $(S_{\gamma_1})_{\gamma_2}=(S_{\gamma_2})_{\gamma_1}=S_{\gamma_1\cup\gamma_2}$ and then $\phi_{\gamma_1}^t\circ \phi_{\gamma_2}^t=\phi_{\gamma_2}^t\circ \phi_{\gamma_1}^t$ and it's possible to define for all $\Gamma\in \MS_\R(M)$ \begin{equation*} \phi_\Gamma^t=\prod_\gamma \phi_\gamma^{m_\gamma t}, \end{equation*} for $t$ small enough.\\ \paragraph{Twist flow in coordinates :} \label{twist_in_coord} Let $\gamma$ an admissible curves on $R$. From lemma \ref{lem_combi_admissible} there is a combinatorial representation $(e_i)$ with $e_{i+1}=s^\pm e_i$. Then it's possible to define the signed intersection number $\iota_\gamma(i)$ by \begin{equation*} \iota_\gamma(i) = \left\{ \begin{array}{ll} 1 & \mbox{if } e_i=s^+e_{i-1}~~~~e_{i+1}=s^-e_{i} \\ 0 & \mbox{if } e_i=s^-e_{i-1}~~~~e_{i+1}=s^+e_{i} \\ 0 & \mbox{else.} \end{array} \right. \end{equation*} Then $\iota_\gamma(e)$ for $e\in X_1R$ is defined as the sum of the $\iota_\gamma(i)$ over the $i$ such that $[e_i]_1=e$. 
These coefficients are independent of the combinatorial representation they are well defined for integral multi-curves by linearity and satisfy \begin{lem} The twist flow along $\Gamma$ is given locally by \begin{equation*} m_e(\phi_\Gamma^t(S))=m_e(S)+t\iota_e(\Gamma). \end{equation*} \end{lem} In fact we can see that $\iota_e(\Gamma)=x_e(\Gamma)$ and it's a "combinatorial" formula for the period coordinates. \paragraph{Twist flow and horocyclic flow :} An other way to define the twist flow is the following. For all multi-curve $\Gamma$ it's possible to consider the subspace $\MF_\Gamma(M)$ of $\MF(M)$ of foliations transverse to $\Gamma$. As $\Gamma$ correspond to an element of $\MF_0(M)$ from \ref{prop_hubbard_masur_poles} each element of $\MF_\Gamma(M)$ is the real part of a unique quadratic differential $q_\Gamma(\lambda)$ with imaginary foliation given by $\Gamma$. If $\gamma$ an essential curve $q_\gamma(\lambda)$ is a Jenkin-Strebel differential with only one cylinder of core curve $\gamma$ and height one. Then the intuitive notion of twist correspond to a shear along this cylinder which can be defined by using the horocylic flow $U_t$. The horocyclic flow preserve the imaginary foliation and then it define a map \begin{equation*} \phi^t_\gamma(\lambda)=\Re (U_t(q_\gamma(\lambda))). \end{equation*} for all $\lambda\in \MF_\Gamma(M)$. The twist flow is well defined for all $t$ \begin{equation*} \phi_t: \MF_\Gamma(M)\longrightarrow \MF_\Gamma(M). \end{equation*} As we will see later there is coordinate on $\MF_\Gamma$ on wich $\phi_t$ is continuous. Moreover $\Tc(M)\subset \MF_\gamma(M)$ is open for these coordinates then twist flow is well defined for small time on $\Tc(M)$. We have a straightforward generalisation for weighted multi-curves and the flow satisfy the following properties \begin{lem} Twist flow and horocyclic flow coincident on weighted multi-curves. \end{lem} More generally for all foliation $\lambda\in \MF(M)$ it's possible to define the twist flow on the space of foliations transverse to $\lambda$ by using the horocyclic flow. \begin{prop} The twist flow along $\lambda$ preserve $\Met(R)$ for small time if $\lambda$ is in $\MF(R)$. In this case the twist flow is locally a translation generated by the locally constant vector field $\xi_\lambda$ \begin{equation*} \xi_\lambda= \sum_e x_e(\lambda)\partial_{m_e} \end{equation*} \end{prop} Then the tangent vector of the twist flow is just given by the relative periods $x$ of the foliation along the edges of the graph. From the results of the last section we see that all the vector in $T_R$ is tangent to a unique trajectory of the twist flow. \begin{proof} For all $S\in \Met(R)$ and $\lambda \in \MF(R)$ the horocyclic flow $\phi_\lambda^t(S)$ correspond to an horizontal stretch on the rectangles $[0,m_e(S)]\times \R$ and then we see that it's well defined for small times. Moreover by computing the period coordinates give \begin{equation*} m_e(\phi_\lambda^t(S))= m_e(S) + t x_e(\lambda). \end{equation*} \end{proof} \paragraph{Gluing and bundles for oriented surfaces :} \label{gluing_coordinates} Let $\Mo$ a directed surface and $\Gao$ a directed multi-curve on it. I consider the subset $\MF_{\Gao}(\Mo)$ of orientable foliations on $\Mo$ transverse to $\Gamma$ and which are represented by an abelian differential. 
Such differential induce a direction on the curve of $\Gamma$ and I assume that it correspond to $\Gao$.\\ For all stable graph $\Go$ let $\Tc(\Go)$ the affine subset of $\prod_c \Tc(\Go(c))$ \begin{equation*} \Tc(\Go)=\left \{ S=(S(c))\in \prod_c \Tc(\Go(c)) | l_\beta(S)=l_{j(\beta)}~~~~~\forall \beta \in X\G \right\} \end{equation*} Let $\Gao$ an oriented multi-curve with stable graph $\Go$ then the following proposition i true \begin{prop} \label{cuttingmeasurebundle} $\MF_{\Gao}(\Mo)$ is stable under the twist flow along each curves in $\Gamma$ and there is a surjective map \begin{equation*} \cut_\Gao: \MF_{\Gao}(\Mo) \longrightarrow \Tc(\Go). \end{equation*} Moreover this map is an affine $\R^{\Gamma}$ bundle compatible with the integral structure \end{prop} We will proove the following lemma which is the existence of local twist coordinates and give the proposition as a corollary. \begin{lem} For each $S\in \Tc(\Go)$ there exist $V\subset \Tc(\Go)$ and $U\subset \MF_{\Gao}(\Mo)$ such that \begin{itemize} \item $V$ is an open neighbourhood of $S$ invariant by dilatation and $U=\cut_{\Gao}^{-1}(V)$ \item for all $\gamma\in \Gamma$ there is $t_\gamma : U \longrightarrow \R$ such that \begin{equation*} t_\gamma(\phi^t_{\gamma'})=t_\gamma + t \delta_{\gamma,\gamma'} \end{equation*} for all $\gamma'\in \Gamma$. And we have a piece-wise linear isomorphism \begin{equation*} U\longrightarrow V \times \R^{\Gamma} \end{equation*} which induce a bijection \begin{equation*} U\cap \MS_\Z(M) \longrightarrow v_\Z \times \Z^\Gamma, \end{equation*} where $U_\Z,V_\Z$ are the integral points \end{itemize} \end{lem} \begin{proof} Let $S\in \T_{\Gao}^{comb,*}(\Mo)$ and let $(R_\Gamma(c))$ such that $S_\Gamma\in \prod_c \Met(R_\Gamma(c))$, we define $U$ as the set of foliation $\lambda\in \MF_{\Gao}(\Mo)$ such that $\lambda_\Gamma \in \prod_c \Met(R_\Gamma(c))$ and we take $V$ as the set $S'\in\prod_c \Met(R_\Gamma(c))$ with $l_\beta(S')=l_{j_\Gamma(\beta)}(S')$. For each $\gamma\in \Gamma$ we have \begin{equation*} \phi_\gamma^t(\lambda)= \Re U_tq_\gamma(\lambda) \end{equation*} so $\cut_\Gamma(\phi_\gamma^t(\lambda))=\cut_\Gamma(\lambda)$ and then $U$ in invariant under the twist flow. For all $\gamma\in \Gamma$ we denote $C_\gamma^\Gamma(\lambda)$ the cylinder in $q_\Gamma(\lambda)$ associated with $\gamma$. It's possible to fix two singularities $s_\gamma^1,s_\gamma^2$ one on each boundary of $C_\gamma^\Gamma(\lambda)$ and an arc $a_\gamma$ contained in $C_\gamma^\Gamma(\lambda)$ that join these two singularities. Then I can consider \begin{equation*} t_\gamma(\lambda)=\frac{\Re \langle q_\Gamma(\lambda),a_\gamma\rangle}{\Im \langle q_\Gamma(\lambda),a_\gamma\rangle}=\frac{\Re \langle q_\gamma(\lambda),a_\gamma\rangle}{\Im \langle q_\gamma(\lambda),a_\gamma\rangle} \end{equation*} which does not depend of the choice of the roots of the quadratic differential. I have the relation \begin{equation*} t_\gamma(\phi_{\gamma'}^t\lambda)=t_\gamma(\lambda)+t\delta_{\gamma,\gamma'} \end{equation*} and if $\lambda\in U_\Z$ then $t_\gamma(\lambda)\in \Z$ and $\cut_\Gamma(\lambda)\in V_\Z$ because by definition the period are integral. By gluing it's possible to construct the inverse map \begin{equation*} gl_\Gamma: V\times \R^{\Gamma} \longrightarrow U \end{equation*} by construction the maps $\cut_\Gamma, t_\gamma,gl_\Gamma$ are piece wise linear and preserve the set of integral points. 
\end{proof} \paragraph{Covering of admissible curves :} \label{cover_admissible} Let $\Gao$ be a directed curve on $\Mo$ and let $\Tcs_{\Gao}(\Mo)$ be the set of generic metric ribbon graphs $S$ such that $\Gamma$ is admissible and the orientations are compatible, $\Gao_S=\Gao$. There is a natural inclusion \begin{equation*} \Tcs_{\Gao}(\Mo)\longrightarrow \MF_{\Gao}(\Mo) \end{equation*} and the restriction of the cutting map defines a map \begin{equation*} \Tcs_{\Gao}(\Mo)\longrightarrow \Tcs(\Mo). \end{equation*} Let $\MF_{\Gao}^*(\Mo)$ be the subset of foliations with no saddle connection at all. This subset corresponds to the foliations represented by an abelian differential with simple zeros. A foliation in $\MF_{\Gao}^*(\Mo)$ is necessarily represented by a four-valent ribbon graph, and then there is a bijection \begin{equation*} \MF_{\Gao}^*(\Mo)\longrightarrow \Tcs_{\Gao}(\Mo). \end{equation*} \section{Acyclic decomposition :} In this part I give one of the main results of this text, which allows one to obtain a recursion for the volumes. I will study the Poisson structure on the space of metric ribbon graphs and give several expressions for the pairing, which was first studied in \cite{kontsevich1992intersection}. I will show that the combinatorial length is in some sense the Hamiltonian of the twist flow, which generalises a result of \cite{andersen2020kontsevich}. I will briefly discuss irreducible ribbon graphs, but they do not play a central role in this text. Finally I will show that it is possible to separate a vertex from the rest of the surface. The proof uses a degeneration of the symplectic structure, and I tried to use as little combinatorics as possible. \subsection{Symplectic geometry on the space of metric ribbon graph :} \paragraph{The anti-symmetric pairing on $K_R$ :} In the case of an orientable ribbon graph we have identifications \begin{equation*} T_R\overset{f_R}{\simeq}H^1(M_R ,X_0R,\R)~~~~,~~~~ K_R\overset{f_R}{\simeq}H^1(M_R^{top} ,X_0R,\R) \end{equation*} given by a map $f_R$. We have an exact sequence for the relative cohomology \begin{equation*} 0 \rightarrow H^0(M^{top},\R)\rightarrow H^0(X_0R,\R)\rightarrow H^1(M^{top},X_0R,\R)\rightarrow H^1(M_R^{top},\R) \rightarrow 0. \end{equation*} The space $H^1(M_R^{top},\R)$ is a symplectic vector space for the intersection pairing \begin{equation*} \langle\omega_1,\omega_2\rangle = \int_{M_R^{top}} \omega_1\wedge\omega_2. \end{equation*} This pairing induces a two-form $\Omega_R$ on $K_R$, which is degenerate in general, so that there is only a Poisson structure: \begin{equation*} \Omega_R(f_R(\omega_1),f_R(\omega_2))=\int_{M_R^{top}} \omega_1\wedge\omega_2. \end{equation*} The long exact sequence in relative cohomology is useful to study the degeneration of the symplectic structure. The image of $H^0(X_0R,\R)$ in the exact sequence of relative cohomologies measures how degenerate the pairing is. We denote by $\hat{H}_R$ this subspace in $T_R$. Then the pairing $\Omega_R$ induces a non-degenerate pairing on $K_R/ \hat{H}_R$.\\ \begin{lem} \label{lemdegsymppair} If the graph is orientable the dimension of $\hat{H}_R$ is given by $\#X_0R-1$. \end{lem} We deduce that when the graph is orientable the form $\Omega_R$ is non-degenerate iff the graph has only one vertex. Such graphs will be called minimal graphs. \begin{Def} A ribbon graph is minimal if it is orientable with only one vertex, i.e. iff the Poisson structure on $K_R$ is symplectic. \end{Def} In figure \ref{minimal} I give a series of minimal graphs of low degree.
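As a concrete example (not taken from the text), consider the planar ``figure eight'': a single vertex of degree four with two loops attached, embedded in the sphere, so that $\#X_1R=2$, $n=3$ and $g=0$. Its dual map is bipartite, since the outer face is adjacent to each of the two inner faces while the inner faces are not adjacent to each other; the graph is therefore orientable, hence minimal, and the orientation relation reads \begin{equation*} l_{\beta_{\mathrm{out}}}=l_{\beta_1}+l_{\beta_2}=m_a+m_b, \end{equation*} where $m_a,m_b$ are the two edge lengths. Consequently \begin{equation*} \dim K_R=\#X_1R-(n-1)=0=4g-3+n, \end{equation*} so this minimal graph is in fact irreducible, in agreement with the classification of irreducible graphs of genus zero given above.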
In general a minimal ribbon graph is not necessarily irreducible. \begin{figure} \caption{List of minimal orientable graphs with a vertex of degree $4,6,8$; the red ones are not irreducible} \label{minimal} \end{figure} \paragraph{The dual pairing :} There is also an identification \begin{equation*} T_R^*\overset{\hat{f}_R}{\simeq}H^1(M_R^{top}\backslash X_0R,X_2R,\R). \end{equation*} Let $\hat{K}_R$ be the subspace \begin{equation*} H^1(M_R^{top},X_2R,\R); \end{equation*} using the Mayer-Vietoris sequence, in this case the space $\hat{K}_R$ is the kernel of the map \begin{equation*} T_R^*\longrightarrow \hat{H}_R^* \end{equation*} and hence is equal to $(T_R/ \hat{H}_R)^*$. The intersection pairing induces an anti-symmetric pairing $\hat{\Omega}_R$ on $\hat{K}_R$. As before we have an exact sequence for the relative cohomology \begin{equation*} 0\rightarrow H^0(M^{top}_R,\R) \rightarrow H^0(X_2R,\R) \rightarrow H^1(M^{top}_R,X_2R,\R) \rightarrow H^1(M^{top}_R,\R) \rightarrow 0. \end{equation*} The image of $H^0(X_2R,\R)$ corresponds to the space $H_R$ and $\hat{\Omega}_R$ induces a symplectic structure on $\hat{K}_R/H_R$. \paragraph{Pairing on $W_R$ :} Consider the space \begin{equation*} H^{1}(M_R, X_2R\cup X_0R,\R); \end{equation*} there are two canonical forgetful maps \begin{eqnarray*} p_R : H^{1}(M_R, X_2R\cup X_0R,\R) &\longrightarrow& H^{1}(M_R, X_0R,\R),\\ p_R^* : H^{1}(M_R, X_2R\cup X_0R,\R) &\longrightarrow& H^{1}(M_R, X_2R,\R), \end{eqnarray*} which are surjective.\\ On each of these cohomology spaces we have a canonical intersection pairing $\langle.,.\rangle$ and moreover \begin{equation*} \langle \alpha_1 ,\alpha_2\rangle = \langle p_R(\alpha_1) ,p_R(\alpha_2)\rangle = \langle p_R^*(\alpha_1) ,p_R^*(\alpha_2)\rangle. \end{equation*} If $W_R$ is the vector space defined in paragraph \ref{coord_z}, then we have the following. \begin{lem} If $R$ is orientable the space $H^{1}(M_R, X_2R\cup X_0R,\R)$ is isomorphic to $W_R$ via the map \begin{equation*} \omega\longrightarrow (z_e(\omega))_{e\in XR} \end{equation*} where $z_e(\omega)=\langle \omega, \gamma_e\rangle$. \end{lem} Let $\overline{\Omega}_R$ be the pairing induced on $W_R$. Then the following lemma holds (here $\hat{p}_R$ denotes the map $p_R^*$ above). \begin{lem} \label{lem_equ_pairing} For all $z_1,z_2\in W_R$ we have \begin{equation*} \overline{\Omega}_R(z_1,z_2)=\Omega_R(p_R(z_1),p_R(z_2))=\hat{\Omega}_R(\hat{p}_R(z_1),\hat{p}_R(z_2))=\langle \hat{p}_R(z_2),p_R(z_1)\rangle, \end{equation*} where the last bracket is the pairing between $T_R^*$ and $T_R$. \end{lem} \paragraph{Hamiltonian of the horocyclic flow } Here I give a sketch of the proof of the following theorem, which was also proved in \cite{andersen2017geometric} for the principal stratum. This result is of course valid for non-orientable ribbon graphs by passing to the double cover; in this case the Poisson structure is defined in a similar way by using the intersection product on the anti-invariant cohomology of the double cover, in which the period coordinates naturally take their values. \begin{thm} Let $\lambda \in \MF_0(R)$; then we have \begin{equation*} \Omega_R(\xi_\lambda,.)=dl_\lambda. \end{equation*} \end{thm} \begin{proof} The proof is immediate: for all $\lambda \in \MF_0(R)$ we constructed in section \ref{coord_z} an element $z(\lambda)\in W_R^+$, and by definition of $p_R$
\begin{equation*} p_R(z(\lambda))=\xi_\lambda~~~~~~\hat{p}_R(z(\lambda))=dl_\lambda. \end{equation*} From this and lemma \ref{lem_equ_pairing} I obtain \begin{equation*} \Omega_R(\xi_\lambda,\xi)= \Omega_R(p_R(z(\lambda)),\xi) = \langle \hat{p}_R(z(\lambda)),\xi\rangle = dl_\lambda(\xi). \end{equation*} \end{proof} \paragraph{Computation of the pairing in coordinates :} In this section I give several expressions for the pairing. On $W_R$ it is given by the following formula \begin{equation*} \overline{\Omega}_R=\frac{-1}{2}\sum_{e\in X_1R} x_e \wedge y_e, \end{equation*} and then $\overline{\Omega}_R$ is the pull-back of the canonical symplectic form under a map \begin{equation*} W_R\longrightarrow T_R\times T_R^*. \end{equation*} In terms of the $z$ coordinates we also have the expression \begin{equation*} \overline{\Omega}_R=\frac{1}{2}\sum_{e\in X_1R} z_e \wedge z_{s_0e}, \end{equation*} which is in some sense the Thurston form, because it has the same shape as the Thurston two-form on the train track $\tau_R$ (ref \cite{}).\\ There is a dual expression of this formula. The roles of $s_0$ and $s_2$ are in some sense symmetric and we have \begin{equation*} \overline{\Omega}_R=\frac{1}{2}\sum_{e\in X_1R} z_e \wedge z_{s_2e}. \end{equation*} It is also possible to give expressions in terms of the forms $x$ and $y$. In \cite{kontsevich1992intersection} M.~Kontsevich introduced for each boundary $\beta$ a two-form on $K_R$ defined in the following way. Fix an edge $e$ with $[e]_2=\beta$ and assume that the boundary contains $r$ edges; then \begin{equation*} \omega_\beta=\sum_{0\le i<j<r}x_{s_2^ie}\wedge x_{s_2^je}=-\sum_{1\le i\le r} z_{s_2^ie}\wedge z_{s_2^{i-1}e}, \end{equation*} and then, by abuse of notation, \begin{equation*} \Omega_R=\overline{\Omega}_R=\frac{-1}{2}\sum_\beta \omega_\beta . \end{equation*} In a similar way, for each vertex $v$ we can fix an edge $e$ with $[e]_0=v$; if the vertex is of degree $r$ then I can set \begin{equation*} \hat{\omega}_v=\sum_{0\le i<j<r}(-1)^{i+j}y_{s_0^ie}\wedge y_{s_0^je}=-\sum_{1\le i\le r} z_{s_0^ie}\wedge z_{s_0^{i-1}e}, \end{equation*} and then \begin{equation*} \overline{\Omega}_R=\frac{-1}{2}\sum_v \hat{\omega}_v. \end{equation*} \paragraph{Degeneration of the structure} From the results of the last section I show that the structure is degenerate on the space $\hat{H}_R$, which is the image of \begin{equation*} H^0(X_0\tilde{R},\R)\longrightarrow K_R, \end{equation*} and could also be called the kernel foliation. There is also a map \begin{equation*} dl_\bullet: \MF_0(R)\longrightarrow K_R^*, \end{equation*} and from the results of the last section the following proposition holds. \begin{prop} An element $\xi$ is in $\hat{H}_R$ iff $dl_\lambda(\xi)=0$ for all $\lambda\in \MF_0(R)$. \end{prop} This result is of some interest: it means that two metric ribbon graphs in $\Met(R)$ are on the same leaf of the kernel foliation $\hat{H}_R$ iff they have the same geometry, in the sense that they have the same length spectrum on admissible closed curves. Note that this is not true for non-admissible curves. \subsection{Acyclic decomposition :} In this section I construct canonical curves that lie in the kernel foliation, and we will use them to decompose surfaces with vertices of even degrees.
\begin{figure} \caption{Symplectic curve on an orientable graph} \label{fig:my_label} \end{figure} \paragraph{Statement of the result :} We say that an admissible multi-curve separates a vertex $v$ of $R$ from the rest of the surface if the component of $R_\Gamma$ that contains $v$ contains no other vertex. \begin{thm} \label{thm_acycl_curve} Let $R$ be an oriented metric ribbon graph with at least two vertices. For each vertex $v$ of $R$ there exists a unique admissible multi-curve $\Gamma_v^{+}$ such that \begin{itemize} \item the stable graph $\Go_v$ of $\Gamma_v^+$ contains a component $c_0$ which separates $v$ from the rest of the surface, \item all the curves in $\Gamma_v^+$ are boundaries of $c_0$, \item $c_0$ is glued along its negative boundaries. \end{itemize} \end{thm} These multi-curves are intimately related to the degeneration of the symplectic structure $\Omega_R$. They satisfy several elementary properties. \begin{itemize} \item The multi-curve is functorial, in the sense that it is well behaved under the action of the mapping class group.\\ \item We have the dual result for negative boundaries: $\Gamma_v^-(S)$ is defined as $\Gamma_v^+(-S)$, where $-S$ is obtained by reversing the orientation of $S$. If $\xi_v^{\pm}$ is the tangent vector of the twist flow along $\Gamma_v^{\pm}(S)$, then $\xi_v^{-}=-\xi_v^{+}$. \item The tangent vectors $\xi_v^{-}$ of the twist flow are in $\hat{H}_R$. \end{itemize} Theorem \ref{thm_acycl_curve} is quite surprising: it means that the local structure of the graph around an even vertex admits a model which corresponds to a minimal ribbon graph, and that it is possible to cut the graph around the vertex in a canonical way. This process allows one to recover the structure of the graph inductively on the number of vertices, and the recursion scheme is very similar to an oriented version of the topological recursion, as we will see later. By applying this theorem several times we obtain the following corollary, which shows how rigid orientable ribbon graphs are. \begin{cor} \label{cor_acycl_decomp} Let $R$ be an oriented ribbon graph with labelled vertices; then there is a unique admissible multi-curve $\Gamma$ such that \begin{enumerate} \item the components of $R_\Gamma$ are minimal, \item the oriented stable graph $\Go$ associated to $\Gamma$ is acyclic, \item the labels on the vertices are compatible with the order on the graph. \end{enumerate} \end{cor} In other words, in the orientable case, if we enumerate the vertices we obtain a canonical decomposition of the graph into minimal pieces. This result is in general different from a decomposition into Fenchel-Nielsen coordinates, where it is not assumed that the graph is acyclic and where the components are supposed to be irreducible. With this result we are not allowed to cut the surface along a curve which is a handle, so some genus remains in the components of the decomposition. On each component of the acyclic decomposition the two-form is non-degenerate, and on a minimal oriented surface an admissible curve is necessarily unbounded. The acyclic multi-curves of corollary \ref{cor_acycl_decomp} are therefore necessarily maximal, and I can reformulate this corollary by saying that if the order on the vertices is fixed then there is a maximal acyclic admissible multi-curve which is compatible with this order. \\ This theorem can be interpreted in terms of foliations with poles.
If the foliation is orientable and if we enumerate the singularities we can decompose it in a canonical way as a family of foliation with one singularity. The local structure is more easily understandable and the global structure is the one of a directed acyclic graph (figure \ref{fig_acyclic}). \\ In the case of the sphere the minimal ribbon graph are irreducible then we have the following corollary. \begin{cor} Les $R$ an oriented ribbon graph on the sphere with labelled vertices, then there is a unique maximal admissible multi-curves $\Gamma$ such that \begin{enumerate} \item The oriented stable graph $\Go$ associated to $\Gamma$ is acyclic, \item The labels on the vertices is compatible with the order on the graph. \end{enumerate} \end{cor} \paragraph{Proof of the theorem :} Let $R^{\circ}=(R,\epsilon)$ an oriented ribbon graph and let $v$ a vertex and $e$ an edge such that $[e]_0=v$. We will construct the tangent vector associated to the twist flow along $\Gamma_v^{+}$, we consider \begin{equation*} \xi_v^{+}(R)=\sum_{i=0}^{\deg_R(v)-1}(-1)^i \partial_{[s_0^ie]}. \end{equation*} Then we have the following fact \begin{lem} The vector $\xi_v^{+}(R)$ belong to $\hat{H}_R(\Z)$ and the only relation between the $(\xi_v^{+}(R))_{v\in X_0R}$ are proportional to \begin{equation*} \sum_v \xi_v^{+}(R) =0. \end{equation*} \end{lem} \begin{proof} From part \ref{coord_z} an admissible foliation in $\MF_0(R)$ we can associate $y$ "coordinates" and we have the formula \begin{equation*} dl_\gamma(\xi_v^+) = \sum_{i=0}^{2k-1} y_{s_0^ie}(\gamma)(-1)^i, \end{equation*} where $e$ is an half edge such that $[e]_0=v$ and $\epsilon(e)=1$. By using part \ref{coord_z} we have the formula \begin{equation*} y_e(\gamma)= z_e(\gamma)+z_{s_0e}(\gamma) \end{equation*} and then we see that we have $dl_\lambda(\xi_v^+)=0$.\\ If we consider a linear relation of the form \begin{equation*} \sum_v r_v \xi_v^+=0 \end{equation*} we can lift $r_v$ to $XR$ we obtain a map wich is invariant under $s_0$ such that \begin{equation*} r_e-r_{s_1e}=0 \end{equation*} and then there is only one such relation (up to a constant) given by \begin{equation*} \sum_v\xi_v^+=0. \end{equation*} \end{proof} \begin{rem} We have the exact sequence of cohomology \begin{equation*} 0\longrightarrow H^0(M_R^{top},\R) \longrightarrow H^0(X_0R,\R)\longrightarrow H^1(M_R^{top},X_0R,\R) \longrightarrow H^1(M_R,\R) \end{equation*} the second space is the dual of $\Z^{X_0R}$ and it's generated by the canonical basis $\eta_v$ wich satisfy $\eta_v(v')=\delta_{v,v'}$. The first map is then the diagonal embedding \begin{equation*} x\longrightarrow x\sum_v \eta_v. \end{equation*} The second map is the boundary map $\delta$ and the we have for all oriented edges \begin{equation*} \delta \eta_v e =\delta_{v,[s_1e]_0}-\delta_{v,[e]_0}. \end{equation*} then we see that $\xi_v^+$ represent the vector $\eta_v$ in the cohomology. \end{rem} Using proposition \ref{prop_coord_x} we have the following corollary \begin{cor} There is a unique admissible multicurve $\Gamma^+_v$ such that $\xi_{\Gamma^+_v}=\xi_v^{+}(R)$. \end{cor} We now give an elementary construction of the curve $\Gamma^+_v$. Fix an oriented edge $e_0$ with $\epsilon(e_0)=1$ and $[e_0]_0=v$. Then we can turn around the boundary $[e]_2$ until we reach $v$ again i.e we consider the smaller $k$ such that $e_k=[s_0s_2^k e]_0=v$ and then we leave the boundary to the one wich is on his right by setting $e_{k+1}=s_1s_0^{-1}s_1 e_k$ after some time we will come back to $e_0$. 
If the curve does not contain all the oriented edges $e$ with $\epsilon(e)=1$ and $[e]_0=v$ then we restart the procedure on another edge. At the end we obtain a minimal representation of a multicurve $\tilde{\Gamma}_v^+$. In other words, to construct the curve we take all the positive boundaries that meet $v$ and we perform the cutting and gluing at $v$ which are described in figure \ref{symp_curveconstr}. This curve is in $\MS_\Z(M,\partial M)$: all connected components are either in $\Si(M)$ or are homotopic to a negative boundary. Let $B^+$ be the positive boundaries of $R$ that are adjacent to $v$ and $B^-$ the negative boundaries of $R$ that are adjacent to $v$ only. From the construction used in section \ref{twist_in_coord} we have the following lemma. \begin{lem} The curve constructed in this way is $\Gamma^+_v + \sum_{\beta\in B^-} \beta$ and we have on $T_R$ \begin{equation*} dl_{\Gamma^+_v}=\sum_{\beta\in B^+} dl_\beta - \sum_{\beta\in B^-} dl_\beta \end{equation*} where we sum over the boundaries that contain $v$ and are oriented positively. \end{lem} We now prove the first part of theorem \ref{thm_acycl_curve}. \begin{lem} The curve $\Gamma_v^+$ satisfies the desired properties. \end{lem} \begin{proof} Let $R_v^+$ be the connected component of $R_{\Gamma^+_v}$ that contains $v$ and $\Go_{v,+}$ the stable graph associated to $\Gamma_v^+$. We can decompose $\Gamma_v^+$ into three sets of curves $A_i$, $i=1,\dots,3$, where \begin{itemize} \item $A_1$ are the curves that join two boundaries of $R_v^+$, \item $A_2$ are the curves that join a boundary of $R_v^+$ to another vertex of $\G_v^+$, \item $A_3$ are the curves that are not connected to $R_v^+$. \end{itemize} Let $\tilde{R}_v$ be the ribbon graph obtained by cutting $R$ along the curves in $A_2$, $\tilde{R}_v'$ the component that contains $v$ and $\tilde{R}_v''$ the union of the other components. We have the map $\cut_v'': T_R\longrightarrow T_{\tilde{R}_v''}$, which is surjective. All the boundaries in $B^+$ and $B^-$ are in $\tilde{R}_v'$, and for all $\gamma\in A_3$, $dl_\gamma$ on $T_R$ is the pull-back of $dl_\gamma$ on $T_{\tilde{R}_v''}$ under the map $\cut_v''$. Then we obtain \begin{equation*} \sum_{\gamma\in A_3}m_\gamma dl_\gamma =0 \end{equation*} on $T_{\tilde{R}_v''}$. All the coefficients are positive, so this multi-curve must be empty and $A_3=\emptyset$.\\ We have the following functoriality for $\xi_v^+(R)$. \begin{lem} For all admissible curves $\Gamma$ we have \begin{equation*} \cut_\Gamma \xi_v^+(R) = \xi_v^+(R_\Gamma). \end{equation*} \end{lem} Then we have \begin{equation*} \cut_v(\xi_v^+(R))=\xi_v^+(R_v^+). \end{equation*} By definition of $\Gamma_v^+$ we have $\cut_v(\xi_v^+(R))=0$, and then $\xi_v^+(R_v^+)=0$. Then, by using the following lemma, we can conclude that $R_v^+$ is symplectic and that $v$ is the only vertex in $R_v^+$. \begin{lem} For all oriented $R^{\circ}$ and $v\in X_0R$ we have $\xi_v^+(R^\circ)=0$ iff $R$ is symplectic. \end{lem} On $K_{\tilde{R}_v'}$ we have the relation \begin{equation*} \sum_{\gamma\in A_1}m_\gamma dl_\gamma =0. \end{equation*} As $R_v^+$ is symplectic this is also the case for $\tilde{R}_v'$, and we have the following lemma. \begin{lem} Let $R$ be any symplectic ribbon graph and $\Gamma$ an admissible curve; then the $dl_\gamma$, $\gamma \in \Gamma$, are independent on $K_R$. \end{lem} With this lemma we conclude that we must have $m_\gamma=0$ for all $\gamma\in A_1$. 
Finally we obtain the relation \begin{equation*} \sum_{\gamma\in A_2}m_\gamma dl_\gamma = \sum_{\beta\in B^+} dl_\beta - \sum_{\beta\in B^-} dl_\beta \end{equation*} on $T_{R_v^+}$. If $\epsilon_v$ is the orientation induced on $R_v^+$ then the only relations between the boundaries are given by $\epsilon_v$, and then we see that, as the $m_\gamma$ are positive, all the boundaries in $A_2$ are negative, and the boundary of $R_v^+$ is the union of $A_2,B^+,B^-$. Then the curve $\Gamma_v^+$ satisfies the desired properties. \end{proof} To conclude we need the converse statement. \begin{lem} If $\Gamma$ satisfies the desired properties then it is $\Gamma_v^+$. \end{lem} \begin{proof} Let $\Gamma$ be such a curve and let $c$ be a connected component of its stable graph that does not contain $v$; then we have the relation on $K_R$ \begin{equation*} \sum_{v\in X_0R_\Gamma(c)}\xi_v^-= \sum_{\gamma\in X\G_\Gamma(c)}\xi_\gamma, \end{equation*} and then, by summing over $c$, we get \begin{equation*} \xi_\Gamma= \sum_{v'\neq v}\xi_{v'}^-=\xi_v^+. \end{equation*} \end{proof} \begin{figure} \caption{Illustration of the construction.} \label{symp_curveconstr} \end{figure} \paragraph{Case of non-orientable surfaces :} When we consider a non-orientable surface with an even vertex we still have a degeneration of the symplectic structure, and we can also find canonical curves that separate this vertex from the rest of the surface. \begin{thm} If $R$ is any ribbon graph and $v$ a vertex of even degree, there are exactly two admissible multi-curves $\Gamma^{\pm}_v$ such that \begin{itemize} \item $\Gamma^{\pm}_v$ separates $v$ from the rest of the surface, \item The component $R_v^\pm$ is orientable and admits an orientation such that it is glued along its negative boundaries, \item All curves in $\Gamma^{\pm}_v$ are boundaries of $R_v^\pm$. \end{itemize} \end{thm} This theorem contains the case of theorem \ref{thm_acycl_curve}. In this case the group of automorphisms of the surface can possibly exchange the two multi-curves $\Gamma^{\pm}_v$. This happens, for instance, for the torus with one boundary. \begin{figure} \caption{Acyclic curve on a non orientable graph} \label{fig:my_label} \end{figure} \paragraph{Extracting a pair of pants on an oriented surface :} \label{acycl_pant} In the generic case, when there are only vertices of degree $4$, the only minimal surfaces are topological pairs of pants. In this case minimal and irreducible surfaces coincide, and the last theorem gives a particular family of canonical Fenchel--Nielsen decompositions of our graph.\\ Let $S^\circ$ be a generic oriented metric ribbon graph. An embedded bounded pair of pants in $S$ is an orientable curve $\Gamma$ such that \begin{itemize} \item There is a component $c_0$ of $\Go$ which is a pair of pants, \item All the curves in $\Gamma$ are in the boundaries of $c_0$, \item $c_0$ is glued along its negative boundaries. \end{itemize} The terminology is justified because the third condition implies that the length of such a curve is necessarily bounded by the lengths of the boundaries. This implies, by \ref{prop_coord_x}, that there is only a finite number of bounded embedded pairs of pants.\\ These three conditions also imply that the choice of the marked component $c_0$ is canonical. Moreover, the only orientable ribbon graph on a pair of pants contains a unique vertex, which is of order four. 
Then a bounded pair of pant on a generic oriented metric ribbon graph separate a vertex from the rest of the surface.\\ \begin{thm} \label{thm_bounded_pants} For each generic oriented metric ribbon graph $S$ and each vertex $v$ there exist a unique bounded pair of pants $\Gamma^+_v$ that separate $v$ from the rest of the surface. \end{thm} A reformulation of corollary \ref{cor_acycl_decomp} give the following result \begin{cor} \label{prop_bounded_pants_decomp} Les $R^{\circ}$ a generic ribbon graph with labelled vertices, then there is a unique orientable multi-curves $\Gamma$ such that \begin{enumerate} \item $\Gamma$ is maximal i.e components of $R_\Gamma$ are pair of pant's, \item The oriented stable graph $\Go$ associated to $\Gamma$ is acyclic, \item The labels on the vertices is compatible with the order on the graph. \end{enumerate} \end{cor} \begin{figure} \caption{Acyclic decomposition of an orientable foliation a (in red)} \label{decfol} \end{figure} \section{Surgery on the volumes and recurrence relation :} In this part I give the recursion for the volumes of moduli space. For each directed surface the moduli space $\Mc(\Mo,L)$ posses a measure $d\nu_{\Mo}(L)$. I will denote $Z_{\Mo}(L)$ the volume which is function on $\Lambda_{\Mo}$. It's well defined because the space is the a union of a finite number of relatively compact cells. In general the boundaries are labelled and then I choose to separate the positive and negative variable and write $Z_{g,n^+,n^-}(L^+|L^-)$ the volumes. \subsection{Surgery at the level of the volumes and stable graph} In this section I give results which generalise ideas of \cite{andersen2020kontsevich} outside the generic case. I study space of combinatorial surfaces marked by an admissible curves and perform surgeries along these curves. \paragraph{Covering and decomposition of the measures :} Let $\Mo$ and $\Go$ a directed stable graph, there is a natural bundle over $\Mc(\Go)$ wihch correspond to all the possible gluing of surfaces in $\Mc(\Go)$. If $\Gao$ is a multi-curve that represent $\Go$ and let $\MF_{\Gao}(\Mo)$ \ref{gluing_coordinates}. This space carry an action of $\Stab(\Gao)$ and it's possible to form the quotient \begin{equation*} B\Mc(\Go)= \Stab(\Gao) \backslash \MF_{\Gao}(\Mo). \end{equation*} The reasons of this choice are the following \begin{itemize} \item The space $\MF_{\Gao}(\Mo)$ contain in a natural way $\Tc_{\Gao}(\Mo)$. \item It's possible to cut an element of $\MF_{\Gao}(\Mo)$ along the curves in $\Gamma$ and the result is an element of $\Tc(\Go)$ and this induce a map \begin{equation*} \cut: B\Mc(\Go) \longrightarrow \Mc(\Go). \end{equation*} \item Moreover the twist flow along the curves in $\Gamma$ is well defined on $\MF_\Gamma(\Mo)$ which was not the case of$\Tc(\Mo)$ and this induce a affine orbifold torus action on $B\Mc(\Go)$. \end{itemize} This space is a piece-wise linear orbifold with a natural measure normalised by it's set of integer points and It denoted it $d\tilde{\nu}_{\Go}$. 
Using the results of \ref{twist_in_coord} the \begin{equation*} \cut : B\Mc(\Go)\longrightarrow \Mc(\Go) \end{equation*} is a bundle with structure group the $\#X_1\G$ dimensional torus moreover this map is piece-wise linear and \begin{lem} The measure $\tilde{\nu}_{\Go}$ and $\nu_{\Go}$ satisfy the relation \begin{equation*} \cut_{*} d\tilde{\nu}_{\Go} = \prod_{\gamma\in X_1\G} l_\gamma~~ d\nu_{\Go} \end{equation*} \end{lem} For any stable graph and any positive symmetric function \begin{equation*} F: \R_{+}^{X_1\G}\longrightarrow \R_+ \end{equation*} it's possible to consider the integral \begin{equation*} Z_{\G}(F)(L) = \int_{B\Mc(\G,L)}F(L_{\G}(S)) d\tilde{\nu}_{\Go}(L) \end{equation*} where $L_{\G}$ is the lengths of the curves of the stable graph and $L$ is a variable indexed by the boundaries. \begin{prop} For all $\Go$ a stable graph the function $Z_{\Go}(F)(L)$ is given by \begin{equation*} Z_{\Go}(F)(L) = \frac{1}{\#\Aut(\Go)} \int_{l\in\Lambda_{\Go}(L)}F(l) \prod_c Z_{M_{\Go}(c)}(L(c),l(c)) \prod_\gamma l_\gamma d\sigma_{\Go}(L) \end{equation*} \end{prop} \paragraph{Statistic for multi-curves covering and integration formula} \label{coveringcurve} If $\Gao$ is an oriented multi-curve on $\Mo$ I denote \begin{equation*} \Tcs_{\Gao}(\Mo)=\{S\in \T^{comb,*}(\Mo)|\Gamma \in \MF_0(S) ~~~\text{and}~~\Gao=\Gao_S\} \end{equation*} the space of generic oriented surfaces $S^{\circ}$ such that $\Gamma$ is orientable on $S^{\circ}$ and the orientation on it induced by $S^{\circ}$ correspond to $\Gao$. Let $\Go$ the stable graph of $\Gao$, there is an action of $\Stab(\Gao)$ on $ \T^{comb,*}_{\Gao}(\Mo)$ and I denote $\Mcs_{\Go}(\Mo)$ the quotient \begin{equation*} \Mcs_{\Go}(\Mo) =\Stab(\Gao) \backslash \Tcs_{\Gao}(\Mo) \end{equation*} There is a canonical map \begin{equation*} \pi_{\Go}~:~\Mcs_{\Go}(\Mo) \longrightarrow \Mcs(\Mo) \end{equation*} and the fiber over $S$ is the set of admissible curves on $S$ which are in the orbit of $\Gao$ or equivalently the admissible curves with stable graph given by $\Go$. The map $\pi_{\Go}$ is a covering over each cells and the $\Mcs_{\Go}(\Mo)$ is equipped by the pull back of the measure on $\Mcs(\Mo) $ As in \cite{andersen2020topological},\cite{andersen2020kontsevich} I consider the following statistic for the distribution of the length of multi-curves. We define $N_{\Go}F(S)$ as the sum of $F(L_{\Gamma}(S))$ over all the orientable curves with stable graph $\Go$. \begin{equation*} N_{\Go}F(S)=\sum_{\Gamma\in \pi^{-1}_{\Go}(S)} F(L_{\Gamma}(S)). \end{equation*} This is by definition the push forward of $F\circ L_{\Gamma}$ under $\pi_{\Go}$. The function $ N_{\Gamma}F$ is well defined on the moduli space $\Mc(M)$ because of the relation \begin{equation*} F(L_{g\cdot \Gamma}(g\cdot S))=F(L_\Gamma(S)) \end{equation*} and because the map \begin{equation*} g: \MS(R)\longrightarrow \MS(g\cdot R) \end{equation*} preserve the orientation on the stable graphs. Then it satisfy the following relation which is the formula for a push foward under a covering \begin{equation*} \int_{\Mcs(\Mo,L)}N_{\Go}F(S)d\nu_{\Mo}(L)=\int_{\Mcs_{\Go}(\M,L)}F(L_\Gamma(S))d\nu_{\Mo}(L). \end{equation*} There is a canonical map \begin{equation*} \Mcs_{\Gao}(\Mo)\longrightarrow B\Mcs(\Go) \end{equation*} this map is not surjective but the following lemma allow to avoid this problem \begin{lem} The subset $\Mcs_{\Go}(\Mo)$ is of full measure in $B\Mcs(\Go)$ and this is also true for $\Mcs_{\Go}(\Mo,L)$ in $B\Mcs(\Go,L)$ for all $L$. 
\end{lem} We do not give a detailed proof of this lemma; it is based on the fact that the complementary set is formed by foliations that contain a saddle connection. In the orientable case such saddle connections can be generic, but this phenomenon cannot happen if the foliation is transverse to a multi-curve. Then the only configurations of saddle connections that are possible are not generic, and the corresponding surfaces lie in dimension-one submanifolds. There are only countably many such submanifolds, so the problematic surfaces form a set of zero measure. Writing this argument out correctly is not very interesting here and would take a lot of space.\\ Moreover, in the case of the acyclic gluings that I will use, the situation is much simpler, because the gluings always create multi-arcs: if the stable graph is acyclic the space $\MF_{\Gao}(\Mo)$ is included in $\MA_\R(\Mo)$, because gluing cannot create a cycle. \\ Using this lemma I obtain the relation \begin{equation*} Z_{\Go}(F)(L)=\int_{\Mcs(\Mo,L)}N_{\Go}F(S)d\nu_{\Mo}(L) \end{equation*} This gives the following proposition, which was first proved by M. Mirzakhani in the case of hyperbolic surfaces. \begin{prop} The statistics satisfy the following integral formula \begin{equation*} \int_{\Mc(\Mo,L)}N_{\Go}F(S)\nu^{comb}_{\Mo}(L)=\frac{1}{\#\Aut(\Gamma^{\circ})}\int_{l\in \Lambda_{\Go}(L)}F(l) \prod_c Z_{\Go(c)}(L(c),l(c)) \prod_\gamma l_\gamma d\sigma_{\Mo} \end{equation*} \end{prop} \subsection{Recursion for the volumes :} In this part I use theorem \ref{thm_acycl_curve} and the results of the last part to obtain a recursion for the volumes $Z_{g,n^+,n^-}(L^+|L^-)$. \paragraph{A kind of degenerate geometric recursion formula :} We denote by $P(\Mo)$ the set of stable graphs on $\Mo$ which correspond to bounded pairs of pants. There are four families of such stable graphs, which are represented in figure \ref{fig_gluings} and are also given in the introduction. \begin{enumerate} \item We remove a pair of pants that contains two positive boundaries $(i,j)$, \item We remove a pair of pants that contains a positive boundary $i$ and a negative boundary $j$, \item We remove a pair of pants with one positive boundary $i$ which is connected to the surface by two negative boundaries and does not separate the surface, \item We remove a pair of pants with one positive boundary $i$ which is connected to the surface by two negative boundaries and separates the surface into two components. \end{enumerate} Then we can rewrite theorem \ref{thm_bounded_pants} in the following, somewhat strange, way, which is a kind of degeneration of the geometric recursion formula. The advantage of this formulation is that it is straightforward to integrate it by using the results of the last section. In other words, it means that the covering associated to the bounded pairs of pants is of degree $2g-2+n^++n^-$. \begin{lem} \label{prop_mirzmacshane_1} For all $S\in \Mcs(\Mo,L)$ we have \begin{equation*} 2g-2+n^++n^-= \sum_{\Go \in P(\Mo)} N_{\Go}(1)(S). \end{equation*} \end{lem} \begin{proof} This is just a reformulation of theorem \ref{thm_bounded_pants} using the definition of the functions $N_{\Go}(1)(S)$. \end{proof} \paragraph{Recursion for volumes :} In this section we make theorem \ref{thm_bounded_pants} effective in the case of surfaces with vertices of degree four. 
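The degree $2g-2+n^++n^-$ appearing in lemma \ref{prop_mirzmacshane_1} is an Euler characteristic count: for a generic directed surface every vertex has degree four, so if $V$, $E$ and $F$ denote the numbers of vertices, edges and boundary faces of the graph then
\begin{equation*}
2E=4V,\qquad F=n^++n^-,\qquad V-E+F=2-2g \quad\Longrightarrow\quad V=2g-2+n^++n^-,
\end{equation*}
and a generic surface carries exactly one bounded pair of pants for each of its vertices.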
\begin{thm} \label{thm_recurrence_1} For all values of the boundary lengths the volumes satisfy the recursion \begin{eqnarray*} (2g-2+n)Z_{g,n^+,n^-}(L^+|L^-)&=& \sum_{i}\sum_{j} [L_i^+-L_j^-]_+~Z_{g,n^+,n^--1}([L_i^+-L_j^-]_+,L^+_{\{i\}^c}|L^-_{\{j\}^c})\\ &+&\frac{1}{2}\sum_{i\neq j} (L_i^+~+L_j^+)~Z_{g,n^+-1,n^-}(L_i^+ +L_j^+,L^+_{\{i,j\}^c}|L^-)\\ &+&\frac{1}{2}\sum_{i} \int_0^{L_i^+} Z_{g-1,n^++1,n^-}(x,L_i^+-x,L^+_{\{i\}^c}|L^-)~x(L_i^+-x)~ dx\\ &+&\frac{1}{2}\sum_{i}\sum_{\underset{I_1^{\pm} \sqcup I_2^{\pm}=I^{\pm}}{g_1+g_2=g}} x_1 x_2 Z_{g_1,n_1^+,n^-_1}(x_1,L^+_{I_1^+}|L^-_{I_1^-})~ Z_{g_2,n_2^+,n^-_2}(x_2,L^+_{I_2^+}|L^-_{I_2^-}) \end{eqnarray*} where we use the notation \begin{equation*} x_l = \sum_{i\in I^{-}_l}L_i^{-}-\sum_{i\in I^{+}_l}L_i^{+} \end{equation*} \end{thm} For each $L\in \R_+^E$ and $I\subset E$ we use the notation $L_I=(L_x)_{x\in I}\in \R_+^I$.\\ As a corollary of this theorem I obtain the following fact. \begin{cor} The volumes $Z_{g,n^+,n^-}(L^+|L^-)$ are continuous piecewise polynomials, homogeneous of degree $4g-3+n$, which are symmetric under permutations of both sets of variables. \end{cor} \begin{proof} To obtain the theorem I multiply the formula of lemma \ref{prop_mirzmacshane_1} by the measure $d\nu_{\Mo}(L)$ and integrate over the moduli space. By using the results of section \ref{coveringcurve} I obtain the following formula \begin{equation*} (2g-2+n)Z_{\Mo}= \sum_{\Go \in P(\Mo)} Z_{\Go}(1). \end{equation*} The covering of bounded pairs of pants splits into several coverings, which correspond to the four types of gluing with all the possible choices of boundaries. Now from the results of the last section we have \begin{equation*} Z_{\Go}(1)(L^+|L^-)=\frac{1}{\#\Aut(\Go)}\int_{L\in \Lambda_{\Go}(L)} \prod_{c} Z_{\Go(c)}(L^+(c),L'(c)|L^-(c)) \prod_\gamma l_\gamma d\sigma_{\Go}(L). \end{equation*} In each case there is a unique component in our graph which is a pair of pants glued along its negative boundaries. The volume associated to this component is constant, equal to one, because there is only one oriented graph on an oriented pair of pants; this term therefore disappears from the formula. To finish the proof it remains to compute the domain of integration in each case. All the multi-curves used are rigid, in the sense that $\Lambda_{\Go}(L)$ is reduced to a point, with one exception, when the genus is reduced by one, which corresponds to type $3$ of figure \ref{fig_gluings}. \end{proof} \subsection{Some properties of the recursion and the volumes :} \paragraph{Graphical expansion :} As in the case of the topological recursion, the formula of theorem \ref{thm_recurrence_1} admits a graphical expansion obtained by iterating the recursion. Corollary \ref{prop_bounded_pants_decomp} gives a canonical maximal acyclic multi-curve which decomposes the surface. In the case of four-valent vertices this is a pants decomposition, and the stable graphs are trivalent. It is possible to write the volume of a surface with labelled vertices as a sum over these graphs. \begin{prop} \label{graph_expension} The volumes satisfy \begin{equation*} (2g-2+n)! Z_{g,n^+,n^-}(L^+|L^-)=\sum_{\G^{\circ}} \frac{1}{\#\Aut(\G^{\circ})}\int_{l\in\Lambda_{\G^{\circ}}(L)} \prod_\gamma l_\gamma d\sigma_{\Go}(L) \end{equation*} where we sum over all the directed acyclic pants decompositions with an enumeration of the vertices and with $n^+$ positive and $n^-$ negative labelled legs. 
\end{prop} \paragraph{Times inversion and other symmetries :} A trivial property of the recursion is the symmetry under permutation of the variable which is due to the fact that we sum symmetric function over a set which is invariant under permutation of the variable. This is different from the symmetry of topological recursion which is deeper statement. \begin{prop} If the $Z'_g$ are function obtained by the recursion of theorem \ref{thm_recurrence_1} which satisfy $Z_0'(x_1,x_2|y_1)=Z_0'(x_2,x_1|y_1)$. Then the $Z'_g$ are symmetric under permutation of each set of the variables \end{prop} An second and more interesting properties is the times inversion which correspond to the operation \begin{equation*} Z(L^+|L^-)\longrightarrow Z(L^-|L^+). \end{equation*} The recursion of theorem \ref{thm_recurrence_1} is obtained by removing positive boundary but it's also possible to consider the recursion along negative boundaries. \begin{prop} If the coefficients satisfy $Z_0'(x_1,x_2|y_1)=Z_0'(y_1|x_1,x_2)$ then the function $Z_{g}(L^+|L^-)$ also satisfy this relation \end{prop} \begin{proof} This proposition is obtained by the graph expansion off proposition \ref{graph_expension}. In fact the set of directed acyclic graph have an involution given by reversing the orientation. \end{proof} \paragraph{String equation for the volumes :} The functions $Z_{g,n^+,n^-}(L^+|L^-)$ satisfy very simple relations when contract a variables which is an analogous of the string equation. We will investigate special case of this recursion later and derive a dilaton equation in these cases. \begin{prop} \label{prop_string_Z} The volumes satisfy the following relation when $L_1^+=0$ \begin{equation*} Z_{g,n^++1,n^-}(0,L^+|L^-)=\left(\sum_{i}L_i^-\right) Z_{g,n^+,n^-}(L^+|L^-) \end{equation*} \end{prop} \paragraph{A physical interpretation of the recursion :} Here we interpret the volumes and the recursion as an expansion of the partition function associated to a physical toy model. A surface $\Mo$ can be viewed as an interaction between strings in some sense. We consider the negative boundaries as the exits of our system and the positive boundaries are the entries. An element in $\Lambda_{\Mo}$ correspond to a positive weight on each string such that the total weight of the entries is equal to the total weight of the exits. These weights can be seen as the energy of the string and we have the conservation of the energy during the interaction. The function $Z_{\Mo}$ define a kernel operator. To a function \begin{equation*} f: \R_{+}^{I^-}\longrightarrow \R_+ \end{equation*} we can associate \begin{equation*} Z_{\Mo}(f)(L^+)= \int_{\R_{+}^{I^-}} f(L^-) dZ_{\Mo} \end{equation*} which is now a function on $\R_{+}^{I^+}$, we recall that $I^{\pm}$ is the decomposition of $\partial M$ into positives and negatives boundaries.\\ Then we can see a surface with one vertex as an elementary interaction between these string. A surface $\Mo$ with several vertices is then a complicated interaction between our entries and $Z_{\Mo}$ is the transition kernel associated to this interaction. 
It is not a probability kernel because it is not normalised.\\ \paragraph{Laplace transform of the recursion :} In this section we compute the Laplace transform of the last recursion \begin{equation*} \mathcal{Z}_{\small{g,n^+,n^-}}(\lambda^+|\lambda^-)= \int e^{-\lambda^+\cdot L^+ - \lambda^-\cdot L^-} dZ_{\small{g,n^+,n^-}}(L^+|L^-) \end{equation*} The fact that the support of the measure is contained in $\{\sum_i l_i^+=\sum_i l_i^-\}$ implies the following symmetry for this function \begin{equation*} \mathcal{Z}_{\small{g,n^+,n^-}}(\lambda^++t|\lambda^--t)= \mathcal{Z}_{\small{g,n^+,n^-}}(\lambda^+|\lambda^-). \end{equation*} \begin{thm} The Laplace transforms satisfy the recursion \begin{eqnarray*} \mathcal{Z}_{\small{g,n^+,n^-}}(\lambda^+|\lambda^-)&=& -\sum_{i,j} \frac{\partial_1^+\mathcal{Z}_{\small{g,n^+,n^--1}}(\lambda_i^+,\lambda_{\{i\}^c}^+|\lambda_{\{j\}^c}^-)}{\lambda_i^++\lambda_j^-}\\ &+& \sum_{i\neq j} \frac{1}{2}\frac{\partial_1^+\mathcal{Z}_{{\small{g,n^+-1,n^-}}}(\lambda_i^+,\lambda_{\{i,j\}^c}^+|\lambda^-)-\partial_1^+\mathcal{Z}_{{\small{g,n^+-1,n^-}}}(\lambda_j^+,\lambda_{\{i,j\}^c}^+|\lambda^-)}{\lambda_i^+-\lambda_j^+}\\ &+&\frac{1}{2}\sum_{i} \partial_1^+\partial_2^+\mathcal{Z}_{{\small{g-1,n^++1,n^-}}}(\lambda_i^+,\lambda_i^+,\lambda_{\{i\}^c}^+|\lambda^-)\\ &+&\frac{1}{2}\sum_{i} \sum_{\underset{I_1^{\pm} \sqcup I_2^{\pm}=I^{\pm}}{g_1+g_2=g}} \partial_1^+\mathcal{Z}_{{\small{g_1,n^+_1+1,n^-_1}}}(\lambda_i^+,\lambda_{I_1^+}^+|\lambda_{I_1^-}^-)\partial_1^+\mathcal{Z}_{{\small{g_2,n^+_2+1,n^-_2}}}(\lambda_i^+,\lambda_{I_2^+}^+|\lambda_{I_2^-}^-)\\ \end{eqnarray*} where we denote by $\partial^\pm_{i}$ the derivative with respect to $\lambda_i^\pm$. \end{thm} \begin{proof} To prove this theorem we only need to compute the Laplace transform of each term in the recursion. For instance, if $f$ is a continuous function and $\mathcal{L}f$ its Laplace transform, we have \begin{equation*} \int_{x_1>x_2} e^{-\lambda_1x_1-\lambda_2x_2} (x_1-x_2) f(x_1-x_2)dx_1dx_2= \frac{1}{\lambda_1+\lambda_2}\int_x e^{-\lambda_1x}f(x)xdx=-\frac{\partial_1\mathcal{L}f(\lambda_1)}{\lambda_1+\lambda_2} \end{equation*} and then \begin{equation*} \int e^{-\lambda^+\cdot L^+ - \lambda^-\cdot L^-} [L_i^+-L_j^-]_+~Z_{g}([L_i^+-L_j^-]_+,L^+_{\{i\}^c}|L^-_{\{j\}^c}) d\sigma =-\frac{\partial_1^+\mathcal{Z}_{g}(\lambda_i^+,\lambda_{\{i\}^c}^+|\lambda_{\{j\}^c}^-)}{\lambda_i^++\lambda_j^-} \end{equation*} For the other terms of the recursion I use the formulas \begin{eqnarray} \int_{x_1,x_2}e^{-\lambda_1x_1-\lambda_2x_2}(x_1+x_2) f(x_1+x_2) dx_1dx_2 &=& \frac{\partial \mathcal{L}f(\lambda_1)-\partial \mathcal{L}f(\lambda_2) }{\lambda_1-\lambda_2} \\ \int_{x_1,x_2}e^{-\lambda x_1-\lambda x_2}x_1x_2 f(x_1,x_2) dx_1dx_2 &=&\partial_1\partial_2 \mathcal{L}f(\lambda,\lambda). \end{eqnarray} Here the function $f$ is continuous and piecewise polynomial. \end{proof} 
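The elementary Laplace identities used in this proof can be checked symbolically. A minimal sketch with sympy, using the monomial $f(x)=x^2$ purely as a test function, verifies the first two:
\begin{verbatim}
import sympy as sp

l1, l2, s, u, t, x, x1, x2 = sp.symbols(
    'lambda1 lambda2 s u t x x1 x2', positive=True)
f = x**2                                               # test function, for the check only
Lf = sp.integrate(f * sp.exp(-s * x), (x, 0, sp.oo))   # its Laplace transform

# first identity: substitute x1 = u + t, x2 = t on the domain {x1 > x2 > 0}
lhs1 = sp.integrate(u * f.subs(x, u) * sp.exp(-l1 * (u + t) - l2 * t),
                    (u, 0, sp.oo), (t, 0, sp.oo))
rhs1 = -sp.diff(Lf, s).subs(s, l1) / (l1 + l2)
print(sp.simplify(lhs1 - rhs1))                        # 0

# second identity: (x1 + x2) f(x1 + x2)
lhs2 = sp.integrate(sp.expand((x1 + x2) * f.subs(x, x1 + x2)) *
                    sp.exp(-l1 * x1 - l2 * x2), (x1, 0, sp.oo), (x2, 0, sp.oo))
rhs2 = (sp.diff(Lf, s).subs(s, l1) - sp.diff(Lf, s).subs(s, l2)) / (l1 - l2)
print(sp.simplify(lhs2 - rhs2))                        # 0
\end{verbatim}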
\section{Case of surfaces with one negative boundary and application to counting ``dessins d'enfants''} In this section I will study in more detail the case of surfaces with only one negative boundary. As we will see, the recursion takes a much simpler form in this case and it is possible to relate it to a cut-and-join equation. \subsection{Surfaces with one negative boundary :} First of all, if $n^-=1$ there is an identification \begin{equation*} \Lambda_{n^+,1}\simeq \R_+^{n^+}, \end{equation*} and then it is possible to drop the negative boundaries and write $Z_{g,n^+,n^-}(L^+|L^-)$ as \begin{equation*} Z_{g,n^+,1}(L^+|L^-)= F_{g,n^+}(L^+), \end{equation*} where $F_{g,n^+}(L)$ is a function on $\R_+^{n^+}$.\\ As we said, the ``sub-category'' of surfaces with one negative boundary is stable under the recursion of theorem \ref{thm_recurrence_1}: extracting a bounded pair of pants from a surface with only one negative boundary creates only surfaces with one negative boundary. Moreover these gluings are necessarily non-separating and cannot be of type $2$. Then the recursion takes the following form. \begin{cor} The functions $F_{g,n}$ are homogeneous polynomials of degree $4g-2+n$ and satisfy the following recursion \begin{eqnarray*} (2g+n-1)F_{g,n}(L)&=& \frac{1}{2}\sum_{i\neq j} (L_i~+L_j)~F_{g,n-1}(L_i +L_j,L_{\{i,j\}^c})\\ &+&\frac{1}{2}\sum_{i} \int_0^{L_i} F_{g-1,n+1}(x,L_i-x,L_{\{i\}^c})~x(L_i-x)~ dx\\ \end{eqnarray*} \end{cor} From this corollary we can compute $F_{g,n}(L)$ starting from the case $F_{0,2}(L_1,L_2)=1$. \begin{proof} To prove this corollary we remark that when we apply theorem \ref{thm_bounded_pants} to a surface with only one negative boundary, the cuttings which are allowed are necessarily of type $1$ or $3$; in the other cases a component of the surface would contain only positive boundaries, which is impossible. Conversely, performing gluings of type $1$ and $3$ preserves the subcategory of surfaces with only one negative boundary, so we can deduce the formula. The form of the recursion preserves the space of polynomials, which concludes the proof by using corollary \ref{}. \end{proof} \paragraph{String and dilaton equations :} The functions $F_{g,n}$ satisfy two series of equations which are similar to the string and dilaton equations. The string equation is given by rewriting the formula of proposition \ref{prop_string_Z}. We obtain the formula \begin{equation*} F_{g,n+1}(0,L)=\left(\sum_i L_i\right)F_{g,n}(L). \end{equation*} This formula is obtained by computing the volume of the space of ribbon graphs with only one edge in the first boundary. By looking at the volumes of ribbon graphs with at most two edges it is possible to compute the first-order behaviour of $F_{g,n+1}$ at $L_1=0$. This gives the following relation, which is an analogue of the dilaton equation. \begin{prop} \label{prop_dilaton_F} The volumes $F_{g,n+1}$ satisfy the relation \begin{equation*} \frac{\partial F_{g,n+1}}{\partial L_1}(0,L)= (2g+n-1) F_{g,n}(L) \end{equation*} \end{prop} \paragraph{Case of the sphere :} A particular case is that of the sphere; rewriting the last formula we obtain the relation \begin{equation*} (n-1) F_{0,n}(L) = \sum_{i<j}(L_i+L_j)F_{0,n-1}(L_i+L_j,L_{\{i,j\}^c}) \end{equation*} But in the case of genus zero, surfaces with only one positive boundary are also preserved by the recursion along positive boundaries. If we denote $|x|=\sum_i x_i$, there is the following formula \begin{equation*} (n-1) Z_0(L_1^+|L^-)= \frac{1}{2}\sum_{I_1^-,I_2^-} |L_{I_1}^-| |L_{I_2}^-| Z_0(|L_{I_1}^-|~|~L_{I_1}^-)Z_0(|L_{I_2}^-|~|~L_{I_2}^-). \end{equation*} Then, using time inversion, it is possible to obtain another recurrence relation for the function $F_{0,n}(L)$ \begin{equation*} (n-1) F_{0,n}(L) = \frac{1}{2}\sum_{I_1,I_2}|L_{I_1}| |L_{I_2}|F_{0,n_1+1}(L_{I_1})F_{0,n_2+1}(L_{I_2}). \end{equation*}
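The recursion of the corollary above is easy to implement symbolically. The following minimal sketch (sympy, exact arithmetic) computes the polynomials $F_{g,n}$ from the single initial value $F_{0,2}=1$; for instance it returns $F_{0,3}=L_1+L_2+L_3$.
\begin{verbatim}
import sympy as sp
from functools import lru_cache

x = sp.symbols('x', positive=True)

def Lvars(n):
    return list(sp.symbols(f'L1:{n + 1}', positive=True))

@lru_cache(maxsize=None)
def F(g, n):
    """Volume polynomial F_{g,n}(L_1,...,L_n) computed from the recursion."""
    if g < 0 or n < 1 or (g, n) == (0, 1):
        return sp.Integer(0)
    if (g, n) == (0, 2):
        return sp.Integer(1)
    Ls = Lvars(n)
    total = sp.Integer(0)
    # first term: the pair of pants absorbs the two positive boundaries i and j
    Fm = F(g, n - 1)
    for i in range(n):
        for j in range(n):
            if i != j:
                rest = [Ls[k] for k in range(n) if k not in (i, j)]
                sub = dict(zip(Lvars(n - 1), [Ls[i] + Ls[j]] + rest))
                total += sp.Rational(1, 2) * (Ls[i] + Ls[j]) * Fm.xreplace(sub)
    # second term: non-separating gluing, the genus drops by one
    Fp = F(g - 1, n + 1)
    for i in range(n):
        rest = [Ls[k] for k in range(n) if k != i]
        sub = dict(zip(Lvars(n + 1), [x, Ls[i] - x] + rest))
        total += sp.Rational(1, 2) * sp.integrate(
            Fp.xreplace(sub) * x * (Ls[i] - x), (x, 0, Ls[i]))
    return sp.expand(total / (2 * g + n - 1))

print(F(0, 3))   # L1 + L2 + L3
print(F(1, 1))   # homogeneous of degree 4g - 2 + n = 3
\end{verbatim}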
These two recurrence relations are in fact directly related to planted numbered trees, and correspond to removing a pair of leaves or the root. \paragraph{Recursion for the coefficients: string, dilaton and cut-and-join relations :} In this section I investigate the recursion obtained for the coefficients of the polynomials $F_{g,n}(L)$. I can write \begin{equation*} F_{g,n}(L)=\sum_{\alpha} L^{\alpha}c(\alpha) \end{equation*} with the conditions \begin{equation*} |\alpha|=4g-2+n~~~~~~\#\alpha=n \end{equation*} so it is possible to drop the indices $(g,n)$ in the notation. By expanding the last formula I derive the following relation for these coefficients. \begin{cor} The coefficients $(c(\alpha))$ satisfy the following recursion \begin{eqnarray*} (2g-1+n)c(\alpha)&=& \frac{1}{2}\sum_{i\neq j}\frac{(\alpha_i+\alpha_j)!}{\alpha_i!\alpha_j!} c(\alpha_i+\alpha_j-1,\alpha_{\{i,j\}^c})\\ &+&\frac{1}{2}\sum_{i,x_1+x_2=\alpha_i-3} \frac{(x_1+1)!(x_2+1)!}{\alpha_i!} c(x_1,x_2,\alpha_{\{i\}^c}) \end{eqnarray*} \end{cor} The coefficients $c(\alpha)$ satisfy an important symmetry: they are invariant under permutations. Then, as for the intersection numbers of the tautological classes over the moduli space, it is possible to see $c$ as a function on the set of generalised partitions; we write $c(\mu)$ where $\mu=(\mu(0),\mu(1),\dots)$. I can then consider the following formal series with infinitely many variables \begin{equation*} \phi(s,t)=\sum_\mu \frac{s^{\frac{|\mu|+\#\mu}{2}}\prod_i (i!)^{\mu(i)}t_i^{\mu(i)}}{\prod_i \mu(i)!} c(\mu) \end{equation*} and I obtain the following result. \begin{cor} \label{cutandjoin} The series $\phi(s,t)$ satisfies the following equation \begin{equation*} \frac{\partial \phi}{\partial s} = \frac{1}{2}\sum_{i,j}(i+j)t_it_j \frac{\partial \phi}{\partial t_{i+j-1}} + \frac{1}{2}\sum_{i,j}(i+1)(j+1)t_{i+j-3} \frac{\partial^2 \phi}{\partial t_{i}\partial t_{j}} + \frac{t_0^2}{2} \end{equation*} \end{cor} In fact the variables $s$ and $t$ are not independent: we have the relation \begin{equation*} \phi(s,t)=\phi(1,t(s))=\psi(t(s))~~~~~\text{with}~~~~~t_i(s)=s^{\frac{i+1}{2}}t_i \end{equation*} Then, by taking the derivative and evaluating at $s=1$, we have \begin{equation*} \frac{\partial \phi}{\partial s}= \sum_i \frac{i+1}{2} t_i \frac{\partial \psi}{\partial t_i} \end{equation*} and we obtain \begin{equation*} \sum_i (i+1) t_i \frac{\partial \psi}{\partial t_i} = \sum_{i,j}(i+j)t_it_j \frac{\partial \psi}{\partial t_{i+j-1}} + \sum_{i,j}(i+1)(j+1)t_{i+j-3} \frac{\partial^2 \psi}{\partial t_{i}\partial t_{j}} + t_0^2 \end{equation*} \subsection{Dual problem and Hurwitz numbers :} \begin{lem} \label{hurwitzgraphorient} The oriented generic ribbon graphs of type $(g,n^+,n^-,\mu)$, where $\mu$ is the partition encoding the degrees of the vertices, are in bijection with the coverings of the sphere ramified over three points $(0,1,\infty)$ such that \begin{itemize} \item there are $\mu(i)$ ramifications of order $\frac{i-2}{2}$ over $1$, \item there are $n^-$ ramifications over $\infty$ and $n^+$ over $0$, which are labelled. \end{itemize} \end{lem} These coverings are called dessins d'enfants and have been studied in many places. \begin{proof} Let $R^\circ$ be an oriented ribbon graph made by gluing rectangles $R_e$, where $e\in XR$ is oriented positively. 
From \cite{} we have a canonical one-form $\omega_R$ given locally by $dz$ on each $R_e$. If we fix a vertex $v$ of the graph then the period map \begin{equation*} z\longrightarrow \int_v^z \omega_R \end{equation*} is well defined on the universal cover of $R$, and the image of the fundamental group is contained in $\Z$, so we have a well-defined map \begin{equation*} R\longrightarrow \C/\Z \end{equation*} whose target is a cylinder. The graph is then the pre-image of the circle $\R/\Z$ under this map. The vertices of the graph are mapped to $0$. The map is ramified at a vertex $v$ of degree $2i>2$ with ramification of order $i-1$, because an interval of the form $[0,\epsilon[$ has $i$ pre-images that contain $v$; these correspond to the positive edges $e$ with $[e]_0=v$. When we pull back a differential $w$ with a simple pole at zero under $\phi =z^k$ we have $\Res_0 \phi^*w =k\Res_0 w$. We can see $\C/\Z$ as a sphere with two points removed at $\pm i\infty$; under our assumption the positive boundaries are mapped to the pole at $i\infty$ (resp. the negative boundaries to $-i\infty$). The residue of $\omega_R$ corresponds to the length of the boundary, which is also equal to the number of edges that the boundary contains. On the other hand, for any covering ramified over $0$ and $\pm i\infty$ it is possible to obtain a ribbon graph on the surface by looking at the pre-image of the circle; the orientation on the circle induces an orientation on the graph. \end{proof} The oriented generic ribbon graphs with one negative boundary therefore correspond to dessins d'enfants with a maximal ramification over $-i\infty$ and $2g-1+n$ simple ramifications over $0$. Let $ \tilde{h}_{\alpha,(1^{2g-1+n}),(4g-2+2n)}$ be the corresponding Hurwitz number. I assume that the ramifications over the first point are labelled. The last lemma and an explicit computation of the volumes in this case allow us to write the following formula. \begin{lem} The volumes $F_{g,n}$ are polynomials which are naturally related to Hurwitz numbers in the following way: \begin{equation*} F_{g,n}(x_1,\dots,x_n) = \sum_{\alpha} \prod_i \frac{x_i^{\alpha(i)-1}}{(\alpha(i)-1)!} \tilde{h}_{\alpha,(1^{2g-1+n}),(4g-2+2n)} \end{equation*} \end{lem} \begin{proof} To prove this result I compute the volume $F_R(L)$ associated to an oriented ribbon graph with a single negative boundary. This is an integral over some affine subspace in $\Met(R)$. The relations on $\Met(R)$ are given by $L_i^{+}(m)=x_i$ for each $i=1,\dots,n^+$. As the graph is orientable the dual is bipartite, so an edge of the graph appears in exactly one of these equations, with a weight equal to $1$. In other words there is an identification \begin{equation*} \Met(R^{\circ},x) =\prod_{i} \Big\{(m_e)_{[e]_2=\beta^+_i}~\Big|~\sum_e m_e=x_{i}\Big\} \end{equation*} Then the volumes are given by volumes of simplices. Each of the factors is equipped with the affine measure, and \begin{equation*} \int_{\sum_1^n y_j=x}d\sigma= \frac{x^{n-1}}{(n-1)!}. \end{equation*} Then the volume of $ \Met(R^{\circ},x)$ is \begin{equation*} \prod_i \frac{x_i^{\alpha_i-1}}{(\alpha_i-1)!} \end{equation*} where $\alpha_i$ is the number of edges in the boundary $\beta_i^+$. By summing the contributions of all the graphs \begin{equation*} \sum_R \frac{F_R}{\#\Aut(R)} \end{equation*} we see that the coefficient in front of $ \prod_i \frac{x_i^{\alpha_i-1}}{(\alpha_i-1)!}$ is the number of four-valent ribbon graphs with $\alpha_i$ edges on the $i$-th positive boundary, counted with automorphisms. By using lemma \ref{hurwitzgraphorient} we can conclude the proof. 
\end{proof} This lemma, together with corollary \ref{cutandjoin}, gives a recurrence relation for the generating series of the unlabeled Hurwitz numbers. \printbibliography \end{document}
\begin{document} \title{Optimum Quantum Error Recovery using Semidefinite Programming} \author{Andrew S. Fletcher} \affiliation{Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 77 Massachusetts Ave. Cambridge, MA 02139} \affiliation{MIT Lincoln Laboratory, 244 Wood St. Lexington, MA 02420} \author{Peter W. Shor} \affiliation{Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Ave. Cambridge, MA 02139} \author{Moe Z. Win} \affiliation{Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 77 Massachusetts Ave. Cambridge, MA 02139} \date{June 5, 2006} \begin{abstract} Quantum error correction (QEC) is an essential element of physical quantum information processing systems. Most QEC efforts focus on extending classical error correction schemes to the quantum regime. The input to a noisy system is embedded in a coded subspace, and error recovery is performed via an operation designed to perfectly correct for a set of errors, presumably a large subset of the physical noise process. In this paper, we examine the choice of recovery operation. Rather than seeking perfect correction on a subset of errors, we seek a recovery operation to maximize the entanglement fidelity for a given input state and noise model. In this way, the recovery operation is optimum for the given encoding and noise process. This optimization is shown to be calculable via a semidefinite program (SDP), a well-established form of convex optimization with efficient algorithms for its solution. The error recovery operation may also be interpreted as a combining operation following a quantum spreading channel, thus providing a quantum analogy to the classical diversity combining operation. \end{abstract} \pacs{03.67.Pp, 02.60.Pn} \maketitle \section{Introduction} Any implementation of quantum computing or communications requires a strategy for error mitigation. Indeed, the development of quantum error correction (QEC) schemes was an important early step in moving quantum computing from an interesting theoretical idea to an exciting field with potential for ground-breaking technological implementations\cite{Sho:95}. The importance of efficient and optimum error mitigation only increases as the field advances. The earliest efforts in QEC used encoding techniques modified from classical error correction schemes\cite{Sho:95,Ste:96,CalSho:96,CalRaiShoSlo:97,Got:96}. Further analysis \cite{KniLaf:97,BenDivSmoWoo:96} laid the foundation for QEC theory, noting that the important metric is how faithful the statistics of the corrected state remain to the ideal behavior. While that observation suggests quantum error mitigation is thus an optimization problem, most of the subsequent work in the field has appropriately focused on perfect recovery from a set of errors. This emphasis has allowed many techniques to be borrowed from classical error correction and enabled important feasibility studies in quantum computing. It is not, however, the only way to consider controlling for quantum errors\cite{LeuNieChuYam:97}. Recently, some authors \cite{ReiWer:05,YamHarTsu:05} have returned to examining quantum error mitigation as an optimization problem. The essential properties of a quantum state are the statistics of any observable outcome; these are completely encapsulated in the density operator $\rho$ of the state. Noise is introduced by the operation $\mathcal{E}$ which can be thought of as a noisy quantum communications channel. 
Thus, the goal of any error correction scheme is to design a recovery operation $\mathcal{R}$ such that the recovered state is as faithful a representation as possible of the input, judged by how well the statistics of observables are preserved. The optimum recovery minimizes the `distance' between an input density $\rho$ and the output $\mathcal{R}(\mathcal{E}(\rho))$. This operation may differ from the more traditional QEC recovery operation; such differences illustrate further the contrast between quantum and classical error correction. To distinguish this approach from QEC, we use the term \emph{quantum error recovery} (QER). It should be emphasized that the optimum QER recovery operation is dependent on a given input density, encoding operation, and noise model. The paper is organized as follows. In Sec. \ref{sec:problem_statement}, we define the parameters for optimum QER. Section \ref{sec:CPTP} derives a representation of a quantum channel based on a single positive operator. In section \ref{sec:QEC_SDP}, the optimum recovery operation is cast as a semidefinite program. Section \ref{sec:div_combining} interprets the recovery operation as an optimum combining problem, and illustrates the computational benefit of such an interpretation. In Sec. \ref{sec:examples}, QER operations are derived for the amplitude damping channel using codes encoding one qubit into four and five qubits. \section{Optimum QER}\label{sec:problem_statement} Most QEC procedures are designed on the principle of `perfect' correction of arbitrary single qubit errors. Such a design postulates that single qubit errors are the dominant terms in the noise process; thus a scheme that corrects arbitrary single qubit errors and ignores higher order terms will sufficiently mitigate the noise. Pursuit of this approach has led to important results on the feasibility of quantum error correction, and indeed is a reasonable model for noise processes accurately described by the lower order terms. However, one may reasonably ask how well this generic approach succeeds in specific cases. With every quantum code, in the current paradigm, there is an associated recovery operation designed to perfectly correct the dominant errors. This `traditional' recovery (referred hereafter as the QEC recovery operation) applies a syndrome measurement to determine which error occurred, and a correction operation dependent on the observed syndrome. For a given code and error process, the QEC recovery operation may not provide the most effective safeguard from error. Depending upon the form of the error process, an alternate recovery operation may be designed that better preserves the input state, based on some measure of statistical `closeness' between the input density $\rho$ and the output density $\mathcal{R}(\mathcal{E}(\rho))$. Commonly used metrics\footnote{While not technically a metric, the fidelity is a useful measure of performance for quantum states.} for quantum information arise from the \emph{fidelity}, defined to be \begin{equation}\label{eq:fidelity} F(\rho,\sigma)=\textrm{tr}{\sqrt{\rho^{1/2}\sigma\rho^{1/2}}} \end{equation} where $\rho$ and $\sigma$ are density operators. The fidelity $F$ takes a value between 0 and 1, where 1 indicates that the states are identical. 
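For concreteness, the fidelity of (\ref{eq:fidelity}) takes only a few lines to evaluate numerically; the following minimal sketch is illustrative only, and the two states are chosen just to exercise it.
\begin{verbatim}
import numpy as np

def psd_sqrt(a):
    """Square root of a positive semidefinite matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = tr sqrt( sqrt(rho) sigma sqrt(rho) )."""
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))))

rho = np.array([[0.75, 0.0], [0.0, 0.25]])    # a mixed single-qubit state
sigma = np.array([[0.5, 0.5], [0.5, 0.5]])    # the pure state |+><+|
print(fidelity(rho, rho))     # 1.0 for identical states
print(fidelity(rho, sigma))   # sqrt(<+|rho|+>) = sqrt(0.5) ~ 0.707
\end{verbatim}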
Using $F_o$ to represent any fidelity-based measure of channel performance, the error recovery optimization problem is to find \begin{equation}\label{eq:generic_fidelity_max} \mathcal{R}^\star=\arg\max_\mathcal{R} F_o(\mathcal{R}\circ\mathcal{E}), \end{equation} where the maximization is over all valid quantum operations. We vaguely described the measure of channel performance, $F_o$, as fidelity-based. In fact, the specific choice of $F_o$ influences the feasibility of the optimization, as well as the interpretation of the result. There are three main choices of fidelity-based measures of performance, the \emph{minimum fidelity}, the \emph{ensemble average fidelity}, and the \emph{entanglement fidelity}\cite{NieChu:B00}. The minimum fidelity has the advantage of bounding the performance by seeking the worst case input state. The metric $F_o$ is the minimization over all inputs of the input-output fidelity. The optimization becomes two-fold:\footnote{For simplicity of notation, the minimization is shown over all densities $\rho$. In fact, it is sufficient to minimize over pure state inputs \cite{NieChu:B00}.} \begin{equation}\label{eq:min_fidelity} \mathcal{R}^\star=\arg\max_\mathcal{R} \min_\rho F(\rho,\mathcal{R}(\mathcal{E}(\rho))), \end{equation} where $F$ is the fidelity given in (\ref{eq:fidelity}). By virtue of the minimization over $\rho$, one need not assume anything about the input state. This was the metric of choice in \cite{KniLaf:97} first establishing the theory of QEC. The disadvantage arises through the complexity of the metric; indeed the optimization problem is now over two sets, and the metric $F_o$ is not linear with respect to $\mathcal{R}$. Efficient routines that have been developed for solving the optimization problem of (\ref{eq:min_fidelity}) are sub-optimum\cite{YamHarTsu:05}. We obtain a more tractable optimization problem if we are willing to assume some form for the input distribution $\rho$, particularly if the metric $F_o$ can be reasonably defined as linear in the recovery operation $\mathcal{R}$. While assuming a form for $\rho$ makes the solution less general, it illuminates important characteristics of quantum error mitigation by enabling construction of an optimum recovery operation. Given a value of $\rho$, we may use either the ensemble average fidelity or the entanglement fidelity. Ensemble average fidelity models the input as being in a state $\rho_i$ with probability $p_i$. We define an arbitrary quantum channel $\mathcal{B}$ on the Hilbert space $\mathcal{H}$ as $\mathcal{B}:\mathcal{L}(\mathcal{H})\mapsto\mathcal{L}(\mathcal{H})$ where $\mathcal{L}(\mathcal{H})$ indicates the set of bounded linear operators on $\mathcal{H}$. The measure by which $\mathcal{B}$ preserves the input is then the average squared fidelity: \begin{equation} \bar{F}(p_i,\rho_i,\mathcal{B})=\sum_ip_iF(\rho_i,\mathcal{B}(\rho_i))^2. \end{equation} $\bar{F}$ is linear in $\mathcal{B}$ when each $\rho_i$ describes a pure state; for linearity we must assume more than just the density of the input. Entanglement fidelity\cite{Sch:96} arises from the mathematical concept of mixed state purification. Any mixed quantum state can be represented as a subsystem of a pure state in a larger Hilbert space. The subsystem is mixed due to the entangled nature of the pure state. Consider a mixed state $\rho\in\mathcal{L}(\mathcal{H})$. 
By defining a reference space $\mathcal{A}$, we may denote $\rho$ as a subsystem of a pure state: \begin{equation} \rho=\textrm{tr}_\mathcal{A}\ket{AH}\bra{AH} \end{equation} where $\ket{AH}$ is a pure state in the space $\mathcal{A}\otimes\mathcal{H}$. The entanglement fidelity then measures how faithfully $\mathcal{B}$ maintains the purification (or equivalently, how well it preserves the entanglement). It is given by\footnote{We use $F$ to signify both the fidelity and the entanglement fidelity. The distinction should be obvious from context, as the arguments for the fidelity are two density operators, whereas for the entanglement fidelity, the arguments are a density operator and a channel mapping.} \begin{equation}\label{eq:ent_fid} F(\rho,\mathcal{B})=\bra{AH}\mathcal{I}_\mathcal{A}\otimes\mathcal{B}(\ket{AH}\bra{AH})\ket{AH}, \end{equation} the squared fidelity of the input $\ket{AH}$ with the output of the channel $\mathcal{I}_\mathcal{A}\otimes\mathcal{B}$. With entanglement fidelity as $F_o$ in (\ref{eq:generic_fidelity_max}), we define the optimum recovery operation for the error process $\mathcal{E}$ and input distribution $\rho$ as \begin{equation}\label{eq:ent_fidelity_max} \mathcal{R}^\star_\rho=\arg\max_\mathcal{R} F(\rho,\mathcal{R}\circ\mathcal{E}). \end{equation} \section{CPTP Maps and Positive Operators}\label{sec:CPTP} A valid quantum operation must be completely positive (CP) and trace-preserving (TP)\cite{Kra:B83}. This requirement follows directly from the postulates of quantum mechanics wherein the evolution of a closed quantum system is unitary. Let $\mathcal{E}$ be a CPTP map from $\mathcal{L}(\mathcal{H})\mapsto\mathcal{L}(\mathcal{K})$. The most familiar representation of the CPTP map is the \emph{Kraus} (or operator sum) form, where the mapping is specified by a set of operators $\{E_k\}$ known as the operator elements\cite{Kra:B83}. The channel output is given by \begin{equation}\label{eq:Kraus} \mathcal{E}(\rho)=\sum_k E_k \rho E_k^\dagger, \end{equation} and the CPTP constraint is met when \begin{equation} \sum_k E_k^\dagger E_k = I, \end{equation} where $I$ is the identity operation on $\mathcal{L}(\mathcal{H})$. While a properly constrained set of operators fully specify a quantum channel, the Kraus form is an inconvenient one for optimization. The most obvious inconvenience is the many-to-one correspondence between sets of operator elements and channel mappings. For this reason, we will utilize the representation of a channel mapping by a positive semidefinite operator on $\mathcal{L}(\mathcal{K}\otimes\mathcal{H})$, which correspondence is one-to-one \cite{DarLop:01}. We will refer to the operator description of CPTP maps as the \emph{superoperator} form\cite{Cav:99,Dep:67}. The superoperator form may be derived by recognizing that the space of bounded linear operators forms a Hilbert space. It is convenient to have a general method to convert the operator to a ket notation. Let $C$ be a bounded linear operator from $\mathcal{H}_2$ to $\mathcal{H}_1$: $C\in\mathcal{L}(\mathcal{H}_2,\mathcal{H}_1)$. We define a ket in the Hilbert space $\mathcal{H}_1\otimes\mathcal{H}_2$ associated with $C$ as\footnote{The notation $\kett{\cdot}$ is used to emphasize that these kets represents operators, not quantum states.} \begin{equation} \kett{C}=\sum_{ij} c_{ij}\ketsub{i}{1}\ketsub{j}{2}. 
\end{equation} where $\{\ketsub{i}{1}\}$ and $\{\ketsub{j}{2}\}$ are orthonormal bases for $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, and $c_{ij}=\brasub{i}{1}C\ketsub{j}{2}$ is the matrix element of $C$ on these bases. Two useful relations follow directly from this definition. The first one, \begin{equation} A\otimes B\kett{C} = \kett{ACB^T}\label{eq:kett_triple_product}, \end{equation} applies whenever the dimensions of $A$, $B$, and $C$ indicate that $ACB^T$ is a valid operator.\footnote{Note the symbol $^T$ indicates the transpose with respect to the specified basis \emph{without conjugation}.} The second relation applies for $C_1, C_2\in\mathcal{L}(\mathcal{H}_2,\mathcal{H}_1)$: \begin{equation} \textrm{tr}_{\mathcal{H}_2}[\kett{C_1}\braa{C_2}]=C_1C_2^\dagger \in \mathcal{L}(\mathcal{H}_1).\label{eq:kett_trace} \end{equation} From these relations, we see that the channel mapping $\mathcal{E}:\mathcal{L}(\mathcal{H})\mapsto\mathcal{L}(\mathcal{K})$ can be given by \begin{eqnarray} \mathcal{E}(\rho)&=&\sum_k E_k \rho E_k^\dagger\nonumber\\ \nonumber&=& \sum_k \textrm{tr}_\mathcal{H} \left [\kett{E_k\rho}\braa{E_k}\right ]\\ \nonumber&=& \sum_k \textrm{tr}_\mathcal{H} \left [I\otimes\rho^T\kett{E_k}\braa{E_k}\right ]\\ &=& \textrm{tr}_\mathcal{H} \left [I\otimes\rho^T X_\mathcal{E}\right ],\label{eq:Pos_Op_channel} \end{eqnarray} where $X_\mathcal{E}\equiv\sum_k\kett{E_k}\braa{E_k}$. The trace-preserving property that \begin{equation} \textrm{tr}_\mathcal{K}\mathcal{E}(\rho)=1=\textrm{tr}_\mathcal{H}[\rho^T\textrm{tr}_\mathcal{K}[X_\mathcal{E}]] \end{equation} for all density operators $\rho\in\mathcal{L}(\mathcal{H})$ can be stated as \begin{equation} \textrm{tr}_\mathcal{K} X_\mathcal{E}=I\in\mathcal{L}(\mathcal{H}). \end{equation} In the superoperator form, the entire mapping $\mathcal{E}$ is specified by the positive operator $X_\mathcal{E}$. \section{Optimum Recovery via Semidefinite Programming}\label{sec:QEC_SDP} The problem given by (\ref{eq:ent_fidelity_max}) is a convex optimization problem and we may approach it with sophisticated tools. Particularly powerful is the semidefinite program (SDP) \cite{VanBoy:96}, where the objective function is linear in an input constrained to a semidefinite cone. Indeed, the power of the SDP was a primary motivation in choosing to maximize the entanglement fidelity, which is linear in the quantum operation $\mathcal{R}$. The definition of entanglement fidelity given in (\ref{eq:ent_fid}) is intuitively useful, but awkward for calculations. An easier form arises when operator elements $\{B_i\}$ for $\mathcal{B}$ are given. The entanglement fidelity is then \begin{equation}\label{eq:ent_fid_kraus} F(\rho,\mathcal{B})=\sum_i|\textrm{tr}(\rho B_i)|^2. \end{equation} From (\ref{eq:ent_fid_kraus}), we may derive a calculation rule for the entanglement fidelity when the channel $\mathcal{B}$ is expressed as in the superoperator form. Using (\ref{eq:kett_trace}), we see that $\textrm{tr}{B_i \rho=\textrm{tr}{\kett{B_i}\braa{\rho}}=\braakett{\rho}{B_i}}$. Inserting this into (\ref{eq:ent_fid_kraus}), we obtain the entanglement fidelity in terms of $X_\mathcal{B}$: \begin{eqnarray} \nonumber F(\rho,\mathcal{B})&=&\sum_i\braakett{\rho}{B_i}\braakett{B_i}{\rho}\\ &=&\braa{\rho}X_\mathcal{B}\kett{\rho}. \end{eqnarray} Armed with this expression for the entanglement fidelity, we may now express (\ref{eq:ent_fidelity_max}) in a form readily seen to be a semidefinite program. 
To do this, we must consider the form of the composite channel $\mathcal{R}\circ\mathcal{E}:\mathcal{L}(\mathcal{H})\mapsto\mathcal{L}(\mathcal{H})$ expressed as a positive operator on $\mathcal{H}\otimes\mathcal{H}$. If the operator elements for each channel are $\{R_i\}$ and $\{E_j\}$, then the operator $X_{\mathcal{R}\mathcal{E}}$ is given by \begin{equation} X_{\mathcal{R}\mathcal{E}}=\sum_{ij}\kett{R_iE_j}\braa{R_iE_j}. \end{equation} Applying (\ref{eq:kett_triple_product}), this becomes \begin{eqnarray} \nonumber X_{\mathcal{R}\mathcal{E}}&=&\sum_{ij}I\otimes E_j^T\kett{R_i}\braa{R_i}I\otimes E_j^*\\ &=& \sum_j (I\otimes E_j^T)X_\mathcal{R} (I\otimes E_j^*), \end{eqnarray} where the $^*$ represents complex conjugation, without transposition. The entanglement fidelity is then \begin{eqnarray} \nonumber F(\rho,\mathcal{R}\circ\mathcal{E})&=&\sum_j \braa{\rho}(I\otimes E_j^T)X_\mathcal{R} (I\otimes E_j^*)\kett{\rho}\\ &=& \textrm{tr}{X_\mathcal{R} C_{\rho,\mathcal{E}}} \end{eqnarray} where \begin{eqnarray}\nonumber C_{\rho,\mathcal{E}}&=&\sum_j I\otimes E_j^*\kett{\rho}\braa{\rho}I\otimes E_j^T\\ &=& \sum_j \kett{\rho E_j^\dagger}\braa{\rho E_j^\dagger}. \end{eqnarray} We may now express the optimization problem (\ref{eq:ent_fidelity_max}) in the simple form \begin{eqnarray}\nonumber X_{\mathcal{R}_{\rho}}^\star=\arg\max_X \textrm{tr} ({X C_{\rho,\mathcal{E}}})\\ \textrm{such that } X \geq 0, \hspace{10 pt} \textrm{tr}_\mathcal{H}{X}=I.\label{eq:ent_fid_max} \end{eqnarray} This form illustrates plainly the linearity of the objective function and the semidefinite and equality structure of the constraints. Indeed, this is the exact form of the optimization problem in \cite{AudDem:02} which first pointed out the value of semidefinite programs (SDP) for optimizing quantum channels. The value of an SDP for optimization is two-fold. First, an SDP is a sub-class of convex optimization, and thus a local optimum is guaranteed to be a global optimum. Second, there are efficient and well-understood algorithms for computing the optimum of a semidefinite program. These algorithms are sufficiently mature to be widely available. Thus, by expressing the optimum recovery channel as an SDP, we have explicit means to compute the solution numerically. \section{Quantum Diversity Combining}\label{sec:div_combining} In the preceding analysis, the method of encoding has been implied by the choice of $\rho$. Indeed, in most treatments of QEC the input density is restricted to a subspace called the quantum error correcting code (QECC). If $P_C$ is a projection operator onto the code subspace, then $\rho=P_C\rho P_C$ implies that the state $\rho$ is within the code subspace. The channel is typically defined such that the input and output spaces are identical (\emph{i.e.} $\mathcal{H}=\mathcal{K}$) and the noise process generally perturbs the system from the code subspace. While this representation is a perfectly legitimate model for the error process, and convenient when viewing QEC in a mode comparable to classical error correction, the dimensionality is unnecessarily high for the optimization routine. Recall that $X_\mathcal{R}$ is a Hermitian operator on the space $\mathcal{K}\otimes\mathcal{H}$. The optimization thus has $d_\mathcal{H}^2 d_\mathcal{K}^2$ real parameters. Even for a $[[5,1,3]]$\footnote{The notation $[[n,k,d]]$ refers to a quantum code encoding $k$ qubits of information into a $n$ qubit system. 
The parameter $d$ refers to the \emph{distance} of the code\cite{NieChu:B00}.} code, the smallest code correcting arbitrary single qubit errors\cite{BenDivSmoWoo:96,LafMiqPazZur:96}, this optimization has $2^{20}$ optimization variables. The high dimensionality can be alleviated somewhat by embedding the encoding into the noise process, and redefining $\mathcal{E}$ as a \emph{quantum spreading channel} where $d_\mathcal{H}<d_\mathcal{K}$.
\begin{figure}
\caption{Transform from standard quantum error correction to quantum spreading channel.}
\label{fig:spreading_channel}
\end{figure}
The transform to a quantum spreading channel is illustrated in Fig. \ref{fig:spreading_channel}. Consider the noise process $\mathcal{E}:\mathcal{L}(\mathcal{K})\mapsto\mathcal{L}(\mathcal{K})$ with operator elements $\{E_i\}$, input density $\rho\in\mathcal{L}(\mathcal{K})$, and code projector
\begin{equation}
P_C=\sum_{n=1}^{d_\mathcal{H}}\ketsub{n}{L}\brasub{n}{L}
\end{equation}
where $\ketsub{n}{L}\in\mathcal{K}$ are the logical states of the code. Since the input is in the code space, $\rho$ is preserved by the code projector: $\rho=P_C\rho P_C$. We can reduce the dimensionality of the optimization by transforming this problem. Let $\mathcal{H}$ be a $d_\mathcal{H}$-dimensional Hilbert space with orthonormal basis $\{\ketsub{n}{\mathcal{H}}\}$; $\mathcal{H}$ can be considered the space of the information source. The encoding process is accomplished by the operator $U_C=\sum_{n=1}^{d_\mathcal{H}}\ketsub{n}{L}\brasub{n}{\mathcal{H}}$ that maps the basis states of $\mathcal{H}$ to the logical states in $\mathcal{K}$. $U_C$ is an isometry: $U_C^\dagger U_C=I$. Note that
\begin{eqnarray}
\nonumber U_CU_C^\dagger&=&\sum_{n=1}^{d_\mathcal{H}}\ketsub{n}{L}\brasub{n}{L}\\
&=&P_C.
\end{eqnarray}
If we redefine $\rho'=U_C^\dagger\rho U_C\in\mathcal{L}(\mathcal{H})$ and the operator elements $E_i'=E_i U_C$, then we see the processes $\mathcal{E}(\rho)$ and $\mathcal{E}'(\rho')$ are identical:
\begin{eqnarray}
\nonumber \mathcal{E}'(\rho')&=&\sum_i E_i U_C U_C^\dagger \rho U_C U_C^\dagger E_i^\dagger\\
\nonumber&=&\sum_i E_i P_C \rho P_C E_i^\dagger\\
&=&\mathcal{E}(\rho).
\end{eqnarray}
By enacting such a transformation, the optimization dimension is reduced from $d_\mathcal{K}^4$ to $d_\mathcal{K}^2d_\mathcal{H}^2$. For the $[[5,1,3]]$ code, the reduction is from $2^{20}$ to $2^{12}$.

The above transformation illustrates an alternative interpretation of recovering an encoded quantum state after an error process. We may instead consider an unencoded state input into a \emph{quantum spreading channel}, a channel in which the output dimension is greater than the input dimension (\emph{i.e.} $\dim{\mathcal{K}}>\dim{\mathcal{H}}$). The recovery operation is an attempt to combine the spread output back into the input space, presumably with the intent to minimize information loss. The recovery operation is then the quantum analog to the classical communications concept of diversity combining.

Classical diversity combining describes a broad class of problems in communications and radar systems. In its most general form, we may consider any class of transmission problems for which the receiver observes multiple transmission channels. These channels could arise due to multi-path scattering, frequency diversity (high bandwidth transmissions where the channel response varies with frequency), spatial diversity (free-space propagation to multiple physically separated antennas), time diversity, or some combination of the four.
Diversity combining is a catch-all phrase for the process of exploiting the multiple channel outputs to improve the quality of transmission (\emph{i.e.} by reducing error or increasing data throughput). On its face, an extension of diversity combining to the quantum communications regime is unclear. Indeed, the simplest way to understand classical diversity - receiving multiple copies of the input, independently affected by the channel - might also lead one to conclude that the concept is not applicable to quantum communication. After all, the fundamental postulates of quantum mechanics lead to the ``no-cloning'' theorem; multiple, separable copies of the input state are ruled out by that theorem.

A more careful consideration of classical diversity combining, however, illuminates the parallels between the quantum and classical viewpoints. In a general description of classical diversity, the input signal is coupled through the channel to a receiver system of higher dimensionality. Consider a communication signal with a single input antenna and $N$ receive antennae. Often, the desired output is a signal of the same dimension as the input, a scalar in this case. Diversity combining is then the process of extracting the maximum information from the $N$-dimensional received system. In most communications systems, this combining is done at either the analog (leading to beam-forming or multi-user detection) or digital (making the diversity system a kind of repeater code) level. Thus, the natural inclination is to equate diversity combining with either beam-forming or repeater codes. The most general picture, however, is that of recombining the received signal that was spread by the channel from the input system to a higher dimensional output system. Thus, it is appropriate to consider a quantum spreading channel to be a quantum diversity channel, and the recovery operation to be a quantum diversity combiner.
\section{Examples}\label{sec:examples}
To illustrate the potential benefit of optimum QER, we numerically compare the performance of the optimum QER with QEC recovery for two encoding schemes. First, we examine the $[[5,1,3]]$ code in the presence of the amplitude damping channel. Second, we compare the four-qubit amplitude damping code from \cite{LeuNieChuYam:97} with the optimum QER. The latter is a particularly apt choice for comparison, as the code was designed for a specific channel and sought to only approximately satisfy the quantum error correcting conditions. In both cases, the noise channel $\mathcal{E}_a$ is the amplitude damping channel, which for a single qubit has operator elements
\begin{equation}\label{eq:ampdamp}
E_0=\left [ \begin{array}{cc} 1 & 0 \\ 0 &\sqrt{1-\gamma} \end{array} \right ]\hspace{.5 cm} \textrm{and} \hspace{.5 cm} E_1=\left [ \begin{array}{cc} 0 & \sqrt{\gamma} \\ 0 & 0 \end{array} \right ].
\end{equation}
This channel is a commonly encountered model, where the parameter $\gamma$ indicates the probability of decaying from state $\ket{1}$ to $\ket{0}$ (\emph{i.e.} the probability of losing a photon). Amplitude damping is a logical choice to illustrate the benefits of optimum QER as the operation is not symmetric with respect to $\ket{0}$ and $\ket{1}$.
\subsection{Five-Qubit Code}
The five-qubit code was independently discovered by \cite{BenDivSmoWoo:96} and \cite{LafMiqPazZur:96}.
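Throughout this section, the optimum recovery is obtained by solving the semidefinite program (\ref{eq:ent_fid_max}) numerically. As an aside, we sketch below how such a computation can be set up; the sketch is ours rather than part of the original presentation, it assumes Python with the \texttt{numpy} and \texttt{cvxpy} packages, and for brevity it uses a toy one-qubit-into-two-qubits isometry (which is \emph{not} an error correcting code) together with independent amplitude damping (\ref{eq:ampdamp}). The five-qubit and four-qubit codes discussed in this section are handled identically, with $U_C$ built from their logical states and $E_i'=E_iU_C$.
\begin{verbatim}
# Sketch: solving the SDP (eq:ent_fid_max) for a toy spreading channel.
import numpy as np
import cvxpy as cp

def kett(C):
    return C.reshape(-1)               # row-major vectorization |C>>

dH, dK = 2, 4                          # 1 logical qubit encoded into 2 physical qubits

# Toy encoding isometry U_C = |00><0| + |11><1| (illustrative only).
U_C = np.zeros((dK, dH))
U_C[0, 0] = 1.0
U_C[3, 1] = 1.0

# Single-qubit amplitude damping Kraus operators, gamma = 0.2.
g = 0.2
E0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
E1 = np.array([[0, np.sqrt(g)], [0, 0]])

# Spreading channel E': L(H) -> L(K), with elements E'_i = (E_a (x) E_b) U_C.
kraus = [np.kron(A, B) @ U_C for A in (E0, E1) for B in (E0, E1)]

rho = np.eye(dH) / dH                  # maximum-entropy input, pulled back to H

# C_{rho,E} = sum_j |rho E_j^dagger>><<rho E_j^dagger|, an operator on H (x) K.
C = np.zeros((dH * dK, dH * dK), dtype=complex)
for E in kraus:
    v = kett(rho @ E.conj().T)
    C += np.outer(v, v.conj())

# Maximize tr(X C) subject to X >= 0 and tr_H X = I (trace preservation).
X = cp.Variable((dH * dK, dH * dK), hermitian=True)
tr_H = sum(X[m * dK:(m + 1) * dK, m * dK:(m + 1) * dK] for m in range(dH))
problem = cp.Problem(cp.Maximize(cp.real(cp.trace(C @ X))),
                     [X >> 0, tr_H == np.eye(dK)])
problem.solve()
print("optimum entanglement fidelity:", problem.value)
\end{verbatim}
The only structure the solver needs is the Hermitian matrix $C_{\rho,\mathcal{E}}$ and the block-trace equality constraint; replacing the toy isometry by the logical states of an actual code changes only the construction of \texttt{U\_C} and the dimensions.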
We will here follow the treatment in \cite{NieChu:B00}, and specify the code via the generators $\{g_1,g_2,g_3,g_4\}$ and the logical $\bar{Z}$ and $\bar{X}$ operations given in Table \ref{table:5qubitcode}. The code subspace $\mathcal{C}$ is the two-dimensional subspace that is the $+1$ eigenspace of the generators $g_i$. The logical states $\ketsub{0}{L}$ and $\ketsub{1}{L}$ are the $+1$ and $-1$ eigenkets of $\bar{Z}$ on $\mathcal{C}$.
\begin{table}[tbh]
\begin{tabular}{c|c}
\hline \hline
Name & Operator\\
\hline
$g_1$ & $X Z Z X I$\\
$g_2$ & $I X Z Z X$\\
$g_3$ & $X I X Z Z$\\
$g_4$ & $Z X I X Z$\\
$\bar{Z}$ & $Z Z Z Z Z$\\
$\bar{X}$ & $X X X X X$\\
\hline \hline
\end{tabular}
\caption{Generators and logical operations of the five-qubit (\emph{i.e.} $[[5,1,3]]$) code.}
\label{table:5qubitcode}
\end{table}
To compute the optimum recovery for this code, we assume that the logical states are equally likely, that is, $\rho=\frac{1}{2}\ketsub{0}{L}\brasub{0}{L}+\frac{1}{2}\ketsub{1}{L}\brasub{1}{L}$. (This choice of $\rho$ in fact assumes nothing about the choice of codewords; rather it is the maximum entropy distribution constrained to the code space $\mathcal{C}$.) The noise channel is $\mathcal{E}_a^{\otimes 5}$, the amplitude damping channel acting independently on each qubit. For each choice of the parameter $\gamma$, the optimum recovery operation $\mathcal{R}^\star_\rho$ is computed according to (\ref{eq:ent_fidelity_max}). We compare the entanglement fidelity $F(\rho,\mathcal{R}\circ\mathcal{E}_a^{\otimes 5})$ for both $\mathcal{R}^\star_\rho$ and $\mathcal{R}_{\textrm{QEC}}$ in Fig.~\ref{fig:5qubit_comp}.

Figure \ref{fig:5qubit_comp} illustrates clearly the difference between optimum QER and QEC recoveries for large values of $\gamma$. It is also instructive to compare the techniques for small values of $\gamma$. We do this numerically by calculating the polynomial expansion of $F(\rho,\mathcal{R}\circ\mathcal{E})$ as $\gamma$ goes to zero. The entanglement fidelity for the optimum QER has the form $F(\rho,\mathcal{R}\circ\mathcal{E})\approx 1 -1.166 \gamma^2+\mathcal{O}(\gamma^3)$. In contrast, for the QEC recovery $F(\rho,\mathcal{R}\circ\mathcal{E})\approx 1 -2.5 \gamma^2+\mathcal{O}(\gamma^3)$.
\begin{figure}
\caption{Entanglement fidelity vs.\ $\gamma$ for the five-qubit code and the amplitude damping channel $\mathcal{E}_a^{\otimes 5}$.}
\label{fig:5qubit_comp}
\end{figure}
\subsection{Four-Qubit `Approximate' Code}
In \cite{LeuNieChuYam:97}, Leung \emph{et al.} recognized the advantage that channel-specific error recovery schemes can have over generic QEC routines. To illustrate the advantage, they designed a code specific to the amplitude damping channel with good performance for small $\gamma$ that requires only four qubits, in contrast to the generic five-qubit code. The code only approximately meets the QEC conditions, and as a result, the `corrected' state is somewhat distorted from the input, even when the dominant error term occurs. In this way, the procedure is based upon principles similar to the optimum QER we have developed. The logical states are given by
\begin{eqnarray}
\ketsub{0}{L}&=& \frac{1}{\sqrt{2}}(\ket{0000}+\ket{1111})\\
\ketsub{1}{L}&=& \frac{1}{\sqrt{2}}(\ket{0011}+\ket{1100}),
\end{eqnarray}
and the recovery operation is specified by the circuits in Figure 2 of \cite{LeuNieChuYam:97}. We note that the recovery operation depends explicitly on the parameter $\gamma$. We compare the recovery of Leung \emph{et
al.} with the optimum QER computed according to (\ref{eq:ent_fidelity_max}), once again assuming the completely mixed input density $\rho=\frac{1}{2}\ketsub{0}{L}\brasub{0}{L}+\frac{1}{2}\ketsub{1}{L}\brasub{1}{L}$. The numerical comparison for various values of $\gamma$ is provided in Fig.~\ref{fig:Leung_comp}. As $\gamma$ goes to zero, the entanglement fidelity for the optimum QER is numerically determined to have the form $F(\rho,\mathcal{R}\circ\mathcal{E})\approx 1 -1.25 \gamma^2+\mathcal{O}(\gamma^3)$. In contrast, for the Leung \emph{et al.} recovery, $F(\rho,\mathcal{R}\circ\mathcal{E})\approx 1 -2.75 \gamma^2+\mathcal{O}(\gamma^3)$.
\begin{figure}
\caption{Entanglement fidelity vs.\ $\gamma$ for the four-qubit amplitude damping code of Leung \emph{et al.}}
\label{fig:Leung_comp}
\end{figure}
\subsection{Commentary on Examples}
It is not surprising that the optimum QER operation outperforms the five-qubit code or the Leung code. It is, however, noteworthy that the difference can be significant even for relatively small values of $\gamma$. We may conclude that channel-specific recoveries can significantly improve the performance of error correcting procedures. This conclusion was shared by Leung \emph{et al.}, who lamented the lack of a general method to design such channel-specific schemes. The SDP formalism outlined in this paper provides such a general method.

Perhaps the most striking benefit of optimum QER can be seen in the `no recovery' comparisons in Figs.~\ref{fig:5qubit_comp} and \ref{fig:Leung_comp}. These curves represent the entanglement fidelity of a single qubit transmitted through the noisy channel. For values of $\gamma$ where the recovered entanglement fidelity lies below the `no recovery' curve, we see that the error mitigation procedure does more harm than good. Performing optimum QER as opposed to QEC significantly extends the range of $\gamma$ for which error mitigation is beneficial. This suggests optimum QER will be a particularly valuable technique for noisier systems.

Finally, it is worth noting the duality between optimum QER and optimum encoding. We have derived the optimum operation $\mathcal{R}$ for a given encoding and noise process. By the same process, one may derive the optimum encoding given a recovery operation and noise process. This can most easily be seen by noting that the encoding operation is a valid quantum operation, and in fact a spreading operation; it is thus subject to the same semidefinite cone constraints as the recovery operation. In a manner similar to those suggested by \cite{Sho:03} and \cite{ReiWer:05}, one can conceivably obtain a channel-specific encoding and recovery pair by alternately holding the recovery fixed and optimizing the encoding, and holding the encoding fixed and optimizing the recovery. Full analysis of such a technique is deferred for future consideration.
\section{Conclusion}
The structure of quantum operations allows quantum error correction to be approached as an optimization problem. Specifically, optimum recovery of an encoded quantum state from an error process can be computed numerically using semidefinite programming when optimality is interpreted as maximization of the entanglement fidelity. This analysis suggests the ability to systematically search for recovery operations for complicated error schemes beyond those readily analyzed and corrected through more traditional QEC methodologies.
This problem is, in its most general form, the problem of optimally combining the output of a quantum spreading channel, and it is thus a quantum parallel to the classical problem of diversity combining.
\end{document}
\begin{document} \title{Suppression of chemotactic explosion by mixing} \begin{abstract} Chemotaxis plays a crucial role in a variety of processes in biology and ecology. In many instances, processes involving chemical attraction take place in fluids. One of the most studied PDE models of chemotaxis is given by the Keller-Segel equation, which describes a population density of bacteria or mold which attract chemically to substance they secrete. Solutions of the Keller-Segel equation can exhibit dramatic collapsing behavior, where density concentrates positive mass in a measure zero region. A natural question is whether presence of fluid flow can affect singularity formation by mixing the bacteria thus making concentration harder to achieve. In this paper, we consider the parabolic-elliptic Keller-Segel equation in two and three dimensions with additional advection term modeling ambient fluid flow. We prove that for any initial data, there exist incompressible fluid flows such that the solution to the equation stays globally regular. On the other hand, it is well known that when the fluid flow is absent, there exist initial data leading to finite time blow up. Thus presence of fluid flow can prevent the singularity formation. We discuss two classes of flows that have the explosion arresting property. Both classes are known as very efficient mixers. The first class are the relaxation enhancing (RE) flows of \cite{constantin2008diffusion}. These flows are stationary. The second class of flows are the Yao-Zlatos near-optimal mixing flows \cite{yao2014mixing}, which are time dependent. The proof is based on the nonlinear version of the relaxation enhancement construction of \cite{constantin2008diffusion}, and on some variations of global regularity estimate for the Keller-Segel model. \end{abstract} \author{Alexander Kiselev} \address{\hskip-\parindent Alexander Kiselev\\ Department of Mathematics\\ Rice University\\ Houston, TX 77005, USA} \email{[email protected]} \author{Xiaoqian Xu} \address{\hskip-\parindent Xiaoqian Xu\\ Department of Mathematics\\ University of Wisconsin-Madison\\ Madison, WI 53706, USA } \email{[email protected]} \maketitle \section{Introduction} Chemotaxis is ubiquitous in biology and ecology. This term is used to describe motion where cells or species sense and attempt to move towards higher (or lower) concentration of some chemical. The first mathematically rigorous studies of chemotaxis effects have been by Patlak \cite{patlak1953random} and Keller-Segel \cite{keller1970initiation}, \cite{keller1971model}. The latter work involved derivation and first analysis of the Keller-Segel system, the most studied model of chemotaxis. The Keller-Segel equation describes a population of bacteria or mold that secrete a chemical and are attracted by it. In one version of the simplified parabolic-elliptic form, this equation can be written in $\mathbb{R}^d$ as (see e.g. \cite{perthame2006transport}) \begin{equation}\lambdabel{chemo} \partial_t \rho - \Delta \rho + \nabla\cdot(\rho \nabla (-\Delta)^{-1}\rho)=0,\quad \rho(x,0)=\rho_0(x). \end{equation} The last term on the left hand side describes attraction of $\rho$ by the chemical whose concentration is given by $c(x,t)=(-\Delta)^{-1}\rho(x,t)$. The literature on the Keller-Segel equation is enormous. It is known that in dimensions larger than one, solutions to (\ref{chemo}) can concentrate finite mass in a measure zero region and so blow up in finite time. 
We refer to \cite{perthame2006transport}, \cite{horstmann20031970}, and \cite{horstmann200319701} for more details and further references. Typically, chemotactic processes take place in fluid, and often the agents involved in chemotaxis are also advected by the ambient flow. Some of the examples involve monocytes using chemokine signalling to concentrate at a source of infection (see e.g. \cite{deshmane2009monocyte,taub1995monocyte}), sperm and eggs of marine animals practicing broadcast spawning in the ocean (see e.g. \cite{coll1994chemical,miller1985demonstration}), and other numerous instances in biology and ecology. Our goal in this paper is to study the possible effects resulting from interaction of chemotactic and fluid transport processes. Of particular interest to us is the possibility of suppression of finite time blow up due to the mixing effect of fluid flow. The problem of chemotaxis in fluid flow has been studied before; for example, in a setting similar to ours \cite{KR1}, \cite{kiselev2012biomixing} studied the effect of chemotaxis and fluid advection on the efficiency of absorbing reaction. Moreover, in a series of papers \cite{lorz2010coupled}, \cite{duan2010global}, \cite{di2010chemotaxis}, \cite{liu2011coupled}, \cite{lorz2012coupled} a very interesting problem coupling chemotactic density with fluid mechanics equations actively forced by this density has been considered in a variety of different settings. The active coupling makes the system more challenging to analyze, but in some cases intriguing results involving global existence of weak solutions (the definition of which implies lack of the $\partial_ta$-function blow up) have been proved. These results, however, apply either in the setting where the initial data is small (see e.g. \cite{lorz2012coupled}) or close to constant \cite{duan2010global}, or in the systems where both chemotactic equation and the fluid equation have globally regular solutions if not coupled. In other words, to the best of our knowledge, there have been no rigorous results providing an example of suppression of the chemotactic explosion by fluid flow; only results showing that presence of fluid flow does not lead to blow up for the initial data that would not blow up without the flow. In this paper, our main focus will be on the question whether incompressible fluid flow can arrest the finite time blow up phenomenon which is the key signature of the Keller-Segel model. There are two possible fluid flow effects that can be helpful in finite time blow up prevention. The first applies in infinite regions, where strong flow can help diffusion quickly spread the density so thin that chemotactic effects become weak. The second effect is more universal and subtle to analyze, and involves mixing in a finite volume. In this case, the concentration may remain significant, but the flow is constantly mixing density and preventing chemotaxis from building up a concentration peak. We are primarily interested in the mixing effect, and so we will consider a finite region setting. It will also be convenient for us to adopt periodic boundary conditions and to consider the Keller-Segel equation with advection on a torus. This is not essential, and many of our results also apply on a finite region with Neumann, Dirichlet or Robin boundary conditions. Let us now state our main result. Since we are working on $\mathbb{T}^d$, we will define the concentration of the chemical by factoring out a constant background: $c(x,t)=(-\Delta)^{-1}(\rho(x,t)-\bar{\rho})$. 
Here $\rho(x,t)\in L^2$ is the species density, and $\bar{\rho}$ is its mean over $\mathbb{T}^d$. The inverse Laplacian can be defined on the Fourier side, or by an appropriate convolution as will be discussed below. Consider the equation
\begin{equation}\label{chemo1}
\partial_t\rho+(u\cdot\nabla) \rho -\Delta \rho + \nabla\cdot(\rho \nabla (-\Delta)^{-1}(\rho-\bar{\rho}))=0,\quad \rho(x,0)=\rho_0(x),\quad x\in \mathbb{T}^d.
\end{equation}
We will prove the following theorem.
\begin{thm}\label{main}
Given any initial data $\rho_0\geqslant 0$, $\rho_0\in C^{\infty}(\mathbb{T}^d)$, $d=2$ or $3$, there exist smooth incompressible flows $u$ such that the unique solution $\rho(x,t)$ of (\ref{chemo1}) is globally regular in time.
\end{thm}
\begin{remark}
1. We will provide more details on the specific choice of the flows later. \\
2. The restriction $d=2$ or $3$ is essential for our method to work. The case $d=4$ is in some sense borderline for our estimates: the needed estimates just fail. Extending our approach to this case, and especially to $d>4,$ would require new ideas. \\
See Remark~\ref{dim} after Theorem~\ref{L2cr} for more discussion. \\
3. As we already mentioned, the periodic boundary condition is not very essential. One can get a similar result in a bounded domain; one issue is that examples of stationary RE flows are not available in a general bounded domain. On the other hand, the Yao-Zlatos flows can be constructed in general bounded domains and so can be used to achieve an analogous result in this more general setting. See Remark \ref{noslip} for more details. \\
4. The theorem certainly holds under weaker assumptions on the regularity of the initial data. In this paper, for the sake of presentation, we do not make an effort to optimize the regularity conditions. The scheme of our proof and the kinds of flow examples that we use make the connection between the mixing properties of the flow and its ability to suppress the chemotactic blow up quite explicit.
\end{remark}
It is well known that the solutions to the Keller-Segel equation can form singularities in finite time. The first rigorous proof of this result in the case where the domain is a two-dimensional disk was given by J{\"a}ger and Luckhaus \cite{jager1992explosions}. Their proof is based on radial geometry and comparison principles. Nagai \cite{nagai2001blowup} has provided a proof of finite time blow up in more general bounded domains. We have not found a finite time blow up proof for the periodic case in the literature. Although it can be obtained by modification of the existing arguments, in the appendix we will provide a short independent construction of such examples in the case of two spatial dimensions. This will imply that Theorem \ref{main} indeed provides examples of the suppression of chemotactic explosion by fluid mixing.

We note that fluid advection has been conjectured to regularize singular nonlinear dynamics before. The most notable example is the case of the $3$D Navier-Stokes and Euler equations. Constantin \cite{PC} has proved the possibility of finite time blow up for the $3$D Euler equation in $\mathbb{R}^3$ if the pure advection term in the vorticity formulation is removed from the equation. Hou and Lei have obtained numerical evidence for the finite time blow up in a system obtained from the $3$D Navier-Stokes equation by the removal of the pure transport terms \cite{lei2009stabilizing}.
In fact, finite time blow up has also been proved rigorously in some related modified model settings \cite{hou2011singularity}, \cite{hou2012singularity}, \cite{hou2014finite}. Of course, the proof of the global regularity of the $3$D Navier-Stokes remains an outstanding open problem, so whether the $3$D Navier-Stokes equation exhibits ``advection regularization'' is an open question. See also \cite{lartiti} for more discussion. As another example of a related philosophy, we mention the paper \cite{berestycki2010explosion} on the elliptic problem with an ``explosion'' type reaction. There is no time variable and so no finite time blow up in this paper, but it shows that certain flows can significantly affect the ``explosion threshold'': namely, the value of the reaction coupling parameter beyond which there exist no regular positive solutions. To the best of our knowledge, the examples that we construct here are the first rigorous examples of the suppression of finite time blow up by fluid mixing in a nonlinear evolution setting. It should be possible to extend our method to cover some other situations, which we will briefly discuss in the last section.

The paper is organized as follows. In Section $2$ we prove the $L^2$-based global regularity criterion that we will use. In particular, it guarantees global regularity as long as the $L^2$ norm of the solution remains bounded. In Section $3$, we set up the strategy for controlling the $L^2$ norm via the $\dot{H}^1$ norm of the solution. Despite the presence of the nonlinearity, due to the dissipation term, if the $\dot{H}^1$ norm is sufficiently large, the $L^2$ norm has to decay. Basically, what this implies is that the solution to the Keller-Segel equation cannot blow up in a manner which makes the $\dot{H}^1$ norm of the solution grow much faster than the $L^2$ norm. In Section $4$, we prove the key result on approximation of the solution of the Keller-Segel equation by the solution of the pure advection equation on small time scales. This result sets up our strategy: use fluid mixing to drive up the $\dot{H}^1$ norm of the solution to the full Keller-Segel equation in order to keep the $L^2$ norm in check. Indeed, highly mixing flows are known to increase the $\dot{H}^1$ norm of the solution of the pure advection problem. If this solution stays close to the solution of the nonlinear problem long enough, this guarantees that the $\dot{H}^1$ norm of the nonlinear solution will be large, too. In Section $5$, we put together all the components of the argument and prove that the relaxation enhancing flows of \cite{constantin2008diffusion} suppress chemotactic blow up. We focus on the case of weakly mixing flows. In Section $6$ we outline another example of flows with an analogous property, the Yao-Zlatos efficient mixing flows. Finally, in Section $7$ we briefly discuss possible future extensions. There are two appendices: the first one contains a construction of finite time blow up examples for the Keller-Segel equation with periodic data, and the second includes sketches of proofs of some functional inequalities that we use in the paper. Throughout the paper, $C$ will stand for universal constants that may change from line to line.

\textbf{Acknowledgment.} AK has been partially supported by the NSF grant DMS-1412023. XX has been partially supported by the NSF grant DMS-1535653. We thank Eitan Tadmor, Changhui Tan, Yao Yao and Andrej Zlatos for helpful discussions.
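The results of this paper are analytic, but the mechanism described above is easy to experiment with numerically. The following crude pseudo-spectral discretization of (\ref{chemo1}) on $\mathbb{T}^2$ is our own illustrative sketch and not part of the analysis below; it assumes Python with \texttt{numpy}, uses a simple cellular model flow rather than one of the relaxation enhancing or Yao-Zlatos flows studied in this paper, and makes no claim of quantitative accuracy. One can vary the amplitude $A$ of the advection term and observe its effect on the growth of the maximum of the density.
\begin{verbatim}
# Illustrative pseudo-spectral sketch of (chemo1) on the 2D torus (not part of
# the proofs; explicit Euler stepping, no dealiasing, accuracy not claimed).
import numpy as np

N, dt, A, nsteps = 128, 1e-4, 50.0, 2000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2_inv = np.where(K2 > 0, 1.0 / np.where(K2 > 0, K2, 1.0), 0.0)  # zero mode removed

def deriv(f, K):
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

# A simple incompressible cellular flow (for illustration only).
u1 = -np.sin(X) * np.cos(Y)
u2 = np.cos(X) * np.sin(Y)

# Smooth, nonnegative initial density concentrated near the center of the torus.
rho = 4.0 + 3.0 * np.exp(-4.0 * ((X - np.pi)**2 + (Y - np.pi)**2))

for step in range(nsteps):
    rho_hat = np.fft.fft2(rho)
    c_hat = K2_inv * rho_hat                 # c = (-Delta)^{-1}(rho - rho_bar)
    cx = np.real(np.fft.ifft2(1j * KX * c_hat))
    cy = np.real(np.fft.ifft2(1j * KY * c_hat))
    lap = np.real(np.fft.ifft2(-K2 * rho_hat))
    chemo = deriv(rho * cx, KX) + deriv(rho * cy, KY)   # div(rho grad c)
    adv = u1 * deriv(rho, KX) + u2 * deriv(rho, KY)     # u . grad rho
    rho = rho + dt * (lap - chemo - A * adv)

print("mean density:", rho.mean(), " max density:", rho.max())
\end{verbatim}
Nothing in this sketch is used later; it merely makes the competition between the chemotactic aggregation term and the advection term concrete.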
\section{Global existence: the $L^2$ criterion} In this section, we will show that to get the global regularity of (\ref{chemo1}), we only need to have certain control of the spatial $L^2$ norm of the solution. The following theorem is a direct analog of Theorem 3.1 in \cite{kiselev2012biomixing}, where it was proved in the $\mathbb{R}^2$ setting. We will provide a sketch of the proof for the sake of completeness. Throughout the paper, we will use notation $\dot{H}^s$ for the homogeneous Sobolev space in spatial coordinates, that is we set $$ \|f\|_{\dot{H}^s}^2=\sum_{k\in\mathbb{Z}^d\setminus \{0\}}|k|^{2s}|\hat{f}(k)|^2. $$ \begin{thm}\lambdabel{L2cr} Suppose that $\rho_0 \in C^{\infty}(\mathbb{T}^d)$, $d=2$ or $3$, $\rho_0\geqslantqslant 0$, and suppose that $u\in C^{\infty}$ is divergence free. Assume $[0,T]$ is the maximal interval of existence of the unique smooth solution $\rho(x,t)$ of the equation (\ref{chemo1}). Then we must have \begin{equation}\lambdabel{13th} \int_0^t\|\rho(\cdot, \tau)-\bar{\rho}\|_{L^2(\mathbb{T}^d)}^{\frac{4}{4-d}}d\tau\xrightarrow{t\rightarrow T}\infty. \end{equation} \end{thm} \begin{remark}\lambdabel{dim} In other words, the smooth solution can be continued as far as the integral in time of the appropriate power of the $L^2$ norm of solution in space stays finite. Note that the mean value of $\rho$ is conserved by the evolution, so $\bar{\rho}(\cdot,t)\equiv\bar{\rho}_0$. We will denote it $\bar{\rho}$ throughout the rest of the paper. One may or may not include $\bar{\rho}$ into (\ref{13th}), these criteria are equivalent. One can verify that the scaling of (\ref{13th}) is sharp in the sense that it is a critical quantity for (\ref{chemo1}). One way to see this criticality is through an informal scaling argument or dimensional analysis. Indeed, one can check that $\lambdambda^2\rho(\frac{x}{\lambdambda},\frac{t}{\lambdambda^2})$ is a solution to (\ref{chemo1}) (with a different period) for every $\lambdambda$ as long as $\rho(x,t)$ is a solution. The quantity in (\ref{13th}) with respect to this scaling. This observation also means that for $d\geqslantqslant 4$ the boundedness of the $L^2$ norm may not be sufficient in order to get the regularity of the solution to (\ref{chemo1}). \end{remark} \begin{proof} The existence and uniqueness of smooth local solution can be proved by standard methods, so we will focus on global regularity. Let $s\geqslantqslant 2$ be integer. Multiply (\ref{chemo1}) by $(-\Delta)^s\rho$ and integrate. We get \begin{align*} \frac{1}{2}\partial_t \|\rho\|_{\dot{H}^s}^2&\leqslantqslant \leqslantft|\int_{\mathbb{T}^d}(\nabla\rho)\cdot (\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))(-\Delta )^s\rho \,dx \right|\\ &+\leqslantft|\int_{\mathbb{T}^d}\rho(\rho-\bar{\rho})(-\Delta)^s\rho \,dx\right|+C \|u\|_{C^s}\|\rho\|^2_{\dot{H}^s}-\|\rho\|^2_{\dot{H}^{s+1}}. \end{align*} Here we integrated by parts $s$ times and used incompressibility of $u$ to obtain $$ \leqslantft|\int_{\mathbb{T}^d}(u\cdot \nabla)\rho(-\Delta)^s\rho \,dx\right|\leqslantqslant C\|u\|_{C^s}\|\rho\|^2_{\dot{H}^s}. $$ Consider the expression $$ \int_{\mathbb{T}^d}\rho(\rho-\bar{\rho})(-\Delta)^s\rho dx. $$ Integrating by parts, we can represent this integral as a sum of terms of the form $$ \int_{\mathbb{T}^d}D^l\rho D^{s-l}(\rho-\bar{\rho}) D^s \rho \,dx, $$ where $l=0,1,...,s$ and $D$ denotes any partial derivative. 
By H\"{o}lder inequality, we have $$ \int_{\mathbb{T}^d}D^l\rho D^{s-l}(\rho-\bar{\rho})D^s \rho \,dx\leqslantqslant\|D^l \rho\|_{L^{p_l}}\|D^{s-l}(\rho-\bar{\rho})\|_{L^{q_l}}\|\rho\|_{\dot{H}^s}, $$ with any $2\leqslantqslant p_l, q_l \leqslantqslant \infty$ satisfying $p_l^{-1}+q_l^{-1}=1/2$. For any integer $0\leqslantqslant m\leqslantqslant n$, and mean zero $f\in C^{\infty}(\mathbb{T}^d)$, we have Gagliardo-Nirenberg inequality \begin{equation}\lambdabel{14th} \|D^m f\|_{L^p}\leqslantqslant C\| f \|_{L^2}^{1-a}\|f\|_{\dot{H}^{n}}^a, \quad a=\frac{m-\frac{d}{p}+\frac{d}{2}}{n}, \end{equation} which holds for $2\leqslantqslant p \leqslantqslant \infty$ unless $a=1,$ and if $a=1$ for $2\leqslantqslant p<\infty$. We will sketch a short proof of (\ref{14th}) in the appendix to make the paper more self-contained. Take $p_l=\frac{2s}{l}$, $q_l=\frac{2s}{s-l}$. Then for all $l>0$, applying (\ref{14th}), we get $$ \|D^l\rho\|_{L^{p_l}}\|D^{s-l}(\rho-\bar{\rho})\|_{L^{q_l}}\leqslantqslant C\|\rho-\bar{\rho}\|_{L^2}^{\frac{2s-d+4}{2(s+1)}}\|\rho\|_{\dot{H}^{s+1}}^{\frac{2s+d}{2(s+1)}}. $$ In the $l=0$ case, we use $$ \|\rho\|_{L^{\infty}}\leqslantqslant \|\rho-\bar{\rho}\|_{L^{\infty}}+\bar{\rho}\leqslantqslant C\|\rho-\bar{\rho}\|_{L^2}^{1-\frac{d}{2(s+1)}}\|\rho\|_{\dot{H}^{s+1}}^{\frac{d}{2(s+1)}}+\bar{\rho}. $$ Therefore $$ \leqslantft|\int_{\mathbb{T}^d}\rho(\rho-\bar{\rho})(-\Delta)^s\rho \,dx \right|\leqslantqslant C\leqslantft(\|\rho-\bar{\rho}\|_{L^2}^{1-\frac{d}{2(s+1)}}\|\rho\|_{\dot{H}^{s+1}}^{\frac{d}{2(s+1)}}+\bar{\rho}\right) \|\rho\|^{\frac{s}{s+1}}_{\dot{H}^{s+1}}\|\rho-\bar{\rho}\|_{L^2}^{\frac{1}{s+1}}\|\rho\|_{\dot{H}^s}. $$ Next, consider $$ \int_{\mathbb{T}^d}(\nabla\rho)\cdot (\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))(-\Delta)^s\rho \,dx. $$ Integrating by parts $s$ times, we get terms that can be estimated similarly to the previous case, using the fact that the double Riesz transform $\partial_{ij}(-\Delta)^{-1}$ is bounded on $L^p$, $1<p<\infty$. The only exceptional terms that appear which have different structure are $$ \int_{\mathbb{T}^d}(\partial_{i_1}...\partial_{i_s}\nabla\rho)\cdot(\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))\partial_{i_1}...\partial_{i_s}\rho \,dx. $$ But these can be reduced to $$ \int_{\mathbb{T}^d}(\partial_{i_1}...\partial_{i_s}\rho)^2(\rho-\bar{\rho}) \,dx $$ by another integration by parts, and estimated as before. Altogether, we get \begin{equation}\lambdabel{15th} \begin{split} \frac{1}{2}\partial_t \|\rho\|_{\dot{H}^s}^2&\leqslantqslant C\leqslantft(\|\rho-\bar{\rho}\|_{L^2}^{1-\frac{d}{2(s+1)}}\|\rho\|_{\dot{H}^{s+1}}^{\frac{d}{2(s+1)}}+\bar{\rho}\right) \|\rho\|^{\frac{s}{s+1}}_{\dot{H}^{s+1}}\|\rho-\bar{\rho}\|_{L^2}^{\frac{1}{s+1}}\|\rho\|_{\dot{H}^s}\\ &+C\|u\|_{C^s}\|\rho\|_{\dot{H}^s}^2-\|\rho\|_{\dot{H}^{s+1}}^2. \end{split} \end{equation} Observe that \begin{equation}\lambdabel{sobeasy56} \|\rho\|_{\dot{H}^s}\leqslantqslant \|\rho-\bar{\rho}\|_{L^2}^{\frac{1}{s+1}}\|\rho\|_{\dot{H}^{s+1}}^{\frac{s}{s+1}}. \end{equation} Split the first term on the right hand side of (\ref{15th}) into two parts, and estimate them as follows. First, \begin{align*} C\|\rho-\bar{\rho}\|_{L^2}^{\frac{2s-d+4}{2(s+1)}}\|\rho\|_{\dot{H}^{s+1}}^{\frac{2s+d}{2(s+1)}}\|\rho\|_{\dot{H}^s}&\leqslantqslant\|\rho-\bar{\rho}\|_{L^2}\|\rho\|_{\dot{H}^{s+1}}^{\frac{d}{2}}\|\rho\|_{\dot{H}^s}^{2-\frac{d}{2}}\\ &\leqslantqslant \frac{1}{4} \|\rho\|_{\dot{H}^{s+1}}^2+C\|\rho-\bar{\rho}\|_{L^2}^{\frac{4}{4-d}}\|\rho\|_{\dot{H}^s}^2. 
\end{align*} Second, \begin{align*} \bar{\rho}\|\rho\|_{\dot{H}^{s+1}}^{\frac{s}{s+1}}\|\rho-\bar{\rho}\|_{L^2}^{\frac{1}{s+1}}\|\rho\|_{\dot{H}^s}&\leqslantqslant \bar{\rho}\|\rho\|_{\dot{H}^s}\|\rho\|_{\dot{H}^{s+1}}\\ &\leqslantqslant \frac{1}{4}\|\rho\|_{\dot{H}^{s+1}}^2+C\bar{\rho}^2\|\rho\|_{\dot{H}^s}^2. \end{align*} We used the Poincare inequality and \eqref{sobeasy56} in the first step. Recall the following Nash-type inequality \begin{equation}\lambdabel{16th} \|\rho\|_{\dot{H}^s}\leqslantqslant C\|\rho\|_{\dot{H}^{s+1}}^{\frac{2s+d}{2s+2+d}}\|\rho\|_{L^1}^{\frac{2}{2s+2+d}}, \end{equation} the proof of which will be sketched in the appendix. Since $\rho(x,t)\geqslantqslant 0$ and hence $\|\rho(\cdot,t)\|_{L^1}=\bar{\rho}>0$ is conserved in time, putting all estimates into (\ref{15th}) we get \begin{equation}\lambdabel{17th} \frac{1}{2}\partial_t\|\rho\|_{\dot{H}^s}^2\leqslantqslant C\leqslantft(\|\rho-\bar{\rho}\|_{L^2}^{\frac{4}{4-d}}+\bar{\rho}^2+\|u\|_{C^s}\right)\|\rho\|_{\dot{H}^s}^2-c\bar{\rho}^{-\frac{4}{2s+d}}\|\rho\|_{\dot{H}^s}^{2+\frac{4}{2s+d}}. \end{equation} From this differential inequality and integrability of $\|\rho(\cdot, t)-\bar{\rho}\|_{L^2(\mathbb{T}^d)}^{\frac{4}{4-d}}$ in time, a finite upper bound for $\|\rho(\cdot, t)\|_{\dot{H}^s}$ follows for all times. In fact, due to the last term on the right hand side of (\ref{17th}), it is not hard to show that there is a global, not growing in time, upper bound for any $\dot{H}^s$ norm of $\rho$. \end{proof} \section{An $\dot{H}^1$ condition for an absorbing set in $L^2$} Due to Theorem \ref{L2cr}, to show global regularity of the solution $\rho(x,t)$ to (\ref{chemo1}), it suffices to control its $L^2$ norm in spatial variables. In this section, we prove a simple criterion that says that if the $\dot{H}^1$ norm of a solution is sufficiently large compared to its $L^2$ norm, then in fact the $L^2$ norm is decaying. Our overall strategy will be then to show that mixing can increase and sustain the $\dot{H}^1$ norm of solution. This will block the $L^2$ norm from ever growing too much, leading to global regularity. \begin{prop}\lambdabel{L2decay} Let $\rho(x,t)\geqslantqslant 0$ be smooth local solution to (\ref{chemo1}) set on $\mathbb{T}^d$, $d=2$ or $3$. Suppose that $\|\rho(\cdot,t)-\bar{\rho}\|_{L^2}\equiv B>0$ for some $t\geqslantqslant 0$. Then there exists a universal constant $C_1$ such that \begin{equation}\lambdabel{18th} \|\rho(\cdot, t+\tau)-\bar{\rho}\|_{L^2}\leqslantqslant 2B \mbox{ for every } 0\leqslantqslant \tau \leqslantqslant C_1\min(1,\bar{\rho}^{-1},B^{-\frac{4}{4-d}}). \end{equation} Moreover, there exists a universal constant $C_0$ such that if, in addition, \begin{equation}\lambdabel{19th} \|\rho(\cdot,t)\|_{\dot{H}^1}^2\geqslantqslant B_1^2\equiv C_0 B^{\frac{12-2d}{4-d}}+2\bar{\rho}B^2, \end{equation} then $\partial_t \|\rho(\cdot, t)\|_{L^2}<0$. \end{prop} \begin{remark} 1. In particular, due to Theorem \ref{L2cr}, it follows that if $\|\rho(\cdot,t)-\bar{\rho}\|_{L^2}\leqslantqslant B$, then the local smooth solution persists at least till $t+C_1\min\leqslantft(1,\bar{\rho}^{-1},B^{-\frac{4}{4-d}}\right)$.\\ 2. Here and below, by universal constant we mean a constant that does not depend on any parameters of the problem: namely, a constant independent of $u,$ $\rho_0,$ and $B.$ \end{remark} \begin{proof} Let us multiply both sides of (\ref{chemo1}) by $\rho-\bar{\rho}$ and integrate. 
Then \begin{equation}\lambdabel{L2} \begin{split} \frac12 \partial_t\|\rho-\bar{\rho}\|_{L^2}^2&=-\|\rho\|_{\dot{H}^1}^2+\int_{\mathbb{T}^d} \nabla\cdot(\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))(\rho-\bar{\rho})\, dx. \end{split} \end{equation} Observe that $$ \int_{\mathbb{T}^d}\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\nabla \rho \, dx=\int_{\mathbb{T}^d}\rho^2(\rho-\bar{\rho})\, dx-\int_{\mathbb{T}^d}\nabla \rho\cdot\nabla (-\Delta)^{-1}(\rho-\bar{\rho})\, dx. $$ Therefore, $$ \int_{\mathbb{T}^d}\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\nabla\rho \, dx=\frac{1}{2}\int_{\mathbb{T}^d}\rho^2(\rho-\bar{\rho})\, dx, $$ and then the integral on the right hand side of (\ref{L2}) is equal to $$ -\int_{\mathbb{T}^d}\nabla\cdot(\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho})(\rho-\bar{\rho}) )\, dx=\frac{1}{2}\int_{\mathbb{T}^d}\rho^2(\rho-\bar{\rho})\, dx, $$ Next, notice that $$ \int_{\mathbb{T}^d}\rho^2(\rho-\bar{\rho})\, dx=\int_{\mathbb{T}^d}(\rho-\bar{\rho})^3\, dx+2\bar{\rho}\int_{\mathbb{T}^d}(\rho-\bar{\rho})^2 \, dx-2\bar{\rho}^2. $$ By a Gagliardo-Nirenberg inequality (see e.g. \cite{maz2013sobolev} or \cite{kiselev2012biomixing} for a simple proof), we have $$ \|\rho-\bar{\rho}\|_{L^3}^3\leqslantqslant C \|\rho-\bar{\rho}\|_{L^2}^{3-\frac{d}{2}}\|\rho\|_{\dot{H}^1}^{\frac{d}{2}}\leqslantqslant \|\rho\|_{\dot{H}^1}^2+C_1\|\rho-\bar{\rho}\|_{L^2}^{\frac{12-2d}{4-d}}, $$ where in the second step we applied Young's inequality. Applying all these estimates to (\ref{L2}) yields \begin{equation}\lambdabel{L2H1} \partial_t\|\rho-\bar{\rho}\|_{L^2}^2\leqslantqslant - \|\rho\|_{\dot{H}^1}^2+C_0\|\rho-\bar{\rho}\|_{L^2}^{\frac{12-2d}{4-d}}+2\bar{\rho}\|\rho-\bar{\rho}\|_{L^2}^2. \end{equation} Solving the differential equation $$ f'(\tau)=Cf^{\frac{6-d}{4-d}}(\tau)+2\bar{\rho}f(\tau), \quad f(0)=B^2, $$ leads to the solution \begin{equation}\lambdabel{22th} f(\tau)=\frac{B^2\exp (2\bar{\rho}\tau)}{\leqslantft(1-C\bar{\rho}^{-1}B^{\frac{4}{4-d}}(\exp(\frac{4\bar{\rho}\tau}{4-d})-1)\right)^{\frac{4-d}{2}}}. \end{equation} A standard comparison argument can be used to show that $\|\rho(\cdot, t+\tau)-\bar{\rho}\|_{L^2}^2\leqslantqslant f(\tau)$. On the other hand, a straightforward estimate using (\ref{22th}) gives existence of a constant $C_1$ such that if $\tau\leqslantqslant C_1 \min(1,\bar{\rho}^{-1},B^{-\frac{4}{4-d}})$, then $f(\tau)\leqslantqslant 4B^2$. The second statement of the lemma follows directly from (\ref{L2H1}) and an assumption $\rho_0\geqslantqslant 0$ (which implies $\bar \rho \geqslantqslant 0$). \end{proof} \section{An approximation lemma} We can now outline our general strategy of the proof of chemotactic blow up suppression in more detail. We know that the control of $L^2$ norm in spatial coordinates is sufficient for global regularity. We also see that if $\dot{H}^1$ norm of the solution is large, then $L^2$ norm is not growing. On the other hand, flows with strong mixing properties tend to increase $\dot{H}^1$ norm of solution. Hence our plan will be to deploy such flows, at a sufficiently strong intensity, to make sure that the $\dot{H}^1$ norm of the solution stays high, at least whenever the $L^2$ norm is not small. The first hurdle we face, however, is to show that the mixing property of flow persists in the full nonlinear Keller-Segel equation. In this section we prove a key result on the approximation of solutions to the Keller-Segel equation with advection (\ref{chemo1}) by solutions of the pure advection equation. 
We will be looking at the intense advection regime, and consider small, relative to all parameters except the strength of advection, time intervals. It is natural to assume that in this case most of the dynamics we observe is due to advection, though the exact statement of the result requires care since both diffusion and chemotactic terms are not trivial perturbations. Let us consider the equation (\ref{chemo1}) $$ \partial_t\rho+(u\cdot\nabla)\rho-\Delta \rho+\nabla \cdot (\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))=0,\quad \rho(x,0)=\rho_0(x), $$ $x\in \mathbb{T}^d$, with $d=2,3$. We will assume that the vector field $u$ is divergence free and Lipschitz in spatial variables. It may be stationary or time dependent. Let us denote $\eta(x,t)$ the unique Lipschitz solution of the equation \begin{equation}\lambdabel{23th} \partial_t\eta +(u\cdot\nabla)\eta=0, \quad \eta(x,0)=\rho_0(x). \end{equation} If we define the trajectories map by \begin{equation}\lambdabel{24th} \frac{d}{dt}\Phi_t(x)=u(\Phi_t(x),t),\quad \Phi_0(x)=x, \end{equation} then $\eta(x,t)=\rho_0(\Phi_t^{-1}(x))$. We start with the following simple observation. Denote the Lipschitz semi-norm $$ \|f\|_{Lip}=\sup_{x,y}\frac{|f(x)-f(y)|}{|x-y|}. $$ \begin{lem}\lambdabel{51} Suppose that the vector field $u$ is incompressible and Lipschitz in spatial variable for each $t\geqslantqslant 0$, $\|u(\cdot,t)\|_{Lip(\mathbb{T}^d)}\leqslantqslant D(t)$, $D(t)\in L^1_{loc}[0,\infty)$. Let $\eta(x,t)$ be the solution of (\ref{23th}). Then for every $t\geqslantqslant 0$, and for every $\rho_0\in \dot{H}^1$, we have \begin{equation}\lambdabel{25th} \|\eta(\cdot, t)\|_{\dot{H}^1}\leqslantqslant F(t)\|\rho_0\|_{\dot{H}^1}, \quad \mbox{where } F(t)=\exp\leqslantft(C\int_0^tD(s)ds\right). \end{equation} \end{lem} \begin{proof} If $u$ is incompressible and Lipschitz in spatial variable for each time, then the trajectories map $\Phi_t(x)$ is area preserving, Lipschitz in $x$ and invertible for each $t$, and the inverse map $\Phi_t^{-1}(x)$ is also Lipschitz in spatial variables. Moreover, $\|\Phi_t^{-1}\|_{Lip}\leqslantqslant \exp(C\int_0^t D(s)ds)$ (see e.g. \cite{marchioro2012mathematical}). The evolution $\eta(x,t)=\rho_0(\Phi_t^{-1})$ is a Lipschitz regular coordinate change of an $\dot{H}^1$ function $\rho_0$. The bound (\ref{25th}) follows from the well known properties of $\dot{H}^1$ functions under Lipschitz transformations of coordinates \cite{ziemer1989weakly}. \end{proof} We are now ready to prove the approximation lemma. \begin{lem}\lambdabel{5.2} Suppose that the vector field $u(x,t)$ is incompressible and Lipschitz in $x$, and is such that (\ref{25th}) is satisfied with $F(t)\in L^{\infty}_{loc}[0,\infty)$. Let $\rho(x,t)$, $\eta(x,t)$ be the solutions of (\ref{chemo1}), (\ref{23th}) respectively with $\rho_0\geqslantqslant 0\in \dot{H}^1$. Suppose that the unique local smooth solution $\rho(x,t)$ exists for $t\in [0,T]$. Then for every $t\in[0,T]$ we have \begin{equation}\lambdabel{26th} \frac{d}{dt}\|\rho-\eta\|_{L^2}^2\leqslantqslant -\|\rho\|_{\dot{H}^1}^2+4\|\rho_0\|_{\dot{H}^1}^2F(t)^2+C\|\rho-\bar{\rho}\|_{L^2}^2\leqslantft(\|\rho-\bar{\rho}\|_{L^2}^{\frac{12}{6-d}}+\bar{\rho}^2\right). \end{equation} \end{lem} Of course a direct analog of lemma holds without assumption $\rho_0\geqslantqslant 0$; we only need to replace $\bar{\rho}^2$ in (\ref{26th}) with $\|\rho\|^2_{L^1}$. 
\begin{proof} A direct computation using divergence free property of $u$ shows that \begin{equation}\lambdabel{27th} \begin{split} \frac12 \frac{d}{dt}\|\rho-\eta\|_{L^2}^2&= \int_{\mathbb{T}^d}\Delta \rho(\rho-\eta)\, dx-\int_{\mathbb{T}^d}\nabla\cdot(\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))(\rho-\eta)\, dx\\ &\leqslantqslant -\|\rho\|^2_{\dot{H}^1}+\|\rho\|_{\dot{H}^1}\|\eta\|_{\dot{H}^1}+\|\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\|_{L^2}\|\rho\|_{\dot{H}^1}\\ &+\|\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\|_{L^2}\|\eta\|_{\dot{H}^1}. \end{split} \end{equation} Applying H\"{o}lder and Gagliardo-Nirenberg inequalities, we can estimate \begin{align*} \|\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\|_{L^2}&\leqslantqslant \|\rho\|_{L^3}\|\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\|_{L^6}\\ &\leqslantqslant C\leqslantft(\|\rho\|_{\dot{H}^1}^{\frac{d}{6}}\|\rho-\bar{\rho}\|_{L^2}^{1-\frac{d}{6}}+\bar{\rho}\right)\|\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\|_{\dot{H}^1}^{\frac{d}{3}}\|\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\|_{L^2}^{1-\frac{d}{3}}\\ &\leqslantqslant C\leqslantft(\|\rho\|_{\dot{H}^1}^{\frac{d}{6}}\|\rho-\bar{\rho}\|_{L^2}^{1-\frac{d}{6}}+\bar{\rho}\right)\|\rho-\bar{\rho}\|_{L^2}. \end{align*} Here the last step follows from simple estimates on Fourier side. Given these estimates, several applications of Young's inequality show that the right hand side of (\ref{27th}) can be bounded above by \begin{align*} &-\|\rho\|_{\dot{H}^1}^2+\frac{1}{4}\|\rho\|_{\dot{H}^1}^2+\|\eta\|_{\dot{H}^1}^2+\frac{1}{8}\|\rho\|_{\dot{H}^1}^2+C\|\rho-\bar{\rho}\|_{L^2}^{2+\frac{12}{6-d}}\\ &+\frac{1}{8}\|\rho\|_{\dot{H}^1}^2+C\|\rho-\bar{\rho}\|_{L^2}^2\bar{\rho}^2+\|\eta\|_{\dot{H}^1}^2. \end{align*} With help of Lemma \ref{51} the estimate (\ref{26th}) quickly follows. \end{proof} \section{Proof of the main theorem: the relaxation enhancing flows} Our first example of flows that can stop chemotactic explosion will be relaxation enhancing flows of \cite{constantin2008diffusion}. These stationary in time flows have been shown to be very efficient in speeding up convergence to the mean of the solutions of diffusion-advection equation. A particular type of the RE flows are weakly mixing flows, a well known class in dynamical systems theory which is intermediate in mixing properties between mixing and ergodic \cite{cornfeld2012ergodic}. Let us briefly review the relevant definitions. Given an incompressible vector field $u(x)$ which is Lipschitz in spatial variables, recall definition (\ref{24th}) for the trajectories map $\Phi_t(x)$. Then define a unitary operator $U^t f(x)=f(\Phi_t^{-1}(x))$ acting on $L^2(\mathbb{T}^d)$. \begin{definition} The flow $u(x)$ is called weakly mixing if the spectrum of the operator $U\equiv U^1$ is purely continuous. The flow $u(x)$ is called relaxation enhancing (RE) if the operator $U$ (or properly defined operator ($u\cdot \nabla$)) has no eigenfunctions in $\dot{H}^1$ other than a constant function. \end{definition} \begin{remark*} The fact that we talk about the spectrum of $U$ rather than $(u\cdot \nabla)$ is a minor technical point. The symmetric operator $i(u\cdot\nabla)$ is unbounded on $L^2$, and sometimes needs to be extended to its natural domain (rather than just $\dot{H}^1$) to become self-adjoint and to be a true generator for $U$. To avoid these technicalities, it is convenient to make the definition in terms of $U$, which is bounded and so can be defined on smooth functions and then extended to the entire $L^2$ by continuity. 
Examples of weakly mixing flows on $\mathbb{T}^d$ are classical and go back to von Neumann \cite{neumann1932operatorenmethode} (just continuous $u(x)$) and Kolmogorov \cite{kolmogorov1953dynamical} (smooth $u(x)$). The Kolmogorov construction is based on the irrational rotation on the torus with appropriately selected invariant measure. Lack of eigenfunctions is established by analysis of a small denominator problem on the Fourier side bearing some similarity to the core of the KAM theory. The original examples are not incompressible with respect to the Lebesgue measure on $\mathbb{T}^d$, but a smooth change of coordinates can be applied to obtain incompressible flows with the same properties. Weakly mixing flows are also RE, but there are smooth RE flows which are not weakly mixing: these do have eigenfunctions but rough ones, lying in $L^2$ but not in $\dot{H}^1$ (see \cite{constantin2008diffusion} for more details and examples). \end{remark*} We now state our first main theorem. Consider the equation \begin{equation}\lambdabel{28th} \partial_t \rho^A+A(u\cdot \nabla)\rho^A -\Delta \rho^A +\nabla\cdot(\rho^A\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))=0,\quad \rho^A(x,0)=\rho_0(x). \end{equation} Here $A$ is the coupling constant regulating strength of the fluid flow that we will assume to be large. We note that dividing the equation by $A$ and changing time, we can instead think of all the results below as applicable in the regime of weak diffusion and chemotaxis on long time scales. \begin{thm}\lambdabel{6.2} Suppose that $u$ is smooth and incompressible vector field on $\mathbb{T}^d$, $d=2,3$, which is also relaxation enhancing. Assume that $\rho\geqslantqslant 0 \in C^{\infty}(\mathbb{T}^d)$. Then there exists an amplitude $A_0$ which depends only on $\rho_0$ and $u$ such that for every $A\geqslantqslant A_0$ the solution $\rho^A(x,t)$ of the equation (\ref{28th}) is globally regular. \end{thm} We will only prove Theorem \ref{6.2} in the case of weakly mixing flows. This serves our main purpose of providing an example of chemotactic blow up-arresting flow. In the general RE case, the proof is a fairly straightforward extension of an argument for the weakly mixing flow and the point spectrum estimates in \cite{constantin2008diffusion} (namely, Lemma 3.3 and part of the proof of Theorem 1.4 dealing with point spectrum). Before starting the proof, we need one auxiliary result from \cite{constantin2008diffusion}. Let $P_N$ be the orthogonal projection operator on the subspace formed by Fourier modes $|k|\leqslantqslant N$: $$ P_Nf(x)=\sum_{|k|\leqslantqslant N}e^{2\pi i k x} \hat{f}(k). $$ \begin{lem}\lambdabel{6.3} Let $U$ be a unitary operator with purely continuous spectrum defined on $L^2(\mathbb{T}^d)$. Let $S=\{\phi\in L^2:\|\phi\|_{L^2}=1\}$, and let $K\subset S$ be a compact set. Then for every $N$ and every $\sigma>0$, there exists $T_c(N,\sigma,K,U)$ such that for all $T\geqslantqslant T_c(N,\sigma,K,U)$ and every $\phi\in K$, we have \begin{equation}\lambdabel{29th} \frac{1}{T}\int_0^T\|P_N U^t \phi\|^2dt \leqslantqslant \sigma. \end{equation} \end{lem} This lemma connects the issues we are studying with one of the themes in quantum mechanics, namely the propagation rate of wave packets corresponding to continuous spectrum. Lemma \ref{6.3} is an extension of the well-known RAGE theorem (see e.g. \cite{cycon2009schrodinger}) which is a rigorous variant of a folklore quantum mechanics statement that quantum states corresponding to the continuous spectrum travel to infinity. 
In our case, travel to infinity happens not in physical space, but in the modes of the operator $-\Delta$ (that is, in Fourier modes). We refer to \cite{constantin2008diffusion} for the proof of Lemma \ref{6.3}. Now we are ready to give the proof of Theorem \ref{6.2}. \begin{proof}[Proof of Theorem \ref{6.2}] Fix any $B> \|\rho_0-\bar{\rho}_0\|_{L^2}$. If for all times we have that $\|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}<B$, then the solution stays globally regular by Theorem \ref{L2cr}. Otherwise, let $$ t_0=\inf\{t\big| \|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}=B\}. $$ Since the solution is smooth up to time $t_0$, we also have that $\|\rho^A(\cdot,t_0)-\bar{\rho}\|_{L^2}=B$; thus $t_0$ is the first time the $L^2$ norm of $\rho^A(x,t)-\bar{\rho}$ reaches $B$. Note that by Proposition \ref{L2decay}, we must also have $\|\rho^A(\cdot,t_0)\|_{\dot{H}^1}<B_1$, where $B_1=C_0 B^{\frac{12-2d}{4-d}}+2\bar{\rho}B^2$. We are going to show that if $A\geqslantqslant A_0(B,\bar{\rho},u)$ is sufficiently large, then after a small time interval of length $\tau$ that we will define shortly, we will have $\|\rho^A(\cdot, t_0+\tau)-\bar{\rho}\|_{L^2}<B$. Moreover, we will have $\|\rho^A(\cdot, t)-\bar{\rho}\|_{L^2}\leqslantqslant 2B$ for every $t\in [t_0,t_0+\tau]$. This will prove the theorem, as the argument can be applied repeatedly each time the $L^2$ norm reaches the level $B$, showing that $\|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}\leqslantqslant 2B$ for all times. Denote $\lambdambda_n$ the eigenvalues of $-\Delta$ on $\mathbb{T}^d$ in an increasing order, $0=\lambdambda_1\leqslantqslant \lambdambda_2\leqslantqslant \dots\leqslantqslant \lambdambda_n\leqslantqslant \dots$ Choose $N$ so that \begin{equation}\lambdabel{30th} \lambdambda_N\geqslantqslant 16C_0(2B)^{\frac{4}{4-d}}+32\bar{\rho}, \end{equation} where $C_0$ is the constant appearing in (\ref{L2H1}). Observe that $\lambdambda_NB^2>B_1^2$. Define the compact set $K\subset S$ by $$ K=\{\phi\in S\big| \|\phi\|_{\dot{H}^1}^2\leqslantqslant \lambdambda_N \} $$ (recall $S$ is the unit sphere in $L^2$). Let $U$ be the unitary operator associated with our weakly mixing flow $u$ as above. Fix $\sigma=0.01$. Let $T_c(N,\sigma,K,U)$ be the time threshold provided by Lemma \ref{6.3}. We proceed to impose the first condition on $A_0(\rho_0,u)$. We define $\tau$ as below and require that \begin{equation}\lambdabel{31th} \tau\equiv \frac{T_c(N,\sigma,K,U)}{A}\leqslantqslant C_1\min\leqslantft(1,\bar{\rho}^{-1}, B^{-\frac{4}{4-d}}\right) \end{equation} for every $A\geqslantqslant A_0$, where $C_1$ is the constant appearing in Proposition \ref{L2decay} in (\ref{18th}). It follows from Proposition \ref{L2decay} and Theorem \ref{L2cr} that $\|\rho^A(\cdot, t)-\bar{\rho})\|_{L^2}\leqslantqslant 2B$ for $t\in [t_0, t_0+\tau]$ and so $\rho^A$ remains smooth on this time interval. Let us introduce a short-cut notation $\phi_0(x)=\rho^A(x,t_0)$. Let $\eta^A(x,t)$ be the solution of the equation $$ \partial_t\eta^A+A(u\cdot\nabla)\eta^A=0, \quad \eta^A(x,0)=\phi_0. $$ Then $\eta^A(x,t)=U^{At}\phi_0$, and we have \begin{equation}\lambdabel{32th} \frac{1}{\tau}\int_0^{\tau}\|P_N\eta^A(x,t)\|_{L^2}^2dt=\frac{1}{\tau}\int_0^{\tau}\|P_NU^{At}\phi_0\|_{L^2}^2dt=\frac{1}{A\tau}\int_0^{A\tau}\|P_NU^s\phi_0\|_{L^2}^2ds\leqslantqslant \sigma B^2. \end{equation} Here we applied Lemma \ref{6.3} to the vector $\phi_0/\|\phi_0\|_{L^2}$. 
Indeed, $\phi_0/\|\phi_0\|_{L^2}\in K$ since by (\ref{30th}) we have $$ \|\phi_0\|_{\dot{H}^1}^2\leqslant B_1^2<\lambda_N B^2<\lambda_N\|\phi_0\|_{L^2}^2. $$ Note also that (\ref{31th}) ensures the applicability of Lemma \ref{6.3} and the validity of the last bound in (\ref{32th}). We now impose the second and last condition on $A_0$. It will be convenient for us now to denote by $t$ the time elapsed since $t_0$. By the approximation Lemma \ref{5.2}, and since we know that $\|\rho^A(\cdot, t_0)\|_{\dot{H}^1}^2\leqslant B_1<\lambda_N B^2$, as well as $\|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{L^2}\leqslant 2B$, we have \begin{equation}\label{33th} \frac{d}{dt}\|\rho^A(\cdot, t_0+t)-\eta^A(\cdot, t)\|^2_{L^2}\leqslant 4\lambda_N B^2F(At)^2+CB^2(B^{\frac{6}{6-d}}+\bar{\rho}^2) \end{equation} for all $t\in [0,\tau]$. Here $\tau=T_c/A$ as before. Choose $A_0$ so that \begin{equation}\label{34th} \frac{4\lambda_N}{A}\int_0^{T_c}F(s)^2ds +C\tau(B^{\frac{6}{6-d}}+\bar{\rho}^2)\leqslant 0.01 \end{equation} for every $A\geqslant A_0$. Note that since $u$ is smooth, $F(t)$ is a locally bounded function. We claim that if $A\geqslant A_0$, then $\|\rho^A(\cdot, t_0+\tau)-\bar{\rho}\|_{L^2}\leqslant B$. First, the condition (\ref{34th}) allows us to control $\|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{L^2}$ more tightly, which is convenient. Indeed, since $\|\eta^A(\cdot, t)\|_{L^2}=\|\phi_0\|_{L^2}=B$ for all $t\geqslant 0$, (\ref{33th}) and (\ref{34th}) imply that \begin{equation}\label{35th} 0.9B\leqslant \|\rho^A(\cdot, t_0+t)\|_{L^2}\leqslant 1.1B \end{equation} for $t\in [0,\tau]$. Furthermore, by the estimates (\ref{32th}), (\ref{33th}) and (\ref{34th}) we have \begin{align*} &\frac{1}{\tau}\int_0^{\tau}\|P_N\rho^{A}(\cdot, t_0+t)\|_{L^2}^2dt\leqslant \frac{2}{\tau}\int_0^{\tau}\|P_N\eta^A(\cdot, t)\|_{L^2}^2dt\\ &\quad\quad+\frac{2}{\tau}\int_0^{\tau}\|P_N(\rho^A(\cdot,t_0+t)-\eta^A(\cdot,t))\|_{L^2}^2dt\leqslant \frac{B^2}{25}. \end{align*} Combining this estimate with \eqref{35th} we obtain \begin{equation}\label{36th} \frac{1}{\tau}\int_0^{\tau}\|\rho^A(\cdot,t_0+t)\|_{\dot{H}^1}^2dt\geqslant \frac{1}{\tau}\int_0^{\tau}\lambda_N\|(I-P_N)\rho^A(\cdot,t_0+t)\|_{L^2}^2dt\geqslant \frac{1}{2}\lambda_NB^2. \end{equation} Now we come back to (\ref{L2H1}): $$ \partial_t\|\rho^A-\bar{\rho}\|_{L^2}^2\leqslant - \|\rho^A\|_{\dot{H}^1}^2+C_0\|\rho^A-\bar{\rho}\|_{L^2}^{\frac{12-2d}{4-d}}+2\bar{\rho}\|\rho^A-\bar{\rho}\|_{L^2}^2. $$ By the estimates (\ref{36th}), (\ref{35th}) and (\ref{30th}), we have \begin{equation}\label{37th} \begin{split} \|\rho^A(\cdot, t_0+\tau)-\bar{\rho}\|_{L^2}^2&\leqslant B^2+\tau\left(-\frac{1}{2}\lambda_NB^2+C_0(2B)^{\frac{12-2d}{4-d}}+2\bar{\rho}(2B)^2\right)\\ &\leqslant \left(1-\frac{1}{4}\tau\lambda_N\right)B^2\leqslant B^2. \end{split} \end{equation} This completes the proof. \end{proof} We see that the bound we obtained on the decay of the $L^2$ norm in (\ref{37th}) is stronger than what we needed. In fact, with slightly more effort we can obtain stronger results. We now present an extension of Theorem \ref{6.2} that establishes a complete analog of ``relaxation enhancement'' established in \cite{constantin2008diffusion} for the diffusion-advection equation for the case that also includes chemotaxis.
Namely, we show that not only can fluid flow prevent finite time blow up, but in fact it can enforce convergence of the solution to its mean in the long time limit. Intense fluid flow can also act to create an arbitrarily strong and fast drop of $\|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}$. \begin{thm}\label{6.4} Suppose $0 \leqslant \rho_0 \in C^{\infty}(\mathbb{T}^d)$, and let $\rho^A(x,t)$ be the solution of the equation (\ref{28th}). Let $u$ be a smooth, incompressible, relaxation enhancing flow. If $A_0(\rho_0,u)$ is the threshold value defined in the proof of Theorem \ref{6.2}, then for every $A\geqslant A_0$, we have \begin{equation}\label{38th} \|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}\rightarrow 0 \end{equation} as $t\rightarrow \infty$. The convergence rate is exponential in time, and can be made arbitrarily fast by increasing the value of $A$. Namely, for every $\delta>0$ and $\kappa>0$, there exists $A_1=A_1(\rho_0,u,\kappa, \delta)$ such that if $A\geqslant A_1$, then \begin{equation}\label{39th} \|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}\leqslant \|\rho_0-\bar{\rho}\|_{L^2}e^{-\kappa t} \end{equation} for all $t\geqslant \delta$. \end{thm} \begin{proof} Let us prove (\ref{39th}), since (\ref{38th}) follows from similar (and easier) arguments. The proof largely follows the argument in the proof of Theorem \ref{6.2}, but let us outline the necessary adjustments. We set $B_0=\|\rho_0-\bar{\rho}\|_{L^2}$. Choose $N$ so that \begin{equation}\label{40th} \lambda_N\geqslant \max\left(100\kappa, 16C_0(2B_0)^{\frac{4}{4-d}}+32\bar{\rho}\right). \end{equation} Define the set $K$, as before, by $\{\phi\in S\big| \|\phi\|_{\dot{H}^1}^2\leqslant \lambda_N\}$. For all times $t$ where $\rho^A(x,t)/\|\rho^A(\cdot,t)\|_{L^2}\notin K$, we have $\|\rho^A(\cdot, t)\|_{\dot{H}^1}^2\geqslant \lambda_N\|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}^2$. It follows from (\ref{L2H1}) and (\ref{40th}) that at such times we have \begin{equation}\label{41th} \begin{split} \partial_t\|\rho^A-\bar{\rho}\|_{L^2}^2&\leqslant - \|\rho^A\|_{\dot{H}^1}^2+C_0\|\rho^A-\bar{\rho}\|_{L^2}^{\frac{12-2d}{4-d}}+2\bar{\rho}\|\rho^A-\bar{\rho}\|_{L^2}^2\\ &\leqslant \left(-\lambda_N+C_0(2B_0)^{\frac{4}{4-d}}+2\bar{\rho}\right)\|\rho^A-\bar{\rho}\|_{L^2}^2\leqslant -\frac{1}{2}\lambda_N\|\rho^A-\bar{\rho}\|_{L^2}^2. \end{split} \end{equation} Here in the second inequality we used that $\|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}\leqslant 2B_0$ for all times, which, as we know from the proof of Theorem \ref{6.2}, we can ensure by making $A$ sufficiently large; we note that this bound will also follow from our argument below. Thus on the time intervals where $\rho^A(x,t)/\|\rho^A(x,t)\|_{L^2}\notin K$ we have exponential decay of $\|\rho^A(\cdot, t)-\bar{\rho}\|_{L^2}$ at a rate that would imply (\ref{39th}) if all times were like that. Suppose now that $t_0$ is the smallest time such that $\rho^A(x,t_0)\in K$ ($t_0$ could equal $0$). Let $T_c(N,\sigma,K,U)$ be the time threshold provided by Lemma \ref{6.3} (we set $\sigma=0.01$ as before). Repeat all the steps in the proof of Theorem \ref{6.2} from defining the time step $\tau$ (\ref{31th}) to (\ref{37th}), with $B$ replaced by $\|\rho^A(\cdot, t_0)-\bar{\rho}\|_{L^2}$. In addition, require that $A$ is large enough so that \begin{equation}\label{42th} \tau=\frac{T_c(N,\sigma,K,U)}{A}\leqslant \delta/2.
\end{equation} We arrive at the estimate \begin{equation}\label{43th} \begin{split} \|\rho^A(\cdot, t_0+\tau)-\bar{\rho}\|_{L^2}^2&\leqslant \left(1-\frac{1}{4}\lambda_N\tau \right)\|\rho^A(\cdot, t_0)-\bar{\rho}\|_{L^2}^2\\ &\leqslant e^{-\frac{1}{4}\lambda_N\tau}\|\rho^A(\cdot, t_0)-\bar{\rho}\|_{L^2}^2. \end{split} \end{equation} Note that even though we do not control the $L^2$ norm of the solution for all times $t$, from (\ref{L2H1}) and (\ref{40th}) it is clear that for every $t\in [t_0,t_0+\tau]\equiv I_0$, we have \begin{equation}\label{44th} \|\rho^A(\cdot, t)-\bar{\rho}\|_{L^2}^2\leqslant e^{\frac{1}{8}\lambda_N (t-t_0)}\|\rho^A(\cdot, t_0)-\bar{\rho}\|_{L^2}^2. \end{equation} We continue further in time in a similar fashion. If $\rho^A(x,t)/\|\rho^{A}(\cdot,t)\|_{L^2}\notin K$, we have (\ref{41th}). On the other hand, if $$ t_n =\inf \{t\big| t\geqslant t_{n-1}+\tau, \rho^A(x,t)/\|\rho^{A}(\cdot,t)\|_{L^2}\in K\}, $$ we can apply Lemma \ref{6.3} and Lemma \ref{5.2} on $I_n\equiv [t_n, t_n+\tau]$ obtaining \begin{equation}\label{45th} \begin{split} &\|\rho^A(\cdot, t_n+\tau)-\bar{\rho}\|_{L^2}^2\leqslant e^{-\frac{1}{4}\lambda_N \tau}\|\rho^A(\cdot, t_n)-\bar{\rho}\|_{L^2}^2,\\ &\|\rho^A(\cdot,t)-\bar{\rho}\|_{L^2}^2\leqslant e^{\frac{1}{8}\lambda_N(t-t_n)}\|\rho^A(\cdot,t_n)-\bar{\rho}\|_{L^2}^2,\quad \mbox{ for every } t\in [t_n,t_n+\tau]. \end{split} \end{equation} Now given any $t\geqslant \delta$, we can represent $$ [0,t]=W\cup (\cup_{l=0}^n I_l), $$ where $W$ is the set of times in $[0,t]$ outside all $I_l$. Note that (\ref{41th}) holds for every $s\in W$. Combining (\ref{41th}) and (\ref{45th}), we infer that for every $t\geqslant \delta$, we have $$ \|\rho^A(\cdot, t)-\bar{\rho}\|_{L^2}^2\leqslant e^{-\frac{1}{4}\lambda_N(t-\frac{\delta}{2})}e^{\frac{1}{8}\lambda_N\frac{\delta}{2}}\|\rho_0-\bar{\rho}\|_{L^2}^2\leqslant e^{-\frac{1}{8}\lambda_N t}\|\rho_0-\bar{\rho}\|_{L^2}^2. $$ This proves (\ref{39th}). \end{proof} \section{Yao-Zlatos flow} In this section, we describe another class of flows that are capable of suppressing the chemotactic explosion. These flows arise as examples of almost perfect mixers satisfying some sort of natural constraints. They have the advantage of being somewhat more explicitly defined than the RE flows, which may be harder to picture. For this reason we will be able to get an explicit estimate, albeit rather weak, for the intensity of mixing necessary to arrest the blow up as a function of the $L^2$ norm of the initial data. For the RE flows, such an estimate would be difficult to obtain, primarily due to the challenge of estimating the time $T_c$ from Lemma \ref{6.3}. A quantitative estimate on $T_c$ would require a delicate spectral analysis of the operator $u\cdot \nabla$, something that for the moment is out of reach. On the other hand, in contrast to the RE flows, Yao-Zlatos flows are time dependent and active: their construction depends on the density being mixed. We remark that a related class of efficient mixing flows has also been considered in \cite{alberti2014exponential}. We refer to \cite{yao2014mixing} for a detailed discussion of different notions of mixing and the general background, and for further references. For our purpose here, we need one particular result from \cite{yao2014mixing}, which we now explain. Let $\mathbb{T}^2\equiv [-1/2,1/2)^2$.
Consider the dyadic partition of $\mathbb{T}^2$ with squares $Q_{nij}$ given by \begin{equation}\label{46th} Q_{nij}=\left[\frac{i}{2^n},\frac{i+1}{2^n}\right]\times\left[\frac{j}{2^n},\frac{j+1}{2^n}\right],\quad i,j=-2^{n-1},\dots,2^{n-1}-1. \end{equation} Suppose $f_0\in C^{\infty}(\mathbb{T}^2)$ and is mean zero, and $u$ is an incompressible flow which is Lipschitz regular in the spatial variables. Let $f(x,t)$ denote the solution of the transport equation \begin{equation}\label{47th} \partial_t f+(u\cdot \nabla)f=0, \quad f(x,0)=f_0(x). \end{equation} \begin{thm}[Yao-Zlatos]\label{7.1} Given any $\kappa, \varepsilon\in (0,1/2]$, there exists an incompressible flow $u$ such that the following holds. \begin{equation}\label{48th} 1.\quad \|\nabla u(\cdot,t)\|_{L^{\infty}}\leqslant 1, \quad \mbox{ for every }t. \end{equation} 2. Let $n=[|\log_2(\kappa\varepsilon)|]+2$, where $[x]$ denotes the integer part of $x$. Then for some $$ \tau_{\kappa,\varepsilon}\leqslant C\kappa^{-1/2}|\log(\kappa\varepsilon)|^{3/2}, $$ and every $Q_{nij}$ as in (\ref{46th}) we have \begin{equation}\label{49th} \left|\frac{1}{|Q_{nij}|}\int_{Q_{nij}}f(x,\tau_{\kappa,\varepsilon})\,\, dx\right|\leqslant \kappa \|f_0\|_{L^{\infty}}. \end{equation} \end{thm} This theorem provides a flow $u$ that satisfies the uniform-in-time Lipschitz constraint (\ref{48th}) and mixes the initial density $f_0$ to scale $\varepsilon$ with error at most $\kappa$ in time $\tau_{\kappa,\varepsilon}$. The construction in \cite{yao2014mixing} employs a multi-scale cellular flow. At the $n$th stage of the construction, the goal is to make the mean of the function on each of the $Q_{nij}$ close to zero, starting with $n=1$. This is achieved by cellular flows which are designed to have the same rotation time period on streamlines away from a thin boundary layer. Such an arrangement makes the evolution of the density in each cell amenable to fairly precise control, and makes it possible to ensure that the ``nearly mean zero'' property of the solution propagates to smaller and smaller scales. We refer to \cite{yao2014mixing} for the details. Below, we outline only some adjustments that are needed to obtain Theorem~\ref{7.1} from the arguments in \cite{yao2014mixing}, since it is not stated there in the precise form that we need. \begin{proof} Theorem \ref{7.1} is essentially Theorem 4.3 from \cite{yao2014mixing} (or rather Theorem 5.1, which deals with the periodic boundary conditions instead of no flow, but Theorem 5.1 is a direct corollary of Theorem 4.3). We re-scaled time compared to Theorem 4.3 from \cite{yao2014mixing} to make (\ref{48th}) hold. We also replaced the conclusion of the $\varepsilon$-scale mixing (defined in \cite{yao2014mixing}) with how it is actually proved: (\ref{49th}) follows directly from (4.8) and the next estimate in \cite{yao2014mixing}, as well as the choice of $\delta$ immediately below these two estimates. \end{proof} Here is the key corollary of Theorem \ref{7.1} that we will use in our proof. \begin{cor}\label{7.2} Let $f_0\in C^{\infty}(\mathbb{T}^2)$ be a mean zero function.
For every $\varepsilon>0$ there exists a Yao-Zlatos flow $u(x,t)$ given by Theorem~\ref{7.1}, such that $\|\nabla u\|_{L^{\infty}}\leqslant 1$ for every $t$ and the solution $f(x,t)$ of the equation \eqref{47th} satisfies \begin{equation}\label{50th} \|f(\cdot, \tau)\|_{\dot{H}^{-1}}\leqslant C_3\|f_0\|_{L^{\infty}}\varepsilon \end{equation} for some \begin{equation}\label{51th} \tau\leqslant C_2\varepsilon^{-1/2}|\log\varepsilon|^{3/2}. \end{equation} Here $C_{2,3} \geqslant 1$ are universal constants. \end{cor} \begin{proof} To derive this corollary from Theorem \ref{7.1}, let us set $\kappa=\varepsilon$. We need to address a couple of issues. The first one is the connection between (\ref{49th}) and the $\dot{H}^{-1}$ norm of the solution. \begin{lem}\label{7.3} Let $f\in C^{\infty}(\mathbb{T}^2)$ be mean zero. Fix $\varepsilon>0$ and suppose that $$ \left|\frac{1}{|Q_{nij}|}\int_{Q_{nij}}f(x)\, dx\right|\leqslant \varepsilon \|f\|_{L^{\infty}} $$ for some $n\geqslant [|\log_2\varepsilon|]$ and $i,j=-2^{n-1}, \dots, 2^{n-1}-1$. Then \begin{equation}\label{52th} \|f\|_{\dot{H}^{-1}}\leqslant C_3\|f\|_{L^{\infty}}\varepsilon. \end{equation} \end{lem} \begin{proof} The proof is by duality. Take any $g\in \dot{H}^1$. Since $f$ is mean zero, without loss of generality we can assume that $g$ is also mean zero. Then, denoting by $\bar{g}_{Q_{nij}}$ the average value of $g$ over $Q_{nij}$, we have \begin{align*} &\left|\int_{\mathbb{T}^2}fg\, dx\right|=\left|\sum_{i,j}\int_{Q_{nij}}fg\, dx\right|\leqslant \left|\sum_{i,j}\int_{Q_{nij}}f(x)(g(x)-\bar{g}_{Q_{nij}})\, dx\right|+\\ &\left|\sum_{i,j}\bar{g}_{Q_{nij}}\int_{Q_{nij}}f(x)\, dx\right| \leqslant \sum_{i,j}\|f\|_{L^2(Q_{nij})}\|g-\bar{g}_{Q_{nij}}\|_{L^2(Q_{nij})}+\\ &\varepsilon \sum_{i,j}|\bar{g}_{Q_{nij}}| |Q_{nij}|\|f\|_{L^{\infty}}\leqslant C2^{-n}\sum_{i,j}\|f\|_{L^2(Q_{nij})}\|\nabla g\|_{L^2(Q_{nij})}+\varepsilon \|f\|_{L^{\infty}}\|g\|_{L^1}\\ &\leqslant C\varepsilon \|f\|_{L^2}\|g\|_{\dot{H}^1}+\varepsilon \|f\|_{L^{\infty}}\|g\|_{\dot{H}^1} \leqslant C\varepsilon \|f\|_{L^{\infty}}\|g\|_{\dot{H}^1}. \end{align*} Here we used the Poincar\'e inequality in the penultimate and last steps, and the Cauchy-Schwarz inequality in the last step. This proves the lemma. \end{proof} Another technical aspect we need to discuss is the smoothness of $u$. The construction in \cite{yao2014mixing} does not explicitly control higher order derivatives of $u$ beyond the Lipschitz condition $\|\nabla u\|_{L^{\infty}}\leqslant 1$. However, it is not difficult to see that a properly mollified velocity field will have the same mixing properties up to renormalization by some universal constant. Let $\phi$ be a mollifier, $\phi\geqslant 0$, $\phi\in C^{\infty}(\mathbb{T}^2)$, supp$(\phi)\subset B_{1/4}(0)$, $\int_{\mathbb{T}^2}\phi (x)\, dx=1$. Denote $\phi_{\delta}(x)= \delta^{-d}\phi(x/\delta)$, and $u_{\delta}(x)=\phi_{\delta}\ast u(x)$. \begin{lem}\label{7.4} Suppose that an incompressible vector field $u(x,t)$ satisfies $\|\nabla u\|_{L^{\infty}}\leqslant D$ for all $x,t$. Assume $f_0\in C^{\infty}(\mathbb{T}^2)$, and denote by $f(x,t)$ and $f_{\delta}(x,t)$ the solutions of the transport equation (\ref{47th}) with velocity $u$ and mollified velocity $u_{\delta}$ respectively. Fix any $T>0$.
Then as $\delta\rightarrow 0$, we have $\|f(x,t)-f_{\delta}(x,t)\|_{L^2}\rightarrow 0$ uniformly in $t\in [0,T]$. \end{lem} \begin{proof} The proof of this lemma is standard and elementary. We provide a brief sketch. First note that $\|\nabla u_{\delta}(\cdot, t)\|_{L^{\infty}}\leqslant \|\nabla u(\cdot, t)\|_{L^{\infty}}$. Consider $\Phi_t(x)$ and $\Phi_{t,\delta}(x)$, the trajectory maps corresponding to $u$ and $u_{\delta}$. A straightforward estimate based on the Gronwall lemma gives $$ |\Phi_t(x)-\Phi_{t,\delta}(x)|\leqslant \delta e^{tD}. $$ Reversing time, we find that the same holds for the inverse maps: $$ |\Phi^{-1}_t(x)-\Phi_{t,\delta}^{-1}(x)|\leqslant \delta e^{tD}. $$ So we get $$ \int_{\mathbb{T}^2}|f(x,t)-f_{\delta}(x,t)|^2\, dx=\int_{\mathbb{T}^2}|f_0(\Phi_t^{-1}(x))-f_0(\Phi_{t,\delta}^{-1}(x))|^2\, dx\leqslant \delta^2\|\nabla f_0\|_{L^{\infty}}^2e^{2tD}. $$ \end{proof} To complete the proof of the corollary, note now that we can choose $\delta = \delta(f_0)$ small enough so that $$ \|f_{\delta}(\cdot,\tau)-f(\cdot,\tau)\|_{\dot{H}^{-1}}\leqslant \|f_{\delta}(\cdot,\tau)-f(\cdot,\tau)\|_{L^{2}}\leqslant C_3\|f_0\|_{L^{\infty}}\varepsilon. $$ Then if $u(x,t)$ is the Yao-Zlatos vector field yielding (\ref{50th}), we can take $u_{\delta}$ as our smooth flow and get that (\ref{50th}) holds with the renormalized constant $2C_3$. \end{proof} Before stating our main result on Yao-Zlatos flows, we need one more auxiliary result. In our scheme, it is convenient to work with the $L^2$ norm of the solution. However, Corollary \ref{7.2} involves the $L^{\infty}$ norm, so we need some control over it. We could get it from (\ref{17th}) and Sobolev embedding. However, the bound in \eqref{17th} involves norms of higher order derivatives of $u$ and would result in weaker estimates for the flow intensity needed to suppress blow up. We prefer to estimate the $L^{\infty}$ norm of the solution directly. Let $\rho(x,t)$ be the solution of (\ref{chemo1}): $$ \partial_t\rho+(u\cdot\nabla) \rho -\Delta \rho + \nabla\cdot(\rho \nabla (-\Delta)^{-1}(\rho-\bar{\rho}))=0,\quad \rho(x,0)=\rho_0(x),\quad x\in \mathbb{T}^d, $$ where $u(x,t)$ is smooth and incompressible. \begin{prop}\label{7.5} Let $0 \leqslant \rho_0\in C^{\infty}(\mathbb{T}^2)$. Suppose that $\|\rho(\cdot, t)-\bar{\rho}\|_{L^2}\leqslant 2B$ for all $t\in [0,T]$ and some $B\geqslant 1$. Then we also have $\|\rho(\cdot,t)-\bar{\rho}\|_{L^{\infty}}\leqslant C_4 B\max(B,\bar{\rho}^{1/2})$ for some universal constant $C_4$ and all times $t\in [0,T]$. \end{prop} We postpone the proof of this proposition to the appendix. We are now ready to state the main theorem of this section. \begin{thm} Let $0\leqslant \rho_0\in C^{\infty}(\mathbb{T}^2)$, and suppose $\|\rho_0-\bar{\rho}\|_{L^2}<B$ for some $B>1$. Then there exists a smooth incompressible flow $u(x,t)$ with $\|\nabla u(\cdot, t)\|_{L^\infty} \leqslant A(B,\bar \rho)$ such that the solution $\rho(x,t)$ of the equation \eqref{chemo1} is globally regular. Here we can choose \begin{equation}\label{53th} A = C\exp\left(C(1+B+\bar{\rho}^{1/2})\big(\log(1+B+\bar{\rho}^{1/2})\big)^{3/2}\right) \end{equation} for some universal constant $C$.
The flow $u(x,t)$ can be represented as \begin{equation}\label{mixflowmultyz} u(x,t) = \sum_j A u_j(x,t) \chi_{I_j}(t), \end{equation} where $I_j$ are disjoint time intervals, and $u_j$ are Yao-Zlatos flows given by Corollary~\ref{7.2} with a certain $\varepsilon = \varepsilon(B,\bar \rho)>0$ and certain initial data. \end{thm} \begin{remark} We have to deploy different Yao-Zlatos flows in \eqref{mixflowmultyz} due to the fact that these flows are designed to mix specific initial data, and one can envision nonlinear dynamics attempting to break the $L^2$ barrier in different ways. \end{remark} \begin{remark} Similarly to Theorem~\ref{6.4} for the RE flows, one can use combinations of Yao-Zlatos flows to achieve stronger results, such as convergence of the solution to the mean, and at an increasingly fast rate if the flow amplitude is allowed to grow. We will not pursue these results here, instead leaving them to the interested reader. \end{remark} \begin{proof} The scheme of the proof is similar to the RE flow case, with Corollary \ref{7.2} replacing Lemma \ref{6.3} after some necessary adjustments. We start with $u=0$ on some initial time interval. If $\|\rho(\cdot, t)-\bar{\rho}\|_{L^2}\leqslant B$ for all $t$, global regularity follows. Suppose this is not the case, and let $t_0$ be the first time when $\|\rho(\cdot, t_0)-\bar{\rho}\|_{L^2}=B$. Similarly to the RE case, we also know that $\|\rho(\cdot,t_0)\|_{\dot{H}^1}\leqslant B_1$. In addition, Proposition \ref{7.5} ensures that $\|\rho(\cdot, t_0)-\bar{\rho}\|_{L^{\infty}}\leqslant C_4 B \max(B,\bar{\rho}^{1/2})$. Fix $\varepsilon$ given by \begin{equation}\label{54th} \varepsilon = \frac{1}{8C_3\sqrt{4C_0B^2+2\bar{\rho}+1}(1+C_4\max(B,\bar{\rho}^{1/2}))}. \end{equation} Take a flow $u$ guaranteed by Corollary \ref{7.2} corresponding to this value of $\varepsilon$ and the initial density $\rho(x,t_0)$, and let $\tau$ be the time in \eqref{51th}. Denote by $\eta^A(x,t)$ the solution of the equation \begin{equation}\label{freeetaA} \partial_t\eta^A+A(u\cdot \nabla)\eta^A=0, \quad \eta^A(x,0)=\eta_0\equiv \rho(x,t_0). \end{equation} Then by Corollary \ref{7.2} and Proposition \ref{7.5} we have \begin{equation}\label{55th} \|\eta^A(\cdot, \tau/A)-\bar{\rho}\|_{\dot{H}^{-1}}\leqslant C_3\|\eta_0-\bar{\rho}\|_{L^{\infty}}\varepsilon\leqslant \frac{B}{8\sqrt{4C_0B^2+2\bar{\rho}+1}}. \end{equation} We will set $u(x,t)=0$ in \eqref{freeetaA} for $t\geqslant \tau/A$. Now we are going to turn on the same flow $u(x,t)$ in the equation for $\rho^A$ at time $t_0$, for the duration $\tau/A$. Let us denote by $\rho^A(x,t_0+t)$ the solution of the equation $$ \partial_t\rho^A+A(u\cdot\nabla) \rho^A -\Delta \rho^A + \nabla\cdot(\rho^A \nabla (-\Delta)^{-1}(\rho^A-\bar{\rho}))=0,\quad \rho^A(x,0)=\rho(x,t_0). $$ The first condition that we are going to impose on $A$ is that \begin{equation}\label{56th} \frac{2\tau}{A}\leqslant C_1\min\big(1,\bar{\rho}^{-1},B^{-2}\big). \end{equation} Given \eqref{51th} and \eqref{54th}, it is easy to check that \eqref{56th} holds if \eqref{53th} is satisfied. Recall that given \eqref{56th}, Proposition \ref{L2decay} ensures \begin{equation}\label{57th} \|\rho^A(x,t_0+t)-\bar{\rho}\|_{L^2} \leqslant 2B \end{equation} for $t\in [0,2\tau/A]$, and the solution stays smooth on this time interval.
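For the reader's convenience, here is a rough version of the bookkeeping behind the claim that \eqref{53th} implies \eqref{56th} (with $C$ denoting a universal constant that may change from line to line). By \eqref{54th} we have $\varepsilon^{-1}\leqslant C(1+B+\bar{\rho}^{1/2})^2$, so \eqref{51th} yields $$ \tau\leqslant C_2\varepsilon^{-1/2}|\log \varepsilon|^{3/2}\leqslant C(1+B+\bar{\rho}^{1/2})\big(1+\log(1+B+\bar{\rho}^{1/2})\big)^{3/2}, $$ while \eqref{56th} only requires $A\geqslant 2C_1^{-1}\tau\max(1,\bar{\rho},B^2)$. The latter quantity is of polynomial-times-logarithmic size in $1+B+\bar{\rho}^{1/2}$, and is therefore dominated by the exponential bound in \eqref{53th}.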
Next, by the approximation Lemma \ref{5.2}, we have \begin{equation}\label{58th} \begin{split} \frac{d}{dt}\|\rho^A(\cdot, t_0+t)-\eta^A(\cdot, t)\|_{L^2}^2&\leqslant -\|\rho^A(\cdot,t_0+t)\|_{\dot{H}^1}^2+4\|\rho(\cdot, t_0)\|_{\dot{H}^1}^2\exp\left(2C\int_0^{At}\|\nabla u\|_{L^{\infty}}ds\right)\\ &+C\|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{L^2}^2\left(\|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{L^2}^{3}+\bar{\rho}^2\right)\\ &\leqslant 4B_1^2\exp(2CAt)+4CB^2\left((2B)^{3}+\bar{\rho}^2\right) \end{split} \end{equation} for every $t\in [0, 2\tau/A]$. Here we used (\ref{57th}) in the last step. Let us now impose the second condition on $A$, which says that it should be large enough so that \begin{equation}\label{59th} \frac{2B_1^2}{CA}\exp(4C\tau)+\frac{2\tau}{A}4CB^2\left((2B)^{3}+\bar{\rho}^2\right)\leqslant \frac{B^2}{64(4C_0B^2+2\bar{\rho}+1)}. \end{equation} Note that in particular (\ref{59th}) implies that \begin{equation}\label{60th} 0.8B\leqslant \|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{L^2}\leqslant 1.2B \end{equation} for $t\in [0,2\tau/A]$. Also, (\ref{59th}), (\ref{55th}) and (\ref{58th}) can be used to estimate that for every $t \in [\tau/A, 2\tau/A]$ we have \begin{align*} \|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{\dot{H}^{-1}} \leqslant \|\eta^A(\cdot, t) - \bar \rho\|_{\dot{H}^{-1}} + \|\rho^A(\cdot, t_0+t)-\eta^A(\cdot, t)\|_{L^2} \leqslant \\ \frac{B}{8\sqrt{4C_0B^2+2\bar{\rho}+1}}+\frac{B}{8\sqrt{4C_0B^2+2\bar{\rho}+1}}=\frac{B}{4\sqrt{4C_0B^2+2\bar{\rho}+1}}. \end{align*} Therefore, using \eqref{60th}, we obtain \begin{equation}\label{61th} \|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{\dot{H}^1}\geqslant \frac{\|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{L^2}^2}{\|\rho^A(\cdot, t_0+t)-\bar{\rho}\|_{\dot{H}^{-1}}}\geqslant 2B\sqrt{4C_0B^2+2\bar{\rho}+1} \end{equation} for all $t\in [\tau/A, 2\tau/A]$. The proof of global regularity is completed similarly to the proof of Theorem \ref{6.2} using (\ref{L2H1}), (\ref{60th}) and (\ref{61th}). Finally, the sufficiency of the condition (\ref{53th}) follows from a straightforward analysis of the bounds (\ref{54th}), (\ref{51th}), and (\ref{59th}). The approximation lemma constraint (\ref{59th}) is what truly determines the exponential form of (\ref{53th}). \end{proof} \begin{remark}\label{noslip} One can check that by using Theorem 5.4 from \cite{yao2014mixing} and similar arguments, we can change the periodic setting to the finite domain with no-slip or no-flow boundary conditions for $u(x,t)$, and obtain analogous results but with a somewhat weaker estimate for the flow intensity. We leave the details to the interested reader. \end{remark} \section{Discussion and generalization} The scheme developed in the previous sections of this paper should be flexible enough to be applied in different situations. Here we briefly and informally discuss the main features that appear to be necessary to apply our analysis. On the most general informal level, one can say that the idea of the scheme is that the fluid flow, if sufficiently intense and with strong mixing properties, can make a supercritical equation into a subcritical one for given initial data. It appears that for the scheme to be applicable to a nonlinearity $N(\rho)$ we need that for the solutions of the equation $$ \partial_t\rho+(u\cdot \nabla)\rho-\Delta \rho+N(\rho)=0, $$ either the mean or some norm of $\rho$ does not grow, or at least obeys a global, finite (even if growing) bound in time.
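As an illustration (a remark added here for concreteness), for the Keller-Segel nonlinearity $N(\rho)=\nabla\cdot(\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))$ studied in this paper the first requirement holds in the strongest possible form: the mean is conserved, since $$ \frac{d}{dt}\int_{\mathbb{T}^d}\rho\,dx=\int_{\mathbb{T}^d}\Big(\Delta\rho-\nabla\cdot(u\rho)-\nabla\cdot\big(\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho})\big)\Big)\,dx=0, $$ because every term under the integral is the divergence of a periodic field (here we used $(u\cdot\nabla)\rho=\nabla\cdot(u\rho)$, valid since $\nabla\cdot u=0$).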
Without such an assumption, it is difficult to rule out finite time blow up of the mean value of the solution, which a large $\dot{H}^1$ norm has no way to arrest. The second condition that is needed is a bound on the nonlinear term in the spirit of \begin{equation}\label{62th} \left|\int N(\rho)\rho \, dx\right|\leqslant Cf(\|\rho\|_{L^2})\|\rho\|_{\dot{H}^1}^a, \end{equation} with $a<2$. This would allow control of the $L^2$ norm growth by diffusion when the $\dot{H}^1$ norm is large. The third condition is that some analog of the approximation lemma holds. This seems to require bounds similar to (\ref{62th}). Of course, the scheme can also be adapted to the cases where diffusion is given by some dissipative operator other than the Laplacian, for example a sufficiently strong fractional Laplacian, in which case the $\dot{H}^1$ norm needs to be replaced by some other norm natural in the given context. It is also likely that the diffusion term does not have to be linear, even though this may require subtler analysis. As for other possible classes of flows that may have the blow up arresting property, the main clearly sufficient requirement for our scheme to be applicable appears to be as follows. First, the flows should be sufficiently regular and in particular satisfy a Lipschitz bound in the spatial variables. Secondly, for every $\varepsilon > 0$ it should be possible to find a flow $u_{\varepsilon}$ from the given class, with a uniform-in-time Lipschitz bound, such that for every $f_0\in C^{\infty}$ the solution $f(x,t)$ of the transport equation $$ \partial_t f+(u_\varepsilon \cdot \nabla)f=0, \quad f(x,0)=f_0(x) $$ satisfies \begin{equation}\label{63th} \|f(\cdot, \tau_{\varepsilon})\|_{\dot{H}^{-1}}\leqslant \varepsilon C(\|f_0\|_{L^\infty}) \end{equation} for some $\tau_{\varepsilon}<\infty$. Observe that even though we did not frame the discussion of the mixing effect of the RE flows in terms of the $\dot{H}^{-1}$ norm, Lemma \ref{6.3} clearly implies that (\ref{63th}) holds for the RE flows. There are other classes of flows that look likely to satisfy these properties, such as the optimal mixing flows discussed in \cite{lin2011optimal}. Decay of the $\dot{H}^{-1}$ norm is one of the standard measures of the mixing ability of a flow; hence we have a link between efficient mixing and suppression of blow up, which is quite natural. We refer to \cite{lin2011optimal}, \cite{lunasin2012optimal}, \cite{iyer2014lower}, \cite{seis2013maximal} for further discussion of the $\dot{H}^{-1}$ norm as a measure of mixing and some bounds on mixing rates for natural classes of flows. It also looks possible that some flows that do not in general lead to $\dot{H}^{-1}$ norm decay without diffusion can still be effective suppressors of blow up if diffusion is taken into account. A natural and common class to investigate here is given by families of stationary cellular flows. Furthermore, similarly to \cite{constantin2008diffusion}, our construction can be applied more generally to the case where the transport part of the equation is replaced by some other unitary evolution for which an analog of (\ref{63th}) holds. We plan to address some of these generalizations in future work. \section{Appendix I: Finite time blow up}\label{finbusec} In this section, our main result is a construction of examples where solutions to the Keller-Segel equation (\ref{chemo}) without advection, set on $\mathbb{T}^2$, blow up in finite time.
As we mentioned in the introduction, similar results are well known in slightly different settings. The argument below is included for the sake of completeness. It is closely related to the construction of \cite{nagai2001blowup}, but is simpler. The argument is essentially local and can be adapted to other situations as well. Although we will focus on the $d=2$ case, some auxiliary results that remain valid in every dimension will be presented in more generality. \begin{thm}\label{blowupt} There exist $\rho_0\in C^{\infty}(\mathbb{T}^2)$, $\rho_0\geqslant 0$, such that the corresponding solution $\rho(x,t)$ of the equation (\ref{chemo}) set on $\mathbb{T}^2$ blows up in finite time. \end{thm} Without loss of generality, we will assume that the spatial period of the initial data and the solution is equal to one, so $\mathbb{T}^2=[-1/2,1/2]^2$. Let us first state a lemma that will allow us to conveniently estimate the chemotactic term in the equation. \begin{lem}\label{2.2} Assume $\mathbb{T}^d=[-1/2,1/2]^d$, $d\geqslant 2$. For every $f(x)\in C^{\infty}(\mathbb{T}^d)$, we have \begin{equation}\label{infinsum} \nabla(-\Delta)^{-1}(f(x)-\bar{f})=-\frac{1}{c_d}\lim_{\gamma\rightarrow 0+}\int_{\mathbb{R}^d}\frac{(x-y)}{|x-y|^d}(f(y)-\bar{f})e^{-\gamma |y|^2}dy. \end{equation} Here on the right hand side $f(y)$ is extended periodically to all of $\mathbb{R}^d$, $\bar f$ denotes the mean value of $f$, and $c_d$ is the area of the unit sphere in $d$ dimensions. \end{lem} The expression (\ref{infinsum}) is of course valid for a broader class of $f$, but the stated result is sufficient for our purpose. \begin{proof} Without loss of generality, let us assume that $f$ is mean zero. By the definition and properties of the Fourier transform, we have $$ \nabla(-\Delta)^{-1}f(x)=-\sum_{k\in\mathbb{Z}^d,k\neq 0}e^{2\pi i k x}\frac{ik}{2\pi |k|^2}\hat{f}(k). $$ To link this expression with (\ref{infinsum}), observe first that for a smooth $f$, a straightforward computation shows that \begin{equation}\label{4} -\sum_{k\in\mathbb{Z}^d, k\neq 0} e^{2\pi i k x}\frac{ik}{2\pi |k|^2}\hat{f}(k)=-\lim_{\gamma\rightarrow 0+}\int_{\mathbb{R}^d}e^{2\pi i p x}\frac{ip}{2\pi |p|^2}\int_{\mathbb{R}^d}e^{-2\pi i p y-\gamma |y|^2}f(y) dydp, \end{equation} where the function $f(y)$ is extended periodically to the whole plane. Indeed, all we need to do is plug in the Fourier expansion $f(y)=\sum_{k\in \mathbb{Z}^d}e^{2\pi i k y}\hat{f}(k)$, integrate in $y$, and observe that $(\pi/\gamma)^{d/2}\exp(-\pi^2|k-p|^2/\gamma)$ is an approximation of the identity. On the other hand, recall that the inverse Laplacian $(-\Delta)^{-1}g$ of a sufficiently regular and rapidly decaying function $g$ is given by \begin{equation}\label{5th} \int_{\mathbb{R}^d}e^{2\pi i p x}\frac{1}{(2\pi|p|)^2}\int_{\mathbb{R}^d}e^{-2\pi i p y}g(y)\,dydp= \begin{cases} -\frac{1}{2\pi}\int_{\mathbb{R}^d}\log|x-y|g(y)dy & d=2;\\ \frac{1}{c_d}\int_{\mathbb{R}^d}|x-y|^{2-d}g(y)dy & d\geqslant 3. \end{cases} \end{equation} The expression on the right hand side of (\ref{4}), with the help of (\ref{5th}), can be written as \[ \mbox{Right hand side of }(\ref{4})=\lim_{\gamma\rightarrow 0+}\left\{ \begin{array}{ll} \frac{1}{2\pi}\int_{\mathbb{R}^2}\log|x-y|\nabla \left(f(y)e^{-\gamma|y|^2}\right)dy\quad \ \ d=2;\\ -\frac{1}{c_d}\int_{\mathbb{R}^d}|x-y|^{2-d}\nabla \left(f(y)e^{-\gamma|y|^2}\right)dy\quad d\geqslant 3. \end{array} \right. \] Integrating by parts, we obtain (\ref{infinsum}).
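For completeness, we record the standard Gaussian integral behind the approximation-of-the-identity claim above (a computation added here for the reader's convenience): for $\gamma>0$ and $k,p\in\mathbb{R}^d$, $$ \int_{\mathbb{R}^d}e^{2\pi i (k-p)\cdot y-\gamma|y|^2}\,dy=\Big(\frac{\pi}{\gamma}\Big)^{d/2}e^{-\pi^2|k-p|^2/\gamma}, $$ and as $\gamma\rightarrow 0+$ the right hand side, viewed as a function of $p$, concentrates at $p=k$ while its total integral in $p$ equals one.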
\end{proof} Suppose that the initial data $\rho_0$ is concentrated in a small ball $B_a$ of radius $a$ centered at the origin, so that $\int_{B_a}\rho_0\, dx=\int_{\mathbb{T}^2}\rho_0\, dx\equiv M$. Suppose that $\frac{1}{4}>b>2a$, $M>1$, and let $\phi$ be a cut-off function on scale $b$. Namely, assume that $\phi\in C^{\infty}(\mathbb{T}^2)$, $1\geqslant \phi(x)\geqslant 0$, $\phi=1$ on $B_b$, and $\phi=0$ on $B_{2b}^c$. The function $\phi$ can be chosen so that for any multi-index $\alpha\in \mathbb{Z}^2$, $|D^{\alpha}\phi|\leqslant Cb^{-|\alpha|}$. The parameters $a$, $M$ and $b$ will be chosen below. The local existence of a smooth solution $\rho(x,t)$ can be proved by standard methods; see e.g. \cite{kiselev2012biomixing} for a closely related argument in the $\mathbb{R}^2$ setting. It is straightforward to check using parabolic comparison principles that if $\rho_0\geqslant 0$, then $\rho(x,t)\geqslant 0$ for all $t\geqslant 0$. Also, we have $\int_{\mathbb{T}^2}\rho(x,t)\, dx=M,$ at least while $\rho(x,t)$ remains smooth. The first quantity we would like to consider is $\int_{\mathbb{T}^2}\rho(x,t)\phi(2x)\, dx$. We need an estimate showing that the mass cannot leave $B_b$ too quickly. We note that the constants $C_k$ employed later in this section are not related to the constants $C_k$ in the previous sections. \begin{lem}\label{2.3} Suppose that $a$, $b$, $\phi$, $M$ and $\rho_0$ are as described above. Assume that the local solution $\rho(x,t)$ exists and remains regular in the time interval $[0,\tau]$. Then we have \begin{equation}\label{6th} \int_{\mathbb{T}^2}\rho(x,t)\phi(2x)\, dx\geqslant M-C_1M^2b^{-2}t \end{equation} for every $t\in [0,\tau]$. \end{lem} Naturally, the bound (\ref{6th}) is only interesting if $t$ is sufficiently small. \begin{proof} We have $$ \partial_t \int_{\mathbb{T}^2} \rho(x)\phi(2x)\, dx=\int_{\mathbb{T}^2}\Delta \rho(x)\phi(2x)\, dx-\int_{\mathbb{T}^2} \phi(2x)\nabla\cdot(\rho(x)\nabla(-\Delta)^{-1}(\rho(x)-\bar{\rho}))\, dx. $$ First, using the periodic boundary conditions and integrating by parts, we find that \begin{equation}\label{7th} \left|\int_{\mathbb{T}^2} \Delta\rho(x)\phi(2x)\, dx\right|=4\left|\int_{\mathbb{T}^2}\rho(x)\Delta\phi(2x)\, dx\right|\leqslant CMb^{-2}. \end{equation} Next, let $\psi\in C^{\infty}_0(\mathbb{R}^2)$ be a cutoff function, $\psi(x)=1$ if $|x|\leqslant 1/2$, $\psi(x)=0$ if $|x|\geqslant 1$, $0\leqslant \psi(x)\leqslant 1$, $|\nabla\psi(x)|\leqslant C$. Using Lemma \ref{2.2}, we have \begin{align*} &\left|\int_{\mathbb{T}^2} \phi(2x)\nabla\cdot(\rho(x)\nabla(-\Delta)^{-1}(\rho(x)-\bar{\rho}))\, dx \right|\\ &=\frac{1}{\pi}\left|\int_{\mathbb{T}^2}\nabla \phi(2x)\rho(x,t)\lim_{\gamma\rightarrow 0+}\int_{\mathbb{R}^2}\frac{(x-y)}{|x-y|^2}(\rho(y,t)-\bar{\rho})e^{-\gamma|y|^2}\,dy dx\right|\\ &\leqslant C\left|\int_{\mathbb{T}^2}\nabla \phi(2x)\rho(x)\int_{\mathbb{T}^2}\frac{(x-y)}{|x-y|^2}(\rho(y)-\bar{\rho})\psi(y)\,dydx\right|\\ &+C\left|\int_{\mathbb{T}^2}\nabla \phi(2x)\rho(x)\lim_{\gamma\rightarrow 0+}\int_{\mathbb{R}^2}\frac{(x-y)}{|x-y|^2}(\rho(y)-\bar{\rho})e^{-\gamma |y|^2}(1-\psi(y))\,dydx\right|\\ &=C(I)+C(II). \end{align*} We passed to the limit $\gamma\rightarrow 0+$ in the first integral since the integral of the limit converges absolutely.
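Here and in the rest of this section, ``symmetrization'' refers to the following elementary identity, which we record explicitly for the reader's convenience: for a vector field $G$ and a scalar function $H$, $$ \int\int \rho(x)\rho(y)\frac{(x-y)\cdot G(x)}{|x-y|^2}H(y)\,dxdy=\frac{1}{2}\int\int \rho(x)\rho(y)\frac{(x-y)\cdot\big(G(x)H(y)-G(y)H(x)\big)}{|x-y|^2}\,dxdy, $$ which follows by exchanging the roles of $x$ and $y$ and averaging, since the kernel $(x-y)/|x-y|^2$ is antisymmetric under this exchange.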
Using symmetrization, we can estimate \begin{align*} (I)&\leqslant\bar{\rho}\left|\int_{\mathbb{R}^2}(\nabla\phi)(2x)\rho(x,t)\int_{\mathbb{R}^2}\frac{(x-y)}{|x-y|^2}\psi(y)\,dydx\right|\\ &+\int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \rho(x)\rho(y)\left|\left[\frac{(x-y)\cdot(\nabla\phi(2x)\psi(y)-\nabla \phi(2y)\psi(x))}{|x-y|^2}\right]\right|\,dxdy\\ &\leqslant CM^2b^{-1}+ \int_{B_1}\int_{B_1}\rho(x,t)\rho(y,t)(\|\nabla^2\phi\|_{L^{\infty}}+\|\nabla\phi\|_{L^{\infty}}\|\nabla\psi\|_{L^{\infty}})\,dxdy\\ &\leqslant CM^2 b^{-2}. \end{align*} We used the fact that supp$\,\phi\subset$ supp$\,\psi\subset B_1$ in the second step. Next let us estimate $(II)$. Note that in this case the kernel is not singular, since supp$\,\phi\subset B_{2b}$ while supp$(1-\psi)\subset B_{1/2}^c$. However, there is an issue of convergence of the $y$ integral over the infinite region. Suppose that $x\in B_{2b}$. Define the mean zero function $g$ via $\rho(y)-\bar{\rho}=\Delta g$. In fact, we have $\hat{g}(k)=-\hat{\rho}(k)(2\pi |k|)^{-2}$ for $k\neq 0$. By working on the Fourier side, it is easy to show that $\|g\|_{L^1(\mathbb{T}^2)}\leqslant \|g\|_{L^2(\mathbb{T}^2)}\leqslant C\|\rho\|_{L^1(\mathbb{T}^2)}$. Now we can estimate \begin{equation}\label{8th} \begin{split} &\lim_{\gamma\rightarrow 0+}\int_{\mathbb{R}^2}\frac{(x-y)}{|x-y|^2}(\rho(y,t)-\bar{\rho})e^{-\gamma|y|^2}(1-\psi(y))\,dy\\ &=\lim_{\gamma\rightarrow 0+}\int_{\mathbb{R}^2}\frac{(x-y)}{|x-y|^2}\Delta g(y,t)e^{-\gamma|y|^2}(1-\psi(y))\,dy\\ &=\lim_{\gamma\rightarrow 0+}\int_{\mathbb{R}^2}g(y,t)\Bigg( \Delta\left(\frac{(x-y)}{|x-y|^2}\right)e^{-\gamma|y|^2}(1-\psi(y))\\ &+2\nabla \left(\frac{(x-y)}{|x-y|^2}\right)\nabla\left(e^{-\gamma|y|^2}(1-\psi(y))\right)+\frac{(x-y)}{|x-y|^2}\Delta\left(e^{-\gamma|y|^2}(1-\psi(y))\right)\Bigg)\,dy. \end{split} \end{equation} Here $g(y,t)$ is extended periodically to the whole $\mathbb{R}^2$. Note that in the first summand in the last integral in (\ref{8th}) we can pass to the limit as $\gamma\rightarrow 0$ since $\Delta \left(\frac{(x-y)}{|x-y|^2}\right)$ decays sufficiently fast. For every $x\in B_b$, we obtain an integral which is bounded by $C\|g\|_{L^1}\sum_{n\in \mathbb{Z}^2,|n|>0}|n|^{-3}\leqslant CM$. It is straightforward to estimate that the last two summands in (\ref{8th}) are bounded by \begin{align*} C\int_{B_{1/2}^c}|g(y,t)|(\gamma|y|^{-1}+\gamma^2|y|)e^{-\gamma|y|^2}(1-\psi(y))\,dy\leqslant\\ C\|g\|_{L^1(\mathbb{T}^2)}\sum_{n\in \mathbb{Z}^2, |n|>0}(\gamma|n|^{-1}+\gamma^2|n|)e^{-\gamma|n|^2}\leqslant C\|g\|_{L^1(\mathbb{T}^2)}\gamma^{1/2}\xrightarrow{\gamma\rightarrow 0} 0. \end{align*} Combining these estimates, we see that $$ (II)\leqslant CM\int_{\mathbb{R}^2}|(\nabla \phi)(2x)|\rho(x,t)\,dx\leqslant CM^2b^{-1}. $$ Therefore, for all times where the smooth solution is still defined, and under our assumptions on the values of the parameters, we have $$ \left|\partial_t\int_{\mathbb{T}^2}\rho(x,t)\phi(2x)\,dx\right|\leqslant CM^2b^{-2}. $$ This implies (\ref{6th}) and finishes the proof of the lemma. \end{proof} Let us now consider the second moment $\int_{\mathbb{T}^2}|x|^2\rho(x,t)\phi(x)\,dx$. Closely related quantities are well-known tools to establish finite time blow up in the Keller-Segel equation; see e.g. \cite{perthame2006transport}, \cite{nagai2001blowup}. We have the following lemma.
\begin{lem}\label{2.4} Suppose $1/4\geqslant b>0$ and $\phi$ is a cutoff function on scale $b$ as described above. Let $\rho_0\in C^{\infty}(\mathbb{T}^2)$, and assume that the unique local smooth solution $\rho(x,t)$ to (\ref{chemo}) set on $\mathbb{T}^2$ is defined on $[0,T]$. Then for every $t\in[0,T]$ we have \begin{equation}\label{9th} \partial_t \int_{\mathbb{T}^2} |x|^2 \rho(x,t) \phi(x)\, dx\leqslant -\frac{1}{2\pi}\left(\int_{\mathbb{T}^2} \rho(x)\phi(x)\, dx\right)^2+C_2M\|\rho\|_{L^1(\mathbb{T}^2\setminus B_b)}+C_3bM^2+C_4M. \end{equation} \end{lem} \begin{proof} In the estimate below, we will use the formula (\ref{infinsum}) with $\gamma$ set to be zero. All the estimates can be done completely rigorously, similarly to the proof of Lemma \ref{2.3}; we will proceed with the formal computation to reduce repetitive technicalities. We have \begin{align*} \partial_t \int_{\mathbb{T}^2}|x|^2\rho(x,t)\phi(x)\, dx&=\int_{\mathbb{T}^2} |x|^2\Delta \rho(x)\phi(x)\, dx \\ &-\int_{\mathbb{T}^2}|x|^2\phi\nabla\cdot(\rho\nabla(-\Delta)^{-1}(\rho-\bar{\rho}))\, dx\\ &=4\int_{\mathbb{T}^2}\phi\rho \, dx+\int_{\mathbb{T}^2}|x|^2\Delta\phi\rho \, dx+4\int_{\mathbb{T}^2} (x\cdot \nabla \phi)\rho \, dx\\ &-\frac{1}{\pi}\int_{\mathbb{T}^2} \phi(x)\rho(x)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}(\rho(y)-\bar{\rho})\,dydx\\ &-\frac{1}{2\pi}\int_{\mathbb{T}^2}|x|^2\rho(x)\int_{\mathbb{R}^2}\frac{\nabla\phi(x)\cdot(x-y)}{|x-y|^2}(\rho(y)-\bar{\rho})\,dxdy\\ &\equiv(i)+(ii)+(iii). \end{align*} Here $(i)$ denotes the first three terms. By our choice of $\phi$, $(i)$ does not exceed $C_4M$ for some constant $C_4$. Next, let us write \begin{equation}\label{10th} \begin{split} (ii)=&-\frac{1}{\pi}\int_{\mathbb{T}^2} \phi(x)\rho(x,t)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}(\rho(y,t)-\bar{\rho})\psi(y)\,dydx\\ &-\frac{1}{\pi}\int_{\mathbb{T}^2} \phi(x)\rho(x,t)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}(\rho(y,t)-\bar{\rho})(1-\psi(y))\,dydx, \end{split} \end{equation} where $\psi$ is a cutoff function as in Lemma \ref{2.3}. The absolute value of the integral $$ \int_{\mathbb{T}^2} \phi(x)\rho(x,t)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}(\rho(y,t)-\bar{\rho})(1-\psi(y))\,dydx $$ can be controlled similarly to the estimates applied in bounding the term $(II)$ in the proof of Lemma \ref{2.3}, leading to an upper bound of $CM^2b$. Next, we can estimate $$ \left|\bar{\rho}\int_{\mathbb{T}^2}\phi(x)\rho(x,t)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}\psi(y)\,dxdy\right|\leqslant CM^2b $$ as well. Split the remaining part of the first integral in (\ref{10th}) into two parts: \begin{align*} &-\frac{1}{\pi}\int_{\mathbb{T}^2}\phi(x)\rho(x,t)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}\rho(y,t)\phi(y)\,dxdy\\ &-\frac{1}{\pi}\int_{\mathbb{T}^2}\phi(x)\rho(x,t)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}\rho(y,t)(1-\phi(y))\psi(y)\,dxdy. \end{align*} Using symmetrization, we obtain \begin{align*} &-\frac{1}{\pi}\int_{\mathbb{T}^2} \phi(x)\rho(x)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}\rho(y)\phi(y)\,dydx\\ &=-\frac{1}{2\pi}\int_{\mathbb{T}^2}\int_{\mathbb{T}^2}\phi(x)\rho(x)\phi(y)\rho(y)\,dxdy=-\frac{1}{2\pi}\left(\int_{\mathbb{T}^2} \rho(x)\phi(x)\,dx\right)^2.
\end{align*} On the other hand, \begin{align*} &\int_{\mathbb{T}^2}\phi(x)\rho(x,t)\int_{\mathbb{T}^2}\frac{x\cdot(x-y)}{|x-y|^2}\rho(y,t)(1-\phi(y))\psi(y)\,dxdy\\ &=\frac{1}{2}\int_{\mathbb{R}^2}\int_{\mathbb{R}^2} \frac{\rho(x,t)\rho(y,t)}{|x-y|^2}(x-y)\cdot[x\phi(x)(1-\phi(y))\psi(y)-y\phi(y)(1-\phi(x))\psi(x)]\,dxdy. \end{align*} Let us define $F(x,y)=x\phi(x)(1-\phi(y))\psi(y)-y\phi(y)(1-\phi(x))\psi(x)$. Observe that $F(x,y)=0$ on $B_b\times B_b$, $F(x,x)=0$ and $|\nabla F(x,y)|\leqslant C$ for all $x,y$. This means \begin{align*} |F(x,y)|=|F(x,y)-F(x,x)|&\leqslant \|\nabla F\|_{L^{\infty}}|x-y|\chi_{B_1\times B_1\setminus B_b\times B_b}(x,y)\\ &\leqslant C|x-y|\chi_{B_1\times B_1\setminus B_b\times B_b}(x,y), \end{align*} where $\chi_{S}(x,y)$ denotes the characteristic function of a set $S\subset \mathbb{R}^2\times \mathbb{R}^2$. Therefore, \begin{align*} &\int_{\mathbb{T}^2}\phi(x)\rho(x,t)\int_{\mathbb{R}^2}\frac{x\cdot(x-y)}{|x-y|^2}\rho(y)(1-\phi(y))\psi(y)\,dxdy\\ &\leqslant C\int\int_{B_1\times B_1\setminus B_b\times B_b}\rho(x,t)\rho(y,t)\,dxdy\\ &\leqslant CM\|\rho(\cdot,t)\|_{L^1(\mathbb{T}^2\setminus B_b)}. \end{align*} To summarize, $(ii)$ can be bounded above by $$ -\frac{1}{2\pi}\left(\int_{\mathbb{T}^2}\rho(x,t)\phi(x)\,dx\right)^2+CM\|\rho(\cdot, t)\|_{L^1(\mathbb{T}^2\setminus B_b)}+CbM^2. $$ Finally, let us estimate $(iii)$. Similarly to the previous part, we have $$ \int_{\mathbb{T}^2}|x|^2\rho(x,t)\int_{\mathbb{R}^2}\frac{\nabla\phi(x)\cdot (x-y)}{|x-y|^2}(\rho(y,t)-\bar{\rho})(1-\psi(y))\,dxdy\leqslant CbM^2. $$ Also, $$ \bar{\rho}\int_{\mathbb{T}^2}|x|^2\rho(x,t)\int_{\mathbb{R}^2}\frac{\nabla\phi(x)\cdot (x-y)}{|x-y|^2}(1-\psi(y))\,dxdy\leqslant CbM^2 $$ as well. We can estimate the remaining part of $(iii)$ by symmetrization: \begin{align*} &\int_{\mathbb{R}^2}|x|^2 \rho(x,t)\int_{\mathbb{R}^2}\frac{\nabla\phi(x)\cdot (x-y)}{|x-y|^2}\rho(y,t)\psi(y)\,dxdy\\ &=\frac{1}{2}\int_{\mathbb{R}^2}\int_{\mathbb{R}^2}\rho(x,t)\rho(y,t)\left(\frac{(x-y)\cdot (\nabla \phi(x)\psi(y)|x|^2-\nabla \phi(y)\psi(x)|y|^2)}{|x-y|^2}\right)\,dxdy. \end{align*} Observe that \begin{align*} ||x|^2\nabla\phi(x)\psi(y)-|y|^2\nabla\phi(y)\psi(x)|\leqslant C\chi_{B_{2b}\times B_{2b}\setminus B_b\times B_b}|x-y|. \end{align*} Therefore $$ (iii)\leqslant CbM^2+CM\|\rho\|_{L^1(\mathbb{T}^2\setminus B_b)}. $$ Combining the estimates of $(i)$, $(ii)$ and $(iii)$ yields (\ref{9th}), proving the lemma. \end{proof} We are now ready to complete the proof of Theorem \ref{blowupt}. \begin{proof}[Proof of Theorem \ref{blowupt}] Let us recall that we assume $1/4\geqslant b \geqslant 2a$, and the initial data $\rho_0$ is supported inside $B_a$. Assume that the unique solution $\rho(x,t)$ of (\ref{chemo}) set on $\mathbb{T}^2$ remains smooth for all $t$. Then by Lemma \ref{2.3}, and conservation of mass, for all $t\geqslant 0$ we have $$ \|\rho(\cdot, t)\|_{L^1(\mathbb{T}^2\setminus B_b)}\leqslant M-\int_{\mathbb{T}^2}\rho(x,t)\phi(2x)\,dx\leqslant C_1M^2 b^{-2}t. $$ Also, $$ \int_{\mathbb{T}^2}\rho(x,t)\phi(x)dx\geqslant \int_{\mathbb{T}^2}\rho(x,t)\phi(2x)\,dx\geqslant M-C_1M^2b^{-2}t. $$ Therefore, by Lemma \ref{2.4}, we have that \begin{align*} \partial_t \int_{\mathbb{T}^2} |x|^2 \rho(x,t) \phi(x)\,dx\leqslant -\frac{1}{2\pi}(M-C_1M^2b^{-2}t)^2+C_2M^3b^{-2}t\\ +C_3M^2b+C_4M \end{align*} for all $0\leqslant t\leqslant \frac{b^2}{C_1M}$.
We will now make the choice of all our parameters.\\ \noindent 1. Choose $b$ so that $C_3b\leqslant 0.001$.\\ \noindent 2. Choose $M$ so that $M\geqslant 1000 C_4$.\\ \noindent 3. Choose $a$ so that the following three inequalities hold: $$ a\leqslant b/2, \,\,\, a \leqslant \frac{b}{10\sqrt{2C_1}}, \mbox{ and } a\leqslant \frac{b}{100\sqrt{C_2}}. $$ \noindent 4. Choose the time $\tau=\frac{100a^2}{M}$. With this choice of parameters, it is straightforward to check that \begin{equation}\label{11th} \partial_t \int_{\mathbb{T}^2} |x|^2 \rho(x,t) \phi(x)\,dx\leqslant -\frac{M^2}{50} \end{equation} for every $t\in[0,\tau]$. But by assumption, supp$\,\rho_0\subset B_a$, and so \begin{equation}\label{12th} \int_{\mathbb{T}^2} |x|^2 \rho_0(x) \phi(x)\,dx\leqslant a^2 M. \end{equation} Together, (\ref{11th}), (\ref{12th}) and our choice of $\tau$ imply that $\int_{\mathbb{T}^2}|x|^2\rho(x,\tau)\phi(x)\,dx$ must be negative. This contradicts the assumption that $\rho(x,t)$ stays smooth throughout $[0,\tau]$. \end{proof} In fact, it is not hard to verify that finite time blow up persists if we add an advection term to the Keller-Segel equation, but change the order of the selection of $u$ and $\rho_0$ compared with the results on suppression of chemotactic explosion. Namely, the following theorem holds. \begin{thm}\label{buu} Consider the Keller-Segel equation \eqref{chemo1} set on $\mathbb{T}^2$. Suppose that $u(x,t) \in C^\infty(\mathbb{T}^2 \times [0,\infty))$ is incompressible. Then there exists $\rho_0 \in C^\infty(\mathbb{T}^2)$, $\rho_0 \geqslant 0$, such that the corresponding solution $\rho(x,t)$ of \eqref{chemo1} blows up in finite time. \end{thm} The proof of Theorem~\ref{buu} closely follows that of Theorem~\ref{blowupt}. The advection term can be easily estimated, and the proof requires only a few adjustments of constants. We omit the details. \section{Appendix II: Some inequalities} Here we first prove Proposition \ref{7.5}, and then sketch the proofs of the multiplicative inequalities (\ref{14th}) and (\ref{16th}). Let $\rho(x,t)$ be a solution of (\ref{chemo1}): $$ \partial_t\rho+(u\cdot\nabla) \rho -\Delta \rho + \nabla\cdot(\rho \nabla (-\Delta)^{-1}(\rho-\bar{\rho}))=0,\quad \rho(x,0)=\rho_0(x), $$ with some smooth incompressible vector field $u(x,t)$. \begin{prop}\label{9.1} Let $\rho_0\in C^{\infty}(\mathbb{T}^2)$. Suppose that $\|\rho(\cdot, t)-\bar{\rho}\|_{L^2}\leqslant 2B$ for all $t\in [0,T]$ and some $B\geqslant 1$. Then we also have $\|\rho(\cdot, t)-\bar{\rho}\|_{L^{\infty}}\leqslant C_4B\max(B,\bar{\rho}^{1/2})$ for some universal constant $C_4$ and all $t\in [0,T]$. \end{prop} \begin{proof} Observe first that by a direct computation, for every integer $p\geqslant 1$ we have \begin{equation}\label{64th} \begin{split} \partial_t \int_{\mathbb{T}^2}(\rho-\bar{\rho})^{2p}\,dx=-(4-\frac{2}{p})\int_{\mathbb{T}^2}|\nabla ((\rho-\bar{\rho})^p)|^2\,dx\\ +(2p-1)\int_{\mathbb{T}^2}(\rho-\bar{\rho})^{2p+1}\,dx+2p\bar{\rho}\int_{\mathbb{T}^2}(\rho-\bar{\rho})^{2p}\,dx. \end{split} \end{equation} To obtain (\ref{64th}), we need to multiply (\ref{chemo1}) by $2p(\rho-\bar{\rho})^{2p-1}$, integrate, and then simplify the obtained terms using integration by parts and the fact that $u$ is divergence free. Let us estimate $\|\rho-\bar{\rho}\|_{L^{2^n}}$ inductively. By assumption, we have $\|\rho(\cdot, t)-\bar{\rho}\|_{L^2}\leqslant 2B$ for all $t\in [0,T]$.
Assume that for some $n\geqslant 1$, we have $$ \|\rho-\bar{\rho}\|_{L^{2^n}}\leqslant \Upsilon_n \quad\mbox{for all }t\in [0,T]. $$ Let us derive an estimate for an upper bound $\Upsilon_{n+1}$ on $\|\rho-\bar{\rho}\|_{L^{2^{n+1}}}$ on $[0,T]$. For that purpose, let us set $p=2^n$ in (\ref{64th}) and let us define $f(x,t)=(\rho-\bar{\rho})^p\equiv (\rho-\bar{\rho})^{2^n}$. Then (\ref{64th}) implies \begin{equation}\label{65th} \partial_t\int_{\mathbb{T}^2} |f|^2\,dx\leqslant -2 \int_{\mathbb{T}^2}|\nabla f|^2\,dx+ 2^{n+1}\int_{\mathbb{T}^2}|f|^{2+2^{-n}}\,dx+2^{n+1} \bar{\rho}\int_{\mathbb{T}^2}|f|^2\,dx. \end{equation} Also, in terms of $f$, our induction assumption is that $\int_{\mathbb{T}^2}|f|\,dx\leqslant \Upsilon_n^{2^{n}}$. We will now need the following Gagliardo-Nirenberg inequality. \begin{lem}\label{9.2} Suppose $v\in C^{\infty}(\mathbb{T}^d)$, $d\geqslant 2$, and the set where $v$ vanishes is nonempty. Assume that $q,r>0$, $\infty>q>r$, and $\frac{1}{d}-\frac{1}{2}+\frac{1}{r}>0$. Then \begin{equation}\label{66th} \|v\|_{L^q}\leqslant C(d,q)\|\nabla v\|_{L^2}^a\|v\|_{L^r}^{1-a},\quad a=\frac{\frac{1}{r}-\frac{1}{q}}{\frac{1}{d}-\frac{1}{2}+\frac{1}{r}}. \end{equation} The constant $C(d,q)$ for a fixed $d$ is bounded uniformly when $q$ varies in any compact set in $(0,\infty)$. \end{lem} \begin{proof} This inequality is well known in the case $v\in C^{\infty}_0(\mathbb{R}^d)$; see e.g. \cite{maz2013sobolev}. A simple proof is contained in \cite{kiselev2012biomixing}. Going through the proof in \cite{kiselev2012biomixing}, it is not difficult to verify that the result still holds in the periodic case under the assumption that $v$ vanishes somewhere in $\mathbb{T}^d$ (which rules out increasing the mean value without increasing the variance). One can similarly trace the claim regarding the constant $C(d,q)$. We refer to \cite{kiselev2012biomixing} for details. \end{proof} Applying Lemma \ref{9.2} with $d=2$, $r=2$, and $q=2+2^{-n}$ yields \begin{equation}\label{67th} \|f\|_{L^{2+2^{-n}}}^{2+2^{-n}}\leqslant C \|\nabla f\|_{L^2}^{2^{-n}}\|f\|_{L^2}^2\leqslant \frac{1}{2^{n+1}}\|\nabla f\|_{L^2}^2+C\|f\|_{L^2}^{\frac{2}{1-2^{-n-1}}}, \end{equation} where we used Young's inequality in the last step. Moreover, we also have \begin{equation}\label{68th} \|f\|_{L^2}\leqslant C\|\nabla f\|_{L^2}^{1/2}\|f\|_{L^1}^{1/2}. \end{equation} Applying (\ref{68th}) and (\ref{67th}) to (\ref{65th}), we obtain \begin{equation}\label{69th} \partial_t \|f\|_{L^2}^2\leqslant -C_1\|f\|_{L^2}^4\|f\|_{L^1}^{-2}+C_2 2^{n+1}\|f\|_{L^2}^{\frac{2}{1-2^{-n-1}}}+2^{n+1}\bar{\rho}\|f\|_{L^2}^2, \end{equation} where $C_{1,2}$ are some fixed universal constants (not connected to $C_1$ and $C_2$ used earlier in the paper). Clearly, given the upper bound on $\|f\|_{L^1}$, the right hand side of (\ref{69th}) turns negative if $\|f\|_{L^{2}}$ becomes sufficiently large. Thus $\|f\|_{L^2}$ cannot cross this threshold.
Assuming without loss of generality that $\Upsilon_n\geqslant 1$ for all $n$, a direct computation shows that if $\|\rho-\bar{\rho}\|_{L^{2^{n+1}}}$ reaches the value $\Upsilon_{n+1}$ satisfying the following recursive equality, then the right hand side of (\ref{69th}) is negative: \[ \log \Upsilon_{n+1} = \max(\Gamma_n, \Theta_n), \] where \begin{equation}\label{70th} \Gamma_n =\frac{2^{n+1}-1}{2^{n+1}-2}\log \Upsilon_n+\frac{1}{2^{n+1}}((n+1)\log 2+\log C) \end{equation} \begin{equation}\label{71th} \Theta_n =\log \Upsilon_n + \frac{1}{2^{n+1}}((n+1)\log 2+\log C +\max (\log \bar{\rho},0)). \end{equation} Here $C\geqslant 1$ is some universal constant. Denote $q_j=\frac{2^{j+1}-1}{2^{j+1}-2}$ and observe that due to telescoping, $$ \prod_{j=1}^n q_j = \frac{2^{n+1}-1}{2^n}\xrightarrow{n\rightarrow \infty}2. $$ An elementary inductive computation shows that if $B\gtrsim\bar{\rho}^{1/2}$, then the first recursive relation (\ref{70th}) determines the size of $\Upsilon_{n+1},$ yielding the estimate $\Upsilon_{n+1}\leqslant CB^2$. If $B\lesssim \bar{\rho}^{1/2}$, then the second relation (\ref{71th}) dominates, yielding the estimate $\Upsilon_{n+1}\leqslant CB\bar{\rho}^{1/2}$. Since $$ \|\rho-\bar{\rho}\|_{L^{\infty}}=\lim_{n\rightarrow \infty}\|\rho-\bar{\rho}\|_{L^{2^n}}, $$ we obtain that $$ \|\rho-\bar{\rho}\|_{L^{\infty}}\leqslant CB\max(B,\bar{\rho}^{1/2}), $$ proving the proposition. \end{proof} \begin{prop} Suppose that $f\in C^{\infty}(\mathbb{T}^d)$ and is mean zero. Then $$ \|D^mf\|_{L^p}\leqslant C\|f\|_{L^2}^{1-a}\|f\|_{\dot{H}^n}^a, \quad a=\frac{m-\frac{d}{p}+\frac{d}{2}}{n}, $$ where $D$ stands for any partial derivative, $2\leqslant p\leqslant \infty$, and we assume $n>m+d/2$. \end{prop} Note that the last assumption is not necessary. However, it makes the proof simpler, and this is the only case we need in this paper. Indeed, in the estimates of Section 3 we have $s+1>l+d/2$ unless $l=s$ (recall $d\leqslant 3$). But when $l=s$, we are only estimating $\|D^s\rho\|_{L^2}$, which is straightforward. \begin{proof} Consider $p=2$. Then \begin{equation}\label{72th} \|D^mf\|_{L^2}\leqslant \|f\|_{L^2}^{1-\frac{m}{n}}\|f\|_{\dot{H}^n}^{\frac{m}{n}} \end{equation} by the H\"older inequality on the Fourier side. Next consider $p=\infty$. Then $$ \|D^mf\|_{L^{\infty}}\leqslant C\sum_{0<|k|<\Lambda}|k|^m|\hat{f}(k)|+C\sum_{|k|\geqslant \Lambda}|k|^m|\hat{f}(k)|\equiv (I)+(II). $$ Now $$ (I)\leqslant C\Lambda^{m+\frac{d}{2}}\left(\sum_{0<|k|<\Lambda}|\hat{f}(k)|^2\right)^{1/2} $$ by the Cauchy-Schwarz inequality. On the other hand, $$ (II)\leqslant C\left(\sum_{|k|\geqslant \Lambda}|k|^{2n}|\hat{f}(k)|^2\right)^{1/2}\left(\sum_{|k|\geqslant \Lambda}|k|^{2(m-n)}\right)^{1/2}\leqslant C\|f\|_{\dot{H}^n}\Lambda^{(m-n)+\frac{d}{2}}, $$ provided that $n>m+\frac{d}{2}$. Choose $\Lambda$ so that $$ \|f\|_{L^2}\Lambda^{m+\frac{d}{2}}=\|f\|_{\dot{H}^n}\Lambda^{m-n+\frac{d}{2}}. $$ Such a choice leads to the bound \begin{equation}\label{73th} \|D^mf\|_{L^{\infty}}\leqslant C \|f\|_{L^2}^{\frac{n-m-d/2}{n}}\|f\|_{\dot{H}^n}^{\frac{m+d/2}{n}}. \end{equation} The general case $2<p<\infty$ follows immediately from (\ref{72th}) and (\ref{73th}). \end{proof} \begin{prop} Suppose that $f\in C^{\infty}(\mathbb{T}^d)$, and $s>0$. Then $$ \|f\|_{\dot{H}^s}\leqslant C\|f\|_{\dot{H}^{s+1}}^{\frac{2s+d}{2s+2+d}}\|f\|_{L^1}^{\frac{2}{2s+2+d}}.
$$ \end{prop} \begin{proof} The proof of this proposition can be done similarly to the previous one. One needs to use that $\|\hat{f}\|_{L^{\infty}}\leqslant \|f\|_{L^1}$. We leave details to the interested reader. \end{proof} \end{document}
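As a small added sanity check (not part of the original text), the telescoping identity $\prod_{j=1}^n q_j=\frac{2^{n+1}-1}{2^n}$ used in the recursive estimate for $\Upsilon_n$ above can be verified exactly with rational arithmetic:
\begin{verbatim}
# Illustrative check: prod_{j=1}^n (2^{j+1}-1)/(2^{j+1}-2) = (2^{n+1}-1)/2^n -> 2.
from fractions import Fraction

prod = Fraction(1)
for j in range(1, 21):
    prod *= Fraction(2 ** (j + 1) - 1, 2 ** (j + 1) - 2)
    assert prod == Fraction(2 ** (j + 1) - 1, 2 ** j)
print(float(prod))   # close to 2 already for n = 20
\end{verbatim}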
\begin{document} \title{\LARGE\sf Quaternionic quantum harmonic oscillator\\ } \author{\bf SERGIO GIARDINO} \email{[email protected]} \affiliation{ Departamento de Matem\'atica Pura e Aplicada, Universidade Federal do Rio Grande do Sul (UFRGS)\\ Avenida Bento Gon\c calves 9500, Caixa Postal 15080, 91501-970 Porto Alegre, RS, Brazil} \begin{abstract} \noindent {\bf Abstract:} In this article we obtain the harmonic oscillator solution for quaternionic quantum mechanics ($\mathbbm{H}$QM) in the real Hilbert space, using both the analytic method and the algebraic method. The quaternionic solutions have many additional possibilities if compared to complex quantum mechanics ($\mathbbm{C}$QM), and thus there are many possible applications of these results in future research. \end{abstract} \maketitle \tableofcontents \section{\;\sf Introduction\label{I}} Quaternions ($\mathbbm{H}$) are generalized complex numbers comprising three anti-commutative imaginary units, namely $i,\,j$ and $k$. If $q\in\mathbbm{H}$, then \begin{equation}\label{i1} q=x_0 + x_1 i + x_2 j + x_3 k, \qquad\mbox{where}\qquad x_0,\,x_1,\,x_2,\,x_3\in\mathbbm{R},\qquad i^2=j^2=k^2=-1. \end{equation} Mathematical and physical introductions to quaternions are provided elsewhere \cite{Morais:2014rqc,Rocha:2013qtt,Garling:2011zz,Dixon:1994oqc,Ward:1997qcn}, and we notice only that the anti-commutativity of the imaginary units makes quaternions non-commutative hyper-complex numbers. By way of example, $ij=k=-ji$. Adopting the symplectic notation for quaternions, (\ref{i1}) becomes \begin{equation}\label{i2} q=z_0+z_1j,\qquad\mbox{where}\qquad z_0=x_0+x_1i\qquad\textrm{and}\qquad z_1=x_2+x_3i. \end{equation} In quaternionic quantum mechanics ($\mathbbm{H}$QM) the quantum states take values in the quaternions. Thus, quaternionic wave functions replace the usual complex wave functions in the Schr\"odinger equation, and therefore $\mathbbm{H}$QM generalizes the usual complex quantum mechanics ($\mathbbm{C}$QM). The introduction of quaternions in quantum mechanics is not new, and Stephen Adler's book \cite{Adler:1995qqm} contains an extensive account of their development, including the anti-hermitian version of $\mathbbm{H}$QM, where anti-hermitian Hamiltonian operators are imposed on the Schr\"odinger equation. Anti-hermitian $\mathbbm{H}$QM has several shortcomings, such as the ill-defined classical limit \cite{Adler:1995qqm}. Furthermore, anti-hermitian solutions of $\mathbbm{H}$QM are few, involved, and difficult to understand physically \cite{Davies:1989zza,Davies:1992oqq,Ducati:2001qo,Nishi:2002qd,DeLeo:2005bs,Madureira:2006qps,Ducati:2007wp,Davies:1990pm,DeLeo:2013xfa,DeLeo:2015hza,Giardino:2015iia,Sobhani:2016qdp,Procopio:2017vwa,Sobhani:2017yee,Hassanabadi:2017wrt,Hassanabadi:2017jiz,Bolokhov:2017ndw, Cahay:2019bqp,DeLeo:2019bcw}. We additionally point out that several applications of quaternions in quantum mechanics are not $\mathbbm{H}$QM because the anti-hermitian framework is not considered \cite{Arbab:2010kr,Brody:2011mg,Morais:2014jpm,Kober:2015bkv,Tabeu:2019cqw,Chanyal:2019gdi,Cahay:2019bqp} and the quaternions are simply an alternative way to describe specific results of $\mathbbm{C}$QM. More recently, a novel approach eliminated the anti-hermiticity requirement for the Hamiltonian operator in $\mathbbm{H}$QM \cite{Giardino:2018lem,Giardino:2018rhs}.
Using this framework, several results have been obtained, including the explicit solutions of the Aharonov-Bohm effect \cite{Giardino:2016xap}, the free particle \cite{Giardino:2017yke,Giardino:2017nqs}, the square well \cite{Giardino:2020cee}, the Lorentz force \cite{Giardino:2019xwm,Giardino:2020uab} and the quantum scattering \cite{Giardino:2020ztf,Hasan:2020ekd}. Further conceptual results are the well-defined classical limit \cite{Giardino:2018lem}, the virial theorem \cite{Giardino:2019xwm}, the Ehrenfest theorem and the real Hilbert space \cite{Giardino:2018lem,Giardino:2018rhs}. In the real Hilbert space approach, an arbitrary quaternionic wave function $\,\Psi\,$ is written in terms of the linear expansion \begin{equation} \Psi=\sum_{\ell=-\infty}^\infty c_\ell \Lambda_\ell, \end{equation} where $\,c_\ell\,$ are real coefficients and $\,\Lambda_\ell\,$ are quaternionic basis elements. We recall that in $\mathbbm C$QM the coefficients and the basis elements are both complex, and that in the anti-hermitian $\mathbbm H$QM the coefficients and the basis elements are both quaternionic. A real Hilbert space is endowed with a real-valued inner product, and from \cite{Harvey:1990sca} a consistent real inner product between the quaternions $\Phi$ and $\Psi$ is simply \begin{equation}\label{u003} \langle\Phi,\,\Psi\rangle= \frac{1}{2}\int dx^3\Big[\Phi\overline\Psi^{\,} +\overline\Phi\Psi \Big], \end{equation} where $\overline \Phi$ and $\overline \Psi$ are quaternionic conjugates. This real inner product is the foundation of the quantum expectation value in the real Hilbert space $\mathbbm H$QM, and the breakdown of the Ehrenfest theorem in the anti-hermitian approach to $\mathbbm H$QM (cf. Section 4.4 of \cite{Adler:1995qqm}) is the physical motivation for the introduction of the real Hilbert space formalism for $\mathbbm H$QM. The consistency demonstrated in these previous results \cite{Giardino:2016xap,Giardino:2017yke,Giardino:2017nqs,Giardino:2020cee,Giardino:2019xwm,Giardino:2020uab,Giardino:2020ztf} encourages us to apply the real Hilbert space $\mathbbm{H}$QM formalism to quantum systems that do not have satisfactory quaternionic interpretations. A formal solution to the harmonic oscillator has been sketched in anti-hermitian $\mathbbm{H}$QM \cite{Finkelstein:1961tk,Adler:1995qqm}, and a coherent quantization has been obtained using the regular function approach \cite{Muraleetharan:2014qma,Sabadini:2017qma}. Both of these examples consider the quaternionic Hilbert space, and in this article we use the much simpler approach of the real Hilbert space, and the connection to $\mathbbm C$QM is accordingly clear and simple. A further example is the biquaternionic harmonic oscillator \cite{Lavoie:2010bcq}. The article is organized as follows. In Section \ref{U} we revisit the complex solution of the one-dimensional harmonic oscillator to obtain the quaternionic solution. In Section \ref{T} we repeat the procedure for the harmonic oscillator in higher dimensions. Section \ref{C} rounds off the article with our conclusions and future directions. \section{\;\sf One-dimensional harmonic oscillator\label{U}} The quaternionic Schr\"odinger equation for the one-dimensional harmonic oscillator of mass $\mu$ and frequency $\omega$ is simply \begin{equation}\label{u01} \hbar\frac{\partial\Psi}{\partial t}i=\left[-\frac{\hbar^2}{2\mu\,}\frac{\partial^2}{\partial x^2}+\frac{1}{2}\mu\omega^2x^2\right]\Psi.
\end{equation} The imaginary unit $\,i\,$ multiplies the wave function from the right, and this selection is important in order to define the momentum operator \cite{Giardino:2018lem,Giardino:2018rhs}. Furthermore, although the quaternionic imaginary units are equivalent, only one of them, $\,i,\,$ was chosen to define the energy and the momentum operators. This common option is important in order to maintain the correspondence between $\mathbbm H$QM and $\mathbbm C$QM. However, a quaternionic theory in which different imaginary units are associated with the energy and momentum operators is an interesting direction for future research. The quaternionic wave function $\,\Psi_{nm}\,$ that solves (\ref{u01}) comprises two complex wave functions $\psi_n$ and $\psi_m$, such that \begin{equation}\label{u02} \Psi_{nm}=\cos\theta_{nm}\psi_n+\sin\theta_{nm}\overline{\psi}_m\, j,\qquad n,\,m\in\mathbbm{Z}_+, \end{equation} where $\,\theta_{mn}\,$ are constants and the complex wave functions are solutions of the quantum harmonic oscillator ($\mathbbm C$HO). The $\,\theta_{mn}\,$ angle is essential in order to obtain non-trivial quaternionic solutions. We will see in a moment that $\psi_n$ and $\psi_n j$ are orthogonal, despite their identical energies. Consequently, (\ref{u02}) is more constrained than it seems because it expresses the orthogonality requirement for hetero-energetic states. Thus, let us use the well-known harmonic oscillator solutions of $\mathbbm C$QM \begin{equation}\label{u03} \psi_n=\sqrt[4]{\frac{\mu\omega}{\pi\hbar}\,}\frac{1}{\sqrt{2^nn!}\,}H_n(X)e^{-X^2/2}e^{-iE_n t/\hbar}, \qquad{\rm where}\qquad E_n=\left(n+\frac{1}{2}\right)\hbar\omega,\qquad X=\sqrt{\frac{\mu\omega}{\hbar}}x, \end{equation} and $H_n(X)$ are the Hermite polynomials. The quaternionic harmonic oscillator solution ($\mathbbm H$HO) given in (\ref{u02}) is not an eigenfunction of the time-independent Schr\"odinger equation, except in the particular case where $n=m$. Therefore, solution (\ref{u02}) describes a coupling between two complex eigenfunctions of the harmonic oscillator. The solution must be expressed in a basis for the Hilbert space and a suitable orthogonality condition is needed, a problem that is not solved in the anti-hermitian case. Applying the definition of the inner product between quaternions (\ref{u003}), we obtain \begin{equation}\label{u04} \big\langle\Psi_{nm},\,\Psi_{n'm'}\big\rangle=\cos\theta_{nm}\cos\theta_{nm'}+\sin\theta_{nm}\sin\theta_{n'm} \end{equation} where $\langle\psi_n,\,\psi_{n'}\rangle=\delta_{nn'}$ has been used from $\mathbbm{C}$QM. The inner product (\ref{u04}) does not establish the orthogonality between the quaternionic solutions, and an additional constraint is necessary. Recalling that $p,\,q\in\mathbbm H$ are parallel ({\em cf.} Section 2.5 of \cite{Ward:1997qcn}) if \begin{equation} \mathfrak{Im}[p\bar q]=0, \end{equation} we impose the parallelism between the basis elements as this additional constraint, so that \begin{equation} \theta_{nm}=\theta_{n'm'}. \end{equation} Thus, we interpret the angle $\theta_{nm}$ as a parameter that ascribes the degree of interaction between the complex solutions that comprise the quaternionic solution. All the basis elements share this unique degree of interaction, which we can also understand as a polarization of the solution.
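As an added illustration (with the assumed units $\hbar=\mu=\omega=1$), the following Python sketch checks numerically that the spatial parts of the eigenfunctions in (\ref{u03}) are orthonormal, which is the $\mathbbm C$QM input $\langle\psi_n,\psi_{n'}\rangle=\delta_{nn'}$ used above; the grid and the range of $n$ are arbitrary choices.
\begin{verbatim}
# Illustrative check of <psi_n, psi_n'> = delta_{nn'} for the spatial parts
# of (u03); units hbar = mu = omega = 1 are assumed.
import math
import numpy as np
from numpy.polynomial.hermite import hermval

X = np.linspace(-12.0, 12.0, 4001)
dx = X[1] - X[0]

def psi(n):
    """H_n(X) exp(-X^2/2) with the normalization of (u03)."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(X, c) * np.exp(-X**2 / 2)

for n in range(4):
    for m in range(4):
        overlap = float(np.dot(psi(n), psi(m)) * dx)
        assert abs(overlap - (1.0 if n == m else 0.0)) < 1e-8
print("spatial parts of (u03) are orthonormal on the grid")
\end{verbatim}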
Therefore, every element of the basis comprises two polarized wave functions of different energies, and $\theta_{mn}$ is the ``polarization angle'' between these complex components of the quaternionic wave function. Consequently, the condition $\theta_{nm}=\theta_{n'm'}$ sets basis elements of different polarization planes as orthogonal. Accordingly, \begin{equation}\label{u05} \big\langle\Psi_{nm},\,\Psi_{n'm'}\big\rangle\,=\,\delta_{nn'}\delta_{mm'}. \end{equation} We notice that the pure complex $\,\cos\theta_{nm}\psi_n\,$ and the pure quaternionic $\,\sin\theta_{nm}\overline{\psi}_m j\,$ components of (\ref{u02}) are mutually orthogonal, in agreement with the interpretation of mechanical polarized waves. After defining the orthogonality conditions of the wave function, we turn our attention to expectation values. In a real Hilbert space, the expectation values of quaternionic wave functions \cite{Giardino:2018lem,Giardino:2018rhs,Giardino:2019xwm} are obtained from \begin{equation}\label{u013} \left\langle\widehat{\mathcal O}\right\rangle= \frac{1}{2}\int dx^3\Bigg[\big(\widehat{\mathcal O}\Psi\big)\overline\Psi^{\,} +\Psi\left(\,\overline{\widehat{\mathcal{O}}\Psi}\,\right) \Bigg], \end{equation} and from \cite{Giardino:2019xwm} we know that the expectation value of an arbitrary quaternionic operator $\widehat{\mathcal{O}}$ has the following expression \begin{equation}\label{q1} \left\langle\widehat{\mathcal O}_\mathbb{H}\right\rangle=\left\langle\,\widehat{\mathcal O}\right\rangle +\left\langle\big(\widehat{\mathcal O}\,|\,i\big)\right\rangle. \end{equation} The contribution of $\big\langle\big(\widehat{\mathcal O}\,|\,i\big)\big\rangle$ is justified physically in order to satisfy the virial theorem, but this term will not contribute in the case of Hermitian operators. From a mathematical point of view, this term is the second possibility for defining the scalar product for quaternionic states \cite{Harvey:1990sca}, and consequently the expectation value (\ref{q1}) is well defined mathematically. In the case of Hermitian operators, we get \begin{equation}\label{u06} \left\langle\Psi_{nm},\,\widehat{\mathcal{O}}\,\Psi_{nm}\right\rangle\,=\,\cos^2\theta_{nm}\Big\langle\psi_n,\,\widehat{\mathcal{O}}\,\psi_n\Big\rangle\,+\,\sin^2\theta_{nm}\Big\langle\,\overline{\psi}_m,\,\widehat{\mathcal{O}}\,\overline{\psi}_m\Big\rangle, \end{equation} where we used the hermiticity of $\,\widehat{\mathcal O}.\,$ The pure imaginary off-diagonal elements cancel out, and the usual complex result is recovered when $\,n=m.\,$ By way of example, the energy expectation value is \begin{equation}\label{u061} E_{nm}=\left(n\cos^2\theta_{nm}+m\sin^2\theta_{nm}+\frac{1}{2}\right)\hbar\omega. \end{equation} The zero-point energy is unchanged in the quaternionic formulation, and we can also write the energy as \begin{equation}\label{u062} E_{nm}=\left(n+\frac{1}{2}+ (m-n)\sin^2\theta_{nm}\right)\hbar\omega. \end{equation} This expression enables us to see the quaternionic part as a correction to the complex part, and the $\theta_{nm}$ angle as the parameter that regulates the quaternionic influence on the solution. The quaternionic solution also admits the algebraic solution of the harmonic oscillator.
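Before turning to the algebraic construction, here is a brief numerical check of the energy expectation value (\ref{u061}), added for illustration only: the Hamiltonian is applied to the complex components by second-order finite differences and the results are combined with the weights $\cos^2\theta_{nm}$ and $\sin^2\theta_{nm}$; the units $\hbar=\mu=\omega=1$ and the values of $n$, $m$, $\theta_{nm}$ are assumptions of the sketch.
\begin{verbatim}
# Illustrative check of (u061): E_nm = (n cos^2 t + m sin^2 t + 1/2) hbar omega,
# with hbar = mu = omega = 1 assumed.
import math
import numpy as np
from numpy.polynomial.hermite import hermval

X = np.linspace(-12.0, 12.0, 8001)
dx = X[1] - X[0]

def psi(n):
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(X, c) * np.exp(-X**2 / 2)

def energy(f):
    """<f, (-(1/2) d^2/dX^2 + X^2/2) f> on the grid (finite differences)."""
    d2f = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2
    d2f[0] = d2f[-1] = 0.0            # crude boundary handling; psi is ~0 there
    return float(np.sum(f * (-0.5 * d2f + 0.5 * X**2 * f)) * dx)

n, m, theta = 2, 5, 0.7
E_num = math.cos(theta)**2 * energy(psi(n)) + math.sin(theta)**2 * energy(psi(m))
E_formula = n * math.cos(theta)**2 + m * math.sin(theta)**2 + 0.5
print(E_num, E_formula)   # agree to several decimal places
\end{verbatim}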
Using the operator algebra \cite{Messiah:1999cqm} and the notation $\;(a|b)f=afb\;$ \cite{Giardino:2018lem}, we have \begin{equation}\label{u07} \widehat{a}=\frac{1}{\sqrt{2}}\Big[\;X+\left(\widehat{P}\,\big|\,i\right)\,\Big],\qquad \widehat{a}^\dagger=\frac{1}{\sqrt{2}}\Big[\;X-\left(\widehat{P}\,\big|\,i\right)\,\Big], \qquad\left[\,\widehat a,\,\widehat a^\dagger\,\right]=1. \end{equation} The momentum operator $\widehat{P}$ is such that \begin{equation}\label{u08} \widehat P=\frac{1}{\sqrt{\mu\,\hbar\omega\,}}\,\widehat p_x,\qquad\widehat{p}_x=-\hbar(\partial_x|i)\qquad{\rm and}\qquad \mathcal{H}=\frac{1}{2}\,\hbar\omega\left(\widehat P^2+X^2\right), \end{equation} where $\mathcal{H}$ is the Hamiltonian operator of (\ref{u01}). Here $\widehat a^\dagger$ is the creation operator, and thus the wave function can be written as \begin{equation}\label{u09} \Psi_{nm}=\Big[\,\cos\theta_{nm}A_n e^{-iE_n t/\hbar}\big(\widehat a^\dagger\big)^n\,+\,\sin\theta_{nm}A_m e^{-iE_m t/\hbar}\big(\widehat a^\dagger\big)^m\,j\,\Big]e^{-X^2/2} \end{equation} where $A_n$ are the normalization constants of $\psi_n$. These results are very simple, and could be easily obtained in the anti-hermitian framework of $\mathbbm{H}$QM. However, the wave function (\ref{u02}) is inconsistent in the anti-hermitian context of $\mathbbm{H}$QM, where the orthogonality conditions (\ref{u05}) and the expectation value (\ref{u013}) do not hold and have different definitions. The framework that supports the consistency of the results of this section is the real Hilbert space. The results presented here are impossible otherwise, and their novelty depends entirely on it. \section{\;\sf Harmonic oscillator in various dimensions\label{T}} The one-dimensional $\mathbbm H$HO is easily generalized to an arbitrary number $p$ of dimensions, according to \begin{equation}\label{t01} \mathcal{H}=\sum_{k=1}^p\mathcal{H}_k, \end{equation} where each direction has its own Hamiltonian operator $\mathcal{H}_k$ that is analogous to (\ref{u08}). However, the possible solutions for $\mathbbm{H}$QM are much more numerous than for the $\mathbbm{C}$QM harmonic oscillator. We recall the multi-dimensional harmonic oscillator of $\mathbbm{C}$QM, \begin{equation}\label{t02} \psi_n(X)=\prod_{k=1}^p\psi_n^{(k)}(X_k),\qquad\mbox{where}\qquad X=(X_1,\,X_2,\dots X_p) \end{equation} where independent oscillations take place along every direction according to the one-dimensional wave functions $\,\psi_n^{(k)}\,$. In the quaternionic case, however, there are several possibilities. Analogous to (\ref{t02}), we have \begin{eqnarray}\nonumber \Psi_{nm}(\bm X)&=&\prod_{k=1}^p \Psi_{nm}^{(k)}(X_k),\\ \label{t03} &=&\prod_{k=1}^p\left(\cos\theta_{nm}\psi_n^{(k)}+\sin\theta_{nm}\overline{\psi}_m^{(k)}\, j\right), \end{eqnarray} where each factor $\,\Psi_{nm}^{(k)}(X_k)\,$ is quaternionic and the total expectation value is the sum of the expectation values in each direction, in complete analogy to the complex case. We observe that the order of the factors in the product may change without changing the energy of the wave function. A more general possibility for (\ref{t03}) is \begin{equation} \Psi_{nm}(\bm X)=\cos\theta_{nm}\prod_{k\in P}\psi_n^{(k)}+\sin\theta_{nm}\prod_{k'\in P'}\overline{\psi}_m^{(k')}\, j\qquad \mbox{where}\qquad P\cup P'=\{1,\,2,\,\dots,\, p\}. \end{equation} This wave function admits many more possibilities than the previous one.
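Returning briefly to the algebraic construction (\ref{u07})--(\ref{u09}), the following symbolic sketch (added for illustration, with the assumed units $\hbar=\mu=\omega=1$) verifies that repeated application of the creation operator to the Gaussian ground state reproduces the Hermite-polynomial factors of (\ref{u03}):
\begin{verbatim}
# Illustrative sketch of (u09): writing the spatial part as p_n(X) exp(-X^2/2),
# the creation operator (X - d/dX)/sqrt(2) maps p -> (2 X p - p')/sqrt(2), and
# repeated application gives p_n = 2^{-n/2} H_n(X).
import sympy as sp

X = sp.symbols('X')
p = sp.Integer(1)                          # polynomial part of the ground state
for n in range(6):
    target = sp.hermite(n, X) / 2**sp.Rational(n, 2)
    assert sp.simplify(p - target) == 0
    p = sp.expand((2 * X * p - sp.diff(p, X)) / sp.sqrt(2))
print("(a^dagger)^n e^{-X^2/2} = 2^{-n/2} H_n(X) e^{-X^2/2} for n = 0..5")
\end{verbatim}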
By way of example, there is a two-dimensional oscillator where the complex and imaginary quaternionic vibrations occur in different directions, and many more possibilities are admitted in higher dimensions. On the other hand, we may have a third possibility of building a higher-dimensional $\mathbbm H$HO using polar coordinates. The time-independent Schr\"odinger equation is \begin{equation}\label{t05} \left(-\frac{\hbar^2}{2m}\nabla^2+\frac{1}{2}\mu\,\omega^2 r^2\right)\Phi=E\Phi, \end{equation} where $\,\Phi\,$ is a quaternionic wave function. Using spherical coordinates and a radial potential, we have the well-known result \begin{eqnarray} \label{t06} \frac{\hbar^2}{2m}\nabla^2_{\hat\theta}\,\mathcal{Y}+\ell\big(\ell+1\big)\mathcal{Y}=0&&\\ \nonumber \left(-\frac{\hbar^2}{2m}\nabla^2_{\hat r}+\mathcal{V}\right)\mathcal{R}+ \left[\frac{\hbar^2}{2m}\frac{\ell(\ell+1)}{r^2}-E\right]\mathcal{R}=0&&\mbox{where}\qquad \Phi(r,\,\theta,\,\phi)\,=\,\mathcal{R}(r)\,\mathcal{Y}(\theta,\,\phi). \end{eqnarray} The above equations are well known from $\mathbbm{C}$QM but are valid in $\mathbbm{H}$QM as well. The real radial solutions of (\ref{t06}) comprise the generalized Laguerre polynomials, $\,L_n^{(\alpha)}(x),\,$ and consequently the quaternionic solutions will be \begin{equation}\label{t08} \mathcal{R}_{uv}(\rho)\,=\,\rho^\ell\, e^{-\rho^2/2}\left[\,\cos\theta_{uv}N_u L_u^{\left(\ell+\frac{1}{2}\right)}\left(\rho^2\right)+\sin\theta_{uv}N_v L_v^{\left(\ell+\frac{1}{2}\right)}\left(\rho^2\right) \,j\,\right] \qquad{\rm where}\qquad\rho=\sqrt{\frac{m\omega}{\hbar}}r. \end{equation} The normalization constants $N_u$ and $N_v$ of the Laguerre polynomials are known, and so is the energy of each oscillator. In particular, the energies are \begin{equation}\label{t12} E_{u\ell}=\left(2u+\ell+\frac{3}{2}\right)\hbar\omega\qquad\mbox{where}\qquad u\in\mathbbm{N}. \end{equation} In the real Hilbert space, the quaternionic parallelism condition and the orthogonality of the Laguerre polynomials give \begin{equation}\label{t09} \big\langle\mathcal{R}_{uv},\,\mathcal{R}_{u'v'}\big\rangle=\delta_{uu'}\delta_{vv'}. \end{equation} The radial solution gives the energy, which is expected considering that the oscillation takes place along the radial direction, and the energy comprises two independent oscillations, in the same way as in (\ref{u061}). However, we still have a quaternionic solution in the case of $\theta_{uv}=0$. In this specific case, the radial part of the wave function is identical to the complex case and the energy is also identical. On the other hand, the solution of the angular equation in (\ref{t06}) is a combination of spherical harmonics, such as \begin{equation}\label{t10} \mathcal{Y}_\ell^{m_1m_2}\big(\theta,\,\phi\big)\,=\,\,\cos\theta_{m_1m_2}Y_\ell^{m_1}\big(\theta,\,\phi\big)\,+\,\sin\theta_{m_1m_2}Y_\ell^{m_2}\big(\theta,\,\phi\big)\,j, \end{equation} where $\,Y_\ell^m\,$ is the well-known complex spherical harmonic and $\,m_1,\,m_2\in\big\{-\ell,\,\dots,\ell\big\}.\,$ The orthogonality condition also takes advantage of the parallelism condition and reads \begin{equation}\label{t11} \Big\langle\,\mathcal{Y}_\ell^{m_1m_2},\,\mathcal{Y}_{\ell'}^{\,m'_1m'_2}\,\Big\rangle\,=\,\delta_{\ell\ell'}\,\delta_{m_1 m_1'} \,\delta_{m_2 m_2'}. \end{equation} As in the complex case, the azimuthal quantum number $\,m\,$ of the spherical harmonic does not contribute to the energy, and this feature is what makes this quaternionic solution possible.
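As an added numerical illustration of the orthogonality (\ref{t09}), the radial functions in (\ref{t08}) with a fixed $\ell$ and different Laguerre indices are orthogonal in $L^2(\rho^2 d\rho)$; the grid, the value of $\ell$ and the range of indices below are arbitrary choices.
\begin{verbatim}
# Illustrative check of the radial orthogonality behind (t09): for fixed l,
# R_u(rho) = rho^l exp(-rho^2/2) L_u^{(l+1/2)}(rho^2) with u != v are orthogonal
# with respect to the measure rho^2 d rho.
import numpy as np
from scipy.special import genlaguerre

l = 2
rho = np.linspace(0.0, 12.0, 60001)
drho = rho[1] - rho[0]

def R(u):
    return rho**l * np.exp(-rho**2 / 2) * genlaguerre(u, l + 0.5)(rho**2)

def inner(u, v):
    return float(np.sum(R(u) * R(v) * rho**2) * drho)

for u in range(3):
    for v in range(3):
        if u != v:
            assert abs(inner(u, v)) < 1e-8 * np.sqrt(inner(u, u) * inner(v, v))
print("radial modes with u != v are orthogonal for fixed l, as used in (t09)")
\end{verbatim}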
The physical properties of the quaternionic spherical harmonics can be further investigated within the scope of quantum angular momentum and spin. \section{\;\sf Conclusion\label{C}} In this article we have provided one of the most important solutions of $\mathbbm{H}$QM in the real Hilbert space: the harmonic oscillator. The solution of this problem was never obtained in the anti-hermitian version of $\mathbbm{H}$QM, and this fact suggests that research in real Hilbert space $\mathbbm{H}$QM may receive a boost in the future. Almost every application of the harmonic oscillator of $\mathbbm{C}$QM may now be studied using $\mathbbm{H}$QM. Other fascinating possibilities are the quaternionic versions of quantum field theory and of supersymmetric quantum mechanics. In both of these, the creation-annihilation algebra obtained here will be fundamental. \end{document}
\begin{document} \title{\Large Comparison of transport map generated by heat flow interpolation and the optimal transport Brenier map} \begin{abstract} This note shows that the non-expansive transport map constructed by Y.-H. Kim and E. Milman using heat flow interpolation is in general different from the optimal transport Brenier map. \end{abstract} \section{Introduction} Let $\mu$ and $\nu$ be two Borel probability measures on $\mathbb{R}^n$. A Borel map $T:\mathbb{R}^n\to\mathbb{R}^n$ is said to push $\mu$ forward to $\nu$ (or transport $\mu$ onto $\nu$), denoted by $T_{\#}\mu=\nu$, if $\mu(T^{-1}(\Omega))=\nu(\Omega)$ for every Borel set $\Omega\subset\mathbb{R}^n$, or equivalently, if for every bounded Borel function $\zeta:\mathbb{R}^n\to\mathbb{R}$ $$\int_{\mathbb{R}^n}\zeta\circ T d\mu=\int_{\mathbb{R}^n}\zeta d\nu.$$ Herein, we consider two pushforward maps: the optimal transport map for quadratic cost function (also known as the Brenier map) and the transport map constructed by Kim and Milman in \citep{kim-milman} using heat flow interpolation. We prove that they are generally different maps, thus answering the question discussed in \citep{kim-milman}. For this purpose, we consider a Gaussian measure $\mu$ with density \begin{equation}\label{measure} \frac{d\mu}{dx}=\frac{\sqrt{\det(A)}}{(2\pi)^{\frac{n}{2}}}\exp\left(-\frac{1}{2}x^\intercal Ax\right), \end{equation} where $A$ is a symmetric positive definite matrix, and a Borel probability measure $\nu$ log-concave with respect to $\mu$, that is, $d\nu=\exp(-F)d\mu$ for a convex function $F:\mathbb{R}^n\to\mathbb{R}$. The note is organized as follows. In Sections 2 and 3, we recall some facts about the Brenier optimal transport map and sketch the Kim-Milman construction. In Section 4, we show that if we take $\frac{d\nu}{d\mu}=c_0\cdot\exp\left(-\frac{1}{2}x^\intercal Bx\right)$, then we can find $A$ and $B$ such that the two maps do not coincide. These are the probability distributions suggested in Example 6.1 of \citep{kim-milman}, and we show that indeed they can give a counterexample. We also mention a numerical result that suggests that the maps are generally different even in the special case when $\mu$ is the standard normal distribution. \section{The Brenier map} The Monge-Kantorovich optimal transport problem with quadratic cost is the problem of finding a minimizer of the functional $$\int_{\mathbb{R}^n\times\mathbb{R}^n}\norm{x-y}^2d\pi(x,y)$$ over all couplings $\pi$ of $\mu$ and $\nu$, i.e. over all Borel probability measures $\pi$ on $\mathbb{R}^n\times\mathbb{R}^n$ such that for every Borel set $\Omega\subset\mathbb{R}^n$, $\pi(\Omega\times\mathbb{R}^n)=\mu(\Omega)$ and $\pi(\mathbb{R}^n\times \Omega)=\nu(\Omega)$. The following result is well-known in optimal transportation theory (for example, see \citep[Theorems 2.12 and 2.32]{villani1}). \begin{thm}\label{quadratic-cost} Let $\mu,\nu$ be Borel probability measures on $\mathbb{R}^n$ and assume that $\mu$ is absolutely continuous with respect to the Lebesgue measure. Then there exists a unique, up to a $\mu$-nullset, measurable map $T$ such that $T_{\#}\mu=\nu$ and $T=\nabla\varphi$ for some convex function $\varphi$. If in addition $\mu$ and $\nu$ have finite second order moments, then $(\text{Id}\times \nabla\varphi)_{\#}\mu$ is the unique solution of the Monge-Kantorovich optimal transport problem with quadratic cost. \end{thm} The map $\nabla\varphi$, defined up to a $\mu$-nullset, is called the Brenier map.
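The following small sketch (added for illustration; the parameters are arbitrary) shows the Brenier map in the simplest one-dimensional Gaussian case, where it is the increasing affine map $T(x)=m+sx=\nabla\varphi(x)$ with $\varphi(x)=mx+\frac{s}{2}x^2$ convex, and checks the pushforward property on samples.
\begin{verbatim}
# Illustrative 1-D example: T(x) = m + s x pushes mu = N(0,1) forward to
# nu = N(m, s^2) and is the gradient of a convex function (s > 0), hence it is
# the Brenier map of the theorem above; m and s are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
m, s = 1.5, 0.7
x = rng.standard_normal(200_000)     # samples of mu
y = m + s * x                        # pushed-forward samples T(x)
print(y.mean(), y.std())             # approximately 1.5 and 0.7
\end{verbatim}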
It was observed by Caffarelli in \citep{caffarelli} that the Brenier map transporting a Gaussian measure $\mu$ onto a probability measure $\nu$ log-concave with respect to $\mu$ is non-expansive (i.e. $1$-Lipschitz). \section{The Kim-Milman construction} Kim and Milman's construction produces another non-expansive map transporting a log-concave probability measure $\mu$ onto a probability measure $\nu$ log-concave with respect to $\mu$ via semigroup interpolation. Herein, we sketch the construction for the special case of Gaussian $\mu$ defined as in \eqref{measure}. Consider the second-order differential operator $$L=\exp\left(\frac{1}{2}x^\intercal Ax\right)\nabla\cdot\left(\exp\left(-\frac{1}{2}x^\intercal Ax\right)\nabla\right)=\Delta-Ax\cdot\nabla.$$ It is known that the solution to \begin{equation}\label{fokker-planck} \left\{ \begin{array}{ll} \frac{d}{dt}\left(P^A_t(f)\right)=L\left(P^A_t(f)\right)\\ P^A_0(f)=f \end{array} \right. \end{equation} (for $f$ smooth and bounded) is given by the Mehler formula (\citep{harge}) $$P^A_t(f)(x)=\int_{\mathbb{R}^n}f\left(\exp(-tA)x+\sqrt{\text{Id}-\exp(-2tA)}y\right)d\mu(y).$$ The family of operators $\left\{P^A_t\right\}_{t\in[0,\infty)}$ defined by \eqref{fokker-planck} is sometimes called the heat semigroup or heat flow with respect to the generator $L$. Let us now assume that, besides being convex, $F$ is smooth and bounded from below. If we define $d\nu_t=P^A_t(\exp(-F))d\mu$ then $\nu_0=\nu$ and $\nu_t\to\mu$ as $t\to\infty$ in $L^1(\mathbb{R}^n)$. The equation \eqref{fokker-planck} and the definition of $L$ can be used to show that the densities of $\nu_t$ with respect to the Lebesgue measure solve the following transport equation $$\frac{d}{dt}\left(\frac{d\nu_t}{dx}\right)-\nabla\cdot\left(\left(\frac{d\nu_t}{dx}\right)\nabla\log P^A_t(\exp(-F))\right)=0.$$ It is known for this equation (for example, see \citep[Theorem 5.34]{villani1}) that if there exists a locally Lipschitz family of homeomorphisms $\left\{S_t\right\}_{t\in[0,\infty)}$ solving the initial value problem \begin{equation}\label{initial_value} \frac{d}{dt}S_t(x)=w_t(S_t(x)),\quad S_0(x)=x, \end{equation} for the velocity field $w_t(x)=-\nabla\log P^A_t(\exp(-F))(x)$, then $S_{t\#}\nu=\nu_t$. It can be shown that if we additionally assume that $F$ is Lipschitz, then such a family of homeomorphisms $\left\{S_t\right\}_{t\in[0,\infty)}$ exists and is unique. Due to smoothness of $w_t$, $S_t$ are in fact diffeomorphisms, and the equation \eqref{initial_value} implies by differentiation that \begin{equation}\label{jacobian} \frac{d}{dt}DS_t(x)=Dw_t\big\vert_{S_t(x)}DS_t(x),\quad DS_0\equiv\text{Id}. \end{equation} By the Pr\'ekopa-Leindler inequality (see \citep[Theorems 3 and 6]{prekopa}), $-F$ being concave implies that $\log P^A_t(\exp(-F))$ is concave and thus $Dw_t=-D^2\log P^A_t(\exp(-F))$ is positive semidefinite at each point. It follows that $$\frac{d}{dt}(DS_t)^\intercal(x)(DS_t)(x)=(DS_t)^\intercal(x)\left[(Dw_t)^\intercal\big\vert_{S_t(x)}+Dw_t\big\vert_{S_t(x)}\right](DS_t)(x)\ge 0,$$ and therefore $S_t$ are expansions for all $t\ge0$. Their inverses $T_t=S_t^{-1}$ are then non-expansive and can be shown to converge (uniformly on compact sets, up to a subsequence) to a non-expansive map $T$. Since $T_{t\#}\nu_t=\nu$, in the limit $T_{\#}\mu=\nu$. For arbitrary convex $F$, the non-expansive map $T$ transporting $\mu$ onto $\nu$ is obtained by an approximation argument (see Lemma 3.3 in \citep{kim-milman} and the discussion after it).
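For illustration only (this is not part of the construction), the next sketch checks the Mehler formula and the semigroup property $P^A_s\circ P^A_t=P^A_{s+t}$ numerically in one dimension with a scalar $A=a>0$; the test function, the quadrature grid and the times are arbitrary choices.
\begin{verbatim}
# Illustrative 1-D check of the Mehler formula for L = d^2/dx^2 - a x d/dx and
# of the semigroup property P_s(P_t f) = P_{s+t} f; a, f, s, t are arbitrary.
import numpy as np

a = 2.0
y = np.linspace(-8.0, 8.0, 1601)          # quadrature grid for mu = N(0, 1/a)
dy = y[1] - y[0]
w = np.sqrt(a / (2.0 * np.pi)) * np.exp(-a * y**2 / 2.0) * dy

def P(t, f):
    c, s = np.exp(-t * a), np.sqrt(1.0 - np.exp(-2.0 * t * a))
    def Pf(x):
        x = np.asarray(x, dtype=float)
        return np.sum(f(c * x[..., None] + s * y) * w, axis=-1)
    return Pf

f = lambda x: np.cos(x) + 0.3 * x**2 * np.exp(-x**2)
s_, t_ = 0.4, 0.9
for x0 in (-1.0, 0.0, 0.7, 2.0):
    assert abs(P(s_, P(t_, f))(x0) - P(s_ + t_, f)(x0)) < 1e-6
print("Mehler semigroup property holds numerically at the test points")
\end{verbatim}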
\section{Comparison} In the last section of \citep{kim-milman} Kim and Milman compare their map $T$ with the Brenier map. They give a sufficient condition (6.3) for the two maps to be the same (in particular, when $n=1$, or when $\mu$ and $\nu$ are both radially symmetric, the maps do coincide), but do not manage to show that in general the maps are different. Continuing Example 6.1 in \citep{kim-milman}, we show that there exist Gaussian measures $\mu$ and $\nu$ such that the construction does not give the Brenier map between them. \begin{exmp} We consider the special case $\frac{d\mu}{dx}=c\cdot\exp\left(-\frac{1}{2}x^\intercal Ax\right), \frac{d\nu}{d\mu}=c_0\cdot\exp\left(-\frac{1}{2}x^\intercal Bx\right)$, where $A$ and $B$ are symmetric positive definite matrices, and derive a contradiction by assuming that for all such $A$ and $B$ the Kim-Milman map between $\mu$ and $\nu$ coincides with the Brenier map. The matrices $A$ and $B$ giving the contradiction are to be chosen later. The Mehler formula can be used to obtain that $$P^A_t\left(c_0\exp\left(-\frac{1}{2}.^\intercal B.\right)\right)(x)=c_t\exp\left(-\frac{1}{2}x^\intercal B_tx\right)$$ for some constants $c_t$ and constant-in-space symmetric matrices $B_t$ (with $B_0=B$), which are positive semidefinite by the Pr\'ekopa-Leindler inequality and decay exponentially to $0$ as $t\to\infty$. We obtain $w_t(x)=-\nabla\log P^A_t(\exp(-\frac{1}{2}.^\intercal B.))(x)=B_tx$ and $Dw_t\equiv B_t$. For such matrices $B_t$, a Picard-Lindel\"of-type argument and the integral form of Gronwall's lemma imply that both ordinary differential equations \eqref{initial_value} and \eqref{jacobian} have unique solutions well-defined for each $x\in\mathbb{R}^n$ and for all $t\in[0,\infty)$. Clearly, $S_t$ are then linear maps given by multiplication by constant-in-space matrices $DS_t$. The explicit expression for $\nu_t$ is $$d\nu_t=d_t\exp\left(-\frac{1}{2}x^\intercal (A+B_t)x\right)dx,$$ where $d_t=\frac{\sqrt{\det(A+B_t)}}{(2\pi)^{n/2}}$ are the normalizing constants. Hence, $\nu_t$ are also Gaussian and log-concave with respect to $\mu$. Fix $t\ge0$ and consider Kim and Milman's construction for measures $\mu$ and $\tilde{\nu}=\nu_t$. Notice that the flow of measures interpolating between $\tilde{\nu}$ and $\mu$ is the time-shifted initial flow $\nu_t$: $$d\tilde{\nu}_s=P^A_s\left(P^A_t\left(c_0\exp\left(-\frac{1}{2}.^\intercal B.\right)\right)\right)d\mu=P^A_{s+t}\left(c_0\exp\left(-\frac{1}{2}.^\intercal B.\right)\right)d\mu=d\nu_{t+s},\quad\forall s\ge 0.$$ This is a consequence of the semigroup property for $P^A$: $P^A_s\circ P^A_t=P^A_{s+t}$ for all $s,t\ge 0$, which can be derived, for example, from the Mehler formula. For the same reason, the corresponding velocity field $\tilde{w}_s=-\nabla\log P^A_s(P^A_t(c_0\exp(-\frac{1}{2}.^\intercal B.)))$ is the time-shifted initial velocity field: $\tilde{w}_s=w_{t+s}$. This implies that the flow of diffeomorphisms $S_s$ along $w_s$ and the flow of diffeomorphisms $\tilde{S}_s$ along $\tilde{w}_s$ ($\tilde{S}_{s\#}\tilde{\nu}=\tilde{\nu}_s$) satisfy $$S_{t+s}=\tilde{S}_s\circ S_t,\quad\forall s\ge 0.$$ Then the inverse diffeomorphisms $T_{s}=S_s^{-1}$ and $\tilde{T}_s=\tilde{S}_s^{-1}$ satisfy the relation \begin{equation}\label{T} \tilde{T}_s=S_t\circ T_{t+s},\quad\forall s\ge 0. \end{equation} Denote by $T_{0,opt}$ the Brenier map between $\mu$ and $\nu$, and by $T_{t,opt}$ the Brenier map between $\mu$ and $\tilde{\nu}=\nu_t$.
By our assumption, $T_{t+s}\to T_{0,opt}$ and $\tilde{T}_s\to T_{t,opt}$ as $s\to\infty$. In particular, taking the limit as $s\to\infty$ in \eqref{T} gives \begin{equation}\label{formula} T_{t,opt}=S_t\circ T_{0,opt},\quad\forall t\ge0. \end{equation} Since $\nu_t$ and $\mu$ are Gaussian, the Brenier map between $\nu_t$ and $\mu$ is given explicitly (e.g. \citep[Example 1.7]{mccann2}) by multiplication by the symmetric positive definite matrix $$A^{1/2}(A^{1/2}(A+B_t)A^{1/2})^{-1/2}A^{1/2}.$$ Therefore, the Brenier map $T_{t,opt}$ between $\mu$ and $\nu_t$, being the unique map pushing $\mu$ forward to $\nu_t$ which is a gradient of a convex function, should be given by multiplication by the inverse of this matrix, i.e. $$DT_{t,opt}(x)=A^{-1/2}(A^{1/2}(A+B_t)A^{1/2})^{1/2}A^{-1/2},\quad\forall x\in\mathbb{R}^n.$$ Recall that \eqref{jacobian} becomes the following matrix differential equation (identical for all $x$): $$\frac{d}{dt}DS_t=B_t(DS_t),\quad DS_0=\text{Id}.$$ Multiplying this ODE from the right by the matrix $DT_{0,opt}$, we obtain from \eqref{formula} that $DT_{t,opt}$ satisfy the ODE $\frac{d}{dt}DT_{t,opt}=B_t(DT_{t,opt})$ as well. In particular, since $DT_{t,opt}$ are symmetric, $B_t(DT_{t,opt})$ should be symmetric for all $t$. Consider $t=0$: $$B_0(DT_{0,opt})=BA^{-1/2}(A^{1/2}(A+B)A^{1/2})^{1/2}A^{-1/2}.$$ This matrix is symmetric if and only if $C=A^{1/2}BA^{-1/2}(A^{1/2}(A+B)A^{1/2})^{1/2}$ is symmetric. But it is easy to find matrices $A$ and $B$ such that $C$ is not symmetric. For example, take $$A=\begin{pmatrix}4&0\\0&1\end{pmatrix}, \quad B=\begin{pmatrix}2&1\\1&3\end{pmatrix}.$$ In this case $$A^{1/2}BA^{-1/2}=\begin{pmatrix}2&2\\0.5&3\end{pmatrix},\quad A^{1/2}(A+B)A^{1/2}=\begin{pmatrix}24&2\\2&4\end{pmatrix},$$ and one can compute that $$C\approx\begin{pmatrix} 10.4 & 4.5\\ 3.3 & 6.1 \end{pmatrix}.$$ \qed \end{exmp} \textbf{The case of the standard normal distribution $\mu$} When $\mu$ is the standard normal distribution ($A=\text{Id}$) and $\nu$ is Gaussian, the Kim-Milman construction does give the Brenier map (\citep[Section 6]{kim-milman}). However, our numerical result suggests that this does not hold for general $d\nu=\exp(-F)d\mu$ with convex function $F$. We consider the case $n=2$, $F(x)=F(x_1,x_2)=x_1^4+x_2^4+(x_1+x_2)^2$ and the starting point $x=(-0.5,0)$. Our numerical solution of \eqref{initial_value} yielded $S_\infty(x)\approx(-1.054,-0.231)$, while the numerical solution to \eqref{jacobian} converged as $t\to\infty$ to a non-symmetric matrix approximately equal to $$\begin{pmatrix} 2.303&0.441\\ 0.467&2.013 \end{pmatrix},$$ meaning that $S_\infty$ is unlikely to be a gradient of a convex function at the point $x$. To obtain these approximations, we used the explicit Euler method for both ODEs with terminal time $T=30$ and the following time step sizes: \begin{table}[h!] \centering \begin{tabular}{|| c c ||} \hline time interval & $\Delta t$ \\ [0.5ex] \hline\hline [0,0.1] & 0.00002 \\ \hline [0.1,0.5] & 0.00005 \\ \hline [0.5,1] & 0.0002 \\ \hline [1,3] & 0.0005 \\ \hline [3,5] & 0.002 \\ \hline [5,30] & 0.005 \\ \hline \end{tabular} \end{table} To approximate the semigroup $P^A_t$, which becomes the Ornstein-Uhlenbeck semigroup when $A=\text{Id}$, \texttt{scipy.integrate.nquad} was used. \textbf{Acknowledgement.} I thank my Master's thesis advisor Joe Neeman for numerous helpful discussions of the topic. \footnotesize \end{document}
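For the reader's convenience (added, not from the note itself), the matrix computation in the example above can be reproduced with a few lines of Python; \texttt{scipy.linalg.sqrtm} supplies the matrix square root.
\begin{verbatim}
# Numerical check of the counterexample: with the A and B chosen above,
# C = A^{1/2} B A^{-1/2} (A^{1/2}(A+B)A^{1/2})^{1/2} is not symmetric.
import numpy as np
from scipy.linalg import sqrtm

A = np.array([[4.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 1.0], [1.0, 3.0]])

A_half = sqrtm(A)
A_inv_half = np.linalg.inv(A_half)
M = A_half @ (A + B) @ A_half            # equals [[24, 2], [2, 4]]
C = A_half @ B @ A_inv_half @ sqrtm(M)

print(np.round(C.real, 1))               # approximately [[10.4, 4.5], [3.3, 6.1]]
print("symmetric:", np.allclose(C, C.T)) # False
\end{verbatim}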
\begin{document} \begin{center} \large{ \bf On Nonlinear Asymptotic Stability of the Lane-Emden Solutions for the Viscous Gaseous Star Problem } \end{center} \begin{center} Tao Luo, Zhouping Xin, Huihui Zeng \end{center} \begin{abstract} This paper proves the nonlinear asymptotic stability of the Lane-Emden solutions for spherically symmetric motions of viscous gaseous stars if the adiabatic constant $\gamma$ lies in the stability range $(4/3, 2)$. It is shown that for small perturbations of a Lane-Emden solution with the same mass, there exists a unique global (in time) strong solution to the vacuum free boundary problem of the compressible Navier-Stokes-Poisson system with spherical symmetry for viscous stars, and the solution captures the precise physical behavior that the sound speed is $C^{{1}/{2}}$-H$\ddot{\rm o}$lder continuous across the vacuum boundary provided that $\gamma$ lies in $(4/3, 2)$. The key is to establish the global-in-time regularity uniformly up to the vacuum boundary, which ensures the large time asymptotic uniform convergence of the evolving vacuum boundary, density and velocity to those of the Lane-Emden solution with detailed convergence rates, and detailed large time behaviors of solutions near the vacuum boundary. In particular, it is shown that every spherical surface moving with the fluid converges to the sphere enclosing the same mass inside the domain of the Lane-Emden solution with a uniform convergence rate, and the large time asymptotic states for the vacuum free boundary problem \eqref{103} are determined by the initial mass distribution and the total mass. To overcome the difficulty caused by the degeneracy and singular behavior near the vacuum free boundary and the coordinate singularity at the symmetry center, the main ingredients of the analysis consist of combinations of some new weighted nonlinear functionals (involving both lower-order and higher-order derivatives) and space-time weighted energy estimates. The constructions of these weighted nonlinear functionals and space-time weights depend crucially on the structures of the Lane-Emden solution, the balance of pressure and gravitation, and the dissipation. Finally, the uniform boundedness of the acceleration of the vacuum boundary is also proved. \end{abstract} \tableofcontents \section{Introduction} \subsection{Problem} In the fundamental hydrodynamical setting (cf. \cite{ch}), the evolving boundary of a viscous gaseous star (the interface of fluids and vacuum states) can be modeled by the following free boundary problem of the compressible Navier-Stokes-Poisson equations: \begin{equation}\label{0.1}\begin{split} & \rho_t + {\rm div}(\rho {\bf u}) = 0 & {\rm in}& \ \ \Omega(t), \\ & (\rho {\bf u})_t + {\rm div}(\rho {\bf u}\otimes {\bf u})+{\rm div}\mathfrak{S}= - \rho \nabla_{\bf x} \Psi & {\rm in}& \ \ \Omega(t),\\ &\rho>0 &{\rm in } & \ \ \Omega(t),\\ &\rho=0 \ \ {\rm and} \ \ \mathfrak{S}{\bf n}={\bf 0} & {\rm on}& \ \ \Gamma(t):=\partial \Omega(t),\\ & \mathcal{V}(\Gamma(t))={\bf u}\cdot {\bf n}, & &\\ &(\rho,{\bf u})=(\rho_0, {\bf u}_0) & {\rm on} & \ \ \Omega:= \Omega(0).
\end{split} \end{equation} Here $({\bf x},t)\in \mathbb{R}^3\times [0,\infty)$, $\rho $, ${\bf u} $, $\mathfrak{S}$ and $\Psi$ denote, respectively, the space and time variable, density, velocity, stress tensor and gravitational potential; $\Omega(t)\subset \mathbb{R}^3$, $\Gamma(t)$, $\mathcal{V}(\Gamma(t))$ and ${\bf n}$ represent, respectively, the changing volume occupied by a fluid at time $t$, moving interface of fluids and vacuum states, normal velocity of $\Gamma(t)$ and exterior unit normal vector to $\Gamma(t)$. The gravitational potential is described by $$\Psi({\bf x}, t)=-G\int_{\Omega(t)} \frac{\rho({\bf y}, t)}{|{\bf x}-{\bf y}|}d{\bf y} \ \ {\rm satisfying} \ \ \Delta \Psi=4\pi G \rho \ \ {\rm in} \ \ \Omega(t) $$ with the gravitational constant $G$ taken to be unity for convenience. The stress tensor is given by $$ \mathfrak{S}=pI_3-\lambda_1 \left(\nabla {\bf u}+\nabla {\bf u}^t-\frac{2}{3}({\rm div} {\bf u}) I_3\right)-\lambda_2({\rm div} {\bf u})I_3, $$ where $I_3$ is the $3\times 3$ identity matrix, $p$ is the pressure of the gas, $\lambda_1>0$ is the shear viscosity, $\lambda_2>0$ is the bulk viscosity, and $\nabla {\bf u}^t$ denotes the transpose of $\nabla {\bf u}$. We consider polytropic gases, for which the equation of state is given by $$ p=p(\rho)=K\rho^{\gamma}, $$ where $K>0$ is a constant set to be unity for convenience, and $\gamma>1$ is the adiabatic exponent. For a non-rotating gaseous star, it is important to consider spherically symmetric motions since the stable equilibrium configurations, which minimize the energy among all possible configurations (cf. \cite{liebyau}), are spherically symmetric, called Lane-Emden solutions. In this work, we are concerned with the three-dimensional spherically symmetric solutions to the free boundary problem \eqref{0.1} and its nonlinear asymptotic stability toward the Lane-Emden solutions. The aim is to prove the global-in-time regularity uniformly up to the vacuum boundary of solutions when $4/3<\gamma<2$ (the stable index) capturing an interesting behavior called the physical vacuum (cf. \cite{10,10',jangmas,jm,tpliudamping,13,ya}), which states that the sound speed $c=\sqrt{p'(\rho)}$ is $C^{ {1}/{2}}$-H$\ddot{\rm o}$lder continuous near the vacuum boundary, as long as the initial datum is a suitably small perturbation of the Lane-Emden solution with the same total mass. Furthermore, we establish the large time asymptotic convergence of the global strong solution, in particular, the convergence of the vacuum boundary and the density, to the Lane-Emden solutions with the detailed convergence rate as the time goes to infinity.
In the spherically symmetric setting, that is, $\Omega(t)$ is a ball with the changing radius $R(t)$, $$ \rho({\bf x}, t) = \rho(r, t) \ \ {\rm and} \ \ {\bf u}({\bf x}, t) = u(r, t) {\bf x} /r \ \ {\rm with} \ \ r=|{\bf x}| \in \left(0, R(t)\right); $$ system \eqref{0.1} can then be rewritten as \begin{subequations}\label{103} \begin{align} & (r^2\rho)_t+ (r^2\rho u)_r=0 & {\rm in } & \ \ \left(0, \ R(t)\right) , \label{103a}\\ &\rho( u_t +u u_r)+ p_r+ {4\pi\rho}r^{-2}\int_0^r\rho(s,t) s^2ds=\mu \left(\frac{(r^2 u)_r}{r^2} \right)_r & {\rm in } & \ \ \left(0, \ R(t)\right), \label{103b}\\ & \rho>0 & {\rm in } & \ \ \left[0, \ R(t)\right), \label{103c}\\ & \rho=0 \ \ {\rm and} \ \ \frac{4}{3}\lambda_1\left( u_r-\frac{u}{r}\right) + \lambda_2 \left( u_r+2\frac{u}{r}\right)=0 & {\rm for} & \ \ r=R(t), \label{103d}\\ & \dot R(t)=u(R(t), t) \ \ {\rm with} \ \ R(0)=R_0, \ \ u(0,t)=0, & & \label{103e}\\ & (\rho, u) = (\rho_0, u_0) & {\rm on } & \ \ (0, \ R_0), \label{103f} \end{align} \end{subequations} where $\mu=4\lambda_1/3+\lambda_2>0$ is the viscosity constant. \eqref{103c} and \eqref{103d} state that $r=R(t)$ is the vacuum free boundary at which the normal stress $\mathfrak{S}{\bf n}=0$ reduces to $$p-\frac{4}{3}\lambda_1\left( u_r-\frac{u}{r}\right) - \lambda_2 \left( u_r+2\frac{u}{r}\right)=0 \ \ {\rm for} \ \ r=R(t), \ \ t\ge 0;$$ \eqref{103e} describes that the free boundary issues from $r=R_0$ and moves with the fluid velocity, and the center of the symmetry does not move. The initial domain is taken to be a ball $\{0\le r\le R_0\}$, and the initial density is assumed to satisfy the following condition: \begin{equation}\label{156} \rho_0(r)>0 \ \ {\rm for} \ \ 0\le r<R_0 , \ \ \rho_0(R_0)=0 \ \ {\rm and} \ \ -\infty< \left(\rho_0^{\gamma-1}\right)_r <0 \ \ {\rm at} \ \ r=R_0; \end{equation} so \begin{equation}\label{physicalvacuum}\rho_0^{\gamma-1}(r) \sim R_0-r \ {\rm as~} r {\rm~ close~ to~} R_0, \end{equation} that is, the initial sound speed is $C^{\frac{1}{2}}$-H${\rm \ddot{o}}$lder continuous across the vacuum boundary. The unknowns here are $\rho,\ u$ and $R(t)$. The requirement \eqref{156} for the initial density near the vacuum boundary is motivated by that of the Lane-Emden solution, $\bar\rho$ (cf. \cite{ch,linss}), which solves \begin{equation}\label{le} \partial_r(\bar \rho^{\gamma})+ {4\pi}r^{-2} \bar{\rho} \int_0^r \bar \rho(s) s^2ds=0. \end{equation} The solutions to \eqref{le} can be characterized by the values of $\gamma$ (cf. \cite{linss}): for a given finite total mass $M > 0$, if $\gamma \in (6/5, 2)$, there exists at least one compactly supported solution. For $\gamma \in (4/3, 2)$, every solution is compactly supported and unique. If $\gamma= 6/5$, the unique solution admits an explicit expression, and it has infinite support. On the other hand, for $\gamma\in (1, 6/5)$, there are no solutions with finite total mass. For $\gamma>6/5$, let $\bar R$ be the radius of the stationary star given by the Lane-Emden solution; then it holds (cf.
\cite{linss,makino}) \begin{equation}\label{pvforle} \bar\rho^{\gamma-1}(r) \sim \bar R-r \ {\rm as~} r {\rm~ close~ to~} \bar R.\end{equation} \subsection{Motivations and goals} The problem of nonlinear asymptotic stability of Lane-Emden solutions is of fundamental importance in both astrophysics and the theory of nonlinear PDEs. It is believed by astrophysicists that Lane-Emden solutions are stable for $ {4}/{3}<\gamma<2$ (cf. \cite{ch,tokusky}) since they minimize the total energy among all the possible configurations. The main aim of this paper is to justify rigorously the precise sense of this stability. In fact, we prove that for the viscous gaseous star with $ {4}/{3}<\gamma<2$, the Lane-Emden solution is strongly stable in the sense that it is asymptotically nonlinearly stable. The first step for this purpose is to prove the global existence of strong solutions. However, due to the high degeneracy of system \eqref{103} caused by the behavior \eqref{156} near the vacuum boundary, it is a very challenging problem even for the local-in-time existence theory. Indeed, the local-in-time well-posedness of smooth solutions to vacuum free boundary problems with the behavior that the sound speed is $C^{ {1}/{2}}$-H$\ddot{\rm o}$lder continuous across vacuum boundaries was only established recently for compressible inviscid flows (cf. \cite{10, 10', jangmas, jm}) (see also \cite{LXZ} for a local-in-time well-posedness theory in a new functional space for the three-dimensional compressible Euler-Poisson equations in spherically symmetric motions). For the vacuum free boundary problem \eqref{103} of the compressible Navier-Stokes-Poisson equations featuring the behavior \eqref{156} near the vacuum boundary, a local-in-time well-posedness theory of strong solutions was established in \cite{jangnsp}. In order to obtain the nonlinear asymptotic stability of Lane-Emden solutions, it turns out that suitable estimates for higher order derivatives uniformly up to the vacuum boundary are necessary. Indeed, such estimates are essential to prove the convergence of the evolving vacuum boundary and the uniform convergence of the density to those of Lane-Emden solutions, in addition to the uniform convergence of the velocity. We show the global-in-time regularity of solutions when $4/3<\gamma<2$ capturing the behavior \eqref{physicalvacuum} (or \eqref{pvforle}) when the initial data are small perturbations of and have the same total mass as the stationary solution, $\bar\rho$, given by \eqref{le}. It should be remarked that the regularity estimates near boundaries are notoriously difficult. This is particularly so for the vacuum boundary problem \eqref{103} due to the high degeneracy caused by the singular behavior of \eqref{156} near vacuum states. Our nonlinear asymptotic stability results can be stated more precisely as follows. Suppose that the initial datum $(\rho_0, u_0, R_0)$ is a small perturbation of the Lane-Emden solution $(\bar \rho, 0, \bar R)$ in a suitable sense (see Theorem \ref{mainthm1}) and has the same total mass, $$\int_0^{R_0}r^2 \rho_0(r)dr= \int_0^{\bar R} r^2 \bar\rho(r)dr,$$ then there is a unique global-in-time strong solution $(\rho, u, R(t))$ $(0\le t<+\infty)$ to \eqref{103} which is regular uniformly up to the vacuum boundary $r=R(t)$.
Moreover, let $r(x, t)$ be the radius of the ball inside $B_{R(t)}({\bf 0})$ satisfying: \begin{equation}\label{ma1} r_t(x, t)=u(r(x, t), t) \ \ {\rm and} \ \ r(x, 0)=r_0(x) \ \ {\rm for} \ \ 0\le x\le \bar R, \end{equation} \begin{equation}\label{ma2} \int_0^{r_0(x)}s^2\rho_0(s)ds=\int_0^xs^2\bar\rho(s)ds \ \ {\rm for} \ 0\le x\le \bar R.\end{equation} Then \begin{equation}\label{ma4} \lim_{t\to \infty} \|\left(r(x, t)-x, \ \rho(r(x, t), t)-\bar\rho(x), \ u(r(x, t), t) \right)\|_{L^{\infty}_x ([0, \bar R])}= 0 \end{equation} with some detailed convergence rates. Notice that \eqref{ma1} means that the sphere $r=r(x, t)$ with the initial position $r=r_0(x)$ is moving with the fluid and \eqref{ma2} means that the initial mass inside the ball $B_{r_0(x)}({\bf 0})$ is the same as that of the Lane-Emden solution inside the ball $B_{x}({\bf 0})$ for $0\le x\le \bar R$. It follows from the conservation of mass that the mass inside the ball $B_{r(x, t)}({\bf 0})$ at the instant $t$ is the same as that of the Lane-Emden solution inside the ball $B_{x}({\bf 0})$ for $0\le x\le \bar R$. In particular, the vacuum boundary is given by $$ R(t)=r(\bar R, t).$$ The convergence of $r(x,t)$ to $x$ in \eqref{ma4} means that every spherical surface moving with the fluid converges to that inside the domain of the Lane-Emden solution enclosing the same mass; in particular, the evolving vacuum boundary $R(t)$ converges to the vacuum boundary $\bar R$ as time goes to infinity. This also gives the large time asymptotic convergence of every particle moving with the fluid since the motion is radial. Moreover, the convergence \eqref{ma4} means that the large time asymptotic states for the free boundary problem \eqref{103} are determined completely by the initial mass distribution and total mass. Besides the convergence \eqref{ma4}, we also establish convergence rates of higher-order norms involving derivatives, and show that the vacuum boundary $R(t)$ has the regularity of $W^{2, \infty}([0 ,\infty)) $ under a compatibility condition of the initial data with the boundary condition, which implies that the acceleration of the vacuum boundary is uniformly bounded for $t\in [0, \infty)$. (Indeed, one may check from the proof that every particle moving with the fluid has bounded acceleration for $t\in [0, \infty)$.) These results give a rather clear and complete characterization of the behavior of solutions both in large time and near the vacuum boundary. The results obtained in the present work are among the few results of global {\it strong} solutions to vacuum free boundary problems of compressible fluids capturing the singular behavior of \eqref{156}, which is difficult and challenging due to the degeneracy caused by the physical vacuum and the coordinate singularity at the center of symmetry. We overcome this difficulty by establishing higher-order estimates involving the second-order derivatives of the velocity field, together with decay estimates of lower-order norms. This is achieved by combining some new weighted nonlinear functionals (involving both lower-order and higher-order derivatives) and space-time weighted energy estimates.
The constructions of these weighted nonlinear functionals and space-time weights depend crucially on the structure of Lane-Emden solutions (in particular, the behavior \eqref{pvforle} near the vacuum boundary), the balance between the pressure and self-gravitation, and the dissipation of the viscosity. We sketch here the main ideas and methods used in this article. The original free boundary problem \eqref{103} is reduced to an initial boundary value problem on a fixed domain $x\in [0, \bar R]$ by the Lagrangian particle trajectory formulation \eqref{ma1} and \eqref{ma2} with $\bar R$ being the radius of the Lane-Emden solution, so that the domain of the Lane-Emden solution becomes the reference domain. In this formulation, essentially the basic unknown is the particle trajectory $r(x, t)$ defined by \eqref{ma1} and \eqref{ma2} (more precisely, the radius of each evolving surface inside the evolving domain, which is called the particle trajectory for simplicity here and from now on), by which the density and velocity are determined. For problem \eqref{103}, this formulation is preferred because one can use it to trace each particle in the evolving domain, in particular, the evolving vacuum boundary. (Indeed, the approach of using the Lagrangian particle trajectory formulation, i.e., using the flow map of the Eulerian velocity field, to reduce free boundary problems of compressible fluids set on time-dependent domains to fixed time-independent domains was first adopted in \cite{10}, and later on in \cite{10', jm, 17'}.) To make this strategy work, a crucial point is to obtain the positive lower and upper bounds of the derivative of the particle trajectory, which are derived by a pointwise estimate away from the center of symmetry, and an interior $L^2$-estimate of its second derivative. Various multipliers are applied to establish the decay estimates of lower-order norms and the regularity near the vacuum boundary. In order to obtain the higher-order estimates, we first study the problem obtained by differentiating the original problem in the tangential direction, and then get weighted estimates for the second derivatives of the velocity field using the viscosity term. However, due to the degeneracy of \eqref{156}, the dissipation of the viscosity alone is not enough for the global-in-time estimates, and we have to make full use of the balance between the pressure and gravitation. For this purpose, we decompose the gradient of the pressure into two parts: the first part is to balance the gravitation, and the second part is an anti-derivative of the viscosity along the particle trajectory with a degenerate weight to match the viscosity. The main ideas and strategy of establishing the decay estimates and high-order regularity estimates will be given in Section \ref{sec3.2}. \subsection{Review of related works} There have been extensive works on the studies of the Euler-Poisson and the Navier-Stokes-Poisson equations with vacuum, especially in recent years. We will concentrate on those closely related to the stability of vacuum dynamics. The stability problem has been important in the theory of gaseous stars, which has been studied extensively by astrophysicists (cf. \cite{ch,weinberg,lebovitz1}). The linear stability of Lane-Emden solutions was studied in \cite{linss}.
A conditional nonlinear Lyapunov type stability theory of stationary solutions for $\gamma> 4/3$ was established in \cite{rein} using a variational approach, by assuming the existence of global solutions of the Cauchy problem for the three-dimensional compressible Euler-Poisson equations (the same type of nonlinear stability results for rotating stars were given in \cite{luosmoller1,luosmoller2}). For $\gamma\in (6/5, 4/3)$, the nonlinear dynamical instability of Lane-Emden solutions was proved by \cite{17'} and \cite{jangtice} in the framework of free boundary problems for Euler-Poisson systems and Navier-Stokes-Poisson equations, respectively. A nonlinear instability for $\gamma= {6}/{5}$ was proved by \cite{jang65}. For $\gamma= {4}/{3}$, it was shown in \cite{DLYY} that a small perturbation can cause part of the mass to go off to infinity for inviscid flows. It should be noted that the stability result in \cite{rein} is in the framework of initial value problems in the entire $\mathbb{R}^3$-space and involves only a Lyapunov functional which is essentially equivalent to an $L^p$-norm of the difference of solutions, and the vacuum boundary cannot be traced. Another interesting work is on the vacuum free boundary problem of modified compressible Navier-Stokes-Poisson equations with spherical symmetry (cf. \cite{fangzhang1}), where the existence of a global weak solution was proved for a reduced initial boundary value problem after using the Lagrangian mass coordinates, under some constraints on the ratio of the coefficients of the shear viscosity and bulk viscosity. In contrast to the strong stability result in \eqref{ma4}, for the global weak solutions obtained in \cite{fangzhang1}, only the uniform convergence of the velocity $u(r, t)$ is proved, due to the lack of regularity near the vacuum boundary. The ideas and techniques developed in this paper can be applied to this modified compressible Navier-Stokes-Poisson system to obtain a strong stability result as in \eqref{ma4}. Indeed, our global-in-time regularity gives not only the decay estimates for the weighted norms $\|\bar{\rho}^{\gamma/2}(r_x(x, t)-1)\|_{L^{2}_x([0, \bar R])}$ and $ \|x v_x \|_{L^{2}_x([0, \bar R])} $ as in \cite{fangzhang1}, but also the decay estimates of the unweighted norms of $\| r_x(x, t)-1 \|_{L^{2}_x([0, \bar R])}$, $ \|v_x\|_{L^{\infty}_x([0, \bar R])}$ and some uniform estimates on the second derivatives valid up to the vacuum boundary, which are crucial to the nonlinear asymptotic stability for this modified model. Furthermore, our theory holds without the restrictions on the viscosity coefficients as in \cite{fangzhang1}. This will be reported in a forthcoming paper (cf. \cite{LXZ2}). The results obtained in the present work are for spherically symmetric motions. For the problem of general three dimensional perturbations, in addition to motions in the normal direction, one will have to estimate motions in the tangential direction. Besides this, the evolution of the geometry of the free surface will also have to be estimated. We believe the ideas and techniques developed in this paper will be useful to the study of the problem of general three dimensional perturbations. We conclude the introduction by noting that there are also other prior results on free boundary problems involving vacuum for compressible Navier-Stokes equations besides the ones aforementioned.
For one-dimensional motions, there are many results concerning global weak solutions to free boundary problems of the Navier-Stokes equations; one may refer to \cite{Okada,Okada3,LXY,fangzhang,JXZ,duan,YYZ,yangzhu,zhu} and references therein. As for spherically symmetric motions, global existence and stability of weak solutions were obtained in \cite{Okada1,OSM} for compressible Navier-Stokes equations for gases surrounding a solid ball (a hard core) without self-gravitation (see also \cite{Chengq} for a compressible heat-conducting flow). However, those results are restricted to cut-off domains excluding a neighborhood of the origin. It should be noted that for a modified system of Navier-Stokes equations, global existence of weak solutions with spherical symmetry containing the origin was established in \cite{GLX}, for which the density does not vanish on the boundary. For a class of free boundary problems of compressible Navier-Stokes-Poisson equations away from vacuum states, the reader may refer to \cite{94, 95} for the local-in-time well-posedness results and \cite{96} for linearized stability results of stationary solutions. \vskip 0.25cm The rest of the paper is organized as follows. In Section \ref{sec2}, we reduce the free boundary problem \eqref{103} to a fixed domain by using the Lagrangian formulation and state the global existence and nonlinear asymptotic stability theorems. The {\it a priori} estimates are derived in Section \ref{sec3}, whose main ideas and steps are outlined in Section \ref{sec3.2}. The global existence theorem is proved in Section \ref{sec3}. Section \ref{sec4} is devoted to the proof of the nonlinear asymptotic stability theorem in the original Eulerian coordinates, which is an easy consequence of the estimates obtained in Section \ref{sec3}. The local existence of strong solutions in our function space framework given in Section \ref{sec2} is proved in Appendix, Part I. In Appendix, Part II, a linearized analysis is given to illustrate some ideas for the original nonlinear problem. \section{Lagrangian formulation and main results}\label{sec2} \subsection{Lagrangian formulation} In this subsection, we adopt the Lagrangian particle trajectory formulation, as first used in \cite{10} for inviscid flows and later on in \cite{10', jm, 17'}, to reduce the original free boundary problem \eqref{103} to an initial boundary value problem on the fixed domain $x\in [0, \bar R]$. For this purpose, we first recall some properties of Lane-Emden solutions. For $\gamma\in(4/3, 2)$, it is known that for any given finite positive total mass, there exists a unique solution to equation \eqref{le} whose support is compact (cf. \cite{linss}). With a slight abuse of notation, $x$ will denote the distance from the origin for the Lane-Emden solution.
Therefore, for any $M\in(0,\infty)$, there exists a unique function $\bar\rho(x)$ such that \begin{equation}\label{lex1} \bar\rho_0:=\bar\rho(0)>0, \ \ \bar\rho(x)>0 \ \ {\rm for} \ \ x\in \left(0,\ \bar R\right), \ \ \bar\rho\left(\bar R\right)=0, \ \ M=\int_0^{\bar R} 4\pi \bar \rho(s) s^2ds ; \end{equation} \begin{equation*}\label{newle} -\infty<\bar\rho_x<0 \ \ {\rm for} \ \ x\in (0,\ \bar R) \ \ {\rm and} \ \ \bar\rho(x) \le \bar\rho_0 \ \ {\rm for} \ \ x\in \left(0,\ \bar R\right); \end{equation*} \begin{equation}\label{rhox} \left(\bar{\rho}^\gamma \right)_x=-x \phi \bar{\rho}, \ \ {\rm where} \ \ \phi:= x^{-3}\int_0^x 4\pi \bar\rho(s) s^2 ds \in \left[M/{\bar R}^3, \ 4\pi \bar\rho_0/3\right] ; \end{equation} for a certain finite positive constant $\bar R$ (indeed, $\bar R$ is determined by $M$ and $\gamma$). Note that $$ \left(\bar{\rho}^{\gamma-1} \right)_x=\gamma^{-1}(\gamma-1) {\bar \rho}^{-1} \left(\bar{\rho}^{\gamma} \right)_x = - \gamma^{-1}(\gamma-1) x\phi. $$ It then follows from \eqref{lex1} and \eqref{rhox} that $\bar\rho$ satisfies the physical vacuum condition, that is, $$ \bar{\rho}^{\gamma-1}(x) \sim \bar R- x \ \ {\rm for} \ \ x \ \ {\rm close \ to} \ \ \bar R. $$ More precisely, there exists a constant $C$ depending on $M$ and $\gamma$ such that \begin{equation}\label{phy} C^{-1} \left( \bar R- x \right) \le \bar{\rho}^{\gamma-1}(x) \le C \left( \bar R- x \right), \ \ x\in \left(0,\ \bar R\right). \end{equation} The particle trajectory Lagrangian formulation for \eqref{103} is given as follows. Let $x$ be the reference variable and define the Lagrangian variable $r(x, t)$ by \begin{equation}\label{Aug9-1} r_t(x, t)= u(r(x, t), t) \ \ {\rm for} \ \ t>0 \ \ {\rm and} \ \ r(x,0)=r_0(x), \ \ x\in I:=\left(0, \bar R\right) . \end{equation} Here $r_0(x)$ is the initial position, which maps $ \bar I \to \left [0, R_0\right]$ and satisfies \begin{equation}\label{rox} \int_0^{r_0(x)} \rho_0(s) s^2ds = \int_0^x \bar\rho(s) s^2 ds, \ \ x\in \bar I, \end{equation} so that \begin{equation}\label{choice} \rho_0(r_0(x))r_0^2(x)r_0'(x)=\bar\rho(x)x^2, \ x\in \bar I.\end{equation} (Indeed, \eqref{rox} means that the initial mass in the ball with radius $r_0(x)$ is the same as that of the Lane-Emden solution in the ball with radius $x$. Then smoothness of $r_0(x)$ at $x=\bar R$ is equivalent to the initial density $\rho_0$ having the same behavior near $R_0$ as that of $\bar\rho$ near $\bar R$.)
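For the reader's convenience, we record the one-line computation behind \eqref{choice}: differentiating \eqref{rox} with respect to $x$ and applying the chain rule to the upper limit $r_0(x)$ gives $$ \rho_0(r_0(x))\, r_0^2(x)\, r_0'(x)=\frac{d}{dx}\int_0^{r_0(x)} \rho_0(s) s^2\,ds = \frac{d}{dx}\int_0^{x} \bar\rho(s) s^2\,ds= \bar\rho(x)\, x^2, \ \ x\in \bar I, $$ which is exactly \eqref{choice}.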
The choice of $r_0$ can be described by \begin{equation}\label{r000} r_0(x)=\psi^{-1}(\xi(x)), \ \ 0\le x\le \bar R; \end{equation} where $\xi$ and $\psi$ are one-to-one mappings, defined by $$\xi: (0, \bar R) \to (0, M): \ x \mapsto \int_0^x s^2\bar \rho (s)d s \ \ {\rm and} \ \ \psi: (0, R_0) \to (0, M): \ z \mapsto \int_0^z s^2 \rho_0 (s)ds. $$ Moreover $r_0(x)$ is an increasing function and the initial total mass has to be the same as that for $\bar \rho$, that is, \begin{equation}\label{samemass} \int_0^{R_0}4\pi \rho_0(s) s^2 ds=\int_0^{r_0(\bar R)}4\pi \rho_0(s) s^2 ds = \int_0^{\bar R} 4\pi \bar \rho (s) s^2 ds=M, \end{equation} to ensure that $r_0$ is a diffeomorphism from $\bar I$ to $\left [0, R_0\right]$. In view of \eqref{103a}, we see \begin{equation}\label{masswithin} \int_0^{r(x, t)}\rho(s, t)s^2ds=\int_0^{r_0(x)} \rho_0(s) s^2ds, \ \ x\in I. \end{equation} Define the Lagrangian density and velocity respectively by $$f(x, t)=\rho(r(x, t), t) \ \ {\rm and} \ \ v(x, t)=u(r(x,t), t).$$ Then the Lagrangian version of \eqref{103a} and \eqref{103b} can be written on the reference domain $I$ as \begin{subequations}\label{new419} \begin{align} & (r^2f)_t +r^2f\frac{v_x}{r_x}=0 & {\rm in}& \ \ I\times (0, T],\label{new419a}\\ & f v_t+ \frac{ (f^{\gamma})_x}{ r_x}+ {4\pi f} {r^{-2}} \int_0^{r_0(x)} \rho_0(s) s^2ds = \frac{\mu}{r_x } \left(\frac{ (r^2v)_x}{r^2 r_x}\right)_x \ \ &{\rm in}& \ \ I\times (0, T]. \label{new419b} \end{align} \end{subequations} Solving \eqref{new419a} gives that $$f(x, t)r^2(x, t) r_x(x, t)= \rho_0(r_0(x)) r_0^2(x) r_{0x}(x), \ \ x\in I.$$ Therefore, $$ f(x, t)= \frac{x^2\bar \rho(x)}{r^2(x, t) r_x(x, t)} \ \ {\rm for} \ \ x\in I, $$ due to \eqref{choice}. So, \eqref{103} can be written on the reference domain $I=\left(0, \bar R\right)$ as \begin{subequations}\label{419} \begin{align} & \bar\rho\left( \frac{x}{r}\right)^2 v_t + \left[ \left(\frac{x^2}{r^2}\frac{\bar\rho}{ r_x}\right)^\gamma \right]_x + \frac{x^2}{r^4} \bar\rho \int_0^x 4\pi y^2\bar\rho(y) dy =\mu \left(\frac{ (r^2v)_x}{r^2 r_x}\right)_x & {\rm in} & \ I\times (0, T], \label{419a}\\ & v(0, t)=0, \ \ \mathfrak{B}(\bar R, t)=0 & {\rm on} & \ [0,T],\label{419b}\\ & (r,\ v)(x, 0) = \left(r_0(x), \ u_0(r_0(x)) \right) & {\rm on} & \ I \times \{t=0\}, \label{419c} \end{align} \end{subequations} where $\mathfrak{B}$ is the normal stress at the boundary, given by \begin{equation}\label{bdry1} \mathfrak{B}:=\frac{4}{3}\lambda_1\left(\frac{v_x}{r_x}-\frac{v}{r}\right) + \lambda_2 \left(\frac{v_x}{r_x}+2\frac{v}{r}\right) =\frac{4}{3}\lambda_1\frac{r}{r_x}\left(\frac{v}{r}\right)_x + \lambda_2 \frac{\left(r^2 v\right)_x}{r_x r^2}. \end{equation} Moreover, it can be derived from \eqref{Aug9-1}, \eqref{rox} and \eqref{419b} that \begin{equation}\label{Aug9-2} r(0,t)=r_0(0)+\int_0^t v(0,s) ds =0 \ \ {\rm on} \ \ [0, T]. \end{equation} {\bf Notation}.
Throughout the rest of this paper, $c$ and $C$ will be used to denote generic positive constants which are independent of time $t$ but may depend on $\gamma$, $\lambda_1$, $\lambda_2$, $M$ and the bounds of $\bar \rho$ such as $\bar\rho(0)$ and $\bar\rho(\bar R/2)$; and we will use the following notations: $$\int: =\int_{I}, \ \ \|\cdot\| :=\|\cdot\|_{L^2(I)}, \ \ \|\cdot\|_{L^p}:=\|\cdot\|_{L^p(I)} \ (p=1, \infty), \ \ {\rm and} \ \ \|\cdot\|_{H^s}:=\|\cdot\|_{H^s(I)} \ (s=1, 2) .$$ \subsection{Strong solutions and functionals}\label{sec2.2} A strong solution to problem \eqref{419} is defined as follows. \begin{defi}\label{definitionss} $v\in C\left([0, T]; H^2_{loc}([0, \bar R))\right)\cap C\left([0, T]; W^{1, \infty}(I)\right)$ with \begin{equation}\label{r} r(x, t)=r_0(x)+\int_0^t v(x, s)ds \ \ for \ \ (x, t)\in I\times [0, T] , \end{equation} satisfying the initial condition \eqref{419c} is called a strong solution of problem \eqref{419} in $[0, T] $, if\\ 1) $c_1\le r_x(x, t) \le c_2$, $(x, t)\in I\times [0, T]$, for some positive constants $c_1$ and $c_2$; \\ 2) $\bar\rho ^{ - {1}/{2}} \left[ ({r^2r_x} )^{-1}{(r^2 v)_x} \right]_x\in C([0, T]; L^2(I))$ and $\bar\rho ^{\gamma- {1}/{2}} \left(r_{xx }, \left(r/x\right)_x\right)\in C^1([0, T]; L^2(I))$;\\ 3) $\bar\rho^{ {1}/{2}} v \in C^1([0, T]; L^2(I))$;\\ 4) $v(0, t)=0$ and $\mathfrak{B}(\bar R, t)=0$ hold in the sense of $W^{1, \infty}$-trace and $H^1$-trace, respectively, for $t\in [0, T]$;\\ 5) \eqref{419a} holds for $ (x, t)\in I\times [0, T]$, a.e. \end{defi} \begin{rmk}\label{9.20} Let $(r,v)$ be a strong solution of problem \eqref{419} as defined in Definition \ref{definitionss}. Then it holds that for any $a\in (0 ,\bar R)$, \begin{align} & r\in C^1\left([0, T]; W^{1, \infty}(I)\right) \cap C^1 \left([0, T]; H^2\left([0,a]\right)\right), \ \ {r}/{x} \in C^1 \left([0, T]; H^1(I)\right), \label{hhreg1}\\ & v \in C \left([0, T]; W^{1, \infty}(I)\right) \cap C \left([0, T]; H^2\left([0,a]\right)\right), \ \ {v}/{x} \in C \left([0, T]; H^1(I)\right), \label{hhreg2}\\ & \left( v/r, \ {v_x}/{r_x}, \ \mathfrak{B} \right) \in C\left([0, T]; H^1(I)\right). \label{hhreg3} \end{align} The arguments for \eqref{hhreg1}-\eqref{hhreg3} go as follows. It follows from $c_1\le r_x(x, t) \le c_2$ and $r(0,t)=0$ that $c_1\le x^{-1} {r(x, t)} \le c_2$. As a consequence of the facts $v\in C\left([0, T]; W^{1, \infty}(I)\right)$ and $v(0,t)=0$, one has that $v/x\in C\left([0, T]; L^\infty(I)\right)$; which, together with \eqref{r} and $c_1\le r/x \le c_2$, gives $r/x\in C^1\left([0, T]; L^\infty(I)\right)$. Moreover, it follows from \eqref{r}, $\bar\rho^{\gamma- {1}/{2}} \left(r_{xx }, \left(r/x\right)_x\right)\in C^1([0, T]; L^2(I))$, and $\bar\rho(x) \ge \bar\rho (a)$ for $x\in [0, a]$ that \begin{equation} \left( r_{xx}, \ \left(r/x\right)_x \right) \in C^1\left([0, T]; L^2 \left([0,a]\right)\right) \ \ {\rm and} \ \ \left( v_{xx}, \ \left(v/x\right)_x \right) \in C\left([0, T]; L^2 \left([0,a]\right)\right). \end{equation} So, \eqref{hhreg1} and \eqref{hhreg2} hold, since $|(r/x)_x|\le a^{-1}(|r_x|+|r/x|)$ for $x\in [a, \bar R]$. It follows from $v/r=(v/x)(x/r)$, $c_1\le r/x \le c_2$, \eqref{hhreg1} and \eqref{hhreg2} that $v/r \in C\left([0, T]; H^1(I)\right)$.
Due to 2) of Definition \ref{definitionss}, it holds that $\left(v_x/r_x + 2 v/r \right)_x\in C([0, T]; L^2(I)) $. These imply $(v_x/r_x)_x\in C([0, T]; L^2(I))$. So, \eqref{hhreg3} holds. \end{rmk} In what follows, we give some remarks on the above definition of strong solutions to problem \eqref{419}. First of all, in order to make the transformation $x \mapsto r$ invertible so as to define particle trajectories, it is essential to require the positive lower and upper bounds for $r_x$, i.e., $1)$ of Definition \ref{definitionss}. The requirement $v\in C([0, +\infty); W^{1, \infty}(I))$ is also essential, because it is related to the well-definedness of the particle trajectories defined by \eqref{Aug9-1}, whose uniqueness for the given initial values $r_0(x)$ is not ensured without the Lipschitz continuity of $u$ in $r$; the bounds of $u_r$ and $v_x$ are related by the identity $u_r(r(x, t), t)=v_x(x, t)/r_x(x, t)$. The above definition of strong solutions guarantees that each term in equation $\eqref{419a}$ is in $ C([0, T]; L^2(I))$. More than this, each term multiplied by $\bar\rho^{-{1}/{2}}$ is in $ C([0, T]; L^2(I))$. Indeed, it is quite natural to require $\bar\rho^{{1}/{2}}v_t \in C([0, T]; L^2(I))$ rather than $\bar\rho v_t \in C([0, T]; L^2(I))$ from the kinetic energy point of view. The kinetic energy of the system is equivalent to $\|x\bar\rho^{{1}/{2}}v(\cdot, t)\|^2$, which is expected to be bounded and continuous in time. (Indeed, this can be justified by applying the multiplier $r^2v$ to equation \eqref{419a}.) By studying the problem obtained by differentiating the original one with respect to $t$, one may expect that $\|x\bar\rho^{ {1}/{2}}v_t(\cdot, t)\|$ is bounded and continuous in time. We can improve this to the boundedness and continuity in time of $\|\bar\rho^{{1}/{2}}v_t(\cdot, t)\|$ by using the viscosity, which will be shown later. This leads us naturally to require that each term in $\eqref{419a}$ multiplied by $\bar\rho^{-{1}/{2}}$ be in $ C([0, T]; L^2(I))$. To show the well-posedness of strong solutions to problem \eqref{419} defined in Definition \ref{definitionss}, we introduce the following higher-order functional: \begin{equation}\label{mathmarch} \mathfrak{E}(t)=\left\|(r_x-1, \ v_x)(\cdot,t)\right\|_{L^\infty}^2 + \left\| \bar\rho^{\gamma-{1}/{2}}(r_{xx},\ (r/x)_x)(\cdot, t) \right\|^2 + \left\| \bar\rho^{{1}/{2}} v_t(\cdot, t) \right\|^2, \ \ t\ge 0. \end{equation} We will prove the global existence and uniqueness of strong solutions satisfying $\mathfrak{E}(t)\le C\mathfrak{E}(0)$ $(t\ge 0)$ for some constant $C>0$ independent of $t$, together with some decay estimates as in $i)$ of Theorem \ref{mainthm}, provided that $\mathfrak{E}(0)$ is suitably small and the following compatibility condition of the initial data with the boundary conditions holds: \begin{equation}\label{compatibility} v(0, 0)=0 \ \ {\rm and} \ \ \mathfrak{B}(\bar R, 0)=0.
\end{equation} In order to gain further regularity of the strong solutions obtained via $\mathfrak{E}(t)$, we introduce the following functional: \begin{align} &\mathfrak{F}_\alpha(t)=\mathfrak{E}(t)+ \left\|\bar\rho^{(2\gamma-1-\alpha)/2} r_{xx}(\cdot, t) \right\|^2, \ \ \alpha\in [0, 2\gamma-1], \ \ t\ge 0.\notag \end{align} Besides the smallness of $\mathfrak{E}(0)$, if the initial data are assumed to satisfy the further regularity $\mathfrak{F}_\alpha(0)<\infty$ for $0<\alpha\le2\gamma-1$, we will prove further regularity and decay (beyond those stated in $i)$ of Theorem \ref{mainthm}) of strong solutions, with decay rates in various norms which may depend on $\alpha$, as shown in $ii)$ and $iii)$ of Theorem \ref{mainthm}. Indeed, the index $\alpha$ indicates the behavior of solutions near the vacuum boundary, which influences the decay rates of solutions. In particular, we will prove the regularity $v_{xx}(\cdot, t)\in L^2(I)$ for all $t\ge 0$ and the decay of $\|v_{xx}(\cdot, t)\|$, if the initial data satisfy the additional regularity $r_{xx}(\cdot, 0)\in L^2(I)$ (i.e., $\mathfrak{F}_{2\gamma-1}(0)<\infty$). Some remarks are in order on the smallness requirement for the functional $\mathfrak{E}(t)$. For problem \eqref{419}, $r(x, t)=x$ is the equilibrium solution. The basic stability requirement is the smallness of $r(x,t)-x$ for all time in a certain topology. The smallness of the quantity $r_x(x, t)-1$, the $x$-derivative of $r(x,t)-x$, in the $L^{\infty}$-norm ensures that $r_x(x, t)$ is bounded from below and above by positive constants. This also gives the smallness of $\|r/x-1\|_{L^{\infty}}$ and thus positive lower and upper bounds for $r/x$, due to the condition $r(0, t)=0$ (see \eqref{Aug9-2}). So, it is essential to obtain the smallness of $\|r_x-1\|_{L^{\infty}}$. Since $v=0$ is the equilibrium for problem \eqref{419}, it is required, from the stability point of view, that $\|v(\cdot, t)\|_{W^{1, \infty}(I)}$ be small for all $t>0$ if this holds true initially. It is crucial to derive the smallness of the $L^{\infty}$-bound for $v_x$, which also gives the bound for $v/x$ due to the condition that $v(0, t)=0$. Our basic strategy is to use the weighted $L^2$-energy method together with some pointwise estimates to bound $\|r_x-1\|_{L^{\infty}}$ and $\|v_x\|_{L^{\infty}}$. For example, we may bound $\|r_x-1\|_{L^{\infty}([0, \bar R/2])}$ by use of $\|r_{xx}\|_{L^2([0, \bar R/2])}$, which can be bounded by $\| \bar\rho^{\gamma-1/2} r_{xx} \|$. Therefore, the smallness of the functional $\mathfrak{E}(t)$ for all time is essential for our stability analysis of problem \eqref{419}. We give here some consequences of the smallness of $\mathfrak{E}(t)$ ($t\ge 0$), which ensure that each term in $\eqref{419a}$ multiplied by $\bar\rho^{-{1}/{2}}$ is in $L^2(I)$ ($t\ge 0$), and that the boundary conditions $v(0,t)=0$ and $\mathfrak{B}(\bar R, t)=0$ are achieved in the sense of $W^{1,\infty}$-trace and $H^1$-trace, respectively.
For $t\ge 0$ and $a\in (0, \bar R)$, it holds that \begin{align} & \left\|(r/x-1)(\cdot,t)\right\|_{L^\infty}^2 \le \left\|(r_x-1)(\cdot,t)\right\|_{L^\infty}^2 \le \mathfrak{E}(t), \label{9.21.1}\\ & 1/2 \le r(x,t)/x \le 3/2 \ \ {\rm and} \ \ 1/2 \le r_x(x,t) \le 3/2 , \ \ x\in I, \label{9.21.2}\\ & \left\|(v/x)(\cdot,t)\right\|_{L^\infty}^2 \le \left\|v_x(\cdot,t)\right\|_{L^\infty}^2 \le \mathfrak{E}(t) ,\label{9.21.3}\\ &\left\|\bar\rho ^{ - {1}/{2}} \left(v_x/r_x + 2 v/r \right)_x (\cdot, t) \right\|^2= \left\|\bar\rho ^{ - {1}/{2}} \left[ ({r^2r_x} )^{-1}{(r^2 v)_x} \right]_x (\cdot, t) \right\|^2 \le C\mathfrak{E}(t),\label{9.21.4}\\ & \left\| \bar\rho^{\gamma-{1}/{2}}(v_{xx}, \ (v/x)_x)(\cdot, t) \right\|^2 \le C\mathfrak{E}(t), \label{9.21.5}\\ & \left\|(v, r-x)(\cdot, t)\right\|_{H^2([0, a])}^2 \le C(a)\mathfrak{E}(t), \ \ \left\|\left(\frac{r}{x}-1, \frac{v}{x}, \frac{v}{r}, \frac{v_x}{r_x}, \mathfrak{B}\right)(\cdot, t)\right\|_{H^1}^2 \le C\mathfrak{E}(t),\label{9.21.6} \end{align} where $C$ and $C(a)$ are positive constants independent of $t$. The arguments for \eqref{9.21.1}-\eqref{9.21.6} go as follows. \eqref{9.21.1} and \eqref{9.21.3} follow from $r(0,t)=0$ and $v(0,t)=0$, respectively; \eqref{9.21.2} follows from the smallness of $\mathfrak{E}(t)$; \eqref{9.21.4} follows from \eqref{9.21.1}, \eqref{9.21.2}, the definition of $\mathfrak{E}(t)$ in \eqref{mathmarch}, and equation \eqref{419a}, which can be rewritten as $$\mu \left(\frac{ (r^2v)_x}{r^2 r_x}\right)_x = \bar\rho\left( \frac{x}{r}\right)^2 v_t + \bar\rho^\gamma \left[ \left(\frac{x^2}{r^2}\frac{1}{ r_x}\right)^\gamma \right]_x + \left[\left(\frac{x^2}{r^2}\frac{1}{ r_x}\right)^\gamma - \frac{x^4}{r^4} \right] \left(\bar{\rho}^\gamma\right)_x ; $$ \eqref{9.21.5} follows from \eqref{9.21.3}, \eqref{9.21.4}, the definition of $\mathfrak{E}(t)$ in \eqref{mathmarch}, and the estimate below: \begin{equation}\label{9.21.7} \left\|\bar\rho^{\gamma-\frac{1}{2}}\left(v_{xx}, \ \left(\frac{v}{x}\right)_x \right) \right\|^2 \le C \left\| \left(\frac{v_x}{r_x} + 2 \frac{v}{r}\right)_x \right\|^2+ C \left\|\left(v_x, \frac{v}{x}\right)\right\|^2_{L^\infty}\left\|\bar\rho^{\gamma-\frac{1}{2}}\left(r_{xx}, \ \left(\frac{r}{x}\right)_x \right) \right\|^2 . \end{equation} (Indeed, \eqref{9.21.7} follows from \eqref{tlg2}, \eqref{tlg5}, \eqref{9.21.2}, and \eqref{weightvxx}, which will be proved later.) By an argument similar to that for Remark \ref{9.20}, one can obtain \eqref{9.21.6}. \subsection{Main theorems and remarks} The first theorem of this paper is on the global existence and the regularity of strong solutions, which also gives the strong Lyapunov stability of the Lane-Emden solution in the functional $\mathfrak{E}(t)$: \begin{thm}\label{mainthm1} Let $\gamma\in(4/3,\ 2)$ and $\bar\rho$ be the Lane-Emden solution satisfying \eqref{lex1}-\eqref{rhox}. Assume that \eqref{compatibility} holds and the initial density $\rho_0$ satisfies \eqref{156} and \eqref{samemass}. There exists a constant $\bar\delta >0$ such that if $\mathfrak{E}(0)\le \bar\delta,$ then the problem \eqref{419} admits a unique strong solution in $I\times[0, \infty)$ with \begin{equation}\label{keyconclusion} \mathfrak{E}(t)\le C\mathfrak{E}(0), \ \ t\ge 0, \end{equation} for some constant $C$ independent of $t$.
Moreover, if $\mathfrak{F}_{2\gamma-1}(0)<\infty$ (i.e., $\|r_{xx} (\cdot, 0)\|<\infty$), then the strong solution obtained above satisfies the following further regularity estimates: \begin{equation}\label{Aug23-1} \|v_{xx}(\cdot, t)\|^2 \le C \mathfrak{F}_{2\gamma-1} (0) \left(1+ \mathfrak{F}_{2\gamma-1} (0) \right) \left(1+ \mathfrak{E} (0) \right), \ \ t\ge 0, \end{equation} for some constant $C$ independent of $t$; and \begin{equation}\label{Aug23-2} \|r_{xx}(\cdot, t)\|^2 \le C \mathfrak{F}_{2\gamma-1} (0) + C(T) \mathfrak{E}(0), \ \ t\in [0, T], \end{equation} for some constants $C$ independent of $t$ and $C(T)$ depending on $T$. \end{thm} For any $t\ge 0$, since $r_x(x, t)>0$ for $x\in \bar I$, $r(x, t)$ defines a diffeomorphism from the reference domain $\bar I$ to the changing domain $\{0\le r\le R(t)\}$ with the boundary \begin{equation}\label{vacuumboundary} R(t)=r\left(\bar R , t\right). \end{equation} It also induces a diffeomorphism from the initial domain, $\bar B_{R_0}(0)$, to the evolving domain, $\bar B_{R(t)}(0)$, for all $t\ge 0$: $${\bf x}\ne {\bf 0} \in \bar B_{R_0}(0)\to r\left(r_0^{-1}(|{\bf x}|), t\right)|{\bf x}|^{-1}{{\bf x}} \in \bar B_{R(t)}(0), $$ where $r_0^{-1}$ is the inverse map of $r_0$ defined in \eqref{r000}. Here $$\bar B_{R_0}(0):= \{{\bf x}\in \mathbb{R}^3: |{\bf x}|\le R_0\} \ \ {\rm and} \ \ \bar B_{R(t)}(0):=\{{\bf x}\in \mathbb{R}^3: |{\bf x}|\le R(t)\}. $$ Denote the inverse of the map $r(x, t)$ by $\mathcal{R}_t$ for $t\ge 0$, so that $$ {\rm if~} \ \ r=r(x, t)\ \ {\rm for ~} \ \ 0\le r\le R(t), \ \ {\rm then~} \ \ x=\mathcal{R}_t(r).$$ For the strong solution $(r, v)$ obtained in Theorem \ref{mainthm1}, we set, for $0\le r\le R(t)$ and $t\ge 0$, \begin{equation}\label{solution} \rho(r, t)=\frac{x^2\bar \rho(x)}{r^2(x, t) r_x(x, t)} \ \ {\rm and} \ \ u(r, t)=v(x, t) \ \ {\rm with} \ \ x=\mathcal{R}_t(r). \ \ \end{equation} Then the triple $(\rho(r,t), u(r,t), R(t))$ ($t\ge 0$) defines a global strong solution to the free boundary problem \eqref{103}. Furthermore, we have the strong nonlinear asymptotic stability of the Lane-Emden solution as follows. \begin{thm}\label{mainthm2} Under the assumptions of Theorem \ref{mainthm1}, the triple $(\rho, u, R(t))$ defined by \eqref{vacuumboundary} and \eqref{solution} is the unique global strong solution to the free boundary problem \eqref{0.1} satisfying $R\in W^{1, \infty}( [0, \ +\infty) ).$ Moreover, the solution satisfies the following estimates.
i) For any $0<\theta< {2(\gamma-1)}/({3\gamma}) $, there exists a positive constant $C(\theta)$ independent of $t$ such that for all $t\ge 0$, \begin{align} & \sup_{0\le x\le \bar R} |r(x, t)-x|\le C(\theta) (1+t)^{-\frac{\gamma-1}{\gamma}+\frac{\theta}{2}}\sqrt {\mathfrak{E}(0)}, \label{rr1} \\ & \sup_{0\le r\le R(t)} \left|u(r, t)\right| \le C(\theta) (1+t)^{-\frac{3\gamma-2}{4\gamma}+ \frac{\theta}{2} }\sqrt {\mathfrak{E}(0)}, \label{estthm1b} \\ & \sup_{0\le r\le R(t)} \left|\left(u_r, \ r^{-1} u\right)(r, t)\right| \le C(\theta) (1+t)^{-\frac{\gamma-1}{2\gamma}+ \frac{\theta}{2}}\sqrt {\mathfrak{E}(0)}, \label{estthm1b''} \\ & \sup_{0\le x\le \bar R }\left|\left(\bar\rho(x) \right)^{({3\gamma-6})/{4}} \left[\rho (r(x, t), t)-\bar\rho(x) \right]\right|\le C(\theta) (1+t)^{-\frac{ \gamma -1}{2\gamma}+ \frac{\theta}{2} } \sqrt {\mathfrak{E}(0)}. \label{estthm1d} \end{align} ii) Suppose that $\mathfrak{F}_\alpha(0)<\infty$ for some $\alpha\in (0, \gamma)$. Let $\theta$ be any constant satisfying $0<\theta< \min\left\{{2(\gamma-1)}/({3\gamma}), \ \ 2(\gamma-\alpha)/\gamma \right\}$. Set $\kappa = (1/\gamma) \min\{ {\alpha-(\gamma-1)} , \ {\gamma-1} \} -\theta$ when $\alpha\in (\gamma-1, \gamma)$, and $\kappa=0$ when $\alpha = \gamma-1$. Then there exists a positive constant $ C(\alpha,\theta)$ independent of $t$ such that for all $t\ge 0$, \begin{equation}\label{estthm1d8.23} \sup_{0\le x\le \bar R }\left|\left(\bar\rho(x) \right)^{({3\gamma-6}-\alpha)/{4}}\left[\rho (r(x, t), t)-\bar\rho(x) \right]\right|\le C(\alpha,\theta) (1+t)^{-\frac{ \gamma -1}{2\gamma}+ \frac{\theta}{2} } \sqrt {\mathfrak{F}_\alpha (0)}; \end{equation} and if $\alpha\in [\gamma-1, \gamma)$, \begin{align} &\sup_{0\le r\le R(t)} \left|u(r, t)\right|\le C(\alpha,\theta) (1+t)^{-\frac{8\gamma-5}{4\gamma}-\frac{\kappa}{4}+\frac{5}{4}\theta } \sqrt{ {\mathfrak{F}}_\alpha(0) + {\mathfrak{E}}(0) {\mathfrak{F}}_\alpha(0)}, \label{estthm2b} \\ & \sup_{0\le r\le R(t)} \left|(u_r, \ r^{-1} u)(r, t)\right|\le C(\alpha,\theta) (1+t)^{-({1}/{2})\min\{b_1, b_2\} } \sqrt{ {\mathfrak{F}}_\alpha(0) + {\mathfrak{E}}(0) {\mathfrak{F}}_\alpha(0)},\label{estthm2b'''} \\ &\sup_{0\le x\le \bar R}\left|\left(\bar\rho(x) \right)^{(\gamma-2)/2}\left[\rho (r(x, t), t)-\bar\rho(x) \right]\right| \notag \\ &\qquad \le C(\alpha,\theta)(1+t)^{-\frac{\kappa}{2}- \frac{2\gamma-1}{2\gamma}+\frac{\theta}{2}} \sqrt{{\mathfrak{F}}_\alpha(0) + {\mathfrak{E}}(0) {\mathfrak{F}}_\alpha(0)}. \label{estthm2d}\end{align} Here \begin{align} b_1=&\min\left\{ \max\left\{ \left(\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta\right)\frac{\alpha+1}{2\gamma-1+\alpha}, \ \frac{3 }{2}\kappa +\frac{2\gamma-1}{2\gamma}-\frac{ \theta}{2}\right\}, \right.\notag\\ & \left. \qquad \ \ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta \right\} + \frac{2\gamma-1}{\gamma} -\theta ,\label{newb1} \\ b_2 = & \min\left\{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta, \ \frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta\right\} + \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta.
\label{newb2} \end{align} iii) Furthermore, if $\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^2 +|v_t(\bar R , 0)|<\infty$, then $R\in W^{2, \infty}([0 , +\infty))$ and \begin{equation}\label{accelaration} |\ddot{R}(t)|\le |v_t(\bar R , 0)|+ C(\mathfrak{E}(0))^{1/4}\left((\mathfrak{E}(0))^{1/4} +\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^{\frac{1}{2}}\right), \ \ t\ge 0, \end{equation} for some constant $C$ independent of $t$. \end{thm} \begin{rmk} In $\mathfrak{E}(0)$, the term $\|\bar\rho^{{1}/{2}}v_t(\cdot, 0)\|$ can be given in terms of the initial data as follows: for $x\in I$, $$\bar\rho^{\frac{1}{2}} v_t (x, 0) = \bar\rho^{-\frac{1}{2}} \left( \frac{r}{x}\right)^2 \left\{ \mu \left(\frac{ (r^2v)_x}{r^2 r_x}\right)_x - \bar\rho^\gamma \left[ \left(\frac{x^2}{r^2}\frac{1}{ r_x}\right)^\gamma \right]_x - \left[\left(\frac{x^2}{r^2}\frac{1}{ r_x}\right)^\gamma - \frac{x^4}{r^4} \right] \left(\bar{\rho}^\gamma\right)_x \right\}(x,0) .$$ Indeed, this is equivalent to equation \eqref{419a} at $t=0$. \end{rmk} \begin{rmk}\label{rmk2'} The estimates in \eqref{estthm1d}, \eqref{estthm1d8.23} and \eqref{estthm2d} yield the uniform convergence, with rates, of the density of \eqref{0.1} to that of the Lane-Emden solution, both for large time and near the vacuum boundary, since $\gamma<2$. \end{rmk} \begin{rmk}\label{rmk3} The initial perturbation here includes three parts: the deviation of the initial domain from that of the Lane-Emden solution, the difference of the initial density from that of the Lane-Emden solution, and the velocity. Since the Lane-Emden solution is completely determined by the total mass $M$, our nonlinear asymptotic stability result shows that the time-asymptotic state of the free boundary problem is determined by the total mass, which is conserved in the time evolution. \end{rmk} \begin{rmk} The condition $\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^2 +|v_t(\bar R , 0)|<\infty$ in $iii)$ of Theorem \ref{mainthm2}, which ensures $R\in W^{2, \infty}([0 , +\infty))$ (uniform boundedness of the acceleration of the vacuum boundary), is a higher-order compatibility condition of the initial data with the vacuum boundary. Indeed, one may check from the proof that every particle moving with the fluid has bounded acceleration for $t\in [0, \infty)$ if it does so initially. \end{rmk} \section{Proof of main results}\label{sec3} \subsection{A theorem with detailed estimates} For the convenience of presentation, we set $\bar R=1$ and $I=(0, \bar R)=(0, 1).$ Indeed, we will prove the following results for the global strong solutions obtained in Theorem \ref{mainthm1}, which give not only the nonlinear asymptotic stability results stated in Theorem \ref{mainthm2}, but also the detailed behavior of the solutions both for large time and near the vacuum boundary and the origin. \begin{thm}\label{mainthm} Let $v$ be the global strong solution to the problem \eqref{419} with $r$ given by \eqref{r}, as obtained in Theorem \ref{mainthm1}. i) Let $\theta$ and $\delta$ be any constants satisfying $0<\theta< {2(\gamma-1)}/({3\gamma})$ and $ \delta\in (0, 1)$.
Then there exist positive constants $C(\theta)$ and $C(\theta, \delta)$ independent of $t$ such that for all $t\ge 0$, \begin{align}\label{estthm1} & (1+t)^{\frac{2(\gamma-1)}{\gamma}-{\theta}}\left\|(r-x)(\cdot,t)\right\|_{L^\infty}^2 +(1+t)^{ \frac{3\gamma-2}{2\gamma}-{\theta} }\left\|(v,xv_x)(\cdot,t)\right\|_{L^\infty}^2 \notag\\ & + (1+t)^{\frac{\gamma-1}{ \gamma}- \theta } \left\|\bar\rho^{({3\gamma-2})/{4}} \left(r_x-1, {r}/{x}-1 \right)(\cdot,t)\right\|_{L^\infty}^2+ (1+t)^{\frac{\gamma-1}{ \gamma}- \theta } \left\| \left(v_x, {v}/{x} \right)(\cdot,t)\right\|_{L^\infty}^2 \notag\\ & +(1+t)^{\frac{2\gamma-1}{\gamma}-\theta}\left( \left\|\left(x\bar\rho^{{1}/{2}}v_t,v,xv_x\right) (\cdot, t) \right\|^2 + \left\|\bar\rho^{{\gamma}/{2}}\left(r-x, xr_x-x \right)(\cdot, t) \right\|^2 \right)\notag\\ &+ (1+t)^{\frac{3(\gamma-1)}{\gamma}-{\theta}}\left\|(r-x)(\cdot,t)\right\|^2 +(1+t)^{\frac{\gamma-1}{\gamma}-\theta}\left\|\left(r_x-1, {r}/{x}-1, v_x, {v}/{x}, \bar\rho^{{1}/{2}}v_t\right)(\cdot, t) \right\|^2 \notag\\ & + \left\|\bar\rho^{\frac{\gamma\theta}{4}-\frac{\gamma-1}{2}}\left(r-x, xr_x-x\right)(\cdot, t) \right\|^2 \le C(\theta)\mathfrak{E}(0) \end{align} and \begin{align}\label{estthm1'} &(1+t)^{\frac{\gamma-1}{\gamma}-\theta}\ \left\|\left(r_x-1, {r}/{x}-1, v_x, {v}/{x}\right)(\cdot,t)\right\|_{H^1 \left(\left[0, \delta\right]\right)}^2 \le C(\theta, \delta) \mathfrak{E}(0). \end{align} ii) Suppose that $\mathfrak{F}_\alpha(0)<\infty$ for some $\alpha\in (0, \gamma)$. Let $\theta$ and $\delta$ be any constants satisfying $0<\theta< \min\left\{{2(\gamma-1)}/({3\gamma}), \ \ 2(\gamma-\alpha)/\gamma \right\}$ and $\delta\in (0, 1)$. Then there exist positive constants $C(\alpha)$, $C( \alpha, \theta)$ and $C(\alpha, \theta, \delta)$ such that for all $t\ge 0$, \begin{align} & \mathfrak{F}_\alpha(t) \le C(\alpha) \mathfrak{F}_\alpha(0), \label{} \\ &(1+t)^{({\gamma-1})/{ \gamma} - \theta} \left\|\bar\rho^{ ({3\gamma-2-\alpha})/4} (r_x-1, r/x-1)(\cdot,t)\right\|_{L^\infty}^2 \le C(\alpha,\theta)\mathfrak{F}_\alpha (0);\label{} \end{align} and if $\alpha\in [\gamma-1, \gamma)$, \begin{equation}\label{estthm2}\begin{split} &(1+t)^{ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta } \left\|\left( v_x, v/{x} , \bar\rho^{{1}/{2}} v_t \right)(\cdot,t)\right\|^2 + (1+t)^{\frac{8\gamma-5}{4\gamma}+\frac{\kappa}{4}-\frac{5}{4}\theta } \left\|v(\cdot,t)\right\|_{L^\infty}^2 \\ & + (1+t)^{\frac{1}{2} b_1 } \|xv_x(\cdot,t)\|^2_{L^\infty} + (1+t)^{\frac{1}{2}\min\{b_1, b_2\} } \left\| \left( v_x, v/x\right) (\cdot,t)\right\|_{L^\infty}^2 \\ & + (1+t)^{\frac{\kappa}{2}+ \frac{2\gamma-1}{2\gamma}-\frac{\theta}{2}} \left\|\bar\rho^{ \gamma/2 } (r_x-1, r/x-1)(\cdot,t)\right\|_{L^\infty}^2 \le C( \alpha, \theta)\widetilde{\mathfrak{F}}_\alpha(0), \end{split}\end{equation} \begin{equation}\label{estthm2'}\begin{split} & (1+t)^{\min\left\{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta, \ \frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta\right\}} \left\|\left(r_x-1, {r}/{x}-1, v_x, v/{x}\right)(\cdot,t)\right\|^2_{H^1([0,\delta])} \\ & + (1+t)^{ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta } \left\|\left( r_x-1, r/x-1 \right)(\cdot,t)\right\|_{L^2([0,\delta])}^2 \le C( \alpha, \theta, \delta)\widetilde{\mathfrak{F}}_\alpha(0).
\end{split}\end{equation} Here $\widetilde{\mathfrak{F}}_\alpha(0)= {\mathfrak{F}}_\alpha(0) + {\mathfrak{E}}(0) {\mathfrak{F}}_\alpha(0)$, $\kappa=0$ when $\alpha = \gamma-1$, $\kappa = (1/\gamma) \min\{ {\alpha-(\gamma-1)} , \ {\gamma-1} \} -\theta$ when $\alpha\in (\gamma-1, \gamma)$, and $b_1$ and $b_2$ are given by \eqref{newb1} and \eqref{newb2}. iii) Suppose that $\mathfrak{F}_\alpha(0)<\infty$ for some $\alpha\in [\gamma, 2\gamma-1]$. Let $\theta$ be any constant satisfying $0<\theta< {2(\gamma-1)}/({3\gamma})$. Then there exist positive constants $C$ and $C( \theta)$ such that for all $t\ge 0$, \begin{align} & \mathfrak{F}_\alpha (t) \le C \mathfrak{F}_\alpha (0) + C(\theta) (1+t)^{(\alpha-\gamma + \theta\gamma)/(\alpha-1)}\mathfrak{E}(0) ; \end{align} and if $\alpha=2\gamma-1$, \begin{align} & \left\| r_{xx}(\cdot, t)\right\|^2 \le C \mathfrak{F}_{2\gamma-1} (0) + C (\theta) (1+t)^{\frac{1}{2}+\frac{\gamma}{2\gamma-2}\theta} \mathfrak{E}(0), \label{finite}\\ &\left\|\left( v_{xx}, \ (v/x)_x \right)(\cdot, t)\right\|^2 \le C(\theta) (1+t)^{-\frac{7\gamma-6}{4\gamma} + 4 \theta } \mathfrak{F}_{2\gamma-1} (0) \left(1+ \mathfrak{F}_{2\gamma-1} (0) \right) \left(1+ \mathfrak{E} (0) \right) . \label{decayof2ndderivative} \end{align} iv) Suppose that $\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^2<\infty $. Then there exists a positive constant $C$ independent of $t$ such that for all $t\ge 0$, \begin{equation}\label{furthregularity} \|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, t)\|^2+\int_0^{\infty} \left\|(v_{ss}, x v_{ssx})(\cdot,s)\right\|^2 ds\le C \mathfrak{E}(0)+ C\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^2. \end{equation} \end{thm} \subsection{Main ideas and the structure of the proof}\label{sec3.2} The local existence and uniqueness of strong solutions to \eqref{419} on some time interval $[0, T_*]$ are given in Appendix, Part I, by using a finite difference method as in \cite{Okada,LiXY,LXY,Chengq}. In order to prove the global existence of strong solutions, we need to derive the uniform-in-time boundedness of the nonlinear functional $\mathfrak{E}(t)$ defined in \eqref{mathmarch}. Our basic strategy is to use the weighted $L^2$-energy method together with some pointwise estimates. For the weighted $L^2$-energy estimates, motivated by the linearized analysis for problem \eqref{419} shown in Appendix, Part II, a natural functional is $\mathcal{E}(t)$ given by \begin{align}\label{mathcalE} \mathcal{E}(t):=&\left\|(r-x, xr_x-x)(\cdot,t) \right\|^2 + \left\| \left(v, xv_x \right)(\cdot, t) \right\|^2 +\left\|(r_x-1)(\cdot,t)\right\|_{L^\infty\left([1/2,\ 1]\right)}^2 \notag\\ & + \left\| \bar\rho^{\gamma- {1}/{2}} (r_{xx}, (r/x)_x)(\cdot, t) \right\|^2 + \left\| \bar\rho^{1/2} v_t(\cdot, t) \right\|^2. \end{align} (Indeed, $\mathfrak{E}(t)$ and $\mathcal{E}(t)$ are equivalent under some assumptions, as will be shown in Lemma \ref{boundsforrv}.) However, much effort is needed to pass from the linear to the nonlinear analysis. An important step is to identify the {\it a priori} bounds with which the basic bootstrap argument can work. It turns out that the appropriate {\it a priori} assumption is that $|r_x(x, t)-1|$ and $|v_x|$ are suitably small. To this end, we use a bootstrap argument by making the following {\it a priori} assumptions.
Let $v$ be a strong solution to \eqref{419} on $[0 , T]$ with $$r(x, t)=r_0(x)+\int_0^t v(x, \tau)d\tau, \ \ (x,t)\in [0, \ 1]\times[0,T].$$ The basic {\it a priori} assumption is that there exist suitably small fixed constants $\epsilon_0\in (0, {1}/{2}]$ and $\epsilon_1\in (0, 1]$ such that \begin{equation}\label{aprirx} \left|r_x(x,t)-1\right| \le \epsilon_0 \ \ {\rm and} \ \ \left|v_x(x,t)\right| \le \epsilon_1 \ \ {\rm for} \ \ (x,t)\in I\times [0, T]. \end{equation} It follows from \eqref{aprirx} and the boundary condition $v(0,t)=0$ (so $r(0,t)=0$) that \begin{equation}\label{rx} \left|x^{-1}r(x,t) -1 \right| \le \left|r_x(x,t)-1\right|\le \epsilon_0 \ \ {\rm for} \ \ (x,t)\in [0, \ 1]\times[0,T], \end{equation} \begin{equation}\label{vx} \left|x^{-1} v(x,t) \right| \le \left|v_x(x,t)\right| \le \epsilon_1 \ \ {\rm for} \ \ (x,t)\in [0, \ 1]\times[0,T]. \end{equation} In particular, it holds that \begin{equation}\label{Liy} {1}/{2}\le r_x(x,t)\le {3}/{2} \ \ {\rm and} \ \ {1}/{2}\le x^{-1}{r}(x,t) \le {3}/{2} \ \ {\rm for} \ \ (x,t)\in I\times [0, T]. \end{equation} In order to close the argument, we need to bound the $L^{\infty}$-norms of $r_x-1$ and $v_x$ by the initial data. The usual approach for this is to use energy estimates and the Sobolev embedding, for example, for $r_x-1$, $$\|(r_x -1)(\cdot,t)\|^2_{L^{\infty} }\le C \left(\| (r_x -1)(\cdot, t)\|^2 +\|r_{xx}(\cdot, t)\|^2 \right).$$ However, due to the strong degeneracy of the equation near the vacuum boundary, the $L^2$-norm of $r_{xx}$ may grow in time (see \eqref{finite}); the uniform $L^2$-bound for $r_{xx}$ is valid only on an interval of $x$ away from the vacuum boundary $x=1$ (see \eqref{estthm1'}), say, $\|r_{xx}\|_{L^2([0, 1/2])}$. The term $\|\bar\rho^{\gamma- {1}/{2}}r_{xx}(\cdot, t)\|$ in the functional $\mathcal{E}(t)$ can only give the bound of $|r_x(x, t)-1|$ for $x$ away from the vacuum boundary. This is the reason why we include $\|(r_x -1)(\cdot, t)\|^2_{L^{\infty}([1/2,1])}$ in the functional $\mathcal{E}(t)$. Similar ideas apply to the estimate of $\|v_x(\cdot, t)\|_{L^{\infty}}$, which is routinely bounded by $\|v_x(\cdot, t)\|_{H^1}$ via energy methods in the region away from the vacuum boundary $x=1$, say $[0, {1}/{2}]$, since the system is not degenerate there. However, in the region near the vacuum boundary, say, $[{1}/{2}, 1]$, it is not an easy task to obtain the $L^2$-bound of the second derivative of $v$, $\|v_{xx}(\cdot, t)\|_{L^2([{1}/{2}, 1])}$, again due to the strong degeneracy of the system near the vacuum. Indeed, our idea to bound $v_x$ near the vacuum boundary is to use the $L^2$-norm of the viscosity term to obtain the $L^2$-bound for $(v_x/r_x)_x$, instead of that for $v_{xx}$, since $r_x$ has positive lower and upper bounds. The $L^2$-norm of the viscosity term is quite different from that of $v_{xx}$. This can be seen as follows: $$\mathcal{V}:=\left(\frac{ (r^2v)_x}{r^2 r_x}\right)_{x}=\left(\frac{v_x}{r_x}+2 \frac{v}{r}\right)_x=\frac{1}{r_x} v_{xx}-\frac{1}{r_x^2} v_x r_{xx} +2\left(\frac{v}{r}\right)_x.$$ Under the {\it a priori} assumption that $r_x$ is close to $1$, the difference between the viscosity term $\mathcal{V}$ and $v_{xx}$ involves the second derivative of $r$, $r_{xx}$, whose $L^2$-norm may grow in time. This growth may be balanced by the decay of the $L^{\infty}$-norm of $v_x$ so that the $L^2$-norm of $r_x^{-2}{v_xr_{xx}} $ is bounded.
It is possible to make an ansatz on the decay of the $L^{\infty}$-norm of $v_x$ and then close the argument; however, it is quite intricate to do so. On the level of the second derivatives, our strategy is to close the estimates by using the $L^2$-estimate of the viscosity term $\mathcal{V}$, instead of that of $v_{xx}$. It should be emphasized that the $L^2$-estimate for the viscosity term $\mathcal{V}$ is enough to close the estimates and to obtain the necessary bounds for the global existence of strong solutions and the well-definedness of the boundary condition, as explained in Section \ref{sec2.2}. After proving the global existence of strong solutions and the basic decay estimates, we can obtain the further regularity $ v_{xx}(\cdot, t) \in L^2(I)$ for all $t\ge 0$ and the decay of $\| v_{xx} (\cdot, t)\|$, provided the initial data satisfy the additional regularity $r_{xx}(\cdot, 0) \in L^2(I)$ (i.e., $\mathfrak{F}_{2\gamma-1}(0)<\infty$). Due to equation \eqref{viscosityequation}, the equivalent form of equation \eqref{419a}, the $L^2$-norm of the viscosity term $\mathcal{V}$ can be bounded by $C\mathcal{E}(t)$. (More precisely, $\|\bar\rho^{-1/2}\mathcal{V}(\cdot, t)\| \le C \mathcal{E}(t)$.) So, it suffices to show that the higher-order functional $\mathcal{E}(t)$ defined by \eqref{mathcalE} is bounded uniformly in time by the initial data, i.e., $$\mathcal{E}(t)\le C \mathcal{E}(0) \ \ {\rm for \ all} \ \ t\in [0, T].$$ We outline the main steps for this as follows. {\em Lower-order estimates}. The key elements in our analysis are the weighted estimates obtained by applying various multipliers to the following equation: $$ \bar\rho\left( \frac{x}{r}\right)^2 v_t + \left[ \left(\frac{x^2}{r^2}\frac{\bar\rho}{ r_x}\right)^\gamma \right]_x - \frac{x^4}{r^4} \left(\bar{\rho}^\gamma\right)_x = \mathfrak{B}_x + 4\lambda_1 \left(\frac{v}{r}\right)_x , $$ which is equivalent to $\eqref{419a}$. Here $\mathfrak{B}$ is defined in \eqref{bdry1}. In Lemma \ref{lem1}, we use the multiplier $r^2 v$ to obtain the bound for the basic energy $$\left\|x\bar\rho^{{1}/{2}} v (\cdot, t) \right\|^2 + \left\|x\bar\rho^{{\gamma}/{2}}\left( {r}/{x}-1, r_x-1 \right)(\cdot, t) \right\|^2 +\int_0^t \left\|(v,xv_x)(\cdot,s)\right\|^2ds . $$ The multiplier $r^3-x^3$ plays an important role in the proof of Lemma \ref{lem2}, in which a bound is obtained for $$ \left\|x\left( {r}/{x}-1, r_x-1 \right)(\cdot, t) \right\|^2 +\int_0^t \left\|x\bar\rho^{{\gamma}/{2}}\left( {r}/{x}-1, r_x-1 \right)(\cdot, s) \right\|^2 ds , $$ which refines the weighted estimate of $\|x\bar\rho^{{\gamma}/{2}}\left( {r}/{x}-1, r_x-1\right)\|$ obtained in the basic energy estimates. We also show the decay estimates for the basic energy in Lemma \ref{lem2} by establishing a bound for $$(1+t)\left(\left\|x\bar\rho^{{1}/{2}} v (\cdot, t) \right\|^2 + \left\|\bar\rho^{{\gamma}/{2}}\left({r}-x, xr_x-x \right)(\cdot, t) \right\|^2\right) +\int_0^t (1+s) \left\|(v,xv_x)(\cdot,s)\right\|^2ds. $$ With those estimates, we are able to bound $|r_x-1|$ away from the origin in Lemma \ref{lem3}, by noting that $\mathfrak{B}$ can be written as the time derivative of a function, so that one can integrate equation \eqref{nsp1} with respect to both $x$ and $t$ to get the desired estimates; here the monotonicity of the Lane-Emden density plays an important role.
Adopting the multiplier $r^2v_t$, a bound for $$(1+t)\left(\left\|x\bar\rho^{{1}/{2}} v_t (\cdot, t) \right\|^2 + \left\| \left(v, xv_x \right)(\cdot, t) \right\|^2\right) +\int_0^t (1+s) \left\|(v_t,xv_{tx})(\cdot,s)\right\|^2ds $$ is given in Lemma \ref{lem4} by studying the time-differentiated problem of \eqref{419}. Further decay estimates and regularity are given in Lemma \ref{lem51}, which is important to the derivation of the decay of $\|r-x\|_{L^{\infty} }$ in \eqref{estthm1}. This in particular implies the convergence of the evolving boundary $r=R(t)$ to that of the Lane-Emden stationary solution. Lemma \ref{lem51} also shows that the rates of time decay in various norms depend on $\gamma$, reflecting the balance between the pressure and self-gravitation, and the dissipation of the viscosity. These further decay estimates are derived from the following two multipliers: \begin{equation}\label{9.22.2} \int_0^x \bar\rho^{-\beta}(y)(r^3-y^3)_ydy \ \ {\rm and} \ \ \int_0^x \bar\rho^{-\beta}(y)(r^2 v)_ydy \ \ {\rm for} \ \ 0<\beta<\gamma-1. \end{equation} It should be noted that the first multiplier in \eqref{9.22.2} is motivated by virial equations in the study of stellar dynamics and equilibria (cf. \cite{lebovitz2,tokusky}). To the best of our knowledge, those multipliers have not been used in the previous literature. {\em Higher-order estimates}. To get the $L^2$-estimate of the viscosity term $\mathcal{V}$, we write it as $$ \mathcal{V} = \mathcal{G}_{xt}, \ \ {\rm where} \ \ \mathcal{G} : = \ln r_x + 2 \ln \left(\frac{r}{x}\right), $$ and rewrite $\eqref{419a}$, by virtue of \eqref{rhox}, as \begin{equation}\label{viscosityequation}\mu \mathcal{G}_{xt} +\gamma \left(\frac{x^2 \bar{\rho}}{r^2 r_x } \right)^{\gamma } \mathcal{G}_x = \frac{x^2}{r^2} \bar{\rho} v_t - \left[ \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } -\left(\frac{x }{r } \right)^{4} \right] x \phi \bar\rho , \ {\rm where} \ \phi(x)= x^{-3}\int_0^x 4\pi \bar\rho(s) s^2 ds.\end{equation} This form is advantageous for the second-derivative (in $x$) estimates here. Indeed, the interplay among the viscosity, the pressure and the gravitational force can be seen easily. The gradient of the pressure is decomposed as follows: $$\left[ \bar\rho^{\gamma}\left(\frac{x^2 }{r^2 r_x } \right)^{\gamma }\right]_x=-\left(\frac{x^2 }{r^2 r_x } \right)^{\gamma }x \phi \bar\rho-\gamma \left(\frac{x^2 \bar{\rho}}{r^2 r_x } \right)^{\gamma } \mathcal{G}_x.$$ The first part of this decomposition is used to balance the gravitational force, and the second part is the $t$-antiderivative of the viscosity $\mathcal{G}_{xt}$, multiplied by a weight which is equivalent to $\bar\rho^{\gamma} $. This weight is degenerate on the boundary, but strictly positive in the interior. The degeneracy near the vacuum boundary is one of the main obstacles in the higher-order estimates, which is overcome by choosing suitable weights and multipliers, as we will outline below, together with a delicate use of the Hardy and weighted Sobolev inequalities. (The method of using Hardy and weighted Sobolev inequalities to build regularity was first adopted for the physical vacuum problem for inviscid flows in \cite{10}, and later on in \cite{10',jm,17'}.)
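For the reader's convenience, we record the one-line computation behind the identity $\mathcal{V}=\mathcal{G}_{xt}$ used above: since $r_t=v$ by \eqref{r}, one has $$ \mathcal{G}_t=\frac{r_{xt}}{r_x}+2\,\frac{r_t}{r}=\frac{v_x}{r_x}+2\,\frac{v}{r}=\frac{(r^2v)_x}{r^2 r_x}, $$ and differentiating in $x$ gives $\mathcal{G}_{xt}=\left(\frac{v_x}{r_x}+2\frac{v}{r}\right)_x=\mathcal{V}$.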
Indeed, in terms of $\mathcal{G}$, the principal part of \eqref{viscosityequation} is $$\mu \mathcal{G}_{xt} +\gamma \left(\frac{x^2}{r^2 r_x } \right)^{\gamma } \bar{\rho}^\gamma \mathcal{G}_x,$$ which is linear in $\mathcal{G}_x$ and has a degenerate damping. This structure leads to desirable estimates on $\mathcal{G}$ and its derivatives. With the lower-order estimates obtained already, we can derive in Lemma \ref{lem5} the uniform bound for $\left\| \bar\rho^{\gamma- {1}/{2}} (r_{xx}, (r/x)_x ) (\cdot, t) \right\|$ (due to the bound for $\left\| \bar\rho^{\gamma- {1}/{2}} \mathcal{G}_x (\cdot, t) \right\|$ and \eqref{weightrxx}) and the decay estimates for $ \left\| \bar\rho^{ {1}/{2}} v_t(\cdot, t) \right\|$. This completes the proof of the uniform-in-time bounds for the higher-order energy functional $\mathcal{E}(t)$, which also verifies the {\it a priori} assumptions \eqref{rx} and \eqref{vx} due to the equivalence of $\mathcal{E}(t)$ and $\mathfrak{E}(t)$ shown in Lemma \ref{boundsforrv}; consequently, the global existence of the strong solution is obtained. With the decay estimates for the lower-order norms in Lemma \ref{lem51} and the higher-order estimates in Lemma \ref{lem5}, we prove the decay estimates of $\left\|\left(r_x-1, \ {r}/{x}-1\right)(\cdot,t)\right\| $, $\left\|(v, \ v_x)(\cdot,t)\right\|_{L^\infty}$ and $ \left\|(r-x)(\cdot,t)\right\|_{L^\infty}$ in Lemma \ref{lem312}, with which part $i)$ of Theorem \ref{mainthm} is proved. The second part of the higher-order estimates will be given in Section \ref{sec3.4.3}, in which faster decay estimates are obtained under the assumption of the finiteness of $\mathfrak{F}_\alpha(0)$, $\alpha\in (0, \gamma)$. A key ingredient in the proof is to use the new multiplier $x^2\bar\rho^{2\gamma-2} \mathfrak{P}_t$, where $$\mathfrak{P}(x,t)=\gamma \left(\frac{x^2 \bar{\rho}}{r^2 r_x } \right)^{\gamma } \mathcal{G}_x + \left[ \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } -\left(\frac{x }{r } \right)^{4} \right] x \phi \bar\rho,$$ the sum of the gradient of the pressure and the gravitational force. The proof of $ii)$ of Theorem \ref{mainthm} consists of Lemmas \ref{lem8.19}-\ref{lem44}. Parts $iii)$ and $iv)$ of Theorem \ref{mainthm} on the further regularity of solutions are proved in Section \ref{sec3.4.4}. In particular, the bound for $\|r_{xx}(\cdot,t)\|$ and the decay of $\|v_{xx}(\cdot,t)\|$ are given in Lemma \ref{lem10}. \begin{rmk} The insight for the above estimates and the basic decay rates can be gained from the linearized analysis for problem \eqref{419}, which is shown in Appendix, Part II. The linearized analysis gives the ideas for the basic decay rates. The further decay rates of various norms in Theorem \ref{mainthm} (based on Lemma \ref{lem51}) are obtained by applying various multipliers, reflecting the fact that the asymptotic stability mechanism is due to the balance between the pressure and self-gravitation, and the dissipation of the viscosity; the decay rates also depend on the behavior of the initial data near the vacuum boundary. In particular, for the multipliers in \eqref{9.22.2}, we choose $\beta<\gamma-1$ so that $\int \bar\rho^{-\beta}dx<\infty$, due to the physical vacuum behavior that $\bar\rho(x)\sim (1 -x)^{{1}/({\gamma-1})}$ near the vacuum boundary.
This has an effect on the decay rates of various norms given in Lemma \ref{lem51}, which depend on $\gamma$, and indicates that the physical vacuum behavior may be the reason for the dependence of the decay rates on $\gamma$. \end{rmk} In the rest of this article, we will frequently use the following weighted Sobolev embedding and a general version of the Hardy inequality, whose proofs can be found in \cite{KM}, to build the regularity. This idea was first used in \cite{10} for inviscid flows, and later on in \cite{10',jm,17'}. \begin{lem}\label{lemebedding} {\rm (weighted Sobolev embedding)} Let $d$ denote the distance function to the boundary $\partial I$. Then the weighted Sobolev space $H^1_d(I)$, given by $$ H^1_d(I) := \left\{d F\in L^2(I): \ \ \int_I d^2 \left( |F|^2 + |F_x|^2 \right)dx<\infty\right\},$$ satisfies the embedding \begin{equation}\label{sobolev} H^1_d(I)\hookrightarrow L^2(I). \end{equation} \end{lem} \begin{lem}\label{hardy} {\rm (Hardy inequality)} Let $k>1$ be a given real number and let $g$ be a function satisfying $$ \int_0^{1/2} x^k\left(g^2 + g_x^2\right) dx < \infty. $$ Then it holds that \begin{equation}\label{hardyorigin} \int_0^{1/2} x^{k-2} g^2 dx \le c \int_0^{1/2} x^k \left( g^2 + g_x^2 \right) dx, \end{equation} where $c$ is a generic constant independent of $g$. \end{lem} As a consequence of Lemma \ref{hardy} (applied after the change of variables $x\mapsto 1-x$), one has \begin{equation}\label{hardybdry} \int_{1/2}^{1} (1-x)^{k-2} g^2 dx \le c \int_{1/2}^{1} (1-x)^k \left( g^2 + g_x^2 \right) dx, \end{equation} provided that the right-hand side is finite. \subsection{Lower-order estimates}\label{sec3.3} In this and the next subsections, we derive the {\it a priori} estimates for the strong solution $(r,v)$ on the time interval $[0, T]$ defined in Definition \ref{definitionss}, under the assumptions \eqref{rx} and \eqref{vx}. We start with the lower-order estimates in this subsection, for which we rewrite equation \eqref{419a} as \begin{equation}\label{nsp1}\begin{split} & \bar\rho\left( \frac{x}{r}\right)^2 v_t + \left[ \left(\frac{x^2}{r^2}\frac{\bar\rho}{ r_x}\right)^\gamma \right]_x - \frac{x^4}{r^4} \left(\bar{\rho}^\gamma\right)_x = \mathfrak{B}_x + 4\lambda_1 \left(\frac{v}{r}\right)_x . \end{split} \end{equation} Here $\mathfrak{B}$ is defined in \eqref{bdry1}. Recall the boundary conditions \eqref{419b} and \eqref{lex1}: \begin{equation}\label{Aug7bdry} \bar\rho(1)=0, \ \ v(0,t)=0 \ \ {\rm and} \ \ \mathfrak{B}(1,t)=0. \end{equation} First, we estimate the basic energy, for which the condition $\gamma>4/3$ is crucial. \begin{lem}\label{lem1} Suppose that \eqref{rx} holds for a suitably small positive number $\epsilon_0$. Then, \begin{equation}\label{lem1est}\begin{split} & \left\|x\bar\rho^{\frac{1}{2}} v (\cdot, t) \right\|^2 + (3\gamma -4) \left\|x\bar\rho^{\frac{\gamma}{2}}\left(\frac{r}{x}-1, r_x-1 \right)(\cdot, t) \right\|^2 + \sigma \int_0^t \left\|\left(v, xv_x \right) (\cdot,s) \right\|^2 ds \\ \le &c \left( \left\|x\bar\rho^{\frac{1}{2}} v(\cdot, 0) \right\|^2 + \left\|x \bar\rho^{\frac{\gamma}{2}}\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 \right), \ \ \ \ 0\le t\le T, \end{split} \end{equation} where $\sigma=\min\left\{2\lambda_1/3,\ \lambda_2 \right\}$. \end{lem} {\em Proof}.
Multiplying equation \eqref{nsp1} by $r^2 v$ and integrating the product with respect to the spatial variable, we have, using integration by parts and the boundary condition \eqref{Aug7bdry}, that \begin{equation}\label{hz1}\begin{split} \frac{d}{dt}\int \tilde{\eta}(x,t) dx = - \int \mathfrak{B} \left(r^2 v\right)_x dx + 4 \lambda_1 \int r^2 v \left(\frac{v}{r}\right)_x dx , \end{split} \end{equation} where \begin{equation}\label{hheta} \tilde{\eta}(x,t):=\frac{1}{2} x^2 \bar{\rho} v^2 + x^2\bar{\rho}^\gamma\left[\frac{1}{\gamma-1}\left(\frac{x}{r}\right)^{2\gamma-2}\left(\frac{1}{r_x}\right)^{\gamma-1} +\left(\frac{x}{r}\right)^{2}r_x - 4 \frac{x}{r} \right]. \end{equation} By the Taylor expansion, the quantity $[\cdot]$ in $\tilde\eta$ can be rewritten as $$ \frac{4-3\gamma}{\gamma-1} + (2-\gamma)\left( \frac{r}{x} - r_x \right)^2 + \frac{3\gamma-4}{2}\left[ 2\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] +\widetilde{Q} , $$ where $\widetilde{Q}$ represents the cubic terms, which can be bounded by $$ |\widetilde{Q}|\le c \left( \left|r_x-1\right|^3 + \left| \frac{r}{x}-1\right|^3 \right) \le c \epsilon_0 \left( \left|r_x-1\right|^2 + \left| \frac{r}{x}-1\right|^2 \right),$$ due to \eqref{Liy} and \eqref{rx}. This implies that for $\gamma\in ( {4}/{3} ,2]$, $$ \frac{1}{\gamma-1}\left(\frac{x}{r}\right)^{2\gamma-2}\left(\frac{1}{r_x}\right)^{\gamma-1} +\left(\frac{x}{r}\right)^{2}r_x - 4 \frac{x}{r} \ge \frac{4-3\gamma}{\gamma-1} + \frac{3\gamma-4}{4} \left[2\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right], $$ provided that $\epsilon_0$ is less than a constant depending on $3\gamma-4$. Set \begin{equation}\label{etadefn} {\eta}(x,t):=\tilde{\eta}(x,t)- \frac{4-3\gamma}{\gamma-1}x^2\bar{\rho}^\gamma. \end{equation} Then the above calculations imply that \begin{equation}\label{etalower} {\eta}(x,t) \ge \frac{1}{2} x^2 \bar{\rho} v^2 + \frac{3\gamma-4}{4} x^2\bar{\rho}^\gamma\left[2\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right], \end{equation} \begin{equation}\label{etaup} {\eta}(x,t) \le \frac{1}{2} x^2 \bar{\rho} v^2 + c x^2\bar{\rho}^\gamma \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right]. \end{equation} Clearly, \eqref{hz1} and \eqref{bdry1} show that \begin{equation}\label{hz0}\begin{split} \frac{d}{dt}\int {\eta}(x,t) dx = &-\frac{4}{3}\lambda_1 \int \frac{r^4}{r_x}\left|\left(\frac{v}{r}\right)_x\right|^2 dx -\lambda_2 \int \frac{1}{r_xr^2} \left| \left(r^2 v\right)_x \right|^2dx. \end{split} \end{equation} Note that $$ \frac{r^4}{r_x}\left|\left(\frac{v}{r}\right)_x\right|^2 = \frac{r^2}{r_x}v_x^2+r_x v^2 - 2rvv_x \ \ {\rm and} \ \ \ \ \frac{\left| \left(r^2 v\right)_x \right|^2}{r_xr^2} =\frac{r^2}{r_x}v_x^2+4r_x v^2 +4 rvv_x. $$ We obtain \begin{equation}\label{heg1}\begin{split} \frac{d}{dt} \int {\eta}(x,t) dx \le - 3\sigma \int \left[ \frac{r^2}{r_x}v_x^2+ 2r_x v^2 \right]dx, \end{split} \end{equation} where $\sigma=\min\left\{2\lambda_1/3,\ \lambda_2 \right\}$; and \begin{equation}\label{eg1}\begin{split} \int {\eta}(x,t) dx + 3\sigma \int_0^t\int \left[ \frac{r^2}{r_x}v_x^2+ 2r_x v^2 \right]dxds \le \int {\eta}(x,0) dx, \ \ t\in [0,T].
\end{split} \end{equation}
This, together with \eqref{etalower}, \eqref{etaup} and \eqref{Liy}, implies \eqref{lem1est}. $\Box$

In the following lemma, we use the multiplier $r^3-x^3$, which is motivated by the virial equations in the study of stellar dynamics and equilibria (cf. \cite{lebovitz2,tokusky}), to refine the weighted estimate of $\|x\bar\rho^{{\gamma}/{2}}\left({r}/{x}-1, r_x-1\right)\|$ obtained in Lemma \ref{lem1} by improving the estimates near the vacuum, and to give the decay estimates for the basic energy.
\begin{lem}\label{lem2} Suppose that \eqref{rx} holds for a suitably small positive number $\epsilon_0$. Then,
\begin{equation}\label{lem2est}\begin{split} &\sigma \left\|x\left(\frac{r}{x}-1, r_x-1\right)(\cdot, t) \right\|^2 + (3\gamma-4)\int_0^t \left\|x\bar\rho^{\frac{\gamma}{2}}\left(\frac{r}{x}-1, r_x-1\right)(\cdot, s) \right\|^2 ds \\ \le & C \left(\left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 + \left\|x\bar\rho^{\frac{1}{2}} v(\cdot, 0)\right\|^2 \right), \ \ \ \ 0\le t\le T, \end{split} \end{equation}
and
\begin{equation}\label{lem2est'}\begin{split} &(1+t) \left\|x\bar\rho^{\frac{1}{2}} v (\cdot, t) \right\|^2 + (3\gamma-4) (1+t) \left\|x\bar\rho^{\frac{\gamma}{2}}\left(\frac{r}{x}-1, r_x-1 \right)(\cdot, t) \right\|^2 \\ & + (1+t)^{\frac{2\gamma-2}{\gamma}} \left\|(r-x)(\cdot, t) \right\|^2 + \sigma \int_0^t (1+s) \left\|\left(v, xv_x \right) (\cdot,s) \right\|^2 ds \\ & \le C \left(\left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 + \left\|x\bar\rho^{\frac{1}{2}} v(\cdot, 0)\right\|^2 \right), \ \ \ \ 0\le t\le T, \end{split} \end{equation}
where $\sigma=\min\left\{2\lambda_1/3,\ \lambda_2 \right\}$. \end{lem}
{\em Proof}. The proof consists of two steps. With the basic energy estimate obtained in the previous lemma, we can achieve the estimate for $\|x(r_x-1, {r}/{x}-1)\|$ by a moment argument in Step 1. It should be pointed out that the double integral obtained in Step 1 will play a crucial role in the derivation of the higher-order estimates later. In Step 2, we show the time decay estimates for the basic energy.

{\em Step 1}. Multiplying \eqref{nsp1} by $r^3-x^3$ and integrating the resulting equation with respect to the spatial variable, we have, with the help of integration by parts and the boundary condition \eqref{Aug7bdry}, that
\begin{equation*}\label{decAug4}\begin{split} &\int \bar{\rho}^\gamma \left\{ \left[\frac{x^4}{r^4} \left(r^3-x^3\right)\right]_x - \left(\frac{x^2}{r^2 r_x} \right)^\gamma \left(r^3-x^3\right)_x \right\} dx \\ = & - \int \left[ \mathfrak{B} \left(r^3-x^3\right)_x - 4\lambda_1 \left(\frac{v}{r}\right)_x \left(r^3-x^3\right) \right] dx -\int x^3 \bar\rho v_t \left( \frac{r}{x}-\frac{x^2}{r^2}\right)dx .
\end{split} \end{equation*} Notice that \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} &\mathfrak{B} \left(r^3-x^3\right)_x - 4\lambda_1 \left(\frac{v}{r}\right)_x \left(r^3-x^3\right) = 3\mathfrak{B} \left(r^2 r_x-x^2\right) - 4\lambda_1\left(\frac{v_x}{r}-\frac{vr_x}{r^2}\right)\left(r^3-x^3\right) \\ =& 4\lambda_1 x^2\left(\frac{v}{r}-\frac{v_x}{r_x}+\frac{xv_x}{r} -\frac{xvr_x}{r^2}\right) + 3\lambda_2 x^2 \left( \frac{r^2}{x^2} v_x + 2 \frac{r v}{x^2 }r_x - \frac{v_x}{r_x} -2\frac{v}{r}\right)\\ =&4\lambda_1 x^2\left[\ln\left(\frac{r}{x r_x}\right) +\frac{xr_x}{r} -1\right]_t + 3\lambda_2 x^2 \left[ \frac{r^2}{x^2}r_x - \ln \left(\frac{r^2}{x^2}r_x \right) -1\right]_t; \end{split} \end{equation*} which implies $$ \int \left[ \mathfrak{B} \left(r^3-x^3\right)_x - 4\lambda_1 \left(\frac{v}{r}\right)_x \left(r^3-x^3\right) \right] dx = \frac{d}{dt} \int x^2\left[ 4\lambda_1 \Phi_1\left(\frac{r}{x r_x}\right) + 3\lambda_2 \Phi_2 \left(\frac{r^2}{x^2}r_x \right) \right]dx, $$ where \begin{equation}\lambdabel{phi12}\Phi_1(z):=\ln z + z^{-1} -1 \ \ {\rm and} \ \ \Phi_2(z):=z-\ln z -1.\end{equation} Notice also that $$ \int x^3 \begin{equation}tar\rho v_t \left( \frac{r}{x}-\frac{x^2}{r^2}\right)dx =\frac{d}{dt}\int x^3 \begin{equation}tar\rho v \left( \frac{r}{x}-\frac{x^2}{r^2}\right)dx - \int x^2 \begin{equation}tar\rho v^2 \left( 1+2 \frac{x^3}{r^3}\right)dx. $$ Then, we set \begin{equation}\lambdabel{toto6} \eta_0:= x^2 \tilde{\eta}_0 +x^3 \begin{equation}tar\rho v \left( \frac{r}{x}-\frac{x^2}{r^2}\right) \ \ {\rm and} \ \ \tilde{\eta}_0 := 4\lambda_1 \Phi_1\left(\frac{r}{x r_x}\right) + 3\lambda_2 \Phi_2 \left(\frac{r^2}{x^2}r_x \right), \end{equation} and obtain \begin{equation}gin{align}\lambdabel{toto7} &\frac{d}{dt}\int \eta_0 (x, t)dx+\int \begin{equation}tar{\rho}^\gamma \left\{ \left[\frac{x^4}{r^4} \left(r^3-x^3\right)\right]_x - \left(\frac{x^2}{r^2 r_x} \right)^\gamma \left(r^3-x^3\right)_x \right\} dx \notag\\ &= \int x^2 \begin{equation}tar\rho v^2 \left( 1+2 \frac{x^3}{r^3}\right)dx. \end{align} Noting that the quantity $\{\cdot\}$ on the left-hand side of \eqref{toto7} can be rewritten as $$ x^{2}\left[3\left(\frac{x^2}{r^2r_x}\right)^{\gamma} -3\left(\frac{x^2}{r^2r_x}\right)^{ \gamma-1 } - \left(\frac{x}{r}\right)^2r_x +4\left(\frac{x}{r}\right)^5 r_x - 7\left(\frac{x}{r}\right)^4 + 4 \frac{x}{r}\right], $$ we can then show, using a similar way as to the derivation of \eqref{etalower}, that \begin{equation}gin{align}\lambdabel{dec1} &\int \begin{equation}tar{\rho}^\gamma \left\{ \left[\frac{x^4}{r^4} \left(r^3-x^3\right)\right]_x - \left(\frac{x^2}{r^2 r_x} \right)^\gamma \left(r^3-x^3\right)_x \right\} dx \notag \\ &\ge \frac{3(3\gamma-4)}{2} \int x^2\begin{equation}tar{\rho}^\gamma\left[2\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx , \end{align} when \eqref{rx} holds for a small $\epsilon_0$. It follows from \eqref{toto7} and \eqref{dec1} that \begin{equation}\lambdabel{bye} \frac{d}{dt} \int \eta_0(x,t) dx + \frac{3(3\gamma-4)}{4} \int x^2\begin{equation}tar{\rho}^\gamma\left[2\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx \le C \int v^2 dx. \end{equation} This implies, with the aid of \eqref{eg1} and \eqref{Liy}, that \begin{equation}gin{equation}\lambdabel{newegrx}\begin{equation}gin{split} &\int \eta_0(x,t) dx + \frac{3(3\gamma-4)}{4}\int_0^t \int x^2\begin{equation}tar{\rho}^\gamma\left[2\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds \\ \le & C\int (\eta+\eta_0)(x,0) dx. 
\end{split} \end{equation} It remains to analyze $\tilde{\eta}_0$. By the Taylor expansion and \eqref{rx}, one may get that \begin{equation}gin{equation*}\lambdabel{eta0lower}\begin{equation}gin{split} \tilde{\eta}_0 \ge \frac{1}{4} \left[ 4\lambda_1 \left(\frac{r}{x r_x}-1\right)^2 + 3\lambda_2 \left(\frac{r^2}{x^2}r_x -1 \right)^2 \right] \ge \frac{3}{4} \sigma \left[2 \left(\frac{r}{x r_x}-1\right)^2 + \left(\frac{r^2}{x^2}r_x -1 \right)^2 \right], \end{split}\end{equation*} where $\sigma=\min\left\{2\lambda_1/3,\ \lambda_2 \right\}$; and \begin{equation}gin{equation}\lambdabel{eta0up}\begin{equation}gin{split} \tilde{\eta}_0\le 2 \left[ 4\lambda_1 \left(\frac{r}{x r_x}-1\right)^2 + 3\lambda_2 \left(\frac{r^2}{x^2}r_x -1 \right)^2 \right]\le C \left[\left(\frac{r}{x}-1\right)^2 +(r_x-1)^2 \right]; \end{split}\end{equation} provided that $\epsilon_0$ in \eqref{rx} is suitably small. Notice that $$ \left(\frac{r}{x}-1\right)^2 \le \left(\frac{r^3}{x^3}-1\right)^2 = \left(\frac{r}{x r_x} \frac{r^2}{x^2}r_x -1\right)^2 \le C \left[\left(\frac{r}{x r_x}-1\right)^2 + \left(\frac{r^2}{x^2}r_x -1 \right)^2 \right] \le C \sigma^{-1} \tilde \eta_0 $$ and also $$ \left(r_x-1\right)^2 \le C \tilde \eta_0 + C\left(\frac{r}{x}-1\right)^2 \le C \sigma^{-1} \tilde \eta_0. $$ We then achieve, with the help of \eqref{newegrx} and \eqref{eg1}, that \begin{equation}gin{equation}\lambdabel{we}\begin{equation}gin{split} \sigma \int x^2\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2\right] dx + (3\gamma-4)\int_0^t \int x^2\begin{equation}tar{\rho}^\gamma\left[ \left(\frac{r}{x}-1\right)^2 \right. \\ \left. + \left(r_x-1\right)^2 \right] dxds \le C\int (\eta+\eta_0)(x,0) dx . \end{split} \end{equation} This, together with \eqref{etaup} and \eqref{eta0up}, implies \eqref{lem2est}. {\em Step 2}. We are ready to show the time decay of the basic energy. Let $\eta$ be given by \eqref{etadefn}. It follows from \eqref{heg1} that $$ (1+t) \int \eta(x,t)dx + 3 \sigma \int_0^t (1+s) \int \left(\frac{r^2}{r_x} v_{x}^2 + 2 r_x v^2 \right) dx ds \le \int \eta(x,0)dx + \int_0^t \int \eta(x,s)dx ds. $$ In view of \eqref{etaup}, \eqref{lem1est} and \eqref{lem2est}, one has that \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} \int_0^t \int \eta(x,s)dx ds \le & C \int_0^t \int v^2 dxds + C\int_0^t \int x^2\begin{equation}tar{\rho}^\gamma \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right]dxds \\ \le & C \int (\eta_0 + \eta)(x,0)dx. \end{split} \end{equation*} So, it holds that \begin{equation}gin{equation}\lambdabel{5-0}\begin{equation}gin{split} (1+t) \int \eta(x,t)dx + 3 \sigma \int_0^t (1+s) \int \left(\frac{r^2}{r_x} v_{x}^2 + 2 r_x v^2 \right) dx ds \le C \int (\eta_0 + \eta)(x,0)dx. \end{split} \end{equation} This, together with \eqref{etaup} and \eqref{eta0up}, implies \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} &(1+t) \left\|x\begin{equation}tar\rho^{\frac{1}{2}} v (\cdot, t) \right\|^2 + (3\gamma-4) (1+t) \left\|x\begin{equation}tar\rho^{\frac{\gamma}{2}}\left(\frac{r}{x}-1, r_x-1 \right)(\cdot, t) \right\|^2 \\ & + \sigma \int_0^t (1+s) \left\|\left(v, xv_x \right) (\cdot,s) \right\|^2 ds \le C \left(\left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 + \left\|x\begin{equation}tar\rho^{\frac{1}{2}} v(\cdot, 0)\right\|^2 \right). 
\end{split} \end{equation*}
Since $x\bar\rho^{\gamma-1}$ is equivalent to the distance function ${\rm dist}(x, \partial I)$, it then follows from the Sobolev embedding \eqref{sobolev}, \eqref{phy} and the H${\rm \ddot{o}}$lder inequality that
\begin{align}\label{rminusx} &\int(r-x)^2(x,t)dx \le \int x^2 \bar\rho^{2(\gamma-1)}\left((r-x)^2+( r_x-1)^2 \right)(x,t)dx\notag \\ \le & \left(\int x^2 \left((r-x)^2+( r_x-1)^2 \right)(x, t) dx\right)^{\frac{2-\gamma}{\gamma}} \left( \int x^2 \bar\rho^{\gamma}\left((r-x)^2+( r_x-1)^2 \right)dx \right)^{\frac{2\gamma-2}{\gamma}}\notag\\ \le & C(1+t)^{-\frac{2\gamma-2}{\gamma}}\left(\left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 + \left\|x\bar\rho^{\frac{1}{2}} v(\cdot, 0)\right\|^2 \right). \end{align}
This finishes the proof of \eqref{lem2est'}. $\Box$

With the estimates obtained so far, we are able to derive a pointwise bound for $\left|r/{x}-1\right|$ and $\left|r_x-1\right|$ away from the origin, by realizing that equation \eqref{nsp1} can be integrated with respect to both $x$ and $t$. It should be noted that the monotonicity of the Lane-Emden density, which decreases outward in the radial direction, plays an important role in this estimate.
\begin{lem}\label{lem3} Let $I_2=[1/2,1]$. For a suitably small constant $\epsilon_0$ in \eqref{rx}, it holds that for $(x,t)\in I_2\times [0,T]$,
\begin{equation}\label{lem3est}\begin{split} & \left|x^{-1} {r(x,t)} -1\right|+\left|r_x(x,t)-1\right| \\ \le & C \left(\left\|x\bar\rho^{\frac{1}{2}} v(\cdot, 0) \right\| + \left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|+ \left\|r_{0x}-1\right\|_{L^{\infty}\left( I_2\right)}\right). \end{split} \end{equation}
\end{lem}
{\em Proof}. The proof consists of two steps.

{\em Step 1 (bound for $ {r}/{x}-1$)}. Notice that
$$ x(r-x)^2=\int_0^x \left[y(r(y,t)-y)^2\right]_y dy \le \left\|r-x\right\|^2 + 2 \left\|r-x\right\|\left\|x(r_x-1)\right\|. $$
This, together with \eqref{lem2est}, yields that for $x\in I_2$,
\begin{equation}\label{we1}\begin{split} \left|\frac{r}{x}-1\right|^2 \le 8 x(r-x)^2 \le & C\left(\left\|x\bar\rho^{\frac{1}{2}} v(\cdot, 0) \right\|^2 + \left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2\right). \end{split} \end{equation}

{\em Step 2 (bound for $r_x-1$)}.
Integrating equation \eqref{nsp1} over $[x, 1]$ and using the boundary condition \eqref{Aug7bdry}, one gets \begin{equation}gin{equation}\lambdabel{ec1}\begin{equation}gin{split} & \int_x^1 \begin{equation}tar\rho\left( \frac{y}{r}\right)^2 v_t dy - \left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma - \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy = - \mathfrak{B}+ 4\lambda_1 \int_x^1 \left(\frac{v}{r}\right)_ydy ; \end{split} \end{equation} where $\mathfrak{B}$, defined by \eqref{bdry1}, can be rewritten as $$\mathfrak{B}= \mu\left( \ln r_x\right)_t -\left(\frac{4}{3}\lambda_1-2\lambda_2\right) \left( \ln r\right)_t.$$ So, \eqref{ec1} is equivalent to \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} \mu \left( \ln r_x\right)_t = & \left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma - \left( \int_x^1 \begin{equation}tar\rho \frac{y^2}{r^2} v dy \right)_t -2 \int_x^1 \begin{equation}tar\rho\frac{ y^2 }{r^3} v^2 dy + \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy \\ &+ \left(\frac{4}{3}\lambda_1-2\lambda_2\right) \left( \ln r\right)_t+ 4\lambda_1 \int_x^1 \left( \ln r\right)_{yt} dy . \end{split} \end{equation*} Integrate it with respect to the temporal variable to obtain $$ \mu\ln \left(\frac{r_x}{r_{0x}}\right) = \int_0^t \left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma ds + \mathfrak{L} , $$ where \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} \mathfrak{L} =& - \left. \int_x^1 \begin{equation}tar\rho \frac{y^2}{r^2} v dy \right|_0^t -2 \int_0^t \int_x^1 \begin{equation}tar\rho\frac{ y^2 }{r^3} v^2 dyds + \int_0^t \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \\ & + \left(\frac{4}{3}\lambda_1-2\lambda_2\right) \ln \left(\frac{r}{r_0}\right)+ 4\lambda_1 \ln \left(\frac{r(1,t) }{r_0(1)}\frac{r_0(x) }{r(x,t)}\right) ; \end{split} \end{equation*} which implies that \begin{equation}gin{equation}\lambdabel{ec2}\begin{equation}gin{split} r_x = r_{0x} \exp\left\{ \frac{1}{\mu} \mathfrak{A} \right\} \exp\left\{ \frac{1}{\mu} \mathfrak{L} \right\} , \ \ {\rm where} \ \ \mathfrak{A}= \int_0^t \left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma ds . 
\end{split} \end{equation} On the other hand, direct calculations show, by virtue of \eqref{ec2}, that $$\mathfrak{A}_t=\left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma =\left(\frac{x^2}{r^2} \frac{\begin{equation}tar\rho}{r_{0x}} \right)^\gamma \exp\left\{ - \frac{\gamma}{\mu} \mathfrak{A}\right\} \exp\left\{ -\frac{\gamma}{\mu} \mathfrak{L} \right\},$$ so that $$ \exp\left\{ \frac{\gamma}{\mu} \mathfrak{A}\right\} =1+ \int_0^t \frac{\gamma}{\mu}\left(\frac{x^2}{r^2} \frac{\begin{equation}tar\rho}{r_{0x}} \right)^\gamma \exp\left\{ -\frac{\gamma}{\mu} \mathfrak{L} \right\} d\thetau.$$ It then follows from \eqref{ec2} that \begin{equation}gin{equation}\lambdabel{ec3}\begin{equation}gin{split} r_x =&r_{0x} \left[1+ \int_0^t \frac{\gamma}{\mu}\left(\frac{x^2}{r^2} \frac{\begin{equation}tar\rho}{r_{0x}} \right)^\gamma \exp\left\{ -\frac{\gamma}{\mu} \mathfrak{L}_1 \right\} \exp\left\{ -\frac{\gamma}{\mu} \int_0^\thetau \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\}d\thetau\right]^{1/\gamma} \\ & \times \exp\left\{ \frac{1}{\mu} \mathfrak{L}_1 \right\}\exp\left\{ \frac{1}{\mu} \int_0^t \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\}, \end{split} \end{equation} where $$\mathfrak{L}_1=\mathfrak{L}- \int_0^t \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds. $$ In view of \eqref{lem1est} and \eqref{we1}, one can get that for $x\ge 1/2$, \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} \left|\mathfrak{L}_1\right| \le &C \left(\int x^2 \begin{equation}tar\rho v^2 dx \int \begin{equation}tar\rho dx\right)^{1/2} + C \left(\int x^2 \begin{equation}tar\rho u^2_0(r_0(x)) dx \int \begin{equation}tar\rho dx\right)^{1/2}\\ &+C \int_0^t \int v^2 dyds +C \left\|\frac{r}{x}-1\right\|_{L^\infty\left(I_2\times[0,T]\right)} \le C\tilde{\mathfrak{e}} \end{split} \end{equation*} where $$\tilde{\mathfrak{e}}= \left\|x\begin{equation}tar\rho^{\frac{1}{2}} v(\cdot, 0) \right\| + \left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\| .$$ It therefore follows from \eqref{ec3} and \eqref{we1} that \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} r_x \le &r_{0x} \left[1+ (1+ C\mathfrak{e} ) \int_0^t \frac{\gamma}{\mu}{\begin{equation}tar\rho}^\gamma \exp\left\{ -\frac{\gamma}{\mu} \int_0^\thetau \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\}d\thetau\right]^{1/\gamma} \\ & \times(1+ C\mathfrak{e} ) \exp\left\{ \frac{1}{\mu} \int_0^t \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\}\\ \le &r_{0x}\left(1+C\mathfrak{e} \right) \left[\exp\left\{ \frac{\gamma}{\mu} \int_0^t \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\} \right.\\ &\left.+ \left(1+C\mathfrak{e} \right)\int_0^t \frac{\gamma}{\mu}\begin{equation}tar\rho^\gamma \exp\left\{ \frac{\gamma}{\mu} \int_\thetau^t \int_x^1 \frac{y^4}{r^4} \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\} d\thetau\right]^{1/\gamma}, \end{split}\end{equation*} where $\mathfrak{e}= \tilde{\mathfrak{e}}+\left\|r_{0x}-1\right\|_{L^{\infty}(I_2)}.$ Observe that $\left(\begin{equation}tar{\rho}^\gamma\right)_x<0$. 
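Before turning to the pointwise bound for $r_x$, we note the following elementary observation, which is implicit in the derivation below; it uses only \eqref{we1}, the boundary condition $\bar\rho(1)=0$ in \eqref{Aug7bdry}, and the monotonicity $\left(\bar{\rho}^\gamma\right)_x<0$. Since \eqref{we1} gives $\left|\frac{r}{y}-1\right|\le C\mathfrak{e}$ for $y\in I_2$, one has $\frac{y^4}{r^4}\ge 1-C\mathfrak{e}$ there (assuming, as we may in the argument below, that $C\mathfrak{e}\le 1/2$), and hence, for $x\in I_2$,
$$ \int_x^1 \frac{y^4}{r^4}\left(\bar{\rho}^\gamma\right)_y dy \le (1-C\mathfrak{e})\int_x^1 \left(\bar{\rho}^\gamma\right)_y dy = -(1-C\mathfrak{e})\,\bar{\rho}^\gamma(x). $$
After integration in time, this is precisely what produces the factors $\exp\left\{-\frac{\gamma}{\mu}(1-C\mathfrak{e})\bar{\rho}^\gamma t\right\}$ and $\exp\left\{-\frac{\gamma}{\mu}(1-C\mathfrak{e})\bar{\rho}^\gamma (t-\tau)\right\}$ in the next chain of inequalities.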
So, one can derive from \eqref{we1} that \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} r_x \le &r_{0x}\left(1+C\mathfrak{e} \right) \left[\exp\left\{ \frac{\gamma}{\mu} \int_0^t \int_x^1 \left(1-C\mathfrak{e} \right) \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\} \right.\\ &\left.+ \left(1+C\mathfrak{e} \right)\int_0^t \frac{\gamma}{\mu}\begin{equation}tar\rho^\gamma \exp\left\{ \frac{\gamma}{\mu} \int_\thetau^t \int_x^1 \left(1-C\mathfrak{e} \right) \left(\begin{equation}tar{\rho}^\gamma\right)_y dy ds \right\} d\thetau\right]^{1/\gamma}\\ \le& r_{0x}\left(1+C\mathfrak{e} \right) \left[\exp\left\{- \frac{\gamma}{\mu} (1-C\mathfrak{e}) \begin{equation}tar{\rho}^\gamma t \right\} \right.\\ &\left.+ \left(1+C\mathfrak{e} \right)\int_0^t \frac{\gamma}{\mu}\begin{equation}tar\rho^\gamma \exp\left\{- \frac{\gamma}{\mu} (1-C\mathfrak{e}) \begin{equation}tar{\rho}^\gamma (t-\thetau) \right\} d\thetau\right]^{1/\gamma} \\ \le& r_{0x}\left(1+C\mathfrak{e} \right) \left[\exp\left\{- \frac{\gamma}{\mu} (1-C\mathfrak{e}) \begin{equation}tar{\rho}^\gamma t \right\} + \frac{1+C\mathfrak{e}}{1-C\mathfrak{e} }\left. \exp\left\{- \frac{\gamma}{\mu} (1-C\mathfrak{e}) \begin{equation}tar{\rho}^\gamma (t-\thetau) \right\}\right|_{\thetau=0 }^t\right]^{1/\gamma} \\ \le& r_{0x}\left(1+C\mathfrak{e} \right) \left\{\exp\left\{- \frac{\gamma}{\mu} (1-C\mathfrak{e}) \begin{equation}tar{\rho}^\gamma t \right\} \left[1-\frac{1+C\mathfrak{e} }{1-C\mathfrak{e}}\right]+ \frac{1+C\mathfrak{e} }{1-C\mathfrak{e}}\right\}^{1/\gamma} \le r_{0x} \left( 1+C\mathfrak{e} \right). \end{split} \end{equation*} Similarly, $r_x\ge r_{0x} \left( 1-C\mathfrak{e} \right). $ These two estimates, together with \eqref{we1}, imply \eqref{lem3est}. $\Box$ The following lemma gives the decay estimates for the weighted norms of both the time and spatial derivatives of $v$. \begin{equation}gin{lem}\lambdabel{lem4} Let \eqref{rx} and \eqref{vx} be true. Then it holds that, for $0\le t\le T$, \begin{equation}gin{equation}\lambdabel{lem4est}\begin{equation}gin{split} &(1+t) \left\|\left(x\begin{equation}tar\rho^{\frac{1}{2}} v_t, v, x v_x \right)(\cdot, t) \right\|^2 + \int_0^t (1+s) \left\|\left(v_t, xv_{tx}, \right) (\cdot,s) \right\|^2 ds \\ & \le C \left( \left\|\left(x\begin{equation}tar\rho^{\frac{1}{2}} v_t, v, x v_x \right)(\cdot, 0) \right\|^2 + \left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 \right). \end{split} \end{equation} \end{lem} {\em Proof}. Multiplying equation \eqref{nsp1} by $r^2$ and differentiating the resulting equation with respect to $t$, we obtain \begin{equation}gin{equation}\lambdabel{nsp1t}\begin{equation}gin{split} &\begin{equation}tar\rho x^2 v_{tt} -\gamma r^2 \left[ \left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma \left(2\frac{v}{r}+\frac{v_x}{r_x}\right) \right]_{x} + 2 rv \left[ \left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma \right]_{x} +2 \frac{x^4}{r^3} v \left(\begin{equation}tar{\rho}^\gamma\right)_x \\ = & r^2 \left[\mathfrak{B}_{xt} + 4\lambda_1 \left(\frac{v}{r}\right)_{xt} \right]+ 2rv \left[\mathfrak{B}_{x} + 4\lambda_1 \left(\frac{v}{r}\right)_{x}\right]. 
\end{split} \end{equation} Set \begin{equation}gin{equation}\lambdabel{eta1}\begin{equation}gin{split} \eta_1(x,t):=&\frac{1}{2}x^2 \begin{equation}tar{\rho} v_t^2 +\left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma\left[(2\gamma-1)r_x v^2 +2(\gamma-1)r vv_x +\frac{\gamma}{2}\frac{r^2}{r_x} v_x^2 \right] \\ &-\begin{equation}tar\rho^\gamma \left[\left(4\frac{x^3}{r^3}-3\frac{x^4}{r^4}r_x\right) v^2 + 2\frac{x^4}{r^3}vv_x \right]. \end{split}\end{equation} Following the estimates for $\eta$ defined in \eqref{etadefn}, we can show that, for $\gamma\in (4/3, 2]$, \begin{equation}gin{equation}\lambdabel{eta1lower} {\eta_1}(x,t) \ge \frac{1}{2} x^2 \begin{equation}tar{\rho} v_t^2 + \frac{3\gamma-4}{4} x^2\begin{equation}tar{\rho}^\gamma\left[2\left(\frac{v}{x} \right)^2 + v_x^2 \right], \end{equation} \begin{equation}gin{equation}\lambdabel{eta1up} {\eta_1}(x,t) \le \frac{1}{2} x^2 \begin{equation}tar{\rho} v_t^2 + c x^2\begin{equation}tar{\rho}^\gamma \left[\left(\frac{v}{x} \right)^2 + v_x^2 \right], \end{equation} provided that \eqref{rx} holds with $\epsilon_0$ being suitably small. Multiplying \eqref{nsp1t} by $ v_t$ and integrating the product with respect to the spatial variable, we have, using the integration by parts and boundary condition \eqref{Aug7bdry}, that \begin{equation}gin{equation}\lambdabel{hhevt}\begin{equation}gin{split} &\frac{d}{dt}\int \eta_1(x,t) dx +\int \left[\mathfrak{B}_t \left(r^2 v_t\right)_x - 4 \lambda_1 r^2 v_t \left(\frac{v}{r}\right)_{xt} \right]dx = \mathfrak{I}_1+\mathfrak{I}_2 , \end{split} \end{equation} where \begin{equation}gin{equation*}\lambdabel{df}\begin{equation}gin{split} \mathfrak{I}_1:= & - \int \left[ \mathfrak{B} (2rvv_t )_x - 8 \lambda_1 \left(\frac{v}{r}\right)_{x} rvv_t \right] dx, \\ \mathfrak{ I}_2:=& (2\gamma-1)\int \left[\left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma r_x \right]_tv^2 dx +2(\gamma-1)\int \left[\left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma r \right]_tvv_x dx \\ &+\frac{\gamma}{2}\int \left[\left(\frac{x^2}{r^2}\frac{\begin{equation}tar\rho}{ r_x}\right)^\gamma \frac{r^2}{r_x} \right]_t v_x^2 dx - \int \begin{equation}tar\rho^\gamma \left[\left(4\frac{x^3}{r^3}-3\frac{x^4}{r^4}r_x\right)_t v^2 + 2\left(\frac{x^4}{r^3}\right)_tvv_x \right] dx. \end{split} \end{equation*} The second term on the left-hand side of \eqref{hhevt} can be estimated as follows. Notice that $$ \mathfrak{B}_t=\frac{4}{3}\lambda_1\frac{r}{r_x}\left(\frac{v_t}{r}\right)_x + \lambda_2 \frac{\left(r^2 v_t\right)_x}{r_x r^2} + \begin{equation}tar{\mathfrak{B}}, $$ where \begin{equation}gin{equation}\lambdabel{barB}\begin{equation}gin{split} \begin{equation}tar{\mathfrak{B}}:=&\frac{4}{3}\lambda_1 \left[\left(\frac{v}{r}\right)^2-\left(\frac{v_x}{r_x}\right)^2 \right] -\lambda_2 \left[2\left(\frac{v}{r}\right)^2+\left(\frac{v_x}{r_x}\right)^2 \right]. 
\end{split} \end{equation} Thus, \begin{equation}gin{equation}\lambdabel{bfevt4}\begin{equation}gin{split} &\int \left[\mathfrak{B}_t \left(r^2 v_t\right)_x - 4 \lambda_1 r^2 v_t \left(\frac{v}{r}\right)_{xt} \right]dx\\ =& \int \left\{ \frac{4}{3}\lambda_1\left(\frac{v_t}{r}\right)_x \left[\frac{r}{r_x}\left(r^2 v_t\right)_x- 3 r^2 v_t\right] +\lambda_2\frac{ \left| \left(r^2 v_t\right)_x \right|^2}{r_xr^2}\right\}dx -\mathfrak{ I}_3\\ \ge & 3\sigma \int \left[ \frac{r^2}{r_x}v_{tx}^2+ 2r_x v_t^2 \right]dx -\mathfrak{I}_3, \end{split} \end{equation} where $\sigma=\min\left\{2\lambda_1/3, \ \lambda_2 \right\}$ and $$ \mathfrak{I}_3:= - \int \begin{equation}tar{\mathfrak{B}}\left(r^2 v_t\right)_x dx - 4\lambda_1 \int r^2 v_t \left(\frac{v^2}{r^2}\right)_x dx . $$ So, \eqref{hhevt} implies that $$ \frac{d}{dt}\int \eta_1(x,t) dx + 3\sigma \int \left( \frac{r^2}{r_x}v_{tx}^2+ 2r_x v_t^2 \right)dx \le \mathfrak{ I}_1 +\mathfrak{I}_2 + \mathfrak{I}_3. $$ For $\mathfrak{ I}_1$ and $\mathfrak{ I}_3$, it follows from \eqref{Liy}, \eqref{vx} and the Cauchy inequality that $$ \mathfrak{ I}_1+\mathfrak{ I}_3 \le \sigma \int \left( \frac{r^2}{r_x}v_{tx}^2+ 2r_x v_t^2 \right)dx + C\sigma^{-1} \epsilon_1^2 \int \left(x^2 v_{x}^2 + v^2 \right) dx. $$ Similarly, $\mathfrak{ I}_2$ can be bounded by $$ \mathfrak{ I}_2 \le C \epsilon_1 \int \left(x^2 v_{x}^2 + v^2 \right) dx. $$ So, we arrive at the following estimate \begin{equation}gin{equation}\lambdabel{later1}\begin{equation}gin{split} \frac{d}{dt}\int \eta_1(x,t) dx + 2\sigma \int \left( \frac{r^2}{r_x}v_{tx}^2+ 2r_x v_t^2 \right)dx \le C\int \left(x^2 v_{x}^2 + v^2 \right) dx , \end{split} \end{equation} provided that \eqref{vx} holds for $\epsilon_1\le 1$. This, together with \eqref{eg1}, implies that \begin{equation}gin{equation}\lambdabel{eg2}\begin{equation}gin{split} \int \eta_1(x,t) dx +\sigma\int_0^t \int \left( x^2 v_{sx}^2+ v_s^2 \right)dxds \le \int \eta_1(x,0) dx + C\int \eta(x,0)dx \end{split} \end{equation} and \begin{equation}gin{equation*}\lambdabel{5-1}\begin{equation}gin{split} & (1+t) \int \eta_1(x,t)dx + \sigma \int_0^t (1+s) \int \left(x^2 v_{sx}^2 + v_s^2 \right) dx ds\\ \le & \int \eta_1(x,0)dx + \int_0^t \int \eta_1(x,s)dx ds + C\int_0^t (1+s)\int \left(x^2 v_{x}^2 + v^2 \right) dxds\\ \le & \int \eta_1(x,0)dx + C \int_0^t \int v_s^2 dx ds + C\int_0^t (1+s) \int \left(x^2 v_{x}^2 + v^2 \right) dxds. \end{split} \end{equation*} Here \eqref{eta1up} has been used. This, together with \eqref{lem2est'}, \eqref{eg2} and \eqref{eta1up}, implies \begin{equation}gin{equation}\lambdabel{tttt}\begin{equation}gin{split} &(1+t) \left( \left\|x\begin{equation}tar\rho^{\frac{1}{2}} v_t (\cdot, t) \right\|^2 + \left\| \begin{equation}tar\rho^{\frac{\gamma}{2}}\left(v, xv_x \right)(\cdot, t) \right\|^2\right) + \int_0^t (1+s) \left\|\left(v_s, xv_{sx}, \right) (\cdot,s) \right\|^2 ds \\ & \le C \left(\left\|x\begin{equation}tar\rho^{\frac{1}{2}} (v_t,v) (\cdot, 0) \right\|^2 + \left\| \begin{equation}tar\rho^{\frac{\gamma}{2}}\left(v, xv_x \right)(\cdot, 0) \right\|^2 + \left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 \right). \end{split} \end{equation} Observe that $$ (1+t)v^2(x,t) \le v^2(x,0)+\int_0^t (1+s)\left[2v^2(x,s) + v_s^2(x,s) \right] ds. $$ Integrate the above inequality with respect to the spatial variable to give $$ (1+t)\int v^2(x,t)dx \le \int v^2(x,0)dx +\int_0^t (1+s) \int \left[2v^2(x,s) + v_s^2(x,s) \right]dx ds. 
$$ Similarly, it holds that $$ (1+t)\int x^2 v_x^2(x,t)dx \le \int x^2 v^2(x,0)dx +\int_0^t (1+s) \int \left[2x^2 v^2(x,s) + x^2 v_s^2(x,s) \right]dx ds. $$ This, together with \eqref{lem2est'} and \eqref{tttt}, implies that \begin{equation}gin{equation}\lambdabel{jump}\begin{equation}gin{split} &(1+t) \left( \left\|x\begin{equation}tar\rho^{\frac{1}{2}} v_t (\cdot, t) \right\|^2 + \left\| \left(v, xv_x \right)(\cdot, t) \right\|^2\right) + \int_0^t (1+s) \left\|\left(v_s, xv_{sx}, \right) (\cdot,s) \right\|^2 ds \\ & \le C \left(\left\|x\begin{equation}tar\rho^{\frac{1}{2}}v_t (\cdot, 0) \right\|^2 + \left\| \left(v, xv_x \right)(\cdot, 0) \right\|^2 + \left\|x\left(\frac{r_0}{x}-1, r_{0x}-1\right) \right\|^2 \right). \end{split} \end{equation} This finishes the proof of \eqref{lem4est}. $\Box$ Next, we derive further time decay estimates based on Lemmas \ref{lem1}, \ref{lem2} and \ref{lem4} by using two multipliers $$\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(y)(r^3-y^3)_ydy \ \ {\rm and} \ \ \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(y)(r^2 v)_ydy \ \ {\rm for} \ \ 0<\begin{equation}ta<\gammamma-1.$$ The key is to deal with the behavior of solutions near both the boundary and geometrical singularity at the origin simultaneously. The improved decay estimates obtained in this lemma give the convergence of the evolving boundary $r=R(t)$ to that of the Lane-Emden stationary solution. \begin{equation}gin{lem}\lambdabel{lem51} Suppose that \eqref{rx} and \eqref{vx} hold. Then for any $\theta\in \left(0,\ {2(\gamma-1)}/({3\gamma})\right)$, there exists a constant $C(\theta)$ independent of $t$ such that \begin{equation}gin{align}\lambdabel{estlem51} &\left\|\begin{equation}tar\rho^{\frac{\gamma\theta}{4}-\frac{\gamma-1}{2}}\left(r-x, xr_x-x\right)(\cdot, t) \right\|^2 +(1+t)^{\frac{\gamma-1}{\gamma}-\theta}\left\|\left(xr_x-x\right)(\cdot, t) \right\|^2 +(1+t)^{\frac{3(\gamma-1)}{\gamma}-\theta} \notag\\ & \times\left\|(r-x)(\cdot, t) \right\|^2 +(1+t)^{\frac{2\gamma-1}{\gamma}-\theta}\left( \left\|\left(x\begin{equation}tar\rho^{\frac{1}{2}}v_t,v,xv_x\right) (\cdot, t) \right\|^2 + \left\|\begin{equation}tar\rho^{\frac{\gamma}{2}}\left(r-x, xr_x-x \right)(\cdot, t) \right\|^2 \right) \notag \\ &+ \int_0^t \left[\left\|\begin{equation}tar\rho^{\frac{\theta\gamma+2}{4}}\left(r-x, xr_x-x \right)(\cdot, s) \right\|^2 + (1+s)^{\frac{\gamma-1}{\gamma}-\theta}\left\|\begin{equation}tar\rho^{\frac{\gamma}{2}}\left(r-x, xr_x-x \right)(\cdot, s) \right\|^2\right] ds\notag\\ & + \int_0^t \left[(1+s)^{\frac{2\gamma-1}{\gamma}-\theta} \left\|\left(v, xv_x, v_s, x v_{sx} \right) (\cdot,s) \right\|^2 + (1+s)^{\frac{2\gamma-1}{2\gamma}-\frac{\theta}{2}} \left\|\begin{equation}tar\rho^{\frac{\gamma\theta}{4}-\frac{\gamma-1}{2}}\left(v, xv_x \right) (\cdot,s) \right\|^2 \right] ds \notag\\ & \le C(\theta)\left( \left\|\left( v, x v_x , x\begin{equation}tar\rho^{\frac{1}{2}} v_t \right)(\cdot, 0) \right\|^2 + \left\| r_{0x}-1 \right\|_{L^\infty}^2\right) , \ \ \ \ t\in [0, T]. \end{align} Moreover, we have for any $a\in (0,1)$ and $\theta\in \left(0,\ {2(\gamma-1)}/({3\gamma})\right)$, \begin{equation}gin{align} (1+t)^{2(\gamma-1)/\gamma-\theta}\|(r-x)(\cdot,t)\|_{L^\infty([a,1])}^2 +(1+t)^{(2\gamma-1)/\gamma-\theta}\|v(\cdot,t)\|_{L^\infty([a,1])}^2 \notag\\ \le C(a,\theta)\left( \left\|\left( v, x v_x , x\begin{equation}tar\rho^{{1}/{2}} v_t \right)(\cdot, 0) \right\|^2 + \left\| r_{0x}-1 \right\|_{L^\infty}^2\right) , \ \ \ \ t\in [0, T]. 
\lambdabel{8/12-1} \end{align} \end{lem} {\em Proof}. For any given $\theta\in \left(0, \ {2(\gamma-1)}/({3\gamma})\right)$, we set \begin{equation}gin{equation}\lambdabel{alphaiota} \begin{equation}ta:=\gammamma-1-\frac{\gammamma\theta}{2}, \ \ \iota:=\frac{\theta}{2}, \ \ \kappappa:=\frac{\begin{equation}ta}{\gamma}-\iota, \ \ \nu:=\frac{1}{2}\left(1+\frac{\begin{equation}ta}{\gamma}-\iota\right), \end{equation} so that $0<\begin{equation}ta<\gammamma-1$ and $ 0<\iota< {\begin{equation}ta}/({2\gammamma}).$ The proof of this lemma consists of the following four steps. {\em Step 1}. In this step, we prove that \begin{equation}gin{align}\lambdabel{key} &\int \left[(1+t)^{ \nu}\begin{equation}tar\rho^\gamma +1\right] x^2\begin{equation}tar{\rho}^{-\begin{equation}ta}\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] (x,t) dx\notag\\ &+ \int_0^t (1+s)^{ \nu} \int \begin{equation}tar\rho^{-\begin{equation}ta} \left(x^2 v_x^2+ v^2 \right)dxds + \int_0^t \int x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds\notag\\ & \le C \left\| r_{0x}-1 \right\|_{L^\infty}^2 + C \sum_{i=1}^3 \int_0^t (1+s)^{ \nu}| K_i |ds+ C \sum_{i=1}^3 \int_0^t |L_i| ds, \end{align} where \begin{equation}gin{equation}\lambdabel{Li}\begin{equation}gin{split} L_1= &-\int \begin{equation}tar\rho \frac{x^2}{r^2} v_t \left( \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^3-y^3)_ydy \right) dx , \\ L_2=& \int \begin{equation}tar{\rho}^\gamma \left(\frac{x^4}{r^4}\right)_x \left[\begin{equation}tar\rho^{-\begin{equation}ta} \left(r^3-x^3\right) - \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^3-y^3)_ydy \right] dx, \\ L_3=& 4\lambda_1 \int \left(\frac{v}{r}\right)_x\left[\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^3-y^3)_ydy - \begin{equation}tar\rho^{-\begin{equation}ta}\left(r^3-x^3\right) \right] dx; \end{split} \end{equation} and \begin{equation}gin{equation}\lambdabel{Ki}\begin{equation}gin{split} K_1 = &-\int \begin{equation}tar\rho\frac{x^2}{r^2}v_t \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^2 v)_ydy \right)dx,\\ K_2= & \int \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x\left[\begin{equation}tar{\rho}^{-\begin{equation}ta}{r^2}v - \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^2 v)_ydy\right]dx,\\ K_3= & 4 \lambda_1 \int \left(\frac{v}{r}\right)_x \left[ \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^2 v)_ydy \right)-\begin{equation}tar\rho^{-\begin{equation}ta} r^2 v \right]dx. \end{split} \end{equation} To this end, we multiply \eqref{nsp1} by $\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(y)(r^3-y^3)_ydy$ and integrate the resulting equation with respect to the spatial variable to obtain, with the aid of the integration by parts and the boundary condition \eqref{Aug7bdry}, that \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} & \int \begin{equation}tar{\rho}^{\gamma-\begin{equation}ta} \left\{ \left[\frac{x^4}{r^4} \left(r^3-x^3\right)\right]_x - \left(\frac{x^2}{r^2 r_x} \right)^\gamma \left(r^3-x^3\right)_x \right\} dx \\ &+\int \begin{equation}tar\rho^{-\begin{equation}ta} \left[ \mathfrak{B} \left(r^3-x^3\right)_x - 4\lambda_1 \left(\frac{v}{r}\right)_x \left(r^3-x^3\right) \right] dx =\sum_{i=1}^3 L_i. 
\end{split} \end{equation*}
Noticing that
\begin{equation*}\begin{split} &\int x^2\bar{\rho}^{-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,0) dx \\ &\le C \left\|\left(r_0-x, \ x r_{0x}-x \right) \right\|_{L^\infty}^2 \int_0^1 (1-x)^{-\frac{\beta}{\gamma-1}}dx \le C\frac{\left\| r_{0x}-1 \right\|_{L^\infty}^2}{(\gamma-1)-\beta} \end{split}\end{equation*}
due to \eqref{phy} and $r_0(0)=0$, one can obtain, following the derivation of \eqref{we}, that
\begin{align}\label{beauty1} &\int x^2\bar{\rho}^{-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,t) dx + \int_0^t \int x^2\bar{\rho}^{\gamma-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds\notag\\ & \le C \left\| r_{0x}-1 \right\|_{L^\infty}^2 + C \sum_{i=1}^3 \int_0^t |L_i| ds. \end{align}
Next, multiplying equation \eqref{nsp1} by $\int_0^x \bar\rho^{-\beta}(y)(r^2 v)_ydy$, integrating the product with respect to the spatial variable, and using the integration by parts and the boundary condition \eqref{Aug7bdry}, one obtains
$$ \int \bar{\rho}^{\gamma-\beta}\left[\left(\frac{x^4}{r^2}v\right)_x-\left(\frac{x^2}{r^2r_x}\right)^{\gamma} \left(r^2 v\right)_x \right]dx + \int \bar\rho^{-\beta}\left[\mathfrak{B} \left(r^2 v\right)_x - 4 \lambda_1 r^2 v \left(\frac{v}{r}\right)_x \right]dx = \sum_{i=1}^3 K_i . $$
Following the derivation of \eqref{heg1}, one can then obtain
$$ \frac{d}{dt} \int {\eta}_2 (x,t) dx + 3\sigma \int \bar\rho^{-\beta} \left[ \frac{r^2}{r_x}v_x^2+ 2r_x v^2 \right]dx \le \sum_{i=1}^3 K_i, $$
where
$$ {\eta}_2(x,t): =\bar\rho^{-\beta}\left(\eta(x,t)- \frac{1}{2} x^2 \bar{\rho} v^2 \right)\approx x^2\bar{\rho}^{\gamma-\beta}\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right]. $$
Here and thereafter, $f\approx g$ means that $C^{-1}g\le f\le C g$ with a generic positive constant $C$. Multiplying the equation above by $(1+t)^{\nu}$ and integrating the product with respect to the temporal variable lead to
\begin{equation}\label{beauty}\begin{split} &(1+t)^{\nu}\int x^2\bar{\rho}^{\gamma-\beta}\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] (x,t) dx + \int_0^t (1+s)^{\nu} \int \bar\rho^{-\beta} \left(x^2 v_x^2+ v^2 \right)dxds\\ & \le C \left\| r_{0x}-1 \right\|_{L^\infty}^2 + C \sum_{i=1}^3 \int_0^t (1+s)^{\nu}| K_i |ds + C\int_0^t \int x^2\bar{\rho}^{\gamma-\beta}\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds, \end{split} \end{equation}
due to the fact that ${\beta}/{\gamma}<1$. So, estimate \eqref{key} follows by a suitable combination of \eqref{beauty} and \eqref{beauty1}.
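For the reader's convenience, and for use in Steps 2--4 below, we record the elementary relations among the exponents fixed in \eqref{alphaiota} (a direct computation from the choices $\beta=\gamma-1-\frac{\gamma\theta}{2}$ and $\iota=\frac{\theta}{2}$):
$$ \kappa=\frac{\beta}{\gamma}-\iota=\frac{\gamma-1}{\gamma}-\theta, \qquad 2\nu=1+\kappa=\frac{2\gamma-1}{\gamma}-\theta, \qquad \nu=\frac{2\gamma-1}{2\gamma}-\frac{\theta}{2}, \qquad \frac{\beta}{2}=\frac{\gamma-1}{2}-\frac{\gamma\theta}{4}, \qquad \frac{\gamma-\beta}{2}=\frac{\gamma\theta+2}{4}. $$
These identities explain how the powers of $(1+t)$ and of $\bar\rho$ appearing in \eqref{step4} and \eqref{estlem51} arise from $\nu$, $\kappa$ and $\beta$.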
{\em Step 2.} In this step, we show that \begin{equation}\lambdabel{14eye}\begin{equation}gin{split} &\int \left[(1+t)^{2\nu}\begin{equation}tar\rho^\gamma +(1+t)^{\kappappa} +\begin{equation}tar{\rho}^{-\begin{equation}ta}\right] x^2\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] (x,t) dx\\ &+(1+t)^{2\nu}\int x^2 \begin{equation}tar\rho v^2 (x,t) dx+\int_0^t \int \left[(1+s)^{\kappappa} + \begin{equation}tar\rho^{-\begin{equation}ta} \right]x^2\begin{equation}tar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds \\ & + \int_0^t \int \left[(1+s)^{2\nu}+ (1+s)^{\nu} \begin{equation}tar\rho^{-\begin{equation}ta} \right] \left(x^2 v_x^2+ v^2 \right)dxds\\ &\le C \left( \left\|x \begin{equation}tar\rho^{\frac{1}{2}}v(\cdot,0)\right\|^2 + \left\| r_{0x}-1 \right\|_{L^\infty}^2\right) + C \sum_{i=1}^3 \int_0^t (1+s)^{\nu}| K_i |ds+ C \sum_{i=1}^3 \int_0^t |L_i| ds, \end{split}\end{equation} where $L_i$ and $K_i$ ( $i=1, 2, 3$) are given by \eqref{Li} and \eqref{Ki}, respectively. To prove \eqref{14eye}, one can integrate the product of $(1+t)^{2\nu}$ and \eqref{heg1} with respect to the temporal variable to get \begin{equation}gin{align*}\lambdabel{} &(1+t)^{2\nu}\int \left\{x^2 \begin{equation}tar\rho v^2 + x^2\begin{equation}tar{\rho}^{\gamma}\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right]\right\} (x,t) dx\\ & +\int_0^t (1+s)^{2\nu} \int \left(x^2 v_x^2+ v^2 \right)dxds \le C \left\|x \begin{equation}tar\rho^{\frac{1}{2}}v(\cdot,0)\right\|^2 + C\left\| r_{0x}-1 \right\|_{L^\infty}^2 \\ &+C\int_0^t ({1+s})\int v^2 dxds +C\int_0^t (1+s)^{\frac{\begin{equation}ta}{\gamma}-\iota} \int \ x^2\begin{equation}tar{\rho}^{\gamma}\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds, \end{align*} since $\begin{equation}ta/\gamma<1$. Integrate the product of $(1+t)^{\kappappa}$ and \eqref{bye} with respect to the temporal variable to give \begin{equation}gin{align*}\lambdabel{} & (1+t)^{\kappappa} \int x^2 \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,t) dx\\ & + \int_0^t (1+s)^{\kappappa} \int x^2\begin{equation}tar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds \le C\left\| r_{0x}-1 \right\|_{L^\infty}^2 \\ & \quad + C\int (1+s) \int v^2 dxds + C \int (1+s)^{\frac{\begin{equation}ta}{\gamma}-\iota-1}\int x^2 \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,t) dxds. \end{align*} The last term on the right-hand side of the inequality above is estimated as follows. 
It follows from the H$\ddot{\rm o}$lder inequality and the Young inequality that
\begin{align*} \int x^2 \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx\le & \left(\int x^2 \bar\rho^{\gamma-\beta } \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx\right)^{\frac{\beta }{\gamma}} \\ & \times \left(\int x^2 \bar\rho^{-\beta } \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx\right)^{\frac{\gamma-\beta }{\gamma}} \end{align*}
and
\begin{equation*}\begin{split} &\int_0^t(1+s)^{\beta/\gamma-\iota-1}\int x^2 \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds\\ &\le C\int_0^t \int x^2 \bar\rho^{\gamma-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds \\ &+ C\int_0^t (1+s)^{\left(\beta/\gamma-\iota-1\right)\frac{\gamma}{\gamma-\beta}}ds \sup_{s\in [0,t]}\int x^2 \bar\rho^{-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,s) dx \\ &\le C\frac{\gamma-\beta}{\iota \gamma}\sup_{s\in [0,t]}\int x^2 \bar\rho^{-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,s) dx\\ &+C\int_0^t \int x^2 \bar\rho^{\gamma-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds. \end{split}\end{equation*}
In a similar way to the derivation of \eqref{key}, we then have, noting \eqref{lem2est'}, that
\begin{equation}\label{bridge}\begin{split} &(1+t)^{2\nu}\int \left\{x^2 \bar\rho v^2 + x^2\bar{\rho}^{\gamma}\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right]\right\} (x,t) dx\\ &+(1+t)^{\kappa} \int x^2 \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,t) dx +\int_0^t (1+s)^{2\nu} \int \left(x^2 v_x^2+ v^2 \right)dxds\\ & +\int_0^t (1+s)^{\kappa} \int x^2\bar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds \\ & \le C \left\|x \bar\rho^{\frac{1}{2}}v(\cdot,0)\right\|^2 + C \left\| r_{0x}-1 \right\|_{L^\infty}^2 + \sup_{s\in [0,t]}\int x^2 \bar\rho^{-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,s) dx\\ &+C\int_0^t \int x^2 \bar\rho^{\gamma-\beta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds. \end{split}\end{equation}
Adding $k\times$\eqref{key} to \eqref{bridge} for a suitably large constant $k$ gives \eqref{14eye}.

{\em Step 3}.
We claim that \begin{equation}gin{align}\lambdabel{step4} &\int \left[(1+t)^{\frac{2\gamma-1}{\gamma}-\theta}\begin{equation}tar\rho^\gamma +(1+t)^{\frac{\gamma-1}{\gamma}-\theta} +\begin{equation}tar{\rho}^{-\left(\gamma-1-\frac{1}{2}\gamma\theta\right)}\right] x^2\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] (x,t) dx\notag\\ &+\int_0^t \int \left[(1+s)^{\frac{2\gamma-1}{\gamma}- {\theta} }+ (1+s)^{\frac{2\gamma-1}{2\gamma}-\frac{\theta}{2}} \begin{equation}tar\rho^{-\left(\gamma-1-\frac{1}{2}\gamma\theta\right)} \right]\left(x^2 v_x^2+ v^2 \right)dxds\notag\\ & +\int_0^t \int \left[(1+s)^{\frac{\gamma-1}{\gamma}-\theta} + \begin{equation}tar\rho^{-\left(\gamma-1-\frac{1}{2}\gamma\theta\right)} \right]x^2\begin{equation}tar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds\notag \\ &+(1+t)^{\frac{2\gamma-1}{\gamma}-\theta}\int x^2 \begin{equation}tar\rho v^2 (x,t) dx \le C Q(0), \end{align} where and in the following $$ \lambdabel{}Q(0):=\left\|\left( v, x v_x , x\begin{equation}tar\rho^{\frac{1}{2}} v_t \right)(\cdot, 0) \right\|^2 + \left\| r_{0x}-1 \right\|_{L^\infty}^2. $$ To prove this claim, it remains to estimate $K_i$ and $L_i$ in \eqref{14eye}. First, it follows from \eqref{phy} that for any given constants $\delta\in (0,1]$ and $\begin{equation}ta\in (0, \gamma-1)$, \begin{equation}gin{equation}\lambdabel{fact}\begin{equation}gin{split} \int_{1-\delta}^1 \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta} dy \right) dx \le & C \int_{1-\delta}^1 \left(1-\frac{\begin{equation}ta}{\gamma-1}\right)^{-1}\left[1 -(1-x)^{1-\frac{\begin{equation}ta}{\gamma-1}}\right]dx\\ \le & C \frac{\gamma-1}{(\gamma-1)-\begin{equation}ta} \delta= C\left[(\gamma-1)-\begin{equation}ta\right]^{-1}\delta . \end{split} \end{equation} Let $\omega\in(0,1/2)$ be a small constant to be determined at the end of this step. It follows from the Cauchy inequality, \eqref{lem4est}, the H$\ddot{o}$lder inequality and \eqref{fact} that \begin{equation}gin{equation}\lambdabel{14k1}\begin{equation}gin{split} &\int_0^t (1+s)^{\nu}|K_1|ds \le \omega \int_0^t (1+s)^{\nu} \int \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}y (| v| + |y v_y|)dy \right)^2 dx ds \\ & + C\omega^{-1} \int_0^t (1+s) \int v_s^2 dxds \le C\omega^{-1} Q(0)+ C \omega \int_0^t (1+s)^{\nu} \int_0^1 \begin{equation}tar\rho^{-\begin{equation}ta} (| v|^2 + |y v_y|^2 )dyds, \end{split} \end{equation} since \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} \int \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}y (| v| + |y v_y|)dy \right)^2 dx \le & \int_0^1 \begin{equation}tar\rho^{-\begin{equation}ta} (| v|^2 + |y v_y|^2 )dy \int \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta} y^2 dy \right) dx \\ \le & C \int_0^1 \begin{equation}tar\rho^{-\begin{equation}ta} (| v|^2 + |y v_y|^2 )dy . 
\end{split} \end{equation*} Similarly, one can obtain \begin{equation}gin{equation}\lambdabel{14l1}\begin{equation}gin{split} &\int_0^t |L_1|ds \le C \omega^{-1} \int_0^t \int v_s^2 dxds + \omega \int_0^t \int \begin{equation}tar\rho^2 \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta} y^2 \left(\left|\frac{r}{y}-1\right| + \left|r_y-1\right| \right) dy \right)^2 dx ds\\ & \le C \omega^{-1} Q(0) + C \omega \int_0^t \int x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds , \end{split} \end{equation} since $$ \int \begin{equation}tar\rho^2 \left( \int_0^x \begin{equation}tar\rho^{-\gamma-\begin{equation}ta}dy \right) dx \le C \int \begin{equation}tar\rho^2 \begin{equation}tar\rho^{-\gamma-\begin{equation}ta+(\gamma-1)} dx = C \int \begin{equation}tar\rho^{2-\gamma} \begin{equation}tar\rho^{-\begin{equation}ta+(\gamma-1)} dx\le C. $$ $K_2$ can be rewritten as \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} K_2=& \int_0^{1-\omega} \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x \int_0^x \left(\begin{equation}tar\rho^{-\begin{equation}ta}\right)_y r^2 vdydx \\ &+ \int_{1-\omega}^1 \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x\left[\begin{equation}tar{\rho}^{-\begin{equation}ta}{r^2}v - \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^2 v)_ydy\right]dx=:K_{21}+K_{22}. \end{split} \end{equation*} Note that \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} &|K_{21}| = \left|\frac{\begin{equation}ta}{\gamma} \int_0^{1-\omega} \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta-(\gamma-1)}y \phi r^2 vdydx \right|\\ \le & C\int_0^{1-\omega} \begin{equation}tar{\rho}^{\gamma}x\left(|r_x-1|+\left|\frac{r}{x}-1\right|\right) x^{-2} \left( \int_0^x y^6 dy \right)^{1/2}dx \left(\int_0^1 v^2 dy\right)^{1/2}\\ \le & C \left(\int x^2\begin{equation}tar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx\right)^{1/2} \left(\int_0^1 v^2 dy\right)^{1/2}\\ \le & C \omega^{-1} (1+t)^{-\nu} \int x^2\begin{equation}tar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx +C \omega (1+t)^\nu \int v^2 dx \end{split} \end{equation*} and \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} &|K_{22}|\le \int_{1-\omega}^1 \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x\left[\begin{equation}tar{\rho}^{-\begin{equation}ta}{r^2}v - \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^2 v)_ydy\right]dx\\ & \le C \int_{1-\omega}^1 x\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta}\left(|r_x-1|+\left|\frac{r}{x}-1\right|\right)|v|dx \\ & + C \int_{1-\omega}^1 x\begin{equation}tar{\rho}^{\gamma}\left(|r_x-1|+\left|\frac{r}{x}-1\right|\right)\left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}\left(|v|+|yv_y|\right)dy \right) dx\\ &\le C \omega^{\frac{\gamma-\begin{equation}ta}{2(\gamma-1)}} \left[ (1+t)^{-\nu} \int_{1-\omega}^1 x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta}\left(|r_x-1|^2+\left|\frac{r}{x}-1\right|^2\right)dx +(1+t)^\nu\int v^2 dx\right]\\ &+ C \omega^{\frac{\gamma}{4(\gamma-1)}} \left[ (1+t)^{-\nu} \int_{1-\omega}^1 x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta}\left(|r_x-1|^2+\left|\frac{r}{x}-1\right|^2\right)dx + (1+t)^\nu \int_0^1 (v^2 + x^2 v_x^2 )dx\right], \end{split} \end{equation*} due to 
\begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} & \int_{1-\omega}^1 \begin{equation}tar\rho^{\frac{\gamma}{2}+\begin{equation}ta}\left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta} (| v|^2 + |y v_y|^2 )dy\right)^2 dx\\ \le & \left(\int_{1-\omega}^1 \begin{equation}tar\rho^{\frac{\gamma}{2}+\begin{equation}ta}\int_0^x \begin{equation}tar\rho^{-2\begin{equation}ta} dydx \right) \int_0^1 (| v|^2 + |y v_y|^2 )dy \le C \int_0^1 (| v|^2 + |y v_y|^2 )dy. \end{split} \end{equation*} Then, one gets, using \eqref{lem2est}, that \begin{equation}gin{equation}\lambdabel{14k2}\begin{equation}gin{split} &\int_0^t (1+s)^\nu|K_2|ds \le C\omega^{-1} Q(0) + C \left(\omega +\omega^{\frac{\gamma}{4(\gamma-1)}} + \omega^{\frac{\gamma-\begin{equation}ta}{2(\gamma-1)}} \right) \\ &\times \left[ \int_0^t (1+s)^{2\nu} \int_0^1 (v^2 + x^2 v_x^2 )dxds+ \int_0^t \int x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta}\left(|r_x-1|^2+\left|\frac{r}{x}-1\right|^2\right)dx ds \right]. \end{split} \end{equation} Similarly, $L_2$ can be rewritten as \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} L_2 =& \int_0^{1-\omega} \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x \int_0^x \left(\begin{equation}tar\rho^{-\begin{equation}ta}\right)_y(r^3-y^3)dydx \\ &+ \int_{1-\omega}^1 \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x\left[\begin{equation}tar{\rho}^{-\begin{equation}ta}(r^3-x^3) - \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^3-y^3)_ydy\right]dx=:L_{21}+L_{22}. \end{split} \end{equation*} Note that \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} &|L_{21}| = \left|\frac{\begin{equation}ta}{\gamma} \int_0^{1-\omega} \begin{equation}tar{\rho}^{\gamma}\left(\frac{x^4}{r^4}\right)_x \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta-(\gamma-1)}y \phi(r^3-y^3)dydx \right|\\ \le & C\int_0^{1-\omega} \begin{equation}tar{\rho}^{\gamma}x\left(|r_x-1|+\left|\frac{r}{x}-1\right|\right) x^{-2} \left( \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta-(\gamma-1)}y^2 |r-y|dy\right)dx\\ \le & C(\omega) \int x^2\begin{equation}tar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx \end{split} \end{equation*} and \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} |L_{22}| \le & C\int_{1-\omega}^1 x\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta}\left(|r_x-1|+\left|\frac{r}{x}-1\right|\right)|r-x|dx \\ & +C \int_{1-\omega}^1 x\begin{equation}tar{\rho}^{\gamma}\left(|r_x-1|+\left|\frac{r}{x}-1\right|\right)\left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}y^2\left(|r_y-1|+\left|\frac{r}{y}-1\right|\right)dy \right) dx\\ \le & \omega \int x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx + C\omega^{-1} \int_{1-\omega}^1 \begin{equation}tar\rho^{\gamma-\begin{equation}ta} (r-x)^2 dx\\ &+ C \left(\int x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx\right)^{1/2}\\ &\times \left( \int_{1-\omega}^1 \begin{equation}tar{\rho}^{\gamma +\begin{equation}ta }\left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}y^2\left(|r_y-1|+\left|\frac{r}{y}-1\right|\right)dy \right)^2 dx \right)^{1/2}\\ \le & C \int \left(\omega \begin{equation}tar{\rho}^{\gamma-\begin{equation}ta} + \omega^{-1}\begin{equation}tar\rho^\gamma \right)x^2 \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx, \end{split} 
\end{equation*} where we have used the following simple estimates due to \eqref{hardybdry} and \eqref{phy}: \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} \int_{1-\omega}^1 \begin{equation}tar\rho^{\gamma-\begin{equation}ta} (r-x)^2 dx \le & C \int_{1/2}^1 \begin{equation}tar\rho^{\gamma-\begin{equation}ta + 2(\gamma-1)} \left[(r-x)^2 +(r_x-1)^2\right] \\ \le & C \int_{1/2}^1 \begin{equation}tar\rho^{\gamma} \left[(r-x)^2 +x^2 (r_x-1)^2\right] \end{split} \end{equation*} and \begin{equation}gin{align*}\lambdabel{} &\int_{1-\omega}^1 \begin{equation}tar{\rho}^{\gamma +\begin{equation}ta }\left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}y^2\left(|r_y-1|+\left|\frac{r}{y}-1\right|\right)dy \right)^2 dx \\ &\le C\int_{1-\omega}^1\begin{equation}tar{\rho}^{\gamma +\begin{equation}ta }\int_0^x \begin{equation}tar\rho^{-2\begin{equation}ta-\gamma}(y)dy\int_0^x\begin{equation}tar\rho^{\gamma} y^2\left(|r_y-1|^2+\left|\frac{r}{y}-1\right|^2\right)dydx\\ &\le C\int_{1-\omega}^1\begin{equation}tar{\rho}^{-\begin{equation}ta }dx \int_0^1x^2\begin{equation}tar\rho^{\gamma} \left(|r_x-1|^2+\left|\frac{r}{x}-1\right|^2\right)dx\\ &\le C \int_0^1x^2\begin{equation}tar\rho^{\gamma} \left(|r_x-1|^2+\left|\frac{r}{x}-1\right|^2\right)dx. \end{align*} Thus, it follows from these and \eqref{lem2est} that \begin{equation}gin{equation}\lambdabel{14l2}\begin{equation}gin{split} \int_0^t |L_2|ds \le C(\omega) Q(0) +C \omega \int_0^t \int x^2\begin{equation}tar{\rho}^{\gamma-\begin{equation}ta}\left(|r_x-1|^2+\left|\frac{r}{x}-1\right|^2\right)dxds. \end{split} \end{equation} Rewrite $K_3$ as \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} K_3= &- 4 \lambda_1 \int_0^{1-\omega} \left(\frac{v}{r}\right)_x \int_0^x \left(\begin{equation}tar\rho^{-\begin{equation}ta}\right)_y(r^2 v)dy dx \\ &- 4 \lambda_1 \int_{1-\omega}^1 \left(\frac{v}{r}\right)_x \left[ \begin{equation}tar\rho^{-\begin{equation}ta} r^2 v - \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^2 v)_ydy \right) \right]dx=:K_{31}+K_{32}. \end{split} \end{equation*} $K_{31}$ can be bounded by \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} |K_{31}|= &\left| 4\frac{\begin{equation}ta}{\gamma} \lambda_1 \int_0^{1-\omega} \left(\frac{v}{r}\right)_x \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta-(\gamma-1)}y \phi r^2 vdy dx \right|\\ \le & C \int_0^{1-\omega} ( |xv_x| +|v| ) x^{-2} \left(\int_0^x y^3 |v| dy\right) dx\\ \le& C \int_0^{1-\omega} ( |xv_x| +|v| ) \left(\int_0^1 v^2 dy\right)^{1/2} dx \le C \int (v^2+x^2 v_x^2) dx, \end{split} \end{equation*} and $K_{32}$ can be bounded by \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} |K_{32}|\le &C \int_{1-\omega}^1 \begin{equation}tar\rho^{-\begin{equation}ta} ( |xv_x| +|v| ) |v| dx + \int_{1-\omega}^1 ( |xv_x| +|v| ) \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}\left(|v|+|yv_y|\right)dy \right) dx\\ \le & \omega \int_{1-\omega}^1 \begin{equation}tar\rho^{-\begin{equation}ta} ( |xv_x|^2 +|v|^2 ) dx + \omega^{-1} \int_{1-\omega}^1 \begin{equation}tar\rho^{-\begin{equation}ta} |v|^2 dx \\ &+ \omega^{-1} \int_{1-\omega}^1 ( |xv_x|^2 +|v|^2 ) dx + \omega \int_0^1 \begin{equation}tar\rho^{-\begin{equation}ta} (| v|^2 + |y v_y|^2 )dy, \end{split} \end{equation*} due to \eqref{fact}. 
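Before estimating the remaining terms, we record how the Hardy inequality \eqref{hardybdry} enters near $x=1$; this is a short supporting computation based on \eqref{phy}, which guarantees that $\bar{\rho}(x)^{\gamma-1}$ is comparable to $1-x$ on $[1/2,1]$. For $0<\beta<\gamma-1$ one therefore has, on $[1/2,1]$,
$$ \bar{\rho}^{-\beta}\approx (1-x)^{k-2} \ \ {\rm and} \ \ \bar{\rho}^{2(\gamma-1)-\beta}\approx (1-x)^{k} \ \ {\rm with} \ \ k:=2-\frac{\beta}{\gamma-1}>1, $$
so \eqref{hardybdry} applies with this choice of $k$; this is the form in which it is used in the estimates below.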
Since $\begin{equation}ta<\gamma-1$, the Hardy inequality \eqref{hardybdry} implies that $$ \int_{1-\omega}^1 \begin{equation}tar\rho^{-\begin{equation}ta} |v|^2 dx \le C \int_{1/2}^1 \begin{equation}tar\rho^{-\begin{equation}ta+2(\gamma-1)} (v^2+v_x^2) dx \le C\int (v^2+v_x^2) dx. $$ These, together with \eqref{lem2est'}, yield \begin{equation}gin{equation}\lambdabel{14k3}\begin{equation}gin{split} &\int_0^t (1+s)^{\nu}|K_3|ds \\ \le & C\omega^{-1}\int_0^t (1+s) \int (x^2v_x^2 + v^2)dxds + C\omega \int_0^t (1+s)^{ \nu} \int_{1-\omega}^1 \begin{equation}tar\rho^{-\begin{equation}ta} ( |xv_x|^2 +|v|^2 ) dxds\\ \le & C(\omega) Q(0) + C\omega \int_0^t (1+s)^{\nu} \int_{1-\omega}^1 \begin{equation}tar\rho^{-\begin{equation}ta} ( |xv_x|^2 +|v|^2 ) dxds. \end{split} \end{equation} Similarly, $L_3$ is rewritten as \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} L_3= &- 4 \lambda_1 \int_0^{1-\omega} \left(\frac{v}{r}\right)_x \int_0^x \left(\begin{equation}tar\rho^{-\begin{equation}ta}\right)_y(r^3-y^3)dy dx \\ &- 4 \lambda_1 \int_{1-\omega}^1 \left(\frac{v}{r}\right)_x \left[ \begin{equation}tar\rho^{-\begin{equation}ta}(r^3-x^3) - \left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}(r^3-y^3)_ydy \right) \right]dx=:L_{31}+L_{32}. \end{split} \end{equation*} Clearly, $L_{31}$ and $L_{32}$ can be bounded by $$ |L_{31}|\le C \int (v^2+x^2 v_x^2) dx + C \int \begin{equation}tar\rho^{\gamma}(r-x)^2 dx $$ and $$ |L_{32}| \le C\int_{1-\omega}^1 \begin{equation}tar{\rho}^{-\begin{equation}ta}\left(|xv_x|+\left|v\right|\right)|r-x|dx +C \int_{1-\omega}^1 \left(|xv_x|+\left|v\right|\right)\left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}y|r_y-1|dy \right) dx. $$ The second term in $L_{32}$ is bounded by \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} & \int_{1-\omega}^1 \left(|xv_x|+\left|v\right|\right)\left(\int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}y|r_y-1|dy \right) dx\\ \le& C\omega^{\frac{1}{2}}(1+t)^{2\nu}\int_{1-\omega}^1 ( |xv_x|^2 +|v|^2 ) dx\\ & +C\omega^{-\frac{1}{2}} (1+t)^{-1-\frac{\begin{equation}ta}{\gamma}+\iota}\int_{0}^1 \begin{equation}tar\rho^{-\begin{equation}ta} y^2 (r_y-1)^2 dy \int_{1-\omega}^1 \int_0^x \begin{equation}tar\rho^{-\begin{equation}ta}dydx\\ \le & C\omega^{\frac{1}{2}}\left[(1+t)^{2\nu}\int_{1-\omega}^1 ( |xv_x|^2 +|v|^2 ) dx +(1+t)^{-1-\frac{\begin{equation}ta}{\gamma}+\iota}\int_{0}^1 \begin{equation}tar\rho^{-\begin{equation}ta} y^2 (r_y-1)^2 dy\right], \end{split} \end{equation*} due to \eqref{fact}. The first term in $L_{32}$ can be bounded as follows. Since $\begin{equation}ta< \gamma-1$, it follows from \eqref{phy} and the Hardy inequality \eqref{hardybdry} that for $\gamma>4/3$, \begin{equation}gin{equation*}\lambdabel{}\begin{equation}gin{split} &\int_{1-\omega}^1 \begin{equation}tar{\rho}^{-\begin{equation}ta}\left(|xv_x|+\left|v\right|\right)|r-x|dx \\ \le & C \omega^{\frac{h}{2(\gamma-1)}} \left[ (1+t)^{\nu} \int_{1-\omega}^1 \begin{equation}tar{\rho}^{-\begin{equation}ta}\left(v_x^2+v^2\right)dx + (1+t)^{-\nu}\int_{1-\omega}^1 \begin{equation}tar\rho^{-\begin{equation}ta-h}|r-x|^2dx\right]\\ \le & C \omega^{\frac{h}{2(\gamma-1)}} \left[ (1+t)^{\nu} \int_{1-\omega}^1 \begin{equation}tar{\rho}^{-\begin{equation}ta}\left(v_x^2+v^2\right)dx +\int_{1/2}^1 \begin{equation}tar{\rho}^{\gamma-\begin{equation}ta}\left[(r-x)^2+x^2 (r_x-1)^2 \right]dx \right.\\ &\left. 
+ (1+t)^{-\frac{\nu \gamma}{h+2-\gamma}} \int_{1/2}^1 \bar{\rho}^{-\beta}\left[(r-x)^2+x^2 (r_x-1)^2 \right]dx \right], \end{split} \end{equation*} where $h=\min\left\{\beta/8, \ \ (\gamma-1-\beta)/4\right\}$ (it should be noted that $\beta+h<\gamma-1$), and we have used the estimate \begin{equation*}\begin{split} &\int_{1-\omega}^1 \bar\rho^{-\beta-h}|r-x|^2dx \le \int_{1/2}^1 \bar\rho^{2(\gamma-1)-\beta-h}\left[|r-x|^2+(r_x-1)^2 \right]dx\\ \le & \left(\int_{1/2}^1 \bar{\rho}^{-\beta}\left[(r-x)^2+x^2 (r_x-1)^2 \right]dx\right)^{\frac{h+2-\gamma}{\gamma}} \left(\int_{1/2}^1 \bar{\rho}^{\gamma-\beta}\left[(r-x)^2+x^2 (r_x-1)^2 \right]dx\right)^{\frac{2(\gamma-1)-h}{\gamma}}. \end{split} \end{equation*} Consequently, taking into account \eqref{lem1est}, \eqref{lem2est} and \eqref{fact}, one gets that \begin{equation}\label{14l3}\begin{split} \int_0^t |L_3| ds \le & C(\omega) Q(0)+ C \left[\omega^{\frac{1}{2}} + \omega^{\frac{h}{2(\gamma-1)}} \right] \left[\int_0^t \int \left[(1+s)^{2\nu}+(1+s)^\nu \bar\rho^{-\beta}\right] \right.\\ &\left. \times ( |xv_x|^2 +|v|^2 ) dxds + \sup_{[0,t]}\int \bar{\rho}^{-\beta}\left[(r-x)^2+x^2 (r_x-1)^2 \right]dx \right.\\ &\left.+ \int_0^t \int \bar{\rho}^{\gamma-\beta}\left[(r-x)^2+x^2 (r_x-1)^2 \right]dx\right]. \end{split} \end{equation} We finally derive from \eqref{14eye} and \eqref{14k1}-\eqref{14l3}, by choosing $\omega\in (0, {1}/{2})$ suitably small, that \begin{equation*}\begin{split} &\int \left[(1+t)^{2\nu}\bar\rho^\gamma +(1+t)^{\kappa} +\bar{\rho}^{-\beta}\right] x^2\left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] (x,t) dx\\ &+(1+t)^{2\nu}\int x^2 \bar\rho v^2 (x,t) dx+\int_0^t \int \left[(1+s)^{\kappa} + \bar\rho^{-\beta} \right]x^2\bar{\rho}^{\gamma} \left[\left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds\\ &+ \int_0^t \int \left[(1+s)^{2\nu}+ (1+s)^{\nu} \bar\rho^{-\beta} \right]\left(x^2 v_x^2+ v^2 \right)dxds \le C Q(0) . \end{split}\end{equation*} Due to \eqref{alphaiota}, this completes the proof of \eqref{step4}. {\em Step 4}. Multiply equation \eqref{later1} by $(1+t)^{\frac{2\gamma-1}{\gamma}-\theta}$ and integrate the product to deduce that $$ (1+t)^{\frac{2\gamma-1}{\gamma}-\theta}\int \left[\bar{\rho} v_t^2 + \bar{\rho}^\gamma \left(v^2 + x^2 v_x^2 \right) \right] dx + \int_0^t (1+s)^{\frac{2\gamma-1}{\gamma}-\theta} \int \left( x^2 v_{sx}^2+ v_s^2 \right)dxds \le C Q(0) . $$ In a similar way to the derivation of \eqref{rminusx} and \eqref{jump}, one can show $$ \int(r-x)^2(x,t)dx \le C(1+t)^{-\frac{3(\gamma-1)}{\gamma}+\theta} Q(0) $$ and $$ \int \left(v^2 + x^2 v_x^2\right)(x,t)dx \le C (1+t)^{-\frac{2\gamma-1}{\gamma}+\theta} Q(0). $$ This finishes the proof of \eqref{estlem51}. Moreover, it follows from \eqref{estlem51} that for $x\in [0,1]$, \begin{align} xv^2(x,t)= & \int_0^x (y v^2(y,t))_y dy \le 2 \left(\int v^2(y,t)dy\right)^{1/2}\left(\int y^2 v_y^2(y,t)dy\right)^{1/2} \notag\\ & + \int v^2(y,t)dy \le C (1+t)^{-\frac{2\gamma-1}{\gamma}+\theta} Q(0).
\label{8/12-2} \end{align} Similarly, $$ x\left(r(x,t)-x\right)^2 \le C (1+t)^{-\frac{2(\gamma-1)}{\gamma}+\theta} Q(0). $$ This finishes the proof of \eqref{8/12-1}. $\Box$ \subsection{Higher-order estimates}\label{sec3.4} In this subsection, we derive the higher-order part of the {\it a priori} estimates for the strong solution $(r,v)$ on the time interval $[0, T]$ defined in Definition \ref{definitionss}, under the assumptions \eqref{rx} and \eqref{vx}. To obtain the higher-order estimates, we define \begin{equation}\label{mathG} \mathcal{G} : = \ln r_x + 2 \ln \left(\frac{r}{x}\right). \end{equation} This transformation between $\mathcal{G}$ and $r$ is one-to-one, and we can solve for $r$ in terms of $\mathcal{G}$ by \begin{equation}\label{rg} r(x, t)=\left(3\int_0^x y^2 \exp\left\{\mathcal G(y, t)\right\}dy\right)^{1/3} \ \ {\rm for} \ \ x\in \bar I \ \ {\rm and} \ \ \ t\ge 0. \end{equation} Indeed, we will show in Section \ref{sec3.4.1} that $\mathcal{G}\sim r_x-1$, $\mathcal{G}_t \sim v_x$, $\mathcal{G}_x\sim r_{xx}$ and $\mathcal{G}_{tx}\sim v_{xx}$. Then equation \eqref{419a} can be written in the form \begin{equation}\label{7-2}\begin{split} \mu \mathcal{G}_{xt} +\gamma \left(\frac{x^2 \bar{\rho}}{r^2 r_x } \right)^{\gamma } \mathcal{G}_x = \frac{x^2}{r^2} \bar{\rho} v_t - \left[ \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } -\left(\frac{x }{r } \right)^{4} \right] x \phi \bar\rho. \end{split}\end{equation} (This is the same as \eqref{viscosityequation}; we recall it here for the reader's convenience.) \subsubsection{Preliminaries for higher-order estimates}\label{sec3.4.1} The main goal of this subsubsection is to derive some preliminary estimates for the strong solution $(r,v)$ on the time interval $[0, T]$ defined in Definition \ref{definitionss}, and to prove the equivalence of the functionals $\mathfrak{E}(t)$ and $\mathcal{E}(t)$, under the {\it a priori} assumptions \eqref{rx} and \eqref{vx}. We illustrate how to use $\mathcal{G}$ and its derivatives to control $r$ and $v$ and their derivatives by identifying the principal parts of $\mathcal G$, $\mathcal G_t$, $\mathcal G_x$ and $\mathcal G_{xt}$. Note that \begin{equation}\label{tlg1} \mathcal{G}=(r_x-1) + 2 \left(\frac{r}{x}-1\right) +O\left(|r_x-1|^2 + \left|\frac{r}{x}-1\right|^2\right), \end{equation} \begin{equation}\label{tlg2} \mathcal{G}_x=\frac{r_{xx}}{r_x}+ 2\frac{x}{r}\left(\frac{r}{x}\right)_x=\left[r_{xx} + 2\left(\frac{r}{x}\right)_x\right]+ \left(\frac{1}{r_x}-1\right)r_{xx}+2 \left(\frac{x}{r}-1\right)\left(\frac{r}{x}\right)_x, \end{equation} \begin{equation}\label{tlg3} \mathcal{G}_t=\frac{v_x}{r_x}+2\frac{v}{r}=\left(v_{x} + 2\frac{v}{x}\right)+ \left(\frac{1}{r_x}-1\right)v_{x}+2 \left(\frac{x}{r}-1\right)\left(\frac{v}{x}\right), \end{equation} \begin{align}\label{tlg5} \mathcal{G}_{xt}=\left(\frac{v_x}{r_x}+2\frac{v}{r}\right)_x =& \left(v_x +2 \frac{v}{x}\right)_x + \left[ \left(\frac{1 }{r_x}-1\right) v_{xx} + 2\left(\frac{x}{r}-1\right) \left(\frac{v}{x} \right)_x \right] \notag \\& -\left[\frac{r_{xx}}{r_x^2}v_x+2\left(\frac{x}{r}\right)^2\left(\frac{r}{x}\right)_x \frac{v}{x} \right].
\end{align} Thus, it follows from \eqref{hhreg1}-\eqref{hhreg3} that for $t\in [0, T]$, \begin{align} & \left(r_x, \ r/x, \ v_x, \ v/x, \ \mathcal{G}, \ \mathcal{G}_t \right) \in L^\infty(I), \ \ \left( r/x, \ v/x , \ v/r \right) \in H^1(I), \ \ \mathcal{G}_{xt} \in L^2(I), \label{9-21-1}\\ & (r, \ v) \in H^2([0 ,a]) \ \ {\rm for} \ \ a \in (0, 1),\label{9-21-2} \\ & (r, \ v) \in H^2(I), \ \ {\rm if} \ \ \mathcal{G}_x \in L^2. \label{9-21-3} \end{align} In fact, \eqref{9-21-3} can be derived easily by noting that $$r_{xx}=r_x\left[\mathcal{G}_x- 2 \frac{x}{r}\left(\frac{r}{x}\right)_x \right] \ \ {\rm and} \ \ v_{xx}=r_x\left[\mathcal{G}_{xt}+ \frac{v_x}{r_x^2}r_{xx} - 2 \left(\frac{v}{r}\right)_x \right]. $$ With the regularity \eqref{9-21-1}-\eqref{9-21-3}, we have the following lemmas. \begin{lem}\label{lem2.3} Suppose that \eqref{rx} holds for a suitably small $\epsilon_0$. Then for $t\in [0, T]$, \begin{equation}\label{gjvx}\begin{split} &\left\|\left(v_x, v/x\right) \right\|^2 \le 4 \left\|\mathcal{G}_{t}\right\|^2 , \end{split}\end{equation} \begin{equation}\label{gjrx}\begin{split} &\left\|\left(r_x-1, {r}/{x}-1\right) \right\|^2 \le 4 \left\|\mathcal{G} \right\|^2 , \end{split}\end{equation} \begin{equation}\label{gjrxx}\begin{split} &\left\|\left( r_{xx}, \ (r/x)_x \right)\right\|^2 \le 4 \left\|\mathcal{G}_{x}\right\|^2 , \end{split}\end{equation} \begin{equation}\label{gjvxx}\begin{split} &\left\|\left( v_{xx}, \ (v/x)_x \right)\right\|^2 \le 4 \left\|\mathcal{G}_{tx}\right\|^2 + c \left\|(v_x, v/x)\right\|_{L^\infty}^2 \left\|\mathcal{G}_{x}\right\|^2 . \end{split}\end{equation} Here the estimates \eqref{gjrxx} and \eqref{gjvxx} hold if $ \|\mathcal{G}_{x} \|< \infty$. \end{lem} {\em Proof}. The proof consists of three steps. {\em Step 1}. In this step, we prove \eqref{gjvx} and \eqref{gjrx}. Let $\varepsilon\in (0, 1/4)$ be an arbitrary constant, and $\chi_\varepsilon\in[0,1]$ be a cut-off function satisfying \begin{equation}\label{cutoff1}\begin{split} &\chi_\varepsilon=0 \ \ {\rm on} \ \ [0,\varepsilon]\cup [1-\varepsilon, 1], \ \ \chi_\varepsilon=1 \ \ {\rm on} \ \ [2\varepsilon,1-2\varepsilon] ,\\ & \chi_\varepsilon=x/\varepsilon-1 \ \ {\rm on} \ \ [\varepsilon,2\varepsilon], \ \ \ \ \ \ \chi_\varepsilon=(1-x)/\varepsilon-1 \ \ {\rm on} \ \ [1-2\varepsilon,1- \varepsilon] . \end{split}\end{equation} Note that $v_x=x(v/x)_x+v/x$ and $ v/x \in H^1$; integration by parts then gives $$ \int \chi_\varepsilon v_x \frac{v}{x}dx= \int \chi_\varepsilon \left(\frac{v}{x}\right)^2dx +\frac{1}{2}\int \chi_\varepsilon x \left(\left(\frac{v}{x}\right)^2\right)_xdx = \frac{1}{2}\int \chi_\varepsilon \left(\frac{v}{x}\right)^2dx-\frac{1}{2}\int x\chi'_\varepsilon \left(\frac{v}{x}\right)^2dx.
$$ Thus, we have \begin{equation*}\begin{split} & \int \chi_\varepsilon\left|v_x+2 \frac{v}{x}\right|^2dx =\int \chi_\varepsilon\left[ v_x^2 +4\left(\frac{v}{x}\right)^2 \right]dx +4\int \chi_\varepsilon v_x \frac{v}{x} dx \\ = & \int \chi_\varepsilon\left[ v_x^2 +6\left(\frac{v}{x}\right)^2 \right]dx -2\int \chi_\varepsilon' x \left(\frac{v}{x}\right)^2dx \\ =&\int \chi_\varepsilon\left[ v_x^2 +6\left(\frac{v}{x}\right)^2 \right]dx -2\int_\varepsilon^{2\varepsilon} \frac{ x}{\varepsilon} \left(\frac{v}{x}\right)^2dx +2\int_{1-2\varepsilon}^{1-\varepsilon} \frac{ x}{\varepsilon} \left(\frac{v}{x}\right)^2dx \\ \ge & \int \chi_\varepsilon\left[ v_x^2 +6\left(\frac{v}{x}\right)^2 \right]dx -4\int_\varepsilon^{2\varepsilon} \left(\frac{v}{x}\right)^2dx . \end{split}\end{equation*} Note that \begin{equation*}\begin{split} v_x+2\frac{v}{x} =\mathcal{G}_{t}-\left(\frac{1}{r_x}-1\right)v_x-2\left(\frac{x }{r}-1\right)\frac{v}{x}. \end{split}\end{equation*} Thus, \begin{equation*}\begin{split} & \int \chi_\varepsilon\left[ v_x^2 +6\left(\frac{v}{x}\right)^2 \right]dx \le \int \chi_\varepsilon\left|v_x+2 \frac{v}{x}\right|^2dx + 4\int_\varepsilon^{2\varepsilon} \left(\frac{v}{x}\right)^2dx \\ \le & 2 \int \chi_\varepsilon \mathcal{G}_{t}^2 dx+ c\epsilon_0^2\int \chi_\varepsilon\left[ v_x^2 +\left(\frac{v}{x}\right)^2 \right]dx +4\int \left(\frac{v}{x}\right)^2dx, \end{split}\end{equation*} which implies that \begin{equation}\label{ht23}\begin{split} \int \chi_\varepsilon\left[ \frac{1}{2} v_x^2 + \frac{11}{2}\left(\frac{v}{x}\right)^2 \right]dx \le 2 \int \mathcal{G}_{t}^2 dx +4\int \left(\frac{v}{x}\right)^2dx, \end{split}\end{equation} provided that \eqref{rx} holds for a suitably small number $\epsilon_0$. Since $( v_x, v/x) \in L^2$ due to \eqref{9-21-1}, we can obtain, by letting $\varepsilon\to 0$ and using the dominated convergence theorem, that \begin{equation*}\begin{split} \int \left[ \frac{1}{2} v_x^2 + \frac{11}{2}\left(\frac{v}{x}\right)^2 \right]dx \le 2 \int \mathcal{G}_{t}^2 dx +4\int \left(\frac{v}{x}\right)^2dx, \end{split}\end{equation*} which, after absorbing $4\int \left({v}/{x}\right)^2dx$ into the left-hand side, yields \begin{equation*}\begin{split} \int \left[ v_x^2 + \left(\frac{v}{x}\right)^2 \right]dx \le 4 \int \mathcal{G}_{t}^2 dx. \end{split}\end{equation*} This finishes the proof of \eqref{gjvx}. Clearly, \eqref{gjrx} follows from similar arguments. {\em Step 2}. In this step, we prove \eqref{gjrxx} under the assumption $\|\mathcal{G}_x\|<\infty$. Due to $\mathcal{G}_x\in L^2(I)$ and \eqref{9-21-3}, we have $r\in H^2(I)$. Then, a similar argument to the proof of \eqref{gjvx} leads to \eqref{gjrxx}, as follows.
For the cut-off function $\chi_\varepsilon$ defined in \eqref{cutoff1}, we use $r\in H^2(I)$, which ensures $(r/x)_x \in H^1([\varepsilon,1])$, and integration by parts to get \begin{equation*}\begin{split} \int \chi_\varepsilon r_{xx} \left(\frac{r}{x}\right)_xdx = & \int \chi_\varepsilon \left(x\frac{r}{x}\right)_{xx}\left (\frac{r}{x}\right)_xdx =\int \chi_\varepsilon \left\{\frac{1}{2} x\left[\left(\left(\frac{r}{x}\right)_x\right)^2\right]_x+2\left|\left(\frac{r}{x}\right)_x\right|^2\right\}dx\\ = & \frac{3}{2}\int \chi_\varepsilon \left|\left(\frac{r}{x}\right)_x\right|^2dx -\frac{1}{2} \int x\chi_\varepsilon' \left|\left(\frac{r}{x}\right)_x\right|^2dx, \end{split}\end{equation*} which implies \begin{equation*}\begin{split} \int \chi_\varepsilon \left|r_{xx}+2\left(\frac{r}{x}\right)_x\right|^2dx= & \int \chi_\varepsilon\left( r_{xx}^2+10\left|\left(\frac{r}{x}\right)_x\right|^2\right)dx- 2\int x\chi_\varepsilon' \left|\left(\frac{r}{x}\right)_x\right|^2dx\\ \ge & \int \chi_\varepsilon\left( r_{xx}^2+10\left|\left(\frac{r}{x}\right)_x\right|^2\right)dx - 4 \int_\varepsilon^{2\varepsilon} \left|\left(\frac{r}{x}\right)_x\right|^2dx . \end{split}\end{equation*} This, together with \eqref{tlg2} and \eqref{rx}, gives that for small $\epsilon_0$ in \eqref{rx}, \begin{equation}\label{9151} \int \chi_\varepsilon\left(\frac{1}{2}r_{xx}^2+\frac{19}{2}\left|\left(\frac{r}{x}\right)_x\right|^2\right) dx\le 2 \int \mathcal{G}_x^2dx+ 4 \int \left|\left(\frac{r}{x}\right)_x\right|^2dx.\end{equation} Therefore, by virtue of \eqref{9-21-1} and \eqref{9-21-3} (which imply $(r_{xx}, (r/x)_x)\in L^2$), we obtain \eqref{gjrxx} with the help of the dominated convergence theorem. {\em Step 3}. In this step, we prove \eqref{gjvxx} under the assumption $\|\mathcal{G}_x\|<\infty$. In view of \eqref{tlg5}, it follows from the Cauchy inequality and \eqref{rx} that \begin{equation*}\begin{split} \mathcal{G}_{xt}^2 \ge \frac{1}{2}\left|\left(v_x +2 \frac{v}{x}\right)_x \right|^2 -c \epsilon_0^2 \left(v_{xx}^2 + \left|\left(\frac{v}{x}\right)_x\right|^2\right) - c\left\|\left(v_x,\frac{v}{x}\right)\right\|_{L^\infty}^2\left(r_{xx}^2 + \left|\left(\frac{r}{x}\right)_x\right|^2\right). \end{split}\end{equation*} Due to $\mathcal{G}_x\in L^2(I)$ and \eqref{9-21-3}, we have $v\in H^2(I)$. So we can argue in the same way as for the derivation of \eqref{9151} to obtain \begin{equation}\label{9.22.1}\begin{split} & \int \chi_\varepsilon\left( \frac{1}{2} v_{xx}^2 + \frac{19}{2}\left|\left(\frac{v}{x}\right)_x\right|^2 \right)dx \\ \le & 2 \int \mathcal{G}_{tx}^2 dx +4\int \left|\left(\frac{v}{x}\right)_x\right|^2 dx + c\left\|\left(v_x,\frac{v}{x}\right)\right\|_{L^\infty}^2\int \left(r_{xx}^2 + \left|\left(\frac{r}{x}\right)_x\right|^2\right)dx , \end{split}\end{equation} where $\chi_\varepsilon$ is the cut-off function defined in \eqref{cutoff1}. With the aid of \eqref{9-21-1} and \eqref{9-21-3}, we see that all the quantities appearing on the right-hand side of \eqref{9.22.1} are finite.
Since $(v_{xx}, (v/x)_x) \in L^2$ due to \eqref{9-21-1} and \eqref{9-21-3}, we can get, by letting $\varepsilon\to 0$ and using the dominated convergence theorem, that \begin{equation}\label{3.3.12Aug6}\begin{split} \int \left( v_{xx}^2 + \left|\left(\frac{v}{x}\right)_x\right|^2 \right)dx \le 4 \int \mathcal{G}_{xt}^2 dx + c\left\|\left(v_x,\frac{v}{x}\right)\right\|_{L^\infty}^2\int \left(r_{xx}^2 + \left|\left(\frac{r}{x}\right)_x\right|^2\right)dx . \end{split}\end{equation} This, together with \eqref{gjrxx}, gives \eqref{gjvxx}. $\Box$ \begin{rmk} In view of \eqref{9-21-1}, we know that $\mathcal{G}_{xt}\in L^2(I)$, $ t\in [0, T] $. However, this does not mean that $\mathcal{G}_{x}(t)\in L^2(I)$, $ t\in [0, T] $, unless we assume that $\mathcal{G}_{x}(0)\in L^2(I)$, because $$\mathcal{G}_{x}(t)=\mathcal{G}_{x}(0)+\int_0^t \mathcal{G}_{xs}(x, s)ds , \ t\in [0, T]. $$ Thus $\mathcal{G}_{x}(0)\in L^2(I)$ is an additional regularity assumption on the initial data, beyond $\mathfrak{E}(0)<\infty$. By \eqref{gjrxx}, \eqref{tlg2} and \eqref{9.21.6}, we know that the condition $\mathcal{G}_{x}(0)\in L^2(I)$ is equivalent to $ r_{xx} (x, 0)\in L^2(I)$ if $\mathfrak{E}(0) $ is small. \end{rmk} \begin{rmk} We can see from the proof of \eqref{gjvxx}, in particular \eqref{3.3.12Aug6}, that \begin{align}\label{Aug6-1} \left\|\left( v_{xx}, \ (v/x)_x \right)\right\|^2 \le & 4 \left\|\mathcal{G}_{tx}\right\|^2 + c \left\|(v_x, v/x)\right\|_{L^\infty}^2 \left\|\left( r_{xx}, \ (r/x)_x \right)\right\|^2 , \end{align} if $\|r_{xx}\|<\infty$. To bound $\|( v_{xx}, (v/x )_x )\|$, we have to control the product of $ \| ( r_{xx}, (r/x )_x )\|$ and $ \|(v_x, v/x) \|_{L^\infty}$, in addition to the bound of $ \|\mathcal{G}_{xt} \|$. Indeed, we prove in Lemma \ref{lem10} that although $ \| ( r_{xx}, \ (r/x )_x)(\cdot, t) \|$ may grow with respect to time, $ \|(v_x, v/x)(\cdot, t) \|_{L^\infty}$ decays fast enough that the product stays bounded. \end{rmk} \begin{rmk} Similarly to \eqref{gjvxx}, we can obtain that for any $a\in (0,1)$, \begin{equation*}\begin{split} \left\|\left( v_{xx}, \ (v/x)_x \right)\right\|_{L^2([a,1])}^2 \le 4 \left\|\mathcal{G}_{tx}\right\|^2_{L^2([a,1])} + c \left\|(v_x, v/x)\right\|_{L^\infty ([a,1]) }^2 \left\|\mathcal{G}_{x}\right\|^2_{L^2([a,1])}, \end{split}\end{equation*} which implies \begin{equation}\label{gjvxxa}\begin{split} a^2 \left\|\left( v_{xx}, \ (v/x)_x \right)\right\|_{L^2([a,1])}^2 \le 4 \left\|x\mathcal{G}_{tx}\right\|^2 + c \left\|(xv_x, v)\right\|_{L^\infty }^2 \left\|\mathcal{G}_{x}\right\|^2 , \end{split}\end{equation} provided that $\left\|\mathcal{G}_{x}\right\|<\infty$. \end{rmk} \begin{lem}\label{lemhh1} Let $\delta>0$ be any fixed constant.
Suppose that \eqref{rx} holds for a suitably small $\epsilon_0$. Then for $t\in [0 ,T]$, \begin{align} &\left\|\bar\rho^{\delta} \left( r_{xx}, (r/x)_x \right) \right\|^2 +\left\|x \bar\rho^{\delta-(\gamma-1)/2} \left( r/x \right)_x\right\|^2 \le c \left\|\bar\rho^{\delta} \mathcal{G}_{x}\right\|^2, \label{weightrxx} \\ &\left\|\bar\rho^{\delta} \left( v_{xx}, \left({v}/{x}\right)_x \right)\right\|^2 \le c \left\|\bar\rho^{\delta} \mathcal{G}_{xt} \right\|^2 +c\left\|\left(v_x, {v}/{x}\right)\right\|_{L^\infty}^2 \left\|\bar\rho^{\delta} \mathcal{G}_{x}\right\|^2, \label{weightvxx} \\ &\left\|\bar\rho^{\delta} \left(r_{x}-1, {r}/{x}-1 \right)\right\|^2 +\left\| \bar\rho^{\delta-(\gamma-1)/2} (r-x) \right\|^2 \le c \left\|\bar\rho^{\delta} \mathcal{G} \right\|^2 , \label{weightrx} \\ &\left\|\bar\rho^{\delta} \left( v_{x}, v/x \right)\right\|^2 +\left\| \bar\rho^{\delta-(\gamma-1)/2} v \right\|^2 \le c \left\|\bar\rho^{\delta} \mathcal{G}_{t} \right\|^2 . \label{weightvx} \end{align} Here the estimates \eqref{weightrxx} and \eqref{weightvxx} hold if $\left\|\bar\rho^{\delta} \mathcal{G}_{x}\right\|<\infty$. \end{lem} {\em Proof}. The idea of the proof of this lemma is similar to that of Lemma \ref{lem2.3}. Since $\bar\rho(1)=0$, we only need to cut off the origin. Let $\varepsilon\in (0, 1/4)$ be an arbitrary constant, and $\eta_\varepsilon\in[0,1]$ be a cut-off function satisfying \begin{equation*}\begin{split} &\eta_\varepsilon=0 \ \ {\rm on} \ \ [0,\varepsilon],\ \ \eta_\varepsilon=x/\varepsilon-1 \ \ {\rm on} \ \ [\varepsilon,2\varepsilon] \ \ {\rm and} \ \ \eta_\varepsilon=1 \ \ {\rm on} \ \ [2\varepsilon,1]. \end{split}\end{equation*} In a similar way to deriving \eqref{ht23}, we can get \begin{equation*}\begin{split} \int \eta_\varepsilon \bar\rho^{2\delta} \left[ \frac{1}{2} v_x^2 + \frac{11}{2}\left(\frac{v}{x}\right)^2 \right]dx + 4\frac{\delta}{\gamma}\int \eta_\varepsilon \phi \bar\rho^{2\delta-(\gamma-1)} v^2 dx \le 2 \int \bar\rho^{2\delta} \mathcal{G}_{t}^2 dx +4\int \bar\rho^{2\delta} \left(\frac{v}{x}\right)^2dx, \end{split}\end{equation*} due to \eqref{rhox}. Letting $\varepsilon\to 0$ gives \eqref{weightvx}. Clearly, \eqref{weightrxx}-\eqref{weightrx} follow from similar arguments. $\Box$ \begin{lem} Suppose that \eqref{rx} holds for a suitably small $\epsilon_0$.
Then for any $a\in (0,1)$, \begin{equation}\label{f2}\begin{split} \left\|v\right\|_{L^\infty}^2 \le 2\left\|v\right\|\|v_x\|, \ \ t\in [0, T], \end{split}\end{equation} \begin{equation}\label{f1}\begin{split} \left\|x v_x\right\|_{L^\infty}^2 \le c \left\|xv_x\right\|\left( \left\|x\mathcal{G}_{tx} \right\|+\left\|v_x\right\|+ \left\|v/x\right\|\right), \ \ t\in [0, T], \end{split}\end{equation} \begin{equation}\label{nj2}\begin{split} &\left\|v_x \right\|_{L^\infty([0, a ])}^2 \le (1/a)\left\| v_x \right\|^2_{L^2\left([0, a]\right)} + 2\left\|v_x\right\|_{L^2\left([0, a]\right)} \left\| v_{xx}\right\|_{L^2\left([0, a]\right)}, \ \ t\in [0, T], \end{split}\end{equation} \begin{equation}\label{nj2.new}\begin{split} &\left\|v/x \right\|_{L^\infty([0, a ])}^2 \le (1/a)\left\| v/x \right\|^2_{L^2\left([0, a]\right)} + 2\left\|v/x\right\|_{L^2\left([0, a]\right)}\left\| (v/x)_x\right\|_{L^2\left([0, a]\right)}, \ \ t\in [0, T]. \end{split}\end{equation} \end{lem} {\em Proof}. Clearly, \eqref{f2} follows from the boundary condition $v(0,t)=0$ and the H\"{o}lder inequality. For $xv_x$, notice that $$ \left(x v_x\right)^2 =r_x^2 \left(x \frac{v_x}{r_x}\right)^2 = 2 r_x^2 \int_0^x \left(y \frac{v_y}{r_y}\right) \left(y \frac{v_y}{r_y}\right)_y dy \le c \left\|x \frac{v_x}{r_x}\right\| \left(\left\|x\left( \frac{v_x}{r_x}\right)_x\right\| +\left\|\frac{v_{x}}{r_x}\right\| \right) $$ and $$ \left\|x\left( \frac{v_x}{r_x}\right)_x\right\|=\left\|x\left( \mathcal{G}_t - 2\frac{v}{r} \right)_x\right\| \le \left\|x\mathcal{G}_{tx} \right\| + 2\left\|x \left(\frac{v}{r} \right)_x\right\|. $$ Thus, \begin{equation*}\begin{split} \left\|x v_x\right\|_{L^\infty}^2 \le c \left\|xv_x\right\|\left( \left\|x\mathcal{G}_{tx} \right\|+\left\|v_x\right\|+ \left\|v/x\right\|\right), \end{split}\end{equation*} which verifies \eqref{f1}. \eqref{nj2} and \eqref{nj2.new} follow from simple calculations. $\Box$ \begin{lem} Let $\delta$ be a fixed positive constant. Then for $t\in [0, T]$, \begin{equation}\label{girl}\begin{split} &\left\|r-x\right\|_{L^\infty}^2 \le 2\left\|r-x\right\|\|r_x-1\|, \end{split}\end{equation} \begin{equation}\label{woman}\begin{split} &\left\|x^{{3}/{2}}\bar\rho^\delta(r_x-1)\right\|_{L^\infty}^2 \le 3\left\|x\bar\rho^{\delta}(r_x-1)\right\|^2 + 2\left\|x^3 \bar\rho^{2\delta}(r_x-1)r_{xx}\right\|_{L^1}, \end{split}\end{equation} \begin{equation}\label{woman.new}\begin{split} &\left\|x^{{3}/{2}}\bar\rho^\delta(r/x-1)\right\|_{L^\infty}^2 \le 3\left\|x\bar\rho^{\delta}(r/x-1)\right\|^2 + 2\left\|x^3 \bar\rho^{2\delta}(r/x-1)(r/x)_x\right\|_{L^1}. \end{split}\end{equation} Here \eqref{woman} holds if the quantities appearing on its right-hand side are finite. \end{lem} {\em Proof}. Clearly, \eqref{girl} follows from \eqref{Aug9-2} and the H\"{o}lder inequality.
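For the reader's convenience, we record the elementary computation behind \eqref{girl}: since $r(0,t)=0$, so that $(r-x)|_{x=0}=0$,
\begin{equation*}
(r-x)^2(x,t)=2\int_0^x (r-y)\left(r_y-1\right)(y,t)\,dy \le 2\left\|(r-x)(\cdot,t)\right\|\left\|(r_x-1)(\cdot,t)\right\|
\end{equation*}
by the Cauchy--Schwarz inequality, and taking the supremum over $x\in[0,1]$ gives \eqref{girl}.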
For any $x\in[0,1]$, \begin{equation*}\begin{split} &x^3 \bar\rho^{2\delta}(r_x-1)^2= \int_0^x \left(y^3 \bar\rho^{2\delta}(r_y-1)^2\right)_ydy \\ \le & 3 \int_0^x y^2 \bar\rho^{2\delta}(r_y-1)^2dy + 2\int_0^x y^3 \bar\rho^{2\delta}(r_y-1)r_{yy}dy, \end{split}\end{equation*} due to \eqref{rhox}. This gives \eqref{woman}. Similarly, we can obtain \eqref{woman.new}. $\Box$ The following lemma is on the equivalence of the functionals $\mathcal{E}(t)$ and $\mathfrak{E}(t)$ and is the key to the verification of the {\it a priori} assumptions \eqref{rx} and \eqref{vx}. \begin{lem}\label{boundsforrv} Suppose that \eqref{rx} and \eqref{vx} hold for suitably small numbers $\epsilon_0$ and $\epsilon_1$. Then, \begin{align}\label{bfrv} \left\|\left(r_x-1, {r}/{x}-1, v_x, {v}/{x}\right)(\cdot,t)\right\|_{L^\infty}^2 \le C\mathcal{E}(t), \ \ t\in [0, T], \end{align} \begin{equation}\label{equivalence} c \mathcal{E}(t)\le \mathfrak{E}(t)\le C \mathcal{E}(t), \ \ t\in [0, T]. \end{equation} \end{lem} {\em Proof}. The proof of \eqref{bfrv} consists of two steps, in which the $L^\infty$-bounds on the intervals $I_1=[0,1/2]$ and $I_2=[1/2,1]$ will be shown, respectively. Once \eqref{bfrv} is proved, \eqref{equivalence} follows from the definitions of $\mathcal{E}(t)$ and $\mathfrak{E}(t)$ by noticing that $v(0, t)=r(0,t)=0$. {\em Step 1} (away from the boundary). It follows from \eqref{7-2}, \eqref{tlg2} and \eqref{mathcalE} that \begin{equation}\label{Aug11.1} \| \mathcal{G}_{xt}\|^2 \le C \left(\|\bar\rho^\gamma\mathcal{G}_x\|^2 + \|\bar\rho v_t\|^2 + \|\bar\rho(r-x, xr_x-x)\|^2 \right) \le C \mathcal{E}, \end{equation} which, together with \eqref{weightvxx} and \eqref{tlg2}, implies \begin{equation*} \|\bar\rho^{\gamma-1/2}(v_{xx}, (v/x)_x)(\cdot,t)\|^2 \le C \mathcal{E}(t). \end{equation*} Thus, \begin{equation*} \|(r_{xx}, (r/x)_x, v_{xx}, (v/x)_x)(\cdot,t)\|^2_{L^2(I_1)}\le C \mathcal{E}(t). \end{equation*} In view of \eqref{hardyorigin}, we then have \begin{equation}\label{Aug10-1}\begin{split} \|(r_{x}-1, r/x-1, v_{x}, v/x)(\cdot,t)\|^2_{L^2(I_1)}\le C \|(xr_{x}-x, r-x, xv_{x}, v)(\cdot,t)\|^2_{L^2(I_1)} \\ + C \|(xr_{xx}, x(r/x)_x, xv_{xx}, x(v/x)_x)(\cdot,t)\|^2_{L^2(I_1)} \le C \mathcal{E}(t). \end{split}\end{equation} Hence, \begin{equation}\label{Aug9-4}\begin{split} &\|(r_{x}-1, r/x-1, v_{x}, v/x)(\cdot,t)\|^2_{L^\infty(I_1)} \\ \le & C\|(r_{x}-1, r/x-1, v_{x}, v/x)(\cdot,t)\|^2_{H^1(I_1)} \le C \mathcal{E}(t). \end{split}\end{equation} {\em Step 2} (away from the origin). It follows from \eqref{mathcalE} and \eqref{Aug10-1} that \begin{equation}\label{Aug10-2}\begin{split} \|(r_{x}-1, r/x-1, v_{x}, v/x)(\cdot,t)\|^2 \le C \mathcal{E}(t), \end{split}\end{equation} which implies, with the aid of \eqref{mathcalE}, that \begin{align} & \|( {r}/{x}-1)(\cdot,t)\|_{L^\infty(I_2)}^2\le 4 \|( {r}-x)(\cdot,t)\|_{L^\infty(I_2)}^2 \le C \left\|\left(r-x \right)(\cdot,t)\right\|_{H^1(I_2)}^2 \le C \mathcal{E}(t), \label{Aug10-3}\\ & \|(v/x)(\cdot,t)\|_{L^\infty(I_2)}^2\le 4 \| v(\cdot,t)\|_{L^\infty(I_2)}^2 \le C \left\|v(\cdot,t)\right\|_{H^1(I_2)}^2 \le C \mathcal{E}(t).
\label{Aug10-4} \end{align} Clearly, \begin{equation}\label{Aug10-5} \|(r_x-1)(\cdot,t)\|_{L^\infty(I_2)}^2\le \mathcal{E}(t). \end{equation} In view of \eqref{f1}, \eqref{Aug10-2} and \eqref{Aug11.1}, we see that \begin{align}\label{Aug10-6} \|v_x(\cdot, t)\|_{L^\infty(I_2)}^2 \le 4 \left\|(x v_x)(\cdot,t)\right\|_{L^\infty(I_2)}^2 \le 4 \left\|(x v_x)(\cdot,t)\right\|_{L^\infty}^2\le C \mathcal{E}(t). \end{align} So, \eqref{bfrv} is a consequence of \eqref{Aug9-4} and \eqref{Aug10-3}-\eqref{Aug10-6}. $\Box$ \subsubsection{Part I: global existence and decay of strong solutions}\label{sec3.4.2} In this subsubsection, we prove the global existence and large time decay of the strong solution for suitably small $\mathfrak{E}(0)$. \begin{lem}\label{lem5} Suppose that \eqref{rx} and \eqref{vx} hold. Then there exist positive constants $C$, $C(\theta)$ and $C(\theta,a)$ independent of $t$ such that for any $\theta\in (0, \ {2(\gamma-1)}/({3\gamma}))$ and $a\in (0,1)$, \begin{align} &\left\|\bar\rho^{\gamma-{1}/{2}}\left(r_{xx}, (r/x)_x\right)(\cdot, t) \right\|^2 \le C \mathcal{E}(0), \label{lem5est}\\ & (1+t)^{(\gamma-1)/\gamma-\theta} \left\| \left(r_{xx}, (r/x)_x\right)(\cdot, t) \right\|^2_{L^2([0,a])} \le C(\theta, a) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right), \label{lem5est8/9} \\ &(1+t)^{({\gamma-1})/{\gamma}-\theta} \left\| \bar\rho^{ {1}/{2}} v_t(\cdot, t) \right\|^2 +\int_0^t (1+s)^{({\gamma-1})/{\gamma}-\theta} \left\|\left( v_{x}, {v}/{x} , v_{sx}, {v_s}/{x} \right)(\cdot,s)\right\|^2 ds\notag\\ &\quad \le C(\theta)\left(\mathcal{E}(0)+\|r_{0x}-1\|_{L^{\infty}}^2\right), \label{lem5est'}\\ & (1+t)^{(\gamma-1)/\gamma-\theta} \left\| \left(v_{xx}, (v/x)_x\right)(\cdot, t) \right\|^2_{L^2([0,a])} \le C(\theta, a) \left[\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right], \label{lem5est'8/9} \end{align} for all $t\in [0, T]$. \end{lem} {\em Proof}. The proof consists of four steps. {\em Step 1}. In this step, we prove \eqref{lem5est} and \eqref{lem5est8/9}. Multiplying equation \eqref{7-2} by $ \bar\rho^{2\gamma-1}\mathcal{G}_x $ and integrating the product with respect to the spatial variable, one gets, using the Cauchy inequality, that \begin{equation}\label{catch}\begin{split} & \frac{\mu}{2}\frac{d}{dt} \int \bar\rho^{2\gamma-1} \mathcal{G}_{x}^2dx +\frac{\gamma}{2} \int \left(\frac{x^2}{r^2 r_x } \right)^{\gamma } \bar\rho ^{3\gamma-1} \mathcal{G}_x^2 dx \\ \le & C\int v_t^2dx +C \int x^2\bar{\rho}^\gamma\left[ \left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx . \end{split}\end{equation} It follows from \eqref{catch}, \eqref{lem2est} and \eqref{lem4est} that \begin{equation}\label{bt1}\begin{split} & \int \bar\rho^{2\gamma-1} \mathcal{G}_{x}^2 (x,t)dx + \int_0^t \int \bar\rho ^{3\gamma-1} \mathcal{G}_x^2 dx ds \le C\mathcal{E}(0). \end{split}\end{equation} This, together with \eqref{weightrxx}, gives \eqref{lem5est}.
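To indicate the Cauchy-inequality step behind \eqref{catch} (a sketch only): since $\bar\rho$ is independent of $t$, the term $\mu\mathcal{G}_{xt}\,\bar\rho^{2\gamma-1}\mathcal{G}_x$ produces the exact time derivative on the left, while the right-hand side of \eqref{7-2} is absorbed schematically via
\begin{equation*}
\bar\rho^{2\gamma-1}\left|\mathcal{G}_x\right|\,\frac{x^2}{r^2}\,\bar\rho\,|v_t| \le \frac{\gamma}{4}\left(\frac{x^2}{r^2 r_x } \right)^{\gamma }\bar\rho^{3\gamma-1}\mathcal{G}_x^2 + C\, v_t^2 ,
\end{equation*}
where we have used that $\bar\rho$ is bounded and that $r/x$ and $r_x$ stay close to $1$ by \eqref{rx}; the term involving $\big[(x^2/(r^2 r_x))^{\gamma}-(x/r)^{4}\big]x\phi\bar\rho$ is treated in the same way and produces the second integral on the right of \eqref{catch}.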
In a similar way to deriving \eqref{catch}, we can get \begin{equation*}\begin{split} & \frac{\mu}{2}\frac{d}{dt} \int \bar\rho^{3\gamma-2} \mathcal{G}_{x}^2dx +\frac{\gamma}{2} \int \left(\frac{x^2}{r^2 r_x } \right)^{\gamma } \bar\rho ^{4\gamma-2} \mathcal{G}_x^2 dx \\ \le & C\int v_t^2dx +C \int x^2\bar{\rho}^\gamma\left[ \left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx . \end{split}\end{equation*} Multiply the inequality above by $(1+t)^{({\gamma-1})/{\gamma}-\theta}$ and integrate the product to give \begin{align} & (1+t)^{\frac{\gamma-1}{\gamma}-\theta} \int \bar\rho^{3\gamma-2} \mathcal{G}_{x}^2(x,t)dx +\int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int \bar\rho^{4\gamma-2} \mathcal{G}_x^2 dx ds\notag\\ \le & C\int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int v_s^2dx ds +C \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int x^2\bar{\rho}^\gamma\left[ \left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx ds \notag\\ & + C\int_0^t (1+s)^{-\frac{1}{\gamma}-\theta} \int \bar\rho^{3\gamma-2} \mathcal{G}_x^2 dx ds \le C(\theta) \left(\mathcal{E}(0) + \| r_{0x}-1\|_{L^\infty}^2 \right), \label{Aug9-3} \end{align} due to \eqref{estlem51}, \eqref{bt1} and the following estimate: \begin{align*} &\int_0^t (1+s)^{-\frac{1}{\gamma}-\theta} \int \bar\rho^{3\gamma-2} \mathcal{G}_x^2 dx ds\\ = &\int_0^t (1+s)^{-\frac{1}{\gamma}-\theta} \left(\int \bar\rho^{2\gamma-1} \mathcal{G}_x^2 dx \right)^{\frac{1}{\gamma}}\left(\int \bar\rho^{3\gamma-1} \mathcal{G}_x^2 dx \right)^{\frac{\gamma-1}{\gamma}} ds \\ \le& C \int_0^t (1+s)^{-1-\gamma\theta} ds \sup_{s\in [0,t]} \int \bar\rho^{2\gamma-1} \mathcal{G}_x^2 (x,s) dx + \int_0^t \int \bar\rho^{3\gamma-1} \mathcal{G}_x^2 dx ds. \end{align*} It then follows from \eqref{weightrxx} and \eqref{Aug9-3} that \begin{equation}\label{didadida}\begin{split} (1+t)^{\frac{\gamma-1}{\gamma}-\theta} \int \bar\rho^{3\gamma-2} \left(r_{xx}^2 + \left|\left({r}/{x}\right)_x\right|^2 \right)(x,t)dx \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split}\end{equation} So, \eqref{lem5est8/9} follows directly from \eqref{didadida}. {\em Step 2}. In this step, we prove that \begin{align}\label{didadida'} \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int \left(v_{x}^2 + \left|{v}/{x}\right|^2 \right)dx ds \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{align} Multiplying the square of \eqref{7-2} by $ \bar\rho^{2\gamma-2} (1+t)^{(\gamma-1)/\gamma-\theta}$ and integrating the product with respect to the spatial and temporal variables, we have, using \eqref{Aug9-3} and \eqref{estlem51}, that \begin{equation*}\begin{split} & \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int \bar\rho^{2\gamma-2} \mathcal{G}_{xs}^2dx ds \le C \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int \left(\bar\rho^{4\gamma-2} \mathcal{G}_x^2 + v_s^2 \right) dx ds \\ & +C \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int x^2\bar{\rho}^\gamma\left[ \left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx ds \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right).
\end{split}\end{equation*} This, together with \eqref{weightvxx} and \eqref{Aug9-3}, implies \begin{equation*}\begin{split} \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int \bar\rho^{4\gamma-2} \left(v_{xx}^2 + \left|\left(\frac{v}{x}\right)_x\right|^2 \right)dx ds \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split}\end{equation*} Clearly, \begin{equation}\label{8.21.4}\begin{split} \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int_0^{1/2} \left(v_{xx}^2 + \left|\left(\frac{v}{x}\right)_x\right|^2 \right)dx ds \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split}\end{equation} Then, it follows from \eqref{hardyorigin} and \eqref{estlem51} that \begin{equation*}\begin{split} \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int_0^{1/2} \left(v_{x}^2 + \left|{v}/{x}\right|^2 \right)dx ds \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split}\end{equation*} We use \eqref{estlem51} again to obtain \eqref{didadida'}. {\em Step 3}. In this step, we show that \begin{equation}\label{8/10-1}\begin{split} & (1+t)^{({\gamma-1})/{\gamma}-\theta} \int \bar{\rho} v_t^2 (x,t) dx + \int_0^t (1+s)^{({\gamma-1})/{\gamma}-\theta} \int \left( v_{xs}^2 + (v_s/x)^2 \right)dxds \\ \le & C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split} \end{equation} Differentiating \eqref{nsp1} with respect to $t$ yields \begin{equation}\label{nsptime}\begin{split} &\bar\rho\left( \frac{x}{r}\right)^2 v_{tt} -2\bar\rho\left( \frac{x}{r}\right)^3 \frac{v}{x} v_{t} -\gamma \left[ \left(\frac{x^2}{r^2}\frac{\bar\rho}{ r_x}\right)^\gamma \left(2\frac{v}{r}+\frac{v_x}{r_x}\right) \right]_{x} +4 \left( \frac{x}{r}\right)^5 \frac{v}{x}\left(\bar{\rho}^\gamma\right)_x \\ = & \mu \left(\frac{v_{xt}}{r_x}+2\frac{v_t}{r}\right)_x - \mu \left( \frac{v_x^2 }{r_x^2} +2 \frac{v^2}{r^2} \right)_x. \end{split} \end{equation} Let $\psi$ be a non-increasing function defined on $[0,1]$ satisfying \begin{equation*}\begin{split} \psi=1 \ \ {\rm on } \ \ [0,1/4], \ \ \psi=0 \ \ {\rm on } \ \ [1/2, 1] \ \ {\rm and} \ \ |\psi'|\le 32. \end{split} \end{equation*} Multiplying equation \eqref{nsptime} by $ \psi v_t$ and integrating the product with respect to the spatial variable, one has, using integration by parts and the boundary condition $v(0,t)=0$ (so that $v_t(0,t)=0$), that \begin{equation}\label{levt'}\begin{split} & \frac{d}{dt}\int \frac{1}{2}\bar{\rho} \psi \left(\frac{x}{r}\right)^2 v_t^2 dx + \mu \int \left(\frac{v_{xt}}{r_x}+2\frac{v_t}{r}\right) \left(\psi v_t\right)_x dx =J_1+J_2+J_3 , \end{split} \end{equation} where \begin{equation*}\begin{split} J_1:=&\int \frac{v}{r} \bar{\rho}\psi\left(\frac{x}{r}\right)^2 v_{t}^2dx + 4 \int \left(\frac{x}{r}\right)^5v \phi \bar{\rho}\psi v_t dx \le C \int_0^{1/2} \left( v^2 + v_t^2\right) dx,\\ J_2:=&-\gamma \int \left(\frac{x^2}{r^2}\frac{\bar\rho}{ r_x}\right)^\gamma \left(2\frac{v}{r}+\frac{v_x}{r_x}\right) \left(\psi v_t\right)_x dx, \\ J_3:= & \mu \int \left[ \frac{v_x^2 }{r_x^2} +2 \frac{v^2}{r^2} \right] \left(\psi v_t\right)_x dx .
\end{split} \end{equation*} The second term on the left-hand side of \eqref{levt'} can be estimated as follows: \begin{equation*}\begin{split} &\int \left(\frac{v_{xt}}{r_x}+2\frac{v_t}{r}\right) \left(\psi v_t\right)_x dx = \int \psi\frac{v_{xt}^2}{r_x} dx +2 \int \frac{\psi}{r} v_t v_{tx}dx + \int \psi' \left(\frac{v_{xt}}{r_x}+2\frac{v_t}{r}\right) v_t dx \\ \ge & \int \psi\frac{v_{xt}^2}{r_x} dx -\int \left(\frac{\psi}{r}\right)' v_t^2dx -C \int_0^{1/2}\left( {x^2v_{xt}^2} + v_t^2\right) dx \\ \ge & \int \psi\left[\frac{v_{xt}^2}{r_x} + \frac{r_x v_t^2}{r^2}\right] dx -C \int_0^{1/2}\left( {x^2v_{xt}^2} + v_t^2\right) dx. \end{split} \end{equation*} Then, \begin{equation*}\begin{split} \frac{d}{dt}\int \frac{1}{2}\bar{\rho}\psi \left(\frac{x}{r}\right)^2 v_t^2 dx + \mu \int \psi \left[ \frac{v_{xt}^2}{r_x} + \frac{r_x v_t^2}{r^2} \right]dx \le C \int_0^{1/2}\left( {x^2v_{xt}^2} + v_t^2+v^2\right) dx + J_2+ J_3. \end{split} \end{equation*} It therefore follows from the Cauchy inequality that \begin{equation}\label{wawa2}\begin{split} \frac{d}{dt}\int \frac{1}{2}\bar{\rho}\psi \left(\frac{x}{r}\right)^2 v_t^2 dx + \frac{\mu}{2} \int \psi \left[ \frac{v_{xt}^2}{r_x} + \frac{r_x v_t^2}{r^2} \right]dx \le C \int_0^{1/2}\left( {x^2v_{xt}^2} + v_t^2+( v/x)^2+ v_x^2 \right) dx . \end{split} \end{equation} This, together with \eqref{lem4est} and \eqref{didadida'}, implies that \begin{equation*}\begin{split} (1+t)^{\frac{\gamma-1}{\gamma}-\theta} \int_0^{1/4} \bar{\rho} v_t^2 dx + \int_0^t (1+s)^{\frac{\gamma-1}{\gamma}-\theta} \int_0^{1/4} \left( v_{xs}^2 + \frac{ v_s^2}{x^2} \right)dxds \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split} \end{equation*} Using \eqref{lem4est} again, we obtain \eqref{8/10-1}. So, \eqref{lem5est'} follows from \eqref{8/10-1} and \eqref{didadida'}. {\em Step 4}. In this step, we prove \eqref{lem5est'8/9}. It follows from \eqref{7-2}, \eqref{8/10-1}, \eqref{estlem51} and \eqref{Aug9-3} that \begin{equation*}\begin{split} & \int \bar\rho^{\gamma-2} \mathcal{G}_{xt}^2 (x,t)dx \le C\int \bar{\rho} v_t^2 (x,t) dx +C\int \bar\rho^{3\gamma-2} \mathcal{G}_{x}^2(x,t)dx \\ & + C \int x^2 \bar{\rho}^\gamma \left[ \left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right](x,t) dx \le C(\theta) (1+t)^{-\frac{\gamma-1}{\gamma}+\theta} \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split}\end{equation*} This, together with \eqref{weightvxx} and \eqref{Aug9-3}, implies \begin{equation}\label{hualaAug10}\begin{split} (1+t)^{\frac{\gamma-1}{\gamma}-\theta} \int \bar\rho^{3\gamma-2} \left(v_{xx}^2 + \left| ( {v}/{x} )_x\right|^2 \right)(x,t)dx \le C(\theta) \left(\mathcal{E}(0)+\|r_{0x}-1\|^2_{L^\infty}\right). \end{split}\end{equation} So, \eqref{lem5est'8/9} follows from \eqref{hualaAug10}. $\Box$ \noindent{\bf Proof of Theorem \ref{mainthm1}}. The proof of the first part of Theorem \ref{mainthm1} follows from the local existence and uniqueness results for strong solutions; a sketch of the proof is given in the Appendix, Part I.
So, the global well-posedness of strong solutions with the estimate \eqref{keyconclusion} can be shown by Lemma \ref{lem5}, together with the equivalence of $\mathcal{E}(t)$ and $\mathfrak{E}(t)$ shown in \eqref{equivalence} and the lower-order estimates obtained in Subsection \ref{sec3.3}, through the standard continuation argument. Moreover, \eqref{Aug23-2} and \eqref{Aug23-1} follow from \eqref{8/23/1} and \eqref{growof2nd8.22} in Lemma \ref{lem10}, which will be proved later. $\Box$ To complete the proof of part $i)$ of Theorem \ref{mainthm}, it suffices to show the following lemma. \begin{lem}\label{lem312} For the global strong solution obtained in Theorem \ref{mainthm1}, there exist positive constants $C(\theta)$ and $C(\theta,a)$ independent of $t$ such that for any $\theta\in (0, \ {2(\gamma-1)}/({3\gamma}))$ and $a\in (0,1)$, \begin{align} &(1+t)^{({\gamma-1})/{\gamma}-\theta} \left\|\left(r_x-1, {r}/{x}-1, v_x, v/{x}\right)(\cdot,t)\right\|^2 \le C(\theta) \mathfrak{E}(0), \label{8/10-2}\\ &(1+t)^{({\gamma-1})/{\gamma}-\theta} \left\|\left(r_x-1, {r}/{x}-1, v_x, v/{x}\right)(\cdot,t)\right\|^2_{H^1([0,a])} \le C(\theta,a) \mathfrak{E}(0), \label{8/10-3}\\ &(1+t)^{2({\gamma-1})/{ \gamma}- \theta } \left\|(r-x)(\cdot,t)\right\|_{L^\infty}^2 \le C(\theta)\mathfrak{E}(0),\label{8/11-1}\\ &(1+t)^{({3\gamma-2})/({2\gamma})- {\theta}} \left\|(v, x v_x)(\cdot,t)\right\|_{L^\infty}^2 \le C(\theta) \mathfrak{E}(0),\label{8/11-2}\\ & (1+t)^{({ \gamma-1})/\gamma- {\theta}} \left\| \left(v_x, v/x\right)(\cdot,t)\right\|_{L^\infty}^2 \le C(\theta) \mathfrak{E}(0),\label{8/11-3}\\ &(1+t)^{({\gamma-1})/{ \gamma} - \theta} \left\|\bar\rho^{ ({3\gamma-2})/4} (r_x-1, r/x-1)(\cdot,t)\right\|_{L^\infty}^2 \le C(\theta) \mathfrak{E}(0).\label{8/11-4} \end{align} \end{lem} {\em Proof}. It follows from \eqref{hardyorigin}, \eqref{estlem51}, \eqref{lem5est8/9} and \eqref{lem5est'8/9} that \begin{align} \left\|\left(r_x-1, {r}/{x}-1, v_x, v/{x}\right)(\cdot,t)\right\|_{L^2 \left(\left[0, {1}/{2}\right]\right)}^2 \le C \|(xr_{x}-x, r-x, xv_{x}, v)(\cdot,t)\|^2_{L^2([0,1/2])} \notag\\ + C \|(xr_{xx}, x(r/x)_x, xv_{xx}, x(v/x)_x)(\cdot,t)\|^2_{L^2([0,1/2])} \le C(\theta) (1+t)^{-({\gamma-1})/{\gamma}+\theta} \mathfrak{E}(0),\notag \end{align} which, together with \eqref{estlem51}, gives \eqref{8/10-2}. Clearly, \eqref{8/10-3} follows from \eqref{8/10-2}, \eqref{lem5est8/9} and \eqref{lem5est'8/9}; \eqref{8/11-1} follows from \eqref{girl}, \eqref{estlem51} and \eqref{8/10-2}; and the estimate for $v$ in \eqref{8/11-2} follows from \eqref{f2}, \eqref{estlem51} and \eqref{8/10-2}. Due to \eqref{7-2}, \eqref{estlem51} and \eqref{Aug9-3}, we have \begin{equation*}\begin{split} \| x \mathcal{G}_{xt}(\cdot,t)\|^2 \le & C \left(\|x\bar\rho^\gamma\mathcal{G}_x(\cdot,t)\|^2 + \|x\bar\rho v_t(\cdot,t)\|^2 + \|\bar\rho(r-x, xr_x-x)(\cdot,t)\|^2 \right) \\ \le & C(\theta) (1+t)^{-({\gamma-1})/{\gamma}+\theta} \mathfrak{E}(0), \end{split}\end{equation*} which implies, using \eqref{f1}, \eqref{estlem51} and \eqref{8/10-2}, that $$(1+t)^{({3\gamma-2})/({2\gamma})- {\theta}} \left\| xv_x(\cdot,t)\right\|_{L^\infty}^2 \le C(\theta) \mathfrak{E}(0).$$ This verifies \eqref{8/11-2}. \eqref{8/11-3} follows from \eqref{8/11-2}, \eqref{8/10-3} and the fact that $\|\cdot\|_{L^\infty([0,1/2])}\le C \|\cdot\|_{H^1([0,1/2])}$.
It follows from \eqref{estlem51} that \begin{align} \left\|x\bar\rho^{(\gamma-1)/2}(r_x-1)(\cdot,t)\right\| \le & \left\|x(r_x-1)(\cdot,t)\right\|^{1/\gamma} \left\|x\bar\rho^{\gamma/2}(r_x-1)(\cdot,t)\right\|^{(\gamma-1)/\gamma} \notag\\ \le & C(\theta) (1+t)^{-(\gamma-1)/\gamma +\theta/2} \mathfrak{E}(0) , \notag \end{align} which implies, using \eqref{woman}, the H\"{o}lder inequality and \eqref{lem5est}, that \begin{align} \left\|x^{{3}/{2}}\bar\rho^{(3\gamma-2)/4}(r_x-1)(\cdot, t)\right\|_{L^\infty}^2 \le C\left\|x\bar\rho^{(\gamma-1)/2}(r_x-1)(\cdot, t)\right\| \left\|\bar\rho^{(2\gamma-1)/2}r_{xx}(\cdot, t)\right\| \notag\\ + C \left\|x(r_x-1)(\cdot, t)\right\|^2 \le C(\theta) (1+t)^{-(\gamma-1)/\gamma +\theta} \mathfrak{E}(0) . \notag \end{align} This, together with \eqref{8/10-3}, gives $$ \left\|\bar\rho^{(3\gamma-2)/4}(r_x-1)(\cdot, t)\right\|_{L^\infty}^2 \le C(\theta) (1+t)^{-(\gamma-1)/\gamma +\theta} \mathfrak{E}(0) . $$ Similarly, we can use \eqref{woman.new} to get $$ \left\|\bar\rho^{(3\gamma-2)/4}(r/x-1)(\cdot, t)\right\|_{L^\infty}^2 \le C(\theta) (1+t)^{-(\gamma-1)/\gamma +\theta} \mathfrak{E}(0) . $$ This finishes the proof of \eqref{8/11-4}. $\Box$ \subsubsection{Part II: faster decay}\label{sec3.4.3} In this subsubsection, we prove part $ii)$ of Theorem \ref{mainthm} under the assumption \begin{equation}\label{finitenessofF} \mathfrak{F}_\alpha(0)<\infty, \ \ \ \ \alpha \in (0, \gamma). \end{equation} The estimates in this subsubsection are for the global strong solution of \eqref{419} as stated in Theorem \ref{mainthm1}. To obtain the faster time decay estimates of the higher-order norms, we rewrite equation \eqref{7-2} in the form \begin{equation}\label{x1}\begin{split} \mathfrak{P}(x,t) + \mu \mathcal{G}_{xt} = \frac{x^2}{r^2} \bar{\rho} v_t , \ \ {\rm where} \ \ \mathfrak{P}(x,t):=\gamma \left(\frac{x^2 \bar{\rho}}{r^2 r_x } \right)^{\gamma } \mathcal{G}_x + \left[ \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } -\left(\frac{x }{r } \right)^{4} \right] x \phi \bar\rho. \end{split}\end{equation} It should be noted that \begin{equation}\label{8.19.1} \mathfrak{P}_t =\gamma \bar{\rho}^\gamma \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt} + \mathfrak{P}_1, \ \ {\rm where} \ \ \mathfrak{P}_1 : = \gamma \bar{\rho}^\gamma \left[\left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \right]_t\mathcal{G}_{x} - \left[ \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } -\left(\frac{x }{r } \right)^{4} \right]_t x \phi \bar\rho. \end{equation} This equation is convenient for deriving the time decay estimates for $r_{xx}$ and $v_{xx}$ with weights. \begin{lem}\label{lem8.19} Let $\alpha\in (0, \gamma)$ and $\mathfrak{F}_\alpha(0)<\infty$.
For the global strong solution obtained in Theorem \ref{mainthm1}, there exist positive constants $C(\alpha)$ and $C(\theta,\alpha)$ independent of $t$ such that for any $0<\theta< \min\{2(\gamma-1)/(3\gamma), \ 2(\gamma-\alpha)/\gamma\}$, \begin{align} & \left\|\bar\rho^{(2\gamma-1-\alpha)/2}\left(r_{xx}, (r/x)_x\right)(\cdot, t) \right\|^2 \le C(\alpha) \mathfrak{F}_\alpha (0), \label{8.20-1} \\ &(1+t)^{({\gamma-1})/{ \gamma} - \theta} \left\|\bar\rho^{ ({3\gamma-2-\alpha})/4} (r_x-1, r/x-1)(\cdot,t)\right\|_{L^\infty}^2 \le C(\theta, \alpha)\mathfrak{F}_\alpha (0).\label{8/11-4.Aug21} \end{align} \end{lem} {\em Proof}. In a similar way to deriving \eqref{bt1}, we have \begin{equation}\label{bt1.8.19}\begin{split} & \int \bar\rho^{2\gamma-1-\alpha} \mathcal{G}_{x}^2 (x,t)dx + \int_0^t \int \bar\rho ^{3\gamma-1-\alpha} \mathcal{G}_x^2 dx ds \le C\int_0^t \int v_s^2dxds \\ & \qquad +C \int_0^t \int x^2\bar\rho^{\gamma+1-\alpha} \left[ \left({r}/{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx ds \le C(\alpha) \mathfrak{F}_\alpha(0), \end{split}\end{equation} which, together with \eqref{weightrxx}, gives \eqref{8.20-1}. The proof of \eqref{8/11-4.Aug21} is the same as that of \eqref{8/11-4}; we omit the details here. $\Box$ \begin{lem}\label{lem41} Let $\alpha\in [\gamma-1, \gamma)$ and $\mathfrak{F}_\alpha(0)<\infty$. For the global strong solution obtained in Theorem \ref{mainthm1}, there exist positive constants $C(\alpha,\theta)$ and $C(\alpha,\theta, a)$ independent of $t$ such that for any $0<\theta< \min\{2(\gamma-1)/(3\gamma), \ 2(\gamma-\alpha)/\gamma\}$ and $a\in (0,1)$, \begin{align} &(1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta} \left\| \bar\rho^{ {1}/{2}} v_t(\cdot, t) \right\|^2 +\int_0^t (1+s)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta} \left\|\left( v_{x}, {v}/{x} , v_{sx}, {v_s}/{x} \right)(\cdot,s)\right\|^2 ds\notag\\ & \quad \le C(\alpha, \theta) \left( {\mathfrak{E}}(0) +1 \right) {\mathfrak{F}}_\alpha(0), \label{8-21-2}\\ & (1+t)^{\min\left\{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta, \ \frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta\right\}} \left\| \left(r_{xx}, (r/x)_x, v_{xx}, (v/x)_x \right)(\cdot, t) \right\|^2_{L^2([0,a])} \notag\\ & \quad \le C(\alpha, \theta,a ) \left( {\mathfrak{E}}(0) +1 \right) {\mathfrak{F}}_\alpha(0). \label{8-21-1} \end{align} Here $\kappa=0$ when $\alpha = \gamma-1$, and $\kappa = (1/\gamma) \min\{ {\alpha-(\gamma-1)} , \ {\gamma-1} \} -\theta$ when $\alpha\in (\gamma-1, \gamma)$. \end{lem} {\em Proof}. The proof consists of four steps. {\em Step 1}. In this step, we prove \begin{align} &(1+t)^{\kappa} \int \bar\rho^{\gamma} \mathcal{G}_{x}^2(x,t)dx +\int_0^t (1+s)^{\kappa} \int \bar\rho^{2\gamma} \mathcal{G}_x^2 dx ds\le C(\theta,\alpha) \mathfrak{F}_\alpha(0), \label{8.20.1}\\ & \int_0^t (1+s)^{\kappa} \int \left(\mathcal{G}_{sx}^2 + v_x^2 +|v/x|^2 \right)dx ds\le C(\theta,\alpha) \mathfrak{F}_\alpha(0). \label{8.20.2} \end{align} When $\alpha=\gamma-1$, \eqref{bt1.8.19} implies \eqref{8.20.1}.
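Indeed, for $\alpha=\gamma-1$ one has $2\gamma-1-\alpha=\gamma$ and $3\gamma-1-\alpha=2\gamma$, while $\kappa=0$, so the quantity bounded in \eqref{bt1.8.19} is exactly the left-hand side of \eqref{8.20.1}.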
When $\alpha\in (\gamma-1, \gamma)$, we argue as in the derivation of \eqref{Aug9-3} to obtain \begin{align} & (1+t)^{\kappa} \int \bar\rho^{\gamma} \mathcal{G}_{x}^2(x,t)dx +\int_0^t (1+s)^{\kappa} \int \bar\rho^{2\gamma} \mathcal{G}_x^2 dx ds\notag\\ \le & C\int_0^t (1+s)^{\kappa} \int v_s^2dx ds +C \int_0^t (1+s)^{\kappa} \int x^2\bar{\rho}^\gamma\left[ \left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dx ds \notag\\ & + C\int_0^t (1+s)^{\kappa-1} \int \bar\rho^{\gamma} \mathcal{G}_x^2 dx ds \le C(\theta,\alpha) \mathfrak{F}_\alpha(0), \notag \end{align} due to $$ \int \bar\rho^{\gamma} \mathcal{G}_x^2 dx \le \left( \int \bar\rho^{2\gamma-1-\alpha} \mathcal{G}_x^2 dx \right)^{(2\gamma-1-\alpha)/\gamma}\left( \int \bar\rho^{3\gamma-1-\alpha} \mathcal{G}_x^2 dx \right)^{( \alpha + 1 -\gamma)/\gamma} $$ and \begin{align*} \int_0^t (1+s)^{\kappa-1} & \int \bar\rho^{\gamma} \mathcal{G}_x^2 dx ds \le C \int_0^t \int \bar\rho ^{3\gamma-1-\alpha} \mathcal{G}_x^2 dx ds \\ & +C \int_0^t (1+s)^{-1-\gamma\theta/(2\gamma-1-\alpha)} ds \sup_{s\in [0,t]} \int \bar\rho^{2\gamma-1-\alpha} \mathcal{G}_x^2(x,s) dx. \end{align*} This verifies \eqref{8.20.1}. It follows from \eqref{7-2}, \eqref{8.20.1} and \eqref{estlem51} that $$ \int_0^t (1+s)^{\kappa} \int \mathcal{G}_{sx}^2 dx ds\le C(\theta,\alpha) \mathfrak{F}_\alpha(0) ,$$ which, together with \eqref{didadida'}, gives \eqref{8.20.2}. {\em Step 2}. In this step, we prove that \begin{equation}\label{f3}\begin{split} &(1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{4\gamma-2}\mathcal{ G}_{x}^2(x,t) dx +\int_0^t (1+s)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{3\gamma-2} \mathcal{G}_{xs}^2 dx ds\\ & \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0), \ \ \ \ {\rm where } \ \ \widetilde{\mathfrak{F}}_\alpha(0):= {\mathfrak{F}}_\alpha(0) + {\mathfrak{E}}(0) {\mathfrak{F}}_\alpha(0) . \end{split}\end{equation} Multiplying \eqref{x1} by $x^2\bar\rho^{2\gamma-2} \mathfrak{P}_t$ and integrating the resulting equation gives $$ \frac{1}{2}\frac{d}{dt}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx + \mu\int x^2 \bar\rho^{2\gamma-2}\mathfrak{P}_t \mathcal{G}_{xt} dx = \int x^2 \bar\rho^{2\gamma-1}\mathfrak{P}_t \frac{x^2}{r^2} v_t dx, $$ which implies, using \eqref{8.19.1}, that \begin{equation*}\begin{split} &\frac{1}{2}\frac{d}{dt}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx +\mu \gamma\int x^2\bar\rho^{3\gamma-2} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt}^2 dx \\ = &\gamma \int x^2 \bar\rho^{3\gamma-1} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt} \frac{x^2}{r^2} v_t dx + \int x^2 \bar\rho^{2\gamma-1}\mathfrak{P}_{1} \frac{x^2}{r^2} v_t dx -\mu\int x^2 \bar\rho^{2\gamma-2}\mathfrak{P}_{1} \mathcal{G}_{xt} dx .
\end{split}\end{equation*} Thus, one has \begin{equation*}\begin{split} \frac{1}{2}\frac{d}{dt}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx +\frac{\mu \gamma}{2}\int x^2\bar\rho^{3\gamma-2} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt}^2 dx\le C \int \left( x^2 \bar\rho^{\gamma-2}\mathfrak{P}_{1}^2 +v_t^2 \right) dx \\ \le C \int \left(x^2 v_x^2 +v^2\right)\bar{\rho}^{3\gamma-2}\mathcal{G}_x^2 dx + C \int \left(x^2 v_x^2 +v^2+v_t^2\right) dx. \end{split}\end{equation*} Combining this with \eqref{Aug9-3} shows that \begin{equation}\label{ppqq}\begin{split} &\frac{1}{2}\frac{d}{dt}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx +\frac{\mu \gamma}{2}\int x^2\bar\rho^{3\gamma-2} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt}^2 dx \\ \le & C\left( \left\|x v_x\right\|_{L^\infty}^2 +\left\|v\right\|_{L^\infty}^2\right) \int \bar{\rho}^{3\gamma-2}\mathcal{G}_x^2 dx + C \int \left(x^2 v_x^2 +v^2 + v_t^2\right) dx \\ \le & C(\theta) \mathfrak{E}(0) (1+t)^{-(\gamma-1)/\gamma+\theta} \left( \left\|x v_x\right\|_{L^\infty}^2 +\left\|v\right\|_{L^\infty}^2\right) + C \int \left(x^2 v_x^2 +v^2 + v_t^2\right) dx. \end{split}\end{equation} It then follows from \eqref{f1} and \eqref{f2} that \begin{equation*}\begin{split} &\frac{1}{2}\frac{d}{dt}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx +\frac{\mu \gamma}{2}\int x^2\bar\rho^{3\gamma-2} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt}^2 dx\\ \le & C(\theta) \mathfrak{E}(0) (1+t)^{\frac{\kappa }{2}-\frac{4\gamma-3}{2\gamma}+\frac{3}{2}\theta}\int \left( \mathcal{G}_{tx}^2 +v_x^2 + \left({v}/{x}\right)^2 \right)dx \\ &+ C(\theta) \mathfrak{E}(0) (1+t)^{\frac{1}{2\gamma}-\frac{\kappa }{2}+\frac{\theta}{2}} \int \left(x^2 v_x^2 +v^2\right) dx + C \int \left(x^2 v_x^2 +v^2 + v_t^2\right) dx. \end{split}\end{equation*} Multiplying the inequality above by $(1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}$ and integrating the product gives \begin{equation}\label{8.21.1}\begin{split} &(1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2(x,t) dx +\int_0^t (1+s)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{3\gamma-2} \mathcal{G}_{xs}^2 dx ds \\ \le & C \int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2(x,0) dx+ C(\theta) \mathfrak{E}(0) \int_0^t (1+s)^{\kappa}\int \left( \mathcal{G}_{sx}^2 +v_x^2 + (v/x)^2 \right)dx ds \\ & + \int_0^t \left[C(\theta) \mathfrak{E}(0)(1+s)^{\frac{2\gamma-1}{\gamma}-{\theta}} + C (1+s)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\right]\int \left(x^2 v_x^2 +v^2 + v_s^2 \right) dxds \\ & + C \int_0^t (1+s)^{\frac{\kappa }{2}+\frac{2\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx ds.
\end{split}\end{equation} Note that \begin{align} &\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx \le C \int x^2\bar\rho^{4\gamma-2}\mathcal{G}_{x}^2 dx + C \int x^2 \bar\rho^\gamma \left[ \left(\frac{r}{x}-1\right)^2 + \left(r_x-1\right)^2 \right]dx , \notag\\ & \frac{\kappa }{2}+\frac{2\gamma-3}{2\gamma}-\frac{3}{2}\theta \le \frac{\gamma-1}{\gamma}-{\theta}, \ \ \ \ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta \le \frac{2\gamma-1}{\gamma}-{\theta} . \label{8.21.3} \end{align} Then, it follows from \eqref{8.21.1}, \eqref{8.20.2}, \eqref{estlem51} and \eqref{Aug9-3} that $$ (1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx +\int_0^t (1+s)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{3\gamma-2} \mathcal{G}_{xs}^2 dx ds \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0). $$ This, together with \eqref{estlem51}, implies \eqref{f3}. {\em Step 3}. In this step, we prove that \begin{align} & \int_0^t (1+s)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int \left( v_x^2 +(v/x)^2\right) dx ds \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) , \label{8.21.2}\\ & (1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta} \int \bar{\rho} v_t^2 (x,t) dx + \int_0^t (1+s)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta} \int \left( v_{xs}^2 + (v_s/x)^2 \right)dxds \notag\\ & \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) . \label{8/10-1Aug21} \end{align} As a consequence of \eqref{8.21.2} and \eqref{8/10-1Aug21}, we get \eqref{8-21-2}. It follows from \eqref{gjvx}, \eqref{hardyorigin} and \eqref{tlg3} that \begin{align} & \int \left( v_x^2 +(v/x)^2\right) dx \le 4 \int_0^{1/2} \mathcal{G}_{t}^2 dx + 4 \int_{1/2}^1 \mathcal{G}_{t}^2 dx \le C \int_0^{1/2} x^2 \left( \mathcal{G}_{t}^2 + \mathcal{G}_{tx}^2 \right)dx + 16 \int_{1/2}^1 x^2 \mathcal{G}_{t}^2 dx \notag\\ & \le C \int x^2 \mathcal{G}_{t}^2 dx + C \int_0^{1/2} x^2 \bar\rho^{3\gamma-2} \mathcal{G}_{tx}^2 dx \le C \int \left(x^2 v_x^2 + v^2\right) dx + C \int x^2 \bar\rho^{3\gamma-2} \mathcal{G}_{tx}^2 dx. \label{hh8.21} \end{align} This, together with \eqref{f3}, \eqref{8.21.3} and \eqref{estlem51}, gives \eqref{8.21.2}. With the aid of \eqref{8.21.2}, we can argue as in the derivation of \eqref{8/10-1} to obtain \eqref{8/10-1Aug21}. {\em Step 4}. In this step, we prove \begin{align} &(1+t)^{\min\left\{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta, \ \frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta\right\}}\int \bar\rho^{4\gamma-2}\mathcal{G}_{x}^2(x,t) dx \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) ,\label{f3.8.21} \\ &(1+t)^{\min\left\{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta, \ \frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta\right\}}\int \bar\rho^{2\gamma-2}\mathcal{G}_{x t}^2(x,t) dx \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) .\label{8.21.5} \end{align} With \eqref{f3.8.21} and \eqref{8.21.5}, we can obtain \eqref{8-21-1} by use of \eqref{weightrxx} and \eqref{weightvxx}.
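For later use we also record why the exponent comparisons in \eqref{8.21.3} hold; this is a short check under the standing assumptions that $1<\gamma<2$ and $\theta>0$, recalling that $\kappa=0$ or $\kappa=(1/\gamma)\min\{\alpha-(\gamma-1),\ \gamma-1\}-\theta\le (\gamma-1)/\gamma-\theta$. Indeed, $$ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta \le \frac{5\gamma-4}{2\gamma}-2\theta, \qquad {\rm and} \qquad \frac{5\gamma-4}{2\gamma}-2\theta \le \frac{2\gamma-1}{\gamma}-{\theta} \ \Longleftrightarrow \ \frac{\gamma-2}{2\gamma}\le \theta, $$ where the last inequality holds since $\gamma<2$; the first comparison in \eqref{8.21.3} is verified in exactly the same way.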
Let $\bar{\psi}$ be a non-increasing function defined on $[0,1]$ satisfying $$ \bar\psi=1 \ \ {\rm on } \ \ [0,1/8], \ \ \bar\psi=0 \ \ {\rm on } \ \ [1/4,1] \ \ {\rm and} \ \ |\bar\psi'|\le 32.$$ Following the derivation of \eqref{ppqq}, one can obtain \begin{equation*}\begin{split} &\frac{1}{2}\frac{d}{dt}\int \bar\psi\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx +\frac{\mu \gamma}{2}\int \bar\psi \bar\rho^{3\gamma-2} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt}^2 dx\\ \le & C(\theta) \mathfrak{E}(0)(1+t)^{-(\gamma-1)/\gamma+\theta} \left\|(v_x, v/x)\right\|_{L^\infty\left(\left[0,1/4\right]\right)}^2 + C \int \left( v_x^2 + (v/x)^2+ v_t^2\right) dx. \end{split}\end{equation*} In view of \eqref{nj2} and \eqref{nj2.new}, we see \begin{equation*}\begin{split} &\left\|v_x\right\|_{L^\infty\left(\left[0,1/4\right]\right)}^2 +\left\|v/x\right\|_{L^\infty\left(\left[0,1/4\right]\right)}^2 \\ \le & C \int \left( v_{x}^2 + \left| {v}/{x}\right|^2 \right) dx + C \left[\int \left( v_{x}^2 + \left| {v}/{x}\right|^2 \right) dx\right]^{1/2}\left[\int_0^{1/2} \left( v_{xx}^2 + \left|\left( {v}/{x}\right)_x\right|^2\right)dx \right]^{1/2} , \end{split}\end{equation*} which implies \begin{equation*}\begin{split} &\frac{1}{2}\frac{d}{dt}\int \bar\psi\bar\rho^{2\gamma-2}\mathfrak{P}^2 dx +\frac{\mu \gamma}{2}\int \bar\psi \bar\rho^{3\gamma-2} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt}^2 dx\\ \le & C(\theta) \mathfrak{E}(0)(1+t)^{\frac{5-6\gamma}{4\gamma}-\frac{\kappa}{4}+\frac{5}{4}\theta} \int_0^{1/2} \left( v_{xx}^2 +|( v/x)_x|^2 \right) dx \\ & + C(\theta)\mathfrak{E}(0) (1+t)^{\frac{3-2\gamma}{4\gamma}+\frac{\kappa}{4}+\frac{3}{4}\theta} \int \left( v_x^2 + (v/x)^2 \right) dx + C \int \left( v_x^2 + (v/x)^2+ v_t^2\right) dx. \end{split}\end{equation*} Similar to \eqref{f3}, one can use \eqref{8.21.2} and \eqref{8.21.4} to obtain \begin{equation*}\begin{split} &(1+t)^{\frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta}\int \bar\psi \bar\rho^{4\gamma-2}\mathcal{G}_{x}^2(x,t) dx +\int_0^t (1+s)^{\frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta}\int \bar\psi\bar\rho^{3\gamma-2} \mathcal{G}_{xs}^2 dx ds \\ & \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0), \end{split}\end{equation*} due to $$\frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta \le \frac{2\gamma-1}{\gamma}-{\theta} .$$ This, together with \eqref{f3}, implies \eqref{f3.8.21}. Finally, we can use \eqref{7-2}, \eqref{8/10-1Aug21}, \eqref{estlem51} and \eqref{f3.8.21} to show \eqref{8.21.5}. $\Box$ \begin{lem}\label{lem44} Let $\alpha\in [\gamma-1, \gamma)$ and $\mathfrak{F}_\alpha(0)<\infty$.
For the global strong solution obtained in Theorem \ref{mainthm1}, there exist positive constants $C(\alpha,\theta)$ and $C(\alpha,\theta, a)$ independent of $t$ such that for any $0<\theta< \min\{2(\gamma-1)/(3\gamma), \ 2(\gamma-\alpha)/\gamma\}$ and $a\in (0,1)$, \begin{align} &(1+t)^{ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta } \left\|\left( r_x-1, r/x-1 \right)(\cdot,t)\right\|_{L^2([0,a])}^2 \le C( \alpha, \theta, a)\widetilde{\mathfrak{F}}_\alpha(0), \label{Aug21.3}\\ &(1+t)^{ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta } \left\|\left( v_x, v/{x}\right)(\cdot,t)\right\|^2 \le C( \alpha, \theta)\widetilde{\mathfrak{F}}_\alpha(0), \label{Aug21.1}\\ &(1+t)^{\min\left\{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta, \ \frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta\right\}} \left\|\left(r_x-1, {r}/{x}-1, v_x, v/{x}\right)(\cdot,t)\right\|^2_{H^1([0,a])} \notag\\ & \quad \le C( \alpha, \theta, a)\widetilde{\mathfrak{F}}_\alpha(0), \label{Aug21.2}\\ &(1+t)^{\frac{8\gamma-5}{4\gamma}+\frac{\kappa}{4}-\frac{5}{4}\theta } \left\|v(\cdot,t)\right\|_{L^\infty}^2 \le C( \alpha, \theta)\widetilde{\mathfrak{F}}_\alpha(0),\label{Aug21.4}\\ & (1+t)^{\frac{1}{2} b_1 } \|xv_x(\cdot,t)\|^2_{L^\infty} + (1+t)^{\frac{1}{2}\min\{b_1, b_2\} } \left\| \left( v_x, v/x\right) (\cdot,t)\right\|_{L^\infty}^2 \le C( \alpha, \theta)\widetilde{\mathfrak{F}}_\alpha(0),\label{Aug21.5}\\ &(1+t)^{\frac{\kappa}{2}+ \frac{2\gamma-1}{2\gamma}-\frac{\theta}{2}} \left\|\bar\rho^{ \gamma/2 } (r_x-1, r/x-1)(\cdot,t)\right\|_{L^\infty}^2 \le C( \alpha, \theta)\widetilde{\mathfrak{F}}_\alpha(0).\label{Aug21.6} \end{align} Here $\widetilde{\mathfrak{F}}_\alpha(0)= {\mathfrak{F}}_\alpha(0) + {\mathfrak{E}}(0) {\mathfrak{F}}_\alpha(0)$, $\kappa=0$ when $\alpha = \gamma-1$, $\kappa = (1/\gamma) \min\{ {\alpha-(\gamma-1)} , \ {\gamma-1} \} -\theta$ when $\alpha\in (\gamma-1, \gamma)$, \begin{align} b_1=&\min\left\{ \max\left\{ \left(\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta\right)\frac{\alpha+1}{2\gamma-1+\alpha}, \ \frac{3 }{2}\kappa +\frac{2\gamma-1}{2\gamma}-\frac{ \theta}{2}\right\}, \right.\notag\\ & \left. \qquad \ \ \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta \right\} + \frac{2\gamma-1}{\gamma} -\theta ,\label{8.23.b1}\\ b_2 = & \min\left\{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta, \ \frac{\kappa }{4}+\frac{10\gamma-9}{4\gamma}-\frac{9}{4}\theta\right\} + \frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta. \label{8.21.b2} \end{align} \end{lem} {\em Proof}.
It follows from \eqref{hardyorigin} and \eqref{tlg1} that \begin{equation*}\begin{split} \int \bar\rho^{\gamma} \mathcal{G}^2 dx \le & C \int_0^{1/2} \mathcal{G}^2 dx + 4 \int_{1/2}^1 x^2 \bar\rho^{\gamma} \mathcal{G}^2 dx \le C \int_0^{1/2} x^2 \left( \mathcal{G}^2 + \mathcal{G}_x^2\right) dx + 4 \int_{1/2}^1 x^2 \bar\rho^{\gamma} \mathcal{G}^2 dx \\ \le & C \int x^2 \bar\rho^\gamma \mathcal{G}^2 dx + C \int_0^{1/2} x^2 \bar\rho^{4\gamma-2} \mathcal{G}_x^2 dx \\ \le & C \int \bar\rho^\gamma \left(|xr_x-x|^2 + |r-x|^2\right) dx +C \int x^2 \bar\rho^{4\gamma-2} \mathcal{G}_x^2 dx , \end{split}\end{equation*} which implies, using \eqref{weightrx}, \eqref{estlem51} and \eqref{f3}, that $$ (1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta} \int \bar\rho^{\gamma} \left(|r_x-1|^2 + |r/x-1|^2\right) (x,t)dx \le C(\theta, \alpha)\widetilde{\mathfrak{F}}_\alpha(0). $$ This gives \eqref{Aug21.3}. It follows from \eqref{7-2}, \eqref{estlem51} and \eqref{f3} that $$ (1+t)^{\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta}\int x^2\bar\rho^{2\gamma-2}\mathcal{G}_{tx}^2(x,t) dx \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0), $$ which, together with \eqref{hh8.21} and \eqref{estlem51}, gives \eqref{Aug21.1}. Estimate \eqref{Aug21.2} is a consequence of \eqref{Aug21.3}, \eqref{Aug21.1} and \eqref{8-21-1}. Estimate \eqref{Aug21.4} follows from \eqref{f2}, \eqref{Aug21.1} and \eqref{estlem51}. Next, we prove \eqref{Aug21.5}. It follows from \eqref{f3} and \eqref{bt1.8.19} that \begin{equation}\label{8.23.1}\begin{split} &(1+t)^{ \left(\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta\right)\frac{\alpha+1}{2\gamma-1+\alpha}}\int x^2 \bar\rho^{2\gamma} \mathcal{G}_x^2 (x,t) dx \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) , \end{split}\end{equation} due to $$ \int x^2 \bar\rho^{2\gamma} \mathcal{G}_x^2 dx\le \left(\int x^2 \bar\rho^{4\gamma-2} \mathcal{G}_x^2 dx\right)^{\frac{\alpha+1}{2\gamma-1+\alpha}}\left(\int x^2 \bar\rho^{2\gamma-1-\alpha} \mathcal{G}_x^2 dx\right)^{\frac{2\gamma-2}{2\gamma-1+\alpha}}. $$ In a similar way to deriving \eqref{f3}, we have $$ (1+t)^{\frac{3 }{2}\kappa +\frac{2\gamma-1}{2\gamma}-\frac{ \theta}{2}}\int x^2\bar\rho^{2\gamma}\mathcal{G}_{x}^2(x,t) dx +\int_0^t (1+s)^{ \frac{3 }{2}\kappa +\frac{2\gamma-1}{2\gamma}-\frac{ \theta}{2}}\int x^2\bar\rho^{ \gamma} \mathcal{G}_{xs}^2 dx ds \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0), $$ since \eqref{ppqq} can be replaced by \begin{equation*}\begin{split} &\frac{1}{2}\frac{d}{dt}\int x^2 \mathfrak{P}^2 dx +\frac{\mu \gamma}{2}\int x^2\bar\rho^{\gamma} \left(\frac{x^2 }{r^2 r_x } \right)^{\gamma } \mathcal{G}_{xt}^2 dx \\ \le & C\left( \left\|x v_x\right\|_{L^\infty}^2 +C\left\|v\right\|_{L^\infty}^2\right) \int \bar{\rho}^{\gamma}\mathcal{G}_x^2 dx + C \int \left(x^2 v_x^2 +v^2 + v_t^2\right) dx \\ \le & C(\theta) \mathfrak{E}(0) (1+t)^{-\kappa} \left( \left\|x v_x\right\|_{L^\infty}^2 +\left\|v\right\|_{L^\infty}^2\right) + C \int \left(x^2 v_x^2 +v^2 + v_t^2\right) dx.
\end{split}\end{equation*} This, together with \eqref{8.23.1}, gives \begin{equation}\label{8.23.2}\begin{split} &(1+t)^{\max\left\{ \left(\frac{\kappa }{2}+\frac{4\gamma-3}{2\gamma}-\frac{3}{2}\theta\right)\frac{\alpha+1}{2\gamma-1+\alpha}, \ \frac{3 }{2}\kappa +\frac{2\gamma-1}{2\gamma}-\frac{ \theta}{2}\right\}}\int x^2 \bar\rho^{2\gamma} \mathcal{G}_x^2 (x,t) dx \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) . \end{split}\end{equation} So, it follows from \eqref{7-2}, \eqref{8-21-2}, \eqref{8.23.2} and \eqref{estlem51} that \begin{equation}\label{8.22.3}\begin{split} &(1+t)^{b_1-\frac{2\gamma-1}{\gamma} + \theta}\int x^2 \mathcal{G}_{tx}^2(x,t) dx \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) , \end{split}\end{equation} where $b_1$ is defined by \eqref{8.23.b1}. This, together with \eqref{f1}, \eqref{estlem51} and \eqref{Aug21.1}, gives $$ (1+t)^{ b_1/2} \|xv_x(\cdot,t)\|^2_{L^\infty} \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0). $$ Moreover, it follows from \eqref{nj2}, \eqref{nj2.new}, \eqref{Aug21.1} and \eqref{Aug21.2} that \begin{equation}\label{8/21/2}\begin{split} &(1+t)^{ b_2/ 2} \|(v_x, v/x)(\cdot,t)\|^2_{L^\infty([0,1/4])} \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) , \end{split}\end{equation} where $b_2$ is defined by \eqref{8.21.b2}. Therefore, we have \eqref{Aug21.5}. In a similar way to deriving \eqref{8/21/2}, one can get \begin{equation}\label{8/21/3}\begin{split} (1+t)^{b_2 /2 } \|(r_x-1,r/x-1)(\cdot,t)\|^2_{L^\infty([0,1/4])} \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0), \end{split}\end{equation} where $b_2$ is defined in \eqref{8.21.b2}. It follows from \eqref{woman}, \eqref{estlem51}, \eqref{weightrxx} and \eqref{8.20.1} that \begin{align} \left\|x^{{3}/{2}}\bar\rho^{\gamma/2}(r_x-1)(\cdot, t)\right\|_{L^\infty}^2 \le C\left\|x\bar\rho^{\gamma /2}(r_x-1)(\cdot, t)\right\| \left\| \bar\rho^{\gamma /2}r_{xx}(\cdot, t)\right\| \notag\\ + C \left\|x\bar\rho^{\gamma /2}(r_x-1)(\cdot, t)\right\|^2 \le C(\alpha, \theta) \widetilde{\mathfrak{F}}_\alpha(0) (1+t)^{-\frac{\kappa}{2} - \frac{2\gamma-1}{2\gamma} + \frac{\theta}{2}} . \notag \end{align} Clearly, we can get the same estimate for $r/x-1$. Due to $ {\kappa} + ({2\gamma-1})/\gamma - {\theta} \le b_2 $, we then obtain \eqref{Aug21.6}. $\Box$ \begin{rmk}\label{rmk8.22} As a consequence of Lemma \ref{lem44}, we have for any $0<\theta< 2(\gamma-1)/(3\gamma)$, \begin{align} &(1+t)^{ \frac{11\gamma-10}{4\gamma} -\frac{5}{2}\theta } \left\| \left( v_{xx}, (v/x)_x \right) (\cdot,t)\right\|_{L^2([0,1/2])}^2 \le C( \theta)\widetilde{\mathfrak{F}}_{\gamma-\gamma\theta}(0) , \label{8-23-1}\\ &(1+t)^{ \frac{9\gamma-6}{4\gamma} - \frac{3}{2}\theta} \left\| \left( xv_x, v \right) (\cdot,t)\right\|_{L^\infty}^2 \le C( \theta)\widetilde{\mathfrak{F}}_{\gamma-\gamma\theta}(0),\label{8-22-1} \\ & (1+t)^{ \frac{5\gamma-4}{2\gamma} - 2\theta } \left\| x \mathcal{G}_{tx} (\cdot,t)\right\|^2 \le C( \theta)\widetilde{\mathfrak{F}}_{\gamma-\gamma\theta}(0).\label{8-22-2} \end{align} Indeed, \eqref{8-23-1} follows from \eqref{Aug21.2}, \eqref{8-22-1} from \eqref{Aug21.4} and \eqref{Aug21.5}, and \eqref{8-22-2} from \eqref{8.22.3}.
\end{rmk} \subsubsection{Part III: further regularity}\label{sec3.4.4} In this subsection, we further study the higher regularity of the strong solution obtained in Theorem \ref{mainthm1} and prove part $iii)$ and part $iv)$ of Theorem \ref{mainthm}. \begin{lem}\label{lem10} Let $\alpha\in [\gamma, \ 2\gamma-1]$ and $\mathfrak{F}_\alpha(0)<\infty$. For the global strong solution obtained in Theorem \ref{mainthm1}, there exist positive constants $C$ and $C(\theta)$ independent of $t$ such that for any $0<\theta< 2(\gamma-1)/(3\gamma)$, \begin{align} & \left\| \bar\rho^{(2\gamma-1-\alpha)/2} \left( r_{xx}, (r/x)_x \right)(\cdot, t)\right\|^2 \le C \mathfrak{F}_\alpha (0) + C(\theta) (1+t)^{(\alpha-\gamma + \theta\gamma)/(\alpha-1)}\mathfrak{E}(0), \label{8.22.1} \\ & \left\|\left( r_{xx}, \ (r/x)_x \right)(\cdot, t)\right\|^2 \le C \mathfrak{F}_{2\gamma-1} (0) + C (\theta) (1+t)^{\frac{1}{2}+\frac{\gamma}{2\gamma-2}\theta} \mathfrak{E}(0), \label{8/23/1}\\ &\left\|\left( v_{xx}, \ (v/x)_x \right)(\cdot, t)\right\|^2 \le C(\theta) (1+t)^{-\frac{7\gamma-6}{4\gamma} + 4 \theta } \mathfrak{F}_{2\gamma-1} (0) \left(1+ \mathfrak{F}_{2\gamma-1} (0) \right) \left(1+ \mathfrak{E} (0) \right) , \label{growof2nd8.22} \end{align} provided that $\mathfrak{F}_{2\gamma-1} (0)<\infty$ in \eqref{8/23/1} and \eqref{growof2nd8.22}. \end{lem} {\em Proof}. In a similar way to the derivation of \eqref{bt1}, we have \begin{equation*}\begin{split} & \int \bar\rho^{2\gamma-1-\alpha} \mathcal{G}_{x}^2(x,t)dx + \int_0^t \int \bar\rho^{3\gamma-1-\alpha}\mathcal{G}_x^2 dx ds\\ \le & C \mathfrak{F}_\alpha (0) + C\int_0^t \int x^2 \bar\rho ^{\gamma+1-\alpha} \left[ \left({r}/{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds. \end{split}\end{equation*} It follows from the H\"{o}lder inequality that \begin{equation*}\begin{split} & \int \bar\rho ^{\gamma+1-\alpha} \left[ \left(r-x\right)^2 + \left(xr_x-x\right)^2 \right] dx\\ & \le \left( \int \left[ \left(r-x\right)^2 + \left(xr_x-x\right)^2 \right] dx\right)^{({\alpha-1})/{\gamma}} \left( \int \bar\rho ^{\gamma} \left[ \left(r-x\right)^2 + \left(xr_x-x\right)^2 \right] dx\right)^{({\gamma+1-\alpha})/{\gamma}}\\ &\le (1+t)^{-({\gamma-1})/{\gamma}+\theta}\left( (1+t)^{({\gamma-1})/{\gamma}-\theta} \int \left[ \left(r-x\right)^2 + \left(xr_x-x\right)^2 \right] dx\right)^{({\alpha-1})/{\gamma}}\\ &\times \left( (1+t)^{({\gamma-1})/{\gamma}-\theta} \int \bar\rho ^{\gamma} \left[ \left(r-x\right)^2 + \left(xr_x-x\right)^2 \right] dx\right)^{({\gamma+1-\alpha})/{\gamma}}, \end{split}\end{equation*} which, together with \eqref{estlem51} and the Young inequality, implies that \begin{equation*}\begin{split} &\int_0^t \int x^2 \bar\rho ^{\gamma+1-\alpha} \left[ \left({r}/{x}-1\right)^2 + \left(r_x-1\right)^2 \right] dxds\\ &\le C\int_0^t(1+s)^{({\gamma-1})/{\gamma}-\theta}\int \bar\rho ^{\gamma} \left[ \left(r-x\right)^2 + \left(xr_x-x\right)^2 \right] dxds\\ &+C (1+t)^{1-\left( {\gamma-1} - \gamma \theta\right)/(\alpha-1) }\sup_{s\in [0,t]}\left\{(1+s)^{({\gamma-1})/{\gamma}-\theta} \int \left[ \left(r-x\right)^2 + \left(xr_x-x\right)^2 \right](x,s) dx\right\} \\ &\le C(\theta) \mathfrak{E}(0)(1+t)^{(\alpha-\gamma + \theta\gamma)/(\alpha-1) }.
\end{split}\end{equation*} Thus, \begin{equation}\label{8.22.2}\begin{split} & \int \bar\rho^{2\gamma-1-\alpha} \mathcal{G}_{x}^2(x,t)dx + \int_0^t \int \bar\rho^{3\gamma-1-\alpha}\mathcal{G}_x^2 dx ds \le C \mathfrak{F}_\alpha (0) + C(\theta) \mathfrak{E}(0)(1+t)^{\frac{\alpha-\gamma + \theta\gamma}{\alpha-1}}. \end{split}\end{equation} This, together with \eqref{weightrxx} and \eqref{gjrxx}, gives \eqref{8.22.1}. Choose $\alpha=2\gamma-1$ in \eqref{8.22.2} to give \begin{equation}\label{8/23/2}\begin{split} \int \mathcal{G}_{x}^2(x,t)dx \le C \mathfrak{F}_{2\gamma-1} (0) + C (\theta) (1+t)^{\frac{1}{2}+\frac{\gamma}{2\gamma-2}\theta} \mathfrak{E}(0). \end{split}\end{equation} This, together with \eqref{gjrxx}, gives \eqref{8/23/1}. It follows from \eqref{gjvxxa}, \eqref{8/23/2}, \eqref{8-22-1} and \eqref{8-22-2} that \begin{equation*}\begin{split} & \left\|\left( v_{xx}, \ (v/x)_x \right)(\cdot, t)\right\|_{L^2([1/2,1])}^2 \\ \le & C( \theta) \widetilde{\mathfrak{F}}_{\gamma-\gamma\theta}(0) \left\{(1+t)^{- \frac{ 5\gamma-4 }{2\gamma } + {2}\theta } + (1+t)^{- \frac{9\gamma-6}{4\gamma} + \frac{3}{2} \theta } \left( \mathfrak{F}_{2\gamma-1} (0) + (1+t)^{\frac{1}{2}+\frac{\gamma\theta}{2\gamma-2}} \mathfrak{E}(0)\right) \right\} \\ \le & C( \theta) \widetilde{\mathfrak{F}}_{\gamma-\gamma\theta}(0) \left(\mathfrak{F}_{2\gamma-1} (0) +1 \right)(1+t)^{- \frac{7\gamma-6}{4\gamma} + 4 \theta } . \end{split}\end{equation*} This, together with \eqref{8-23-1}, implies \eqref{growof2nd8.22}. $\Box$ \begin{lem}\label{thelastlemmaforii} Suppose that $\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^2<\infty $. For the global strong solution obtained in Theorem \ref{mainthm1}, there exists a positive constant $C$ independent of $t$ such that \begin{equation}\label{furthregularityofR} \|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, t)\|^2+\int_0^{\infty} \left\|(v_{ss}, x v_{ssx})(\cdot,s)\right\|^2 ds\le C \mathfrak{E}(0)+ C\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^2 , \ t\ge 0. \end{equation} \end{lem} {\em Proof.} Multiplying $\partial^2_t \eqref{nsp1} $ by $r^2v_{tt}$ and integrating the resulting equation both in $x$ and $t$, one can show \eqref{furthregularityofR}. Indeed, the derivation of \eqref{furthregularityofR} is similar to that of \eqref{eg2}, so we omit the details here. $\Box$ \section{Proof of Theorem \ref{mainthm2}}\label{sec4} Due to Theorem \ref{mainthm} and Theorem \ref{mainthm1}, the triple $(\rho, u, R(t))$ ($t\ge 0$) defined by \eqref{vacuumboundary} and \eqref{solution} gives the unique global strong solution to the free boundary problem \eqref{103}. The decay estimates $i)$ and $ii)$ in Theorem \ref{mainthm2} follow from the corresponding ones in Theorem \ref{mainthm}, by noting that $$| \rho(r(x, t) ,t)-\bar\rho(x) | \le C \bar\rho(x) \left(|r_x(x,t)-1| + |x^{-1}r(x,t)-1|\right)$$ and $$u_r(r,t)=\frac{v_x(x,t)}{r_x(x,t)} \ \ {\rm and} \ \ \frac{u(r,t)}{r}= \frac{x}{r(x,t)} \frac{v(x,t)}{x}.$$ The $W^{2, \infty}$-estimate of $R(t)$ can be proved as follows. First, it follows from \eqref{lem4est} that $$\int_0^\infty v_t^2(1, t)dt\le C\int_0^\infty\int_{\frac{1}{2}}^{1} (v_t^2+v_{xt}^2)dxdt\le C\mathfrak{E}(0).
$$ On the other hand, \eqref{furthregularityofR} implies that $$\int_0^\infty v_{tt}^2(1, t)dt\le C\int_0^\infty\int_{\frac{1}{2}}^{1} (v_{tt}^2+v_{xtt}^2)dxdt\le C \mathfrak{E}(0)+ C\|x\bar\rho^{ {1}/{2}} v_{tt}(\cdot, 0)\|^2. $$ Combining these two estimates with the fact that $$\ddot{R}^2(t)=v_t^2(1, t)\le v_t^2(1, 0)+2\left(\int_0^\infty v_t^2(1, t)dt\right)^{1/2}\left(\int_0^\infty v_{tt}^2(1, t)dt\right)^{1/2},$$ which follows from $v_t^2(1,t)=v_t^2(1,0)+2\int_0^t (v_t v_{tt})(1,s)\,ds$ and the Cauchy--Schwarz inequality, gives \eqref{accelaration} immediately. This finishes the proof of Theorem \ref{mainthm2}. $\Box$ \begin{rmk} One may prove the boundedness of $r_{tt}(x, t)$ for any fixed $x\in [0, 1]$ and $t\ge 0$ if $|v_t(x, 0)|$ is finite by an argument similar to the above. This implies that every particle moving with the fluid has bounded acceleration for $t\in (0, \infty)$ if it does so initially. \end{rmk} \centerline{Acknowledgement} This research was partially supported by the Zheng Ge Ru Foundation, Hong Kong RGC Earmarked Research Grants, a Focus Area Grant from The Chinese University of Hong Kong, a grant from the Croucher Foundation, an NSF grant, and an NSFC grant. Zeng was also supported by the Center for Mathematical Sciences and Applications at Harvard University. \begin{thebibliography}{100} \bibitem{ch} S. Chandrasekhar, An Introduction to the Study of Stellar Structures. University of Chicago Press, Chicago, 1938. \bibitem{Chengq} Chen, Gui-Qiang; Kratka, Milan: Global solutions to the Navier-Stokes equations for compressible heat-conducting flow with symmetry and free boundary. Comm. Partial Differential Equations 27 (2002), no. 5-6, 907--943. \bibitem{10} Coutand, D., Shkoller, S.: Well-posedness in smooth function spaces for the moving-boundary 1-D compressible Euler equations in physical vacuum. Commun. Pure Appl. Math. 64, 328-366 (2011) \bibitem{10'} Coutand, Daniel; Shkoller, Steve: Well-Posedness in Smooth Function Spaces for the Moving-Boundary Three-Dimensional Compressible Euler Equations in Physical Vacuum. Arch. Ration. Mech. Anal. 206 (2012), no. 2, 515-616. \bibitem{DLYY} Y. Deng, T.P. Liu, T. Yang, Z. Yao, Solutions of Euler-Poisson equations for gaseous stars. Arch. Ration. Mech. Anal. 164 (2002), no. 3, 261-285. \bibitem{DZ} B. Ducomet, A. Zlotnik: Stabilization and stability for the spherically symmetric Navier-Stokes-Poisson system, Appl. Math. Lett. 18 (2005), 1190-1198. \bibitem{duan} Q. Duan, On the dynamics of Navier-Stokes equations for a shallow water model, J. Differential Equations, 250 (2011), 2687-2714. \bibitem{KM} A. Kufner, L. Maligranda, L.-E. Persson: The Hardy inequality. About its history and some related results. Vydavatelsk\'y Servis, Plze\v{n}, 2007. \bibitem{fangzhang}D.-Y. Fang and T. Zhang, Global behavior of compressible Navier-Stokes equations with a degenerate viscosity coefficient, Arch. Rational Mech. Anal., 182 (2006), 223-253. \bibitem{fangzhang1} D.-Y. Fang and T. Zhang, Global behavior of spherically symmetric Navier-Stokes-Poisson system with degenerate viscosity coefficients, Arch. Rational Mech. Anal., 191 (2009), 195-243. \bibitem{GLX} Guo, Zhenhua; Li, Hai-Liang; Xin, Zhouping, Lagrange structure and dynamics for solutions to the spherically symmetric compressible Navier-Stokes equations. Comm. Math. Phys. 309 (2012), no. 2, 371-412. \bibitem{jangnsp} Jang, J., Local well-posedness of dynamics of viscous gaseous stars. Arch. Rational Mech. Anal. 195, 797-863 (2010). \bibitem{jang65} J. Jang. Nonlinear instability in gravitational Euler-Poisson system for $\gamma=\frac{6}{5}$. Arch.
Ration. Mech. Anal. 188 (2008), no. 2, 265--307. \bibitem{jangmas} Jang, J., Masmoudi, N.:Well-posedness for compressible Euler with physical vacuum singularity. Commun. Pure Appl. Math. 62, 1327-1385 (2009) \bibitem{jm} Jang, J., Masmoudi, N.: Well-posedness of compressible Euler equations in a physical vacuum, Comm. Pure Appl. Math. 68 (2015), 61--111. \bibitem{jangtice} J. Jang, I. Tice, Instability theory of the Navier-Stokes-Poisson equations, Anal. PDE 6 (2013), no. 5, 1121-1181. \bibitem{17'} Jang, J. Nonlinear Instability Theory of Lane-Emden stars, Comm. Pure Appl. Math. 67 (2014), 1418--1465. \bibitem{JXZ} S. Jiang, Z. Xin, P. Zhang, Global weak solutions to 1D compressible isentropic Navier-Stokes equations with density-dependent viscosity. Methods Appl. Anal. 12 (2005), no. 3, 239-251. \bibitem{lebovitz1} Lebovitz, N.R., Lifschitz, A.: Short-wavelength instabilities of Riemann ellipsoids, Philos. Trans. Roy. Soc. London Ser. A 354(1709), 927-950 (1996). \bibitem{lebovitz2} Lebovitz, N.R.: The virial tensor and its application to self-gravitating fluids. Astrophys. J. 134, (1961) 500-536. \bibitem{liebyau} Lieb, E.H., Yau, H.T.: The Chandrasekhar theory of stellar collapse as the limit of quantum mechanics. Commun. Math. Phys. 112(1), 147-174 (1987) \bibitem{linss} S.-S. Lin. Stability of gaseous stars in spherically symmetric motions. SIAM J. Math. Anal. 28 (1997), no. 3, 539-569. \bibitem{tpliudamping} Liu, T.-P. Compressible flow with damping and vacuum. Japan J. Appl. Math. 13 (1996), 25-32. \bibitem{13} Liu, T.-P.; Yang, T., Compressible flow with vacuum and physical singularity. Methods Appl. Anal. 7 (2000), 495-509. \bibitem{LiXY} T.-P. Liu, Z. Xin and T. Yang, Vacuum states of compressible flow, Discrete and Continuous Dynamical Systems, 4(1998), 1-32. \bibitem{LXY} T. Luo, Z. Xin and T. Yang, Interface behavior of compressible Navier-Stokes equations with vacuum, SIAM J. Math. Anal. 31 (6) (2000) 1175-1191. \bibitem{luosmoller1} Luo, T.; Smoller, J.: Nonlinear dynamical stability of Newtonian rotating and non-rotating white dwarfs and rotating supermassive stars. Comm. Math. Phys. 284 (2008), no. 2, 425-457. \bibitem{luosmoller2} Luo, T.; Smoller, J.: Existence and non-linear stability of rotating star solutions of the compressible Euler-Poisson equations. Arch. Ration. Mech. Anal. 191 (2009), no. 3, 447-496. \bibitem{LXZ} T. Luo, Z. Xin and H. Zeng, Well-Posedness for the Motion of Physical Vacuum of the Three-dimensional Compressible Euler Equations with or without Self-Gravitation, Arch. Ration. Mech. Anal. 213 (2014), 763-831. \bibitem{LXZ2} T. Luo, Z. Xin and H. Zeng, Nonlinear asymptotic stability of the Lane-Emden solutions for the viscous gaseous star problem with degenerate density dependent viscosities, arXiv:1507.01069. \bibitem{makino} Makino, T.: On a local existence theorem for the evolution equation of gaseous stars. Patterns and Waves. Stud. Math. Appl., Vol. 18. North-Holland, Amsterdam, 459-479, 1986 \bibitem{Okada} M. Okada, Free boundary value problems for the equation of one-dimensional motion of viscous gas, Japan J. Appl. Math., 6(1989), 161-177. \bibitem{Okada1} M. Okada and T. Makino, Free boundary problem for the equations of spherically symmetrical motion of viscous gas, Japan J. Indust. Appl. Math., 10(1993), 219-235. \bibitem{OSM} S. Matusu-Necasova, M. Okada, T. Makino: Free boundary problem for the equation of spherically symmetric motion of viscous gas III, Japan J.Indust.Appl.Math. 
14 (1997), 199-213. \bibitem{Okada3} Mari Okada, Free boundary problem for one-dimensional motions of compressible gas and vacuum, Japan J. Indust. Appl. Math. 21 (2) (2004) 109-128. \bibitem{rein}G. Rein, Non-linear stability of gaseous stars. Arch. Ration. Mech. Anal. 168 (2003), no. 2, 115-130. \bibitem{94} P. Secchi, On the evolution equations of viscous gaseous stars. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 18 (1991), no. 2, 295--318. \bibitem{95} P. Secchi, On the uniqueness of motion of viscous gaseous stars. Math. Methods Appl. Sci. 13 (1990), no. 5, 391--404. \bibitem{tokusky} S. H. Shapiro \& S. A. Teukolsky, Black Holes, White Dwarfs, and Neutron Stars, {\it WILEY-VCH}, (2004). \bibitem{96} Strohmer, G., Asymptotic estimates for a perturbation of the linearization of an equation for compressible viscous fluid flow. Studia Math. 185 (2008), no. 2, 99--125. \bibitem{weinberg} S. Weinberg, Gravitation and Cosmology, {\it John Wiley and Sons}, New York, 1972. \bibitem{ya} T. Yang, Singular behavior of vacuum states for compressible fluids. J. Comput. Appl. Math. 190 (2006), no. 1-2, 211-231. \bibitem{YYZ}Yang, Tong; Yao, Zheng-an; Zhu, Changjiang, Compressible Navier-Stokes equations with density-dependent viscosity and vacuum. Comm. Partial Differential Equations 26 (2001), no. 5-6, 965-981. \bibitem{yangzhu}Yang, Tong; Zhu, Changjiang, Compressible Navier-Stokes equations with degenerate viscosity coefficient and vacuum. Comm. Math. Phys. 230 (2002), no. 2, 329-363. \bibitem{zhu}Zhu, Changjiang; Zi, Ruizhao, Asymptotic behavior of solutions to 1D compressible Navier-Stokes equations with gravity and vacuum. Discrete Contin. Dyn. Syst. 30 (2011), no. 4, 1263-1283. \end{thebibliography} \renewcommand{\theequation}{A-\arabic{equation}} \renewcommand{\thethm}{A-\arabic{thm}} \setcounter{equation}{0} \section*{Appendix} \subsection*{Part I. Local existence of strong solutions in the functional space $\mathfrak{E}(t)$} In this part, we prove the local existence of strong solutions to problem \eqref{419} on a time interval $[0, T_*]$ for some $T_*>0$ in the function space $\{(r, v): \mathfrak{E}\in C([0, T_*])\} $ by using a finite difference method, as used in \cite{Okada, LiXY, LXY, Chengq}, where either a one-dimensional model or a three-dimensional model with spherical symmetry in a cut-off domain excluding a neighborhood of the origin is considered. Ideas used to derive the estimates in Section \ref{sec3} will be employed here to deal with the differences between the problem considered in this paper and those considered in \cite{Okada, LiXY, LXY, Chengq}. The proof works for the case when $\|r_x(x, 0)-1\|_{L^{\infty}}$ is small. It may be possible to obtain a local existence theory in our function space by only assuming that $r_x(x, 0)$ has both lower and upper positive bounds. However, we do not pursue this generality here, since it may require extra work and the main purpose of this paper is to establish the global existence of strong solutions for small data.
Recall that \eqref{419a} reads \begin{equation}\label{nsp1-9.3}\begin{split} & \bar\rho\left( \frac{x}{r}\right)^2 v_t - \left( \mathfrak{B} + 4\lambda_1 \frac{v}{r} \right)_x = \left\{ \bar\rho^{\gamma} \left[1- \left(\frac{x^2}{r^2}\frac{1}{ r_x}\right)^\gamma \right] \right\}_x + \left(\bar{\rho}^\gamma\right)_x \left( \frac{x^4}{r^4} -1 \right) , \end{split} \end{equation} where $\mathfrak{B}$ is given by \eqref{bdry1}, that is, $$\mathfrak{B}=\mu \frac{v_x}{r_x}+ \left(2\lambda_2- \frac{4}{3} \lambda_1 \right) \frac{v}{r}.$$ We use $(r^0, v^0)(x)$ to denote the initial data $(r, v)(x, 0)$ (this notation avoids possible confusion when we define the finite difference scheme). The finite difference scheme is defined as follows. Let $N$ be a positive integer, and $h={1}/{N}$. For $n=0, 1, \cdots, N$, set \begin{equation}\label{a1} x_n=nh, \ \ \bar\rho_n=\bar\rho(x_n), \ \ \bar q_n=(\bar\rho^{\gamma})_x(x_n). \end{equation} We can approximate \eqref{nsp1-9.3} by the following initial value problem for the system of ordinary differential equations for $v_1,\cdots,v_{N-1}$: { \begin{subequations}\label{a2}\begin{align} & \bar\rho_n\left(\frac{x_n}{r_n}\right)^2\frac{dv_n}{dt} + \frac{1}{h} \left[ \left( \mathfrak{B}_{n} + 4\lambda_1 \frac{v_{n-1}}{r_{n-1}} \right)-\left( \mathfrak{B}_{n+1} + 4\lambda_1 \frac{v_n}{r_n} \right) \right]= \bar q_n \left( \frac{x_n^4}{r_n^4} -1 \right) \notag \\ & + \frac{1}{h} \left\{\bar\rho_{n+1}^\gamma \left[ 1 - \left(\frac{h}{r_{n+1}-r_n}\right)^\gamma \left(\frac{x_n}{r_n}\right)^{2\gamma} \right] -\bar\rho_{n}^\gamma \left[ 1 - \left(\frac{h}{r_{n }-r_{n-1}}\right)^\gamma \left(\frac{x_{n-1}}{r_{n-1}}\right)^{2\gamma} \right] \right\} , \label{a2-1}\\ & v_n(0)=v^0(x_n), \label{a2-2} \end{align}\end{subequations}} where \begin{align} & r_n(t)=r^0(x_n)+\int_0^t v_n(s)ds ,\label{9.3-2} \\ & \mathfrak{B}_n=\mu\frac{v_n-v_{n-1}}{r_n-r_{n-1}}+ \left(2\lambda_2-\frac{4}{3}\lambda_1\right)\frac{v_{n-1}}{r_{n-1}} . \label{9.3-1} \end{align} This system is supplemented by the following conditions to match the boundary conditions $v(0, t)=0$ and $\mathfrak{B}(1, t)=0$: \begin{equation}\label{a3} v_0(t) =0, \ \ \mathfrak{B}_N(t)=\mu\frac{v_N-v_{N-1}}{r_N-r_{N-1}}+ \left(2\lambda_2-\frac{4}{3}\lambda_1\right)\frac{v_{N-1}}{r_{N-1}}=0 .\end{equation} Clearly, if $r(0, 0)=0$, it follows from $v_0(t)=0$ that \begin{equation}\label{a3.1} r_0(t)=0. \end{equation} Next, we use the condition $\mathfrak{B}_N(t)=0$ to determine $v_N$ and $r_N$ in terms of $v_{N-1}$ and $r_{N-1}$. It follows from $(d/dt) r_n = v_n$ that $$0=\mathfrak{B}_N(t) = \frac{d}{dt}\left\{\mu \ln \left(\frac{r_N-r_{N-1}}{h}\right) + \left(2\lambda_2-\frac{4}{3}\lambda_1\right) \ln \left(\frac{r_{N-1}}{x_{N-1}}\right) \right\},$$ which implies \begin{equation}\label{a3.2} r_N(t)=r_{N-1}(t) + \left[r^0(x_N)- r^0(x_{N-1})\right] \left(\frac{r^0(x_{N-1})}{r_{N-1}(t)}\right)^{\frac{2\lambda_2}{\mu}-\frac{4\lambda_1}{3\mu}}.
\end{equation} This, together with \eqref{a3}, gives \begin{equation}\label{a3.3} v_N(t)= v_{N-1}(t)- \left(\frac{2\lambda_2}{\mu}-\frac{4\lambda_1}{3\mu}\right) \left[r^0(x_N)- r^0(x_{N-1})\right] \left(\frac{r^0(x_{N-1})}{r_{N-1}(t)}\right)^{\frac{2\lambda_2}{\mu}-\frac{4\lambda_1}{3\mu}} \frac{v_{N-1}(t)}{r_{N-1}(t)} . \end{equation} With \eqref{a3}--\eqref{a3.3}, the ODE system \eqref{a2} is closed. The approximation of the functional $\mathfrak{E}$ defined in \eqref{mathmarch} is given by \begin{align} &\mathfrak{E}_N(t) = \max_{1\le n\le N}\left\{\left|\frac{r_n(t)-r_{n-1}(t)}{h}-1\right|^2+\left|\frac{v_n(t)-v_{n-1}(t)}{h}\right|^2\right\}+h\sum_{n=1}^{N-1}\bar\rho_{n}|\dot{v}_n(t)|^2 \notag\\ &\quad+h\sum_{n=1}^{N-1}\bar\rho_n^{2\gamma-1}\left\{\left|\frac{r_{n+1}(t)-2r_n(t)+r_{n-1}(t)}{h^2}\right|^2 +\left|\frac{1}{h}\left(\frac{r_{n}(t)}{x_{n }}-\frac{r_{n-1}(t)}{x_{n-1}}\right) \right|^2\right\}, \label{a6} \end{align} where, here and in what follows, $\dot g=(d/dt)g$ for a function $g=g(t)$. It should be noted that $h\sum_{n=1}^{N-1}\bar\rho_{n}|\dot{v}_n(0)|^2$ can be computed from the initial data via equation \eqref{a2-1}. From now on, we choose a positive integer $N_0$ so large that \begin{equation}\label{a4'} \mathfrak{E}_N(0)\le 2\mathfrak{E}(0), \ \ N\ge N_0.\end{equation} \begin{lem}\label{lem9.15} Let $\gamma\in (4/3, 2)$. Suppose that $\mathfrak{E}(0)< \infty$ and $r^0(0)=0$. Then there exist positive constants $N_1\ge N_0$ and $ \bar\varepsilon>0$ independent of $N$ such that problem \eqref{a2} admits a unique solution $(r_n, v_n)(t)$ on $[0, T^*]$ for some positive constant $T^*$ independent of $N$ satisfying \begin{equation}\label{a5} \mathfrak{E}_N(t)\le K\mathfrak{E}_N(0) \le 2 K\mathfrak{E}(0) , \ \ t\in [0, T^*], \ N\ge N_1 \end{equation} for some positive constant $K$ independent of $N$, provided that \begin{equation}\label{qqsn} \left\|r^0_x-1\right\|_{L^{\infty}(I)}\le \bar\varepsilon. \end{equation} Moreover, $T^*$ satisfies $$ T^*\ge \min\left\{\frac{\bar c }{ \sqrt{ 2 K \mathfrak{E}(0) }}, \ \ \frac{1}{K(1+2 K \mathfrak{E}(0))} \right\} $$ for some positive constant $\bar c$ independent of $N$. \end{lem} \begin{rmk} $\gamma\in (4/3, 2)$ is in general not necessary for the local existence; $\gamma>1$ should be sufficient. The reason we impose this condition in the lemma is to ensure the existence and uniqueness of the stationary solution, the Lane-Emden solution, and to keep consistency with the global existence theory. \end{rmk} \noindent{\em Proof of Lemma \ref{lem9.15}}. It follows from standard ODE theory that problem \eqref{a2} has a solution on some time interval. Let $T_N>0$ be the maximal existence time. It follows from $v_0(t)=0$ that \begin{equation}\label{9.7-4} \max_{1\le n \le N} \left|{v_n(t)}/{x_n} \right|^2 \le \mathfrak{E}_N(t), \ \ \ \ t\in [0, T_N); \end{equation} which, together with $\dot{r}_n =v_n$, implies that for $n=1,\cdots, N$, and $t\in [0, T_N)$, $$ \left|\frac{r_n(t)-r_{n-1}(t)}{h}-1\right|\le \left|\frac{r^0(x_n)-r^{0}(x_{n-1})}{h}-1\right| + \int_0^t \left| \frac{v_n(s)-v_{n-1}(s)}{h} \right| ds \le \left\|r^0_x-1\right\|_{L^{\infty}(I)}+ t \sup_{s\in [0,t]} \sqrt{ \mathfrak{E}_N(s)} .
$$ This implies, due to $r_0(t)=0$, that for $n=1,\cdots, N$, and $t\in [0, T_N)$, $$ \left|{r_n(t)}/{x_n}-1\right|\le \max_{1\le k\le n}\left|\frac{r_k(t)-r_{k-1}(t)}{h}-1\right|\le \left\|r^0_x-1\right\|_{L^{\infty}(I)}+ t \sup_{s\in [0,t]} \sqrt{ \mathfrak{E}_N(s)} . $$ Therefore, one can check that, for $n=1,\cdots, N$, and $t\in [0, T]$, \begin{align}\label{9.7-1} \left| \left(r_n(t)-r_{n-1}(t)\right)/h-1\right|\le 2\bar\varepsilon \ \ {\rm and} \ \ \left|{r_n(t)}/{x_n}-1\right|\le 2\bar\varepsilon, \end{align} provided \begin{equation}\label{9.29.3} T \sup_{s\in [0,T]} \sqrt{ \mathfrak{E}_N(s)} \le \bar\varepsilon, \end{equation} and \begin{equation}\label{initialforrox}\left\|r^0_x-1\right\|_{L^{\infty}(I)}\le \bar\varepsilon\end{equation} for some constant $\bar\varepsilon>0$. In particular, it holds that for $n=1,\cdots, N$, and $t\in [0, T]$, \begin{equation}\label{9.6-1} \frac{1}{2} \le \frac{r_n(t)}{x_n} \le \frac{3}{2} \ \ {\rm and} \ \ \frac{1}{2} \le \frac{r_n(t)-r_{n-1}(t)}{h} \le \frac{3}{2} , \end{equation} if \begin{equation} \bar\varepsilon\le \frac{1}{4}. \end{equation} With \eqref{9.7-4}, \eqref{9.7-1} and \eqref{9.6-1}, we will prove in {\em Steps 1-3} that for sufficiently large $N$, there exists a constant $\bar C>1$ independent of $N$ such that if $T$ satisfies \eqref{9.29.3}, then \begin{align} \label{9-29-2} \mathfrak{E}_N(t) \le \bar C \mathfrak{E}_N(0) + \bar C \int_0^t \left[ \mathfrak{E}_N(s) + \mathfrak{E}_N^2 (s) \right] ds, \ \ \ \ t\in [0, T]. \end{align} Once this statement is proved, the lemma will follow from an argument which we give in {\em Step 4}. {\em Step 1}. In this step, we prove that \begin{align} &h \sum_{n=1}^{N-1}\bar\rho_n x_{n-1}^2 \dot{v}_n^2(t) + \int_0^t h \sum_{n=1}^{N-1} \left[ x_{n}^2 \left(\frac{\dot{v}_{n+1}(s)- \dot{v}_{n}(s)}{h} \right)^2 + \dot{v}_n^2(s) \right]ds \notag\\ \le & C \mathfrak{E}_N(0) + C \int_0^t \left[ \mathfrak{E}_N(s) + \mathfrak{E}_N^2 (s) \right] ds. \label{9-7-1} \end{align} Here and in the rest of this part of the Appendix, $C$ denotes a generic positive constant independent of $t$ and $N$. It follows from \eqref{a2-1}, $\dot{r}_n=v_n$ and \eqref{9.6-1} that \begin{align} \bar\rho_n\left(\frac{x_n}{r_n}\right)^2 \ddot{v}_n + \frac{1}{h} \left[\left( \dot{\mathfrak{B}}_{n} -\dot{\mathfrak{B}}_{n+1} \right) + 4\lambda_1\left( \frac{\dot{v}_{n-1}}{r_{n-1}} - \frac{\dot{v}_n}{r_n} \right) \right] = \frac{1}{h} \left( \mathcal{P}_{n+1} - \mathcal{P}_n \right) + e_n , \label{9.7-5} \end{align} where \begin{align*} &\mathcal{P}_n = \gamma\bar\rho_n \left(\frac{h}{r_{n }-r_{n-1}}\right)^\gamma \left(\frac{x_{n-1}}{r_{n-1}}\right)^{2\gamma} \left( \frac{v_{n }-v_{n-1}}{r_{n }-r_{n-1}} + 2\frac {v_{n-1}}{r_{n-1}}\right),\\ & |e_n| \le C \left( \bar\rho_n \left|\frac{v_n}{x_n}\right| |\dot{v}_n| + \frac{1}{h} \left|\frac{v_n }{r_n }- \frac{v_{n-1} }{r_{n-1} }\right| \left|\frac{v_n }{r_n } + \frac{v_{n-1} }{r_{n-1} }\right| + \left| \frac{v_n}{x_n} \right| \right). \end{align*} Estimate \eqref{9-7-1} follows by summing over $n$ the product of \eqref{9.7-5} and $r_{n-1}^2 \dot{v}_n$. First, we analyze the second term on the left-hand side of \eqref{9.7-5}.
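Throughout Steps 1 and 2 the sums are rearranged by discrete summation by parts; since the vanishing of the boundary terms is what makes these rearrangements legitimate, we record it once (a brief check using only \eqref{a3} and \eqref{a3.1}): for any sequences $(a_n)$ and $(b_n)$, $$ \sum_{n=1}^{N-1} \left(a_{n}-a_{n+1}\right) b_n = \sum_{n=1}^{N-1} a_{n+1}\left(b_{n+1}-b_{n}\right) + a_1 b_1 - a_N b_N . $$ Applied with $a_n=\dot{\mathfrak{B}}_n$ and $b_n=r_{n-1}^2\dot{v}_n$, the boundary terms vanish because $r_0(t)=0$ by \eqref{a3.1} and $\mathfrak{B}_N(t)\equiv 0$ by \eqref{a3}, so that $\dot{\mathfrak{B}}_N=0$; this justifies the first equality displayed below.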
Notice that \begin{align*} & \sum_{n=1}^{N-1} \frac{1}{h}\left( \dot{\mathfrak{B}}_{n} -\dot{\mathfrak{B}}_{n+1} \right) r_{n-1}^2 \dot{v}_n = \sum_{n=1}^{N-1} \frac{1}{h}\dot{\mathfrak{B}}_{n+1}\left( r_{n}^2 \dot{v}_{n+1} - r_{n-1}^2 \dot{v}_{n} \right) \\ & = \sum_{n=1}^{N-1} \dot{\mathfrak{B}}_{n+1}\left( r_{n}^2 \frac{\dot{v}_{n+1} -\dot{v}_{n}}{h} + \frac{r_n^2 - r_{n-1}^2 }{h} \dot{v}_{n} \right) \ge \sum_{n=1}^{N-1} \dot{\mathfrak{B}}_{n+1} \left( x_n^2 \frac{\dot{v}_{n+1} -\dot{v}_{n}}{h} +2 x_n \dot{v}_{n} \right) \\ &\qquad - C \sum_{n=1}^{N-1} \left| \dot{\mathfrak{B}}_{n+1} \right| \left(\bar\varepsilon x_n^2 \left| \frac{\dot{v}_{n+1} -\dot{v}_{n}}{h}\right| + \left(\bar\varepsilon+ n^{-1} \right) x_n \left| \dot{v}_{n}\right| \right) \end{align*} and \begin{align*} &\left| \dot{\mathfrak{B}}_{n+1} -\left(\mu \frac{\dot{v}_{n+1}-\dot{v}_n}{h} + \left(2\lambda_2-\frac{4}{3}\lambda_1\right) \frac{\dot{v}_n}{x_n} \right) \right|\\ & \le C \bar\varepsilon \left( \left|\frac{\dot{v}_{n+1}-\dot{v}_n}{h}\right| + \left|\frac{\dot{v}_n}{x_n}\right| \right) + C \left( \left|\frac{ {v}_{n+1}- {v}_n}{h}\right|^2 + \left|\frac{ {v}_n}{x_n}\right|^2 \right). \end{align*} It then follows from the Cauchy inequality that, for any $\delta\in (0,1)$, \begin{align} & \sum_{n=1}^{N-1} \frac{1}{h}\left( \dot{\mathfrak{B}}_{n} -\dot{\mathfrak{B}}_{n+1} \right) r_{n-1}^2 \dot{v}_n \ge - C \delta^{-1} \mathfrak{E}_N \sum_{n=1}^{N-1} \left\{ x_n^2 \left( \frac{ {v}_{n+1} - {v}_{n}}{h} \right)^2 + {v}_{n}^2 \right\}\notag\\ & \quad + \sum_{n=1}^{N-1} \left\{\mu x_n^2 \left( \frac{\dot{v}_{n+1} -\dot{v}_{n}}{h} \right)^2 + \left(4\lambda_2- \frac{8}{3}\lambda_1\right) \dot{v}_{n}^2 + \left( 4\lambda_2 + \frac{4}{3}\lambda_1 \right)x_n \frac{\dot{v}_{n+1} -\dot{v}_{n}}{h}\dot{v}_{n} \right\} \notag\\ &\quad - C \sum_{n=1}^{N-1} \left(n^{-1} + \delta^{-1} n^{-2}\right) \dot{v}_{n}^2 - C ( \bar\varepsilon + 2\delta)\sum_{n=1}^{N-1} \left\{ x_n^2 \left( \frac{\dot{v}_{n+1} -\dot{v}_{n}}{h} \right)^2 + \dot{v}_{n}^2 \right\}. \label{9.29.1} \end{align} Similarly, \begin{align} &\sum_{n=1}^{N-1} \frac{1}{h}\left( \frac{\dot{v}_{n-1}}{r_{n-1}} - \frac{\dot{v}_n}{r_n} \right) r_{n-1}^2 \dot{v}_n = \sum_{n=1}^{N-1} r_{n-1} \frac{\dot{v}_{n-1} -\dot{v}_{n}}{h} \dot{v}_n + \sum_{n=1}^{N-1} \frac{r_{n-1}}{r_n} \frac{r_{n} -r_{n-1}}{h} \dot{v}_n^2 \notag\\ \ge & -\sum_{n=0}^{N-2} \left\{ r_n \frac{\dot{v}_{n+1} -\dot{v}_{n}}{h}\dot{v}_{n} + h r_n \left(\frac{\dot{v}_{n+1} -\dot{v}_{n}}{h}\right)^2 \right\} + \sum_{n=1}^{N-1} \left\{ \dot{v}_n^2 - C \left( n^{-1} + \bar\varepsilon\right)\dot{v}_n^2 \right\}.
\label{9.29.2} \end{align} So, if $\bar\varepsilon$ is small (the smallness is independent of $N$) and $N$ is large (which implies $h$ is small), it follows from \eqref{9.29.1} and \eqref{9.29.2} that \begin{align} & \sum_{n=1}^{N-1} \frac{1}{h} \left[\left( \dot{\mathfrak{B}}_{n} -\dot{\mathfrak{B}}_{n+1} \right) + 4\lambda_1\left( \frac{\dot{v}_{n-1}}{r_{n-1}} - \frac{\dot{v}_n}{r_n} \right) \right]r_{n-1}^2 \dot{v}_n \ge - C \sum_{n=1}^{N-1} \frac{1}{n} \dot{v}_{n}^2 \notag\\ & \quad + \frac{1}{2} \sigma \sum_{n=1}^{N-1}\left[ x_{n}^2 \left(\frac{\dot{v}_{n+1} - \dot{v}_{n} }{h} \right)^2 + \dot{v}_n^2 \right] - C \mathfrak{E}_N \sum_{n=1}^{N-1} \left\{ x_n^2 \left( \frac{ {v}_{n+1} - {v}_{n}}{h} \right)^2 + {v}_{n}^2 \right\} . \label{9.7-6} \end{align} Here $\sigma=\min\{2\lambda_1/3, \ \lambda_2\}$. (Indeed, the first term on the second line of \eqref{9.7-6} is obtained by considering the following two cases: $n=1,\cdots, N-2$, and $n=N-1$, separately.) The first term on the right-hand side of \eqref{9.7-6} can be bounded by \begin{align*} \sum_{n=1}^{N-1} \frac{1}{n} \dot{v}_{n}^2 \le \sum_{n=1}^{[N/2]} \dot{v}_{n}^2 + \frac{2}{N} \sum_{n=[N/2]+1}^{N-1} \dot{v}_{n}^2\le C \sum_{n=1}^{[N/2]} \bar\rho_n \dot{v}_{n}^2 + \frac{2}{N} \sum_{n=[N/2]+1}^{N-1} \dot{v}_{n}^2 \le \sum_{n=1}^{N-1} \left( C\bar\rho_n + \frac{2}{N} \right) \dot{v}_{n}^2 . \end{align*} This, together with \eqref{9.7-6}, gives that for sufficiently large $N$, \begin{align} & \sum_{n=1}^{N-1} \frac{1}{h} \left[\left( \dot{\mathfrak{B}}_{n} -\dot{\mathfrak{B}}_{n+1} \right) + 4\lambda_1\left( \frac{\dot{v}_{n-1}}{r_{n-1}} - \frac{\dot{v}_n}{r_n} \right) \right]r_{n-1}^2 \dot{v}_n \ge - C \sum_{n=1}^{N-1} \bar\rho_n \dot{v}_{n}^2 \notag\\ & \quad + \frac{1}{4} \sigma \sum_{n=1}^{N-1}\left[ x_{n}^2 \left(\frac{\dot{v}_{n+1} - \dot{v}_{n} }{h} \right)^2 + \dot{v}_n^2 \right] - C \mathfrak{E}_N \sum_{n=1}^{N-1} \left\{ x_n^2 \left( \frac{ {v}_{n+1} - {v}_{n}}{h} \right)^2 + {v}_{n}^2 \right\} .\label{9.7-7} \end{align} In a similar but much easier way, we can deal with the other terms in \eqref{9.7-5} and obtain \eqref{9-7-1}. {\em Step 2}. In this step, we prove that \begin{align} & h \sum_{n=1}^{N-1}\bar\rho_n \dot{v}_n^2(t) + \int_0^t h \left[\sum_{n=1}^{N} \left(\frac{\dot{v}_{n }(s)- \dot{v}_{n-1}(s)}{h} \right)^2 + \sum_{n=1}^{N} \frac{\dot{v}_n^2 (s)}{ x_{n}^2 } \right]ds \notag\\ \le & C \mathfrak{E}_N(0) + C \int_0^t \left[ \mathfrak{E}_N(s) + \mathfrak{E}_N^2 (s) \right] ds. \label{9-7-2} \end{align} Rewrite \eqref{9.7-5} as \begin{align} \bar\rho_n\left(\frac{x_n}{r_n}\right)^2 \ddot{v}_n + \frac{1}{h} \mu \left(\mathcal{Q}_n - \mathcal{Q}_{n+1} \right) = \frac{1}{h} \left( \mathcal{P}_{n+1} - \mathcal{P}_n \right) + \bar e_n , \label{9-7-3} \end{align} where \begin{align*} &\mathcal{Q}_n= \frac{\dot{v}_n-\dot{v}_{n-1}}{r_n-r_{n-1}}+ 2 \frac{\dot{v}_{n-1}}{r_{n-1}} -\frac{\left( {v}_n- {v}_{n-1}\right)^2}{\left(r_n-r_{n-1}\right)^2}- 2 \frac{ {v}_{n-1}^2}{r_{n-1}^2} ,\\ & |\bar e_n| \le C \left( \bar\rho_n |\dot{v}_n| \left| {v_n}/{x_n} \right| + \left| {v_n}/{x_n} \right| \right) .
\end{align*} Let $\psi$ be a non-increasing function defined on $[0,1]$ satisfying \begin{equation*}\begin{split} \psi=1 \ \ {\rm on } \ \ [0,1/4], \ \ \psi=0 \ \ {\rm on } \ \ [1/2, 1] \ \ {\rm and} \ \ |\psi'|\le 32. \end{split} \end{equation*} Set $\psi_n=\psi(x_n)$. Similarly, we consider the summation of the product of \eqref{9-7-3} and $\psi_n \dot{v}_n$. To deal with the second term on the left-hand side of \eqref{9-7-3}, one notices that \begin{align*} \sum_{n=1}^{N-1}\frac{1}{h} \left(\mathcal{Q}_n - \mathcal{Q}_{n+1} \right)\psi_n \dot{v}_n = &\sum_{n=1}^{N-1}\frac{1}{h} \mathcal{Q}_n \left(\psi_n \dot{v}_n - \psi_{n-1} \dot{v}_{n-1} \right) \\ =& \sum_{n=1}^{N-1} \mathcal{Q}_n \left(\psi_n \frac{\dot{v}_n - \dot{v}_{n-1} }{h} + \frac{\psi_n-\psi_{n-1}}{h} \dot{v}_{n-1} \right); \end{align*} and \begin{align*} &\sum_{n=1}^{N-1} \frac{\dot{v}_{n-1}}{r_{n-1}} \psi_n \frac{\dot{v}_n - \dot{v}_{n-1} }{h} = \sum_{n=1}^{N-1} \frac{1}{h} \left( \psi_n \frac{\dot{v}_n \dot{v}_{n-1} }{r_{n-1}} -\psi_{n+1} \frac{\dot{v}_n ^2}{r_n}\right) = -\sum_{n=1}^{N-1} \frac{\dot{v}_{n-1}}{r_{n-1}} \psi_n \frac{\dot{v}_n - \dot{v}_{n-1} }{h} \\ &\quad - h \sum_{n=1}^{N-1} \frac{\psi_n}{r_{n-1}} \left( \frac{ \dot{v}_{n-1} -\dot{v}_n }{ h} \right)^2 + \sum_{n=1}^{N-1} \frac{\psi_n-\psi_{n+1}}{h r_n} \dot{v}_n^2 + \sum_{n=1}^{N-1} \frac{ \psi_n }{r_{n-1}} \frac{r_n-r_{n-1}}{h r_n } \dot{v}_n^2 \\ & = -\sum_{n=1}^{N-1} \frac{\dot{v}_{n-1}}{r_{n-1}} \psi_n \frac{\dot{v}_n - \dot{v}_{n-1} }{h} - h \sum_{n=2}^{N-1} \frac{\psi_n}{r_{n-1}} \left( \frac{ \dot{v}_{n-1} -\dot{v}_n }{ h} \right)^2 + \sum_{n=1}^{N-1} \frac{\psi_n-\psi_{n+1}}{h r_n} \dot{v}_n^2 \\ &\quad + \sum_{n=2}^{N-1} \frac{ \psi_n }{r_{n-1}} \frac{r_n-r_{n-1}}{h r_n } \dot{v}_n^2, \end{align*} which implies \begin{align} 2 \sum_{n=1}^{N-1} \frac{\dot{v}_{n-1}}{r_{n-1}} \psi_n \frac{\dot{v}_n - \dot{v}_{n-1} }{h} \ge & \sum_{n=2}^{N-1} \psi_n \frac{r_n-r_{n-1}}{h } \frac{ \dot{v}_n^2 }{r_n^2 } - h \sum_{n=2}^{N-1} \frac{\psi_n}{r_{n-1}} \left( \frac{ \dot{v}_{n-1} -\dot{v}_n }{ h} \right)^2 \notag\\ &+ \sum_{n=1}^{N-1} \frac{\psi_n-\psi_{n+1}}{h r_n} \dot{v}_n^2 . \notag \end{align} Then, \eqref{9-7-2} follows from \eqref{9-7-1}, \eqref{a3}, \eqref{a3.2}, \eqref{a3.3} and simple calculations. {\em Step 3}. In this step, we prove \begin{align} \label{9-29-1} & \mathfrak{E}_N(t) + h\sum_{n=1}^{N-1}\bar\rho_n^{2\gamma-1}\left\{\left|\frac{v_{n+1}(t)-2v_n(t)+v_{n-1}(t)}{h^2}\right|^2 +\left|\frac{1}{h}\left(\frac{v_{n }(t)}{x_{n }}-\frac{v_{n-1}(t)}{x_{n-1}}\right) \right|^2\right\}\notag\\ \le & C \mathfrak{E}_N(0) + C \int_0^t \left[ \mathfrak{E}_N(s) + \mathfrak{E}_N^2 (s) \right] ds, \ \ \ \ t\in [0, T]. \end{align} To this end, we rewrite \eqref{a2-1} as \begin{align} \mu \frac{\dot{\mathcal{G}}_n-\dot{\mathcal{G}}_{n+1}}{h} = & -\bar\rho_n\left(\frac{x_n}{r_n}\right)^2\dot{v}_n + \bar\rho_{n}^\gamma \frac{\exp\{-\gamma \mathcal{G}_{n+1}\} - \exp\{-\gamma \mathcal{G}_{n}\}}{h} \notag\\ & + \frac{\bar\rho_{n+1}^\gamma - \bar\rho_{n}^\gamma}{h} \left[ 1 - \left(\frac{h}{r_{n+1}-r_n}\right)^\gamma \left(\frac{x_n}{r_n}\right)^{2\gamma} \right] + \bar q_n \left( \frac{x_n^4}{r_n^4} -1 \right) =: \ell_n.
\label{9.9.1} \end{align} Here for $n=1,\cdots,N-1$, \begin{align} & \mathcal{G}_n= \ln\left(\frac{r_n-r_{n-1}}{h}\right) + 2\ln\left(\frac{r_{n-1}}{x_{n-1}}\right), \label{9.16-1}\\ & |\ell_n | \le C \bar\rho_n \left\{ |\dot{v}_n| + \bar\rho_{n}^{\gamma-1} \left|\frac{\mathcal{G}_{n}-\mathcal{G}_{n+1}}{h}\right| +\left|\frac{r_{n+1}-r_{n}}{h}-1\right|+\left|\frac{r_{n}}{x_n}-1\right|\right\} . \notag \end{align} It is easy to derive from \eqref{9.9.1} and \eqref{9-7-2} that \begin{align} h \sum_{n=1}^{N-1} \frac{1}{\bar\rho_n} \left(\frac{\dot{\mathcal{G}}_n(t)-\dot{\mathcal{G}}_{n+1}(t)}{h}\right)^2 \le C \mathfrak{E}_N(0) + C \int_0^t \left[ \mathfrak{E}_N(s) + \mathfrak{E}_N^2 (s) \right] ds , \label{9.10.1}\end{align} and then \begin{align} h \sum_{n=1}^{N-1} \bar\rho_n^{2\gamma-1} \left(\frac{ {\mathcal{G}}_n(t)- {\mathcal{G}}_{n+1}(t)}{h}\right)^2 \le C \mathfrak{E}_N(0) + C \int_0^t \left[ \mathfrak{E}_N(s) + \mathfrak{E}_N^2 (s) \right] ds . \label{9.10.2} \end{align} Following the arguments in Section \ref{sec3.4.1} (in particular, Lemmas \ref{lemhh1} and \ref{boundsforrv}), we can use \eqref{9-7-2}, \eqref{9.10.1} and \eqref{9.10.2} to get \eqref{9-29-1}. {\em Step 4}. Set $T^*=\sup\{t: \mathfrak{E}_N(t)\le 2\bar C\mathfrak{E}_N(0)\}$, where $\bar C>1$ is the constant in \eqref{9-29-2}. Obviously $T^*>0$ since $\bar C>1$. We may assume that $T^*< {\bar\varepsilon}/ { \sqrt {2\bar C\mathfrak{E}_N(0)}}$. (Otherwise, $T^* \ge {\bar\varepsilon}/ { \sqrt {4\bar C\mathfrak{E} (0)}}$ due to $ \mathfrak{E}_N(0) \le 2 \mathfrak{E}(0)$, and the lower bound is independent of $N$, so that the lemma is proved.) By the definition of $T^*$, $\mathfrak{E}_N(t)\le 2\bar C\mathfrak{E}_N(0)$ for $t\in [0, T^*]$. Then, $$ T^*\sup_{s\in [0,T^*]} \sqrt{ \mathfrak{E}_N(s)} \le T^* \sqrt{2\bar C\mathfrak{E}_N(0)} \le \bar\varepsilon.$$ So, $T^*$ satisfies \eqref{9.29.3}, and it follows from estimate \eqref{9-29-2} and the definition of $T^*$ that \begin{align*} 2\bar C\mathfrak{E}_N(0)= \mathfrak{E}_N(T^*) \le \bar C \mathfrak{E}_N(0) + \bar C \int_0^{T^*} \left[ \mathfrak{E}_N(s) + \mathfrak{E}_N^2 (s) \right] ds \\ \le \bar C\mathfrak{E}_N(0) + \bar C {T^*} 2 \bar C\mathfrak{E}_N(0) \left(1 +2 \bar C\mathfrak{E}_N(0) \right). \end{align*} This implies $$T^*\ge \frac{1}{2 \bar C \left(1 +2 \bar C\mathfrak{E}_N(0) \right) }\ge \frac{1}{2 \bar C \left(1 + 4 \bar C\mathfrak{E}(0) \right) }. $$ The lemma is proved by taking $K=2\bar C$. $\Box$ \begin{rmk} Let $ T_*=\min\{T^*, 1\}$. For the solution $(r_n, v_n)$ obtained in Lemma \ref{lem9.15}, it holds that \begin{align} \label{9.15-1} & \mathfrak{E}_N(t) + h\sum_{n=1}^{N-1}\bar\rho_n^{2\gamma-1}\left\{\left|\frac{v_{n+1}(t)-2v_n(t)+v_{n-1}(t)}{h^2}\right|^2 +\left|\frac{1}{h}\left(\frac{v_{n }(t)}{x_{n }}-\frac{v_{n-1}(t)}{x_{n-1}}\right) \right|^2\right\}\notag\\ & \le C \mathfrak{E}(0) + C \mathfrak{E}^2(0) , \ \ t\in [0, T_*] . \end{align} Indeed, \eqref{9.15-1} follows from \eqref{9-29-1} and \eqref{a5}.
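In a bit more detail (a one-line check using \eqref{a4'}, \eqref{a5} and $T_*\le 1$), the right-hand side of \eqref{9-29-1} is bounded for $t\in[0,T_*]$ by $$ C\mathfrak{E}_N(0)+C\int_0^{t}\left[\mathfrak{E}_N(s)+\mathfrak{E}_N^2(s)\right]ds \le 2C\mathfrak{E}(0)+C T_*\left[2K\mathfrak{E}(0)+4K^2\mathfrak{E}^2(0)\right]\le C\mathfrak{E}(0)+C\mathfrak{E}^2(0), $$ where, as usual, the constant $C$ changes from line to line but remains independent of $N$ and $t$.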
\end{rmk} \begin{lem}\label{lem9.16} Suppose that the assumptions in Lemma \ref{lem9.15} are satisfied. Then the following estimates hold for any $n=1,\cdots, N$, and $t,s \in [0, T_*]$: \begin{align} &\frac{1}{2} \le \frac{r_n(t)}{x_n} \le \frac{3}{2} , \ \frac{1}{2} \le \frac{(r_n -r_{n-1})(t)}{h} \le \frac{3}{2} , \ \left|\frac{v_n(t)}{x_n} \right| + \left|\frac{(v_n-v_{n-1})(t)}{h} \right| \le \sqrt{ C\mathfrak{E}(0) }, \label{9.15.2} \\ & h\sum_{n=1}^N \left|\frac{r_n(t)-r_{n-1}(t)}{h}\right|^2+ h\sum_{n=1}^N\left|\frac{v_{n}(t)-v_{n-1}(t)}{h}\right|^2 \le C\left( \mathfrak{E}(0) +1 \right) ,\label{9.16.1}\\ & h\sum_{n=1}^{N } \left|\frac{1}{h}\left(\frac{r_{n }(t)}{x_{n }}-\frac{r_{n-1}(t)}{x_{n-1}}\right) \right|^2 + h\sum_{n=1}^{N }\left|\frac{1}{h}\left(\frac{v_{n }(t)}{x_{n }}-\frac{v_{n-1}(t)}{x_{n-1}}\right) \right|^2 \le C \sum_{i=0}^2\mathfrak{E}^i(0) ,\label{9.16.2}\\ &h\sum_{n=1}^{N-1} \left|\frac{1}{h}\left( \bar\rho_{n+1}^{\gamma-\frac{1}{2}}\frac{r_{n+1}(t)-r_n(t)}{h} - \bar\rho_{n}^{\gamma-\frac{1}{2}}\frac{r_{n }(t)-r_{n-1}(t)}{h}\right) \right|^2 \le C( \mathfrak{E}(0) +1 ), \label{9.16.3} \\ &h\sum_{n=1}^{N-1} \left|\frac{1}{h}\left( \frac{v_{n+1}(t)-v_n(t)}{r_{n+1}(t)-r_n(t)} - \frac{v_{n }(t)-v_{n-1}(t)}{r_{n}(t)-r_{n-1}(t)}\right) \right|^2 \le C \sum_{i=0}^3\mathfrak{E}^i(0) , \label{9.16.4} \\ & h \sum_{n=1}^N \left|v_n(t)-v_n(s)\right|^2 + h\sum_{n=1}^{N}\left| \frac{v_{n }(t)}{x_{n }}-\frac{v_{n }(s)}{x_{n }} \right|^2 \le C\mathfrak{E}(0)|t-s|, \label{a9} \\ & h\sum_{n=1}^{N } \left| \frac{r_{n }(t)-r_{n-1}(t)}{h} -\frac{r_{n }(s)-r_{n-1}(s)}{h}\right|^2 \le C\mathfrak{E}(0)|t-s|^2 ,\label{9.16.5}\\ & h\sum_{n=1}^{N } \left| \frac{v_{n }(t)-v_{n-1}(t)}{r_{n}(t)-r_{n-1}(t)} -\frac{v_{n }(s)-v_{n-1}(s)}{r_{n}(s)-r_{n-1}(s)} \right|^2 \le C \sum_{i=1}^2 \mathfrak{E}^i(0) |t-s| + C \mathfrak{E}^2(0)|t-s|^2 . \label{9.16.6} \end{align} Here $C$ is a constant independent of $N$. \end{lem} {\em Proof}. Clearly, \eqref{9.15.2} follows from \eqref{a5}, \eqref{9.7-4} and \eqref{9.6-1}; and \eqref{9.16.1} follows from \eqref{a5}. For \eqref{9.16.2}, it follows from \eqref{9.15.2} and \eqref{9.15-1} that \begin{align*} &h\sum_{n=1}^{N }\left|\frac{1}{h}\left(\frac{v_{n }(t)}{x_{n }}-\frac{v_{n-1}(t)}{x_{n-1}}\right) \right|^2 = h \left( \sum_{n=1}^{[N/2] } + \sum_{n=[N/2]+1}^{N }\right)\left|\frac{1}{h}\left(\frac{v_{n }(t)}{x_{n }}-\frac{v_{n-1}(t)}{x_{n-1}}\right) \right|^2 \\ \le & C h \sum_{n=1}^{[N/2] } \bar\rho_n^{2\gamma-1}\left|\frac{1}{h}\left(\frac{v_{n }(t)}{x_{n }}-\frac{v_{n-1}(t)}{x_{n-1}}\right) \right|^2 + C h \sum_{n=[N/2]+1}^{N }\left( \left|\frac{v_n(t)-v_{n-1}(t) }{h} \right|^2 + \left|\frac{v_n(t)}{x_n}\right|^2\right)\\ \le & C\mathfrak{E}(0) + C \mathfrak{E}^2(0). \end{align*} Similarly, the estimates for $r_n$ in \eqref{9.16.2} follow from \eqref{9.15.2} and \eqref{a5}. Estimate \eqref{9.16.3} follows from simple calculations and \eqref{a5}.
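The ``simple calculations'' behind \eqref{9.16.3} amount to the discrete product rule (a sketch, assuming, as holds for the Lane--Emden profile, that $\bar\rho$ is non-increasing and that the difference quotients of $\bar\rho^{\gamma-1/2}$ are uniformly bounded): $$ \frac{1}{h}\left( \bar\rho_{n+1}^{\gamma-\frac{1}{2}}\frac{r_{n+1}-r_n}{h} - \bar\rho_{n}^{\gamma-\frac{1}{2}}\frac{r_{n }-r_{n-1}}{h}\right) = \bar\rho_{n+1}^{\gamma-\frac{1}{2}}\,\frac{r_{n+1}-2r_n+r_{n-1}}{h^2} + \frac{\bar\rho_{n+1}^{\gamma-\frac{1}{2}}-\bar\rho_{n}^{\gamma-\frac{1}{2}}}{h}\,\frac{r_{n}-r_{n-1}}{h}; $$ the first term is controlled in the weighted $\ell^2$ norm by the second-difference part of $\mathfrak{E}_N$ together with \eqref{a5}, while the second is bounded pointwise by \eqref{9.15.2}.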
For \eqref{9.16.4}, we note that \begin{align*} \frac{1}{h}\left| \frac{v_{n+1}(t)-v_n(t)}{r_{n+1}(t)-r_n(t)} - \frac{v_{n }(t)-v_{n-1}(t)}{r_{n}(t)-r_{n-1}(t)} \right| =\frac{1}{h} \left|\left(\dot{\mathcal{G}}_{n+1} - 2\frac{v_{n}}{r_{n}}\right) -\left(\dot{\mathcal{G}}_{n } - 2\frac{v_{n-1}}{r_{n-1}}\right)\right|\\ \le \left|\frac{\dot{\mathcal{G}}_{n+1} -\dot{\mathcal{G}}_{n} }{h}\right|+ \frac{2}{h} \left(\frac{x_n}{r_n}\left|\frac{v_n}{x_n}-\frac{v_{n-1}}{x_{n-1}}\right| + \left|\frac{v_{n-1}}{x_{n-1}} \right| \frac{x_n}{r_n}\frac{x_{n-1}}{r_{n-1}} \left|\frac{r_n}{x_n}-\frac{r_{n-1}}{x_{n-1}}\right| \right). \end{align*} Then \eqref{9.16.4} follows from \eqref{9.10.1}, \eqref{9.15.2} and \eqref{9.16.2}. For \eqref{a9}, notice that \begin{align*} h\sum_{n=1}^{N}\left| \frac{v_{n }(t)}{x_{n }}-\frac{v_{n }(s)}{x_{n }} \right|^2 =h\sum_{n=1}^{N}\left| \int_s^t \frac{\dot{v}_{n }(\tau)}{x_{n }}d\tau \right|^2 \le h\sum_{n=1}^{N} \int_s^t \left|\frac{\dot{v}_{n }(\tau)}{x_{n }}\right|^2 d\tau |t-s| . \end{align*} Then the estimate for $v_n/x_n$ in \eqref{a9} follows from \eqref{9-7-2}. With this, the estimate for $v_n$ in \eqref{a9} holds obviously. Similarly, \eqref{9.16.5} follows from \eqref{9.15.2}; \eqref{9.16.6} from \eqref{9-7-2} and \eqref{9.15.2}. $\Box$ For $h={1}/{N}$, we define the functions $(r^h, v^h)(x, t )$ as follows: \begin{align*} r^h(x,t)=r_{n-1}(t)+\frac{r_n(t)-r_{n-1}(t)}{h}(x-x_{n-1}), \\ v^h(x,t)=v_{n-1}(t)+\frac{v_n(t)-v_{n-1}(t)}{h}(x-x_{n-1}), \end{align*} for $x_{n-1}<x<x_n$, $1\le n\le N$ and $0\le t\le T_*$. Then we have \begin{equation*} r^h_t(x,t)=v^h(x,t), \ \ r^h_x(x,t)=\frac{r_n(t)-r_{n-1}(t)}{h} \ \ {\rm and} \ \ v^h_x(x,t)=\frac{v_n(t)-v_{n-1}(t)}{h}. \end{equation*} \begin{prop}\label{prop9.16} If the assumptions in Lemma \ref{lem9.15} are satisfied, then there exist subsequences of $\{r^h\}$, $\{v^h\}$, $\{r^h/x\}$, $\{v^h/x\}$, $\{\bar\rho^{\gamma-1/2} r^h_x\}$ and $\{v^h_x /r^h_x\}$, still labeled by $\{r^h\}$, $\{v^h\}$, $\{r^h/x\}$, $\{v^h/x\}$, $\{\bar\rho^{\gamma-1/2} r^h_x\}$ and $\{v^h_x /r^h_x\}$ for convenience, such that $\{r^h\}$, $\{v^h\}$, $\{r^h/x\}$, $\{v^h/x\}$, $\{\bar\rho^{\gamma-1/2} r^h_x\}$ and $\{v^h_x /r^h_x\}$ converge boundedly and almost everywhere in $[0,1]\times[0,T_*]$ as $h\to 0$. \end{prop} {\em Proof}. We use similar arguments as in \cite{Okada, LiXY}. First, we consider $\{r^h\}$ and $\{v^h\}$. It follows from \eqref{9.15.2} and \eqref{9.16.1} that the functions of the families $\{r^h\}$ and $\{v^h\}$, as functions of $x$, have uniformly bounded total variations with respect to $h$ for each fixed time $t\in [0, T_*]$. Let $t=t_k$ ($k=1, 2, \cdots$) be a countable set which is everywhere dense in $[0, T_*]$. By Helly's theorem and a diagonal process, from the family of functions $\{v^h\}$, we can select a subsequence, still labeled as $\{v^h\}$ for convenience, converging boundedly and almost everywhere in $x\in I$ on the dense set $\{t_k; k=1, 2, \cdots\}$ in $[0, T_*]$ as $h\to 0$. Consequently, by Lebesgue's theorem, the subsequence $\{v^h\}$ converges in $L^2$-norm on $\{t_k; k=1, 2, \cdots\}$. Next, by \eqref{a9}, the continuity in time of the $L^2$-norm of $v^h$, it is standard to show that the subsequence $\{v^h\}$ converges in $L^2(I)$ uniformly in $t\in [0, T_*]$, as $h\to 0$.
So, we can select further a subsequence, again still labeled as $\{v^h\}$ for convenience, which converges almost everywhere in $(x, t)\in I\times [0, T_*]$. Denote the limiting function of $\{v^h\}$ by $v$. Since $r^h_t(x, t)=v^h(x, t)$, we know that the corresponding subsequence of $\{r^h\}$ (still labeled as $\{r^h\}$ for convenience) converges almost everywhere to $r(x, t):=r_0(x)+\int_0^t v(x, s)ds$ in $I\times [0, T_*]$. Similarly, one can derive from estimates \eqref{9.15.2}, \eqref{9.16.2} and \eqref{a9} that there exist certain subsequences of $\{r^h/x\}$ and $\{v^h/x\}$, still labeled as $\{r^h/x\}$ and $\{v^h/x\}$ for convenience, which converge boundedly and almost everywhere in $(x, t)\in I\times [0, T_*]$ to the functions $r(x,t)/x$ and $v(x,t)/x$, respectively. The estimates \eqref{9.15.2}, \eqref{9.16.3}, \eqref{9.16.4}, \eqref{9.16.5} and \eqref{9.16.6} guarantee that one can select certain subsequences of $\{\bar\rho^{\gamma-1/2} r^h_x\}$ and $\{v^h_x /r^h_x\}$ (still labeled as $\{\bar\rho^{\gamma-1/2} r^h_x\}$ and $\{v^h_x /r^h_x\}$ for convenience) such that they converge boundedly and almost everywhere in $(x, t)\in I\times [0, T_*]$ to the functions $\bar\rho^{\gamma-1/2}r_x(x,t)$ and $v_x(x,t)/r_x(x,t)$, respectively. $\Box$ Due to the above convergence and uniform estimates on the approximating sequence, one may first verify that the limiting function $(r, v)$ is a weak solution to problem \eqref{419} in the sense that \begin{equation*}\begin{split} &\int_0^{T_*}\int_{I} \left\{ \bar\rho\left( \frac{x}{r}\right)^2 v\psi_t + \left(\frac{x^2}{r^2}\frac{\bar\rho}{ r_x}\right)^\gamma \psi_x - \frac{x^2}{r^4} \bar\rho \int_0^x 4\pi y^2\bar\rho(y) dy \right\} (x, t) dxdt \\ & =\left.\int_{I} \left(\bar\rho\left( \frac{x}{r}\right)^2 v\psi \right) (x, t)dx\right|_{t=0}^{T_*}+\int_0^{T_*}\int_{I} \left\{ \mu\left(\frac{v_x}{r_x}+2\frac{v}{r}\right) \psi_x + 2 \bar\rho\left( \frac{x}{r}\right)^3 \frac{v}{x} v\psi \right\}(x, t)dxdt \end{split}\end{equation*} for any test function $\psi\in C^1([0, T_*]\times I)$ satisfying $\psi(\cdot, t)\in C_c(I)$. Then one can use the standard regularity argument (cf. \cite{LXY}), which is ensured by the above uniform estimates, to show that it is also a strong solution as in Definition \ref{definitionss} satisfying $$\mathfrak{E}(t)\in C([0, T_*]) \ \ {\rm and} \ \ \int_{I} \frac{1}{\bar\rho(x)} \mathfrak{B}_x^2(x, t) dx\le C \mathfrak{E}(0), \ \ t\in [0, T_*].$$ This in particular implies that $\mathfrak{B}(\cdot, t)\in H^1(I)$ for $t\in [0, T_*]$, so that one can define the trace of $\mathfrak{B}$ at $x=1$. The uniqueness of the strong solution follows from the weighted energy estimate argument in \cite{LXZ} (Section 11). \subsection*{Part II. Linearized analysis} We use the simple case that $\lambda_2=(2/3)\lambda_1=1/3$ to illustrate the main ideas of the linear analysis. As in \cite{17'}, we set $w=r/x-1$. Then \begin{equation}\label{9.19.3} r=x+x w, \ \ r_x=1+ w +x w_x, \ \ v= x w_t, \ \ v_x= w_t+ xw_{tx}, \ \ v_t=x w_{tt}.
\end{equation} The linearized problem for \eqref{419} around $w=0$ (the equilibrium of \eqref{419}) reads \begin{subequations}\label{linearization}\begin{align} &x\bar\rho w_{tt} - (3\gamma-4)(\bar\rho^{\gamma})_x w -\gamma x^{-3} \left(\bar\rho^{\gamma} x^4 w_x\right)_x = (xw_{xt}+3w_t)_x,\label{7-2-3}\\ & \mathfrak{B}_L:=xw_{xt}+w_t=0 \ \ {\rm at} \ \ x=1. \end{align}\end{subequations} The condition $v(0, t)=0$ has been incorporated in the transformation from $r$ to $w$ since $v=xw_t$. (Indeed, \eqref{linearization} follows from the facts that the principal parts of $(r/x)^{-2\gamma}$, $r_x^{-\gamma}$, $(r/x)^{-4}$, $v_x/r_x$ and $v/r$ are $1 - 2\gamma w$, $1 -\gamma (w+xw_x)$, $1-4 w$, $xw_{xt}+w_t$ and $w_t$, respectively. The derivation of the left-hand side of \eqref{7-2-3} can also be found in \cite{17'}.) Naturally, we may rewrite \eqref{7-2-3} as \begin{equation}\label{9.19.1} x^4\bar\rho w_{tt} + (3\gamma-4) \phi x^4 \bar\rho w -\gamma \left( x^4 \bar\rho^{\gamma}w_x\right)_x = x^3 \left(\mathfrak{B}_L\right)_x +2 x^3 w_{tx}, \end{equation} where $\phi$ is defined in \eqref{rhox} and is bounded from above and below by positive constants, so that it is easy to see that $\gamma>4/3$ is crucial to the linearized stability. {\em Lower-order estimates}. The basic multipliers for \eqref{9.19.1} are $w_t$ and $w$, which yield the boundedness of \begin{equation}\label{9.19.4} \left\|(r-x, xr_x-x)(\cdot,t) \right\|^2 + (1+t )\left\| \left(v, xv_x \right)(\cdot, t) \right\|^2 + (1+t) \left\| x \bar\rho^{1/2} v_t(\cdot, t) \right\|^2. \end{equation} Indeed, we have the following basic estimates for the solution of \eqref{linearization} if $\gamma>4/3$: \begin{equation*}\begin{split} &\frac{1}{2}\frac{d}{dt} \int x^4 \left( \bar\rho w_t^2+ (3\gamma-4)\phi \bar\rho w^2 + \gamma \bar\rho^\gamma w_x^2 \right)dx + \int x^2\left[ (w_t+xw_{tx})^2+ 2 w_t^2\right]dx =0, \\ & \frac{1}{2}\frac{d}{dt} \int \left[x^2(w+xw_{x})^2+ 2 x^2 w^2 + x^4 \bar\rho w w_t \right]dx + \int \left[ ( 3\gamma-4)\phi x^4 \bar\rho w^2 + \gamma x^4 \bar\rho^\gamma w_x^2 \right]dx \\ &\qquad =\int x^4 \bar\rho w_t^2 dx, \end{split}\end{equation*} which implies \begin{equation}\label{basicestimate}\begin{split} & \int \left[x^2(w+xw_{x})^2+ x^2 w^2 \right](x,t)dx + (1+t) \int \left(x^4 \bar\rho w_t^2+ x^4 \bar\rho w^2 + x^4 \bar\rho^\gamma w_x^2 \right)(x,t)dx\\ & +\int_0^t \int \left( x^4 \bar\rho w^2 + x^4 \bar\rho^\gamma w_x^2 \right)dx ds + \int_0^t (1+s) \int \left[x^2 (w_s+xw_{sx})^2+ x^2 w_s^2\right] dx ds \\ \le & C \int \left[x^4 w_{x} ^2+ x^2 w^2 + x^4 \bar\rho w_t^2 \right](x,0)dx .
\end{split}\end{equation} Since the coefficients for the equation and boundary conditions in \eqref{linearization} are independent of $t$, differentiating \eqref{linearization} with respect to $t$, one obtains the same estimates for the corresponding $t$-derivatives of each quantity appearing in \eqref{basicestimate}, in particular, \begin{align} & (1+t) \int x^4 \left( \bar\rho w_{tt}^2+ \bar\rho w_t^2 + \bar\rho^\gamma w_{tx}^2 \right)(x,t)dx + \int_0^t (1+s) \int x^2\left[ (w_{ss}+xw_{ssx})^2+ w_{ss}^2\right] dx ds \notag\\ & \le C \int \left[x^4 w_{x}^2+ x^2 w^2 + x^4 \bar\rho w_t^2 +x^4 \bar\rho w_{tt}^2 + x^4 \bar\rho^\gamma w_{tx}^2 \right](x,0)dx . \label{9.19.2} \end{align} Integrating the identity \begin{align*}\frac{d}{dt}\left[ (1+t) \int x^2 \left[(w_t+xw_{tx})^2+ w_t^2\right] (x, t)dx \right] =\int x^2 \left[(w_t+xw_{tx})^2+ w_t^2\right] (x, t)dx \\ + 2 (1+t) \int x^2 \left[(w_t+xw_{tx})(w_{tt}+xw_{ttx}) + w_t w_{tt}\right] (x, t)dx \end{align*} with respect to $t$ and using \eqref{basicestimate} and \eqref{9.19.2}, we get \begin{equation}\label{basicestimate1'}\begin{split} & (1+t) \int \left[x^2 (w_t+xw_{tx})^2+ x^2 w_t^2\right] (x, t)dx \\ \le & C \int \left[x^4 w_{x}^2+ x^2 w^2 + x^2 w_t^2 +x^4 \bar\rho w_{tt}^2 + x^4 w_{tx}^2 \right](x,0)dx.\end{split}\end{equation} Therefore, the boundedness of \eqref{9.19.4} is a consequence of \eqref{basicestimate}--\eqref{basicestimate1'} and \eqref{9.19.3}. {\em Higher-order estimates}. We may rewrite \eqref{7-2-3} as \begin{equation}\label{linearization2}\begin{split} & G_{xt}+\gamma\bar\rho^{\gamma}G_x =x\bar\rho w_{tt} + \gamma \phi x^2 \bar\rho w_x+(3\gamma-4)\phi x \bar\rho w, \ \ {\rm where} \ \ G=xw_x+3w. \end{split}\end{equation} Based on \eqref{basicestimate}, \eqref{9.19.2} and \eqref{hardyorigin}, we may apply a multiplier $\bar\rho^{a}G_x$ with $a\ge 2\gamma-2$ to \eqref{linearization2} to get \begin{equation}\label{basicestimate3}\begin{split} &\int \bar\rho^{a}G_x^2(x, t)dx+\int_0^t \int \bar\rho^{\gamma+a}G_{x}^2 dxds \\ \le & C \int \left[ \bar\rho^{a}G_x^2 + x^4 w_{x}^2+ x^2 w^2 + x^4 \bar\rho w_t^2 +x^4 \bar\rho w_{tt}^2 + x^4 \bar\rho^\gamma w_{tx}^2 \right](x,0)dx.\end{split}\end{equation} Due to equation \eqref{linearization2}, we have \begin{equation*} \int x^4 \bar\rho w_{tt}^2 (x,0)dx \le C \int x^2 \bar\rho^{-1} \left( |G_{xt}|^2 + \left|\bar\rho^{\gamma}G_x\right|^2 +| x^2 \bar\rho w_x |^2+ | x \bar\rho w |^2 \right) (x, 0) dx, \end{equation*} which implies that the largest $a$ in \eqref{basicestimate3} could be $2\gamma-1$. So, we choose $a=2\gamma-1$.
Moreover, it follows from \eqref{basicestimate3} and equation \eqref{linearization2} that $$ \int_0^t \int \bar\rho^{ 3\gamma -1 }G_{xt}^2 dxds \le C \int \left[ \bar\rho^{2\gamma-1}G_x^2 + x^4 w_{x}^2+ x^2 w^2 + x^4 \bar\rho w_t^2 +x^4 \bar\rho w_{tt}^2 + x^4 \bar\rho^\gamma w_{tx}^2 \right](x,0)dx .$$ This may suggest \begin{equation}\label{9.19-1}\begin{split} &\int \bar\rho^{2\gamma-1}(x^2w^2_{xx}+w_x^2)(x, t)dx+\int_0^{t} \int \bar\rho^{3 \gamma -1 }(x^2w_{sxx}^2+w_{sx}^2) dxds \\ \le & C \int \left[ \bar\rho^{2\gamma-1}G_x^2 + x^4 w_{x}^2+ x^2 w^2 + x^4 \bar\rho w_t^2 +x^4 \bar\rho w_{tt}^2 + x^4 \bar\rho^\gamma w_{tx}^2 \right](x,0)dx .\end{split}\end{equation} (Indeed, it is one of the main ideas to justify this for the corresponding nonlinear equation.) The higher-order estimate \eqref{9.19-1} can improve the regularity near the origin as follows. It follows from \eqref{9.19-1}, \eqref{basicestimate} and \eqref{hardyorigin} that \begin{equation}\label{9.19-2} \int_0^{t} \int (w_s^2 + w_{xs}^2) dx ds \le C \int \left[ \bar\rho^{2\gamma-1}G_x^2 + x^2 w^2 + x^4 \left( w_{x}^2 + \bar\rho w_t^2 + \bar\rho w_{tt}^2 + \bar\rho^\gamma w_{tx}^2 \right)\right](x,0)dx. \end{equation} Let $\psi$ be a non-increasing cut-off function defined on $[0, 1]$ satisfying $\psi=1$ on $[0,1/4]$, $\psi=0$ on $[{1}/{2}, 1]$, and $|\psi'|\le 32$. With \eqref{9.19-2} and \eqref{9.19.2}, we can integrate the product of $\eqref{7-2-3}_t$ and $\psi x w_{tt}$ with respect to $x$ and $t$ to get $$\int_0^{1/4} x^2 w_{tt}^2 (x, t)dx \le C \int \left[ \bar\rho^{2\gamma-1}G_x^2 + x^4 w_{x}^2+ x^2 w^2 + x^4 \bar\rho w_t^2 +x^2 \bar\rho w_{tt}^2 + x^4 \bar\rho^\gamma w_{tx}^2 \right](x,0)dx,$$ which, together with \eqref{9.19.2}, gives \begin{equation}\label{9.19} \int x^2 \bar\rho w_{tt}^2 (x, t)dx \le C \int \left[ \bar\rho^{2\gamma-1}G_x^2 + x^4 w_{x}^2+ x^2 w^2 + x^4 \bar\rho w_t^2 +x^2 \bar\rho w_{tt}^2 + x^4 \bar\rho^\gamma w_{tx}^2 \right](x,0)dx. \end{equation} As a consequence of \eqref{9.19.3}, \eqref{9.19-1} and \eqref{9.19}, we can obtain the boundedness of \begin{equation}\label{9.19.4} \left\| \bar\rho^{1/2} v_t(\cdot, t) \right\|^2 + \left\| \bar\rho^{\gamma-1/2}\left(r_{xx}, \ (r/x)_x\right)(\cdot, t) \right\|^2 . \end{equation} {\em Conclusion}: A natural higher-order functional for the study of the linear problem \eqref{linearization} is: $$E(t)=\left\|(r-x, xr_x-x)(\cdot,t) \right\|^2 + \left\| \left(v, xv_x \right)(\cdot, t) \right\|^2 + \left\| \bar\rho^{1/2} v_t(\cdot, t) \right\|^2 + \left\| \bar\rho^{\gamma-1/2}\left(r_{xx}, \ (r/x)_x\right)(\cdot, t) \right\|^2.$$ \begin{small} \noindent T. Luo \\ {\it Department of Mathematics and Statistics, Georgetown University, Washington, DC, 20057, USA} \\ {E-mail: [email protected]} \\ Z. Xin\\ {\it Institute of Mathematical Sciences, The Chinese University of Hong Kong, Hong Kong}\\ {E-mail: [email protected]} \\ H.
Zeng \\ {\it Yau Mathematical Sciences Center, Tsinghua University, Beijing, 100084, China$^1$}\\ {E-mail: [email protected]} \\ {\it Center of Mathematical Sciences and Applications, Harvard University, Cambridge, MA 02138, USA} \\ \end{small} \end{document}
\begin{document} \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1} \if00 { \title{\bf Metropolis-Hastings within\\ Partially Collapsed Gibbs Samplers} \author{David A. van Dyk and Xiyun Jiao\thanks{Professor David A. van Dyk holds a Chair in Statistics in the Department of Mathematics at Imperial College London, SW7 2AZ ([email protected]); Xiyun Jiao is a postgraduate student in Statistics at Imperial College.}} \date{} \maketitle } \fi \if10 { \begin{center} {\LARGE\bf Metropolis-Hastings Algorithm within \\ Partially Collapsed Gibbs Samplers} \end{center} } \fi \vspace*{-0.02in} \begin{abstract} The Partially Collapsed Gibbs (PCG) sampler offers a new strategy for improving the convergence of a Gibbs sampler. PCG achieves faster convergence by reducing the conditioning in some of the draws of its parent Gibbs sampler. Although this can significantly improve convergence, care must be taken to ensure that the stationary distribution is preserved. The conditional distributions sampled in a PCG sampler may be incompatible and permuting their order may upset the stationary distribution of the chain. Extra care must be taken when Metropolis-Hastings (MH) updates are used in some or all of the updates. Reducing the conditioning in an MH within Gibbs sampler can change the stationary distribution, even when the PCG sampler would work perfectly if MH were not used. In fact, a number of samplers of this sort that have been advocated in the literature do not actually have the target stationary distributions. In this article, we illustrate the challenges that may arise when using MH within a PCG sampler and develop a general strategy for using such updates while maintaining the desired stationary distribution. Theoretical arguments provide guidance when choosing between different MH within PCG sampling schemes. Finally we illustrate the MH within PCG sampler and its computational advantage using several examples from our applied work. \end{abstract} \noindent {\it Key Words:} Astrostatistics; Blocking; Factor Analysis; Gibbs sampler; Incompatible Gibbs sampler; Metropolis-Hastings; Metropolis within Gibbs; Spectral Analysis. \spacingset{1.45} \section{Introduction} \label{sec:intro} The popularity of the Gibbs sampler stems from its simplicity and power to effectively generate samples from a high-dimensional probability distribution. It can sometimes, however, be very slow to converge, especially when it is used to fit highly structured or complex models. The Partially Collapsed Gibbs (PCG) sampler offers a strategy for improving the convergence characteristics of a Gibbs sampler \citep{vand:park:08,park:vand:09,vand:park:11}. A PCG sampler achieves faster convergence by reducing the conditioning in some or all of the component draws of its parent Gibbs sampler. That is, one or more of the complete conditional distributions is replaced by the corresponding complete conditional distribution of a multivariate marginal distribution of the target. For example, we might consider sampling $p({\psi}_{1}|{\psi}_{2})$ rather than $p({\psi}_{1}|{\psi}_{2},{\psi}_{3})$, where $p({\psi}_{1}|{\psi}_{2})$ is a conditional distribution of the marginal distribution, $p({\psi}_{1},{\psi}_{2})$, of the target $p({\psi}_{1},{\psi}_{2},{\psi}_{3})$. 
This strategy has already been proven useful in improving the convergence properties of numerous samplers \citep[e.g.,][etc.]{ber:gay:13,berr:cald:12,car:teh:14,dobi:tour:10,hans:etal:12,hu:gram:lian:12,hu:lian:13,kail:etal:10,kail:etal:11, lin:tour:10,lin:sch:13,park:etal:08,park:vand:09,park:11,park:jeonl:lee:12,park:kraf:sanc:12, zhao:lian:13}. Although the PCG sampler can be very efficient, it must be implemented with care to make sure that the stationary distribution of the resulting sampler is indeed the target. Unlike the ordinary Gibbs sampler, the conditional distributions sampled in a PCG sampler may be incompatible, meaning there is no joint distribution of which they are simultaneously the conditional distributions. In this case, permuting the order of the updates can change the stationary distribution of the chain. As with an ordinary Gibbs sampler, we sometimes find that one or more of the conditional draws of a PCG sampler is not available in closed form and we may consider implementing such draws with the help of a Metropolis-Hastings (MH) sampler. Reducing the conditioning in one draw of an MH within Gibbs sampler, however, may alter the stationary distribution of the chain. This can happen even when the PCG sampler would work perfectly well if all of the conditional updates were available without resorting to MH updates. Examples arise even in a two-step MH within PCG sampler. \citet{wood:etal:12}, for example, points out this problem in certain samplers described in the literature for regression with functional predictors. Although they do not use the framework of PCG, these samplers are simple special cases of improper MH within PCG samplers. They first analyze the functional predictors in isolation of the regression and then use MH to update the regression parameters conditional on parameters describing the functional predictors. The first step effectively samples the functional parameters marginally and the second uses MH for sampling from the complete conditional of the regression parameters. In this article we pay special attention to this situation because it is both conceptually simple and important in practice. In Section~\ref{sec:mhrs} we propose two simple strategies that maintain the target distribution and in Section~\ref{sec:the} we compare the performance of the two strategies theoretically. In this article, we illustrate difficulties that may arise when using MH updates within a PCG sampler and develop a general strategy for using such updates while maintaining the target stationary distribution. We begin in Section~\ref{sec:examp} with two motivating examples that are chosen to review the subtleties of the PCG sampler, illustrate the complications that arise when MH is introduced into PCG, and set the stage for the methodological and theoretical contributions of this article. Section~\ref{sec:examp} ends by reviewing the method of \citet{vand:park:08} for establishing the stationary distribution of a PCG sampler. The MH within PCG sampler is introduced in Section~\ref{sec:mhpcg} along with methods for ensuring that its stationary distribution is the target distribution and several strategies for implementing the sampler while maintaining this target. Theoretical arguments are presented in Section~\ref{sec:the} that aim to guide the choice between different implementations of the MH within PCG sampler. 
The proposed methods and theoretical results are illustrated in Section~\ref{sec:exa} in the context of several examples, including factor analysis and two examples from high-energy astrophysics. The factor analysis example contrasts the step-ordering constraints of MH within PCG and of the related ECME algorithm \citep{liu:rubi:94}. Final discussion appears in Section~\ref{sec:disc}. \section{Background and Motivating Examples} \label{sec:examp} \subsection{Notation} \label{sec:note} We aim to sample from the target distribution, $p(\psi)$, by constructing a Markov chain \{${\psi}^{(t)}, t=1,2,\dots$\} with the stationary distribution $\pi(\psi)$, where $\psi$ is a multivariate random variable. That is, we aim to construct a Markov chain such that $\pi(\psi)=p(\psi)$. We refer to a sampler as {\it proper} if it has a stationary distribution and that distribution coincides with the target, i.e., $\pi(\psi)=p(\psi)$; otherwise we call the sampler {\it improper}. Typically $p(\psi)$ is the posterior distribution in a Bayesian analysis, but this is not necessary. In data-driven examples, we use standard Bayesian notation. To facilitate discussion of the relevant samplers, we divide $\psi$ into $J$ possibly multivariate {non-overlapping} subcomponents, i.e., $\psi=({\psi}_{1},\dots,{\psi}_{J})$, and define $\mathscr{J}=\{1,2,\dots,J\}$. {The methods that we consider are Gibbs-type samplers that rely on the conditional distributions of either $p(\psi)$ or its multivariate marginal distributions. When conditional distributions cannot be sampled directly, we may use MH. For example, suppose we wish to sample the conditional distribution $p(\psi_{j_1}|\psi_{j_2})$ of the marginal distribution $p(\psi_{j_1},\psi_{j_2})$, but cannot do so directly. In this case, we specify a jumping rule (i.e., a proposal distribution), denoted by $\mathcal{J}_{j_{1}|j_{2}}({\psi}_{j_{1}}|{\psi}_{j_{1}}^\prime,{\psi}_{j_{2}}^\prime,{\psi}_{j_{3}}^\prime)$, where the subscript specifies the target conditional distribution and we use primes to indicate the current value of the subcomponents of $\psi$; notice that the jumping rule may depend on subcomponents other than $\psi_{j_1}^\prime$ and $\psi_{j_2}^\prime$, namely, $\psi_{j_3}^\prime$. In the MH update, we sample $\psi_{j_1}^{\rm prop}\sim\mathcal{J}_{j_{1}|j_{2}}({\psi}_{j_{1}}|{\psi}_{j_{1}}^\prime,{\psi}_{j_{2}}^\prime,{\psi}_{j_{3}}^\prime)$ and set $\psi_{j_1}=\psi_{j_1}^{\rm prop}$ with probability $r=\mbox{min}\left\{1, \ \displaystyle{\frac{p(\psi_{j_1}^{\rm prop} | \psi_{j_2}^\prime)\mathcal{J}_{j_{1}|j_{2}}({\psi}_{j_1}^\prime|{\psi}_{j_{1}}^{\rm prop},{\psi}_{j_{2}}^\prime,{\psi}_{j_{3}}^\prime)}{p(\psi_{j_1}^\prime|\psi_{j_2}^\prime)\mathcal{J}_{j_{1}|j_{2}}({\psi}_{j_{1}}^{\rm prop}|{\psi}_{j_{1}}^\prime,{\psi}_{j_{2}}^\prime,{\psi}_{j_{3}}^\prime)}}\right\}$; otherwise the current value is retained, i.e., $\psi_{j_1}=\psi_{j_1}^\prime$. This MH transition kernel, denoted by $\mathcal{M}_{j_{1}|j_{2}}({\psi}_{j_{1}}|{\psi}_{j_{1}}^\prime,{\psi}_{j_{2}}^\prime,{\psi}_{j_{3}}^\prime)$, has stationary distribution $p({\psi}_{j_{1}}|{\psi}_{j_{2}})$. We can also express the iterates explicitly. For instance, ${\psi}_{2}^{(t+1)}\sim\mathcal{M}_{2|1,3}({\psi}_{2}|{\psi}_{1}^{(t+1)},{\psi}_{2}^{(t)},{\psi}_{3}^{(t)})$ is a typical expression for sampling from an MH transition kernel with stationary distribution $p({\psi}_{2}|{\psi}_{1}^{(t+1)},{\psi}_{3}^{(t)})$. 
Notice that this transition kernel depends on ${\psi}_{2}^{(t)}$ because the acceptance probability involves ${\psi}_{2}^{(t)}$ and because ${\psi}_{2}^{(t+1)}$ is set to ${\psi}_{2}^{(t)}$ if the proposal is rejected.} Here we introduce two examples that illustrate the advantages and potential pitfalls that may arise when using PCG samplers when MH is required for some of their updates. \subsection{Spectral analysis in X-ray astronomy} \label{sec:saxa} We begin with an example from our applied work in X-ray astronomy that involves a spectral analysis model that can be fitted with the Data Augmentation algorithm and Gibbs-type samplers \citep{vand:conn:kash:siem:01,vand:meng:10}. We use variants of this example as a running illustration of the methods we propose. The X-ray detectors used in astronomy are typically on board space-based observatories and record the number of photons detected in each of a large number of energy bins. Spectral analysis aims to estimate the distribution of the photon energies. We use Poisson models for the recorded photon counts, where the expected count is parameterized as a function of the energy, $E_{i}$ of bin $i$. A simple example is \begin{eqnarray} X_{i}\stackrel{\mbox{\tiny{ind}}}{\sim}{\rm Poisson}\bigg\{{\Lambda}_{i}=\alpha({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\bigg\},\mbox{ for } i=1,\dots,n, \label{eq:sesa} \end{eqnarray} where $X_{i}$ is the count in bin $i$; $\alpha$, $\beta$, $\gamma$, $\mu$ and $\phi$ are model parameters; $I\{\cdot\}$ is the indicator function; and $n$ is the number of energy bins. The $\alpha{E_{i}}^{-\beta}$ term in~(\ref{eq:sesa}) is a {\it continuum}---a smooth term that extends over a wide range of energies. The $\alpha\gamma I\{i=\mu\}$ term is an {\it emission line}---a sharp narrow term that describes a distinct aberration from the continuum. The emission line in~(\ref{eq:sesa}) is very narrow in that it is contained entirely in one energy bin. The parameters of the continuum and emission line describe the composition, temperature, and general physical environment of the source. The factor $e^{-\phi/E_{i}}$ in~(\ref{eq:sesa}) accounts for absorption---lower energy photons are more likely to be absorbed by inter-stellar material and not be recorded by the detector. A typical spectral model might contain multiple summed continua and emission lines. We use a simple example here to focus attention on computational issues. Since $\alpha$, $\beta$, $\gamma$ and $\phi$ {are often} blocked in the samplers we discuss, we refer to them jointly as $\theta=(\alpha,\beta,\gamma,\phi)$. We assume that $\theta$ and $\mu$ are {\it a priori} independent and that $\mu$ is {\it a priori} uniform on $\{1,\dots,n\}$. In practice, we do not observe $X=(X_{1},\dots,X_{n})$ directly because photon counts are subject to stochastic censoring, misclassification, and background contamination. First, because the sensitivity of the detector varies with energy, the probability that a photon is detected depends on its energy. 
Combining this with background contamination, \begin{eqnarray} \tilde X_i \mid X_i \stackrel{\mbox{\tiny{ind}}}{\sim}{\rm Binomial}\big\{X_i,A_i\big\}+{\rm Poisson}(\xi_i), \ \mbox{ for } \ i=1,\dots,n, \label{eq:nusesa} \end{eqnarray} where ${\tilde X}=(\tilde X_{1},\dots,\tilde X_{n})$ are the photon counts, including background, that are not absorbed, $A=(A_{1},\dots,A_{n})$ is the {\it effective area} of the detector which describes its sensitivity, and $\xi=(\xi_{1},\dots,\xi_{n})$ is the expected background count. Second, misclassification occurs because a photon with energy $E_i$ has probability $P_{ij}$ of being recorded in bin $j$. Combining these effects, the conditional distribution of the observed photon counts $Y=(Y_{1},\dots,Y_{n})$ given ${\tilde X}$ is \begin{eqnarray} Y \mid {\tilde X} \stackrel{\mbox{\tiny{ind}}}{\sim} \sum_{i=1}^n {\rm Multinomial}\bigg\{ \tilde X_i, \ (P_{i1},\dots,P_{in})\bigg\}, \label{eq:nusa} \end{eqnarray} and marginally, \begin{eqnarray} Y_{j}\stackrel{\mbox{\tiny{ind}}}{\sim}{\rm Poisson}\bigg\{ \displaystyle \sum\limits_{i=1}^{n}P_{ij}(A_i {\Lambda}_{i}+{\xi}_{i})\bigg\}, \mbox{ for }j=1,\dots,n, \label{eq:sa} \end{eqnarray} where $\Lambda_{i}$ is given by~(\ref{eq:sesa}). While $A$ and $P=\{P_{ij}\}$ are typically assumed known from instrumental calibration (see~\citeauthor{lee:etal:11},~\citeyear{lee:etal:11}, for an exception), $\xi$ is often specified in terms of a number of unknown parameters. The model in~(\ref{eq:sesa}) is a finite mixture model and can be fitted via the standard data augmentation scheme that sets $X_{i}=X_{iC}+X_{iL}$, where $X_{iC}\stackrel{\mbox{\tiny{ind}}}{\sim}{\rm Poisson}\left(\alpha{E_{i}}^{-\beta}e^{-\phi/E_{i}}\right)$ and $X_{iL}\stackrel{\mbox{\tiny{ind}}}{\sim}{\rm Poisson}\left(\alpha{\gamma}I\{i=\mu\}e^{-\phi/E_{i}}\right)$, are the photon counts in bin $i$ generated from the continuum and emission line, respectively. We consider samplers that target $p(X, X_L, \theta, \mu |Y)$ rather than $p(\theta, \mu |Y)$ both because the ideal data, $X$, is of scientific interest and because its introduction simplifies the complete conditional distributions, especially in more complex models with multiple summed continua and spectral lines. Assuming $\xi$ is known, this leads to a Gibbs sampler for (\ref{eq:sesa})--(\ref{eq:sa}): \begin{steps} \itemsep=0in \step $(X^{(t+1)},X_L^{(t+1)})\sim p(X,X_L|Y,\theta^{(t)},\mu^{(t)})$, (Sampler 1) \step $\theta^{(t+1)}\sim p(\theta|Y,X^{(t+1)},X_L^{(t+1)},\mu^{(t)})$, \step $\mu^{(t+1)}\sim p(\mu|Y,X^{(t+1)},X_L^{(t+1)},\theta^{(t+1)})$, \end{steps} \noindent where $X_L=(X_{1L},\dots,X_{nL})$. We separate $\mu$ and $\theta$ into two steps to facilitate derivation of the partially collapsed versions of this sampler. Because $X_L$ completely specifies the line location, $\mu$, ${\rm Var}_{\pi}(\mu|X_L)=0$, Sampler~1 is not irreducible, and $\mu^{(t)}=\mu^{(0)}$ for all $t$, for any choice of $\mu^{(0)}$. This problem can be solved by updating $\mu$ without conditioning on $X_L$. In particular, we can replace Step~3 of Sampler~1 with $(X_L^{(t+1)},\mu^{(t+1)})\sim p(X_L,\mu|Y,X^{(t+1)},\theta^{(t+1)})$ and permute the steps to \begin{steps} \itemsep=0in \step $(X_L^{*},\mu^{(t+1)})\sim p(X_L,\mu|Y,X^{(t)},\theta^{(t)})$, (Sampler 2) \step $(X^{(t+1)},X_L^{(t+1)})\sim p(X,X_L|Y,\theta^{(t)},\mu^{(t+1)})$, \step $\theta^{(t+1)}\sim p(\theta|Y,X^{(t+1)},X_L^{(t+1)},\mu^{(t+1)})$. 
\end{steps} \noindent The sampled $X_L$ in Step~1 is denoted by $X_L^{*}$ because it is not an output of the Markov transition kernel; $X_L$ is updated again in Step~2. In fact $X_L^{*}$ is a redundant quantity in that it is not used at all subsequent to Step~1 and replacing Step~1 with $\mu^{(t+1)} \sim p(\mu|Y,X^{(t)},\theta^{(t)})$ does not alter the Markov transition kernel of Sampler~2. The resulting sampler, that is, \begin{steps} \itemsep=0in \step $\mu^{(t+1)}\sim p(\mu|Y,X^{(t)},\theta^{(t)})$, (Sampler 3) \step $(X^{(t+1)},X_L^{(t+1)})\sim p(X,X_L|Y,\theta^{(t)},\mu^{(t+1)})$, \step $\theta^{(t+1)}\sim p(\theta|Y,X^{(t+1)},X_L^{(t+1)},\mu^{(t+1)})$, \end{steps} \noindent is an example of a PCG sampler composed of incompatible conditional distributions. A variant of this sampler was discussed in~\citet{park:vand:09}. By its construction, the stationary distribution of Sampler~3 is $p(X,X_L,\theta,\mu|Y)$, see Section~\ref{sec:con}. Unlike an ordinary Gibbs sampler, however, permuting its steps may alter its stationary distribution. Suppose, for example, we obtain $(X^{(t)},X_L^{(t)},\theta^{(t)},\mu^{(t)})$ from $p(X,X_L,\theta,\mu|Y)$ and update $\mu$ according to Step~1 of Sampler~3. The joint distribution of $(X^{(t)},X_L^{(t)},\theta^{(t)},\mu^{(t+1)})$ would be \begin{eqnarray} \int p(\mu^{(t+1)}|Y,X^{(t)},\theta^{(t)})p(X^{(t)},X_L^{(t)},\theta^{(t)},\mu^{(t)}|Y)d\mu^{(t)}=p(X^{(t)},\theta^{(t)},\mu^{(t+1)}|Y)p(X_L^{(t)}|Y,X^{(t)},\theta^{(t)}). \label{eq:inter} \end{eqnarray} It is the conditional independence of $X_L^{(t)}$ and $\mu^{(t+1)}$ in~(\ref{eq:inter}) that makes Sampler~3 so much faster than Sampler~1; recall ${\rm Var}_{\pi}(\mu|X_L)=0$. Because the joint distribution of $\theta^{(t)}$ and $\mu^{(t+1)}$ in~(\ref{eq:inter}) is their posterior distribution and Step~2 conditions only on $\theta^{(t)}$ and $\mu^{(t+1)}$, the joint distribution of the unknowns after Step~2, that is, of $(X^{(t+1)},X_L^{(t+1)},\theta^{(t)},\mu^{(t+1)})$, is again the target posterior. Thus a cyclic permutation of the steps in Sampler~3 that ends either with Step~2 or Step~3 results in a proper sampler, but ending with Step~1 does not. With non-cyclic permutations, the stationary distribution is unknown. \subsection{A common error in the simplest PCG sampler} \label{sec:cespcg} The potential pitfalls of introducing MH updates into a PCG sampler can be illustrated using the simplest possible PCG sampler. To see this, we start with a two-step Gibbs sampler with target distribution $p({\psi}_{1},{\psi}_{2})$, where the second step relies on an MH update: \begin{steps} \itemsep=0in \step ${\psi}_{1}^{(t+1)} \sim p({\psi}_{1}|{\psi}_{2}^{(t)})$, (Sampler 4) \step ${\psi}_{2}^{(t+1)}\sim{{\mathcal{M}}_{2|1}({\psi}_{2}|{\psi}_{1}^{(t+1)},\psi_2^{(t)})}$. \end{steps} \noindent While this sampler is proper, replacing Step 1 with ${\psi}_{1}^{(t+1)}\sim p({\psi}_{1})$ results in an improper sampler: \begin{steps} \itemsep=0in \step ${\psi}_{1}^{(t+1)}\sim p({\psi}_{1})$, (Sampler 5) \step ${\psi}_{2}^{(t+1)}\sim{{\mathcal{M}}_{2|1}({\psi}_{2}|{\psi}_{1}^{(t+1)},\psi_2^{(t)})}$. \end{steps} \begin{figure} \caption{Proper and improper samplers, for the bivariate normal target distribution. The first two panels give scatter plots of $\psi_1$ and $\psi_2$ for 10,000 draws from Samplers~4 and~5, respectively. The marginal distributions of the two samplers are compared in the two quantile-quantile plots. 
The improper Sampler 5 severely underestimates the correlation between $\psi_1$ and $\psi_2$, and slightly overestimates the variance of $\psi_{2}$.} \label{fig:pitfall} \end{figure} \noindent The problem with Sampler~5 can be illustrated using a simulation study. Figure~\ref{fig:pitfall} compares 10,000 draws generated by Samplers~4 and~5 with $p({\psi}_{1},{\psi}_{2})$ given by \begin{eqnarray} \left(\begin{array}{c} {\psi}_{1}\\ {\psi}_{2} \end{array} \right) \sim{\rm N}_{2}\left[\left(\begin{array}{c} 0\\ 0 \end{array} \right),\left(\begin{array}{cc} 1&0.9\\ 0.9&1 \end{array} \right)\right]. \label{eq:normal} \end{eqnarray} The MH jumping rule in Step~2 of both samplers is a Gaussian distribution centered at the previous draw with variance equal to 3. Sampler~5 underestimates the correlation of the target distribution and overestimates the marginal variance of $\psi_2$. {Of course, if we repeat Step 2 a sufficient number of times within each iteration of Sampler 5, it would deliver a draw (nearly) from its target, $p(\psi_2|\psi_1)$, and Sampler 5 would deliver (nearly) independent draws from $p(\psi_1,\psi_2)$. We discuss this strategy for constructing an approximately proper sampler in Section~\ref{sec:mhrs}. Similarly, iterating Step~2 of Sampler~4 would (nearly) lead to a standard two-step Gibbs sampler.} The key to understanding the failure of Sampler~5 (without iterating Step 2) lies in the MH jumping rule used in Step~2 of both samplers. {The kernel ${\mathcal{M}}_{2|1}$ depends on ${\psi}_{2}^{(t)}$ through its acceptance probability and its output if its proposal is rejected; thus ${\mathcal{M}}_{2|1}$ must be written as ${\mathcal{M}}_{2|1}({\psi}_{2}|{\psi}_{1}^{(t+1)},{\psi}_{2}^{(t)})$. Although ${\mathcal{M}}_{2|1}$ delivers a draw from $p({\psi}_{2}|{\psi}_{1}^{(t+1)})$ if given a sample $({\psi}_{1}^{(t+1)},{\psi}_{2}^{(t)})$ from the target distribution, in Sampler~5, ${\psi}_{1}^{(t+1)}$ and ${\psi}_{2}^{(t)}$ are independent and ${\mathcal{M}}_{2|1}$ does not deliver a draw from $p({\psi}_{2}|{\psi}_{1}^{(t+1)})$.} Unfortunately, there are several examples of samplers in the literature that have the same structure as the improper Sampler 5, for instance in \citet{liu:etal:09}, \citet{lunn:etal:09}, \citet{mccan:etal:10}, and even in the popular WinBUGS package (Spiegelhalter, Thomas, Best and Lunn 2003); see Section~\ref{sec:smhpcg}. These samplers do not generally exhibit the desired stationary distributions. \subsection{Convergence of the Partially Collapsed Gibbs sampler} \label{sec:con} \begin{figure} \caption{A three-phase framework for deriving a proper PCG sampler. The parent Gibbs sampler appears in (a). The sampler in (b) reduces the conditioning in Step~1 by updating $\psi_3$ rather than conditioning on it. The steps of this sampler are permuted in (c) to allow the redundant draw of $\psi_3^{\star}$ to be trimmed in (d).} \label{fig:pcg} \end{figure} A three-phase framework for deriving proper PCG samplers is given in~\citet{vand:park:08}. { Consider the Gibbs sampler in Figure~\ref{fig:pcg}(a) that updates the components of~$\psi=({\psi}_{1},\psi_2,\psi_3,{\psi}_{4})$~in three steps. In the first phase of the framework, one or more steps of the parent Gibbs sampler are replaced by steps that update rather than condition upon some components of~$\psi$. This is illustrated in Figure~\ref{fig:pcg}(b), where the update $\psi_1\sim p(\psi_1|\psi_2^\prime,\psi_3^\prime,\psi_4^\prime)$ in Step~1 is replaced with~$(\psi_1,\psi_3^\star)\sim p(\psi_1,\psi_3|\psi_2^\prime,\psi_4^\prime)$.
Notice that in the modified step,~$\psi_3$~is sampled rather than conditioned upon.} This {\it conditioning reduction} phase is key to the improved convergence properties of the PCG sampler. By conditioning on less, we expect to increase the variance of the updating distribution, at least on average. This is evident in Section~\ref{sec:saxa} where the complete conditional for $\mu$ in Sampler~1 has zero variance, but its update with reduced conditioning in Sampler~2 readily allows $\mu$ to move across its parameter space. More formally,~\citet{vand:park:08}~showed that sampling more unknowns in any set of steps of a Gibbs sampler can only reduce the so-called cyclic-permutation bound on the spectral radius of the sampler. The resulting substantial improvement in the rate of convergence is illustrated in the examples given in~\citet{ber:gay:13},~\citet{berr:cald:12},~\citet{car:teh:14},~\citet{dobi:tour:10},~\citet{hu:gram:lian:12},~\citet{hu:lian:13},~\citet{kail:etal:10,kail:etal:11}, ~\citet{lin:tour:10},~\citet{lin:sch:13},~\citet{park:etal:08},~\citet{park:vand:09},~\citet{park:jeonl:lee:12},~\citet{park:kraf:sanc:12}, and~\citet{zhao:lian:13}, etc. ({\it Conditioning reduction} was called {\it marginalization} by \citet{vand:park:08}.) The conditioning reduction phase results in one or more components of $\psi$ being updated in multiple steps; $\psi_3$ is updated in Steps~1 and 3 in Figure~\ref{fig:pcg}(b). If the same component is updated in two consecutive steps, the Markov transition kernel does not depend on the first update. We call quantities that are updated in a sampler, but do not affect its transition kernel {\it redundant quantities}---they must be updated subsequently or they would be part of the output of the iteration. The second phase of the framework is to {\it permute} the steps of the sampler with reduced conditioning to make as many of the updates redundant as possible. {For example, we permuted the steps in Figure~\ref{fig:pcg}(b) so that $\psi_3$ is updated in Steps~2 and~3 of Figure~\ref{fig:pcg}(c) and $\psi_3^\star$ is redundant. {In the third phase, redundant quantities are removed or {\it trimmed} from the updating scheme. For example, Step~2 in Figure~\ref{fig:pcg}(d) does not update $\psi_3$. By construction, this does not affect the overall transition kernel. The resulting step samples from a conditional distribution of a marginal distribution of $p(\psi)$. For example, Step~2 in Figure~\ref{fig:pcg}(d) simulates from a conditional distribution of $p(\psi_1, \psi_2,\psi_4)$ rather than of $p(\psi_1, \psi_2,\psi_3,\psi_4)$. We refer to steps that sample or target such distributions as {\it reduced steps} and to steps that sample or target a complete conditional as {\it full steps}. In some cases, the result of the three-phase framework is simply a blocked or collapsed~\citep{liu:wong:kong:94} version of the parent Gibbs sampler. In other cases, however, the resulting PCG sampler is composed of samples from a set of incompatible conditional distributions (e.g., Sampler~3). Since all three phases preserve the stationary distribution of the parent sampler, we know that the resulting PCG sampler is proper. Because reducing the conditioning can significantly improve the rate of convergence of the sampler, while permutation typically has a minor effect, and trimming has no effect on the rate of convergence, we generally expect the PCG sampler to exhibit better and often much better convergence properties than its parent Gibbs sampler. 
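Before turning to MH within PCG samplers, we note that the contrast between Samplers~4 and~5 in Section~\ref{sec:cespcg} is easy to reproduce. The following sketch (written in Python with NumPy; it is our own minimal illustration and not part of any software package discussed in this article) implements both samplers for the bivariate normal target in~(\ref{eq:normal}), using the symmetric Gaussian jumping rule with variance 3 for the MH update of $\psi_2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
RHO = 0.9                              # target correlation in (eq:normal)
SD = np.sqrt(1 - RHO ** 2)             # p(psi2 | psi1) = N(RHO * psi1, 1 - RHO^2)

def mh_update_psi2(psi1, psi2, jump_sd=np.sqrt(3.0)):
    """One MH update of psi2 with stationary distribution p(psi2 | psi1)."""
    prop = psi2 + jump_sd * rng.standard_normal()          # symmetric jumping rule
    log_r = ((psi2 - RHO * psi1) ** 2 - (prop - RHO * psi1) ** 2) / (2 * SD ** 2)
    return prop if np.log(rng.uniform()) < log_r else psi2

def run(sampler, n_iter=10_000):
    psi1, psi2 = 0.0, 0.0
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        if sampler == 4:
            psi1 = RHO * psi2 + SD * rng.standard_normal()  # Step 1: psi1 ~ p(psi1 | psi2)
        else:
            psi1 = rng.standard_normal()                    # Step 1: psi1 ~ p(psi1)
        psi2 = mh_update_psi2(psi1, psi2)                   # Step 2: MH update of psi2
        out[t] = psi1, psi2
    return out

for s in (4, 5):
    d = run(s)
    print("Sampler", s, "corr:", round(np.corrcoef(d.T)[0, 1], 2),
          "var(psi2):", round(d[:, 1].var(), 2))
\end{verbatim}
\noindent The chain produced by Sampler~4 recovers the target correlation of 0.9, while the chain produced by Sampler~5 underestimates it and overestimates the marginal variance of $\psi_2$, in line with Figure~\ref{fig:pitfall}.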
\section{Using MH Algorithm within the PCG Sampler} \label{sec:mhpcg} \subsection{Identifying the stationary distributions} \label{sec:idsd} We now consider the use of MH updates for some of the steps of a PCG sampler. As the example in Section~\ref{sec:cespcg} illustrates, introducing MH into a well behaved PCG sampler can destroy the sampler's stationary distribution. Thus, care must be taken to guarantee that an MH within PCG sampler is proper. Here we describe the basic complication that arises when MH is introduced into a PCG sampler and give advice as to how to ensure that the sampler is proper. When deriving a PCG sampler (without MH), the conditioning reduction phase means some components of $\psi$ are updated in multiple steps. If the same component is updated in consecutive steps, the Markov transition kernel does not depend on the first update. The first update is therefore redundant and can be omitted without affecting the stationary distribution of the chain. This situation is more complicated when some of the steps of the PCG sampler require MH updates. Suppose, for example, we wish to sample from $p(\psi)$ with $\psi=({\psi}_{1},{\psi}_{2},{\psi}_{3})$ using a proper PCG sampler in which $\psi_1$ and $\psi_2$ are jointly updated in Step~$K$ via a draw from the conditional distribution $p({\psi}_{1},{\psi}_{2}|{\psi}_{3})$. Suppose also that $\psi_2$ is to be updated according to its full conditional distribution, $p({\psi}_{2}|{\psi}_{1},{\psi}_{3})$ in Step~$K+1$, but this cannot be done directly and we wish to use an MH update. The remaining unknowns, $\psi_3$, are updated in other steps of the sampler, which perhaps involve dividing $\psi_3$ into multiple subcomponents. That is, Steps $K$ and~$K+1$ of the sampler are \begin{description} \itemsep=0in \item[Step $K$:] $(\psi_1^{(t+1)},\psi_2^{*}) \sim p(\psi_1,\psi_2|\psi_3^\prime)$, (Sampler Fragment 1) \item[Step $K+1$:] $\psi_2^{(t+1)} \sim {\mathcal M}_{2|1,3}(\psi_2|\psi_1^{(t+1)},\psi_2^{*},\psi_3^\prime)$. \end{description} \noindent If we were able to draw $\psi_2$ directly from its complete conditional distribution in Step~$K+1$, $\psi_2^{*}$ would be redundant and we could remove it from the sampler by replacing the update in Step~$K$ with the reduced step $\psi_1^{(t+1)} \sim p(\psi_1|\psi_3^\prime)$. The MH update in Step~$K+1$, however, depends on $\psi_2^{*}$ and replacing it with $\psi_2^{(t)}$ may change the chain's stationary distribution in an unpredictable way. In short, the MH update used in Step~$K+1$ means that we cannot reduce Step~$K$. Generally speaking, an MH update in a step that follows a reduced step is problematic because reduced steps result in independences that do not exist in the target. (A reduced step that follows an MH step, however, is not inherently problematic.) {More precisely, the kernel, $\mathcal{M}_{j_{1}|j_{2}}({\psi}_{j_{1}}|{\psi}_{j_{1}}^\prime,{\psi}_{j_{2}}^\prime,{\psi}_{j_{3}}^\prime)$, can only be used if no component of $({\psi}_{j_{1}},{\psi}_{j_{2}},{\psi}_{j_{3}})$ is trimmed in the previous step.} \begin{figure} \caption{Three-phase framework used to derive Sampler~6 from its parent MH within Gibbs sampler. The parent sampler appears in (a) with Steps 3, 5 and 6 requiring MH updates. The conditioning in steps 2, 3, 5, and 6 is reduced in (b). 
The steps are permuted in (c) to allow redundant draws of $X_L^{\star}$ and $\alpha^{\star}$ to be trimmed.} \label{fig:spectralmodel61} \end{figure} Luckily, the stationary distribution of an MH within PCG sampler can be verified using the same methods that are used for an ordinary PCG sampler. In particular, the three-phase framework of~\citet{vand:park:08} can be directly applied. The first two phases, conditioning reduction and permutation, apply equally well to MH within Gibbs samplers. Neither updating additional components of $\psi$ in one or more steps nor permuting the order of the steps upsets the stationary distribution of an MH within Gibbs sampler. The final phase involves removing redundant updates. Because MH steps generally depend on the current draws of {\it all} of the components of $\psi$ not marginalized out in that step, there are fewer redundant draws when some steps involve MH. Nonetheless, any redundant updates that are identified can safely be removed in the trimming phase---by definition they do not affect the transition kernel. {\it The critical point is that unlike with an ordinary Gibbs sampler, we cannot simply replace some of the component draws of a PCG sampler with MH updates. Rather we must construct an MH within PCG sampler by applying the three-phase framework.} Now suppose we wish to reduce the conditioning in an MH step. In Sampler Fragment~1, for example, if $p(\psi_3 | \psi_1, \psi_2)$ is a standard distribution with known normalization, then we can evaluate $p(\psi_2 | \psi_1) \propto p(\psi_1, \psi_2) = p(\psi_1, \psi_2, \psi_3) / p(\psi_3 | \psi_1, \psi_2)$ and sample $\psi_2 \sim {\mathcal M}_{2|1}(\psi_2|\psi_1^\prime,\psi_2^\prime)$. Replacing Step $K+1$ of Sampler Fragment~1 with this reduced MH step, however, can alter the chain's stationary distribution in unpredictable ways. Instead, we propose to replace the full MH step with the reduced MH step {\it followed immediately} by a direct draw from the complete conditional of the reduced quantities. In Sampler Fragment~1 this would entail replacing Step $K+1$ with \begin{description} \itemsep=0in \item[Step $K+1$ with Reduced Conditioning:] $\psi_2^{(t+1)} \sim {\mathcal M}_{2|1}(\psi_2|\psi_1^{(t+1)},\psi_2^{*})$ and $\psi_3 \sim p(\psi_3 | \psi_1^{(t+1)}, \psi_2^{(t+1)})$. \end{description} This strategy ensures that the target stationary distribution is maintained. The expectation is that the updates of the reduced quantities will be trimmed after the steps are appropriately permuted and that the reduced MH step can be employed in the final sampler. We denote the transition kernel of the full step (i.e., the reduced MH step followed by the complete conditional of the reduced quantities) by ${\cal M}^\star$. In Sampler Fragment~1, we rewrite the step with reduced conditioning as \begin{description} \itemsep=0in \item[Step $K+1$ with Reduced Conditioning:] $(\psi_2^{(t+1)},\psi_3) \sim {\mathcal M}^\star_{2,3|1}(\psi_2, \psi_3|\psi_1^{(t+1)},\psi_2^{*}).$ \end{description} Notice that this full update is not formally an MH update and has the advantage that it does not depend on all of the components of $\psi$. Thus, this step can follow a step that reduces $\psi_3$ out. We now illustrate the construction of a proper MH within PCG sampler for the spectral model given in~(\ref{eq:sesa}). For simplicity, we assume that $X$ is observed directly and we can ignore {(\ref{eq:nusesa})--(\ref{eq:sa})}. Figure~\ref{fig:spectralmodel61}(a) gives a six-step Gibbs sampler.
Three of its steps require MH updates; the details of all the steps are given in Appendix B. The conditioning in four steps is reduced in Figure~\ref{fig:spectralmodel61}(b), and the steps are permuted in Figure~\ref{fig:spectralmodel61}(c) to allow the redundant draws of $X_L^{\star}$ and $\alpha^{\star}$ to be trimmed in four steps. Sampler 6, the resulting proper MH within PCG sampler, appears in Figure \ref{fig:spectralmodel67}. \begin{figure} \caption{Samplers 6 and 7. Sampler~6 is derived from its parent Gibbs sampler in Figure~\ref{fig:spectralmodel61}; Sampler~7 blocks Steps~3 and~4 of Sampler~6.} \label{fig:spectralmodel67} \end{figure} \subsection{Using MH following a reduced step} \label{sec:mhrs} Using a full MH step immediately following a reduced step can be problematic. Sampler~5 illustrates this in its simplest form: a draw from a marginal distribution followed by an MH update of the conditional distribution of the remaining unknowns. As noted in Section~\ref{sec:cespcg}, this is a particularly common problem in practice, even in its simplest form. In more complicated PCG samplers, the general phenomenon of introducing a full MH step immediately following a reduced step is the typical path by which introducing MH leads to an improper sampler. This is illustrated in Sampler~Fragment~1, where we are unable to replace the update in Step~$K$ with the reduced step $\psi_1^{(t+1)} \sim p(\psi_1|\psi_3^\prime)$. Thus, this case is particularly important and we propose two alternative samplers that maintain the basic structure of the underlying PCG sampler while allowing a form of MH in the step following a reduced step. Both solutions are conceptually straightforward. We begin by studying a special case that is useful for illustrating the two alternative samplers that we propose. We discuss the more general situation below. In particular, we start in the general setting of Sampler~Fragment~1, but consider a PCG sampler in which $\psi_1$ is updated in Step~$K$ via a direct draw from the conditional distribution $p(\psi_1|\psi_3)$ of the marginal distribution $p(\psi_1,\psi_3)$, i.e., a reduced step. Again suppose that an MH update is required to update $\psi_2$ in Step~$K+1$. That is, Steps~$K$ and $K+1$ of the parent PCG sampler are \begin{description} \itemsep=0in \item[Step $K$:] $\psi_1^{(t+1)} \sim p(\psi_1|\psi_3^\prime)$, (Sampler Fragment 2) \item[Step $K+1$:] $\psi_2^{(t+1)} \sim p(\psi_2|\psi_1^{(t+1)},\psi_3^\prime)$. \end{description} \noindent Because MH is needed for Step $K+1$, these steps cannot be blocked. One straightforward general solution to the intractability of $p(\psi_2|\psi_1^{(t+1)},\psi_3^\prime)$ is simply to iterate the MH update within Step~$K+1$ to obtain a draw from the conditional distribution, \noindent {\it Iterated MH Strategy}: \begin{description} \itemsep=0in \item[Step $K$:] $\psi_1^{(t+1)} \sim p(\psi_1|\psi_3^\prime)$, (Sampler Fragment 3) \item[Step $K+1$:] Sample $\psi_2^{(t+l/L)} \sim {\mathcal M}_{2|1,3}(\psi_2|\psi_1^{(t+1)},\psi_2^{(t+(l-1)/L)},\psi_3^\prime)$, for $l=1,\dots,L$, to obtain $\psi_2^{(t+1)}\stackrel{\mbox{\tiny{approx}}}{\sim} p(\psi_2|\psi_1^{(t+1)},\psi_3^\prime)$ at the subiteration $l=L$. \end{description} \noindent {We discuss methods for determining how large $L$ must be in {Sections~\ref{sec:cijmh} and~\ref{sec:smhpcg}}. With sufficiently large $L$, the iterated MH strategy delivers a draw that approximately follows $p(\psi_2|\psi_1^{(t+1)},\psi_3^\prime)$ and thus the sampler is {\it approximately proper}.
In this special case, the iterated MH strategy effectively blocks Steps~$K$ and $K+1$ to (nearly) deliver an independent draw from $p(\psi_1,\psi_2|\psi_3^\prime)$.} Another solution to the intractability of $p(\psi_2|\psi_1^{(t+1)},\psi_3^\prime)$ is a joint MH update on the blocked version of Steps~$K$ and $K+1$, \noindent {\it Joint MH Strategy}: \begin{description} \itemsep=0in \item[Step $K$:] Update $(\psi_1,\psi_2)$ jointly via the MH jumping rule ${\mathcal J}_{1,2|3}(\psi_1,\psi_2|\psi_2^{(t)},\psi_3^\prime)=p(\psi_1|\psi_3^\prime)\linebreak {\mathcal J}_{2|1,3}(\psi_2|\psi_1,\psi_2^{(t)},\psi_3^\prime)$, \item[Step $K+1$:] Omit. (Sampler Fragment 4) \end{description} \noindent The jumping rule in Step~$K$ of Sampler~Fragment~4 is exactly the concatenation of Step~$K$ and the jumping rule in Step~$K+1$ of Sampler~Fragment~3. By concatenating we avoid iteration. The iterated MH strategy is in some sense a thinned version of the joint MH strategy. This, however, is an oversimplification for two reasons. First, the iterated MH strategy updates $\psi_1$ only once for every $L$ updates of $\psi_2$, whereas the joint MH strategy updates both together. Second, although the jumping rule in the joint MH strategy is the same as that used by the iterated MH strategy at its first subiteration, the acceptance probabilities differ. This results in a systematic difference in the performance of the resulting samplers; see Section~\ref{sec:cijmh}. Generalizing Sampler~Fragment~2, Steps~$K$ and $K+1$ may not block even without MH. Suppose $\psi=(\psi_1,\psi_2,\psi_3,\psi_4)$ and the parent PCG sampler contains the two steps \begin{description} \itemsep=0in \item[Step $K$:] $\psi_1^{(t+1)} \sim p(\psi_1|\psi_3^{(t)},\psi_4^\prime)$, (Sampler Fragment 5) \item[Step $K+1$:] $(\psi_2^{(t+1)},\psi_3^{(t+1)}) \sim p(\psi_2,\psi_3|\psi_1^{(t+1)},\psi_4^\prime)$, \end{description} \noindent where Step~$K$ is a reduced step and Step~$K+1$ cannot be sampled directly. Here the conditional distributions cannot be blocked into a single step. {We can still use the iterated MH strategy in Step~$K+1$ to obtain a draw approximately from $p(\psi_2,\psi_3|\psi_1^{(t+1)},\psi_4^\prime)$ and an approximately proper sampler.} Likewise, we can implement the joint MH strategy, using the jumping rule $p(\psi_1|\psi_3^{(t)},\psi_4^\prime){\mathcal J}_{2,3|1,4}(\psi_2,\psi_3|\psi_1,\psi_2^{(t)},\psi_3^{(t)},\psi_4^\prime)$. The stationary distribution of the joint jumping rule is $p(\psi_1|\psi_3^{(t)},\psi_4^\prime)p(\psi_2,\psi_3|\psi_1,\psi_4^\prime)$. Although a legitimate joint distribution on $(\psi_1,\psi_2,\psi_3)$, this does not correspond to a conditional distribution of $p(\psi)$. \subsection{To block or not to block} \label{sec:bnb} \begin{figure} \caption{A dataset simulated under the spectral model~(\ref{eq:sesa}).} \label{fig:simulate} \end{figure} Section~\ref{sec:mhrs} discusses the case where Step~$K+1$ of Sampler Fragment 2 requires MH. We now consider the case where Step~$K$ requires MH. In particular, \begin{description} \itemsep=0in \item[Step $K$:] $\psi_1^{(t+1)} \sim {\mathcal M}_{1|3}(\psi_1|\psi_1^{(t)},\psi_3^\prime)$, (Sampler Fragment 6) \item[Step $K+1$:] $\psi_2^{(t+1)} \sim p(\psi_2|\psi_1^{(t+1)},\psi_3^\prime)$. \end{description} \noindent Sampler~Fragment~6 does not lead to convergence problems because the inputs to Step $K+1$ follow the correct distribution; Figure~\ref{fig:ex3} verifies the stationary distribution of its parent chain.
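The propriety of Sampler~Fragment~6 is also easy to check numerically. The sketch below (Python with NumPy; our own illustration, under the simplifying assumption that there is no $\psi_3$) applies the same structure to the bivariate normal target in~(\ref{eq:normal}): an MH update whose stationary distribution is the marginal $p(\psi_1)$, followed by an exact draw from $p(\psi_2|\psi_1)$. In contrast to Sampler~5, this chain recovers the target correlation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
RHO, SD = 0.9, np.sqrt(1 - 0.9 ** 2)
psi1, psi2, draws = 0.0, 0.0, []
for t in range(10_000):
    # Step K (no psi3 in this two-variable analogue): MH update of psi1
    # targeting p(psi1) = N(0, 1), with a symmetric Gaussian jumping rule.
    prop = psi1 + np.sqrt(3.0) * rng.standard_normal()
    if np.log(rng.uniform()) < (psi1 ** 2 - prop ** 2) / 2.0:
        psi1 = prop
    # Step K+1: exact draw from the complete conditional p(psi2 | psi1).
    psi2 = RHO * psi1 + SD * rng.standard_normal()
    draws.append((psi1, psi2))
draws = np.asarray(draws)
print(round(np.corrcoef(draws.T)[0, 1], 2))   # close to the target correlation 0.9
\end{verbatim}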
We might consider blocking the two steps in Sampler~Fragment~6 into a single MH update as \begin{description} \itemsep=0in \item[Step $K$:] Update $(\psi_1,\psi_2)$ jointly via the MH jumping rule ${\mathcal J}_{1,2|3}(\psi_1,\psi_2|\psi_1^{(t)},\psi_2^{(t)},\psi_3^\prime)=\linebreak {\mathcal J}_{1|3}(\psi_1|\psi_1^{(t)},\psi_3^\prime)p(\psi_2|\psi_1,\psi_3^\prime)$, \item[Step $K+1$:] Omit. (Sampler Fragment 7) \end{description} \noindent The jumping rule in Sampler~Fragment~7 is exactly the concatenation of the jumping rules in the two steps of Sampler~Fragment~6. There is a fundamental difference, however, in that the concatenated jumping rule depends on $\psi_2^{(t)}$: if the MH proposal is rejected, $(\psi_1^{(t+1)},\psi_2^{(t+1)})=(\psi_1^{(t)},\psi_2^{(t)})$, whereas neither of the steps in Sampler~Fragment~6 depends on $\psi_2^{(t)}$. This means that care must be taken to ensure blocking in this way does not upset the stationary distribution of the chain. Steps 3 and 4 of Sampler 6 are an example of Sampler~Fragment~6, with $\psi_1=\beta$, $\psi_2=\alpha$ and $\psi_3=(\gamma,\mu,\phi)$. Blocking Steps 3 and 4 of Sampler 6 results in Sampler 7, see the second panel of Figure \ref{fig:spectralmodel67}. Unfortunately, this is an improper sampler, which we verify using a simulation study. We begin by generating an artificial data set consisting of $n=550$ bins with $\alpha=37.62$, $\beta=1$, $\gamma=40/37.62$, $\mu=250$, and $\phi=0.2$, see Figure~\ref{fig:simulate}. We run two versions of Sampler 7. Sampler 7(a) uses the concatenated jumping rule given in Sampler Fragment 7 to update $(\alpha,\beta)$, while Sampler 7(b) uses an independent bivariate normal jumping rule centered at the current value of $(\alpha,\beta)$. We use a uniform prior distribution for each parameter, and run 30,000 iterations of Samplers~6, 7(a), and 7(b) using the same starting values ($\alpha=30$, $\beta=3$, $\gamma=1$, $\mu=10$ and $\phi=0.5$). Scatter plots of $(\alpha,\beta,\phi)$ for the last 10,000 draws from the three samplers appear in Figure~\ref{fig:scatter1}, which shows that Samplers 7(a) and 7(b) underestimate the correlations of the target distribution; this effect is especially dramatic for Sampler 7(b). Figure~\ref{fig:specqq} compares the marginal distributions of $\alpha$, $\beta$, and $\phi$ generated with Samplers 6 and 7(b), and shows that Sampler 7(b) underestimates the marginal variances of all three parameters. (The marginals generated with Sampler 7(a) are more similar to those generated with Sampler 6.) \begin{figure} \caption{Scatter plots of $\alpha$, $\beta$ and $\phi$ for 10,000 draws from Samplers 6, 7(a) and 7(b) respectively. The two versions of Sampler 7 block the two steps of Sampler 6 that update $\alpha$ and $\beta$. Unfortunately, this results in an improper sampler. When updating $(\alpha,\beta)$, Sampler 7(a) uses the concatenation of Sampler 6's jumping rules for $\alpha$ and $\beta$, while Sampler 7(b) uses an independent bivariate normal jumping rule. The impropriety of Sampler 7(b) is especially dramatic.} \label{fig:scatter1} \end{figure} \begin{figure} \caption{Quantile-quantile plots of $\alpha$, $\beta$ and $\phi$ corresponding to draws generated with Samplers 6 and 7(b). Sampler 7(b) severely underestimates the marginal variances of all three parameters.} \label{fig:specqq} \end{figure} The problem with Sampler 7 can be understood in the terms of Section \ref{sec:mhrs}. 
Blocking the updates for $\alpha$ and $\beta$ results in an MH step that follows directly after a pair of reduced steps (the updates of $\mu$ and $\phi$). If $\mu$ and $\phi$ were known, and Steps 1 and 2 were removed, both versions of Samplers 7 would be proper. As it is, the stationary distribution of Sampler 7 cannot be verified with the three-phase framework. The comparison between Sampler~Fragments~6--7 is similar to that between the iterated and joint MH strategies in Section~\ref{sec:mhrs}. Theoretical perspectives on these choices appear in Section \ref{sec:the}. \section{Theory} \label{sec:the} \subsection{Comparing the iterated and joint MH strategies} \label{sec:cijmh} In this section we compare the iterated and joint MH strategies in terms of their acceptance probabilities. Although it is generally recognized that an acceptance probability of $20\%$ to $40\%$ is best for a symmetric Metropolis jumping rule~\citep{robe:etal:97}, we argue that the better choice between the two strategies is determined by maximizing the acceptance probability. This is because both the iterated and joint MH strategies start with the {\it same proposal}---they are numerically identical. The rule of thumb for tuning the acceptance probability to between $20\%$ and $40\%$ is based on comparing {\it different proposal distributions} with an eye on avoiding high acceptance rates because they typically correspond to jumping rules that propose very small steps. In this case the initial step sizes are the same and we aim to reduce correlation by increasing the jumping probability. We begin with theoretical results and then illustrate them numerically. To simplify notation we suppress the conditioning on $\psi_3$ in Sampler~Fragments~3 and~4. This is equivalent to a formal comparison of the iterated and joint MH strategies as alternatives to the improper two-step Sampler~5. We assume that (i) the sampler has been verified to be proper so that $\pi=p$ and (ii) the jumping rule used to update $\psi_2$ does not depend on $\psi_1$, i.e., ${\mathcal J}_{2|1}(\psi_2|\psi_1^\prime,\psi_2^\prime)={\mathcal J}_{2|1}(\psi_2|\psi_2^\prime)$. While the transition kernel ${\mathcal{M}}_{2|1}({\psi}_{2}|{\psi}_{1}^{\prime},{\psi}_{2}^{\prime})$ will typically depend on ${\psi}_{1}^{\prime}$, the jumping rule often will not, for example, a symmetric Metropolis-type jumping rule does not. The acceptance probability of the first draw in Step~$K+1$ of the iterated MH strategy is \begin{eqnarray} r_{\rm iter}=\frac{p(\psi_2^{\rm prop}|\psi_1^{(t+1/L)}){\mathcal J}_{2|1}(\psi_2^{(t)}|\psi_2^{\rm prop})}{p(\psi_2^{(t)}|\psi_1^{(t+1/L)}){\mathcal J}_{2|1}(\psi_2^{\rm prop}|\psi_2^{(t)})}, \label{eq:itermh} \end{eqnarray} where $\psi_1^{(t+1/L)} \sim p(\psi_1)$ and $\psi_2^{\rm prop} \sim {\mathcal J}_{2|1}(\psi_2|\psi_2^{(t)})$. With the joint MH strategy, it is \begin{eqnarray} r_{\rm joint}=\frac{p(\psi_1^{\rm prop},\psi_2^{\rm prop}) \{p(\psi_1^{(t)}){\mathcal J}_{2|1}(\psi_2^{(t)}|\psi_2^{\rm prop})\}}{p(\psi_1^{(t)},\psi_2^{(t)}) \{p(\psi_1^{\rm prop}){\mathcal J}_{2|1}(\psi_2^{\rm prop}|\psi_2^{(t)})\}}=\frac{p(\psi_2^{\rm prop}|\psi_1^{\rm prop}) {\mathcal J}_{2|1}(\psi_2^{(t)}|\psi_2^{\rm prop})}{p(\psi_2^{(t)}|\psi_1^{(t)}) {\mathcal J}_{2|1}(\psi_2^{\rm prop}|\psi_2^{(t)})}, \label{eq:jointmh} \end{eqnarray} where $\psi_1^{\rm prop} \sim p(\psi_1)$ and $\psi_2^{\rm prop} \sim {\mathcal J}_{2|1}(\psi_2|\psi_2^{(t)})$. 
\begin{lemma} In the setting described in the previous paragraph, \begin{eqnarray} {\rm E}_{\pi}[r_{\rm iter}/r_{\rm joint}] \ge 1. \label{eq:eratio} \end{eqnarray} \end{lemma} The expectation in (\ref{eq:eratio}) is under the common stationary distribution, $\pi$, of both chains and is conditional on the random seed used at the start of each iteration. That is, since $(\psi_1^{(t+1/L)},\psi_2^{\rm prop})$ sampled under the iterated MH strategy and $(\psi_1^{\rm prop},\psi_2^{\rm prop})$ sampled under the joint MH strategy are drawn in exactly the same way, we assume these quantities are numerically equal. Expression~(\ref{eq:eratio}) asserts that while both strategies start with the same proposal---$(\psi_1^{(t+1/L)},\psi_2^{\rm prop})$ under the iterated MH strategy and $(\psi_1^{\rm prop},\psi_2^{\rm prop})$ under the joint---the iterated MH strategy is on average more likely to accept $\psi_2$. (The iterated MH strategy {\it always} accepts $\psi_1$.)
\begin{proof} With the numerical equality of the proposals, \begin{eqnarray} \frac{r_{\rm iter}}{r_{\rm joint}}=\frac{p(\psi_2^{(t)}|\psi_1^{(t)})}{p(\psi_2^{(t)}|\psi_1^{(t+1/L)})}, \label{eq:ratio} \end{eqnarray} where $(\psi_1^{(t)},\psi_2^{(t)},\psi_1^{(t+1/L)}) \sim \pi(\psi_1^{(t)},\psi_2^{(t)})\pi_1(\psi_1^{(t+1/L)})$ with $\pi_1$ the $\psi_1$ marginal distribution of $\pi$. Because $(\psi_1^{(t)},\psi_2^{(t)}) \sim \pi$ and $\pi=p$, the numerator of~(\ref{eq:ratio}) is the conditional density of $\psi_2$ evaluated at $\psi_2^{(t)}$. This is not true of the denominator because $\psi_2^{(t)}$ is independent of $\psi_1^{(t+1/L)}$. Thus, we might expect that the numerator of~(\ref{eq:ratio}) is typically larger than the denominator, as claimed in~(\ref{eq:eratio}). Recalling that $\pi=p$, substituting~(\ref{eq:ratio}) into~(\ref{eq:eratio}), and applying Jensen's inequality, we need only verify that \begin{eqnarray} \int {\rm log}\left[\pi(\psi_2|\psi_1)\right]\pi(\psi_1,\psi_2)d\psi_1 d\psi_2 \ge \int {\rm log}\left[\pi(\psi_2|\psi_1)\right]\pi(\psi_1)\pi(\psi_2)d\psi_1 d\psi_2. \label{eq:jensen} \end{eqnarray} Expression~(\ref{eq:jensen}) can be verified using a standard property of entropy along with the Kullback--Leibler (KL) divergence. In particular, because KL is nonnegative, \begin{eqnarray} \int {\rm log}\left[\pi(\psi_2)\right]\pi(\psi_1)\pi(\psi_2)d\psi_1 d\psi_2 \ge \int {\rm log}\left[\pi(\psi_2|\psi_1)\right]\pi(\psi_1)\pi(\psi_2)d\psi_1 d\psi_2. \label{eq:kl} \end{eqnarray} (The standard KL expression can be recovered by subtracting the right-hand side of~(\ref{eq:kl}) from the left.) But a standard property of entropy~\citep[e.g.,][]{ebra:etal:99} is \begin{eqnarray} \int {\rm log}\left[\pi(\psi_2|\psi_1)\right]\pi(\psi_1,\psi_2)d\psi_1 d\psi_2 \ge \int {\rm log}\left[\pi(\psi_2)\right]\pi(\psi_1)\pi(\psi_2)d\psi_1 d\psi_2. \label{eq:entropy} \end{eqnarray} Combining~(\ref{eq:kl}) and~(\ref{eq:entropy}) gives~(\ref{eq:jensen}) and hence the desired result. \QEDA \end{proof}
\begin{figure}
\caption{Autocorrelation functions of~$\psi_{2}$.}
\label{fig:auto}
\end{figure}
We now return to the bivariate Gaussian simulation of Section~\ref{sec:cespcg} to compare the computational performance of the iterated and joint MH strategies. Again we sample $\psi_1$ from its marginal distribution and use the same MH jumping rule to update $\psi_2$ according to its conditional distribution.
The iterated strategy is run with $L=7$, in order to return $\psi_2^{(t+1)}$ that is essentially independent of $\psi_2^{(t)}$. {The value of $L$ was set using an initial MH run of $5,000$ iterations and inspecting the autocorrelation function. The initial MH sampler delivers essentially independent draws after $7$ iterations, see Figure~\ref{fig:auto}(a). Of course, the computational cost per iteration of the iterated MH strategy depends on $L$. With $L=7$, each iteration requires eight univariate normal draws, whereas the joint strategy requires two. The autocorrelation functions of $\psi_{2}$ for both the iterated and joint MH strategies appear in Figure~\ref{fig:auto}(b)--(c) and show the clear computational advantage of the iterated MH strategy.} It returns essentially independent draws, whereas the joint MH strategy requires almost thirty iterations to obtain nearly independent draws. In practice, it is important to check that the value of $L$ used in Sampler Fragment~3 delivers samples that are essentially independent of the starting value of the iterated MH strategy. Fortunately, a simple diagnostic is available through the autocorrelation function of $\psi_2^{(t)}$ in Sampler Fragment~3, e.g., Figure~\ref{fig:auto}(b). If the lag one autocorrelation is not essentially zero, the run should be repeated with a larger value of $L$. If $\psi_2$ is updated elsewhere in the sampler, the efficacy of the iterated MH strategy can be isolated by computing the correlation between the initial input of $\psi_2$ and the final output after iteration of the MH update in Step~$K+1$ of Sampler Fragment~3. \subsection{Comparing the samplers with and without blocking} \label{sec:tbnb} To compare the blocking strategy in Sampler Fragment 7 with Sampler Fragment 6, we compute its acceptance rate, again suppressing the conditioning on $\psi_3$ for simplicity, as \begin{eqnarray} r_{\rm blocked}=\frac{p(\psi_1^{\rm prop},\psi_2^{\rm prop}){\mathcal J}_{1}(\psi_1^{(t)}|\psi_1^{\rm prop})p(\psi_2^{(t)}|\psi_1^{(t)})}{p(\psi_1^{(t)},\psi_2^{(t)}){\mathcal J}_{1}(\psi_1^{\rm prop}|\psi_1^{(t)})p(\psi_2^{\rm prop}|\psi_1^{\rm prop})}=\frac{p(\psi_1^{\rm prop}){\mathcal J}_{1}(\psi_1^{(t)}|\psi_1^{\rm prop})}{p(\psi_1^{(t)}){\mathcal J}_{1}(\psi_1^{\rm prop}|\psi_1^{(t)})}=r_{\rm not\;blocked}, \label{eq:block} \end{eqnarray} where $r_{\rm not\;blocked}$ is the acceptance probability of Step $K$ in Sampler Fragment 6, where there is no blocking. This means that Sampler Fragments 6 and 7 are identical in terms of their update of $\psi_1$, but whereas Sampler Fragment 6 updates $\psi_2$ with a new value at every iteration, blocking causes $\psi_2$ to only be updated if $\psi_1$ is updated. Thus, we expect the blocking strategy of Sampler Fragment 7 to reduce the efficiency of the sampler, and contrary to general advice regarding blocking \citep[e.g.,][]{liu:wong:kong:94}, the blocking strategy of Sampler Fragment 7 should be avoided. Together, the results of Sections~\ref{sec:cijmh} and~\ref{sec:tbnb} should be taken to discourage the combining of an MH update and a direct draw from a conditional distribution into a single MH update. \section{Examples} \label{sec:exa} \subsection{The simplest MH within PCG sampler} \label{sec:smhpcg} MH within PCG samplers are useful for fitting multi-component models in which part of the model must be fitted off-line. Consider a two-step sampler that updates $\psi_1$ and $\psi_2$ each in turn, but for computational reasons, we wish to update $\psi_1$ off-line. 
This may, for example, stem from the use of computer models that involve some costly evaluations in the update of $\psi_1$. As an illustration, we consider the problem of accounting for calibration uncertainty in high-energy astrophysics~\citep{lee:etal:11} using a special case of model~(\ref{eq:sa}) in Section~\ref{sec:saxa}: \begin{eqnarray} Y_{j}{\sim}{\rm Poisson}\{A_{j}\alpha{E_{j}}^{-\beta}\},\mbox{ for }j=1,\dots,n. \label{eq:cali} \end{eqnarray} Here we consider the case where the effective area vector $A=(A_{1},\dots,A_{n})$ is not known, and must be estimated along with $\alpha$ and $\beta$. In-space calibration and sophisticated modelling of the instrument result in a representative sample of possible $A$ values.~\citet{lee:etal:11} shows how a Principal Component Analysis (PCA) of this sample can be used to derive a degenerate multivariate normal prior for $A$. In particular, we can write $A(Z)=A_0+QZ$, where $A_0\,(n\times 1)$ and $Q\,(n\times q)$ are known, the components of the $(q\times 1)$ vector, $Z$, are independent standard normal variables, and $q\ll n$. Since $A$ is a deterministic function of $Z$, we can confine attention to the parameter $(Z, \alpha, \beta)$. With the expectation that $Y$ would be relatively noninformative for $A(Z)$ and to simplify computation,~\citet{lee:etal:11} suggests adopting $p(Z)p(\alpha,\beta|Z,Y)$ as the target distribution for statistical inference, an approximation that they call {\it Pragmatic Bayes}. Thus, the target can be sampled by first drawing $Z\sim p(Z)$ and then updating $\alpha$ and $\beta$ given $Z$. Using a uniform prior for $\alpha$ and $\beta$: $p(\alpha,\beta)\propto 1$, the complete conditional for $\alpha$ is in closed form, but $\beta$ requires MH. One might be tempted to implement the following improper MH within PCG sampler: {\begin{steps} \itemsep=0in \step $Z^{(t+1)}\sim{p(Z)}$, (Sampler 8) \step $\beta^{(t+1)}\sim{\mathcal{M}_{\beta|Y,\alpha,A(Z)}(\beta|\alpha^{(t)},\beta^{(t)},A(Z^{(t+1)}))}$, \step $\alpha^{(t+1)}\sim{p(\alpha|Y,\beta^{(t+1)},A(Z^{(t+1)}))}$. \end{steps}} \noindent This update of $\alpha$ and $\beta$ reflects the simple form of~(\ref{eq:cali}). Methods for fitting more general spectral models were considered by~\citet{lee:etal:11}. To derive an (approximately) proper sampler, we can remove the conditioning on $\alpha$ and implement the iterated MH strategy in Step~2: {\begin{steps} \itemsep=0in \step $Z^{(t+1)}\sim{p(Z_{j})}$, (Sampler 9) \step $\beta^{(t+l/L)}\sim{\mathcal{M}_{\beta|Y,A(Z)}(\beta|\beta^{(t+(l-1)/L)},A(Z^{(t+1)}))}$, for $l=1,\dots,L$, \step $\alpha^{(t+1)}\sim{p(\alpha|Y,\beta^{(t+1)},A(Z^{(t+1)}))}$. \end{steps}} \noindent As suggested in Section~\ref{sec:cijmh}, we determine $L$ using an initial MH run of $1,000$ iterations and inspecting its autocorrelation function. We found that the component MH sampler delivers essentially independent draws of $\beta$ after $20$ iterations and thus set $L=20$ in Step 2 of Sampler~9. We use a simulation study to illustrate the impropriety of Sampler 8. The data are simulated using $n=1078$ energy bins ranging from $0.225$ to $10.995$ keV, $q=7$, $Z_j=1.5\ (j=1,\dots,q)$, $\alpha=30$ and $\beta=1$. For each sampler, a chain of length 20,000 is run with a burnin of 10,000 from the starting values $Z=0$, $\alpha=1$ and $\beta=1$. 
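A schematic implementation of Sampler~9 may help fix ideas. The sketch below is illustrative Python of ours, not the code used for the study: the normal random-walk proposal for $\beta$, its scale, and the function names are assumptions, while the Gamma draw for $\alpha$ is the closed-form complete conditional implied by~(\ref{eq:cali}) under the uniform prior (a routine calculation).
\begin{verbatim}
import numpy as np

def log_p_beta(beta, y, A, E):
    """log p(beta | Y, A) with alpha integrated out under a flat prior
    (terms not involving beta are dropped)."""
    S = np.sum(A * E ** (-beta))
    return -beta * np.sum(y * np.log(E)) - (np.sum(y) + 1) * np.log(S)

def sampler9_iteration(beta, y, E, A0, Q, L, scale, rng):
    """One iteration of Sampler 9 for the calibration model with A(Z) = A0 + Q Z."""
    Z = rng.standard_normal(Q.shape[1])      # Step 1: Z ~ p(Z) = N(0, I_q)
    A = A0 + Q @ Z                           # effective area implied by Z
    for _ in range(L):                       # Step 2: iterate MH towards p(beta | Y, A(Z))
        prop = beta + scale * rng.standard_normal()
        if np.log(rng.uniform()) < log_p_beta(prop, y, A, E) - log_p_beta(beta, y, A, E):
            beta = prop
    # Step 3: alpha | Y, beta, A(Z) ~ Gamma(sum(Y) + 1, rate = sum_j A_j E_j^(-beta))
    alpha = rng.gamma(np.sum(y) + 1, 1.0 / np.sum(A * E ** (-beta)))
    return Z, alpha, beta
\end{verbatim}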
Figure~\ref{fig:scacal} shows that using $L=20$ in Sampler~9 is sufficiently large and that Sampler 8 underestimates both the correlation of $Z_2$ and $\beta$ and the marginal variability of $\alpha$ and (more dramatically) $\beta$.
\begin{figure}
\caption{Numerical evaluation of Samplers 8 and 9 using data simulated under model~(\ref{eq:cali}).}
\label{fig:scacal}
\end{figure}
While~\citet{lee:etal:11} recognized the hazard of Sampler 8 and proposed Sampler 9, there are other examples in the literature where MH is used within a PCG sampler incorrectly, resulting in improper samplers. \citet{liu:etal:09}, for example, proposed a sampler very similar to Sampler 8 in structure, but in a completely different setting. To predict the temperature of a particular device at a certain time point, the parameters describing the physical properties of the device were linked to the other parameters via a computationally expensive computer model. One of the approaches described in \citet{liu:etal:09} for sampling all the model parameters from their posterior distribution was to update the physical-property parameters from their prior distributions first, and then to sample the remaining parameters conditional on the prior-generated values of the physical-property parameters. This approach was expected to reduce the confounding between the parameters and thus improve the mixing of the Markov chain. Since the updates of the other parameters relied on MH, however, this approach is problematic, as illustrated in Section~\ref{sec:cespcg}. In analogy to Figure~\ref{fig:scacal}, \citet{liu:etal:09} showed that the marginal distributions of the other parameters sampled via this approach were more variable than under the full Bayesian analysis or some other approaches. Other examples of improper samplers that are similar in structure to Sampler 8 were proposed in \citet{lunn:etal:09}, \citet{mccan:etal:10}, and even the popular WinBUGS package (Spiegelhalter, Thomas, Best and Lunn 2003); see \citet{wood:etal:12} for discussion.
\subsection{Spectral analysis with narrow lines in high-energy astrophysics} \label{sec:sanlha}
\begin{figure}
\caption{Samplers 10 and 11. Sampler 10 is the proper MH within PCG sampler for the spectral model~(\ref{eq:sesa}).}
\label{fig:standandproper}
\end{figure}
\begin{figure}
\caption{Three-phase framework used to derive Sampler~11 from its parent MH within Gibbs sampler. The parent sampler appears in (a). The conditioning in Steps 2, 3, 5, and 6 is reduced in (b) and the steps are permuted in (c) to allow redundant draws of $X_L^{\star}$ to be trimmed.}
\label{fig:spectralmodel111}
\end{figure}
\begin{figure}
\caption{Comparing Samplers 10 and 11 using data simulated under model~(\ref{eq:sesa}).}
\label{fig:conspectral}
\end{figure}
\begin{figure}
\caption{Two samplers for fitting~(\ref{eq:factor}).}
\label{fig:parentproper}
\end{figure}
\begin{figure}
\caption{Using the three-phase framework to derive Sampler~13 from its parent Gibbs sampler, i.e., Sampler 12.
The parent Gibbs sampler is in (a); the conditioning in Steps 3--6 is reduced in (b); and the steps are permuted in (c) to allow redundant draws of $Z^{\star}$ to be trimmed.}
\label{fig:factormodel}
\end{figure}
\begin{figure}
\caption{Comparing Samplers 12 and 13 using data simulated under the factor analysis model~(\ref{eq:factor}).}
\label{fig:refactor}
\end{figure}
Section~\ref{sec:bnb} uses a simulation study to illustrate a potential problem with Sampler Fragment 7, that is, how blocking an MH update and a direct draw from a conditional distribution can result in an improper sampler. Here we use the same simulation study to illustrate the improved convergence properties of three proper MH within PCG samplers relative to their parent Gibbs sampler. The only difference is that for each sampler here, a chain of 20,000 iterations is run with a burnin of 10,000 iterations. As pointed out in Section~\ref{sec:saxa}, the standard Gibbs sampler for the spectral model~(\ref{eq:sesa}) breaks down since the resulting subchain for $\mu$ does not move from its starting value~\citep{park:vand:09}. To solve this problem, we sample $\mu$ without conditioning on $X_{L}$ and obtain an MH within PCG sampler, i.e., Sampler~10, given in the first panel of Figure~\ref{fig:standandproper}. Sampler~6 in Figure~\ref{fig:spectralmodel67} is another MH within PCG sampler but with a higher degree of partial collapsing, by which we mean that more quantities are marginalized out in Sampler~6 than in Sampler~10. Not only does Sampler 6 update $\mu$ without conditioning on $X_{L}$, but it also marginalizes $\alpha$ out of its first three steps, whereas Sampler 10 does not remove $\alpha$ from any step. Sampler 11 attempts to further improve Sampler 6 by blocking the MH updates of $\beta$ and $\phi$; see the second panel of Figure~\ref{fig:standandproper}. Unlike Sampler 7, which also blocks two steps of Sampler 6, Sampler 11 is proper; see Figure~\ref{fig:spectralmodel111}. Thus Samplers 6, 10 and 11 are all proper MH within PCG samplers with the common parent Gibbs sampler given in Figure~\ref{fig:spectralmodel61}(a), but with different degrees of partial collapsing. (The derivation of Sampler 6 appears in Figure~\ref{fig:spectralmodel61}; that of Sampler 10 is omitted to save space.) The convergence characteristics of $\alpha$, $\beta$, and $\phi$ using Samplers 10 and 11 are compared in Figure~\ref{fig:conspectral}; $\gamma$ and $\mu$ converge well for all three samplers. All three MH within PCG samplers outperform the parent Gibbs sampler, since the latter does not converge to the target. Sampler 11 performs much better than Sampler 10 in terms of the mixing and autocorrelations of $\alpha$, $\beta$, and $\phi$. The performance of Sampler 6 is better than that of Sampler 10, but not as good as that of Sampler 11. (To save space, the results for the intermediate Sampler 6 are omitted from Figure~\ref{fig:conspectral}.) These results show that proper MH within PCG samplers outperform their parent Gibbs sampler in computational efficiency, and that a higher degree of partial collapsing can improve convergence even further.
\subsection{Relating ECME with Newton-type updates to MH within PCG samplers} \label{sec:enu}
The Expectation-Maximization (EM) algorithm is a frequently used technique for computing maximum likelihood or maximum a posteriori estimates.
The Expectation/Conditional Maximization (ECM) algorithm~\citep{meng:rubi:93} extends the EM algorithm by replacing the M-step of each EM iteration with a sequence of CM-steps, each of which maximizes the \emph{constrained} expected complete-data loglikelihood function.~\citet{liu:rubi:94} further generalized ECM with the Expectation/Conditional Maximization Either (ECME) algorithm by replacing some of its CM-steps with steps that maximize the corresponding constrained \emph{actual} likelihood function. ECME can converge substantially faster than either EM or ECM while maintaining the stable monotone convergence and basic simplicity of its parent algorithms. The Gibbs sampler can be viewed as the stochastic counterpart of ECM, see~\citet{vand:meng:10}. PCG extends Gibbs sampling in a manner analogous to ECME's extension of ECM: both PCG and ECME reduce conditioning in a subset of their parameter updates \citep{park:vand:09}. The analogy is not perfect, however. In ECME, for example, the CM-steps maximizing the constrained actual likelihood must be last to guarantee monotone convergence \citep{meng:vand:97}. On the other hand, with PCG, the corresponding partially collapsed steps must be the first to guarantee a proper sampler. For ECME, numerical methods, such as Newton-Raphson, may be used to maximize the actual likelihood if no closed-form solution is available. In the context of PCG samplers, these Newton-Raphson steps can often be implemented using MH updates. Here we illustrate how this is done by using an ECME algorithm developed for a factor analysis model by \citet{liu:rubi:98}. They derived EM and ECME algorithms and showed that ECME with Newton-type updates converges more quickly than EM. Analogously, it is natural to expect that when fitting this model under a Bayesian framework, a proper MH within PCG sampler will be more efficient than its parent Gibbs sampler. \citet{liu:rubi:98} considered the model, \begin{eqnarray} Y_{i}{\sim}{\rm N}_{p}\Big[Z_{i}\beta, \Sigma={\rm Diag}({\sigma}_{1}^{2},\dots,{\sigma}_{p}^{2})\Big],\mbox{ for }i=1,\dots,n, \label{eq:factor} \end{eqnarray} where $Y_{i}$ is the $(1\times{p})$ vector for observation $i$, $Z_{i}$ is the $(1\times{q})$ vector of the $q$ factors, ${\sigma}_{j}^{2}$ is component $j$ of the diagonal variance-covariance matrix, and $\beta$ is the $(q\times p)$ matrix of factor loadings. We use $\beta_{j}$ to represent column $j$ of $\beta$ and set $Y={\left(Y_1^{\rm T},\dots,Y_n^{\rm T}\right)}^{\rm T}$ and $Z={\left(Z_1^{\rm T},\dots,Z_n^{\rm T}\right)}^{\rm T}$. We use ${\rm N}_{q}(0, I)$ as the prior for $Z_{i}\ (i=1,\dots,n)$ and specify noninformative priors for $\beta$ and $\Sigma$, that is, $p(\sigma_j^2)=\mbox{Inv-Gamma}(0.01,0.01)$ and $p({\beta}_{j})={\rm N}_q\left[0,V={\rm Diag}(100,\dots,100)\right]$ $(j=1,\dots,p)$. \citet{gho:dun:09} discuss this model and its priors in detail. Sampler 12 (see top panel of Figure~\ref{fig:parentproper}) is a standard Gibbs sampler in which each complete conditional distribution can be sampled directly. To improve its convergence, we construct a proper MH within PCG sampler, Sampler 13, which is also given in Figure~\ref{fig:parentproper}. Because $Z$ is highly correlated with $\sigma_2^2,\dots,\sigma_5^2$, Sampler 13 updates $\sigma_2^2,\dots,\sigma_5^2$ without conditioning on $Z$. Since $\sigma_1^2$ converges well with the standard Gibbs sampler in the simulation described below, we do not alter its update in Sampler 13. 
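Schematically, one iteration of Sampler 13 then has the following structure (an illustrative Python outline; the helper functions are placeholders of ours, and the conditional distributions they draw from are spelled out in Appendix B2).
\begin{verbatim}
def sampler13_iteration(Y, Z, beta, sigma2,
                        draw_sigma1_sq, mh_sigma_sq, draw_Z, draw_beta, rng):
    """One iteration of Sampler 13; the conditional samplers are supplied as callables."""
    # Step 1: conjugate inverse-gamma draw of sigma_1^2 from its complete conditional.
    sigma2[0] = draw_sigma1_sq(Y, Z, beta, rng)
    # Steps 2-5: MH updates of sigma_j^2 targeting p(sigma_j^2 | Y, beta, sigma_{-j}^2),
    # i.e. with the factors Z integrated out; these are the reduced steps that need MH.
    for j in range(1, 5):
        sigma2[j] = mh_sigma_sq(j, Y, beta, sigma2, rng)
    # Step 6: draw the factors Z_i from their multivariate normal complete conditionals.
    Z = draw_Z(Y, beta, sigma2, rng)
    # Step 7: draw the loadings beta_j from their multivariate normal complete conditionals.
    beta = draw_beta(Y, Z, sigma2, rng)
    return Z, beta, sigma2
\end{verbatim}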
These reduced updates of $\sigma_2^2,\dots,\sigma_5^2$ require MH steps. The derivation of Sampler 13 from its parent Gibbs sampler, i.e., Sampler 12, using the three-phase framework appears in Figure~\ref{fig:factormodel}. We use a simulation study to illustrate the improved convergence of the MH within PCG sampler over its parent Gibbs sampler. In particular, we set $p=5$, $q=2$, and $n=100$; ${\sigma}_{j}^{2}\ (j=1,\dots,5)$ are generated from Inv-Gamma$(1,0.25)$ and ${\beta}_{hj}\ (h=1,2;j=1,\dots,5)$ from ${\rm N}(0,3^2)$. We run 20,000 iterations for each sampler with a burnin of 10,000 using the same starting values ($Z_{i}={[1,1]}^{\rm T}$, $\beta_{hj}=1$, and ${\sigma}_{j}^{2}=1$). Figure~\ref{fig:refactor} compares Samplers 12 and 13 in terms of mixing, autocorrelation, and density estimation of ${\sigma}_{2}^{2}$ and ${\sigma}_{3}^{2}$; the first two columns correspond to Sampler 12, and the last two columns correspond to Sampler 13; ${\sigma}_{1}^{2}$ converges well for both samplers, and ${\sigma}_{4}^{2}$ and ${\sigma}_{5}^{2}$ behave similarly to ${\sigma}_{2}^{2}$ and ${\sigma}_{3}^{2}$. The computational advantage of Sampler 13 is evident. More importantly, the MH within PCG sampler delivers a much more trustworthy estimate of the marginal posterior distributions, as illustrated in the histograms in Figure~\ref{fig:refactor}. We repeated the simulation with $p=50$ and $q=30$ and found that Sampler~13 again outperformed Sampler~12 in a manner similar to what is reported in Figure~\ref{fig:refactor}. When run with $p=50$ and $q=2$, however, both samplers delivered nearly uncorrelated draws.
\section{Discussion} \label{sec:disc}
Since its introduction in 2008, the PCG sampler has been deployed to improve the convergence properties of numerous Gibbs-type samplers in a variety of applied settings. As with ordinary Gibbs samplers, MH updates are sometimes required within PCG samplers. Ensuring that the target stationary distribution is maintained in this situation involves subtleties that do not arise in ordinary MH within Gibbs samplers. This has led to the proposal of a number of improper samplers in the literature. This article elucidates these subtleties, offers a strategy for guaranteeing that the target stationary distribution is maintained, and provides advice as to how best to implement MH within PCG samplers. Some of this advice applies equally to ordinary MH within Gibbs samplers. It is commonly understood, for example, that blocking steps within a Gibbs sampler should improve its convergence. We find, however, that this may not be true if MH is involved. {Reducing conditioning in one or more steps of a Gibbs sampler as prescribed by PCG can only improve convergence. If MH is required to implement the reduced steps, however, the overall performance of the algorithm may deteriorate, especially if a poor choice is made for the MH jumping rule. Thus, there is a natural trade-off between the computational complexity of MH and the reduced correlation afforded by partial collapsing. Generally speaking, some trial and error may be needed to negotiate this trade-off. In practice we often start with an MH within Gibbs sampler, which already involves MH and can be improved by partial collapsing without any added complexity.} We expect our strategies to extend the application of PCG samplers in practice and to provide researchers with additional tools to improve the convergence of Gibbs-type samplers.
\noindent {\bf Acknowledgements:} The authors thank Taeyoung Park for helpful comments on a preliminary version of the paper. They also gratefully acknowledge funding for this project partially provided by the NSF (DMS-12-08791), the Royal Society (Wolfson Merit Award) and the European Commission (Marie-Curie Career Integration Grant). \spacingset{1.4} \spacingset{1.45} \appendix \counterwithin{figure}{section} \begin{center} {\Large\bf ONLINE SUPPLEMENT: APPENDIX} \end{center} \section{Stationary Distribution of Sampler Fragment 6} \label{sec:figs} {Figure~\ref{fig:ex3} illustrates how the three-phase framework can be used to verify the stationary distribution of Sampler Fragment 6 of Section~\ref{sec:bnb}, with $\psi_3$ sampled from its complete conditional distribution either before or after Steps~$K$ and $K+1$.} \vskip 0.4in \begin{figure} \caption{Three-phase framework to derive Sampler~Fragment~6 in Section \ref{sec:bnb} \label{fig:ex3} \end{figure} \section{Details of the Steps in the Gibbs-type Samplers} This section consists of two parts. The first describes details of sampling steps of the parent Gibbs sampler and proper MH within PCG samplers, i.e., Samplers 6, 10 and 11, for the spectral model~(\ref{eq:sesa}). The second describes the steps of Samplers 12 and 13 which fit the factor analysis model~(\ref{eq:factor}). \subsection*{B1. Details of the steps in the Gibbs-type samplers based on model~(\ref{eq:sesa})} \label{ap:specsteps} Here we assume $X$ is directly observed and we can ignore (\ref{eq:nusesa}) -- (\ref{eq:sa}). With noninformative uniform prior distributions for all of the parameters, the posterior distribution of the parameters $\alpha$, $\beta$, $\gamma$, $\mu$, and $\phi$ under the spectral model~(\ref{eq:sesa}) is $$p(\alpha,\beta,\gamma,\mu,\phi|X)\propto\displaystyle{\prod_{i=1}^n {\left[\alpha({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right]}^{X_{i}}{\rm exp}\left\{ -\alpha\sum_{i=1}^n({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right\}}.\eqno{({\rm B}1.1)}$$ \noindent The joint posterior distribution of the parameters and augmented data $X_L$ is $$ \begin{array}{ll} p(\alpha,\beta,\gamma,\mu,\phi,X_{L}|X)\propto & \displaystyle{\alpha^{\sum_{i=1}^n X_{i}}e^{-\phi\sum_{i=1}^n (X_{i}/E_{i})}\prod_{i=1}^n E_{i}^{-\beta (X_i-X_{iL})} \gamma^{\sum_{i=1}^n X_{iL}}\times}\\ &\displaystyle{\prod_{i=1}^n{\left\{I(i=\mu)\right\}}^{X_{iL}}{\rm exp}\left\{-\alpha\sum_{i=1}^n({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right\}}. \end{array}\eqno{({\rm B}1.2)}$$ Thus the steps of the parent MH within Gibbs sampler in Figure \ref{fig:spectralmodel61}(a) or \ref{fig:spectralmodel111}(a) are \begin{steps} \itemsep=0in \step Sample $X_{iL}$ from ${\rm Binomial}\displaystyle{\left\{X_i,\frac{{\gamma}I\{i=\mu\}}{{E_{i}}^{-\beta}+{\gamma}I\{i=\mu\}}\right\}}$, for $i=1,\dots,n$, \step Sample $\alpha$ from ${\rm Gamma}\displaystyle{\left\{\sum_{i=1}^n X_{i}+1,\sum_{i=1}^n({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right\}}$, \step Use MH to sample $\beta$ from $p(\beta|X,X_{L},\alpha,\gamma,\mu,\phi)\propto p(\alpha,\beta,\gamma,\mu,\phi,X_{L}|X)$, \step Sample $\gamma$ from ${\rm Gamma}\displaystyle{\left\{\sum_{i=1}^n X_{iL}+1,\alpha\sum_{i=1}^n I\{i=\mu\}e^{-\phi/E_{i}}\right\}}$, \step Use MH to sample $\mu$ from $p(\mu|X,X_L,\alpha,\beta,\gamma,\phi)\propto p(\alpha,\beta,\gamma,\mu,\phi,X_L|X)$, \step Use MH to sample $\phi$ from $p(\phi|X,X_{L},\alpha,\beta,\gamma,\mu)\propto p(\alpha,\beta,\gamma,\mu,\phi,X_{L}|X)$. 
\end{steps} \noindent The steps of Sampler 10 are \begin{steps} \itemsep=0in \step Use MH to sample $\mu$ from $p(\mu|X,\alpha,\beta,\gamma,\phi)\propto p(\alpha,\beta,\gamma,\mu,\phi|X)$, \step Sample $X_{iL}$ from ${\rm Binomial}\displaystyle{\left\{X_i,\frac{{\gamma}I\{i=\mu\}}{{E_{i}}^{-\beta}+{\gamma}I\{i=\mu\}}\right\}}$, for $i=1,\dots,n$, \step Sample $\alpha$ from ${\rm Gamma}\displaystyle{\left\{\sum_{i=1}^n X_{i}+1,\sum_{i=1}^n({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right\}}$, \step Use MH to sample $\beta$ from $p(\beta|X,X_{L},\alpha,\gamma,\mu,\phi)\propto p(\alpha,\beta,\gamma,\mu,\phi,X_{L}|X)$, \step Sample $\gamma$ from ${\rm Gamma}\displaystyle{\left\{\sum_{i=1}^n X_{iL}+1,\alpha\sum_{i=1}^n I\{i=\mu\}e^{-\phi/E_{i}}\right\}}$, \step Use MH to sample $\phi$ from $p(\phi|X,X_{L},\alpha,\beta,\gamma,\mu)\propto p(\alpha,\beta,\gamma,\mu,\phi,X_{L}|X)$. \end{steps} \noindent Integrating $({\rm B}1.1)$ over $\alpha$, we have, $$\begin{array}{ll}p(\beta,\gamma,\mu,\phi|X)\propto&\displaystyle{\prod_{i=1}^n {\left[({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right]}^{X_{i}}\times}\\&\displaystyle{{\left[\sum_{i=1}^n({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right]}^{-(\sum_{i=1}^n X_{i}+1)}}.\end{array}\eqno{({\rm B}1.3)}$$ \noindent Hence, the steps of Sampler 6 are \begin{steps} \itemsep=0in \step Use MH to sample $\mu$ from $p(\mu|X,\beta,\gamma,\phi)\propto p(\beta,\gamma,\mu,\phi|X)$, \step Use MH to sample $\phi$ from $p(\phi|X,\beta,\gamma,\mu)\propto p(\beta,\gamma,\mu,\phi|X)$, \step Use MH to sample $\beta$ from $p(\beta|X,\gamma,\mu,\phi)\propto p(\beta,\gamma,\mu,\phi|X)$, \step Sample $\alpha$ from ${\rm Gamma}\displaystyle{\left\{\sum_{i=1}^n X_{i}+1,\sum_{i=1}^n({E_{i}}^{-\beta}+{\gamma}I\{i=\mu\})e^{-\phi/E_{i}}\right\}}$, \step Sample $X_{iL}$ from ${\rm Bin}\displaystyle{\left\{X_i,\frac{{\gamma}I\{i=\mu\}}{{E_{i}}^{-\beta}+{\gamma}I\{i=\mu\}}\right\}}$, for $i=1,\dots,n$, \step Sample $\gamma$ from ${\rm Gamma}\displaystyle{\left\{\sum_{i=1}^n X_{iL}+1,\alpha\sum_{i=1}^n I\{i=\mu\}e^{-\phi/E_{i}}\right\}}$. \end{steps} \noindent The steps of Sampler 11 are almost the same as Sampler 6, except Steps 2 and 3 are combined into one step. That is, we use MH to sample $(\beta,\phi)$ from $p(\beta,\phi|X,\gamma,\mu)\propto p(\beta,\gamma,\mu,\phi|X)$. We use a uniform distribution on $\{1,\dots,n\}$ as the jumping rule when updating $\mu$. When updating either $\beta$ or $\phi$ via MH, we use a normal distribution centered at the current draw of the parameter for the jumping rule; the variance of the jumping rule is adjusted to obtain an acceptance rate of around $40\%$. Analogously, when sampling $\beta$ and $\phi$ jointly via MH, the jumping rule is a bivariate normal distribution centered at the current draw with variance-covariance matrix adjusted to obtain an acceptance rate of around $20\%$. \subsection*{B2. 
Details of the steps in the Gibbs-type samplers based on model~(\ref{eq:factor})} \label{ap:factsteps} With priors $p(\sigma_j^2)=\mbox{Inv-Gamma}(a,b)$ and $p(\beta_j)={\rm N}_2\left(0,V\right)$ $(j=1,\dots,5)$, the posterior distribution of the parameters $Z$, $\beta$, and $\Sigma$ under the factor analysis model~(\ref{eq:factor}) is $$\begin{array}{lll}p(Z,\beta,\Sigma|Y)&\propto&\displaystyle{{\left|\Sigma\right|}^{-n/2}{\left(\prod_{j=1}^5 \sigma_j^{-2(a+1)}\right)}{\rm exp}\left\{ -\frac{1}{2}\sum_{i=1}^n\left[(Y_i-Z_i\beta)\Sigma^{-1}{(Y_i-Z_i\beta)}^{\rm T} +Z_i {Z_i}^{\rm T}\right]\right\}}\\&&{\rm exp}\displaystyle{\left\{-\frac{1}{2}\sum_{j=1}^5 \beta_j^{\rm T} V^{-1} \beta_j-b\sum_{j=1}^5 \sigma_j^{\rm -2}\right\}}.\end{array}\eqno{({\rm B}2.1)}$$ Thus the steps of Sampler 12 are \begin{description} \itemsep=0in \item[Step $1$:] Sample $Z_{i}$ from $\displaystyle{{\rm N}_2\left[{(I_2+\beta\Sigma^{-1}\beta^{\rm T})}^{-1}\beta\Sigma^{-1}Y_i^{\rm T},{(I_2+\beta\Sigma^{-1}\beta^{\rm T})}^{-1}\right]}$, for $i=1,\dots,100$, \item[Step $2$:] Sample $\sigma_{j}^{2}$ from Inv-Gamma$\displaystyle{\left\{a+\frac{n}{2},b+\frac{1}{2}\sum_{i=1}^n {(Y_{ij}-\beta_j^{\rm T} Z_i^{\rm T})}^2\right\}}$, for $j=1,\dots,5$, \item[Step $3$:] Sample $\beta_{j}$ from $\displaystyle{{\rm N}_2\left[({V^{-1}+Z^{\rm T}Z/\sigma_j^2)}^{-1}Z^{\rm T} Y_{.j}/\sigma_j^2,{(V^{-1}+Z^{\rm T}Z/\sigma_j^2)}^{-1}\right]}$, for $j=1,\dots,5$, \end{description} where $Y_{.j}$ represents the $j$th column of $Y$. Integrating $({\rm B}2.1)$ over $Z$, we have, $$\begin{array}{lll}p(\beta,\Sigma|Y)&\propto&\displaystyle{{\left|I_2+\beta\Sigma^{-1}\beta^{\rm T}\right|}^{-n/2}{\left|\Sigma\right|}^{-n/2}{\rm exp}\left\{ -\frac{1}{2}\sum_{i=1}^n\left[Y_i(\Sigma^{-1}-\Sigma^{-1}\beta^{\rm T}{(I_2+\beta\Sigma^{-1}\beta^{\rm T})}^{-1}\beta\Sigma^{-1})Y_i^{\rm T} \right]\right\}}\\&&\displaystyle{{\left(\prod_{j=1}^5 \sigma_j^{-2(a+1)}\right)}}{\rm exp}\displaystyle{\left\{-\frac{1}{2}\sum_{j=1}^5 \beta_j^{\rm T} V^{-1} \beta_j-b\sum_{j=1}^5 \sigma_j^{\rm -2}\right\}}.\end{array}\eqno{({\rm B}2.2)}$$ \noindent Hence, the steps of Sampler 13 are \begin{description} \itemsep=0in \item[Step $1$:] Sample $\sigma_{1}^{2}$ from Inv-Gamma$\displaystyle{\left\{a+\frac{n}{2},b+\frac{1}{2}\sum_{i=1}^n {(Y_{i1}-\beta_1^{\rm T} Z_i^{\rm T})}^2\right\}}$, \item[Step $j$:] Use MH to sample $\sigma_{j}^{2}$ from $p(\sigma_{j}^2|Y,\beta,\sigma_{1}^{2},\dots,\sigma_{j-1}^{2},\sigma_{j+1}^{2},\dots,\sigma_{5}^{2})\propto p(\beta,\Sigma|Y)$, for $j=2,\dots,5$, \item[Step $6$:] Sample $Z_{i}$ from $\displaystyle{{\rm N}_2\left[{(I_2+\beta\Sigma^{-1}\beta^{\rm T})}^{-1}\beta\Sigma^{-1}Y_i^{\rm T},{(I_2+\beta\Sigma^{-1}\beta^{\rm T})}^{-1}\right]}$, for $i=1,\dots,100$, \item[Step $7$:] Sample $\beta_{j}$ from $\displaystyle{{\rm N}_2\left[({V^{-1}+Z^{\rm T}Z/\sigma_j^2)}^{-1}Z^{\rm T} Y_{.j}/\sigma_j^2,{(V^{-1}+Z^{\rm T}Z/\sigma_j^2)}^{-1}\right]}$, for $j=1,\dots,5$. \end{description} When updating $\sigma_j^2$ $(j=2,\dots,5)$ via MH, we use a log-normal distribution centered at the log of the current value of the parameter for the jumping rule; the variance is adjusted to obtain an acceptance rate of around $40\%$. \end{document}
\begin{document}
\renewcommand\thesubsection{\arabic{subsection}}
\pagespan{69}{\pageref{page:lastpage}}
\title{Recurrence Relations for Elliptic Sequences\,: every Somos~$4$ is a Somos~$k$}
\author{Alfred J. van der Poorten}
\address{Centre for Number Theory Research, 1 Bimbil Place, Killara, Sydney, NSW 2071, Australia}
\email{[email protected] (Alf van der Poorten)}
\author{Christine S. Swart}
\address{Department of Mathematics, Statistics, and Computer Science\\ The University of Illinois at Chicago\\ Chicago, IL 60607-7045 USA}
\email{[email protected] (Christine Swart)}
\thanks{The first author was supported in part by a grant from the Australian Research Council.}
\subjclass[2000]{Primary: 11B83, 11G05; Secondary: 11A55, 14H05, 14H52}
\date{\today}
\keywords{elliptic curve, Somos sequence, elliptic divisibility sequence}
\begin{abstract}
In his `Memoir on Elliptic Divisibility Sequences', Morgan Ward's definition of the said sequences has the remarkable feature that it does not become at all clear until deep into the paper that there exist nontrivial such sequences. Even then, Ward's proof of coherence of his definition relies on displaying a sequence of values of quotients of Weierstra\ss\ $\sigma$-functions. We give a direct proof of coherence and show, rather more generally, that a sequence defined by a so-called Somos relation of gap~$4$ always also is given by a three-term Somos relation of all larger gaps $5$, $6$, $7$, $\ldots\,$.
\end{abstract}
\maketitle
\pagestyle{myheadings}\markboth{Alf van der Poorten and Christine Swart}{Recurrence relations for elliptic sequences}
\subsection{Morgan Ward's elliptic sequences}
In his `Memoir on elliptic divisibility sequences'~\cite{Wa}, Morgan Ward in effect (thus, for all practical purposes) defines anti\-symmetric double-sided sequences $(W_h)$, that is with $W_{-h}=-W_h$, by requiring that, for all integers $h$, $m$, and $n$,
\begin{equation} \label{eq:redundant'}
W_{h-m}W_{h+m}W_n^2+W_{n-h}W_{n+h}W_m^2 +W_{m-n}W_{m+n}W_h^2=0\,.\tag{\ref{eq:redundant}$'$}
\end{equation}
If one dislikes double-sided sequences then one rewrites \eqref{eq:redundant'} less elegantly as
\begin{equation} \label{eq:redundant}
W_{h-m}W_{h+m}W_n^2=W_{h-n}W_{h+n}W_m^2 - W_{m-n}W_{m+n}W_h^2\,,
\end{equation}
just for $h\ge m\ge n$. In any case, \eqref{eq:redundant} seems more dramatic than it is. An easy exercise confirms that if $W_1=1$ then~\eqref{eq:redundant} is equivalent to just
\begin{equation} \label{eq:general}
W_{h-m}W_{h+m}=W_m^2 W_{h-1}W_{h+1} -W_{m-1}W_{m+1}W_h^2
\end{equation}
for all integers $h\ge m$. Indeed, \eqref{eq:general} is just a special case of \eqref{eq:redundant}. However, given~\eqref{eq:general}, obvious substitutions in \eqref{eq:redundant} quickly show one may return from \eqref{eq:general} to the apparently more general \eqref{eq:redundant}. But there is a drama here. The recurrence relation
$$ W_{h-2}W_{h+2}=W_2^2 W_{h-1}W_{h+1} -W_{1}W_{3}W_h^2\,, $$
together with non-zero initial values $W_1=1$, $W_2$, $W_3$, $W_4$, already suffices to produce the complete sequence!
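For instance, the short script below (an illustrative aside of ours; the initial values $W_1=W_2=1$, $W_3=-1$, $W_4=1$ give a classical integer elliptic divisibility sequence and are chosen only for illustration) generates terms from the gap~$4$ relation alone and then spot-checks \eqref{eq:general} for several larger values of $m$.
\begin{verbatim}
from fractions import Fraction

# Build W_0, ..., W_N from W_1, ..., W_4 using only the m = 2 relation
#   W_{h-2} W_{h+2} = W_2^2 W_{h-1} W_{h+1} - W_1 W_3 W_h^2 .
W = {0: Fraction(0), 1: Fraction(1), 2: Fraction(1), 3: Fraction(-1), 4: Fraction(1)}
N = 20
for h in range(3, N - 1):   # produces W_5, ..., W_N
    W[h + 2] = (W[2] ** 2 * W[h - 1] * W[h + 1] - W[1] * W[3] * W[h] ** 2) / W[h - 2]

# Spot-check W_{h-m} W_{h+m} = W_m^2 W_{h-1} W_{h+1} - W_{m-1} W_{m+1} W_h^2
# for larger gaps (valid here since W_1 = 1).
for m in range(2, 7):
    for h in range(m, N - m + 1):
        lhs = W[h - m] * W[h + m]
        rhs = W[m] ** 2 * W[h - 1] * W[h + 1] - W[m - 1] * W[m + 1] * W[h] ** 2
        assert lhs == rhs, (m, h)

print([int(W[h]) for h in range(13)])
# [0, 1, 1, -1, 1, 2, -1, -3, -5, 7, -4, -23, 29]
\end{verbatim}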
Thus {\smash{\overline{e}}}qref{eq:general} for all $m$ is entailed by its special case m = 2. We could show directly that the case $m=3$ follows, see a remark in \cite{Sw}, or a footnote in the corresponding discussion in \cite{169}, but the case $m=4$, if done asymetrically as in subsequent remarks of~\cite{169}, plainly was not worth the effort. Plan B, to look it up, fared little better. In her thesis \cite{Shi}, Rachel Shipsey shyly refers the reader back to Morgan Ward's memoir \cite{Wa}; but at first glance Ward seems not to comment on the matter at all, having {\smash{\overline{e}}}mph{defined} his sequences by {\smash{\overline{e}}}qref{eq:general}. Of course, Ward does comment. The issue is whether {\smash{\overline{e}}}qref{eq:general} is {\smash{\overline{e}}}mph{coherent}: do different $m$ yield the one sequence? Ward notes that if $\sigma$ is the Weierstra\ss\ $\sigma$-function then a sequence $\bigl(\sigma(hu)/\sigma(u)^{h^2}\,\bigr)$ satisfies {\smash{\overline{e}}}qref{eq:general} for all $m$. He then painfully shows there is a related cubic curve for every choice of initial values. Whatever, a much more direct argument would be much more satisfying: we supply such an argument below. \subsection{Somos sequences} Some years ago, Michael Somos, see~\cite{Somos}, {\smash{\overline{e}}}mph{inter~alia} asked for the inner meaning of the behaviour of the sequences $(C_h)=(\dots\,$, $2$, $1$, $1$, $1$, $1$, $2$, $3$, $7$, $23$, $59$, $\ldots\,)$ defined by $C_{h-2}C_{h+2}=C_{h-1}C_{h+1}+C_{h}^2$; and of $(B_h)= (\dots\,$, $2$, $1$, $1$, $1$, $1$, $1$, $2$, $3$, $5$, $11$, $37$, $83$, $\ldots\,)$ defined by $B_{h-2}B_{h+3}=B_{h-1}B_{h+2}+B_{h}B_{h+1}$: that is, the sequences $4$-Somos and $5$-Somos \cite[A006720 and A006721]{Sl}. More generally, of course, one may both vary the `initial' values and coefficients and generalise the `gap' to $2m$ or $2m+1$ by studying Somos~$2m$, respectively Somos~$2m+1$, namely sequences satisfying the respective recursions $$ D_{h-m}D_{h+m}=\sum_{i=1}^m \kappa_iD_{h-m+i}D_{h+m-i} \text{\ or\ } D_{h-m}D_{h+m+1}=\sum_{i=1}^m \kappa_iD_{h-m+i}D_{h+m-i}\,. $$ Direct, but somewhat painful, attacks allow one to prove that in fact a Somos~$4$ always is a Somos~$5$, Somos~$6$, and Somos~$8$. For example, see \cite{169}, $4$-Somos satisfies all of \begin{align*} C_{h-2}C_{h+3}&=-C_{h-1}C_{h+2}+5C_{h}C_{h+1}\,,\cr C_{h-3}C_{h+3}&=C_{h-1}C_{h+1}+5C_{h}^2\,,\cr C_{h-4}C_{h+4}&=25C_{h-1}C_{h+1}-4C_{h}^2\,. {\smash{\overline{e}}}nd{align*} In the light of such results one feels some confidence that in general a Somos~$4$ is a Somos~$k$, for all $k=5$, $6$, $7$, $\ldots\,$: indeed, it is that which we show below. \subsection{Elliptic sequences} Given a model (thus, an equation) \begin{equation}\label{eq:equation} \mathcal E: y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x {\smash{\overline{e}}}nd{equation} for an elliptic curve containing the point $S=(0,0)$, and some point $M$, denote the $x$-co-ordinate $x(M+hS)$ of $M+hS$ by $x_{M+hS}=-e_h$. Then straightforward computations lead to several remarkable, and remarkably useful, identities. \begin{proposition}\label{pr:basic} There are constants $\alpha$, $\beta$, $\gamma$, depending on the model $\mathcal E$ but independent of the integer parameter $h$, {\smash{\overline{e}}}mph{and of} `the translation' $M$, so that \begin{gather}\label{eq:1} e_{h-1}e_{h}^2e_{h+1}=\alpha^2 e_h-\beta\,;\cr \label{eq:2} (e_{h-1}+e_{h+1})e_h^2=\gamma e_h-\alpha^2\,. 
{\smash{\overline{e}}}nd{gather} {\smash{\overline{e}}}nd{proposition} \noindent It is a straightforward exercise \footnote{It is worth noticing that {\smash{\overline{e}}}qref{eq:2} follows from {\smash{\overline{e}}}qref{eq:1}. Indeed, we have $$ e_{h}e_{h+1}^2e_{h+2}-\alpha^2 e_{h+1}=e_{h-1}e_{h}^2e_{h+1}-\alpha^2 e_h\,. $$ Dividing by $e_{h}e_{h+1}$ and cutely inserting $e_{h}e_{h+1}$ on each side then yields $$e_{h+1}e_{h+2}+\alpha^2/e_{h+1}+e_{h}e_{h+1} =e_{h}e_{h+1}+\alpha^2/e_{h}+e_{h-1}e_{h}=\gamma\,,\quaduad\text{some constant.} $$} to confirm such identities by the formulaire for adding points on $\mathcal E$, see~\cite{SS}; the arguments of \cite{169}, making explicit the continued fraction expansion\ of Adams and Razar~\cite{AR}, provide a seemingly very different proof. We also mention the following corollary. \begin{corollary}\label{co:odd} Thus $\alpha^2 (e_{h}+e_{h+1})=e_he_{h+1}(\gamma -e_he_{h+1}) +\beta$, and therefore \begin{equation}\label{eq:odd} e_{h-1}e_{h}^2e_{h+1}^2e_{h+2}= \beta e_he_{h+1}+(\alpha^4-\beta\gamma)\,. {\smash{\overline{e}}}nd{equation} {\smash{\overline{e}}}nd{corollary} \begin{proof} Proposition~\ref{pr:basic} reports that $$ (X-e_{h-1}e_h)(X-e_he_{h+1}) =X^2-(\gamma e_h-\alpha^2)X/e_h+(\alpha^2e_h-\beta)\,; $$ and then $X=e_he_{h+1}$ provides the `thus'. Therefore, indeed, \begin{multline*} e_{h-1}e_{h}^2e_{h+1}\cdot e_{h}e_{h+1}^2e_{h+2}= (\alpha^2 e_h-\beta)(\alpha^2 e_{h+1}-\beta)\cr =\alpha^4 e_he_{h+1}-\alpha^2\beta(e_h+e_{h+1})+\beta^2 =e_he_{h+1}\bigl(\alpha^4+\beta(e_he_{h+1}-\gamma)\bigr)\,, {\smash{\overline{e}}}nd{multline*} completing the proof.{\smash{\overline{e}}}nd{proof} Further, define the {\smash{\overline{e}}}mph{elliptic sequence} $(A_h)$ by a pair of initial values and the recursive definition \begin{equation}\label{eq:recursive} A_{h-1}A_{h+1}=e_{h}A_h^2\,. {\smash{\overline{e}}}nd{equation} One checks readily that in immediate consequence of {\smash{\overline{e}}}qref{eq:recursive}: \begin{multline}\label{eq:examples} A_{h-2}A_{h+2}=e_{h-1}e_{h}^2e_{h+1}A_h^2\,;\quaduad\cr A_{h-1}A_{h+2}=e_{h}e_{h+1}A_hA_{h+1}\,;\quaduad A_{h-2}A_{h+3}=e_{h-1}e_{h}^2e_{h+1}^2e_{h+2}A_hA_{h+1}\,. {\smash{\overline{e}}}nd{multline} In particular, multiplying {\smash{\overline{e}}}qref{eq:1} by $A_h^2$, or {\smash{\overline{e}}}qref{eq:odd} by $A_hA_{h+1}$, yields the recursions \begin{gather}\label{eq:recursion4} A_{h-2}A_{h+2}=\alpha^2 A_{h-1}A_{h+1}-\beta A_h^2\,,\cr A_{h-2}A_{h+3}=\beta A_{h-1}A_{h+2}+(\alpha^4-\beta\gamma)A_hA_{h+1}\,, \label{eq:recursion5}{\smash{\overline{e}}}nd{gather} for the sequence $(A_h)$. So an {elliptic sequence} $(A_h)$ is not quite the most general Somos~$4$ because the first coefficient in the recursion is necessarily a square. \label{page:equivalence} However, replacing the sequence $(e_h)$ by $(\alpha e_h)$, thus the co-ordinates $x_{M+hS}$ by $\alpha x_{M+hS}$, transforms $(A_h)$ into an {\smash{\overline{e}}}mph{equivalent} sequence $(A'_h)$ with $$ A'_h=\alpha^{h(h-1)/2}A_h\,,\quaduad\text{so that}\quaduad A'_{h-2}A'_{h+2}=\alpha^{-1} A'_{h-1}A'_{h+1}-\beta\alpha^{-4} \smash{A'}_h^2\,. $$ Thus any Somos~$4$ is, in the sense just described, at worst equivalent to an elliptic sequence. Of course, in place of our quadratic `twist' by $\alpha$, we could simply have confessed to viewing the sequence as being an elliptic sequence over a quadratic extension of the base field. 
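As a purely numerical aside (an illustrative check of ours, not part of the argument), the gap~$5$, $6$ and $8$ relations quoted earlier for $4$-Somos are easily confirmed directly from the defining recursion:
\begin{verbatim}
from fractions import Fraction as F

# 4-Somos: C_{h-2} C_{h+2} = C_{h-1} C_{h+1} + C_h^2 with C_0 = ... = C_3 = 1.
C = [F(1), F(1), F(1), F(1)]
while len(C) < 30:
    C.append((C[-1] * C[-3] + C[-2] ** 2) / C[-4])
assert C[:10] == [1, 1, 1, 1, 2, 3, 7, 23, 59, 314]

for h in range(2, 25):   # gap 5
    assert C[h - 2] * C[h + 3] == -C[h - 1] * C[h + 2] + 5 * C[h] * C[h + 1]
for h in range(3, 26):   # gap 6
    assert C[h - 3] * C[h + 3] == C[h - 1] * C[h + 1] + 5 * C[h] ** 2
for h in range(4, 25):   # gap 8
    assert C[h - 4] * C[h + 4] == 25 * C[h - 1] * C[h + 1] - 4 * C[h] ** 2
\end{verbatim}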
Plainly, in the sequel we may suppose without additional comment that claims we prove for elliptic sequences hold appropriately for a general Somos~$4$ sequence.
\subsection{Singular elliptic sequences}
Several of our confident remarks above need to be announced more falteringly if some $e_h$ vanishes. That case is $M+kS=\pm S$, some $k\in{\mathbb Z}$, and thus, by changing the translation $M$ if necessary, there is no serious loss of generality in supposing that in fact $e_1=0$ and $W_0=0$. In this \emph{singular}\footnote{We make no attempt to stick to Ward's terminology~\cite{Wa}; there, all the sequences are `singular', in our sense of the adjective as `special'. In \cite{Wa} a sequence is called `singular' if it arises from a singular cubic curve: such singularity is no issue for us.} case we set ${\smash{\overline{e}}}_h=-x_{hS}$ and define $W_{h-1}W_{h+1}={\smash{\overline{e}}}_hW_{h}^2$. So $W_0=0$, and we may take $W_1=1$; by \eqref{eq:1} we have ${\smash{\overline{e}}}_2=\beta/\alpha^2$ and it is reasonable to select $W_2=\alpha$, hence $W_3=\beta$; and --- we leave the computation of ${\smash{\overline{e}}}_3$ for the energetic reader (see \cite{Sw} or~\cite{169}) --- $W_4=-W_2^5+W_2W_3\gamma$. Of course, \eqref{eq:recursion4} becomes
\begin{equation}\label{eq:even'}
W_1^2A_{h-2}A_{h+2}=W_{2}^2A_{h-1}A_{h+1}-W_{1}W_{3}A_{h}^2\,;
\end{equation}
and \eqref{eq:recursion5} plainly is
\begin{equation}\label{eq:odd'}
W_1W_2A_{h-2}A_{h+3}=W_{2}W_3A_{h-1}A_{h+2}-W_{1}W_{4}A_{h}A_{h+1}\,.
\end{equation}
Here we have pretended to forget that $W_1=1$, the more vividly to emphasise the anti-symmetry of the two-sided sequence $(W_h)$ and the related pattern. Of course $(W_h)$ is precisely the, well let's say it, `untranslated' elliptic sequence discussed by Morgan Ward~\cite{Wa}.
\subsection{Asides}
Christine Swart~\cite{Sw} shows that the $A_h^2$ `try to be' the denominators of the $x$~co-ordinates $-e_h=x_{M+hS}$ in that they succeed in so being at worst up to finitely many primes involved in the initial values and the defining recursion of the sequence (and thus in the coefficients of the model $\mathcal E$ of the underlying elliptic curve). More specifically, in the singular case, Rachel Shipsey~\cite{Shi} confirms that if the model~$\mathcal E$ is minimal integral with $\gcd(a_3, a_4)=1$ then $W_2\divides W_4$ guarantees that $(W_h)$ is an exact division sequence: $\gcd (W_i,W_j)=W_{\gcd(i,j)}$. If both $-x_S={\smash{\overline{e}}}_1=0$ \emph{and} ${\smash{\overline{e}}}_{m+1}=0$, then the sequence $({\smash{\overline{e}}}_h)$ is periodic of period~$m$ --- for this case see~\cite{163} and remarks at \cite[\S VIII]{Sw} --- but the singular elliptic sequence $(W_h)$ need be no more than quasi-periodic of quasi-period~$m$.
We skirt by the fact that then ${\smash{\overline{e}}}_0$, ${\smash{\overline{e}}}_m$, $\ldots\,$ are infinite (so, of course $W_0$, $W_m$, $\ldots\,$ must all vanish) by noting in particular that the recursion relations for $(A_h)$ and $(W_h)$ allow one to skip over and then fill in any difficulties; we define the `undefined' portions of our sequences accordingly. \subsection{Induction and symmetry} A surprisingly simple inductive argument together with pleasing applications of symmetry suffice to prove our main result: A Somos~$4$ also is a three-term Somos~$k$ for $k=5$, $6$, $7$ $\ldots\,$. \begin{comment} ; and a Somos~$5$ also is a three-term Somos~$k$ for $k=7$, $9$, $11$ $\ldots\,$ \begin{theorem}\label{eq:main} If, for $h\in{\mathbb Z}} \def\Q{{\mathbb Q}} \def\A{\overlineerline{\mathbb Q}$, $A_{h-1}A_{h+1}=e_hA_{h}^2$ and $W_{h-1}W_{h+1}={\smash{\overline{e}}}_hW_{h}^2$ then for $m\in{\mathbb Z}} \def\Q{{\mathbb Q}} \def\A{\overlineerline{\mathbb Q}$ \begin{equation}\label{eq:evenm} W_1^2A_{h-m}A_{h+m}=W_{m}^2A_{h-1}A_{h+1}- W_{m-1}W_{m+1}A_{h}^2 {\smash{\overline{e}}}nd{equation} and \begin{equation}\label{eq:oddm} W_{1}W_{2}A_{h-m}A_{h+m+1}=W_{m}W_{m+1}A_{h-1}A_{h+2}- W_{m-1}W_{m+2}A_{h}A_{h+1}\,. {\smash{\overline{e}}}nd{equation} {\smash{\overline{e}}}nd{theorem} \begin{proof} We note that {\smash{\overline{e}}}qref{eq:evenm} is $$ \frac{A_{h-m}A_{h+m}}{A_h^2}=W_m^2 \left(\frac{A_{h-1}A_{h+1}}{A_h^2} -\frac{W_{m-1}W_{m+1}}{W_m^2}\right)=W_m^2(e_h-{\smash{\overline{e}}}_m)\,, $$ so that, seeing that {\smash{\overline{e}}}qref{eq:evenm} is trivially true for $m=0$ and $m=1$, it suffices to show that $$ \frac{A_{h-m-1}A_{h+m+1}}{A_h^2}=W_{m+1}^2(e_h-{\smash{\overline{e}}}_{m+1}) $$ follows. However, by appropriate inductive hypotheses, \begin{multline*} \frac{A_{h-m-1}A_{h+m+1}}{A_h^2}\crequiv \frac{A_{h-1-m}A_{h-1+m}}{A_{h-1}^2}\cdot \frac{A_{h-1}A_{h+1}}{A_h^2} \cdot\frac{A_{h+1-m}A_{h+1+m}}{A_{h+1}^2} \boldsymboligg/ \frac{A_{h-(m-1)}A_{h+(m-1)}}{A_h^2} \crequiv \frac{W_m^2}{W_{m-1}^2}\frac{(e_{h-1}-{\smash{\overline{e}}}_{m})e_h^2(e_{h+1}-{\smash{\overline{e}}}_{m})} {(e_{h}-{\smash{\overline{e}}}_{m-1})}. {\smash{\overline{e}}}nd{multline*} This is in fact $W_{m+1}^2(e_h-{\smash{\overline{e}}}_{m+1})$, as hoped for, if and only if \begin{equation}\label{eq:symmetriceven} (e_{h-1}-{\smash{\overline{e}}}_{m})e_h^2(e_{h+1}-{\smash{\overline{e}}}_{m})= ({\smash{\overline{e}}}_{m-1}-e_{h}){\smash{\overline{e}}}_m^{\,2}({\smash{\overline{e}}}_{m+1}-e_{h})\,. {\smash{\overline{e}}}nd{equation} We would like to be able to declare that {\smash{\overline{e}}}qref{eq:symmetriceven} is blatantly true by a principle of symmetry but, sadly, a slightly more brutal argument seems necessary. By Proposition~\ref{pr:basic} we have \begin{multline*} (e_{h-1}-{\smash{\overline{e}}}_{m})e_h^2(e_{h+1}-{\smash{\overline{e}}}_{m})\cr =e_{h-1}e_{h}^2e_{h+1}-{\smash{\overline{e}}}_m(e_{h-1}+e_{h+1})e_h^2+{\smash{\overline{e}}}_m^{\,2}e_h^2\cr =\alpha^2e_h-\beta-(\gamma\,{\smash{\overline{e}}}_me_h-\alpha^2\,{\smash{\overline{e}}}_m)+{\smash{\overline{e}}}_m^{\,2}e_h^2\,, {\smash{\overline{e}}}nd{multline*} making manifest the symmetry $e_h\longleftrightarrow {\smash{\overline{e}}}_m$. Similarly, proving {\smash{\overline{e}}}qref{eq:oddm} requires we show as inductive step that $$ \frac{W_1W_2}{W_{m+1}W_{m+2}}\frac{A_{h-(m+1)}A_{h+(m+2)}}{A_hA_{h+1}} =(e_he_{h+1}-{\smash{\overline{e}}}_{m+1}{\smash{\overline{e}}}_{m+2}). 
$$
But by permissible inductive hypotheses
\begin{multline*}
\frac{W_1W_2}{W_{m+1}W_{m+2}}\frac{A_{h-(m+1)}A_{h+(m+2)}}{A_hA_{h+1}}\cr
\equiv \frac{W_1W_2}{W_{m}W_{m+1}}\frac{A_{h-1-m}A_{h+m}}{A_{h-1}A_{h}} \cdot\frac{W_m^2}{W_{m-1}W_{m+1}}\frac{W_{m+1}^2}{W_{m}W_{m+2}} \frac{A_{h-1}A_{h+2}}{A_hA_{h+1}}\cdot\cr\cdot \frac{W_1W_2}{W_{m}W_{m+1}}\frac{A_{h-m+1}A_{h+m+2}}{A_hA_{h+1}} \bigg/ \frac{W_1W_2}{W_{m-1}W_{m}}\frac{A_{h-(m-1)}A_{h+m}}{A_hA_{h+1}} \cr
\equiv \frac{(e_{h-1}e_h-\overline{e}_{m}\overline{e}_{m+1})e_he_{h+1}(e_{h+1}e_{h+2}-\overline{e}_{m}\overline{e}_{m+1})} {\overline{e}_{m}\overline{e}_{m+1}(e_{h}e_{h+1}-\overline{e}_{m-1}\overline{e}_m)}.
\end{multline*}
So \eqref{eq:oddm} follows if and only if
\begin{multline*}
(e_{h-1}e_h-\overline{e}_{m}\overline{e}_{m+1})e_he_{h+1}(e_{h+1}e_{h+2}-\overline{e}_{m}\overline{e}_{m+1})\cr =(\overline{e}_{m-1}\overline{e}_m-e_{h}e_{h+1})\overline{e}_{m}\overline{e}_{m+1}(\overline{e}_{m+1}\overline{e}_{m+2}-e_{h}e_{h+1}).
\end{multline*}
Thus, here too, it suffices to notice that
\begin{multline*}
(e_{h-1}e_h-\overline{e}_{m}\overline{e}_{m+1})e_he_{h+1}(e_{h+1}e_{h+2}-\overline{e}_{m}\overline{e}_{m+1})\cr =e_{h-1}e_{h}^2e_{h+1}^2e_{h+2}-(e_{h-1}e_h+e_{h+1}e_{h+2})e_he_{h+1}\overline{e}_m\overline{e}_{m+1} +e_he_{h+1}\overline{e}_m^{\,2}\overline{e}_{m+1}^{\,2} \cr
\equiv\beta e_he_{h+1}+(\alpha^4-\beta\gamma)-\bigl(\alpha^2(e_h+e_{h+1})-2\beta\bigr)\overline{e}_m\overline{e}_{m+1} +e_he_{h+1}\overline{e}_m^{\,2}\overline{e}_{m+1}^{\,2} \cr
\equiv\beta e_he_{h+1}+(\alpha^4-\beta\gamma)- \bigl(e_he_{h+1}(\gamma-e_he_{h+1})-\beta\bigr)\overline{e}_m\overline{e}_{m+1} +e_he_{h+1}\overline{e}_m^{\,2}\overline{e}_{m+1}^{\,2}
\end{multline*}
is visibly symmetric for $e_h\longleftrightarrow \overline{e}_m$.
\end{proof}

\subsection{Comments}\subsubsection{}\label{ss:5} Almost precisely the argument just given shows also that: \emph{A Somos~$5$ also is a three-term Somos~$k$ for $k=7$, $9$, $11$ $\ldots\,$.} Namely, sequences $(e_h)$ and $(c_h)$ give rise to a Somos~$5$ sequence $(B_h)$ by way of the definition $c_hB_{h-1}B_{h+1}=e_hB_h^2$ for $h\in\mathbb{Z}$. Indeed, that yields
$$ c_{h-1}c_{h}^2c_{h+1}^2c_{h+2}W_1W_2B_{h-2}B_{h+3} =c_{h}c_{h+1}W_{2}W_{3}B_{h-1}B_{h+2} -W_{1}W_{4}B_{h}B_{h+1}\,; $$
and this relation has constant coefficients (thus, independent of~$h$) exactly when $c_hc_{h+1}=:v$, say, is constant: so when the sequences $(c_{2h})$ and $(c_{2h+1})$ are constant. It now suffices to replace $e_h$ by $e_h/c_h$ in the argument above (while not changing $\overline{e}_h$) to see that the just stated relation implies that for all integers $h$ and $m$
\begin{multline}\label{eq:5}
v^{\frac12m(m+1)}W_1W_2B_{h-m}B_{h+m+1} \cr
\equiv vW_{m}W_{m+1}B_{h+1}B_{h+2}-W_{m-1}W_{m+2}B_{h}B_{h+1}\,.
\end{multline}
For example \cite{169}, $5$-Somos comes from the points $M+hS$ on the elliptic curve
$$ y^2+xy+6y=x^3+7x^2+12x\,,\qquad\text{with $M=(-2,-2)$ and $S=(0,0)$\,.} $$
In this case $W_1=1$, $W_2=6$, $W_3=6^2$, $W_4=-6^4$ so the choice $v=6$ (more precisely, $c_0=2$, $c_1=3$) is felicitous.

\subsubsection{} Plainly
\begin{multline*}
A_{h-m}A_{h+m}W_n^2 =(W_{m}^2A_{h-1}A_{h+1}-W_{m-1}W_{m+1}A_{h}^2)W_n^2\cr = (W_{n}^2A_{h-1}A_{h+1}-W_{n-1}W_{n+1}A_{h}^2)W_m^2 -(W_{n}^2W_{m-1}W_{m+1}-W_{n-1}W_{n+1}W_{m}^2)A_h^2\cr =A_{h-n}A_{h+n}W_{m}^2-W_{m-n}W_{m+n}A_{h}^2\,,
\end{multline*}
confirming also that \eqref{eq:general} and \eqref{eq:redundant} are indeed equivalent. Just so,
\begin{multline*}
A_{h-m}A_{h+m+1}W_nW_{n+1} \cr
\equiv(W_{m}W_{m+1}A_{h-1}A_{h+2}-W_{m-1}W_{m+2}A_{h}A_{h+1})W_nW_{n+1}/W_2\cr =(W_nW_{n+1}A_{h-1}A_{h+2}-W_{n-1}W_{n+2}A_{h}A_{h+1})W_{m}W_{m+1}/W_2\phantom{pushleft} \cr-(W_nW_{n+1}W_{m-1}W_{m+1}-W_{n-1}W_{n+2}W_{m}W_{m+1})A_hA_{h+1}/W_2\cr =W_{m}W_{m+1}A_{h-n}A_{h+n+1}-W_{m-n}W_{m+n+1}A_{h}A_{h+1}\,.
\end{multline*}

\subsubsection{} It warrants remark that elliptic curves play at most an implicit role in our arguments. It suffices to start from the identities of Proposition~\ref{pr:basic}. Moreover, \eqref{eq:1} \emph{is} just the Somos~$4$ relation (if necessary by enlarging the base field to include $\alpha$); and, as remarked, \eqref{eq:2} follows.

\subsubsection{} Nonetheless, it is clear that all Somos~$4$ and Somos~$5$ sequences \emph{are} elliptic sequences. We confine ourselves here to just a sketch. Indeed, to be given a Somos~$4$ sequence is to be provided with a sequence $(e_h)$ by way of $e_h=A_{h-1}A_{h+1}/A_h^2$ where the $e_h$ satisfy \eqref{eq:1} and hence all of Proposition~\ref{pr:basic}. Our main argument then provides all the recurrence relations $A_{h-m}A_{h+m}=W_{m}^2A_{h-1}A_{h+1} -W_{m-1}W_{m+1}A_{h}^2$. Those yield the sequence $(W_m)$ --- which, see for example \cite{Shi} or \cite{Wa}, amounts to having the related elliptic curve. Of course\footnote{Of course this may not seem all that evident; fortunately, it is part of the content of \cite{169} and~of~\cite{SS}.} $e_0$ provides the `translation' $M$.
\begin{comment}
because the recurrence relations~\eqref{eq:5} yield just $W_{2m}/v^{(m-1)(m+1)}W_2$ and $W_{2m+1}/v^{m(m+1)}$ for $m=1$, $2$, $\ldots\,$. The Somos~$5$ case is a little less straightforward. Here we have $B_{h-1}B_{h+1}/B_h^2=e_h/c_h=:f_h$, say, where $c_hc_{h+1}=v$ is constant --- equivalently, that $(c_{2h})$ and $(c_{2h+1})$ are constant sequences --- and we might define an `equivalent' sequence $(A_h)$ by
$$ B_{2h}=c_0^{-h(h+1)}c_1^{-h^2}A_{2h} \quad\text{and}\quad B_{2h+1}=c_0^{-(h+1)^2}c_1^{-h(h+1)}A_{2h+1}\,. $$
Then $(A_h)$ is a Somos~$4$. Moreover, by arguments similar to those we use above, if
\begin{multline*}
A_{h-2}A_{h+2}=W_{2}^2A_{h-1}A_{h+1}-W_{3}A_{h}^2\,, \cr \quad\text{then}\quad A_{hk-2k}A_{hk+2k}=(W_{2k}/W_k)^2A_{hk-k}A_{hk+k} -(W_{3k}/W_k)A_{hk}^2
\end{multline*}
for all $k=1$, $2$, $3$, $\ldots\,$. Thus $(A_{hk})$ is a Somos~$4$ for all those $k$ and in particular it follows readily that both $(B_{2h})$ and $(B_{2h+1})$ are Somos~$4$ sequences.
\end{comment}
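\subsubsection{} Theorem~\ref{eq:main} is also easy to confirm numerically. A minimal check (the choice of the classical Somos~$4$ sequence $1$, $1$, $1$, $1$, $2$, $3$, $7$, $23$, $\ldots\,$ and the variable names are ours) reads $\alpha$, $\beta$ off the recursion, recovers $\gamma$ (and hence $W_4=-\alpha^5+\alpha\beta\gamma$) from the identities of Proposition~\ref{pr:basic}, generates $(W_m)$ by the same three-term recursion, and verifies \eqref{eq:evenm} and \eqref{eq:oddm} in exact rational arithmetic.
\begin{verbatim}
from fractions import Fraction as F

# Classical Somos 4: A[h-2]*A[h+2] = A[h-1]*A[h+1] + A[h]^2,
# i.e. W_2^2 = 1 and W_1*W_3 = -1, so take alpha = 1, beta = -1.
A = [F(1), F(1), F(1), F(1)]
for h in range(4, 20):
    A.append((A[h-1]*A[h-3] + A[h-2]**2) / A[h-4])

def e(h):
    return A[h-1]*A[h+1] / A[h]**2

alpha, beta = F(1), F(-1)
# (e_{h-1} + e_{h+1}) e_h^2 = gamma*e_h - alpha^2  (Proposition pr:basic)
gamma = ((e(2) + e(4))*e(3)**2 + alpha**2) / e(3)

W = [F(0), F(1), alpha, beta, -alpha**5 + alpha*beta*gamma]   # W_0, ..., W_4
for n in range(3, 8):          # (W_h) obeys the same three-term recursion
    W.append((W[2]**2*W[n+1]*W[n-1] - W[1]*W[3]*W[n]**2) / W[n-2])

for m in range(2, 6):          # check (eq:evenm) and (eq:oddm)
    for h in range(m, 12):
        assert (W[1]**2*A[h-m]*A[h+m]
                == W[m]**2*A[h-1]*A[h+1] - W[m-1]*W[m+1]*A[h]**2)
        assert (W[1]*W[2]*A[h-m]*A[h+m+1]
                == W[m]*W[m+1]*A[h-1]*A[h+2] - W[m-1]*W[m+2]*A[h]*A[h+1])
print("three-term Somos k relations verified for k = 4, ..., 11")
\end{verbatim}
For this sequence one finds $\gamma=4$ and $W_4=-5$, and every relation up to $m=5$ holds exactly.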
\begin{comment}
This is good enough to allow us to solve for the unknowns $c_0\alpha^2/v^2$, $c_1\alpha^2/v^2$, and so for $c_0/c_1$ and $\alpha^4/v^3$, and $\beta/v^2$. Mind you, there may be a few \i's that need dotting to account for degenerate cases.
\end{comment}

\begin{thebibliography}{ABC}
\bibitem{AR} William W. Adams and Michael J. Razar, `Multiples of points on elliptic curves and continued fractions', {\it Proc. London Math. Soc.\/} {\bf 41\/} (1980), 481--498.
\bibitem{EPSW} Graham Everest, Alf van der Poorten, Igor Shparlinski, and Thomas Ward, {\it Recurrence Sequences\/}, Mathematical Surveys and Monographs 104, American Mathematical Society 2003, 318pp.
\bibitem{Somos} David Gale, `The strange and surprising saga of the Somos sequences', {\it Mathematical Intelligencer\/} {\bf 13\/}.1 (1991), 40--42; and `Somos sequence update', {\it ibid.\/} {\bf 13\/}.4 (1991), 49--50. For more see Jim Propp, `The Somos Sequence Site', \url{http://www.math.wisc.edu/~propp/somos.html}.
\bibitem{163} Alfred J. van der Poorten, `Periodic continued fractions and elliptic curves', in {\it High Primes and Misdemeanours\/}: lectures in honour of the 60th birthday of Hugh Cowie Williams, Alf van der Poorten and Andreas Stein eds., Fields Institute Communications {\bf 42\/}, American Mathematical Society, 2004, 353--365.
\bibitem{169} \bysame, `Elliptic curves and continued fractions', \url{http://www.arxiv.org/math.NT/0403225}; to appear in {\it J. Integer Sequences\/}.
\bibitem{Shi} Rachel Shipsey, {\it Elliptic divisibility sequences\/}, PhD Thesis, Goldsmiths College, University of London, 2000 (see \url{http://homepages.gold.ac.uk/rachel/}).
\bibitem{Sl} N. J. A. Sloane, `The on-line encyclopedia of integer sequences', \url{http://www.research.att.com/~njas/sequences/}.
\bibitem{SS} Nelson Stephens and Christine S. Swart, manuscript with tentative title `Somos~$4$ sequences and elliptic curves'.
\bibitem{Sw} Christine S. Swart, {\it Elliptic curves and related sequences\/}, PhD Thesis, Royal Holloway and Bedford New College, University of London, 2003; 226pp.
\bibitem{Wa} Morgan Ward, `Memoir on elliptic divisibility sequences', {\it Amer. J. Math.\/} {\bf 70\/} (1948), 31--74.
\end{thebibliography}
\label{page:lastpage}
\end{document}
\begin{document} \title{{Fast binomial-code holonomic quantum computation with ultrastrong light-matter coupling}} \author{Ye-Hong Chen} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \author{Wei Qin} \email[E-mail: ]{[email protected]} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \author{Roberto Stassi} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \affiliation{Dipartimento di Scienze Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra, Universit\`{a} di Messina, 98166, Messina, Italy} \author{Xin Wang} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \affiliation{Institute of Quantum Optics and Quantum Information, School of Science, Xi'an Jiaotong University, Xi'an 710049, China} \author{Franco Nori} \email[E-mail: ]{[email protected]} \affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan} \affiliation{Department of Physics, University of Michigan, Ann Arbor, Michigan 48109-1040, USA} \affiliation{RIKEN Center for Quantum Computing (RQC), Wako-shi, Saitama 351-0198, Japan} \date{\today} \begin{abstract} {We propose a protocol for bosonic binomial-code nonadiabatic holonomic quantum computation in a system composed of an artificial atom ultrastrongly coupled to a cavity resonator. In our protocol, the binomial codes, formed by superpositions of Fock states, can greatly save physical resources to correct errors in quantum computation. We apply to the system strong driving fields designed by shortcuts-to-adiabatic methods. This reduces the gate time to \textit{tens of nanoseconds}. Noise induced by control imperfections can be suppressed by a systematic-error-sensitivity nullification method. As a result, this protocol can rapidly ($\sim 35~{\rm{ns}}$) generate fault-tolerant and high-fidelity ($\gtrsim 98\%$ with experimentally realistic parameters) quantum gates.} \end{abstract} \keywords{Nonadiabatic holonomic quantum computation; Bosonic code; Ultrastrong coupling} \maketitle \section{Introduction} The generation of robust and fault-tolerant quantum gates is a basic requirement for quantum computation. To reach this goal, much attention has been given to holonomic quantum computation \cite{Pla26494,Prl89097902,Njp14103035,Prl109170501} based on Abelian \cite{Prsa39245,Prl581593} and non-Abelian geometric phases \cite{Pla133171,Prl522111,Pla133171,Prl95130501}. These can provide a robust way towards universal quantum computation, because the geometric phases are determined by the global properties of the evolution paths and possess a built-in noise-resilience feature against certain types of local noises \cite{Pra70042316,Pra72020301,Njp14093006,Pra86062322}. 
{In particular, nonadiabatic holonomic quantum computation (NHQC) \cite{Pra101022330,Pra95043608,Pra93040305,Prl123100501,Pra101032322,Prl111050404,Prappl14034038} releases the variations of parameters from the limitation of the adiabatic condition, making the computation fast and robust against local parameter fluctuations over the cyclic evolution.} However, due to the huge physical resource overhead and the difficulties in scaling up the number of qubits \cite{Prl102070502,Prappl10054051,Prappl13014055,Pra97022335,Pra86032324,arXiv201209034,npjQI11,Nat432602,Sci3321059,Nat51966,Nat506204}, previous work \cite{Prl89097902,Njp14103035,Prl109170501,Pra89042302,Pra95062308,Pra95043608,Pra93040305,Prappl7054022,Pra101022330,Prl123100501,Pra101032322,Prl111050404,Prappl14034038,Nat496482,Prl110190501,Prappl12024024,Nat5147520,Nphoto11309,Prl121110501,Prappl14044043,Prl124230503} showed that is experimentally difficult to implement quantum error correction protocol \cite{gottesman2010introduction,Pra52R2493,Prsa4522551,Pra86032324} in NHQC. For this reason, holonomic computation via bosonic codes \cite{Pra561114,Pra64012310} has attracted much interest recently \cite{Prl116140502,Qst4035007}. Bosonic codes allow quantum error correction extending only the number of excitation instead of the number of qubits, while keeping the noise channels fixed \cite{Pra97032346,Njp16045014,Prl119030502,Qst4035007,Fr150,Prl116140502,Prx6031006,Prl124120501,Prl124120501,Nc894,Nc978,Np14705,Nat561368,Np15503,Sci342607,Sci347853,Nc9652,Nat584205}. For instance, binomial codes \cite{Prx6031006} formed from superposition of Fock states are protected against continuous dissipative evolution under loss, gain, and dephasing errors. Unfortunately, universal control of a single bosonic mode is difficult due to its harmonicity. Although adding direct and indirect nonlinear interactions can induce weak anharmonicity \cite{Fr150}, it is still difficult to manipulate independently and simultaneously every needed Fock state. Moreover, weak nonlinear interactions may induce additional noises into the system and limit the gate fidelities \cite{Fr150}. This, with additional operations (e.g., feedback \cite{Nat536441,Np15503,Nat584368} and driven-dissipative controls \cite{Njp16045014,Prl116140502}) and conditions (e.g., oscillators and qubits are never driven simultaneously \cite{Pra92040303,Prl115137002,Prl124120501}), makes it difficult to implement NHQC \cite{Pra101022330,Prl123100501,Pra101032322} with bosonic error-correction codes. Note that the first experiment for binomial-code conditional geometric gates was recently realized \cite{Prl124120501} using 3D superconducting cavities, but it is not a holonomic computation. {The eigenstates of a two-level atom and a cavity field interacting in the ultrastrong coupling (USC) regime are anharmonic dressed atom-light states \cite{Jmp46042311,Prl99173601,Prl107100401,Nr119,Rmp91025005,Epjd59473,Prl98103602,Pra81042311,Prl98103602,Pra81042311,Prl112016401,Prl105263603, Prl117043601,Pra96063820,Pra98062327,Np15803,Pra96013849,Pra82022119,Prl116113601, Prl119053601,Prl122190403,Pra84043832}. In this manuscript, to overcome the problems mentioned in the previous paragraph, we use these dressed states as intermediate states \cite{Prl110243601,Njp19053010,Pra89033827,Pra94012328,npjQI667,Jpsj88061011} to simultaneously couple different Fock basis and induce population transitions between them. 
To implement NHQC with binomial codes, we populate Fock states in one step, driving the atom with a composite pulse. The strong anharmonicity in the USC regime allows one to apply strong driving fields \cite{Pra89033827,npjQI667,Pra94012328} in order to shorten the gate time to nanoseconds. These drives are designed by an invariant-based method \cite{Rmp91045001,Aamo62117,Prl116230503,Prl126023602,Jmp101458,Pra86033405,Prl116230503,Pra89033856} and a systematic-error-sensitivity nullification method \cite{Njp14093040,Pra101032322,Prl111050404}, making our protocol fast and robust against pulse imperfections. Additionally, the NHQC protocol presented here is scalable for multi-qubit gates ultrastrongly coupling the atom to a multi-mode cavity. } \begin{figure} \caption{(a) Schematic illustration of an atom-cavity combined system. (b) Level diagram of the bare three-level atom. The upper two levels ($|e\rangle,|g\rangle$) of the atom are ultrastrongly coupled to the cavity mode with strength $g$. The lower two levels ($|g\rangle,|\mu\rangle$) are off-resonantly driven by a composite pulse $\Omega=\sum_{k} \label{figmodel} \end{figure} \section{Model and effective Hamiltonian} Our system consists of a three-level ($|e\rangle$, $|g\rangle$, $|\mu\rangle$) artificial atom and a cavity resonator \cite{Epl9924003}. The states $|e\rangle$ and $|g\rangle$ are ultrastrongly coupled to a cavity mode \cite{Np1344}, with coupling strength $g$ (see Fig.~\ref{figmodel}). The atom-cavity interaction is described by $H_{0}=H_{R}+\hbar \omega_{\mu}|\mu\rangle\langle\mu|$, where \begin{align}\label{eq1-1} H_{{R}}=\hbar\omega_{{c}}a^{\dag}a+\frac{\hbar\omega_{{q}}}{2}\sigma_{g}^{z}+\hbar g(a+a^{\dag})\sigma_{g}^{x}, \end{align} is the Rabi Hamiltonian. Here, $\sigma_{g}^{x}=|e\rangle \langle g|+|g\rangle\langle e|$ and $\sigma_{g}^{z}=|e\rangle\langle e|-|g\rangle\langle g|$ are Pauli matrices, $a$ ($a^{\dag}$) is the annihilation (creation) operator of the cavity field, $\omega_{\mu}$ is the frequency of the level $|\mu\rangle$, $\omega_{{c,(q)}}$ is the cavity (qubit) frequency. In the USC regime ($g/\omega_{c}\gtrsim 0.1$), the eigenstates $|\mathcal{E}_{j}\rangle$ with eigenvalues $\xi_{j}$ of $H_{0}$ can be separated into (i) noninteracting sectors $|\mu\rangle|n\rangle$ with eigenvalues $\omega_{\mu}+n\omega_{{c}}$; and (ii) dressed atom-cavity states $|\zeta_{m}\rangle$ with eigenvalues $E_{m}$ ($j,n,m=0,1,2,\ldots$). Here, $|n\rangle$ denote the Fock states of the cavity mode, and \begin{align}\label{eq2} |\zeta_{m}\rangle=\sum_{n}\left(c_{n}^{m}|g\rangle|n\rangle+d_{n\pm 1}^{m}|e\rangle|n\pm 1\rangle\right), \end{align} denote the dressed states of $H_{R}$. The coefficients $c_{n}^{m}=\langle\zeta_{m}|g\rangle|n\rangle$ and $d_{n\pm 1}^{m}=\langle\zeta_{m}|e\rangle|n\pm 1\rangle$ can be obtained numerically. Note that we impose $d_{-1}^{m}=0$ for Eq.~(\ref{eq2}). Oscillations $|\mu\rangle|n\rangle\leftrightarrow|\zeta_{m}\rangle$ can be induced by driving the atomic transition $|{\mu}\rangle\leftrightarrow|{g}\rangle$ [see Fig.~\ref{figmodel}(b)] with an additional control Hamiltonian \begin{align} H_{{D}}(t)=\hbar\Omega(|\mu\rangle\langle g|+|g\rangle\langle\mu|). \end{align} Here, \begin{align} \Omega=\sum_{k}\Omega_{k}\cos{(\omega_{k}t+\phi_{k})}, \end{align} is a composite pulse \cite{Prl118223604,Sa5eaau5999,Pra89033827,Nat536441} with amplitudes $\Omega_{k}$, frequencies $\omega_{k}$, and phases $\phi_{k}$. 
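The dressed states and overlaps entering this driving scheme are straightforward to obtain. A minimal numerical sketch (assuming a Fock-space truncation $N=40$, $\omega_{q}=\omega_{c}$, and $g=0.8\,\omega_{c}$; these parameter choices and the variable names are ours, and this is not the code used for the figures) diagonalizes the Rabi Hamiltonian of Eq.~(\ref{eq1-1}) and reads off the coefficients $c_{n}^{m}$ of Eq.~(\ref{eq2}):
\begin{verbatim}
import numpy as np

N = 40                                    # Fock truncation (assumed large enough)
wc, wq, g = 1.0, 1.0, 0.8                 # units of omega_c; g/omega_c = 0.8
a  = np.diag(np.sqrt(np.arange(1, N)), 1)         # cavity annihilation operator
sx = np.array([[0., 1.], [1., 0.]])               # |e><g| + |g><e|
sz = np.array([[1., 0.], [0., -1.]])              # |e><e| - |g><g|
I2, IN = np.eye(2), np.eye(N)

H_R = (wc*np.kron(I2, a.T @ a) + 0.5*wq*np.kron(sz, IN)
       + g*np.kron(sx, a + a.T))                  # Rabi Hamiltonian H_R
E, V = np.linalg.eigh(H_R)                        # dressed energies and states
c = V[N:, :]                                      # c[n, m] = <g, n|zeta_m>
print("E_m - E_0 for m = 0..3:", np.round(E[:4] - E[0], 3))
print("|c_n^2| for n = 0, 2, 4:", np.round(np.abs(c[[0, 2, 4], 2]), 3))
\end{verbatim}
Printing these overlaps reproduces the qualitative behaviour described in Appendix~\ref{AA}: deep in the USC regime, several even Fock components acquire sizeable weight in a single dressed state.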
We omit the explicit time dependence of all the parameters (e.g., $\Omega_{k}$ and $\phi_{k}$) regarding the drivings. {The total Hamiltonian is $H_{\rm{tot}}(t)=H_{0}+H_{{D}}(t)$.} Choosing \begin{align} \omega_{k}=&~(E_{m}-\omega_{\mu}-k\omega_{{c}})\cr \Omega_{k}\ll&~\omega_{{c}},g, \end{align} and performing a unitary transformation $\exp{\left(-iH_{0}t\right)}$, we derive an effective Hamiltonian that, under the rotating wave approximation, is (see details in Appendix~\ref{AA}) \begin{align}\label{eq1-2} H_{\rm{eff}}(t)=\frac{\hbar}{2}\sum_{k=0}^{k_{\rm{max}}}{c_{k}^{m}\Omega_{k}}e^{i\phi_{k}}|\mu\rangle|k\rangle\langle\zeta_{m}|+{\rm{H.c.}}. \end{align} This effective Hamiltonian describes transitions between the Fock states $|k\rangle$ through the dressed intermediate state $|\zeta_{m}\rangle$. We assume \begin{align} \omega_{{\mu}}=E_{m}-(k_{\rm{max}}+0.25)\omega_{{c}}, \end{align} so that the dressed state $|\zeta_{m}\rangle$ is the highest level in the evolution subspace. In Fig.~\ref{figefft}(a), we illustrate the effective transitions for $m={0}$. {Note that each Fock state can be freely populated by the drivings $\Omega_{k}$ when the system is in the USC regime. Instead, in the weak-coupling regime, the qubit driving $H_{D}(t)$ only induces oscillations $|g\rangle|0\rangle \leftrightarrow|\mu\rangle|0\rangle$ because $c_{n\neq0}^{{m=0}}\simeq0$. } \begin{figure} \caption{(a) Illustraton of the effective transitions according to Eq.~(\ref{eq1-2} \label{figefft} \end{figure} \section{Nonadiabatic holonomic quantum computation via binomial codes} An example of the binomial codes \cite{Prx6031006} for single-qubit gates protecting against the single-photon loss error is \begin{align}\label{eq7} |\tilde{1}\rangle=|2\rangle,\ \ \ \ |\tilde{0}\rangle=(|0\rangle+|4\rangle)/\sqrt{2}, \end{align} which form a computational subspace $\mathcal{S}_{{c}}=\{|\tilde{0}\rangle,|\tilde{1}\rangle\}$. With this definition, a photon loss error brings the logical code words to a subspace with odd photon numbers that is clearly disjoint from the even-parity subspace of the logical code words \cite{Prx6031006}. The Knill-Laflamme condition \cite{Pra543824,Pra55900} for this kind of codes reads $\langle\tilde{\varrho}|a^{\dag}a|\tilde{\varrho}'\rangle=2$ ($\varrho,\varrho'=0,1$). This means that the probability of a photon jump to occur is the same for $|\tilde{0}\rangle$ and $|\tilde{1}\rangle$, implying that the quantum state is not deformed under the error of a photon loss. For instance, when encoding quantum information as \begin{align} |{\psi}_{0}\rangle=\cos{\chi}|\tilde{0}\rangle+\sin{\chi}|\tilde{1}\rangle, \end{align} a photon jump leads to \begin{align} |{\psi}_{1}\rangle=\frac{a|{\psi}_{0}\rangle}{\sqrt{\langle{\psi}_{0}|a^{\dag}a|{\psi}_{0}\rangle}}=\cos\chi|3\rangle+\sin{\chi}|1\rangle, \end{align} which means that the information ($\cos\chi$ and $\sin\chi$) is not deformed \cite{Prx6031006}. {To manipulate the codes in Eq.~(\ref{eq7}), we need a three-frequency composite pulse, i.e., $k=(0,2,4)$ in Eq.~(\ref{eq1-2}). When $g/\omega_{c}\gtrsim 0.5$, the probability amplitudes $\left(c_{0}^{{2}},c_{2}^{{2}},c_{4}^{{2}}\right)$ of the Fock states $\left(|0\rangle,|2\rangle,|4\rangle\right)$ in the third dressed state $|\zeta_{{2}}\rangle$ are greater than in the other dressed states (see more details in Appendix~\ref{AA}). For this reason, we choose $m={2}$ in Eq.~(\ref{eq1-2}). 
Assuming $c_{0}^{{2}}\Omega_{0}=c_{4}^{{2}}\Omega_{4}$ and $\phi_{0}=\phi_{4}$, $H_{\rm{eff}}(t)$ becomes an effective $\Lambda$-type system with two ground states $\{|\tilde{0}\rangle,|\tilde{1}\rangle\}$ and an excited state $|\zeta_{2}\rangle\equiv|\zeta_{{m=2}}\rangle$. The NHQC in a $\Lambda$-type system has been well studied \cite{Njp14103035,Prl109170501,Pra72020301}.} For clarity, we define an effective driving amplitude \begin{align} {\Xi}=\sqrt{\sum_{k}{\left(c_{k}^{{2}}\Omega_{k}\right)^2}}, \end{align} and a time-independent parameter \begin{align} \theta=\frac{1}{2}\arctan\left(\frac{\sqrt{2}c_{0}^{{2}}\Omega_{0}}{c_{2}^{{2}}\Omega_{2}}\right), \end{align} to rewrite $H_{\rm{eff}}(t)$ to be \begin{eqnarray}\label{eq9} {H}_{\rm{eff}}(t)=\frac{\hbar}{2}{\Xi}\exp{\left({i{\phi}_{2}}\right)}|b\rangle\langle\zeta_{{2}}|+{\rm{H.c.}}, \end{eqnarray} where $\phi=\phi_{2}-\phi_{0}$ and \begin{align} |b\rangle=e^{-i\phi}\sin({\theta}/{2})|\tilde{0}\rangle|\mu\rangle+\cos({\theta}/{2})|\tilde{1}\rangle|\mu\rangle, \end{align} Initially, quantum information is stored in the logical qubit states of the subspace $\mathcal{S}_{{c}}$ (the atom is in $|\mu\rangle$). According to the invariant-based approaches for $\Lambda$-type transitions \cite{Pra83062116,Pra89033856,Pra86033405}, when \begin{align} {\Xi}\sin{\phi_{2}}&=\Omega_{{p}}(\beta,\varphi)\equiv(\dot{\beta} \cot \varphi \sin \beta+\dot{\varphi} \cos \beta), \cr {\Xi}\cos{\phi_{2}}&=\Omega_{{s}}(\beta,\varphi)\equiv(\dot{\beta} \cot \varphi \cos \beta-\dot{\varphi} \sin \beta), \end{align} the Hamiltonian in Eq.~(\ref{eq9}) can drive the system to evolve exactly along one of the two user-defined path (see details in Appendix~\ref{AB}) \begin{align} |\psi_{+}(t)\rangle=&\sin({\varphi}/2)|\mu\rangle|b\rangle+i\exp({i{\beta}})\cos({\varphi}/2)|\zeta_{{2}}\rangle,\cr |\psi_{-}(t)\rangle=&i\exp({i{\beta}})\cos({\varphi}/2)|b\rangle+\sin({\varphi}/2)|\zeta_{{2}}\rangle, \end{align} which are two eigenstates of a dynamical invariant $I(t)$ obeying $\hbar{\partial_{t}}I(t)=i[H_{\rm{eff}}(t),I(t)]$. For instance, when $\varphi(0)=0$, the evolution is along $|\psi_{-}(t)\rangle$, which acquires a dynamical phase \begin{align} \vartheta_{-}(t)=-\frac{1}{\hbar}\int_{0}^{t}\langle{\psi_{-}(t')}|H_{\rm{eff}}(t')|{\psi_{-}(t')}\rangle dt', \end{align} and a geometric phase \begin{align} \Theta_{-}(t)=\int_{0}^{t}\langle{\psi_{-}(t')}|{i\partial_{t'}}|{\psi_{-}(t')}\rangle dt'. \end{align} {For a cyclic evolution, the time-dependent auxiliary parameters ${\beta}$ and ${\varphi}$ need to satisfy $\beta(0)\neq \beta(t_{f})$ and ${\varphi}(0)={\varphi}(t_{{f}})=0$. We can choose \begin{align}\label{eq7a} {\varphi}=&\pi\sin^{2}(\pi t/T),\cr\cr {\beta}=&\frac{2}{3}\left\{\begin{array}{ll} 2\sin^{3}{\varphi},& \ \ \ \ t\in[0,t_{{f}}/2] \\ 2\sin^{3}{\varphi}-3\Theta_{{s}}.&\ \ \ \ t\in[t_{{f}}/2,t_{{f}}] \end{array} \right. \end{align} Thus, the final phases are $\vartheta_{-}(t_{f})=0$ and $\Theta_{-}(t_{f})=2\Theta_{s}$ [see Fig.~\ref{figefft}(b) and Appendix~\ref{AB}], resulting in a geometric evolution. In the computational subspace $\mathcal{S}_{{c}}$, the evolution operator is (omitting a global phase $\Theta_{s}$) \begin{eqnarray*}\label{eq10} U_{T}=\left(\begin{array}{cc} \cos{\Theta_{s}}+i\sin{\Theta_{s}}\cos{\theta} & i\sin{\Theta_{s}}\sin{\theta}e^{i\phi} \\ i\sin{\Theta_{s}}\sin{\theta}e^{-i\phi} & \cos{\Theta_{s}}-i\sin{\Theta_{s}}\cos{\theta} \end{array} \right). \end{eqnarray*} This is a universal single-qubit gate. 
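For concreteness, the pulse quadratures implied by the choice in Eq.~(\ref{eq7a}) can be tabulated directly. The sketch below (ours, with the example values $T=35~{\rm ns}$ and $\Theta_{s}=\pi/2$) uses the analytic form $\dot{\beta}\cot{\varphi}=4\dot{\varphi}\sin{\varphi}\cos^{2}{\varphi}$, so that the removable singularity at $\varphi=0$ causes no difficulty:
\begin{verbatim}
import numpy as np

T, Theta_s = 35.0, np.pi/2                 # ns; example values (Hadamard-type gate)
t = np.linspace(0.0, T, 2001)
phi  = np.pi*np.sin(np.pi*t/T)**2          # varphi(t) of Eq. (7a)
dphi = (np.pi**2/T)*np.sin(2*np.pi*t/T)
beta = (4.0/3.0)*np.sin(phi)**3 - 2.0*Theta_s*(t > T/2)
bcot = 4.0*dphi*np.sin(phi)*np.cos(phi)**2         # = d(beta)/dt * cot(varphi)
Omega_p = bcot*np.sin(beta) + dphi*np.cos(beta)
Omega_s = bcot*np.cos(beta) - dphi*np.sin(beta)
Xi   = np.sqrt(Omega_p**2 + Omega_s**2)    # effective drive amplitude
phi2 = np.arctan2(Omega_p, Omega_s)        # drive phase phi_2
print("peak Xi/2pi (GHz):", Xi.max()/(2*np.pi))
\end{verbatim}
Given ${\Xi}$ and $\phi_{2}$, the three tone amplitudes $\Omega_{0}$, $\Omega_{2}$, $\Omega_{4}$ follow from the definitions of ${\Xi}$ and $\theta$ together with $c_{0}^{{2}}\Omega_{0}=c_{4}^{{2}}\Omega_{4}$.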
In this case, the gate time is $T\sim 18 /c_{2}^{k}\Omega_{k}^{\rm{peak}}$. In the USC regime we can assume $c_{k}^{m}\gtrsim 0.1$ and $\omega_{c}/2\pi\sim 5~{\rm{GHz}}$ \cite{Nr119,Rmp91025005}, resulting in $T\gg 5~{\rm{ns}}$, i.e., the gate time can be tens of nanoseconds. Choosing $T=35~{\rm{ns}}$, $g\simeq 0.8\omega_{c}$ and $\omega_{c}/2\pi=6.25~{\rm{GHz}}$ \cite{Np1344}, the pulses $\Omega_{0,(2,4)}$ are shown in Fig.~\ref{figPF}(a). Note there that the peak values of the pulses are $\Omega_{k}^{\rm{peak}}/2\pi\sim 200~{\rm{MHz}}$. These satisfy the condition $\omega_{{c}},g\gg\Omega_{k}$. } {\section{Robustness against control imperfections and decoherence} It has been experimentally verified \cite{Prappl14054062} that the pulses chosen based on $\beta$ and $\varphi$ in Eq.~(\ref{eq7a}) can counteract the systematic errors induced by imperfections of the control fields $\Omega_{k}$, making the computation insensitive to such errors \cite{Njp14093040,Pra101032322,Prl111050404}. In the presence of such imperfections with error parameter $\delta_{{i}}$, the driving amplitudes become $\Omega_{k}^{{i}}=(1+\delta_{{i}})\Omega_{k}$. Accordingly, the effective Hamiltonian ${H}_{\rm{eff}}(t)$ should be corrected as ${H}_{\rm{eff}}^{{i}}(t)=(1+\delta_{{i}}){H}_{\rm{eff}}(t)$. By using time-dependent perturbation theory up to $\mathcal{O}(\delta_{{i}})$, the evolution state of the system is approximatively \begin{eqnarray*} |{\psi}_{-}^{{i}}(t)\rangle\approx |{\psi}_{-}(t)\rangle-\frac{i\delta_{{i}}}{\hbar}\int_{0}^{t_{{f}}}U(t_{{f}},t){H}_{\rm{eff}}^{{i}}(t)|{\psi}_{-}(t)\rangle dt \end{eqnarray*} where $U(t_{{f}},t)$ is the unperturbed time evolution operator. \begin{figure} \caption{ {For $(\Theta_{{s} \label{figPF} \end{figure} We assume that the protocol works perfectly when $\delta_{{i}}=0$, resulting in \begin{eqnarray*}\label{eqS23} P_{\rm{out}}\approx 1-\frac{\delta_{{i}}^2}{\hbar{^2}}\left|\int_{0}^{t_{{f}}} e^{2i\mathcal{R}_{-}(t)}\langle{\psi}_{+}(t)|{H}_{\rm{eff}}(t)|{\psi}_{-}(t)\rangle dt\right|^{2}, \end{eqnarray*} where $P_{\rm{out}}$ is the population of the output state after the gate operation and \begin{align} \mathcal{R}_{-}(t)=&\frac{1}{\hbar}\int_{0}^{t}\langle{\psi_{-}(t')}|\left[i{\hbar}{\partial_{t'}}-H_{\rm{eff}}(t')\right]|{\psi_{-}(t')}\rangle dt', \end{align} is the Lewis-Riesenfeld phase \cite{Jmp101458}. Then, the systematic error sensitivity can be defined as \cite{Njp14093040} \begin{align}\label{eqS22} q_{{i}}:=&-\left.\frac{1}{2}\frac{\partial^2 P_{\rm{out}}}{\partial \delta_{{i}}^2}\right|_{\delta_{{i}}=0} \cr =&\left|\int_{0}^{t_{{f}}}e^{i{\beta}+2i\mathcal{R}_{-}(t)}\dot{{\varphi}}\sin^{2}{\varphi} dt\right|^{2}. \end{align} Substituting $\varphi$ and $\beta$ [see Eq.~(\ref{eq7a})] into Eq.~(\ref{eqS22}), we obtain $q_{{i}}\simeq 0$ \cite{Njp14093040,Pra101032322,Prl111050404}, which means that the holonomic gates are insensitive to the systematic errors induced by the pulse imperfections. } \begin{figure} \caption{(a) Average infidelities $(1-\bar{F} \label{fig4} \end{figure} The average fidelity of a gate over all possible initial states can be defined by \cite{Pra70012315,Pla36747} \begin{align}\label{e15} \bar{F}=\left[\mathrm{Tr}(MM^\dag) +|\mathrm{Tr}(M)|^2\right]/(D^2+D), \end{align} with $M=\mathcal{P}_{{c}}U^\dag_{T}U\mathcal{P}_{{c}}$. Here, $\mathcal{P}_{{c}}$ ($D$) is the projector (dimension) of the subspace $\mathcal{S}_{{c}}$. 
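As a small worked example of Eq.~(\ref{e15}) (ours; the over-rotation used as an error model is an arbitrary choice), take the target gate $U_{T}$ with $(\Theta_{s},\theta,\phi)=(\pi/2,\pi/4,0)$ and a slightly over-rotated version of it:
\begin{verbatim}
import numpy as np

def U_gate(Theta_s, theta, phi):
    """Single-qubit holonomic gate U_T in the code space {|0~>, |1~>}."""
    return (np.cos(Theta_s)*np.eye(2)
            + 1j*np.sin(Theta_s)*np.array(
                [[np.cos(theta), np.sin(theta)*np.exp(1j*phi)],
                 [np.sin(theta)*np.exp(-1j*phi), -np.cos(theta)]]))

def avg_fidelity(U, U_target, D=2):
    M = U_target.conj().T @ U        # P_c U_T^dag U P_c, with P_c the identity here
    return (np.trace(M @ M.conj().T) + abs(np.trace(M))**2).real/(D**2 + D)

U_T = U_gate(np.pi/2, np.pi/4, 0.0)              # Hadamard-type gate
U   = U_gate(np.pi/2*1.02, np.pi/4, 0.0)         # 2% over-rotation (assumed error)
print("average fidelity:", avg_fidelity(U, U_T))
\end{verbatim}
With $U=U_{T}$ the expression returns $\bar{F}=1$, as it should.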
The evolution operator $U$, describing the actual dynamical evolution, is calculated with the total Hamiltonian $H_{\rm{tot}}(t)=H_{0}+H_{D}(t)$. {Using the above definition, in Fig.~\ref{figPF}(b), we show the average fidelity $\bar{F}$ of the Hadamard gate versus the error coefficient $\delta_{{i}}$. Note that, when $\delta_{{i}}\in[-0.1,0.1]$, the average fidelity is nearly $99.9\%$, indicating that our protocol is insensitive to the systematic error caused by pulse imperfections. } The average infidelities ($1-\bar{F}$) of arbitrary single-qubit gates are shown in Fig.~\ref{fig4}(a). The left (right) side of each circle denotes the average infidelity in the absence (presence) of pulse imperfections. When considering pulse imperfections with an error coefficient $\delta_{{i}}=0.1$, the infidelities only slightly increase from $\sim10^{-4}$ to $\sim 10^{-3}$. For instance, in the case of the Hadamard gate, pulse imperfections with an error coefficient $\delta_{i}=0.1$ only increase the infidelity from $<10^{-4}$ to $\sim10^{-3}$. This indicates that the generated gates are mostly insensitive to systematic errors. {Generally, a geometric gate can be robust against noise caused by amplitude fluctuations. Without loss of generality, we use additive white Gaussian noise to investigate the influence of such noise. In this case, the driving amplitudes $\Omega_{k}$ should be corrected to be $\Omega_{k}^{s}={\rm{AWGN}}(\Omega_{k},r)$. Here, ${\rm{AWGN}}(\Omega_{k},r)$ is a function that generates the additive white Gaussian noise (AWGN) to the original signal $\Omega_{k}$ with a signal-to-noise ratio $r$. Because the additive white Gaussian noise is generated randomly in each single simulation, we perform the numerical simulation 20 times to estimate its average influence [see Fig.~\ref{fig4}(b) with an illustraton of the Hadamard gate]. As shown in Fig.~\ref{fig4}(b), when considering relatively strong noises with $r=15$, the gate fidelities can still be higher than $99\%$. This indicates that our protocol is mostly insensitive to noise caused by amplitude fluctuations. } In the USC regime, relaxation and dephasing are studied in the basis $|\mathcal{E}_{j}\rangle$, which diagonalizes the Hamiltonian $H_{0}$. The master equation in the Born-Markov approximation, valid for generic hybrid-quantum systems, is \cite{Pra84043832,Prl110243601,Njp19053010,Pra89033827,Pra97033823} \begin{eqnarray}\label{eq1-6} \hbar \dot{\rho}(t)&=&i[\rho(t),H_{\rm{tot}}(t)]+ \sum_{\nu=0}^{3}\mathcal{D}\left[\sum_{j}\sqrt{\Lambda_{\nu}^{jj}}|\mathcal{E}_{j}\rangle\langle\mathcal{E}_{j}|\right]\rho(t) \cr&&+\sum_{\nu'=0}^{5}\sum_{j>j',j'}\Gamma_{\nu'}^{jj'}\mathcal{D}[|\mathcal{E}_{j'}\rangle\langle\mathcal{E}_{j}|]\rho(t), \end{eqnarray} where $\mathcal{D}[\mathcal{O}]\rho(t)=\mathcal{O}\rho(t)\mathcal{O}^{\dag}-[\rho(t) \mathcal{O}^{\dag}\mathcal{O}+\mathcal{O}^{\dag}\mathcal{O}\rho(t)]/2$ is the Lindblad superoperator. 
For simplicity, the dephasing and relaxation parameters have been written in a compact form: \begin{align} \Lambda_{0}^{jj}&=\kappa^{\phi}|\langle\mathcal{E}_{j}|a^{\dag}a|\mathcal{E}_{j}\rangle|^{2}, \cr \Lambda_{1}^{jj}&=\kappa|\langle\mathcal{E}_{j}|a^{\dag}+a|\mathcal{E}_{j}\rangle|^{2}, \cr \Lambda_{2,(3)}^{jj}&=\gamma_{g,(\mu)}^{\phi}|\langle\mathcal{E}_{j}|\sigma_{g,(\mu)}^{z}|\mathcal{E}_{j}\rangle|^{2},\cr \Gamma_{{0}}^{jj'}&=\kappa^{\phi}|\langle\mathcal{E}_{j'}|a^{\dag}a|\mathcal{E}_{j}\rangle|^{2}, \cr \Gamma_{{1}}^{jj'}&=\kappa|\langle\mathcal{E}_{j'}|a^{\dag}+a|\mathcal{E}_{j}\rangle|^{2},\cr \Gamma_{2,(3)}^{jj'}&=\gamma_{g,(\mu)}|\langle\mathcal{E}_{j'}|\sigma_{g,(\mu)}^{x}|\mathcal{E}_{j}\rangle|^{2},\cr \Gamma_{4,(5)}^{jj'}&=\gamma_{g,(\mu)}^{\phi}|\langle\mathcal{E}_{j'}|\sigma_{g,(\mu)}^{z}|\mathcal{E}_{j}\rangle|^{2}. \end{align} Here, $\sigma_{\mu}^{x}=|\mu\rangle\langle g|+|g\rangle\langle\mu|$, $\kappa$ ($\kappa^{\phi}$) is the cavity decay (dephasing) rate, $\gamma_{g,(\mu)}$ is the spontaneous emission rate of the transition $|e\rangle\rightarrow|g\rangle$ ($|g\rangle\rightarrow|\mu\rangle$), and $\gamma^{\phi}_{g,(\mu)}$ is the atomic dephasing rate corresponding to $\sigma_{g,(\mu)}^{z}$ ($\sigma_{\mu}^{z}=|g\rangle\langle g|-|\mu\rangle\langle \mu|$). To check the robustness of the geometric gates against decoherence, we assume the input state as $|{\psi}_{\rm{in}}\rangle=|\tilde{0}\rangle$, corresponding to an output state $|{\psi}_{\rm{out}}\rangle=U_{T}|{\psi}_{\rm{in}}\rangle$. Using $(\Theta_{{s}},\theta,\phi)=(\pi/2,\pi/4,0)$ (Hadamard gate), in Fig.~\ref{fig5}(a) we show the fidelity $F_{\rm{out}}=\langle\psi_{\rm{out}}|\rho(t_{{f}})|\psi_{\rm{out}}\rangle$ versus $\gamma$ and $\kappa$ in the presence of pulse imperfections when $\delta_{{i}}=0.1$. In this figure we notice that the dissipation and dephasing of the atom affect the evolution much weaker than those of the cavity. For experimentally realistic parameters of superconducting circuit experiments \cite{Prl124120501}, $\left(\kappa,\kappa^{\phi},\gamma_{g,(\mu)},\gamma_{g,(\mu)}^{\phi}\right)\simeq2\pi\times\left(0.33,0.3,8,8\right)~{\rm{kHz}}$, the fidelity of the output state is $F_{\rm{out}}\simeq99.56\%$, indicating that our protocol is robust against decoherence. \begin{figure} \caption{ (a) Fidelity $F_{\rm{out} \label{fig5} \end{figure} {\section{Multi-qubit gates} Our protocol can be extended to implement multi-qubit holonomic gates, such as two-qubit gates. We consider that the $\Xi$-type atom ultrastrongly couples to a bimodal cavity (frequencies $\omega_{a}$ and $\omega_{b}$). The system Hamiltonian is described by \begin{align} H'_{\rm{tot}}=&H'_{{R}}+\hbar \omega_{\mu}|\mu\rangle\langle\mu|+H_{{D}}(t), \cr\cr H'_{{R}}=&\hbar\omega_{{a}}a^{\dag}a+\hbar\omega_{{b}}b^{\dag}b+\frac{\hbar\omega_{{q}}}{2}\sigma^{z}_{g}\cr &+\hbar[g_{a}(a+a^{\dag})+g_{b}(b+b^{\dag})]\sigma^{x}_{g}. \end{align} The eigenstates of $H'_{{R}}$ corresponding to the eigenvalues $E'_{m}$ can be described by \begin{align*} |\zeta'_{m}\rangle=\sum_{n_{a},n_{b},m}c^{m}_{n_{a},n_{b}}|g\rangle|n_{a}\rangle_{a}|n_{b}\rangle_{b} +d^{m}_{n_{a},n_{b}}|e\rangle|n_{a}\rangle_{a}|n_{b}\rangle_{b}, \end{align*} where $|n_{a}\rangle_{b}$ and $|n_{b}\rangle_{b}$ denote the Fock states of the two cavity modes, respectively. 
{\renewcommand\arraystretch{4.2}
\begin{table}
\centering{
\caption{Implementation examples of two-qubit gates\\ }
\label{tab2}
\begin{tabular}{p{1.3cm}<{\centering}|p{4cm}<{\centering}|p{3cm}<{\centering}}
\hline \hline
gate & matrix & parameters $(\Theta_{{s}},\theta_{0},\theta_{1},\theta_{2},\phi)$ \\ \hline
CNot & $\left(\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right) $ & $({\pi}/{2},0,{\pi}/{2},{\pi}/{2},\pi)$ \\ \hline
SWAP & $\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 \end{array} \right) $ & $({\pi}/{2},-{\pi}/{2},0,\pi,\pi)$ \\ \hline
$\sqrt{\rm{SWAP}}$ & {$\left(\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & \frac{1}{2}(1+i) & \frac{1}{2}(1-i) & 0\\ 0 & \frac{1}{2}(1-i) & \frac{1}{2}(1+i) & 0\\ 0 & 0 & 0 & 1 \end{array} \right) $} & $({\pi}/{4},-{\pi}/{2},\pi,0,\pi)$ \\ \hline\hline
\end{tabular}}
\end{table}
Then, we assume that the driving field is
\begin{align}
\Omega=\Omega_{k_{a},k_{b}}\cos(\omega_{k_{a},k_{b}}t+\phi_{k_{a},k_{b}}).
\end{align}
When choosing the frequencies such that $\omega_{a}/\omega_{b}\neq 0, 1, 2,\ldots$, and
\begin{align}
\omega_{k_{a},k_{b}}=E'_{m}-\omega_{\mu}-k_{a}\omega_{a}-k_{b}\omega_{b},
\end{align}
the effective Hamiltonian is approximately
\begin{align}\label{eqS25}
H'_{\rm{eff}}(t)=&\frac{\hbar}{2}\sum_{k_{a},k_{b}}{c^{m}_{k_{a},k_{b}}\Omega_{k_{a},k_{b}}}\exp{\left(i\phi_{k_{a},k_{b}}\right)}|\mu\rangle|k_{a}\rangle_{a}|k_{b}\rangle_{b}\langle\zeta'_{m}|\cr &+{\rm{H.c.}}.
\end{align}
For simplicity, we assume that the intermediate state is the dressed state $|\zeta'_{{0}}\rangle$, so that the driving amplitudes become
\begin{align*}
&c^{{0}}_{0,0}\Omega_{0,0}=c^{{0}}_{0,4}\Omega_{0,4}=c^{{0}}_{4,0}\Omega_{4,0}=c^{{0}}_{4,4}\Omega_{4,4}=\Xi_{\tilde{0}\tilde{0}}(t)/2,\cr &c^{{0}}_{0,2}\Omega_{0,2}=c^{{0}}_{4,2}\Omega_{4,2}=\Xi_{\tilde{0}\tilde{1}}(t)/\sqrt{2},\cr &c^{{0}}_{2,0}\Omega_{2,0}=c^{{0}}_{2,4}\Omega_{2,4}=\Xi_{\tilde{1}\tilde{0}}(t)/\sqrt{2},\cr &c^{{0}}_{2,2}\Omega_{2,2}=\Xi_{\tilde{1}\tilde{1}}(t),
\end{align*}
and the phases are
\begin{align}
&\phi_{0,0}= \phi_{0,4} =\phi_{4,0} =\phi_{4,4}=\phi_{\tilde{0}\tilde{0}},\cr &\phi_{0,2}=\phi_{4,2}=\phi_{\tilde{0}\tilde{0}}+\phi,\cr &\phi_{2,0}=\phi_{2,4}=\phi_{\tilde{0}\tilde{0}}+\phi,\cr &\phi_{2,2}=\phi_{\tilde{0}\tilde{0}}+\phi.
\end{align}
Here, the auxiliary parameter $\phi_{\tilde{0}\tilde{0}}$ is time-dependent and the auxiliary parameter $\phi$ is time-independent. }
The effective Hamiltonian in Eq.~(\ref{eqS25}) becomes
\begin{align}
\tilde{H}'_{\rm{eff}}(t)= \frac{\hbar}{2}{{\Xi'}_{0}(t)\exp{\left[i\phi_{\tilde{0}\tilde{0}}\right]}}|\mu\rangle|{b'}\rangle\langle\zeta'_{{0}}|+{\rm{H.c.}},
\end{align}
with the binomial codes
\begin{align}
|\tilde{0}\rangle_{a}=&\frac{1}{\sqrt{2}}(|0\rangle_{a}+|4\rangle_{a}),\ \ \ |\tilde{1}\rangle_{a}=|2\rangle_{a},\cr |\tilde{0}\rangle_{b}=&\frac{1}{\sqrt{2}}(|0\rangle_{b}+|4\rangle_{b}),\ \ \ \ |\tilde{1}\rangle_{b}=|2\rangle_{b}.
\end{align} Here, the bright state $|b'\rangle$ can be defined as \begin{align*} |b'\rangle=& e^{-i\phi}\cos{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{0}\rangle_{b} +\cos{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{1}\rangle_{b}\cr &+\sin{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{0}\rangle_{b} +\sin{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{1}\rangle_{b}, \end{align*} with auxiliary parameters \begin{align*} {\Xi'}_{0}(t)=&\sqrt{\left[\Xi_{\tilde{0}\tilde{0}}(t)\right]^{2} +\left[\Xi_{\tilde{0}\tilde{1}}(t)\right]^{2} +\left[\Xi_{\tilde{1}\tilde{0}}(t)\right]^{2} +\left[\Xi_{\tilde{1}\tilde{1}}(t)\right]^{2}}, \cr \theta_{0}=&2\arctan\left[\frac{\sqrt{\Xi^{2}_{\tilde{1}\tilde{0}}(t)+\Xi^{2}_{\tilde{1}\tilde{1}}(t)}} {{\sqrt{\Xi^{2}_{\tilde{0}\tilde{0}}(t)+\Xi^{2}_{\tilde{0}\tilde{1}}(t)}}}\right], \cr \theta_{1}=&2\arctan\left[\frac{\Xi_{\tilde{0}\tilde{1}}(t)}{\Xi_{\tilde{0}\tilde{0}}(t)}\right], \ \ \ \theta_{2}=2\arctan\left[\frac{\Xi_{\tilde{1}\tilde{1}}(t)}{\Xi_{\tilde{1}\tilde{0}}(t)}\right]. \end{align*} For simplicity, we choose $\theta_{0,(1,2)}$ to be time-independent. The orthogonal partners of the state $|b'\rangle$ become \begin{align*} |d_{1}\rangle=& e^{-i\phi}\sin{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{0}\rangle_{b} +\sin{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{1}\rangle_{b}\cr &-\cos{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{0}\rangle_{b} -\cos{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{1}\rangle_{b}, \cr |d_{2}\rangle=& e^{-i\phi}\cos{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{0}\rangle_{b} -\cos{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{1}\rangle_{b}\cr &+\sin{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{0}\rangle_{b} -\sin{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{1}\rangle_{b}, \cr |d_{3}\rangle=& e^{-i\phi}\sin{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{0}\rangle_{b} -\sin{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{1}}{2}}|\tilde{0}\rangle_{a}|\tilde{1}\rangle_{b}\cr &-\cos{\frac{\theta_{0}}{2}}\sin{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{0}\rangle_{b} +\cos{\frac{\theta_{0}}{2}}\cos{\frac{\theta_{2}}{2}}|\tilde{1}\rangle_{a}|\tilde{1}\rangle_{b}. \end{align*} Then, by using the same strategy as that of the single-qubit case, we choose \begin{align*} {\Xi'}_{0}(t)\sin{\phi_{\tilde{0}\tilde{0}}} =&\Omega_{{p}}({\beta},{\varphi})/2 =\dot{{\beta}} \cot {\varphi} \sin {\beta}+\dot{{\varphi}} \cos {\beta},\cr {\Xi'}_{0}(t)\cos{\phi_{\tilde{0}\tilde{0}}} =&\Omega_{{s}}({\beta'},{\varphi})/2 =\dot{{\beta}} \cot {\varphi} \cos {\beta}-\dot{{\varphi}} \sin {\beta}. \end{align*} The evolution operator after a cyclic evolution along \begin{align} |\psi'_{-}(t)\rangle=ie^{i{\beta}}({\varphi}/2)|\mu\rangle|b\rangle+\sin({\varphi}/2)|\zeta'_{{0}}\rangle, \end{align} in the subspace spanned by $\{|b'\rangle,|d_{1}\rangle,|d_{2}\rangle,|d_{3}\rangle\}$ is given by \begin{align} U'_{T}=\left(\begin{array}{cccc} \exp{(2i\Theta_{{s}})} & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right). 
\end{align} In the computational subspace spanned by $\{|\tilde{0}\rangle_{a}|\tilde{0}\rangle_{b},~|\tilde{0}\rangle_{a}|\tilde{1}\rangle_{b},~|\tilde{1}\rangle_{a}|\tilde{0}\rangle_{b},~|\tilde{1}\rangle_{a}|\tilde{1}\rangle_{b}\}$, the evolution operator $U'_{T}$ describes a universal two-qubit geometric gate (see table~\ref{tab2} for examples). These two-qubit gates using the same strategy as the single-qubit case are also insensitive to the errors induced by pulse imperfections. Therefore, when considering the error coefficient $\delta_{{i}}=0.1$, in Fig.~\ref{fig5}(b), we show that arbitrary two-qubit gates can be implemented with high fidelities. } \section{Preparing superpositions of Fock states} \begin{figure} \caption{Histograms of Fock-state populations calculated with the total Hamiltonian $H_{\rm{tot} \label{fig1} \end{figure} High-fidelity input states are needed to verify the feasibility of the proposed NHQC in experiments. To generate these input states, starting from Eq.~(\ref{eq1-2}), we assume \begin{align} \phi_{0}&=0,\ \ \ \ \ \ c_{k'}^{m}\Omega_{k'}=\epsilon_{k'}\Omega_{{s}}(\tilde{\beta},\tilde{\varphi}), \cr \phi_{k'}&=\pi, \ \ \ \ \ \ c_{0}^{m}\Omega_{0}=\Omega_{{p}}(\tilde{\beta},\tilde{\varphi}) \end{align} where, $k'\neq 0$ are even numbers, $\epsilon_{k'}$ are time-independent coefficients satisfying $\sum_{k'}|\epsilon_{k'}|^2=1$, $\tilde{\beta}$ and $\tilde{\varphi}$ satisfying $\tilde\beta(0)\simeq\tilde\varphi(0)\simeq0$ are time-dependent auxiliary parameters to be determined. Then, the evolution governed by $H_{\rm{eff}}(t)$ is \begin{align} |{\tilde\psi_{0}(t)}\rangle=&\cos{\tilde\varphi}(\cos{\tilde\beta}|0\rangle+\sin{\tilde\beta}\sum\nolimits_{k'}\epsilon_{k'}|k'\rangle)|\mu\rangle\cr &-i\sin{\tilde\varphi}|\zeta_{{2}}\rangle. \end{align} When $\tilde\varphi(t_{f})\simeq0$ and $\tilde\beta(t_{f})=\tilde\beta_{f}$, we obtain \begin{align}|\tilde\psi_{0}(t_{f})\rangle=\left(\cos{\tilde\beta_{f}}|0\rangle+\sin{\tilde\beta_{f}}\sum\nolimits_{k'}\epsilon_{k'}|k'\rangle\right)|\mu\rangle,\end{align} which is an arbitrary superposition of even-number Fock states. The boundaries for $\tilde\beta$ and $\tilde\varphi$ can be satisfied by choosing \begin{align} \tilde\beta&=\frac{\tilde\beta_{f}}{\left[1+\exp{\left({-{t}/{\tau}+{T}/{2\tau}}\right)}\right]},\cr \tilde\varphi&=\frac{\tilde\varphi_{0}} {\exp{\left({t}/{\tau_{c}}-{T}/{2\tau_{c}}\right)^{2}}}, \end{align} with parameters $\left(\tilde\varphi_{0},\tau,\tau_{c}\right)=\left(\pi/5,0.11 5T,0.3T\right)$ \cite{Pra80013417,Rmp701003}. For instance, when $k=(0,2,4)$, $m={2}$, $\tilde{\beta}_{f}=\arccos(1/\sqrt{6})$, and $\epsilon_{2}=2/\sqrt{5}$ we can generate an input state $|\psi_{\rm{in}}\rangle=(|\tilde{0}\rangle+\sqrt{2}|\tilde{1}\rangle)/\sqrt{3}$, as shown in Fig.~\ref{fig1}(a). This figure shows the final populations $P_{k}=\langle\mu|\langle k|\rho(t_{{f}})|k\rangle|\mu\rangle$ and the Wigner function $W(\alpha)={2}{\rm{Tr}}[D_{\alpha}^{\dag}\rho(t_{{f}})D_{\alpha}e^{i\pi a^{\dag}a}]/\pi$, where $D_{\alpha}=\exp(\alpha a^{\dag}-\alpha^{*}a)$ is the displacement operator. As shown in Fig.~\ref{fig1}(a) the full dynamics [green histograms] is in excellent agreement with the effective dynamics [yellow histograms]. In the presence of decoherence, the populations [red-solid broken line in Fig.~\ref{fig1}(a)], calculated using the master equation in Eq.~(\ref{eq1-6}) are almost the same as those calculated using the coherent dynamics when feasible parameters are considered. 
This indicates that our protocol for state preparation is robust against decoherence. The above approach can be used to generate Schr\"{o}dinger's cat states \cite{Phys72597,Pra71063820,Np7799,Pra100012124,Zhou2021,Qin2021}, e.g., the even cat state
\begin{align}
|\mathcal{C}_{{e}}^{\eta}\rangle=e^{|\eta|^2/2}\sqrt{\rm{sech}|\eta|^2}(|\eta\rangle+|-\eta\rangle)/2,
\end{align}
when $m={0}$, ${\tilde{\beta}_{f}}=\arccos{\left(\sqrt{{\rm{sech}|\eta|^2}}\right)}$ and $\epsilon_{k'}=-(\eta^{k'}\cot{\tilde{\beta}_{f}})/\sqrt{k'!}$, where $\eta$ is the amplitude of the coherent state $|\eta\rangle$. In Fig.~\ref{fig1}(b), we show that the even cat state can be generated with a high fidelity. These generated high-fidelity cat states are useful for cat-code quantum computation \cite{Prl116140502,Njp16045014}.
\section{Conclusion}
We have investigated the possibility of using USC systems for the implementation of \textit{fast, robust, and fault-tolerant} holonomic computation. The dressed-state properties of the USC systems allow one to \textit{simultaneously couple} the dressed state $|\zeta_{m}\rangle$ to multiple Fock states, such that one can manipulate the population and the phase of each Fock state as desired. The binomial codes formed from these Fock states are protected against single-photon loss, making the computation fault-tolerant. Moreover, by designing the pulses with invariant-based engineering, we can eliminate the dynamical phase and achieve only the geometric phase in a cyclic evolution. Such a control technique is compatible with the systematic-error-sensitivity nullification method, making the evolution mostly insensitive to the systematic errors caused by pulse imperfections. Additionally, using the USC regime allows one to apply relatively strong driving fields, such that our protocols are fast. As a result, our protocols are robust against the decay and dephasing of the cavity and the atom. Note that the present scheme provides free control over a single bosonic mode. The proposed idea can be generalized to realize NHQC with other bosonic error-correction qubits, such as cat-qubits \cite{Prl116140502,Njp16045014}, for fast, robust, and fault-tolerant quantum computation.
The proposed protocols can be realized in superconducting circuits \cite{Nr119,Rmp91025005,Rmp841,Rmp85623,Pr7181,Nat474589,An16767,Pra90053833,Prl103147003,Np6772,Prl105237001,npjQI346,Np1344,Sr626720,Prb93214501,Pra96012325,Pra95053824,Prl120183601}. For instance, one can inductively couple a flux qubit and an $LC$ oscillator via Josephson junctions \cite{Np1344} to reach the needed coupling strength. The quantized level structure in Fig.~\ref{figmodel}(b) can be realized by adjusting the external magnetic flux through the qubit loop \cite{Prl110243601,Njp19053010,Pra89033827}. {Some experimental observations of the ultrastrong light-matter coupling in superconducting quantum circuits are listed in Table~\ref{tabS2}. To reach the ultrastrong and deep-strong coupling regimes, we can choose a setup with a flux qubit coupled to a lumped-element $LC$ resonator \cite{Np1344,Pra95053824}. In such superconducting circuit experiments, qubit and resonator frequencies are usually in the range $\omega_{c,(q)}/2\pi\sim1$--$10~{\rm{GHz}}$. Thus, we choose $g/\omega_{c}\simeq 0.7~(0.8)$ and $\omega_{c}/2\pi=6.25~{\rm{GHz}}$, which are experimentally feasible, as shown in Table~\ref{tabS2}. }
{\renewcommand\arraystretch{1.4}
\begin{table}
\centering
\caption{Superconducting experiments that have achieved the ultrastrong light-matter coupling.
Abbreviations are FQ=flux qubit, TR=transmon qubit, TL=transmission line resonator, and LE=lumped-element resonator.}
\label{tabS2}
\begin{tabular}{p{2cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}p{1cm}<{\centering}}
\hline \hline
{Year \& Ref.} & {Qubit} & Cavity & $g/2\pi$ (MHz) & $\omega_{c}/2\pi$ (GHz) & $g/\omega_{c}$ \\ \hline
2010 \cite{Np6772} & FQ & TL & 636 & 5.357 & 0.12 \\
2010 \cite{Prl105237001} & FQ & LE & 810 & 8.13 & 0.1 \\
2017 \cite{Np1344} & FQ & LE & 7630 & 5.711 & 1.34 \\
2017 \cite{Pra95053824} & FQ & LE & 5310 & 6.203 & 0.86 \\
2017 \cite{npjQI346} & TR & TL & 897 & 4.268 & 0.19 \\
2018 \cite{Prl120183601} & FQ & LE & 7480 & 6.335 & 1.18 \\
\hline \hline
\end{tabular}
\end{table}}
Recent experimental work has demonstrated that dissipation and dephasing rates in a flux qubit are of the order of $2\pi\times 10~{\rm{kHz}}$ \cite{Rmp91025005,Prl113123601,Prb93104518}. Transmon qubits, which have lower anharmonicity than flux qubits, can have dissipation and dephasing rates approaching $2\pi\times 1~\rm{kHz}$ \cite{Prb75140515,Nc712964}. For transmission-line resonators, quality factors $Q=\omega_{c}/\kappa$ on the order of $10^{6}$ have been realized \cite{Prl107240501}, which indicates that quantum coherence of single photons up to $1\sim10~\rm{ms}$ is within current experimental capabilities \cite{Apl100113510}. Therefore, our proposal works well in the USC regime, and it may find compelling applications in quantum information processing for various USC systems, in particular, superconducting systems.
\begin{appendix}
\section{Effective Hamiltonian}\label{AA}
The total Hamiltonian for this protocol can be written as
\begin{align}\label{eqs1-1}
H_{\rm{tot}}&=H_{0}+H_{{D}}(t), \cr H_{0}&=\hbar\sum_{m=0}^{\infty}E_{m}|\zeta_{m}\rangle\langle\zeta_{m}| +\sum_{n=0}^{\infty}\hbar(\omega_{\mu}+n{\omega}_{{c}})|\mu\rangle\langle\mu|\otimes|n\rangle\langle n|, \cr H_{{D}}(t)&=\hbar\Omega(|\mu\rangle\langle g|+|g\rangle\langle\mu|).
\end{align}
Here, $|\zeta_{m}\rangle$ are the dressed eigenstates of the Rabi Hamiltonian with eigenvalues $E_{m}$, $\omega_{\mu}$ denotes the energy of the lowest atomic level $|\mu\rangle$, $n$ is the cavity photon number, and $\Omega=\sum_{k}\Omega_{k}\cos(\omega_{k}t+\phi_{k})$ is a composite pulse driving the atomic transition $|\mu\rangle\leftrightarrow|g\rangle$. Performing the unitary transformation $U_{d}=\exp(-iH_{0}t/\hbar)$ and choosing the frequencies as $\omega_{{k}}=E_{m}-\omega_{\mu}-k\omega_{{c}}$, we have
\begin{widetext}
\begin{align}\label{eqs1-2}
H'_{D}(t)=&\frac{\hbar}{2}\sum_{k}\sum_{m'}\sum_{n}c_{n}^{m'}{\Omega_{k}}|\mu\rangle|n\rangle\langle\zeta_{m'}|\left\{\right.\exp{\left[-i\Delta E_{m,m'}t+i (n-k) {\omega}_{{c}}t+i\phi_{k}\right]}\cr &+\exp{[-i\Delta E_{m,m'}t+i (n-k) {\omega}_{{c}}t-2i\omega_{{k}}t-i\phi_{k}]}\left.\right\} +\rm{H.c.},
\end{align}
\end{widetext}
where $\Delta E_{m,m'}=E_{m'}-E_{m}$ is the energy gap between the eigenstates $|\zeta_{m'}\rangle$ and $|\zeta_{m}\rangle$.
\begin{figure*}
\caption{Probability amplitudes of $|g\rangle|0\rangle$, $|g\rangle|2\rangle$, and $|g\rangle|4\rangle$ in the dressed states (a) $|\zeta_{{0}}\rangle$ and (b) $|\zeta_{{2}}\rangle$, and (c) the dressed-state energy levels, as functions of $g/\omega_{c}$.}
\label{figS2}
\end{figure*}
Obviously, when the conditions
\begin{align}\label{eqS5}
c_{n}^{m'}\Omega_{k}\ll ~ &|(n-k)\omega_{{c}}-\Delta E_{m,m'}| , \cr c_{n}^{m'}\Omega_{k}\ll ~ &|(n-k)\omega_{{c}}-2\omega_{k}-\Delta E_{m,m'}|,
\end{align}
are satisfied, the fast-oscillating terms can be neglected in the rotating wave approximation (RWA).
Then, the effective Hamiltonian becomes \begin{align}\label{eqs1-4} H_{\rm{eff}}(t)=&\frac{\hbar}{2}\sum_{k}{c_{k}^{m}\Omega_{k}}e^{i\phi_{k}}|\mu\rangle|k\rangle\langle\zeta_{m}|+{\rm{H.c.}}, \end{align} i.e., the effective Hamiltonian in Eq.~(\ref{eq1-2}). The coefficients $c_{n}^{m}=\langle\zeta_{m}|g\rangle|n\rangle$ and $d_{n\pm 1}^{m}=\langle\zeta_{m}|e\rangle|n\pm 1\rangle$ can be obtained numerically [see Fig.~\ref{figS2}(a) as an example for the ground dressed state $|\zeta_{{0}}\rangle$]. According to our numerical results, when $0.5\lesssim g/\omega_{c}\lesssim 1$, the probability amplitudes $\left(c_{0}^{{2}},c_{2}^{{2}},c_{4}^{{2}}\right)$ of the states $\left(|g\rangle|0\rangle,|g\rangle|2\rangle,|g\rangle|4\rangle\right)$ in the third dressed state $|\zeta_{{2}}\rangle$ are greater [see Fig.~\ref{figS2}(b)] than in the others. Thus, when focusing on manipulating the Fock states ($|0\rangle,|2\rangle,|4\rangle$), the effective driving intensities (i.e., $c_{0}^{{2}}\Omega_{0}$, $c_{2}^{{2}}\Omega_{2}$, and $c_{4}^{{2}}\Omega_{4}$) can be much stronger by using $|\zeta_{{2}}\rangle$ to be the intermediate state. Therefore, the gate time can be shortened. In Fig.~\ref{figS2}(b), we find that the coefficients $c_{n}^{m}$ jump from zero to nonzero values when $g/\omega_{c}\simeq 0.43$. This is caused by an avoided level crossing \cite{Nr119,Rmp91025005} when $g/\omega_{c}\simeq 0.43$ [see the red circle in Fig.~\ref{figS2}(c)]. In Fig.~\ref{figS2}(c) we notice that the dressed states become nonequidistant as the energy gap $\Delta E_{m,m+1}\neq {\rm{constant}}$ when $0.1\lesssim g/ \omega_{c}\lesssim1$. For instance, when $g/\omega_{c}\sim 0.5$, we have $|\Delta E_{m,m+1}-\Delta E_{m+1,m+2}|\gtrsim 0.5\omega_{c}$. This indicates that the USC can induce strong anharmonicity in the dressed states $|\zeta_{m}\rangle$. \section{Dynamical and geometric phases}\label{AB} An operator $I(t)$ satisfying $\hbar{\partial_{t}}I(t)=i[H(t),I(t)]$ is a dynamical invariant of an arbitrary Hamiltonian $H(t)$. According to \cite{Jmp101458}, an arbitrary solution of the Schr\"{o}dinger equation \begin{align}\label{eq2s-5} i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle=H(t)|\psi(t)\rangle, \end{align} can be expressed by using the eigenstates of $I(t)$ as \begin{align}\label{eq2s-6} |\psi(t)\rangle=&\sum_{n} {C_{n}e^{i\mathcal{R}_{n}(t)}|{\psi_{n}(t)}}\rangle, \cr \mathcal{R}_{n}(t)=&\frac{1}{\hbar}\int_{0}^{t}\langle{\psi_{n}(t')}|\left[i{\hbar}{\partial_{t'}}-{H(t')}\right]|{\psi_{n}(t')}\rangle dt', \end{align} where $C_{n}$ are time-independent amplitudes, $|\psi_{n}(t)\rangle$ are the orthonormal eigenvectors of $I(t)$, and $\mathcal{R}_{n}(t)$ are the Lewis-Riesenfeld phases \cite{Jmp101458}. These phases include dynamical phases \begin{align} \vartheta_{n}(t)=-\frac{1}{\hbar}\int_{0}^{t}\langle{\psi_{n}(t')}|H(t')|{\psi_{n}(t')}\rangle dt', \end{align} and geometric phases \begin{align} \Theta_{n}(t)=\int_{0}^{t}\langle{\psi_{n}(t')}|{i\partial_{t'}}|{\psi_{n}(t')}\rangle dt'. \end{align} For instance, when $\langle\psi(0)|\psi_{0}(0)\rangle=1$, we have $C_{0}=1$ and $C_{n\neq0}=0$. The evolution of the system is exactly along the eigenstate $|\psi_{0}(t)\rangle$, which is a shortcut to the adiabatic passage of $H(t)$. 
The effective Hamiltonian \begin{align}\label{eqS9} {H}_{\rm{eff}}(t)=\frac{\hbar}{2}{\Omega}_{0}e^{i{\phi}_{2}}|\zeta_{{2}}\rangle\langle b|\langle\mu|+{\rm{H.c.}}, \end{align} in Eq.~(\ref{eq9}) for the NHQC can be regarded as the intermediate state $|\zeta_{{2}}\rangle$ coupled to the bright state \begin{align}\label{eqS10} |b\rangle=e^{-i\phi}\sin({\theta}/{2})|\tilde{0}\rangle+\cos({\theta}/{2})|\tilde{1}\rangle, \end{align} but decoupled from the dark state \begin{align}\label{eqS11} |d\rangle=e^{-i\phi}\cos({\theta}/{2})|\tilde{1}\rangle-\sin({\theta}/{2})|\tilde{0}\rangle. \end{align} A dynamical invariant of ${H}_{\rm{eff}}(t)$ is \begin{align}\label{eqS12} {I}(t)=&\cos{{\varphi}}(|\zeta_{{2}}\rangle\langle\zeta_{{2}}| -|b\rangle\langle b|\otimes|\mu\rangle\langle \mu|)\cr &+\left(e^{i{\beta}}\sin{{\varphi}}|\zeta_{{2}}\rangle\langle b|\langle\mu|+{\rm{H.c.}}\right), \end{align} with eigenvectors \begin{align}\label{eqS13} |\psi_{+}(t)\rangle&=\sin({\varphi}/2)|\mu\rangle|b\rangle+ie^{-i{\beta}}\cos({\varphi}/2)|\zeta_{{2}}\rangle, \cr |\psi_{-}(t)\rangle&=ie^{i{\beta}}\cos({\varphi}/2)|\mu\rangle|b\rangle+\sin({\varphi}/2)|\zeta_{{2}}\rangle. \end{align} Then, substituting Eqs.~(\ref{eqS9}) and (\ref{eqS13}), into Eq.~(\ref{eq2s-6}), the time derivatives of the dynamic phases and geometric phases acquired by $|{\psi}_{\pm}(t)\rangle$ are \begin{align}\label{eqS14} \dot{\vartheta}_{\pm}(t)=&\mp\frac{\dot{{\beta}}}{2}\sin{\varphi}\tan{{\varphi}}, \cr \dot{\Theta}_{\pm}(t)=&\pm\frac{\dot{{\beta}}}{2}(1-\cos{\varphi}), \end{align} respectively. Obviously, $\dot{\vartheta}_{\pm}(t)$ and $\dot{\Theta}_{\pm}(t)$ obey the same mathematical symmetry. To eliminate the dynamical phases and achieve only the geometric phases, we can design a piecewise function for ${\beta}$, e.g., \begin{align}\label{eqS15} {\beta}=\left\{\begin{array}{ll} f(t),& t\in[0,t_{{f}}/2] \\ \\ f(t)-2\Theta_{{s}},& t\in[t_{{f}}/2,t_{{f}}] \end{array} \right. \end{align} where $\Theta_{{s}}$ is a constant. Then, we assume $\dot{\vartheta}_{\pm}(t-t_{{f}}/2)$ to be odd functions, leading to \begin{align}\label{eqS16} \vartheta_{\pm}&=\mp\int_{\frac{t_{{f}}}{2}}^{\frac{t_{{f}}}{2}+\Delta t}\Theta_{{s}}(\sin{\varphi}\tan{{\varphi}})dt+\int_{0}^{t_{{f}}}\dot{\vartheta}_{\pm}(t) dt \cr &=\mp\Theta_{{s}}\sin{\varphi}\left(\frac{t_{{f}}}{2}\right)\tan{{\varphi}\left(\frac{t_{{f}}}{2}\right)}. \end{align} Here, $\Delta t$ is a small increase in time, and we have assumed ${\varphi}$ to be continuous in time. Meanwhile, for the geometric phases, $\dot{\Theta}_{\pm}(t-t_{{f}}/2)$ are also odd functions, leading to \begin{align}\label{eqS17} \Theta_{\pm}=& \mp\int_{\frac{t_{{f}}}{2}}^{\frac{t_{{f}}}{2}+\Delta t}\Theta_{{s}}(1-\cos{\varphi})dt+\int_{0}^{t_{{f}}}\dot{\Theta}_{\pm}(t) dt \cr =&\mp \Theta_{{s}}\left[1-\cos{{{\varphi}\left(\frac{t_{{f}}}{2}\right)}}\right]. \end{align} Thus, we obtain $\vartheta_{\pm}=0$ and $\Theta_{\pm}=\mp2\Theta_{{s}}$ when ${\varphi}(t_{{f}}/2)=\pi$. \end{appendix} \end{document}
\begin{document} \title{Graham's pebbling conjecture on Cartesian product of the middle graphs of even cycles\thanks{Supported by ``the Fundamental Research Funds for the Central Universities" and the NSF of the People's Republic of China(Grant No. 61272008, No. 11271348 and No. 10871189).} } \author {Zheng-Jiang Xia,\;\; Yong-Liang Pan\footnote{Corresponding author: [email protected]},\quad Jun-Ming Xu,\quad Xi-Ming Cheng\\ \\ {\small School of Mathematical Sciences,}\\ {\small University of Science and Technology of China,} \\ {\small Hefei, Anhui, 230026, P. R. China}\\ {\small Email: [email protected]} \\ } \date{} \maketitle \date{} \noindent\textbf{Abstract}: A pebbling move on a graph $G$ consists of taking two pebbles off one vertex and placing one on an adjacent vertex. The pebbling number of a graph $G$, denoted by $f(G)$, is the least integer $n$ such that, however $n$ pebbles are located on the vertices of $G$, we can move one pebble to any vertex by a sequence of pebbling moves. Let $M(G)$ be the middle graph of $G$. For any connected graphs $G$ and $H$, Graham conjectured that $f(G\times H)\leq f(G)f(H)$. In this paper, we give the pebbling number of some graphs and prove that Graham's conjecture holds for the middle graphs of some even cycles.\\ \par \vskip 0.5pt {\bf Keywords:} Graham's conjecture, even cycles, middle graphs, pebbling number.\\ {\bf 2010 Mathematics Subject Classification:} 15A18, 05C50 \\ \section{Introduction} Pebbling in graphs was first introduced by Chung \cite{c89}. Consider a connected graph with a fixed number of pebbles distributed on its vertices. A pebbling move consists of the removal of two pebbles from a vertex and the placement of one pebble on an adjacent vertex. The pebbling number of a vertex $v$, the target vertex, in a graph $G$ is the smallest number $f(G,v)$ with the property that, from every placement of $f(G,v)$ pebbles on $G$, it is possible to move one pebble to $v$ by a sequence of pebbling moves. The pebbling number of a graph $G$, denoted by $f(G)$, is the maximum of $f(G,v)$ over all the vertices of $G$.\\ There are some known results regarding the pebbling number (see \cite{c89,yzz12,fk01,lqwm06,sf00}). If one pebble is placed on each vertex other than the vertex $v$, then no pebble can be moved to $v$. Also, if $u$ is at a distance $d$ from $v$, and $2^d-1$ pebbles are placed on $u$, then no pebble can be moved to $v$. So it is clear that $f (G) \geq \max \{|V(G)|, 2^D\}$, where $D$ is the diameter of graph $G$. Furthermore, we know that $f (K_n) = n$ and $f (P_n) = 2^n-1$ (see \cite{c89}), where $K_n$ is the complete graph and $P_n$ is the path, respectively on $n$ vertices.\\ The {\it middle graph} of a graph $G$, denoted by $M(G)$, is obtained from $G$ by inserting a new vertex into each edge of $G$, and joining the new vertices by an edge if the two edges they inserted share the same vertex of $G$.\\ \indent Given two disjoint graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$, the Cartesian product of them is denoted by $G_1\times G_2$. It has vertex set $V_1\times V_2=\{(u_i,v_j)| u_i\in V_1, v_j\in V_2\}$, where $(u_1,v_1)$ is adjacent to $(u_2,v_2)$ if and only if $u_1=u_2$ and $(v_1,v_2)\in E_2$, or $(u_1,u_2)\in E_1$ and $v_1=v_2$. One may view $G_1\times G_2$ as the graph obtained from $G_2$ by replacing each of its vertices with a copy of $G_1$, and each of its edges with $|V_1|$ edges joining corresponding vertices of $G_1$ in the two copies. 
Let $u\in G, v\in H$, then $u(H)$ and $v(G)$ are subgraphs of $G\times H$ with $V(u(H))=\{(u,v)|v\in V(H)\}$, $E(u(H))=\{(u,v)(u,v')|vv'\in E(H)\}$ and $V(v(G))=\{(u,v)|u\in V(G)\}$, $E(v(G))=\{(u,v)(u',v)|uu'\in E(G)\}$. It is clear that $u(H)\cong H$ and $v(G)\cong G$.\\ The following conjecture (see [2]), by Ronald Graham, suggests a constraint on the pebbling number of the product of two graphs.\\ \indent Conjecture (Graham). The pebbling number of $G\times H$ satisfies $f(G\times H)\leq f(G)f(H)$.\\ Ye {\it et al.} (see \cite{yzz}) proved that $f(M(C_{2n+1})\times M(C_{2m+1}))\leq f(M(C_{2n+1}))f(M(C_{2m+1}))$ and $f(M(C_{2n})\times M(C_{2m+1}))\leq f(M(C_{2n}))f(M(C_{2m+1})).$ In this paper, we will prove that $f(M(C_{2n})\times M(C_{2m}))\leq f(M(C_{2n}))f(M(C_{2m}))$ for $m,n\geq5$ and $|n-m|\geq2$.\\ Throughout this paper, $G$ will denote a simple connected graph with vertex set $V(G)$ and edge set $E(G)$. $P_n$ and $C_n$ will denote a path and a cycle with $n$ vertices, respectively. Given a distribution of pebbles on the vertices of $G$, define $p(K)$ to be the number of pebbles on a subgraph $K$ of G and $p(v)$ to be the number of pebbles on a vertex $v$ of $G$. Moreover, we let $\tilde{p}(K)$ and $\tilde{p}(v)$ denote the numbers of pebbles on $K$ and $v$ after some sequence of pebbling moves, respectively. \section{Main results} \begin{Def}{\rm(see \cite{sf00})} Let $P_{n}=v_1v_2\cdots v_{n}$ be a path. We say that $P_n$ has weight $\sum\limits_{i=1}^{n-1}2^{i-1}p(v_i)$ with respect to $v_n$ and this is written as $\omega_{P_n}(v_n)$. \end{Def} \begin{pps}{\rm{(see \cite{sf00})}}\label{pps2.1} Let $P_{n}=v_1v_2\cdots v_{n}$ be a path. If $\omega_{P_n}(v_n)\geq k2^{n-1}$, then at least $k$ pebbles can be moved from $P_n\backslash v_n$ to $v_n$. \end{pps} \begin{cor}\label{cor4.1} Let $P_{n}=v_1v_2\cdots v_{n}$ be a path. Let $\omega_{P_n}(v_k)=\sum\limits_{i=1}^{k-1}2^{i-1}p(v_i) +\sum\limits_{j=k+1}^{n}2^{n-j}p(v_j)$ for $2\leq k\leq n-1$. If $\omega_{P_n}(v_k)\geq t2^{k-1}+2^{n-k}-1$ for $\frac{n+1}{2}\leq k\leq n$, $\omega_{P_n}(v_k)\geq2^{k-1}+t2^{n-k}-1$ for $1\leq k<\frac{n+1}{2}$, then at least $t$ pebbles can be moved from $P_{n}\backslash v_k$ to $v_k$. \end{cor} \begin{pf} Without loss of generality, we assume that $\frac{n+1}{2}\leq k\leq n$. If $k=n$, it follows from Proposition~\ref{pps2.1}. If $\frac{n+1}{2}\leq k\leq n-1$, let $L_1=v_1v_2\cdots v_k$, $L_2=v_kv_{k+1}\cdots v_n$ be two subpaths of $P_n$. Suppose $\omega_{P_n}(v_k)\geq t2^{k-1}+2^{n-k}-1$, then either $\sum\limits_{i=1}^{k-1}2^{i-1}p(v_i)\geq t2^{k-1}$ or $\sum\limits_{j=k+1}^{n}2^{n-j}p(v_j)\geq2^{n-k}$ holds. Case $1$. $\sum\limits_{i=1}^{k-1}2^{i-1}p(v_i)\geq t2^{k-1}$, by Proposition~\ref{pps2.1}, we can move $t$ pebbles from $L_1\backslash v_k$ to $v_k$. Case $2$. $\sum\limits_{j=k+1}^{n}2^{n-j}p(v_j)\geq2^{n-k}$, we may assume that $\sum\limits_{j=k+1}^{n}2^{n-j}p(v_j)=s2^{n-k}+h$, where $s$ and $h$ are integers satisfying $s\geq1$ and $0\leq h< 2^{n-k}$. With $p(v_j)$ pebbles on $v_j$ $(k+1\leq j\leq n)$, we can move $s$ pebbles from $L_2\backslash v_k$ to $v_k$. Note that $2^{k-1}\geq2^{n-k}$ for $k\geq\frac{n+1}{2}$, we have \begin{align*} \sum\limits_{i=1}^{k-1}2^{i-1}p(v_i)=&\omega_{P_n}(v_k)-\sum\limits_{j=k+1}^{n}2^{n-j}p(v_j)\\ \geq&t2^{k-1}+2^{n-k}-1-(s2^{n-k}+h)\\ =&(t2^{k-1}-s2^{n-k})+(2^{n-k}-h)-1\\ \geq&(t-s)2^{k-1}. \end{align*} So we can move $t-s$ pebbles from $L_1\backslash v_k$ to $v_k$ with $p(v_i)$ pebbles on $v_i$ $(1\leq i\leq k-1)$. 
That is to say we can move $s+(t-s)=t$ pebbles to $v_k$. \end{pf} \begin{cor}\label{cor2.1} Let $P_n=v_1v_2\cdots v_n$ be a path. Then $f(M(P_n)-\{v_1,v_n\})=2^{n-2}+n-2$. \end{cor} \begin{figure} \caption{\small The graph $M(P_n)-\{v_1,v_n\} \label{fig1} \end{figure} \begin{pf} To get $M(P_n)$, we insert $u_i$ into the edge $v_iv_{i+1}$ and add the edge $u_iu_{i+1}$ for each $i\in\{1,2,\ldots ,n-2\}$. Let $U=u_1u_2\cdots u_{n-1}$ be a subpath of $M(P_n)-\{v_1,v_n\}$. It is clear that $f(M(P_n)-\{v_1,v_n\})\geq2^{n-2}+n-2$. If we place one pebble on each of vertices $v_2,\ldots ,v_{n-1}$, and place $2^{n-2}-1$ pebbles on $u_{n-1}$, then we can not move one pebble to $u_1$. So $f(M(P_n)-\{v_1,v_n\})\geq2^{n-2}+n-2$. Now, assume that $2^{n-2}+n-2$ pebbles are located at $V(M(P_n)-\{v_1,v_n\})$. First, we prove that one pebble can be moved to $u_k$ $(1\leq k\leq n-1)$. While $m\leq k$, we can move $\lfloor p(v_m)/2\rfloor$ pebbles from $v_m$ to $u_m$. While $m> k$, we can move $\lfloor p(v_m)/2\rfloor$ pebbles from $v_m$ to $u_{m-1}$. \begin{align*} \omega_U(u_k)\geq& 2^{n-2}+n-2-\sum\limits_{t=2}^{n-1}p(v_t)+2\sum\limits_{t=2}^{n-1}\lfloor p(v_t)/2\rfloor\\ \geq&2^{n-2}. \end{align*} It is clear that $2^{n-2}\geq 2^{k-1}+2^{n-k-1}-1$ for $1\leq k\leq n-1$. By Corollary~\ref{cor4.1}, we can move one pebble from $U\backslash u_k$ to $u_k$ $(1\leq k\leq n-1)$. Now we prove that one pebble can be moved to $v_k$ $(2\leq k\leq n-1)$. Without loss of generality, we assume that $k\geq\frac{n+1}{2}$. While $m< k$, we can move $\lfloor p(v_m)/2\rfloor$ pebbles from $v_m$ to $u_m$. While $m> k$, we can move $\lfloor p(v_m)/2\rfloor$ pebbles from $v_m$ to $u_{m-1}$. We will prove that after a sequence of pebbling moves above, two pebbles can be moved from $U$ to $u_{k-1}$, so that one pebble can be moved from $u_{k-1}$ to $v_k$. We consider the worst case, that is $p(u_{k-1})=0$. \begin{align*} \omega_U(u_{k-1})\geq&2^{n-2}+n-2-\sum\limits_{j=2\atop j\neq k}^{n-1}p(v_j)+2\sum\limits _{j=2\atop j\neq k}^{n-1}\lfloor p(v_j)/2\rfloor\\ \geq&2^{n-2}+1. \end{align*} It is clear that $2^{n-2}+1\geq 2\times2^{(k-1)-1}+2^{n-(k-1)-1}-1$ for $\frac{n-1}{2}\leq k-1\leq n-2$. By Corollary~\ref{cor4.1}, we can move two pebbles from $U\backslash u_{k-1}$ to $u_{k-1}$ $(\frac{n-1}{2}\leq k-1\leq n-2)$. So we can move one pebble to $v_k$ $(\frac{n+1}{2}\leq k\leq n-1)$, and we are done. \end{pf} \begin{Def}{\rm{(see \cite{sf00})}} The $t$-pebbling number of a graph $G$ is the smallest number $f_t(G)$ with the property that from every placement of $f_t(G)$ pebbles on $G$, it is possible to move $t$ pebbles to any vertex $v$ by a sequence of pebbling moves. \end{Def} \begin{lem}{\rm(see \cite{yzz})}\label{lem2.3} If $n\geq 2$, then $f(M(C_{2n}))=2^{n+1}+2n-2$. \end{lem} \begin{cor}\label{cor2.2} If $n\geq 2$, then $f_t(M(C_{2n}))\leq t2^{n+1}+2n-2$. \end{cor} \begin{pf} Let $C_{2n}=v_0v_1\cdots v_{2n-1}v_0$, $M(C_{2n})$ is obtained from $C_{2n}$ by inserting $u_i$ into $v_iv_{(i+1)mod(2n)}$, and connecting $u_iu_{(i+1)mod(2n)}$ $(0\leq i \leq2n-1)$. Without loss of generality, we may assume that our target vertex is $u_0$ or $v_0$. Case $1$. The target vertex is $u_0$. In this case, we use induction on $t$. The result is obvious for $t=1$ from Lemma~\ref{lem2.3}. Now suppose that $t2^{n+1}+2n-2$ pebbles are located at the vertices of $M(C_{2n})$. We consider the worst case, that is $p(u_0)=0$. Let $A=\{u_0,v_1,u_1,\ldots ,v_n,u_n\}$, $B=\{u_n,v_{n+1},\ldots ,v_{2n-1},u_{2n-1},v_0,u_0\}$ and $G= M(C_{2n})$. 
Then we have either $A$ or $B$ contains more than $2^n+n$ pebbles. Note that $G[A]\cong G[B]\cong M(P_{n+2})-\{v_1,v_{n+2}\}$, according to Corollary~\ref{cor2.1}, with $2^n+n$ pebbles on $A$ or $B$, one pebble can be moved to $u_0$. Note that $2^n+n\leq2^{n+1}$, the number of remaining pebbles is more than $(t-1)2^{n+1}+2n-2$. So we can move $t-1$ pebbles to $u_0$ with the remaining pebbles by induction, and we are done. Case $2$. The target vertex is $v_0$. Let $A'=\{u_0,v_1,\ldots ,v_{n-1},u_{n-1}\}$, $B'=\{u_{2n-1},v_{2n-1},\ldots ,v_{n+1},u_n\}$. Suppose that $t2^{n+1}+2n-2$ pebbles are located at the vertices of $M(C_{2n})$. We consider the worst case, that is $p(v_0)=0$. By proposition~\ref{pps2.1}, while $p(v_n)\geq t2^{n+1}$, $t$ pebbles can be moved to $v_0$. Now suppose that $t2^{n+1}-h$ pebbles are located at $v_n$, without loss of generality, we assume that $p(A')\geq p(B')$, that is $p(A')\geq n-1+\lceil h/2\rceil$. Let $L=v_0u_0u_1\cdots u_{n-1}v_n$ be a subpath of $G$ with length $n+1$ and $q=\sum\limits_{i=0}^{n-1}p(u_i)$. While $q\geq\lceil h/2\rceil$, $$\omega_L(v_0)=p(v_n)+\sum_{i=0}^{n-1}2^{n-i}p(u_i)\geq t2^{n+1}-h+2q\geq t2^{n+1}.$$ By Proposition~\ref{pps2.1}, $t$ pebbles can be moved from $L\backslash v_0$ to $v_0$. While $q<\lceil h/2\rceil$, then $\sum\limits_{j=1}^{n-1}p(v_j)\geq n-1+\lceil h/2\rceil-q$. So we can move at least $\left\lfloor\frac{1}{2}(\lceil\frac{h}{2}\rceil+1-q)\right\rfloor$ pebbles to the set $\{u_0,u_1,\ldots ,u_{n-2}\}$. Then we have $$\omega_L(v_0)=p(v_n)+\sum_{i=0}^{n-1}2^{n-i}\tilde{p}(u_i)\geq t2^{n+1}-h+2q+4\times\frac{1}{2}(\frac{h}{2}-q)\geq t2^{n+1}.$$ By Proposition~\ref{pps2.1}, $t$ pebbles can be moved from $L\backslash v_0$ to $v_0$. The result follows. \end{pf} \begin{thm}\label{thm2.2} If $m,n\geq5$ and $|n-m|\geq2$, then $$f(M(C_{2n})\times M(C_{2m}))\leq f(M(C_{2n}))f(M(C_{2m})).$$ \end{thm} \begin{pf} Without loss of generality, we assume that $n\geq m+2$ $(m\geq5)$. Let $V(M(C_{2n}))=\{u_1,u_2,\ldots ,u_{4n}\}$, $V(M(C_{2m}))=\{v_1,v_2,\ldots ,v_{4m}\}$. For simplicity, let $G= M(C_{2n})\times M(C_{2m})$. Now assume $(2^{n+1}+2n-2)(2^{m+1}+2m-2)$ pebbles have been placed arbitrarily at the vertices of $G$. We may assume our target vertex is $(u_i,v_j)$, then $(u_i,v_j)$ belongs to both $u_i(M(C_{2m}))$ and $v_j(M(C_{2n}))$. If $p(u_i(M(C_{2m})))\geq2^{m+1}+2m-2$ or $p(v_j(M(C_{2n})))\geq2^{n+1}+2n-2$, we can move one pebble to $(u_i,v_j)$ by lemma~\ref{lem2.3}. Suppose that $p(u_i(M(C_{2m})))\leq2^{m+1}+2m-3$ and $p(v_j(M(C_{2n})))\leq2^{n+1}+2n-3$. We will prove that if we move as many as possible pebbles from $u_l(M(C_{2m}))$ to $(u_l,v_j)$ which belongs to $v_j(M(C_{2n}))$ $(1\leq l\leq 4n)$, then one pebble can be moved from $v_j(M(C_{2n}))$ to $(u_i,v_j)$. We may assume that $$p_k=p(u_k(M(C_{2m})))\leq2^{m+1}+2m-3~(1\leq k\leq s)$$ and $$p_k=p(u_k(M(C_{2m})))\geq2^{m+1}+2m-2~(s+1\leq k\leq 4n).$$ Now we consider the worst case scenario (i.e. the most wasteful distribution of pebbles possible). Therefore we may assume that \begin{align*} p_k=\left\{ \begin{array}{ll} 2^{m+1}+2m-3 & 1\leq k\leq s,\\ t_k2^{m+1}+2m-2+(2^{m+1}-1) & s+1\leq k\leq 4n-1,\\ t_k2^{m+1}+2m-2+R & k=4n, \end{array} \right. \end{align*} where $0\leq R\leq2^{m+1}-1$ and $t_k$ is a positive integer. According to Corollary~\ref{cor2.2}, we can move at least $\sum\limits_{k=s+1}^{4n}t_k$ pebbles to $v_j(M(C_{2n}))$. 
Let \begin{align*} \Delta =&~(2^{n+1}+2n-2)(2^{m+1}+2m-2)-s(2^{m+1}+2m-3)\\ &~-(4n-s-1)(2^{m+1}-1)-(4n-s)(2m-2)\\ = &~(2^{n+1}-2n-2)(2^{m+1}+2m-2)+2^{m+1}+4n-1. \end{align*} Therefore, $$\frac{\Delta}{2^{m+1}}=2^{n+1}-2n-1+\frac{1}{2^{m+1}}\left[(2^{n+1}-2n-2)(2m-2)+4n-1\right].$$ Note that $\Delta=\left(\sum\limits_{k=s+1}^{4n}t_k\right)2^{m+1}+R$, so $\sum\limits_{k=s+1}^{4n}t_k>\frac{\Delta}{2^{m+1}}-1$. It follows that \begin{align*} p(v_j(M(C_{2n})))\geq~\sum_{k=s+1}^{4n}t_k >~2^{n+1}-2n-2+\frac{1}{2^{m+1}}\left[(2^{n+1}-2n-2)(2m-2)+4n-1\right]. \end{align*} To the end, we only need to prove that we can move one pebble from $v_j(M(C_{2n}))$ to $(u_i,v_j)$ with $2^{n+1}-2n-2+\frac{1}{2^{m+1}}\left[(2^{n+1}-2n-2)(2m-2)+4n-1\right]$ pebbles. So we only need to prove that $$2^{n+1}-2n-2+\frac{1}{2^{m+1}}\left[(2^{n+1}-2n-2)(2m-2)+4n-1\right]\geq2^{n+1}+2n-2,$$ that is \begin{equation}\label{eq1} 2^{m+1}<\frac{m-1}{n}(2^n-1)-m+2. \end{equation} For $n\geq m+2\geq7$, it is clear that the right side of (\ref{eq1}) is an increasing function of $n$. So we only need to prove (\ref{eq1}) under $n=m+2$. Substituting $n=m+2$ into (\ref{eq1}), we have $$2^{m+1}<\frac{m-1}{m+2}(2^{m+2}-1)-m+2,$$ that is \begin{equation}\label{eq2} (2m-8)2^m-m^2-m+5>0. \end{equation} The left side of (\ref{eq2}) is an increasing function of $m$ while $m\geq5$. When $m=5$, (\ref{eq2}) holds. This completes the proof. \end{pf} \section{Remark} In fact, by a similar processing as in the proof of Corollary~\ref{cor2.2}, for any $u\in M(C_{2n})$ but $u\not\in C_{2n}$, we can prove that \begin{cor} If $n\geq 2$, then $f_t(M(C_{2n}),u)\leq2^{n+1}+2n-2+(t-1)(2^n+n).$ \end{cor} Then we can prove the following theorem. \begin{thm} If $(u,v)\not\in C_{2n}\times C_{2m}$, where $C_{2n}\times C_{2m}$ is a subgraph of $M(C_{2n})\times M(C_{2m})$, then $$f(M(C_{2n})\times M(C_{2m}),(u,v))\leq f(M(C_{2n}))f(M(C_{2m})).$$ \end{thm} \begin{pf} If $(u,v)\not\in C_{2n}\times C_{2m}$, then we can get $u(M(C_{2m}))\nsubseteq C_{2n}\times M(C_{2m})$ or $v(M(C_{2n}))\nsubseteq M(C_{2n})\times C_{2m}$. Without loss of generality, we assume that $u(M(C_{2m}))\nsubseteq C_{2n}\times M(C_{2m})$. Let $V(M(C_{2m}))=\{v_1,v_2,\ldots,v_{4m}\}$. If we move as many as possible pebbles from $v_j(M(C_{2n}))$ to $(u,v_j)$ which belongs to $u(M(C_{2m}))$ $(1\leq j\leq 4m)$, by a similar processing as in the proof of Theorem~\ref{thm2.2}, we can prove that the number of pebbles on $u(M(C_{2m}))$ is more than $2^{m+1}+2m-2$, so one pebble can be moved from $u(M(C_{2m}))$ to $(u,v)$ with these pebbles. \end{pf} In this paper, we have shown that while $m,n\geq5$ and $|m-n|\geq2$, $f(M(C_{2n})\times M(C_{2m}))\leq f(M(C_{2n}))f(M(C_{2m}))$. The remaining question is open. \begin{pro} $f(M(C_{2n})\times M(C_{2m}))\leq f(M(C_{2n}))f(M(C_{2m}))$, for $m=n$ or $m=n-1$. \end{pro} \end{document}
\betaegin{document} \thetaitle[Construction of Nikulin configurations]{Construction of Nikulin configurations on some Kummer surfaces and applications} \alphaddtolength{\thetaextwidth}{0mm} \alphaddtolength{\hoffset}{-0mm} \alphaddtolength{\thetaextheight}{0mm} \alphaddtolength{\voffset}{-0mm} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Alb}{{\rm Alb}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Jac}{{\rm Jac}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Disc}{{\rm Disc}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Tr}{{\rm Tr}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm NS}{{\rm NS}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm PicVar}{{\rm PicVar}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Pic}{{\rm Pic}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Br}{{\rm Br}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Pr}{{\rm Pr}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Km}{{\rm Km}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm rk}{{\rm rk}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Hom}{{\rm Hom}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm End}{{\rm End}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Aut}{{\rm Aut}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm NS}{{\rm NS}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm S}{{\rm S}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm PSL}{{\rm PSL}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{C}{\muathbb{C}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{B}{\muathbb{B}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{P}{\muathbb{P}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{Q}{\muathbb{Q}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{R}{\muathbb{R}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{F}{\muathbb{F}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{D}{\muathbb{D}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{N}{\muathbb{N}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{Z}{\muathbb{Z}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathbb{H}{\muathbb{H}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef{\rm Gal}{{\rm Gal}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathcal{O}{\muathcal{O}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathfrak{p}{\muathfrak{p}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathfrak{p}P{\muathfrak{P}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathfrak{q}{\muathfrak{q}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathcal{M}{\muathcal{M}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\muathfrak{a}{\muathfrak{a}} \gammalobal\langlembdaong\deltaef\alpha{\alphalpha} \gammalobal\langlembdaong\deltaef\beta{\betaeta} \gammalobal\langlembdaong\deltaef\delta{\deltaelta} \gammalobal\langlembdaong\deltaef\Delta{\Deltaelta} \gammalobal\langlembdaong\deltaef\Lambda{\Lambdaambda} \gammalobal\langlembdaong\deltaef\gamma{\gammaamma} \gammalobal\langlembdaong\deltaef\Gamma{\Gammaamma} \gammalobal\langlembdaong\deltaef\delta{\deltaelta} \gammalobal\langlembdaong\deltaef\Delta{\Deltaelta} 
\gammalobal\langlembdaong\deltaef\varepsilon{\varepsilon} \gammalobal\langlembdaong\deltaef\kappa{\kappaappa} \gammalobal\langlembdaong\deltaef\langlembda{\langlembdaambda} \gammalobal\langlembdaong\deltaef\mu{\muu} \gammalobal\langlembdaong\deltaef\omega{\omegamega} \gammalobal\langlembdaong\deltaef\pi{\pii} \gammalobal\langlembdaong\deltaef\Pi{\Pii} \gammalobal\langlembdaong\deltaef\sigma{\sigmaigma} \gammalobal\langlembdaong\deltaef\Sigma{\Sigmaigma} \gammalobal\langlembdaong\deltaef\theta{\thetaheta} \gammalobal\langlembdaong\deltaef\Theta{\Thetaheta} \gammalobal\langlembdaong\deltaef\varphi{\varphi} \gammalobal\langlembdaong\deltaef\deltaeg{{\rm deg}} \gammalobal\langlembdaong\deltaef\deltaet{{\rm det}} \gammalobal\langlembdaong\deltaef\deltaps{{\deltaisplaystyle }} \gammalobal\langlembdaong\deltaef\Deltaem{D\alphacute{e}monstration: } \gammalobal\langlembdaong\deltaef\kappaer{{\rm Ker\,}} \gammalobal\langlembdaong\deltaef{\rm Im\,}{{\rm Im\,}} \gammalobal\langlembdaong\deltaef{\rm rg\,}{{\rm rg\,}} \gammalobal\langlembdaong\deltaef{\rm car}{{\rm car}} \gammalobal\langlembdaong\deltaef\varphiix{{\rm Fix( }} \gammalobal\langlembdaong\deltaef{\rm car}d{{\rm Card\ }} \gammalobal\langlembdaong\deltaef{\rm codim\,}{{\rm codim\,}} \gammalobal\langlembdaong\deltaef{\rm Coker\,}{{\rm Coker\,}} \gammalobal\langlembdaong\deltaef\muod{{\rm mod }} \gammalobal\langlembdaong\deltaef\pigcd{{\rm pgcd}} \gammalobal\langlembdaong\gammalobal\langlembdaong\deltaef\mathfrak{a}{\muathfrak{a}} \gammalobal\langlembdaong\deltaef\pipcm{{\rm ppcm}} \gammalobal\langlembdaong\deltaef\langlembdaa{\langlembdaangle} \gammalobal\langlembdaong\deltaef\rangle{\ranglengle} \sigmaubjclass[2000]{Primary: 14J28 ; Secondary: 14J50, 14J29, 14J10} \kappaeywords{Kummer surfaces, Nikulin configurations, Hyperelliptic curves on Abelian surfaces} {\rm Aut}hor{Xavier Roulleau, Alessandra Sarti} \betaegin{abstract} A Nikulin configuration is the data of $16$ disjoint smooth rational curves on a K3 surface. According to a well known result of Nikulin, if a K3 surface contains a Nikulin configuration $\muathcal{C}$, then $X$ is a Kummer surface $X={\rm Km}(B)$ where $B$ is an Abelian surface determined by $\muathcal{C}$. Let $B$ be a generic Abelian surface having a polarization $M$ with $M^{2}=k(k+1)$ (for $k>0$ an integer) and let $X={\rm Km}(B)$ be the associated Kummer surface. To the natural Nikulin configuration $\muathcal{C}$ on $X={\rm Km}(B)$, we associate another Nikulin configuration $\muathcal{C}'$; we denote by $B'$ the Abelian surface associated to $\muathcal{C}'$, so that we have also $X={\rm Km}(B')$. For $k\gammaeq2$ we prove that $B$ and $B'$ are not isomorphic. We then construct an infinite order automorphism of the Kummer surface $X$ that occurs naturally from our situation. Associated to the two Nikulin configurations $\muathcal{C},$ $\muathcal{C}'$, there exists a natural bi-double cover $S\thetao X$, which is a surface of general type. We study this surface which is a Lagrangian surface in the sense of Bogomolov-Tschinkel, and for $k=2$ is a Schoen surface. 
\varepsilonnd{abstract} \muaketitle \sigmaection{Introduction} To a set $\muathcal{C}$ of $16$ disjoint smooth rational curves $A_{1},\deltaots,A_{16}$ on a K3 surface $X$, Nikulin proved that one can associate a double cover $\thetailde{B}\thetao X$ branched over the curve $\sigmaum A_{i}$, such that the minimal model $B$ of $\thetailde{B}$ is an Abelian surface and the $16$ exceptional divisors of $\thetailde{B}\thetao B$ are the curves above $A_{1},\deltaots,A_{16}$. The K3 surface $X$ is thus a Kummer surface. We call a set of $16$ disjoint $(-2)$-curves on a K3 surface a \thetaextit{Nikulin configuration}. Let us recall a classical construction of Nikulin configurations. The Kummer surface $X={\rm Km}(B)$ of a Jacobian surface $B$ can be embedded birationally onto a quartic $Y$ of $\muathbb{P}^{3}$ with $16$ nodes. Projecting from one node one gets another projective model for $X$, this is a double cover $Y'\thetao \muathbb P ^2$ of the plane branched over $6$ lines tangent to a conic. The strict transform (in $X$) of that conic is the union of two $(-2)$-curves $A_{1},A_{1}'$, with $A_{1}A_{1}'=6$. One of these two curves, $A_{1}$ say, corresponds to the node from which we project. Above the $15$ intersection points of the $6$ lines there are $15$ disjoint $(-2)$-curves $A_{2},\deltaots,A_{16}$ on $X$, which corresponds to the $15$ other nodes of the quartic $Y$.\\ The divisors $\muathcal{C}=\sigmaum_{i=1}^{16}A_{i},\,\muathcal{C}'=A_{1}'+\sigmaum_{i=2}^{16}A_{i}$ are two Nikulin configurations. The Abelian surface $B$ is then the Jacobian of the double cover of $A_{1}$ branched over $A_{1}\cap A_{1}'$. Let now $k>0$ be an integer and let $(B,M)$ be a polarized Abelian surface with $M^{2}=k(k+1)$, such that $B$ is generic, i.e. ${\rm NS}(B)=\muathbb{Z} M$. Let $X={\rm Km}(B)$ be the associated Kummer surface, let $L\in{\rm NS}(X)$ be the class corresponding to $M$ (so that $L^{2}=2M^{2}$), and let $\muathcal{C}=A_{1}+\deltaots+A_{16}$ be the natural Nikulin configuration on ${\rm Km}(B)$ (the class $L$ is orthogonal to the $A_{i}$'s). We obtain the following results, which for $k=1$ are the results we recalled for Jacobian Kummer surfaces: \betaegin{thm} \langlembdaabel{thm:main1}Let be $t\in\{1,\deltaots,16\}$. There exists a $(-2)$-curve $A_{t}'$ on ${\rm Km}(B)$ such that $A_{t}A_{t}'=4k+2$ and $\muathcal{C}_{t}=A_{t}'+\sigmaum_{j\neq t}A_{j}$ is another Nikulin configuration. \\ The numerical class of $A_{t}'$ is $2L-(2k+1)A_{t}$; the class \[ L_{t}'=(2k+1)L-2k(k+1)A_{t} \] generates the orthogonal complement of the $16$ curves $A'_{t}$ and $\{A_{j}\,|\,j\neq t\}$; moreover $L_{t}'^{2}=L^{2}$. \varepsilonnd{thm} A \thetaextit{Kummer structure} on a Kummer surface $X$ is an isomorphism class of Abelian surfaces $B$ such that $X\sigmaimeq{\rm Km}(B)$. It is known that Kummer structures on $X$ are in one-to-one correspondence with the orbits of Nikulin configurations by the action of the automorphism group of $X$ (see Proposition \ref{prop:The-Kummer-structures}). In \cite[Question 5]{Sh1}, Shioda raised the question whether if there could be more than one Kummer structure on a Kummer surface. In \cite{GH}, Gritsenko and Hulek noticed that ${\rm Km}(B)\sigmaimeq{\rm Km}(B^{*})$, where $B^{*}$ is the dual of $B$, a $(1,t)$-polarized Abelian surface (thus $B\not\sigmaimeq B^{*}$ if $t>1$). 
In \cite{HLOY} Hosono, Lian, Oguiso and Yau proved that the number of Kummer structures is always finite and they construct for any $N\in\muathbb{N}^{*}$ a Kummer surface of Picard number $18$ with at least $N$ Kummer structures. When the Picard number is $17$ (which is the case of our paper), by results of Orlov \cite{Orlov} on derived categories, the number of Kummer structures on $X$ equals $2^s$ where $s$ is the number of prime divisors of $\varphirac{1}{2}M^{2}$. In Section \ref{subsec:Nikulin-structures-and Fermat}, we obtain the following result \betaegin{thm} \langlembdaabel{thm:Main 3}Suppose $k\gammaeq2$. There is no automorphism of $X$ sending the Nikulin configuration $\muathcal{C}=\sigmaum_{j=1}^{16}A_{j}$ to the configuration $\muathcal{C}_{t}=A_{t}'+\sigmaum_{j\neq t}A_{j}$. \varepsilonnd{thm} Therefore the two configurations $\muathcal{C},\,\muathcal{C}_{t}$ belong in two distinct orbits of Nikulin configurations under the action of ${\rm Aut}(X)$. As far as we know, Theorem \ref{thm:Main 3} gives the first explicit construction of two distinct Kummer structures on a Kummer surface: the constructions in \cite{HLOY} and \cite{GH} use lattice theory and do not give a geometric description of the Nikulin configurations. We already recalled that when $X$ is a Jacobian Kummer surface, there exists a non-symplectic involution $\iota$ on $X$ such that the double cover $\pii:X\thetao\muathbb{P}^{2}$ is the quotient of $X$ by $\iota$ (after contraction of the $16$ $(-2)$-curves). That involution exchanges the $(-2)$-curves $A_{1}$ and $A_{1}'$ and fixes the $15$ other curves $\{A_{j}\,|\,j\neq1\}$. For $X$ a K3 surface with a polarization $L$ such that $L^{2}=2k(k+1)$ and $t\in\{1,\deltaots,16\}$, let $\thetaheta_{t}$ be the involution of ${\rm NS}(X)\omegatimes\muathbb{Q}$ defined by $L\thetao L_{t}'$, $A_{t}\thetao A_{t}'$ (as defined in Theorem \ref{thm:main1}), and $\theta_{t}(A_{j})=A_{j}$ for $j\neq t$. When $k=1$, $\thetaheta_{1}$ is in fact the action of the involution $\iota$ on ${\rm NS}(X)$ : $\iota^{*}=\theta_{1}$. We do not have such an interpretation when $k>1$ (this is in fact the content of Theorem \ref{thm:Main 3}), but we obtain the following result on the product $\theta_{i}\theta_{j}$: \betaegin{thm} \langlembdaabel{thm:Main-2}For $1\langlembdaeq i\neq j\langlembdaeq16$ there exists an infinite order automorphism $\muu_{ij}$ of $X$ such that the action of $\muu_{ij}$ on ${\rm NS}(X)$ is $\muu_{ij}^{*}=\theta_{i}\theta_{j}$ . \varepsilonnd{thm} The classification of the automorphism group of a generic Jacobian Kummer surface has been has been completed by Keum \cite{Keum} (who constructed the last unknown automorphisms) and by Kondo \cite{Kondo} (who proved that there was indeed no more automorphisms). We are far from such a knowledge for non Jacobian Kummer surfaces, thus it is interesting to have a construction of such automorphisms $\muu_{ij}$. Let $A$ be an Abelian variety. In \cite{NarNori}, Narasimhan and Nori prove that the orbits by ${\rm Aut}(A)$ of the principal polarisations in the N\'eron-Severi group ${\rm NS}(A)$ are finite. Similarly, one could think to prove that the number of Kummer structures on a K3 is finite by associating to each Nikulin configuration $\muathcal{C}$ the pseudo-ample divisor $L_{\muathcal{C}}$ orthogonal to $\muathcal{C}$ and by proving that the number of orbits of such $L_{\muathcal{C}}$ under the action of ${\rm Aut}(X)$ in ${\rm NS}(X)$ is finite. 
Our approach is closer to that idea than to the solutions previous proposed e.g. in \cite{HLOY} or \cite{GH}, and it gives us more informations on ${\rm Aut}(X)$.\\ Observe that one can repeat the construction in Theorem \ref{thm:main1}, starting with configuration $\muathcal{C}_{i}$ instead of $\muathcal{C}$, but Theorem \ref{thm:Main-2} tells us that the Nikulin configurations so obtained will be in the orbit of the Nikulin configuration $\muathcal{C}$ under the automorphism group $X$, thus we do not obtain new Nikulin structures in that way (observe also that $\muathcal{C}_{t}$ and $\muathcal{C}_{t'}$ ($t\neq t'$) are in the same orbit). The paper is organized as follows: In Section \ref{sec1:Two-Nikulin-configurations} we construct the curve $A'_{i}$ such that $A_{i}A_{i}'=4k+2$ and we prove Theorem \ref{thm:main1}. This is done by geometric considerations on the properties of the divisor $L_{i}'$, which we prove is big and nef. In Section \ref{sec:Nikulin-configurations-and}, we construct the automorphisms mentioned in Theorem \ref{thm:Main-2}. This is done by using the Torelli Theorem for K3 surfaces. We then prove Theorem \ref{thm:Main 3}, which is obtained by considerations on the lattice $H^{2}(X,\muathbb{Z})$. In Section \ref{sec:bi-double-covers-associated}, we study the bi-double cover $Z\thetao X$ associated to the two Nikulin configurations $\muathcal{C}=\sigmaum_{i=1}^{16}A_{i},\,\muathcal{C}'=A_{1}'+\sigmaum_{i=2}^{16}A_{i}$. When $k=2$, $Y$ is a so-called Schoen surface, a fact that has been already observed in \cite{RRS}. Schoen surfaces carry many remarkable properties (see e.g. \cite{CMR, RRS}). For example the kernel of the natural map \[ \wedge^{2}H^{0}(Z,\Omega_{Z})\thetao H^{0}(Z,K_{Z}) \] is one dimensional, and is not of the form $w_{1}\wedge w_{2}$, i.e. by the Castelnuovo De Franchis Theorem, it does not come from a fibration of $Z$ onto a curve of genus $\gammaeq2$. Surfaces with this property are called Lagrangian. We will see that for the other $k>1$, the surfaces are also Lagrangian. \\ In Subsection \ref{subsec:An-hyperelliptic-curve}, we discuss the singularities of the curve $A_{i}+A_{i}'$. The transversality of the intersection of two rational curves on a K3 surface is an interesting but open problem in general (see e.g. \cite{Huyb}). We also study the curve $\Gammaamma_{i}$ on the Abelian surface $B$ coming from the pull-back of the curve $A'_{i}$. That curve $\Gammaamma_{i}$ is hyperelliptic and has a unique singularity, which is a point of multiplicity $4k+2$, and therefore $\Gammaamma_{i}$ has geometric genus $\langlembdaeq2g$. In the case of a Jacobian surface, $\Gammaamma_{i}$ has been used as the branch locus of covers of $B$ by Penegini \cite{Pene} and Polizzi \cite{Polizzi}, for creating new surfaces of general type. We end this paper by remarking that $\Gamma_{i}$ is a curve with the lowest known H-constant (see \cite{RoulleauIMRN} for definitions and motivations) on an Abelian surface. {\betaf Acknowledgements} The authors thank the anonymous referee for useful remarks improving the exposition of the paper. \sigmaection{Two Nikulin configurations on Kummer surfaces\langlembdaabel{sec1:Two-Nikulin-configurations}} \sigmaubsection{Two rational curves $A_{1},\,A_{1}'$ such that $A_{1}A_{1}'=2(2k+1)$} Let $k>0$ be an integer and let $B$ be an abelian surface with a polarization $M$ such that $M^{2}=k(k+1)$. We suppose that $B$ is generic so that $M$ generates the Néron-Severi group of $B$. 
Let $X={\rm Km}(B)$ be the associated Kummer surface and $A_{1},\deltaots,A_{16}$ be its $16$ disjoint $(-2)$-curves coming from the desingularization of $B/[-1]$. \\ By \cite[Proposition 3.2]{Morrison}, \cite[Proposition 2.6]{GS}, corresponding to the polarization $M$ on $B$, there is a polarization $L$ on ${\rm Km}(B)$ such that \[ L^{2}=2k(k+1) \] and $LA_{i}=0,\,i\in\{1,\deltaots,16\}$. The Néron-Severi group of $X={\rm Km}(B)$ satisfies: \[ \muathbb{Z} L\omegaplus K\sigmaubset{\rm NS}(X), \] where $K$ denotes the Kummer lattice (the saturated sub-lattice of $NS(X)$ containing the $16$ classes $A_{i}$). For $B$ generic among polarized Abelian surfaces ${\rm rk}({\rm NS}(X))=17$ and ${\rm NS}(X)$ is an overlattice of finite index of $\muathbb{Z} L\omegaplus K$ which is described precisely in \cite{GS}, in particular we will use the following result: \betaegin{lem} \langlembdaabel{lem:At most 4 beta}(\cite[Remarks 2.3 \& 2.10]{GS}) An element $\Gammaamma\in{\rm NS}(X)$ has the form $\Gammaamma=\alphalpha L-\sigmaum\betaeta_{i}A_{i}$ with $\alphalpha,\betaeta_{i}\in\varphirac{1}{2}\muathbb{Z}$. If $\alphalpha$ or $\betaeta_{i}$ for some $i$ is in $\varphirac{1}{2}\muathbb{Z}\sigmaetminus\muathbb{Z},$ then at least $4$ of the $\betaeta_{j}$'s are in $\varphirac{1}{2}\muathbb{Z}\sigmaetminus\muathbb{Z}$, if moreover $\alphalpha\in\muathbb{Z}$, at least $8$ of the $\betaeta_{j}$'s are in $\varphirac{1}{2}\muathbb{Z}\sigmaetminus\muathbb{Z}$. \varepsilonnd{lem} The divisor \[ A_{1}'=2L-(2k+1)A_{1} \] is a $(-2)$-class, indeed: \[ (2L-(2k+1)A_{1})^{2}=8k(k+1)-2(2k+1)^{2}=-2, \] and one has $A_{1}'A_{i}=0$ for $i=2,\cdots,16$. By the Riemann-Roch Theorem and since $LA_1 '>0$, the class $A_1 '$ is represented by an effective divisor. Let us prove the following result \betaegin{thm} \langlembdaabel{thm:The-class-is -2}The class $A_{1}'$ can be represented by a $(-2)$-curve and $A_{1}A'_{1}=2(2k+1)$. The set of $(-2)$-curves \[ A_{1}',A_{2},\deltaots, A_{16} \] is another Nikulin configuration on $X$. \varepsilonnd{thm} In order to prove Theorem \ref{thm:The-class-is -2}, let us define \[ L'=(2k+1)L-2k(k+1)A_{1}. \] One has $L'A_{1}'=0$ and \[ L'^{2}=(2k+1)^{2}2k(k+1)-8k^{2}(k+1)^{2}=2k(k+1)=L^{2}. \] First let us prove: \betaegin{prop} \langlembdaabel{prop:Suppose-.-Thena)}One has:\\ a) The divisor $L'$ is nef and big. Moreover a $(-2)$-class $\Gammaamma$ satisfies $\Gammaamma L'=0$ if and only if $\Gammaamma=A_1 '$ or $\Gammaamma=A_j$ for $j$ in $\{2,..., 16\}$.\\ b) The linear system $|L'|$ has no base components.\\ c) The linear system $|L'|$ defines a morphism from $X={\rm Km}(B)$ to $\muathbb{P}^{k^{2}+k+1}$ which is birational onto its image and contracts the divisor $A_{1}'$ and the $15$ $(-2)$-curves $A_{i},\,i\gammaeq2$. \varepsilonnd{prop} \betaegin{proof} \thetaextbf{Proof of a).} We already know that $\varepsilonnsuremath{L'^{2}=2k(k+1)>0}$. By the Riemann-Roch Theorem either $L'$ or $-L'$ is effective. Since $LL'>0$, we see that $L'$ is effective. On a K3 surface, the $(-2)$-curves are the only irreducible curves with negative self-intersection, thus $L'$ is nef if and only if $L'\Gammaamma\gammaeq0$ for each irreducible $(-2)$-curve $\Gammaamma$. Let \[ \Gammaamma=\alphalpha L-\sigmaum_{i=1}^{16}\betaeta_{i}A_{i},\qquad\alphalpha,\betaeta_{i}\in\varphirac{1}{2}\muathbb{Z} \] be the class of $\Gammaamma$ in ${\rm NS}(X)$. Since $\Gammaamma$ represents an irreducible curve we have $\alphalpha\gammaeq0$. 
Moreover if $\Gammaamma=A_{i}$ then the condition $L'\Gammaamma\gammaeq0$ is trivially verified so that we can assume $\Gammaamma A_{i}\gammaeq0$, which gives $\betaeta_{i}\gammaeq0$. From the condition $\Gammaamma^{2}=-2$, we get \betaegin{equation} k(k+1)\alphalpha^{2}-\sigmaum_{i}\betaeta_{i}^{2}=-1\langlembdaabel{eq:carre} \varepsilonnd{equation} Assume that the $(-2)$-curve $\Gammaamma$ satisfies $L'\Gammaamma<0$. We have \[ 0>L'\Gammaamma=\langlembdaeft((2k+1)L-2k(k+1)A_{1}\right)\Gammaamma=2\alphalpha k(k+1)(2k+1)-4k(k+1)\betaeta_{1}, \] thus \[ \betaeta_{1}>\varphirac{(2k+1)}{2}\alphalpha. \langlembdaabel{eq:BETA} \] Combining with equation \varepsilonqref{eq:carre} we get \[ -1=k(k+1)\alphalpha^{2}-\sigmaum_{i}\betaeta_{i}^{2}<-\varphirac{1}{4}\alphalpha^{2}-\sigmaum_{i=2}^{15}\betaeta_{i}^{2}. \] which is \betaegin{equation} \varphirac{1}{4}\alphalpha^{2}+\sigmaum_{i=2}^{15}\betaeta_{i}^{2}<1\langlembdaabel{eq:somme limite} \varepsilonnd{equation} thus $\alphalpha\in\{0,1/2,1,3/2\}$. \\ If $\alphalpha=0$, by \varepsilonqref{eq:carre} either exactly one of the $\betaeta_{i}=1$ (but this is not possible since it would give $\Gammaamma=-A_{i}$) or exactly $4$ of the $\betaeta_{i}'s$ are equal to $\varphirac{1}{2}$ and the others are $0$ but such a class is not contained in ${\rm NS}(X)$ by Lemma \ref{lem:At most 4 beta}.\\ If $\alphalpha=\varphirac{1}{2}$, then from inequality \varepsilonqref{eq:somme limite}, $\betaeta_{i}\in\{0,\varphirac{1}{2}\}$ for $i\gammaeq2$ and at most $3$ of these $\betaeta_{i}$'s equal $\varphirac{1}{2}$. By Lemma \ref{lem:At most 4 beta} at least $4$ of the $\betaeta_{i}$ are in $\varphirac{1}{2}\muathbb{Z}\sigmaetminus\muathbb{Z}$, thus $3$ of the $\betaeta_{i},\,i\gammaeq2$ equals $\varphirac{1}{2}$ and the others are $0$. Then from equation \varepsilonqref{eq:carre}, we get: \[ \betaeta_{1}^{2}=\varphirac{k^{2}+k+1}{4}. \] Suppose that there exists $n\in \muathbb{N}$ such that $k^{2}+k+1=n^2$. Then $n> k$, but since $n^2\gammaeq (k+1)^2>k^{2}+k+1$, we get a contradiction. Hence $\varphiorall k\in\muathbb{N}^{*}$, the integer $k^{2}+k+1$ is never a square and therefore the case $\alphalpha=\varphirac{1}{2}$ is impossible. \\ If $\alphalpha=1$, at most $2$ of the $\betaeta_{i}$'s with $i>1$ are equal $\varphirac{1}{2}$ and the others are $0$, by applying Lemma \ref{lem:At most 4 beta} we get $\betaeta_{i}=0$ for $i>1$ and $\betaeta_{1}\in\muathbb{N}$. Then equation \varepsilonqref{eq:carre} implies \[ \betaeta_{1}^{2}=k^{2}+k+1, \] which we know has no integral solutions for $k>0$.\\ If $\alphalpha=\varphirac{3}{2}$, at most $1$ of the $\betaeta_{i}$'s with $i>1$ is $\varphirac{1}{2}$, this is also impossible by Lemma \ref{lem:At most 4 beta}, therefore such $\Gammaamma$ does not exist and this concludes the proof that $L'$ is big and nef for all $k\gammaeq1$.\\ Assume that the $(-2)$-curve $\Gammaamma$ satisfies $L'\Gammaamma=0$ and is not $A_j$ for $j\gammaeq 2$. Then one has $\betaeta_{1}=\varphirac{(2k+1)}{2}\alphalpha $, and one computes that either $\alpha =2$, $\beta_1=2k+1$ and $\Gammaamma=A_1 '$, or $\alpha=1, \beta_1=\varphirac{(2k+1)}{2}\alphalpha$ and (up to re-ordering) $\beta_2=b_3=b_4=1/2$. Since $\alpha$ is an integer the second case is impossible by Lemma \ref{lem:At most 4 beta}. \thetaextbf{Proof of b).} By \cite[Section 3.8]{reid} either $|L'|$ has no fixed part or $L'=aE+\Gammaamma$, where $|E|$ is a free pencil, and $\Gammaamma$ a $(-2)$-curve with $E\Gammaamma=1$. 
In that case, write $\Gammaamma=\alphalpha L-\sigmaum\betaeta_{i}A_{i}$. Then \[ 2k(k+1)=L'^{2}=2a-2 \] gives $a=k^{2}+k+1$. In particular, $a$ is odd. But \[ a-2=L'\Gammaamma=2k(k+1)(2k+1)\alphalpha-4k(k+1)\betaeta_{1} \] and since $\alphalpha,\betaeta_{1}\in\varphirac{1}{2}\muathbb{Z},$ one gets that $a$ is even, which yields a contradiction. Therefore $|L'|$ has no base components. By \cite[Corollary 3.2]{SD}, it then has no base points. \thetaextbf{Proof of c).} The linear system $|L'|$ is big and nef without base points. We have to show that the resulting morphism has degree one, i.e. that $|L'|$ is not hyperelliptic (see \cite[Section 4]{SD}). By loc. cit., $|L'|$ is hyperelliptic if there exists a genus $2$ curve $C$ such that $L'=2C$ or there exists an elliptic curve $E$ such that $L'E=2$.\\ In the first case $L'^{2}=8$, but since $L'^{2}=2k(k+1)$, that cannot happen. Assume now \[ E=\alphalpha L-\sigmaum\betaeta_{i}A_{i}, \] for $E$ with $EL'=2$, we get \[ 2=\langlembdaeft(\alphalpha L-\sigmaum\betaeta_{i}A_{i}\right) \langlembdaeft((2k+1)L-2k(k+1)A_{1}\right)=k(k+1)\langlembdaeft(2(2k+1)\alphalpha-2\betaeta_{1}\right). \] Since $\alphalpha,\betaeta_{1}\in\varphirac{1}{2}\muathbb{Z}$, $2(2k+1)\alphalpha-2\betaeta_{1}$ is an integer, thus we get $k=1$ and $6\alphalpha-2\betaeta_{1}=1$. Since $E^{2}=0$, one obtain \[ 2\alphalpha^{2}=\sigmaum\betaeta_{i}^{2}, \] using $\betaeta_{1}=3\alphalpha-\varphirac{1}{2}$, one reaches a contradiction. \\ Therefore $|L'|$ defines a birational map $X\thetao\muathbb{P}^{N}$ onto its image, contracting the $(-2)$-curves $\Gammaamma$ such that $L'\Gammaamma=0$, moreover $N=h^{0}(L')-1=\varphirac{L'^{2}}{2}+1=k^{2}+k+1$. \varepsilonnd{proof} We can now prove Theorem \ref{thm:The-class-is -2}: \betaegin{proof} We proved that the only $(-2)$-classes that are contracted by $L'$ are $A_1'$, $ A_2,$ $\deltaots,A_{16}$. We know moreover that $A_1'A_j=A_i A_j=0$ for $2 \langlembdaeq i \neq j\langlembdaeq 16$. Since one has $L'A_{1}'=0$ the base point free linear system $|L'|$ contracts the connected components of $A_{1}'$ to some points. Therefore by the Grauert contraction Theorem (see \cite[Chapter III, Theorem 2.1]{BPVdV}), the support of $A_1'$ is the union of irreducible curves $(C_i)_{i\in \{1,\deltaots,m\}}$ (for $m\in \muathbb{N},\,m\neq 0$) such that the intersection matrix $(C_i C_j)$ is negative definite. \\ Since $X$ is a K3 surface, the curves $C_i$ are $(-2)$-curves. Since $L'$ only contracts the (-2)-classes $A_1'$, $ A_2,$ $\deltaots,A_{16}$ that are disjoint, we get that $m=1$ and we conclude that $A_{1}'$ is the class of a $(-2)$-curve $C_1$. \varepsilonnd{proof} \sigmaubsection{A projective model of the surface $\pirotect{\rm Km}(B)$} Let us describe a natural map from ${\rm Km}(B)$ to $\muathbb{P}^{k+1}$, which is birational for $k>1$: \betaegin{thm} \langlembdaabel{thm7} The class $D=L-kA_{1}$ is big and nef with \[ (L-kA_{1})^{2}=2k \] and for $k\gammaeq2$ it defines a birational map \[ \pihi:{\rm Km}(B)\thetao\muathbb{P}^{k+1} \] onto its image $X$ such that $X$ (of degree $2k$) has $15$ ordinary double points and moreover the curves $A_{1}'$ and $A_{1}$ are sent to two rational curves of degree $2k$ such that $A_{1}A_{1}'=2(2k+1)$. \varepsilonnd{thm} \betaegin{rem} We have \[ A_{1}'+A_{1}=2(L-kA_{1}) \] so that $A_{1}'+A_{1}$ is cut out by a quadric of $\muathbb{P}^{k+1}$ and is $2$-divisible. 
\\ \varepsilonnd{rem} \betaegin{proof} We proceed as in Proposition \ref{prop:Suppose-.-Thena)}.\\ \thetaextbf{Let us show that $D$ is nef and big.} We have to prove that $D\Gammaamma\gammaeq0$ for each irreducible $(-2)$-curve $\Gammaamma$. As above, let \[ \Gammaamma=\alphalpha L-\sigmaum\betaeta_{i}A_{i},\qquad\alphalpha,\betaeta_{i}\in\varphirac{1}{2}\muathbb{Z}, \] be such that $\Gammaamma D<0$. Then \[ \Gammaamma D=2\alphalpha k(k+1)-2k\betaeta_{1}<0, \] implies $\betaeta_{1}>(k+1)\alphalpha.$\\ Combining with the equation \varepsilonqref{eq:carre}, we get \[ 1>(k+1)\alphalpha^{2}+\sigmaum_{i\gammaeq2}\betaeta_{i}^{2}, \] thus $\alphalpha<1$. As in Proposition \ref{prop:Suppose-.-Thena)}, the case $\alphalpha=0$ is impossible. If $\alphalpha=\varphirac{1}{2}$, then $k\in\{1,2\}$, but as above, Lemma \ref{lem:At most 4 beta} implies that this is not possible. Thus $D$ is nef and big.\\ Let us now suppose $k>1$. \thetaextbf{Let us show that $|D|$ has no base components}. Suppose that there is a base component. Then $D=aE+\Gammaamma$, where $a\in\muathbb{N}$, $|E|$ is a free pencil, $\Gammaamma$ is a $(-2)$-curve and $E\Gammaamma=1$. One has \[ 2k=D^{2}=2a-2, \] thus $a=k+1$, so that \[ L-kA_{1}=(k+1)E+\Gammaamma. \] Suppose that $\Gammaamma=A_{1}$, then $2k=A_{1}D=k-1$ and $k=-1$, which is impossible. If $\Gammaamma=A_{i},$ $i\gammaeq2$, then $0=DA_{i}=k-1,$ thus $k=1$, but we assumed that $k>1$.\\ Thus we can assume that $\Gammaamma$ is not one of the $A_{i}$ and write $\Gammaamma=\alphalpha L-\sigmaum\betaeta_{i}A_{i}$ with $\alphalpha,\betaeta_{i}\gammaeq0$. One has \betaegin{equation} 2k=DA_{1}=(k+1)EA_{1}+2\betaeta_{1},\langlembdaabel{eq:2k} \varepsilonnd{equation} moreover \betaegin{equation} 2k(k+1)=(L-kA_{1})L=(k+1)EL+2k(k+1)\alphalpha.\langlembdaabel{eq:2k(k+1)} \varepsilonnd{equation} Since $EA_{1}\gammaeq0$ we obtain from equation \varepsilonqref{eq:2k} that either $\betaeta_{1}=k$ (and $EA_{1}=0$) or $\betaeta_{1}=\varphirac{k-1}{2}$ and $EA_{1}=1$, in that second case since \[ E(L-kA_{1})=E((k+1)E+\Gammaamma)=1 \] one obtains $EL=k+1.$\\ Since $EL\gammaeq0$, we obtain from equation \varepsilonqref{eq:2k(k+1)} that $\alphalpha\in\{0,\varphirac{1}{2},1\}$, but as in Proposition \ref{prop:Suppose-.-Thena)}, $\alphalpha=0$ is not possible. Moreover if $\alphalpha=1$, $EL=0$, but this contradicts the Hodge Index Theorem since $E^{2}=0$ and $L^{2}>0$, therefore $\alphalpha=\varphirac{\varepsilonnsuremath{1}}{2}$. If $\betaeta_{1}=k$, from $\Gammaamma^{2}=-2$, one gets \[ \varphirac{k(k+1)}{4}-k^{2}-\sigmaum_{i\gammaeq2}\betaeta_{i}^{2}=-1 \] which is \[ \sigmaum_{i\gammaeq2}\betaeta_{i}^{2}=\varphirac{1}{4}(-3k^{2}+k+4). \] But for $k>1$, $-3k^{2}+k+4<0$ and we obtain a contradiction. If now $\betaeta_{1}=\varphirac{k-1}{2}$, then $EL=k+1$, but equation \varepsilonqref{eq:2k(k+1)} gives $EL=k$, contradiction. Therefore $|D|$ has no base component.\\ \thetaextbf{Let us show that $|D|$ defines a birational map.} We have to show that $|D|$ is not hyperelliptic. Suppose that $D=2C$ where $C$ is a genus $2$ curve. Then $D^{2}=8$; since $D^{2}=2k$, we get $k=4$. One has $D=L-4A_{1}$ and the class of $C$ is $\varphirac{1}{2}L-2A_{1}$. Then $\varphirac{1}{2}L\in{\rm NS}(X)$, which contradicts the fact that $L$ generates the orthogonal complement of ${\rm NS}({\rm Km}(B)),$ and so $L$ is primitive. Suppose now that there exists an elliptic curve $E$ such that $DE=2$. Let \[ E=\alphalpha L-\sigmaum\betaeta_{i}A_{i}, \] with $\alphalpha\in\varphirac{1}{2}\muathbb{Z}$. 
Since $D=L-kA_{1},$ one has \[ DE=2k(k+1)\alphalpha-2k\betaeta_{1}, \] therefore $k(k+1)\alphalpha-k\betaeta_{1}=1$. If $\alphalpha\in\muathbb{Z},$ then if $\betaeta_{1}\in\muathbb{Z}$, one gets $k=1$, if $\betaeta_{1}=\varphirac{b}{2}$ with $b$ odd, then \[ k(2(k+1)\alphalpha-b)=2 \] and $k=2$ (we supposed $k>1$), $6\alphalpha-b=2,$ which is impossible since $b$ is odd. If $\alphalpha=\varphirac{a}{2}$ with $a\in\muathbb{Z}$ odd , then $k((k+1)a-2\betaeta_{1})=2$. Then since $2\betaeta_{1}\in\muathbb{Z}$ and $k>1$, one has $k=2$ and $3a-2\betaeta_{1}=1$, thus $\betaeta_{1}=\varphirac{3a-1}{2}=3\alphalpha-\varphirac{1}{2}\in\muathbb{Z}$. We have moreover (since $k=2$): \[ 0=E^{2}=6\alphalpha^{2}-\sigmaum\betaeta_{i}^{2} \] thus \[ 9\alphalpha^{2}-3\alphalpha+\varphirac{1}{4}+\sigmaum_{i\gammaeq2}\betaeta_{i}^{2}=6\alphalpha^{2}, \] and $3\alphalpha^{2}-3\alphalpha+\varphirac{1}{4}\langlembdaeq0$, the only possibility is $\alphalpha=\varphirac{1}{2}$, but then $\sigmaum_{i\gammaeq2}\betaeta_{i}^{2}=\varphirac{1}{2}$, which is impossible since, by Lemma \ref{lem:At most 4 beta}, there is no class with $\betaeta_{i}=\varphirac{1}{2}$ for only $2$ indices $i$. Therefore when $k>1$, $|D|$ defines a birational map to $\muathbb{P}^{N}$, with $N=\varphirac{D^{2}}{2}+1=k+1$. That maps contracts the curves $\Gammaamma$ with $\Gammaamma D=0$, ie $A_{2},\deltaots,A_{16}$. One has \[ A_{1}(L-kA_{1})=2k=A_{1}'(L-kA_{1}), \] thus the curves $A_{1},A_{1}'$ in $\muathbb{P}^{k+1}$ have degree $2k$. Moreover $A_{1}A_{1}'=2(2k+1)$.\\ Let us prove that the 15 $(-2)$-curves $A_{i},\,i>1$ are the only ones contracted i.e. they are the only solutions of the equation $\Gammaamma D=0$, ($D=L-kA_{1}$). Suppose $\Gammaamma\neq A_{i}$, $\Gammaamma=\alphalpha L-\sigmaum\betaeta_{i}A_{i}$. One has $\Gammaamma D=0$ if and only if \[ \alphalpha(k+1)=\betaeta_{1}, \] and $\alphalpha^{2}k(k+1)-\sigmaum\betaeta_{i}^{2}=-1$, which gives \[ (k+1)\alphalpha^{2}+\sigmaum_{i>1}\betaeta_{i}^{2}=1, \] which has no solutions by Lemma \ref{lem:At most 4 beta}. \varepsilonnd{proof} \betaegin{rem} \langlembdaabel{rem:To-the-pair}To the pair $(L,A_{1})$ one can associate the pair $(L',A_{1}')$, with \[ L'=(2k+1)L-2k(k+1)A_{1},\,\,A_{1}'=2L-(2k+1)A_{1} \] with the same numerical properties \[ L^{2}=L'^{2}=2k,\,LA_{1}=0=L'A_{1}',\,LA_{1}'=4k(k+1)=L'A_{1}. \] The polarization $L'$ comes from a polarization $M'$ on the Abelian surface $B'$ associated to the Nikulin configuration $A_{1}',A_{2},\deltaots,A_{16}$. We will see that for $k=1$ the mapping $\Pisi:(L,A_{1})\thetao(L',A_{1}')$ is an involution of ${\rm NS}(X)$ which comes from an involution of $X$, and the Abelian surfaces $B$, $B'$ are isomorphic. \\ One can repeat the construction with $(L',A_{2})$ instead of $L,A_{1}$ etc... Let us define the maps $\Pisi_{i},\,\Pisi_{j}$, $\{i,j\}=\{1,2\}$ by $\Pisi_{i}(L)=(2k+1)L-2k(k+1)A_{i}$, $\Pisi_{i}(A_{i})=2L-(2k+1)A_{i}$, $\Pisi_{i}(A_{j})=A_{j}$. It is easy to check that $\Pisi_{1}\circ\Pisi_{2}$ has infinite order, and we therefore obtain in that way an infinite number of Nikulin configurations. For any $k\in \muathbb{N},\,k\neq0$, we will see that the map $\Pisi_{i}\circ\Pisi_{j}$ for $i\neq j$ is in fact the restriction of the action of an automorphism of $X$ on ${\rm NS}(X)$. \varepsilonnd{rem} \sigmaubsection{The first cases $k=1,2,3,4$} In this subsection, we give a more detailed description of our construction when $k$ is small. 
One has \\ \betaegin{tabular}{|c|c|c|c|c|} \hline $k$ & $1$ & $2$ & $3$ & $4$\thetaabularnewline \hline $A_{1}A_{1}'$ & $6$ & $10$ & $14$ & $18$\thetaabularnewline \hline $L^{2}$ & $4$ & $12$ & $24$ & $40$\thetaabularnewline \hline \varepsilonnd{tabular}\\ and the morphism $\pihi$ associated to the linear system $|L-kA_{1}|$ is from ${\rm Km}(B)$ to $\muathbb{P}^{k+1}$, with $k+1=2,3,4,5$ (which produce the most famous geometric examples of K3 surfaces). The case $k=1$ has been discussed in the Introduction. For $k=2$, the result was already observed in \cite{RRS}. The image of $\pihi$ is a $15$-nodal quartic $Q=Q_{4}$ in $\muathbb{P}^{3}$, the curves $A_{1},A_{1}'$ are sent to two degree $4$ rational curves (denoted by the same letters) meeting in $10$ points. As we already observed, the divisor $A_{1}+A_{1}'$ is a {\it 2-divisible class}. The double cover $Y\thetao Q$ branched over $A_{1}+A_{1}'$ has $40$ ordinary double points coming from the $15$ singular points on $Q$ and from the $10$ intersection points of $A_{1}$ and $A_{1}'$. This surface $Y$ is described in \cite{RRS}. It is a general type surface, a complete intersection in $\muathbb{P}^{4}$ of a quadric and the Igusa quartic. It is the canonical image of its minimal resolution. The double cover $S$ of $Y$ branched over the $40$ nodes is a so-called Schoen surface. It is a surface with $p_{g}(S)=p_{g}(Y)=5$, thus the canonical image of $S$ is $Y$ and the degree of the canonical map of the Schoen surface is $2$. For $k=3$, one get a model $Q_{6}$ of $X$ in $\muathbb{P}^{4}$ which is the complete intersection of a quadric and a cubic. In a similar way as before, $Q_{6}$ has $15$ ordinary double points and $A_{1}$ and $A_{1}'$ are sent by $|L-3A_{1}|$ to two rational curves of degree $6$ with intersection number $14$. For $k=4$, one get a degree $8$ model $Q_{8}$ of $X$ in $\muathbb{P}^{5}$ which is the complete intersection of $3$ quadrics. That model has $15$ ordinary double points and the curves $A_{1}$ and $A_{1}'$ are sent by $|L-4A_{1}|$ to two rational curves of degree $8$ with intersection number $18$. \sigmaection{Nikulin configurations and automorphisms\langlembdaabel{sec:Nikulin-configurations-and}} \sigmaubsection{Construction of an infinite order automorphism} Let us denote by $K_{abcd}$ with $a,b,c,d\in\{0,1\}$ the $16$ $(-2)$-curves on the K3 surface $X={\rm Km}(A)$, and as before let $L$ be the polarization coming from the polarization of $A$. \\ Let $K$ be the lattice generated by the following $16$ vectors $v_{1},\deltaots,v_{16}$: \[ \betaegin{array}{c} \varphirac{1}{2}\sigmaum_{p\in A[2]}K_{p},\,\varphirac{1}{2}\sigmaum_{W_{1}}K_{p},\,\varphirac{1}{2}\sigmaum_{W_{2}}K_{p},\,\varphirac{1}{2}\sigmaum_{W_{3}}K_{p},\,\varphirac{1}{2}\sigmaum_{W_{4}}K_{p},\,K_{0000}, \\ K_{1000},\,K_{0100},\,K_{0010},\,K_{0001},\,K_{0011},\,K_{0101},\,K_{1001},\,K_{0110},\,K_{1010},\,K_{1100} \varepsilonnd{array} \] where $W_{i}=\{(a_{1},a_{2},a_{3},a_{4})\in(\muathbb{Z}/2\muathbb{Z})^{4}\,|\,a_{i}=0\}$. By results of Nikulin, \cite{Nikulin}, the lattice $K$ is the minimal primitive sub-lattice of $H^{2}(X,\muathbb{Z})$ containing the $(-2)$-curves $K_{abcd}$. The discriminant group $K^{\vee}/K$ is isomorphic to $(\muathbb{Z}_{2})^{6}$ and the discriminant form of $K$ is isometric to the discriminant form of $U(2)^{\omegaplus3}$. 
\betaegin{lem} (See \cite[Remark 2.3]{GS}) The Néron-Severi group ${\rm NS}(X)$ is generated by $K$ and $v_{17}:=\varphirac{1}{2}(L+\omegamega_{4d})$, where $L$ is the positive generator of $K^{\pierp}$ with $L^{2}=4d$ (here $d=\varphirac{k(k+1)}{2}$), and if $L^{2}=0$ mod $8$, \[ \omegamega_{4d}=K_{0000}+K_{1000}+K_{0100}+K_{1100}, \] if $L^{2}=4$ mod $8$, \[ \omegamega_{4d}=K_{0001}+K_{0010}+K_{0011}+K_{1000}+K_{0100}+K_{1100}. \] \varepsilonnd{lem} One has moreover \betaegin{lem} \langlembdaabel{lem:The-discriminant-group}(\cite[Remark 2.11]{GS}) The discriminant group of ${\rm NS}(X)$ is isomorphic to $(\muathbb{Z}/2\muathbb{Z})^{4}\thetaimes\muathbb{Z}/4d\muathbb{Z}$. Suppose that $d=4$ mod $8$. Then ${\rm NS}(X)^{\vee}/{\rm NS}(X)$ is generated by \[ \betaegin{array}{c} w_{1}=\varphirac{1}{2}(v_{6}+v_{8}+v_{10}+v_{12}),\,\,\,w_{2}=\varphirac{1}{2}(v_{12}+v_{13}+v_{14}+v_{15}), \\ w_{3}=\varphirac{1}{2}(v_{11}+v_{13}+v_{14}+v_{16}),\,\,\,w_{4}=\varphirac{1}{2}(v_{9}+v_{10}+v_{12}+v_{13}), \\ w_{5}=\varphirac{1}{2}(v_{6}+v_{12}+v_{13})+\varphirac{1}{4d}(v_{7}+v_{8}+v_{9}+v_{10}+(1+2d)v_{11}+v_{16}-2v_{17}) \varepsilonnd{array} \] Suppose that $d=0$ mod $8$. Then ${\rm NS}(X)^{\vee}/{\rm NS}(X)$ is generated by \[ \betaegin{array}{c} w_{1}=\varphirac{1}{2}(v_{6}+v_{12}+v_{14}+v_{16}),\,\,\,w_{2}=\varphirac{1}{2}(v_{6}+v_{13}+v_{15}+v_{16}), \\ w_{3}=\varphirac{1}{2}(v_{6}+v_{8}+v_{10}+v_{12}),\,\,\,w_{4}=\varphirac{1}{2}(v_{6}+v_{8}+v_{9}+v_{13}), \\ w_{5}=\varphirac{1}{2}(v_{11}+v_{12}+v_{13})+\varphirac{1}{4d}((1+2d)v_{6}+v_{7}+v_{8}+v_{16}-2v_{17}) \varepsilonnd{array} \] In both cases, the discriminant form of ${\rm NS}(X)$ is isometric to the discriminant form of $U(2)^{\omegaplus3}\omegaplus\langlembdaa4d\rangle$ and the transcendent lattice $T_{X}={\rm NS}(X)^{\pierp}$ is isomorphic to $U(2)^{\omegaplus3}\omegaplus\langlembdaa-4d\rangle$. \varepsilonnd{lem} \betaegin{proof} The columns of the inverse of the intersection matrix $(v_{i}v_{j})_{1\langlembdaeq i,j\langlembdaeq17}$ is a base of ${\rm NS}(X)^{\vee}$ in the base $v_{1},\deltaots,v_{17}$. From that data we obtain the generators $w_{1},\deltaots,w_{5}$ of ${\rm NS}(X)^{\vee}/{\rm NS}(X)$. The matrix $(w_{i}w_{j})_{1\langlembdaeq i,j\langlembdaeq5}$ is \[ \langlembdaeft(\betaegin{array}{ccccc} 0 & \varphirac{1}{2} & 0 & 0 & 0\\ \varphirac{1}{2} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \varphirac{1}{2} & 0\\ 0 & 0 & \varphirac{1}{2} & 0 & 0\\ 0 & 0 & 0 & 0 & \varphirac{1}{4d} \varepsilonnd{array}\right)\in M_{5}(\muathbb{Q}/\muathbb{Z}), \] one has moreover $w_{i}^{2}=0$ mod $2\muathbb{Z}$ for $1\langlembdaeq i\langlembdaeq4$ and $w_{5}^{2}=\varphirac{1}{4d}$ mod $2\muathbb{Z}$. Thus the discriminant form \[ q:{\rm NS}(X)^{\vee}/{\rm NS}(X)\thetao\muathbb{Q}/2\muathbb{Z} \] is isometric to the discriminant form of $U(2)^{\omegaplus3}\omegaplus\langlembdaa4d\rangle.$ Since $H^{2}(X,\muathbb{Z})$ is unimodular, and $U(-2)\sigmaimeq U(2)$, we obtain $T_{X}$ (for more details see e.g. \cite[Chap. 14, Proposition 0.2]{Huyb}). \varepsilonnd{proof} In Section \ref{sec1:Two-Nikulin-configurations}, we associated to $L$ and to $A_{j}$ the divisors \[ L_{j}=(2k+1)L-2k(k+1)A_{j},\,\,A_{j}'=2L-(2k+1)A_{1}. \] The vector space endomorphism \[ \thetaheta_{j}:{\rm NS}(X)\omegatimes\muathbb{Q}\thetao{\rm NS}(X)\omegatimes\muathbb{Q} \] defined by $\thetaheta_{j}(A_{i})=A_{i}$ for $i\neq j$ and \[ \thetaheta_{j}(A_{j})=A_{j}',\;\thetaheta_{j}(L)=L_{j} \] is an involution, and we will see that it is an isometry (cf. 
Lemma \ref{lem:The-morphisms isometries}). Let us define \[ \Pihi_{1}=\thetaheta_{2}\thetaheta_{1}. \] The endomorphism $\Pihi_{1}$ has infinite order, its characteristic polynomial $\deltaet(T\thetaext{I}_{\thetaext{d}}-\Pihi_{1})$ is the product of $(T-1)^{15}$ and the Salem polynomial \[ T^{2}+(2-4k^{2})T+1. \] The aim of this section is to prove the following result: \betaegin{thm} \langlembdaabel{thm:There-exists-an-invol}The automorphism $\Pihi_{1}$ extends to an effective Hodge isometry $\Pihi$ of $H^{2}(X,\muathbb{Z})$ and there exists an automorphism $\iota$ of $X$ which acts on $H^{2}(X,\muathbb{Z})$ by $\iota^{*}=\Pihi$. \varepsilonnd{thm} Let us start by the following Lemma: \betaegin{lem} \langlembdaabel{lem:The-morphisms isometries}The morphisms $\theta_{1},\,\theta_{2},\,\Pihi_{1}$ preserve ${\rm NS}(X)$ and are isometries of ${\rm NS}(X)$. \varepsilonnd{lem} \betaegin{proof} It is simple to check that $\thetaheta_{j}$ preserves the lattice generated by $K,L$ and $v_{17}=\varphirac{1}{2}(L+\omegamega_{4d})$. Since for all $1\langlembdaeq i,j\langlembdaeq16$ one has $\thetaheta_{j}(A_{i})\thetaheta_{j}(A_{k})=A_{i}A_{k}$, $\thetaheta_{j}(L)\thetaheta_{j}(A_{i})=LA_{i}=0$, $\thetaheta_{j}(L)^{2}=L^{2}$, $\thetaheta_{j}$ is an isometry of ${\rm NS}(X)$, hence so is $\Pihi_{1}=\thetaheta_{2}\thetaheta_{1}$. \varepsilonnd{proof} Let $T_{X}={\rm NS}(X)^{\pierp}$. We define $\Pihi_{2}:T_{X}\thetao T_{X}$ as the identity. The map $(\Pihi_{1},\Pihi_{2})$ is an isometry of ${\rm NS}(X)\omegaplus T_{X}$. \betaegin{lem} \langlembdaabel{lem:The-map-extends}The morphism $(\Pihi_{1},\Pihi_{2})$ extends to an isometry $\Pihi$ of $H^{2}(X,\muathbb{Z})$. \varepsilonnd{lem} \betaegin{proof} Let $L_{1},L_{2}$ be the lattices $L_{1}={\rm NS}(X),$ $L_{2}=T_{X}={\rm NS}(X)^{\pierp}.$ Let us denote by \[ q_{i}:L_{i}^{\vee}/L_{i}\thetao\muathbb{Q}/2\muathbb{Z} \] the discriminant form of $L_{i}$. By Lemma \ref{lem:The-discriminant-group} and its proof, we know the form $q_{1}$ on the base $w_{i}$. \\ One has $L_{2}=U(2)\omegaplus U(2)\omegaplus\langlembdaa-4d\rangle$. Let us take the base $e_{i},\,1\langlembdaeq i\langlembdaeq5$ of $L_{2}$ such that the intersection matrix of the $e_{j}$'s is \[ (e_{i}e_{j})_{1\langlembdaeq i,j\langlembdaeq5}=-\langlembdaeft(\betaegin{array}{ccccc} 0 & 2 & 0 & 0 & 0\\ 2 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 2 & 0\\ 0 & 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 0 & 4d \varepsilonnd{array}\right). \] The elements $w_{i}'=\varphirac{1}{2}e_{i}$ for $1\langlembdaeq i\langlembdaeq4$ and $w_{5}'=\varphirac{1}{4d}e_{5}$ are generators of $L_{2}^{\vee}/L_{2}$. Let \[ \pihi:L_{2}^{\vee}/L_{2}\thetao L_{1}^{\vee}/L_{1} \] be the isomorphism (called the gluing map) defined by \[ \pihi(w_{i}')=w_{i}. \] One has $q_{1}(\pihi(\sigmaum a_{i}w_{i}'))=-q_{2}(\sigmaum a_{i}w_{i}')$ i.e. \[ q_{2}=-\pihi^{*}q_{1}. \] Since $L_{1},L_{2}$ are primitive sub-lattices of the even unimodular lattice $H^{2}(X,\muathbb{Z})$ with $L_{2}=L_{1}^{\pierp}$, the lattice $H^{2}(X,\muathbb{Z})$ is obtained by gluing $L_{1}$ with $L_{2}$ by the gluing isomorphism $\pihi$. In other words $H^{2}(X,\muathbb{Z})$ is generated by all the lifts in $L_{1}^{\vee}\omegaplus L_{2}^{\vee}$ of the elements $(w_{i},w_{i}')$, $i=1,\deltaots,5$ of the discriminant group of $L_{1}\omegaplus L_{2}$. \\ According to general results (see e.g. 
\cite[Page 5]{Mc}), the element $(\Pihi_{1},\Pihi_{2})$ of the orthogonal group of $L_{1}\omegaplus L_{2}$ extends to $H^{2}(X,\muathbb{Z})$ if and only if the gluing map $\pihi$ satisfies $\pihi\circ\Pihi_{2}=\Pihi_{1}\circ\pihi$. A simple computation gives that for $1\langlembdaeq i\langlembdaeq4$, one has $\thetaheta_{j}w_{i}=-w_{i}=w_{i}$ (for $j\in\{1,2\}$), thus $\Pihi_{1}(w_{i})=w_{i}$. Moreover we compute that \[ \thetaheta_{j}(w_{5})=(1-2k^{2})w_{5} \] and since $(1-2k^{2})^{2}=1$ modulo $4d=2k(k+1)$, one gets $\Pihi_{1}(w_{5})=\thetaheta_{2}\thetaheta_{1}w_{5}=\omegamega_{5}$. Since by definition $\Pihi_{2}(w_{i}')=w_{i}'$ for $i=1,\deltaots,5$, we obtain the desired relation $\pihi\circ\Pihi_{2}=\Pihi_{1}\circ\pihi$. \varepsilonnd{proof} \betaegin{rem} \langlembdaabel{rem15:Because-of-the}Because of the relation $\thetaheta_{j}(w_{5})=(1-2k^{2})w_{5}$, $j\in\{1,2\}$ at the end of the proof of Lemma \ref{lem:The-map-extends}, it is not possible to extend the involution $\thetaheta_{j}$ to an isometry, unless $k=1$. In that case, using the proof of Lemma \ref{lem:The-Hodge-isometry is effective} below, the involution $\thetaheta_{j}$ extends to an effective Hodge isometry (with action by multiplication by $-1$ on $T_{X}$). The resulting non-symplectic involution is in fact known under the name of projection involution, see e.g. \cite{Keum}. \varepsilonnd{rem} \betaegin{lem} The morphism $\Pihi$ is an Hodge isometry: its $\muathbb{C}$-linear extension $\Pihi_{\muathbb{C}}:H^{2}(X,\muathbb{C})\thetao H^{2}(X,\muathbb{C})$ preserves the Hodge decomposition. \varepsilonnd{lem} \betaegin{proof} The map $\Pihi$ is the identity on the space $T_{X}\omegatimes\muathbb{C}$ containing the period. \varepsilonnd{proof} \betaegin{lem} \langlembdaabel{lem:The-Hodge-isometry is effective}The Hodge isometry $\Pihi$ is effective. \varepsilonnd{lem} \betaegin{proof} Since $X$ is projective by \cite[Proposition 3.11]{BPVdV}, it is enough to prove that the image by $\Pihi$ of one ample class is an ample class. Let $m\gammaeq2$ be an integer. By \cite[Proposition 4.3]{GS}, the divisor $D=mL-\varphirac{1}{2}\sigmaum_{i\gammaeq1}A_{i}$ is ample. The image by $\thetaheta_{1}$ of $D$ is \[ \thetaheta_{1}(D)=mL_{1}-\varphirac{1}{2}\langlembdaeft(A_{1}'+\sigmaum_{i\gammaeq2}A_{i}\right) \] where by Section \ref{sec1:Two-Nikulin-configurations} we have that $A_{1}'$ is a $(-2)$-curve, which is disjoint from the $A_{j},\,j\gammaeq2$, and these $16$ $(-2)$-curves have intersection $0$ with $L_{1}=\thetaheta_{1}(L)$. There exists an Abelian surface $B'$ such that $X={\rm Km}(B')$ and these $16$ $(-2)$-curves are resolution of the $16$ singularities in $B'/[-1]$. Moreover $L_{1}$ comes from a polarization $M'$ on $B'$, which clearly generates ${\rm NS}(B')$. Thus again by \cite[Proposition 4.3]{GS}, $\thetaheta_{1}(D)$ is ample. \\ The analogous proof with $(\thetaheta_{2},\,A_{2})$ instead of $(\theta_{1},\,A_{1})$ gives us that $\thetaheta_{2}(D)$ is also ample. Since $\thetaheta_{i},\,i=1,2$ are involutions and $\Pihi=\thetaheta_{2}\thetaheta_{1}$, we conclude that \[ \Pihi(\thetaheta_{1}(D))=\thetaheta_{2}(D) \] is ample, and thus $\Pihi$ is effective. \varepsilonnd{proof} We can now apply the Torelli Theorem for K3 surfaces (see \cite[Chap. VIII, Theorem 11.1]{BPVdV}): since $\Pihi$ is an effective Hodge isometry there exists an automorphism $\iota:X\thetao X$ such that $\iota^{*}=\Pihi$. This finishes the proof of Theorem \ref{thm:There-exists-an-invol}. 
\qed \betaegin{rem} The Lefschetz formula for the fixed locus $X^{\iota}$ of $\iota$ on $X$ gives \[ \chi(X^{\iota})=\sigmaum_{i=0}^{4}(-1)^{i}tr(\Pihi|H^{i}(X,\muathbb{R}))=1+(4k^{2}+18)+1=20+4k^{2}, \] (here $\iota^{*}=\Pihi$). If $k=1$ then $\chi(X^{\iota})=24$ and we can easily see that $X^{\iota}$ contains two rational curves. Indeed in this case as remarked before (Remark \ref{rem15:Because-of-the}) $\thetaheta_{i}$, $i=1,2$ can be extended to a non-symplectic involution (still denoted $\theta_{i}$) of the whole lattice $H^{2}(X,\muathbb{Z})$. The fixed locus of each $\thetaheta_{i}$, $i=1,2$ are the curves pull-back on $X$ of the six lines in the branching locus of the double cover of $\muathbb{P}^{2}$ (the $\thetaheta_{i}$, $i=1,2$ are the covering involutions). These curves are different except for the pull-backs $\varepsilonll_{1}$ and $\varepsilonll_{2}$ of two lines, which are the lines passing through the point of the branching curve corresponding to $A_{2}$ if we consider the double cover determined by the involution $\thetaheta_{1}$, respectively through the point corresponding to $A_{1}$ if we consider $\thetaheta_{2}$. So the infinite order automorphism $\iota$ corresponding to $\Pihi=\thetaheta_{2}\thetaheta_{1}$ fixes the two rational curves $\varepsilonll_{1}$ and $\varepsilonll_{2}$ on $X$. By using results of Nikulin on non-symplectic involutions \cite{NikiPezzo} the invariant sublattices $H^{2}(X,\muathbb{Z})$ for the action of $\thetaheta_{i}$, $i=1,2$ are both isometric to $U\omegaplus E_{8}(-1)\omegaplus\langlembdaangle-2\ranglengle^{\omegaplus6}$. \varepsilonnd{rem} \sigmaubsection{\langlembdaabel{subsec:Some-remarks-on Auto}Action of the automorphism group on Nikulin configurations} The aim of this sub-section is to prove the following result \betaegin{thm} \langlembdaabel{thm:no automorphisms}Suppose that $k\gammaeq2$. There is no automorphism $f$ of $X$ sending the configuration $\muathcal{C}=\sigmaum_{i=1}^{16}A_{i}$ to the configuration $\muathcal{C}'=A_{1}'+\sigmaum_{i=2}^{16}A_{i}$. \varepsilonnd{thm} Suppose that such an automorphism $f$ exists. The group of translations by the $2$-torsion points on $B$ acts on $X={\rm Km}(B)$ and that action is transitive on the set of curves $A_{1},\deltaots,A_{16}$. Thus up to changing $f$ by $f\circ t$ (where $t$ is such a translation), one can suppose that the image of $A_{1}$ is $A_{1}'$. Then the automorphism $f$ induces a permutation of the curves $A_{2},\deltaots,A_{16}$. The $(-2)$-curve $A_{1}''=f^{2}(A_{1})=f(A_{1}')$ is orthogonal to the $15$ curves $A_{i},\,i>1$ and therefore its class is in the group generated by $L$ and $A_{1}$. By the description of ${\rm NS}(X)$, the $(-2)$-class $A_{1}''=aA_{1}+bL$ has coefficients $a,b\in\muathbb{Z}$ . Moreover $a,b$ satisfy the Pell-Fermat equation \betaegin{equation} a^{2}-k(k+1)b^{2}=1.\langlembdaabel{eq:Pell-Fermat-1} \varepsilonnd{equation} Let us prove: \betaegin{lem} Let $C=aA_{1}+bL$ be an effective $(-2)$-class. Then there exists $u,v\in\muathbb{N}$ such that $aA_{1}+bL=uA_{1}+vA_{1}'$, in particular the only $(-2)$-curves in the lattice generated by $L$ and $A_{1}$ are $A_{1}$ and $A_{1}'$. \varepsilonnd{lem} \betaegin{proof} If $(a,b)$ is a solution of equation \varepsilonqref{eq:Pell-Fermat-1}, then so are $(\pim a,\pim b)$. We say that a solution is positive if $a\gammaeq0$ and $b\gammaeq0$. Let us identify $\muathbb{Z}^{2}$ with $A=\muathbb{Z}[\sigmaqrt{N}]$ by sending $(a,b)$ to $a+b\sigmaqrt{N}$, where $N=k(k+1)$. 
The solutions of \varepsilonqref{eq:Pell-Fermat-1} are units of the ring $A$. According to the Chakravala method solving equation \varepsilonqref{eq:Pell-Fermat-1}, there exists a solution $\alpha+\beta\sigmaqrt{N}$ (called fundamental) with $\alpha,\beta\in\muathbb{N}^{*}$ such that the positive solutions are the elements of the form \[ a_{m}+b_{m}\sigmaqrt{N}=(\alpha+\beta\sigmaqrt{N})^{m},\,m\in\muathbb{N}. \] The first term of the sequence of convergents of the regular continued fraction for $\sigmaqrt{N}$ is \[ \varphirac{2k+1}{2}, \] and since $(2k+1,2)$ is a solution of \varepsilonqref{eq:Pell-Fermat-1}, the fundamental solution is $(\alpha,\beta)=(2k+1,2)$. \\ An effective $(-2)$-class $C=aA_{1}+bL$ either equals $A_{1}$ or satisfies $CL>0$ and $CA_{1}>0$, therefore $b>0$ and $a<0$. Thus if $C\neq A_{1}$, there exists $m$ such that $C=-a_{m}A_{1}+b_{m}L$. Since $A_{1}'=2L-(2k+1)A_{1}$, one obtains \[ -a_{m}A_{1}+b_{m}L=\varphirac{b_{m}}{2}A_{1}'+((2k+1)\varphirac{b_{m}}{2}-a_{m})A_{1} \] and the Lemma is proved if the coefficients $u_{m}=\varphirac{b_{m}}{2}$ and $v_{m}=(2k+1)\varphirac{b_{m}}{2}-a_{m}$ are both positive and in $\muathbb{Z}$. Using the relation \[ a_{m+1}+b_{m+1}\sigmaqrt{N}=(2k+1+2\sigmaqrt{N})(a_{m}+b_{m}\sigmaqrt{N}), \] that follows from an easy induction. \varepsilonnd{proof} Therefore we conclude that $A_{1}''=A_{1}$ i.e. $f$ permutes $A_{1}$ and $A_{1}'$. Let us finish the proof of Theorem \ref{thm:no automorphisms}: \betaegin{proof} The class $f^{*}L$ is orthogonal to $A_{1}',A_{2},\deltaots,A_{16}$, thus this is a multiple of the class $L'=(2k+1)L-2k(k+1)A_{1}$ which has the same property. Since both classes have the same self-intersection and are effective, one gets $f^{*}L=L'$; by the same reasoning, since $f^{*}A_{1}'=A_{1}$, one gets $f^{*}L'=L$. By \cite[Proposition 4.3]{GS}, the divisor \[ D=2L-\varphirac{1}{2}\sigmaum_{i\gammaeq1}A_{i} \] is ample, thus $f^{*}D=2L'-\varphirac{1}{2}(A_{1}'+\sigmaum_{i\gammaeq2}A_{i})$ is also ample and so is $D+f^{*}D$. Moreover $D+f^{*}D$ is preserved by $f$, thus by \cite[Proposition 5.3.3]{Huyb}, the automorphism $f$ has finite order. Up to taking a power of it, one can suppose that $f$ has order $2^{m}$ for some $m\in\muathbb{N}^{*}$. Suppose $m=1$, ie $f$ is an involution. Then \[ \varphirac{1}{2}(A_{1}+A_{1}')=L-kA_{1} \] is fixed, there are curves $A_{i},\,i>1$ such that $f(A_{i})=A_{i}$ (say $s$ of such curves; necessarily $s$ is odd) and $f$ permutes the remaining curves $A_{j}$ by pairs (there are $t=\varphirac{1}{2}(15-s)$ such pairs). Let $\Gamma$ be the lattice generated by the classes $A_{i}$ fixed by $f$, by $A_{j}+f(A_{j})$ if $f(A_{j})\neq A_{j}$ and by $L-kA_{1}$. It is a finite index sub-lattice of ${\rm NS}(X)^{f}$, the fix sub-lattice of the Néron-Severi group. The discriminant group of $\Gamma$ is \[ \muathbb{Z}/2k\muathbb{Z}\thetaimes(\muathbb{Z}/2\muathbb{Z})^{s}\thetaimes(\muathbb{Z}/4\muathbb{Z})^{t}. \] Since in ${\rm NS}(X)$ there is at most a coefficient $\varphirac{1}{2}$ on $L$, the discriminant of ${\rm NS}(X)^{f}$ contains $\muathbb{Z}/k\muathbb{Z}$. If $f$ was non-symplectic, then $\muathcal M={\rm NS}(X)^{f}$ would be a $2$-elementary lattice (see \cite{AST}; it means that the discriminant group $\muathcal{M}^{*}/\muathcal{M}\sigmaimeq(\muathbb{Z}/2\muathbb{Z})^{a}$ for some integer positive $a$). But for $k>2$ this is impossible, therefore $f$ has to be symplectic. 
\\ For $k=2$, we use the model $Y\hookrightarrow\muathbb{P}^{3}$ of degree $4$ with $15$ nodes of $X$ determined by the divisor $L-2A_{1}$. Since $f$ preserves $L-kA_{1}$, the involution on $X$ induces an involution (still denoted $f$) on $\muathbb{P}^{3}=|L-kA_{1}|$ preserving $Y$. Up to conjugation, $f$ is $x\thetao(-x_{1}:x_{2}:x_{3}:x_{4})$ or $x\thetao(-x_{1}:-x_{2}:x_{3}:x_{4})$. \\ Suppose that $f$ is $f:x\thetao(-x_{1}:x_{2}:x_{3}:x_{4})$. The hyperplane $x_{1}=0$ cuts the quartic $Y$ into a quartic plane curve $C_{0}\hookrightarrow Y$. The surface $Y$ is a double cover of $\muathbb{P}(2,1,1,1)$ branched over $C_{0}\hookrightarrow\muathbb{P}(2,1,1,1)$. The quartic $C_{0}$ is irreducible and reduced, since otherwise $X$ would have Picard number $>17$. The singularities on $C_{0}$ are at most nodes and the corresponding nodes on $Y$ are fixed by $f$. Let us recall that the number $s$ of fixed nodes is odd. \\ Suppose that $C_{0}$ contains $3$ nodes. Its pull back $C_{0}'$ on $X$ is a smooth rational curve. The rank of the sub-lattice ${\rm NS}(X)^{f}$ is $1+s+t=10$. By \cite[Figure 1]{AST}, the genus of the fixed curve $C_{0}'$ must be strictly positive, which is a contradiction. \\ Suppose that $C_{0}$ contains $2$ nodes, then the isolated fixed point $(1:0:0:0)$ is also a node; the rank of ${\rm NS}(X)^{f}$ is still $10$. One has \[ [{\rm NS}(X)^{f}:\Gammaamma]^{2}=\varphirac{\deltaet\Gamma}{\deltaet{\rm NS}(X)^{f}}=\varphirac{2^{2+1+2t}}{2^{a}}=2^{17-a}, \] thus $a$ is odd. However by \cite[Figure 1]{AST}, when ${\rm NS}(X)^{f}$ has rank $10$, the integer $a$ is always even, this is a contradiction.\\ Suppose that $C_{0}$ contains $1$ node. Its pull back on $X$ is a smooth genus $2$ curve. One has ${\rm rk}{\rm NS}(X)^{f}=9$. By \cite[Figure 1]{AST}, since the fixed curve has genus $2$, one has $a=9$, therefore \[ [{\rm NS}(X)^{f}:\Gammaamma]^{2}=2^{17-a}=2^{8}, \] and there are at most $4$-classes which are $2$-divisible in the discriminant group \[ \muathbb{Z}/4\muathbb{Z}\thetaimes\muathbb{Z}/2\muathbb{Z}\thetaimes(\muathbb{Z}/4\muathbb{Z})^{7} \] of $\Gamma$. But then the discriminant group of ${\rm NS}(X)^{f}$ would contain a sub-group $\muathbb{Z}/4\muathbb{Z}$, which is a contradiction. \\ Suppose that $f$ is $f:x\thetao(-x_{1}:-x_{2}:x_{3}:x_{4})$ (observe that we can not exclude immediately this case since $Y$ is singular. If $Y$ would be smooth then such an $f$ would correspond to a symplectic automorphism). The line $x_{1}=x_{2}=0$ or $x_{3}=x_{4}=0$ cannot be included in $Y,$ otherwise $Y$ would be singular along that line (this is seen using the equation of $Y$). The number of fixed nodes being odd, there are $1$ or $3$ fixed nodes of $Y$ on these two lines (the intersection number of each lines with $Y$ being $4$). \\ Suppose that one node is fixed. The corresponding $(-2)$-curve on $X$ must be stable, moreover ${\rm rk}{\rm NS}(X)^{f}=9$. But by \cite[Figure 1]{AST}, there is no non-symplectic involution on a K3 such that ${\rm rk}{\rm NS}(X)^{f}=9$ and the fix-locus is a $(-2)$-curve or is empty. By the same reasoning, one can discard the case of $3$ stable rational curves. \\ We therefore proved that for any $k>1$, $f$ must be symplectic. A symplectic automorphism acts trivially on the transcendental lattice $T_{X}$, which in our situation has rank $5$. Therefore the trace of $f$ on $H^{2}(X,\muathbb{Z})$ equals $6+s>6$. But the trace of a symplectic involution equals $6$ (see e.g. \cite[Section 1.2]{SvG}). 
This is a contradiction, thus $f$ cannot have order $2$ and $m$ is larger than $1$. The automorphism $g=f^{2^{m-1}}$ has order $2$ and $g(A_{1})=A_{1},\,g(A_{1}')=A_{1}'$, thus $g(L)=L$. There are curves $A_{i},\,i>1$ such that $f(A_{i})=A_{i}$ (say $s$ of such, $s$ is odd since $A_{1}$ is fixed) and the remaining curves $A_{j}$ are permuted $2$ by $2$ (there are $t=\varphirac{1}{2}(15-s)$ such pairs). Let similarly as above $\Gamma'$ be the sub-lattice generated by $L,A_{1}$ and the fix classes $A_{i}$, $A_{j}+g(A_{j})$. It is a finite index sub-lattice of ${\rm NS}(X)^{g}$ and its discriminant group is \[ \muathbb{Z}/2k(k+1)\muathbb{Z}\thetaimes(\muathbb{Z}/2\muathbb{Z})^{s+1}\thetaimes(\muathbb{Z}/4\muathbb{Z})^{t}. \] By the same reasoning as before, the automorphism $g$ must be symplectic as soon as $k>1$. But the trace of $g$ is $8+s>6$, thus $g$ cannot be symplectic either. Therefore we conclude that such an automorphism $f$ does not exist. \varepsilonnd{proof} \sigmaubsection{Consequences on the Kummer structures on $X$\langlembdaabel{subsec:Nikulin-structures-and Fermat}} A Kummer structure on a K3 surface $X$ is an isomorphism class of Abelian surfaces $B$ such that $X\sigmaimeq{\rm Km}(B)$. The following Proposition is stated in \cite{HLOY}; we give here a proof for completeness: \betaegin{prop} \langlembdaabel{prop:The-Kummer-structures}The Kummer structures on $X$ are in one-to-one correspondence with the orbits of Nikulin configurations under the automorphism group ${\rm Aut}(X)$ of $X$. \varepsilonnd{prop} \betaegin{proof} Let $\muathcal{C}$ be a Nikulin configuration on the K3 surface $X$. By \cite[Theorem 1]{Nikulin} of Nikulin, there exists a unique (up to isomorphism) double cover $\thetailde B \thetao X$ branched over $\muathcal{C}$. Moreover the minimal model $B$ of $\thetailde B$ is an Abelian surface, and $X$ is the Kummer surface associated to $B$, $\muathcal C$ being the union of the exceptional curves of the resolution $X={\rm Km}(B)\thetao B/[-1]$. Let $\muu:X\thetao X$ be an automorphism sending a Nikulin configuration $\muathcal{C}$ to $\muathcal{C}'$. Let $B$, $B'$ be the abelian surfaces such that $\muathcal{C}$ (resp. $\muathcal{C}'$) is the configuration associated to ${\rm Km}(B)=X$ (resp. ${\rm Km}(B')=X$). \\ Let $\thetailde{B}\thetao B$ and $\thetailde{B}'\thetao B'$ be the blow-up at the sixteen $2$-torsion points of $B$ (resp. $B'$). Consider the natural map $\thetailde{B}\thetao X\sigmatackrel{\muu}{\thetao}X$: it is a double cover of $X$ branched over $\muathcal{C}'$ and ramified over the exceptional locus of $\thetailde{B}\thetao B$, thus by the results of Nikulin we just recalled, $\thetailde{B}$ is isomorphic to $\thetailde{B}'$ and $B\sigmaimeq B'$.\\ Reciprocally, suppose that there is an isomorphism $\pihi:B\thetao B'$. It induces an isomorphism $\thetailde{\pihi}:\thetailde{B}\thetao\thetailde{B}'$ that induces an isomorphism $X={\rm Km}(B)\thetao{\rm Km}(B')=X$ which sends the Nikulin configuration $\muathcal{C}$ corresponding to $B$ to the Kummer structure $\muathcal{C}'$ corresponding to $B'$. \varepsilonnd{proof} According to \cite{HLOY}, the number of Kummer structures is finite. If $X={\rm Km}(B)$ and $B^{*}$ is the dual of $B$, by result of Gritsenko and Hulek \cite{GH} one has also $X\sigmaimeq{\rm Km}(B^{*})$, thus if $B$ is not principally polarized, the number of Kummer structures is at least $2$. 
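Let us also note in passing (an elementary observation) that in our situation the polarization $M$ satisfies
\[
\chi(M)=\frac{M^{2}}{2}=\frac{k(k+1)}{2},
\]
so $M$ is a principal polarization exactly when $k=1$; for $k\geq2$ the Abelian surface $B$ is not principally polarized, and the remark above already yields at least two Kummer structures on $X$.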
\\ When ${\rm NS}(B)=\muathbb{Z} M$, by results of Orlov \cite{Orlov} on derived categories, the number of Kummer structures equals $2^{s}$ where $s$ is the number of prime divisor of $\varphirac{1}{2}M^{2}$. In our situation one has $M^{2}=k(k+1)$. By subsection \ref{subsec:Some-remarks-on Auto} as soon as $k>2$, there is no automorphism sending the configuration $\muathcal{C}=\sigmaum_{i=1}^{16}A_{i}$ to $\muathcal{C}'=A_{1}'+\sigmaum_{i=2}^{16}A_{i}$, thus \betaegin{cor} Suppose $k\gammaeq2$. The two Nikulin configurations $\muathcal{C}=\sigmaum_{i=1}^{16}A_{i}$ and $\muathcal{C}'=A_{1}'+\sigmaum_{i=2}^{16}A_{i}$ represent two distinct Kummer structures on $X$. \varepsilonnd{cor} \betaegin{rem} When $k=2$ then $\varphirac{k(k+1)}{2}=3$ is divisible by one prime, thus the configurations $\muathcal{C}$ and $\muathcal{C}'$ are the two representatives of the set of Kummer structures on $X={\rm Km}(B)$. Observe that $X$ is also isomorphic to ${\rm Km}(B^{*})$, where $B^{*}$ is the dual of $B$. Since $B$ is not isomorphic to $B^{*}$, the double cover of $X$ branched over $\muathcal{C}'$ is (the blow-up of) $B^{*}$. \varepsilonnd{rem} \sigmaection{bi-double covers associated to Nikulin configurations\langlembdaabel{sec:bi-double-covers-associated}} \sigmaubsection{A hyperelliptic curve with genus $\langlembdaeq2k$ and a point of multiplicity $2(2k+1)$ on the Abelian surface $B$\langlembdaabel{subsec:An-hyperelliptic-curve}} We keep the notations as above: $(B,M)$ is a polarized Abelian variety with $M^{2}=k(k+1)$ and ${\rm Pic}(B)=\muathbb{Z} M$. The associated K3 surface $X={\rm Km}(B)$ contains the $17$ smooth rational curves \[ A_{1},A_{1}',\,A_{2},\deltaots,A_{16} \] such that $A_{1},\deltaots,A_{16}$ are the $16$ disjoint $(-2)$-curves arising from the Kummer structure, $A_{1}'$ is a $(-2)$-curve such that $A_{1}',\,A_{2},\deltaots,A_{16}$ is a Nikulin configuration and \[ A_{1}A_{1}'=4k+2. \] Let $\pii:\thetailde{B}\thetao B$ be the blow-up of $B$ at the $16$ points of $2$-torsion, so that there is a natural double cover $\thetailde{B}\thetao X={\rm Km}(B)$ branched over the $16$ exceptional divisors. Let $\thetailde{\Gamma}$ be the pull-back of $A_{1}'$ on $\thetailde{B}$ and let $\Gamma$ be the image of $\thetailde{\Gamma}$ on $B$. We denote by $E\hookrightarrow\thetailde{B}$ the $(-1)$-curve above $A_{1}$. Let us prove the following result \betaegin{prop} \langlembdaabel{prop:The-curve-is hyperelliptic}The curve $\Gamma\hookrightarrow B$ is hyperelliptic, it has geometric genus $\langlembdaeq2k$ and has a unique singularity, which is a point of multiplicity $2(2k+1)$. The curve $\Gammaamma$ is in the linear system $|4M|$, in particular $\Gamma^{2}=16k(k+1)$. \varepsilonnd{prop} \betaegin{proof} The singularities on a curve that is the union of two smooth curves on a smooth surface are of type \[ \mathfrak{a}_{2m-1},\,m\gammaeq1, \] where an equation of an $\mathfrak{a}_{2m-1}$ singularity is $\{x^{2m}-y^{2}=0\}$. This is well-known by experts but we couldn't find a reference and we therefore sketch a proof. At a singularity $p$, there are local parameters $x,y$ such that $C_1$ is given by $y=0$. By the implicit function theorem, we reduce to the case where the curve $C_2$ has equation $y=x^m$ for some $m>0$. Then the singularity has equation $\{y(y-x^m)=0\}$, which after a variable change becomes $\{x^{2m}-y^{2}=0\}$.\\ Let us denote by $\alpha_{m}$ the number of $\mathfrak{a}_{2m-1}$ singularities on the union $A_{1}+A_{1}'$. 
Since an $\mathfrak{a}_{2m-1}$ singularity contributes $m$ to the intersection number of $A_{1}$ and $A_{1}'$, one has
\[
\sum_{m\geq0}m\alpha_{m}=4k+2.
\]
By \cite[Table 1, Page 109]{BPVdV}, the curve $\tilde{\Gamma}\hookrightarrow\tilde{B}$ has a singularity $\mathfrak{a}_{m-1}$ above a singularity $\mathfrak{a}_{2m-1}$ of $A_{1}+A_{1}'$ (by abuse of language, an $\mathfrak{a}_{0}$-singularity means a smooth point). Let $\Gamma'$ be the normalization of $\tilde{\Gamma}$; an $\mathfrak{a}_{2m-1}$-singularity contributes one point to the ramification locus of the double cover $\Gamma'\to A_{1}$ (induced by $\tilde{\Gamma}\to A_{1}$) if $m$ is odd and none if $m$ is even. Therefore the geometric genus of $\Gamma$ satisfies
\[
2g(\Gamma)-2=2\cdot(-2)+\sum_{m\,odd}\alpha_{m}\leq-4+(4k+2)=4k-2,
\]
which gives $g(\Gamma)\leq2k$. The singularities of $\tilde{\Gamma}$ are at its intersection with $E$, and since
\[
\tilde{\Gamma}E=\frac{1}{2}\pi_{1}^{*}A_{1}\pi_{1}^{*}A_{1}'=A_{1}A_{1}',
\]
we obtain $\tilde{\Gamma}E=4k+2$. Since $E$ is contracted by the map $\tilde{B}\to B$, the curve $\Gamma$ (the image of $\tilde{\Gamma}$) has a unique singular point, of multiplicity $4k+2$. \\
Since $A_{1}'=2L-(2k+1)A_{1}$, its pull back on $\tilde{B}_{1}$ is $4\tilde{M}-2(2k+1)\tilde{\Gamma}$ and its image $\Gamma$ has class $4M$, thus $\Gamma^{2}=16k(k+1)$.
\end{proof}
\begin{rem}
Let us choose the point of multiplicity $2(2k+1)$ of $\Gamma$ as the origin $0$ of the group $B$. By construction the curve $\Gamma$ does not contain any non-trivial $2$-torsion point of $B_{1}$.
\end{rem}
\subsubsection*{The problem of the intersection of $A_{1}$ and $A_{1}'$}
It is a difficult question to understand how the curves $A_{1}$ and $A_{1}'$ intersect on the Kummer surface $X={\rm Km}(B)$. For $k=1$ and $2$ we know that these curves intersect transversally in $4k+2$ points, and thus $g(\Gamma)=2k$. For $k=1$, it follows from the geometric description of the Jacobian Kummer surface as a double cover of the plane branched over $6$ lines. For $k=2$ it is a by-product of \cite{RRS}. In \cite[Section 5, pp. 54--56]{BOPY}, Bryan, Oberdieck, Pandharipande and Yin, quoting results of Graber, discuss a related problem about hyperelliptic curves on Abelian surfaces. Let $f:C\to B$ be a degree $1$ morphism from a hyperelliptic curve $C$ to an Abelian surface $B$ with image $\bar{C}$, such that the polarization $[\bar{C}]$ is generic. Let $\iota:C\to C$ be the hyperelliptic involution.
\begin{conjecture}
\label{conj:(see-)}(see \cite{BOPY}) Suppose that $B$ is generic among polarized Abelian surfaces. The differential of $f$ is injective at the Weierstrass points of $C$, and no non-Weierstrass point $p$ is such that $f(p)=f(\iota(p))$.
\end{conjecture}
In our situation, that Conjecture means that the rational curves $A_{1}$ and $A_{1}'$ meet transversally. Indeed, if they meet at a point tangentially with order $m\geq2$, then the curve above $A_{1}'$ has an $\mathfrak{a}_{m-1}$ singularity. If $m$ is even, there are no branch points above that singular point, and thus there are points $p,\iota(p)$ (with $p$ non-Weierstrass) which are mapped to the same point by $f$. If $m$ is odd and $>1$, then the curve $C$ above $A_{1}'$ has a singularity $\mathfrak{a}_{m-1}$ of type ``cusp'', and the differential of its normalization is $0$.
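Let us also spell out, for the reader's convenience, the arithmetic behind the equality $g(\Gamma)=2k$ in the transversal case: if all the intersection points of $A_{1}$ and $A_{1}'$ are nodes, then $\alpha_{1}=4k+2$ and $\alpha_{m}=0$ for $m\geq2$, so the genus formula in the proof of Proposition \ref{prop:The-curve-is hyperelliptic} gives
\[
2g(\Gamma)-2=-4+(4k+2)=4k-2,\qquad\text{i.e.}\qquad g(\Gamma)=2k,
\]
so the bound of that Proposition is attained.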
Construction of (nodal or smooth) rational curves on K3 surfaces is an important problem, see e.g. \cite[Chapter 13]{Huyb} for a discussion. The existence of two smooth rational curves $C_{1},C_{2}$ intersecting transversely and such that $C_{1}+C_{2}$ is a multiple $nH$ of a polarization $H$ is also a key point for obtaining the existence of an integer $n$ such that there exists an integral rational curve in $|nH|$, see \cite[Chapter 13, Theorem 1.1]{Huyb} and its proof. \sigmaubsection{Invariants of the bidouble covers associated to the special configuration\langlembdaabel{subsec:Invariants-of-the}} Let us define \[ D_{1}=A_{1}',\,D_{2}=A_{1},\,D_{3}=\sigmaum_{i=2}^{16}A_{j}. \] By Nikulin results, the divisors $\sigmaum_{i=2}^{16}A_{j}+A_{1}$ and $\sigmaum_{i=2}^{16}A_{j}+A_{1}'$ are $2$-divisible and therefore there exists $L_{1},L_{2},L_{3}$ such that \[ 2L_{i}=D_{j}+D_{k} \] for $\{i,j,k\}=\{1,2,3\}$. Each $L_{i}$ defines a double cover \[ \pii_{i}:\thetailde{B_{i}}\thetao X \] branched over $D_{j}+D_{k}$ (here $\thetailde{B}_{1}=\thetailde{B}$). For $i=1,2$, above the $16$ $(-2)$-curves of the branch locus of $\pii_{i}:\thetailde{B_{i}}\thetao X$ there are $16$ $(-1)$-curves. Let $\thetailde{B}_{i}\thetao B_{i}$ be the contraction map, so that the surface $B_{i}$ ($i=1,2$) is an Abelian surface. \\ The divisors $D_{i},L_{i},\,i\in\{1,2,3\}$ are the data of a bi-double cover \[ \pii:V\thetao X \] which is a $(\muathbb{Z}/2\muathbb{Z})^{2}$-Galois cover of $X$ branched over the curves $A_{1}',\,A_{i},\,i\gammaeq1$. By classical formulas, the surface $V$ has invariants \[ \betaegin{array}{c} \chi(O_{V})=4\cdot2+\varphirac{1}{2}\sigmaum L_{i}^{2}=k\\ K_{V}^{2}=(\sigmaum L_{i})^{2}=8k-30.\quad \varepsilonnd{array} \] The surface $V$ contains $30$ $(-1)$-curves, which are above the $15$ curves $A_{i},\,i>1$. The surface $V$ is smooth if and only if the intersection of $A_{1}$ and $A_{1}'$ is transverse, i.e. if Conjecture \ref{conj:(see-)} holds. Let us suppose that this is indeed the case, then one has moreover the formula \[ p_{g}(V)=p_{g}(X)+\sigmaum h^{0}(X,L_{i}). \] The space $H^{0}(X,L_{i})$ is $0$ for $i=1,2$ because the double covers branched over $D_{2}+D_{3}$ or $D_{1}+D_{3}$ are Abelian surfaces $B_{i}$ ($i=1,2$) and $1=p_{g}(B_{i})=p_{g}(X)+h^{0}(X,L_{i})\gammaeq1$. It remains to compute $h^{0}(X,L_{3})$. The divisor $L_{3}=A_{1}+A_{1}'$ is big and nef (see section \ref{sec1:Two-Nikulin-configurations}). By Riemann-Roch, one has \[ \chi(L_{3})=\varphirac{1}{2}L_{3}^{2}+2=k+2. \] By Serre duality and Mumford vanishing Theorem, $h^{1}(L_{3})=h^{1}(L_{3}^{-1})=0$. Moreover $h^{2}(L_{3})=h^{0}(-L_{3})=0$, thus $h^{0}(L_{3})=k+2$ and therefore $p_{g}(V)=k+3$. Let $V\thetao Z$ be the blow-down map of the $30$ $(-1)$-curves on $V$ which are above the $15$ $(-2)$-curves $A_{i},\,i>1$ in $X$. We thus obtain: \betaegin{prop} Suppose that $A_{1}$ and $A_{1}'$ intersect transversally. The surface $Z$ has general type and its invariants are \[ \chi=k,\,K_{Z}^{2}=8k,\,p_{g}(Z)=k+3,\thetaext{ and }q=4. \] \varepsilonnd{prop} The surface $Z$ is minimal as we see by using the rational map of $Z$ onto the Abelian surface $B_{1}$. \betaegin{rem} The surface $Z$ satisfies \[ c_{1}^{2}=2c_{2}=8k. \] Among surfaces with $c_{1}^{2}=2c_{2}$ there are surfaces whose universal covers is the bi-disk $\muathbb{H}\thetaimes\muathbb{H}$. For $k=1$, it turns out that $Z$ is the product of two genus $2$ curves, thus its universal cover is $\muathbb{H}\thetaimes\muathbb{H}$. 
For $k=2$, we obtain the so-called Schoen surfaces, whose universal cover is not $\muathbb{H}\thetaimes\muathbb{H}$ (see \cite{CMR}, \cite{RRS}). \varepsilonnd{rem} Let $(W,\omegamega)$ be a smooth projective algebraic variety of dimension $2n$ over $\muathbb{C}$ equipped with a holomorphic $(2,0)$-form of maximal rank $2n$. Let us recall that a $n$ dimensional subvariety $Z\sigmaubset W$ is called Lagrangian if the restriction of $\omegamega$ to $Z$ is trivial. We remark that \betaegin{prop} The surface $Z$ is a Lagrangian surface in $B_{1}\thetaimes B_{2}$. \varepsilonnd{prop} \betaegin{proof} In \cite{BT}, Bogomolov and Tschinkel associate a Lagrangian surface to the data of Kummer surfaces $S_{1}={\rm Km}(A_{1}),S_{2}={\rm Km}(A_{2})$ and a $K3$ surface $S$ such that there is a rational map $S\thetao S_{i},$ $i=1,2$. \\ In our situation, we take $S_{1}=S_{2}=S={\rm Km}(B)$, we consider the Kummer structure ${\rm Km}(B_{1})$ for $S_{1}$ and the Kummer structure ${\rm Km}(B_{2})$ (see also Remark \ref{rem:To-the-pair}) for $S_{2}$, and the identity map for $S\thetao S_{i}$. \\ According to \cite[Section 3]{BT}, the bi-double cover $Z$ is a sub-variety of $B_{1}\thetaimes B_{2}$ which is Lagrangian. \varepsilonnd{proof} Let us now discuss what is happens if we do not make assumption on the transversality of the intersection of $A_{1}$ and $A_{1}'$. Let us denote by $\muathbb{A}_{m}$ a surface singularity with germ \[ \{x^{m+1}=y^{2}+z^{2}\} \] and by $\mathfrak{a}_{m}$ a curve singularity with germ $\{x^{m+1}=y^{2}\}$. \\ Since $A_{1},A_{1}'$ are smooth, the singularities of $A_{1}+A_{1}'$ are of type $\mathfrak{a}_{2m-1}$, $m>0$. Let $s$ be a $\mathfrak{a}_{2m-1}$-singularity of $A_{1}+A_{1}'$. Recall that $\thetailde{B}_{1}$ is the cover of $X$ branched over $\sigmaum_{i=1}^{16}A_{i}$. The curve singularity above $s$ in $\pii_{1}^{*}A_{1}'\sigmaubset\thetailde{B}_{1}$ is a $\varphirak{a}_{m-1}$ singularity (see e.g. \cite[Table 1, P. 109]{BPVdV}). \\ Thus above the singularity $s$ of type $\mathfrak{a}_{2m-1}$ of $A_{1}+A_{1}'$, the surface $V$ has a singularity of type $\muathbb{A}_{m-1}$, (where in fact a $\muathbb{A}_{0}$ (resp. $\varphirak{a}_{0}$) point is a smooth point). \\ The singularities $\muathbb{A}_{m}$ are $ADE$ singularities and by the Theorem of Brieskorn on simultaneous resolution of singularities, they do not change the values of $K^{2}$ , $\chi$ and $p_{g}$ of the surface $\thetailde{V}$ which is the minimal resolution of $V$ (we consider the two successive double covers $V\thetao\thetailde{B}_{1}$ and $\thetailde{B}_{1}\thetao X$). \\ Thus the surface $Z$ obtained by taking the minimal desingularisation of $V$ and the contraction of the $30$ exceptional curves has the same invariants $\chi(Z)$, $K_{Z}^{2}$ and $p_{g}(Z)$ as if the intersection of $A_{1}$ and $A_{1}'$ was transverse. We observe that the image of the natural map $Z\thetao B_{1}\thetaimes B_{2}$ is also a Lagrangian surface by \cite[Section 3]{BT}. Let $\alpha_{m}$ be the number of $\varphirak{a}_{2m-1}$ singularities on $A_{1}+A_{1}'$. Using Miyaoka's bound on the number of quotient singularities on a surface of general type (here to be the surface $B_{3},$ the double cover of $X$ branched over $A_{1}+A_{1}'$), one gets: \[ \sigmaum(n-\varphirac{1}{n})\alpha_{n}\langlembdaeq\varphirac{4}{3}k. \] For $k=1$, a configuration of $6\varphirak{a}_{1}$ singularities on $A_{1}+A_{1}'$ is the only possibility. 
For $k=2$, the possibilities are \[ 10\varphirak{a}_{1},\,8\varphirak{a}_{1}+\varphirak{a}_{3},\,7\varphirak{a}_{1}+\varphirak{a}_{5}, \] but we know from explicit computations in \cite{RRS} that for a generic Abelian surface polarized by $M$ with $M^{2}=6$, the singularities of $A_{1}+A_{1}'$ are $10\varphirak{a}_{1}$. For $k=3$ the possibilities are \[ 14\varphirak{a}_{1},\,12\varphirak{a}_{1}+\varphirak{a}_{3},\,10\varphirak{a}_{1}+2\varphirak{a}_{3},\,11\varphirak{a}_{1}+\varphirak{a}_{5},\,10\varphirak{a}_{1}+\varphirak{a}_{7}. \] \sigmaubsection{The $H$-constant of the curve $\pirotect\Gamma$} Let $X$ be a surface, $\muathcal{P}$ be a non-empty finite set points on $X$ and let $\betaar{X}\thetao X$ be the blow-up of $X$ at $\muathcal{P}$. For a curve $C$ let $\betaar{C}_{\muathcal{P}}$ be the strict transform of $C$ on $\betaar{X}$. The $H$-constant of $C$ is defined by \[ H(C)=\inf_{\muathcal{P}}\varphirac{(\betaar{C}_{\muathcal{P}})^{2}}{\#\muathcal{P}} \] and the $H$-constant of $X$ is $H(X)=\inf_{C}H(C)$, where the infimum is taken over reduced curves. The $H$-constants have been introduced for studying the bounded negativity Conjecture, which predicts that there exists a bound $b_{X}$ such that for any reduced curve $C$ on $X$, one has $C^{2}\gammaeq b_{X}$. Let $A$ be the generic Abelian surface polarized by $M$ with $M^{2}=k(k+1)$ and let $\Gammaamma$ be the curve with a unique singularity which is of multiplicity $4k+2$ and is in the numerical equivalence class of $4M$. One computes immediately \[ H(\Gamma)=\Gamma^{2}-(4k+2)^{2}=-4. \] For the moment, one do not know curves on Abelian surfaces which have $H$-constants lower than $-4$. We use these curves in a more thorough study of curves with low $H$-constants in \cite{RLow}. \betaegin{thebibliography}{99} \betaibitem{NikiPezzo} Alexeev V., Nikulin V., Del Pezzo and K3 surfaces. MSJ Memoirs, 15. Mathematical Society of Japan, Tokyo, 2006. xvi+149 pp. ISBN: 4-931469-34-5 \betaibitem{AST} Artebani M, Sarti A., Taki S., K3 surfaces with non-symplectic automorphisms of prime order, Math. Z. (2011) 268 507--533 \betaibitem{BN} Barth W., Nieto I., Abelian surfaces of type $(1,3)$ and quartic surfaces with $16$ skew lines, J. Algebraic Geom. 3 (1994), no. 2, 173--222 \betaibitem{BPVdV} Barth W., Hulek K., Peters A.M., Van de Ven A., Compact complex surfaces, Second edition, Springer-Verlag, Berlin, 2004. xii+436 pp. \betaibitem{BT} Bogomolov F, Tschinkel Y., Lagrangian subvarieties of Abelian fourfolds, Asian J. Math. 4 (2000), no. 1, 19--36. \betaibitem{BOPY} Bryan J., Oberdieck G., R. Pandharipande R., Yin Q., Curve counting on abelian surfaces and threefolds, to appear in Algebr. Geom. \betaibitem{CMR} Ciliberto C., Mendes-Lopes M., Roulleau X., On Schoen surfaces, Comment. Math. Helv. 90 (2015), no. 1, 59--74 \betaibitem{GS2} Garbagnati A., Sarti A., On symplectic and non-symplectic automorphisms of K3 surfaces, Rev. Mat. Iberoam. 29 (2013), no. 1, 135--162 \betaibitem{GS} Garbagnati A., Sarti A., Kummer surfaces and K3 surfaces with $(\muathbb{Z}/2\muathbb{Z})^4$ symplectic action, Rocky Mountain J., 46 (2016), no. 4, 1141--1205 \betaibitem{GH} Gritsenko V., Hulek K., Minimal Siegel modular threefolds, Math. Proc. Cambridge Philos. Soc. 123 (1998), 461--485 \betaibitem{Huyb} Huybrechts D., Lectures on K3 surfaces, Cambridge Studies in Advanced Mathematics, 158, 2016. xi+485 pp \betaibitem{HLOY} Hosono S., Lian B.H., Oguiso K., Yau S.T., Kummer structures on a K3 surface - an old question of T. Shioda, Duke Math. J. 
120 (2003), no. 3, 635--647 \betaibitem{Keum} Keum J.H., Automorphisms of Jacobian Kummer surfaces, Compositio Mathematica 107: 269--288, 1997. \betaibitem{Kondo} Kondo S., The automorphism group of a generic Jacobian Kummer surface, J. Alg. Geom. 7 (1998) 589--609. \betaibitem{Lange} Lange H., Principal polarizations on products of elliptic curves, Contemp. Math., 397, Amer. Math. Soc., Providence, RI, 2006. \betaibitem{Mc} McMullen C., K3 surfaces, entropy and glue, J. Reine Angew. Math. 658 (2011), 1--25 \betaibitem{Morrison} Morrison D., On K3 surfaces with large Picard number, Invent. Math. 75 (1984), no. 1, 105--121. \betaibitem{NarNori} Narasimhan M.S., Nori M.V., Polarisations on an abelian variety, Proc. Ind. Acad. Sci. (Math), Volume 90, Number 2, April 1981, 125--128 \betaibitem{Nikulin} Nikulin V., Kummer surfaces, Izv. Akad. Nauk SSSR Ser. Mat. 39 (1975), 278--293. English translation: Math. USSR. Izv, 9 (1975), 261--275. \betaibitem{Orlov} Orlov D.O., On equivalences of derived categories of coherent sheaves on abelian varieties, Izv. Ross. Akad. Nauk Ser. Mat. 66 (2002), no. 3, 131--158; translation in Izv. Math. 66 (2002), no. 3, 569--594 \betaibitem{Polizzi} Polizzi F., Monodromy representations and surfaces with maximal Albanese dimension, Bollettino dell'Unione Matematica Italiana, 1-13, 2017 \betaibitem{Pene} Penegini M., The classification of isotrivially fibred surfaces with $p_g = q = 2$. Collect. Math., 62(3):239--274, 2011. With an appendix by S\"onke Rollenske \betaibitem{reid} Reid M., Chapters on algebraic surfaces, Complex algebraic geometry (Park City, UT, 1993), IAS/Park City Math. Ser., vol. 3, Amer. Math. Soc., Providence, RI, 1997, pp. 3--159. MR 1442522 (98d:14049) \betaibitem{RRS} Rito C., Roulleau X., Sarti A., On explicit Schoen surfaces, to appear in Algebr. Geom. \betaibitem{RoulleauIMRN} Roulleau X., Bounded negativity, Miyaoka-Sakai inequality and elliptic curve configurations, Int Math Res Notices (2017) 2017 (8): 2480--2496 \betaibitem{RLow} Roulleau X., Curves with low Harbourne constants on Kummer and Abelian surfaces, to appear in Rend. Circ. Mat. Palermo, II. Ser. \betaibitem{SD} Saint-Donat B., Projective models of K3 surfaces. Amer. J. Math. 96 (1974), 602--639 \betaibitem{SvG} Sarti A., van Geemen B., Nikulin involutions on K3 surfaces, Math. Z. 255 (2007), no. 4, 731--753. \betaibitem{Sh1} Shioda T., Some remarks on abelian varieties, J. Fac. Sci. Univ. Tokyo, Sect. IA, 24 (1977) 11--21. \varepsilonnd{thebibliography} \noindent Xavier Roulleau, \\Aix-Marseille Universit\'e, CNRS, Centrale Marseille, \\I2M UMR 7373, \\13453 Marseille, France \\ \varepsilonmail{[email protected]} \urladdr{http://www.i2m.univ-amu.fr/perso/xavier.roulleau} \\ \noindent {\rm Aut}hor{Alessandra Sarti} \\ \alphaddress{Laboratoire de Math\'ematiques et Applications, UMR CNRS 7348, \\ Universit\'e de Poitiers, T\'el\'eport 2,\\ Boulevard Marie et Pierre Curie, \\ 86962 FUTUROSCOPE CHASSENEUIL, France}\\ \varepsilonmail{[email protected]} \\ \urladdr{http://www-math.sp2mi.univ-poitiers.fr/~sarti/} \varepsilonnd{document}
\begin{document} \begin{CJK*}{UTF8}{gkai} \title[$L^\infty$ stability of Prandtl expansions] {On the $L^\infty$ stability of Prandtl expansions in Gevrey class} \author{Qi Chen} \address{School of Mathematical Science, Peking University, 100871, Beijing, P. R. China} \email{[email protected]} \author{Di Wu} \address{School of Mathematical Science, Peking University, 100871, Beijing, P. R. China} \email{[email protected]} \author{Zhifei Zhang} \address{School of Mathematical Science, Peking University, 100871, Beijing, P. R. China} \email{[email protected]} \date{\today} \maketitle \begin{abstract} In this paper, we prove the $L^\infty\cap L^2$ stability of Prandtl expansions of shear flow type as $\big(U(y/\sqrt{\nu}),0\big)$ for the initial perturbation in the Gevrey class, where $U(y)$ is a monotone and concave function and $\nu$ is the viscosity coefficient. To this end, we develop the direct resolvent estimate method for the linearized Orr-Sommerfeld operator instead of the Rayleigh-Airy iteration method. Our method could be used to the other relevant problems in the hydrodynamic stability. \end{abstract} \section{Introductions} In this paper, we study the incompressible Navier-Stokes equations in $\Omega:=\mathbb{T}\times\mathbb{R}_+$ when the viscosity coefficient $\nu$ tends to zero: \begin{align}\frak label{NS} \frak left\{ \begin{aligned} &\partial_t u^\nu+u^\nu\cdot\nabla u^\nu+\nabla p^\nu-\nu\Delta u^\nu=f^\nu\quad \text{in $[0,T]\times\Omega$},\\ &\nabla\cdot u^\nu=0\quad \text{in $[0,T]\times\Omega$},\\ &u^\nu|_{\partial\Omega}=0\quad\text{on $[0,T]\times\partial\Omega$},\\ &u^\nu(0)=u_0\quad\text{in $\Omega$}. \end{aligned} \right. \end{align} Here $u^\nu=\big(u^\nu_1, u^\nu_2\big)$ is the velocity field, $p^\nu$ is the pressure and $f^\nu$ is the external force. In the absence of the boundary, the solution $u^\nu$ of the Navier-Stokes equations converges to the solution $u^e$ of the Euler equations as $\nu \to 0$: \begin{align*} \pa_t u^{e}+ u^{e}\cdot\na u^{e}+\na p^{e}=0,\quad \na \cdot u^{e}=0. \end{align*} This limit has been justified in various functional settings \cite{Kato, Swann, BM, Mas, CW, AD, Mar}. In the presence of the boundary, the inviscid limit problem becomes more complicated due to the appearance of boundary layer. For the Navier-slip boundary condition, since the boundary layer is weak, the limit from the Navier-Stokes equations to the Euler equations was justified in {2-D} by Clopeau, Mikeli\'{c} and Robert \cite{CMR}, and in {3-D} by Iftimie and Planas \cite{IP}. See \cite{MR, IS, XX, WXZ} for more relevant results. For the nonslip boundary condition, the boundary layer is strong so that when $\nu \to 0$, the solution of \eqref{NS} formally behaves as \begin{eqnarray}\frak label{eq:Pran-exp} \frak left\{ \begin{array}{l} u^{\nu}_1(t,x,y)=u^{e}_1(t,x,y)+ u^{BL}\big(t,x,\f{y}{\sqrt{\nu}}\big)+O(\sqrt{\nu}),\\ u^{\nu}_2(t,x,y)=u^{e}_2(t,x,y)+\sqrt{\nu}v^{BL}\big(t,x,\f{y}{\sqrt{\nu}}\big)+O(\sqrt{\nu}), \end{array}\right. \end{eqnarray} where $(u^p,v^p)=\big(u^e_1(t,x,0)+u^{BL}(t,x,Y), \pa_yu^e_2(t,x,0)Y+v^{BL}(t,x,Y)\big)$ satisfies the Prandtl equation \begin{eqnarray}\frak label{equ:P} \frak left\{\begin{aligned} &\partial_t u^p+u^p\pa_x u^p+v^p\pa_Y u^p+\pa_xp^e|_{y=0}=\pa^2_{Y}u^p,\\ &\pa_xu^p+\pa_Y v^p=0,\\ &u^p|_{Y=0}=v^p|_{Y=0}=0,\quad\frak lim_{Y\rightarrow+\infty}u^p(t,x,Y)=u^e_1(t,x,0). \end{aligned}\right. 
\end{eqnarray} To our knowledge, the justification of the Prandtl expansion \eqref{eq:Pran-exp} is still a challenging problem except some special cases: the analytic data \cite{SC2}(see \cite{WWZ} for a new proof via direct energy method), and the initial vorticity vanishing near the boundary \cite{Mae, FTZ}. In addition, the convergence was justified in \cite{LMT, MT} when the domain and the initial data have a circular symmetry. Initiated by Kato \cite{Kato1}, there are many works devoted to the conditional convergence \cite{TW, Wang, Ke}. Let us mention some recent well-posedness results of the Prandtl equation \cite{SC1, XZ, AW, MW, GD, GM, CWZ, LY, DG}, which are a key step toward the inviscid limit problem. Recently, the stability of some special boundary layer solutions received a lot of attention. For example, Grenier studied the Prandtl expansion of shear type flow as \begin{eqnarray}\frak label{eq:Prandtl exp-shear} u^\nu_s=\big(U^e(t,y),0\big)+\Big(U^{BL}\big(t,\f y {\sqrt{\nu}}\big),0\Big). \end{eqnarray} When the shear flow $U^{BL}(0,Y)$ is linearly unstable for the Euler equations, he proved the instability of the expansion in the $H^1$ space by constructing the solution with the highly oscillating as $e^{i\al x/\sqrt{\nu}}$ and the growth as $e^{ct|n|}$ at the high frequency $n=\frac 1 {\sqrt{\nu}}\gg1 $. In a recent important work, Grenier and Nguyen \cite{GNg} proved the $L^\infty$ instability of the Prandtl expansion \eqref{eq:Prandtl exp-shear}. In another important work, Guo, Grenier and Nguyen proved that the shear flows which are linearly stable for the Euler equations could be linearly unstable for the Navier-Stokes equations when $\nu$ is very small, where they constructed the solution with the growth as $e^{c|n|^\frac 23t}$. Their result in particular implies that it is possible to prove the stability of monotone and concave shear flows in the Gevrey class $\frac 32$. In a remarkable work \cite{GMM}, Gerard-Varet, Masmoudi and Maekawa proved the stability of the Prandtl expansion \eqref{eq:Prandtl exp-shear} for the perturbations in the Gevrey class when $U^{BL}(t,Y)$ is a monotone and concave function. Roughly speaking, they showed that if the initial perturbation $a(x,y)$ satisfies $\|a\|_{G_\gamma}\frak le \nu^{\frac 12+\beta}$ for $\beta=\frac {2(1-\gamma)} \gamma$, where $G_\gamma$ is a norm of Gevrey class $\frac 1{\gamma}$ with $\gamma\in (0,1]$ depending on the profile $U^{BL}(t,Y)$, then \begin{eqnarray}o \sup_{t\in [0,T]}\big(\|v^\nu(t)\|_{L^2}+(\nu t)^\frac 12\|\na v^\nu(t)\|_{L^2}+(\nu t)^\frac 14\|v^\nu(t)\|_{L^\infty}\big)\frak le C\|a\|_{G_\gamma}, \end{eqnarray}o where $v^\nu(t,x,y)=u^\nu(t,x,y)-\big(U^e(t,y),0\big)+\big(U^{BL}\big(t,\frac y {\sqrt{\nu}}\big),0\big)$. The $L^\infty$ stability estimate, which is in fact an interpolation result between $L^2$ estimate and $H^1$ estimate, will blow up when $t\to 0$ due to the prefactor $t^{\frac 14}$. Their proof relies on the resolvent estimates for the linearized operator via the Rayleigh-Airy iteration method introduced in \cite{GGN}. For the steady Navier-Stokes equations, Gerard-Varet and Maekawa \cite{GMa} proved the stability of shear flows $\big(U\big(\frac y {\sqrt{\nu}}\big),0\big)$ for the external force $f^\nu$ in the Sobolev space, and Guo and Iyer \cite{GI} proved the stability of Blasius flows. Guo and Nguyen \cite{GN} also considered the Prandtl expansions of steady Navier–Stokes equations over a moving plate. The goal of this paper is twofold. 
First of all, we would like to prove the $L^\infty$ stability estimate of the Prandtl expansion \eqref{eq:Prandtl exp-shear} without the prefactor $t^\frac 14$, i.e., $\nu^\frac 14\|v^\nu(t)\|_{L^\infty}\le C\|a\|_{G_\gamma}$. Secondly, we would like to develop a direct resolvent estimate method for the linearized operator instead of the Rayleigh-Airy iteration method.

For simplicity, we take $U^e(t,y)\equiv 1$ and $U^{BL}(t,Y)=U^P(Y)-1$ in \eqref{eq:Prandtl exp-shear}, where $U^P=U^P(Y)$ is a scalar function on $\mathbb{R}_+$ satisfying
\[
\lim_{Y\to+\infty} U^P(Y)=1,\quad U^P(Y=0)=0.
\]
Taking the external force
\[
f^\nu=\Big(-\partial_Y^2 U^P\big(\frac{y}{\sqrt{\nu}}\big),0\Big),
\]
we may write the solution of \reff{NS} in the perturbation form
\[
u^\nu(t,x,y)=\Big(1+U^{BL}\big(\frac y {\sqrt{\nu}}\big),0\Big)+u(t,x,y)
\]
with
\begin{align}\label{NSP}
\left\{
\begin{aligned}
&\partial_t u-\nu\Delta u+u\cdot\nabla u+U^P\big(\frac{y}{\sqrt{\nu}}\big)\partial_x u+\nu^{-1/2}u_2\Big(\partial_Y U^P\big(\frac{y}{\sqrt{\nu}}\big),0\Big)+\nabla p=0,\\
&\nabla\cdot u=0,\\
&u|_{\partial\Omega}=0,\quad u(0)=a.
\end{aligned}
\right.
\end{align}
The system \reff{NSP} can be written as
\begin{align}\label{NSA}
\left\{
\begin{aligned}
&\partial_t u+\mathbb{A}_\nu u=-\mathbb{P}(u\cdot\nabla u),\\
&u|_{t=0}=a,
\end{aligned}
\right.
\end{align}
where $\mathbb{P}: L^2(\Omega)^2\to L^2_\sigma(\Omega)$ is the Helmholtz-Leray projection and
\begin{align*}
\mathbb{A}_\nu u=-\nu\Delta u+\mathbb{P}\Big(U^P\big(\frac{y}{\sqrt{\nu}}\big)\pa_xu+\nu^{-1/2}u_2\Big(\partial_Y U^P\big(\frac{y}{\sqrt{\nu}}\big),0\Big)\Big)
\end{align*}
with the domain
\[
D(\mathbb{A}_\nu)=W^{2,2}(\Omega)^2\cap W^{1,2}_0(\Omega)^2\cap L^2_\sigma(\Omega).
\]
Before stating the main result, we introduce some notations and functional spaces. Let
\begin{align*}
(\mathcal{P}_n f)(y)=f_n(y) e^{inx},\quad f_n(y)=\frac{1}{2\pi}\int_0^{2\pi}f(x,y)e^{-inx}dx,
\end{align*}
be the projection on the Fourier mode $n\in \mathbb Z$ in $x$. We introduce the following Gevrey classes. For $\gamma\in(0,1], d\geq0$ and $K>0$,
\begin{align*}
&X_{d,\gamma,K}:=\big\{f\in L^2_\sigma(\Omega):\|f\|_{X_{d,\gamma,K}}=\sup_{n\in\mathbb{Z}}(1+|n|^d)e^{K|n|^\gamma}\|\mathcal{P}_nf\|_{L^2(\Omega)}<+\infty\big\},\\
&X^{(1)}_{d,\gamma,K}:=\big\{f\in L^2_\sigma(\Omega):\|f\|_{X_{d,\gamma,K}^{(1)}}=\sup_{n\in\mathbb{Z}}(1+|n|^d)e^{K|n|^\gamma}\|\mathcal{P}_nf\|_{L^2_xH^1_y(\Omega)}<+\infty\big\}.
\end{align*}
When $\gamma=1$, the functions in $X_{d,\gamma, K}$ are analytic in $x$, where $K$ corresponds to the analytic width. Next we introduce the {\bf strongly concave (SC) condition} on shear flows:
\begin{enumerate}
\item $U(Y)\in BC^2(\mathbb{R}_+)$ with
\[
\|U\|:=\sum_{k=0,1,2}\sup_{Y\geq0}(1+Y)^k|\partial^k_YU(Y)|<\infty.
\]
\item $U|_{Y=0}=0,\,\,\lim_{Y\to+\infty}U(Y)=1$.
\item There exists $M>0$ such that $-M\partial_Y^2 U\geq(\partial_Y U)^2$ and $|\partial_Y^3U/\partial_Y^2 U|+|\partial_Y^2U/\partial_Y U|\leq M$ for any $Y\geq 0$.
\end{enumerate}
Now we state our main result as follows.
\begin{theorem}\label{main}
Assume that $U^P$ satisfies the (SC) condition.
For $\gamma\in[2/3,1)$, $d>5-3\gamma$ and $K>0$, there exist $C,\epsilon, T$ and $K'\in(0,K)$ such that for any sufficiently small $\nu>0$, if $\|a\|_{X^{(1)}_{d,\gamma,K}}\leq \epsilon\nu^{\frac{1}{2}+\beta}$ with $\beta=\max\Big\{\f {7(1-\gamma)} {8\gamma}+\f1 {8\gamma}+,\f{3} {16}+\frac{15(1-\gamma)}{16}\Big\},$ then the system \reff{NSP} admits a unique solution $u\in C([0,T];L^2_\sigma(\Omega))\cap L^2(0,T;W^{1,2}_0(\Omega))$ satisfying
\begin{align*}
&\sup_{0<t\leq T}\big(\|u(t)\|_{X_{d,\gamma,K'}}+\nu^{\frac{1}{4}}\|u(t)\|_{L^\infty(\Omega)}+(\nu t)^{\frac{1}{2}}\|\nabla u(t)\|_{L^2(\Omega)}\big)\leq C\|a\|_{X^{(1)}_{d,\gamma,K}}.
\end{align*}
\end{theorem}
\begin{remark}
Let us give several remarks on our result.
\begin{enumerate}
\item In Theorem \ref{main}, we obtain the $L^\infty$ stability of the Prandtl expansion in the Gevrey class, i.e.,
\begin{eqnarray*}
\|u(t)\|_{L^\infty}\le C\nu^{-\f14}\|a\|_{X^{(1)}_{d,\gamma,K}}\le C\nu^{\f14+\beta}.
\end{eqnarray*}
\item The smallness requirement on the initial perturbation should not be optimal. It is a very interesting problem to investigate whether the power $\f 12+\beta$ of $\nu$ can be improved to $\f12$.
\item Our method could be applied to the case when $U(Y)$ satisfies the weakly concave (WC) condition: $-M_\sigma\partial_Y^2 U\geq(\partial_Y U)^2$ for $Y\ge \sigma>0$, or when $U(t, Y)$ is time dependent and weakly concave.
\item In a recent remarkable work \cite{GMM1} by Gerard-Varet, Maekawa and Masmoudi, they show that even for general concave Prandtl profiles, the Prandtl expansion is stable (including in the $L^\infty$ sense) if the initial perturbation is smaller than $\nu^\f94$ in the Gevrey class $\f32$. The key difference is that for shear flows, one can obtain semigroup estimates global in time via the resolvent estimates, which should be important for studying the instability of the Prandtl expansion in Sobolev spaces.
\end{enumerate}
\end{remark}

\section{Sketch of the proof}

The roadmap of the proof of Theorem \ref{main} is similar to that of \cite{GMM}. We first focus on the linearized system of \reff{NSP} and obtain the corresponding semigroup estimates via the resolvent estimates for the linearized Orr-Sommerfeld operator. Then, using the Duhamel formula for the solution, we prove the nonlinear stability by combining the semigroup estimates with nonlinear estimates.

\subsection{Reformulation of the problem}

As in \cite{GMM}, it is more convenient to introduce the rescaled velocity
\begin{align}\label{eq:rescale}
& u(t,x,y)=v(\tau,X,Y),\quad (\tau,X,Y)=\Big(\frac{t}{\sqrt{\nu}},\frac{x}{\sqrt{\nu}},\frac{y}{\sqrt{\nu}}\Big).
\end{align}
If $u(t)=e^{-t\mathbb{A}_\nu}a$, then $v$ is the solution to the system
\begin{align}\label{eq:sl-ns}\left\{\begin{aligned}
&\partial_{\tau}v-\sqrt{\nu}\Delta_{X,Y} v+ \big(v_2\partial_YV,0\big) + V\partial_Xv+\nabla_{X,Y} q =0\quad \text{in}\,\,\Omega_{\nu},\\
&\text{div}_{X,Y}\ v=0,\ v|_{Y=0}=0,\ v|_{\tau=0}=a^{(\nu)},
\end{aligned}\right.
\end{align}
where $\Omega_\nu:=(\nu^{-1/2}\mathbb{T})\times\mathbb{R}_+$ and $a^{(\nu)}:=a(\nu^{1/2}X,\nu^{1/2}Y)$. So, $v$ and $\nabla q$ are $\frac{2\pi}{\sqrt{\nu}}$-periodic in $X$.
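For the reader's convenience, let us record the elementary bookkeeping behind \eqref{eq:sl-ns}; here we read the notation as $V(Y)=U^P(Y)$ and rescale the pressure by $p(t,x,y)=q(\tau,X,Y)$. Dropping the nonlinear term in \eqref{NSP} and multiplying the equation by $\sqrt{\nu}$, each term transforms as
\begin{align*}
&\sqrt{\nu}\,\partial_tu=\partial_{\tau}v,\qquad \sqrt{\nu}\cdot\nu\Delta_{x,y}u=\sqrt{\nu}\,\Delta_{X,Y}v,\qquad \sqrt{\nu}\,U^P\big(\tfrac{y}{\sqrt{\nu}}\big)\partial_xu=V\partial_Xv,\\
&\sqrt{\nu}\cdot\nu^{-1/2}u_2\,\partial_YU^P\big(\tfrac{y}{\sqrt{\nu}}\big)=v_2\partial_YV,\qquad \sqrt{\nu}\,\nabla_{x,y}p=\nabla_{X,Y}q,
\end{align*}
which yields exactly the linear system \eqref{eq:sl-ns}.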
We introduce the linearized operator
\begin{eqnarray}\label{rescale-operator}
\mathbb{L}_\nu v=-\sqrt{\nu}\mathbb{P}_\nu\Delta v+\mathbb{P}_\nu\big(V\partial_X v+v_2(\partial_Y V,0)\big)
\end{eqnarray}
with $D(\mathbb{L}_\nu)=W^{2,2}(\Omega_\nu)\cap W^{1,2}_0(\Omega_\nu)\cap L^2_\sigma(\Omega_\nu)$, where $\mathbb{P}_\nu$ is the rescaled Helmholtz-Leray projection. To establish the resolvent estimates of $\mathbb{L}_\nu$, we consider the following resolvent problem for $\mu\in \mathbb{C}$:
\begin{align}\label{eq:sc-res}\left\{\begin{aligned}
&\mu v-\sqrt{\nu}\Delta_{X,Y} v + \big(v_2\partial_YV,0\big) + V\partial_Xv+\nabla_{X,Y} q =f,\quad Y\geq 0,\\
&\text{div}_{X,Y}\ v=0,\ v|_{Y=0}=0.
\end{aligned}\right.
\end{align}
Let $w=\partial_Xv_{2}-\partial_Yv_{1}$ be the vorticity field of $v$. A direct computation gives
\begin{align*}
&\mu w-\sqrt{\nu}\Delta w-v_2\partial_Y^2V+V\partial_Xw=\partial_Xf_2-\partial_Yf_{1}.
\end{align*}
The corresponding stream function, denoted by $\phi$, solves
\begin{align*}
&\Delta\phi=w,\quad \phi|_{Y=0}=0.
\end{align*}
Then $v=(-\partial_Y\phi,\partial_X\phi)$. Since the shear flow is independent of $x$, it is natural to study the problem on each Fourier mode with respect to the $x$ variable. So, we introduce
\begin{align*}
(\mathcal{P}_{\nu,n}f)(Y)=f_n(Y)e^{\mathrm{i}n\sqrt{\nu}X},\quad f_n(Y)=\frac{\sqrt{\nu}}{2\pi}\int_0^{\frac{2\pi}{\sqrt{\nu}}}f(X,Y)e^{-\mathrm{i}n\sqrt{\nu}X}\mathrm{d}X.
\end{align*}
Let $\phi_n(Y)$ be the $n\sqrt{\nu}$ Fourier mode of $\phi(X,Y)$. Then $\phi_n$ solves the following system
\begin{align}\label{eq:sc-resphi}\left\{\begin{aligned}
&\mu (\partial_Y^2-n^2\nu)\phi_n-\sqrt{\nu}(\partial_Y^2-n^2\nu)^2 \phi_n - \mathrm{i}n\sqrt{\nu}\phi_n(\partial_Y^2V) + \mathrm{i}n\sqrt{\nu}V(\partial_Y^2-n^2\nu)\phi_n\\
&\quad=\mathrm{i}n\sqrt{\nu}f_{2,n}-\partial_Yf_{1,n},\\
&\phi_n|_{Y=0}=\partial_Y\phi_n|_{Y=0}=0.
\end{aligned}\right.
\end{align}
Let
\begin{eqnarray*}
\lambda:= \dfrac{\mathrm{i}\mu}{|n|\sqrt{\nu}},\quad \alpha:=\nu^{1/2}|n|.
\end{eqnarray*}
Removing the subscript $n$ for $(\phi_n, f_{i,n})$, and noting that $\mu=-\mathrm{i}\alpha\lambda$ and $n\sqrt{\nu}=\alpha$ for $n>0$, we obtain the Orr-Sommerfeld equation
\begin{align}\label{eq:sc-resphi1}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)^2 \phi + \mathrm{i}\alpha\big((V-\lambda)(\partial_Y^2-\alpha^2)\phi- (\partial_Y^2V)\phi\big) =\mathrm{i}\alpha f_{2}-\partial_Yf_{1},\\
&\phi|_{Y=0}=\partial_Y\phi|_{Y=0}=0.
\end{aligned}\right.
\end{align}
Thus, the problem is reduced to solving the Orr-Sommerfeld equation.

\subsection{Resolvent estimates}

We denote by $\mathbb{L}_{\nu,n}$ the restriction of $\mathbb{L}_\nu$ on the subspace $\mathcal{P}_{\nu, n}L^2_\sigma(\Omega_\nu)$. The resolvent estimates for $(\mu-\mathbb{L}_{\nu,n})^{-1}$ have thus been reduced to solving the Orr-Sommerfeld equation. In \cite{GMM}, Gerard-Varet, Masmoudi and Maekawa solve the Orr-Sommerfeld equation by developing the Rayleigh-Airy iteration method introduced in \cite{GGN}. Motivated by our work \cite{CLWZ} on the stability of Couette flow, we develop a direct energy method to solve the Orr-Sommerfeld equation for general shear flows, which may be of independent interest and could be applied to related problems in hydrodynamic stability. Our method is composed of the following two key steps.
\noindent{\bf Step 1.} Solving the Orr-Sommerfeld equation with the Navier-slip boundary condition:
\begin{align*}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w + \mathrm{i}\alpha\big((V-\lambda)w-(\partial_Y^2V)\phi\big)=F,\\
&(\partial_Y^2-\alpha^2)\phi=w,\ w|_{Y=0}=\phi|_{Y=0}=0.
\end{aligned}\right.
\end{align*}
We first decompose the solution $w$ as $w=w_1+w_2$, where
\begin{align*}\left\{
\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w_1+\mathrm{i}\alpha \big(\partial_Y\big((V-\lambda)\phi'_1\big)-(V-\lambda)\alpha^2\phi_1-V''\phi_1\big) =F,\\
&(\partial_Y^2-\alpha^2)\phi_1=w_1,\quad w_1|_{Y=0}=\phi_1|_{Y=0}=0,
\end{aligned}\right.\end{align*}
and
\begin{align*}\left\{
\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w_2+\mathrm{i}\alpha\big((V-\lambda)w_2 -V''\phi_2\big) =V'h,\\
&(\partial_Y^2-\alpha^2)\phi_2=w_2,\quad w_2|_{Y=0}=\phi_2|_{Y=0}=0,
\end{aligned}\right.\end{align*}
with $h=\mathrm{i}\alpha\partial_Y\phi_1$. Thanks to the good boundary condition and the good structure of the equation for $w_1$, the estimate for $w_1$ is obtained directly by taking the $L^2$ inner product of the equation for $w_1$ with $\phi_1$. The estimates for $w_2$ are based on two tricks. First, by taking the $L^2$ inner product of the equation for $w_2$ with $w_2/(V''-\varsigma)$ (where $\varsigma$ is a small positive constant), we can prove the following weighted estimate for $w_2$:
\begin{align*}
& \sqrt{\nu} \left\|\dfrac{(\partial_Yw_2,\alpha w_2)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2 +\alpha\lambda_i \left\|\dfrac{w_2}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2\leq C(\alpha\lambda_i)^{-1}\|h\|_{L^2}^2 +C\varsigma(\alpha\lambda_i)^{-1}\|\alpha\phi_2\|_{L^2}^2.
\end{align*}
Second, to estimate the stream function $\phi_2$, we use Rayleigh's trick. To this end, we consider the following inhomogeneous Rayleigh equation
\begin{eqnarray*}
\textsl{R}\phi:= (V-\lambda)(\partial_Y^2-\alpha^2)\phi-V''\phi=V'h_1+\partial_Yh_2+\mathrm{i}\alpha h_3,\quad \phi(0)=0.
\end{eqnarray*}
Using Rayleigh's trick (Lemma \ref{lem:GMMray1}), we can show that
\begin{align*}
& \|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C\big(\lambda_i^{-1}\|h_1\|_{L^2} +\lambda_i^{-2}\|(h_2,h_3)\|_{L^2}\big).
\end{align*}
Then we rewrite the equation of $w_2$ as
\begin{align*}
&\mathrm{i}\alpha(\textsl{R}\phi_2) =\sqrt{\nu}(\partial_Y^2-\alpha^2)w_2+V'h.
\end{align*}
Applying the above estimate with $h_1=h/(\mathrm{i}\alpha),\ h_2=\sqrt{\nu}\partial_Yw_2/(\mathrm{i}\alpha),\ h_3=\sqrt{\nu} w_2$, we get
\begin{align*}
\|(\partial_Y\phi_2,\alpha\phi_2)\|_{L^2} \leq &C(\alpha\lambda_i)^{-1}\|h\|_{L^2}+ C\nu^{\f12}\lambda_i^{-2}\alpha^{-1}\left\|\dfrac{(\partial_Yw_2,\alpha w_2)}{|V''-\varsigma|^{\f12}}\right\|_{L^2},
\end{align*}
which, along with the weighted estimate on $w_2$, shows that
\begin{align*}
& \alpha\lambda_i\|(\partial_Y\phi_2,\alpha\phi_2)\|_{L^2}\leq C\|h\|_{L^2},\\
&\nu^{\f14}(\alpha\lambda_i)^{\f12}\left\|(\partial_Yw_2,\alpha w_2)/V'\right\|_{L^2}+\alpha\lambda_i\left\|w_2/V'\right\|_{L^2}\leq C\|h\|_{L^2}.
\end{align*}
\noindent{\bf Step 2.} Boundary layer corrector.
To match the no-slip boundary condition, we need to introduce the boundary layer corrector:
\begin{align*}
\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)W_b+\mathrm{i}\alpha\big((V-\lambda)W_b- V''\Phi_b\big)=0,\\
&(\partial_Y^2-\alpha^2)\Phi_b=W_b,\\
&\Phi_b|_{Y=0}=0,\ \partial_Y\Phi_b|_{Y=0}=1.
\end{aligned}\right.
\end{align*}
It is not easy to solve the above system directly. Instead, we solve the following system
\begin{align*}
\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)W+\mathrm{i}\alpha\big((V-\lambda)W- V''\Phi\big)=0,\\
&(\partial_Y^2-\alpha^2)\Phi=W,\\
&\Phi|_{Y=0}=0,\ \partial_Y^2\Phi|_{Y=0}=1.
\end{aligned}\right.
\end{align*}
The advantage is that the solution $W$ of this system can be well approximated by the following scaled Airy function:
\begin{align*}
& W_a (Y)= Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}(Y+d)\big)/ Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}d\big),
\end{align*}
where $d=-\lambda_\nu/V'(0)$ with $\lambda_\nu=\lambda+\mathrm{i}\sqrt{\nu}\alpha$. Further, we observe that $\partial_Y\Phi_a(0)\neq 0$. Next we need to solve for the error $W_e=W-W_a$, which satisfies
\begin{align}\nonumber\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)W_e+\mathrm{i}\alpha\big((V-\lambda)W_e-V''\Phi_e\big) =-\mathrm{i}\alpha\big(V-V'(0)Y\big)W_a+\mathrm{i}\alpha V''\Phi_a,\\
&(\partial_Y^2-\alpha^2)\Phi_e=W_e,\quad \Phi_e(0)=W_e(0)=0.
\end{aligned}\right.
\end{align}
Since $W_e$ satisfies the Navier-slip boundary condition, we can use the resolvent estimates established in Step 1 to obtain various estimates of $W_e$. In particular, we have $\partial_Y\Phi(0)\neq 0$. Thus, $W_b(Y)=W(Y)/\partial_Y\Phi(0)$ is well-defined. Finally, the solution $w$ of the Orr-Sommerfeld equation with the no-slip boundary condition is given by
\begin{eqnarray*}
w(Y)=w_{Na}(Y)-\partial_Y\phi_{Na}(0)W_b(Y),
\end{eqnarray*}
where $(w_{Na}, \phi_{Na})$ is the solution of the Orr-Sommerfeld equation with the Navier-slip boundary condition. We refer to Section 3 for more details.

\subsection{Semigroup estimates}

Following ideas similar to \cite{GMM}, we pass from the resolvent estimates to the $L^2-L^2$ and $L^2-H^1$ semigroup estimates, based on the formula
\begin{align}\nonumber
e^{-\tau\mathbb{L}_{\nu,n}}=\frac{1}{2\pi\mathrm{i}}\int_{\Gamma}e^{\tau\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}\mathrm{d}\mu
\end{align}
with a suitable contour $\Gamma$ which lies in the resolvent set of $-\mathbb{L}_{\nu,n}$. To obtain the $L^\infty$ stability result, we need to establish two new kinds of semigroup estimates: (1) $L^2-L^\infty$ semigroup estimates, which will be used to control the inhomogeneous part of the solution of the nonlinear system; (2) $H^1-L^\infty$ semigroup estimates, which will be used to control the homogeneous part of the solution. Let us point out that our $L^\infty$ semigroup estimates are not a simple consequence of interpolation between the $L^2$ estimate and the $H^1$ estimate; the new weighted $H^1$ resolvent estimate established in Section 3 plays an important role. We refer to Section 4 for more details.

\subsection{Nonlinear stability}

We denote by $\mathbb{A}_{\nu,n}$ the restriction of $\mathbb{A}_\nu$ to the subspace $\mathcal{P}_nL^2_\sigma(\Omega)$.
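For orientation, let us note how $\mathbb{A}_\nu$ is related to the rescaled operator $\mathbb{L}_\nu$ defined in \reff{rescale-operator}: by \reff{eq:rescale} and \reff{eq:sl-ns}, schematically,
\begin{align*}
\big(e^{-t\mathbb{A}_{\nu}}a\big)(x,y)=\big(e^{-\frac{t}{\sqrt{\nu}}\mathbb{L}_{\nu}}a^{(\nu)}\big)\Big(\frac{x}{\sqrt{\nu}},\frac{y}{\sqrt{\nu}}\Big),
\end{align*}
so semigroup estimates for $e^{-\tau\mathbb{L}_{\nu,n}}$ yield corresponding estimates for $e^{-t\mathbb{A}_{\nu,n}}$ after undoing the rescaling.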
By the Duhamel formula, the solution $u(t)$ of \reff{NSA} can be written in the following integral form with respect to each Fourier mode $n$:
\begin{align}
\mathcal{P}_n u(t)=e^{-t\mathbb{A}_{\nu,n}}\mathcal{P}_n a-\int_0^te^{-(t-s)\mathbb{A}_{\nu,n}}\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)(s)ds.
\end{align}
We introduce the functional space of Gevrey type:
\begin{eqnarray*}
\begin{split}
Z_{\gamma,K,T}=&\Big\{f\in C([0,T];L^2_\sigma(\Omega)):\|f\|_{Z_{\gamma,K,T}}:=\sup_{0<t\leq T}\big(\|f(t)\|_{X_{d,\gamma,K(t)}}\\
&\quad+\nu^{\f14}\|f(t)\|_{Y_{d,\gamma,K(t)}}+(\nu t)^{\f12}\|\nabla f(t)\|_{X_{d,\gamma,K(t)}}\big)<+\infty\Big\},
\end{split}
\end{eqnarray*}
where $K(t)=K-2\delta^{-1}t$ and
\begin{align*}
Y_{d,\gamma,K(t)}=\Big\{f\in L^2_\sigma(\Omega): \|f\|_{Y_{d,\gamma,K(t)}}=\sup_{n\in\mathbb{Z}}(1+|n|^d)e^{K(t)\langle n\rangle^{\gamma}}\|\mathcal{P}_n f\|_{L^2_xL^\infty_y(\Omega)}<+\infty\Big\}.
\end{align*}
Based on the semigroup estimates of $e^{-(t-s)\mathbb{A}_{\nu,n}}$ and the nonlinear estimates, we can show that
\begin{align*}
\|u\|_{Z_{\gamma,K,T}}\leq C\Big(\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\frac{1}{2}-\beta}\|u\|^2_{Z_{\gamma,K,T}}\Big).
\end{align*}
This implies our result. We refer to Section 5 for more details.

\section{Resolvent estimates in middle frequency}

For low frequency $|n|\le O(1)$ and high frequency $|n|\ge O(\nu^{-\f34})$, the semigroup estimates of $e^{-t\mathbb{L}_{\nu,n}}$ can be proved by a simple energy method. For the middle range of frequency $\mathcal{O}(1)\leq |n|\leq \mathcal{O}(\nu^{-3/4})$, we need to derive the semigroup estimates via the resolvent estimates of $\mathbb{L}_{\nu,n}$.

\subsection{Key resolvent estimates}

We only consider the case of middle frequency:
\[
\delta^{-1}_0\leq |n|\leq \delta_0^{-1}\nu^{-3/4},\quad\text{where}\quad \delta_0=\frac{1}{2(1+\|V\|)}.
\]
Recall that $\lambda= \dfrac{\mathrm{i}\mu}{|n|\sqrt{\nu}}$ and $\alpha=\nu^{1/2}|n|$. Therefore, $\delta^{-1}_0\nu^{1/2}\leq\alpha\leq\delta^{-1}_0\nu^{-1/4}$. Moreover, we consider the case
\begin{equation}\label{Remu}
\mathbf{Re}\mu=\alpha\mathbf{Im}\lambda\geq \frac{\nu^{\f12} |n|^\gamma}{\delta}
\end{equation}
for some $\gamma\in[0,1]$ and for a sufficiently small but fixed positive number $\delta$. We denote $\lambda_r=\mathbf{Re}\lambda$, $\lambda_i=\mathbf{Im}\lambda$. We introduce the weight function $\rho(Y)$ defined by
\begin{align}\label{rho-lambda}
\rho(Y)=\left\{\begin{aligned}
&\big(|n|^{\gamma-\f23}/\delta\big)^{\f32} Y,\qquad \text{if}\,\,\, 0\leq Y\leq \big(|n|^{\gamma-\f23}/\delta\big)^{-\f32},\\
&1,\qquad\qquad\quad \text{if}\,\,\,Y\geq \big(|n|^{\gamma-\f23}/\delta\big)^{-\f32}.
\end{aligned}\right.
\end{align}
Our key resolvent estimates are stated as follows.

\begin{theorem}\label{main-resolvent}
Assume that the $(SC)$ condition holds and $\delta_0^{-1}\leq |n|\leq \delta_0^{-1}\nu^{-\f34}$. Then there exist $\delta_1,\delta_2,\delta_*\in(0,1)$ satisfying $\delta_1,\delta_2\leq \delta_0$ and $\delta_*\leq \min\{\delta_1,\delta_2\}$ such that, whenever \reff{Remu} holds for some $\delta\in(0,\delta_*]$, the following statements hold true.
\begin{enumerate}
\item Let $n\in\mathbb{Z}$ and $\gamma\in[0,1]$.
Then there exists $\theta\in(\frac{\pi}{2},\pi)$ such that the set
\begin{align}\label{S-mun}
S_{\nu,n}(\theta)=\big\{\mu\in\mathbb{C}\,\big|\,|\mathbf{Im}\mu|\geq (\tan\theta)\mathbf{Re}\mu+\delta_1^{-1}(\nu^{\f12}|n|+|\tan\theta||n|^\gamma\nu^{\f12}),\, |\mu|\geq \delta^{-1}_1\nu^{\f12}|n|\big\}
\end{align}
is contained in the resolvent set of $-\mathbb{L}_{\nu,n}$ and
\begin{align}\label{mularge}
&\|(\mu+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_{\nu})}\leq\frac{C}{|\mu|}\|f\|_{L^2(\Omega_\nu)},\\
\label{mularge-nabla}
&\|\nabla(\mu+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_{\nu})}\leq\frac{C}{\nu^{\f14}|\mu|^{\f12}}\|f\|_{L^2(\Omega_\nu)}
\end{align}
for all $\mu\in S_{\nu,n}(\theta)$ and $f\in\mathcal{P}_{\nu,n}L^2_\sigma(\Omega_\nu)$.
\item If $|n|\geq \delta_0^{-1}$ and $\mathbf{Re}\mu+n^2\nu^{\f32}\geq\delta^{-1}_2$, then $\mu$ belongs to the resolvent set of $-\mathbb{L}_{\nu,n}$ and the following estimates hold: for all $f\in\mathcal{P}_{\nu,n}L^2_\sigma(\Omega_\nu)$,
\begin{align}\label{Immularge}
&\|(\mu+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_{\nu})}\leq\frac{C}{\mathbf{Re}\mu}\|f\|_{L^2(\Omega_\nu)},\\
\label{Immularge-na}
&\|\nabla(\mu+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_{\nu})}\leq\frac{C}{\nu^{\f14}(\mathbf{Re}\mu)^{\f12}}\|f\|_{L^2(\Omega_\nu)}.
\end{align}
\item Let $\gamma\in[\f23,1]$. Then the set
\begin{align}
O_{\nu,n}:=\Big\{\mu\in\mathbb{C}\,\Big|\,|\mu|\leq \delta^{-1}_1|n|\nu^{\f12},\quad\mathbf{Re}\mu\geq\frac{|n|^\gamma\nu^{\f12}}{\delta}\Big\}
\end{align}
is included in the resolvent set of $-\mathbb{L}_{\nu,n}$. Moreover, if $\mu\in O_{\nu,n}$ satisfies $\mathbf{Re}\mu=\frac{|n|^\gamma\nu^{\f12}}{\delta}$ and $\mathbf{Re}\mu+n^2\nu^{\f32}\leq\delta_2^{-1}$, then
\begin{align}\label{musmall}
&\|(\mu+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_{\nu})}\leq\frac{Cn^{1-\gamma}}{\mathbf{Re}\mu}\|f\|_{L^2(\Omega_\nu)},\\
&\|\nabla(\mu+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_{\nu})}\leq\frac{C|n|^{\f12+\f14(1-\gamma)}}{\mathbf{Re}\mu}\|f\|_{L^2(\Omega_\nu)},\\
\label{musmall-wegihted}
&\|\rho^{\f12}(\mathrm{curl}(\mu+\mathbb{L}_{\nu,n})^{-1}f)\|_{L^2(\Omega_{\nu})}\leq \frac{C}{\nu^{\f14}(\mathbf{Re}\mu)^{\f12}}\|f\|_{L^2(\Omega_{\nu})}.
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
We point out that \reff{Immularge}-\reff{musmall-wegihted} can be deduced directly from Propositions \ref{prop-Immu-large} and \ref{Pro:resdrsmall}, respectively. Hence, we only prove the first statement of Theorem \ref{main-resolvent}. Let $\delta_1\leq \delta_0$ be the small constant in Proposition \ref{prop-lambda-large}. Since $\mu=-\mathrm{i}\sqrt{\nu}|n|\lambda$, we notice that $\lambda$ satisfies the condition of Proposition \ref{prop-lambda-large} if $|\mu|\geq \delta^{-1}_1\alpha$ and $\mathbf{Re}\mu=\alpha\mathbf{Im}\lambda\geq \delta_1^{-1}|n|^\gamma\nu^{\f12}$. We obtain that such $\mu$ belongs to the resolvent set of $-\mathbb{L}_{\nu,n}$ in $\mathcal{P}_{\nu,n}L^2_\sigma(\Omega_\nu)$. Moreover, for such $\mu$ we have
\begin{align*}
\|(\mu+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_\nu)}\leq \frac{C}{|\mu|}\|f\|_{L^2(\Omega_{\nu})},
\end{align*}
which implies that the ball $B_{r_\mu}(\mu):=\{\eta\in\mathbb{C}\,|\,|\eta-\mu|\leq r_\mu\}$ with $r_\mu=\frac{|\mu|}{2C}$ belongs to the resolvent set of $-\mathbb{L}_{\nu,n}$ and for any $\eta\in B_{r_\mu}(\mu)$,
\begin{align*}
\|(\eta+\mathbb{L}_{\nu,n})^{-1}f\|_{L^2(\Omega_\nu)}\leq\frac{C}{|\mu|}\|f\|_{L^2(\Omega_\nu)}.
\end{align*}
Hence, by taking $\theta=\frac{\pi}{2}+\theta_0$ with $\theta_0=\frac{1}{2C}$, we obtain
\begin{align*}
S_{\nu,n}(\theta)\subset\cup_{\mu\in E_{\nu,n}} B_{r_\mu}(\mu)\subset\rho(-\mathbb{L}_{\nu,n}),
\end{align*}
where
\begin{align*}
E_{\nu,n}=\big\{\mu\in\mathbb{C}\,\big|\,\mathbf{Re}\mu\geq \delta_1^{-1}|n|^\gamma\nu^{\f12}, |\mu|\geq\delta_1^{-1}\alpha\big\}.
\end{align*}
This completes the proof of \reff{S-mun} and \reff{mularge}. Similarly, we can obtain \reff{mularge-nabla}.
\end{proof}

In the sequel, we always assume $\delta_0^{-1}\leq |n|\leq \delta_{0}^{-1}\nu^{-\f34}$, that is, $\delta_0^{-1}\nu^{\f12}\leq \alpha\leq \delta_{0}^{-1}\nu^{-\f14}$, and we assume that $n>0$ for convenience.

\subsection{Resolvent estimates when $\lambda$ is far away from the origin}

In this part, we deal with the case when $|\lambda|$ is large or $\mathbf{Im}\lambda$ is large. We first notice that \reff{eq:sc-resphi1} can be written as
\begin{align}\label{eq-modify}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)\partial_Y^2 \phi + \mathrm{i}\alpha\big((V-\lambda_\nu)(\partial_Y^2-\alpha^2)\phi- (\partial_Y^2V)\phi\big) =\mathrm{i}\alpha f_{2}-\partial_Yf_{1},\\
&\phi|_{Y=0}=\partial_Y\phi|_{Y=0}=0,
\end{aligned}\right.
\end{align}
where $\lambda_\nu=\lambda+\mathrm{i}\sqrt{\nu}\alpha$. The following proposition gives the resolvent estimates when $|\lambda|$ is large.

\begin{proposition}\label{prop-lambda-large}
There exists $\delta_1\in(0,\delta_0]$ such that the following statements hold. Let $|\lambda|\geq \delta_1^{-1}$ and $n\in\mathbb{N}$. Suppose that \reff{Remu} holds for some $\gamma\in[0,1]$ and $\delta\in(0,\delta_1]$. Then for any $f=(f_1,f_2)\in L^2(\mathbb{R}_+)^2$, the weak solution $\phi\in H^2_0(\mathbb{R}_+)$ to \reff{eq:sc-resphi1} satisfies
\begin{align}
&\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq \frac{C}{|\alpha\lambda|}\|f\|_{L^2},\label{lambda-large-L2}\\
&\|(\partial_Y^2-\alpha^2)\phi\|_{L^2}\leq \frac{C}{\nu^{1/4}|\alpha\lambda|^{1/2}}\|f\|_{L^2},\label{lambda-large-Linfinity}
\end{align}
where $C$ is a constant only depending on $\|V\|$.
\end{proposition}
\begin{proof}
Let $f=(f_1,f_2)\in L^2(\mathbb{R}_+)^2$ and let $\phi$ be the unique weak solution to \reff{eq:sc-resphi1}. Then, by the previous argument, $\phi$ also satisfies \reff{eq-modify}. Assume that $|\lambda|\geq\delta_1^{-1}$ with $\delta_1:=(32(1+\|U\|))^{-1}<\delta_0$.
Since $|\mu|=\alpha|\lambda|$ and $\mathbf{Im}\lambda_\nu=\mathbf{Im}\lambda+\nu n\geq \mathbf{Im}\lambda>0$, we have $|\lambda_\nu|\geq|\lambda|\ge\delta_1^{-1}$, which implies
\begin{align}
\frac{|\lambda|}{2}\leq |V-\lambda_\nu|\leq 2|\lambda|.\label{eq:V-lam-est}
\end{align}
Moreover, we have
\begin{eqnarray}
n\mathbf{Im}\lambda_\nu\ge n\mathbf{Im}\lambda\ge \delta^{-1}n^\gamma\gg 1.\label{eq:imlam-est}
\end{eqnarray}
Multiplying both sides of the first equation of \reff{eq-modify} by $(V-\lambda_\nu)^{-1}\bar{\phi}$ and integrating by parts, we obtain
\begin{eqnarray}\label{prop-proof-energy}
\begin{split}
\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2\leq& \mathbf{Re}\int_0^{+\infty}\frac{\mathrm{i}\alpha f_2-\partial_Y f_1}{-\mathrm{i}\alpha(V-\lambda_\nu)}\bar{\phi}\mathrm{d}Y+\int_0^{+\infty}\frac{|\partial_Y^2 V|}{|V-\lambda_\nu|}|\phi|^2\mathrm{d}Y\\
&+\frac{6}{n\mathbf{Im}\lambda_\nu}\int_0^{+\infty}\frac{|\partial_YV|^2}{|V-\lambda_\nu|^2}|\partial_Y\phi|^2+\frac{|\partial_Y^2V|^2}{|V-\lambda_\nu|^2}|\phi|^2 \mathrm{d}Y\\
&+\frac{6}{n\mathbf{Im}\lambda_\nu}\int_0^{+\infty}\Big(\frac{|\partial_Y V|^4}{|V-\lambda_\nu|^4}+\alpha^2\frac{|\partial_Y V|^2}{|V-\lambda_\nu|^2}\Big)|\phi|^2\mathrm{d}Y.
\end{split}
\end{eqnarray}
We first notice that
\begin{align*}
\Big|\mathbf{Re}\int_0^{+\infty}\frac{\mathrm{i}\alpha f_2-\partial_Y f_1}{-\mathrm{i}\alpha(V-\lambda_\nu)}\bar{\phi}\mathrm{d}Y\Big|\leq&\f14\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+C\Big\|\frac{f}{\alpha(V-\lambda_\nu)}\Big\|^2_{L^2}\\
&+C\|V\|\Big\|\frac{f}{\alpha(V-\lambda_\nu)^2}\Big\|^2_{L^2}\\
\le& \f14\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+\f C {|\lambda|^2\alpha^2}\|f\|_{L^2}^2,
\end{align*}
where we used \eqref{eq:V-lam-est} in the last step. By \eqref{eq:imlam-est}, we have
\begin{align*}
\frac{1}{n\mathbf{Im}\lambda_\nu}\int_0^\infty\frac{|\partial_Y V|^4}{|V-\lambda_\nu|^4}|\phi|^2\mathrm{d}Y\leq& C\frac{\delta_1^4\|V\|^2}{n\mathbf{Im}\lambda_\nu}\int_0^{+\infty}\|V\|^2(1+Y)^{-2}|\phi|^2\mathrm{d}Y\\
\le&\f C{n\mathbf{Im}\lambda_\nu}\big\|\f \phi Y\big\|_{L^2}^2\le \f 1 {32}\|\partial_Y\phi\|_{L^2}^2.
\end{align*}
The estimates of the other terms on the right-hand side of \reff{prop-proof-energy} are similar. We finally obtain
\begin{align}
\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2\leq \frac{C}{\alpha^2|\lambda|^2}\|f\|_{L^2}^2.\nonumber
\end{align}
This shows \reff{lambda-large-L2}. Now we turn to the proof of \reff{lambda-large-Linfinity}. We multiply both sides of \reff{eq:sc-resphi1} by $\bar{\phi}$ and integrate over $(0,+\infty)$. Then we have
\begin{align*}
-\sqrt{\nu}\|(\partial_Y^2-\alpha^2)\phi\|_{L^2}^2+\langle\mathrm{i}\alpha(V-\lambda)(\partial_Y^2-\alpha^2)\phi,\phi\rangle_{L^2}-\langle\mathrm{i}\alpha(\partial_Y^2 V)\phi,\phi\rangle_{L^2}=\langle(\mathrm{i}\alpha f_2-\partial_Y f_1),\phi\rangle_{L^2},
\end{align*}
which implies
\begin{align}\label{proof-mularge-IM}
&\sqrt{\nu}\|(\partial_Y^2-\alpha^2)\phi\|_{L^2}^2+\alpha\lambda_i\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2\\
&\quad=\alpha\mathbf{Im}\langle \partial_YV\partial_Y\phi,\phi\rangle_{L^2}-\mathbf{Re}\langle\mathrm{i}\alpha(\partial_Y^2 V)\phi,\phi\rangle_{L^2}-\mathbf{Re}\langle(\mathrm{i}\alpha f_2-\partial_Y f_1),\phi\rangle_{L^2}.
\nonumber
\end{align}
On the other hand, we have
\begin{align}\label{proof-mularge-V}
&\big|\langle\mathrm{i}\alpha(\partial_Y^2 V)\phi,\phi\rangle_{L^2}\big|\leq C\|\partial_Y\phi\|_{L^2}\|\alpha\phi/Y\|_{L^2}\le C\alpha\|\partial_Y\phi\|_{L^2}^2 \leq \frac{C}{\alpha|\lambda|^2}\|f\|_{L^2}^2,\\
&\big|\alpha\langle \partial_YV\partial_Y\phi,\phi\rangle_{L^2}\big|\leq C\|\partial_Y\phi\|_{L^2}\|\alpha\phi/Y\|_{L^2}\le C\alpha\|\partial_Y\phi\|_{L^2}^2\leq \frac{C}{\alpha|\lambda|^2}\|f\|_{L^2}^2,
\end{align}
and
\begin{align}\label{proof-mularge-force}
|\langle(\mathrm{i}\alpha f_2-\partial_Y f_1),\phi\rangle_{L^2}|\leq C\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\|f\|_{L^2}\leq\frac{C}{\alpha|\lambda|}\|f\|_{L^2}^{2}.
\end{align}
Hence, by collecting \reff{proof-mularge-IM}-\reff{proof-mularge-force}, we obtain \reff{lambda-large-Linfinity}.
\end{proof}

\begin{proposition}\label{prop-Immu-large}
There exists $\delta_2\in(0,1)$ such that if $\alpha\lambda_i+\sqrt{\nu}\alpha^2\geq\delta_2^{-1}$, then for any $f=(f_1,f_2)\in L^2(\mathbb{R}_+)^2$, there exists a unique weak solution $\phi\in H^2_0(\mathbb{R}_+)$ to \reff{eq:sc-resphi1} satisfying
\begin{align}
&\|\partial_Y\phi\|_{L^2}+\alpha\|\phi\|_{L^2}\leq\frac{C}{\alpha\lambda_i+\sqrt{\nu}\alpha^2}\|f\|_{L^2},\label{ieq-Immu-large1}\\
&\|(\partial_Y^2-\alpha^2)\phi\|_{L^2}\leq \frac{C}{\nu^{1/4}(\alpha\lambda_i+\sqrt{\nu}\alpha^2)^{1/2}}\|f\|_{L^2}.\label{ieq-Immu-large2}
\end{align}
\end{proposition}
\begin{proof}
Let $\delta_2\in(0,\delta_0]$ be a small constant to be determined later. From the assumption of the proposition and the definition of $\lambda_\nu$, we have $\alpha\mathbf{Im}\lambda_\nu\geq\delta_2^{-1}$. By taking the $L^2$-inner product of both sides of \reff{eq-modify} with $\phi$, we obtain
\begin{eqnarray}\nonumber
\begin{split}
-\sqrt{\nu}(\|\partial_Y^2\phi\|^2_{L^2}&+\alpha^2\|\partial_Y\phi\|^2_{L^2})+\langle\mathrm{i}\alpha(V-\lambda_\nu)(\partial_Y^2-\alpha^2)\phi,\phi\rangle_{L^2}-\langle\mathrm{i}\alpha(\partial_Y^2V)\phi,\phi\rangle_{L^2}\\
&=\langle(\mathrm{i}\alpha f_2-\partial_Y f_1),\phi\rangle_{L^2}.
\end{split}
\end{eqnarray}
Then by taking the real part of the above equality, we get
\begin{eqnarray}\label{proof-Immu1}
\begin{split}
\sqrt{\nu}\big(\|\partial_Y^2\phi\|^2_{L^2}&+\alpha^2\|\partial_Y\phi\|^2_{L^2}\big)+\alpha\mathbf{Im}\lambda_\nu\|(\partial_Y\phi,\alpha\phi)\|^2_{L^2}\\
&=-\alpha\mathbf{Im}\langle(V-\lambda_r)(\partial_Y^2-\alpha^2)\phi,\phi\rangle_{L^2}-\mathbf{Re}\langle(\mathrm{i}\alpha f_2-\partial_Y f_1),\phi\rangle_{L^2}\\
&=-\alpha\mathbf{Im}\langle V\partial_Y^2\phi,\phi\rangle_{L^2}-\mathbf{Re}\langle(\mathrm{i}\alpha f_2-\partial_Y f_1),\phi\rangle_{L^2}.
\end{split}
\end{eqnarray}
We also notice that
\begin{eqnarray}\label{proof-Immu2}
\begin{split}
\mathbf{Im}\langle V\partial_Y^2\phi,\phi\rangle_{L^2}\leq&\|V\|\|\partial_Y\phi\|_{L^2}\|\phi\|_{L^2}\\
\leq &\frac{\mathbf{Im}\lambda_\nu}{2}\|\partial_Y\phi\|_{L^2}^2+\frac{\|V\|^2}{2\mathbf{Im}\lambda_\nu}\|\phi\|^2_{L^2},
\end{split}
\end{eqnarray}
and
\begin{align}\label{proof-Immu3}
\big|\langle(\mathrm{i}\alpha f_2-\partial_Y f_1),\phi\rangle_{L^2}\big|\leq \|f\|_{L^2}\|(\partial_Y\phi,\alpha\phi)\|_{L^2}.
\end{align}
Hence, after taking $\delta_2\leq \frac{1}{4(1+\|V\|)}$ and collecting \reff{proof-Immu1}, \reff{proof-Immu2} and \reff{proof-Immu3}, we obtain
\begin{align}
\sqrt{\nu}\big(\|\partial_Y^2\phi\|^2_{L^2}&+\alpha^2\|\partial_Y\phi\|^2_{L^2}\big)+\alpha\mathbf{Im}\lambda_\nu\|(\partial_Y\phi,\alpha\phi)\|^2_{L^2}\leq \frac{C}{\alpha\mathbf{Im}\lambda_\nu}\|f\|^2_{L^2},
\end{align}
which gives \reff{ieq-Immu-large1} and \reff{ieq-Immu-large2}.
\end{proof}

\subsection{Resolvent estimates when $|\lambda|\leq \delta^{-1}_1$}

The purpose of this part is to give the resolvent estimates when $|\lambda|\leq \delta^{-1}_1$. However, the boundary condition in \reff{eq:sc-resphi1} brings considerable difficulties in obtaining appropriate bounds. Our main idea to overcome the difficulty generated by the boundary is the following:
\begin{enumerate}
\item We first obtain the resolvent estimates under the Navier-slip boundary condition, which allows us to exploit some special structures of the first equation of \reff{eq:sc-resphi1} via an integration by parts argument.
\item We establish bounds for the boundary layer corrector. This corrector is built from the Airy function and matches the boundary layer precisely.
\item By combining the above two ingredients, we obtain the resolvent estimates for the no-slip boundary condition.
\end{enumerate}
Throughout this subsection, we assume that $\gamma \in [\f 23,1]$, $|\lambda|\le \delta_1^{-1}$, and that the (SC) condition holds.

\subsubsection{Resolvent estimates for the Navier-slip boundary condition}

In this part, we replace the no-slip boundary condition of \reff{eq:sc-resphi1} by the Navier-slip boundary condition. That is, we consider the following system
\begin{align}\label{eq:reswNa1}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w + \mathrm{i}\alpha\big((V-\lambda)w-(\partial_Y^2V)\phi\big)=F,\\
&(\partial_Y^2-\alpha^2)\phi=w,\ w|_{Y=0}=\phi|_{Y=0}=0,
\end{aligned}\right.
\end{align}
where $F= -\partial_YF_1+\mathrm{i}\alpha F_2$. Since the source term $F$ actually belongs to $H^{-1}(\mathbb{R}_+)$ due to $F_1,F_2\in L^2(\mathbb{R}_+)$, we decompose $w=w_1+w_2$ with $w_1$ and $w_2$ satisfying
\begin{align}\label{eq:B1}\left\{
\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w_1+\mathrm{i}\alpha \big(\partial_Y\big((V-\lambda)\phi'_1\big)-(V-\lambda)\alpha^2\phi_1-V''\phi_1\big) =F,\\
&(\partial_Y^2-\alpha^2)\phi_1=w_1,\quad w_1|_{Y=0}=\phi_1|_{Y=0}=0,
\end{aligned}\right.\end{align}
and
\begin{align}\label{eq:B3}\left\{
\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w_2+\mathrm{i}\alpha\big((V-\lambda)w_2 -V''\phi_2\big) =V'h,\\
&(\partial_Y^2-\alpha^2)\phi_2=w_2,\quad w_2|_{Y=0}=\phi_2|_{Y=0}=0,
\end{aligned}\right.\end{align}
with $h=\mathrm{i}\alpha\partial_Y\phi_1$.

\begin{proposition}\label{pro:B3resH-1}
There exists $\delta_*\in(0,\delta_1]$ such that if $\lambda$ satisfies \reff{Remu} for some $\delta\in(0,\delta_*]$, then the unique solution to \eqref{eq:reswNa1} satisfies
\begin{align*}
\nu^{\f14}\alpha^\f12\lambda_i^{-\f12}\|w\|_{L^2} +\alpha\lambda_i\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C\lambda_i^{-1}\|(F_1,F_2)\|_{L^2},
\end{align*}
where the constant $C$ only depends on $\|V\|$.
\end{proposition}
\begin{proof}
Let $w$ be the solution to \reff{eq:reswNa1} and let $\phi$ be the corresponding stream function. As mentioned above, we decompose $w$ as $w=w_1+w_2$. We first give the estimates for $(w_1,\phi_1)$.
By taking the inner product of \eqref{eq:B1} with $-\bar{\phi}_1$, we have
\begin{align*}
& \sqrt{\nu}\|w_1\|_{L^2}^2+ \alpha\lambda_i\|(\partial_Y\phi_1,\alpha\phi_1)\|_{L^2}^2 +\mathrm{i}\alpha\int_{0}^{+\infty}\bigg((V-\lambda_r)(|\phi_1'|^2+|\alpha\phi_1|^2) +V''|\phi_1|^2\bigg)\mathrm{d}Y\\
&=-\int_{0}^{+\infty}(-\partial_YF_1+\mathrm{i}\alpha F_2)\bar{\phi}_1\mathrm{d}Y.
\end{align*}
Then, considering the real part, we get
\begin{align*}
&\nu^{\f12}\|w_1\|_{L^2}^2+ \alpha\lambda_i\|(\partial_Y\phi_1,\alpha\phi_1)\|_{L^2}^2\leq \|(F_1,F_2)\|_{L^2}\|(\partial_Y\phi_1,\alpha\phi_1)\|_{L^2},
\end{align*}
which implies
\begin{align*}
& \nu^{\f14}(\alpha\lambda_i)^{\f12}\|w_1\|_{L^2}+\alpha\lambda_i\|( \partial_Y\phi_1,\alpha\phi_1)\|_{L^2}\leq C\|(F_1,F_2)\|_{L^2}.
\end{align*}
Now we turn to $(w_2,\phi_2)$. Noticing that $(w_2,\phi_2)$ satisfies the system \eqref{eq:B3}, by Lemma \ref{lem:resB3} we get
\begin{align*}
&\alpha\lambda_i\|w_2\|_{L^2}+ \alpha\lambda_i\|(\partial_Y\phi_2,\alpha\phi_2)\|_{L^2}\\
&\leq \alpha\lambda_i\|V'\|_{L^\infty}\left\|w_2/V'\right\|_{L^2}+ \alpha\lambda_i\|(\partial_Y\phi_2,\alpha\phi_2)\|_{L^2}\\
& \leq C\alpha\lambda_i\big(\left\|w_2/V'\right\|_{L^2}+ \|(\partial_Y\phi_2,\alpha\phi_2)\|_{L^2}\big)\leq C\alpha\|\partial_Y \phi_1\|_{L^2}.
\end{align*}
Then we obtain
\begin{align*}
\alpha\lambda_i\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq &\alpha\lambda_i\big(\|(\partial_Y\phi_1,\alpha\phi_1)\|_{L^2}+ \|(\partial_Y\phi_2,\alpha\phi_2)\|_{L^2}\big)\\
\leq &C\alpha\|\partial_Y\phi_1\|_{L^2} +\alpha\lambda_i\|(\partial_Y\phi_1,\alpha\phi_1)\|_{L^2}\\
\leq & C\alpha \|(\partial_Y\phi_1,\alpha\phi_1)\|_{L^2} \leq C\lambda_i^{-1}\|(F_1,F_2)\|_{L^2}.
\end{align*}
For $\|w\|_{L^2}$, we first notice that
\begin{align*}
\|w\|_{L^2}\leq& \|w_1\|_{L^2}+\|w_2\|_{L^2}\leq C\lambda_i^{-1}\|\partial_Y\phi_1\|_{L^2}+ \|w_1\|_{L^2} \leq C\big(\alpha^{-1}\lambda_i^{-2}+ \nu^{-\f14}(\alpha\lambda_i)^{-\f12}\big)\|(F_1,F_2)\|_{L^2}.
\end{align*}
Thanks to \reff{Remu} and $\gamma\ge \f23$, we obtain
\begin{align*}
\alpha^{-1}\lambda_i^{-2}+\nu^{-\f14}(\alpha\lambda_i)^{-\f12}&=\nu^{-\f14}(\alpha\lambda_i)^{-\f12}(\alpha^{-\f12}\lambda_i^{-\f32}\nu^{\f14}+1)\\&\leq \nu^{-\f14}(\alpha\lambda_i)^{-\f12}(n^{1-\f32\gamma}+1)\leq C\nu^{-\f14}(\alpha\lambda_i)^{-\f12}.
\end{align*}
Finally, we have
\begin{align*}
\|w\|_{L^2}\leq C\nu^{-\f14}(\alpha\lambda_i)^{-\f12}\|(F_1,F_2)\|_{L^2}.
\end{align*}
This finishes the proof.
\end{proof}

The following lemma gives the control of the solution to \reff{eq:B3}.

\begin{lemma}\label{lem:resB3}
Let $h\in L^2(\mathbb{R}_+)$. Then there exists $\delta_*\in(0,\delta_1]$ such that if $\lambda$ satisfies \reff{Remu} for some $\delta\in(0,\delta_*]$, then the unique solution to \eqref{eq:B3} satisfies
\begin{align*}
& \alpha\lambda_i\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C\|h\|_{L^2},\\
&\nu^{\f14}(\alpha\lambda_i)^{\f12}\left\|(\partial_Yw,\alpha w)/V'\right\|_{L^2}+\alpha\lambda_i\left\|w/V'\right\|_{L^2}\leq C\|h\|_{L^2}.
\end{align*}
\end{lemma}
\begin{proof}
Let $\varsigma$ be an arbitrary but fixed small positive number.

\textbf{Step 1.
Estimate of the vorticity $w$ in a weighted norm.}
By taking the $L^2$-inner product of both sides of the first equation of \reff{eq:B3} with $-w/(V''-\varsigma)$, we obtain
\begin{align}\label{L2productw}
\mathbf{Re}\langle-\sqrt{\nu}(\partial_Y^2-\alpha^2)w+\mathrm{i}\alpha((V-\lambda)w-V''\phi),-\frac{w}{V''-\varsigma}\rangle\leq |\langle V'h,\frac{w}{V''-\varsigma}\rangle|.
\end{align}
For the terms on the left-hand side of \reff{L2productw}, we first have
\begin{align*}
\big\langle (\partial_Y^2-\alpha^2)w,w/(V''-\varsigma)\big\rangle= &\left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2 +\left\langle\partial_Yw,\dfrac{wV'''}{(V''-\varsigma)^2}\right\rangle,
\end{align*}
which gives
\begin{eqnarray}\label{nuww}
\begin{split}
&\mathbf{Re}\big\langle \sqrt{\nu}(\partial_Y^2-\alpha^2)w,w/(V''-\varsigma)\big\rangle\\
&=\sqrt{\nu}\left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2 +\sqrt{\nu}\mathbf{Re}\left\langle\partial_Yw,\dfrac{wV'''}{(V''-\varsigma)^2}\right\rangle\\
&\geq\sqrt{\nu}\left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2-\sqrt{\nu}\left\| \dfrac{\partial_Yw}{|V''-\varsigma|^{\f12}}\right\|_{L^2}\left\| \dfrac{V'''}{V''-\varsigma}\right\|_{L^\infty}\left\| \dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}\\
&\geq \frac{\sqrt{\nu}}{2}\left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2-C\sqrt{\nu}\left\| \dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2,
\end{split}
\end{eqnarray}
where we used $|V'''/V''|+|V''/V'|\leq C$ in the last inequality. We also notice that
\begin{align*}
&\mathbf{Im} \big\langle (V-\lambda)w-V''\phi,w/(V''-\varsigma)\big\rangle\\
&= \mathbf{Im}\bigg( \big\langle V-\lambda,|w|^2/(V''-\varsigma)\big\rangle+\|(\partial_Y\phi,\alpha \phi)\|_{L^2}^2-\varsigma\big\langle \phi,w/(V''-\varsigma)\big\rangle\bigg)\\
&=\lambda_i\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2-\varsigma\mathbf{Im} \big\langle \phi,w/(V''-\varsigma)\big\rangle,
\end{align*}
from which we deduce that
\begin{eqnarray}
\begin{split}
\mathbf{Re}\left\langle\mathrm{i}\alpha\big((V-\lambda)w-V''\phi\big),-\frac{w}{V''-\varsigma}\right\rangle&=\alpha\mathbf{Im} \big\langle (V-\lambda)w-V''\phi,w/(V''-\varsigma)\big\rangle\\
&=\alpha\lambda_i\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2-\varsigma\alpha\mathbf{Im} \big\langle \phi,w/(V''-\varsigma)\big\rangle\\
&\geq\alpha\lambda_i\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2-\varsigma^{\frac{1}{2}}\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}\|\alpha\phi\|_{L^2}\\
&\geq \frac{\alpha\lambda_i}{2}\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2-C\varsigma(\alpha\lambda_i)^{-1}\|\alpha\phi\|_{L^2}^2.
\end{split}
\end{eqnarray}
According to the above inequality, \reff{L2productw} and \reff{nuww}, we obtain
\begin{eqnarray*}
\begin{split}
&\sqrt{\nu}\left\|\frac{(\partial_Y w,\alpha w)}{|V''-\varsigma|^{1/2}}\right\|^{2}_{L^2}+\alpha\lambda_i\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2\\
&\leq 2\|h\|_{L^2}\left\|\frac{V'}{|V''-\varsigma|^{\f12}}\right\|_{L^\infty}\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}+C\sqrt{\nu}\left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2+C\varsigma(\alpha\lambda_i)^{-1}\|\alpha\phi\|^2_{L^2},
\end{split}
\end{eqnarray*}
which gives
\begin{align}\label{est:nablew1}
& \sqrt{\nu} \left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2 +\alpha\lambda_i \left\|\dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}^2\leq C(\alpha\lambda_i)^{-1}\|h\|_{L^2}^2 +C\varsigma(\alpha\lambda_i)^{-1}\|\alpha\phi\|_{L^2}^2,
\end{align}
where we also used the (SC) condition and the fact that
\begin{align*}
\alpha\lambda_i\geq \frac{\delta_0^\gamma}{\delta}\sqrt{\nu}\quad\text{for $\delta$ small enough}.
\end{align*}

\textbf{Step 2. Estimates via the Rayleigh equation.}
Denote $\textsl{R}\phi :=(V-\lambda)(\partial_Y^2-\alpha^2)\phi-V''\phi$. Then we can rewrite the equation as
\begin{align*}
&\mathrm{i}\alpha(\textsl{R}\phi) =\sqrt{\nu}(\partial_Y^2-\alpha^2)w+V'h.
\end{align*}
Applying Lemma \ref{lem:GMMray1} with $h_1=h/(\mathrm{i}\alpha),\ h_2=\sqrt{\nu}\partial_Yw/(\mathrm{i}\alpha),\ h_3=\sqrt{\nu} w$, we get
\begin{align*}
\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq &C\big(\lambda_i^{-1}\|h_1\|_{L^2}+\lambda_i^{-2}\|(h_2,h_3)\|_{L^2}\big)\\
\leq &C(\alpha\lambda_i)^{-1}\|h\|_{L^2}+ C\nu^{\f12}\lambda_i^{-2}\alpha^{-1}\|(\partial_Yw,\alpha w)\|_{L^2}\\
\leq &C(\alpha\lambda_i)^{-1}\|h\|_{L^2}+ C\nu^{\f12}\lambda_i^{-2}\alpha^{-1}\|V''-\varsigma\|_{L^\infty}^{\f12} \left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}\\
\leq &C(\alpha\lambda_i)^{-1}\|h\|_{L^2}+ C\nu^{\f12}\lambda_i^{-2}\alpha^{-1}\left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}.
\end{align*}
Substituting \eqref{est:nablew1} into the above inequality, we deduce that
\begin{align*}
& \|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C(\alpha\lambda_i)^{-1}\|h\|_{L^2} +C\nu^{\f14}(\alpha\lambda_i)^{-\f32}\lambda_i^{-1}\|h\|_{L^2}+ C\varsigma^{\f12}\nu^{\f14}(\alpha\lambda_i)^{-\f32}\lambda_i^{-1}\|\alpha\phi\|_{L^2}.
\end{align*}
Letting $\varsigma\rightarrow 0^{+}$, we infer that
\begin{align}\label{est:phiB3pori}
& \|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C(\alpha\lambda_i)^{-1}\|h\|_{L^2} +C\nu^{\f14}(\alpha\lambda_i)^{-\f32}\lambda_i^{-1}\|h\|_{L^2}.
\end{align}
On the other hand, we notice that by \reff{Remu},
\begin{align*}
\nu^{\f14}(\alpha\lambda_i)^{-\f32}\leq \alpha^{-1}n^{1-\f32\gamma} \delta^\f32\leq\alpha^{-1},
\end{align*}
provided that $\gamma\in[2/3,1]$. Then \eqref{est:phiB3pori} becomes
\begin{align}\label{est:parphiB3}
\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C(\alpha\lambda_i)^{-1}\|h\|_{L^2}.
\end{align}
Putting this into \eqref{est:nablew1}, we conclude that
\begin{align*}
& \nu^{\f14}\left\|\dfrac{(\partial_Yw,\alpha w)}{|V''-\varsigma|^{\f12}}\right\|_{L^2}+(\alpha\lambda_i)^{\f12}\left\| \dfrac{w}{|V''-\varsigma|^{\f12}}\right\|_{L^2}\leq C(\alpha\lambda_i)^{-\f12}\big(1+\varsigma^{\f12}(\alpha\lambda_i)^{-1}\big)\|h\|_{L^2}.
\end{align*}
Applying $1/|V''-\varsigma|^{\f12}\geq CM^{\f12}/\big|V'+(M\varsigma)^{\f12}\big|$ and letting $\varsigma\rightarrow 0^{+}$, we deduce that
\begin{align*}
&\nu^{\f14}(\alpha\lambda_i)^{\f12}\left\|(\partial_Yw,\alpha w)/V'\right\|_{L^2}+\alpha\lambda_i\left\|w/V'\right\|_{L^2}\leq C\|h\|_{L^2},
\end{align*}
which gives the second inequality.
\end{proof}

The following lemma uses Rayleigh's trick for strongly concave shear flows.

\begin{lemma}\label{lem:GMMray}
Let $\phi\in H^1_0(\mathbb{R}_+)\cap H^2(\mathbb{R}_+)$ and denote the Rayleigh operator by $\textsl{R}\phi:= (V-\lambda)(\partial_Y^2-\alpha^2)\phi-V''\phi$. Then we have
\begin{align}
& \mathbf{Re} \bigg(\dfrac{1-\lambda}{\mathrm{i}\lambda_i}\int_{0}^{+\infty}(\textsl{R}\phi) \dfrac{\bar{\phi}}{V-\lambda}\mathrm{d}Y\bigg) \geq \|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+M^{-1} \left\|\dfrac{(1-V)^{\f12}V'\phi}{V-\lambda}\right\|_{L^2}^2.
\end{align}
Moreover, if $\lambda_r\geq 1$, then we have
\begin{align}
&-\mathbf{Re} \bigg(\int_{0}^{+\infty}(\textsl{R}\phi) \dfrac{\bar{\phi}}{V-\lambda}\mathrm{d}Y\bigg) \geq \|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+M^{-1} \left\|\dfrac{(1-V)^{\f12}V'\phi}{V-\lambda}\right\|_{L^2}^2.
\end{align}
Here $M$ is the constant in the third property of the (SC) condition.
\end{lemma}
\begin{proof}
Taking the inner product with $\dfrac{-\bar{\phi}}{V-\lambda}$, we get
\begin{align*}
&-\int_{0}^{+\infty}(\textsl{R}\phi) \dfrac{\bar{\phi}}{V-\lambda}\mathrm{d}Y= \|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+\int_{0}^{+\infty} \dfrac{(\partial_Y^2V)|\phi|^2}{V-\lambda}\mathrm{d}Y.
\end{align*}
Considering the real and imaginary parts respectively, we obtain
\begin{align}
-\mathbf{Re}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) &=\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+\int_{0}^{+\infty} \dfrac{(V-\lambda_r)(\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y,\label{proi:real}\\
\mathbf{Im}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right)&=\lambda_i\int_{0}^{+\infty} \dfrac{(-\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y.\label{proi:imega}
\end{align}
By the (SC) condition $-\partial_Y^2V\geq M^{-1}(\partial_YV)^2$, we get by \eqref{proi:imega} that
\begin{align*}
&-\int_{0}^{+\infty}\dfrac{(V-\lambda_r)(\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y\\
&= -\int_{0}^{+\infty}\dfrac{(1-\lambda_r)(\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y -\int_{0}^{+\infty}\dfrac{(V-1)(\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y\\
&= \dfrac{1-\lambda_r}{\lambda_i}\left(\int_{0}^{+\infty} \dfrac{\lambda_i(-\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y\right) -\int_{0}^{+\infty}\dfrac{(V-1)(\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y\\
&\leq\dfrac{1-\lambda_r}{\lambda_i}\mathbf{Im}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) -\int_{0}^{+\infty}\dfrac{(1-V)(\partial_YV)^2|\phi|^2}{M|V-\lambda|^2}\mathrm{d}Y\\
&\leq \dfrac{1-\lambda_r}{\lambda_i}\mathbf{Im}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) -M^{-1}\left\|\dfrac{(1-V)^{\f12}(\partial_YV)\phi}{V-\lambda}\right\|_{L^2}^2.
\end{align*}
Putting this into \eqref{proi:real}, we obtain
\begin{align*}
&\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2 =- \int_{0}^{+\infty}\dfrac{(V-\lambda_r)(\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y -\mathbf{Re}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right)\\
&\leq \dfrac{1-\lambda_r}{\lambda_i}\mathbf{Im}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) -M^{-1}\left\|\dfrac{(1-V)^{\f12}(\partial_YV)\phi}{V-\lambda}\right\|_{L^2}^2 -\mathbf{Re}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right)\\
&=\mathbf{Re}\left(\dfrac{1-\lambda_r}{\mathrm{i}\lambda_i}\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) -\mathbf{Re}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) -M^{-1}\left\|\dfrac{(1-V)^{\f12}(\partial_YV)\phi}{V-\lambda}\right\|_{L^2}^2\\
&=\mathbf{Re}\left(\dfrac{1-\lambda}{\mathrm{i}\lambda_i}\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) -M^{-1}\left\|\dfrac{(1-V)^{\f12}(\partial_YV)\phi}{V-\lambda}\right\|_{L^2}^2.
\end{align*}
This gives the first inequality.
If $\lambda_r\geq 1$, again by \eqref{proi:real} and the (SC) condition $-\partial_Y^2V\geq M^{-1}(\partial_YV)^2$, we have
\begin{align*}
-\mathbf{Re}\left(\int_{0}^{+\infty} \dfrac{(\textsl{R}\phi) \bar{\phi}}{V-\lambda}\mathrm{d}Y\right) &=\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+\int_{0}^{+\infty} \dfrac{(V-\lambda_r)(\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y\\
&\geq \|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+\int_{0}^{+\infty} \dfrac{(1-V)(-\partial_Y^2V)|\phi|^2}{|V-\lambda|^2}\mathrm{d}Y\\
&\geq \|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2+M^{-1} \left\|\dfrac{(1-V)^{\f12}V'\phi}{V-\lambda}\right\|_{L^2}^2.
\end{align*}
This gives the second inequality.
\end{proof}

The following lemma has been used in the proof of Lemma \ref{lem:resB3}.

\begin{lemma}\label{lem:GMMray1}
Let $\phi\in H^1_0(\mathbb{R}_+)$ solve $\textsl{R}\phi=\tilde{h}:=V'h_1+\partial_Yh_2+\mathrm{i}\alpha h_3$ with $h_i\in L^2(\mathbb{R}_+)$ for $i=1,2,3$. Then it holds that
\begin{align*}
& \|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C\big(\lambda_i^{-1}\|h_1\|_{L^2} +\lambda_i^{-2}\|(h_2,h_3)\|_{L^2}\big).
\end{align*}
\end{lemma}
\begin{proof}
Notice that
\begin{align*}
&\left|\int_{0}^{+\infty}\dfrac{\tilde{h}\bar{\phi}}{V-\lambda}\mathrm{d}Y \right|=\left|\int_{0}^{+\infty}\dfrac{\big( V'h_1+\partial_Yh_2+\mathrm{i}\alpha h_3\big)\bar{\phi}}{V-\lambda}\mathrm{d}Y\right|\\
&\leq \left|\int_{0}^{+\infty} \bigg(V'h_1+\dfrac{V'h_2}{V-\lambda}\bigg)\dfrac{\bar{\phi}}{V-\lambda}\mathrm{d}Y\right| +\left| \int_{0}^{+\infty}\dfrac{h_2\partial_Y\bar{\phi}}{V-\lambda}\mathrm{d}Y\right| +\left| \int_{0}^{+\infty}\dfrac{ \alpha h_3\bar{\phi}}{V-\lambda}\mathrm{d}Y\right|\\
&\leq \left\|\dfrac{\sqrt{-V''}\phi}{V-\lambda}\right\|_{L^2}\left( \left\|\dfrac{V'h_1}{\sqrt{-V''}}\right\|_{L^2} +\left\|\dfrac{V'h_2}{\sqrt{-V''}(V-\lambda)}\right\|_{L^2}\right) +\|\partial_Y\phi\|_{L^2}\left\|\dfrac{h_2}{V-\lambda}\right\|_{L^2} \\&\quad+\|\alpha\phi\|_{L^2}\left\|\dfrac{h_3}{V-\lambda}\right\|_{L^2}\\
&\leq M^{\f12}\left\|\dfrac{\sqrt{-V''}\phi}{V-\lambda}\right\|_{L^2}\left( \|h_1\|_{L^2} +\left\|\dfrac{h_2}{(V-\lambda)}\right\|_{L^2}\right) +C\lambda_i^{-1}\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\|(h_2,h_3)\|_{L^2}.
\end{align*}
Here we used the strong concave condition. And \eqref{proi:imega} gives
\begin{align*}
&\left|\int_{0}^{+\infty}\dfrac{\tilde{h}\bar{\phi}}{V-\lambda}\mathrm{d}Y \right|\geq \mathbf{Im}\left(\int_{0}^{+\infty}\dfrac{\tilde{h}\bar{\phi}}{V-\lambda}\mathrm{d}Y \right)= \lambda_i\left\|\dfrac{\sqrt{-V''}\phi}{V-\lambda}\right\|_{L^2}^2.
\end{align*}
Then we obtain
\begin{align*}
\left|\int_{0}^{+\infty}\dfrac{\tilde{h}\bar{\phi}}{V-\lambda}\mathrm{d}Y\right|&\leq C\lambda_i^{-1}M\bigg( \|h_1\|_{L^2} +\left\|\dfrac{h_2}{(V-\lambda)}\right\|_{L^2}\bigg)^2 +C\lambda_i^{-1}\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\|(h_2,h_3)\|_{L^2}\\
&\leq C\lambda_i^{-1}\|h_1\|_{L^2}^2+ C\lambda_i^{-3}\|h_2\|_{L^2}^2+C\lambda_i^{-1}\|(\partial_Y\phi,\alpha\phi)\|_{L^2} \|(h_2,h_3)\|_{L^2}.
\end{align*}
By Lemma \ref{lem:GMMray} and $|\lambda|\leq \delta_1^{-1}$, we get
\begin{align*}
\|(\partial_Y\phi,\alpha\phi)\|_{L^2}^2\leq& C\lambda_i^{-1} \left|\int_{0}^{+\infty}\dfrac{\tilde{h}\bar{\phi}}{V-\lambda}\mathrm{d}Y\right|\\
\leq &C\lambda_i^{-1}\big(\lambda_i^{-1}\|h_1\|_{L^2}^2+ \lambda_i^{-4}|\lambda|\|h_2\|_{L^2}^2+\lambda_i^{-1}\|(\partial_Y\phi,\alpha\phi)\|_{L^2} \|(h_2,h_3)\|_{L^2}\big),
\end{align*}
which gives
\begin{align*}
&\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C\big(\lambda_i^{-1}\|h_1\|_{L^2} +\lambda_i^{-2}\|(h_2,h_3)\|_{L^2}\big).
\end{align*}
This proves the lemma.
\end{proof}

\subsubsection{Boundary layer corrector}

Since we have obtained the resolvent estimates under the Navier-slip boundary condition, the next step is to correct the boundary condition from the Navier-slip one to the no-slip one. To this end, we introduce the boundary layer corrector, which is the solution to the following homogeneous system
\begin{align}\label{eq:Hombound-OSnon}
\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)W_b+\mathrm{i}\alpha\big((V-\lambda)W_b- V''\Phi_b\big)=0,\\
&(\partial_Y^2-\alpha^2)\Phi_b=W_b,\\
&\Phi_b|_{Y=0}=0,\ \partial_Y\Phi_b|_{Y=0}=1.
\end{aligned}\right.
\end{align}
Instead of considering the above system directly, we first study the following system
\begin{align}\label{eq:Hombound-OS}
\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)W+\mathrm{i}\alpha\big((V-\lambda)W- V''\Phi\big)=0,\\
&(\partial_Y^2-\alpha^2)\Phi=W,\\
&\Phi|_{Y=0}=0,\ \partial_Y^2\Phi|_{Y=0}=1.
\end{aligned}\right.
\end{align}
The reasons we consider this system are the following:
\begin{itemize}
\item The solution $W$ to \reff{eq:Hombound-OS} is in fact a small perturbation of the Airy function, which is very well understood. Moreover, the corresponding perturbation satisfies the Navier-slip condition, so we can apply the results of the previous part to it.
\item We observe that $\partial_Y\Phi$ does not vanish on the boundary $Y=0$. Hence, to pass from the estimates for \reff{eq:Hombound-OS} to those for \reff{eq:Hombound-OSnon}, we just need to normalize the value of $\partial_Y\Phi$ on the boundary.
\end{itemize}
For convenience, we introduce the following notation
\begin{align}\label{notation:A,A1}
&A=|n|^{\f13}(1+|n|^{\f13}|\lambda_{\nu}|)^{\f12}.
\end{align}
We first present the following lemma, which will be used frequently.

\begin{lemma}\label{lem:L-index}
Let $\delta_0^{-1}\nu^{\f12}\leq \alpha\leq \delta_0^{-1}\nu^{-\f14}$ with $\alpha=\sqrt{\nu}n$, and let $\lambda_i\geq \dfrac{n^{\gamma-1}}{\delta}$ for some $\gamma\in[\f23,1]$. Then it holds that
\begin{align*}
& \max\big(\delta_0^{\f23}\alpha,\delta_0^{-\f13}\big) \leq |n|^{\f13}\quad \mathrm{and}\quad |n|^{-\f13}\leq C(1+\alpha)^{-1}.
\end{align*}
Moreover, we have
\begin{align*}
&|n|^{\f13}\lambda_i\geq \delta_0^{-(\gamma-2/3)}\delta^{-1}\geq \delta^{-1}.
\end{align*}
\end{lemma}
\begin{proof}
Due to $\alpha \leq \delta_0^{-1}\nu^{-\f14}$, we have $\alpha = |n|^{\f13}\big(\sqrt{\nu}\alpha^2\big)^{\f13}\leq |n|^{\f13}\delta_0^{-\f23}$. Due to $\delta_0^{-1}\nu^{\f12}\leq \alpha$, we have $|n|^{\f13}\geq \delta_0^{-\f13}$. This gives the first inequality.
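For completeness, let us note that the second inequality follows from the first one: since $|n|^{\f13}\geq \delta_0^{\f23}\alpha$ and $|n|^{\f13}\geq \delta_0^{-\f13}\geq \delta_0^{\f23}$ (recall $\delta_0<1$), we have
\begin{align*}
2|n|^{\f13}\geq \delta_0^{\f23}\alpha+\delta_0^{\f23}=\delta_0^{\f23}(1+\alpha),\qquad\text{hence}\qquad |n|^{-\f13}\leq 2\delta_0^{-\f23}(1+\alpha)^{-1}.
\end{align*}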
We also have
\begin{align*}
&|n|^{\f13}\lambda_i\geq \dfrac{\alpha^{\f13}}{\nu^{\f16}}\dfrac{\nu^{(1-\gamma)/2} \alpha^{\gamma-1}}{\delta}= \nu^{(2/3-\gamma)/2}\alpha^{\gamma-2/3}\delta^{-1} = \big( \alpha/\sqrt{\nu}\big)^{\gamma-2/3}\delta^{-1}\geq \delta_0^{-(\gamma-2/3)}\delta^{-1},
\end{align*}
which gives the third inequality.
\end{proof}

Now we construct the solution $W$ to \reff{eq:Hombound-OS} via the Airy function. We denote $d=-\lambda_\nu/V'(0)$, where $\lambda_\nu=\lambda+\mathrm{i}\sqrt{\nu}\alpha$. We introduce
\begin{align*}
& W_a (Y)= Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}(Y+d)\big)/ Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}d\big).
\end{align*}
Here $Ai(y)$ is the Airy function defined in the appendix, which satisfies $Ai''(y)-yAi(y)=0$. Then we have
\begin{align*}
\partial_Y^2W_a&=\mathrm{e}^{\mathrm{i}\frac{\pi}{3}} |nV'(0)|^{\f23}(\partial_Y^2Ai)\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}(Y+d)\big)/Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}d\big) \\
&=\mathrm{i} |nV'(0)|^{\f23}(|nV'(0)|^{\f13}(Y+d))Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}(Y+d)\big)/Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}|nV'(0)|^{\f13}d\big)\\
&=\mathrm{i}(\alpha/\sqrt{\nu}) \big(V'(0)(Y+d)\big)W_a,
\end{align*}
which gives
\begin{align}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)W_a+\mathrm{i}\alpha \big(V'(0)Y-\lambda\big)W_a=0,\\
&W_a(0)=1.
\end{aligned}\right.
\end{align}
We denote the perturbation $W_e=W-W_a$, which satisfies
\begin{align}\label{eq:We}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)W_e+\mathrm{i}\alpha\big((V-\lambda)W_e-V''\Phi_e\big) =-\mathrm{i}\alpha\big(V-V'(0)Y\big)W_a+\mathrm{i}\alpha V''\Phi_a,\\
&(\partial_Y^2-\alpha^2)\Phi_e=W_e,\ (\partial_Y^2-\alpha^2)\Phi_a=W_a,\\
&\Phi_a(0)=\Phi_e(0)=0,\ W_e(0)=0.
\end{aligned}\right.
\end{align}
We point out that $W_e$ satisfies the Navier-slip boundary condition. As a consequence, we have the following lemma.

\begin{lemma}\label{lem:Webounds}
Let $W_e$ solve \eqref{eq:We}. Then it holds that
\begin{align*}
& \nu^{\f14}\alpha^{\f12}\lambda_i^{-\f12}\|W_{e}\|_{L^2} +\alpha\lambda_i\|(\partial_Y\Phi_{e},\alpha\Phi_{e})\|_{L^2} \leq C\alpha \lambda_i^{-1}A^{-\f72}.
\end{align*}
Moreover, we have
\begin{align*}
& \|\partial_Y\Phi_e\|_{L^\infty}\leq C|n|^{-\f13}(A\lambda_i)^{-\f74}.
\end{align*}
\end{lemma}
\begin{proof}
We first notice that $W_a$ satisfies
\begin{align*}
& \big(V-V'(0)Y\big)W_a\\
&=\partial_Y\big[(V-V'(0)Y)\partial_Y\Phi_a\big] -\partial_Y\big[(V'-V'(0))\Phi_a\big]+V''\Phi_a-\alpha^2(V-V'(0)Y)\Phi_a.
\end{align*}
Then we have
\begin{align*}
& -\mathrm{i}\alpha\big(V-V'(0)Y\big)W_a+\mathrm{i}\alpha V''\Phi_a\\
&=-\mathrm{i}\alpha \partial_Y\left[(V-V'(0)Y)\partial_Y\Phi_a -(V'-V'(0))\Phi_a\right]+\mathrm{i}\alpha^3(V-V'(0)Y)\Phi_a\\
&=-\partial_YF_{1,1}+\mathrm{i}\alpha F_{1,2},
\end{align*}
where
\begin{align*}
&F_{1,1}=\mathrm{i}\alpha\left[(V-V'(0)Y)\partial_Y\Phi_a -(V'-V'(0))\Phi_a\right],\\
&F_{1,2}=\alpha^2(V-V'(0)Y)\Phi_a.
\end{align*}
Since we have
\begin{align*}
& |V(Y)-V'(0)Y|=\left|\int_{0}^{Y}\int_{0}^{Z}V''(Z_1) \mathrm{d}Z_1\mathrm{d}Z\right|\leq Y^2\|V''\|_{L^\infty}/2\leq CY^2,\\
& |V'(Y)-V'(0)|=\left|\int_{0}^{Y}V''(Z)\mathrm{d}Z\right|\leq Y\|V''\|_{L^\infty}\leq CY,
\end{align*}
we infer that
\begin{align*}
& |F_{1,1}(Y)|\leq C\alpha\big( \big|Y^2\partial_Y\Phi_a\big|+ \big|Y\Phi_a\big|\big),\qquad |F_{1,2}(Y)|\leq C\alpha^2\big|Y^2\Phi_a\big|.
\end{align*}
Applying Lemma \ref{lem:Airy-w} with $\kappa=|nV'(0)|^{\f13}$ and $\eta=-\lambda_{\nu}/V'(0)$ (so that $\kappa(1+|\kappa\eta|)^{\f12}\sim A$), we get by Lemma \ref{lem:L-index} that
\begin{align*}
\|(F_{1,1},F_{1,2})\|_{L^2}\leq& C\alpha \left(\big\|Y^2\partial_Y\Phi_a\big\|_{L^2}+ \big\|Y\Phi_a\big\|_{L^2} + \alpha\big\|Y^2\Phi_a\big\|_{L^2}\right)\\
\leq &C\alpha \left(A^{-\f72}+\alpha A^{-\f92}\right)=C\alpha A^{-\f72}\big(1+\alpha A^{-1}\big)\\
\leq& C\alpha A^{-\f72}\big(1+\alpha |n|^{-\f13}\big)\leq C\alpha A^{-\f72}.
\end{align*}
Now we estimate $W_{e}$. We know that $(W_{e},\Phi_{e},F_{1,1},F_{1,2})$ fits the structure of \eqref{eq:reswNa1}. Then by Proposition \ref{pro:B3resH-1}, we get
\begin{align*}
& \nu^{\f14}\alpha^{\f12}\lambda_i^{-\f12}\|W_{e}\|_{L^2} +\alpha\lambda_i\|(\partial_Y\Phi_{e},\alpha\Phi_{e})\|_{L^2} \leq C\lambda_i^{-1}\big\|(F_{1,1},F_{1,2})\big\|_{L^2}\leq C\alpha A^{-\f72}\lambda_i^{-1}.
\end{align*}
This gives the first inequality. Moreover, we have
\begin{align*}
A\|\partial_Y\Phi_{e}\|_{L^\infty}&\leq CA\|W_{e}\|_{L^2}^{\f12}\|(\partial_Y\Phi_{e},\alpha\Phi_{e})\|_{L^2}^{\f12}\\
&\leq C \alpha A^{-\f52}\lambda_i^{-1}\nu^{-\f18}\alpha^{-\f34}\lambda_i^{-\f14} =C(|n|^{\f13}/A)^{\f34}\lambda_i^{\f12}(A\lambda_i)^{-\f74}\leq C(A\lambda_i)^{-\f74}.
\end{align*}
Here we used $|n|^{\f13}/A\leq 1$ and $\lambda_i\leq \delta_1^{-1}$.
\end{proof}

Combining Lemma \ref{lem:Webounds} with Lemma \ref{lem:Airy-w}, we get the following lemma.

\begin{lemma}\label{lem:Wnorms}
Let $W$ solve \eqref{eq:Hombound-OS}. Then it holds that
\begin{align*}
& \|(\partial_Y\Phi,\alpha\Phi)\|_{L^2}\leq CA^{-\f32},\quad \|W\|_{L^2} \leq CA^{-\f12},\quad \|\rho^{\f12}_\lambda W\|_{L^2}\le C|n|^{\f14}\lambda_i^{\f34}A^{-1},
\end{align*}
where $\rho_\lambda$ is defined by
\begin{align}\nonumber
\rho_{\lambda}(Y)=\left\{\begin{aligned}
&(|n|^{\f13}\lambda_i)^{\f32} Y,\qquad \text{if}\,\,\, 0\leq Y\leq (|n|^{\f13}\lambda_i)^{-\f32},\\
&1,\qquad\qquad\quad \text{if}\,\,\,Y\geq (|n|^{\f13}\lambda_i)^{-\f32}.
\end{aligned}\right.
\end{align}
\end{lemma}
\begin{proof}
From Lemma \ref{lem:Airy-w} and Lemma \ref{lem:Webounds}, we deduce that
\begin{align*}
\|(\partial_Y\Phi,\alpha\Phi)\|_{L^2} \leq&\|(\partial_Y\Phi_a,\alpha\Phi_a)\|_{L^2}+ \|(\partial_Y\Phi_e,\alpha\Phi_e)\|_{L^2}\\
\leq &CA^{-\f32}+C\lambda_i^{-2}A^{-\f72}\leq CA^{-\f32}\big(1+(A\lambda_i)^{-2}\big)\\
\leq & CA^{-\f32}\big(1+(|n|^{\f13}\lambda_i)^{-2}\big)\leq CA^{-\f32},
\end{align*}
where we used $(A\lambda_i)^{-2}\leq (|n|^{\f13}\lambda_i)^{-2}\leq \delta^{2}\leq 1$. This gives the first inequality.
Applying Lemma \ref{lem:Airy-w} and Lemma \ref{lem:Webounds} again, we infer that
\begin{align*}
\|W\|_{L^2}\leq& \|W_a\|_{L^2}+\|W_e\|_{L^2}\leq CA^{-\f12}+C\nu^{-\f14}\alpha^{-\f12}\lambda_i^{\f12}\alpha A^{-\f72}\lambda_i^{-1}\\
\leq &CA^{-\f12}\big(1+(|n|^\f13/A)^{\f32}(A\lambda_i)^{-\f32}\lambda_i\big)\leq CA^{-\f12},
\end{align*}
where we used $(|n|^{\f13}/A)\leq 1$, $(A\lambda_i)^{-\f32}\leq (|n|^{\f13}\lambda_i)^{-\f32}\leq \delta^{\f32}\leq 1$ and $\lambda_i\leq \delta_1^{-1}$. Since $\rho_\lambda\leq1$, we get by Lemma \ref{lem:Airy-w} and Lemma \ref{lem:Webounds} that
\begin{align*}
\|\rho^{\f12}_\lambda W\|_{L^2}\leq&\|\rho^{\f12}_\lambda W_a\|_{L^2}+\|W_e\|_{L^2}\leq C((|n|^\f13\lambda_i)^{\f34}A^{-1}+\nu^{-\f14}\alpha^{\f12}\lambda_i^{\f12}A^{-\f72}\lambda_i^{-1})\\
=&C(|n|^{\f14}\lambda_i^{\f34}A^{-1}+|n|^{\f12}A^{-\f72}\lambda_i^{-\f12})\leq C|n|^{\f14}\lambda_i^{\f34}A^{-1}(1+|n|^\f14A^{-\f34}(A\lambda_i)^{-\f74}\lambda_i^{\f12})\\
\leq&C|n|^{\f14}\lambda_i^{\f34}A^{-1}.
\end{align*}
This proves the lemma.
\end{proof}

Now we denote
\begin{align*}
&J:=-\int_{0}^{+\infty}W(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y.
\end{align*}
Then, by Lemma \ref{lem:ham-bound}, we find that $J=\partial_Y\Phi(0)$. Hence, the task of the following lemma is to show that $|J|$ admits a strictly positive lower bound.

\begin{lemma}\label{lem:lowerJ}
Let $W$ be the solution of \eqref{eq:Hombound-OS}. Then it holds that
\begin{align*}
&|J|\geq C^{-1}A^{-1}.
\end{align*}
\end{lemma}
\begin{proof}
Thanks to Lemma \ref{lem:Airy-bound}, we have
\begin{align*}
\left|\int_{0}^{+\infty}W_a(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y\right| =&|\partial_Y\Phi_a(0)| \geq C^{-1}(1+|n|^{\f13}|\lambda_\nu|)^{-\f12}(|n|^{\f13}+\alpha)^{-1},
\end{align*}
which along with Lemma \ref{lem:L-index} gives
\begin{align}\label{est:W1Philower}
& \left|\int_{0}^{+\infty}W_a(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y\right|\geq C^{-1}(1+|n|^{\f13}|\lambda_\nu|)^{-\f12}n^{-\f13}=C^{-1}A^{-1}.
\end{align}
Moreover, by Lemma \ref{lem:Webounds} and Lemma \ref{lem:L-index}, we get
\begin{align*}
\left|\int_{0}^{+\infty}W_e(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y\right|=&|\partial_Y\Phi_e(0)|\leq \|\partial_Y\Phi_e\|_{L^\infty}\leq C|n|^{-\f13}(A\lambda_i)^{-\f74}\\
\leq &CA^{-1}(|n|^{\f13}\lambda_i)^{-\f74}\leq CA^{-1}\delta^{\f74}.
\end{align*}
Then we deduce that
\begin{align*}
|J|&\geq \left|\int_{0}^{+\infty}W_a(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y\right|- \left|\int_{0}^{+\infty}W_e(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y\right|\geq C^{-1}A^{-1}\big(1-C\delta^{\f74}\big).
\end{align*}
Taking $\delta$ sufficiently small so that $C\delta^{\f74}\leq 1/2$, we arrive at
\begin{align*}
& |J|\geq C^{-1}A^{-1}.
\end{align*}
\end{proof}

By Lemma \ref{lem:lowerJ}, we can define $W_b(Y)=W(Y)/J$. Then the following proposition follows from Lemma \ref{lem:lowerJ} and Lemma \ref{lem:Wnorms}.

\begin{proposition}\label{pro:Wbnorms}
Let $W_b(Y)=W(Y)/J$. Then $W_b$ solves \eqref{eq:Hombound-OSnon} and satisfies
\begin{align*}
& \|(\partial_Y\Phi_b,\alpha\Phi_b)\|_{L^2}\leq CA^{-\f12}\le C,\quad \|W_b\|_{L^2} \leq CA^{\f12},\quad \big\|\rho^{\f12}_\lambda W_b\big\|_{L^2}\le C|n|^{\f14}\lambda_i^{\f34}.
\end{align*}
\end{proposition}

\subsubsection{Resolvent estimates for the no-slip boundary condition}

We now return to the estimate of the solution to the system with the no-slip boundary condition
\begin{align}\label{eq:reswnon}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w + \mathrm{i}\alpha\big((V-\lambda)w-(\partial_Y^2V)\phi\big)=F,\\
&(\partial_Y^2-\alpha^2)\phi=w,\ \partial_Y\phi|_{Y=0}=\phi|_{Y=0}=0,
\end{aligned}\right.
\end{align}
where $F= -\partial_YF_1+\mathrm{i}\alpha F_2$. Let $w_{Na}$ solve
\begin{align*}\left\{\begin{aligned}
&-\sqrt{\nu}(\partial_Y^2-\alpha^2)w_{Na} + \mathrm{i}\alpha\big((V-\lambda)w_{Na}-(\partial_Y^2V)\phi_{Na}\big)=F,\\
&(\partial_Y^2-\alpha^2)\phi_{Na}=w_{Na},\ w_{Na}|_{Y=0}=\phi_{Na}|_{Y=0}=0.
\end{aligned}\right.
\end{align*}
By matching the boundary condition, we find that
\begin{align*}
&w(Y)=w_{Na}(Y)-\partial_Y\phi_{Na}(0)W_b(Y).
\end{align*}
Moreover, the following estimates hold for the solution $(w,\phi)$.

\begin{proposition}\label{Pro:resdrsmall}
There exist $\delta_*\in(0,\delta_1]$ and $\nu_0>0$ such that the following statements hold. Let $\nu\leq \nu_0$. Suppose that \reff{Remu} holds for some $\gamma\in[\f23,1]$ and $\delta\in(0,\delta_*]$. Then for any $(F_1,F_2)\in L^2(\mathbb{R}_+)^2$, the solution $\phi\in H^2_0(\mathbb{R}_+)$ to the system \reff{eq:reswnon} satisfies
\begin{align*}
&\|(\partial_Y\phi,\alpha\phi)\|_{L^2}\leq C(\alpha\lambda_i)^{-1}\lambda_i^{-1}\|(F_1,F_2)\|_{L^2},\\
&\|w\|_{L^2}\leq C\nu^{-\f14}\alpha^{-\f12}\lambda_i^{-\f54}\|(F_1,F_2)\|_{L^2},\\
&\big\|\rho^{\f12}_\lambda w\big\|_{L^2}\leq C\nu^{-\f14}\alpha^{-\f12}\lambda_i^{-\f12}\|(F_1,F_2)\|_{L^2}.
\end{align*}
\end{proposition}

\begin{proof}
By Proposition \ref{pro:B3resH-1}, we obtain
\begin{align}\label{est:drsmallwna}
&\nu^{\f14}\alpha^{\f12}\lambda_i^{-\f12}\|w_{Na}\|_{L^2}+ \alpha\lambda_i\|(\partial_Y\phi_{Na},\alpha\phi_{Na})\|_{L^2}\leq C\lambda_i^{-1}\|(F_1,F_2)\|_{L^2}.
\end{align}
Then by interpolation, we get
\begin{align*}
|\partial_Y\phi_{Na}(0)|\leq &\|\partial_Y\phi_{Na}\|_{L^{\infty}}\leq C\|w_{Na}\|_{L^2}^{\f12}\|(\partial_Y\phi_{Na},\alpha\phi_{Na})\|_{L^2}^{\f12}\\
\leq & C\nu^{-\f18}\alpha^{-\f34}\lambda_i^{-\f54}\|(F_1,F_2)\|_{L^2}= C|n|^{\f14}\alpha^{-1}\lambda_i^{-\f54}\|(F_1,F_2)\|_{L^2}.
\end{align*}
This along with Proposition \ref{pro:Wbnorms} gives
\begin{align*}
&|\partial_Y\phi_{Na}(0)|\|(\partial_Y\Phi_b,\alpha\Phi_b)\|_{L^2} \leq C|n|^{\f14}\alpha^{-1}\lambda_i^{-\f54}A^{-\f12}\|(F_1,F_2)\|_{L^2},\\
&|\partial_Y\phi_{Na}(0)|\|W_b\|_{L^2}\leq C|n|^{\f14}\alpha^{-1}\lambda_i^{-\f54}A^{\f12}\|(F_1,F_2)\|_{L^2}.
\end{align*} Since $w(Y)=w_{Na}(Y)-\partial_Y\phi_{Na}(0)W_b(Y)$, the above inequality and \eqref{est:drsmallwna} give \begin{align*} \|(\partial_Y\phi,\alpha\phi)\|_{L^2}&\frak leq \|(\partial_Y\phi_{Na},\alpha\phi_{Na})\|_{L^2}+ |\partial_Y\phi_{Na}(0)|\|(\partial_Y\Phi_b,\alpha\Phi_b)\|_{L^2}\\ &\frak leq C\big((\alpha\frak lambda_i)^{-1}\frak lambda_i^{-1} +|n|^{\f14}\alpha^{-1}\frak lambda_i^{-\f54}A^{-\f12}\big)\|(F_1,F_2)\|_{L^2}\\ &= C(\alpha\frak lambda_i)^{-1}\frak lambda_i^{-1}\big(1+ (|n|^{\f12}\frak lambda_i^{\f12}/A)^{\f12}\frak lambda_i^{\f12}\big)\|(F_1,F_2)\|_{L^2}\\ &\frak leq C(\alpha\frak lambda_i)^{-1}\frak lambda_i^{-1}\|(F_1,F_2)\|_{L^2}, \end{align*} here we used the fact that due to $\frak lambda_i\ge \f {|n|^{\gamma-1}} \delta\ge \f {|n|^{-\f13}} \delta\ge 2\sqrt{\nu}\alpha$, \begin{eqnarray}o |n|^{\f12}\frak lambda_i^{\f12}/A\frak leq C\f {|n|^\f13(|n|^\f13|\frak lambda_\nu|)^\f12} A\frak le C, \quad \frak lambda_i\frak leq \delta_1^{-1}. \end{eqnarray}o This gives the first inequality. Notice that \begin{align*} \|w\|_{L^2}\frak leq &\|w_{Na}\|_{L^2}+|\partial_Y\phi_{Na}(0)|\|W_b\|_{L^2}\\ \frak leq & C\big(\nu^{-\f14}\alpha^{-\f12}\frak lambda_i^{-\f12}+ |n|^{\f14}\alpha^{-1}\frak lambda_i^{-\f54}A^{\f12}\big)\|(F_1,F_2)\|_{L^2}\\ \frak leq &C(\alpha\frak lambda_i)^{-1}\frak lambda_i^{-1} \big(|n|^{\f12}\frak lambda_i^{\f32}+ |n|^{\f14}\frak lambda_i^{\f34}A^{\f12}\big)\|(F_1,F_2)\|_{L^2}\\ \frak leq &C(\alpha\frak lambda_i)^{-1}\frak lambda_i^{-1} \big(|n|^{\f12}\frak lambda_i^{\f32}+ |n|^{\frac{5}{12}}\frak lambda_i^{\f34}+ |n|^{\f12}\frak lambda_i^{\f34}|\frak lambda_{\nu}|^{\f14}\big)\|(F_1,F_2)\|_{L^2}\\ \frak leq &C(\alpha\frak lambda_i)^{-1}\frak lambda_i^{-1} |n|^{\f12}\frak lambda_i^{\f34}\|(F_1,F_2)\|_{L^2}. \end{align*} where in the last line, we used $|n|^{-\f13}+\frak lambda_i+|\frak lambda_{\nu}|\frak leq C$. This gives the second inequality. By Proposition \ref{pro:Wbnorms}, we also have \begin{align*} \big\|\rho^{\f12}_\frak lambda w\big\|_{L^2}&\frak leq \|w_{Na}\|_{L^2}+|\partial_Y\phi_{Na}(0)|\big\|\rho^{\f12}_\frak lambda W_b\big\|_{L^2}\\ &\frak leq C\big(\nu^{-\f14}\alpha^{-\f12}\frak lambda_i^{-\f12}+ |n|^{\f14}\alpha^{-1}\frak lambda_i^{-\f54}|n|^{\f14}\frak lambda_i^{\f34}\|(F_1,F_2)\|_{L^2}\big)\\ &=C\nu^{-\f14}\alpha^{-\f12}\frak lambda_i^{-\f12}\|(F_1,F_2)\|_{L^2}. \end{align*} This finishes the proof of the proposition. \end{proof} \section{Semigroup estimates of $e^{-t\mathbb{A}_\nu}$} This section is devoted to the semigroup estimates of $e^{-t\mathbb{A}_\nu}$. More precisely, we will establish the semigroup estimates for $e^{-t\mathbb{A}_{\nu,n}}$, where $\mathbb{A}_{\nu,n}$ is the restriction of $\mathbb{A}_\nu$ on the subspace $\mathcal{P}_{n}L^2_\sigma(\Omega)$. In the first part of this section, we show the $L^2$ estimates of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$, and in the second part, we give the $L^\infty$ estimates of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$. \subsection{$L^2$ estimate of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$} This part is devoted to $L^2-L^2$ estimates of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$. The following is our main result. \begin{proposition}\frak label{semigroup-L2} Assume that $(SC)$ condition holds. Then there exist $\delta_1,\delta_2,\delta_*\in(0,1)$ satisfying $\delta_1,\delta_2\frak leq \delta_0$ and $\delta_*\frak leq \min\{\delta_1,\delta_2\}$ such that the following statements hold true. Assume that \reff{Remu} holds for some $\delta\in(0,\delta_*]$ and $\gamma\in[\f23,1]$. 
Then the following estimates hold for all $f\in \mathcal{P}_nL^2_\sigma(\Omega)$ and $t>0$.
\begin{enumerate}
\item If $|n|\leq \delta_0^{-1}$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq Ce^{ct}\|f\|_{L^2},\\
&\|\nabla e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq \frac{C}{\sqrt{\nu t}}(1+te^{ct})\|f\|_{L^2}.
\end{align*}
\item If $\delta_0^{-1}\leq |n|\leq \delta_0^{-1}\nu^{-3/4}$ and $|n|^\gamma\nu^{\frac{1}{2}}<1$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq C |n|^{2(1-\gamma)}e^{\frac{|n|^\gamma}{\delta}t}\|f\|_{L^2},\\
&\|\nabla e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq \frac{C}{\nu^{1/2}}\Big(t^{-1/2}+|n|^{\f54(1-\gamma)+\f12}e^{\frac{|n|^\gamma}{\delta}t}\Big)\|f\|_{L^2}.
\end{align*}
\item If $|n|^\gamma\nu^{\frac{1}{2}}\geq 1$ and $|n|\leq\delta_0^{-1}\nu^{-\f34}$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq C|n|^{1-\gamma}e^{\frac{|n|^\gamma}{\delta}t}\|f\|_{L^2},\\
&\|\nabla e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq \frac{C}{\nu^{1/2}}\Big(t^{-1/2}+|n|^{(1-\gamma/2)}e^{\frac{|n|^\gamma}{\delta}t}\Big)\|f\|_{L^2}.
\end{align*}
\item If $|n|\geq \delta_0^{-1}\nu^{-3/4}$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq e^{-\frac{1}{4}\nu n^2t}\|f\|_{L^2},\\
&\|\nabla e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\leq \frac{Ce^{-\frac{1}{4}\nu n^2t}}{\sqrt{\nu t}}(1+|n|t)\|f\|_{L^2}.
\end{align*}
\end{enumerate}
Here $C$ and $c$ are universal constants depending only on $U^P$.

We mention that the results of (1) and (4) are obtained via the standard energy method. However, for the results of (2) and (3), which are estimates in the Gevrey class for the mid-range frequencies, we need to use the corresponding resolvent estimates in Theorem \ref{main-resolvent}. The idea of the proof is similar to \cite{GMM}. For completeness, we present the details of the proof.

\begin{proof}
Let $n\in \mathbb{Z}$ and $f\in\mathcal{P}_n L^2_{\sigma}(\Omega)$. We denote $u^{(n)}:=e^{-t\mathbb{A}_{\nu,n}}f$. Then by the definition of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$, we know that $u^{(n)}$ satisfies
\begin{align*}
\partial_t u^{(n)}-\nu\Delta u^{(n)}+U^{p}\big(\frac{y}{\sqrt{\nu}}\big)\partial_x u^{(n)}+\frac{1}{\sqrt{\nu}}\Big(u_2^{(n)}\partial_YU^p(\frac{y}{\sqrt{\nu}}),0\Big)+\mathcal{P}_n\nabla p=0,\quad u^{(n)}|_{t=0}=f,
\end{align*}
which by introducing $Y=\frac{y}{\sqrt{\nu}}$ can be written as
\begin{align}\label{n-mode-velocity}
\partial_t u^{(n)}-\nu\Delta u^{(n)}+U^{p}(Y)\partial_x u^{(n)}+\big(y^{-1}u_2^{(n)}Y\partial_YU^p(Y),0\big)+\mathcal{P}_n\nabla p=0.
\end{align}
Recall that
\begin{align}
\delta_0=\frac{1}{2(1+\|U^P\|)},
\end{align}
where $\|\cdot\|$ is defined in the (SC) condition.

Now we start to prove the first statement of Proposition \ref{semigroup-L2}. We first notice that, by the boundary condition and the divergence-free condition,
\begin{align}
\|y^{-1}u^{(n)}_2\|_{L^2(\Omega)}\leq2\|\partial_y u^{(n)}_2\|_{L^2(\Omega)}=2\|\partial_x u^{(n)}_1\|_{L^2(\Omega)}\leq 2|n|\|u^{(n)}\|_{L^2(\Omega)}.
\end{align}
By taking the $L^2$-inner product with $u^{(n)}$ on both sides of \reff{n-mode-velocity}, we have
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}t}\|u^{(n)}\|^2_{L^2(\Omega)}&=-2\nu\|\nabla u^{(n)}\|_{L^2(\Omega)}^2-2\mathbf{Re}\langle y^{-1}u^{(n)}_2(y)Y\partial_Y U^P(Y),u^{(n)}_1\rangle_{L^2}\\
&\leq -2\nu\|\nabla u^{(n)}\|_{L^2(\Omega)}^2+2\|Y\partial_Y U^P\|_{L^\infty}|n|\|u^{(n)}\|^2_{L^2(\Omega)}.
\end{align*}
Hence by Gronwall's inequality, we obtain that for any $n\in\mathbb{Z}$,
\begin{align}\label{semi-L2-n1}
\|u^{(n)}(t)\|_{L^2(\Omega)}\leq e^{\frac{|n|t}{\delta_0}}\|f\|_{L^2(\Omega)}.
\end{align}
To prove the derivative estimates, by the Duhamel formula, we have
\begin{align}
u^{(n)}(t)=e^{\nu t\mathbb{P}\Delta}f-\int_0^t e^{\nu(t-s)\mathbb{P}\Delta}\mathbb{P}\big(U^{p}(Y)\partial_x u^{(n)}(s)+(y^{-1}u_2^{(n)}(s)Y\partial_YU^p(Y),0)\big)\mathrm{d}s,
\end{align}
which along with classical estimates of the Stokes semigroup gives
\begin{align*}
\|\nabla u^{(n)}(t)\|_{L^2(\Omega)}\leq C(\nu t)^{-\f12}\|f\|_{L^2(\Omega)}+C|n|\|U^P\|\sup_{0\leq s\leq t}\|u^{(n)}(s)\|_{L^2(\Omega)}\int_0^t\frac{1}{\nu^{\f12}(t-s)^{\f12}}\mathrm{d}s.
\end{align*}
Combining the above inequality and \reff{semi-L2-n1}, we obtain
\begin{align}
\|\nabla u^{(n)}(t)\|_{L^2}\leq \frac{C}{\sqrt{\nu t}}(1+|n|te^{\frac{|n|t}{\delta_0}})\|f\|_{L^2}.
\end{align}
This shows the first statement in the proposition by taking $|n|\leq \delta_0^{-1}$.

Now we turn to prove the second and third statements in Proposition \ref{semigroup-L2}. By rescaling, we have
\begin{align*}
u^{(n)}(t,x,y)=(e^{-t\mathbb{A}_{\nu,n}}f)(x,y)=(e^{-\tau\mathbb{L}_{\nu,n}}f_{\nu})(X,Y),
\end{align*}
where $(\tau,X,Y)=(t/\sqrt{\nu},x/\sqrt{\nu},y/\sqrt{\nu})$, $f_\nu(X,Y)=f(\nu^{\f12}X,\nu^{\f12}Y)$. According to \reff{semi-L2-n1}, after rescaling, we already know that $-\mathbb{L}_{\nu,n}$ generates a $C_0$-semigroup acting on $\mathcal{P}_{\nu,n}L^2_\sigma(\Omega_\nu)$. In detail, from \reff{semi-L2-n1}, we infer that for any $g\in\mathcal{P}_{\nu,n}L^2_\sigma(\Omega_\nu)$,
\begin{align}
\|e^{-\tau\mathbb{L}_{\nu,n}}g\|_{L^2_\sigma(\Omega_\nu)}\leq e^{\frac{\sqrt{\nu}|n|\tau}{\delta}}\|g\|_{L^2_\sigma(\Omega_\nu)},
\end{align}
which implies the results of the second and third statements for short times $0\leq\tau\leq \nu^{-\f12}|n|^{-1}$. Hence, we now only need to consider the case of $\tau\geq \nu^{-\f12}|n|^{-1}$.

According to Theorem \ref{main-resolvent}, we know that the set
\begin{align}
\Sigma_{\nu,\gamma}:=S_{\nu,n}(\theta)\cup\Big\{\mu\in\mathbb{C}|\mathbf{Re}\mu\geq\frac{|n|^{\gamma}\nu^{\f12}}{\delta}\Big\}
\end{align}
is included in the resolvent set of $-\mathbb{L}_{\nu,n}$, where $S_{\nu,n}(\theta)$ is the set defined in Theorem \ref{main-resolvent}. Thus, the semigroup $e^{-\tau\mathbb{L}_{\nu,n}}$ can be represented as
\begin{align}\label{semi-repre}
e^{-\tau\mathbb{L}_{\nu,n}}=\frac{1}{2\pi\mathrm{i}}\int_{\Gamma}e^{\tau\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}\mathrm{d}\mu,
\end{align}
where the curve $\Gamma$ is taken as $\Gamma=\Gamma_++\Gamma_-+l_++l_-+l_0$ with
\begin{eqnarray}\label{curve-Gamma}
\begin{split}
\Gamma_{\pm}&:=\big\{\mu\in\mathbb{C}|\pm\mathbf{Im}\mu=(\tan\theta)\mathbf{Re}\mu +\delta^{-1}_1(\sqrt{\nu}|n|+|\tan\theta||n|^\gamma\nu^{\f12}),\mathbf{Re}\mu\leq0\big\},\\
l_{\pm}&:=\Big\{\mu\in\mathbb{C}|\pm\mathbf{Im}\mu=\delta^{-1}_1(\sqrt{\nu}|n|+|\tan\theta||n|^\gamma\nu^{\f12}),0\leq\mathbf{Re}\mu\leq\frac{|n|^\gamma\nu^{\f12}}{\delta}\Big\},\\
l_0&:=\Big\{\mu\in\mathbb{C}|0\leq|\mathbf{Im}\mu|\leq\delta^{-1}_1(\sqrt{\nu}|n|+|\tan\theta||n|^\gamma\nu^{\f12}),\mathbf{Re}\mu=\frac{|n|^\gamma\nu^{\f12}}{\delta}\Big\}.
\end{split} \end{eqnarray} By \reff{mularge}, we have that for any $g\in\mathcal{P}_{\nu,n}L^2(\Omega_\nu)$ \begin{align*} \Big\|\frac{1}{2\pi\mathrm{i}}\int_{\Gamma_{\pm}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}&\mu\Big \|_{L^2(\Omega_\nu)}\frak leq C\|g\|_{L^2(\Omega_\nu)}\Big|\int_{\Gamma_{\pm}}e^{\widetilde{a}u\mathbf{Re}\mu}|\mu|^{-1}\mathrm{d}\mu\Big|\\ &\frak leq C\|g\|_{L^2(\Omega_\nu)}\int_0^{+\infty}\frac{e^{-\widetilde{a}u s}}{s+|\widetilde{a}n\theta|s+\delta^{-1}_1(\sqrt{\nu}n+|\widetilde{a}n\theta|n^\gamma\nu^{\f12})}\mathrm{d}s\\ &\frak leq C(\sqrt{\nu} n \widetilde{a}u)^{-\f12}\|g\|_{L^2(\Omega_\nu)}, \end{align*} which implies that for any $\widetilde{a}u\geq (\nu^{1/2}n)^{-1}$ \begin{align}\frak label{Gamma-L2} \Big\|\frac{1}{2\pi\mathrm{i}}\int_{\Gamma_{\pm}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}&\mu\Big\|_{L^2(\Omega_\nu)}\frak leq C\|g\|_{L^2(\Omega_\nu)}. \end{align} Again by \reff{mularge}, we have \begin{eqnarray}\frak label{l-L2} \begin{split} &\Big\|\frac{1}{2\pi\mathrm{i}}\int_{l_{\pm}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}\mu \Big\|_{L^2(\Omega_\nu)}\\ &\frak leq C\|g\|_{L^2(\Omega_\nu)}\int_0^{\frac{|n|^\gamma\nu^{\f12}}{\delta}}\frac{e^{\widetilde{a}u s}}{s+\delta^{-1}_1(\sqrt{\nu}|n|+|\widetilde{a}n\theta||n|^{\gamma}\nu^{\f12})}\mathrm{d}s\\ &\frak leq Ce^{\frac{|n|^{\gamma}\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}. \end{split} \end{eqnarray} On $l_0$, we apply \reff{Immularge} and \reff{musmall} for the case $|n|^\gamma\nu^{\f12}\geq 1$ and $|n|^\gamma\nu^{\f12}\frak leq 1$ respectively. If $|n|^\gamma\nu^{\f12}\geq 1$, then $|n|^\gamma\nu^{\f12}+\delta|n|^2\nu^{\f32}\geq\delta\delta^{-1}_2$, which allow us to use \reff{Immularge}. Hence, we have \begin{eqnarray}\frak label{l0-L2-large} \begin{split} &\Big\|\frac{1}{2\pi\mathrm{i}}\int_{l_{0}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}\mu\Big\|_{L^2(\Omega_\nu)}\\ &\frak leq C\frac{\delta}{|n|^\gamma\nu^{\f12}}e^{\frac{|n|^{\gamma}\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}\int_0^{\frac{C\sqrt{\nu}|n|}{\delta_1}}\mathrm{d}s\\ &\frak leq C|n|^{1-\gamma}e^{\frac{|n|^{\gamma}\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}. \end{split} \end{eqnarray} If $|n|^\gamma\nu^{\f12}\frak leq 1$, by \reff{musmall} we obtain \begin{eqnarray}\frak label{l0-L2-small} \begin{split} &\Big\|\frac{1}{2\pi\mathrm{i}}\int_{l_{0}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}\mu \Big\|_{L^2(\Omega_\nu)}\\ &\frak leq C\frac{\delta}{|n|^\gamma\nu^{\f12}}|n|^{1-\gamma}e^{\frac{|n|^{\gamma}\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}\int_0^{\frac{C\sqrt{\nu}|n|}{\delta_1}}\mathrm{d}s\\ &\frak leq C|n|^{2(1-\gamma)}e^{\frac{|n|^{\gamma}\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}. \end{split} \end{eqnarray} Therefore, we prove the $L^2$ estimates in the second and third statement of Proposition \ref{semigroup-L2} by \reff{Gamma-L2}, \reff{l-L2}, \reff{l0-L2-large} and \reff{l0-L2-small}. Next we consider the derivative estimates. We use the representation \reff{semi-repre} again. 
For the integral on $\Gamma_{\pm}$, we have that by \reff{mularge-nabla}, \begin{eqnarray}\frak label{Gamma-nabla} \begin{split} &\Big\|\nabla_{X,Y}\frac{1}{2\pi\mathrm{i}}\int_{\Gamma_{\pm}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}\mu\Big\|_{L^2(\Omega_\nu)}\\ &\frak leq C\nu^{-\f14}\|g\|_{L^2(\Omega_\nu)}\big|\int_{\Gamma_{\pm}}e^{\widetilde{a}u\mathbf{Re}\mu}|\mu|^{-\f12}\mathrm{d}\mu\big|\\ &\frak leq C\nu^{-\f14}\|g\|_{L^2(\Omega_\nu)}\int_0^{+\infty}\frac{e^{-\widetilde{a}u s}}{(s+|\widetilde{a}n\theta|s+\delta_1^{-1}(\sqrt{\nu}|n|+|\widetilde{a}n\theta||n|^{\gamma}\nu^{\f12}))^{\f12}}\mathrm{d}s\\ &\frak leq C\frac{C}{\nu^{\f14}\widetilde{a}u^{\f12}}\|g\|_{L^2(\Omega_\nu)}. \end{split} \end{eqnarray} In a similar way, on $l_{\pm}$, we have \begin{eqnarray} \begin{split} &\Big\|\nabla_{X,Y}\frac{1}{2\pi\mathrm{i}}\int_{l_{\pm}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}\mu\Big\|_{L^2(\Omega_\nu)}\\ &\frak leq C\nu^{-\f14}\|g\|_{L^2(\Omega_\nu)}\int_0^{\frac{|n|^\gamma\nu^{\f12}}{\delta}}\frac{e^{\widetilde{a}u s}}{(s+\delta_1^{-1}(\sqrt{\nu}|n|+|\widetilde{a}n\theta||n|^\gamma\nu^{\f12}))^{\f12}}\mathrm{d}s\\ &\frak leq C|n|^{\gamma-\f12}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}. \end{split} \end{eqnarray} We obtain the estimates on $l_0$ in the same way as in \reff{l0-L2-large} and \reff{l0-L2-small}. In details, for $|n|^\gamma\nu^{\f12}\geq1$, we have \begin{align} \Big\|\nabla_{X,Y}\frac{1}{2\pi\mathrm{i}}&\int_{l_{0}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}\mu\Big\|_{L^2(\Omega_\nu)}\frak leq C|n|^{1-\frac{\gamma}{2}}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}, \end{align} and for $|n|^\gamma\nu^{\f12}\frak leq1$, we have \begin{align}\frak label{l0-nalbal} &\Big\|\nabla_{X,Y}\frac{1}{2\pi\mathrm{i}}\int_{l_{0}}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}g\mathrm{d}\mu\Big\|_{L^2(\Omega_\nu)}\frak leq Cn^{\f54(1-\gamma)+\f12}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{\delta}}\|g\|_{L^2(\Omega_\nu)}. \end{align} Combining \reff{Gamma-nabla}-\reff{l0-nalbal} and scaling back to the original variables, we finish the proof of the second and third statement of Proposition \ref{semigroup-L2}. Finally, we deal with the last statement of the proposition. In this situation, we are back to the system \reff{n-mode-velocity}. Recall that we consider the case of $|n|\geq \delta_0^{-1}\nu^{-\f34}$. By the standard energy method, we obtain \begin{align*} \frac{\mathrm{d}}{\mathrm{d}t}\|u^{(n)}\|^2_{L^2(\Omega)}&=-2\nu\|\nabla u^{(n)}\|_{L^2(\Omega)}^2-2\nu^{-\f12}\mathbf{Re}\frak langle u^{(n)}_2(y)\partial_Y U^P(Y),u^{(n)}_1\rangle_{L^2}\\ &\frak leq -\nu\|\nabla u^{(n)}\|_{L^2(\Omega)}^2-\nu n^2\|u^{(n)}\|_{L^2}^2+2\nu^{-\f12}\|\partial_Y U^P\|_{L^\infty}\|u\|^2_{L^2}. \end{align*} Then the above inequality and $|n|\geq \delta^{-1}_0\nu^{-\f34}$ lead to \begin{align*} \frac{\mathrm{d}}{\mathrm{d}t}\|u^{(n)}\|^2_{L^2(\Omega)}\frak leq -\frac{\nu n^2}{2}\|u^{(n)}\|^2_{L^2(\Omega)}. \end{align*} Hence by the Gronwall's inequality, we have \begin{align*} \|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2}\frak leq e^{-\frac{1}{4}\nu n^2t}\|f\|_{L^2}. \end{align*} As in the proof of the first statement, by the Duhamel formula and the above $L^2$-estimates, we can obtain the derivative estimates in the last statement. 
\end{proof}

\subsection{$L^\infty$-estimates of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$}

In this part, we establish two kinds of $L^\infty$-estimates of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$. The first one, stated as follows, is the $L^2$-$L^\infty$ estimate, which is used to control the nonlinear part of the full nonlinear perturbation system \eqref{NSA}.

\begin{proposition}\label{semigoup-L2Linf}
Assume that the $(SC)$ condition holds. Then there exist $\delta_1,\delta_2,\delta_*\in(0,1)$ satisfying $\delta_1,\delta_2\leq \delta_0$ and $\delta_*\leq \min\{\delta_1,\delta_2\}$ such that the following statements hold true. Assume that \reff{Remu} holds for some $\delta\in(0,\delta_*]$ and $\gamma\in[\f23,1]$. Then the following estimates hold for all $f\in\mathcal{P}_nL^2_\sigma(\Omega)$ and $t>0$.
\begin{enumerate}
\item If $|n|^\gamma\nu^{\f12}\leq1$ and $|n|\le \delta_0^{-1}\nu^{-\f34}$, then
\begin{eqnarray}\nonumber
\begin{split}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2_xL^\infty_y}\\
&\leq C\langle n\rangle^{\f34(1-\gamma)}\log^\f12(\langle n\rangle)\frac{1}{\nu^{\f14}\langle n\rangle^{\f14}t^{\f12}}\|f\|_{L^2}+C\frac{\langle n\rangle^{1-\gamma}}{\nu^{\f14}t^{\f14}}e^{\frac{\langle n\rangle^\gamma t}{2\delta}}\|f\|_{L^2}\\
&\qquad+C\nu^{-\f14}\log^{\f12}(\langle n\rangle)\langle n\rangle^{\frac{3}{2}-\f54\gamma}e^{\frac{\langle n\rangle^\gamma t}{\delta}}\|f\|_{L^2},
\end{split}
\end{eqnarray}
where $\langle n\rangle=(1+n^2)^{\f12}$.
\item If $|n|^\gamma\nu^{\frac{1}{2}}\geq 1$ and $|n|\leq\delta_0^{-1}\nu^{-\f34}$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2_xL^\infty_y} \leq C\frac{|n|^{(1-\gamma)/2}}{\nu^{\f14}t^{\f14}}e^{\frac{|n|^\gamma t}{2\delta}}\|f\|_{L^2}+C\nu^{-\f14}|n|^{1-\f34\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|f\|_{L^2}.
\end{align*}
\item If $|n|\geq\delta_0^{-1}\nu^{-\f34}$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2_xL^\infty_y}\leq C\frac{1}{\nu^{\f14}t^{\f14}}(1+|n|^{\f12}t^{\f12})e^{-\frac{1}{4}\nu|n|^2t}\|f\|_{L^2}.
\end{align*}
\end{enumerate}
\end{proposition}

\begin{remark}
We point out that the second and third statements of the above proposition follow from Proposition \ref{semigroup-L2} by interpolation. However, for the case of $|n|^\gamma\nu^{\f12}\leq 1$, the $L^\infty$ estimate of the semigroup $e^{-t\mathbb{A}_\nu}$ will lose many derivatives (powers of $|n|$) if we apply the interpolation directly. This will in turn require more smallness of the initial perturbation when we apply it to the full nonlinear system. To obtain a sharper bound for the case of $|n|^\gamma\nu^{\f12}\le 1$, we introduce a weight function $\rho(Y)$ matching the boundary layer quite well and obtain a sharper $L^\infty$ bound via an $H^1$ resolvent estimate associated with the weight function $\rho(Y)$ defined by \eqref{rho-lambda}.
\end{remark}

\begin{proof}
Let $n\in\mathbb{Z}$ and $f\in\mathcal{P}_nL^2_\sigma(\Omega)$.
By the interpolation and Proposition \ref{semigroup-L2}, we obtain that for any $|n|^\gamma\nu^{\frac{1}{2}}\geq 1$ and $|n|\frak leq\delta_0^{-1}\nu^{-\f34}$, \begin{align*} \|u^{(n)}\|_{L^2_xL^\infty_y}\frak leq& \|u^{(n)}\|_{L^2_{x,y}}^{\f12}\|\partial_y u^{(n)}\|_{L^2_{x,y}}^{\f12}\\ \frak leq&C\frac{|n|^{(1-\gamma)/2}}{\nu^{\f14}t^{\f14}}e^{\frac{|n|^\gamma t}{2\delta}}\|f\|_{L^2_{x,y}}+C\nu^{-\f14}|n|^{1-\f34\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|{f}\|_{L^2_{x,y}}, \end{align*} and for any $|n|\geq\delta_0^{-1}\nu^{-\f34}$, \begin{align*} \|u^{(n)}\|_{L^2_xL^\infty_y}\frak leq& \|u^{(n)}\|_{L^2_{x,y}}^{\f12}\|\partial_y u^{(n)}\|_{L^2_{x,y}}^{\f12}\\ \frak leq&C\frac{1}{\nu^{\f14}t^{\f14}}(1+|n|^{\f12}t^{\f12})e^{-\frac{1}{4}\nu n^2t}\|f\|_{L^2_{x,y}}. \end{align*} Next we consider the case of $|n|^{\gamma}\nu^{\f12}\frak leq 1$ and $\delta_0^{-1}\frak le |n|\frak leq\delta_0^{-1}\nu^{-\f34}$. Recall that \begin{align*} v^{(n)}(t,X,Y):=(e^{-\widetilde{a}u\mathbb{L}_{\nu,n}}f_{\nu})(X,Y)=(e^{-t\mathbb{A}_{\nu,n}})f(x,y)=u^{(n)}(t,x,y), \end{align*} where $(\widetilde{a}u,X,Y)=(t/\sqrt{\nu},x/\sqrt{\nu},y/\sqrt{\nu})$, $f_\nu(X,Y)=f(\nu^{\f12}X,\nu^{\f12}Y)$. Hence, we directly have \begin{align}\frak label{Rescalenorm} \|u^{(n)}(t)\|_{L^2_xL^\infty_y(\Omega)}=\nu^{\f14}\|v^{(n)}(\widetilde{a}u)\|_{L^2_XL^\infty_Y(\Omega_\nu)}, \qquad \|f\|_{L^2_xL^2_y(\Omega)}=\nu^{\f12}\|f_{\nu}\|_{L^2_XL^2_Y(\Omega_{\nu})}. \end{align} On the other hand, we get by Lemma \ref{lem:inter} that \begin{align*} \|v^{(n)}(\widetilde{a}u)\|_{L^2_XL^\infty_Y}\frak leq& C\|\rho^{\f12}\omega^{(n)}(\widetilde{a}u)\|^{\f12}_{L^2_{X,Y}}\|v^{(n)}(\widetilde{a}u)\|^{\f12}_{L^2_{X,Y}}+C\|(1-\rho^{\f12})\omega^{(n)}(\widetilde{a}u)\|_{L^2_XL^1_Y}\\ &+C\nu^{\f14}|n|^{\f12}|\|v^{(n)}(\widetilde{a}u)\|_{L^2_{X,Y}}, \end{align*} where $\omega^{(n)}:=\mathrm{curl}_{X,Y}v^{(n)}$. For the second term on the right hand side, we notice that by taking $h_0=(|n|^{\gamma-2/3}/\delta)^{-\f32}$ and $h_1:=h_0|n|^{3(\gamma-1)/2}$ \begin{align*} \big\|(1-\rho^{\f12})\omega^{(n)}\big\|_{L^2_XL^1_Y}\frak leq & \|\omega^{(n)}\|_{L^2_XL^1_Y(h_1,h_2)}+\|\omega^{(n)}\|_{L^2_XL_Y^1(0,h_1)}\\ \frak leq &\big\|\rho^{-\f12}\big\|_{L^2_Y(h_1,h_0)}\big\|\rho^{\f12}\omega^{(n)}\big\|_{L^2_{X,Y}}+ h_1^{\f12}\|\omega^{(n)}\|_{L^2_{X,Y}}\\ \frak leq &C|n|^{1/2-3\gamma/4}\big(\frak log(|n|)\big)^{\f12}\big\|\rho^{\f12}\omega^{(n)}\big\|_{L^2_{X,Y}} +C|n|^{-\f14}\|\omega^{(n)}\|_{L^2_{X,Y}}. \end{align*} Hence, by the above two estimates, we get \begin{eqnarray} \begin{split} \|v^{(n)}(\widetilde{a}u)\|_{L^2_XL^\infty_Y}\frak leq&C\|\rho^{\f12}\omega^{(n)}(\widetilde{a}u)\|^{\f12}_{L^2_{X,Y}}\|v^{(n)}(\widetilde{a}u)\|^{\f12}_{L^2_{X,Y}}\\ &+C|n|^{1/2-3\gamma/4}\big(\frak log(|n|)\big)^{\f12}\big\|\rho^{\f12}\omega^{(n)}(\widetilde{a}u)\big\|_{L^2_{X,Y}}\\ &+C|n|^{-\f14}\|\omega^{(n)}(\widetilde{a}u)\|_{L^2_{X,Y}}+C\nu^{\f14}|n|^{\f12}|\|v^{(n)}(\widetilde{a}u)\|_{L^2_{X,Y}}. \end{split} \end{eqnarray} Applying Proposition \ref{semigroup-L2} and after scaling, we deduce that for $\delta_0^{-1}\frak leq |n|\frak leq \delta_0^{-1}\nu^{-3/4}$ and $|n|^\gamma\nu^{\frac{1}{2}}<1$, \begin{align*} &\|v^{(n)}(\widetilde{a}u)\|_{L^2_{X,Y}}\frak leq C |n|^{2(1-\gamma)}e^{\frac{|n|^\gamma}{\delta}\nu^{\f12}\widetilde{a}u}\|f_{\nu}\|_{L^2_{X,Y}},\\ &\|\omega^{(n)}(\widetilde{a}u)\|_{L^2_{X,Y}}\frak leq C\Big(\frac{1}{\nu^{\f14}\widetilde{a}u^{\f12}}+n^{\f54(1-\gamma)+\f12}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{\delta}}\Big)\|f_\nu\|_{L^2_{X,Y}}. 
\end{align*} Hence, in order to obtain the estimate of $\|v^{(n)}(\widetilde{a}u)\|_{L^2_XL^\infty_Y}$, we are left with the control of $\|\rho^{\f12}\omega^{(n)}(\widetilde{a}u)\|^{\f12}_{L^2_{X,Y}}$. For this purpose, we are back to the formula \begin{align*} v^{(n)}(\widetilde{a}u)=e^{-\widetilde{a}u\mathbb{L}_{\nu,n}}f_\nu=\frac{1}{2\pi\mathrm{i}}\int_{\Gamma}e^{\widetilde{a}u\mu}(\mu+\mathbb{L}_{\nu,n})^{-1}f_\nu\mathrm{d}\mu, \end{align*} where $\Gamma$ is given in \reff{curve-Gamma}. Then by using \reff{mularge-nabla}, \reff{Immularge-na} and \reff{musmall-wegihted}, we infer that for any $\delta_0^{-1}\frak leq |n|\frak leq \delta_0^{-1}\nu^{-\f34}$, \begin{align} \|\rho^{\f12}\omega^{(n)}(\widetilde{a}u)\|_{L^2}\frak leq C\Big(\frac{1}{\nu^{\f14}\widetilde{a}u^{\f12}}+n^{(1-\gamma/2)}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{\delta}}\Big)\|f_\nu\|_{L^2_{X,Y}}, \end{align} by using a similar argument in the proof of derivative estimates in Proposition \ref{semigroup-L2}. Collecting the above estimates, we obtain that for $\delta_0^{-1}\frak leq |n|\frak leq \delta_0^{-1}\nu^{-3/4}$ and $|n|^\gamma\nu^{\frac{1}{2}}<1$, \begin{align*} &\|v^{(n)}(\widetilde{a}u)\|_{L^2_XL^\infty_Y}\\ &\frak leq C|n|^{1-\gamma}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{2\delta}}\big(\frac{1}{\nu^{1/8}\widetilde{a}u^{1/4}}+|n|^{\frac{1}{2}-\frac{\gamma}{4}}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{2\delta}}\big)\|f_{\nu}\|_{L^2_{X,Y}}\\ &\quad+C|n|^{1/2-3\gamma/4}\big(\frak log(|n|)\big)^{\f12}\Big(\frac{1}{\nu^{\f14}\widetilde{a}u^{\f12}}+n^{(1-\gamma/2)}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{\delta}}\Big)\|f_\nu\|_{L^2_{X,Y}}\\ &\quad+C|n|^{-\frac{1}{4}}\Big(\frac{1}{\nu^{\f14}\widetilde{a}u^{\f12}}+n^{\f54(1-\gamma)+\f12}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{\delta}}\Big)\|f_\nu\|_{L^2_{X,Y}}\\ &\quad+C\nu^{\f14}|n|^{\f12}|n|^{2(1-\gamma)}e^{\frac{|n|^\gamma}{\delta}\nu^{\f12}\widetilde{a}u}\|f_{\nu}\|_{L^2_{X,Y}}\\ &\frak leq C|n|^{\f34(1-\gamma)}\frak log^\f12(|n|)\frac{1}{\nu^{\f14}|n|^{\f14}\widetilde{a}u^{\f12}}\|f_{\nu}\|_{L^2_{X,Y}}+C\frac{|n|^{1-\gamma}}{\nu^{\f18}\widetilde{a}u^{\f14}}e^{\frac{|n|^\gamma\nu^{\f12}\widetilde{a}u}{2\delta}}\|f_{\nu}\|_{L^2_{X,Y}}\\ &\quad+C\frak log^{\f12}(|n|)|n|^{\frac{3}{2}-\f54\gamma}e^{\frac{|n|^\gamma \nu^{\f12}\widetilde{a}u}{\delta}}\|f_{\nu}\|_{L^2_{X,Y}}. \end{align*} By scaling back to the original variables and \eqref{Rescalenorm}, we obtain that for $\delta_0^{-1}\frak leq |n|\frak leq \delta_0^{-1}\nu^{-3/4}$ and $|n|^\gamma\nu^{\frac{1}{2}}<1$, \begin{align*} &\|u^{(n)}\|_{L^2_xL^\infty_y}\\ &\frak leq C|n|^{\f34(1-\gamma)}\frak log^\f12(|n|)\frac{1}{\nu^{\f14}|n|^{\f14}t^{\f12}}\|f\|_{L^2}+C\frac{|n|^{1-\gamma}}{\nu^{\f14}t^{\f14}}e^{\frac{|n|^\gamma t}{2\delta}}\|f\|_{L^2}\\ &\qquad+C\nu^{-\f14}\frak log^{\f12}(|n|)|n|^{\frac{3}{2}-\f54\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|f\|_{L^2}. \end{align*} When $|n|\frak le \delta_0^{-1}$, by (1) in Proposition \ref{semigroup-L2} and the interpolation, we get \begin{eqnarray}o \|u^{(n)}\|_{L^2_xL^\infty_y}\frak le C\f 1 {\nu^\f14 t^\f14}e^{ct}\|f\|_{L^2}. \end{eqnarray}o This completes the proof of the first one in this proposition. \end{proof} In Proposition \ref{semigoup-L2Linf}, all of the $L^\infty$ estimates of the semigroup $e^{-t\mathbb{A}_\nu}$ contain some singularity of $t$ at $t=0$, which means that these result could not apply to estimate the homogeneous part of the solution, as our goal is to obtain an $L^\infty$ stability up to $t=0$. 
Hence, we need to use the following $H^1$-$L^\infty$ semigroup estimates without the singularity at $t=0$.

\begin{proposition}\label{semigoup-H^1Linf}
Under the same assumptions as in Proposition \ref{semigroup-L2}, the following estimates hold for all $f\in \mathcal{P}_nL^2_\sigma(\Omega)\cap\mathcal{P}_n L^2_xH^1_y(\Omega)$ and $t>0$.
\begin{enumerate}
\item If $|n|^\gamma\nu^{\f12}\leq1$, then
\begin{align}
\|&e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2_xL^\infty_y}\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}(1+|n|^{3-2\gamma})e^{\frac{|n|^\gamma t}{\delta}}\|f\|_{L^2}.\nonumber
\end{align}
\item If $|n|^\gamma\nu^{\frac{1}{2}}\geq 1$ and $|n|\leq\delta_0^{-1}\nu^{-\f34}$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2_xL^\infty_y}\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}|n|^{2-\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|f\|_{L^2}.
\end{align*}
\item If $|n|\geq\delta_0^{-1}\nu^{-\f34}$, then
\begin{align*}
&\|e^{-t\mathbb{A}_{\nu,n}}f\|_{L^2_xL^\infty_y}\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}|n|e^{-\f14\nu n^2t}\|f\|_{L^2}.
\end{align*}
\end{enumerate}
\end{proposition}

The main idea of the proof is to treat the linear part generated by the shear flow as a perturbation, so that the semigroup $e^{-t\mathbb{A}_{\nu,n}}$ becomes a perturbation of the Stokes semigroup $e^{\nu t\mathbb{P}\Delta}$.

\begin{proof}
Let $n\in\mathbb{Z}$ and $f\in\mathcal{P}_nL^2_xH^1_y(\Omega)\cap\mathcal{P}_nL^2_\sigma(\Omega)$. We still denote $u^{(n)}=e^{-t\mathbb{A}_{\nu,n}}f$. By Duhamel's formula, $u^{(n)}$ can be written as
\begin{align}
u^{(n)}(t)=e^{\nu t\mathbb{P}\Delta}f-\int_0^t e^{\nu(t-s)\mathbb{P}\Delta}\mathbb{P}\big(U^{p}(Y)\partial_x u^{(n)}(s)+(y^{-1}u_2^{(n)}(s)Y\partial_YU^p(Y),0)\big)\mathrm{d}s.
\end{align}
According to Lemma \ref{lem:stokes-semigroup}, we have
\begin{align*}
&\|u^{(n)}(t)\|_{L^2_xL^\infty_y}\\
&\leq C\|f\|_{L^2_xH^1_y}+C\sup_{0\leq s\leq t}\|\mathbb{P}\big(U^{p}(Y)\partial_x u^{(n)}(s)+(y^{-1}u_2^{(n)}(s)Y\partial_YU^p(Y),0)\big)\|_{L^2}\int_0^t\frac{1}{(\nu(t-s))^{\f14}}\mathrm{d}s\\
&\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}\sup_{0\leq s\leq t}\big(\|\partial_xu^{(n)}(s)\|_{L^2}+\|y^{-1}u_2^{(n)}(s)\|_{L^2}\big),
\end{align*}
which along with the Hardy inequality and the divergence-free condition implies
\begin{align}\label{heat-L2-Linfinity}
\|&u^{(n)}(t)\|_{L^2_xL^\infty_y}\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}|n|\sup_{0\leq s\leq t}\|u^{(n)}(s)\|_{L^2}.
\end{align}
Next we consider the case of $|n|^\gamma\nu^{\f12}\leq1$. In this case, by the results (1) and (2) in Proposition \ref{semigroup-L2}, we have
\begin{align*}
\|u^{(n)}(t)\|_{L^2}\leq C\big(1+|n|^{2(1-\gamma)}\big)e^{\frac{|n|^\gamma t}{\delta}}\|f\|_{L^2},
\end{align*}
which along with \reff{heat-L2-Linfinity} gives
\begin{align}\nonumber
\|&u^{(n)}(t)\|_{L^2_xL^\infty_y}\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}\big(1+|n|^{3-2\gamma}\big)e^{\frac{|n|^\gamma t}{\delta}}\|f\|_{L^2}.
\end{align}
In a similar way, we deduce that for $1\leq |n|^\gamma\nu^{\f12}$ and $|n|\leq\delta_0^{-1}\nu^{-\f34}$,
\begin{align*}
\|&u^{(n)}(t)\|_{L^2_xL^\infty_y}\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}|n|^{2-\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|f\|_{L^2},
\end{align*}
and for $|n|\geq \delta_0^{-1}\nu^{-\f34}$,
\begin{align*}
\|&u^{(n)}(t)\|_{L^2_xL^\infty_y}\leq C\|f\|_{L^2_xH^1_y}+C\nu^{-\f14}t^{\f34}|n|e^{-\f14\nu n^2t}\|f\|_{L^2}.
\end{align*}
This completes the proof.
\end{proof}

\section{Nonlinear stability in Gevrey class}

In this section, we prove the nonlinear stability. By the Duhamel formula, the solution $u(t)$ of the system \reff{NSA} with initial data $a$ is given by
\begin{align}
u(t)=e^{-t\mathbb{A}_\nu}a-\int_0^t e^{-(t-s)\mathbb{A}_{\nu}}\mathbb{P}(u\cdot\nabla u)(s)ds,\qquad t>0.
\end{align}
With respect to each Fourier mode $n$ of the variable $x$, the solution $u(t)$ can be represented as
\begin{align}
\mathcal{P}_n u(t)=e^{-t\mathbb{A}_{\nu,n}}\mathcal{P}_n a-\int_0^te^{-(t-s)\mathbb{A}_{\nu,n}}\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)(s)ds.
\end{align}
Since we have already obtained the estimates of the semigroup $e^{-t\mathbb{A}_{\nu,n}}$ in Propositions \ref{semigroup-L2}, \ref{semigoup-L2Linf} and \ref{semigoup-H^1Linf}, our next task is to obtain nonlinear estimates. Before we start the proof, we recall the function spaces in which we shall work. For $\gamma\in[0,1]$, $d\geq 0$ and $K>0$, the Banach spaces $X_{d,\gamma,K}$ and $X^{(1)}_{d,\gamma,K}$ are given by
\begin{align*}
&X_{d,\gamma,K}=\Big\{f\in L^2_\sigma(\Omega): \|f\|_{X_{d,\gamma,K}}=\sup_{n\in\mathbb{Z}}(1+|n|^d)e^{K\langle n\rangle^{\gamma}}\|\mathcal{P}_n f\|_{L^2(\Omega)}<+\infty\Big\},\\
&X^{(1)}_{d,\gamma,K}=\Big\{f\in L^2_\sigma(\Omega): \|f\|_{X_{d,\gamma,K}^{(1)}}=\sup_{n\in\mathbb{Z}}(1+|n|^d)e^{K\langle n\rangle^{\gamma}}\|\mathcal{P}_n f\|_{L^2_xH^1_y(\Omega)}<\infty\Big\},
\end{align*}
and the space $Y_{d,\gamma,K}$ is defined as
\begin{align*}
Y_{d,\gamma,K}=\Big\{f\in L^2_\sigma(\Omega): \|f\|_{Y_{d,\gamma,K}}=\sup_{n\in\mathbb{Z}}(1+|n|^d)e^{K\langle n\rangle^{\gamma}}\|\mathcal{P}_n f\|_{L^2_xL^\infty_y(\Omega)}<\infty\Big\}.
\end{align*}

\begin{proof}[Proof of Theorem \ref{main}]
Let $\gamma\in[\f23,1)$ and $u(t)$ be the solution to the system \reff{NSA} with initial data $a$. Set
\begin{align}
q:=d-3(1-\gamma)-1\in(1,d),\qquad K(t):=K-2\delta^{-1}t.
\end{align}
In what follows, we always assume that $T\le \delta K/2\le 1$ so that $K(t)\ge 0$ for $t\in [0,T]$. We would like to establish an a priori estimate of $u(t)$ in the space
\begin{eqnarray}
\begin{split}
Z_{\gamma,K,T}:=&\Big\{f\in C([0,T];L^2_\sigma(\Omega)):\|f\|_{Z_{\gamma,K,T}}:=\sup_{0<t\leq T}\big(\|f(t)\|_{X_{q,\gamma,K(t)}}\\
&\quad+\nu^{\f14}\|f(t)\|_{Y_{q,\gamma,K(t)}}+(\nu t)^{\f12}\|\nabla f(t)\|_{X_{q,\gamma,K(t)}}\big)<+\infty\Big\}.
\end{split}
\end{eqnarray}
We first show a basic estimate for the nonlinear term $\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)$, which will be used frequently later. We notice that
\begin{align*}
\|\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)\|_{L^2(\Omega)}&\leq \|\mathcal{P}_n(u_1\partial_x u)\|_{L^2(\Omega)}+\|\mathcal{P}_n(u_2\partial_y u)\|_{L^2(\Omega)}\\
&\leq \big\|\sum_{j\in\mathbb{Z}}(e^{-\mathrm{i}jx}\mathcal{P}_j u_1)\cdot(e^{-\mathrm{i}(n-j)x}\partial_x\mathcal{P}_{n-j}u)\big\|_{L^2_y(\mathbb{R}_+)}\\
&\quad+\big\|\sum_{j\in\mathbb{Z}}(e^{-\mathrm{i}jx}\mathcal{P}_j u_2)\cdot(e^{-\mathrm{i}(n-j)x}\partial_y\mathcal{P}_{n-j}u)\big\|_{L^2_y(\mathbb{R}_+)}.
\end{align*} From Gagliardo-Nirenberg inequality and divergence free condition, we obtain \begin{align*} \big\|\sum_{j\in\mathbb{Z}}&(e^{-\mathrm{i}jx}\mathcal{P}_j u_1)\cdot(e^{-\mathrm{i}(n-j)x}\partial_x\mathcal{P}_{n-j}u)\big\|_{L^2_y(\mathbb{R}_+)}\\ &\frak leq C\sum_{j\in\mathbb{Z}}\|\mathcal{P}_j u\|_{L^2(\Omega)}^{\f12}\|\nabla\mathcal{P}_j u\|_{L^2(\Omega)}^{\f12}|n-j|^{\f12}\|\mathcal{P}_{n-j} u\|_{L^2(\Omega)}^{\f12}\|\nabla\mathcal{P}_{n-j} u\|_{L^2(\Omega)}^{\f12}, \end{align*} and \begin{align*} \big\|\sum_{j\in\mathbb{Z}}&(e^{-\mathrm{i}jx}\mathcal{P}_j u_2)\cdot(e^{-\mathrm{i}(n-j)x}\partial_y\mathcal{P}_{n-j}u)\big\|_{L^2_y(\mathbb{R}_+)} \frak leq C\sum_{j\in\mathbb{Z}}|j|^{\f12}\|\mathcal{P}_j u\|_{L^2(\Omega)}\|\nabla\mathcal{P}_{n-j} u\|_{L^2(\Omega)}. \end{align*} Therefore, for $u\in Z_{\gamma,K,T}$, we have \begin{align}\frak label{mainproof-nonlinear} \|\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)(t)\|_{L^2(\Omega)}\frak leq C(\nu t)^{-\f12}\frac{e^{-K(t)\langle n\rangle^\gamma}}{1+|n|^{q-\frac{1}{2}}}\|u\|_{Z_{\gamma,K,T}}^2 \end{align} due to $q>1$. Now we are ready to show the estimates of $\mathcal{P}_n u(t)$. We split our proof into three situations: $|n|^\gamma\nu^{\f12}<1$, $1\frak leq |n|^\gamma\nu^{\f12}\frak leq \delta_0^{-\gamma}\nu^{-\f34\gamma+\f12}$ and $|n|\geq \delta_0^{-1}\nu^{-\f34}$. \noindent\textbf{Case 1.} $|n|^{\gamma}\nu^{\f12}<1$. In this case, we infer from Proposition \ref{semigroup-L2} that \begin{align*} \|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\frak leq& C(1+|n|^{2(1-\gamma)})e^{\frac{|n|^\gamma t}{\delta}}\|\mathcal{P}_n a\|_{L^2(\Omega)}\\ &+C(1+|n|^{2(1-\gamma)})\int_0^t e^{\frac{|n|^\gamma(t-s)}{\delta}}\|\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)(s)\|_{L^2(\Omega)}\mathrm{d}s\\ \frak leq&C(1+|n|^{2(1-\gamma)})e^{\frac{|n|^\gamma t}{\delta}}\|\mathcal{P}_n a\|_{L^2(\Omega)}\\ &+C(1+|n|^{2(1-\gamma)})\int_0^te^{\frac{|n|^\gamma (t-s)}{\delta}}(\nu s)^{-\f12}\frac{e^{-K(s)\langle n\rangle^\gamma}}{1+|n|^{q-\frac{1}{2}}}\mathrm{d}s\|u\|_{Z_{\gamma,K,T}}^2. \end{align*} On the other hand, we notice that \begin{eqnarray}\frak label{mainproof-int} \begin{split} \int_0^{t}e^{\frac{|n|^\gamma(t-s)}{\delta}}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s&\frak le e^{-K(t)\langle n\rangle^\gamma}\int_0^t e^{\frac{-\langle n\rangle^\gamma(t-s)}{\delta}}s^{-\f12}\mathrm{d}s\\ &\frak leq C e^{-K(t)\langle n\rangle^\gamma}\langle n\rangle^{-\frac{\gamma}{2}}. \end{split} \end{eqnarray} Therefore, we obtain \begin{align*} \|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\frak leq& C\frac{e^{-K(t)\langle n\rangle ^\gamma}}{1+|n|^{d-2(1-\gamma)}}\|a\|_{X_{d,\gamma,K}}\\ &+\frac{C\langle n\rangle^{\f12-\f \gamma 2+2(1-\gamma)}}{\nu^{\f12}(1+|n|^q)}e^{-K(t)\langle n\rangle^\gamma}\|u\|^2_{Z_{\gamma,K,T}}. \end{align*} This shows that for $\beta_0=\frac{5(1-\gamma)}{4\gamma}$, \begin{align}\frak label{mainproof-case1-L^2} &\sup_{0<t\frak leq T}\sup_{|n|^\gamma\nu^{\f12}<1}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\noindentnumber\\ &\frak leq C\Big(\|a\|_{X_{d,\gamma,K}}+\nu^{-\f12}\|u\|^2_{Z_{\gamma,K,T}}\sup_{|n|^\gamma\nu^{\f12}<1}\langle n\rangle^{\frac 52(1-\gamma)}\Big)\noindentnumber\\ &\frak leq C\Big(\|a\|_{X_{d,\gamma,K}}+\nu^{-\f12-\beta_0}\|u\|^2_{Z_{\gamma,K,T}}\Big). \end{align} Now we turn to show the $L^2_xL^\infty_y$ estimates in this case. 
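Before doing so, let us record a short sketch of the elementary bound used in the last step of \reff{mainproof-int}; it is a routine computation, included only for the reader's convenience, and the constants are not tracked. Since $K(s)=K-2\delta^{-1}s$ and $|n|^\gamma\leq\langle n\rangle^\gamma$, we have $e^{\frac{|n|^\gamma(t-s)}{\delta}}e^{-K(s)\langle n\rangle^\gamma}\leq e^{-K(t)\langle n\rangle^\gamma}e^{-\frac{\langle n\rangle^\gamma(t-s)}{\delta}}$, and splitting the remaining integral at $s=t/2$ gives
\begin{align*}
\int_0^{t/2}e^{-\frac{\langle n\rangle^\gamma(t-s)}{\delta}}s^{-\f12}\mathrm{d}s
&\leq 2\Big(\frac{t}{2}\Big)^{\f12}e^{-\frac{\langle n\rangle^\gamma t}{2\delta}}\leq C\langle n\rangle^{-\frac{\gamma}{2}},\\
\int_{t/2}^{t}e^{-\frac{\langle n\rangle^\gamma(t-s)}{\delta}}s^{-\f12}\mathrm{d}s
&\leq \Big(\frac{t}{2}\Big)^{-\f12}\min\Big(\frac{t}{2},\,\delta\langle n\rangle^{-\gamma}\Big)\leq C\langle n\rangle^{-\frac{\gamma}{2}},
\end{align*}
where in the first line we used $\sup_{x>0}x^{\f12}e^{-cx}\leq Cc^{-\f12}$ with $c=\langle n\rangle^{\gamma}/(2\delta)$ and $\delta\leq1$, and in the second line $\min(a,b)\leq(ab)^{\f12}$.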
By Proposition \ref{semigoup-L2Linf} and Proposition \ref{semigoup-H^1Linf}, we have \begin{align*} \|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)}\frak leq& C\|\mathcal{P}_n a\|_{L^2_xH^1_y(\Omega)}+C\nu^{-\frac{1}{4}}t^{\f34}\langle n\rangle^{3-2\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|\mathcal{P}_n a\|_{L^2(\Omega)}\\ &+\frac{C}{\nu^{\f14}\langle n\rangle^{\f14}}\langle n\rangle^{\f34(1-\gamma)}\frak log^\f12\langle n\rangle\int_0^t\frac{1}{(t-s)^{\f12}}\|\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)(s)\|_{L^2(\Omega)}\mathrm{d}s\\ &+\frac{C\langle n\rangle^{1-\gamma}}{\nu^{\f14}}\int_0^t(t-s)^{-\f14}e^{\frac{\langle n\rangle^\gamma(t-s)}{2\delta}}\|\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)(s)\|_{L^2(\Omega)}\mathrm{d}s\\ &+C\nu^{-\f14}\frak log^{\f12}\langle n\rangle\langle n\rangle^{\frac{3}{2}-\f54\gamma}\int_0^t e^{\frac{|n|^\gamma(t-s)}{\delta}}\|\mathcal{P}_n\mathbb{P}(u\cdot\nabla u)(s)\|_{L^2(\Omega)}\mathrm{d}s, \end{align*} which along with \reff{mainproof-nonlinear} implies \begin{align*} \|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)}\frak leq& C\|\mathcal{P}_n a\|_{L^2_xH^1_y(\Omega)}+C\nu^{-\frac{1}{4}}t^{\f34}\langle n\rangle^{3-2\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|a\|_{L^2(\Omega)}\\ &+\frac{C\langle n\rangle^{\f34(1-\gamma)}\frak log^\f12\langle n\rangle)}{\nu^{\f34}\langle n\rangle^{\f14}(1+|n|^{q-\f12})}\|u\|_{Z_{\gamma,K,T}}^2\int_0^t(t-s)^{-\f12}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\\ &+\frac{C\langle n\rangle^{1-\gamma}}{\nu^{\f34}(1+|n|^{q-\f12})}\|u\|^2_{Z_{\gamma,K,T}}\int_0^t(t-s)^{-\f14}e^{\frac{|n|^\gamma(t-s)}{2\delta}}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\\ &+\frac{C\frak log^{\f12}\langle n\rangle\langle n\rangle^{\frac{3}{2}-\f54\gamma}}{\nu^{\f34}(1+|n|^{q-\f12})}\|u\|^2_{Z_{\gamma,K,T}}\int_0^t e^{\frac{|n|^\gamma(t-s)}{\delta}}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s. \end{align*} On the other hand, we notice that \begin{align*} &\int_0^t(t-s)^{-\f12}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\frak leq C e^{-K(t)\langle n\rangle^\gamma},\\ &\int_0^t(t-s)^{-\f14}e^{\frac{|n|^\gamma(t-s)}{2\delta}}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\frak leq C\langle n\rangle^{-\f\gamma 4}e^{-K(t)\langle n\rangle^\gamma}. \end{align*} Collecting \reff{mainproof-int} and the above two inequalities, we obtain \begin{align*} \|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)}\frak leq& C\|\mathcal{P}_n a\|_{L^2_xH^1_y(\Omega)}+C\nu^{-\frac{1}{4}}t^{\f34}\langle n\rangle^{3-2\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|\mathcal{P}_na\|_{L^2(\Omega)}\\ &+\frac{C\langle n\rangle^{\f34(1-\gamma)+\f14}\frak log^\f12\langle n\rangle}{\nu^{\f34}(1+|n|^{q})}e^{-K(t)\langle n\rangle^\gamma}\|u\|_{Z_{\gamma,K,T}}^2\\ &+\frac{C\langle n\rangle^{1-\gamma+\f12-\f \gamma 4}}{\nu^{\f34}(1+|n|^{q})}e^{-K(t)\langle n\rangle^\gamma}\|u\|^2_{Z_{\gamma,K,T}}\\ &+\frac{C\frak log^{\f12}\langle n\rangle\langle n\rangle^{\frac{3}{4}+\f54(1-\gamma)-\f \gamma 2}}{\nu^{\f34}(1+|n|^{q})}e^{-K(t)\langle n\rangle^\gamma}\|u\|^2_{Z_{\gamma,K,T}}. 
\end{align*} This shows that \begin{align*} &\sup_{0<t\frak leq T}\sup_{|n|^\gamma\nu^{\f12}<1}\nu^{\f14}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)} \frak leq C\nu^{\f14}\|a\|_{X_{d,\gamma,K}^{(1)}}+C\|a\|_{X_{d,\gamma,K}}\\ &\quad+C\nu^{-\f12}\|u\|^2_{Z_{\gamma,K,T}}\sup_{|n|^\gamma\nu^{\f12}<1}\langle n\rangle^{\frac{1}{4}+\f34(1-\gamma)}\frak log^\f12\langle n\rangle +C\nu^{-\f12}\|u\|^2_{Z_{\gamma,K,T}}\sup_{\langle n\rangle^\gamma\nu^{\f12}<1}\langle n\rangle^{\f14+\f54(1-\gamma)}\\ &\qquad+C\nu^{-\f12}\|u\|^2_{Z_{\gamma,K,T}}\sup_{|n|^\gamma\nu^{\f12}<1}\frak log^{\f12}\langle n\rangle\langle n\rangle^{\f14+\f74(1-\gamma)}. \end{align*} Taking $\beta_1=\frac{7(1-\gamma)}{8\gamma}+\frac{1}{8\gamma}+$, we conclude that \begin{eqnarray}\frak label{mainproof-case1-Linfinity} \begin{split} &\sup_{0<t\frak leq T}\sup_{|n|^\gamma\nu^{\f12}<1}\nu^{\f14}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)}\\ & \frak leq C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\f12-\beta_1}\|u\|^2_{Z_{\gamma,K,T}}. \end{split} \end{eqnarray} Next we prove the $H^1$ estimates in this case. It follows from Proposition \ref{semigroup-L2} that \begin{align*} \|\nabla\mathcal{P}_n u(t)\|_{L^2(\Omega)}\frak leq& \frac{C}{\nu^{1/2}}\Big(t^{-1/2}+\langle n\rangle^{\frac{1}{2}+\frac{5}{4}(1-\gamma)}e^{\frac{|n|^\gamma}{\delta}t}\Big)\|\mathcal{P}_n a\|_{L^2}\\ &+\frac{C}{\nu^{\f12}}\int_0^t\big((t-s)^{-\f12}+\langle n\rangle^{\frac{1}{2}+\frac{5}{4}(1-\gamma)}e^{\frac{|n|^\gamma}{\delta}(t-s)}\big)\|\mathcal{P}_n(u\cdot\nabla u)(s)\|_{L^2(\Omega)}\mathrm{d}s, \end{align*} which along with \reff{mainproof-nonlinear} and the fact $e^{\frac{|n|^\gamma t}{\delta}}\frak leq C(\frac{|n|^\gamma t}{\delta})^{-\f12}e^{\frac{2|n|^\gamma t}{\delta}}$ gives \begin{eqnarray}\noindentnumber \begin{split} \|\nabla\mathcal{P}_n& u(t)\|_{L^2(\Omega)} \frak leq\frac{C}{(\nu t)^{\f12}}\langle n\rangle^{\f74(1-\gamma)}e^{\frac{2|n|^\gamma t}{\delta}}\|\mathcal{P}_n a\|_{L^2(\Omega)}\\ &+\frac{C}{\nu(1+|n|^{q-\f12})}\|u\|_{Z_{\gamma,K,T}}^2\int_0^t\big((t-s)^{-\f12}+\langle n\rangle^{\frac{1}{2}+\frac{5}{4}(1-\gamma)}e^{\frac{\langle n\rangle^\gamma}{\delta}(t-s)}\big)\frac{e^{-K(s)|n|^\gamma}}{s^{\f12}}\mathrm{d}s. \end{split} \end{eqnarray} On the other hand, we notice that \begin{align*} &\int_0^t(t-s)^{-\f12}{e^{-K(s)\langle n\rangle^\gamma}}{s^{-\f12}}\mathrm{d}s\frak leq Ct^{-\f12}\langle n\rangle^{-\f \gamma 2}e^{-K(t)\langle n\rangle^\gamma},\\ &\int_0^te^{\frac{|n|^\gamma}{\delta}(t-s)}{e^{-K(s)\langle n\rangle^\gamma}}{s^{-\f12}}\mathrm{d}s\frak leq Ct^{-\f 12}\langle n\rangle^{-\gamma }e^{-K(t)\langle n\rangle^\gamma}. \end{align*} Hence, we obtain \begin{align*} &(\nu t)^{\f12}\|\nabla\mathcal{P}_n u(t)\|_{L^2(\Omega)}\\ &\frak leq \frac{Ce^{-K(t)|n|^\gamma}}{(1+|n|)^{d-\frac{7}{4}(1-\gamma)}}\|a\|_{X^{(1)}_{d,\gamma,K}}+\frac{C\langle n\rangle^{\f12(1-\gamma)}e^{-K(t)\langle n\rangle^\gamma}}{\nu^{\f12}(1+|n|^q)}\|u\|_{Z_{\gamma,K,T}}^2\\ &\qquad+\frac{C\langle n\rangle^{\frac{9}{4}(1-\gamma)}e^{-K(t)\langle n\rangle^\gamma}}{\nu^\f12(1+|n|^q)}\|u\|_{Z_{\gamma,K,T}}^2, \end{align*} which gives \begin{eqnarray}\frak label{mainproof-case1-na} \begin{split} &\sup_{0<t\frak leq T}\sup_{|n|^\gamma\nu^{\f12}<1}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}(\nu t)^{\f12}\|\nabla\mathcal{P}_n u(t)\|_{L^2(\Omega)}\\ &\frak leq C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\f12-\beta_0}\|u\|^{2}_{Z_{\gamma,K,T}}. 
\end{split} \end{eqnarray} \noindent\textbf{Case 2.} $1\frak leq |n|^\gamma\nu^{\f12}\frak leq \delta_0^{-\gamma}\nu^{-\f34\gamma+\f12}$. The argument is similar to Case 1. According to Proposition \ref{semigroup-L2} and \reff{mainproof-nonlinear}, we have \begin{align}\frak label{mainproof-case2-L2} &\sup_{0<t\frak leq T}\sup_{1\frak leq |n|^\gamma\nu^{\f12}\frak leq \delta_0^{-\gamma}\nu^{-\f34\gamma+\f12}}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\noindentnumber\\ &\quad\frak leq C\|a\|_{X_{d,\gamma,K}}+C\nu^{-\f12}\|u\|_{Z_{\gamma,K,T}}^2\sup_{1\frak leq |n|^\gamma\nu^{\f12}\frak leq \delta_0^{-\gamma}\nu^{-\f34\gamma+\f12}}(1+|n|)^{\f32(1-\gamma)}\noindentnumber\\ &\quad\frak leq C\|a\|_{X_{d,\gamma,K}}+C\nu^{-\f12-\f98(1-\gamma)}\|u\|_{Z_{\gamma,K,T}}^2. \end{align} For the $L^\infty$-estimate, we infer from Proposition \ref{semigoup-L2Linf} and Proposition \ref{semigoup-H^1Linf} that \begin{align*} \|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)}\frak leq& C\|\mathcal{P}_n a\|_{L^2_xH^1_y(\Omega)}+C\nu^{-\frac{1}{4}}t^{\f34}|n|^{2-\gamma}e^{\frac{|n|^\gamma t}{\delta}}\|\mathcal{P}_na\|_{L^2(\Omega)}\\ &+\frac{C|n|^{\f14+\f 34(1-\gamma)}}{\nu^{\f34}(1+|n|^{q})}e^{-K(t)\langle n\rangle^\gamma}\|u\|^2_{Z_{\gamma,K,T}}\\ &+C\frac{(1+|n|)^{\frac{1}{4}+\f54(1-\gamma)}}{\nu^{\f34}(1+|n|^{q})}e^{-K(t)\langle n\rangle^\gamma}\|u\|^2_{Z_{\gamma,K,T}}, \end{align*} which gives \begin{eqnarray}\frak label{mainproof-case2-Linfinity} \begin{split} \sup_{0<t\frak leq T}&\sup_{1\frak leq |n|^\gamma\nu^{\f12}\frak leq \delta_0^{-\gamma}\nu^{-\f34\gamma+\f12}}\nu^{\f14}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)}\\ \frak leq&C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\f12-\f 3{16}-\f {15} {16}(1-\gamma)}\|u\|^2_{Z_{\gamma,K,T}}. \end{split} \end{eqnarray} For the $H^1$ estimate, we infer from Proposition\ref{semigroup-L2} and \reff{mainproof-nonlinear} that \begin{eqnarray}\frak label{mainproof-case2-na} \begin{split} \sup_{0<t\frak leq T}&\sup_{1\frak leq |n|^\gamma\nu^{\f12}\frak leq \delta_0^{-\gamma}\nu^{-\f34\gamma+\f12}}(\nu t)^{\f12}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\\ \frak leq&C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\f12}\|u\|_{Z_{\gamma,K,T}}^2\sup_{1\frak leq |n|^\gamma\nu^{\f12}\frak leq \delta_0^{-\gamma}\nu^{-\f34\gamma+\f12}}\langle n\rangle^{\f32(1-\gamma)}\\ \frak leq& C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\f12-\frac{9}{8}(1-\gamma)}\|u\|_{Z_{\gamma,K,T}}^2. \end{split} \end{eqnarray} \noindent\textbf{Case 3.} $|n|>\delta_0^{-1}\nu^{-\f34}$. By Proposition \ref{semigroup-L2} and \reff{mainproof-nonlinear}, we first have \begin{align*} \|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\frak leq C\|\mathcal{P}_n a\|_{L^2(\Omega)}+\frac{C\langle n\rangle^{\f12}}{\nu^{\f12}(1+|n|^q)}\int_0^te^{-\frac{1}{4}\nu|n|^2(t-s)}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\|u\|_{Z_{\gamma,K,T}}^2. \end{align*} We claim that for all $|n|>\delta_0^{-1}\nu^{-\f34}$ \begin{align}\frak label{mainproof-Inu} I_\nu:=\langle n\rangle^{\f12}\int_0^te^{-\frac{1}{4}\nu|n|^2(t-s)}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\frak leq \frac{Ce^{-K(t)\langle n\rangle^\gamma}}{\nu^{\f12(1-\gamma)}}. 
\end{align} In fact, when $\delta_0^{-1}\nu^{-\f34}\frak leq|n|\frak leq\nu^{-1}$, we have \begin{align*} I_\nu\frak leq C|n|^{\f12}\int_0^te^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\frak leq C|n|^\f12 e^{-K(t)\langle n\rangle^\gamma}|n|^{-\frac{\gamma}{2}}\frak leq C\nu^{-\f12(1-\gamma)}e^{-K(t)\langle n\rangle^\gamma}, \end{align*} and when $|n|\geq\nu^{-1}$, we have \begin{align*} I_\nu\frak leq C\int_0^t\nu^{-\f14}(t-s)^{-\f14}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\frak leq C\nu^{-\f14}|n|^{-\frac{\gamma}{4}}e^{-K(t)\langle n\rangle^\gamma}\frak leq C\nu^{-\f12(1-\gamma)}e^{-K(t)\langle n\rangle^\gamma}, \end{align*} which gives \reff{mainproof-Inu}. Then we obtain \begin{eqnarray}\frak label{mainproof-case3-L2} \begin{split} &\sup_{0<t\frak leq T}\sup_{|n|^\gamma\nu^{\f12}>\delta_0^{-1}\nu^{-\f14}}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\\ &\frak leq C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\frac{1}{2}-\frac{1}{2}(1-\gamma)}\|u\|^2_{Z_{\gamma,K,T}}. \end{split} \end{eqnarray} For the $L^\infty_y$ estimate, we infer that from Proposition \ref{semigoup-L2Linf}, Proposition \ref{semigoup-H^1Linf} and \reff{mainproof-nonlinear} that \begin{eqnarray} \begin{split} \|\mathcal{P}_n u(t)&\|_{L^2_xL^\infty(\Omega)}\frak leq C\|\mathcal{P}_n a\|_{L^2_xH^1_y(\Omega)}+C|n|\nu^{-\f14}\|\mathcal{P}_n a\|_{L^2(\Omega)}\\ &+\frac{C\langle n\rangle^{\f12}}{\nu^{\f34}(1+|n|^q)}\int_0^t\frac{1+|n|^{\f12}(t-s)^{\f12}}{(t-s)^{\f14}}e^{-\f14\nu |n|^2(t-s)}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\|u\|_{Z_{\gamma,K,T}}^2. \end{split} \end{eqnarray} On the other hand, we notice that for all $|n|>\delta_0^{-1}\nu^{-\f34}$ \begin{align*} II_{\nu}:=&\langle n\rangle^{\f12}\int_0^t\frac{1+|n|^{\f12}(t-s)^{\f12}}{(t-s)^{\f14}}e^{-\f14\nu |n|^2(t-s)}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\\ \frak leq &C\int_0^t\Big(\frac{1}{\nu^{\f14}(t-s)^{\f12}}+\frac{1}{\nu^{\f12}(t-s)^{\f14}}\Big)e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\\ \frak leq&C\nu^{-\f14}+\nu^{-\f12}\langle n\rangle^{-\f \gamma 4}. \end{align*} Then we obtain \begin{eqnarray}\frak label{mainproof-case3-Linfinity} \begin{split} &\sup_{0<t\frak leq T}\sup_{|n|>\delta_0^{-1}\nu^{-\f34}}\nu^{\f14}(1+|n|^q)e^{K(t)\langle n\rangle^\gamma}\|\mathcal{P}_n u(t)\|_{L^2_xL^\infty_y(\Omega)}\\ &\frak leq C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-1+\frac{3}{16}\gamma}\|u\|_{Z_{\gamma,K,T}}^2. \end{split} \end{eqnarray} For the $H^1$ estimate, we have \begin{eqnarray}\noindentnumber \begin{split} &\|\nabla\mathcal{P}_n u(t)\|_{L^2(\Omega)}\frak leq\frac{C}{(\nu t)^{\f12}}(1+|n|t)e^{-\f14\nu n^2 t}\|\mathcal{P}_n a\|_{L^2(\Omega)}\\ &\qquad+\frac{C\langle n\rangle^\f12}{\nu(1+|n|^q)}\int_0^t\Big(\frac{1}{(t-s)^{\f12}}+|n|(t-s)^{\f12}\Big)e^{-\nu n^2(t-s)}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\|u\|^2_{Z_{\gamma,K,T}}. \end{split} \end{eqnarray} By a similar argument as in \reff{mainproof-Inu}, we obtain \begin{eqnarray*} \begin{split} &\langle n\rangle^\f12\int_0^t\Big(\frac{1}{(t-s)^{\f12}}+|n|(t-s)^{\f12}\Big)e^{-\nu n^2(t-s)}e^{-K(s)\langle n\rangle^\gamma}s^{-\f12}\mathrm{d}s\\ &\frak leq \frac{Ce^{-K(t)\langle n\rangle^\gamma}}{\nu^{\f32(1-\gamma)}t^{\f12}}. 
\end{split} \end{eqnarray*} Then we deduce that \begin{eqnarray}\frak label{mainproof-case3-na} \begin{split} &\sup_{0<t\frak leq T}\sup_{|n|>\delta^{-1}_0\nu^{-\f34}}(\nu t)^{\f12}(1+|n|^q)e^{K(t)|n|^\gamma}\|\mathcal{P}_n u(t)\|_{L^2(\Omega)}\\ &\frak leq C\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\f12-\f32(1-\gamma)}\|u\|^2_{Z_{\gamma,K,T}}. \end{split} \end{eqnarray} Now we denote \begin{align*} \beta=&\max\Big\{\f {5(1-\gamma)} {4\gamma},\f {7(1-\gamma)} {8\gamma}+\f1 {8\gamma}+,\f 98(1-\gamma),\f{3} {16}+\frac{15(1-\gamma)}{16}\Big\}\\ =&\max\Big\{\f {7(1-\gamma)} {8\gamma}+\f1 {8\gamma}+,\f{3} {16}+\frac{15(1-\gamma)}{16}\Big\}. \end{align*} Summing up \reff{mainproof-case1-L^2}, \reff{mainproof-case1-Linfinity}, \reff{mainproof-case1-na}, \reff{mainproof-case2-L2}, \reff{mainproof-case2-Linfinity}, \reff{mainproof-case2-na}, \reff{mainproof-case3-L2}, \reff{mainproof-case3-Linfinity} and \reff{mainproof-case3-na}, we conclude that \begin{align*} \|u\|_{Z_{\gamma,K,T}}\frak leq C\Big(\|a\|_{X^{(1)}_{d,\gamma,K}}+C\nu^{-\frac{1}{2}-\beta}\|u\|^2_{Z_{\gamma,K,T}}\Big). \end{align*} Thanks to our assumption $\|a\|_{X^{(1)}_{d,\gamma,K}}\frak leq \epsilon\nu^{\f12+\beta}$, if we take $\epsilon$ small enough so that $C\epsilon\frak le \f12$, then we deduce that \begin{align} \|u\|_{Z_{\gamma,K,T}}\frak leq C\|a\|_{X^{(1)}_{d,\gamma,K}}.\noindentnumber \end{align} This completes the proof of Theorem \ref{main}. \end{proof} \appendix \section{Estimates of the Stokes semigroup} In this appendix, we present some $L^\infty$ type estimates of the Stokes semigroup $e^{\nu t\mathbb{P}\Delta}$ on the half plane. \begin{lemma}\frak label{lem:stokes-semigroup} If $u_0\in L^2_x H^1_y(\mathbb{T}\times\mathbb{R}_+)$, then it holds that \begin{align*} &\|e^{\nu t\mathbb{P}\Delta} u_0\|_{L^2_xL^\infty_y}\frak leq C\|u_0\|_{L^2_x H^1_y},\\ & \|e^{\nu t\mathbb{P}\Delta} u_0\|_{L^2_xL^\infty_y}\frak leq \frac{C}{(\nu t)^{\f14}}\|u_0\|_{L^2},\\ & \|\na e^{\nu t\mathbb{P}\Delta} u_0\|_{L^2}\frak leq \frac{C}{(\nu t)^{\f12}}\|u_0\|_{L^2}. \end{align*} \end{lemma} \begin{proof} For given $u_0\in L^2_x H^1_y(\mathbb{T}\times\mathbb{R}_+)$, $u^{(S)}:=e^{\nu t\mathbb{P}\Delta}u_0$ is the solution to the Stokes equation on the half plane: \begin{align*} \frak left\{ \begin{aligned} &\partial_t u^{(S)}(x,y)-\nu\Delta u^{(S)}(x,y)+\nabla p^{(S)}(x,y)=0,\quad (x,y)\in\mathbb{T}\times\mathbb{R}_+,\\ &\nabla\cdot u^{(S)}(x,y)=0,\\ &u|_{y=0}=0,\quad u(t=0)=u_0. \end{aligned} \right. \end{align*} It is easy to see that \begin{align*} \frac{1}{2}\frac{d}{ dt}\|u^{(S)}\|^2_{L^2}+\nu\|\nabla u^{(S)}\|^2_{L^2}=0, \end{align*} which gives \begin{align}\frak label{eq:stokes1} \|u^{(S)}(t)\|_{L^2}^2+2\nu\int_0^t\|\nabla u^{(S)}(\widetilde{a}u)\|_{L^2}^2d\widetilde{a}u \frak leq \|u_0\|_{L^2}^2. \end{align} By taking the $L^2$ inner product with $\partial_t u^{(S)}$, we find that \begin{align*} &\int_{\mathbb{T}\times\mathbb{R}_+}|\partial_t u^{(S)}|^2dxdy-\nu\int_{\mathbb{T}\times\mathbb{R}_+}\Delta u^{(S)}\partial_t u^{(S)}dxdy\\ &=\int_{\mathbb{T}\times\mathbb{R}_+}|\partial_t u^{(S)}|^2dxdy+\frac{\nu}{2}\frac{d}{dt}\|\nabla u^{(S)}\|^2_{L^2_{x,y}}=0, \end{align*} which implies that for any $t>0$ \begin{align}\frak label{eq:stokes2} \|\partial_y u^{(S)}(t)\|_{L^2}\frak leq \|\partial_y u_0\|_{L^2}. 
\end{align} Therefore by collecting \eqref{eq:stokes1}, \eqref{eq:stokes2} and the interpolation, we deduce that for any $t>0$ \begin{align*} \|u^{(S)}(t)\|_{L^2_xL^\infty_y}\frak leq C\|u^{(S)}(t)\|^{\f12}_{L^2}\|\partial_y u^{(S)}(t)\|_{L^2}^{\f12}\frak leq C\|u_0\|_{L^2_x H^1_y}. \end{align*} This proves the first inequality of this lemma. To prove the other two inequalities, we make the following weighted estimate \begin{align*} &t\int_{\mathbb{T}\times\mathbb{R}_+} (\partial_t u^{(S)})\pa_tu^{(S)}dxdy-\nu t\int_{\mathbb{T}\times\mathbb{R}_+}(\Delta u^{(S)})\pa_tu^{(S)}dxdy\\ &=t\|\pa_tu^{(S)}(t)\|^2_{L^2}+\f12\nu\f d {dt}\big(t\|\nabla u^{(S)}(t)\|^2_{L^2}\big)-\f12\nu\|\nabla u^{(S)}\|_{L^2}^2=0, \end{align*} which along with \eqref{eq:stokes1} gives \begin{align*} \|\na u^{(S)}\|_{L^2}\frak leq \frac{1}{(2\nu t)^{\f12}}\|u_0\|_{L^2}. \end{align*} Again by the interpolation, we obtain \begin{align*} \|u^{(S)}(t)\|_{L^2_xL^\infty_y}\frak leq \frac{C}{(\nu t)^{\f14}}\|u_0\|_{L^2}. \end{align*} The proof is completed. \end{proof} \section{Interpolation inequality} \begin{lemma}\frak label{lem:inter} Let $\varphi\in H^2(\mathbb{R}_{+})$ solve $(\partial_Y^2-\alpha^2)\varphi=w$ with $\varphi(0)=0$. Then for any function $0\frak leq \rho\frak leq 1$, it holds that \begin{align*} \|(\partial_Y\varphi,\alpha \varphi)\|_{L^\infty}\frak leq& C\|\rho^{\f12}w\|_{L^2}^{\f12}\|(\partial_Y\varphi,\alpha \varphi)\|_{L^2}^{\f12}+ C\|(1-\rho^{\f12})w\|_{L^1} +C|\alpha|^{\f12}\|(\partial_Y\varphi,\alpha\varphi)\|_{L^2}. \end{align*} \end{lemma} \begin{proof} We have \begin{align*} & \|(\partial_Y\varphi,\alpha \varphi)\|_{L^\infty}^2 \frak leq C\||\partial_Y\varphi|^2+|\alpha \varphi|^2\|_{L^\infty} \frak leq C\big\|\partial_Y(|\partial_Y\varphi|^2+|\alpha \varphi|^2)\big\|_{L^1}. \end{align*} Notice that \begin{align*} & \big|\partial_Y(|\partial_Y\varphi|^2+|\alpha \varphi|^2)\big|\frak leq 2|\partial_Y^2\varphi||\partial_Y\varphi|+2 \alpha^2|\partial_Y\varphi||\varphi|\frak leq 2|w||\partial_Y\varphi|+4 \alpha^2|\partial_Y\varphi||\varphi|. \end{align*} Then we infer that \begin{align*} &\|(\partial_Y\varphi,\alpha \varphi)\|_{L^\infty}^2\frak leq C\big\||w||\partial_Y\varphi|\big\|_{L^1}+ C\alpha^2\big\||\partial_Y\varphi||\varphi|\big\|_{L^1}\\ &\frak leq C\big\||\rho^{\f12}w||\partial_Y\varphi|\big\|_{L^1}+ C\big\||(1-\rho^{\f12})w||\partial_Y\varphi|\big\|_{L^1}+ C\alpha^2\big\||\partial_Y\varphi||\varphi|\big\|_{L^1}\\ &\frak leq C\|\rho^{\f12}w\|_{L^2}\|(\partial_Y\varphi,\alpha \varphi)\|_{L^2}+ C\|(1-\rho^{\f12})w\|_{L^1}\|\partial_Y\varphi\|_{L^\infty}+ C|\alpha|\|(\partial_Y\varphi,\alpha\varphi)\|_{L^2}^2. \end{align*} Then by Young's inequality, we get \begin{align*} \|(\partial_Y\varphi,\alpha \varphi)\|_{L^\infty}\frak leq& C\|\rho^{\f12}w\|_{L^2}^{\f12}\|(\partial_Y\varphi,\alpha \varphi)\|_{L^2}^{\f12}+ C\|(1-\rho^{\f12})w\|_{L^1} +C|\alpha|^{\f12}\|(\partial_Y\varphi,\alpha\varphi)\|_{L^2}. \end{align*} The proof is completed. \end{proof} \section{Some estimates of Airy Function} Let $Ai(y)$ be the Airy function, which is a nontrivial solution of $f''-yf=0$. We denote \begin{align*} &A_0(z)=\int_{\mathrm{e}^{{\mathrm{i}}\pi/6}z}^{\infty}Ai(t)\mathrm{d}t =\mathrm{e}^{{i}\pi/6}\int_{z}^{\infty}Ai(\mathrm{e}^{{\mathrm{i}}\pi/6}t)\mathrm{d}t. \end{align*} The following lemma comes from \cite{CLWZ}. 
\begin{lemma}\label{lem:Airy-p1} There exist $c>0$ and $\delta_0>0$ such that for $\mathbf{Im}\, z\leq \delta_0$, \begin{align} &\left|\f{A_0'(z)}{A_0(z)}\right|\lesssim 1+|z|^{\f12},\quad \mathbf{Re}\,\f{A_0'(z)}{A_0(z)}\leq\min\big(-1/3,-c(1+|z|^{\f12})\big). \end{align} Moreover, for $\mathbf{Im}\, z\leq \delta_0$, we have \begin{eqnarray*} \Big|\f{A_0''(z)}{A_0(z)}\Big|\le C(1+|z|). \end{eqnarray*} \end{lemma} We denote \begin{eqnarray*} \tilde{A}(Y):=Ai(e^{\mathrm{i}\frac{\pi}{6}}\kappa(Y+\eta))/Ai(e^{\mathrm{i}\frac{\pi}{6}}\kappa\eta) \end{eqnarray*} with $\kappa>0$ and $\mathbf{Im}\eta<0$. We define $\tilde{\Phi}(Y)$ as the solution of \begin{eqnarray*} (\partial_Y^2-\alpha^2)\tilde{\Phi}=\tilde{A},\quad \tilde{\Phi}(0)=0. \end{eqnarray*} To estimate $\tilde{A}(Y)$, we need the following lemmas. \begin{lemma}\label{lem:Uineq} There exists a positive constant $C$ such that for all $z\in\mathbb{C}$ and $t>0$, it holds that \begin{align*} & \int_{0}^{t}|z-s|^{\f12}\mathrm{d}s\geq C^{-1}|z|^{\f12}t. \end{align*} \end{lemma} \begin{proof} Let $z_r=\mathbf{Re}(z),\ z_i=\mathbf{Im}(z)$. Let us first claim that \begin{align}\label{est:Uineq Re} & \int_{0}^{t}|z_r-s|^{\f12}\mathrm{d}s\geq C^{-1}|z_r|^{\f12}t. \end{align} Once \eqref{est:Uineq Re} holds, we have \begin{align*} & \int_{0}^{t}|z-s|^{\f12}\mathrm{d}s\geq C^{-1} \big( \int_{0}^{t}|z_r-s|^{\f12}\mathrm{d}s+ \int_{0}^{t}|z_i|^{\f12}\mathrm{d}s\big)\geq C^{-1}\big(|z_r|^{\f12}t+|z_i|^{\f12}t\big)\geq C^{-1}|z|^{\f12}t. \end{align*} It remains to prove \eqref{est:Uineq Re}. \textbf{Case 1}. $z_r\leq 0$. In this case, we have $$\int_{0}^{t}|z_r-s|^{\f12}\mathrm{d}s\geq \int_{0}^{t}|z_r|^{\f12}\mathrm{d}s= |z_r|^{\f12}t.$$ \textbf{Case 2}. $0\leq z_r\leq t/2$. In this case, we have \begin{align*} \int_{0}^{t}|z_r-s|^{\f12}\mathrm{d}s&\geq \int_{t/2}^{t}|z_r-s|^{\f12}\mathrm{d}s= \int_{t/2}^{t}(s-z_r)^{\f12}\mathrm{d}s\geq \int_{t/2}^{t}(s-t/2)^{\f12}\mathrm{d}s\\ &= \dfrac{2(t-t/2)^{\f32}}{3}=\dfrac{2(t/2)^{\f32}}{3}\geq \dfrac{z_r^{\f12}t}{3}=\dfrac{|z_r|^{\f12}t}{3}. \end{align*} \textbf{Case 3}. $z_r\geq t/2$. In this case, we have \begin{align*} \int_{0}^{t}|z_r-s|^{\f12}\mathrm{d}s&\geq \int_{0}^{t/4}|z_r-s|^{\f12}\mathrm{d}s\geq \int_{0}^{t/4}(z_r-t/4)^{\f12}\mathrm{d}s\\ &=\dfrac{(z_r-t/4)^{\f12}t}{4}\geq \dfrac{|z_r/2|^{\f12}t}{4}, \end{align*} where we used $z_r-t/4\geq z_r/2=|z_r|/2$, since $z_r\geq t/2$. Combining the three cases, we conclude our result. \end{proof} \begin{lemma}\label{lem:ham-bound} If $w\in L^2(\mathbb{R}_+)$ and $\phi\in H^2$ satisfies \begin{align*} & (\partial_Y^2-\alpha^2)\phi=w, \end{align*} then it holds that \begin{align*} &-\partial_Y\phi(Y)=\int_{Y}^{+\infty}w(Z)\mathrm{e}^{-\alpha( Z-Y)}\mathrm{d}Z. \end{align*} In particular, we have \begin{align*} & -\partial_Y\phi(0)=\int_{0}^{+\infty}w(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y. \end{align*} \end{lemma} \begin{proof} Direct calculation gives \begin{align*} & \int_{Y}^{+\infty}w(Z)\mathrm{e}^{-\alpha Z}\mathrm{d}Z= \int_{Y}^{+\infty}\big((\partial_Z^2-\alpha^2)\phi(Z)\big)\mathrm{e}^{-\alpha Z}\mathrm{d}Z\\ &\quad=\int_{Y}^{+\infty}\phi\big((\partial_Z^2-\alpha^2)\mathrm{e}^{-\alpha Z}\big)\mathrm{d}Z+\big((\partial_Z\phi)\mathrm{e}^{-\alpha Z}\big)\big|_{Y}^{+\infty}=-\partial_Y\phi(Y)\mathrm{e}^{-\alpha Y}. \end{align*} \end{proof} \begin{lemma}\label{lem:Airy-w} Let $\kappa>0$ and $\mathbf{Im}\eta<0$.
Then we have \begin{align*} &|\tilde{A}(Y)|\leq C\mathrm{e}^{-c\kappa Y\big(1+|\kappa\eta|^{\f12}\big)},\qquad \|\tilde{A}\|_{L^2}\leq C\kappa^{-\f12}\big(1+|\kappa\eta|\big)^{-\f14},\\ &\|Y\tilde{A}\|_{L^2}\leq C\kappa^{-\f32}\big(1+|\kappa\eta|\big)^{-\f34},\qquad \|Y^2\tilde{A}\|_{L^2}\leq C\kappa^{-\f52}\big(1+|\kappa\eta|\big)^{-\f54},\\ &\|(\partial_Y\tilde{\Phi},\alpha\tilde{\Phi})\|_{L^2}\leq C\kappa^{-\f32}\big(1+|\kappa\eta|\big)^{-\f34}. \end{align*} Moreover, there holds \begin{align*} & |\partial_Y\tilde{\Phi}(Y)|\leq C\kappa^{-1}\big(1+|\kappa\eta|\big)^{-\f12}\mathrm{e}^{-c\kappa Y\big(1+|\kappa\eta|^{\f12}\big)},\\ &\|\tilde{\Phi}\|_{L^2}+\|Y\partial_Y\tilde{\Phi}\|_{L^2}\leq C\kappa^{-\f52}\big(1+|\kappa\eta|\big)^{-\f54},\\ &\big\|Y\tilde{\Phi}\big\|_{L^2}+\big\|Y^2\partial_Y\tilde{\Phi}\big\|_{L^2}\leq C\kappa^{-\f72}\big(1+|\kappa\eta|\big)^{-\f74},\\ &\big\|Y^2\tilde{\Phi}\big\|_{L^2}+\big\|Y^3\partial_Y\tilde{\Phi}\big\|_{L^2}\leq C\kappa^{-\f92}\big(1+|\kappa\eta|\big)^{-\f94}. \end{align*} \end{lemma} \begin{proof} By Lemma \ref{lem:Airy-p1}, we have \begin{align*} &\left|\dfrac{A_0(t+B)}{A_0(B)}\right|= \left|\exp\big(\ln(A_0\big(t+B\big))-\ln(A_0(B))\big)\right| = \left|\exp\bigg(\int_{0}^{t}\dfrac{A_0'\big(s+B\big)}{A_0\big(s+B\big)}\mathrm{d}s\bigg)\right|\\ &\leq \exp\bigg(\int_{0}^{t}\mathbf{Re}\dfrac{A_0'\big(s+B\big)}{A_0 \big(s+B\big)}\mathrm{d}s\bigg) \leq \exp\bigg(-\int_{0}^{t}\max\big(1/3,c(1+|s+B|^{\f12})\big)\mathrm{d}s\bigg), \end{align*} which along with Lemma \ref{lem:Uineq} implies \begin{align}\label{A0tB} \left|\dfrac{A_0(t+B)}{A_0(B)}\right|\leq \exp\left(-\max\big(t/3,c(1+|B|^{\f12})t\big)\right). \end{align} Thanks to $\mathbf{Re}\dfrac{A_0'(z)}{A_0(z)}\leq \min\big(-1/3,-c(1+|z|^{\f12})\big)<0$, we have $\left|\mathbf{Re}\dfrac{A_0'(z)}{A_0(z)}\right| \geq c(1+|z|^{\f12})$ and \begin{align}\label{est:A/A'} & \left|\dfrac{A_0(z)}{A_0'(z)}\right|=\left|\dfrac{A_0'(z)}{A_0(z)}\right|^{-1}\leq \left|\mathbf{Re}\dfrac{A_0'(z)}{A_0(z)}\right|^{-1} \leq c^{-1}(1+|z|^{\f12})^{-1}. \end{align} Now we are ready to show the estimates about $\tilde{A}(Y)$. Lemma \ref{lem:Airy-p1} gives \begin{align*} |\tilde{A}(Y)|&=\left|\dfrac{A_0'\big(\kappa(Y+\eta)\big)}{A_0'(\kappa\eta)} \right|= \left|\dfrac{A_0(\kappa\eta)}{A_0'(\kappa\eta)}\right| \left|\dfrac{A_0\big(\kappa(Y+\eta)\big)}{A_0(\kappa\eta)}\right| \left|\dfrac{A_0'\big(\kappa(Y+\eta)\big)}{A_0\big(\kappa(Y+\eta)\big)}\right|\\ & \leq C(1+|\kappa\eta|)^{-\f12}\big(1+|\kappa\eta|+\kappa Y\big)^{\f12} \mathrm{e}^{-c\kappa Y\big(1+|\kappa\eta|^{\f12}\big)}\\ &\leq C\mathrm{e}^{-c\kappa Y\big(1+|\kappa\eta|^{\f12}\big)}. \end{align*} Then we find that \begin{align*} & \|\tilde{A}\|_{L^2}\leq C\big\|\mathrm{e}^{-c\kappa Y\big(1+|\kappa\eta|^{\f12}\big)}\big\|_{L^2}\leq C\kappa^{-\f12}\big(1+|\kappa\eta|\big)^{-\f14},\\ & \|Y\tilde{A}\|_{L^2}\leq C\big\|Y\mathrm{e}^{-c\kappa Y\big(1+|\kappa\eta|^{\f12}\big)}\big\|_{L^2}\leq C\kappa^{-\f32}\big(1+|\kappa\eta|\big)^{-\f34},\\ &\|Y^2\tilde{A}\|_{L^2}\leq C\big\|Y^2 \mathrm{e}^{-c\kappa Y\big(1+|\kappa\eta|^{\f12}\big)}\big\|_{L^2}\leq C\kappa^{-\f52}\big(1+|\kappa\eta|\big)^{-\f54}. \end{align*} Now we turn to deal with $\tilde{\Phi}(Y)$.
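For the reader's convenience, let us also recall the precise form of Hardy's inequality which is invoked in the duality argument right below; this is the standard half-line inequality for functions vanishing at $Y=0$, recorded here only for completeness. Since $\tilde{\Phi}(0)=0$, Hardy's inequality on $\mathbb{R}_+$ gives \begin{align*} \Big\|\frac{\tilde{\Phi}}{Y}\Big\|_{L^2(\mathbb{R}_+)}\leq 2\,\|\partial_Y\tilde{\Phi}\|_{L^2(\mathbb{R}_+)}, \end{align*} which explains the factor $2$ appearing in the next estimate.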
By duality argument and Hardy's inequality, we obtain \begin{align*} & \|(\partial_Y\tilde{\Phi},\alpha \tilde{\Phi})\|_{L^2}^2=\big|\langle\tilde{A} ,\tilde{\Phi}\rangle\big| \leq \|Y\tilde{A}\|_{L^2}\big\|\tilde{\Phi}/Y\big\|_{L^2}\leq 2\|Y\tilde{A}\|_{L^2}\|\partial_Y\tilde{\Phi}\|_{L^2}, \end{align*} which gives \begin{align*} &\|(\partial_Y\tilde{\Phi},\alpha \tilde{\Phi})\|_{L^2}\leq 2\|Y\tilde{A}\|_{L^2}\leq C\kappa^{-\f32}\big(1+|\kappa\eta|\big)^{-\f34}. \end{align*} By Lemma \ref{lem:ham-bound} and $|\tilde{A}(Y)|\leq C\mathrm{e}^{-c\kappa\big(1+|\kappa\eta|^{\f12}\big)Y}$, we have \begin{align*} \left|-\mathrm{e}^{-\alpha Y}\partial_Y\tilde{\Phi}(Y)\right| &=\left|\int_{Y}^{+\infty}\tilde{A}(Z)\mathrm{e}^{-\alpha Z}\mathrm{d}Z\right| \leq \int_{Y}^{+\infty}C\mathrm{e}^{-c\kappa\big(1+|\kappa\eta|^{\f12}\big)Z}\mathrm{e}^{-\alpha Z}\mathrm{d}Z\\ &\leq C\big(c\kappa\big(1+|\kappa\eta|^{\f12}\big)+\alpha\big)^{-1} \mathrm{e}^{-c\kappa\big(1+|\kappa\eta|^{\f12}\big)Y-\alpha Y}\\ &\leq C\kappa^{-1}\big(1+|\kappa\eta|^{\f12}\big)^{-1} \mathrm{e}^{-c\kappa\big(1+|\kappa\eta|^{\f12}\big)Y-\alpha Y}. \end{align*} Therefore, we obtain \begin{align*} & |\partial_Y\tilde{\Phi}(Y)|\leq C \kappa^{-1}(1+|\kappa\eta|)^{-\f12} \mathrm{e}^{-c\big(1+|\kappa\eta|^{\f12}\big)\kappa Y}, \end{align*} which implies \begin{align*} & \big\|Y\partial_Y\tilde{\Phi}\big\|_{L^2}\leq C\kappa^{-1}(1+|\kappa\eta|)^{-\f12} \big\|Y\mathrm{e}^{-c\big(1+|\kappa\eta|^{\f12}\big)\kappa Y}\big\|_{L^2}\leq C\kappa^{-\f52}\big(1+|\kappa\eta|\big)^{-\f54},\\ &\big\|Y^2\partial_Y\tilde{\Phi}\big\|_{L^2}\leq C\kappa^{-1}(1+|\kappa\eta|)^{-\f12} \big\|Y^2\mathrm{e}^{-c\big(1+|\kappa\eta|^{\f12}\big)\kappa Y}\big\|_{L^2}\leq C\kappa^{-\f72}\big(1+|\kappa\eta|\big)^{-\f74},\\ &\big\|Y^3\partial_Y\tilde{\Phi}\big\|_{L^2}\leq C\kappa^{-1}(1+|\kappa\eta|)^{-\f12} \big\|Y^3\mathrm{e}^{-c\big(1+|\kappa\eta|^{\f12}\big)\kappa Y}\big\|_{L^2} \leq C\kappa^{-\f92}\big(1+|\kappa\eta|\big)^{-\f94}. \end{align*} For $\beta\in\{0,1,2\}$, we have \begin{align*} &(2\beta+1)\|Y^{\beta}\tilde{\Phi}\|_{L^2}^2 =\big\langle |\tilde{\Phi}|^2,\partial_Y\big(Y^{2\beta+1}\big)\big\rangle=-\big\langle\partial_Y(|\tilde{\Phi}|^2),Y^{2\beta+1} \big\rangle\\ &=-2\mathbf{Re}\langle Y^{\beta}\tilde{\Phi}, Y^{\beta+1}\partial_Y\tilde{\Phi}\rangle\leq 2\big\|Y^{\beta}\tilde{\Phi}\big\|_{L^2}\big\|Y^{\beta+1}\partial_Y\tilde{\Phi}\big\|_{L^2}. \end{align*} Then we obtain \begin{align*} &\big\|Y^{\beta}\tilde{\Phi}\big\|_{L^2}\leq \dfrac{2}{2\beta+1}\big\|Y^{\beta+1}\partial_Y\tilde{\Phi}\big\|_{L^2}\leq C\kappa^{-\f{5+2\beta}{2}}\big(1+|\kappa\eta|\big)^{-\f{5+2\beta}{4}}. \end{align*} This proves the lemma. \end{proof} \begin{lemma}\label{lem:Airy-bound} Let $\kappa>0$ and $\mathbf{Im}\eta<0$. Then it holds that \begin{align*} &|\partial_Y\tilde{\Phi}(0)|\geq C^{-1}(1+|\kappa\eta|)^{-\f12}(\kappa+3\alpha)^{-1}.
\end{align*} \end{lemma} \begin{proof} By Lemma \ref{lem:ham-bound}, we have \begin{align*} -\partial_Y\tilde{\Phi}(0) &=\int_{0}^{+\infty}\tilde{A}(Y)\mathrm{e}^{-\alpha Y}\mathrm{d}Y = \int_{0}^{+\infty}\dfrac{Ai\big( \mathrm{e}^{\mathrm{i}\frac{\pi}{6}}\kappa(Y+\eta)\big)}{ Ai\big(\mathrm{e}^{\mathrm{i}\frac{\pi}{6}}\kappa\eta\big)}\mathrm{e}^{-\alpha Y}\mathrm{d}Y\\ &= \int_{0}^{+\infty}\dfrac{A_0'\big(\kappa(Y+\eta)\big)}{ A_0'(\kappa\eta)}\mathrm{e}^{-\alpha Y}\mathrm{d}Y\\ &=-\dfrac{A_0(\kappa\eta)}{\kappa A_0'(\kappa\eta)} -\int_{0}^{+\infty}\dfrac{A_0\big(\kappa(Y+\eta)\big)}{ \kappa A_0'(\kappa\eta)}\partial_Y\big(\mathrm{e}^{-\alpha Y}\big)\mathrm{d}Y, \end{align*} which along with Lemma \ref{lem:Airy-w} gives \begin{align*} \kappa|A_0'(\kappa\eta)||\partial_Y\tilde{\Phi}(0)|&\geq \big|A_0(\kappa\eta)\big|- \int_{0}^{+\infty}\big|A_0\big(\kappa(Y+\eta)\big)\big| \big|\partial_Y\big(\mathrm{e}^{-\alpha Y}\big)\big|\mathrm{d}Y\\ &\geq \big|A_0(\kappa\eta)\big|- \int_{0}^{+\infty} \mathrm{e}^{-\kappa Y/3} \big|A_0(\kappa\eta)\big|\big| \partial_Y\big(\mathrm{e}^{-\alpha Y}\big)\big|\mathrm{d}Y\\ &= \big|A_0(\kappa\eta)\big|+ \big|A_0(\kappa\eta)\big|\int_{0}^{+\infty} \mathrm{e}^{-\kappa Y/3} \partial_Y\big(\mathrm{e}^{-\alpha Y}\big)\mathrm{d}Y\\ &= \dfrac{\kappa}{3}\big|A_0(\kappa\eta)\big|\int_{0}^{+\infty} \mathrm{e}^{-\kappa Y/3-\alpha Y}\mathrm{d}Y\\ &=\big|A_0(\kappa\eta)\big|\dfrac{\kappa}{ \kappa+3\alpha}, \end{align*} which along with Lemma \ref{lem:Airy-p1} gives \begin{align*} |\partial_Y\tilde{\Phi}(0)|&\geq \dfrac{\big|A_0(\kappa\eta)\big|}{(\kappa+3\alpha)|A_0'(\kappa\eta)|}\geq C^{-1}(1+|\kappa\eta|)^{-\f12}(\kappa+3\alpha)^{-1}, \end{align*} which gives our result. \end{proof} \end{CJK*} \end{document}
\begin{document} \title{On the Korteweg-de Vries long-wave approximation of the Gross-Pitaevskii equation I} \author{ \renewcommand{\thefootnote}{\arabic{footnote}} Fabrice B\'ethuel \footnotemark[1], Philippe Gravejat \footnotemark[2], Jean-Claude Saut \footnotemark[3], Didier Smets \footnotemark[4]} \footnotetext[1]{UPMC, Universit\'e Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France. E-mail: [email protected]} \footnotetext[2]{Centre de Recherche en Math\'ematiques de la D\'ecision, Universit\'e Paris Dauphine, Place du Mar\'echal De Lattre De Tassigny, 75775 Paris Cedex 16, France, and \'Ecole Normale Sup\'erieure, DMA, UMR 8553, F-75005, Paris, France. E-mail: [email protected]} \footnotetext[3]{Laboratoire de Math\'ematiques, Universit\'e Paris Sud and CNRS UMR 8628, B\^atiment 425, 91405 Orsay Cedex, France. E-mail: [email protected]} \footnotetext[4]{UPMC, Universit\'e Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France, and \'Ecole Normale Sup\'erieure, DMA, UMR 8553, F-75005, Paris, France. E-mail: [email protected]} \maketitle \begin{abstract} The fact that the Korteweg-de Vries equation offers a good approximation of long-wave solutions of small amplitude to the one-dimensional Gross-Pitaevskii equation was derived several years ago in the physical literature (see e.g. \cite{KuznZak1}). In this paper, we provide a rigorous proof of this fact, and compute a precise estimate for the error term. Our proof relies on the integrability of both equations. In particular, we give a relation between the invariants of the two equations, which, we hope, is of independent interest. \end{abstract} \section{Introduction} In this paper, we consider the one-dimensional Gross-Pitaevskii equation \renewcommand{\theequation}{GP} \begin{equation} \label{GP} i \partial_t \Psi + \partial_{\rm x}^2 \Psi = \Psi (|\Psi|^2 - 1) \ {\rm on} \ \mathbb{R} \times \mathbb{R}, \end{equation} which is a version of the defocusing cubic nonlinear Schr\"odinger equation and appears as a relevant model in various areas of physics: Bose-Einstein condensation, fluid mechanics (see e.g. \cite{GinzPit1,Pitaevs1,Gross1,Coste1}), nonlinear optics (see e.g. \cite{KivsLut1}). We supplement this equation with the boundary condition at infinity \renewcommand{\theequation}{\arabic{equation}} \setcounter{equation}{0} \begin{equation} \label{bdinfini} |\Psi({\rm x}, t)| \to 1, \ {\rm as} \ |{\rm x}| \to + \infty. \end{equation} This boundary condition is suggested by the formal conservation of the energy (see \eqref{GLE} below), and by the use of the Gross-Pitaevskii equation as a physical model, e.g. for the modelling of ``dark solitons'' in nonlinear optics (see \cite{KivsLut1}). Note that boundary condition \eqref{bdinfini} ensures that \eqref{GP} has a truly nonlinear dynamics, contrary to the case of a null condition at infinity, where the dynamics is governed by dispersion and scattering. In particular, \eqref{GP} has nontrivial localized coherent structures called ``solitons''. At least on a formal level, the Gross-Pitaevskii equation is Hamiltonian. The conserved Hamiltonian is a Ginzburg-Landau energy, namely \begin{equation} \label{GLE} E(\Psi) = \frac{1}{2} \int_{\mathbb{R}} |\partial_{\rm x} \Psi|^2 + \frac{1}{4} \int_{\mathbb{R}} (1 - |\Psi|^2)^2 \equiv \int_{\mathbb{R}} e(\Psi). \end{equation} In this paper, we will only consider finite energy solutions to \eqref{GP}.
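To fix ideas, let us recall a classical example of a finite energy solution; the computation below is standard and is recorded here only for illustration. The stationary kink (or black soliton) $$\Psi({\rm x}, t) = \tanh \Big( \frac{{\rm x}}{\sqrt{2}} \Big)$$ solves \eqref{GP}, satisfies the boundary condition \eqref{bdinfini}, and has finite Ginzburg-Landau energy, namely $$E(\Psi) = \frac{1}{2} \int_{\mathbb{R}} \frac{1}{2}\, {\rm sech}^4 \Big( \frac{{\rm x}}{\sqrt{2}} \Big)\, d{\rm x} + \frac{1}{4} \int_{\mathbb{R}} {\rm sech}^4 \Big( \frac{{\rm x}}{\sqrt{2}} \Big)\, d{\rm x} = \frac{1}{2} \int_{\mathbb{R}} {\rm sech}^4 \Big( \frac{{\rm x}}{\sqrt{2}} \Big)\, d{\rm x} = \frac{2 \sqrt{2}}{3}.$$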
Similarly, as far as it might be defined, the momentum \begin{equation} \label{VectP} P(\Psi) = \frac{1}{2} \int_{\mathbb{R}} \langle i \mathfrak{p}artial_{\rm x} \Psi, \Psi \rangle \end{equation} is formally conserved. Another quantity which is formally conserved by the flow is the mass \begin{equation} \label{alamasse} m(\Psi) = \frac{1}{2} \int_{\mathbb{R}} \Big( |\Psi|^2 - 1 \Big). \end{equation} Equation \eqref{GP} is integrable by means of the inverse scattering method, and it has been formally analyzed within this framework in \cite{ShabZak2}, and rigorously in \cite{GeraZha1}. The formalism of inverse scattering provides an infinite number of invariant functionals for the Gross-Pitaevskii equation and our proofs rely crucially on several of them. Concerning the Cauchy problem, it can be shown (see \cite{Zhidkov1,Gerard2}) that \eqref{GP} is locally well-posed in the spaces $$X^k(\mathbb{R}) = \{ u \in L^1_{\rm loc}(\mathbb{R}, \mathbb{C}), \ {\rm s.t.} \ E(u) < + \infty, \ {\rm and} \ \mathfrak{p}artial_{\rm x} u \in H^{k - 1}(\mathbb{R}) \},$$ for any $k \geq 1$, and globally well-posed for $k = 1.$ In the one-dimensional case considered here, it is also globally well-posed for $k\geq 2$. \begin{theorem} \label{thm:existe} Let $k \in \mathbb{N}^*$ and $\Psi_0 \in X^k(\mathbb{R})$. Then, there exists a unique solution $\Psi(\cdot, t)$ in $\mathcal{C}^0(\mathbb{R}, X^k(\mathbb{R}))$ to \eqref{GP} with initial data $\Psi_0$. If $\Psi_0$ belongs to $X^{k+2}(\mathbb{R})$, then the map $t \mapsto \Psi(\cdot, t)$ belongs to $\mathcal{C}^1(\mathbb{R}, X^k(\mathbb{R}))$ and $\mathcal{C}^0(\mathbb{R}, X^{k+2}(\mathbb{R}))$. Moreover, the flow map $\Psi_0 \mapsto \Psi(\cdot, T)$ is continuous on $X^k(\mathbb{R})$ for any fixed $T \in \mathbb{R}$. \end{theorem} Furthermore, the energy is conserved along the flow, as well as the momentum, at least under suitable assumptions (see e.g. \cite{BeGrSaS1}). On the other hand, the rigorous derivation of conservation of mass raises some difficulties. If $\Psi$ does not vanish, one may write $$\Psi = \mathfrak{s}qrt{\rho} \exp i \mathfrak{v}arphi.$$ This leads to the hydrodynamic form of the equation, with $v = 2 \mathfrak{p}artial_{\rm x} \mathfrak{v}arphi,$ \begin{equation} \label{HDGP} \left\{ \begin{array}{ll} \mathfrak{p}artial_t \rho + \mathfrak{p}artial_{\rm x} (\rho v) = 0,\\ \rho (\mathfrak{p}artial_t v + v.\mathfrak{p}artial_{\rm x} v) + \mathfrak{p}artial_{\rm x} (\rho^2) = \rho \mathfrak{p}artial_{\rm x} \Big( \frac{ \mathfrak{p}artial_{\rm x}^2 \rho}{\rho} - \frac{|\mathfrak{p}artial_{\rm x} \rho|^2}{2 \rho^2} \Big). \end{array} \right. \end{equation} If one neglects the right-hand side of the second equation, which is often referred to as the quantum pressure, system \eqref{HDGP} yields the Euler equation for a compressible fluid, with pressure law $p(\rho) = \rho^2$. Since the right-hand side of \eqref{HDGP} contains third order derivatives, this approximation is only relevant in the long-wave limit. A rigorous derivation of this asymptotics was derived by Grenier in \cite{Grenier1} for different conditions at infinity. Recall that linearizing the compressible Euler equation with pressure $p(\rho) = \rho^2$, around the constant solution $\rho = 1$ and $v = 0$, one obtains the system \begin{equation} \label{ondes} \left\{ \begin{array}{ll} \mathfrak{p}artial_t \mathfrak{u}prho + \mathfrak{p}artial_{\rm x} \mathfrak{v} = 0,\\ \mathfrak{p}artial_t \mathfrak{v} + 2 \mathfrak{p}artial_{\rm x} \mathfrak{u}prho = 0, \end{array} \right. 
\end{equation} which is equivalent to the wave equation with speed $c_s$ given by $$c_s^2 = 2.$$ This speed is referred as the sound speed for the Gross-Pitaevskii equation. In this setting, the wave equation \eqref{ondes} appears as an approximation of the Gross-Pitaevskii equation. As mentioned above, this amounts however to neglect the quantum pressure, coming from the dispersive properties of the Schr\"odinger equation, as well as to restrict ourselves to small long-wave data, so that the wave equation approximates the Euler equation. Rigorous mathematical evidence of this fact is provided in \cite{BetDaSm1}. In order to specify the nature of the perturbation as well as of the long-wave asymptotics, we introduce a small parameter $0 < \mathfrak{v}arepsilon < 1$ and set $$\left\{ \begin{array}{ll} \rho({\rm x}, t) = 1 + \frac{\mathfrak{v}arepsilon}{\mathfrak{s}qrt{2}} a_\mathfrak{v}arepsilon(\mathfrak{v}arepsilon {\rm x},\mathfrak{v}arepsilon t),\\ v({\rm x}, t) = \mathfrak{v}arepsilon v_\mathfrak{v}arepsilon(\mathfrak{v}arepsilon {\rm x}, \mathfrak{v}arepsilon t), \end{array} \right.$$ so that system \eqref{HDGP} translates into \begin{equation} \label{eq:dynaslow} \left\{ \begin{array}{ll} \mathfrak{p}artial_t a_\mathfrak{v}arepsilon + \mathfrak{s}qrt{2} \mathfrak{p}artial_{\rm x} v_\mathfrak{v}arepsilon = - \mathfrak{v}arepsilon \mathfrak{p}artial_{\rm x} (a_\mathfrak{v}arepsilon v_\mathfrak{v}arepsilon),\\ \mathfrak{p}artial_t v_\mathfrak{v}arepsilon + \mathfrak{s}qrt{2} \mathfrak{p}artial_{\rm x} a_\mathfrak{v}arepsilon = \mathfrak{v}arepsilon \bigg( - v_\mathfrak{v}arepsilon \cdot \mathfrak{p}artial_{\rm x} v_\mathfrak{v}arepsilon + 2 \mathfrak{p}artial_{\rm x} \Big( \frac{\mathfrak{p}artial_{\rm x}^2 \mathfrak{s}qrt{\mathfrak{s}qrt{2} + \mathfrak{v}arepsilon a_\mathfrak{v}arepsilon}}{\mathfrak{s}qrt{\mathfrak{s}qrt{2} + \mathfrak{v}arepsilon a_\mathfrak{v}arepsilon}} \Big) \bigg). \end{array} \right. \end{equation} Specifying a result of \cite{BetDaSm1} in dimension one, we are led to \begin{theorem}[\cite{BetDaSm1}] \label{thm:un} Let $s \geq 2$. There exists some positive constant $K(s)$ such that, given any initial datum $(a^0_\mathfrak{v}arepsilon, v^0_\mathfrak{v}arepsilon) \in H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})$ verifying $$K(s) \mathfrak{v}arepsilon \| (a^0_\mathfrak{v}arepsilon, v^0_\mathfrak{v}arepsilon) \|_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})} \leq 1,$$ there exists some real number $$T_\mathfrak{v}arepsilon \geq \frac{1}{K(s) \mathfrak{v}arepsilon^2 \|(a^0_\mathfrak{v}arepsilon, v^0_\mathfrak{v}arepsilon) \|_{H^{s+1}(\mathbb{R}) \times H^s(\mathbb{R})}},$$ such that system \eqref{eq:dynaslow} has a unique solution $(a_\mathfrak{v}arepsilon, v_\mathfrak{v}arepsilon) \in \mathcal{C}^0([0, \mathfrak{v}arepsilon T_\mathfrak{v}arepsilon], H^{s+1}(\mathbb{R}) \times H^s(\mathbb{R}))$ satisfying $$\| (a_\mathfrak{v}arepsilon(\cdot, \mathfrak{v}arepsilon t), v_\mathfrak{v}arepsilon(\cdot, \mathfrak{v}arepsilon t)) \|_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})} \leq K(s) \| (a^0_\mathfrak{v}arepsilon, v^0_\mathfrak{v}arepsilon) \|_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})}, \ {\rm and} \ \frac{1}{2} \leq \rho(\cdot, t) \leq 2,$$ for any $t \in [0, T_\mathfrak{v}arepsilon]$. 
Let $(\mathfrak{a}, \mathfrak{v})$ denote the solution of the free-wave equation \begin{equation} \label{eq:wamu0} \left\{ \begin{array}{ll} \mathfrak{p}artial_t \mathfrak{a} + \mathfrak{s}qrt{2} \mathfrak{p}artial_{\rm x} \mathfrak{v} = 0,\\ \mathfrak{p}artial_t \mathfrak{v} + \mathfrak{s}qrt{2} \mathfrak{p}artial_{\rm x} \mathfrak{a} = 0, \end{array} \right. \end{equation} with initial datum $(a^0_\mathfrak{v}arepsilon, v^0_\mathfrak{v}arepsilon)$, then for any $0 \leq t \leq T_\mathfrak{v}arepsilon$, we have \begin{align*} & \| (a_\mathfrak{v}arepsilon, v_\mathfrak{v}arepsilon)(\cdot, \mathfrak{v}arepsilon t) - (\mathfrak{a}, \mathfrak{v})(\cdot, \mathfrak{v}arepsilon t) \|_{H^{s-2}(\mathbb{R}) \times H^{s-2}(\mathbb{R})}\\ \leq K(s) \Big( \mathfrak{v}arepsilon^2 t \| (a^0_\mathfrak{v}arepsilon, & v^0_\mathfrak{v}arepsilon) \|_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})}^2 + \mathfrak{v}arepsilon^3 t \|(a^0_\mathfrak{v}arepsilon, v^0_\mathfrak{v}arepsilon)\|_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})} \Big). \end{align*} \end{theorem} \begin{remark} \label{rbs} Notice that the bounds on $K(s)$ provided by the proof of Theorem \ref{thm:un} in \cite{BetDaSm1} blow up as $s$ tends to $+ \infty$. An interesting question is therefore to determine whether the constant $K(s)$ may be bounded independently of $s$. In particular, it would be of interest to extend the result to the limiting case $s = + \infty$. \end{remark} The purpose of the present paper is to consider even smaller perturbations of the constant one, and to characterize the deviation from the wave equation on larger time scales. Our initial data has the form $$\left\{ \begin{array}{ll} \rho({\rm x}, 0) = 1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon^0(\mathfrak{v}arepsilon {\rm x}),\\ v({\rm x}, 0) = \frac{\mathfrak{v}arepsilon^2}{6 \mathfrak{s}qrt{2}} W_\mathfrak{v}arepsilon^0(\mathfrak{v}arepsilon {\rm x}), \end{array} \right.$$ where $N_\mathfrak{v}arepsilon^0$ and $W_\mathfrak{v}arepsilon^0$ are uniformly bounded in some Sobolev space $H^s(\mathbb{R})$ for sufficiently large $s$. Applying Theorem \ref{thm:un} to such data, that is for $a_\mathfrak{v}arepsilon^0 = - \frac{\mathfrak{v}arepsilon \mathfrak{s}qrt{2} }{6} N_\mathfrak{v}arepsilon^0$ and $v_\mathfrak{v}arepsilon^0 = \frac{\mathfrak{v}arepsilon}{6 \mathfrak{s}qrt{2}} W_\mathfrak{v}arepsilon^0$, yields uniform bounds on a time scale $T_\mathfrak{v}arepsilon = \mathcal{O}(\mathfrak{v}arepsilon^{- 3})$. More precisely, setting $$n_\mathfrak{v}arepsilon(\mathfrak{v}arepsilon {\rm x}, \mathfrak{v}arepsilon t) = - \frac{6}{\mathfrak{v}arepsilon \mathfrak{s}qrt{2}} a_\mathfrak{v}arepsilon(\mathfrak{v}arepsilon {\rm x}, \mathfrak{v}arepsilon t), \ {\rm and} \ w_\mathfrak{v}arepsilon(\mathfrak{v}arepsilon {\rm x}, \mathfrak{v}arepsilon t) = \frac{6 \mathfrak{s}qrt{2}}{\mathfrak{v}arepsilon} v_\mathfrak{v}arepsilon(\mathfrak{v}arepsilon {\rm x}, \mathfrak{v}arepsilon t),$$ it follows for such initial data from Theorem \ref{thm:un} that we have \begin{prop} \label{prop:propre} Assume $s \geq 2$ and $K \mathfrak{v}arepsilon^2 \| (N_\mathfrak{v}arepsilon^0, W_\mathfrak{v}arepsilon^0) \|_{H^{s + 1(\mathbb{R})} \times H^s(\mathbb{R})} \leq 1$. 
Let $(\mathfrak{n}, \mathfrak{w})$ denote the solution of the free wave equation \begin{equation} \label{eq:wamu} \left\{ \begin{array}{ll} \mathfrak{p}artial_t \big( \mathfrak{s}qrt{2} \mathfrak{n} \big) - \mathfrak{p}artial_{\rm x} \mathfrak{w} = 0,\\ \mathfrak{p}artial_t \mathfrak{w} - 2 \mathfrak{p}artial_{\rm x} \big( \mathfrak{s}qrt{2} \mathfrak{n} \big) = 0, \end{array} \right. \end{equation} with initial datum $(N_\mathfrak{v}arepsilon^0, W_\mathfrak{v}arepsilon^0)$. Then, for any $0 \leq t \leq T_\mathfrak{v}arepsilon$, we have \begin{equation} \begin{split} \label{eq:wavegood} & \|(n_\mathfrak{v}arepsilon, w_\mathfrak{v}arepsilon)(\cdot, \mathfrak{v}arepsilon t) - (\mathfrak{n}, \mathfrak{w})(\cdot, \mathfrak{v}arepsilon t)\|_{H^{s-2}(\mathbb{R}) \times H^{s-2}(\mathbb{R})}\\ \leq K \mathfrak{v}arepsilon^3 t \Big( \|(N^0_\mathfrak{v}arepsilon, & W^0_\mathfrak{v}arepsilon) \|_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})} + \| (N^0_\mathfrak{v}arepsilon, W^0_\mathfrak{v}arepsilon) \|^2_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})} \Big), \end{split} \end{equation} where $$T_\mathfrak{v}arepsilon = \frac{1}{K \mathfrak{v}arepsilon^3 \| (N_\mathfrak{v}arepsilon^0, W_\mathfrak{v}arepsilon^0) \|_{H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})}}.$$ \end{prop} In particular, if $N_\mathfrak{v}arepsilon^0$ and $W_\mathfrak{v}arepsilon^0$ are required to be uniformly bounded in $H^{s + 1}(\mathbb{R}) \times H^s(\mathbb{R})$, then in view of \eqref{eq:wavegood}, the wave equation provides a good approximation on time scales of order $o(\mathfrak{v}arepsilon^{- 3})$. This approximation ceases to be valid for times of order $\mathcal{O}(\mathfrak{v}arepsilon^{- 3})$ as the subsequent analysis will show. The general solution to \eqref{eq:wamu} may be written as $$(\mathfrak{n}, \mathfrak{w}) = (\mathfrak{n}^+, \mathfrak{w}^+) + (\mathfrak{n}^-, \mathfrak{w}^-),$$ where the functions $(\mathfrak{n}^\mathfrak{p}m, \mathfrak{w}^\mathfrak{p}m)$ are solutions to \eqref{eq:wamu} given by the d'Alembert formulae, \begin{align*} \big( \mathfrak{n}^+({\rm x}, t), \mathfrak{w}^+({\rm x}, t) \big) = \big( N^+({\rm x} - \mathfrak{s}qrt{2} t), W^+( {\rm x} - \mathfrak{s}qrt{2} t) \big),\\ \big( \mathfrak{n}^-({\rm x}, t), \mathfrak{w}^-({\rm x}, t) \big) = \big( N^-({\rm x} + \mathfrak{s}qrt{2} t), W^-( {\rm x} + \mathfrak{s}qrt{2} t) \big), \end{align*} where the profiles $N^\mathfrak{p}m$ and $W^\mathfrak{p}m$ are real-valued functions on $\mathbb{R}$. Solutions may therefore be split into right and left going waves of speed $\mathfrak{s}qrt{2}$. Since the functions $(\mathfrak{n}^\mathfrak{p}m, \mathfrak{w}^\mathfrak{p}m)$ are solutions to \eqref{eq:wamu}, it follows that \begin{equation} \big( 2 N^+ + W^+ \big)_{\rm x} = 0, \ {\rm and} \ \big( 2 N^- - W^- \big)_{\rm x} = 0, \end{equation} so that, if the functions decay to zero at infinity, then \begin{equation} \label{heuriger} 2 N^\mathfrak{p}m = \mp W^\mathfrak{p}m = \frac{2 N_\mathfrak{v}arepsilon^0 \mp W_\mathfrak{v}arepsilon^0}{2}. \end{equation} At this stage, it is worthwhile to notice that the Gross-Pitaevskii equation, as well as the wave equation, is invariant under the symmetry $x \to - x$. It remains to derive the appropriate approximation for time scales of order $\mathcal{O}(\mathfrak{v}arepsilon^{- 3})$. On a formal level, this was performed in \cite{KuznZak1}. We wish to give here a rigorous proof of that approximation. 
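Before turning to the slow variables, let us record, for the reader's convenience, the one-line computation behind the relations between the profiles $N^\pm$ and $W^\pm$ stated above; it only uses the explicit form of \eqref{eq:wamu}. Inserting $(\mathfrak{n}^+, \mathfrak{w}^+)({\rm x}, t) = (N^+({\rm x} - \sqrt{2} t), W^+({\rm x} - \sqrt{2} t))$ into the first equation of \eqref{eq:wamu} yields $$- 2 \big( N^+ \big)'({\rm x} - \sqrt{2} t) - \big( W^+ \big)'({\rm x} - \sqrt{2} t) = 0,$$ that is $\big( 2 N^+ + W^+ \big)_{\rm x} = 0$, and similarly $\big( 2 N^- - W^- \big)_{\rm x} = 0$ for the left-going wave. Since the profiles decay to zero at infinity, this gives $2 N^\pm = \mp W^\pm$, and \eqref{heuriger} then follows by evaluating $\mathfrak{n} = N^+ + N^-$ and $\mathfrak{w} = W^+ + W^-$ at $t = 0$.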
In view of the previous discussion, and following the approach of \cite{KuznZak1}, we introduce the slow variables \begin{equation} \label{martinsepp} x = \mathfrak{v}arepsilon({\rm x} + \mathfrak{s}qrt{2} t), \ {\rm and} \ \tau = \frac{\mathfrak{v}arepsilon^3}{2 \mathfrak{s}qrt{2}} t. \end{equation} The definition of the new variable $x$ corresponds to a reference frame travelling to the left with speed $\mathfrak{s}qrt{2}$ in the original variables $({\rm x}, t)$. In this frame, the wave $(\mathfrak{n}^-, \mathfrak{w}^-)$, originally travelling to the left, is now stationary, whereas the wave $(\mathfrak{n}^+,\mathfrak{w}^+)$ travelling to the right now has a speed equal to $8 \mathfrak{v}arepsilon^{- 2}$. This change of variable is therefore particularly appropriate for the study of waves travelling to the left. This will lead us to impose some additional assumptions which will imply the smallness of $N^+$ and $W^+$. Notice that the change of frame breaks the symmetry of the original equations. In view of \eqref{martinsepp}, we then define the rescaled functions $N_\mathfrak{v}arepsilon$ and $\mathbb{T}heta_\mathfrak{v}arepsilon$ as follows \begin{equation} \label{slow-var} \begin{split} N_\mathfrak{v}arepsilon(x, \tau) & = \frac{6}{\mathfrak{v}arepsilon^2} \eta({\rm x}, t) = \frac{6}{\mathfrak{v}arepsilon^2} \eta \Big( \frac{x}{\mathfrak{v}arepsilon} - \frac{4 \tau}{\mathfrak{v}arepsilon^3}, \frac{2 \mathfrak{s}qrt{2} \tau}{\mathfrak{v}arepsilon^3} \Big),\\ \mathbb{T}heta_\mathfrak{v}arepsilon(x, \tau) & = \frac{6 \mathfrak{s}qrt{2}}{\mathfrak{v}arepsilon} \mathfrak{v}arphi({\rm x}, t) = \frac{6 \mathfrak{s}qrt{2}}{\mathfrak{v}arepsilon} \mathfrak{v}arphi \Big( \frac{x}{\mathfrak{v}arepsilon} - \frac{4 \tau}{\mathfrak{v}arepsilon^3}, \frac{2 \mathfrak{s}qrt{2} \tau}{\mathfrak{v}arepsilon^3} \Big), \end{split} \end{equation} where $\Psi = \mathfrak{v}arrho \exp i \mathfrak{v}arphi$ and $\eta = 1 - \mathfrak{v}arrho^2= 1 - |\Psi|^2$. Our main theorem is \begin{theorem} \label{cochon} Let $\mathfrak{v}arepsilon > 0$ be given and assume that the initial data $\Psi_0(\cdot) = \Psi(\cdot, 0)$ belongs to $X^4(\mathbb{R})$ and satisfies the assumption \begin{equation} \label{grinzing1} \| N_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})} + \mathfrak{v}arepsilon \| \mathfrak{p}artial^4_x N_\mathfrak{v}arepsilon^0 \|_{L^2(\mathbb{R})}+ \|\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0\|_{H^3(\mathbb{R})} \leq K_0. \end{equation} Let $\mathcal{N}_\mathfrak{v}arepsilon$ and $\mathcal{M}_\mathfrak{v}arepsilon$ denote the solutions to the Korteweg-de Vries equation \renewcommand{{\rm th}eequation}{KdV} \begin{equation} \label{KdV} \mathfrak{p}artial_\tau N + \mathfrak{p}artial_x^3 N + N \mathfrak{p}artial_x N = 0 \end{equation} with initial data $N_\mathfrak{v}arepsilon^0$, respectively $\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0$. 
There exists positive constants $\mathfrak{v}arepsilon_0$ and $K_1$, depending possibly only on $K_0$ such that, if $\mathfrak{v}arepsilon \leq \mathfrak{v}arepsilon_0$, we have for any $\tau \in \mathbb{R}$, \renewcommand{{\rm th}eequation}{\mathfrak{a}rabic{equation}} \mathfrak{s}etcounter{equation}{18} \begin{equation} \begin{split} \label{eq:fortis} \| \mathcal{N}_\mathfrak{v}arepsilon & (\cdot, \tau) - N_\mathfrak{v}arepsilon(\cdot, \tau ) \|_{L^2(\mathbb{R})} + \| \mathcal{M}_\mathfrak{v}arepsilon(\cdot, \tau) - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon(\cdot, \tau ) \|_{L^2(\mathbb{R})}\\ & \leq K_1 \big( \mathfrak{v}arepsilon + \| N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})} \big) \exp (K_1 |\tau|). \end{split} \end{equation} \end{theorem} Theorem \ref{cochon} yields a convergence result to the \eqref{KdV} equation for appropriate initial data. Since the norms involved in \eqref{eq:fortis} are translation invariant, the \eqref{KdV} approximation can only be relevant if the waves travelling to the right are negligible. In view of our previous discussion, this is precisely the role of the term $\| N_\mathfrak{v}arepsilon^0 -\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})}$ in the right-hand side of \eqref{eq:fortis}. Indeed, in the setting of Theorem \ref{cochon}, the right going waves $N^+$ and $W^+$ are given by $$2 N^+ = - W^+ = N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0.$$ If the term $\| N_\mathfrak{v}arepsilon^0 -\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})}$ is small, then the \eqref{KdV} approximation is valid on a time interval (in the original time variable) $t \in [0, S_\mathfrak{v}arepsilon]$ with $$S_\mathfrak{v}arepsilon = o \bigg( \min \bigg\{ \frac{|\log(\mathfrak{v}arepsilon)|}{\mathfrak{v}arepsilon^3}, \frac{|\log(\|N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})})|}{\mathfrak{v}arepsilon^3} \bigg\} \bigg).$$ In particular, if $\| N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})} \leq C \mathfrak{v}arepsilon^\mathfrak{a}lpha$, with $\mathfrak{a}lpha > 0$, then the approximation is valid on a time interval $t \in [0, S_\mathfrak{v}arepsilon]$ with $S_\mathfrak{v}arepsilon = o(\mathfrak{v}arepsilon^{- 3} |\log(\mathfrak{v}arepsilon)|)$. Moreover, if $\| N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})}$ is of order $\mathcal{O}(\mathfrak{v}arepsilon)$, then the approximation error remains of order $\mathcal{O}(\mathfrak{v}arepsilon)$ on a time interval $t \in [0, S_\mathfrak{v}arepsilon]$ with $S_\mathfrak{v}arepsilon = \mathcal{O}(\mathfrak{v}arepsilon^{- 3})$. \begin{remark} We also show in the course of our proofs (see Proposition \ref{H3-control} below) that, under the assumptions of Theorem \ref{cochon}, the $H^3$-norms of $N_\mathfrak{v}arepsilon$ and $\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon$ remain uniformly bounded in time. 
Since the same property holds for the solutions $\mathcal{N}_\mathfrak{v}arepsilon$ and $\mathcal{M}_\mathfrak{v}arepsilon$, it follows by interpolation that the difference of the two solutions may also be computed in terms of $H^s$-norm as \begin{align*} \| \mathcal{N}_\mathfrak{v}arepsilon & (\cdot, \tau) - N_\mathfrak{v}arepsilon(\cdot, \tau ) \|_{H^s(\mathbb{R})} + \| \mathcal{M}_\mathfrak{v}arepsilon(\cdot, \tau) - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon(\cdot, \tau ) \|_{H^s(\mathbb{R})}\\ & \leq K \big( \mathfrak{v}arepsilon + \| N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})} \big)^{\mathfrak{a}lpha(s)} \exp (\mathfrak{a}lpha(s) K_1 |\tau|). \end{align*} for any $0 \leq s < 3$ and any $\tau \in \mathbb{R}$, where $\mathfrak{a}lpha(s) = 1 - \frac{s}{3}$, and where the constant $K$ depends possibly on $K_0$ and $s$. \end{remark} \begin{remark} As a matter of fact, we believe that for any $s \geq 0$, the following inequality holds \begin{equation} \label{fortisse3} \| \mathcal{N}_\mathfrak{v}arepsilon(\cdot, \tau) - N_\mathfrak{v}arepsilon(\cdot, \tau ) \|_{H^s(\mathbb{R})} \leq K(s) \big( \mathfrak{v}arepsilon^2 + \| N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^{s+3}(\mathbb{R})} \big) \exp ( K_1 |\tau|), \end{equation} for any $\tau \in \mathbb{R}$. To prove inequality \eqref{fortisse3} along the lines of the proof of Theorem \ref{cochon} would require a more general treatment of the invariants of the Gross-Pitaevskii equation, whereas in this paper, we have only handled the lower order ones (at the cost of sometimes tedious computations). In a forthcoming paper \cite{BeGrSaS3}, we make use of a different strategy avoiding invariants but at the cost of a higher loss of derivatives. Here also, as in Remark \ref{rbs}, it would be of interest to prove a result in $H^\infty(\mathbb{R})$. \end{remark} The functions $N_\mathfrak{v}arepsilon$ and $\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon$ are rigidly constrained one to the other as shown by the following \begin{theorem} \label{H3-controlbis} Let $\Psi$ be a solution to \eqref{GP} in $\mathcal{C}^0(\mathbb{R}, H^4(\mathbb{R}))$ with initial data $\Psi^0$. Assume that \eqref{grinzing1} holds. Then, there exists some positive constant $K$, which does not depend on $\mathfrak{v}arepsilon$ nor $\tau$, such that \begin{equation} \label{dobling1ter} \| N_\mathfrak{v}arepsilon(\cdot, \tau) \mathfrak{p}m \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} \leq \| N_\mathfrak{v}arepsilon^0 \mathfrak{p}m \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{L^2(\mathbb{R})} + K \mathfrak{v}arepsilon ^2 \big( 1 + |\tau| \big), \end{equation} for any $\tau \in \mathbb{R}$. \end{theorem} The approximation errors provided by Theorem \ref{cochon} and \ref{H3-controlbis} diverge as time increases. Concerning the weaker notion of consistency, we have the following result whose peculiarity is that the bounds are independent of time. \begin{theorem} \label{Bobby} Let $\Psi$ be a solution to \eqref{GP} in $\mathcal{C}^0(\mathbb{R}, H^4(\mathbb{R}))$ with initial data $\Psi^0$. Assume that \eqref{grinzing1} holds. 
Then, there exists some positive constant $K$, which does not depend on $\mathfrak{v}arepsilon$ nor $\tau$, such that \begin{equation} \label{jerrard} \| \mathfrak{p}artial_\tau U_\mathfrak{v}arepsilon + \mathfrak{p}artial^3_x U_\mathfrak{v}arepsilon + U_\mathfrak{v}arepsilon \mathfrak{p}artial_x U_\mathfrak{v}arepsilon \|_{L^2(\mathbb{R})} \leq K(\mathfrak{v}arepsilon + \|N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})}), \end{equation} for any $\tau \in \mathbb{R}$, where $U_\mathfrak{v}arepsilon = \frac{N_\mathfrak{v}arepsilon + \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{2}$. \end{theorem} The relevance of the function $U_\mathfrak{v}arepsilon$ will be discussed below. A typical example where the assumptions of Theorem \ref{cochon} apply is provided by travelling wave solutions to \eqref{GP}, i.e. solutions of the form $\Psi({\rm x}, t) = v_c({\rm x} + c t)$, where the profile $v_c$ is a complex-valued function defined on $\mathbb{R}$ satisfying a simple ordinary differential equation which may be integrated explicitly. Solutions then do exist for any value of the speed $c$ in the interval $[0, \mathfrak{s}qrt{2})$. Next, we choose the wave-length parameter to be $\mathfrak{v}arepsilon = \mathfrak{s}qrt{2 - c^2}$, and take as initial data $\Psi_\mathfrak{v}arepsilon$ the corresponding wave $v_c$. We consider the rescaled function $$\mathfrak{u}pnu_\mathfrak{v}arepsilon(x) = \frac{6}{\mathfrak{v}arepsilon^2} \eta_c \Big( \frac{{\rm x}}{\mathfrak{v}arepsilon} \Big),$$ where $\eta_c \equiv 1 - |v_c|^2$. The explicit integration of the travelling wave equation for $v_c$ leads to the formula $$\mathfrak{u}pnu_\mathfrak{v}arepsilon(x) = \mathfrak{u}pnu(x) \equiv \frac{3}{{\rm ch}^2 \big( \frac{x}{2} \big)}.$$ The function $\mathfrak{u}pnu$ is the classical soliton to the Korteweg-de Vries equation, which is moved by the \eqref{KdV} flow with constant speed equal to $1$, so that $$\mathcal{N}_\mathfrak{v}arepsilon(x, \tau) = \mathfrak{u}pnu(x - \tau).$$ On the other hand, we deduce from \eqref{slow-var} that $N^0_\mathfrak{v}arepsilon = \mathfrak{u}pnu$, so that $$N_\mathfrak{v}arepsilon(x, \tau) = \mathfrak{u}pnu \Big( x -\frac{4}{\mathfrak{v}arepsilon^2} \Big( 1 - \mathfrak{s}qrt{1 - \frac{\mathfrak{v}arepsilon^2}{2}} \Big) \tau \Big).$$ Therefore, we have for any $\tau\in \mathbb{R}$, $$\| \mathcal{N}_\mathfrak{v}arepsilon(\cdot, \tau) - N_\mathfrak{v}arepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} = \mathcal{O}(\mathfrak{v}arepsilon^2 \tau).$$ Concerning the phase $\mathfrak{v}arphi_c$ of $v_c$, we consider the scale change $$\mathbb{T}heta_\mathfrak{v}arepsilon^0(x) = \frac{6 \mathfrak{s}qrt{2}}{\mathfrak{v}arepsilon} \mathfrak{v}arphi_c \Big( \frac{x}{\mathfrak{v}arepsilon} \Big),$$ so that, in view of \cite{Graveja4}, $$\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0(x) = \mathfrak{s}qrt{1 - \frac{\mathfrak{v}arepsilon^2}{2}} \frac{\mathfrak{u}pnu(x)}{1 - \frac{\mathfrak{v}arepsilon^2}{6} \mathfrak{u}pnu(x)},$$ and hence, $$\| N_\mathfrak{v}arepsilon^0 - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})} = \mathcal{O}(\mathfrak{v}arepsilon^2).$$ This may suggest that the $\mathfrak{v}arepsilon$ error in inequality \eqref{eq:fortis} is not optimal. As a matter of fact, we believe that the optimal error term would be of order $\mathfrak{v}arepsilon^2$ (as mentioned in formula \eqref{fortisse3}). 
A proof of this claim would require to have higher order bounds on $N_\mathfrak{v}arepsilon$ and $\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon$. We next present some ideas in the proofs. We infer from \eqref{GP} the equations for $N_\mathfrak{v}arepsilon$ and $\mathbb{T}heta_\mathfrak{v}arepsilon$, namely \begin{equation} \label{slow1-0} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon - \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{\mathfrak{v}arepsilon^2}{2} \Big( \frac{1}{2} \mathfrak{p}artial_\tau N_\mathfrak{v}arepsilon + \frac{1}{3} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{1}{3} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \Big) = 0, \end{equation} and \begin{equation} \label{slow2-0} \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon - N_\mathfrak{v}arepsilon + \frac{\mathfrak{v}arepsilon^2}{2} \Big( \frac{1}{2} \mathfrak{p}artial_\tau \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{\mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon}{1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon} + \frac{1}{6} (\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^2 \Big) + \frac{\mathfrak{v}arepsilon^4}{24} \frac{(\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2}{(1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon)^2} = 0. \end{equation} The leading order in this expansion is provided by $N_\mathfrak{v}arepsilon - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon$ and its spatial derivative, so that an important step is to keep control on this term. In view of \eqref{slow1-0} and \eqref{slow2-0} and d'Alembert decomposition \eqref{heuriger}, we are led to introduce the new variables $U_\mathfrak{v}arepsilon$ and $V_\mathfrak{v}arepsilon$ defined by $$U_\mathfrak{v}arepsilon = \frac{N_\mathfrak{v}arepsilon + \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{2}, \ {\rm and} \ V_\mathfrak{v}arepsilon = \frac{N_\mathfrak{v}arepsilon - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{2},$$ and compute the relevant equations for $U_\mathfrak{v}arepsilon$ and $V_\mathfrak{v}arepsilon$, \begin{equation} \label{slow1} \mathfrak{p}artial_\tau U_\mathfrak{v}arepsilon + \mathfrak{p}artial_x^3 U_\mathfrak{v}arepsilon + U_\mathfrak{v}arepsilon \mathfrak{p}artial_x U_\mathfrak{v}arepsilon = - \mathfrak{p}artial_x^3 V_\mathfrak{v}arepsilon + \frac{1}{3}\mathfrak{p}artial_x \big( U_\mathfrak{v}arepsilon V_\mathfrak{v}arepsilon \big) + \frac{1}{6} \mathfrak{p}artial_x \big( V_\mathfrak{v}arepsilon^2 \big) - \mathfrak{v}arepsilon^2 R_{\mathfrak{v}arepsilon}, \end{equation} and \begin{equation} \label{slow2} \mathfrak{p}artial_\tau V_\mathfrak{v}arepsilon + \frac{8}{\mathfrak{v}arepsilon^2} \mathfrak{p}artial_x V_\mathfrak{v}arepsilon = \mathfrak{p}artial_x^3 U_\mathfrak{v}arepsilon + \mathfrak{p}artial_x^3 V_\mathfrak{v}arepsilon + \frac{1}{2} \mathfrak{p}artial_x (V_\mathfrak{v}arepsilon^2) - \frac{1}{6} \mathfrak{p}artial_x (U_\mathfrak{v}arepsilon)^2 -\frac{1}{3} \mathfrak{p}artial_x (U_\mathfrak{v}arepsilon V_\mathfrak{v}arepsilon) + \mathfrak{v}arepsilon^2 R_{\mathfrak{v}arepsilon}, \end{equation} where the remainder term $R_{\mathfrak{v}arepsilon}$ is given by the formula \begin{equation} \label{grouin} R_{\mathfrak{v}arepsilon} = \frac{N_\mathfrak{v}arepsilon\mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon}{6 (1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon)} + \frac{(\mathfrak{p}artial_x 
N_\mathfrak{v}arepsilon) (\mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon)}{3 (1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon)^2} + \frac{\mathfrak{v}arepsilon^2}{36} \frac{(\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^3}{(1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon)^3}. \end{equation} The left-hand side of equation \eqref{slow1} corresponds to the \eqref{KdV} operator applied to $U_\mathfrak{v}arepsilon$: a major step in the proof is therefore to establish that the right-hand side is small in suitable norms. This amounts in particular, as already mentioned, to show that $V_\mathfrak{v}arepsilon$, which is assumed to be small at time $\tau = 0$ remains small, and that $U_\mathfrak{v}arepsilon$, which is assumed to be bounded at time $\tau = 0$, remains bounded in appropriate Sobolev norm. To establish these estimates, we rely among other things on several conservation laws which are provided by the integrability of the one-dimensional \eqref{GP} equation. To illustrate the argument, we next present it for the $L^2$-norm, where we only need to invoke the conservation of energy and momentum. In the rescaled setting, the Ginzburg-Landau energy may be written as \begin{equation} \label{sirbu1} E(\Psi) = \frac{\mathfrak{v}arepsilon^3}{144} \Bigg( \int_\mathbb{R} \Big( (\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^2 + N_\mathfrak{v}arepsilon^2 \Big) + \frac{\mathfrak{v}arepsilon^2}{2} \int_\mathbb{R} \bigg( \frac{(\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2}{1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon} -\frac{1}{3} N_\mathfrak{v}arepsilon(\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^2 \bigg) \Bigg) \equiv \frac{\mathfrak{v}arepsilon^3}{18} \mathcal{E}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{equation} so that assumption \eqref{grinzing1} implies that \begin{equation} \label{pauli1} \mathcal{E}_1(N_\mathfrak{v}arepsilon^0, \mathbb{T}heta_\mathfrak{v}arepsilon^0) \leq K_0. \end{equation} On the other hand, when the energy $E(\Psi)$ is sufficiently small, which is the case at the limit $\mathfrak{v}arepsilon \to 0$, we may assume that $$\frac{1}{2} \leq |\Psi| \leq 2,$$ which may be translated as \begin{equation} \label{borninf} \frac{1}{4} \leq 1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon \leq 4, \end{equation} so that the rescaled energy $\mathcal{E}_1$ satisfies \begin{equation} \label{L2-control} \int_\mathbb{R} \Big( (\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^2 + N_\mathfrak{v}arepsilon^2 \Big) \leq K \mathcal{E}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{equation} where $K$ is some universal constant. Similarly, the momentum may be written as \begin{equation} \label{sirbu2} P(\Psi) = \frac{1}{2} \int_\mathbb{R} \eta \mathfrak{p}artial_{\rm x} \mathfrak{v}arphi = \frac{\mathfrak{v}arepsilon^3}{72 \mathfrak{s}qrt{2}} \int_\mathbb{R} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \equiv \frac{\mathfrak{v}arepsilon^3}{18} \mathcal{P}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon). 
\end{equation} Next, we compute $$\mathcal{E}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) - \mathfrak{s}qrt{2} \mathcal{P}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) = \frac{1}{8} \int_\mathbb{R} (N_\mathfrak{v}arepsilon - \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^2 + \frac{\mathfrak{v}arepsilon^2}{8} \int_\mathbb{R} \bigg( \frac{\mathfrak{p}artial_x N_\mathfrak{v}arepsilon^2}{1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon} - \frac{1}{3} N_\mathfrak{v}arepsilon (\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^2 \bigg),$$ so that \begin{equation} \label{wachovien} \Big| \mathcal{E}_1(N_\mathfrak{v}arepsilon^0, \mathbb{T}heta_\mathfrak{v}arepsilon^0) - \mathfrak{s}qrt{2} \mathcal{P}_1(N_\mathfrak{v}arepsilon^0, \mathbb{T}heta_\mathfrak{v}arepsilon^0) \Big| \leq K_0. \end{equation} Moreover, by the Sobolev embedding theorem and the inequality $2 a b \leq a^2 + b^2$, \begin{equation} \label{eq:wachovia} \mathcal{E}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) - \mathfrak{s}qrt{2} \mathcal{P}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \geq K^{-1} \Big( \int_\mathbb{R} V_\mathfrak{v}arepsilon^2 + \mathfrak{v}arepsilon^2 \int_\mathbb{R} (\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2\Big) - K \mathfrak{v}arepsilon^2 \bigg( \int_\mathbb{R} (\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^2 \bigg)^2, \end{equation} where $K$ refers to some universal constant. By conservation, we then have \begin{equation} \label{conserve} \frac{d}{d\tau} \big( \mathcal{E}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \big) = 0, \ {\rm and } \ \frac{d}{d\tau} \big( \mathcal{P}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \big) = 0. \end{equation} Invoking \eqref{pauli1} and \eqref{L2-control}, we are led to $$\| N_\mathfrak{v}arepsilon(\cdot, \tau)\|_{L^2(\mathbb{R})}^2 + \|\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})}^2 \leq K_0,$$ for any $\tau \in \mathbb{R}$. In turn, using \eqref{wachovien}, \eqref{eq:wachovia} and \eqref{conserve} yields $$\|V_\mathfrak{v}arepsilon(\cdot, \tau)\|_{L^2(\mathbb{R})}^2 \leq K \big( \|V_\mathfrak{v}arepsilon(0)\|_{L^2(\mathbb{R})}^2 + \mathfrak{v}arepsilon^2 \big).$$ It turns out that the other conservation laws for the Gross-Pitaevskii equation involve quantities which behave as higher order energies and others which behave as higher order momenta. We denote $\mathcal{E}_k(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon)$ and $\mathcal{P}_k(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon)$, respectively these quantities (precise expressions are provided in Section \ref{Invariants}). Using these invariants, we may perform a similar argument to control higher Sobolev norms. This gives \begin{prop} \label{H3-control} Let $\Psi$ be a solution to \eqref{GP} in $\mathcal{C}^0(\mathbb{R}, H^4(\mathbb{R}))$ with initial data $\Psi^0$. Assume that \eqref{grinzing1} holds. 
Then, there exists some positive constant $K$, which does not depend on $\mathfrak{v}arepsilon$ nor $\tau$, such that \begin{equation} \label{grinzing1bis} \| N_\mathfrak{v}arepsilon(\cdot, \tau) \|_{H^3(\mathbb{R})} + \mathfrak{v}arepsilon \| \mathfrak{p}artial_x^4 N_\mathfrak{v}arepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} + \|\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon(\cdot, \tau) \|_{H^3(\mathbb{R})} \leq K, \end{equation} and \begin{equation} \label{dobling1bis} \| N_\mathfrak{v}arepsilon(\cdot, \tau) \mathfrak{p}m \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon(\cdot, \tau) \|_{H^3(\mathbb{R})} \leq K \big( \| N_\mathfrak{v}arepsilon^0 \mathfrak{p}m \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon^0 \|_{H^3(\mathbb{R})} + \mathfrak{v}arepsilon \big), \end{equation} for any $\tau \in \mathbb{R}$. \end{prop} The proof of Theorem \ref{Bobby} follows directly from Proposition \ref{H3-control}. Using a standard energy method applied to the system \eqref{slow1-0} and \eqref{slow2-0} and taking advantage of the fact that the left-hand side of equation \eqref{slow2-0} is a transport operator with speed $\frac{8}{\mathfrak{v}arepsilon^2}$, we obtain Theorem \ref{H3-controlbis}. Finally, the proof of Theorem \ref{cochon} follows again from an energy method applied to the difference $W_\mathfrak{v}arepsilon = N_\mathfrak{v}arepsilon - \mathcal{N}_\mathfrak{v}arepsilon$ (and the equivalent for $\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon$). \begin{remark} It is worthwhile to stress that in the course of proving Proposition \ref{H3-control}, we have been led to prove a number of facts which, we believe, are of independent interest, and represent actually the bulk contribution of our paper. First, we have given expressions of the invariant quantities and proved that they are well-defined on the spaces $X^k(\mathbb{R})$: their expressions are not a straightforward consequence of the inductive formulae for the conservation laws provided by the inverse scattering method of \cite{ShabZak2}. Indeed, various renormalizations have to be applied to give a sound mathematical meaning to the expressions. Moreover, we have rigorously established that these quantities are conserved by the \eqref{GP} flow in the appropriate functional spaces. In a related direction, we have highlighted a strong and somewhat striking relationship between the \eqref{GP} invariants and the \eqref{KdV} invariants. More precisely, we have shown that, for any $1 \leq k \leq 4$ and for any functions in the appropriate spaces, $$\mathcal{E}_k(N, \mathfrak{p}artial_x \mathbb{T}heta) - \mathfrak{s}qrt{2} \mathcal{P}_k(N, \mathfrak{p}artial_x \mathbb{T}heta) = E_k^{KdV} \Big( \frac{N - \mathfrak{p}artial_x \mathbb{T}heta}{2} \Big) + \mathcal{O}(\mathfrak{v}arepsilon^2),$$ where $E_k^{KdV}$ refers to the \eqref{KdV} invariants (for more precise statements, see Proposition \ref{Controluv}). In particular, the \eqref{GP} invariants $\mathcal{E}_k$ and $\mathcal{P}_k$, as well as the \eqref{KdV} invariants, provide control on the $H^k$-norms. \end{remark} \begin{remark} It would be of interest to investigate further the relationships between \eqref{GP} and \eqref{KdV}, in particular at the level of the spectral problems associated to the corresponding inverse scattering methods. 
Indeed, recall that $\eqref{KdV}$ can be resolved using scattering and inverse scattering methods for the linear Schr\"odinger equation $$L_\mathcal{N}(\Phi) = -\mathfrak{p}artial_x^2 \Phi + \mathcal{N} \Phi,$$ whereas $\eqref{GP}$ is known to be tractable using the scattering and inverse scattering methods for the Dirac operator $$D_\Psi(\Phi_1, \Phi_2) = i \begin{pmatrix} 1 + \mathfrak{s}qrt{3} & 0\\0 & 1 - \mathfrak{s}qrt{3} \end{pmatrix} \begin{pmatrix} \mathfrak{p}artial_x \Phi_1 \\ \mathfrak{p}artial_x \Phi_2 \end{pmatrix} + \begin{pmatrix} 0 & \Psi^* \\ \Psi & 0 \end{pmatrix} \begin{pmatrix} \Phi_1 \\ \Phi_2 \end{pmatrix}.$$ Besides, it is known that the Schr\"odinger equation is a nonrelativistic limit of the Dirac equation. Kutznetsov and Zakharov \cite{KuznZak1} suggest that this correspondence can be carried out in the asymptotic limit considered here. Notice however that rigorous scattering and inverse scattering methods require decay and regularity assumptions on the data (see e.g. \cite{GeraZha1} where the datum is required to decay at least as $|x|^{- 4}$, as well as its first three derivatives). \end{remark} Let us emphasize again that our paper focuses on the left going waves. Our proof requires to impose conditions on the initial data to ensure that the right going wave is small. An interesting problem is to remove this assumption, i.e. to consider simultaneously both left and right going waves, and to study their interaction. We hope to handle this problem in a forthcoming paper, as well as the already mentioned optimal bounds. The paper is organized as follows. The next section is devoted to properties of the Cauchy problem. In Section \ref{Invariants}, we compute the invariants of the \eqref{GP} flow needed for our proofs, and show that they are conserved. In Section \ref{Rescaledinv}, we recast these invariants in the asymptotics considered here, and show the convergence to the \eqref{KdV} invariants. In Section \ref{Notime}, we give the proofs to Proposition \ref{H3-control} and Theorem \ref{Bobby}. Finally, in Section \ref{Expansion}, we present the energy methods which yield the proofs to Theorems \ref{cochon} and \ref{H3-controlbis}. While completing this work, we learned that D. Chiron and F. Rousset \cite{ChirRou2} were obtaining at the same time several results which are related to our analysis of the \eqref{KdV} limit, and also treated the higher dimensional case. \begin{merci} The authors are grateful to the referee for his forward looking remarks which helped to improve the manuscript.\\ A large part of this work was completed while the four authors were visiting the Wolfgang Pauli Institute in Vienna. We wish to thank warmly this institution, as well as Prof. Norbert Mauser for the hospitality and support. We are also thankful to Dr. Martin Sepp for fruitful digressions.\\ F.B., P.G. and D.S. are partially sponsored by project JC05-51279 of the Agence Nationale de la Recherche. J.-C. S. acknowledges support from project ANR-07-BLAN-0250 of the Agence Nationale de la Recherche. \end{merci} \mathfrak{n}umberwithin{cor}{section} \mathfrak{n}umberwithin{equation}{section} \mathfrak{n}umberwithin{lemma}{section} \mathfrak{n}umberwithin{prop}{section} \mathfrak{n}umberwithin{remark}{section} \mathfrak{n}umberwithin{theorem}{section} \mathfrak{s}ection{Global well-posedness for the Gross-Pitaevskii equation} \label{Gwp} The purpose of this section is to present the proof of Theorem \ref{thm:existe}. 
It is presumably well-known to experts, but we did not find it stated in the literature, and therefore we provide a proof here for the sake of completeness. Notice that Gallo \cite{Gallo3} already established the local well-posedness of \eqref{GP} in the spaces $X^k(\mathbb{R})$ for any $k \geq 1$ (see also \cite{Zhidkov1,Gerard2}). More precisely, we have \begin{theorem}[\cite{Gallo3,Gerard2}] \label{Augalop} Let $k \geq 2$. Given any function $\Psi_0 \in X^k(\mathbb{R})$, consider the unique solution $\Psi(\cdot, t)$ to \eqref{GP} in $\mathcal{C}^0(\mathbb{R}, X^1(\mathbb{R}))$ with initial data $\Psi_0$. Then, there exist $(T_-, T_+) \in (0, + \infty]^2$ such that the map $t \mapsto \Psi(\cdot, t)$ belongs to $\mathcal{C}^0((- T_-, T_+), X^k(\mathbb{R}))$. Moreover, either $T_+$ is equal to $+ \infty$, respectively $T_- = + \infty$, or \begin{equation} \label{impossible} \| \partial_{\rm x} \Psi(\cdot, t) \|_{H^{k-1}(\mathbb{R})} \to + \infty, \ {\rm as} \ t \to T_+ \ ({\rm resp.} \ t \to - T_-). \end{equation} If $\Psi_0$ belongs to $X^{k+2}(\mathbb{R})$, then the map $t \mapsto \Psi(\cdot, t)$ belongs to $\mathcal{C}^1((- T_-, T_+), X^k(\mathbb{R}))$ and $\mathcal{C}^0((- T_-, T_+), X^{k+2}(\mathbb{R}))$. Moreover, the flow map $\Psi_0 \mapsto \Psi(\cdot, T)$ is continuous on $X^k(\mathbb{R})$ for any fixed $- T_- < T < T_+$. \end{theorem} In view of Theorem \ref{Augalop}, the proof of Theorem \ref{thm:existe} reduces to establishing that the $H^{k - 1}$-norm of the function $\partial_{\rm x} \Psi$ cannot blow up in finite time. In \cite{Gallo3,Gerard1}, it is proved that the linear Schr\"odinger propagator $S(t)$ maps $X^k(\mathbb{R})$ into $X^k(\mathbb{R})$, so that we may invoke the Duhamel formula $$\Psi(\cdot, t) = S(t) \Psi_0 - \int_0^t S(t-s) \Psi(\cdot, s) \big( 1 - |\Psi(\cdot, s)|^2 \big) ds,$$ to estimate the $H^{k - 1}$-norm of the function $\partial_{\rm x} \Psi$ by \begin{equation} \label{grogne} \| \partial_{\rm x} \Psi(\cdot, t) \|_{H^{k-1}(\mathbb{R})} \leq \| \partial_{\rm x} \Psi_0 \|_{H^{k-1}(\mathbb{R})} + \bigg| \int_0^t \| \partial_{\rm x} \big( \Psi(\cdot, s) \big( 1 - |\Psi(\cdot, s)|^2 \big) \big) \|_{H^{k-1}(\mathbb{R})} ds \bigg|. \end{equation} To estimate the second term on the right-hand side, we invoke the following tame estimates. \begin{lemma} \label{Tamise} Let $k \geq 1$ and $(\psi_1,\psi_2) \in X^k(\mathbb{R})^2$. Given any $1 \leq j \leq k$, there exists some constant $K(j, k)$, depending only on $j$ and $k$, such that \begin{equation} \label{clyde} \big\| \partial_{\rm x}^j \big( \psi_1 \psi_2 \big) \big\|_{L^2(\mathbb{R})} \leq K(j, k) \Big( \| \psi_1 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x}^k \psi_2 \|_{L^2(\mathbb{R})} + \| \psi_2 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x}^k \psi_1 \|_{L^2(\mathbb{R})} \Big). \end{equation} \end{lemma} We postpone the proof of Lemma \ref{Tamise} and first complete the proof of Theorem \ref{thm:existe}.
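For orientation, let us note that in the simplest case $j = k = 1$, estimate \eqref{clyde} is nothing but the Leibniz rule combined with the triangle inequality: since $\partial_{\rm x}(\psi_1 \psi_2) = \psi_1 \partial_{\rm x} \psi_2 + \psi_2 \partial_{\rm x} \psi_1$, we directly obtain $$\big\| \partial_{\rm x} \big( \psi_1 \psi_2 \big) \big\|_{L^2(\mathbb{R})} \leq \| \psi_1 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x} \psi_2 \|_{L^2(\mathbb{R})} + \| \psi_2 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x} \psi_1 \|_{L^2(\mathbb{R})},$$ that is \eqref{clyde} with $K(1, 1) = 1$. The content of Lemma \ref{Tamise} is that the same structure persists for higher derivatives, only the constant depending on $j$ and $k$.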
\begin{proof}[Proof of Theorem \ref{thm:existe}] In view of \eqref{clyde}, inequality \eqref{grogne} yields \begin{equation} \label{wall-e} \| \partial_{\rm x} \Psi(\cdot, t) \|_{H^{k-1}(\mathbb{R})} \leq \| \partial_{\rm x} \Psi_0 \|_{H^{k-1}(\mathbb{R})} + K(k) \bigg| \int_0^t \big( 1 + \| \Psi(\cdot, s) \|_{L^\infty(\mathbb{R})}^2 \big) \| \partial_{\rm x} \Psi(\cdot, s) \|_{H^{k-1}(\mathbb{R})} ds \bigg|, \end{equation} where $K(k)$ is some constant depending only on $k$. Notice that $\Psi_0$ belongs to $X^1(\mathbb{R})$, so that in view of the conservation of energy proved in \cite{Gerard1} (see Theorem \ref{ConsE} below), we have $$E(\Psi(\cdot, t)) = E(\Psi_0).$$ Next, given any function $\psi \in X^1(\mathbb{R})$, there exists some universal positive constant $K$ such that \begin{equation} \label{holtz} \| \psi \|_{L^\infty(\mathbb{R})} \leq K \big( 1 + E(\psi) \big)^\frac{1}{2}. \end{equation} In particular, it follows from \eqref{holtz} that $\| \Psi(\cdot, s) \|_{L^\infty(\mathbb{R})} \leq K \big( 1 + E(\Psi_0) \big)^\frac{1}{2},$ so that by \eqref{wall-e}, we are led to $$\| \partial_{\rm x} \Psi(\cdot, t) \|_{H^{k-1}(\mathbb{R})} \leq K(k, \Psi_0) \bigg( 1 + \bigg| \int_0^t \| \partial_{\rm x} \Psi(\cdot, s) \|_{H^{k-1}(\mathbb{R})} ds \bigg| \bigg),$$ where $K(k, \Psi_0)$ is some constant depending only on $k$, $E(\Psi_0)$ and $\| \partial_{\rm x} \Psi_0 \|_{H^{k-1}(\mathbb{R})}$. Therefore, the Gronwall inequality yields $$\| \partial_{\rm x} \Psi(\cdot, t) \|_{H^{k-1}(\mathbb{R})} \leq K(k, \Psi_0) \exp \big( K(k, \Psi_0) |t| \big), $$ and it follows, going back to \eqref{impossible}, that $$T_- = T_+ = + \infty,$$ which completes the proof. \end{proof} We now provide the proof of Lemma \ref{Tamise}. \begin{proof}[Proof of Lemma \ref{Tamise}] We introduce some cut-off function $\chi \in \mathcal{C}^\infty(\mathbb{R}, [0, 1])$ such that \begin{equation} \label{ness} \chi = 1 \ {\rm on} \ (-1, 1), \ {\rm and} \ \chi = 0 \ {\rm on} \ \mathbb{R} \setminus (-2, 2), \end{equation} and set \begin{equation} \label{maree} \chi_R({\rm x}) = \chi \Big( \frac{{\rm x}}{R} \Big), \ \forall {\rm x} \in \mathbb{R}, \end{equation} for any $R > 1$. Using standard tame estimates, we have \begin{equation} \begin{split} \label{coe} & \big\| \partial_{\rm x}^j \big( \chi_R \psi_1 \chi_R \psi_2 \big) \big\|_{L^2(\mathbb{R})}\\ \leq K(j, k) \Big( \| & \chi_R \psi_1 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x}^k (\chi_R \psi_2) \|_{L^2(\mathbb{R})} + \| \chi_R \psi_2 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x}^k (\chi_R \psi_1) \|_{L^2(\mathbb{R})} \Big)\\ \leq K(j, k) \Big( \| & \psi_1 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x}^k (\chi_R \psi_2) \|_{L^2(\mathbb{R})} + \| \psi_2 \|_{L^\infty(\mathbb{R})} \| \partial_{\rm x}^k (\chi_R \psi_1) \|_{L^2(\mathbb{R})} \Big). \end{split} \end{equation} We now claim that \begin{equation} \label{affric} \| \partial_{\rm x}^j (\chi_R \psi) \|_{L^2(\mathbb{R})} \to \| \partial_{\rm x}^j \psi \|_{L^2(\mathbb{R})}, \ {\rm as} \ R \to + \infty, \end{equation} for any function $\psi \in X^k(\mathbb{R})$ and any $1 \leq j \leq k$.
As a matter of fact, by the Leibniz formula, we have \begin{equation} \label{leibniz} \partial_{\rm x}^j (\chi_R \psi) = \sum_{m = 0}^j C_j^m \partial_{\rm x}^m \chi_R \partial_{\rm x}^{j - m} \psi. \end{equation} We next deduce from the dominated convergence theorem that $$\chi_R \partial_{\rm x}^j \psi \to \partial_{\rm x}^j \psi \ {\rm in} \ L^2(\mathbb{R}), \ {\rm as} \ R \to + \infty,$$ whereas, when $m \geq 1$, we similarly have, using \eqref{ness} and \eqref{maree}, \begin{align*} \int_\mathbb{R} \big| \partial_{\rm x}^m \chi_R \partial_{\rm x}^{j - m} \psi \big|^2 = & \frac{1}{R^{2 m-1}} \bigg( \int_1^2 \big| \partial_{\rm x}^m \chi(x) \partial_{\rm x}^{j - m} \psi(R x) \big|^2 dx + \int_{- 2}^{- 1} \big| \partial_{\rm x}^m \chi(x) \partial_{\rm x}^{j - m} \psi(R x) \big|^2 dx \bigg)\\ \leq & \frac{K}{R^{2 m - 1}} \| \partial_{\rm x}^{j - m} \psi \|_{L^\infty(\mathbb{R})}^2 \to 0, \ {\rm as} \ R \to + \infty. \end{align*} Hence, in view of \eqref{leibniz}, we are led to $$\partial_{\rm x}^j (\chi_R \psi) \to \partial_{\rm x}^j \psi \ {\rm in} \ L^2(\mathbb{R}), \ {\rm as} \ R \to + \infty,$$ which ends the proof of claim \eqref{affric}. Combining \eqref{coe} with \eqref{affric}, and noticing that \eqref{affric} remains valid when $\chi_R$ is replaced by $\chi_R^2$, we obtain \eqref{clyde} in the limit $R \to + \infty$. This concludes the proof of Lemma \ref{Tamise}. \end{proof} \section{Invariants of the Gross-Pitaevskii equation} \label{Invariants} \subsection{Formal derivation of the invariants} \label{Formol} In \cite{ShabZak2}, Shabat and Zakharov established that the one-dimensional Gross-Pitaevskii equation is integrable, and admits an infinite number of conservation laws $f_n(\Psi)$, leading to an infinite family of invariants $I_n(\Psi)$. Set \begin{equation} \label{f1} f_1(\Psi) = - \frac{1}{2} |\Psi|^2, \end{equation} and let \begin{equation} \label{recurfn} f_{n+1}(\Psi) = \overline{\Psi} \partial_{\rm x} \Big( \frac{f_n(\Psi)}{\overline{\Psi}} \Big) + \sum_{j=1}^{n-1} f_j(\Psi) f_{n-j}(\Psi). \end{equation} Using the inverse scattering method, it is shown formally in \cite{ShabZak2} that the functions $f_n(\Psi)$ are conservation laws for \eqref{GP}, so that the related integral quantities $I_n(\Psi)$ defined by \begin{equation} \label{densities} I_n(\Psi) = \int_{\mathbb{R}} \big( f_n(\Psi)({\rm x}) - f_n(\Psi)(\infty) \big) d{\rm x}, \end{equation} are invariants for \eqref{GP}. Here, the notation $f_n(\Psi)(\infty)$ stands for the limit at infinity of the map $f_n(\Psi)$, assuming that $$\Psi({\rm x}) \to 1, \ {\rm as} \ |{\rm x}| \to + \infty, \ {\rm and} \ \partial_{\rm x}^k \Psi({\rm x}) \to 0, \ {\rm as} \ |{\rm x}| \to + \infty,$$ for any $k \in \mathbb{N}^*$.
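For instance, under this assumption, one has $f_1(\Psi)(\infty) = - \frac{1}{2}$, so that \eqref{densities} formally gives $$I_1(\Psi) = \frac{1}{2} \int_{\mathbb{R}} \big( 1 - |\Psi|^2 \big) d{\rm x},$$ a quantity which is well-defined only when $1 - |\Psi|^2$ is integrable. This simple example already indicates that the renormalization built into \eqref{densities} will not suffice for arbitrary maps in the spaces $X^k(\mathbb{R})$, a point we return to below.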
The first five conservation laws are computed in \cite{ShabZak2}, namely \eqref{f1} and \begin{align} \label{f2} f_2(\Psi) & = - \frac{1}{2} \overline{\Psi} \partial_{\rm x} \Psi,\\ \label{f3} f_3(\Psi) & = - \frac{1}{2} \overline{\Psi} \partial_{\rm x}^2 \Psi + \frac{1}{4} |\Psi|^4,\\ \label{f4} f_4(\Psi) & = - \frac{1}{2} \overline{\Psi} \partial_{\rm x}^3 \Psi + |\Psi|^2 \overline{\Psi} \partial_{\rm x} \Psi + \frac{1}{4} |\Psi|^2 \Psi \partial_{\rm x} \overline{\Psi},\\ \label{f5} f_5(\Psi) & = - \frac{1}{2} \overline{\Psi} \partial_{\rm x}^4 \Psi + \frac{3}{2} |\Psi|^2 \overline{\Psi} \partial_{\rm x}^2 \Psi + \frac{1}{4} |\Psi|^2 \Psi \partial_{\rm x}^2 \overline{\Psi} + \frac{3}{2} |\Psi|^2 |\partial_{\rm x} \Psi|^2 + \frac{5}{4} (\overline{\Psi})^2 (\partial_{\rm x} \Psi)^2 - \frac{1}{4} |\Psi|^6. \end{align} The purpose of this section is to give a rigorous meaning to these quantities, to prove that they are conserved, and to extend the explicit list of invariants. As a matter of fact, these invariants enter directly in our analysis of the transonic limit. The first step is to compute the additional conservation laws using formula \eqref{recurfn}. Notice first that formula \eqref{recurfn} is singular at the points where $\Psi$ vanishes. A first task is therefore to show that \eqref{recurfn} can be used to define the functionals $f_n(\Psi)$ even in the case where the function $\Psi$ vanishes somewhere. To remove the singularity in \eqref{recurfn}, we check by induction that the function $f_n(\Psi)$ may be written as \begin{equation} \label{formfn} f_n(\Psi) = \overline{\Psi} \times \mathcal{F}_n(\Psi), \end{equation} where the map $\mathcal{F}_n$ is inductively defined by \begin{equation} \label{boF1} \mathcal{F}_1(\Psi) = - \frac{\Psi}{2}, \end{equation} and \begin{equation} \label{recurFn} \mathcal{F}_{n+1}(\Psi) = \partial_{\rm x} \mathcal{F}_n(\Psi) + \overline{\Psi} \sum_{j=1}^{n-1} \mathcal{F}_j(\Psi) \mathcal{F}_{n-j}(\Psi). \end{equation} In particular, the map $\mathcal{F}_n(\Psi)$ is a polynomial functional of the functions $\Psi$, $\overline{\Psi}$, $\cdots$, $\partial_{\rm x}^{n-2} \Psi$, $\partial_{\rm x}^{n-2} \overline{\Psi}$ and $\partial_{\rm x}^{n-1} \Psi$, which is defined without additional assumptions on $\Psi$.
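As a quick consistency check on \eqref{boF1} and \eqref{recurFn}, note that the sum in \eqref{recurFn} is empty for $n = 1$, so that $$\mathcal{F}_2(\Psi) = \partial_{\rm x} \mathcal{F}_1(\Psi) = - \frac{1}{2} \partial_{\rm x} \Psi, \quad {\rm and} \quad \mathcal{F}_3(\Psi) = \partial_{\rm x} \mathcal{F}_2(\Psi) + \overline{\Psi} \mathcal{F}_1(\Psi)^2 = - \frac{1}{2} \partial_{\rm x}^2 \Psi + \frac{1}{4} \overline{\Psi} \Psi^2,$$ which, once multiplied by $\overline{\Psi}$ according to \eqref{formfn}, give back the expressions \eqref{f2} and \eqref{f3} of $f_2(\Psi)$ and $f_3(\Psi)$. Iterating \eqref{recurFn} further is then a purely mechanical computation.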
This leads to explicit expressions of $f_6(\Psi)$, $f_7(\Psi)$, $f_8(\Psi)$ and $f_9(\Psi)$, which are given by \begin{align*} f_6(\Psi) = & - \frac{1}{2} \overline{\Psi} \mathfrak{p}artial_{\rm x}^5 \Psi + 2 |\Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \Psi + \frac{1}{4} |\Psi|^2 \Psi \mathfrak{p}artial_{\rm x}^3 \overline{\Psi} + 2 |\Psi|^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} + 3 |\Psi|^2 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \Psi\\ \mathfrak{n}otag & + \frac{9}{2} (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^2 \Psi + \frac{11}{4} |\mathfrak{p}artial_{\rm x} \Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi - \frac{3}{4} |\Psi|^4 \Psi \mathfrak{p}artial_{\rm x} \overline{\Psi} - 2 |\Psi|^4 \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi,\\ f_7(\Psi) = & - \frac{1}{2} \overline{\Psi} \mathfrak{p}artial_{\rm x}^6 \Psi + \frac{1}{4} |\Psi|^2 \Psi \mathfrak{p}artial_{\rm x}^4 \overline{\Psi} + \frac{5}{2} |\Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^4 \Psi + \frac{5}{2} |\Psi|^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^3 \overline{\Psi} + 5 |\Psi|^2 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \Psi\\ \mathfrak{n}otag & + 7 (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^3 \Psi + 5 |\Psi|^2 |\mathfrak{p}artial_{\rm x}^2 \Psi|^2 + \frac{19}{4} (\mathfrak{p}artial_{\rm x} \Psi)^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \overline{\Psi}+ \frac{19}{4} (\overline{\Psi})^2 (\mathfrak{p}artial_{\rm x}^2 \Psi)^2 + 13 |\mathfrak{p}artial_{\rm x} \Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \Psi\\ \mathfrak{n}otag & - |\Psi|^4 \Psi \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} - \frac{15}{4} |\Psi|^4 \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \Psi - \frac{3}{4} |\Psi|^2 (\Psi)^2 (\mathfrak{p}artial_{\rm x} \overline{\Psi})^2 - 8 |\Psi|^4 |\mathfrak{p}artial_{\rm x} \Psi|^2 - \frac{25}{4} |\Psi|^2 (\overline{\Psi})^2 (\mathfrak{p}artial_{\rm x} \Psi)^2\\ \mathfrak{n}otag & + \frac{5}{16} |\Psi|^8,\\ f_8(\Psi) = & - \frac{1}{2} \overline{\Psi} \mathfrak{p}artial_{\rm x}^7 \Psi + \frac{1}{4} |\Psi|^2 \Psi \mathfrak{p}artial_{\rm x}^5 \overline{\Psi} + 3 |\Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^5 \Psi + 3 |\Psi|^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^4 \overline{\Psi} + \frac{15}{2} |\Psi|^2 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^4 \Psi\\ \mathfrak{n}otag & + 10 (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^4 \Psi + \frac{15}{2} |\Psi|^2 \mathfrak{p}artial_{\rm x}^2 \Psi \mathfrak{p}artial_{\rm x}^3 \overline{\Psi} + \frac{29}{4} (\mathfrak{p}artial_{\rm x} \Psi)^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \overline{\Psi} + 10 |\Psi|^2 \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \Psi\\ \mathfrak{n}otag & + 17 (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x}^2 \Psi \mathfrak{p}artial_{\rm x}^3 \Psi + 25 |\mathfrak{p}artial_{\rm x} \Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \Psi + \frac{55}{2} |\mathfrak{p}artial_{\rm x}^2 \Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi + \frac{71}{4} (\mathfrak{p}artial_{\rm x}^2 \Psi)^2 \overline{\Psi} \mathfrak{p}artial_{\rm x} \overline{\Psi} - \frac{5}{4} |\Psi|^4 \Psi \mathfrak{p}artial_{\rm x}^3 \overline{\Psi}\\ \mathfrak{n}otag & - 6 |\Psi|^4 \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \Psi - \frac{5}{2} 
|\Psi|^2 (\Psi)^2 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} - \frac{53}{4} |\Psi|^4 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} - \frac{75}{4} |\Psi|^4 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \Psi\\ \mathfrak{n}otag & - 27 |\Psi|^2 (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^2 \Psi - \frac{41}{4} |\Psi|^2 |\mathfrak{p}artial_{\rm x} \Psi|^2 \Psi \mathfrak{p}artial_{\rm x} \overline{\Psi} - \frac{131}{4} |\Psi|^2 |\mathfrak{p}artial_{\rm x} \Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi - \frac{15}{2} (\overline{\Psi})^3 (\mathfrak{p}artial_{\rm x} \Psi)^3\\ \mathfrak{n}otag & + \frac{29}{16} |\Psi|^6 \Psi \mathfrak{p}artial_{\rm x} \overline{\Psi} + 4 |\Psi|^6 \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi, \end{align*} and \begin{align*} f_9(\Psi) = & - \frac{1}{2} \overline{\Psi} \mathfrak{p}artial_{\rm x}^8 \Psi + \frac{1}{4} |\Psi|^2 \Psi \mathfrak{p}artial_{\rm x}^6 \overline{\Psi} + \frac{7}{2} |\Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^6 \Psi + \frac{7}{2} |\Psi|^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^5 \overline{\Psi} + \frac{21}{2} |\Psi|^2 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^5 \Psi\\ & + \frac{27}{2} (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^5 \Psi + \frac{21}{2} |\Psi|^2 \mathfrak{p}artial_{\rm x}^2 \Psi \mathfrak{p}artial_{\rm x}^4 \overline{\Psi} + \frac{41}{4} (\mathfrak{p}artial_{\rm x} \Psi)^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^4 \overline{\Psi} + \frac{35}{2} |\Psi|^2 \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^4 \Psi\\ & + \frac{55}{2} (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x}^2 \Psi \mathfrak{p}artial_{\rm x}^4 \Psi + \frac{85}{2} |\mathfrak{p}artial_{\rm x} \Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^4 \Psi + \frac{35}{2} |\Psi|^2 |\mathfrak{p}artial_{\rm x}^3 \Psi|^2 + \frac{99}{2} \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^2 \Psi \mathfrak{p}artial_{\rm x}^3 \overline{\Psi}\\ & + \frac{69}{4} (\overline{\Psi})^2 (\mathfrak{p}artial_{\rm x}^3 \Psi)^2 + \frac{125}{2} \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \Psi + \frac{155}{2} \overline{\Psi} \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \Psi \mathfrak{p}artial_{\rm x}^3 \Psi + \frac{181}{4} |\mathfrak{p}artial_{\rm x}^2 \Psi|^2 \overline{\Psi} \mathfrak{p}artial_{\rm x}^2 \Psi\\ & - \frac{3}{2} |\Psi|^4 \Psi \mathfrak{p}artial_{\rm x}^4 \overline{\Psi} - \frac{35}{4} |\Psi|^4 \overline{\Psi} \mathfrak{p}artial_{\rm x}^4 \Psi - \frac{15}{4} |\Psi|^2 (\Psi)^2 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \overline{\Psi} - \frac{79}{4} |\Psi|^4 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^3 \overline{\Psi}\\ & - 36 |\Psi|^4 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{p}artial_{\rm x}^3 \Psi - 49 |\Psi|^2 (\overline{\Psi})^2 \mathfrak{p}artial_{\rm x} \Psi \mathfrak{p}artial_{\rm x}^3 \Psi - \frac{5}{2} |\Psi|^2 (\Psi)^2 (\mathfrak{p}artial_{\rm x}^2 \overline{\Psi})^2 - \frac{149}{4} |\Psi|^4 |\mathfrak{p}artial_{\rm x}^2 \Psi|^2\\ & - \frac{165}{4} |\Psi|^2 |\mathfrak{p}artial_{\rm x} \Psi|^2 \Psi \mathfrak{p}artial_{\rm x}^2 \overline{\Psi} - 66 |\Psi|^2 \overline{\Psi} (\mathfrak{p}artial_{\rm x} \Psi)^2 \mathfrak{p}artial_{\rm x}^2 
\overline{\Psi} - \frac{133}{4} |\Psi|^2 (\overline{\Psi})^2 (\partial_{\rm x}^2 \Psi)^2\\ & - 29 |\Psi|^2 \Psi (\partial_{\rm x} \overline{\Psi})^2 \partial_{\rm x}^2 \Psi - \frac{349}{2} |\Psi|^2 |\partial_{\rm x} \Psi|^2 \overline{\Psi} \partial_{\rm x}^2 \Psi - \frac{221}{4} (\overline{\Psi})^3 (\partial_{\rm x} \Psi)^2 \partial_{\rm x}^2 \Psi - \frac{213}{4} |\Psi|^2 |\partial_{\rm x} \Psi|^4\\ & - \frac{101}{2} |\partial_{\rm x} \Psi|^2 (\overline{\Psi})^2 (\partial_{\rm x} \Psi)^2 + \frac{47}{16} |\Psi|^6 \Psi \partial_{\rm x}^2 \overline{\Psi} + \frac{35}{4} |\Psi|^6 \overline{\Psi} \partial_{\rm x}^2 \Psi + \frac{71}{16} |\Psi|^4 (\Psi)^2 (\partial_{\rm x} \overline{\Psi})^2\\ & + \frac{117}{4} |\Psi|^6 |\partial_{\rm x} \Psi|^2 + \frac{175}{8} |\Psi|^4 (\overline{\Psi})^2 (\partial_{\rm x} \Psi)^2 - \frac{7}{16} |\Psi|^{10}. \end{align*} The second step is to provide explicit expressions of the invariants $I_n(\Psi)$ associated to each conservation law $f_n(\Psi)$ for an arbitrary function $\Psi$ in the appropriate $X^k(\mathbb{R})$ space. This raises some serious difficulties since the integrands are not in general integrable when $\Psi$ belongs to $X^k(\mathbb{R})$. For instance, according to definition \eqref{densities}, the invariants $I_1(\Psi)$, $I_2(\Psi)$ and $I_3(\Psi)$ should be given by \begin{equation} I_1(\Psi) = \frac{1}{2} \int_\mathbb{R} \big( 1 - |\Psi|^2 \big), \ I_2(\Psi) = - \frac{1}{2} \int_\mathbb{R} \overline{\Psi} \partial_{\rm x} \Psi, \ {\rm and} \ I_3(\Psi) = - \frac{1}{2} \int_\mathbb{R} \overline{\Psi} \partial_{\rm x}^2 \Psi + \frac{1}{4} \int_\mathbb{R} \big( |\Psi|^4 - 1 \big). \end{equation} For an arbitrary function $\Psi$ in $X^k(\mathbb{R})$, none of the above integrands belong to $L^1(\mathbb{R})$. Some quantities like $\overline{\Psi} \partial_{\rm x}^2 \Psi$ can be handled using integration by parts. This is not possible for $1 - |\Psi|^2$ or $|\Psi|^4 - 1$, which do not involve derivatives. Even the quantity $\overline{\Psi} \partial_{\rm x}\Psi$ cannot be immediately treated by integration by parts. In particular, the renormalization process as used in formula \eqref{densities} is not sufficient to give a meaning to the invariants $I_n(\Psi)$ in the spaces $X^k(\mathbb{R})$. When $n = 2 m + 1$ is an odd number, a simple way to remove this difficulty is to introduce linear combinations of the conservation laws. More precisely, we consider the integral quantities formally defined by \begin{align} \label{renorE1} E_1(\Psi) & = \int_{\mathbb{R}} \Big( f_3(\Psi) + f_1(\Psi) + \frac{1}{4} \Big),\\ \label{renorE2} E_2(\Psi) & = - \int_{\mathbb{R}} \Big( f_5(\Psi) + 3 f_3(\Psi) + \frac{3}{2} f_1(\Psi) + \frac{1}{4} \Big),\\ \label{renorE3} E_3(\Psi) & = \int_{\mathbb{R}} \Big( f_7(\Psi) + 5 f_5(\Psi) + \frac{15}{2} f_3(\Psi) + \frac{5}{2} f_1(\Psi) + \frac{5}{16} \Big), \end{align} and \begin{equation} \begin{split} \label{renorE4} E_4(\Psi) & = - \int_{\mathbb{R}} \Big( f_9(\Psi) + 7 f_7(\Psi) + \frac{35}{2} f_5(\Psi) + \frac{35}{2} f_3(\Psi) + \frac{35}{8} f_1(\Psi) + \frac{7}{16} \Big).
\end{split} \end{equation} Setting $\eta \equiv 1 - |\Psi|^2$ as usual, formal integrations by parts lead to the expressions \begin{align} \label{E1} E_1(\Psi) \equiv & E(\Psi) = \frac{1}{2} \int_{\mathbb{R}} |\partial_{\rm x} \Psi|^2 + \frac{1}{4} \int_{\mathbb{R}} \eta^2,\\ \label{E2} E_2(\Psi) \equiv & \frac{1}{2} \int_{\mathbb{R}} |\partial_{\rm x}^2 \Psi|^2 - \frac{3}{2} \int_{\mathbb{R}} \eta |\partial_{\rm x} \Psi|^2 + \frac{1}{4} \int_{\mathbb{R}} (\partial_{\rm x} \eta)^2 - \frac{1}{4} \int_\mathbb{R} \eta^3,\\ \label{E3} E_3(\Psi) \equiv & \frac{1}{2} \int_{\mathbb{R}} |\partial_{\rm x}^3 \Psi|^2 + \frac{1}{4} \int_{\mathbb{R}} |\partial_{\rm x}^2 \eta|^2 + \frac{5}{4} \int_{\mathbb{R}} |\partial_{\rm x} \Psi|^4 + \frac{5}{2} \int_{\mathbb{R}} \partial_{\rm x}^2 \eta |\partial_{\rm x} \Psi|^2 - \frac{5}{2} \int_{\mathbb{R}} \eta |\partial_{\rm x}^2 \Psi|^2\\ \notag - & \frac{5}{4} \int_\mathbb{R} \eta (\partial_{\rm x} \eta)^2 + \frac{15}{4} \int_\mathbb{R} \eta^2 |\partial_{\rm x} \Psi|^2 + \frac{5}{16} \int_\mathbb{R} \eta^4, \end{align} and \begin{equation} \begin{split} \label{E4} E_4(\Psi) \equiv & \frac{1}{2} \int_{\mathbb{R}} |\partial_{\rm x}^4 \Psi|^2 + \frac{1}{4} \int_{\mathbb{R}} |\partial_{\rm x}^3 \eta|^2 - \frac{7}{4} \int_\mathbb{R} \eta (\partial_{\rm x}^2 \eta)^2 - \frac{7}{2} \int_\mathbb{R} \eta |\partial_{\rm x}^3 \Psi|^2 + \frac{35}{8} \int_{\mathbb{R}} \eta^2 (\partial_{\rm x} \eta)^2\\ + & \frac{35}{4} \int_\mathbb{R} \eta^2 |\partial_{\rm x}^2 \Psi|^2 - \frac{35}{4} \int_{\mathbb{R}} (\partial_{\rm x} \eta)^2 |\partial_{\rm x} \Psi|^2 - \frac{7}{2} \int_\mathbb{R} |\partial_{\rm x} \Psi|^2 |\partial_{\rm x}^2 \Psi|^2 - 7 \int_\mathbb{R} \partial_{\rm x}^2 \eta \langle \partial_{\rm x} \Psi, \partial_{\rm x}^3 \Psi\rangle\\ - & 7 \int_\mathbb{R} |\partial_{\rm x} \Psi|^2 \langle \partial_{\rm x} \Psi, \partial_{\rm x}^3 \Psi \rangle - \frac{35}{2} \int_\mathbb{R} \eta \partial_{\rm x}^2 \eta |\partial_{\rm x} \Psi|^2 - \frac{35}{4} \int_\mathbb{R} \eta^3 |\partial_{\rm x} \Psi|^2 - \frac{35}{4} \int_\mathbb{R} \eta |\partial_{\rm x} \Psi|^4\\ - & \frac{7}{16} \int_\mathbb{R} \eta^5. \end{split} \end{equation} These expressions involve only integrable integrands, and therefore provide a rigorous definition of the corresponding integrals. We will refer to $E_k(\Psi)$ as the $k^{\rm th}$-order energy. When $n = 2 m$, with $m \geq 2$, the same strategy can be applied to define the $k^{\rm th}$-order momentum. We first introduce the formal linear combinations of even conservation laws \begin{align} \label{renorP2} P_2(\Psi) & = i \int_{\mathbb{R}} \Big( f_4(\Psi) + \frac{3}{2} f_2(\Psi) \Big),\\ \label{renorP3} P_3(\Psi) & = - i \int_{\mathbb{R}} \Big( f_6(\Psi) + 5 f_4(\Psi) + 5 f_2(\Psi) \Big), \end{align} and \begin{equation} \begin{split} \label{renorP4} P_4(\Psi) & = i \int_{\mathbb{R}} \Big( f_8(\Psi) + 7 f_6(\Psi) + \frac{35}{2} f_4(\Psi) + \frac{105}{8} f_2(\Psi) \Big).
\end{split} \end{equation} After some integrations by parts, these expressions are transformed into the well-defined quantities \begin{align} \label{P2} P_2(\Psi) & \equiv \frac{1}{2} \int_{\mathbb{R}} \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle - \frac{3}{4} \int_{\mathbb{R}} \eta \langle i \partial_{\rm x} \Psi, \Psi \rangle,\\ \label{P3} P_3(\Psi) & \equiv \frac{1}{2} \int_{\mathbb{R}} \langle i \partial_{\rm x}^3 \Psi, \partial_{\rm x}^2 \Psi \rangle - \frac{5}{2} \int_{\mathbb{R}} \eta \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle + \frac{5}{4} \int_{\mathbb{R}} (\eta^2 + \eta) \langle i \partial_{\rm x} \Psi, \Psi \rangle, \end{align} and \begin{equation} \begin{split} \label{P4} P_4(\Psi) \equiv & \frac{1}{2} \int_{\mathbb{R}} \langle i \partial_{\rm x}^4 \Psi, \partial_{\rm x}^3 \Psi \rangle - \frac{7}{2} \int_{\mathbb{R}} \eta \langle i \partial_{\rm x}^3 \Psi, \partial_{\rm x}^2 \Psi \rangle + \frac{7}{2} \int_{\mathbb{R}} \partial_{\rm x}^2 \eta \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle + \frac{7}{4} \int_{\mathbb{R}} |\partial_{\rm x} \Psi|^2 \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle\\ + & \frac{35}{4} \int_{\mathbb{R}} \eta^2 \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle - \frac{35}{16} \int_{\mathbb{R}} (\eta^3 + \eta^2 + \eta) \langle i \partial_{\rm x} \Psi, \Psi \rangle. \end{split} \end{equation} The case $n = 2$ has to be discussed separately. The invariant $I_2(\Psi)$ is formally equal, up to some integration by parts, to $$I_2(\Psi) = \frac{1}{4} \int_\mathbb{R} \Big( \Psi \partial_{\rm x} \overline{\Psi} - \overline{\Psi} \partial_{\rm x} \Psi \Big).$$ This quantity is purely imaginary. Its imaginary part is equal to the momentum, i.e. \begin{equation} \label{P1} {\rm Im}(I_2(\Psi)) = P_1(\Psi) \equiv P(\Psi) = \frac{1}{2} \int_{\mathbb{R}} \langle i \partial_{\rm x} \Psi, \Psi \rangle. \end{equation} However, the definition of the momentum raises some difficulty. As a matter of fact, the quantity $P(\Psi)$ is not well-defined for an arbitrary map $\Psi$ in the energy space $X^1(\mathbb{R})$. We refer to \cite{BeGrSaS1} for a proof of this claim, and a discussion about the different ways to provide a rigorous definition of the momentum in the energy space. Notice that in our analysis of the transonic limit, we deal with maps $\Psi$ of small energy. In particular, we may assume that they satisfy \begin{equation} \label{toutpetit} E(\Psi) < \frac{2 \sqrt{2}}{3}, \end{equation} so that we may lift $\Psi$ as \begin{equation} \label{lift} \Psi = \varrho \exp i \varphi. \end{equation} Then, we may define a so-called renormalized momentum by \begin{equation} \label{p1} p_1(\Psi) = p(\Psi) \equiv \frac{1}{2} \int_{\mathbb{R}} \eta \partial_{\rm x} \varphi \end{equation} (see \cite{BetGrSa2,BeGrSaS1} for more details), which is also, at least formally, an invariant for the Gross-Pitaevskii equation, since it verifies \begin{equation} \label{renorp1} p_1(\Psi) = - i \int_{\mathbb{R}} f_2(\Psi), \end{equation} when $\Psi$ is sufficiently smooth and integrable at infinity. We will also consider the renormalized momenta $p_k$, which are linear combinations of $P_k$ and $p_1$.
They are defined by \begin{align} \label{p2} p_2(\Psi) \equiv P_2(\Psi) - \frac{3}{2} p_1(\Psi) = & \frac{1}{2} \int_{\mathbb{R}} \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle - \frac{3}{4} \int_{\mathbb{R}} \eta \langle i \partial_{\rm x} \Psi, \Psi \rangle - \frac{3}{4} \int_{\mathbb{R}} \eta \partial_{\rm x} \varphi,\\ \label{p3} p_3(\Psi) \equiv P_3(\Psi) + \frac{5}{2} p_1(\Psi) = & \frac{1}{2} \int_{\mathbb{R}} \langle i \partial_{\rm x}^3 \Psi, \partial_{\rm x}^2 \Psi \rangle - \frac{5}{2} \int_{\mathbb{R}} (\eta - 1) \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle\\ \notag & + \frac{5}{4} \int_{\mathbb{R}} (\eta^2 + \eta) \langle i \partial_{\rm x} \Psi, \Psi \rangle + \frac{5}{4} \int_{\mathbb{R}} \eta \partial_{\rm x} \varphi, \end{align} and \begin{equation} \begin{split} \label{p4} p_4(\Psi) \equiv P_4(\Psi) - \frac{35}{8} p_1(\Psi) = & \frac{1}{2} \int_{\mathbb{R}} \langle i \partial_{\rm x}^4 \Psi, \partial_{\rm x}^3 \Psi \rangle - \frac{7}{2} \int_{\mathbb{R}} \eta \langle i \partial_{\rm x}^3 \Psi, \partial_{\rm x}^2 \Psi \rangle + \frac{35}{4} \int_{\mathbb{R}} \eta^2 \langle i \partial_{\rm x}^2 \Psi, \partial_{\rm x} \Psi \rangle\\ & - \frac{35}{16} \int_{\mathbb{R}} (\eta^3 + \eta^2 + \eta) \langle i \partial_{\rm x} \Psi, \Psi \rangle - \frac{35}{16} \int_{\mathbb{R}} \eta \partial_{\rm x} \varphi, \end{split} \end{equation} provided that the function $\Psi$ satisfies condition \eqref{toutpetit}. As a matter of fact, the renormalized momenta $p_k$, more than the momenta $P_k$, will be involved in the analysis of the transonic limit. We may summarize some of our previous discussion in \begin{lemma} \label{DefEkPk} The functionals $E_k$, for $1 \leq k \leq 4$, and $P_k$, for $2\leq k \leq 4$, are well-defined and continuous on $X^k(\mathbb{R})$. The functionals $p_k(\Psi)$ are well-defined for any function $\Psi \in X^k(\mathbb{R})$ which satisfies \eqref{toutpetit}. \end{lemma} \begin{proof} The proof follows from the definition of the space $X^1(\mathbb{R})$ for the functional $E_1 = E$. For the momentum $p_1 = p$, it is proved in \cite{BetGrSa2} that any function $\Psi \in X^1(\mathbb{R})$ such that \eqref{toutpetit} holds verifies $$\rho_{\min} = \underset{x \in \mathbb{R}}{\inf} |\Psi(x)| > 0,$$ so that, denoting $\Psi = \varrho \exp i \varphi$ as above, $$|\eta \partial_{\rm x} \varphi| \leq \frac{1}{\rho_{\min}} \big| \eta \big| \big| \varrho \partial_{\rm x} \varphi \big| \leq \frac{1}{\rho_{\min}} \big| \eta \big| \big| \partial_{\rm x} \Psi \big|.$$ Hence, the quantity $\eta \partial_{\rm x} \varphi$ belongs to $L^1(\mathbb{R})$, so that $p(\Psi)$ is well-defined as well. Finally, for the higher order invariants, notice that, by the Sobolev embedding theorem, any function $\Psi \in X^k(\mathbb{R})$ belongs to $\mathcal{C}^{k-1}_0(\mathbb{R})$, so that, in particular, $\eta$ is in $H^k(\mathbb{R})$. Continuity raises no difficulty. \end{proof} \subsection{Conservation of the invariants in the spaces $X^k(\mathbb{R})$} \label{Definv} The purpose of this section is to provide a rigorous mathematical proof of the fact that the invariants are conserved along the Gross-Pitaevskii flow.
As mentioned in the introduction, conservation of the energy $E_1 = E$ was already addressed in \cite{Zhidkov1} (see also \cite{Gerard2}). \begin{theorem}[\cite{Zhidkov1,Gerard2}] \label{ConsE} Let $\Psi_0 \in X^1(\mathbb{R})$. Then, the unique solution $\Psi(\cdot, t)$ to \eqref{GP} in $\mathcal{C}^0(\mathbb{R}, X^1(\mathbb{R}))$ with initial data $\Psi_0$ given by Theorem \ref{thm:existe} satisfies $$E \big( \Psi(\cdot, t) \big) = E(\Psi_0),$$ for any $t \in \mathbb{R}$. \end{theorem} Concerning the momentum, Gallo \cite{Gallo3} established the conservation of the renormalized momentum $p_1$ (see also \cite{BeGrSaS1}). \begin{theorem}[\cite{Gallo3, BeGrSaS1}] \label{Defandconsp} Let $\Psi_0$ be a function in $X^1(\mathbb{R})$ which satisfies \eqref{toutpetit}. If $\Psi(\cdot, t)$ stands for the unique solution to \eqref{GP} in $\mathcal{C}^0(\mathbb{R}, X^1(\mathbb{R}))$ with initial data $\Psi_0$ given by Theorem \ref{thm:existe} , then $$p \big( \Psi(\cdot, t) \big) = p(\Psi_0),$$ for any $t \in \mathbb{R}$. \end{theorem} Here, we extend the analysis to the integral quantities $P_k(\Psi)$ and $E_k(\Psi)$. \begin{theorem} \label{Invconserved} Let $2 \leq k \leq 4$ and $\Psi_0 \in X^k(\mathbb{R})$. Then, the unique solution $\Psi(\cdot, t)$ in the space $\mathcal{C}^0(\mathbb{R}, X^k(\mathbb{R}))$ to \eqref{GP} with initial data $\Psi_0$ given by Theorem \ref{thm:existe} satisfies \begin{equation} \label{buffet} P_k \big( \Psi(\cdot, t) \big) = P_k(\Psi_0), \ {\rm and} \ E_k \big( \Psi(\cdot, t) \big) = E_k(\Psi_0), \end{equation} for any $t \in \mathbb{R}$. \end{theorem} \begin{remark} Theorem \ref{Invconserved} focuses on the conservation of integral quantities which play a role in the analysis of the transonic limit. As mentioned in the introduction, the mass $m(\Psi)$ defined by \eqref{alamasse} is also formally conserved. However, the quantity $m(\Psi)$ is not well-defined in the energy space $X^1(\mathbb{R})$. A proof of its conservation along the Gross-Pitaevskii flow would first require to provide a precise mathematical meaning to this quantity in $X^1(\mathbb{R})$.\\ Similarly, Theorem \ref{Invconserved} does not address the question of the existence and conservation of higher order energies and momenta. A more general treatment of the inductive form of the conservation laws $f_n$ would be required to define properly higher order energies and momenta. However, we believe that such integral quantities could be well-defined in the spaces $X^k(\mathbb{R})$ taking linear combinations and integrating by parts as above, so that their conservation along the Gross-Pitaevskii flow would also follow from Lemma \ref{Consfn} below. \end{remark} At this stage, notice that, in view of Theorems \ref{Defandconsp} and \ref{Invconserved}, and definitions \eqref{p2}, \eqref{p3} and \eqref{p4}, the quantities $p_k$ are also conserved along the Gross-Pitaevskii flow. \begin{cor} \label{Conspk} Let $2 \leq k \leq 4$, and let $\Psi_0$ be a function in $X^k(\mathbb{R})$ such that assumption \eqref{toutpetit} holds. Then, we have $$p_k \big( \Psi(\cdot, t) \big) = p_k(\Psi_0),$$ for any $t \in \mathbb{R}$, where $\Psi$ denotes the unique solution to \eqref{GP} in $\mathcal{C}^0(\mathbb{R}, X^k(\mathbb{R}))$ with initial data $\Psi_0$. \end{cor} In the proof of Theorem \ref{Invconserved}, we will make use of the fact that the functionals $f_n$ are conservation laws for \eqref{GP}. More precisely, we have \begin{lemma} \label{Consfn} Let $- \infty \leq a < b \leq + \infty$ and $n \geq 1$. 
Consider a solution $\Psi$ to \eqref{GP} such that \begin{equation} \label{aig} \Psi \in \mathcal{C}^0((a, b), \mathcal{C}^{n+1}(\mathbb{R})) \cap \mathcal{C}^1((a, b), \mathcal{C}^{n-1}(\mathbb{R})). \end{equation} Then, the map $t \mapsto f_n(\Psi(\cdot, t))$ is in $\mathcal{C}^0((a, b), \mathcal{C}^1(\mathbb{R})) \cap \mathcal{C}^1((a, b), \mathcal{C}^0(\mathbb{R}))$, while the function $t \mapsto f_{n+1}(\Psi(\cdot, t))$ belongs to $\mathcal{C}^0((a, b), \mathcal{C}^1(\mathbb{R}))$. Moreover, they satisfy \begin{equation} \label{lehmann} \partial_t \Big( f_n(\Psi) \Big) = i \partial_{\rm x} \Big( f_{n + 1}(\Psi) - \partial_{\rm x} \overline{\Psi} \mathcal{F}_n(\Psi) \Big) \ {\rm on} \ \mathbb{R} \times (a, b). \end{equation} \end{lemma} We will first consider the maps $\mathcal{F}_n(\Psi)$ defined by \eqref{recurFn}, and prove \begin{lemma} \label{ConsFn} Let $- \infty \leq a < b \leq + \infty$ and $n \geq 1$. Consider a solution $\Psi$ to \eqref{GP} which satisfies \eqref{aig}. Then, the map $t \mapsto \mathcal{F}_n(\Psi(\cdot, t))$ is in $\mathcal{C}^0((a, b), \mathcal{C}^1(\mathbb{R})) \cap \mathcal{C}^1((a, b), \mathcal{C}^0(\mathbb{R}))$, while the function $t \mapsto \mathcal{F}_{n+1}(\Psi(\cdot, t))$ belongs to $\mathcal{C}^0((a, b), \mathcal{C}^1(\mathbb{R}))$. Moreover, they satisfy \begin{equation} \label{nomura} \partial_t \big( \mathcal{F}_n(\Psi) \big) = i \mathcal{F}_n(\Psi) - i |\Psi|^2 \mathcal{F}_n(\Psi) + i \partial_{\rm x} \overline{\Psi} \sum_{j = 1}^{n - 1} \mathcal{F}_j(\Psi) \mathcal{F}_{n - j}(\Psi) + i \partial_{\rm x} \big( \mathcal{F}_{n+1}(\Psi) \big). \end{equation} \end{lemma} Lemma \ref{Consfn} is then a direct consequence of Lemma \ref{ConsFn}. \begin{proof}[Proof of Lemma \ref{Consfn}] Notice first that, in view of assumption \eqref{aig} and formulae \eqref{recurfn} and \eqref{recurFn}, the maps $f_j(\Psi)$ and $\mathcal{F}_j(\Psi)$ belong to $\mathcal{C}^0((a, b), \mathcal{C}^1(\mathbb{R}))$ for any $1 \leq j \leq n + 1$, and the functionals $f_n(\Psi)$ and $\mathcal{F}_n(\Psi)$ are also in $\mathcal{C}^1((a, b), \mathcal{C}^0(\mathbb{R}))$. Therefore, in view of \eqref{formfn}, we can write $$\partial_t \big( f_n(\Psi) \big) = \partial_t \overline{\Psi} \mathcal{F}_n(\Psi) + \overline{\Psi} \partial_t \big( \mathcal{F}_n(\Psi) \big),$$ so that, by \eqref{GP} and \eqref{nomura}, $$\partial_t \big( f_n(\Psi) \big) = i \Big( - \partial_{\rm x}^2 \overline{\Psi} \mathcal{F}_n(\Psi) + \overline{\Psi} \partial_{\rm x} \overline{\Psi} \sum_{j = 1}^{n - 1} \mathcal{F}_j(\Psi) \mathcal{F}_{n - j}(\Psi) + \overline{\Psi} \partial_{\rm x} \big( \mathcal{F}_{n+1}(\Psi) \big) \Big).$$ In view of \eqref{recurFn}, we are led to $$\partial_t \big( f_n(\Psi) \big) = i \Big( - \partial_{\rm x}^2 \overline{\Psi} \mathcal{F}_n(\Psi) + \partial_{\rm x} \overline{\Psi} \mathcal{F}_{n+1}(\Psi) - \partial_{\rm x} \overline{\Psi} \partial_{\rm x} \big( \mathcal{F}_n(\Psi) \big) + \overline{\Psi} \partial_{\rm x} \big( \mathcal{F}_{n+1}(\Psi) \big) \Big),$$ which completes the proof of \eqref{lehmann}, invoking definition \eqref{formfn}. \end{proof} We now provide the proof of Lemma \ref{ConsFn}. \begin{proof}[Proof of Lemma \ref{ConsFn}] The proof is by induction on $n \in \mathbb{N}^*$.
For $n = 1$, it follows from \eqref{GP} that \begin{align} \mathfrak{p}artial_t \big( \mathcal{F}_1(\Psi) \big) = - \frac{1}{2} \mathfrak{p}artial_t \Psi = \frac{i}{2} \Big( - \mathfrak{p}artial_{\rm x}^2 \Psi - \Psi + |\Psi|^2 \Psi \Big) = i \mathcal{F}_1(\Psi) - i |\Psi|^2 \mathcal{F}_1(\Psi) + i \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_2(\Psi) \big), \end{align} so that \eqref{nomura} holds for $n = 1$. We now turn to the case $n = N + 1$, assuming that the conclusion of Lemma \ref{Consfn} holds for any $1 \leq n \leq N$. Notice first that, in view of assumption \eqref{aig} and formulae \eqref{recurfn} and \eqref{recurFn}, the maps $\mathcal{F}_j(\Psi)$ are in $\mathcal{C}^0((a, b), \mathcal{C}^1(\mathbb{R}))$ for any $1 \leq j \leq N + 2$, while the functional $\mathcal{F}_{N + 1}(\Psi)$ also belongs to $\mathcal{C}^1((a, b), \mathcal{C}^0(\mathbb{R}))$. Therefore, in view of \eqref{recurFn}, we can write \begin{align*} \mathfrak{p}artial_t \big( \mathcal{F}_{N+1}(\Psi) \big) = & \mathfrak{p}artial_t \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_N(\Psi) \big) + \mathfrak{p}artial_t \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \mathcal{F}_j(\Psi) \mathcal{F}_{N-j}(\Psi) \\ & + \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \Big( \mathfrak{p}artial_t \big( \mathcal{F}_j(\Psi) \big) \mathcal{F}_{N-j}(\Psi) + \mathcal{F}_j(\Psi) \big( \mathfrak{p}artial_t \mathcal{F}_{N-j}(\Psi) \big) \Big). \end{align*} Invoking the inductive assumption combined with \eqref{GP}, we are led to \begin{equation} \label{stanley} \begin{split} \mathfrak{p}artial_t \big( \mathcal{F}_{N+1}(\Psi) \big) = & i \bigg( \big( 1 - |\Psi|^2 \big) \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_N(\Psi) \big) - \big( \Psi \mathfrak{p}artial_{\rm x} \overline{\Psi} + \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi \big) \mathcal{F}_N(\Psi)\\ + & \mathfrak{p}artial_{\rm x}^2 \big( \mathcal{F}_{N+1}(\Psi) \big) + \overline{\Psi} \big( 1 - |\Psi|^2 \big) \mathfrak{s}um_{j = 1}^{N - 1} \mathcal{F}_j(\Psi) \mathcal{F}_{N-j}(\Psi)\\ + & \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \Big( \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_j(\Psi) \big) \mathcal{F}_{N-j}(\Psi) + \mathcal{F}_j(\Psi) \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{N-j}(\Psi) \big) \Big)\\ + & \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \Big( \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{j+1}(\Psi) \big) \mathcal{F}_{N-j}(\Psi) + \mathcal{F}_j(\Psi) \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{N+1-j}(\Psi) \big) \Big)\\ + & \overline{\Psi} \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \Big( \mathcal{F}_{N-j}(\Psi) \mathfrak{s}um_{k = 1}^{j - 1} \mathcal{F}_k(\Psi) \mathcal{F}_{j-k}(\Psi) + \mathcal{F}_j(\Psi) \mathfrak{s}um_{k = 1}^{N - j - 1} \mathcal{F}_k(\Psi) \mathcal{F}_{N-j-k}(\Psi) \Big) \bigg). 
\end{split} \end{equation} In view of \eqref{boF1}, we first have \begin{equation} \begin{split} \label{cdo1} - \overline{\Psi} \mathfrak{p}artial_{\rm x} \Psi \mathcal{F}_N(\Psi) & + \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \Big( \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{j+1}(\Psi) \big) \mathcal{F}_{N-j}(\Psi) + \mathcal{F}_j(\Psi) \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{N+1-j}(\Psi) \big) \Big)\\ = & \overline{\Psi} \mathfrak{s}um_{j = 1}^N \Big( \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_j(\Psi) \big) \mathcal{F}_{N+1-j}(\Psi) + \mathcal{F}_j(\Psi) \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{N+1-j}(\Psi) \big) \Big), \end{split} \end{equation} whereas, by formula \eqref{recurFn}, \begin{equation} \label{cdo2} \big( 1 - |\Psi|^2 \big) \Big( \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_N(\Psi) \big) + \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \mathcal{F}_j(\Psi) \mathcal{F}_{N-j}(\Psi) \Big) = \big( 1 - |\Psi|^2 \big) \mathcal{F}_{N+1}(\Psi), \end{equation} and \begin{equation} \begin{split} \label{cdo3} - \Psi \mathfrak{p}artial_{\rm x} \overline{\Psi} & \mathcal{F}_N(\Psi) + \overline{\Psi} \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \Big( \mathcal{F}_{N-j}(\Psi) \mathfrak{s}um_{k = 1}^{j - 1} \mathcal{F}_k(\Psi) \mathcal{F}_{j-k}(\Psi) + \mathcal{F}_j(\Psi) \mathfrak{s}um_{k = 1}^{N - j - 1} \mathcal{F}_k(\Psi) \mathcal{F}_{N-j-k}(\Psi) \Big)\\ = & 2 \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{s}um_{j = 1}^N \mathcal{F}_j(\Psi) \mathcal{F}_{N+1-j}(\Psi) - \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{s}um_{j = 1}^{N - 1} \Big( \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_j(\Psi) \big) \mathcal{F}_{N-j}(\Psi) + \mathcal{F}_j(\Psi) \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{N-j}(\Psi) \big) \Big). \end{split} \end{equation} Hence, we deduce from \eqref{stanley}, \eqref{cdo1}, \eqref{cdo2} and \eqref{cdo3} that \begin{equation} \label{morgan} \begin{split} \mathfrak{p}artial_t \big( \mathcal{F}_{N+1}(\Psi) \big) = i \bigg( \big( 1 - & |\Psi|^2 \big) \mathcal{F}_{N+1}(\Psi) + \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathfrak{s}um_{j = 1}^N \mathcal{F}_j(\Psi) \mathcal{F}_{N+1-j}(\Psi)\\ + & \mathfrak{p}artial_{\rm x} \Big( \mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{N+1}(\Psi) \big) + \overline{\Psi} \mathfrak{s}um_{j = 1}^N \mathcal{F}_j(\Psi) \mathcal{F}_{N+1-j}(\Psi) \Big) \bigg). \end{split} \end{equation} In view of \eqref{recurFn}, the second line in \eqref{morgan} is equal to $\mathfrak{p}artial_{\rm x} \big( \mathcal{F}_{N+2}(\Psi) \big)$, so that \eqref{nomura} holds for $n = N + 1$. This completes the proof of Lemma \ref{ConsFn} by induction. \end{proof} We finally turn to the proof of Theorem \ref{Invconserved} \begin{proof}[Proof of Theorem \ref{Invconserved}] We first assume that in addition $\Psi_0 \in X^9(\mathbb{R})$. In this situation, the maps $t \mapsto E_k(\Psi(\cdot, t))$ and $t \mapsto P_k(\Psi(\cdot, t))$ are in $\mathcal{C}^1(\mathbb{R}, \mathbb{R})$, while by the Sobolev embedding theorem, the map $t \mapsto \Psi(\cdot, t)$ is in $\mathcal{C}^1(\mathbb{R}, \mathcal{C}^8(\mathbb{R}))$ and $\mathcal{C}^0(\mathbb{R}, \mathcal{C}^{10}(\mathbb{R}))$. Hence, in view of Lemma \ref{Consfn}, \begin{equation} \label{mitsu} \mathfrak{p}artial_t \Big( f_n(\Psi) \Big) = i \mathfrak{p}artial_{\rm x} \Big( f_{n + 1}(\Psi) - \mathfrak{p}artial_{\rm x} \overline{\Psi} \mathcal{F}_n(\Psi) \Big) \ {\rm on} \ \mathbb{R}, \end{equation} for any $1 \leq n \leq 9$. 
Now consider, for instance, the map $t \mapsto E_2(\Psi(\cdot, t))$. In view of \eqref{renorE2}, its derivative is, at least formally, given by $$\frac{\rm d}{\rm dt} \Big( E_2(\Psi(\cdot, t)) \Big) = - \int_{\mathbb{R}} \Big( \partial_t f_5(\Psi) + 3 \partial_t f_3(\Psi) + \frac{3}{2} \partial_t f_1(\Psi) \Big),$$ so that by \eqref{mitsu}, we formally have $$\frac{\rm d}{\rm dt} \Big( E_2(\Psi(\cdot, t)) \Big) = - i \int_{\mathbb{R}} \partial_{\rm x} \bigg( f_6(\Psi) + 3 f_4(\Psi) + \frac{3}{2} f_2(\Psi) - \partial_{\rm x} \overline{\Psi} \Big( \mathcal{F}_5(\Psi) + 3 \mathcal{F}_3(\Psi) + \frac{3}{2} \mathcal{F}_1(\Psi) \Big) \bigg) = 0,$$ i.e. the quantity $E_2(\Psi)$ is formally conserved by \eqref{GP}. In particular, the proof of the conservation of $E_2$ along the Gross-Pitaevskii flow reduces to removing the integrability difficulties in the above formal argument. Therefore, given any $R > 1$, we introduce some cut-off function $\chi \in \mathcal{C}^\infty(\mathbb{R}, [0, 1])$ such that \begin{equation} \label{plouto} \chi = 1 \ {\rm on} \ (-1, 1), \ {\rm and} \ \chi = 0 \ {\rm on} \ \mathbb{R} \setminus (-2, 2), \end{equation} and denote \begin{equation} \label{dingo} \chi_R({\rm x}) = \chi \Big( \frac{{\rm x}}{R} \Big), \ \forall {\rm x} \in \mathbb{R}. \end{equation} Since the map $t \mapsto \Psi(\cdot, t)$ belongs to $\mathcal{C}^1(\mathbb{R}, X^9(\mathbb{R}))$, we then have \begin{equation} \label{mickey} \frac{\rm d}{\rm dt} \Big( E_2(\Psi(\cdot, t)) \Big) = \int_\mathbb{R} \partial_t \big( e_2(\Psi(\cdot,t)) \big) = \underset{R \to + \infty}{\lim} \int_\mathbb{R} \chi_R({\rm x}) \partial_t \big( e_2(\Psi({\rm x},t)) \big) d{\rm x}, \end{equation} where we let $$E_2(\psi) \equiv \int_\mathbb{R} e_2(\psi).$$ We now make use of formal relation \eqref{renorE2} to compute $$\int_\mathbb{R} \chi_R({\rm x}) \partial_t \big( e_2(\Psi({\rm x}, t)) \big) d{\rm x} = - \int_{\mathbb{R}} \chi_R \Big( \partial_t f_5(\Psi) + 3 \partial_t f_3(\Psi) + \frac{3}{2} \partial_t f_1(\Psi) \Big) + \int_{\mathbb{R}} \partial_{\rm x} \chi_R \ Q_1(\Psi, \partial_t \Psi),$$ where, using definitions \eqref{f1}, \eqref{f3}, \eqref{f5} and \eqref{E2}, and the Sobolev embedding theorem, the function $Q_1(\Psi, \partial_t \Psi)$ tends to $0$ at $\pm \infty$. Invoking \eqref{mitsu} and integrating by parts once more, we are led to \begin{equation} \label{minnie} \int_\mathbb{R} \chi_R({\rm x}) \partial_t \big( e_2(\Psi({\rm x}, t)) \big) d{\rm x} = \int_{\mathbb{R}} \partial_{\rm x} \chi_R \ Q_2(\Psi, \partial_t \Psi), \end{equation} where $$Q_2(\Psi, \partial_t \Psi) = Q_1(\Psi, \partial_t \Psi) + i f_6(\Psi) + 3 i f_4(\Psi) + \frac{3}{2} i f_2(\Psi) - i \partial_{\rm x} \overline{\Psi} \Big( \mathcal{F}_5(\Psi) + 3 \mathcal{F}_3(\Psi) + \frac{3}{2} \mathcal{F}_1(\Psi) \Big),$$ also tends to $0$ at $\pm \infty$.
Finally, notice that when $$f({\rm x}) \to 0, \ {\rm as} \ |{\rm x}| \to + \infty,$$ we have $$\int_{\mathbb{R}} \partial_{\rm x}^j \chi_R({\rm x}) f({\rm x}) d{\rm x} = \frac{1}{R^{j-1}} \bigg( \int_1^2 \partial_{\rm x}^j \chi({\rm x}) f(R {\rm x}) d{\rm x} + \int_{- 2}^{- 1} \partial_{\rm x}^j \chi({\rm x}) f(R {\rm x}) d{\rm x} \bigg) \to 0, \ {\rm as} \ R \to + \infty,$$ so that, in view of \eqref{mickey} and \eqref{minnie}, we obtain in the limit $R \to + \infty$, $$\frac{\rm d}{\rm dt} \Big( E_2(\Psi(\cdot, t)) \Big) = 0,$$ which gives \eqref{buffet} for the quantity $E_2$. Using the formal identities \eqref{renorE3}, \eqref{renorE4}, \eqref{renorP2}, \eqref{renorP3} and \eqref{renorP4}, the proofs are identical for the functionals $E_3$, $E_4$, $P_2$, $P_3$ and $P_4$, so that we omit them. In the general case where we only have $\Psi_0 \in X^k(\mathbb{R})$, we first approximate $\Psi_0$ by a sequence of functions $\psi_n$ in $X^9(\mathbb{R})$ with respect to the $X^k$-distance (see e.g. \cite{Gerard1}), and then use the continuity of the flow map $\Psi_0 \mapsto \Psi(\cdot, T)$ in $X^k(\mathbb{R})$ for any fixed $T$, and the continuity of the functionals $E_k$ and $p_k$ with respect to the $X^k$-distance. \end{proof} \section{Invariants in the transonic limit} \label{Rescaledinv} In this section, we analyse the expressions of the invariant quantities introduced in the previous section in the slow variables. Therefore, we introduce the quantities $\mathcal{E}_k(N_\varepsilon, \Theta_\varepsilon)$ and $\mathcal{P}_k(N_\varepsilon, \Theta_\varepsilon)$ defined by \begin{equation} \label{slowEk} E_k(\Psi) = \frac{\varepsilon^{2 k + 1}}{18} \mathcal{E}_k(N_\varepsilon, \Theta_\varepsilon), \end{equation} and \begin{equation} \label{slowpk} \begin{split} p_k(\Psi) = & \frac{\varepsilon^{2 k + 1}}{18} \mathcal{P}_k(N_\varepsilon, \Theta_\varepsilon). \end{split} \end{equation} We also set $$m_\varepsilon = 1 - \frac{\varepsilon^2}{6} N_\varepsilon.$$ We now derive the precise expansions of $\mathcal{E}_k$ and $\mathcal{P}_k$ and stress the relationship with the corresponding \eqref{KdV} invariants. \subsection{Formulae of the invariants in the rescaled variables} \label{Squale} For the $k^{th}$-order energies defined by \eqref{E2}, \eqref{E3} and \eqref{E4}, a direct computation provides, in view of definitions \eqref{slow-var} of $N_\varepsilon$ and $\Theta_\varepsilon$, the following expressions. \begin{lemma} \label{Esquale} Let $2 \leq k \leq 4$ and $\varepsilon > 0$.
Given any function $\Psi$ in $X^k(\mathbb{R})$ which satisfies \eqref{toutpetit}, and denoting $N_\mathfrak{v}arepsilon$ and $\mathbb{T}heta_\mathfrak{v}arepsilon$, the functions defined by \eqref{slow-var}, we have \begin{equation} \begin{split} \label{boE2} \mathcal{E}_2(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) & \equiv \frac{1}{8} \int_\mathbb{R} \bigg( \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 + m_\mathfrak{v}arepsilon \Big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^2}{6 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \Big)^2 - \frac{1}{6} N_\mathfrak{v}arepsilon^3 - \frac{m_\mathfrak{v}arepsilon}{2} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \bigg)\\ & + \frac{\mathfrak{v}arepsilon^2}{16} \int_\mathbb{R} \frac{1}{m_\mathfrak{v}arepsilon} \bigg( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon + \frac{m_\mathfrak{v}arepsilon}{6} \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \frac{\mathfrak{v}arepsilon^2}{12 m_\mathfrak{v}arepsilon} (\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2 \bigg)^2 - \frac{\mathfrak{v}arepsilon^2}{32} \int_\mathbb{R} \mathcal{R}_2(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{split} \end{equation} with \begin{equation} \label{R2eps} \mathcal{R}_2(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \equiv \frac{N_\mathfrak{v}arepsilon (\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2}{m_\mathfrak{v}arepsilon}, \end{equation} \begin{equation} \begin{split} \label{boE3} \mathcal{E}_3(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) = & \frac{1}{8} \int_\mathbb{R} \bigg( m_\mathfrak{v}arepsilon \Big( \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^2}{72} \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3 - \frac{\mathfrak{v}arepsilon^2 \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon}{4 m_\mathfrak{v}arepsilon} - \frac{\mathfrak{v}arepsilon^2 \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{4 m_\mathfrak{v}arepsilon} - \frac{\mathfrak{v}arepsilon^4 (\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{48 m_\mathfrak{v}arepsilon^2} \Big)^2\\ & + \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 - \frac{5}{6} \Big( 2 \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \Big) + \frac{5}{144} \Big( \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4\\ & + 6 N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + N_\mathfrak{v}arepsilon^4 \Big) \bigg) + \frac{\mathfrak{v}arepsilon^2}{16} \int_\mathbb{R} \frac{1}{m_\mathfrak{v}arepsilon} \bigg( \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^2}{24} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \frac{m_\mathfrak{v}arepsilon}{2} 
\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon\\ & + \frac{\mathfrak{v}arepsilon^2}{4 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon + \frac{\mathfrak{v}arepsilon^4}{48 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^3 \bigg)^2 + \frac{\mathfrak{v}arepsilon^2}{96} \int_\mathbb{R} \mathcal{R}_3(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{split} \end{equation} where \begin{equation} \begin{split} \label{R3eps} & \mathcal{R}_3(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) = \frac{5}{3} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{5}{4} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - 5 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{5}{12} N_\mathfrak{v}arepsilon^3 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & - \frac{5}{18} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 - \frac{5}{12} N_\mathfrak{v}arepsilon^3 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{5}{m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 + \frac{5}{4 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 + \frac{\mathfrak{v}arepsilon^2}{6} \Big( \frac{5}{24} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4\\ & - \frac{5}{2 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{25}{24 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 - \frac{5}{m_\mathfrak{v}arepsilon^2} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon - \frac{5 \mathfrak{v}arepsilon ^2}{24 m_\mathfrak{v}arepsilon^3} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 \Big). 
\end{split} \end{equation} and \begin{equation} \begin{split} \label{boE4} \mathcal{E}_4 & (N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) = \frac{1}{8} \int_\mathbb{R} \bigg( m_\mathfrak{v}arepsilon \Big( \mathfrak{p}artial_x^4 \mathbb{T}heta - \frac{\mathfrak{v}arepsilon^2}{2 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^2}{3 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^2}{3 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon\\ & - \frac{\mathfrak{v}arepsilon^2}{12} \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^4}{24 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^4}{12 m_\mathfrak{v}arepsilon^2} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{\mathfrak{v}arepsilon^4}{216 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3\\ & - \frac{\mathfrak{v}arepsilon^6}{144 m_\mathfrak{v}arepsilon^3} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^3 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \Big)^2 + \big( \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon \big)^2 - \frac{7}{6} \Big( N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 + m_\mathfrak{v}arepsilon N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & + 2 m_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon \Big) + \frac{35}{72} \Big( N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 + m_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + 4 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon\\ & + m_\mathfrak{v}arepsilon N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \Big) -\frac{7}{864} \Big( N_\mathfrak{v}arepsilon^5 + 5 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 + 10 N_\mathfrak{v}arepsilon^3 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \Big) \bigg)\\ & + \frac{\mathfrak{v}arepsilon^2}{16} \int_\mathbb{R} \frac{1}{m_\mathfrak{v}arepsilon} \Big( \mathfrak{p}artial_x^4 N_\mathfrak{v}arepsilon + \frac{m_\mathfrak{v}arepsilon}{2} \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \frac{2}{3} m_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon 
\mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{\mathfrak{v}arepsilon^2}{3 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon + \frac{\mathfrak{v}arepsilon^2}{4 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2\\ & - \frac{\mathfrak{v}arepsilon^2}{12} \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{\mathfrak{v}arepsilon^2}{6} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{\mathfrak{v}arepsilon^2}{432} m_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 - \frac{\mathfrak{v}arepsilon^4}{144 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & + \frac{\mathfrak{v}arepsilon^4}{8 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon + \frac{5 \mathfrak{v}arepsilon^6}{576 m_\mathfrak{v}arepsilon^3} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 \Big)^2 + \frac{\mathfrak{v}arepsilon^2}{48} \int_\mathbb{R} \mathcal{R}_4(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{split} \end{equation} where \begin{equation} \begin{split} \label{R4eps} & \mathcal{R}_4(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) = \frac{35}{432} m_\mathfrak{v}arepsilon N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 + \frac{7}{864} m_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^6 - \frac{35}{18} m_\mathfrak{v}arepsilon N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & - \frac{35}{9} N_\mathfrak{v}arepsilon^2 \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{35}{72} \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 - \frac{175}{72} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{35}{24} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & + \frac{91}{24} \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \frac{35}{6} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{21}{12} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \frac{2}{3} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon\\ & + \frac{7}{2} \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon 
\big)^3 - \frac{35}{144 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon^3 \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 - \frac{35}{72 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 + \frac{35}{24 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 + \frac{\mathfrak{v}arepsilon^2}{4} \Big( \frac{7}{2 m_\mathfrak{v}arepsilon^2} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^3\\ & - \frac{7 m_\mathfrak{v}arepsilon}{46656} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^6 - \frac{35}{54} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 - \frac{35}{432} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 + \frac{35}{72 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & + \frac{7}{ 3 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 - \frac{35}{12 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{35}{18 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & - \frac{35}{12 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta _\mathfrak{v}arepsilon \big)^2 - \frac{35}{3 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{35}{54 m_\mathfrak{v}arepsilon^2} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 \Big)\\ & + \frac{\mathfrak{v}arepsilon^4}{144} \Big( \frac{35}{8 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{35}{96 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^4 + \frac{35}{m_\mathfrak{v}arepsilon^2} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & - \frac{175}{72 m_\mathfrak{v}arepsilon^3} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 + \frac{147}{2 m_\mathfrak{v}arepsilon^3} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \Big) + \frac{\mathfrak{v}arepsilon^6}{6912} \Big( \frac{245}{m_\mathfrak{v}arepsilon^3} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 \big( \mathfrak{p}artial_x 
\mathbb{T}heta_\mathfrak{v}arepsilon \big)^2\\ & - \frac{497}{5 m_\mathfrak{v}arepsilon^4} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^6 \Big) - \frac{7 \mathfrak{v}arepsilon^8}{2560 m_\mathfrak{v}arepsilon^5} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^6. \end{split} \end{equation} \end{lemma} Similarly, for the $k^{th}$-renormalized momenta, we compute \begin{lemma} \label{Psquale} Let $2 \leq k \leq 4$ and $\mathfrak{v}arepsilon > 0$. Given any function $\Psi$ in $X^k(\mathbb{R})$ which satisfies \eqref{toutpetit}, and denoting $N_\mathfrak{v}arepsilon$ and $\mathbb{T}heta_\mathfrak{v}arepsilon$, the functions defined by \eqref{slow-var}, we have \begin{equation} \label{boP2eps} \mathcal{P}_2(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \equiv \frac{1}{4 \mathfrak{s}qrt{2}} \int_\mathbb{R} \bigg( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{m_\mathfrak{v}arepsilon}{12} \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3 - \frac{1}{4} N_\mathfrak{v}arepsilon^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \bigg) - \frac{\mathfrak{v}arepsilon^2}{32 \mathfrak{s}qrt{2}} \int_\mathbb{R} r_2(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{equation} with \begin{equation} \label{r2eps} r_2(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \equiv \frac{(\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{m_\mathfrak{v}arepsilon}, \end{equation} \begin{equation} \label{boP3eps} \begin{split} \mathcal{P}_3(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \equiv & \frac{1}{4 \mathfrak{s}qrt{2}} \int_\mathbb{R} \bigg( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{5}{12} \Big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + 2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \Big)\\ & + \frac{5}{72} \Big( m_\mathfrak{v}arepsilon N_\mathfrak{v}arepsilon (\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon)^3 + N_\mathfrak{v}arepsilon^3 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \Big) \bigg) + \frac{\mathfrak{v}arepsilon^2}{48 \mathfrak{s}qrt{2}} \int_\mathbb{R} r_3(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{split} \end{equation} with \begin{equation} \begin{split} \label{r3eps} & r_3(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \equiv \frac{5}{6} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \frac{5}{3} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{m_\mathfrak{v}arepsilon}{72} \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^5 + \frac{5}{4 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon\\ & - \frac{5}{m_\mathfrak{v}arepsilon} 
\mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{5}{2 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{5 \mathfrak{v}arepsilon^2}{72 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3 - \frac{5 \mathfrak{v}arepsilon^2}{18 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon\big)^3 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon\\ & + \frac{25 \mathfrak{v}arepsilon^4}{864 m_\mathfrak{v}arepsilon^3} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon, \end{split} \end{equation} and \begin{equation} \begin{split} \label{boP4eps} \mathcal{P}_4(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) & \equiv \frac{1}{4 \mathfrak{s}qrt{2}} \int_\mathbb{R} \bigg( \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^4 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{7}{12} \Big( 2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon + \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon + m_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \Big)\\ & + \frac{35}{72} \Big( N_\mathfrak{v}arepsilon^2 \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon + \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + m_\mathfrak{v}arepsilon N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \Big)\\ & - \frac{7}{1728} \Big( \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^5 + 10 m_\mathfrak{v}arepsilon N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3 + 5 N_\mathfrak{v}arepsilon^4 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \Big) \bigg) + \frac{\mathfrak{v}arepsilon^2}{48 \mathfrak{s}qrt{2}} \int_\mathbb{R} r_4(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon), \end{split} \end{equation} with \begin{equation} \begin{split} \label{r4eps} & r_4(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \equiv 7 \Big( \frac{5}{12 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{5}{6 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{5}{48 m_\mathfrak{v}arepsilon} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon\\ & + \frac{1}{4} \mathfrak{p}artial_x^2 
N_\mathfrak{v}arepsilon \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{5}{12} N_\mathfrak{v}arepsilon \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{1}{2} \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon -\frac{25}{144} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3\\ & + \frac{1}{216} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^5 - \frac{1}{12} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3 - \frac{5 m_\mathfrak{v}arepsilon}{72} \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3 \big( \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 - \frac{5}{36 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^3 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon\\ & + \frac{5}{72 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3 - \frac{1}{m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{1}{2 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon + \frac{5}{18 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^3 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon \Big)\\ & + \frac{\mathfrak{v}arepsilon^2}{4} \Big( \frac{245}{432 m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^4 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{21}{1296} N_\mathfrak{v}arepsilon^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^5 -\frac{m_\mathfrak{v}arepsilon}{1296} \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^7 - \frac{35}{36 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^3\\ & - \frac{35}{6 m_\mathfrak{v}arepsilon} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon - \frac{35}{12 m_\mathfrak{v}arepsilon} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 + \frac{7}{2 m_\mathfrak{v}arepsilon^2} \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 \mathbb{T}heta_\mathfrak{v}arepsilon\\ & - \frac{7}{m_\mathfrak{v}arepsilon^2} \big( \mathfrak{p}artial_x N_\mathfrak{v}arepsilon \big)^2 \mathfrak{p}artial_x^2 N_\mathfrak{v}arepsilon \mathfrak{p}artial_x^3 \mathbb{T}heta_\mathfrak{v}arepsilon +\frac{7}{2 m_\mathfrak{v}arepsilon^2} \big( 
\partial_x^2 N_\varepsilon \big)^3 \partial_x \Theta_\varepsilon - \frac{175}{216 m_\varepsilon^3} \big( \partial_x N_\varepsilon \big)^4 \partial_x \Theta_\varepsilon - \frac{7}{108} \partial_x^2 N_\varepsilon \big( \partial_x \Theta_\varepsilon \big)^5 \Big)\\ & + \frac{\varepsilon^4}{48} \Big( \frac{35}{m_\varepsilon^3} \big( \partial_x N_\varepsilon \big)^3 \partial_x^2 N_\varepsilon \partial_x^2 \Theta_\varepsilon - \frac{7}{72 m_\varepsilon} \big( \partial_x N_\varepsilon \big)^2 \big( \partial_x \Theta_\varepsilon \big)^5 - \frac{25}{6 m_\varepsilon^2} \big( \partial_x N_\varepsilon \big)^3 \big( \partial_x \Theta_\varepsilon \big)^2 \partial_x^2 \Theta_\varepsilon\\ & - \frac{5}{18 m_\varepsilon^2} \big( \partial_x N_\varepsilon \big)^2 \partial_x^2 N_\varepsilon \big( \partial_x \Theta_\varepsilon \big)^3 + \frac{49}{2 m_\varepsilon^3} \big( \partial_x N_\varepsilon \big)^2 \big( \partial_x^2 N_\varepsilon \big)^2 \partial_x \Theta_\varepsilon \Big) + \frac{\varepsilon^6}{768} \Big( \frac{5}{3 m_\varepsilon^3} \big( \partial_x N_\varepsilon \big)^4 \big( \partial_x \Theta_\varepsilon \big)^3\\ & + \frac{252}{5 m_\varepsilon^4} \big( \partial_x N_\varepsilon \big)^5 \partial_x^2 \Theta_\varepsilon - \frac{49 \varepsilon^2}{10 m_\varepsilon^5} \big( \partial_x N_\varepsilon \big)^6 \partial_x \Theta_\varepsilon \Big). \end{split} \end{equation} \end{lemma}
\subsection{Relating the \eqref{GP} invariants to the \eqref{KdV} invariants} \label{Kdvpitre}
Recall that the Korteweg-de Vries equation is integrable, and admits an infinite number of invariants (see \cite{GarKrMi1}). The first four invariants for \eqref{KdV} are given by \begin{align} \label{e0KdV} E^{KdV}_0(v) \equiv & \frac{1}{2} \int_\mathbb{R} v^2,\\ \label{e1KdV} E^{KdV}_1(v) \equiv & \frac{1}{2} \int_\mathbb{R} \big( \partial_x v \big)^2 - \frac{1}{6} \int_\mathbb{R} v^3,\\ \label{e2KdV} E^{KdV}_2(v) \equiv & \frac{1}{2} \int_\mathbb{R} \big( \partial_x^2 v \big)^2 - \frac{5}{6} \int_\mathbb{R} v \big( \partial_x v \big)^2 + \frac{5}{72} \int_\mathbb{R} v^4, \end{align} and \begin{equation} \label{e3KdV} E^{KdV}_3(v) \equiv \frac{1}{2} \int_\mathbb{R} \big( \partial_x^3 v \big)^2 - \frac{7}{6} \int_\mathbb{R} v \big( \partial_x^2 v \big)^2 + \frac{35}{36} \int_\mathbb{R} v^2 \big( \partial_x v \big)^2 - \frac{7}{216} \int_\mathbb{R} v^5. \end{equation} Notice that the invariants $E_k^{KdV}$ are bounded in terms of the $H^k$-norm, since we have \begin{equation} \label{bornation1} \big| E_k^{KdV}(v) \big| \leq K \big( \| v \|_{H^{k-1}(\mathbb{R})} \big) \| v \|_{H^k(\mathbb{R})}^2, \end{equation} where $K \big( \| v \|_{H^{k-1}(\mathbb{R})} \big)$ is some constant depending only on the $H^{k-1}$-norm of $v$.
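For instance, in the simplest case $k = 1$, combining the elementary bounds $\| v \|_{L^\infty(\mathbb{R})} \leq \| v \|_{H^1(\mathbb{R})}$ and $\| v \|_{L^2(\mathbb{R})} \leq \| v \|_{H^1(\mathbb{R})}$, valid with the convention $\| v \|_{H^1(\mathbb{R})}^2 = \| v \|_{L^2(\mathbb{R})}^2 + \| \partial_x v \|_{L^2(\mathbb{R})}^2$, we obtain
$$\big| E_1^{KdV}(v) \big| \leq \frac{1}{2} \| \partial_x v \|_{L^2(\mathbb{R})}^2 + \frac{1}{6} \| v \|_{L^\infty(\mathbb{R})} \| v \|_{L^2(\mathbb{R})}^2 \leq \Big( \frac{1}{2} + \frac{1}{6} \| v \|_{L^2(\mathbb{R})} \Big) \| v \|_{H^1(\mathbb{R})}^2,$$
so that \eqref{bornation1} holds for $k = 1$ with $K \big( \| v \|_{L^2(\mathbb{R})} \big) = \frac{1}{2} + \frac{1}{6} \| v \|_{L^2(\mathbb{R})}$.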
Another important observation concerning the \eqref{KdV} invariants is that, given any function $v \in H^k(\mathbb{R})$, the $H^k$-norm of $v$ is controlled by the invariants $E_0^{KdV}, \ldots, E_k^{KdV}$. This claim is straightforward for $k = 0$, whereas for $k \geq 1$, we have
\begin{lemma} \label{Kdvcontrol} Let $1 \leq k \leq 3$ be given. Given any function $v \in H^k(\mathbb{R})$, there exists some positive constant $K = K \big( \| v \|_{H^{k-1}(\mathbb{R})} \big)$, depending only on the $H^{k-1}$-norm of $v$, such that \begin{equation} \label{bornation2} \| \partial_x^k v \|_{L^2(\mathbb{R})}^2 \leq K \Big( \| v \|_{H^{k-1}(\mathbb{R})}^2 + \big| E_k^{KdV}(v) \big| \Big). \end{equation} \end{lemma}
\begin{proof} For $k = 2$ and $k = 3$, the proof of \eqref{bornation2} is a direct application of the Sobolev embedding theorem to formulae \eqref{e2KdV} and \eqref{e3KdV}. For $k = 1$, we have, in view of \eqref{e1KdV}, $$\| \partial_x v \|_{L^2(\mathbb{R})}^2 \leq \frac{1}{3} \| v \|_{L^2(\mathbb{R})}^2 \| \partial_x v \|_{L^2(\mathbb{R})} + 2 \big| E_1^{KdV}(v) \big|,$$ so that by the inequality $2 a b \leq a^2 + b^2$, $$\| \partial_x v \|_{L^2(\mathbb{R})}^2 \leq \frac{1}{9} \| v \|_{L^2(\mathbb{R})}^4 + 4 \big| E_1^{KdV}(v) \big| \leq K \Big( \| v \|_{L^2(\mathbb{R})}^2 + \big| E_1^{KdV}(v) \big| \Big),$$ where $K = \max \big\{ \frac{1}{9} \| v \|_{L^2(\mathbb{R})}^2, 4 \big\}$. \end{proof}
We complete this subsection by showing that the \eqref{KdV} invariant $E_{k-1}^{KdV}$ is related to the \eqref{GP} invariant quantities $\mathcal{E}_k \pm \sqrt{2} \mathcal{P}_k$. For that purpose, assume that $$N_\varepsilon \to N_0 \ {\rm in} \ H^1(\mathbb{R}), \ {\rm and} \ \partial_x \Theta_\varepsilon \to \partial_x \Theta_0 \ {\rm in} \ L^2(\mathbb{R}), \ {\rm as} \ \varepsilon \to 0.$$ For $k = 1$, we notice, in view of expansions \eqref{sirbu1} and \eqref{sirbu2}, that \begin{equation} \label{limit1} \mathcal{E}_1(N_\varepsilon, \Theta_\varepsilon) \pm \sqrt{2} \mathcal{P}_1(N_\varepsilon, \Theta_\varepsilon) \to \frac{1}{8} \int_\mathbb{R} \big( N_0 \pm \partial_x \Theta_0 \big)^2 = E_0^{KdV} \Big( \frac{N_0 \pm \partial_x \Theta_0}{2} \Big), \ {\rm as} \ \varepsilon \to 0. \end{equation} Similarly, it follows from Lemmas \ref{Esquale} and \ref{Psquale} that
\begin{prop} \label{Limitep} Let $1 \leq k \leq 4$, and let $\Psi$ in $X^k(\mathbb{R})$ satisfy \eqref{toutpetit}.
Denoting by $N_\varepsilon$ and $\Theta_\varepsilon$ the variables defined by \eqref{slow-var}, and assuming that $$N_\varepsilon \to N_0 \ {\rm in} \ H^k(\mathbb{R}), \ {\rm and} \ \partial_x \Theta_\varepsilon \to \partial_x \Theta_0 \ {\rm in} \ H^{k-1}(\mathbb{R}), \ {\rm as} \ \varepsilon \to 0,$$ we have $$\mathcal{E}_k(N_\varepsilon, \Theta_\varepsilon) \pm \sqrt{2} \mathcal{P}_k(N_\varepsilon, \Theta_\varepsilon) \to E_{k-1}^{KdV} \Big( \frac{N_0 \pm \partial_x \Theta_0}{2} \Big), \ {\rm as} \ \varepsilon \to 0.$$ \end{prop}
\begin{remark} We believe that Proposition \ref{Limitep} might be extended to higher order \eqref{GP} and \eqref{KdV} invariants, provided one were first able to compute some expressions for them. \end{remark}
\begin{proof} Combining the expansions of Lemmas \ref{Esquale} and \ref{Psquale} with \eqref{e1KdV}, \eqref{e2KdV} and \eqref{e3KdV}, and using the Sobolev embedding theorem, the proof reduces to a direct computation similar to the proof of \eqref{limit1}. \end{proof}
\subsection{$H^k$-estimates for $N_\varepsilon$ and $\partial_x \Theta_\varepsilon$} \label{Lecontrole}
In the same spirit as Lemma \ref{Kdvcontrol}, which allows us to bound the $H^k$-norms by the \eqref{KdV} invariants, we next show that the $H^k$-norms of $N_\varepsilon$ and $\partial_x \Theta_\varepsilon$ are controlled by the quantities $\mathcal{E}_k(N_\varepsilon, \Theta_\varepsilon)$ in the limit $\varepsilon \to 0$. More precisely, we have
\begin{lemma} \label{Controlhk} Let $1 \leq k \leq 4$ be given, and assume that there exists some positive constant $A$ such that \begin{equation} \label{hypoener} \mathcal{E}_j(N_\varepsilon, \Theta_\varepsilon) \leq A, \end{equation} for any $1 \leq j \leq k$. Then, there exist positive numbers $\varepsilon_A$ and $K_A$, possibly depending on $A$, such that \begin{equation} \label{bornitude} \| N_\varepsilon \|_{H^{k-1}(\mathbb{R})} + \varepsilon \| \partial_x^k N_\varepsilon \|_{L^2(\mathbb{R})} + \| \partial_x \Theta_\varepsilon \|_{H^{k-1}(\mathbb{R})} \leq K_A, \end{equation} for any $0 < \varepsilon < \varepsilon_A$. \end{lemma}
\begin{remark} We again believe that Lemma \ref{Controlhk} might be extended to higher order \eqref{GP} and \eqref{KdV} invariants, which would provide bounds for higher Sobolev norms of $N_\varepsilon$ and $\partial_x \Theta_\varepsilon$. \end{remark}
\begin{proof} We split the proof into four steps according to the value of $k$.
\begin{step} \label{one-way} $k = 1$. \end{step}
In view of \eqref{sirbu1}, assumption \eqref{hypoener} may be written as \begin{equation} \label{hypoener1} \int_\mathbb{R} \Big( m_\varepsilon (\partial_x \Theta_\varepsilon)^2 + N_\varepsilon^2 + \frac{\varepsilon^2 (\partial_x N_\varepsilon)^2}{2 m_\varepsilon} \Big) \leq 8 A, \end{equation} so that \eqref{bornitude} follows once lower and upper uniform bounds on $m_\varepsilon$ are established.
Indeed, if we can choose $\varepsilon_A$ so that \begin{equation} \label{petit} \frac{1}{2} \leq m_\varepsilon \leq 2, \end{equation} for any $0 < \varepsilon < \varepsilon_A$, then \eqref{bornitude} follows from \eqref{hypoener1} with $K_A = 24 \sqrt{A}$. Hence, the proof reduces to showing \eqref{petit} for some suitable choice of $\varepsilon_A$.
In order to prove \eqref{petit}, we apply the H\"older inequality and assumption \eqref{hypoener} to obtain $$|\Psi(x) - \Psi(x_0)| \leq \sqrt{2} |x - x_0|^\frac{1}{2} E(\Psi)^\frac{1}{2} \leq \frac{\varepsilon^\frac{3}{2}}{3} \mathcal{E}_1(N_\varepsilon, \Theta_\varepsilon)^\frac{1}{2} \leq \frac{\sqrt{A}}{3} \varepsilon^\frac{3}{2},$$ for any point $x_0 \in \mathbb{R}$ and any $x_0 - 1 \leq x \leq x_0 + 1$, so that $$\big| 1 - |\Psi(x_0)| \big| - \frac{\sqrt{A}}{3} \varepsilon^\frac{3}{2} \leq \big| 1 - |\Psi(x)| \big|.$$ Setting $\varepsilon_A = (16 A)^{- \frac{1}{3}}$, and assuming by contradiction that \eqref{petit} does not hold at the point $x_0$, we obtain that \begin{equation} \label{hyporealestate} \big| 1 - |\Psi(x_0)| \big| = \big| 1 - \sqrt{m_\varepsilon(x_0)} \big| \geq 1 - \frac{1}{\sqrt{2}} \geq \frac{1}{12} = \frac{\sqrt{A}}{3} \varepsilon_A^\frac{3}{2} \geq \frac{\sqrt{A}}{3} \varepsilon^\frac{3}{2}, \end{equation} for any $0 < \varepsilon < \varepsilon_A$, so that $$\Big( \big| 1 - |\Psi(x_0)| \big| - \frac{\sqrt{A}}{3} \varepsilon^\frac{3}{2} \Big)^2 \leq \int_{x_0 - 1}^{x_0 + 1} \big( 1 - |\Psi(x)|^2 \big)^2 dx \leq \frac{2 \varepsilon^3}{9} \mathcal{E}_1(N_\varepsilon, \Theta_\varepsilon) \leq \frac{2 A}{9} \varepsilon^3,$$ and $$\big| 1 - |\Psi(x_0)| \big| \leq \sqrt{A} \varepsilon^\frac{3}{2} \leq \frac{1}{4}.$$ It follows that $$\frac{9}{16} \leq m_\varepsilon(x_0) = |\Psi(x_0)|^2 \leq \frac{25}{16},$$ which contradicts the assumption that \eqref{petit} does not hold at the point $x_0$. This completes the proof of \eqref{petit}, and of Step \ref{one-way}.
\begin{step} \label{two-ways} $k = 2$. \end{step}
Notice first that, in view of the inductive nature of assumption \eqref{hypoener}, and of Step \ref{one-way}, we have already established \eqref{bornitude} for $k = 1$.
Combining this estimate with the Sobolev embedding theorem, bounds \eqref{petit} and formulae \eqref{boE2} and \eqref{R2eps}, assumption \eqref{hypoener} may be written as \begin{equation} \begin{split} \label{hypoener2} & \int_\mathbb{R} \bigg( \big( \partial_x N_\varepsilon \big)^2 + \Big( \partial_x^2 \Theta_\varepsilon - \frac{\varepsilon^2}{6 m_\varepsilon} \partial_x N_\varepsilon \partial_x \Theta_\varepsilon \Big)^2 + \varepsilon^2 \Big( \partial_x^2 N_\varepsilon + \frac{m_\varepsilon}{6} \big( \partial_x \Theta_\varepsilon \big)^2 + \frac{\varepsilon^2}{12 m_\varepsilon} (\partial_x N_\varepsilon)^2 \Big)^2 \bigg)\\ & \leq K \Big( 1 + \| N_\varepsilon \|_{H^1(\mathbb{R})} \big( \| N_\varepsilon \|_{L^2(\mathbb{R})}^2 + \| \partial_x \Theta_\varepsilon \|_{L^2(\mathbb{R})}^2 + \varepsilon^2 \| \partial_x N_\varepsilon \|_{L^2(\mathbb{R})}^2 \big) \Big) \leq K_A \Big( 1 + \| N_\varepsilon \|_{H^1(\mathbb{R})} \Big). \end{split} \end{equation} This first gives that \begin{equation} \label{unicredit1} \int_\mathbb{R} \big( \partial_x N_\varepsilon \big)^2 \leq K_A, \end{equation} so that by \eqref{hypoener2} and the Sobolev embedding theorem, $$\| \partial_x^2 \Theta_\varepsilon \|_{L^2(\mathbb{R})} \leq K \Big( 1 + \varepsilon^2 \| \partial_x N_\varepsilon \partial_x \Theta_\varepsilon \|_{L^2(\mathbb{R})} \Big) \leq K_A \Big( 1 + \varepsilon^2 \| \partial_x \Theta_\varepsilon \|_{H^1(\mathbb{R})} \Big).$$ Hence, we obtain $$\int_\mathbb{R} \big( \partial_x^2 \Theta_\varepsilon \big)^2 \leq K_A,$$ provided $\varepsilon_A$ is chosen sufficiently small. In view of \eqref{hypoener2}, \eqref{unicredit1} and the Sobolev embedding theorem, it follows that $$\varepsilon \| \partial_x^2 N_\varepsilon \|_{L^2(\mathbb{R})} \leq K \Big( 1 + \varepsilon \| \partial_x \Theta_\varepsilon \|_{L^4(\mathbb{R})}^2 + \varepsilon^3 \| \partial_x N_\varepsilon \|_{H^1(\mathbb{R})} \| \partial_x N_\varepsilon \|_{L^2(\mathbb{R})} \Big) \leq K_A \Big( 1 + \varepsilon^3 \| \partial_x N_\varepsilon \|_{H^1(\mathbb{R})} \Big),$$ which completes the proof of \eqref{bornitude}, again choosing $\varepsilon_A$ sufficiently small.
\begin{step} \label{freeway} $k = 3$. \end{step}
Notice again that, in view of the inductive nature of assumption \eqref{hypoener}, and of Step \ref{two-ways}, we have already established \eqref{bornitude} for $k = 2$.
Combining this estimate with the Sobolev embedding theorem, bounds \eqref{petit} and formulae \eqref{boE3} and \eqref{R3eps}, assumption \eqref{hypoener} may be written as \begin{align*} & \int_\mathbb{R} \bigg( \big( \partial_x^2 N_\varepsilon \big)^2 + \Big( \partial_x^3 \Theta_\varepsilon - \frac{\varepsilon^2}{72} \big( \partial_x \Theta_\varepsilon \big)^3 - \frac{\varepsilon^2 \partial_x N_\varepsilon \partial_x^2 \Theta_\varepsilon}{4 m_\varepsilon} - \frac{\varepsilon^2 \partial_x^2 N_\varepsilon \partial_x \Theta_\varepsilon}{4 m_\varepsilon} - \frac{\varepsilon^4 (\partial_x N_\varepsilon)^2 \partial_x \Theta_\varepsilon}{48 m_\varepsilon^2} \Big)^2\\ & + \varepsilon^2 \Big( \partial_x^3 N_\varepsilon - \frac{\varepsilon^2}{24} \partial_x N_\varepsilon \big( \partial_x \Theta_\varepsilon \big)^2 + \frac{m_\varepsilon}{2} \partial_x \Theta_\varepsilon \partial_x^2 \Theta_\varepsilon + \frac{\varepsilon^2}{4 m_\varepsilon} \partial_x N_\varepsilon \partial_x^2 N_\varepsilon + \frac{\varepsilon^4}{48 m_\varepsilon^2} \big( \partial_x N_\varepsilon \big)^3 \Big)^2 \bigg)\\ & \leq K_A \Big( 1 + \| \partial_x N_\varepsilon \|_{H^1(\mathbb{R})} + \| \partial_x^2 \Theta_\varepsilon \|_{H^1(\mathbb{R})} + \varepsilon \| \partial_x^3 N_\varepsilon \|_{L^2(\mathbb{R})} \Big). \end{align*} Invoking once again estimates \eqref{bornitude} (for $k = 2$) and \eqref{petit} to bound the remainder terms in the above integral, we are led to $$\int_\mathbb{R} \bigg( \big( \partial_x^2 N_\varepsilon \big)^2 + \big( \partial_x^3 \Theta_\varepsilon \big)^2 + \varepsilon^2 \big( \partial_x^3 N_\varepsilon \big)^2 \bigg) \leq K_A \Big( 1 + \| \partial_x N_\varepsilon \|_{H^1(\mathbb{R})} + \| \partial_x^2 \Theta_\varepsilon \|_{H^1(\mathbb{R})} + \varepsilon \| \partial_x^3 N_\varepsilon \|_{L^2(\mathbb{R})} \Big),$$ which completes the proof of Step \ref{freeway}.
\begin{step} \label{forwhat} $k = 4$. \end{step}
Notice one last time that, in view of the inductive nature of assumption \eqref{hypoener}, and of Step \ref{freeway}, we have already established \eqref{bornitude} for $k = 3$.
Combining this estimate with the Sobolev embedding theorem, bounds \eqref{petit} and formulae \eqref{boE4} and \eqref{R4eps}, assumption \eqref{hypoener} may be written as \begin{align*} & \int_\mathbb{R} \bigg( \big( \partial_x^3 N_\varepsilon \big)^2 + \Big( \partial_x^4 \Theta_\varepsilon - \frac{\varepsilon^2}{2 m_\varepsilon} \partial_x^2 N_\varepsilon \partial_x^2 \Theta_\varepsilon - \frac{\varepsilon^2}{3 m_\varepsilon} \partial_x N_\varepsilon \partial_x^3 \Theta_\varepsilon - \frac{\varepsilon^2}{3 m_\varepsilon} \partial_x^3 N_\varepsilon \partial_x \Theta_\varepsilon\\ & - \frac{\varepsilon^2}{12} \big( \partial_x \Theta_\varepsilon \big)^2 \partial_x^2 \Theta_\varepsilon - \frac{\varepsilon^4}{24 m_\varepsilon^2} \big( \partial_x N_\varepsilon \big)^2 \partial_x^2 \Theta_\varepsilon - \frac{\varepsilon^4}{12 m_\varepsilon^2} \partial_x N_\varepsilon \partial_x^2 N_\varepsilon \partial_x \Theta_\varepsilon + \frac{\varepsilon^4}{216 m_\varepsilon} \partial_x N_\varepsilon \big( \partial_x \Theta_\varepsilon \big)^3\\ & - \frac{\varepsilon^6}{144 m_\varepsilon^3} \big( \partial_x N_\varepsilon \big)^3 \partial_x \Theta_\varepsilon \Big)^2 \bigg) + \varepsilon^2 \int_\mathbb{R} \Big( \partial_x^4 N_\varepsilon + \frac{m_\varepsilon}{2} \big( \partial_x^2 \Theta_\varepsilon \big)^2 + \frac{2}{3} m_\varepsilon \partial_x \Theta_\varepsilon \partial_x^3 \Theta_\varepsilon + \frac{\varepsilon^2}{3 m_\varepsilon} \partial_x N_\varepsilon \partial_x^3 N_\varepsilon\\ & + \frac{\varepsilon^2}{4 m_\varepsilon} \big( \partial_x^2 N_\varepsilon \big)^2 - \frac{\varepsilon^2}{12} \partial_x^2 N_\varepsilon \big( \partial_x \Theta_\varepsilon \big)^2 - \frac{\varepsilon^2}{6} \partial_x N_\varepsilon \partial_x \Theta_\varepsilon \partial_x^2 \Theta_\varepsilon - \frac{\varepsilon^2}{432} m_\varepsilon \big( \partial_x \Theta_\varepsilon \big)^4\\ & - \frac{\varepsilon^4}{144 m_\varepsilon} \big( \partial_x N_\varepsilon \big)^2 \big( \partial_x \Theta_\varepsilon \big)^2 + \frac{\varepsilon^4}{8 m_\varepsilon^2} \big( \partial_x N_\varepsilon \big)^2 \partial_x^2 N_\varepsilon + \frac{5 \varepsilon^6}{576 m_\varepsilon^3} \big( \partial_x N_\varepsilon \big)^4 \Big)^2\\ & \leq K_A \bigg( 1 + \| \partial_x^2 N_\varepsilon \|_{H^1(\mathbb{R})} + \| \partial_x^3 \Theta_\varepsilon \|_{H^1(\mathbb{R})} \bigg),
\end{align*} so that we similarly obtain $$\int_\mathbb{R} \bigg( \big( \mathfrak{p}artial_x^3 N_\mathfrak{v}arepsilon \big)^2 + \big( \mathfrak{p}artial_x^4 \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \bigg) \leq K_A,$$ then, combining with the Sobolev embedding theorem, we also have $$\mathfrak{v}arepsilon \| \mathfrak{p}artial_x^4 N_\mathfrak{v}arepsilon \|_{L^2(\mathbb{R})} \leq K_A.$$ This completes the proofs of Step \ref{forwhat} and Lemma \ref{Controlhk}. \end{proof} An important consequence of Lemma \ref{Controlhk} which refines the result of Lemma \ref{Limitep} is \begin{prop} \label{Controluv} Let $1 \leq k \leq 3$. Given some positive constant $A$, consider some functions $N_\mathfrak{v}arepsilon$ and $\mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon$ which satisfy \eqref{hypoener} for any $1 \leq j \leq k + 1$. Then, there exists some positive numbers $\mathfrak{v}arepsilon_A$ and $K_A$, possibly depending on $A$, such that \begin{equation} \label{bornage} \Big| \mathcal{E}_k(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \mathfrak{p}m \mathfrak{s}qrt{2} \mathcal{P}_k(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) - E_{k-1}^{KdV} \Big( \frac{N_\mathfrak{v}arepsilon \mathfrak{p}m \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{2} \Big) \Big| \leq K_A \mathfrak{v}arepsilon^2. \end{equation} for any $0 < \mathfrak{v}arepsilon < \mathfrak{v}arepsilon_A$. \end{prop} \begin{remark} Similarly to Lemma \ref{Controlhk}, we believe that Proposition \ref{Controluv} might be extended to higher order \eqref{GP} and \eqref{KdV} invariants. \end{remark} \begin{proof} Let $k = 1$. In view of \eqref{sirbu1}, \eqref{sirbu2} and \eqref{e0KdV}, we have $$\mathcal{E}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) \mathfrak{p}m \mathfrak{s}qrt{2} \mathcal{P}_1(N_\mathfrak{v}arepsilon, \mathbb{T}heta_\mathfrak{v}arepsilon) - E_0^{KdV} \Big( \frac{N_\mathfrak{v}arepsilon \mathfrak{p}m \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon}{2} \Big) = \frac{\mathfrak{v}arepsilon^2}{2} \int_\mathbb{R} \bigg( \frac{(\mathfrak{p}artial_x N_\mathfrak{v}arepsilon)^2}{1 - \frac{\mathfrak{v}arepsilon^2}{6} N_\mathfrak{v}arepsilon} - \frac{1}{3} N_\mathfrak{v}arepsilon \big( \mathfrak{p}artial_x \mathbb{T}heta_\mathfrak{v}arepsilon \big)^2 \bigg).$$ Inequality \eqref{bornage} follows for $\mathfrak{v}arepsilon_A$ sufficiently small, invoking \eqref{bornitude} (for $k = 2$) and \eqref{petit}. 
For $k = 2$, we deduce from \eqref{boE2}, \eqref{R2eps} and \eqref{e1KdV} that \begin{align*} & \mathcal{E}_2(N_\varepsilon, \Theta_\varepsilon) \pm \sqrt{2} \mathcal{P}_2(N_\varepsilon, \Theta_\varepsilon) - E_1^{KdV} \Big( \frac{N_\varepsilon \pm \partial_x \Theta_\varepsilon}{2} \Big)\\ = \frac{\varepsilon^2}{8} \int_\mathbb{R} \bigg( \frac{1}{12} N_\varepsilon^2 \big( \partial_x \Theta_\varepsilon \big)^2 & - \frac{1}{3} \partial_x N_\varepsilon \partial_x \Theta_\varepsilon \partial_x^2 \Theta_\varepsilon - \frac{1}{6} N_\varepsilon \big( \partial_x^2 \Theta_\varepsilon \big)^2 \pm \frac{1}{36} N_\varepsilon \big( \partial_x \Theta_\varepsilon \big)^3 - \frac{N_\varepsilon (\partial_x N_\varepsilon)^2}{4 m_\varepsilon}\\ \mp \frac{(\partial_x N_\varepsilon)^2 \partial_x \Theta_\varepsilon}{4 m_\varepsilon} + & \frac{\varepsilon^2 (\partial_x N_\varepsilon)^2 (\partial_x \Theta_\varepsilon)^2}{36 m_\varepsilon} + \frac{1}{2 m_\varepsilon} \Big( \partial_x^2 N_\varepsilon + \frac{m_\varepsilon}{6} \big( \partial_x \Theta_\varepsilon \big)^2 + \frac{\varepsilon^2}{12 m_\varepsilon} \big( \partial_x N_\varepsilon \big)^2 \Big)^2 \bigg), \end{align*} so that \eqref{bornage} follows again from \eqref{bornitude} (for $k = 3$) and \eqref{petit}. Similarly, the proof of \eqref{bornage} for $k = 3$ reduces to estimating the remainder terms in \eqref{boE3} and \eqref{R3eps} using \eqref{bornitude} (for $k = 4$) and \eqref{petit}. \end{proof}
\section{Time-independent estimates} \label{Notime}
In this section, we use the above conservation laws to derive time-independent estimates for the functions $U_\varepsilon$ and $V_\varepsilon$, and to establish the consistency of the solutions to \eqref{GP} with the \eqref{KdV} equation in the limit $\varepsilon \to 0$. This yields the proofs of Proposition \ref{H3-control} and Theorem \ref{Bobby}.
\subsection{Proof of Proposition \ref{H3-control}} \label{Cestpresquefini}
Given any functions $N_\varepsilon^0$ and $\Theta_\varepsilon^0$ such that \eqref{grinzing1} holds, it follows from the formulae of Lemmas \ref{Esquale} and \ref{Psquale} that there exists some positive constant $A_0$, which does not depend on $\varepsilon$, such that \begin{equation} \label{hypoenerzero} \mathcal{E}_k(N_\varepsilon^0, \Theta_\varepsilon^0) \leq A_0, \end{equation} for any $1 \leq k \leq 4$. In view of Theorem \ref{Invconserved} and definition \eqref{slowEk}, we deduce that the solution $(N_\varepsilon(\cdot, \tau), \Theta_\varepsilon(\cdot, \tau))$ to system \eqref{slow1-0}-\eqref{slow2-0} with initial datum $(N_\varepsilon^0, \Theta_\varepsilon^0)$ satisfies $$\mathcal{E}_k(N_\varepsilon(\cdot, \tau), \Theta_\varepsilon(\cdot, \tau)) \leq A_0,$$ for any time $\tau \in \mathbb{R}$.
In particular, inequality \eqref{grinzing1bis} is a direct consequence of Lemma \ref{Controlhk}, whereas, in view of Proposition \ref{Controluv}, we have $$\Big| \mathcal{E}_k(N_\varepsilon(\cdot, \tau), \Theta_\varepsilon(\cdot, \tau)) \pm \sqrt{2} \mathcal{P}_k(N_\varepsilon(\cdot, \tau), \Theta_\varepsilon(\cdot, \tau)) - E_{k-1}^{KdV} \Big( \frac{N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau)}{2} \Big) \Big| \leq K_{A_0} \varepsilon^2,$$ for any time $\tau \in \mathbb{R}$. Using again the conservation of $E_k$ and $p_k$ provided by Theorem \ref{Invconserved} and Corollary \ref{Conspk}, and definitions \eqref{slowEk} and \eqref{slowpk}, we are led to $$\Big| \mathcal{E}_k(N_\varepsilon^0, \Theta_\varepsilon^0) \pm \sqrt{2} \mathcal{P}_k(N_\varepsilon^0, \Theta_\varepsilon^0) - E_{k-1}^{KdV} \Big( \frac{N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau)}{2} \Big) \Big| \leq K_{A_0} \varepsilon^2.$$ Invoking \eqref{hypoenerzero}, we apply once more Proposition \ref{Controluv} to obtain \begin{equation} \label{lloyds} \Big| E_{k-1}^{KdV} \Big( \frac{N_\varepsilon^0 \pm \partial_x \Theta_\varepsilon^0}{2} \Big) - E_{k-1}^{KdV} \Big( \frac{N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau)}{2} \Big) \Big| \leq K_{A_0} \varepsilon^2. \end{equation} For $k = 1$, we then deduce from \eqref{e0KdV} that \begin{equation} \label{abbey} \| N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} \leq \| N_\varepsilon^0 \pm \partial_x \Theta_\varepsilon^0 \|_{L^2(\mathbb{R})} + K_{A_0} \varepsilon, \end{equation} so that, in particular, by \eqref{grinzing1}, we have, for $\varepsilon$ sufficiently small, $$\| N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} \leq K_{A_0},$$ where $K_{A_0}$ denotes some further constant depending only on $A_0$.
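Let us note that \eqref{abbey} indeed follows from \eqref{lloyds} with $k = 1$: since $E_0^{KdV}(v) = \frac{1}{2} \int_\mathbb{R} v^2$, inequality \eqref{lloyds} provides
$$\| N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})}^2 \leq \| N_\varepsilon^0 \pm \partial_x \Theta_\varepsilon^0 \|_{L^2(\mathbb{R})}^2 + 8 K_{A_0} \varepsilon^2,$$
and it remains to use the elementary inequality $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$, valid for nonnegative numbers $a$ and $b$.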
Hence, for $k = 2$, we may write, using \eqref{bornation2}, that $$\big\| \partial_x N_\varepsilon(\cdot, \tau) \pm \partial_x^2 \Theta_\varepsilon(\cdot, \tau) \big\|_{L^2} \leq K_{A_0} \Big( \Big| E_1^{KdV} \Big( \frac{N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau)}{2} \Big) \Big| + \| N_\varepsilon(\cdot, \tau) \pm \partial_x \Theta_\varepsilon(\cdot, \tau) \|_{L^2} \Big),$$ so that by \eqref{bornation1}, \eqref{lloyds} and \eqref{abbey}, $$\| \partial_x N_\varepsilon(\cdot, \tau) \pm \partial_x^2 \Theta_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} \leq K_{A_0} \Big( \| N_\varepsilon^0 \pm \partial_x \Theta_\varepsilon^0 \|_{H^1(\mathbb{R})} + \varepsilon \Big).$$ Using this argument repeatedly to estimate the $L^2$-norms of the functions $\partial_x^2 N_\varepsilon(\cdot, \tau) \pm \partial_x^3 \Theta_\varepsilon(\cdot, \tau)$ and $\partial_x^3 N_\varepsilon(\cdot, \tau) \pm \partial_x^4 \Theta_\varepsilon(\cdot, \tau)$, we are led to \eqref{dobling1bis}, which completes the proof of Proposition \ref{H3-control}.
\subsection{Proof of Theorem \ref{Bobby}} \label{bobbydone}
Theorem \ref{Bobby} is a consequence of Proposition \ref{H3-control}. Applying estimates \eqref{dobling1bis} to the right-hand side of \eqref{slow1}, together with the Sobolev embedding theorem, we obtain estimate \eqref{jerrard}.
\section{Energy methods} \label{Expansion}
This section is devoted to the proofs of Theorems \ref{cochon} and \ref{H3-controlbis}, which both rely on applying standard energy methods to equations \eqref{slow1} and \eqref{slow2}.
\subsection{Proof of Theorem \ref{H3-controlbis}} \label{Veve}
In order to estimate the $L^2$-norm of $V_\varepsilon(\cdot, \tau)$, we multiply equation \eqref{slow2} by $V_\varepsilon(\cdot, \tau)$ and integrate by parts. To simplify the presentation, we recast equation \eqref{slow2} as \begin{equation} \label{soleillevant} \partial_\tau V_\varepsilon + \frac{8}{\varepsilon^2} \partial_x V_\varepsilon = \frac{1}{2} \partial_x (V_\varepsilon^2) + \partial_x f_\varepsilon + \varepsilon^2 R_\varepsilon, \end{equation} where $$f_\varepsilon = \partial^2_x N_\varepsilon - \frac{1}{6} U_\varepsilon^2 - \frac{1}{3} U_\varepsilon V_\varepsilon,$$ and $R_\varepsilon$ is defined in \eqref{grouin}.
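Note that neither the transport term nor the quadratic term in \eqref{soleillevant} contributes to the resulting identity, since $$\int_\mathbb{R} V_\varepsilon \partial_x V_\varepsilon = \frac{1}{2} \int_\mathbb{R} \partial_x \big( V_\varepsilon^2 \big) = 0, \quad {\rm and} \quad \int_\mathbb{R} V_\varepsilon \partial_x \big( V_\varepsilon^2 \big) = \frac{2}{3} \int_\mathbb{R} \partial_x \big( V_\varepsilon^3 \big) = 0,$$ while an integration by parts turns $\int_\mathbb{R} V_\varepsilon \partial_x f_\varepsilon$ into $- \int_\mathbb{R} f_\varepsilon \partial_x V_\varepsilon$.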
We are led to $$\partial_\tau \bigg( \int_\mathbb{R} V_\varepsilon(\cdot, \tau)^2 \bigg) = -2 \int_\mathbb{R} f_\varepsilon \partial_x V_\varepsilon(\cdot, \tau) + 2 \varepsilon^2 \int_\mathbb{R} R_\varepsilon(\cdot, \tau) V_\varepsilon(\cdot, \tau).$$ We now integrate with respect to the time variable to obtain \begin{equation} \label{marseille} \int_\mathbb{R} \big( V_\varepsilon(\cdot, \tau) \big)^2 = \int_\mathbb{R} \big( V_\varepsilon^0 \big)^2 -2 \int_0^\tau \int_\mathbb{R} f_\varepsilon \partial_x V_\varepsilon + 2 \varepsilon^2 \int_0^\tau \int_\mathbb{R} R_\varepsilon V_\varepsilon. \end{equation} Combining inequalities \eqref{grinzing1bis} with definition \eqref{grouin} and bound \eqref{borninf}, and using the Sobolev embedding theorem, we next have \begin{equation} \label{aubagne} \| U_\varepsilon(\cdot, \tau) \|_{H^3(\mathbb{R})} + \| V_\varepsilon(\cdot, \tau) \|_{H^3(\mathbb{R})} + \| R_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} + \| f_\varepsilon(\cdot, \tau) \|_{H^1(\mathbb{R})} \leq K, \end{equation} for any $\tau \in \mathbb{R}$ and some positive constant $K$ not depending on $\varepsilon$. In particular, \begin{equation} \label{pinard} \bigg| 2 \varepsilon^2 \int_0^\tau \int_\mathbb{R} R_\varepsilon V_\varepsilon \bigg| \leq C \varepsilon^2 \bigg| \int_0^\tau \| V_\varepsilon(\cdot, s) \|_{L^2(\mathbb{R})} ds \bigg|, \end{equation} where $C = C(K)$ does not depend on $\varepsilon$. In order to bound the second term on the right-hand side of \eqref{marseille}, we replace the quantity $\partial_x V_\varepsilon$ in \eqref{marseille} according to \eqref{soleillevant}, so that \begin{align*} \int_0^\tau \int_\mathbb{R} f_\varepsilon \partial_x V_\varepsilon & = \frac{\varepsilon^2}{8} \int_0^\tau \int_\mathbb{R} f_\varepsilon \Big( - \partial_\tau V_\varepsilon + \frac{1}{2} \partial_x (V_\varepsilon^2) + \partial_x f_\varepsilon + \varepsilon^2 R_\varepsilon \Big)\\ & \equiv J_1 + J_2 + J_3 + J_4. \end{align*} We bound each of the terms $J_k$ separately. First, since the integrand in $J_3$ is an exact derivative, $J_3 = 0$. Next, it follows from \eqref{aubagne} that \begin{equation} \label{lia} |J_4| \leq C \varepsilon^4 \tau. \end{equation} Concerning $J_2$, we have \begin{equation} \label{stan} |J_2| = \bigg| \frac{\varepsilon^2}{16} \int_0^\tau \int_\mathbb{R} f_\varepsilon \partial_x (V_\varepsilon^2) \bigg| = \bigg| \frac{\varepsilon^2}{16} \int_0^\tau \int_\mathbb{R} \partial_x f_\varepsilon V_\varepsilon^2 \bigg| \leq C \varepsilon^2 \bigg| \int_0^\tau \| V_\varepsilon(\cdot, s) \|_{L^2(\mathbb{R})} ds \bigg|. \end{equation} For $J_1$, we perform an integration by parts with respect to the time variable, so that \begin{equation} \label{persil} J_1 = \frac{\varepsilon^2}{8} \int_0^\tau \int_\mathbb{R} \partial_\tau f_\varepsilon V_\varepsilon - \frac{\varepsilon^2}{8} \bigg[ \int_\mathbb{R} f_\varepsilon V_\varepsilon \bigg]^\tau_0.
\end{equation} Note that by \eqref{slow1-0}, \eqref{slow1}, \eqref{grinzing1bis} and \eqref{aubagne}, \begin{align*} \partial_\tau f_\varepsilon & = \partial_x^2 \partial_\tau N_\varepsilon - \frac{1}{3} U_\varepsilon \partial_\tau U_\varepsilon - \frac{1}{3} \partial_\tau U_\varepsilon V_\varepsilon - \frac{1}{3} U_\varepsilon \partial_\tau V_\varepsilon\\ & = - \frac{4}{\varepsilon^2} \partial^3_x V_\varepsilon - \frac{1}{3} U_\varepsilon \partial_\tau V_\varepsilon + \mathcal{O}(1) \end{align*} uniformly in $L^2(\mathbb{R})$, so that, since $\int_\mathbb{R} \partial^3_x V_\varepsilon \, V_\varepsilon = 0$ after integration by parts in $x$, \begin{equation} \label{ariel} \bigg| \frac{\varepsilon^2}{8} \int_0^\tau \int_\mathbb{R} \partial_\tau f_\varepsilon V_\varepsilon \bigg| \leq \bigg| \frac{\varepsilon^2}{48} \int_0^\tau \int_\mathbb{R} U_\varepsilon \partial_\tau(V_\varepsilon)^2 \bigg| + C \varepsilon^2 \bigg| \int_0^\tau \| V_\varepsilon(\cdot, s) \|_{L^2(\mathbb{R})} ds \bigg|. \end{equation} A further integration by parts in time leads to \begin{equation} \begin{split} \label{omo} \frac{\varepsilon^2}{48} \int_0^\tau \int_\mathbb{R} U_\varepsilon \partial_\tau (V_\varepsilon)^2 = - \frac{\varepsilon^2}{48} \int_0^\tau \int_\mathbb{R} (\partial_\tau U_\varepsilon) V_\varepsilon^2 + \frac{\varepsilon^2}{48} \bigg[ \int_\mathbb{R} U_\varepsilon V_\varepsilon^2 \bigg]^\tau_0, \end{split} \end{equation} and since $\partial_\tau U_\varepsilon$ is uniformly bounded in $L^2(\mathbb{R})$ by \eqref{slow1}, \eqref{grinzing1bis} and \eqref{aubagne}, we obtain, combining \eqref{persil}, \eqref{ariel} and \eqref{omo}, \begin{equation} \label{filou} J_1 \leq C \varepsilon^2 \bigg( \| V_\varepsilon(\cdot, 0) \|_{L^2(\mathbb{R})} + \| V_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} + \bigg| \int_0^\tau \| V_\varepsilon(\cdot, s) \|_{L^2(\mathbb{R})} ds \bigg| \bigg). \end{equation} Finally, combining \eqref{marseille}, \eqref{pinard}, \eqref{lia}, \eqref{stan} and \eqref{filou}, we obtain $$\| V_\varepsilon(\cdot, \tau) \|_{L^2}^2 \leq \| V_\varepsilon(\cdot, 0) \|_{L^2}^2 + C \varepsilon^2 \bigg( \varepsilon^2 \tau + \| V_\varepsilon(\cdot, 0) \|_{L^2} + \| V_\varepsilon(\cdot, \tau) \|_{L^2} + \bigg| \int_0^\tau \| V_\varepsilon(\cdot, s) \|_{L^2} ds \bigg| \bigg).$$ The proof of Theorem \ref{H3-controlbis} then follows by the Gronwall lemma.
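To indicate how the Gronwall lemma enters (a sketch only), set $y(\tau) := \| V_\varepsilon(\cdot, \tau) \|_{L^2}$ in the last inequality and absorb the term $C \varepsilon^2 y(\tau)$ on the right-hand side by Young's inequality, $C \varepsilon^2 y(\tau) \leq \frac{1}{2} y(\tau)^2 + \frac{1}{2} C^2 \varepsilon^4$, which yields $$y(\tau)^2 \leq 2\, y(0)^2 + C' \varepsilon^2 \Big( \varepsilon^2 (1 + |\tau|) + y(0) + \Big| \int_0^\tau y(s) \, ds \Big| \Big),$$ an inequality to which the integral form of the Gronwall lemma applies directly.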
\subsection{Proof of Theorem \ref{cochon}} \label{Uhu} We first recall the equation \eqref{slow1} satisfied by $U_\varepsilon$, namely $$\partial_\tau U_\varepsilon + \partial_x^3 U_\varepsilon + U_\varepsilon \partial_x U_\varepsilon = - \partial_x^3 V_\varepsilon + \frac{1}{3} \partial_x \Big( U_\varepsilon V_\varepsilon + \frac{V_\varepsilon^2}{2} \Big) - \varepsilon^2 R_{\varepsilon},$$ and take the difference with equation \eqref{KdV}, $$\partial_\tau \mathcal{N}_\varepsilon + \partial_x^3 \mathcal{N}_\varepsilon + \mathcal{N}_\varepsilon \partial_x \mathcal{N}_\varepsilon = 0,$$ so that $Z_\varepsilon \equiv U_\varepsilon - \mathcal{N}_\varepsilon$ satisfies the equation \begin{equation} \label{dede} \partial_\tau Z_\varepsilon + \partial_x^3 Z_\varepsilon + Z_\varepsilon \partial_x U_\varepsilon + \mathcal{N}_\varepsilon \partial_x Z_\varepsilon = - \partial_x^3 V_\varepsilon + \frac{1}{3} \partial_x \Big( U_\varepsilon V_\varepsilon + \frac{V_\varepsilon^2}{2} \Big) - \varepsilon^2 R_{\varepsilon}. \end{equation} We multiply \eqref{dede} by $Z_\varepsilon$, integrate on $\mathbb{R}$ and perform an integration by parts to obtain \begin{align*} & \partial_\tau \| Z_\varepsilon \|_{L^2(\mathbb{R})}^2 \leq K \big( \| \partial_x U_\varepsilon \|_{L^\infty(\mathbb{R})} + \| \partial_x \mathcal{N}_\varepsilon \|_{L^\infty(\mathbb{R})} \big) \| Z_\varepsilon \|^2_{L^2(\mathbb{R})}\\ + K & \| Z_\varepsilon \|_{L^2(\mathbb{R})} \Big( \| V_\varepsilon \|_{H^3(\mathbb{R})} + \| V_\varepsilon \|_{L^2(\mathbb{R})} \big( \| U_\varepsilon \|_{H^1(\mathbb{R})} + \| V_\varepsilon \|_{H^1(\mathbb{R})} \big) + \varepsilon^2 \| R_\varepsilon \|_{L^2(\mathbb{R})} \Big). \end{align*} Using bounds \eqref{aubagne} for $U_\varepsilon$, $V_\varepsilon$ and $R_\varepsilon$, and the bound on $\mathcal{N}_\varepsilon$ in $H^3(\mathbb{R})$, which follows from the integrability theory of \eqref{KdV}, we are led to $$\partial_\tau \| Z_\varepsilon \|_{L^2(\mathbb{R})}^2 \leq K \| Z_\varepsilon \|_{L^2(\mathbb{R})}^2 + K \| Z_\varepsilon \|_{L^2(\mathbb{R})} \big( \varepsilon^2 + \| V_\varepsilon \|_{H^3(\mathbb{R})} \big).$$ Finally, we invoke Proposition \ref{H3-control} to assert $$\partial_\tau \| Z_\varepsilon \|_{L^2(\mathbb{R})}^2 \leq K \| Z_\varepsilon \|_{L^2(\mathbb{R})}^2 + K \| Z_\varepsilon \|_{L^2(\mathbb{R})} \big( \varepsilon + \| V_\varepsilon(\cdot, 0) \|_{H^3(\mathbb{R})} \big),$$ so that by the Gronwall lemma, \begin{equation} \label{mittal} \| Z_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} \leq \| Z_\varepsilon(\cdot, 0) \|_{L^2(\mathbb{R})} + K \big( \varepsilon + \| V_\varepsilon(\cdot, 0) \|_{H^3(\mathbb{R})} \big) \exp(K \tau).
\end{equation} On the other hand, at time $\tau = 0$, since $\mathcal{N}_\varepsilon(\cdot, 0) = N_\varepsilon(\cdot, 0)$, we have \begin{equation} \label{arcelor} \| Z_\varepsilon(\cdot, 0) \|_{L^2(\mathbb{R})} = \| U_\varepsilon(\cdot,0) - \mathcal{N}_\varepsilon(\cdot,0) \|_{L^2(\mathbb{R})}= \| V_\varepsilon(\cdot, 0)\|_{L^2(\mathbb{R})}, \end{equation} whereas at positive times, by definition of $V_\varepsilon$, we have \begin{equation} \begin{split} \label{cigale} \| N_\varepsilon(\cdot, \tau) - \mathcal{N}_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} & \leq \| Z_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} + \| V_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})}\\ & \leq \| Z_\varepsilon(\cdot, \tau) \|_{L^2(\mathbb{R})} + \| V_\varepsilon(\cdot, 0) \|_{L^2(\mathbb{R})} + K \varepsilon^2 |\tau|, \end{split} \end{equation} where we have used Theorem \ref{H3-controlbis}. The conclusion for $N_\varepsilon - \mathcal{N}_\varepsilon$ then follows from \eqref{mittal}, \eqref{arcelor} and \eqref{cigale}. The proof is similar for $\partial_x \Theta_\varepsilon - \mathcal{M}_\varepsilon$, considering the function $Y_\varepsilon \equiv U_\varepsilon - \mathcal{M}_\varepsilon$ instead of $Z_\varepsilon$, so we omit it. \end{document}
\begin{document} \title{ Generalized Eigenvalue Problems with Specified Eigenvalues } \begin{abstract} We consider the distance from a (square or rectangular) matrix pencil to the nearest matrix pencil in 2-norm that has a set of specified eigenvalues. We derive a singular value optimization characterization for this problem and illustrate its usefulness for two applications. First, the characterization yields a singular value formula for determining the nearest pencil whose eigenvalues lie in a specified region in the complex plane. For instance, this enables the numerical computation of the nearest stable descriptor system in control theory. Second, the characterization partially solves the problem posed in [Boutry et al. 2005] regarding the distance from a general rectangular pencil to the nearest pencil with a complete set of eigenvalues. The involved singular value optimization problems are solved by means of BFGS and Lipschitz-based global optimization algorithms. \\ \\ \textbf{Key words.} Matrix pencils, eigenvalues, optimization of singular values, inverse eigenvalue problems, Lipschitz continuity, Sylvester equation. \textbf{AMS subject classifications.} 65F15, 65F18, 90C26, 90C56 \end{abstract} \pagestyle{myheadings} \thispagestyle{plain} \markboth{D. KRESSNER, E. MENGI, I. NAKIC AND N. TRUHAR}{Generalized Eigenvalue Problems with Specified Eigenvalues} \section{Introduction} Consider a matrix pencil $A - \lambda B$ where $A, B \in \C^{n\times m}$ with $n \geq m$. Then a scalar $\rho \in \C$ is called an \emph{eigenvalue} of the pencil if there exists a nonzero vector $v \in \C^n$ such that \begin{equation}\label{eq:eig_defn} ( A - \rho B ) v = 0. \end{equation} The vector $v$ is said to be a \emph{(right) eigenvector} associated with $\rho$ and the pair $(\rho, v)$ is said to be an \emph{eigenpair} of the pencil. In the square case $m=n$, the eigenvalues are simply given by the roots of the characteristic polynomial $\det(A-\lambda B)$ and there are usually $n$ eigenvalues, counting multiplicities. The situation is quite the opposite for $n>m$. Generically, a rectangular pencil $A - \lambda B$ has no eigenvalues at all. To see this, notice that a necessary condition for the satisfaction of~(\ref{eq:eig_defn}) is that $ n! / \left( (n-m)! m! \right)$ polynomials, each corresponding to the determinant of a pencil obtained by choosing $m$ rows of $A - \lambda B$ out of $n$ rows, must have a common root. Also, the generic Kronecker canonical form of a rectangular matrix pencil only consists of singular blocks (see \cite{DemE95}). Hence, (\ref{eq:eig_defn}) is an ill-posed problem and requires reformulation before admitting numerical treatment. To motivate our reformulation of~(\ref{eq:eig_defn}), we describe a typical situation giving rise to rectangular matrix pencils. Let $M \in \mathbb C^{n\times n}$ and suppose that the columns of $U \in \mathbb C^{n\times m}$ form an orthonormal basis for a subspace $\mathcal W \subset \mathbb C^n$ known to contain approximations to some eigenvectors of $M$. Then it is quite natural to consider the $n\times m$ matrix pencil \begin{equation} \label{eq:rectpencilsubspace} A - \lambda B := M U - \lambda U. \end{equation} The approximations contained in $\mathcal W$ and the approximate eigenpairs of $A - \lambda B$ are closely connected to each other. 
In one direction, suppose that $(\rho, x)$ with $x \in \mathcal W$ satisfies \begin{equation} \label{eq:squarematrixperturbed} (M+\Delta M - \rho I) x = 0 \end{equation} for some (small) perturbation $\Delta M$. Then there is $v \in \mathbb C^{n}$ such that $x = Uv$. Moreover, we have \begin{equation} \label{eq:rectpencilsubspaceperturbed} (A+\Delta A - \rho B)v = 0 \end{equation} with $\Delta A := \Delta M\cdot U$ satisfying $\|\Delta A\|_2 \le \|\Delta M\|_2$. In the other direction, the relation~(\ref{eq:rectpencilsubspaceperturbed}) with an arbitrary $\Delta A$ implies~(\ref{eq:squarematrixperturbed}) with $\Delta M = \Delta A\cdot U^{\ast}$ satisfying $\|\Delta M\|_2 = \|\Delta A\|_2$. Unless $M$ is normal, the first part of this equivalence between approximate eigenpairs of $M$ and $A - \lambda B$ does not hold when the latter is replaced by the more common compression $U^{\ast} M U$. This observation has led to the use of rectangular matrix pencils in, e.g., large-scale pseudospectra computation (see \cite{TohT96}) and Ritz vector extraction (see \cite{JiaS01}). This paper is concerned with determining the 2-norm distance from the pencil $A - \lambda B$ to the nearest pencil $(A+\Delta A) - \lambda B$ with a subset of specified eigenvalues. To be precise, let ${\mathbb S}= \{ \lambda_1, \dots, \lambda_k \}$ be a set of distinct complex numbers and let $r$ be a positive integer. Let $m_j(A + \Delta A, B)$ denote the (possibly zero) algebraic multiplicity\footnote{For a rectangular matrix pencil, the algebraic multiplicity of $\lambda_j$ is defined as the sum of the sizes of associated regular Jordan blocks in the Kronecker canonical form, see also Section~\ref{sec:KCF}. By definition, this number is zero if $\lambda_j$ is actually not an eigenvalue of the pencil.} of $\lambda_j$ as an eigenvalue of $(A+\Delta A) - \lambda B$. Then we consider the distance \begin{equation} \label{eq:taurs} \tau_r({\mathbb S}) := \inf \Big\{ \| \Delta A \|_2 : \sum_{j=1}^k m_j(A + \Delta A, B) \geq r \Big\}. \end{equation} We allow $B$ to be rank-deficient. However, we require that ${\rm rank}(B) \geq r$. Otherwise, if ${\rm rank}(B) < r$, the pencil $(A+\Delta A) - \lambda B$ has fewer than $r$ finite eigenvalues for all $\Delta A$ and consequently the distance $\tau_r({\mathbb S})$ is ill-posed. For $k = r = 1$, it is relatively easy to see that \[ \tau_1(\{\lambda_1\}) = \sigma_m(A - \lambda_1 B), \] where, here and in the following, $\sigma_k$ denotes the $k$th largest singular value of a matrix. (The particular form of this problem with $k=r=1$, and when $A$ and $B$ are perturbed simultaneously, is also studied for instance in \cite{Byers93}.) One of the main contributions of this paper is a derivation of a similar singular value optimization characterization for general $k$ and $r$, which facilitates the computation of $\tau_r({\mathbb S})$. Very little seems to be known in this direction. Existing results concern the square matrix case ($m=n$ and $B = I$); see the works by \cite{Malyshev1999} for $k = 1$ and $r = 2$ as well as \cite{MR2156435} for $k = 2$ and $r=2$, \cite{MR1991900} for $k = 1$ and $r = 3$, and \cite{Mengi2009} for $k = 1$ and arbitrary $r$. Some attempts have also been made by \cite{MR2592917} for arbitrary $k$ and $r$ and for the square matrix case, and by \cite{MR2444335} for $k = 1$ and $r = 2$ and for the square matrix polynomial case. 
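Before proceeding, we illustrate the case $k = r = 1$ with a minimal numerical sketch in Python/NumPy (the example pencil and the helper name below are chosen purely for illustration and do not come from the cited references): the distance $\tau_1(\{\lambda_1\})$ and a minimizing perturbation can be read off from a single singular value decomposition, since the rank-one matrix $\Delta A = -\sigma_m u_m v_m^{\ast}$ built from the singular triplet associated with $\sigma_m(A - \lambda_1 B)$ makes $\lambda_1$ an eigenvalue of the perturbed pencil.
\begin{verbatim}
import numpy as np

def tau_1(A, B, lam):
    """Distance to the nearest pencil (A + dA) - lambda B having lam as an
    eigenvalue, together with a minimizing rank-one perturbation dA."""
    n, m = A.shape
    U, s, Vh = np.linalg.svd(A - lam * B)       # A - lam*B = U diag(s) Vh
    sigma_m = s[m - 1]                          # m-th largest singular value
    dA = -sigma_m * np.outer(U[:, m - 1], Vh[m - 1])
    return sigma_m, dA

# an illustrative 3x2 pencil (made up for this sketch)
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
dist, dA = tau_1(A, B, lam=3.0)
v = np.linalg.svd(A + dA - 3.0 * B)[2][-1].conj()    # approximate null vector
print(dist, np.linalg.norm((A + dA - 3.0 * B) @ v))  # residual ~ machine precision
\end{verbatim}
Already in this simplest case, the minimizing perturbation is obtained from singular vectors of a matrix assembled from the data; this is precisely the mechanism that is generalized in the remainder of the paper.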
Another class of applications arises in (robust) control theory, where a number of tasks require the determination of a (minimal) perturbation that moves some or all eigenvalues into a certain region in the complex plane. With the region of interest denoted by $\Omega \subseteq \mathbb{C}$, the results in this paper are an important step towards rendering the numerical computation of the distance \begin{eqnarray*} \tau_r(\Omega)& := & \inf \big\{ \| \Delta A\|_2 : (A+\Delta A) - \lambda B \; \text{has $r$ finite eigenvalues in $\Omega$} \big\} \\ &= & \inf_{{\mathbb S} \subseteq \Omega} \tau_r({\mathbb S}) \end{eqnarray*} feasible. Here and in the following, multiple eigenvalues are counted according to their algebraic multiplicities. For $r = 1$ and $\Omega$ equal to $\mathbb C^+$ (right-half complex plane), the quantity $\tau_1(\mathbb C^+)$ amounts to the distance to instability, also called stability radius. In \cite{VanLoan1985}, a singular value characterization of $\tau_1(\mathbb C^+)$ was provided, forming the basis of a number of algorithms for computing $\tau_1(\mathbb C^+)$, see, e.g.,~\cite{BoyB90,Bye88}. In our more general setting, we can also address the converse question: Given an unstable matrix pencil $A-\lambda B$, determine the closest stable pencil. Notice that this problem is intrinsically harder than the distance to instability. For the distance to instability it suffices to perturb the system so that \emph{one of the eigenvalues} is in the undesired region. On the other hand to make an unstable system stable one needs to perturb the system so that \emph{all eigenvalues} lie in the region of stability. An important special case, $\Omega = \mathbb C$ leads to \begin{eqnarray*} \tau_r(\C) &:= & \inf \{ \| \Delta A \|_2 : (A+\Delta A) - \lambda B \;\; {\rm has} \; r \; {\rm finite} \; {\rm eigenvalues} \; \} \\ &= & \inf_{{\mathbb S} \subseteq \C} \tau_r({\mathbb S}). \end{eqnarray*} For $r = 1$ and particular choices of rectangular $A$ and $B$, the distance $\tau_1(\C)$ corresponds to the distance to uncontrollability for a matrix pair (see \cite{BurLO04,Eis84}). For general $r$, a variant of this distance was suggested in \cite{Boutry2005} to solve an inverse signal processing problem approximately. More specifically, this problem is concerned with the identification of the shape of a region in the complex plane given the moments over the region. If the region is assumed to be a polygon, then its vertices can be posed as the eigenvalues of a rectangular pencil $A - \lambda B$, where $A$ and $B$ are not exact due to measurement errors, causing the pencil to have no eigenvalues (see \cite{Elad2004} for details). Then the authors attempt to locate nearby pencils with a complete set of eigenvalues. In this work we allow perturbations to $A$ only, but not to $B$. This restriction is only justified if the absolute value of $\lambda$ does not become too small. We consider our results and technique as significant steps towards the complete solution of the problem posed in \cite{Elad2004}. The outline of this paper is as follows. In the next section, we review the Kronecker canonical form for the pencil $A - \lambda B$. In \S\ref{sec:rank_char_pencils}, we derive a rank characterization for the condition $\sum_{j=1}^k m_j(A , B) \geq r$. This is a crucial prerequisite for deriving the singular value characterizations of $\tau_r({\mathbb S})$ in \S\ref{sec:SVD_derivation}. 
We discuss several corollaries of the singular value characterizations for $\tau_r({\mathbb S})$, in particular for $\tau_r(\Omega)$ and $\tau_r(\C)$, in \S\ref{sec:nearest_rect_pencil}. The singular value characterizations are deduced under certain mild multiplicity and linear independence assumptions. Although we expect these assumptions to be satisfied for examples of practical interest, they may fail to hold, as demonstrated by an academic example in \S\ref{sec:qualifications}. Interestingly, the singular value characterization remains true for this example despite the fact that our derivation no longer applies. Finally, a numerical approach to solving the involved singular value optimization problems is briefly outlined in \S\ref{sec:computation} and applied to a number of settings in \S\ref{sec:numerical_exp}. The main point of the developed numerical method and the experiments is to demonstrate that the singular value characterizations facilitate the computation of $\tau_r({\mathbb S})$, $\tau_r(\Omega)$ and $\tau_r(\C)$. We do not claim that the method outlined here is as efficient as it could be, nor do we claim that it is reliable. \section{Kronecker Canonical Form}\label{sec:KCF} Given a matrix pencil $A-\lambda B \in \C^{n \times m}$, the Kronecker canonical form (KCF), see~\cite{Gantmacher1959}, states the existence of invertible matrices $P \in \C^{n\times n}$ and $Q \in \C^{m\times m}$ such that the transformed pencil $P(A-\lambda B)Q$ is block diagonal with each diagonal block taking the form \[ J_p(\alpha) - \lambda I_p \quad \text{or} \quad I_p - \lambda J_p(0) \quad \text{or}\quad F_p - \lambda G_p \quad \text{or}\quad F_p^T - \lambda G_p^T, \] where \begin{equation} \label{eq:blockdef} J_p(\alpha) = \underbrace{\left[ \begin{array}{cccc} \alpha & 1 \\[-5pt] & \alpha & \ddots \\[-5pt] & & \ddots & 1 \\ & & & \alpha \end{array}\right]}_{p\times p},\ F_p = \underbrace{\left[ \begin{array}{cccc} 1 & 0 \\ & \ddots & \ddots \\ & & 1 & 0 \end{array} \right]}_{p \times (p+1)},\ G_p = \underbrace{\left[ \begin{array}{cccc} 0 & 1 \\ & \ddots & \ddots \\ & & 0 & 1 \end{array} \right]}_{p \times (p+1)} \end{equation} for some $\alpha \in \C$. \emph{Regular blocks} take the form $J_p(\alpha) - \lambda I_p$ or $I_p - \lambda J_p(0)$, with $p\ge 1$, corresponding to finite or infinite eigenvalues, respectively. The blocks $F_p - \lambda G_p$ and $F_p^T - \lambda G_p^T$ are called \emph{right and left singular blocks}, respectively, with $p\ge 0$ corresponding to a so-called Kronecker index. In large parts of this paper, indeed until the main singular value optimization characterization, we will assume that $A-\lambda B$ has no right singular blocks $F_p - \lambda G_p$. Eventually, we will remove this assumption by treating the occurrence of such blocks separately in Section~\ref{sec:mainresult}. \section{Rank Characterization for Pencils with Specified Eigenvalues}\label{sec:rank_char_pencils} In this section, we derive a rank characterization for the satisfaction of the condition \begin{equation}\label{eq:alg_mult_ineq} \sum_{j=1}^k m_j(A,B) \geq r, \end{equation} where $m_j(A,B)$ denotes the algebraic multiplicity of the eigenvalue $\lambda_j$. The following classical result \cite[Theorem 1, p. 219]{Gantmacher1959} concerning the dimension of the solution space for a Sylvester equation will play a central role. \begin{theorem}\label{thm:Sylvester_matrix} Let $F \in \C^{m\times m}$ and $G \in \C^{r\times r}$.
Then the dimension of the solution space for the Sylvester equation \[ FX - XG = 0 \] only depends on the Jordan canonical forms of the matrices $F$ and $G$. Specifically, suppose that $\mu_1, \dots, \mu_\ell$ are the common eigenvalues of $F$ and $G$. Let $c_{j,1}, \dots, c_{j, \ell_j}$ and $p_{j,1}, \dots, p_{j,\tilde{\ell}_j}$ denote the sizes of the Jordan blocks of $F$ and $G$ associated with the eigenvalue $\mu_j$, respectively. Then \[ {\rm dim} \{ X \in \C^{m\times r} : FX - XG = 0 \} = \sum_{j = 1}^\ell \sum_{i = 1}^{\ell_j} \sum_{q = 1}^{\tilde{\ell}_j} \min ( c_{j,i}, p_{j,q} ). \] \end{theorem} For our purposes, we need to extend the result of Theorem~\ref{thm:Sylvester_matrix} to a generalized Sylvester equation of the form \begin{equation}\label{eq:gen_Sylvester_pr} AX - BXC = 0, \end{equation} where $C$ is a matrix with the desired set of eigenvalues ${\mathbb S}$ and with correct algebraic multiplicities. For this type of generalized Sylvester equation, the extension is straightforward.\footnote{\cite{Kosir96} provides an extension of Theorem~\ref{thm:Sylvester_matrix} to a more general setting.} To see this, let us partition the KCF \begin{equation} \label{eq:kcfpart} P(A - \lambda B) Q = {\rm diag} \left( A_F - \lambda I, I - \lambda A_I, A_S - \lambda B_S \right), \end{equation} such that \begin{itemize} \item $A_F - \lambda I$ contains all regular blocks corresponding to finite eigenvalues; \item $I - \lambda A_I$ contains all regular blocks corresponding to infinite eigenvalues; \item $A_S - \lambda B_S$ contains all left singular blocks of the form $F_p^T - \lambda G_p^T$. \end{itemize} As explained in Section~\ref{sec:KCF}, we exclude the occurrence of right singular blocks for the moment. Note that the finite eigenvalues of $A - \lambda B$ are equal to the eigenvalues of $A_F$ with the same algebraic and geometric multiplicities. Using~(\ref{eq:kcfpart}), $X$ is a solution of the generalized Sylvester equation (\ref{eq:gen_Sylvester_pr}) if and only if \[ (PAQ) (Q^{-1} X) - (PB Q) (Q^{-1} X) C = 0 \;\;\; \Longleftrightarrow \;\;\; {\rm diag} \left( A_F, I, A_S \right) Y - {\rm diag} \left( I, A_I, B_S \right) Y C = 0 \] where $Y = Q^{-1}X$. Consequently, the dimension of the solution space for~(\ref{eq:gen_Sylvester_pr}) is the sum of the solution space dimensions of the equations \[ A_F Y_1 - Y_1 C = 0 \;\;\;\; {\rm and} \;\;\;\; Y_2 - A_I Y_2 C = 0 \;\;\;\; {\rm and} \;\;\;\; A_S Y_3 - B_S Y_3 C = 0. \] Results by~\cite{DemE95} show that the last two equations only admit the trivial solutions $Y_2 = 0$ and $Y_3 = 0$. To summarize: the solution spaces of the generalized Sylvester equation (\ref{eq:gen_Sylvester_pr}) and the (standard) Sylvester equation \[ A_F X - XC = 0 \] have the same dimension. Applying Theorem \ref{thm:Sylvester_matrix}, we therefore obtain the following result. \begin{theorem}\label{thm:Sylvester} Let $A, B \in \C^{n\times m}$ with $n \geq m$ be such that the KCF of $A-\lambda B$ does not contain right singular blocks. Then the dimension of the solution space for the generalized Sylvester equation \[ AX - BXC = 0 \] only depends on the Kronecker canonical form of $A - \lambda B$ and the Jordan canonical form of $C \in \C^{r\times r}$. Specifically, suppose that $\mu_1, \dots, \mu_\ell$ are the common eigenvalues of $A - \lambda B$ and $C$. Let $c_{j,1}, \dots, c_{j, \ell_j}$ and $p_{j,1}, \dots, p_{j,\tilde{\ell}_j}$ denote the sizes of the Jordan blocks of $A - \lambda B$ and $C$ associated with the eigenvalue $\mu_j$, respectively.
Then \[ {\rm dim} \{ X \in \C^{m\times r} : AX - BXC = 0 \} = \sum_{j = 1}^\ell \sum_{i = 1}^{\ell_j} \sum_{q = 1}^{\tilde{\ell}_j} \min ( c_{j,i}, p_{j,q} ). \] \end{theorem} We now apply the result of Theorem~\ref{thm:Sylvester} to the generalized Sylvester equation \begin{equation}\label{eq:gen_Sylvester} AX - BXC(\mu,\Gamma) = 0, \end{equation} where $C(\mu,\Gamma)$ takes the form \begin{equation} \label{eq:cmu} C(\mu,\Gamma) = \left[ \begin{array}{cccc} \mu_1 & -\gamma_{21} & \dots & -\gamma_{r1} \\ 0 & \mu_2 & \ddots & \vdots \\[-0.1cm] & & \ddots & -\gamma_{r,r-1} \\ 0 & & & \mu_r \\ \end{array} \right], \end{equation} with \[ \mu = \left[ \begin{array}{cccc} \mu_1 & \mu_2 & \dots & \mu_r \end{array} \right]^T \in {\mathbb S}^r, \quad \Gamma = \left[ \begin{array}{cccc} \gamma_{21} & \gamma_{31} & \dots & \gamma_{r,r-1} \end{array} \right]^T \in \C^{r(r-1)/2}. \] As explained in the introduction, the set ${\mathbb S} = \{ \lambda_1,\dots, \lambda_k \}$ contains the desired approximate eigenvalues. Suppose that $\lambda_j$ occurs $p_j$ times in $\mu$. Furthermore, as in Theorem \ref{thm:Sylvester}, denote the sizes of the Jordan blocks of $A - \lambda B$ and $C(\mu,\Gamma)$ associated with the scalar $\lambda_j$ by $c_{j,1}, \dots, c_{j, \ell_j}$ and $p_{j,1}, \dots, p_{j,\tilde{\ell}_j}$, respectively. Note that $p_j = \sum_{q=1}^{\tilde{\ell}_j} p_{j,q}$. In fact, for generic values of $\Gamma$ the matrix $C(\mu,\Gamma)$ has at most one Jordan block of size $p_j$ associated with $\lambda_j$ for $j = 1, \dots, k$, see~\cite{DemE95}. In the following, we denote this set of generic values for $\Gamma$ by ${\mathcal G}(\mu)$. By definition, this set depends on $\mu$ but not on $A - \lambda B$. First, suppose that inequality (\ref{eq:alg_mult_ineq}) holds. If we choose $\mu$ such that $\sum_{j=1}^k p_j = r$ and $p_j \leq m_j(A,B) = \sum_{i = 1}^{\ell_j} c_{j,i}$, then Theorem \ref{thm:Sylvester} implies that the dimension of the solution space for the generalized Sylvester equation (\ref{eq:gen_Sylvester}) is \[ \sum_{j = 1}^k \sum_{i = 1}^{\ell_j} \sum_{q = 1}^{\tilde{\ell}_j} \min ( c_{j,i}, p_{j,q} ) \geq \sum_{j = 1}^k \sum_{i = 1}^{\ell_j} \min ( c_{j,i}, p_j ) \geq \sum_{j = 1}^k \min ( m_j(A,B), p_j ) = \sum_{j = 1}^k p_j = r. \] In other words, there exists a vector $\mu$ with components from $\mathbb{S}$ such that the dimension of the solution space of the Sylvester equation (\ref{eq:gen_Sylvester}) is at least $r$. Now, on the contrary, suppose that inequality (\ref{eq:alg_mult_ineq}) does not hold. Then for generic values $\Gamma \in {\mathcal G}(\mu)$, the solution space dimension of~(\ref{eq:gen_Sylvester}) is \[ \sum_{j = 1}^k \sum_{i = 1}^{\ell_j} \min ( c_{j,i}, p_{j} ) \leq \sum_{j = 1}^k \sum_{i = 1}^{\ell_j} c_{j,i} = \sum_{j=1}^k m_j(A,B) < r. \] In other words, no matter how $\mu$ is formed from $\mathbb{S}$, the dimension is always less than $r$ for $\Gamma \in {\mathcal G}(\mu)$. This shows the following result. \begin{theorem}\label{thm:spec_eigvals_Syl} Let $A, B \in \C^{n\times m}$ with $n \geq m$ be such that the KCF of $A-\lambda B$ does not contain right singular blocks. Consider a set ${\mathbb S} = \{ \lambda_1, \dots, \lambda_k \}$ of distinct complex scalars, and a positive integer $r$. Then the following two statements are equivalent. \begin{enumerate} \item[\bf (1)] $\sum_{j=1}^k m_j(A,B) \geq r$, where $m_j(A,B)$ is the algebraic multiplicity of $\lambda_j$ as an eigenvalue of $A - \lambda B$. 
\item[\bf (2)] There exists $\mu \in {\mathbb S}^{r}$ such that \[ {\rm dim} \{ X \in \C^{m\times r} : AX - BXC(\mu,\Gamma) = 0 \} \ge r \] for all $\Gamma \in {\mathcal G}(\mu)$, where $C(\mu,\Gamma)$ is defined as in~(\ref{eq:cmu}). \end{enumerate} \end{theorem} \noindent To obtain a matrix formulation of Theorem~\ref{thm:spec_eigvals_Syl}, we use the Kronecker product $\otimes$ to vectorize the generalized Sylvester equation~(\ref{eq:gen_Sylvester}) and obtain \[ \left( (I \otimes A) - (C^T(\mu,\Gamma) \otimes B) \right) {\rm vec}(X) = {\mathcal L}(\mu,\Gamma,A,B) {\rm vec}(X) = 0, \] with the lower block triangular matrix \begin{equation}\label{eq:linear_pencil} {\mathcal L}(\mu,\Gamma,A,B) := \left[ \begin{array}{ccccc} A-\mu_1 B & & & & \\ \gamma_{21} B & A-\mu_2 B & & & \\ \vdots & \ddots & \ddots & & \\ \vdots & & \ddots & A-\mu_{r-1} B & \\ \gamma_{r 1} B & \gamma_{r 2} B & \cdots & \gamma_{r, r -1} B & A-\mu_r B \\ \end{array} \right]. \end{equation} The operator ${\rm vec}$ stacks the columns of a matrix into one long vector. Clearly, the solution space of the generalized Sylvester equation and the null space of ${\mathcal L}(\mu,\Gamma,A,B)$ have the same dimension. Consequently, Theorem \ref{thm:spec_eigvals_Syl} can be rephrased as follows. \begin{corollary}\label{thm:spec_eigvals_rankl} Under the assumptions of Theorem~\ref{thm:spec_eigvals_Syl}, the following two statements are equivalent. \begin{enumerate} \item[\bf (1)] $\sum_{j=1}^k m_j(A,B) \geq r$. \item[\bf (2)] There exists $\mu \in {\mathbb S}^{r}$ such that $ {\rm rank}\left( {\mathcal L}(\mu,\Gamma,A,B) \right) \leq mr - r $ for all $\Gamma \in {\mathcal G}(\mu)$. \end{enumerate} \end{corollary} \section{A singular value characterization for the nearest pencil with specified eigenvalues}\label{sec:SVD_derivation} As before, let ${\mathbb S} = \{ \lambda_1, \dots, \lambda_k \}$ be a set of distinct complex scalars and let $r$ be a positive integer. The purpose of this section is to derive a singular value optimization characterization for the distance $\tau_r({\mathbb S})$ defined in~(\ref{eq:taurs}). Our technique is highly inspired by those in \cite{Mengi2009, Mengi2010}, and in fact the main result of this section generalizes the singular value optimization characterizations from these works. We start by applying the following elementary result \cite[Theorem 2.5.3, p. 72]{Golub1996} to the rank characterization derived in the previous section. \begin{lemma} \label{thm:nearest_rankr} Consider $C \in \C^{\ell\times q}$ and a positive integer $p < \min (\ell,q)$. Then \[ \inf \big\{\| \Delta C \|_2 : {\rm rank}(C+\Delta C) \leq p \big\} = \sigma_{p+1}(C). \] \end{lemma} \noindent Defining \begin{equation}\label{eq:defn_P} {\mathcal P}_r(\mu) := \inf \big\{ \| \Delta A \|_2 : {\rm rank} \left( {\mathcal L}(\mu, \Gamma, A+\Delta A, B) \right) \leq mr - r \big\} \end{equation} for some $\Gamma \in {\mathcal G}(\mu)$, Corollary~\ref{thm:spec_eigvals_rankl} implies \[ \tau_r({\mathbb S}) = \inf_{\mu \in {\mathbb S}^r } {\mathcal P}_r(\mu), \] independent of the choice of $\Gamma$. By Lemma~\ref{thm:nearest_rankr}, it holds that \begin{eqnarray*} {\mathcal P}_r(\mu) &= & \inf \{ \| \Delta A \|_2 : {\rm rank} \left( {\mathcal L}(\mu, \Gamma, A+\Delta A, B) \right) \leq mr - r \} \\ &\geq & \sigma_{mr-r+1} \left( {\mathcal L}(\mu, \Gamma, A, B) \right), \end{eqnarray*} using the fact that $A$ enters ${\mathcal L}$ linearly.
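Numerically, ${\mathcal L}(\mu,\Gamma,A,B)$ is convenient to assemble directly from the Kronecker-product expression above. The following sketch (Python/NumPy; the function names and the ordering convention for the entries of $\Gamma$ are ours) builds $C(\mu,\Gamma)$, forms ${\mathcal L} = I_r \otimes A - C(\mu,\Gamma)^T \otimes B$ and evaluates the singular value $\sigma_{mr-r+1}$.
\begin{verbatim}
import numpy as np

def C_matrix(mu, gamma):
    """Upper triangular C(mu, Gamma) of (eq:cmu); gamma lists gamma_{j,l}
    for j > l, ordered column by column (gamma_21, gamma_31, ...)."""
    r = len(mu)
    C = np.diag(np.asarray(mu, dtype=complex))
    k = 0
    for l in range(r):
        for j in range(l + 1, r):
            C[l, j] = -gamma[k]      # entry C_{i,j} = -gamma_{j,i} for j > i
            k += 1
    return C

def L_matrix(mu, gamma, A, B):
    """L(mu, Gamma, A, B) = I_r (x) A - C(mu, Gamma)^T (x) B, cf. (eq:linear_pencil)."""
    r = len(mu)
    return np.kron(np.eye(r), A) - np.kron(C_matrix(mu, gamma).T, B)

def sigma_bound(mu, gamma, A, B):
    """sigma_{mr-r+1} of L, i.e. the r-th smallest singular value."""
    m, r = A.shape[1], len(mu)
    s = np.linalg.svd(L_matrix(mu, gamma, A, B), compute_uv=False)
    return s[m * r - r]              # zero-based index of the (mr-r+1)-th largest
\end{verbatim}
Maximizing this quantity over $\Gamma$, for instance by the optimization methods of Section~\ref{sec:computation}, then produces a computable lower bound on ${\mathcal P}_r(\mu)$, in view of the inequality just derived.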
Note that this inequality in general is \emph{not} an equality due to the fact that the allowable perturbations to ${\mathcal L}(\mu,\Gamma,A,B)$ in the definition of ${\mathcal P}_r(\mu)$ are not arbitrary. On the other hand, the inequality holds for all $\Gamma \in {\mathcal G}(\mu)$ and hence -- by continuity of the singular value $\sigma_{mr-r+1}(\cdot )$ with respect to $\Gamma$ -- we obtain the lower bound \begin{equation} \label{eq:lowerbound} {\mathcal P}_r(\mu) \;\; \geq \;\; \sup_{\Gamma \in \C^{r(r-1)/2}} \sigma_{mr-r+1} \left( {\mathcal L}(\mu, \Gamma, A, B) \right) =: \kappa_r(\mu). \end{equation} For $m = n$, it can be shown that $\sigma_{mr-r+1} \left( {\mathcal L}(\mu,\Gamma,A,B) \right)$ tends to zero as $\|\Gamma\|:=\sum |\gamma_{ij}|^2\to\infty$ provided that ${\rm rank}(B) \geq r$; see Appendix \ref{sec:to_zero} for details. From this fact and the continuity of singular values, it follows that the supremum is attained at some $\Gamma_{\ast}$ in the square case: \[ \kappa_r(\mu) = \sigma_{mr-r+1} \left( {\mathcal L}(\mu,\Gamma_{\ast},A,B) \right). \] In the rectangular case, numerical experiments indicate that the supremum is still attained if ${\rm rank}(B) \geq r$, but a formal proof does not appear to be easy. Moreover, it is not clear whether the supremum is attained at a unique $\Gamma_\ast$ or not. However, as we will show in the subsequent two subsections, any local extremum of the singular value function is a global maximizer under mild assumptions. (To be precise, the satisfaction of the multiplicity and linear independence qualifications at a local extremum guarantees that the local extremum is a global maximizer; see Definitions \ref{def:mult} and \ref{def:linin} below for multiplicity and linear independence qualifications.) Throughout the rest of this section we assume that the supremum is attained at some $\Gamma_{\ast}$ and that $\Gamma_{\ast} \in {\mathcal G}(\mu)$. The latter assumption will be removed later, in Section~\ref{sec:mainresult}. We will establish the reverse inequality ${\mathcal P}_r(\mu) \leq \kappa_r(\mu)$ by constructing an optimal perturbation $\Delta A_{\ast}$ such that \begin{enumerate} \item[\bf (i)] $\| \Delta A_{\ast} \|_2 = \kappa_r(\mu)$, $\;\;$ and \item[\bf (ii)] ${\rm rank} \left( {\mathcal L}(\mu,\Gamma_{\ast},A+\Delta A_{\ast},B) \right) \leq mr - r$. \end{enumerate} Let us consider the left and right singular vectors $U\in\C^{rn}$ and $V\in\C^{rm}$ satisfying the relations \begin{equation}\label{eq:opt_sval_svecs} {\mathcal L}(\mu,\Gamma_{\ast},A,B) \; V = \kappa_r(\mu) \; U, \qquad U^{\ast} \; {\mathcal L}(\mu,\Gamma_{\ast},A,B) = V^{\ast} \; \kappa_r(\mu),\qquad \|U\|_2 = \|V\|_2 = 1. \end{equation} The aim of the next two subsections is to show that the perturbation \begin{equation}\label{eq:optimal_perturbation} \Delta A_{\ast} := -\kappa_r(\mu)\, {\mathcal U} {\mathcal V}^+ \end{equation} with ${\mathcal U} \in \C^{n\times r}$ and ${\mathcal V} \in \C^{m\times r}$ such that ${\rm vec}({\mathcal U}) = U$ and ${\rm vec}({\mathcal V}) = V$ satisfies properties \textbf{(i)} and \textbf{(ii)}. Here, ${\mathcal V}^{+}$ denotes the Moore-Penrose pseudoinverse of ${\mathcal V}$. The optimality of $\Delta A_{\ast}$ will be established under the following additional assumptions. 
\begin{definition}[Multiplicity Qualification]\label{def:mult} We say that the multiplicity qualification holds at $\left( \mu, \Gamma \right)$ for the pencil $A - \lambda B$ if the multiplicity of the singular value $\sigma_{mr-r+1} \left( {\mathcal L}(\mu,\Gamma,A,B) \right)$ is one. \end{definition} \begin{definition}[Linear Independence Qualification] \label{def:linin} We say that the linear independence qualification holds at $\left( \mu, \Gamma \right)$ for the pencil $A - \lambda B$ if there is a right singular vector $V$ associated with $\sigma_{mr-r+1} \left( {\mathcal L}(\mu,\Gamma,A,B) \right)$ such that $\mathcal V \in \C^{m\times r}$, with ${\rm vec}(\mathcal V) = {V}$, has full column rank. \end{definition} \subsection{The 2-norm of the optimal perturbation} \label{sec:2norm} Throughout this section we assume that the multiplicity qualification holds at the optimal $(\mu, \Gamma_{\ast})$ for the pencil $A - \lambda B$. Moreover, we can restrict ourselves to the case $\kappa_r(\mu) \not=0$, as the optimal perturbation is trivially given by $\Delta A_\ast = 0$ when $\kappa_r(\mu) =0$. Let ${\mathcal A}(\gamma)$ be a matrix-valued function depending analytically on a parameter $\gamma \in \R$. If the multiplicity of $\sigma_j \left( {\mathcal A}(\gamma_\ast) \right)$ is one and $\sigma_j \left( {\mathcal A}(\gamma_\ast) \right)\not=0$, then $\sigma_j \left( {\mathcal A}(\gamma) \right)$ is analytic at $\gamma = \gamma_\ast$, with the derivative \begin{equation} \label{eq:derivative} \frac{\partial \sigma_j\left( {\mathcal A}(\gamma_\ast) \right)}{\partial \gamma} = {\rm Re} \left( u_j^{\ast} \frac{\partial {\mathcal A}(\gamma_\ast)}{\partial \gamma} v_j \right), \end{equation} where $u_j$ and $v_j$ denote a consistent pair of unit left and right singular vectors associated with $\sigma_j \left( {\mathcal A}(\gamma_\ast) \right)$, see, e.g.,~\cite{Bunse1991,Malyshev1999,Rel36}. Let us now define \[ f(\Gamma) := \sigma_{mr-r+1} \big( {\mathcal L}(\mu,\Gamma,A,B) \big), \] where we view $f$ as a mapping $\R^{r(r-1)} \rightarrow \R$ by decomposing each complex parameter $\gamma_{j\ell}$ contained in $\Gamma$ into its real and imaginary parts $\Re \gamma_{j\ell}$ and $\Im \gamma_{j\ell}$. By~(\ref{eq:derivative}), we have \[ \frac{ \partial f(\Gamma_{\ast}) }{\partial \Re \gamma_{j\ell}} = {\rm Re} \big( U_j^{\ast} B V_\ell \big), \qquad \frac{ \partial f(\Gamma_{\ast} ) }{\partial \Im \gamma_{j\ell}} = {\rm Re} \big( \mathrm{i}\, U_j^{\ast} B V_\ell \big) = - {\rm Im} \big(U_j^{\ast} B V_\ell \big), \] where $U_j \in \C^n$ and $V_\ell \in \C^m$ denote the $j$th and $\ell$th block components of $U$ and $V$, respectively. Furthermore, the fact that $\Gamma_{\ast}$ is a global maximizer of $f$ implies that both derivatives are zero. Consequently, we obtain the following result. \begin{lemma}\label{thm:opt_svecs} Suppose that the multiplicity qualification holds at $(\mu,\Gamma_{\ast})$ for the pencil $A - \lambda B$ and $\kappa_r(\mu) \not=0$. Then $ U_j^{\ast} B V_\ell = 0 $ for all $j = 2,\dots,r$ and $\ell = 1,\dots,j-1$. \end{lemma} Now, by exploiting Lemma \ref{thm:opt_svecs}, we show ${\mathcal U}^{\ast} {\mathcal U} = {\mathcal V}^{\ast} {\mathcal V}$. Geometrically, this means that the angle between $U_i$ and $U_j$ coincides with the angle between $V_i$ and $V_j$. \begin{lemma} \label{eq:uv} Under the assumptions of Lemma \ref{thm:opt_svecs}, it holds that $ {\mathcal U}^{\ast} {\mathcal U} = {\mathcal V}^{\ast} {\mathcal V}.
$ \end{lemma} \begin{proof} Expressing the first two equalities in the singular value characterization~(\ref{eq:opt_sval_svecs}) in matrix form yields the generalized Sylvester equations \begin{equation*} A{\mathcal V} - B{\mathcal V}C(\mu,\Gamma_{\ast}) = \kappa_r(\mu) {\mathcal U} \end{equation*} and \begin{equation*} {\mathcal U}^\ast A - C(\mu,\Gamma_{\ast}) {\mathcal U}^\ast B = \kappa_r(\mu) {\mathcal V}^\ast. \end{equation*} By multiplying the first equation with ${\mathcal U}^\ast$ from the left-hand side, multiplying the second equation with ${\mathcal V}$ from the right-hand side, and then subtracting the second equation from the first, we obtain \begin{equation}\label{eq:usu-vsv} \kappa_r(\mu) \left( {\mathcal U}^\ast {\mathcal U} - {\mathcal V}^\ast {\mathcal V} \right) = C(\mu,\Gamma_{\ast}) {\mathcal U}^\ast B {\mathcal V} - {\mathcal U}^\ast B{\mathcal V} C(\mu,\Gamma_{\ast}). \end{equation} Lemma \ref{thm:opt_svecs} implies that ${\mathcal U}^\ast B{\mathcal V}$ is upper triangular. Since $C(\mu,\Gamma_{\ast})$ is also upper triangular, the right-hand side in (\ref{eq:usu-vsv}) is \emph{strictly} upper triangular. But the left-hand side in (\ref{eq:usu-vsv}) is Hermitian, implying that the right-hand side is indeed zero, which -- together with $\kappa_r(\mu) \not=0$ -- completes the proof. \end{proof} The result of Lemma~\ref{eq:uv} implies $\| {\mathcal U} {\mathcal V}^{+} \|_2 = 1$. A formal proof of this implication can be found in \cite[Lemma 2]{Malyshev1999} and \cite[Theorem 2.5]{Mengi2009}. Indeed, the equality $\| {\mathcal U} {\mathcal V}^{+} \|_2 = 1$ can be directly deduced from $\| {\mathcal U}{\mathcal V}^+ x\|_2 = \| {\mathcal V}{\mathcal V}^+ x\|_2$ for every $x$ (implied by Lemma~\ref{eq:uv}), and $\| {\mathcal V}{\mathcal V}^+ \|_2 = 1$ (since ${\mathcal V}{\mathcal V}^+$ is an orthogonal projector). \begin{theorem} Suppose that the multiplicity qualification holds at $(\mu,\Gamma_{\ast})$ for the pencil $A - \lambda B$. Then the perturbation $\Delta A_{\ast}$ defined in~(\ref{eq:optimal_perturbation}) satisfies $ \| \Delta A_{\ast} \|_2 = \kappa_r(\mu). $ \end{theorem} \subsection{Satisfaction of the rank condition by the optimally perturbed pencil} Now we assume that the linear independence qualification (Definition~\ref{def:linin}) holds at $(\mu,\Gamma_{\ast})$ for the pencil $A - \lambda B$. In particular, we assume that we can choose a right singular ``vector'' $\; {\rm vec} \left( {\mathcal V} \right) \;$ so that ${\mathcal V}$ has full column rank. We will establish that \begin{equation} \label{eq:rank} {\rm rank} \big( {\mathcal L} (\mu, \Gamma_{\ast},A+\Delta A_{\ast},B) \big) \leq mr - r \end{equation} for $\Delta A_{\ast}$ defined as in~(\ref{eq:optimal_perturbation}). Writing the first part of the singular vector characterization~(\ref{eq:opt_sval_svecs}) in matrix form leads to the generalized Sylvester equation \begin{equation*} A{\mathcal V} - B{\mathcal V}C(\mu,\Gamma_{\ast}) = \kappa_r(\mu) {\mathcal U}. \end{equation*} The fact that ${\mathcal V}$ has full column rank implies ${\mathcal V}^{+} {\mathcal V} = I$ and hence \[ \displaystyle \begin{array}{rrcl} & A{\mathcal V} - B{\mathcal V}C(\mu,\Gamma_{\ast}) & = & \kappa_r(\mu) {\mathcal U} {\mathcal V}^{+} {\mathcal V} \\ \Longrightarrow & (A - \kappa_r(\mu) {\mathcal U} {\mathcal V}^{+}) {\mathcal V} - B{\mathcal V}C(\mu,\Gamma_{\ast}) & = & 0 \\ \Longrightarrow & (A + \Delta A_{\ast}) {\mathcal V} - B{\mathcal V}C(\mu,\Gamma_{\ast}) & = & 0.
\end{array} \] Let us consider $ {\mathcal M} = \{ D \in \C^{r\times r} : C(\mu,\Gamma_{\ast})D - D C(\mu,\Gamma_{\ast}) = 0 \}, $ the subspace of all $r\times r$ matrices commuting with $C(\mu,\Gamma_{\ast})$. By Theorem \ref{thm:Sylvester_matrix}, ${\mathcal M}$ is a subspace of dimension at least~$r$. Clearly, for all $D \in {\mathcal M}$, we have \[ 0 = (A + \Delta A_{\ast}) {\mathcal V}D - B{\mathcal V}C(\mu,\Gamma_{\ast})D = (A + \Delta A_{\ast}) ({\mathcal V}D) - B({\mathcal V}D) C(\mu,\Gamma_{\ast}). \] In other words, $\{ {\mathcal V}D : D \in {\mathcal M} \}$ has dimension at least $r$ (using the fact that ${\mathcal V}$ has full column rank) and represents a subspace of solutions to the generalized Sylvester equation \[ (A + \Delta A_{\ast}) X - B X C(\mu,\Gamma_{\ast}) = 0. \] Reinterpreting this result in terms of the matrix representation, the desired rank estimate~(\ref{eq:rank}) follows. This completes the derivation of ${\mathcal P}_r(\mu) \leq \kappa_r(\mu)$ under the stated multiplicity and linear independence assumptions. \subsection{Main Result} \label{sec:mainresult} To summarize the discussion above, we have obtained the singular value characterization \begin{equation} \label{eq:svcharacterization} \tau_r({\mathbb S}) = \inf_{\mu \in {\mathbb S}^r} \sup_{\Gamma} \sigma_{mr -r + 1} \left( {\mathcal L} \left( \mu,\Gamma,A,B \right) \right). \end{equation} Among our assumptions, we have \begin{equation}\label{eq:assump_removable} \textbf{(i)} \; \text{the KCF of $A-\lambda B$ has no right singular blocks} \;\;\;\;\;\; {\rm and} \;\;\;\;\;\; \textbf{(ii)} \; \Gamma_\ast \in {\mathcal G}(\mu). \end{equation} In this section, we show that these two assumptions can be dropped. We still require that ${\rm rank}(B) \ge r$. As explained in the introduction, the distance problem becomes ill-posed otherwise. \begin{enumerate} \item[\bf (i)] Suppose that the KCF of $A-\lambda B$ contains a right singular block $F_p - \lambda G_p \in \R^{p\times (p+1)}$ for some $p\ge 0$. By~\cite[Sec. 4]{Kosir96}, the generalized Sylvester equation $F_p X - G_p X C(\mu,\Gamma) = 0$ has a solution space of dimension $r$. This implies that the solution space of $AX - BX C(\mu,\Gamma) = 0$ also has dimension at least $r$, and consequently $\sigma_{mr -r + 1} \left( {\mathcal L} \left( \mu,\Gamma,A,B \right)\right)$ is always zero. On the other hand, the presence of a right singular block implies that for \emph{any} $\varepsilon>0$ and $\mu_1,\ldots,\mu_r \in \C$ with $r\le \text{rank}(B)$ there is a perturbation $\Delta A$ such that $\|\Delta A\|_2\le \varepsilon$ and $(A + \Delta A) - \lambda B$ has eigenvalues $\mu_1,\ldots,\mu_r$, see~\cite{DeTK12}. This shows $\tau_r({\mathbb S}) = 0$ and hence both sides of~\eqref{eq:svcharacterization} are equal to zero. In summary, we can replace the assumption \textbf{(i)} by the weaker assumption ${\rm rank}(B) \geq r$. \item[\bf (ii)] To address \textbf{(ii)}, we first note that both ${\mathcal P}_r(\mu)$ and $\kappa_r(\mu)$, defined in (\ref{eq:defn_P}) and (\ref{eq:lowerbound}), change continuously with respect to $\mu$. Suppose that $\mu$ has repeating elements, which allows for the possibility that $\Gamma_\ast \notin {\mathcal G}(\mu)$. But for all $\tilde{\mu}$ with distinct elements, we necessarily have ${\mathcal G}(\tilde{\mu}) = \C^{r(r-1)/2}$.
Moreover, when $\tilde{\mu}$ is sufficiently close to $\mu$ then ${\mathcal P}_r(\tilde{\mu}) = \kappa_r(\tilde{\mu})$, provided that the multiplicity and linear independence assumptions hold at $(\mu,\Gamma_\ast)$ (implying the satisfaction of these two assumptions for $\tilde{\mu}$ also). Then the equality ${\mathcal P}_r(\mu) = \kappa_r(\mu)$ follows from continuity. Consequently, the assumption \textbf{(ii)} in (\ref{eq:assump_removable}) is also not needed for the singular value characterization. \end{enumerate} We conclude this section by stating the main result of this paper. \begin{theorem}[Nearest Pencils with Specified Eigenvalues]\label{thm:sval_char_pres_eigs} Let $A - \lambda B$ be an $n\times m$ pencil with $n \geq m$, let $r$ be a positive integer such that $r \leq {\rm rank}(B)$ and let ${\mathbb S} = \{ \lambda_1, \dots, \lambda_k \}$ be a set of distinct complex scalars. \begin{enumerate} \item[\bf (i)] Then \[ \tau_r({\mathbb S}) = \inf_{\mu \in {\mathbb S}^r} \sup_{\Gamma} \sigma_{mr -r + 1} \left( {\mathcal L} \left( \mu,\Gamma,A,B \right) \right) \] holds, provided that the optimization problem on the right is attained at some $(\mu_{\ast}, \Gamma_{\ast})$ for which $\Gamma_\ast$ is finite and the multiplicity as well as the linear independence qualifications hold. \item[\bf (ii)] A minimal perturbation $\Delta A_{\ast}$ such that $\sum_{j=1}^k m(A + \Delta A_{\ast}, B) \geq r$ is given by (\ref{eq:optimal_perturbation}), with $\mu$ replaced by $\mu_{\ast}$. \end{enumerate} \end{theorem} \section{Corollaries of Theorem \ref{thm:sval_char_pres_eigs}}\label{sec:nearest_rect_pencil} As discussed in the introduction one potential application of Theorem~\ref{thm:sval_char_pres_eigs} is in control theory, to ensure that the eigenvalues lie in a particular region in the complex plane. Thus let $\Omega$ be a subset of the complex plane. Then, provided that the assumptions of Theorem~\ref{thm:sval_char_pres_eigs} hold, we have the following singular value characterization for the distance to the nearest pencil with $r$ eigenvalues in $\Omega$: \begin{equation}\label{eq:nearest_pencil_region} \begin{split} \tau_r(\Omega) := & \inf_{{\mathbb S} \subseteq \Omega} \tau_r({\mathbb S}) \hskip 32ex \\ = & \inf_{{\mathbb S} \subseteq \Omega} \inf_{\mu \in {\mathbb S}^r} \sup_{\Gamma} \sigma_{mr -r + 1} \big( {\mathcal L} \left( \mu,\Gamma,A,B \right) \big) \\ = & \inf_{\mu \in \Omega^r} \sup_{\Gamma} \sigma_{mr -r + 1} \big( {\mathcal L} \left( \mu,\Gamma,A,B \right) \big), \hskip 7ex \end{split} \end{equation} where $\Omega^r$ denotes the set of vectors of length $r$ with all entries in $\Omega$. When the pencil $A - \lambda B$ is rectangular, that is $n > m$, the pencil has generically no eigenvalues. Then the distance to the nearest rectangular pencil with $r$ eigenvalues is of interest. In this case, the singular value characterization takes the following form: \begin{equation}\label{eq:nearest_pencil_cplane} \tau_r(\C) = \inf_{\mu \in \C^r} \sup_{\Gamma} \sigma_{mr -r + 1} \left( {\mathcal L} \left( \mu,\Gamma,A,B \right) \right). \end{equation} The optimal perturbations $\Delta A_{\ast}$ such that the pencil $(A + \Delta A_{\ast}) - \lambda B$ has eigenvalues (in $\C$ and $\Omega$) are given by (\ref{eq:optimal_perturbation}), with $\mu$ replaced by the minimizing $\mu$ values in (\ref{eq:nearest_pencil_cplane}) and (\ref{eq:nearest_pencil_region}), respectively. 
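In the same spirit, the candidate perturbation (\ref{eq:optimal_perturbation}) is easily assembled for a given pair $(\mu,\Gamma)$; the sketch below (Python/NumPy, reusing the helper \texttt{L\_matrix} from the earlier sketch) un-vectorizes the singular vectors associated with $\sigma_{mr-r+1}$ and forms $-\sigma\, {\mathcal U}{\mathcal V}^{+}$. Whether the result actually attains $\tau_r({\mathbb S})$ depends, of course, on $\Gamma$ being a maximizer and on the qualifications of Definitions \ref{def:mult} and \ref{def:linin}.
\begin{verbatim}
import numpy as np

def candidate_dA(mu, gamma, A, B):
    """Candidate perturbation -sigma * U V^+ of (eq:optimal_perturbation),
    built from the singular pair of L(mu, Gamma, A, B) for sigma_{mr-r+1}."""
    n, m = A.shape
    r = len(mu)
    U, s, Vh = np.linalg.svd(L_matrix(mu, gamma, A, B))
    k = m * r - r                           # index of sigma_{mr-r+1} (zero-based)
    calU = U[:, k].reshape(r, n).T          # n x r, vec(calU) = U[:, k]
    calV = Vh[k].conj().reshape(r, m).T     # m x r, vec(calV) = V[:, k]
    return -s[k] * calU @ np.linalg.pinv(calV), s[k]
\end{verbatim}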
\section{Multiplicity and linear independence qualifications}\label{sec:qualifications} The results in this paper are proved under the assumptions of multiplicity and linear independence qualifications. This section provides an example for which the multiplicity and linear independence qualifications are not satisfied for the optimal value of $\Gamma$. Note that this does not mean that these assumptions are necessary to prove the results from this paper. In fact, numerical experiments suggest that our results may hold even if these assumptions are not satisfied. Consider the pencil \[ \left[ \begin{array}{rcc} -1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 2 \end{array} \right] -\lambda \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \] Let $\mu=\begin{bmatrix} 5 & 1 \end{bmatrix}^T$, that is, the target eigenvalues are $5$ and $1$. Then it is easy to see that the optimal perturbation is given by \[ \Delta A_{\ast} = \left[ \begin{array}{ccr} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right]. \] The singular values of the matrix ${\mathcal L}(\mu,\gamma,A,B)$ are \[ 0, 1, \sqrt{ 16+|\gamma|^2 }, \sqrt{ 5+\frac{1}{2}|\gamma|^2\pm \frac{1}{2} \sqrt{|\gamma|^4+20|\gamma|^2+64} } \] where the multiplicity of the singular value 1 is two. Hence \[ \sigma_5\left({\mathcal L}(\mu,\gamma,A,B) \right) = \sqrt{ 5+\frac{1}{2}|\gamma|^2 - \frac{1}{2} \sqrt{|\gamma|^4+20|\gamma|^2+64} }. \] Clearly the supremum is attained for $\gamma=0$ and $\sigma_5\left({\mathcal L}(\mu,0,A,B) \right) = 1$. Hence the multiplicity condition at the optimal $\gamma$ is violated. All three pairs of singular vectors corresponding to the singular value 1 at the optimal $\gamma$ violate the linear independence condition, but one pair does lead to the optimal perturbation $\Delta A_{\ast}$. \section{Computational issues}\label{sec:computation} A numerical technique that can be used to compute $\tau_r(\Omega)$ and $\tau_r(\C)$ based on the singular value characterizations was already described in \cite{Mengi2009, Mengi2010}. For completeness, we briefly recall this technique in the following. The distances of interest can be characterized as \[ \tau_r(\Omega) = \inf_{\mu \in \Omega^r} g(\mu) \hskip 7ex {\rm and} \hskip 7ex \tau_r(\C) = \inf_{\mu \in \C^r} g(\mu), \] where $g : \C^r \rightarrow \R$ is defined by \[ g(\mu) := \sup_{\Gamma \in \C^{r(r-1)/2}} \sigma_{mr-r+1} \big( {\mathcal L}(\mu,\Gamma,A,B) \big). \] The inner maximization problems are solved by BFGS, even though $\sigma_{mr-r+1}(\cdot)$ is not differentiable at multiple singular values. In practice this is not a major issue for BFGS as long as a proper line search (e.g., a line search respecting weak Wolfe conditions) is used, as the multiplicity of the $r$th smallest singular value is one generically with respect to $\Gamma$ for any given $\mu$; see the discussions in~\cite{LO12}. If the multiplicity and linear independence qualifications hold at a local maximizer $\Gamma_{\ast}$, then $\Gamma_{\ast}$ is in fact a global maximizer and hence $g(\mu)$ is retrieved. If, on the other hand, BFGS converges to a point where one of these qualifications is violated, it needs to be restarted with a different initial guess. In practice we have almost always observed convergence to a global maximizer immediately, without the need for such a restart. Although the function $g(\mu)$ is in general non-convex, it is Lipschitz continuous: \[ | g(\mu + \delta \mu) - g(\mu) | \leq \| \delta\mu \|_2 \cdot \| B \|_2. 
\] There are various Lipschitz-based global optimization algorithms in the literature stemming mainly from ideas due to Piyavskii and Shubert (see \cite{Piyavskii1972, Shubert1972}). The Piyavskii-Shubert algorithm is based on the idea of constructing a piecewise linear approximation lying beneath the Lipschitz function. We used DIRECT (see \cite{Jones1993}), a sophisticated variant of the Piyavskii-Shubert algorithm. DIRECT attempts to estimate the Lipschitz constant locally, which can possibly speed up convergence. The main computational cost involved in the numerical optimization of singular values is the retrieval of the $r$th smallest singular value of ${\mathcal L}(\mu,\Gamma,A,B)$ at various values of $\mu$ and $\Gamma$. As we only experimented with small pencils, we used direct solvers for this purpose. For medium to large scale pencils, iterative algorithms such as the Lanczos method (see \cite{Golub1996}) are more appropriate. \section{Numerical Experiments}\label{sec:numerical_exp} Our algorithm is implemented in Fortran, calling routines from LAPACK for singular value computations, the limited memory BFGS routine written by J. Nocedal (discussed in \cite{Liu1989}) for inner maximization problems, and an implementation of the DIRECT algorithm by Gablonsky (described in \cite{Gablonsky2001}) for outer Lipschitz-based minimization. A mex interface provides convenient access via {\sc Matlab}. The current implementation is not very reliable, which appears to be related to the numerical solution of the outer Lipschitz minimization problem, in particular the DIRECT algorithm and its termination criteria. We occasionally obtain results that are less accurate than the prescribed accuracy. The multiplicity and linear independence qualifications usually hold in practice and do not appear to affect the numerical accuracy. For the moment, the implementation is intended for small pencils (\textit{e.g.}, $n,m < 100$). \subsection{Nearest Pencils with Multiple Eigenvalues} As a corollary of Theorem \ref{thm:sval_char_pres_eigs} it follows that, for a square pencil $A - \lambda B$, the distance to the nearest pencil having ${\mathbb S} = \{ \mu \}$ as a multiple eigenvalue is given by \[ \tau_2({\mathbb S}) = \sup_{\gamma} \sigma_{2n-1} \left( \left[ \begin{array}{cc} A - \mu B & 0 \\ \gamma B & A - \mu B \\ \end{array} \right] \right) \] provided that the multiplicity and linear independence qualifications are satisfied at the optimal $(\mu,\gamma_\ast)$. Therefore, for the distance from $A - \lambda B$ to the nearest square pencil with a multiple eigenvalue, the singular value characterization takes the form \begin{equation}\label{eq:dist_defect} \inf_{\mu \in \C} \sup_{\gamma} \sigma_{2n-1} \left( \left[ \begin{array}{cc} A - \mu B & 0 \\ \gamma B & A - \mu B \\ \end{array} \right] \right). \end{equation} Specifically, we consider the pencil \begin{equation}\label{eq:num_examp1_pencil} A-\lambda B = \left[ \begin{array}{rrr} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \\ \end{array} \right] - \lambda \left[ \begin{array}{rrr} -1 & 2 & 3 \\ 2 & -1 & 2 \\ 4 & 2 & -1 \\ \end{array} \right]. \end{equation} Solving the above singular value optimization problem results in a distance of $0.59299$ to the nearest pencil with a multiple eigenvalue.
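To give an impression of how (\ref{eq:dist_defect}) is evaluated in practice, the following rough sketch (Python/SciPy) treats the example above: the inner maximization over the complex parameter $\gamma$ is carried out by BFGS on $-\sigma_{2n-1}$, while the outer minimization over $\mu$ is replaced, for simplicity, by a coarse grid search instead of the DIRECT algorithm; the computed value is therefore only an approximation of the distance reported above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

A = np.array([[ 2., -1., -1.], [-1.,  2., -1.], [-1., -1.,  2.]])
B = np.array([[-1.,  2.,  3.], [ 2., -1.,  2.], [ 4.,  2., -1.]])
n = 3

def sigma(mu, gamma):
    """sigma_{2n-1} of the 2x2 block matrix in (eq:dist_defect)."""
    L = np.block([[A - mu * B, np.zeros((n, n))],
                  [gamma * B,  A - mu * B]])
    return np.linalg.svd(L, compute_uv=False)[2 * n - 2]

def g(mu):
    """Inner maximization over gamma (BFGS applied to -sigma)."""
    res = minimize(lambda x: -sigma(mu, x[0] + 1j * x[1]), [0.1, 0.1],
                   method="BFGS")
    return -res.fun

# crude outer minimization over a grid in the complex plane
grid = [x + 1j * y for x in np.linspace(-2, 2, 41)
                   for y in np.linspace(-2, 2, 41)]
mu_best = min(grid, key=g)
print(mu_best, g(mu_best))   # expected to be close to the reported 0.59299,
                             # up to the resolution of the coarse grid
\end{verbatim}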
By~(\ref{eq:optimal_perturbation}), a nearest pencil turns out to be \[ \left[ \begin{array}{rrr} 1.91465 & -0.57896 & -1.21173 \\ -1.32160 & 1.93256 & -0.57897 \\ -0.72082 & -1.32160 & 1.91466 \\ \end{array} \right] - \lambda \left[ \begin{array}{rrr} -1 & 2 & 3 \\ 2 & -1 & 2 \\ 4 & 2 & -1 \\ \end{array} \right], \] with the double eigenvalue $\lambda_{\ast} = -0.85488$. The optimal maximizing $\gamma$ turns out to be zero, which means that neither the multiplicity nor the linear independence qualification holds. (This is the non-generic case; had we attempted to calculate the distance to the nearest pencil with $\mu$ as a multiple eigenvalue for a given $\mu$, the optimal $\gamma$ would appear to be non-zero for generic values of $\mu$.) Nevertheless, the singular value characterization (\ref{eq:dist_defect}) remains true for the distance, as discussed next. The $\epsilon$-pseudospectrum of $A - \lambda B$ (subject to perturbations in $A$ only) is the set $\Lambda_{\epsilon}(A,B)$ containing the eigenvalues of all pencils $(A+\Delta A) - \lambda B$ such that $\| \Delta A \|_2 \leq \epsilon$. Equivalently, \[ \Lambda_{\epsilon}(A,B) = \{ \lambda \in \C : \sigma_{\min}(A - \lambda B) \leq \epsilon \}. \] It is well known that the smallest $\epsilon$ such that two components of $\Lambda_{\epsilon}(A,B)$ coalesce equals the distance to the nearest pencil with multiple eigenvalues. (See~\cite{Alam2005} for the case $B = I$, but the result easily extends to arbitrary invertible $B$.) Figure \ref{fig:gen_pseudo} displays the pseudospectra of the pencil in~(\ref{eq:num_examp1_pencil}) for various levels of $\epsilon$. Indeed, two components of the $\epsilon$-pseudospectrum coalesce for $\epsilon = 0.59299$, confirming our result. \begin{figure} \caption{Pseudospectra for the pencil in (\ref{eq:num_examp1_pencil}) for various levels of $\epsilon$.} \label{fig:gen_pseudo} \end{figure} \subsection{Nearest Rectangular Pencils with at least Two Eigenvalues} As an example of a rectangular pencil, let us consider the $4\times 3$ pencil \[ A - \lambda B = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 2 & 0.3 \\ 0 & 1 & 2 \\ \end{array} \right] - \lambda \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right]. \] The KCF of this pencil contains a $4\times 3$ singular block and therefore the pencil has no eigenvalues. However, if the entry ${a}_{22}$ is set to zero, the KCF of the resulting pencil contains a $2 \times 1$ singular block and a $2\times 2$ regular block corresponding to finite eigenvalues. Hence, a perturbation with 2-norm $0.1$ is sufficient to have two eigenvalues. According to the corollaries in Section \ref{sec:nearest_rect_pencil}, the distance to the nearest $4\times 3$ pencil with at least two eigenvalues has the characterization \begin{equation}\label{eq:rect2_svalchar} \tau_2(\C) = \inf_{\mu \in \C^2} \underbrace{ \sup_{\gamma} \sigma_{2m-1} \left( \left[ \begin{array}{cc} {A} - \mu_1 {B} & 0 \\ \gamma {B} & {A} - \mu_2 {B} \\ \end{array} \right] \right)}_{=:g(\mu)} \end{equation} for $m = 3$. Our implementation returns $\tau_2(\C) = 0.03927$. The corresponding nearest pencil~(\ref{eq:optimal_perturbation}) is given by \[ \left[ \begin{array}{rrr} 0.99847 & -0.03697 & -0.01283 \\ 0 & 0.08698 & 0.03689 \\ 0 & 2.00172 & 0.30078 \\ 0.00007 & 1.00095 & 2.00376 \\ \end{array} \right] - \lambda \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right] \] and has eigenvalues at $\mu_1 = 2.55144$ and $\mu_2 = 1.45405$.
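The reported eigenvalues can be verified directly: a scalar $\mu_j$ is an eigenvalue of the $4\times 3$ pencil above precisely when $A + \Delta A_{\ast} - \mu_j B$ is rank deficient, i.e. when its smallest singular value vanishes. A minimal check (Python/NumPy; the entries below are the rounded values displayed above, so the residuals are expected to be small rather than exactly zero):
\begin{verbatim}
import numpy as np

Ap = np.array([[0.99847, -0.03697, -0.01283],
               [0.0,      0.08698,  0.03689],
               [0.0,      2.00172,  0.30078],
               [0.00007,  1.00095,  2.00376]])
B  = np.array([[0., 0., 0.],
               [1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 1.]])

for mu in (2.55144, 1.45405):
    s_min = np.linalg.svd(Ap - mu * B, compute_uv=False)[-1]
    print(mu, s_min)   # limited only by the rounding of the printed digits
\end{verbatim}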
This result is confirmed by Figure~\ref{fig:Jordan_rect}, which illustrates the level sets of the function $g(\mu)$ defined in (\ref{eq:rect2_svalchar}) over $\R^2$. For this example the optimal $\gamma$ is 2.0086. The smallest three singular values of the matrix in (\ref{eq:rect2_svalchar}) are $1.4832$, $0.0393$ and $0.0062$ for these optimal values of $\mu$ and $\gamma$. The linear independence qualification also holds. \begin{figure} \caption{Level sets over $\mathbb R^2$ of the function $g(\mu)$ defined in (\ref{eq:rect2_svalchar} \label{fig:Jordan_rect} \end{figure} \subsection{Nearest Stable Pencils} As a last example, suppose that $Bx^\prime(t) = Ax(t)$ with $A, B \in \C^{n\times n}$ is an unstable descriptor system. The distance to a nearest stable descriptor system is a special case of $\tau_n(\Omega)$, with $\Omega = \C^{-}$, the open left-half of the complex plane. A singular value characterization is given by \[ \tau_n(\C^{-}) = \inf_{\lambda_j \in \C^{-}} \;\; \sup_{\gamma_{ik} \in \C} \; \sigma_{n^2 - n + 1} \left( \left[ \begin{array}{cccc} A - \lambda_1 B & 0 & & 0 \\ \gamma_{21} B & A - \lambda_2 B & & 0 \\ & & \ddots & \\ \gamma_{n1} B & \gamma_{n2} B & & A - \lambda_n B \\ \end{array} \right] \right). \] Specifically, we consider a system with $B = I_2$ and \begin{equation}\label{eq:2by2unstable} A = \left[ \begin{array}{cc} 0.6 - \frac{1}{3} i & -0.2 + \frac{4}{3} i \\ -0.1 + \frac{2}{3} i & 0.5 + \frac{1}{3} i \\ \end{array} \right]. \end{equation} Both eigenvalues $\lambda_1 = 0.7 - i$ and $\lambda_2 = 0.4 + i$ are in the right-half plane. Based on the singular value characterization, we have computed the distance to a nearest stable system $x'(t) = (A + \Delta A_{\ast}) x(t)$ as $0.6610$. The corresponding perturbed matrix \[ A + \Delta A_{\ast} = \left[ \begin{array}{rr} 0.0681 - 0.3064i & -0.4629 + 1.2524i \\ 0.2047 + 0.5858i & -0.1573 + 0.3064i \\ \end{array} \right] \] at a distance of $0.6610$ has one eigenvalue $\left( \lambda_{\ast} \right)_1 = -0.0885+0.9547i$ in the left-half plane and the other $\left( \lambda_{\ast} \right)_2 = -0.9547i$ on the imaginary axis. The $\epsilon$-pseudospectrum of $A$ is depicted in Figure \ref{fig:pss_nearest_stable}. For $\epsilon = 0.6610$, one component of the $\epsilon$-pseudospectrum crosses the imaginary axis, while the other component touches the imaginary axis. \begin{figure} \caption{Pseudospectra of the matrix $A$ defined in (\ref{eq:2by2unstable} \label{fig:pss_nearest_stable} \end{figure} \section{Concluding Remarks} In this work a singular value characterization has been derived for the 2-norm of a smallest perturbation to a square or a rectangular pencil $A - \lambda B$ such that the perturbed pencil has a desired set of eigenvalues. The immediate corollaries of this main result are \begin{enumerate} \item[\bf (i)] a singular value characterization for the 2-norm of the smallest perturbation so that the perturbed pencil has a specified number of its eigenvalues in a desired region in the complex plane, and \item[\bf (ii)] a singular value characterization for the 2-norm of the smallest perturbation to a rectangular pencil so that it has a specified number of eigenvalues. \end{enumerate} Partly motivated by an application explained in the introduction, we allow perturbations to $A$ only. The extension of our results to the case of simultaneously perturbed $A$ and $B$ remains open. 
The development of efficient and reliable computational techniques for the solution of the derived singular value optimization problems is still in progress. As of now the optimization problems can be solved numerically only for small pencils with small number of desired eigenvalues. The main task that needs to be addressed from a computational point of view is a reliable and efficient implementation of the DIRECT algorithm for Lipschitz-based optimization. For large pencils it is necessary to develop Lipschitz-based algorithms converging asymptotically faster than the algorithms (such as the DIRECT algorithm) stemming from the Piyavskii-Shubert algorithm. The derivatives from Section~\ref{sec:2norm} might constitute a first step in this direction. \vskip 2ex \noindent \textbf{Acknowledgments} We are grateful to two anonymous referees for their valuable comments. The research of the second author is supported in part by the European Commision grant PIRG-GA-268355 and the T\"{U}B\.{I}TAK (the scientific and technological research council of Turkey) carrier grant 109T660. \vskip 6ex \appendix \section{ Proof that $\sigma_{mr-r+1} \left( {\mathcal L}(\mu,\Gamma,A,B) \right) \to 0$ as $\Gamma\to\infty$} \label{sec:to_zero} We prove that the $r$ smallest singular values of ${\mathcal L}(\mu,\Gamma,A,B)$ decay to zero as soon as at least one entry of $\Gamma$ tends to infinity, provided that $n = m$. In the rectangular case, $n>m$, these singular values generally do not decay to zero. We start by additionally assuming that $A-\mu_i B$ are non--singular matrices for all $i=1,\ldots,r$. We will first prove the result under this assumption, and then we will drop it. Our approach is a generalization of the procedure from \cite[\S 5]{IN2005}, which in turn is a generalization of \cite[Lemma 2]{Malyshev1999}. Under our assumptions the matrix ${\mathcal L}(\mu,\Gamma,A,B)$ is non--singular, and one can explicitly calculate the inverse. It is easy to see that the matrix ${\mathcal L}^{-1}(\mu,\Gamma,A,B)$ has the form \[\begin{bmatrix} (A-\mu_1 B)^{-1} & 0 & \ldots & 0 \\ X_{21} & (A-\mu_2 B)^{-1} & \ldots & 0 \\ X_{31} & X_{32} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ X_{r1} & X_{r2} & \ldots & (A-\mu_r B)^{-1} \end{bmatrix}. \] We will use the well--known relations \begin{equation} \label{eq:sigmar} \sigma_{nr-r+1} \left( {\mathcal L}(\mu,\Gamma,A,B) \right) = \sigma_{r}\left({\mathcal L}(\mu,\Gamma,A,B)^{-1} \right)^{-1} \le \sigma_{r}\left(X_{ij} \right)^{-1}. \end{equation} We first compute the matrices $X_{21},\ldots,X_{r,r-1}$ which lie on the first sub--diagonal. By a straightforward computation we obtain \begin{equation*} X_{i+1,i} = -\gamma_{i+1,i}(A-\mu_{i+1}B)^{-1}B(A-\mu_i B)^{-1}. \end{equation*} If $\sigma_r \left( (A-\mu_{i+1}B)^{-1}B(A-\mu_i B)^{-1}\right) >0 $, then from \eqref{eq:sigmar} it follows that if any of $|\gamma_{i+1,i}|$ tends to infinity, we obtain the desired result. But $\sigma_r \left( (A-\mu_{i+1}B)^{-1}B(A-\mu_i B)^{-1}\right) > 0 $ easily follows from the assumption ${\rm rank} (B) \ge r$. If this is not the case, meaning $\max_i\{\gamma_{i+1,i}\}$ is bounded, then we use the entries on the next sub--diagonal $X_{i+2,i}$. Again by straightforward computation we obtain \[ X_{i+2,i} = -\gamma_{i+2,i}(A-\mu_{i+2}B)^{-1}B(A-\mu_i B)^{-1} + \gamma_{i+2,i+1}\gamma_{i+1,i}(A-\mu_{i+2}B)^{-1}B(A-\mu_{i+1}B)^{-1}B(A-\mu_i B)^{-1}. 
\] Because again ${\rm rank} (B) \ge r$ implies $\sigma_r \left( (A-\mu_{i+2}B)^{-1}B(A-\mu_i B)^{-1} \right) >0 $, it follows that if any of $|\gamma_{i+2,i}|$ tends to infinity, we obtain the desired result. In general, we have the recursive formula \[ X_{i+j,i} = -\gamma_{i+j,i}(A-\mu_{i+j}B)^{-1}B(A-\mu_i B)^{-1} - \sum_{k=1}^{j-1} \gamma_{i+j,i+k} (A-\mu_{i+j}B)^{-1}BX_{i+k,i}. \] Applying the same procedure as above, we conclude the proof in this case. To remove the assumption that the matrices $A-\mu_i B$ are non--singular, we fix any $\varepsilon >0$. Let us choose a matrix $A_{\varepsilon}$ such that $\|A_{\varepsilon}-A\| < \varepsilon$ and that the matrices $A_{\varepsilon}-\mu_i B$ are non--singular for all $i=1,\ldots,r$. From the arguments above, it follows that there exists $\gamma_0>0$ such that $ \sigma_{nr-r+1} \left( {\mathcal L}(\mu,\Gamma,A_{\varepsilon},B) \right) < \varepsilon$, when $\|\Gamma\|>\gamma_0$. Since \begin{center} $\sigma_{nr-r+1} \left( {\mathcal L}(\mu,\Gamma,A,B) \right) \le \sigma_{nr-r+1} \left( {\mathcal L}(\mu,\Gamma,A_{\varepsilon},B) \right) + \varepsilon$, \end{center} we obtain the inequality $\sigma_{nr-r+1} \left( {\mathcal L}(\mu,\Gamma,A,B) \right) < 2 \varepsilon $, when $\|\Gamma\|>\gamma_0$. \end{document}
\begin{document} \begin{frontmatter} \title{Analysis and Control of Quantum Finite-level Systems Driven by Single-photon Input States\thanksref{footnoteinfo}} \thanks[footnoteinfo]{This paper was not presented at any IFAC meeting. Corresponding author Y.~Pan.} \author[Yu]{Yu Pan}\ead{[email protected]}, \author[Guofeng]{Guofeng Zhang}\ead{[email protected]}, \author[Matt]{Matthew~R.~James}\ead{[email protected]} \address[Yu]{Research School of Engineering, Australian National University, Canberra, ACT 0200, Australia} \address[Guofeng]{Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, China} \address[Matt]{ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Australian National University, Canberra, ACT 0200, Australia} \begin{keyword} Quantum system; System response; Single photon state; Linear transfer function \end{keyword} \begin{abstract} Single-photon states, which carry quantum information and coherently interact with quantum systems, are vital to the realization of all-optical engineered quantum networks. In this paper we derive the analytical form of the output field state for a large class of quantum finite-level systems driven by single-photon input field states using a transfer function approach. Single-photon pulse shaping via coherent feedback is also studied. \end{abstract} \end{frontmatter} \section{Introduction} In classical (non-quantum) control theory, responses of systems to various types of input signals reveal important system properties. For example, in the linear case, step response tells us the rise time, overshoot and settling time of the system, frequency response shows the ability of the system to track rapidly changing signals, impulse response allows to calculate the $H_2$ norm of the system, response to $L_2$ signals reveals the robustness of the system to disturbance measured in terms of $H_\infty$ performance, and response to Gaussian white noise is the foundation of the celebrated Kalman filtering theory, which is the basis of Linear-Quadratic-Gaussian (LQG) feedback control. In the quantum regime, the response of quantum systems to quantum Gaussian input states has been studied intensively due to the prevalent use of Gaussian states such as vacuum states, coherent states and squeezed states. This is the basis of widely applied measurement-based quantum feedback control in quantum optics \cite{WM10}. However, besides Gaussian states, there are many other useful quantum states such as single-photon and multi-photon states. Simply speaking, a light field is in an $n$-photon state if it contains exactly $n$ photons. When $n=1$, the light field is in a single-photon state. When $n\geq 2$, we simply say it is in a multi-photon state. Single-photon states and multi-photon states are very useful resources for quantum information technology. For example, photons are ideal information carriers that transfer quantum information from one node to another node in a quantum network \cite{Cirac97,Nysteen14,Nielsen04}. It is possible to build a quantum switch \cite{Chen13} which uses a single photon as controller to switch on and off a physical process. last but not the least, the ability to control the flow of single photons using on-chip finite-level systems could give birth to a new generation of light transistors \cite{Chang07}. Therefore, the analysis and control of quantum systems driven by single-photon or multi-photon states is important for a successful quantum engineering. 
In the linear regime, the response of quantum systems to single-photon and multi-photon states has been recently studied in \cite{Guofeng13,Guofeng14}. Moreover, interestingly, it is shown that linear quantum systems theory turns out quite useful in the study of quantum memories where a single-photon is efficiently stored and read out by a collection of atoms \cite{Hush13,Yamamoto14,Nurdin14}. In this paper we go beyond the linear regime and study quantum finite-level systems driven by single-photon states. Quantum finite-level systems, for example two-level atoms, are {\it nonlinear} quantum systems. The study of the interaction between finite-level systems and single-photon states are fundamental to the study of light-matter interaction, which is the foundation of engineered integrated quantum networks \cite{Cirac97}. The interaction between finite-level systems and single-photon states has been studied extensively in the quantum optics community \cite{Cirac97,Shen05,Zhou08,Fan10,Gough12,Naoki14}. Unfortunately, most studies are usually based on various assumptions such as weak excitation limit and primarily focused on two-level systems. In this paper we show from a control theoretic point perspective that, for a large class of quantum finite-level systems driven by single-photon states, the analytical expression of the output field state can be derived, cf. Theorem \ref{theorem3}. Interestingly, due to the special nature of single-photon states, the techniques developed for the linear case in \cite{Guofeng13,Guofeng14} can be adopted here to show that the pulse shape of the output single photon is obtained via the linear transfer of the input single-photon pulse shape. Based on this analysis result, we also investigate how to manipulate the pulse shape of single-photon wavepackets by means of coherent feedback. The rest of the paper is organized as follows. In Section \ref{secss}, we introduce the existing results on quantum stochastic differential equations, the response of linear systems to single-photon states, and finite-level systems (Subsection \ref{subsec:finite_level_single_photon}). In Section \ref{secfl}, we prove the main result that the input-output relation for a large class of finite-level systems can be solved using a transfer function approach. In Section \ref{secapp}, we present some applications of the main result. We study in Section \ref{sec:control} how to use coherent feedback to manipulate the pulse shape of single-photon wavepackets. Conclusion is put in Section \ref{seccon}. \section{Notations and preliminaries}\label{secss} We use $X^\dagger$ to denote the adjoint of an operator $X$ defined on a Hilbert space $\mathcal H$. The notation $\ast$ stands for complex conjugation, and $T$ for the transpose. $X^\dagger=X^\ast$ if $X$ is a one-dimensional scalar. The commutator of two operators is given by $[A,B]=AB-BA$. We also define the doubled-up column vector of operators as $\breve{X}=[X^T,X^\dagger]^T$. We use $\delta(\cdot)$ for the Dirac-delta function, and the symbol $\otimes$ for the Kronecker tensor product. $\Re(\cdot)$ and $\Im(\cdot)$ are the real part and imaginary part of a number. The Fourier transform of a function $f(t)$ is defined by $\tilde{f}(\omega)= \mathcal F[f](\omega) :=\int_{-\infty}^\infty e^{-\mbox{i}\omega t}f(t)dt$. \subsection{Open quantum systems} An open quantum system often involves a plant interacting with external environment which is defined on a Fock space $\mathcal{H}_B$ over $L^2 (\mathbb{R}_+ , dt) $. 
The open quantum system can be properly modelled using a triplet $(S,L,H_0)$ \cite{hudson84,Gough09,Gough12}. $S$ is a constant scattering matrix. The plant is coupled to the external fields through the operator $L$. We assume $L=[c_1L_0\ \cdot\cdot\cdot\ c_KL_0]^T=\theta^TL_0$, that is, the plant couples to the environment through $K$ channels via the same coupling operator $L_0$. $H_0$ is the Hamiltonian of the plant. The state of the total system (plant plus field) undergoes a unitary evolution generated by a unitary operator $U(t,t_0)$ whose dynamics is given by \begin{equation}\label{lem1p} dU(t,t_0)=\{b^\dag(t)L-L^\dag Sb(t)-(\frac{1}{2}L^\dag L+\mbox{i}H_0)dt\}U(t,t_0) \end{equation} for $t\geq t_0$, where $t_0$ is the initial time. In Heisenberg picture, the evolution of a plant operator $X$ is given by $X(t)=U^\dagger(t,t_0)(X\otimes I)U(t,t_0)$, where $I$ is the identity operator on $\mathcal H_B$. Driven by canonical input fields, the dynamics of $X(t)$ is described by quantum stochastic differential equations (QSDEs) of the form \begin{eqnarray}\label{nopr2} \dot{X}(t)&=&\mathcal G_t(X)+b^\dagger(t)S^\dagger[X(t),L(t)]\nonumber\\ &+&[L^\dagger(t),X(t)]Sb(t),\\ b_{out}(t)&=&L(t)+Sb(t),\nonumber \end{eqnarray} where the generator $\mathcal G_t(X)$ is defined as \begin{eqnarray}\label{pn1} \mathcal G_t(X)&:=&-\mbox{i}[X(t),H_0(t)]+\sum_{k=1}^K(L_k^\dagger(t) X(t)L_k(t)\nonumber\\ &&-\frac{1}{2}L_k(t)^\dagger L_k(t)X(t)-\frac{1}{2}X(t)L_k^\dagger(t) L_k(t)). \end{eqnarray} By the form of the operator $L$ discussed above, we have $L_k=c_kL_0$ for $k=1,...,K$. In Eq. (\ref{nopr2}), $b(t)=[b_1(t)\ \cdots \ b_K(t)]^T$ is a vector of annihilation operators for input field modes which satisfy the canonical commutation relation $[b_i(t),b_j^\dagger(s)]=\delta(t-s),\ i=j$ and $[b_i(t),b_j^\dagger(s)]=0,\ i\neq j$ resembling classical white noise if the field is vacuum. Physically, $b_i(t)$ and $b_i^\dag(t)$ can be understood as the annihilation and creation of one photon in the $i$-th channel at time $t$. $b_{out}(t)=U^\dagger(t,t_0)b(t)U(t,t_0)$ defines the annihilation operators of the output fields. For later use, we define $f(t,t_0)=[f_1(t,t_0)\ \cdot\cdot\cdot\ f_K(t,t_0)]^T := U(t,t_0)b(t)U^\dagger(t,t_0)$. Finally, if $\rho$ is the state of the total system, the state of the field can be obtained by tracing out the plant \cite{Nielsen04}, that is, $\rho_{field} =\text{Tr}_s(\rho):=\sum_j\langle j_s|\rho|j_s\rangle$, where $\{|j_s\rangle\}$ is the basis of system Hilbert space $\mathcal H$. \subsection{Response of linear quantum systems to single photon input} If the plant is a collection of quantum harmonic oscillators, Eq. (\ref{nopr2}) describes a linear quantum system as \begin{eqnarray}\label{lintem} \dot{\breve{x}}(t)&=&A\breve{x}(t)+B\breve{b}(t),\nonumber\\ \breve{b}_{out}(t)&=&C\breve{x}(t)+D\breve{b}(t), \end{eqnarray} where the constant matrices $A,B,C,D$ can be expressed by $S,L,H_0$, see e.g. \cite{Guofeng13,Guofeng14}. The input-output relation of this system can be written as \begin{equation}\label{outlin} \breve{b}_{out}(t)=Ce^{A(t-t_0)}\breve{x}(0)+\int_{t_0}^t g_{G}(t-r)\breve{b}(r)dr, \end{equation} where \begin{equation} \label{tf} g_{G}(t):=\left\{ \begin{array}{ll} \delta(t)D+Ce^{At}B, & t\geq 0, \\ 0, & t<0. \end{array} \right. \end{equation} The response of quantum linear systems to single photon input has been studied in detail in \cite{Guofeng13,Guofeng14}. 
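As a concrete, purely illustrative instance of the input-output relation (\ref{outlin})--(\ref{tf}), consider a passive single-mode optical cavity, whose annihilation-operator dynamics reduce to a scalar linear system; the transform of $g_G(t)$ is then $G(\mbox{i}\omega)=D+C(\mbox{i}\omega I-A)^{-1}B$ and is all-pass. The $(A,B,C,D)$ values in the sketch below follow the standard input-output model for a cavity with coupling $\sqrt{\kappa}$ and detuning $\Delta$; they are an assumption of this sketch, not taken from the cited references.
\begin{verbatim}
# Transfer function G(i*omega) = D + C (i*omega*I - A)^{-1} B for a passive
# single-mode cavity (illustrative parameters; annihilation part only).
import numpy as np

kappa, Delta = 2.0, 0.5
A = np.array([[-(kappa / 2 + 1j * Delta)]])
B = np.array([[-np.sqrt(kappa)]])
C = np.array([[np.sqrt(kappa)]])
D = np.array([[1.0]])

def G(w):
    return (D + C @ np.linalg.inv(1j * w * np.eye(1) - A) @ B)[0, 0]

for w in (-2.0, 0.0, 3.0):
    print(w, G(w), abs(G(w)))   # |G(i*w)| = 1: the cavity only alters the pulse phase
\end{verbatim}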
A single-channel single photon input is defined by, \cite{Milburn08,Gough12,Guofeng13,Guofeng14}, \begin{equation}\label{sinput} |1_\xi\rangle = \int_{-\infty}^\infty \xi(r)b^\dagger(r)|0\rangle dr, \end{equation} where $\xi(r)$ represents the pulse shape of the single photon in the time domain. Here $|0\rangle$ denotes a one-channel vacuum input. $|\xi(r)|^2dr$ is the probability of finding the photon in the time interval $[r,r+dr)$, and we have a normalization condition $\int_{-\infty}^\infty|\xi(r)|^2dr=1$. Denote by $G(s)$ the transfer function determined by $g_{G}(t)$ in Eq. (\ref{tf}). Then it is shown in \cite{Guofeng13} that the pulse shape $\xi^{'}(t)$ of the output field state is given in terms of the input-output transfer $\tilde{\xi}^{'}(\omega)=G(\mbox{i}\omega)\tilde{\xi}(\omega)$. For multiple-channel input, the single photon state may be defined as a superposition of one-photon excitation on multiple channels and thus given by \begin{equation}\label{sinput1} |1_\xi\rangle=\sum_{k=1}^K\int_{-\infty}^\infty \xi_k(r)b_k^\dagger(r)|0\rangle dr=\int_{-\infty}^\infty b^\dag(t)\xi(r)|0\rangle dr \end{equation} with $\sum_{k=1}^K\int_{-\infty}^\infty |\xi_k(r)|^2dr=1$ and $\xi(r)=[\xi_1(r)\ \cdot\cdot\cdot\ \xi_K(r)]^T$. Here $|0\rangle$ denotes a multi-channel vacuum input. The following result will be used later. \begin{lem}(\cite[Lemma 2]{Guofeng14})\label{lemsi} Suppose $A$ is Hurwitz. Letting $t_0\rightarrow-\infty$, Eq. (\ref{outlin}) becomes a convolution \begin{equation} \breve{b}_{out}(t)=\int_{-\infty}^\infty g_{G}(t-r)\breve{b}(r)dr, \end{equation} or equivalently, \begin{equation}\label{silin} \breve{b}(t)=\int_{-\infty}^\infty g_{G}(t-r)\breve{f}(r,-\infty)dr. \end{equation} Moreover, the stable inversion of Eq. (\ref{silin}) exists and is given by \begin{equation} \breve{f}(t,-\infty)=\int_{-\infty}^\infty g_{G^{-1}}(t-r)\breve{b}(r)dr. \end{equation} The formula to compute the stable inverse function $g_{G^{-1}}(\cdot)$ is given in \cite[Eq. (19)]{Guofeng13}. \end{lem} Following a similar argument as in the proof of Proposition 2 in \cite{Guofeng13}, we have \begin{lem}\label{leminf} If the single photon input $|1_\xi\rangle$ is defined by (\ref{sinput}), the output state of the total system in the limit $(t_0\rightarrow-\infty, t\rightarrow\infty)$ can be written as \begin{equation}\label{fl1} \rho_\infty=\int_{-\infty}^\infty dr\xi(r)f^\dagger(r,-\infty)\rho_{\infty g}\int_{-\infty}^\infty dr\xi^*(r)f(r,-\infty) \end{equation} with $\rho_{\infty g}=\lim_{t\rightarrow\infty,t_0\rightarrow-\infty}U(t,t_0)\rho_0U^\dagger(t,t_0)$, where $\rho_0$ is the initial state of the total system defined as $\rho_0=|0_s\rangle\langle0_s|\otimes|0\rangle\langle0|$. \end{lem} \subsection{Quantum finite-level systems}\label{subsec:finite_level_single_photon} A quantum $N$-level system has states residing in the Hilbert space $\mathcal H=\mathbb{C}^N$. Let $|0_s\rangle$ denote the ground state of an $N$-dimensional system, and $\{|j_s\rangle,j=1,2,...,N-1\}$ denote its excited basis states. Then $\langle0_s|j_s\rangle=0$ for $j=1,2,...,N-1$. The Pauli operator $\sigma_z$ for a qubit is defined as $\sigma_z=|1_s\rangle\langle1_s|-|0_s\rangle\langle0_s|$. The raising and lowering operators for the qubit are given by $\sigma_+=|1_s\rangle\langle0_s|,\sigma_-=|0_s\rangle\langle1_s|$ respectively. For a quantum $N$-level system, the commutators $[X,L]$ and $[L^\dag,X]$ in Eq. 
(\ref{nopr2}) usually are not constant, instead they are often operators, so the term $b^\dagger(t)S^\dagger[X(t),L(t)]+[L^\dagger(t),X(t)]Sb(t)$ in Eq. (\ref{nopr2}) is in general nonlinear in $b(t)$. For example, consider a two-level system described by the triplet $(S,L,H_0)=\left(1,\sqrt{\kappa}\sigma_-,\frac{\omega_c}{2}\sigma_z\right)$. Here, $\omega_c$ is the transition frequency between the ground and excited states. And $\kappa$ is a parameter defined by $\kappa=2\pi g^2$, where $g$ is the coupling strength between the system and field. For this system, Eq. (\ref{nopr2}) becomes \begin{eqnarray} \dot{\sigma}_-(t) &=& -(i\omega_c + \frac{\kappa}{2})\sigma_-(t) +\sqrt{\kappa} \sigma_z (t) b(t), \label{example}\\ b_{out}(t)&=& \sqrt{\kappa} \sigma_-(t) + b(t). \label{example_2} \end{eqnarray} Due to nonlinearity, most studies of the interaction of quantum finite-level systems and single-photon states are often based on various assumptions such as weak excitation limit and are primarily focused on two-level systems. \section{Main results}\label{secfl} In this section, we prove that the response of a class of quantum finite-level systems to single-photon input can be analytically solved using a transfer function approach. Recall that $L=\theta^TL_0$. \begin{lem}\label{lemout2} Assume the interaction between the plant and the input field is given by the triplet $(S,L,H_0)$ and \begin{equation} \label{eq:H_L} H_0|0_s\rangle=\alpha|0_s\rangle,\ L_0|0_s\rangle=0 \end{equation} for some constant $\alpha$. Then the following equalities \begin{equation}\label{lemout2e1} U(t,t_0)|0\rangle|0_s\rangle=\exp(\mbox{i}\alpha_t)|0\rangle|0_s\rangle, \end{equation} and \begin{equation}\label{lemout2e2} U^\dagger(t,t_0)|0\rangle|0_s\rangle=\exp(-\mbox{i}\alpha_t)|0\rangle|0_s\rangle, \end{equation} hold for some phase shift $\exp(\mbox{i}\alpha_t)$. \end{lem} \begin{pf*}{Proof. } Considering Eq. (\ref{lem1p}) and using the assumptions, we have \begin{eqnarray} &&\langle0|\langle0_s|dU(t,t_0)\nonumber\\ &=&\langle0|\langle0_s|\{b^\dag(t)L-L^\dag Sb(t)-(\frac{1}{2}L^\dag L+\mbox{i}H_0)dt\}U(t,t_0)\nonumber\\ &=&-\mbox{i}\alpha dt\langle0|\langle0_s|U(t,t_0). \end{eqnarray} Therefore we have $\langle0|\langle0_s|U(t,t_0)=e^{-\mbox{i}\alpha(t-t_0)}\langle0|\langle0_s|$, which proves Eq.~(\ref{lemout2e1}). Eq.~(\ref{lemout2e2}) is obtained using (\ref{lemout2e1}): \begin{eqnarray} &&U^\dagger(t,t_0)U(t,t_0)|0\rangle|0_s\rangle=U^\dagger(t,t_0)\exp(\mbox{i}\alpha_t)|0\rangle|0_s\rangle\nonumber\\ &\Rightarrow&U^\dagger(t,t_0)|0\rangle|0_s\rangle=\exp(-\mbox{i}\alpha_t)|0\rangle|0_s\rangle. \end{eqnarray}\qed \end{pf*} Lemma \ref{lemout2} says if the plant Hamiltonian and plant-field coupling don't generate photons, then the field will remain vacuum and the plant at its ground state. \begin{lem}\label{lemmanew} If the input to the $N$-level system is the single photon input $|1_\xi\rangle$ defined in (\ref{sinput1}), then the output field state is \begin{eqnarray}\label{fl3} \rho_{\infty,field}&=&\text{Tr}_s(\rho_\infty)\nonumber\\ &=&\sum_{j=0}^{N-1}\langle j_s|\int_{-\infty}^\infty dtf^\dagger(t,-\infty)\xi(t)|0_s\rangle|0\rangle\nonumber\\ &&\times\langle0|\langle0_s|\int_{-\infty}^\infty dr\xi^\dag(t)f(t,-\infty)|j_s\rangle. \end{eqnarray} \end{lem} \begin{pf*}{Proof. } Following the techniques in \cite{Guofeng13}, $\rho_\infty$ can be obtained by extending upon the derivation of Lemma \ref{leminf}. 
Then $\rho_{\infty,field}$ is obtained by tracing out the plant.\qed \end{pf*} The following theorem is the main result of this paper, it gives the analytic expression of the output field state of a large class of quantum finite-level systems driven by single-photon input states. \begin{thm}\label{theorem3} In addition to the assumptions in Lemma \ref{lemout2}, if \begin{equation} \langle0_s|[L_0,H_0]=\langle0_s|\beta L_0,\quad [L_0^\dagger,L_0]|0_s\rangle=h|0_s\rangle\label{fl9}, \end{equation} and \begin{equation}\label{fn3} \Re(a=-\mbox{i}\beta+\frac{1}{2}\sum_{k=1}^K|c_k|^2h)<0, \end{equation} hold with $\beta$ and $h$ being constants, then the output field state in response to a single photon input $|1_\xi\rangle$ as defined in (\ref{sinput1}) is given by \begin{equation}\label{flt4} \rho_{\infty,field} = |1_{\xi^\prime}\rangle \langle 1_{\xi^\prime}|, \end{equation} where the pulse shape $\xi^\prime$ is given by the linear transfer \[ \xi^\prime (t) = \int_{-\infty}^\infty g_{G^{-}}(t-r)\xi(r)dr \] with the impulse response function being \begin{equation} g_{G^{-}}(t) :=\left\{ \begin{array}{ll} h\theta^T(\theta^T)^\dag e^{at}S+\delta(t)S, & t\geq 0, \\ 0, & t<0. \end{array} \right. \label{eq:tf} \end{equation} \end{thm} \begin{pf*}{Proof. } $h$ is real because it is an eigenvalue of a Hermitian operator $[L_0^\dagger,L_0]$. According to Lemma \ref{lemmanew}, we need to solve for $\langle0|\langle0_s|f(t,-\infty)|j_s\rangle$. By Eq. (\ref{nopr2}), we have \begin{eqnarray} \dot{L_0}(t)&=&\mathcal G_t(L_0)+b^\dagger(t)S^\dag[L_0(t),\theta^TL_0(t)]\nonumber\\ &+&[(\theta^TL_0(t))^\dag,L_0(t)]Sb(t),\label{fl4}\\ b_{out}(t)&=&L(t)+Sb(t).\label{fl5} \end{eqnarray} Particularly by (\ref{fl4}) we have \begin{eqnarray}\label{fl6} &&\langle0|\langle0_s|\dot{L_0}(t)\nonumber\\ &=&\langle0|\langle0_s|(\mathcal G_t(L_0)+[(\theta^TL_0(t))^\dag,L_0(t)]Sb(t))\nonumber\\ &=&\langle0|\langle0_s|aL_0(t)\nonumber\\ &&+\langle0|\langle0_s|(\theta^T)^\dag U^\dagger(t,t_0)[L_0^\dag,L_0]U(t,t_0)Sb(t)\nonumber\\ &=&\langle0|\langle0_s|(aL_0(t)+h(\theta^T)^\dag Sb(t)), \end{eqnarray} where Eq. (\ref{fl9}) and Lemma \ref{lemout2} are used to derive the last line. This is where the single-photon hypothesis plays a key role. It can be seen clearly from Eq. (\ref{fl6}) that the nonlinear stochastic differential equation (\ref{fl4}) is transformed to a linear version under single photon driving. Letting $t_0\rightarrow-\infty$, we can solve Eq. (\ref{fl6}) and then combine with Eq. (\ref{fl5}) to yield \begin{eqnarray}\label{invb} &&\langle0|\langle0_s|b_{out}(t)\nonumber\\ &=&\langle0|\langle0_s|\int_{-\infty}^t[h\theta^T(\theta^T)^\dag e^{a(t-r)}+\delta(t-r)]Sb(r)dr. \end{eqnarray} By Lemma \ref{lemout2} and $b_{out}(t)=U^\dagger(t,t_0)b(t)U(t,t_0)$, we can express Eq. (\ref{invb}) as \begin{equation}\label{befsi} \langle 0|\langle 0_{s}|b(t)=\int_{-\infty }^{\infty }g_{G^{-}}(t-r)\langle 0|\langle0_{s}|f(r,-\infty )dr, \end{equation} where $g_{G^-}$ is that defined in Eq. (\ref{eq:tf}). Then by Lemma \ref{lemsi} we can apply stable inversion on Eq. (\ref{befsi}) to get \begin{equation} \langle 0|\langle 0_{s}|f(t,-\infty )=\left[ \begin{array}{cc} 1 & 0 \end{array} \right] \int_{-\infty }^{\infty }g_{G^{-1}}(t-r)\langle 0|\langle 0_{s}|\breve{b}(r)dr. 
\label{eq:sept1_2} \end{equation} Employing Lemma $1$ in \cite{Guofeng13}, we can obtain \begin{eqnarray*} &&\left[ \begin{array}{cc} 1 & 0 \end{array} \right] \int_{-\infty }^{\infty }g_{G^{-1}}(t-r)\breve{b}(r)dr \\ &=&\left[ \begin{array}{cc} 1 & 0 \end{array} \right] \int_{-\infty }^{\infty }\left[ \begin{array}{cc} g_{G^{-}}(r-t)^{\dag } & 0 \\ 0 & g_{G^{-}}(r-t) \end{array} \right] \left[ \begin{array}{c} b(r) \\ b^{\dagger}(r) \end{array} \right] dr \notag \\ &=&\int_{-\infty }^{\infty }g_{G^{-}}(r-t)^{\dag }b(r)dr. \notag \end{eqnarray*} Eq. (\ref{eq:sept1_2}) becomes \begin{equation} \langle 0|\langle 0_{s}| f(t,-\infty )=\int_{-\infty }^{\infty }g_{G^{-}}(r-t)^{\dag}\langle 0|\langle 0_{s}| b(r)dr. \end{equation} Therefore, the only nonzero term in $\{\langle0|\langle0_s|f(t,-\infty)|j\rangle_s$, $j=0,...,N-1\}$ is $\langle0|\langle0_s|f(t,-\infty)|0\rangle_s$. Then it is straightforward to verify Eq. (\ref{flt4}) is the output field state.\qed \end{pf*} It is worth mentioning that the coupling between a two-level system and the input field is commonly modelled by the operator $L_0=\sigma_-$, even for multiple channels. This $L_0$ satisfies the condition Eq. (\ref{fl9}) since $[\sigma_+,\sigma_-]|0_s\rangle=-\frac{I-\sigma_z}{2}|0_s\rangle=-|0_s\rangle$. Moreover, the conditions for $H_0$ are often met. Therefore, as shown in the next section, Theorem \ref{theorem3} is applicable to a wide range of qubit systems. If the coupling is modelled by other system operators, e.g. $L_0=\sigma_x=|0_s\rangle\langle1_s|+|1_s\rangle\langle0_s|$, then the system-environment interaction $\sigma_x(b(t)+b^\dag(t))$ may generate photons when acting on the vacuum state $|0\rangle|0_s\rangle$. When there exist more than one photons, the system may not follow linear dynamics due to nonlinear photon-photon interaction. Also note that the coupling operator for a linear optical cavity is usually $a_-$ and it satisfies $[a_+,a_-]=-1$, where $a_+$ and $a_-$ are the creation and annihilation operators of the cavity mode respectively. This explains why the two-level system with $L_0=\sigma_-$ may exhibit linear input-output relation under single photon driving. \section{Applications}\label{secapp} In this section three applications drawn from the quantum physics literature are used to illustrate the usefulness of Theorem \ref{theorem3}. \subsection{Two-level system:one input channel}\label{sectwoone} We consider the quantum two-level system (\ref{example})-(\ref{example_2}) driven by a single-photon state (\ref{sinput}). Applying Theorem \ref{theorem3}, the output pulse shape is calculated to be \begin{equation} \xi^{'}(t)=\int_{-\infty }^{t}[-\kappa e^{-\left( \frac{\kappa }{2}+\mbox{i}\omega _{c}\right) (t-r)}+\delta (t-r)]\xi (r)dr. \label{eq:eta} \end{equation} Also, we can easily obtain the Fourier transform of (\ref{eq:eta}) using the convolution theorem: \begin{equation}\label{outputp2} \tilde{\xi}^{'}(\omega)=\tilde{\xi}(\omega)\frac{-\frac{\kappa}{2}+\mbox{i}(\omega+\omega_c)}{\frac{\kappa}{2}+\mbox{i}(\omega+\omega_c)} :=\tilde{\xi}(\omega)G(\mbox{i}\omega), \end{equation} The single photon response of two-level systems has been extensively studied in physics, see e. g. \cite{Shen:05,Zhou08}. Here we obtained the analytic form of the output field state without making any physical approximations such as weak excitation limit or scattering modes. 
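The output pulse (\ref{eq:eta}) is also easy to evaluate numerically. The following sketch (an illustration only; the Gaussian pulse shape, $\kappa$ and $\omega_c$ are arbitrary choices made here) discretizes the convolution and confirms that the output pulse remains normalized, as expected from the all-pass form of $G(\mbox{i}\omega)$ in (\ref{outputp2}).
\begin{verbatim}
# Numerical evaluation of Eq. (eq:eta) for a Gaussian single-photon pulse;
# kappa, omega_c and the pulse width are illustrative values.
import numpy as np

kappa, omega_c = 1.0, 5.0
t = np.linspace(-10.0, 20.0, 6000)
dt = t[1] - t[0]
xi = (2 / np.pi) ** 0.25 * np.exp(-t ** 2)     # normalized input pulse

# xi'(t) = xi(t) - kappa * int_{-inf}^{t} e^{-(kappa/2 + i*omega_c)(t-r)} xi(r) dr
kernel = np.exp(-(kappa / 2 + 1j * omega_c) * (t - t[0]))
conv = np.convolve(kernel, xi)[: len(t)] * dt  # causal convolution on the grid
xi_out = xi - kappa * conv

print(np.trapz(np.abs(xi) ** 2, t),
      np.trapz(np.abs(xi_out) ** 2, t))        # both ~1
\end{verbatim}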
Compared to the results obtained in \cite{Guofeng13}, the output state is analogous to the output of a single-mode linear system in response to a single-photon input. This observation is consistent with the existing results from \cite{Shen:05,Fan10,Zhou08}, where the authors have found that the transmission and reflection spectrums for the single-photon transport through a two-level system are analogous to the scattering spectrums for linear cavities. We can apply zero-dynamics principle \cite{Naoki14} for studying the full inversion of the states. To remove the zero from the transfer function, we should choose the input as $\tilde{\xi}(\mbox{i}\omega)=\sqrt{\kappa}/(-\frac{\kappa}{2}+\mbox{i}\omega+\mbox{i}\omega_c)$. The inverse Fourier transform of the input yields $\xi(t)=-\sqrt{\kappa}e^{(\frac{\kappa }{2}-\mbox{i}\omega _{c})t}(1-u(t))$, with $u(t)$ being the Heaviside step function. The input pulse is exponentially rising but with a resonant phase component till $t=0$ in time domain. This inverting single-photon pulse matches the existing designs for the inversion of two-level atoms \cite{Cirac97,Rephaeli10}. \subsection{Two-level system:two input channels} The system is described by the triplet $(S,L,H_0)=(I_{2},[\sqrt{\kappa _{1}}\ \sqrt{\kappa _{2}}]^T\sigma _{-},\frac{\omega _{c}}{2}\sigma _{z})$. Let the input photon enter the plant from the first channel \begin{equation}\label{ttwo22} |1_{\xi_1}\rangle=\int_{-\infty}^\infty \xi_1(r)b_1^\dagger(r)|0\rangle dr. \end{equation} In this case, the second-channel input is a vacuum state. The output field state is calculated to be $|\Psi _{out}\rangle\langle\Psi _{out}|$, where we define \begin{equation} \left\vert \Psi _{out}\right\rangle=\int_{-\infty }^{\infty }\xi_1^{'}(t)b_1^{\dagger}(t)dt|0\rangle+\int_{-\infty }^{\infty }\xi_2^{'}(t)b_2^{\dagger}(t)dt|0\rangle. \end{equation} The shapes of the output pulses in these two channels are given by $\xi_1^{'}(t)=\xi_1(t)-\kappa _{1}\eta (t)$ and $\xi_2^{'}(t)=-\sqrt{\kappa _{1}\kappa _{2}}\eta (t)$, and $\eta(t)$ is expressed as \begin{equation} \eta (t):=\int_{-\infty }^{t}\ \ e^{-\left( i\omega _{c}+\frac{\kappa _{1}+\kappa _{2}}{2}\right) (t-r)}\xi_1(r)dr. \label{eta} \end{equation} Define two transfer functions $G_1(\mbox{i}\omega)$ and $G_2(\mbox{i}\omega)$ in terms of $\tilde{\xi}_1^{'}(\omega)=G_1(\mbox{i}\omega)\tilde{\xi_1}(\omega)$ and $\tilde{\xi}_2^{'}(\omega)=-G_2(\mbox{i}\omega)\tilde{\xi_1}(\omega)$. Simple calculation yields \begin{eqnarray} G_1(\mbox{i}\omega)=\frac{-\frac{\kappa_1-\kappa_2}{2}+\mbox{i}(\omega+\omega_c)}{\frac{\kappa_1+\kappa_2}{2}+\mbox{i}(\omega+\omega_c)},\label{fs23}\\ G_2(\mbox{i}\omega)=\frac{\sqrt{\kappa_1\kappa_2}}{\frac{\kappa_1+\kappa_2}{2}+\mbox{i}(\omega+\omega_c)}.\label{fs24} \end{eqnarray} Physically, $G_1(\mbox{i}\omega)$ could correspond to a transmission spectrum and $G_2(\mbox{i}\omega)$ could be related to reflection. Further calculation shows \begin{equation}\label{sb2} |G_1(\mbox{i}\omega)|^2=1-\frac{4\kappa_1\kappa_2}{(\kappa_1+\kappa_2)^2+4(\omega+\omega_c)^2}. \end{equation} By (\ref{sb2}), if $\omega$ is largely detuned from the transition frequency $\omega_c$ of the qubit, we have $|G_1(\mbox{i}\omega)|^2\approx1$ and the photon will transmit with high probability. On the other hand, \begin{equation}\label{sb4} |G_2(\mbox{i}\omega)|^2=\frac{4\kappa_1\kappa_2}{(\kappa_1+\kappa_2)^2+4(\omega+\omega_c)^2}. 
\end{equation} Since $(\kappa_1+\kappa_2)^2\geq4\kappa_1\kappa_2$ is always true, $|G_2(\mbox{i}\omega)|^2=1$ admits a solution if and only if $\kappa_1=\kappa_2$ and $\omega=-\omega_c$. Only the frequency component in resonance with the qubit can be perfectly reflected if the two channels are coupled to the qubit with the same strength. If $\kappa_1\neq\kappa_2$, there is no frequency component that can be perfectly reflected. These results support a theoretical understanding of the numerical studies in \cite{Bara12}. \subsection{Gradient echo quantum memory} Consider the finite input-output model for gradient echo memories where $N$ two-level atoms are interconnected by series product \cite{Gough09} via one channel \cite{Hush13} . The $(S,L,H_0)$ representation of the system is given by $(I,\sum_{n=1}^N\sqrt{\kappa}\sigma_-^n,\sum_{n=1}^N\frac{\omega_c}{2}\sigma_z^n+\frac{\kappa}{2\mbox{i}}\sum_{j=2}^N\sum_{i=1}^{j-1}(\sigma_+^j\sigma_-^i-\sigma_+^i\sigma_-^j))$. A so-called weak atomic excitation limit is introduced in \cite{Protsenko99,Hush13} to approximate the atoms by linear cavities in this memory model. When the input to the memory is a single-photon state, then the output of each atom is a single-photon state as well, which implies that each two-level atom of the memory is driven by a single-photon state. As a consequence, we can simply use the result from Section~\ref{sectwoone} to obtain the pulse function of the output state of the memory as \begin{equation}\label{reviseeq1} \tilde{\xi}^{'}(\omega)=(\frac{-\frac{\kappa}{2}+\mbox{i}(\omega+\omega_c)}{\frac{\kappa}{2}+\mbox{i}(\omega+\omega_c)})^N\tilde{\xi}(\omega). \end{equation} The inverse Fourier transform of (\ref{reviseeq1}) yields the time-domain output field state \begin{eqnarray} \xi^{'}(t)&=&\int_{-\infty}^{t}dr[\kappa N{e^{-\frac{\kappa}{2}(t-r)}}_1F_1(1+N,2,-\kappa(t-r))\nonumber\\ &+&\delta(t-r)]\xi(r), \end{eqnarray} where we have used the the definition of Kummer confluent hypergeometric function $_1F_1(a,b,z)=\sum_{k=0}^{\infty}\frac{a^{(n)}z^n}{b^{(n)}n!}$ with $a^{(n)}=a(a+1)\cdot\cdot\cdot(a+n-1)$ \cite{abramowitz1964handbook}. Note that the above procedure can be easily extended to calculate the output state for atoms of different resonant frequencies $\{\omega_c^n\}$. \begin{figure} \caption{The second output channel is directly fed back to the system as the second input channel. The entry $S_{ij} \label{figonemode} \end{figure} \section{Single-photon pulse shaping by coherent feedback}\label{sec:control} In this section we study how to manipulate the pulse shape of single-photon states by means of coherent feedback. The idea behind the coherent feedback is that only quantum systems are used for control and hence all the components have similar characteristic time scales. For example, all-optical feedback has been used in practical control design which removes the slow and inefficient measurement process from the feedback loop. For experimental demonstrations of coherent feedback please see \cite{PhysRevA.78.032323,Iida12}. Consider a two-channel two-level system with parameters \[ (S,[\sqrt{\kappa _{1}}\ \sqrt{\kappa _{2}}]^T\sigma _{-},\frac{\omega _{c}}{2}\sigma _{z}). \] We design a coherent feedback by linear fractional transformation, as shown in Figure \ref{figonemode}. In what follows we study two cases. Case 1: $S$ is real. 
The resulting single-channel system is given by the triplet $(S_{11}+S_{12}(1-S_{22})^{-1}S_{21},(\sqrt{\kappa_1}+S_{12}(1-S_{22})^{-1}\sqrt{\kappa_2})\sigma_-,\frac{\omega _{c}}{2}\sigma _{z})$. Using Theorem \ref{theorem3}, we can solve for the transfer function of this system to be \begin{eqnarray} &&G(\mbox{i}\omega)\nonumber\\ &=&(S_{11}+\frac{S_{12}S_{21}}{1-S_{22}})\frac{-\frac{(\sqrt{\kappa_1}+\frac{S_{12}}{1-S_{22}}\sqrt{\kappa_2})^2}{2}+\mbox{i}(\omega+\omega_c)}{\frac{(\sqrt{\kappa_1}+\frac{S_{12}}{1-S_{22}}\sqrt{\kappa_2})^2}{2}+\mbox{i}(\omega+\omega_c)}.\nonumber\\ \end{eqnarray} This suggests we can use $S$ and $\kappa_2$ to control the input-output transfer function, and thus shape the output pulse. For example, if $S=[0\ 1;1\ 0]$, i. e. the first-channel input is scattering to the second-channel and directly fed back to the system, then $G(\mbox{i}\omega)=(-\frac{(\sqrt{\kappa_1}+\sqrt{\kappa_2})^2}{2}+\mbox{i}(\omega+\omega_c))/(\frac{(\sqrt{\kappa_1}+\sqrt{\kappa_2})^2}{2}+\mbox{i}(\omega+\omega_c))$. The decay of the two-level system is enhanced and the transfer spectrum is made sharper compared to that in Eq. (\ref{outputp2}). Case 2: $S$ is complex-valued. A complex-valued matrix $S$ can be realized by interconnecting a beam-splitter before the inputs entering the plant. In this case, the triplet of the feedback network is given by $(S_{11}+S_{12}(1-S_{22})^{-1}S_{21},(\sqrt{\kappa_1}+S_{12}(1-S_{22})^{-1}\sqrt{\kappa_2})\sigma_-,\frac{\omega _{c}}{2}\sigma _{z}+\frac{\sigma_z+1}{2}\Im(\sqrt{\kappa_1\kappa_2}S_{12}(1-S_{22})^{-1}+\kappa_2S_{22}(1-S_{22})^{-1}))$. Using Theorem \ref{theorem3} and denoting $\Delta :=\Im(\sqrt{\kappa_1\kappa_2}S_{12}(1-S_{22})^{-1}+\kappa_2S_{22}(1-S_{22})^{-1})$, we can solve for the transfer function to be \begin{eqnarray} &&G(\mbox{i}\omega)=(S_{11}+S_{12}S_{21}(1-S_{22})^{-1})\nonumber\\ &&~~~~~~\times\frac{-\frac{(\sqrt{\kappa_1}+\sqrt{\kappa_2}S_{12}(1-S_{22})^{-1})^2}{2}+\mbox{i}(\omega+\omega_c+\Delta)}{\frac{(\sqrt{\kappa_1}+\sqrt{\kappa_2}S_{12}(1-S_{22})^{-1})^2}{2}+\mbox{i}(\omega+\omega_c+\Delta)}. \end{eqnarray} Therefore, we can further shift the spectrum by an amount of $\Delta$. For example, if a $50/50$ beam-splitter is used, i.e. $S=\frac{1}{\sqrt{2}}[1\ \mbox{i};\mbox{i}\ 1]$, then $G(\mbox{i}\omega)=-(-\frac{(\sqrt{\kappa_1}+\mbox{i}\sqrt{\kappa_2}/(\sqrt{2}-1))^2}{2}+\mbox{i}(\omega+\omega_c+\sqrt{\kappa_1\kappa_2}/(\sqrt{2}-1)))/(\frac{(\sqrt{\kappa_1}+\mbox{i}\sqrt{\kappa_2}/(\sqrt{2}-1))^2}{2}+\mbox{i}(\omega+\omega_c+\sqrt{\kappa_1\kappa_2}/(\sqrt{2}-1)))$. \section{Conclusion}\label{seccon} In this paper the response of a class of quantum finite-level systems to single-photon states has been investigated. Analytic expression of the output single-photon states has been derived. Single-photon pulse shaping by means of coherent feedback has also been studied. The future research would include the application of this work to the hybrid coherent quantum networks driven by single photons. \end{ack} \end{document}
\begin{document} \title{A New Approach to Efficient Enumeration by Push-out Amortization} \begin{abstract} Enumeration algorithms have been one of the recent hot topics in theoretical computer science. Different from other problems, enumeration has many interesting aspects, such as that the computation time can be shorter than the total output size, by a sophisticated ordering of the output solutions. One more example is that the recursion of an enumeration algorithm is often well structured, so that we can obtain good amortized analyses and interesting algorithms for reducing the amortized complexity. However, there is a lack of deep studies from these points of view; there are only a few results on the fundamentals of enumeration, such as a basic design of an algorithm that is applicable to many problems. In this paper, we address new approaches to the complexity analysis, and propose a new way of amortized analysis, {\it Push Out Amortization}, for enumeration algorithms, where the computation time of an iteration is amortized by using all its descendant iterations. We clarify sufficient conditions on the enumeration algorithm so that the amortized analysis works. By the amortization, we show that many kinds of elimination orderings, as well as matchings in a graph, connected vertex induced subgraphs in a graph, and spanning trees, can be enumerated in $O(1)$ time per solution by simple algorithms with simple proofs. \end{abstract} \section{Introduction} Suppose that there is a simple algorithm to solve a problem, and consider two ways of improving the time complexity: (a) developing a new algorithm with a smaller complexity, and (b) proving that the complexity of the simple algorithm is actually small by a sharper analysis. Both types of improvements are important in theoretical computer science, but these days almost all results are of type (a). Developing simple algorithms via approach (a) is non-trivial, thus many recent algorithms and their complexity analyses are difficult to understand. Moreover, these types of algorithms often require some structure in the input, hence the problem formulations tend to be distant from the real world. On the contrary, type (b) has a great advantage on these points. Even though the analysis is complicated, we can hide the difficulty by producing general statements applicable to many problems. At least, we do not have to implement the complicated proofs in a program. According to this motivation, we study complexity analysis in this paper, namely amortized analysis for enumeration algorithms. Amortized analysis is a paradigm of complexity analysis. In this paradigm, we charge the cost of iterations with long computation time to those with shorter time, to make the upper bound on the computation time of an iteration smaller. Compared to the usual worst-case analysis, amortized analysis is often more powerful, for example for dynamic trees, union-find, and some enumeration algorithms\cite{DynamicTree,UnionFind}. In the case of dynamic trees, the cost of changing the shape of the tree is charged to the preceding changes with smaller costs, which attains $O(\log n)$ average time complexity for each change, where $n$ is the size of the tree. This time complexity is not attained by the usual worst-case analysis, and it seems hard to obtain algorithms with the same complexity under that analysis. The situation is similar for the union-find algorithm, where the resulting time complexity is $O(n\alpha(n))$, while straightforward algorithms take $O(n^2)$ time. 
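As a small, self-contained illustration of this kind of amortization (not part of our results), a union-find structure with union by rank and path compression attains the $O(n\alpha(n))$ bound mentioned above, even though a single find operation may still traverse a long path:
\begin{verbatim}
# Illustration only: union-find with union by rank and path compression.
# The amortized cost per operation is O(alpha(n)); a worst-case single
# operation can be much more expensive.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:      # union by rank
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind(8)
for a, b in [(0, 1), (1, 2), (3, 4), (2, 4)]:
    uf.union(a, b)
print(uf.find(0) == uf.find(3))   # True: 0 and 3 ended up in the same set
\end{verbatim}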
The concept of ``charging the cost'' brought a paradigm shift in the design of algorithms. Some enumeration algorithms are designed so that the time complexity of an iteration is linear in the number of subproblems, so that the average computation time per child is short\cite{KpRm95,SrTmUn97}. Enumeration is now rapidly increasing its presence in theoretical computer science. One of the biggest reasons comes from its importance in application areas. An example is the pattern mining problem in data mining. The problem is to find all the patterns belonging to a class of structures, such as subsets and trees, such that the patterns satisfy some constraints in the given database, such as appearing at least $k$ times. One more motivation is that there have not been many studies, even on simple problems, thus there is great potential. On the other hand, enumeration has several interesting aspects which we cannot observe in other problems. For example, by dealing only with the differences between output solutions, we can often attain a computation time shorter than the output size, by outputting the solutions as differences. Another example is its well-structured recursion. We can frequently obtain several structural results on enumeration, and these give interesting algorithms and mathematical properties, while it is hard to characterize when a branch and bound algorithm cuts off subproblems. Structured recursion often gives a good amortization. Thus, there is great interest in investigating amortized analysis of enumeration algorithms. According to this motivation and interest, this paper addresses the amortized analysis of enumeration algorithms. One of our goals on this topic is to fill the gap between theory and practice. In practice, enumeration algorithms are often quite efficient, much faster than the theoretical upper bound on the computation time would suggest. Filling the gap gives an understanding of both the theoretical and practical properties of the data and the algorithms: the properties of the data that accelerate the algorithms, and the mechanism of the algorithms that enables us to attain smaller bounds. We have observed that the recursive structures of enumeration algorithms satisfy a property which we call {\em bottom-expanded}. Iterations of enumeration algorithms generate several recursive calls. Thus, the number of iterations increases exponentially in deeper levels of the recursion. On the other hand, iterations on deeper levels often have relatively small inputs compared to upper levels. Thus, we can expect that iterations near the root of the recursion are few and spend a long time, and iterations near the bottom of the recursion are many and spend a very short time. In practice, we can frequently observe this, especially in many kinds of pattern mining algorithms. This also implies that the amortized computation time per iteration, or even per solution, is short. This mechanism is what we call bottom-expanded. We can see this mechanism not only in practice but also in classic enumeration algorithms. This mechanism motivated us to develop a good amortized analysis. However, amortization is not easy in general, since it is hard to globally estimate the number of iterations and the computation time. Thus, in many existing studies, the computation time is amortized between a parent and its children, and sometimes its grandchildren\cite{enumPEO,Ep90,Ep94,ksubtree,KpRm95,SrTmUn97}. These local structures are easier to analyze than the global structures. 
Extensions of this idea to more global structures are non-trivial. For example, if we want to amortize between iterations in different subtrees of the recursion, we have to understand the relation and the correspondence between all iterations in the different subtrees. This is often a difficult task. In this paper, we propose a new way of carrying out amortized analysis of the time complexity of enumeration algorithms, and propose new algorithms for the enumeration of matchings, elimination orderings, and connected vertex induced subgraphs. We also show that the amortized analysis can prove the existing complexity results in very simple ways, for the enumerations of spanning trees, perfect elimination orderings, and perfect sequences, while the existing algorithms often rely on sophisticated techniques or data structures. We can also see that the condition in the analysis is often satisfied in practice, thus this amortized analysis explains why enumeration algorithms are efficient in practice. These satisfy our basic motivations for this kind of study. Our amortization of an iteration is basically done with all its descendants. For each iteration, we push out its computation time to its children so that the assigned time is proportional to their computation time. By applying this push-out from the root of the recursion to deeper levels, the long computation time near the root is diffused to deeper levels, which have shorter computation time on average. Since it is very hard to capture the structure of the recursion, we give a condition called the {\em Push-out condition} such that the amortized computation time is bounded when the condition is satisfied. As the condition is given on the relation between each iteration and its children, proving the satisfiability of the condition is often not difficult. As a result, to bound the amortized time complexity, what we have to do is to prove that the condition holds for the algorithm in question. In this way, we propose algorithms for enumerating matchings, elimination orderings, and connected vertex induced subgraphs, and prove that the condition holds for each. This implies that these graph objects can be enumerated in constant time per solution. We also show that the condition holds for the algorithm for spanning tree enumeration, and this gives a very simple proof compared to the existing ones. The paper is organized as follows. Section \ref{sec:prlm} is for preliminaries, and Section \ref{sec:PO} describes our Push out amortization and Push out condition. Sections \ref{sec:elim}, \ref{sec:match}, \ref{sec:CIS} and \ref{sec:sptree} show algorithms and their proofs. We conclude the paper in Section \ref{sec:cncl}. \section{Preliminaries}\label{sec:prlm} Let $\cal A$ be an enumeration algorithm. Suppose that $\cal A$ is a recursive algorithm, i.e., it is composed of a subroutine that recursively calls itself several times (or none). Thus, the recursion structure of the algorithm forms a tree. We call the subroutine, or the execution of the subroutine, an {\em iteration}. Note that an iteration does not include the computation done in the subroutines recursively called by the iteration, thus no iteration is included in another. The algorithm may be composed of several kinds of subroutines and operations, so that the recursion is a nest of several kinds of subroutines. In such cases, we consider a series of iterations of different types as one iteration. 
When an iteration $X$ recursively calls an iteration $Y$, $X$ is called the {\em parent} of $Y$, and $Y$ is called a {\em child} of $X$. The {\em root iteration} is the one with no parent. For a non-root iteration $X$, its parent is unique, and is denoted by $P(X)$. The set of the children of $X$ is denoted by $C(X)$. The parent-child relation between iterations forms a tree structure called a {\em recursion tree}. An iteration is called a {\em leaf iteration} if it has no child, and an {\em inner iteration} otherwise. For an iteration $X$, an upper bound on the execution time (the number of operations) of $X$ is denoted by $T(X)$. Here we exclude the computation for the output process from the computation time. We remind the reader that $T(X)$ is the local execution time, and thus does not include the computation time of the recursive calls generated by $X$. For example, when $T(X) = O(n^2)$, $T(X)$ is written as $cn^2$ for some constant $c$. $T^*$ is the maximum $T(X)$ among all leaf iterations $X$. Here, $T^*$ can be either a constant or a polynomial of the input size. If $X$ is an inner iteration, let $\overline{T}(X) = \sum_{Y \in C(X)} T(Y)$. In this paper, we assume that a graph is stored as an adjacency list. For a vertex subset $U$ of a graph $G=(V,E)$, the {\em induced subgraph} of $U$ is the graph whose vertex set is $U$, and whose edge set contains the edges of $E$ connecting two vertices of $U$. An edge is called a {\em bridge} if its removal increases the number of connected components in the graph. An edge $f$ is said to be {\em parallel} to $e$ if $e$ and $f$ have the same endpoints, and to be {\em series} to $e$ if $e$ is a bridge in $G\setminus f$ but not in $G$. For an edge $e$ of a graph $G$, we denote the graph obtained by removing $e$ from $G$ by $G\setminus e$, and that obtained by removing $e$ and the edges adjacent to $e$ by $G^+(e)$. Similarly, for a vertex $v$ of $G$, $G\setminus v$ is the graph obtained from $G$ by removing $v$ and the edges incident to $v$. For an edge $(u,v)$ of $G$, the graph {\em contracted} by $(u,v)$, denoted by $G/(u,v)$, is the graph obtained by unifying the vertices $u$ and $v$ into one. For an edge set $F=\{e_1,\ldots,e_k\}$, $G/F$ denotes the graph $G/e_1/e_2/\cdots/e_k$. \section{Push Out Amortization}\label{sec:PO} The size of the input of each iteration of a recursive algorithm often decreases with the depth of the recursion. Thus, iterations near the root iteration take a relatively long time, and iterations near leaf iterations take a relatively short time. Motivated by this observation, we amortize the computation time by moving the computation time of each iteration to its children. We carry out this move from the top to the bottom, so that the computation time of ancestors is recursively diffused to their descendants. When we can obtain a short amortized computation time in this way, iterations with long computation times have many descendants, at least proportionally to their computation time; the average computation time per iteration will be long only when they have few descendants. However, it is not easy to prove that every inner iteration has sufficiently many descendants. Instead, we use a local condition relating a parent and its children. Suppose that $\alpha > 1$ and $\beta\geq 0$ are two constants.\\ \noindent {\bf \em Push Out Condition (PO condition):} for iteration $X$, $\overline{T}(X) \ge \alpha T(X) - \beta (|C(X)|+1)T^*$.\\ \noindent Fig. \ref{fig:POcond} shows a simple example of this condition. 
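Before describing how the condition is used, we note that it can be checked mechanically on an explicit recursion tree. The following toy sketch (the tree shape, the costs and the constants $\alpha$, $\beta$ are all made up for illustration) verifies the PO condition at every inner iteration:
\begin{verbatim}
# Toy check of the PO condition T_bar(X) >= alpha*T(X) - beta*(|C(X)|+1)*T_star
# on an explicit recursion tree; tree, costs and constants are illustrative only.
alpha, beta = 1.5, 1.0

# each node: (name, local cost T(X), list of children)
tree = ("root", 8.0, [
    ("a", 6.0, [("a1", 4.0, []), ("a2", 5.0, [])]),
    ("b", 7.0, [("b1", 4.0, []), ("b2", 4.0, []), ("b3", 3.0, [])]),
])

def leaf_costs(node):
    _, t, ch = node
    return [t] if not ch else [x for c in ch for x in leaf_costs(c)]

T_star = max(leaf_costs(tree))

def check(node):
    name, t, ch = node
    ok = True
    if ch:
        t_bar = sum(c[1] for c in ch)
        bound = alpha * t - beta * (len(ch) + 1) * T_star
        ok = t_bar >= bound
        print(f"{name}: T_bar={t_bar}, bound={bound}, PO holds: {ok}")
    return ok and all(check(c) for c in ch)

print("PO condition holds everywhere:", check(tree))
\end{verbatim}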
After the assignment of the computation time of $\alpha\beta (|C(X)|+1)T^*$ to children and the remaining to itself, the inequation $\overline{T}(X) \ge \alpha T(X)$ holds. This implies that the computation time of one level of recursion intuitively increases as the depth, unless there are not so many leaf iterations. Considering that enumeration algorithms usually spend less time in deeper levels of the recursion, we can see that this implies that each iteration has many children on average. This is in some sense not a typical condition to bound the time complexity of recursive algorithms; usually we want to decrease the total computation time in deeper levels. However, in the enumeration, the number of leaf iterations is fixed, and thereby the total computation time in the bottom level is also fixed. Thus, this condition implies that the total computation time is short. \begin{theorem}\label{poa} If any inner iteration of an enumeration algorithm satisfies PO condition, the amortized computation time of an iteration is $O(T^*)$. \end{theorem} \begin{figure} \caption{An iteration, its children, and their computation time represented by rectangle lengths; seems to be inefficient if children take long time, but this results in many descendants indeed. } \label{fig:POcond} \caption{Push out rule; an iteration (center) receives computation time from its parent (while rectangle), and delivers it together with its computation time (gray rectangle) to its children, proportional to their computation time. } \label{fig:POrule} \end{figure} \proof To prove the lemma, we charge the computation time. We neither move the operations nor modify the algorithm, but just charge the computation time; the computation time can be considered as tokens, and we move the tokens so that each iteration has a small number of tokens. We charge the computation time from an iteration to its children, i.e., from the top of the recursion tree to the bottom. Thus, an iteration receives computation time from its parent. We charge (push out) its computation time and that received from its parent to its children. The computation time is charged to the children, in proportion of their individual computation time, using the following rule.\\ \noindent {\bf \em Push out rule:} Suppose that iteration $X$ receives a computation time of $S(X)$ from its parent, thus $X$ has computation time of $S(X) + T(X)$ in total. Then, we fix $\frac{\beta}{\alpha-1}(|C(X)|+1) T^*$ of the computation time to $X$, and charge (push out) the remaining computation time of quantity $S(X) + T(X) - \frac{\beta}{\alpha-1}(|C(X)|+1) T^*$ to its children. Each child $Z$ of $X$ receives computation time proportional to $T(Z)$, i.e., \[ S(Z) = (S(X) + T(X) - \frac{\beta}{\alpha-1} (|C(X)|+1)T^*) \frac{T(Z)}{\overline{T}(X)}. \] \noindent See Fig. \ref{fig:POrule} as an example. According to this rule, we charge the computation time from the root iteration to leaf iterations, so that each inner iteration has $O((|C(X)|+1)T^*)$ computation time. Since the sum of the number of children over all nodes in a tree is no greater than the number of nodes in a tree, this is equivalent to that each iteration has $O(T^*)$ time. The remaining issue is to prove the statement of the lemma by showing that each leaf iteration receives computation time of $O(T^*)$, and it is sufficient to prove the statement. 
To show that, we state the following claim.\\ \noindent {\bf \em Claim}: if we charge computation time in the manner of the push out rule, each iteration $X$ receives computation time of at most $T(X) / (\alpha-1)$ from its parent, i.e., $S(X) \le T(X) / (\alpha-1)$.\\ \noindent The root iteration trivially satisfies this condition, since it receives nothing. Suppose that an iteration $X$ satisfies it. Then, for any child $Z$ of $X$, $Z$ receives computation time of \begin{eqnarray*} && (S(X) + T(X) - \frac{\beta}{\alpha-1} (|C(X)|+1)T^*) \frac{T(Z)}{\overline{T}(X)}\\ &\le& (T(X) / (\alpha-1) + T(X) - \frac{\beta}{\alpha-1} (|C(X)|+1)T^*) \frac{T(Z)}{\overline{T}(X)}\\ &=& \frac{\alpha T(X) - \beta (|C(X)|+1)T^*}{\alpha-1} \times \frac{T(Z)}{\overline{T}(X)}\\ &=& \frac{\alpha T(X) - \beta (|C(X)|+1)T^*}{\overline{T}(X)} \times \frac{T(Z)}{\alpha-1}. \end{eqnarray*} \noindent Since PO condition is satisfied, $\overline{T}(X) \ge \alpha T(X) - \beta (|C(X)|+1)T^*$. Thus, \[ \frac{\alpha T(X) - \beta (|C(X)|+1)T^*}{\overline{T}(X)} \frac{T(Z)}{\alpha-1} \le \frac{T(Z)}{\alpha-1}. \] \noindent By induction, any iteration satisfies the condition of the claim. \qed Note that PO condition does not require the iterations to have at least two children. \section{Enumeration of Elimination Ordering}\label{sec:elim} Let ${\cal L}$ be a class of structures such as sets, graphs, and sequences. Suppose that any structure $Z\in {\cal L}$ is built on a set of elements called its {\em ground set}, denoted by $V(Z)$. Examples of ground sets are the vertex set of a graph, the edge set of a graph, the cells of a matrix, and the letters of a string. The empty structure $\perp$ is the unique structure with $V(\perp) = \emptyset$, and hereafter we consider only classes $\cal L$ that include the empty structure. For each $Z\in {\cal L}, Z\ne \perp$, we define the set of {\em removable elements} $R(Z)$, such that for each removable element $e\in R(Z)$, the removal of $e$ from $Z$ results in a structure $Z'\in {\cal L}$ with $V(Z') = V(Z)\setminus \{ e\}$. We denote the removal of $e$ from $Z$ by $Z\setminus e$, and we assume that no two different structures can be generated by the removal of $e$. By using removable elements, we define {\em elimination orderings}. An elimination ordering is an ordering $(z_1,\ldots,z_n)$ of the elements of $V(Z)$ along which they can be iteratively removed from $Z$ until $\perp$ is reached, i.e., each $z_i$ is removable in the structure $Z_i$ obtained by removing $z_1,\ldots,z_{i-1}$ from $Z$. Examples of elimination orderings are the orderings in which the leaves of a tree can be repeatedly removed, and perfect elimination orderings of a chordal graph. A simple algorithm for enumerating elimination orderings can be described as follows. \begin{tabbing} {\bf Algorithm} EnumElimOrdering ($Z, S$)\\ 1. {\bf if} $|V(Z)| = 1$, {\bf output} $S + z$ where $V(Z) = \{ z\}$; {\bf return}\\ 2. {\bf for} each element $z\in V(Z)$ {\bf do}\\ 3. \ \ {\bf if} $z\in R(Z)$, {\bf call} EnumElimOrdering ($Z\setminus z, S + z$)\\ 4. {\bf end for} \end{tabbing} Suppose that we are given a structure $Z$ in a class $\cal L$, together with a way of testing removability on its ground set $V(Z)$. We suppose that, for any $z\in V(Z)$, we can check whether $z\in R(Z)$ in $\Theta(p(|V(Z)|)q(n))$ time, where $p(|V(Z)|)$ is a polynomial of $|V(Z)|$, and $q(n)$ is a function of an invariant $n$ of the input structure, such as the number of edges of the original graph. We also assume that the removal of an element takes $\Theta(p(|V(Z)|)q(n))$ time.
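To make the recursion scheme concrete, the following is a minimal Python sketch of EnumElimOrdering (our own illustration, not code from the paper), instantiated with the simple case in which the structures are trees and the removable elements are the leaves of the current tree; this is one of the examples listed at the end of this section. It is a direct, unoptimized transcription of the pseudocode and is meant only to show the shape of the recursion, not the amortized constant-time behavior analyzed below.

\begin{verbatim}
def enum_elim_orderings(vertices, edges, prefix=()):
    """Yield all orderings in which the vertices of a tree can be removed
    so that each removed vertex is a leaf at the time of its removal.
    vertices: a set of vertex names; edges: a set of frozensets {u, v}."""
    if len(vertices) == 1:
        (z,) = vertices
        yield prefix + (z,)                 # step 1: output S + z
        return
    degree = {v: 0 for v in vertices}
    for e in edges:
        for v in e:
            degree[v] += 1
    for z in sorted(vertices):              # step 2: try every element of V(Z)
        if degree[z] == 1:                  # step 3: z is removable (a leaf)
            rest = {e for e in edges if z not in e}
            yield from enum_elim_orderings(vertices - {z}, rest, prefix + (z,))

# Usage: the path a-b-c has four leaf-removal orderings.
V = {"a", "b", "c"}
E = {frozenset({"a", "b"}), frozenset({"b", "c"})}
print(list(enum_elim_orderings(V, E)))
# [('a', 'b', 'c'), ('a', 'c', 'b'), ('c', 'a', 'b'), ('c', 'b', 'a')]
\end{verbatim}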
\begin{theorem}\label{elim} Elimination orderings of a class $\cal L$ can be enumerated in $O(q(n))$ time for each, if $|R(Z)| \ge 2$ holds for each $Z\in {\cal L}$ such that $|V(Z)|$ is larger than a constant $c$. \end{theorem} \proof We first bound the computation time except for the output processes, that is, step 1 of EnumElimOrdering. First, we choose two constants $\delta>c$ and $\alpha>1$ such that $\frac{2(i-1)p(i-1)}{i\,p(i)} > \alpha$ holds for any $i > \delta$. Since $p$ is a polynomial function, this ratio converges to $2$ as $i$ increases, thus such constants always exist. Let $X$ be an iteration. When the input $Z$ of $X$ satisfies $|V(Z)| \le \delta$, the computation time of $X$, except for the output process, is $O(q(n))$, since $|V(Z)|p(|V(Z)|)$ is bounded by a constant; hence, we have $T^* = O(q(n))$. For the case $|V(Z)| > \delta$, we have \[ \overline{T}(X) \ \ge \ 2(|V(Z)|-1)p(|V(Z)|-1)q(n) \ > \ \alpha |V(Z)|p(|V(Z)|)q(n),\] \noindent since $X$ has at least two children. Thus, $X$ satisfies PO condition with any constant $\beta > 0$. An inner iteration whose input has at most $\delta$ elements also satisfies PO condition for a sufficiently large constant $\beta$, since its computation time is $O(T^*)$. From Theorem \ref{poa}, except for the output process, the amortized computation time is bounded by $O(q(n))$ for each iteration. Since an inner iteration $Y$ can have exactly one child only if $|V(Y)| \le c$, the number of inner iterations is bounded by the number of leaf iterations multiplied by $c$. Therefore, the computation time for each elimination ordering can be bounded by $O(cq(n)) = O(q(n))$ time. Next, let us consider the output process. Instead of explicitly outputting elimination orderings, we output each elimination ordering $S$ by its difference from the ordering $S'$ that is output just before $S$. We can output them compactly in this way. Although the difference can be as large as $|V(Z)|$, it is bounded by the number of operations performed since the previous output process. Thus, the total size of all output differences, except for the first ordering, which is output in the usual way, is at most proportional to the total computation time. Therefore, the computation time for the output process is also bounded by $O(q(n))$ for each. \qed The next corollary immediately follows from the theorem. \begin{corollary} For a given class of structures, elimination orderings can be enumerated by EnumElimOrdering in $O(1)$ amortized time for each, if each inner iteration generates at least two recursive calls, and takes $O(p(|V(Z)|))$ time, where $p$ is a polynomial of $|V(Z)|$. \qed \end{corollary} There are several kinds of elimination orderings to which this theorem can be applied, and they are listed below. For conciseness, we describe each by its structures and removable elements.\\ {\bf Example (a): perfect elimination orderings of a chordal graph\cite{enumPEO}}\\ For a graph, a vertex is called {\em simplicial} if the vertices adjacent to it form a clique. An elimination ordering of simplicial vertices is called a {\em perfect elimination ordering}\cite{elimord}, and a graph is {\em chordal} if it has a perfect elimination ordering. We define $\cal L$ as the set of chordal graphs, $V(Z)$ as the vertex set of $Z\in {\cal L}$, and $R(Z)$ as the set of its simplicial vertices. It is known that any chordal graph $Z$ admits a clique tree whose vertices are the maximal cliques of $Z$. If $Z$ is a clique, all vertices of $Z$ are simplicial. If not, it is known that there are at least two maximal cliques each of which has a vertex that is not included in any other maximal clique.
Note that these cliques are leaves of a clique tree, where the vertices of the clique tree are the maximal cliques of $Z$, each edge connects overlapping cliques, and the maximal cliques including any fixed vertex form a subtree of the clique tree. Such a vertex is simplicial, hence $|R(Z)| \ge 2$ always holds. Since we can check whether a vertex is simplicial or not in $O(|V(Z)|^2)$ time, we can enumerate all perfect elimination orderings in $O(1)$ time for each. Note that although the algorithm in \cite{enumPEO} already attained the same time complexity, our analysis yields a much simpler algorithm and proof.\\ {\bf Example (b): perfect sequence\cite{perfectseq}}\\ $\cal L$ is the class of chordal graphs $Z$, and $V(Z)$ is the set of maximal cliques of $Z$. A maximal clique is removable if it is a leaf of some clique tree of $Z$, and the removal of a maximal clique $z$ from $Z$ is the removal of all vertices of $z$ that do not belong to another maximal clique. The removal of these vertices results in a graph that contains the remaining maximal cliques, and no new maximal clique appears in the graph. Note that a clique tree has at least two leaves if it has more than one vertex, thus $|R(Z)|\ge 2$. An elimination ordering of this kind is called a {\em perfect sequence}. Since all removable maximal cliques can be found in time polynomial in the number of maximal cliques\cite{perfectseq}, all perfect sequences can be enumerated in $O(1)$ time for each.\\ The elimination orderings induced by the following removable elements can also be enumerated in $O(1)$ time for each. \begin{itemize} \item non-cut vertices of a connected graph \item points on the boundary of the convex hull of a point set in the plane \item leaves of a tree \item vertices of degree less than seven in a simple planar graph. \end{itemize} \section{Enumeration of Matchings}\label{sec:match} A {\it matching} of a graph $G=(V,E)$ is an edge subset of $E$ such that no two edges are adjacent. The matchings are enumerated by the following algorithm. \begin{tabbing} {\bf Algorithm} EnumMatching ($G=(V,E), M$)\\ 1: {\bf if} $E = \emptyset$ {\bf then output} $M$; {\bf return}\\ 2: choose an edge $e$ from $E$\\ 3: {\bf call} EnumMatching ($G\setminus e, M$)\\ 4: {\bf call} EnumMatching ($G^+(e), M\cup \{ e\}$) \end{tabbing} \noindent The time complexity of an iteration of EnumMatching is $O(|V|)$. Since each inner iteration generates two children, the computation time for each matching is $O(|V|)$, and no better algorithm has been proposed in the literature. A leaf iteration takes $O(1)$ time, thus $T^* = O(1)$. However, PO condition may not hold for some iterations, so this bound cannot be improved beyond $O(|V|)$ in a straightforward way. PO condition fails when many edges are adjacent to $e$: in such a case, $G^+(e)$ has few edges, and thus the subproblem on $G^+(e)$ takes only a short time. To avoid this situation, we modify the recursion as follows, so that in such cases the iteration has many children. Let $v$ be a vertex of maximum degree, let $u_1,\ldots,u_k$ be the vertices adjacent to $v$, and let $e_i = (v,u_i)$. We partition the matchings to be enumerated into \begin{itemize} \item matchings including $e_1$ \item matchings including $e_2$ \item $\cdots$ \item matchings including $e_k$ \item matchings including no edge incident to $v$. \end{itemize} We see that any matching belongs to exactly one of these groups. To recur, we derive $G^+(e_1),\ldots,G^+(e_k)$ and $G\setminus v$. $G\setminus v$ and $G^+(e_1)$ can be derived in $O(|E|)$ time.
To shorten the computation time for $G^+(e_i)$, $i\ge 2$, we construct $G^+(e_i)$ from $G^+(e_{i-1})$: we add back the edges of $G$ incident to $u_{i-1}$ (except those also incident to $v$), and remove all edges incident to $u_i$. This can be done in $O(d(u_{i-1})+d(u_i))$ time. To construct $G^+(e_i)$ for all $i=2,\ldots,k$, we need \[ O(\ (d(u_1)+d(u_2))\ +\ (d(u_2)+d(u_3))\ +\ \cdots\ +\ (d(u_{k-1})+d(u_k))\ )\ \ =\ O(|E|) \] \noindent time. Thus, the computation time of an iteration is bounded by $c|E|$ for a constant $c$. The algorithm is described as follows. \begin{tabbing} {\bf Algorithm} EnumMatching2 ($G=(V,E), M$)\\ 1: {\bf if} $E = \emptyset$ {\bf then output} $M$; {\bf return}\\ 2: choose a vertex $v$ having the maximum degree in $G$\\ 3: {\bf call} EnumMatching2 ($G\setminus v, M$)\\ 4: {\bf for} each edge $e$ incident to $v$, {\bf call} EnumMatching2 ($G^+(e), M\cup \{ e\}$) \end{tabbing} \begin{theorem}\label{match} All matchings in a graph can be enumerated in $O(1)$ time for each, with $O(|E|+|V|)$ space. \end{theorem} \proof By outputting each matching as the difference from the matching output just before it, as in the case of elimination orderings, the amortized computation time of the output process is bounded by $O(1)$ for each. Let us consider an inner iteration $X$. In the iteration $X$, if $d(v)\ge |E|/4$, we generate at least $|E|/4$ recursive calls, thus we have $|C(X)|=\Omega(|E|)$ and PO condition is satisfied by choosing a sufficiently large $\beta$. If $d(v) < |E|/4$, the subproblem on $G\setminus v$ takes at least $3c|E|/4$ time and the subproblem on $G^+(e_1)$ takes at least $c|E|/2$ time, since $G\setminus v$ has more than $3|E|/4$ edges and $G^+(e_1)$ has more than $|E|/2$ edges. Hence, by setting $\alpha = 1.25$, we have \[ \overline{T}(X) \ge 3c|E|/4 + c|E|/2 = 5c|E|/4 \ge \alpha T(X) - \beta |C(X)|T^* \] \noindent thereby PO condition holds. Recall that each inner iteration generates two or more recursive calls; hence the number of iterations does not exceed twice the number of matchings. Since any inner iteration satisfies PO condition and $T^* = O(1)$, the statement holds. We remark that we assume the input graph has no isolated vertex, and thus the number of matchings in the graph is at least the number of vertices and greater than the number of edges. \qed \section{Enumeration of Connected Vertex Induced Subgraphs}\label{sec:CIS} We consider the problem of enumerating all vertex sets of a given graph $G=(V,E)$ that induce connected subgraphs (connected induced subgraphs, in short). In the literature, an algorithm has been proposed that runs in $O(|V|)$ time for each\cite{AvFk96}. For the enumeration, it is sufficient to enumerate all connected induced subgraphs including a given vertex $r$. For a vertex $v$ adjacent to $r$, the connected induced subgraphs including $r$ are partitioned into those including $v$ and those not including $v$. The former subgraphs are connected induced subgraphs of $G/(r,v)$ and the latter subgraphs are those of $G\setminus v$. We obtain the following algorithm according to this partition, and we prove that this algorithm satisfies PO condition. \begin{tabbing} {\bf Algorithm} EnumConnect ($G=(V,E), S, r$)\\ 1: {\bf if} $d(r) = 0$ {\bf then output} $S$; {\bf return}\\ 2: choose a vertex $v$ adjacent to $r$\\ 3: {\bf call} EnumConnect ($G/(r,v), S\cup \{ v\}, r$)\\ 4: {\bf call} EnumConnect ($G\setminus v, S, r$) \end{tabbing} \begin{theorem}\label{connect} All connected vertex induced subgraphs in a graph can be enumerated in $O(1)$ time for each, with $O(|E|+|V|)$ space. \end{theorem} \proof The correctness of the algorithm and the bound on the memory usage are clear.
Since each inner iteration generates exactly two recursive calls, the number of iterations is linear in the number of connected induced subgraphs, and $T^* = O(1)$. As in the matching enumeration, the computation time of the output process is bounded by $O(1)$ for each. An inner iteration $X$ of the algorithm takes $O(d(r)+d(v))$ time, so we may set $T(X) = c(3d(r)+d(v))$ for a suitable constant $c$, and assume that a leaf iteration takes $3c$ time, since $T^* = O(1)$. The constant factor of three is a key to PO condition. The degree of $r$ is at least $(d(r)+d(v))/2-1$ in $G/(r,v)$, and $d(r)-1$ in $G\setminus v$. Note that $d(r)$ and $d(v)$ are the degrees of $r$ and $v$ in $G$. From this, we can see that the child iteration on $G/(r,v)$ takes at least $3c((d(r)+d(v))/2-1)$ time, and that on $G\setminus v$ takes at least $3c(d(r)-1)$ time. Their sum is at least \[ 3c((d(r)+d(v))/2-1) + 3c(d(r)-1) = \frac{3}{2}c(3d(r)+d(v)) - 6c = \frac{3}{2}T(X) - 6c.\] \noindent Setting $\alpha = 3/2$ and $\beta = 6$, we can see that $X$ satisfies PO condition. Thanks to Theorem \ref{poa}, the computation time for each connected induced subgraph is $O(1)$. \qed \section{Spanning Trees}\label{sec:sptree} A subtree $T$ of a graph $G=(V,E)$ is called a {\em spanning tree} if every vertex of $G$ is incident to at least one edge of $T$. Any spanning tree has $|V|-1$ edges. There have already been several studies on this problem\cite{KpRm95,SrTmUn97,Un99}; among them, \cite{Un99} is the simplest and uses an amortized analysis similar to ours. Without loss of generality, we assume that the input graph has no bridge. Let $e_1$ be an edge of $G$. If there are several edges $e_2,\ldots,e_k$ parallel to $e_1$, let $F = \{ e_1,\ldots,e_k\}$ and $F_i = F\setminus \{ e_i\}$. We see that at most one edge of $F$ can be included in a spanning tree, thus we enumerate spanning trees in $(G\setminus F_1)/e_1,\ldots, (G\setminus F_k)/e_k$. We further enumerate spanning trees in $G\setminus F$ if it is connected. Any spanning tree is enumerated in exactly one of these. When $e_1$ has no parallel edges, $e_1$ can have series edges. If there are several edges $e_2,\ldots,e_k$ series to $e_1$, again let $F = \{ e_1,\ldots,e_k\}$ and $F_i = F\setminus \{ e_i\}$. We also see that any spanning tree includes at least $k-1$ edges of $F$, thus we enumerate spanning trees in $(G/F_1)\setminus e_1,\ldots, (G/F_k)\setminus e_k$. We further enumerate spanning trees in $G/F$ if the edges of $F$ do not form a cycle. Also in this case, any spanning tree is enumerated exactly once among these. By using these subdivisions, we construct the following algorithm. \begin{tabbing} {\bf Algorithm} EnumSpanningTree ($G=(V,E), T$)\\ 1: {\bf if} $E = \emptyset$ {\bf then output} $T$; {\bf return}\\ 2: choose an edge $e_1$ from $E$\\ 3: $F^p := \{ e_1\} \cup \{ e \mid e \mbox{ is parallel to } e_1\}$ ;\\ \ \ \ \ \ $F^s := \{ e_1\} \cup \{ e \mid e \mbox{ is not parallel to } e_1, \mbox{ and } e \mbox{ is series to } e_1\}$\\ 4: {\bf for} each $e_i\in F^p$, \ {\bf call} EnumSpanningTree ($(G\setminus (F^p\setminus \{e_i\}))/ e_i, T\cup \{ e_i\}$)\\ 5: {\bf for} each $e_i\in F^s$, \ {\bf call} EnumSpanningTree ($(G/ (F^s\setminus \{e_i \}))\setminus e_i, T\cup (F^s \setminus \{ e_i\})$) \end{tabbing} \noindent We observe that these $k$ subgraphs are actually isomorphic in both cases except for the edge label $e_i$, thus constructing these graphs takes $O(|V|+|E|)$ time.
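Step 3 is the only place where the structure of the graph is examined. The following is a small Python sketch of this classification step (our own illustration, using straightforward breadth-first connectivity tests rather than the two linear-time connected-component decompositions referred to in the proof below). It follows the definitions of parallel and series edges given in the preliminaries, and assumes, as in the text above, that the chosen edge is not a bridge.

\begin{verbatim}
from collections import defaultdict, deque

def connected(u, v, edges):
    """True if u and v are connected in the multigraph given as a list of
    (edge_id, endpoint, endpoint) triples."""
    adj = defaultdict(list)
    for _, a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

def is_bridge(e, edges):
    """e is a bridge iff its endpoints are disconnected once e is removed."""
    eid, u, v = e
    return not connected(u, v, [f for f in edges if f[0] != eid])

def classify(e1, edges):
    """Compute F^p and F^s of step 3 for the edge e1 (assumed not a bridge)."""
    _, u1, v1 = e1
    F_p = [f for f in edges if {f[1], f[2]} == {u1, v1}]   # parallel edges, incl. e1
    F_s = [e1]
    for f in edges:
        if f[0] == e1[0] or {f[1], f[2]} == {u1, v1}:
            continue                      # skip e1 itself and edges parallel to it
        rest = [g for g in edges if g[0] != f[0]]
        if is_bridge(e1, rest):           # e1 is a bridge in G \ f, but not in G
            F_s.append(f)
    return F_p, F_s

# Example: a triangle 1-2-3 plus a second copy of the edge {1,2}.
G = [(0, 1, 2), (1, 1, 2), (2, 2, 3), (3, 3, 1)]
print(classify(G[0], G))   # F^p: both copies of {1,2};  F^s: only (0,1,2)
print(classify(G[2], G))   # F^p: only (2,2,3);  F^s: (2,2,3) and (3,3,1) are series
\end{verbatim}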
\begin{theorem}\label{spantree} All spanning trees in a graph can be enumerated in $O(1)$ time for each, with $O(|E|+|V|)$ space. \end{theorem} \proof The space complexity of the algorithm is $O(|E|+|V|)$, and an iteration takes $\Theta(|V|+|E|)$ time, since all edges parallel or series to an edge can be found by two connected-component decompositions in $O(|V|+|E|)$ time. If no edge is parallel or series to $e_1$, we generate two subproblems with $|E|-1$ edges each, thus PO condition holds. If $k$ edges are parallel or series to $e_1$, we have at least $k+1\ge 2$ subproblems with $|E|-(k+1)$ edges each. When $k+1\ge |E|/4$, we have $|C(X)|+1 \ge |E|/4$, hence $\alpha T(X) - \beta (|C(X)|+1)T^* \le 0$ holds for a sufficiently large constant $\beta>0$, and PO condition holds. When $k+1< |E|/4$, $(k+1)(|E|-(k+1)) \ge 1.5|E|$ holds, and hence PO condition holds for $\alpha = 1.5$ and some constant $\beta>0$. Since each iteration generates at least two recursive calls or outputs a solution, the number of iterations is at most twice the number of solutions, and therefore the statement holds. \qed \section{Conclusion}\label{sec:cncl} We introduced a new way of amortizing the computation time of enumeration algorithms, based on local conditions on the recursion tree. We identified conditions that are sufficient to give non-trivial upper bounds on the average computation time per iteration and that depend only on the relation between the computation time of a parent iteration and that of its child iterations. We showed that algorithms for many kinds of elimination orderings have good properties so that the conditions are satisfied, and thus the orderings can be enumerated in constant time each. We also described enumeration algorithms for matchings, connected vertex induced subgraphs, and spanning trees whose time complexity is $O(1)$ per solution. There remain many enumeration problems whose algorithms do not satisfy the conditions. An interesting direction for future work is to develop new algorithms for these problems that do satisfy the conditions. Another direction is to study other conditions for bounding the amortized computation time. Further studies on amortized analysis will possibly fill the gaps between theory and practice, and clarify the mechanisms of enumeration algorithms. \ \\ \noindent {\bf \large Acknowledgments: } Part of this research is supported by the Funding Program for World-Leading Innovative R\&D on Science and Technology, Japan, and Grant-in-Aid for Scientific Research (KAKENHI), Japan. \end{document}
\begin{document} \setlength{\parskip}{2mm} \setlength{\parindent}{0pt} \author{\hspace{0.3in}Andrew Cotter \\ \hspace{0.3in} \texttt{[email protected]} \\\hspace{0.3in} TTIC \\\hspace{0.3in} Chicago, IL 60637 USA \\ \hspace{2in} \and Ohad Shamir \\ \texttt{[email protected]} \\ Microsoft Research\\ Cambridge, MA 02142, USA \and Nathan Srebro \\ \texttt{[email protected]} \\ TTIC \\ Chicago, IL 60637 USA\\ \hspace{2in} \and Karthik Sridharan \\ \texttt{[email protected]} \\ TTIC \\ Chicago, IL 60637 USA } \date{} \title{Better Mini-Batch Algorithms\ via Accelerated Gradient Methods} \begin{abstract} Mini-batch algorithms have been proposed as a way to speed-up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up and propose a novel accelerated gradient algorithm, which deals with this deficiency, enjoys a uniformly superior guarantee and works well in practice. \end{abstract} \section{Introduction} We consider a stochastic convex optimization problem of the form $$\min_{\mathbf{w}\in\mathcal{W}} L(\mathbf{w}),$$ where $$L(\mathbf{w})=\Ep{z}{\ell(\mathbf{w},z)},$$ and optimization is based on an empirical sample of instances $z_1,\ldots,z_m$. We focus on objectives $\ell(\mathbf{w},z)$ that are non-negative, convex and smooth in their first argument (i.e.~have a Lipschitz-continuous gradient). The classical learning application is when $z=(\mathbf{x},y)$ and $\ell(\mathbf{w},(\mathbf{x},y))$ is a prediction loss. In recent years, there has been much interest in developing efficient first-order stochastic optimization methods for these problems, such as stochastic mirror descent \cite{BeckTeb03,NemirovskiJuLaSh09} and stochastic dual averaging \cite{Nesterov09,Xiao10}. These methods are characterized by incremental updates based on subgradients $\partial \ell(\mathbf{w},z_i)$ of individual instances, and enjoy the advantages of being highly scalable and simple to implement. An important limitation of these methods is that they are inherently sequential, and so problematic to parallelize. A popular way to speed-up these algorithms, especially in a parallel setting, is via \emph{mini-batching}, where the incremental update is performed on an average of the subgradients with respect to several instances at a time, rather than a single instance (i.e., $\frac{1}{b}\sum_{j=1}^{b}\partial \ell(\mathbf{w},z_{i+j})$). The gradient computations for each mini-batch can be parallelized, allowing these methods to perform faster in a distributed framework (see for instance \cite{ShalSiSreCo11}). Recently, \cite{DekelGilShamXia11} has shown that a mini-batching distributed framework is capable of attaining asymptotically optimal speed-up in general (see also \cite{AD}). A parallel development has been the popularization of \emph{accelerated} gradient descent methods \cite{Nesterov83,Nesterov05,Tseng08,Lan09}.
In a deterministic optimization setting and for general smooth convex functions, these methods enjoy a rate of $O(1/n^2)$ (where $n$ is the number of iterations) as opposed to $O(1/n)$ using standard methods. However, in a stochastic setting (which is the relevant one for learning problems), the rates of both approaches have an $O(1/\sqrt{n})$ dominant term in general, so the benefit of using accelerated methods for learning problems is not obvious. \begin{algorithm}[t] \caption{Stochastic Gradient Descent with Mini-Batching (SGD)} \begin{algorithmic} \STATE Parameters: Step size $\eta$, mini-batch size $b$. \STATE Input: Sample $z_1,\ldots,z_m$ \STATE $\mathbf{w}_1=0$ \FOR{$i=1$ to $n=m/b$} \STATE Let $\ell_i(\mathbf{w}) = \frac{1}{b}\sum_{t=b(i-1)+1}^{bi}\ell(\mathbf{w},z_t)$ \STATE $\mathbf{w}'_{i+1}:= \mathbf{w}_i - \eta \nabla\ell_i(\mathbf{w}_i)$ \STATE $\mathbf{w}_{i+1}:=P_{\mathcal{W}}(\mathbf{w}'_{i+1})$ \ENDFOR \STATE Return $\bar{\mathbf{w}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{w}_i$ \end{algorithmic} \label{alg:md} \end{algorithm} \begin{algorithm} \caption{Accelerated Gradient Method (AG)} \begin{algorithmic} \STATE Parameters: Step sizes $(\gamma_i,\beta_i)$, mini-batch size $b$ \STATE Input: Sample $z_1,\ldots,z_m$ \STATE $\mathbf{w}_1=\mathbf{w}^{\mathrm{ag}}_1=0$ \FOR{$i=1$ to $n=m/b$} \STATE Let $\ell_i(\mathbf{w}) := \frac{1}{b}\sum_{t=b(i-1)+1}^{bi}\ell(\mathbf{w},z_t)$ \STATE $\mathbf{w}^{\mathrm{md}}_i := \beta_i^{-1} \mathbf{w}_{i} + (1 - \beta_i^{-1}) \mathbf{w}_{i}^{\mathrm{ag}}$ \STATE $\mathbf{w}'_{i+1} := \mathbf{w}^{\mathrm{md}}_{i} - \gamma_i \nabla \ell_i(\mathbf{w}^{\mathrm{md}}_i)$ \STATE $\mathbf{w}_{i+1} := P_{\mathcal{W}}(\mathbf{w}'_{i+1})$ \STATE $\mathbf{w}^{\mathrm{ag}}_{i+1} \leftarrow \beta_i^{-1} \mathbf{w}_{i+1} + (1 - \beta_i^{-1}) \mathbf{w}_{i}^{\mathrm{ag}}$ \ENDFOR \STATE Return $\mathbf{w}^{\mathrm{ag}}_{n}$ \end{algorithmic} \label{alg:ag} \end{algorithm} In this paper, we study the application of accelerated methods to mini-batch algorithms, and provide theoretical results, a novel algorithm, and empirical experiments. The main resulting message is that by using an appropriate accelerated method, we obtain significantly better stochastic optimization algorithms in terms of convergence speed. Moreover, in certain regimes acceleration is actually \emph{necessary} in order to allow significant speedups. The potential benefit of acceleration to mini-batching has been briefly noted in \cite{DMB}, but here we study this issue in much more depth.
In particular, we make the following contributions: \begin{itemize} \item We develop novel convergence bounds for the standard gradient method, which refine the results of \cite{DekelGilShamXia11,DMB} by being dependent on $L(\mathbf{w}^*) = \inf_{\mathbf{w} \in \mathcal{W}} L(\mathbf{w})$, the expected loss of the best predictor in our class. For example, we show that in the regime where the desired suboptimality is comparable to or larger than $L(\mathbf{w}^*)$, including the separable case $L(\mathbf{w}^*)=0$, mini-batching does not lead to significant speed-ups with standard gradient methods. \item We develop a novel variant of the stochastic accelerated gradient method \cite{Lan09}, which is optimized for a mini-batch framework and implicitly adaptive to $L(\mathbf{w}^*)$. \item We provide an analysis of our accelerated algorithm, refining the analysis of \cite{Lan09} by being dependent on $L(\mathbf{w}^*)$, and show how it always allows for significant speed-ups via mini-batching, in contrast to standard gradient methods. Moreover, its performance is uniformly superior, at least in terms of theoretical upper bounds. \item We provide an empirical study, validating our theoretical observations and the efficacy of our new method. \end{itemize} \section{Preliminaries}\label{sec:prelim} We consider stochastic convex optimization problems over some convex domain $\mathcal{W}$. Here, we take $\mathcal{W}$ to be a convex subset of a Euclidean space, and use $\norm{\mathbf{w}}$ to denote the standard Euclidean norm; in the Appendix, we state and prove the results in the more general setting where $\mathcal{W}$ is a convex subset of a Banach space and $\norm{\mathbf{w}}$ can be an arbitrary norm. The optimization is based on an i.i.d. sample $z_1,\ldots,z_m \in \mc{Z}$ drawn from some fixed distribution. Throughout this paper we assume that the instantaneous loss $\ell : \mathcal{W} \times \mc{Z} \mapsto \mathbb{R}$ is convex in its first argument and non-negative. We further assume that the loss is $H$-smooth in its first argument for each $z \in \mc{Z}$, that is, for every $z \in \mc{Z}$ and $\mathbf{w} , \mathbf{w}' \in \mathcal{W}$, $$ \norm{\nabla \ell(\mathbf{w},z) - \nabla \ell(\mathbf{w}',z)} \le H \norm{\mathbf{w} - \mathbf{w}'} $$ (in the more general Banach space case, the norm on the left-hand side is the dual norm). Let us denote $$ L(\mathbf{w}) := \Ep{z}{\ell(\mathbf{w},z)}. $$ We wish to minimize $L(\mathbf{w})$ over the convex domain $\mathcal{W}$. We will provide guarantees on $L(\mathbf{w})$ relative to $L(\mathbf{w}^*)$ at some $\mathbf{w}^* \in \mathcal{W}$, where the guarantees also depend on $\norm{\mathbf{w}^*}$.
We could choose $\mathbf{w}^* := \arg\min_{\mathbf{w} \in \mathcal{W}} L(\mathbf{w})$, though our results hold for any $\mathbf{w}^*\in\mathcal{W}$, and in some cases we might choose to compete with a low-norm $\mathbf{w}^*$ that is not optimal in $\mathcal{W}$. The behavior of the accelerated gradient method also depends on the radius of $\mathcal{W}$, defined as $$ D := \sup_{\mathbf{w} \in \mathcal{W}} \norm{\mathbf{w}}. $$ We discuss two stochastic optimization approaches to this problem: stochastic gradient descent (SGD) and accelerated gradient methods (AG). In a mini-batch setting, both approaches iteratively average sub-gradients with respect to several instances, and use this average to update the predictor. However, the update is done in different ways. In the Appendix, we also provide the form of the update in the more general mirror descent setting, where $\norm{\mathbf{w}}$ is an arbitrary norm. The stochastic gradient descent algorithm is summarized as Algorithm \ref{alg:md}. In the pseudocode, $P_{\mathcal{W}}$ refers to the Euclidean projection onto the set $\mathcal{W}$. The accelerated gradient method (e.g., \cite{Lan09}) is summarized as Algorithm \ref{alg:ag}. In terms of existing results, for the SGD algorithm we have \cite[Section 5.1]{DMB} \[ \E{L(\bar{\mathbf{w}})} - L(\mathbf{w}^*) \le \mathcal{O}\left(\sqrt{\frac{1}{m}}+\frac{b}{m}\right), \] whereas for an accelerated gradient algorithm, we have \cite{Lan09} \[ \E{L(\mathbf{w}^{\mathrm{ag}}_n)} - L(\mathbf{w}^*) \le \mathcal{O}\left(\sqrt{\frac{1}{m}}+\frac{b^2}{m^2}\right), \] where in both cases the dependence on $D, H$ and $\norm{\mathbf{w}^*}$ is suppressed. The above bounds suggest that, as long as $b=o(\sqrt{m})$, a large mini-batch size $b$ can be used without significantly degrading the performance of either method. This allows the number of iterations $n=m/b$ to be smaller, potentially resulting in faster convergence speed. However, these bounds do not show that accelerated methods have a significant advantage over the SGD algorithm, at least when $b=o(\sqrt{m})$, since both have the same first-order term $1/\sqrt{m}$. To understand the differences between these two methods better, we will need a more refined analysis, to which we now turn. \section{Convergence Guarantees}\label{sec:convergence} The following theorems provide refined convergence guarantees for the SGD algorithm and the AG algorithm, which improve on the analyses of \cite{DekelGilShamXia11,DMB,Lan09} by being explicitly dependent on $L(\mathbf{w}^*)$, the expected loss of the best predictor $\mathbf{w}^*$ in $\mathcal{W}$.
\begin{theorem}\label{thm:sgd} For any $\mathbf{e}nsuremath{\mathbf{w}}opt\in\mathcal{W}cal$, using Stochastic Gradient Descent with a step size of $\mathbf{e}ta = \min\left\{\frac{1}{2H}, \tilde{f}rac{\sqrt{\frac{b \norm{\mathbf{e}nsuremath{\mathbf{w}}^*}^2 }{L(\mathbf{e}nsuremath{\mathbf{w}}opt) H n}}}{ 1 + \sqrt{\frac{H \norm{\mathbf{e}nsuremath{\mathbf{w}}^*}^2}{L(\mathbf{e}nsuremath{\mathbf{w}}opt) b n}}} \right\}$, we have: \begin{align*} \E{L(\bar{\mathbf{e}nsuremath{\mathbf{w}}})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le \sqrt{\frac{64 H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n}} + \frac{4 L(\mathbf{e}nsuremath{\mathbf{w}}opt) + 4 H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^2}{n} + \frac{8 H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^2}{b n} \mathbf{e}nd{align*} \mathbf{e}nd{theorem} Note that the radius $D$ does not appear in the above bound, which depends only on $\norm{\mathbf{e}nsuremath{\mathbf{w}}opt}$. This means that $\mathcal{W}cal$ could be unbounded, perhaps even the entire space, and a projection step for SGD is not really crucial. The step size, of course, still depends on $\norm{\mathbf{e}nsuremath{\mathbf{w}}opt}$. \begin{theorem}\label{thm:ag} For any $\mathbf{e}nsuremath{\mathbf{w}}opt\in\mathcal{W}cal$, using Accelerated Gradient with step size parameters $\beta_i = \frac{i+1}{2}$, $\gamma_i = \gamma i^p$ where \begin{align}\label{eq:optgamma} \gamma = \min \left\{\tilde{f}rac{1}{4 H},\ \sqrt{\tilde{f}rac{b \norm{\mathbf{e}nsuremath{\mathbf{w}}^*}^2}{348 H L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{2p+1}}},\ \left(\tilde{f}rac{b}{1044 H (n - 1)^{2p}}\right)^{\frac{p+1}{2p+1}} \left(\tilde{f}rac{\norm{\mathbf{e}nsuremath{\mathbf{w}}^*}^2 }{ 4 H \norm{\mathbf{e}nsuremath{\mathbf{w}}^*}^2 + \sqrt{4 H \norm{\mathbf{e}nsuremath{\mathbf{w}}^*}^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}} \right)^{\frac{p}{2p+1}}\right\} \mathbf{e}nd{align} and \begin{align}\label{eq:optp} p = \min\left\{\max\left\{\frac{\log(b)}{2 \log(n-1)} , \frac{\log \log(n)}{2\left(\log(b(n-1)) - \log \log(n)\right)}\right\} ,1\right\}~~, \mathbf{e}nd{align} as long as $n \ge 783$, we have: \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le 117 \sqrt{\frac{H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n}} + \frac{367 H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^{4/3} D^{\frac{2}{3}} }{\sqrt{b} n} + \frac{546 H D^2 \sqrt{\log(n)}}{b n } + \frac{5 H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^2 }{n^{2}} \\ & \le 117 \sqrt{\frac{H D^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n}} + \frac{367 H D^2 }{\sqrt{b} n} + \frac{546 H D^2 \sqrt{\log(n)}}{b n} + \frac{5 H D^2 }{ n^{2}} \mathbf{e}nd{align*} \mathbf{e}nd{theorem} Unlike for SGD, notice that the bound for the AG method above does depend on $D$, and a projection step is necessary for our analysis. However it is worth noting that $D$ only appears in terms of order at least $1/n$, and appears only mildly in the $1/(\sqrt{b}n)$ term, suggesting some robustness to the radius $D$. We emphasize that \thmref{thm:ag} gives more than a theoretical bound: it actually specifies a novel accelerated gradient strategy, where the step size $\gamma_i$ scales {\mathbf{e}m polynomially} in $i$, in a way dependent on the minibatch size $b$ and $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$. 
While $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ may not be known in advance, it does have the practical implication that choosing $\gamma_i\propto i^p$ for some $p<1$, as opposed to just choosing $\gamma_i \propto i$ as in \cite{Lan09}), might yield superior results. We now provide a proof sketch of Theorems \ref{thm:sgd} and \ref{thm:ag}. A more general statement of the Theorems as well as a complete proof can be found in the Appendix. The key observation used for analyzing the dependence on $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ is that for any non-negative $H$-smooth convex function $f : \mathcal{W} \mapsto \mathbf{e}nsuremath{\mathbb{R}}$, we have \cite{SreSriTew10}: \begin{align}\label{eq:self} \norm{\nabla f(\mathbf{e}nsuremath{\mathbf{w}})} \le \sqrt{4 H f(\mathbf{e}nsuremath{\mathbf{w}})} \mathbf{e}nd{align} This self-bounding property tells us that the norm of the gradient is small at a point if the loss is itself small at that point. This self-bounding property has been used in \cite{Shalev07} in the online setting and in \cite{SreSriTew10} in the stochastic setting to get better (faster) rates of convergence for non-negative smooth losses. The implication of this observation are that for any $\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}$, $\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}})} \le \sqrt{4 H L(\mathbf{e}nsuremath{\mathbf{w}})}$ and $\forall z \in \mc{Z}, \norm{\mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}, z)} \le \sqrt{4 H \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}},z)}$. \begin{proof}[{\bf Proof sketch for Theorem \ref{thm:sgd}}] The proof for the stochastic gradient descent bound is mainly based on the proof techniques in \cite{Lan09} and its extension to the mini-batch case in \cite{DekelGilShamXia11}. Following the line of analysis in \cite{Lan09}, one can show that $$ \E{ \tilde{f}rac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le \tilde{f}rac{\mathbf{e}ta}{n-1} \sum_{i=1}^{n-1} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}^2} + \tilde{f}rac{D^2}{2 \mathbf{e}ta (n-1)} $$ In the case of \cite{Lan09}, $\E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}}$ is bounded by the variance, and that leads to the final bound provided in \cite{Lan09} (by setting $\mathbf{e}ta$ appropriately). As noticed in \cite{DekelGilShamXia11}, in the minibatch setting we have $\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) = \frac{1}{b} \sum_{t=b(i-1)+1}^{bi} \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)$ and so one can further show that \begin{align}\label{eq:DGSX} \E{ \tilde{f}rac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le \tilde{f}rac{\mathbf{e}ta}{b^2(n-1)} \sum_{i=1}^{n-1} \sum_{\underset{(i-1)b+1}{t=}}^{ib} \mathbb{E}\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)}^2 + \tilde{f}rac{D^2}{2 \mathbf{e}ta (n-1)} \mathbf{e}nd{align} In \cite{DekelGilShamXia11}, each of $\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)}$ is bounded by $\sigma_0$ and so setting $\mathbf{e}ta$, the mini-batch bound provided there is obtained. 
In our analysis we further use the self-bounding property to \mathbf{e}qref{eq:DGSX} and get that $$ \E{ \tilde{f}rac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le \tilde{f}rac{16 H \mathbf{e}ta}{b(n-1)} \sum_{i=1}^{n-1} \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i)} + \tilde{f}rac{D^2}{2 \mathbf{e}ta (n-1)} $$ rearranging and setting $\mathbf{e}ta$ appropriately gives the final bound. \mathbf{e}nd{proof} \begin{proof}[{\bf Proof sketch for Theorem \ref{thm:ag}}] The proof of the accelerated method starts in a similar way as in \cite{Lan09}. For the $\gamma_i$'a and $\beta_i$'s mentioned in the theorem, following similar lines of analysis as in \cite{Lan09} we get the preliminary bound $$ \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le \frac{2 \gamma}{(n-1)^{p+1}} \sum_{i=1}^{n-1} i^{2p}\ \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}^2} + \frac{D^2}{\gamma (n-1)^{p+1}} $$ In \cite{Lan09} the step size $\gamma_i = \gamma (i+1)/2$ and $\beta_i = (i+1)/2$ which effectively amounts to $p = 1$ and further similar to the stochastic gradient descent analysis. Furthermore, each $\E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}^2} $ is assumed to be bounded by some constant, and thus leads to the final bound provided in \cite{Lan09} by setting $\gamma$ appropriately. On the other hand, we first notice that due to the mini-batch setting, just like in the proof of stochastic gradient descent, \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \tilde{f}rac{2 \gamma}{b^2 (n-1)^{p+1}} \sum_{i=1}^{n-1} i^{2p} \sum_{\underset{b(i-1)+1}{t=}}^{ib} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i,z_t)}^2} + \tilde{f}rac{D^2}{\gamma (n-1)^{p+1}} \mathbf{e}nd{align*} Using smoothness, the self bounding property some manipulations, we can further get the bound \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \tilde{f}rac{64 H \gamma}{b(n-1)^{1-p}} \sum_{i=1}^{n-1} \left(\E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right) + \tilde{f}rac{64 H \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^p}{b} \\ & ~~~~~~~~~~ + \tilde{f}rac{D^2}{\gamma (n-1)^{p+1}} + \tilde{f}rac{32 H D^2}{b(n-1)} \mathbf{e}nd{align*} Notice that the above recursively bounds $\E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ in terms of $\sum_{i=1}^{n-1} \left(\E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right)$. While unrolling the recursion all the way down to $2$ does not help, we notice that for any $\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}cal$, $L(\mathbf{e}nsuremath{\mathbf{w}})- L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le 12 HD^2 + 3 L(\mathbf{e}nsuremath{\mathbf{w}}opt)$. Hence we unroll the recursion to $M$ steps and use this inequality for the remaining sum. 
Optimizing over number of steps up to which we unroll and also optimizing over the choice of $\gamma$, we get the bound, \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \sqrt{\tilde{f}rac{1648 H D^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \tilde{f}rac{348 (6 HD^2 + 2 L(\mathbf{e}nsuremath{\mathbf{w}}opt))}{b(n-1)} (b (n-1))^{\frac{p}{p+1}} + \tilde{f}rac{32 H D^2}{b(n-1)} \\ & ~~~~~ + \tilde{f}rac{4 H D^2}{(n-1)^{p+1}} + \tilde{f}rac{36 H D^2}{b(n-1)} \tilde{f}rac{\log(n)}{(b (n-1))^{\frac{p}{2p + 1}}} \mathbf{e}nd{align*} Using the $p$ as given in the theorem statement, and few simple manipulations, gives the final bound. \mathbf{e}nd{proof} \section{Optimizing with Mini-Batches}\label{sec:minibatches} To compare our two theorems and understand their implications, it will be convenient to treat $H$ and $D$ as constants, and focus on the more interesting parameters of sample size $m$, minibatch size $b$, and optimal expected loss $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$. Also, we will ignore the logarithmic factor in \thmref{thm:ag}, since we will mostly be interested in significant (i.e. polynomial) differences between the two algorithms, and it is quite possible that this logarithmic factor is merely an artifact of our analysis. Using $m=nb$, we get that the bound for the SGD algorithm is \begin{equation}\label{eq:sgd} \E{L(\bar{\mathbf{e}nsuremath{\mathbf{w}}})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) ~\le~ \tilde{\mathcal{O}}\left(\sqrt{\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{bn }}+\frac{1}{n}\right) ~=~ \tilde{\mathcal{O}}\left(\sqrt{\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{m }}+\frac{b}{m}\right), \mathbf{e}nd{equation} and the bound for the accelerated gradient method we propose is \begin{equation}\label{eq:ag} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) ~\le~ \tilde{\mathcal{O}}\left(\sqrt{\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{bn}}+\frac{1}{\sqrt{b}n}+ \frac{1}{n^2}\right) ~=~ \tilde{\mathcal{O}}\left(\sqrt{\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{m}}+\frac{\sqrt{b}}{m}+ \frac{b^2}{m^2}\right). \mathbf{e}nd{equation} To understand the implication these bounds, we follow the approach described in \cite{BotBou07,ShalSre08} to analyze large-scale learning algorithms. First, we fix a desired suboptimality parameter $\mathbf{e}psilon$, which measures how close to $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ we want to get. Then, we assume that both algorithms are ran till the suboptimality of their outputs is at most $\mathbf{e}psilon$. Our goal would be to understand the \mathbf{e}mph{runtime} each algorithm needs, till attaining suboptimality $\mathbf{e}psilon$, as a function of $L(\mathbf{e}nsuremath{\mathbf{w}}opt),\mathbf{e}psilon,b$. To measure this runtime, we need to discern two settings here: a \mathbf{e}mph{parallel} setting, where we assume that the mini-batch gradient computations are performed in parallel, and a \mathbf{e}mph{serial} setting, where the gradient computations are performed one after the other. In a parallel setting, we can take the number of iterations $n$ as a rough measure of the runtime (note that in both algorithms, the runtime of a single iteration is comparable). In a serial setting, the relevant parameter is $m$, the number of data accesses. 
To analyze the dependence on $m$ and $n$, we upper bound \mathbf{e}qref{eq:sgd} and \mathbf{e}qref{eq:ag} by $\mathbf{e}psilon$, and invert\ignore{\footnote{The derivation is based on the fact that for any monotonically descreasing functions $f,g$, if we ignore constants, then a bound of the form $\mathbf{e}psilon\leq \mathcal{O}(f(n)+g(n))$ is equivalent to $\mathbf{e}psilon \leq\mathcal{O}(\max\{f(n),g(n)\})$, which in turn is equivalent to $n\leq \mathcal{O}(\max\{f^{-1}(n),g^{-1}(n)\})$, or $n\leq \mathcal{O}(f^{-1}(\mathbf{e}psilon)+g^{-1}(\mathbf{e}psilon))$. This also holds for sums of more functions.}} them to get the bounds on $m$ and $n$. Ignoring logarithmic factors, for the SGD algorithm we get \begin{equation}\label{eq:sgdinv} n\leq \frac{1}{\mathbf{e}psilon}\left(\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}psilon}\cdot\frac{1}{b}+1\right)\;\;\;\;\;\; m\leq \frac{1}{\mathbf{e}psilon}\left(\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}psilon}+b\right), \mathbf{e}nd{equation} and for the AG algorithm we get \begin{equation}\label{eq:aginv} n\leq \frac{1}{\mathbf{e}psilon}\left(\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}psilon}\cdot\frac{1}{b}+\frac{1}{\sqrt{b}}+\sqrt{\mathbf{e}psilon}\right)\;\;\;\;\;\; m\leq \frac{1}{\mathbf{e}psilon}\left(\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}psilon}+\sqrt{b}+b\sqrt{\mathbf{e}psilon}\right). \mathbf{e}nd{equation} First, let us compare the performance of these two algorithms in the parallel setting, where the relevant parameter to measure runtime is $n$. Analyzing which of the terms in each bound dominates, we get that for the SGD algorithm, there are 2 regimes, while for the AG algorithm, there are 2-3 regimes depending on the relationship between $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ and $\mathbf{e}psilon$. 
The following two tables summarize the situation (again, ignoring constants): \begin{center} \renewcommand\arraystretch{1.5} \begin{minipage}[b]{0.35\linewidth}\centering SGD Algorithm \begin{tabular}{|c|c|}\mathbf{e}nsuremath{\mathbf{h}}line Regime & n \\\mathbf{e}nsuremath{\mathbf{h}}line\mathbf{e}nsuremath{\mathbf{h}}line $b\leq \sqrt{L(\mathbf{e}nsuremath{\mathbf{w}}opt)m}$ & $\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}psilon^2 b}$\\ $b\geq \sqrt{L(\mathbf{e}nsuremath{\mathbf{w}}opt)m}$ & $\frac{1}{\mathbf{e}psilon}$\\\mathbf{e}nsuremath{\mathbf{h}}line \mathbf{e}nd{tabular} \mathbf{e}nd{minipage} \mathbf{e}nsuremath{\mathbf{h}}space{0.5cm} \begin{minipage}[b]{0.5\linewidth} \centering AG Algorithm \begin{tabular}{|c||c|c|}\mathbf{e}nsuremath{\mathbf{h}}line & Regime & n \\\mathbf{e}nsuremath{\mathbf{h}}line\mathbf{e}nsuremath{\mathbf{h}}line \multirow{2}{*}{$\mathbf{e}psilon \leq L(\mathbf{e}nsuremath{\mathbf{w}}opt)^2$} & $b\leq L(\mathbf{e}nsuremath{\mathbf{w}}opt)^{1/4}m^{3/4}$ & $\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}psilon^2 b}$ \\ & $b\geq L(\mathbf{e}nsuremath{\mathbf{w}}opt)^{1/4}m^{3/4}$ & $\frac{1}{\sqrt{\mathbf{e}psilon}}$ \\\mathbf{e}nsuremath{\mathbf{h}}line\mathbf{e}nsuremath{\mathbf{h}}line \multirow{3}{*}{$\mathbf{e}psilon \geq L(\mathbf{e}nsuremath{\mathbf{w}}opt)^2$} & $b\leq L(\mathbf{e}nsuremath{\mathbf{w}}opt)m$ & $\frac{L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}psilon^2 b}$\\ & $L(\mathbf{e}nsuremath{\mathbf{w}}opt)m\leq b \leq m^{2/3}$ & $\frac{1}{\mathbf{e}psilon\sqrt{b}}$ \\ & $b\geq m^{2/3}$ & $\frac{1}{\sqrt{\mathbf{e}psilon}}$\\\mathbf{e}nsuremath{\mathbf{h}}line \mathbf{e}nd{tabular} \mathbf{e}nd{minipage} \mathbf{e}nd{center} From the tables, we see that for both methods, there is an initial linear speedup as a function of the minibatch size $b$. However, in the AG algorithm, this linear speedup regime holds for much larger minibatch sizes\footnote{Since it is easily verified that $\sqrt{L(\mathbf{e}nsuremath{\mathbf{w}}opt) m}$ is generally smaller than both $L(\mathbf{e}nsuremath{\mathbf{w}}opt)^{1/4}m^{3/4}$ and $L(\mathbf{e}nsuremath{\mathbf{w}}opt)m$}. Even beyond the linear speedup regime, the AG algorithm still maintains a $\sqrt{b}$ speedup, for the reasonable case where $\mathbf{e}psilon\geq L(\mathbf{e}nsuremath{\mathbf{w}}opt)^2$. Finally, in all regimes, the runtime bound of the AG algorithm is equal or significantly smaller than that of the SGD algorithm. We now turn to discuss the serial setting, where the runtime is measured in terms of $m$. Inspecting \mathbf{e}qref{eq:sgdinv} and \mathbf{e}qref{eq:aginv}, we see that a larger size of $b$ actually requires $m$ to increase for both algorithms. This is to be expected, since mini-batching does not lead to large gains in a serial setting. However, using mini-batching in a serial setting might still be beneficial for implementation reasons, resulting in constant-factor improvements in runtime (e.g. saving overhead and loop control, and via pipelining, concurrent memory accesses etc.). In that case, we can at least ask what is the largest mini-batch size that won't degrade the runtime guarantee by more than a constant. Using our bounds, the mini-batch size $b$ for the SGD algorithm can scale as much as $L/\mathbf{e}psilon$, vs. a larger value of $L/\mathbf{e}psilon^{3/2}$ for the AG algorithm. 
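To make the orders of magnitude in the tables above tangible, consider a purely hypothetical numerical example of ours (not taken from the analysis): $L(\mathbf{w}^*)=10^{-4}$, $\epsilon = 10^{-3}$ and $m=10^8$, so that $\epsilon \geq L(\mathbf{w}^*)^2$. For the SGD algorithm the linear parallel speedup persists only while $b \leq \sqrt{L(\mathbf{w}^*)m} = 100$, whereas for the AG algorithm it persists while $b \leq L(\mathbf{w}^*)m = 10^4$, and a further $\sqrt{b}$ speedup is retained up to $b = m^{2/3} \approx 2.2\cdot 10^5$.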
Finally, an interesting point is that the AG algorithm is sometimes actually \mathbf{e}mph{necessary} to obtain significant speed-ups via a mini-batch framework (according to our bounds). Based on the table above, this happens when the desired suboptimality $\mathbf{e}psilon$ is not much bigger then $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$, i.e.~$\mathbf{e}psilon=\Omega(L(\mathbf{e}nsuremath{\mathbf{w}}opt))$. This includes the ``separable'' case, $L(\mathbf{e}nsuremath{\mathbf{w}}opt)=0$, and in general a regime where the ``estimation error'' $\mathbf{e}psilon$ and ``approximation error'' $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ are roughly the same---an arguably very relevant one in machine learning. For the SGD algorithm, the critical mini-batch value $\sqrt{L(\mathbf{e}nsuremath{\mathbf{w}}opt)m}$ can be shown to equal $L(\mathbf{e}nsuremath{\mathbf{w}}opt)/\mathbf{e}psilon$, which is $O(1)$ in our case. So with SGD we get no non-constant parallel speedup. However, with AG, we still enjoy a speedup of at least $\Theta(\sqrt{b})$, all the way up to mini-batch size $b=m^{2/3}$. \section{Experiments} \begin{figure}[t] \noindent \begin{centering} \begin{tabular}{ @{} L @{} S @{} S @{} } & astro-physics & CCAT \\ \rotatebox{90}{Test Loss} & \includegraphics[width=0.44\textwidth]{power-astro-nomargin} & \includegraphics[width=0.44\textwidth]{power-ccat-nomargin} \\ & $p$ & $p$ \\ \mathbf{e}nd{tabular} \mathbf{e}nd{centering} {\small \caption{\small Left: Test smoothed hinge loss, as a function of $p$, after training using the AG algorithm on 6361 examples from astro-physics, for various batch sizes. Right: the same, for 18578 examples from CCAT. In both datasets, margin violations were removed before training so that $L(\mathbf{e}nsuremath{\mathbf{w}}opt)=0$. The circled points are the theoretically-derived values $p = \ln b / ( 2 \ln ( n - 1 ) )$ (see Theorem \ref{thm:ag}).}} \label{fig:power} \mathbf{e}nd{figure} We implemented both the SGD algorithm (Algorithm \ref{alg:md}) and the AG algorithm (Algorithm \ref{alg:ag}, using step-sizes of the form $\gamma_i=\gamma i^p$ as suggested by \thmref{thm:ag}) on two publicly-available binary classification problems, astro-physics and CCAT. We used the smoothed hinge loss $\mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}};\mathbf{e}nsuremath{\mathbf{x}},y)$, defined as $0.5-y\mathbf{e}nsuremath{\mathbf{w}}^{\top}\mathbf{e}nsuremath{\mathbf{x}}$ if $y\mathbf{e}nsuremath{\mathbf{w}}^{\top}\mathbf{e}nsuremath{\mathbf{x}}\leq 0$; $0$ if $y\mathbf{e}nsuremath{\mathbf{w}}^{\top}\mathbf{e}nsuremath{\mathbf{x}}> 1$, and $0.5(1-y\mathbf{e}nsuremath{\mathbf{w}}^{\top}\mathbf{e}nsuremath{\mathbf{x}})^2$ in between. While both datasets are relatively easy to classify, we also wished to understand the algorithms' performance in the ``separable'' case $L(\mathbf{e}nsuremath{\mathbf{w}}opt)=0$, to see if the theory in \secref{sec:minibatches} holds in practice. To this end, we created an additional version of each dataset, where $L(\mathbf{e}nsuremath{\mathbf{w}}opt)=0$, by training a classifier on the entire dataset and removing margin violations. In all of our experiments, we used up to half of the data for training, and one-quarter each for validation and testing. The validation set was used to determine the step sizes $\mathbf{e}ta$ and $\gamma_i$. We justify this by noting that our goal is to compare the performance of the SGD and AG algorithms, independently of the difficulties in choosing their stepsizes. 
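As a concrete reference, the following is a small, self-contained NumPy sketch (our own, not the code used for the experiments) of the smoothed hinge loss gradient defined above and of the two update loops of Algorithms \ref{alg:md} and \ref{alg:ag}, using the step-size form $\gamma_i=\gamma i^p$ mentioned above; all constants in the toy usage at the end are placeholders.

\begin{verbatim}
import numpy as np

def smoothed_hinge_grad(w, x, y):
    """Gradient in w of the smoothed hinge loss: with a = y*<w,x>, the loss is
    0.5 - a for a <= 0, 0.5*(1-a)**2 for 0 < a <= 1, and 0 for a > 1."""
    a = y * np.dot(w, x)
    if a <= 0:
        return -y * x
    if a >= 1:
        return np.zeros_like(w)
    return -(1.0 - a) * y * x

def batch_grad(w, batch):
    """Averaged mini-batch gradient (1/b) * sum_t grad l(w, z_t)."""
    return sum(smoothed_hinge_grad(w, x, y) for x, y in batch) / len(batch)

def project(w, radius):
    """Euclidean projection onto the ball of the given radius."""
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def minibatch_sgd(Z, d, b, eta, radius=np.inf):
    """Algorithm 1 (SGD): returns the averaged iterate."""
    n = len(Z) // b
    w, w_sum = np.zeros(d), np.zeros(d)
    for i in range(n):
        w_sum += w
        g = batch_grad(w, Z[i * b:(i + 1) * b])
        w = project(w - eta * g, radius)
    return w_sum / n

def minibatch_ag(Z, d, b, gamma, p, radius=np.inf):
    """Algorithm 2 (AG) with beta_i = (i+1)/2 and gamma_i = gamma * i**p."""
    n = len(Z) // b
    w, w_ag = np.zeros(d), np.zeros(d)
    for i in range(1, n + 1):
        beta_inv = 2.0 / (i + 1)
        w_md = beta_inv * w + (1.0 - beta_inv) * w_ag
        g = batch_grad(w_md, Z[(i - 1) * b:i * b])
        w = project(w_md - gamma * i ** p * g, radius)
        w_ag = beta_inv * w + (1.0 - beta_inv) * w_ag
    return w_ag

# Toy usage on synthetic, linearly separable data.
rng = np.random.default_rng(0)
d, m, b = 5, 4000, 10
X = rng.normal(size=(m, d))
y = np.sign(X @ rng.normal(size=d))
Z = list(zip(X, y))
w_sgd = minibatch_sgd(Z, d, b, eta=0.02)
w_ag = minibatch_ag(Z, d, b, gamma=0.002, p=0.5)
\end{verbatim}

Passing radius=np.inf leaves the iterates unprojected, which corresponds to the implementation choice discussed next.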
In the implementation, we neglected the projection step, as we found it does not significantly affect performance when the stepsizes are properly selected. In our first set of experiments, we attempted to determine the relationship between the performance of the AG algorithm and the $p$ parameter, which determines the rate of increase of the step sizes $\gamma_i$. Our experiments are summarized in Figure \ref{fig:power}. Perhaps the most important conclusion to draw from these plots is that neither the ``traditional'' choice $p=1$, nor the constant-step-size choice $p=0$, give the best performance in all circumstances. Instead, there is a complicated data-dependent relationship between $p$, and the final classifier's performance. Furthermore, there appears to be a weak trend towards higher $p$ performing better for larger minibatch sizes $b$, which corresponds neatly with our theoretical predictions. In our next experiment, we directly compared the performance of the SGD and AG methods. To do so, we varied the minibatch size $b$ while holding the total amount of data used for training, $m=nb$, fixed. When $L(\mathbf{e}nsuremath{\mathbf{w}}opt)>0$ (top row of Figure \ref{fig:batch}), the total sample size $m$ is high and the suboptimality $\mathbf{e}psilon$ is low (red and black plots), we see that for small minibatch size, both methods do not degrade as we increase $b$, corresponding to a linear parallel speedup. In fact, SGD is actually overall better, but as $b$ increases, its performance degrades more quickly, eventually performing worse than AG. That is, even in the least favorable scenario for AG (high $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ and small $\mathbf{e}psilon$, see the tables in \secref{sec:minibatches}), it does give benefits with large enough minibatch sizes. Also, we see that even here, once the suboptimality $\mathbf{e}psilon$ is roughly equal to $L(\mathbf{e}nsuremath{\mathbf{w}}opt)$, AG significantly outperforms SGD, even with small minibatches, agreeing with our the theory. Turning to the case $L(\mathbf{e}nsuremath{\mathbf{w}}opt)=0$ (bottom two rows of Figure \ref{fig:batch}), which is theoretically more favorable to AG, we see it is indeed mostly better, in terms of retaining linear parallel speedups for larger minibatch sizes, even for large data set sizes corresponding to small suboptimality values, and might even be advantageous with small minibatch sizes. \begin{figure}[h!] \noindent \begin{centering} \begin{tabular}{ @{} L @{} S @{} S @{} } & astro-physics & CCAT \\ \rotatebox{90}{Test Loss} & \includegraphics[width=0.4\textwidth]{batch-astro} & \includegraphics[width=0.4\textwidth]{batch-ccat} \\ \rotatebox{90}{Test Loss} & \includegraphics[width=0.4\textwidth]{batch-astro-nomargin} & \includegraphics[width=0.4\textwidth]{batch-ccat-nomargin} \\ \rotatebox{90}{Test Misclassification} & \includegraphics[width=0.4\textwidth]{batch-astro-nomargin-error} & \includegraphics[width=0.4\textwidth]{batch-ccat-nomargin-error} \\ & $b$ & $b$ \\ \mathbf{e}nd{tabular} \mathbf{e}nd{centering} {\small \caption{\small Test loss on astro-physics and CCAT as a function of mini-batch size $b$ (in log-scale), where the total amount of training data $m=nb$ is held fixed. Solid lines and dashed lines are for SGD and AG respectively (for AG, we used $p = \ln b / ( 2 \ln( n - 1 ) )$ as in Theorem \ref{thm:ag}). The upper row shows the smoothed hinge loss on the test set, using the original (uncensored) data. 
The bottom rows show the smoothed hinge loss and the misclassification rate on the test set, using the modified data where $L(\mathbf{w}^*)=0$. All curves are averaged over three runs.}} \label{fig:batch} \end{figure}

\section{Summary}

In this paper, we presented novel contributions to the theory of first-order stochastic convex optimization (Theorems \ref{thm:sgd} and \ref{thm:ag}, generalizing results of \cite{DMB} and \cite{Lan09} to be sensitive to $L(\mathbf{w}^*)$), developed a novel step-size strategy for the accelerated method, which we used to obtain our results and which we observed to work well in practice, and provided a more refined analysis of the effects of mini-batching, which paints a different picture than previous analyses \cite{DMB,AD} and highlights the benefit of accelerated methods. A remaining open practical and theoretical question is whether the bound of Theorem \ref{thm:ag} is tight. Following \cite{Lan09}, the bound is tight for $b=1$ and $b\rightarrow\infty$, i.e.~the first and third terms are tight, but it is not clear whether the $1/(\sqrt{b}n)$ dependence is indeed necessary. It would be interesting to understand whether, with a more refined analysis or perhaps different step sizes, we can avoid this term; whether an altogether different algorithm is needed; or whether this term does represent the optimal behavior of any method based on $b$-aggregated stochastic gradient estimates.

\appendix

\section{Generalizing to Different Norms}

We now turn to general norms and discuss the generic Mirror Descent and Accelerated Mirror Descent algorithms. In this more general case we let the domain $\mathcal{W}$ be some closed convex subset of a Banach space equipped with a norm $\norm{\cdot}$. We will use $\norm{\cdot}_*$ to denote the dual norm of $\norm{\cdot}$. Furthermore, the $H$-smoothness of the loss function in this general case takes the following form: for any $z \in \mc{Z}$ and any $\mathbf{w} , \mathbf{w}' \in \mathcal{W}$,
$$ \norm{\nabla \ell(\mathbf{w} , z) - \nabla \ell(\mathbf{w}',z)}_* \le H \norm{\mathbf{w} - \mathbf{w}'} ~. $$
The key to generalizing the algorithms and results is to find a non-negative function $R : \mathcal{W} \mapsto \mathbb{R}$ that is strongly convex on the domain $\mathcal{W}$ w.r.t. the norm $\norm{\cdot}$, that is:
\begin{definition} A function $R : \mathcal{W} \mapsto \mathbb{R}$ is said to be $1$-strongly convex w.r.t. the norm $\norm{\cdot}$ if for any $\mathbf{w} , \mathbf{w}' \in \mathcal{W}$ and any $\alpha \in [0,1]$,
\begin{align*} R(\alpha \mathbf{w} + (1 - \alpha)\mathbf{w}' ) \le \alpha R(\mathbf{w}) + (1 - \alpha) R(\mathbf{w}') - \frac{\alpha (1 - \alpha)}{2} \norm{\mathbf{w} - \mathbf{w}'}^2 ~.
\end{align*}
\end{definition}
We also denote, more generally, $$D := \sqrt{2 \sup_{\mathbf{w} \in \mathcal{W}} R(\mathbf{w})}~.$$ The generalizations of the SGD and AG methods are summarized in Algorithms \ref{alg:smd} and \ref{alg:amd}, respectively.
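For concreteness, we note a standard instance of this definition (a known example, not needed for the results below): in the Euclidean case $R(\mathbf{w}) = \frac{1}{2}\norm{\mathbf{w}}_2^2$ the definition holds with equality, since a direct expansion gives
$$ \alpha R(\mathbf{w}) + (1-\alpha) R(\mathbf{w}') - R(\alpha \mathbf{w} + (1-\alpha)\mathbf{w}') = \frac{\alpha(1-\alpha)}{2}\,\norm{\mathbf{w} - \mathbf{w}'}_2^2 ~, $$
so this $R$ is $1$-strongly convex w.r.t. $\norm{\cdot}_2$; it is the choice under which Algorithms \ref{alg:smd} and \ref{alg:amd} below reduce to their Euclidean counterparts.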
The key difference between these and the Euclidean case is that the gradient descent step is replaced by a descent step involving gradient mappings of $R$ and its conjugate $R^*$ and the projection step is replaced by Bregman projection (projection to set minimizing the Bregman divergence to the point). \begin{algorithm} \caption{Stochastic Mirror Descent with Mini-Batching (SMD)} \begin{algorithmic} \STATE Parameters: Step size $\mathbf{e}ta$, mini-batch size $b$. \STATE Input: Sample $z_1,\ldots,z_m$ \STATE $\mathbf{e}nsuremath{\mathbf{w}}_1= \argmin{\mathbf{e}nsuremath{\mathbf{w}}} R(\mathbf{e}nsuremath{\mathbf{w}})$ {F}OR{$i=1$ to $n=m/b$} \STATE Let $\mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) = \frac{1}{b}\sum_{t=b(i-1)+1}^{bi}\mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)$ \STATE \mathbf{e}nsuremath{\mathbf{h}}space{-0.08in}$\left.\begin{array}{ll} \mathbf{e}nsuremath{\mathbf{w}}'_{i+1} := \nabla R^*\left( \nabla R\left(\mathbf{e}nsuremath{\mathbf{w}}_{i}\right) - \gamma_i \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) \right) \\ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} := \argmin{\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}cal}\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}}{\mathbf{e}nsuremath{\mathbf{w}}'_{i+1}} \mathbf{e}nd{array}\right\} \mathbf{e}nsuremath{\mathbf{w}}_{i+1} = \argmin{\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}cal}\left\{\mathbf{e}ta \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}} - \mathbf{e}nsuremath{\mathbf{w}}_i} + \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}}{\mathbf{e}nsuremath{\mathbf{w}}_i} \right\} $ \ENDFOR \STATE Return $\bar{\mathbf{e}nsuremath{\mathbf{w}}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{e}nsuremath{\mathbf{w}}_i$ \mathbf{e}nd{algorithmic} \label{alg:smd} \mathbf{e}nd{algorithm} \begin{algorithm} \caption{Accelerated Mirror Descent Method (AMD)} \begin{algorithmic} \STATE Parameters: Step sizes $(\gamma_i,\beta_i)$, mini-batch size $b$ \STATE Input: Sample $z_1,\ldots,z_m$ \STATE $\mathbf{e}nsuremath{\mathbf{w}}_1 = \argmin{\mathbf{e}nsuremath{\mathbf{w}}} R(\mathbf{e}nsuremath{\mathbf{w}})$ {F}OR{$i=1$ to $n=m/b$} \STATE Let $\mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) := \frac{1}{b}\sum_{t=b(i-1)+1}^{bi}\mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}},z_t)$ \STATE $\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i := \beta_i^{-1} \mathbf{e}nsuremath{\mathbf{w}}_{i} + (1 - \beta_i^{-1}) \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}}$ \STATE \mathbf{e}nsuremath{\mathbf{h}}space{-0.1in} $\left.\begin{array}{ll} \mathbf{e}nsuremath{\mathbf{w}}'_{i+1} := \nabla R^*\left( \nabla R\left(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}\right) - \gamma_i \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) \right) \\ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} := \argmin{\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}cal}\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}}{\mathbf{e}nsuremath{\mathbf{w}}'_{i+1}} \mathbf{e}nd{array}\right\} \mathbf{e}nsuremath{\mathbf{w}}_{i+1} = \argmin{\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}cal}\left\{\gamma_i \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}{\mathbf{e}nsuremath{\mathbf{w}} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i} + \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}}{\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i} \right\} $ \STATE $\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1} \leftarrow \beta_i^{-1} \mathbf{e}nsuremath{\mathbf{w}}_{i+1} + (1 - \beta_i^{-1}) 
\mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}}$ \ENDFOR \STATE Return $\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{n}$ \mathbf{e}nd{algorithmic} \label{alg:amd} \mathbf{e}nd{algorithm} \begin{theorem}\label{thm:smd} Let $R : \mathcal{W}cal \mapsto \mathbf{e}nsuremath{\mathbb{R}}$ be a non-negative strongly convex function on $\mathcal{W}cal$ w.r.t. norm $\norm{\cdot}$. Let $K = \sqrt{2 \sup_{\mathbf{e}nsuremath{\mathbf{w}} : \norm{\mathbf{e}nsuremath{\mathbf{w}}} \le 1} R(\mathbf{e}nsuremath{\mathbf{w}})}$. For any $\mathbf{e}nsuremath{\mathbf{w}}opt\in\mathcal{W}cal$, using Stochastic Mirror Descent with a step size of $$ \mathbf{e}ta = \min\left\{\frac{1}{2 H}, \frac{b}{32 H K^2 }, \frac{\sqrt{\frac{32 b R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ L(\mathbf{e}nsuremath{\mathbf{w}}opt) H K^2 n}}}{16\left(1 + \sqrt{\frac{32 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ L(\mathbf{e}nsuremath{\mathbf{w}}opt) b n}}\right) } \right\} ~ , $$ we have that, \begin{align*} \E{L(\bar{\mathbf{e}nsuremath{\mathbf{w}}})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le \sqrt{\frac{128 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)\ L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n}} + \frac{4 L(\mathbf{e}nsuremath{\mathbf{w}}opt) + 8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{n} + \frac{16 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n} \mathbf{e}nd{align*} \mathbf{e}nd{theorem} \begin{theorem}\label{thm:amd} Let $R : \mathcal{W}cal \mapsto \mathbf{e}nsuremath{\mathbb{R}}$ be a non-negative strongly convex function on $\mathcal{W}cal$ w.r.t. norm $\norm{\cdot}$. Also let $K = \sqrt{2 \sup_{\mathbf{e}nsuremath{\mathbf{w}} : \norm{\mathbf{e}nsuremath{\mathbf{w}}} \le 1} R(\mathbf{e}nsuremath{\mathbf{w}})}$. For any $\mathbf{e}nsuremath{\mathbf{w}}opt\in\mathcal{W}cal$, using Accelerated Mirror Descent with step size parameters $\beta_i = \frac{i+1}{2}$, $\gamma_i = \gamma i^p$ where $$ \gamma = \min \left\{\frac{1}{4 H},\ \sqrt{\frac{b R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{174 H K^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{2p+1}}},\ \left(\frac{b}{1044 H K^2 (n - 1)^{2p}}\right)^{\frac{p+1}{2p+1}} \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ \frac{3}{2} H D^2 + L(\mathbf{e}nsuremath{\mathbf{w}}opt)} \right)^{\frac{p}{2p+1}}\right\} ~~\textrm{and}\\ $$ $$ p = \min\left\{\max\left\{\frac{\log(b)}{2 \log(n-1)} , \frac{\log \log(n)}{2\left(\log(b(n-1)) - \log \log(n)\right)}\right\} ,1\right\}~~, $$ as long as $ n \ge \max\{783 K^2, \frac{87 K^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{HD^2}\}$, we have that : \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le 164 \sqrt{\frac{H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{580 H K^2 (R(\mathbf{e}nsuremath{\mathbf{w}}opt))^{2/3} D^{\frac{2}{3}} }{\sqrt{b} (n-1)} + \frac{545 H K^2 D^2 \sqrt{\log(n)}}{b (n - 1)} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{2}} \mathbf{e}nd{align*} \mathbf{e}nd{theorem} \section{Complete Proofs} We provide complete proofs of Theorems \ref{thm:smd} and \ref{thm:amd}, noting how Theorems \ref{thm:sgd} and \ref{thm:ag} are specializations to the Euclidean case. 
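Before turning to the proofs, the following is a minimal illustrative sketch (in Python with NumPy; the names \texttt{grad}, \texttt{project}, and \texttt{smd\_euclidean} are ours, and a projection onto $\mathcal{W}$ is assumed to be given) of the Euclidean specialization of Algorithm \ref{alg:smd}, i.e.~mini-batch SGD with iterate averaging, to which Theorem \ref{thm:sgd} applies.
\begin{verbatim}
import numpy as np

def smd_euclidean(grad, project, samples, b, eta, w1):
    # Euclidean specialization of Algorithm SMD (a sketch, not the
    # experimental implementation): for i = 1, ..., n = m/b, take a
    # gradient step on the mini-batch average and project onto W,
    # then return the average iterate (1/n) * sum_i w_i.
    m = len(samples)
    n = m // b
    w = np.array(w1, dtype=float)
    iterates = [w.copy()]                                   # w_1
    for i in range(n):
        batch = samples[i * b:(i + 1) * b]
        g = np.mean([grad(w, z) for z in batch], axis=0)    # grad of l_i(w_i)
        w = project(w - eta * g)                            # w_{i+1}
        iterates.append(w.copy())
    return np.mean(iterates[:-1], axis=0)                   # average of w_1, ..., w_n
\end{verbatim}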
\subsection{Stochastic Mirror Descent} \begin{proof}[Proof of Theorem \ref{thm:smd}] Due to $H$-smoothness of convex function $L$ we have that, \begin{align*} L(\mathbf{e}nsuremath{\mathbf{w}}_{i+1}) & \le L(\mathbf{e}nsuremath{\mathbf{w}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i} + \frac{H}{2} \|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 \\ & = L(\mathbf{e}nsuremath{\mathbf{w}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i} + \frac{H}{2} \|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i}\\ \intertext{by Holder's inequality we get,} & \le L(\mathbf{e}nsuremath{\mathbf{w}}_i) + \|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_* \|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i\| + \frac{H}{2} \|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i}\\ \intertext{since for any $\alpha > 0$, $ab \le \frac{a^2}{2 \alpha} + \frac{\alpha b^2}{2}$,} & \le L(\mathbf{e}nsuremath{\mathbf{w}}_i) + \frac{\|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 }{2(1/\mathbf{e}ta - H)} + \frac{(1/\mathbf{e}ta - H)}{2} \|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i\|^2 + \frac{H}{2} \|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i}\\ & = L(\mathbf{e}nsuremath{\mathbf{w}}_i) + \frac{\|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 }{2(1/\mathbf{e}ta - H)} + \frac{\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i\|^2}{2 \mathbf{e}ta} + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i} \mathbf{e}nd{align*} We now note that the update step can be written equivalently as $$\mathbf{e}nsuremath{\mathbf{w}}_{i+1} = \argmin{\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}cal} \left\{\mathbf{e}ta \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}} - \mathbf{e}nsuremath{\mathbf{w}}_i} + \Delta_R(\mathbf{e}nsuremath{\mathbf{w}} , \mathbf{e}nsuremath{\mathbf{w}}_i)\right\}~.$$ It can be shown that (see for instance Lemma 1 of \cite{Lan09}) $$ \mathbf{e}ta \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i} \le \mathbf{e}ta \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i} + \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt, \mathbf{e}nsuremath{\mathbf{w}}_{i}) - \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt , \mathbf{e}nsuremath{\mathbf{w}}_{i+1}) - \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}_i , 
\mathbf{w}_{i+1}) $$ Plugging this in, we get that
\begin{align*}
L(\mathbf{w}_{i+1}) &\le L(\mathbf{w}_i) + \frac{\|\nabla L(\mathbf{w}_i) - \nabla \ell_i(\mathbf{w}_i)\|_*^2}{2 (1/\eta - H)} + \frac{\norm{\mathbf{w}_{i} - \mathbf{w}_{i+1}}^2}{2\eta} + \ip{\nabla \ell_i(\mathbf{w}_i)}{\mathbf{w}^* - \mathbf{w}_i}\\
&~~~~~~~~~~~~~ + \frac{1}{\eta}\left(\Delta_R(\mathbf{w}^*, \mathbf{w}_{i}) - \Delta_R(\mathbf{w}^* , \mathbf{w}_{i+1}) - \Delta_R(\mathbf{w}_i , \mathbf{w}_{i+1})\right) \\
& = L(\mathbf{w}_i) + \frac{\|\nabla L(\mathbf{w}_i) - \nabla \ell_i(\mathbf{w}_i)\|_*^2}{2 (1/\eta - H)} + \frac{\norm{\mathbf{w}_{i} - \mathbf{w}_{i+1}}^2}{2\eta} + \ip{\nabla \ell_i(\mathbf{w}_i) - \nabla L(\mathbf{w}_i)}{\mathbf{w}^* - \mathbf{w}_i} + \ip{\nabla L(\mathbf{w}_i)}{\mathbf{w}^* - \mathbf{w}_i}\\
&~~~~~~~~~~~~~ + \frac{1}{\eta}\left( \Delta_R(\mathbf{w}^*, \mathbf{w}_{i}) - \Delta_R(\mathbf{w}^* , \mathbf{w}_{i+1}) - \Delta_R(\mathbf{w}_i , \mathbf{w}_{i+1}) \right) \\
& = L(\mathbf{w}_i) + \frac{\|\nabla L(\mathbf{w}_i) - \nabla \ell_i(\mathbf{w}_i)\|_*^2}{2 (1/\eta - H)} + \frac{\norm{\mathbf{w}_{i} - \mathbf{w}_{i+1}}^2}{2\eta} + \ip{\nabla \ell_i(\mathbf{w}_i) - \nabla L(\mathbf{w}_i)}{\mathbf{w}^* - \mathbf{w}_i} - \ip{\nabla L(\mathbf{w}_i)}{\mathbf{w}_i - \mathbf{w}^*}\\
&~~~~~~~~~~~~~ + \frac{1}{\eta}\left( \Delta_R(\mathbf{w}^*, \mathbf{w}_{i}) - \Delta_R(\mathbf{w}^* , \mathbf{w}_{i+1}) - \Delta_R(\mathbf{w}_i , \mathbf{w}_{i+1}) \right) \\
\intertext{by strong convexity, $\Delta_R(\mathbf{w}_i , \mathbf{w}_{i+1}) \ge \frac{\norm{\mathbf{w}_i - \mathbf{w}_{i+1}}^2}{2}$ and so, }
&\le L(\mathbf{w}_i) + \frac{\|\nabla L(\mathbf{w}_i) - \nabla \ell_i(\mathbf{w}_i)\|_*^2}{2 (1/\eta - H)} + \ip{\nabla \ell_i(\mathbf{w}_i) - \nabla L(\mathbf{w}_i)}{\mathbf{w}^* - \mathbf{w}_i} - \ip{\nabla L(\mathbf{w}_i)}{\mathbf{w}_i - \mathbf{w}^*}\\
&~~~~~~~~~~~~~ + \frac{1}{\eta}\left(\Delta_R(\mathbf{w}^*, \mathbf{w}_{i}) -
\Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt , \mathbf{e}nsuremath{\mathbf{w}}_{i+1})\right) \intertext{since $\mathbf{e}ta \le \frac{1}{2 H}$,} & \le L(\mathbf{e}nsuremath{\mathbf{w}}_i) + \mathbf{e}ta \|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i} - \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_i - \mathbf{e}nsuremath{\mathbf{w}}opt}\\ &~~~~~~~~~~~~~ + \frac{1}{\mathbf{e}ta}\left( \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt, \mathbf{e}nsuremath{\mathbf{w}}_{i}) - \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt , \mathbf{e}nsuremath{\mathbf{w}}_{i+1})\right) \intertext{by convexity, $L(\mathbf{e}nsuremath{\mathbf{w}}_{i}) - \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_i - \mathbf{e}nsuremath{\mathbf{w}}opt} \le L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ and so} & \le L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \mathbf{e}ta \|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i}\\ &~~~~~~~~~~~~~ + \frac{1}{\mathbf{e}ta}\left( \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt, \mathbf{e}nsuremath{\mathbf{w}}_{i}) - \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt , \mathbf{e}nsuremath{\mathbf{w}}_{i+1})\right) \mathbf{e}nd{align*} Hence we conclude that : \begin{align*} \frac{1}{n-1} \sum_{i=1}^{n-1} L(\mathbf{e}nsuremath{\mathbf{w}}_{i+1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \frac{\mathbf{e}ta}{ (n-1) } \sum_{i=1}^{n-1} \|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 + \frac{1}{n-1} \sum_{i=1}^{n-1} \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i} \\ &~~~~~~~~~~ + \frac{1}{n-1} \sum_{i=1}^{n-1}\frac{\Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt, \mathbf{e}nsuremath{\mathbf{w}}_{i}) - \Delta_R(\mathbf{e}nsuremath{\mathbf{w}}opt , \mathbf{e}nsuremath{\mathbf{w}}_{i+1})}{\mathbf{e}ta}\\ & = \frac{\mathbf{e}ta}{ (n-1) } \sum_{i=1}^{n-1} \|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 + \frac{1}{n-1} \sum_{i=1}^{n-1} \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i} \\ &~~~~~~~~~~ + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_1} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{n-1}}}{\mathbf{e}ta (n-1)} \\ & \le \frac{\mathbf{e}ta}{ (n-1) } \sum_{i=1}^{n-1} \|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 + \frac{1}{n-1} \sum_{i=1}^{n-1} \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i} \\ &~~~~~~~~~~ + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} \\ & \le \frac{\mathbf{e}ta}{ (n-1) } \sum_{i=1}^{n-1} \|\nabla 
L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 + \frac{1}{n-1} \sum_{i=1}^{n-1} \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i} \\ &~~~~~~~~~~ + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} \mathbf{e}nd{align*} Taking expectation with respect to sample on both sides and noticing that $\E{\ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i}} = 0$, we get that, \begin{align*} \E{\frac{1}{n-1} \sum_{i=1}^{n-1} L(\mathbf{e}nsuremath{\mathbf{w}}_{i+1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)} & \le \frac{\mathbf{e}ta}{ (n-1) } \sum_{i=1}^{n-1} \E{\|\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)\|_*^2 } + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} \mathbf{e}nd{align*} Now note that $$ \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i) = \frac{1}{b} \sum_{t= (i-1)b+ 1}^{bi} \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)\right) $$ and that $\left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)\right)$ is a mean zero vector drawn i.i.d. Also note that $\mathbf{e}nsuremath{\mathbf{w}}_i$ only depends on the first $(i-1)b$ examples and so when we consider expectation w.r.t. $z_{(i-1)b + 1} , \ldots, z_{ib}$ alone, $\mathbf{e}nsuremath{\mathbf{w}}_i$ is fixed. Hence by Corollary \ref{cor:eqsmooth} we have that, \begin{align*} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}_*^2} & \le \frac{K^2}{b^2}\ \E{\norm{\sum_{t= (i-1)b+ 1}^{bi} \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)\right)}_*^2}\\ & = \frac{K^2}{b^2} \sum_{t= (i-1)b+ 1}^{bi} \E{\norm{ \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)\right)}_*^2} \mathbf{e}nd{align*} Plugging this back we get that \begin{align*} \E{\frac{1}{n-1} \sum_{i=1}^{n-1} L(\mathbf{e}nsuremath{\mathbf{w}}_{i+1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)} & \le \frac{K^2 \mathbf{e}ta}{ b^2 (n-1) } \sum_{i=1}^{n-1} \sum_{t= (i-1)b+ 1}^{bi} \E{\norm{ \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)\right)}_*^2} + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} \\ & \le \frac{2 K^2 \mathbf{e}ta }{b^2 (n-1) } \sum_{i=1}^{n-1} \sum_{t=(i-1)b + 1}^{ib} \E{ \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i)}^2 + \norm{\nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)}_*^2 } + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} \intertext{for any non-negative $H$-smooth convex function $f$, we have the self-bounding property that $\norm{\nabla f(\mathbf{e}nsuremath{\mathbf{w}})}_* \le \sqrt{4 H f(\mathbf{e}nsuremath{\mathbf{w}})}$. 
Using this, } & \le \frac{ 8 H K^2 \mathbf{e}ta }{b^2 (n-1)} \sum_{i=1}^{n-1} \sum_{t=(i-1)b + 1}^{ib} \E{ L(\mathbf{e}nsuremath{\mathbf{w}}_i) + \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t) } + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} \\ & = \frac{16 \mathbf{e}ta H K^2 }{b } \E{\frac{1}{n-1}\sum_{i=1}^{n-1} L(\mathbf{e}nsuremath{\mathbf{w}}_i) } + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} \mathbf{e}nd{align*} Adding $\frac{1}{n-1}L(\mathbf{e}nsuremath{\mathbf{w}}_1)$ on both sides and removing $L(\mathbf{e}nsuremath{\mathbf{w}}_n)$ on the left we conclude that \begin{align*} \E{\frac{1}{n-1} \sum_{i=1}^{n-1} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \frac{16 \mathbf{e}ta H K^2 }{b } \E{\frac{1}{n-1}\sum_{i=1}^{n-1} L(\mathbf{e}nsuremath{\mathbf{w}}_i) } + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta (n-1)} + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n-1} \mathbf{e}nd{align*} Hence we conclude that \begin{align*} \E{\frac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) &\le \frac{1}{\left(1 - \frac{16 \mathbf{e}ta H K^2 }{b}\right) } \left( \frac{16 \mathbf{e}ta H K^2 }{b} L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n} + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta n} \right)\\ &= \left( \frac{1}{1 - \frac{16 \mathbf{e}ta H K^2 }{b}} - 1 \right) L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \frac{1}{1 - \frac{16 \mathbf{e}ta H K^2 }{b}} \left(\frac{L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n} + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}ta n} \right)\\ &= \left( \frac{1}{1 - \frac{16 \mathbf{e}ta H K^2 }{b}} - 1 \right) L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \left(\frac{1}{1 - \frac{16 \mathbf{e}ta H K^2 }{b}}\right) \frac{L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n} \\ & ~~~~~+ \left(\frac{1}{1 - \frac{16 \mathbf{e}ta H K^2 }{b}}\right) \frac{b}{16 \mathbf{e}ta H K^2 } \frac{16 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ b n} \mathbf{e}nd{align*} Writing $\alpha = \frac{1}{1 - \frac{16 \mathbf{e}ta H K^2 }{b}} - 1 $, so that $\mathbf{e}ta = \frac{b}{16 H K^2 }\left(1 - \frac{1}{\alpha + 1}\right)$ we get, \begin{align*} \E{\frac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \alpha L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \frac{(\alpha + 1) L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n} + \frac{16 H (\alpha + 1)^2}{\alpha} \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n} \\ & \le \alpha L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \frac{(\alpha + 1) L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n} + \left(\alpha + \frac{1}{\alpha}\right) \frac{32 H R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n} \mathbf{e}nd{align*} Now we shall always pick $\mathbf{e}ta \le \frac{b}{32 H K^2 }$ so that $\alpha \le 1$ and so \begin{align*} \E{\frac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \alpha L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \frac{32 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\alpha\ b n} + \frac{2 L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n} + \frac{16 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n} \mathbf{e}nd{align*} Picking $$ \mathbf{e}ta = \min\left\{\frac{1}{2 H}, \frac{b}{32 H K^2 }, \frac{\sqrt{\frac{32 b R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ L(\mathbf{e}nsuremath{\mathbf{w}}opt) H K^2 n}}}{16\left(1 + \sqrt{\frac{32 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ 
L(\mathbf{e}nsuremath{\mathbf{w}}opt) b n}}\right) } \right\} ~ , $$ or equivalently $\alpha = \min\left\{1 , \sqrt{\frac{32 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ L(\mathbf{e}nsuremath{\mathbf{w}}opt) b n}}\right\}$ we get, \begin{align*} \E{\frac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \sqrt{\frac{128 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)\ L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n}} + \frac{2 L(\mathbf{e}nsuremath{\mathbf{w}}_1)}{n} + \frac{16 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n} \mathbf{e}nd{align*} Finally note that by smoothness, \begin{align*} L(\mathbf{e}nsuremath{\mathbf{w}}_1) & \le L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_1) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}nsuremath{\mathbf{w}}_1 -\mathbf{e}nsuremath{\mathbf{w}}opt} + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\mathbf{e}nsuremath{\mathbf{w}}_1 - \mathbf{e}nsuremath{\mathbf{w}}opt} \\ & \le L(\mathbf{e}nsuremath{\mathbf{w}}opt) + \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_1) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}opt)}_* \norm{\mathbf{e}nsuremath{\mathbf{w}}_1 -\mathbf{e}nsuremath{\mathbf{w}}opt} + \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}opt)}_* \norm{\mathbf{e}nsuremath{\mathbf{w}}_1 - \mathbf{e}nsuremath{\mathbf{w}}opt} \\ & \le L(\mathbf{e}nsuremath{\mathbf{w}}opt) + H \norm{\mathbf{e}nsuremath{\mathbf{w}}_1 -\mathbf{e}nsuremath{\mathbf{w}}opt}^2 + \sqrt{4 H L(\mathbf{e}nsuremath{\mathbf{w}}opt)} \norm{\mathbf{e}nsuremath{\mathbf{w}}_1 - \mathbf{e}nsuremath{\mathbf{w}}opt} \\ \intertext{Since $R$ is $1$-strongly convex and $\mathbf{e}nsuremath{\mathbf{w}}_1 = \argmin{\mathbf{e}nsuremath{\mathbf{w}}} R(\mathbf{e}nsuremath{\mathbf{w}})$, } & \le L(\mathbf{e}nsuremath{\mathbf{w}}opt) + 2 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) + \sqrt{8 H L(\mathbf{e}nsuremath{\mathbf{w}}opt) R(\mathbf{e}nsuremath{\mathbf{w}}opt)} \\ & \le 2 L(\mathbf{e}nsuremath{\mathbf{w}}opt) + 4 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) \mathbf{e}nd{align*} Hence we conclude that \begin{align*} \E{\frac{1}{n} \sum_{i=1}^{n} L(\mathbf{e}nsuremath{\mathbf{w}}_{i})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \sqrt{\frac{128 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)\ L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n}} + \frac{4 L(\mathbf{e}nsuremath{\mathbf{w}}opt) + 8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{n} + \frac{16 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b n} \mathbf{e}nd{align*} Using Jensen's inequality concludes the proof. \mathbf{e}nd{proof} \begin{proof}[Proof of Theorem \ref{thm:sgd}] For Euclidean case $R(\mathbf{e}nsuremath{\mathbf{w}}) = \frac{1}{2} \norm{\mathbf{e}nsuremath{\mathbf{w}}}_2^2$ and $K = \sqrt{\sup_{\mathbf{e}nsuremath{\mathbf{w}} : \norm{\mathbf{e}nsuremath{\mathbf{w}}}_2 \le 1} \norm{\mathbf{e}nsuremath{\mathbf{w}}}^2} = 1$. Plugging these in the previous theorem concludes the proof. 
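For concreteness, with $K=1$ and $R(\mathbf{w}^*) = \frac{1}{2}\norm{\mathbf{w}^*}_2^2$, the bound of Theorem \ref{thm:smd} then reads
$$ \E{L(\bar{\mathbf{w}})} - L(\mathbf{w}^*) \le 8 \norm{\mathbf{w}^*}_2 \sqrt{\frac{H\, L(\mathbf{w}^*)}{b n}} + \frac{4 L(\mathbf{w}^*) + 4 H \norm{\mathbf{w}^*}_2^2}{n} + \frac{8 H \norm{\mathbf{w}^*}_2^2}{b n} ~, $$
which is the Euclidean form referred to above (up to how the constants are grouped in the statement of Theorem \ref{thm:sgd}).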
\mathbf{e}nd{proof} \subsection{Accelerated Mirror Descent} \begin{lemma} For the accelerated update rule, if the step sizes $\beta_i \in [1,\infty)$ and $\gamma_i \in (0,\infty)$ are chosen such that $\beta_1 = 1$ and for all $i \in [n]$ $$ 0 < \gamma_{i+1}(\beta_{i+1} - 1) \le \beta_i \gamma_i ~~~\textrm{and} ~~~ 2 H \gamma_i \le \beta_i $$ then we have that \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \frac{\gamma_1 (\beta_1 - 1)}{\gamma_n (\beta_n - 1)} L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \frac{32 H}{b \gamma_n (\beta_n - 1)} \sum_{i=1}^{n-1} \gamma_i^2 \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} + \frac{D^2 }{2 \gamma_n (\beta_n - 1)} + \frac{16 H^2 D^2}{b \gamma_n (\beta_n - 1)} \sum_{i=1}^{n-1} \frac{\gamma_i^2}{\beta_i^2} \mathbf{e}nd{align*} \mathbf{e}nd{lemma} \begin{proof} First note that for any $i$, \begin{align} \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i & = \beta_i^{-1} \mathbf{e}nsuremath{\mathbf{w}}_{i+1} + (1 - \beta_i^{-1}) \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i \notag \\ & = \beta_i^{-1} \mathbf{e}nsuremath{\mathbf{w}}_{i+1} + (1 - \beta_i^{-1}) \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}} - \beta_i^{-1} \mathbf{e}nsuremath{\mathbf{w}}_{i} - (1 - \beta_i^{-1}) \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}} \notag \\ & = \beta_i^{-1} \left(\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\right) \label{eq:rearrange} \mathbf{e}nd{align} Now by smoothness we have that \begin{align*} L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1}) & \le L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}} + \frac{H}{2}\|\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}\|^2 \\ & = L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}} + \frac{H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 \\ & = L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}} +\frac{1}{2 \beta_i \gamma_i}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 - \frac{ \beta_i/\gamma_i - H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 \\ \intertext{since $\mathbf{e}nsuremath{\mathbf{w}}_{i+1}^{\mathrm{ag}} = \beta_i^{-1} \mathbf{e}nsuremath{\mathbf{w}}_{i+1} + (1 - \beta_i^{-1}) \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}}$, } & = L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \beta_i^{-1} \mathbf{e}nsuremath{\mathbf{w}}_{i+1} + (1 - \beta_i^{-1}) \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}} +\frac{\norm{\mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{ 2 \beta_i \gamma_i} - \frac{ \beta_i/\gamma_i - H}{2 
\beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 \\ & = L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) + (1 - \beta_i^{-1}) \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{\mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}} + \frac{\ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{ \beta_i} + \frac{\norm{\mathbf{e}nsuremath{\mathbf{w}}_{i}-\mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{2 \beta_i \gamma_i} \\ & ~~~~~~~~~~- \frac{ \beta_i/\gamma_i - H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 \\ & = (1 - \beta_i^{-1})\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{\mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{ag}} - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}} \right) + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} + \frac{\norm{\mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{2 \beta_i \gamma_i} \\ & ~~~~~~~~~~- \frac{ \beta_i/\gamma_i - H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 \\ & = (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} + \frac{\norm{\mathbf{e}nsuremath{\mathbf{w}}_{i}-\mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{2 \beta_i \gamma_i} - \frac{ \beta_i/\gamma_i - H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 \\ \intertext{} & = (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - \frac{ \beta_i/\gamma_i - H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 + \frac{ \norm{\mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{2 \beta_i \gamma_i} \\ & ~~~~~~~~~~ + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} \\ & = (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - \frac{ \beta_i/\gamma_i - H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 + \frac{ \norm{\mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{2 \beta_i \gamma_i} + \frac{\ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}} }{\beta_i} \\ & ~~~~~~~~~~ + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla 
\mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} \\ \intertext{by Holder's inequality,} & \le (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - \frac{ \beta_i/\gamma_i - H}{2 \beta_i^2}\|\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}\|^2 + \frac{\norm{\mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{ 2 \beta_i \gamma_i} + \frac{ \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_* \norm{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}} }{\beta_i}\\ & ~~~~~~~~~~ + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} \\ \intertext{since for any $a,b$ and $\alpha > 0$, $ab \le \frac{a^2}{2 \alpha} + \frac{\alpha b^2}{2}$} & \le (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + \frac{\norm{\mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{2 \beta_i \gamma_i} \\ & ~~~~~~~~~~ + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} \mathbf{e}nd{align*} We now note that the update step 2 of accelerated gradient can be written equivalently as $$\mathbf{e}nsuremath{\mathbf{w}}_{i+1} = \argmin{\mathbf{e}nsuremath{\mathbf{w}} \in \mathcal{W}cal} \left\{\gamma_i \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}})}{\mathbf{e}nsuremath{\mathbf{w}} - \mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}}} + \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}}{\mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}\right\}~.$$ It can be shown that (see for instance Lemma 1 of \cite{Lan09}) $$ \gamma_i \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i)}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1} - \mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}}} \le \gamma_i \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}})}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}}} + \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} - 
\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}_i}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} $$ Plugging this we get that, \begin{align*} L(\mathbf{e}nsuremath{\mathbf{w}}_{i+1}^{\mathrm{ag}}) & \le (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + \frac{\norm{\mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2}{2 \beta_i \gamma_i} + \frac{ \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} }{\beta_i} \\ & ~~~~~~~~ + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}_i}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}}}{\gamma_i \beta_i} \intertext{by strong-convexity of $R$, $\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}_i}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} \ge \frac{1}{2}\norm{\mathbf{e}nsuremath{\mathbf{w}}_i- \mathbf{e}nsuremath{\mathbf{w}}_{i+1}}^2$ and so,} & = (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + \frac{ \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} }{\beta_i} \\ & ~~~~~~~~ + \frac{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} }{\gamma_i \beta_i} \\ & = (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + \frac{ \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}})}{ \mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} }{\beta_i} \\ & ~~~~~~~~ + \frac{\ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} }{\gamma_i \beta_i} \\ & ~~~~~~~~ + \frac{ 
L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}})}{ \mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} }{\beta_i} \\ \intertext{by convexity, $L(\mathbf{e}nsuremath{\mathbf{w}}opt) \ge L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}})}{\mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i}$, hence } & \le (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + \frac{ \ip{\nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{md}})}{ \mathbf{e}nsuremath{\mathbf{w}}opt - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}} }{\beta_i} \\ & ~~~~~~~~ + \frac{\ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}_{i}^{\mathrm{md}}}}{\beta_i} + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} }{ \gamma_i \beta_i} + \frac{ L(\mathbf{e}nsuremath{\mathbf{w}}opt) }{\beta_i} \\ & = (1 - \beta_i^{-1}) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + \frac{\ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt}}{\beta_i} \\ & ~~~~~~~~ + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} }{\gamma_i \beta_i} + \beta_i^{-1} L(\mathbf{e}nsuremath{\mathbf{w}}opt) \\ & = L(\mathbf{e}nsuremath{\mathbf{w}}opt) + (1 - \beta_i^{-1}) \left(L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right) + \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + \frac{\ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt}}{\beta_i} \\ & ~~~~~~~~ + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} }{\gamma_i \beta_i} \mathbf{e}nd{align*} Thus we conclude that \begin{align*} L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le (1 - \beta_i^{-1})\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)+ \frac{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} + 
\frac{\ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt}}{\beta_i} \\ & ~~~~~~ + \frac{\breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} }{ \beta_i \gamma_i} \mathbf{e}nd{align*} Multiplying throughout by $\beta_i \gamma_i$ we get \begin{align*} \gamma_i \beta_i \left(L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right) & \le \gamma_i (\beta_i - 1)\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)+ \frac{\gamma_i \beta_i \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} \\ & ~~~~~~ + \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} + \gamma_i \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt} \mathbf{e}nd{align*} Owing to the condition that $\gamma_{i+1} (\beta_{i+1} -1) \le \gamma_i \beta_i$ we have that \begin{align*} \gamma_{i+1}(\beta_{i+1} -1) \left(L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i+1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right) & \le \gamma_i (\beta_i - 1)\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)+ \frac{\gamma_i \beta_i \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} \\ & ~~~~~~ + \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{i+1}} + \gamma_i \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt} \mathbf{e}nd{align*} Using the above inequality repeatedly we conclude that \begin{align*} \gamma_{n}(\beta_{n} - 1) \left(L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{n}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right) & \le \gamma_1 (\beta_1 - 1)\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)+ \sum_{i=1}^{n-1} \frac{\gamma_i \beta_i \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} \\ & ~~~~~~ + \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{1}} - \breg{R}{\mathbf{e}nsuremath{\mathbf{w}}opt}{\mathbf{e}nsuremath{\mathbf{w}}_{n}} + \sum_{i=1}^{n-1} \gamma_i \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt}\\ & \le \gamma_1 (\beta_1 - 1)\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)+ 
\sum_{i=1}^{n-1} \frac{\gamma_i \beta_i \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} \\ & ~~~~~~ + R(\mathbf{e}nsuremath{\mathbf{w}}opt) + \sum_{i=1}^{n-1} \gamma_i \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt}\\ & = \gamma_1 (\beta_1 - 1)\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)+ \sum_{i=1}^{n-1} \frac{\gamma_i \beta_i \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2}{2 ( \beta_i/\gamma_i - H)} \\ & ~~~~~~ + R(\mathbf{e}nsuremath{\mathbf{w}}opt) + \sum_{i=1}^{n-1} \gamma_i \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt} \intertext{since $2 H \gamma_i \le \beta_i$, } & \le \gamma_1 (\beta_1 - 1)\left( L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)+ \sum_{i=1}^{n-1} \gamma_i^2 \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}_*^2 + R(\mathbf{e}nsuremath{\mathbf{w}}opt) \\ & ~~~~~~ + \sum_{i=1}^{n-1} \gamma_i \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt}\\ & \le \gamma_1 (\beta_1 - 1) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} 2 \gamma_i^2 \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i})}_*^2 + R(\mathbf{e}nsuremath{\mathbf{w}}opt) \\ & ~~~~~~ + \sum_{i=1}^{n-1} \gamma_i \ip{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i})}{ \mathbf{e}nsuremath{\mathbf{w}}_{i} - \mathbf{e}nsuremath{\mathbf{w}}opt} \\ & ~~~~~~ + \sum_{i=1}^{n-1} 2 \gamma_i^2 \norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i})}_*^2 \mathbf{e}nd{align*} Taking expectation we get that \begin{align} \gamma_{n}(\beta_{n} - 1) \left(\E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{n})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right) & \le \gamma_1 (\beta_1 - 1) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} 2 \gamma_i^2 \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i})}_*^2} + R(\mathbf{e}nsuremath{\mathbf{w}}opt) \notag \\ & ~~~~~~ + \sum_{i=1}^{n-1} 2 \gamma_i^2 \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_{i}) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i}) + \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{i})}_*^2} \label{eq:inter} 
\mathbf{e}nd{align} Now note that $$ \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) = \frac{1}{b} \sum_{t= (i-1)b+ 1}^{bi} \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t)\right) ~~~~~\textrm{and} $$ $$ \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) = \frac{1}{b} \sum_{t= (i-1)b+ 1}^{bi} \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i,z_t)\right) $$ Further $\left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i) - \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i,z_t)\right)$ and $\left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i,z_t)\right)$ are mean zero vectors drawn i.i.d. Also note that $\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i$ only depends on the first $(i-1)b$ examples and so when we consider expectation w.r.t. $z_{(i-1)b + 1} , \ldots, z_{ib}$, $\mathbf{e}nsuremath{\mathbf{w}}_i$ is fixed. Hence by Corollary \ref{cor:eqsmooth} we have that, \begin{align*} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}}) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})}_*^2} & = \frac{K^2}{b^2} \E{\norm{\sum_{t= (i-1)b+ 1}^{bi} \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}}) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}},z_t)\right)}_*^2}\\ & \le \frac{K^2}{b^2} \sum_{t= (i-1)b+ 1}^{bi} \E{\norm{ \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}}) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}},z_t)\right)}_*^2} \mathbf{e}nd{align*} and similarly \begin{align*} & \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \nabla \mathbf{e}ll_i(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}_*^2} \\ & ~~~~~~~~~~ \le \frac{K^2}{b^2} \sum_{t= (i+1)b +1}^{bi} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i, z_t)}_*^2} \mathbf{e}nd{align*} Plugging these back in Equation \ref{eq:inter} we get : {\small \begin{align*} \gamma_{n}(\beta_{n} - 1) \left(\E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{n})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) \right) & \le \gamma_1 (\beta_1 - 1) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} \frac{2 K^2 \gamma_i^2}{b^2} \sum_{t= (i-1)b+ 1}^{bi} \E{\norm{ \left(\nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}}) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}},z_t)\right)}_*^2} + R(\mathbf{e}nsuremath{\mathbf{w}}opt) \\ & ~~~ + \sum_{i=1}^{n-1} \frac{2 K^2 
\gamma_i^2}{b^2} \sum_{t= (i+1)b +1}^{bi} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i) + \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i, z_t)}_*^2}\\ & \le \gamma_1 (\beta_1 - 1) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} \frac{4 K^2 \gamma_i^2}{b^2} \sum_{t= (i-1)b+ 1}^{bi} \E{\norm{ \nabla L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})}_*^2 + \norm{\nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}},z_t)}_*^2} + R(\mathbf{e}nsuremath{\mathbf{w}}opt) \\ & ~~~ + \sum_{i=1}^{n-1} \frac{4 K^2 \gamma_i^2}{b^2} \sum_{t= (i+1)b +1}^{bi} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}_*^2 + \norm{\nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i, z_t) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t)}_*^2} \intertext{for any non-negative $H$-smooth convex function $f$, we have the self-bounding property that $\norm{\nabla f(\mathbf{e}nsuremath{\mathbf{w}})} \le \sqrt{4 H f(\mathbf{e}nsuremath{\mathbf{w}})}$. Using this, } & \le \gamma_1 (\beta_1 - 1) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} \frac{16 H K^2 \gamma_i^2}{b^2} \sum_{t= (i-1)b+ 1}^{bi} \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}}) + \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}},z_t)} + R(\mathbf{e}nsuremath{\mathbf{w}}opt) \\ & ~~~ + \sum_{i=1}^{n-1} \frac{4 K^2 \gamma_i^2}{b^2 } \sum_{t= (i+1)b +1}^{bi} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}_*^2 + \norm{\nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i, z_t) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t)}_*^2}\\ & = \gamma_1 (\beta_1 - 1) L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} \frac{32 H K^2 \gamma_i^2}{b} \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} + R(\mathbf{e}nsuremath{\mathbf{w}}opt)\\ & ~~~ + \sum_{i=1}^{n-1} \frac{4 K^2 \gamma_i^2}{b^2 } \sum_{t= (i+1)b +1}^{bi} \E{\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}_*^2 + \norm{\nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i, z_t) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t)}_*^2} \intertext{by $H$-smoothness of $L$ and $\mathbf{e}ll$ we have that $\norm{\nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i) - \nabla L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i)}_* \le H \norm{\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i}$. Similarly we also have that $\norm{\nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i,z_t) - \nabla \mathbf{e}ll(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i,z_t)}_* \le H \norm{\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_i - \mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{md}}_i}$. 
Hence,} & \le \gamma_1 (\beta_1 - 1) L(\mathbf{w}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} \frac{32 H K^2 \gamma_i^2}{b} \E{L(\mathbf{w}_i^{\mathrm{ag}})} + R(\mathbf{w}^{\mathrm{opt}}) \\ & ~~~ + \sum_{i=1}^{n-1} \frac{8 H^2 K^2 \gamma_i^2}{b} \E{\norm{\mathbf{w}^{\mathrm{ag}}_i - \mathbf{w}^{\mathrm{md}}_i}^2 } \intertext{However, $\mathbf{w}^{\mathrm{md}}_i \leftarrow \beta_i^{-1} \mathbf{w}_{i} + (1 - \beta_i^{-1}) \mathbf{w}_{i}^{\mathrm{ag}}$. Hence $\norm{\mathbf{w}^{\mathrm{ag}}_i - \mathbf{w}^{\mathrm{md}}_i}^2 \le \frac{\norm{\mathbf{w}_i - \mathbf{w}_{i}^{\mathrm{ag}}}^2}{\beta_i^2} \le \frac{2 \norm{\mathbf{w}_i - \mathbf{w}_1}^2 + 2 \norm{\mathbf{w}_1- \mathbf{w}_{i}^{\mathrm{ag}}}^2}{\beta_i^2} \le \frac{4 D^2}{\beta_i^2}$. Hence,} & \le \gamma_1 (\beta_1 - 1) L(\mathbf{w}^{\mathrm{ag}}_{1}) + \sum_{i=1}^{n-1} \frac{32 H K^2 \gamma_i^2}{b} \E{L(\mathbf{w}_i^{\mathrm{ag}})} + R(\mathbf{w}^{\mathrm{opt}}) + \frac{32 H^2 K^2 D^2}{b} \sum_{i=1}^{n-1} \frac{\gamma_i^2}{\beta_i^2} \end{align*} } Dividing throughout by $\gamma_n(\beta_n -1)$ concludes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:amd}] First note that, since $\gamma \le \frac{1}{4H}$, for any $i$, $$ 2 H \gamma_i = 2 H \gamma i^{p} \le \frac{i^p}{2} \le \beta_i. $$ Also note that since $p \in [0,1]$, $$ \gamma_{i+1} (\beta_{i+1} -1) = \gamma \frac{i (i+1)^p}{2} \le \gamma \frac{i^p (i+1)}{2} = \gamma_i \beta_i. $$ Thus we have verified that the step sizes satisfy the conditions required by the previous lemma.
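(As a quick numerical sanity check of these two conditions, the following illustrative aside, which is not used anywhere in the argument, verifies them for arbitrary placeholder values of $H$, $p$ and the horizon, reading off $\gamma_i = \gamma i^p$ and $\beta_i = (i+1)/2$ from the two displays above.)
\begin{verbatim}
# Sanity check (illustrative only): step-size conditions for gamma_i = gamma*i^p
# and beta_i = (i+1)/2, assuming gamma <= 1/(4H).  H, p, horizon are placeholders.
H, p, horizon = 2.5, 0.5, 1000
gamma = 1.0 / (4 * H)
g = lambda i: gamma * i**p          # gamma_i
B = lambda i: (i + 1) / 2           # beta_i
for i in range(1, horizon):
    assert 2 * H * g(i) <= B(i) + 1e-12                      # 2 H gamma_i <= beta_i
    assert g(i + 1) * (B(i + 1) - 1) <= g(i) * B(i) + 1e-12  # gamma_{i+1}(beta_{i+1}-1) <= gamma_i beta_i
print("step-size conditions hold up to the horizon")
\end{verbatim}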
From the previous lemma we have that \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \frac{\gamma_1 (\beta_1 - 1)}{\gamma_n (\beta_n - 1)} L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_{1}) + \frac{32 H K^2 }{b \gamma_n (\beta_n - 1)} \sum_{i=1}^{n-1} \gamma_i^2 \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} + \frac{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ \gamma_n (\beta_n - 1)} + \frac{32 H^2 K^2 D^2}{b \gamma_n (\beta_n - 1)} \sum_{i=1}^{n-1} \frac{\gamma_i^2}{\beta_i^2}\\ & = \frac{64 H K^2 \gamma}{b n^p (n - 1)} \sum_{i=1}^{n-1} i^{2p}\ \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{\gamma n^p (n - 1)} + \frac{256 H^2 K^2 D^2 \gamma}{b n^p (n - 1)} \sum_{i=1}^{n-1} \frac{i^{2p}}{(i+1)^2}\\ & \le \frac{64 H K^2 \gamma (n-1)^{2p}}{b n^p (n - 1)} \sum_{i=1}^{n-1} \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{256 H^2 K^2 D^2 \gamma}{b (n - 1)^{p+1}} \sum_{i=1}^{n-1} \frac{1}{i^{2(1 - p)}}\\ & \le \frac{64 H K^2 \gamma }{b (n - 1)^{1 - p}} \sum_{i=1}^{n-1} \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{256 H^2 K^2 D^2 \gamma}{b (n - 1)^{p+1}} \sum_{i=1}^{n-1} \frac{1}{i^{2(1-p)}}\\ & \le \frac{64 H K^2 \gamma }{b (n - 1)^{1 - p}} \sum_{i=1}^{n-1} \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{256 H^2 K^2 D^2 \gamma}{b (n - 1)}\\ & \le \frac{64 H K^2 \gamma }{b (n - 1)^{1 - p}} \sum_{i=1}^{n-1} \left(\E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right) + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} \\ & ~~~~~+ \frac{256 H^2 K^2 D^2 \gamma}{b (n - 1)} \intertext{since $\gamma \le 1/4H$,} & \le \frac{64 H K^2 \gamma }{b (n - 1)^{1 - p}} \sum_{i=1}^{n-1} \left(\E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right) + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}}\\ & ~~~~~ + \frac{64 H K^2 D^2}{b (n - 1)} \mathbf{e}nd{align*} Thus we have shown that \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) & \le \frac{64 H K^2 \gamma }{b (n - 1)^{1 - p}} \sum_{i=1}^{n-1} \left(\E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt)\right) + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} \\ & ~~~~~ + \frac{64 H K^2 D^2}{b (n - 1)} \mathbf{e}nd{align*} Now if we use the notation $a_i = \E{L(\mathbf{e}nsuremath{\mathbf{w}}_i^{\mathrm{ag}})} - L(\mathbf{e}nsuremath{\mathbf{w}}opt)$, $A(i) = \frac{64 H K^2 \gamma }{b (i - 1)^{1 - p}}$ and $$ B(i) = \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (i - 1)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (i - 1)^{p+1}} + \frac{64 H K^2 D^2}{b (i - 1)} $$ Note that for any $i$ by smoothness, $a_i \le L_0 := \frac{3}{2} H D^2 + L(\mathbf{e}nsuremath{\mathbf{w}}opt)$ Also notice that \begin{align*} \sum_{i=n - M - 1}^n A(i) & = \frac{64 H K^2 \gamma }{b} 
\sum_{i=n-M-1}^n \frac{1}{(i - 1)^{1 - p}} \le \frac{64 H K^2 \gamma n^p}{b} \mathbf{e}nd{align*} Hence as long as \begin{align}\label{eq:con1} \gamma \le \frac{b}{64 H K^2 n^p}~, \mathbf{e}nd{align} $\sum_{i=n - M - 1}^n A(i) \le 1$. We shall ensure that the $\gamma$ we choose will satisfy the above condition. Now applying lemma \ref{ut:rec} we get that for any $M$, \begin{align}\label{eq:fromrec} a_n & \le e A(n)\left( a_0 (n-M) + \sum_{i= n-M-1}^n B(i) \right) + B(n) \mathbf{e}nd{align} Now notice that \begin{align*} \sum_{i= n-M-1}^n B(i) &= \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) }{b} \sum_{i= n-M-1}^n \frac{1}{(i-1)^p} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma} \sum_{i=n-M-1}^n \frac{1}{(i - 1)^{p+1}} + \frac{64 H K^2 D^2}{b} \sum_{i=n-M-1}^n \frac{1}{(i-1)}\\ & \le \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-M-2)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n-M-2)^{p+1}} + \frac{64 H K^2 D^2}{b (n-M-2)} + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{p+1}}{b}\\ &~~~~~~~~~~ + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n -M - 2)^{p}} + \frac{64 H K^2 D^2 \log\ n}{b } \mathbf{e}nd{align*} Plugging this back in Equation \ref{eq:fromrec} we conclude that \begin{align*} a_n & \le \frac{64 e H K^2 \gamma }{b (n - 1)^{1 - p}}\Big( L_0 (n-M) + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-M-2)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n-M-2)^{p+1}} \\ & ~~~~~ + \frac{64 H K^2 D^2}{b (n-M-2)} + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{p+1}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n -M - 2)^{p}} + \frac{64 H K^2 D^2 \log\ n}{b} \Big) \\ & ~~~~~~~~~+ \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{64 H K^2 D^2}{b (n - 1)}\\ & \le \frac{64 e H K^2 \gamma }{b (n - 1)^{1 - p}}\Big( L_0 (n-M - 2) + \frac{64 H K^2 D^2}{b (n-M-2)} + \frac{4 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n -M - 2)^{p}} + \frac{64 H K^2 D^2 \log(n)}{b}\\ & ~~~~~ + \frac{256 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{p+1}}{b} \Big) + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{4 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{64 H K^2 D^2}{b (n - 1)} \intertext{since $\gamma \le \frac{b}{64 H K^2 n^p}$ and $ \frac{64 H K^2 D^2}{b (n-M-2)} \le \frac{4 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{\gamma (n -M - 2)^{p}}$, } & \le \frac{64 e H K^2 \gamma }{b (n - 1)^{1 - p}}\Big( L_0 (n-M - 2) + \frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{\gamma (n -M - 2)^{p}} + \frac{64 H K^2 D^2 \log\ n}{b} \\ & ~~~~~ + \frac{256 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{p+1}}{b} \Big) + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{4 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{64 H K^2 D^2}{b (n - 1)} \mathbf{e}nd{align*} We now optimize over the choice of $M$ above by using $$ (n - M - 2) = \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma L_0}\right)^{\frac{1}{p+1}} $$ Ofcourse for the choice of $M$ to be valid we need that $n - M - 2 \le n$ which gives our second condition on $\gamma$ which is \begin{align}\label{eq:con2} \gamma \ge \frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{n^{p+1} L_0} \mathbf{e}nd{align} Plugging in this $M$ we get, \begin{align} a_n & \le \frac{64 e H K^2 
\gamma }{b (n - 1)^{1 - p}}\left( 2 L_0^{\frac{p}{p+1}} \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma }\right)^{\frac{1}{p+1}} + \frac{128 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{p+1}}{b} + \frac{64 H K^2 D^2 \log\ n}{b} \right) \notag \\ & ~~~~~+ \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{64 H K^2 D^2}{b (n - 1)}\notag \\ & = \frac{128 e H K^2 \gamma^{\frac{p}{p+1}} L_0^{\frac{p}{p+1}} \left(6 R(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)^{\frac{1}{p+1}} }{b (n - 1)^{1 - p}} + \frac{2 e (64 H K^2 \gamma)^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{2p}}{b^2 } + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} \notag \\ & ~~~~~ + \frac{2 e (64 H K^2)^2 D^2 \gamma \log\ n}{b^2 (n - 1)^{1 - p}} + \frac{64 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{64 H K^2 D^2}{b (n - 1)}\notag \intertext{however by condition in Equation \ref{eq:con1}, $\gamma \le \frac{b}{64 H K^2 n^p}$, hence} & \le \frac{348 H K^2 \gamma^{\frac{p}{p+1}} L_0^{\frac{p}{p+1}} \left(6 R(\mathbf{e}nsuremath{\mathbf{w}}opt)\right)^{\frac{1}{p+1}} }{b (n - 1)^{1 - p}} + \frac{2 e (64 H K^2)^2 D^2 \gamma \log\ n}{b^2 (n - 1)^{1 - p}} \notag\\ & ~~~~~+ \frac{348 H K^2 \gamma L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n - 1)^{p}}{b} + \frac{2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\gamma (n - 1)^{p+1}} + \frac{64 H K^2 D^2}{b (n - 1)} \label{eq:bound} \mathbf{e}nd{align} We shall try to now optimize the above bound w.r.t. $\gamma$, To this end set \begin{align}\label{eq:gamma} \gamma = \min \left\{\frac{1}{4 H},\ \sqrt{\frac{b R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{174 H K^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{2p+1}}},\ \left(\frac{b}{1044 H K^2 (n - 1)^{2p}}\right)^{\frac{p+1}{2p+1}} \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ L_0} \right)^{\frac{p}{2p+1}}\right\} \mathbf{e}nd{align} We first need to verify that this choice of $\gamma$ satisfies the conditions in Equation \ref{eq:con1} and \ref{eq:con2}. To this end, note that as for the condition in Equation \ref{eq:con1}, \begin{align*} \gamma \le \left(\frac{b}{1044 H K^2 (n - 1)^{2p}}\right)^{\frac{p+1}{2p+1}} \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{ L_0} \right)^{\frac{p}{2p+1}} \mathbf{e}nd{align*} and hence it can be easily verified that for $n \ge 3$, $\gamma \le \frac{b}{64 H K^2 n^p}$. On the other hand to verify the condition in Equation \ref{eq:con2}, we need to show that \begin{align*} \gamma & = \min \left\{\frac{1}{4 H},\ \sqrt{\frac{b R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{174 H K^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt) (n-1)^{2p+1}}},\ \left(\frac{b}{1044 H K^2 (n - 1)^{2p}}\right)^{\frac{p+1}{2p+1}} \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ L_0} \right)^{\frac{p}{2p+1}}\right\}\\ & \ge \frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{n^{p+1} \left(L_0 \right)} \mathbf{e}nd{align*} It can be verified that this condition is satisfied as long as, \begin{align*} n \ge \max\left\{3,\ \frac{87 K^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b} , \frac{783 K^2}{b} \right\} \mathbf{e}nd{align*} So in effect as long as $n \ge 3$ and sample size $n b \ge \max\{783 K^2, \frac{87 K^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{HD^2}\}$ the conditions are satisfied. 
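(As an illustrative aside, not part of the proof, the following snippet checks that the $\gamma$ of Equation \ref{eq:gamma} satisfies the conditions in Equations \ref{eq:con1} and \ref{eq:con2} for one arbitrary choice of placeholder constants meeting the above requirements on $n$ and $b$.)
\begin{verbatim}
# Illustrative check with placeholder constants; they satisfy n >= 3,
# n*b >= max(783 K^2, 87 K^2 L_opt/(H D^2)) and R_opt <= D^2/2.
H, K, D, b, n, p = 1.0, 1.0, 1.0, 10, 1000, 0.5
L_opt, R_opt = 0.5, 0.3
L0 = 1.5 * H * D**2 + L_opt
gamma = min(1 / (4 * H),
            (b * R_opt / (174 * H * K**2 * L_opt * (n - 1)**(2*p + 1)))**0.5,
            (b / (1044 * H * K**2 * (n - 1)**(2*p)))**((p + 1) / (2*p + 1))
            * (6 * R_opt / L0)**(p / (2*p + 1)))
assert gamma <= b / (64 * H * K**2 * n**p)        # Equation (con1)
assert gamma >= 6 * R_opt / (n**(p + 1) * L0)     # Equation (con2)
print("gamma =", gamma, "satisfies both conditions")
\end{verbatim}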
Now plugging in this choice of $\gamma$ into the bound in Equation \ref{eq:bound}, we get \begin{align*} a_n &\le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{2}{3} \left(\frac{6264 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L_0^{\frac{p}{p+1}} }{b (n-1)}\right)^{ \frac{p+1}{2p + 1}} + \frac{64 H K^2 D^2}{b (n - 1)} \\ & ~~~~~ + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{p+1}} + D^2 \log(n) \left(\frac{64 H K^2 }{b (n - 1)}\right)^{\frac{3p+1}{2p+1}} \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ L_0} \right)^{\frac{p}{2p+1}}\\ &\le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{2}{3} \left(\frac{6264 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L_0^{\frac{p}{p+1}} }{b (n-1)}\right)^{ \frac{p+1}{2p + 1}} + \frac{64 H K^2 D^2}{b (n - 1)} \\ & ~~~~~ + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{p+1}} + D^2 \log(n) \left(\frac{64 H K^2 }{b (n - 1)}\right)^{\frac{3p+1}{2p+1}} \left(\frac{6 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ L_0} \right)^{\frac{p}{2p+1}}\\ &\le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{2}{3} \left(\frac{6264 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L_0^{\frac{p}{p+1}} }{b (n-1)}\right)^{ \frac{p+1}{2p + 1}} + \frac{64 H K^2 D^2}{b (n - 1)} \\ & ~~~~~ + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{p+1}} + \left(\frac{\left(96 K^2\right)^{\frac{p}{p+1}} D^2 }{R(\mathbf{e}nsuremath{\mathbf{w}}opt)}\right)^{\frac{p+1}{2p+1}} \left(\frac{64 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{b (n - 1)}\right) \frac{\log(n)}{\left(b (n - 1)\right)^{\frac{p}{2p+1}}} \\ & \le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{4176 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)} \left(\frac{L_0}{6264 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}\right)^{\frac{p}{2p+1}} \left(b (n-1)\right)^{ \frac{p}{2p + 1}} + \frac{64 H K^2 D^2}{b (n - 1)} \\ & ~~~~~ + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{p+1}} + \left(\frac{64 H K^2 D^2 }{b (n - 1)}\right) \frac{\log(n)}{\left(b (n - 1)\right)^{\frac{p}{2p+1}} } \left(\frac{384 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ L_0} \right)^{\frac{p}{2p+1}}\\ & \le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{4176 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)} \left(\frac{L_0}{6264 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}\right)^{\frac{p}{2p+1}} \left(b (n-1)\right)^{ \frac{p}{2p + 1}} + \frac{64 H K^2 D^2}{b (n - 1)} \\ & ~~~~~ + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{p+1}} + \left(\frac{64 H K^2 D^2 }{b (n - 1)}\right) \frac{\log(n)}{\left(b (n - 1)\right)^{\frac{p}{2p+1}} } \mathbf{e}nd{align*} Picking $$ p = \min\left\{\max\left\{\frac{\log(b)}{2 \log(n-1)} , \frac{\log \log(n)}{2\left(\log(b(n-1)) - \log \log(n)\right)}\right\} ,1\right\} $$ we get the bound, \begin{align*} a_n &\le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \left(\frac{4176 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\sqrt{b} (n-1)} + \frac{4176 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) \sqrt{\log(n)}}{b (n-1)} \right) \left(\frac{L_0}{6264 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}\right)^{\frac{1}{3}} \\ & ~~~~~ + \frac{120 H K^2 D^2}{b 
(n - 1)} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{2}} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ \sqrt{b} (n - 1)} + \frac{64 H K^2 D^2 \sqrt{\log(n)}}{b (n - 1)} \\ &\le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \left(\frac{4176 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}{\sqrt{b} (n-1)} + \frac{4176 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) \sqrt{\log(n)}}{b (n-1)} \right) \left(\frac{L_0}{6264 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt)}\right)^{\frac{1}{3}} \\ & ~~~~~ + \frac{120 H K^2 D^2}{b (n - 1)} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{2}} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ \sqrt{b} (n - 1)} + \frac{64 H K^2 D^2 \sqrt{\log(n)}}{b (n - 1)} \\ &\le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{454 (H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt))^{2/3} L_0^{\frac{1}{3}} }{\sqrt{b} (n-1)} + \frac{454 (H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt))^{2/3} L_0^{\frac{1}{3}} \sqrt{\log(n)}}{b (n-1)} \\ & ~~~~~ + \frac{120 H K^2 D^2}{b (n - 1)} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{2}} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ \sqrt{b} (n - 1)} + \frac{64 H K^2 D^2 \sqrt{\log(n)}}{b (n - 1)} \mathbf{e}nd{align*} Recall that $L_0 = \frac{3}{2} H D^2 + L(\mathbf{e}nsuremath{\mathbf{w}}opt)$. Now note that if $L(\mathbf{e}nsuremath{\mathbf{w}}opt) \le H K^2 D^2/2$ then $L_0 \le 2 H K^2 D^2$, on the other hand if $L(\mathbf{e}nsuremath{\mathbf{w}}opt) > H K^2 D^2/2$ then $(H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt))^{2/3} L_0^{\frac{1}{3}} \le \sqrt{4 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}$. Hence we can conclude that, \begin{align*} a_n &\le \sqrt{\frac{2784 H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{454 H K^2 (R(\mathbf{e}nsuremath{\mathbf{w}}opt))^{2/3} (2 D^2)^{\frac{1}{3}} }{\sqrt{b} (n-1)} + \frac{454 H K^2 (R(\mathbf{e}nsuremath{\mathbf{w}}opt))^{2/3} (2 D^2)^{\frac{1}{3}} \sqrt{\log(n)}}{b (n-1)} \\ & ~~~~~ + \frac{120 H K^2 D^2}{b (n - 1)} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{2}} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ \sqrt{b} (n - 1)} + \frac{64 H K^2 D^2 \sqrt{\log(n)}}{b (n - 1)} + \frac{908 \sqrt{H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)} }{\sqrt{b} (n-1)} \\ & ~~~~~ + \frac{908 \sqrt{H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt) \log(n)}}{b (n-1)} \mathbf{e}nd{align*} Since $n > 783 K^2$ and $R(\mathbf{e}nsuremath{\mathbf{w}}opt) \le D^2/2$ we can conclude that \begin{align*} a_n &\le 164 \sqrt{\frac{H K^2 R(\mathbf{e}nsuremath{\mathbf{w}}opt) L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{580 H K^2 (R(\mathbf{e}nsuremath{\mathbf{w}}opt))^{2/3} D^{\frac{2}{3}} }{\sqrt{b} (n-1)} + \frac{545 H K^2 D^2 \sqrt{\log(n)}}{b (n - 1)} + \frac{8 H R(\mathbf{e}nsuremath{\mathbf{w}}opt) }{ (n - 1)^{2}} \mathbf{e}nd{align*} This concludes the proof. \mathbf{e}nd{proof} \begin{proof}[Proof of Theorem \ref{thm:ag}] For Euclidean case $R(\mathbf{e}nsuremath{\mathbf{w}}) = \frac{1}{2} \norm{\mathbf{e}nsuremath{\mathbf{w}}}_2^2$ and $K = \sqrt{\sup_{\mathbf{e}nsuremath{\mathbf{w}} : \norm{\mathbf{e}nsuremath{\mathbf{w}}}_2 \le 1} \norm{\mathbf{e}nsuremath{\mathbf{w}}}^2} = 1$. 
Plugging these in the previous theorem (along with appropriate step size) we get \begin{align*} \E{L(\mathbf{e}nsuremath{\mathbf{w}}^{\mathrm{ag}}_n)} - L(\mathbf{e}nsuremath{\mathbf{w}}opt) &\le 116 \sqrt{\frac{H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^2 L(\mathbf{e}nsuremath{\mathbf{w}}opt)}{b (n-1)}} + \frac{366 H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^{4/3} D^{\frac{2}{3}} }{\sqrt{b} (n-1)} + \frac{545 H D^2 \sqrt{\log(n)}}{b (n - 1)} + \frac{4 H \norm{\mathbf{e}nsuremath{\mathbf{w}}opt}^2 }{ (n - 1)^{2}} \mathbf{e}nd{align*} The second inequality is a direct consequence of the fact that $\norm{\mathbf{e}nsuremath{\mathbf{w}}opt} \le D$. \mathbf{e}nd{proof} \subsection{Some Technical Lemmas} \begin{lemma}\label{cor:eqsmooth} Denote $K:= \sqrt{2 \sup_{\mathbf{e}nsuremath{\mathbf{w}} : \norm{\mathbf{e}nsuremath{\mathbf{w}}} \le 1} R(\mathbf{e}nsuremath{\mathbf{w}})}$, then for any $\mathbf{e}nsuremath{\mathbf{x}}_{1},\ldots,\mathbf{e}nsuremath{\mathbf{x}}_b$ mean zero vectors drawn iid from any fixed distribution, $$ \E{\norm{\frac{1}{b} \sum_{t=1}^b \mathbf{e}nsuremath{\mathbf{x}}_t}_*^2} \le \frac{K^2}{b^2} \sum_{t=1}^b \E{\|\mathbf{e}nsuremath{\mathbf{x}}_t\|_*^2} $$ \mathbf{e}nd{lemma} \begin{proof} We start by noting that \begin{align} \norm{\frac{1}{b} \sum_{t=1}^i \mathbf{e}nsuremath{\mathbf{x}}_t}_*^2 & = \left( \sup_{\mathbf{e}nsuremath{\mathbf{w}} :\|\mathbf{e}nsuremath{\mathbf{w}}\| \le 1 }\ip{\mathbf{e}nsuremath{\mathbf{w}}}{\frac{1}{b} \sum_{t=1}^i \mathbf{e}nsuremath{\mathbf{x}}_t } \right)^2 \notag \\ & = \left(\inf_{\alpha} \frac{1}{\alpha} \sup_{\mathbf{e}nsuremath{\mathbf{w}} :\|\mathbf{e}nsuremath{\mathbf{w}}\| \le 1 }\ip{\mathbf{e}nsuremath{\mathbf{w}}}{ \frac{\alpha}{b} \sum_{t=1}^i \mathbf{e}nsuremath{\mathbf{x}}_t } \right)^2 \notag \\ & \le \left(\inf_{\alpha}\left\{ \frac{1}{\alpha} \sup_{\mathbf{e}nsuremath{\mathbf{w}} :\|\mathbf{e}nsuremath{\mathbf{w}}\| \le 1 } R(\mathbf{e}nsuremath{\mathbf{w}}) + \frac{1}{\alpha} R^*\left(\frac{\alpha}{b} \sum_{t=1}^i \mathbf{e}nsuremath{\mathbf{x}}_t\right) \right\}\right)^2 \notag \\ & = \left(\inf_{\alpha}\left\{ \frac{K^2}{2 \alpha} + \frac{1}{\alpha} R^*\left(\frac{\alpha}{b} \sum_{t=1}^i \mathbf{e}nsuremath{\mathbf{x}}_t\right) \right\}\right)^2 \label{eq:nrmsmt} \mathbf{e}nd{align} where the step before last was due to Fenchel-Young inequality and $R^*$ is simply the convex conjugate of $R$. Now For any $i \in [b]$ define $S_i = R^*\left(\frac{\alpha}{b} \sum_{t=1}^i \mathbf{e}nsuremath{\mathbf{x}}_t\right)$. We claim that \begin{align*} \E{S_i} \le \E{S_{i-1}} + \frac{\alpha^2}{2 b^2}\E{\|\mathbf{e}nsuremath{\mathbf{x}}_i\|_*^2} \mathbf{e}nd{align*} To see this note that since $R$ is $1$-strongly convex w.r.t. $\norm{\cdot}$, by duality $R^*$ is $1$-strongly smooth w.r.t. $\norm{\cdot}_*$ and so for any $i \in [b]$, \begin{align*} R^*\left(\frac{1}{b} \sum_{t=1}^i \mathbf{e}nsuremath{\mathbf{x}}_t\right) & \le R^*\left(\frac{1}{b} \sum_{t=1}^{i-1} \mathbf{e}nsuremath{\mathbf{x}}_t\right) + \frac{1}{2 b} \ip{\nabla R^*\left(\frac{1}{b} \sum_{t=1}^{i-1} \mathbf{e}nsuremath{\mathbf{x}}_t\right)}{\mathbf{e}nsuremath{\mathbf{x}}_i} + \frac{\alpha^2}{2 b^2} \norm{\mathbf{e}nsuremath{\mathbf{x}}_i}_*^2 \mathbf{e}nd{align*} taking expectation w.r.t. 
$\mathbf{e}nsuremath{\mathbf{x}}_i$ and noting that $\E{\mathbf{e}nsuremath{\mathbf{x}}_i} = 0$ by assumption we see that \begin{align*} \Es{\mathbf{e}nsuremath{\mathbf{x}}_b}{S_i} & \le S_{i-1} + \frac{\alpha^2}{2 b^2} \Es{\mathbf{e}nsuremath{\mathbf{x}}_i}{\norm{\mathbf{e}nsuremath{\mathbf{x}}_i}_*^2} \mathbf{e}nd{align*} Taking expectation we get as claimed that : $$ \E{S_i} \le \E{S_{i-1}} + \frac{\alpha^2}{2 b^2}\E{\|\mathbf{e}nsuremath{\mathbf{x}}_i\|_*^2} $$ Now using this above recursively (and noting that $S_0 = 0$ ) we conclude that $$ \E{S_i} \le \frac{\alpha^2}{2 b^2} \sum_{t=1}^{i} \E{\|\mathbf{e}nsuremath{\mathbf{x}}_t\|_*^2} $$ Plugging this back in Equation \ref{eq:nrmsmt} we get \begin{align*} \E{\norm{\frac{1}{b} \sum_{t=1}^b \mathbf{e}nsuremath{\mathbf{x}}_t}_*^2} & \le \left(\inf_{\alpha}\left\{ \frac{K^2}{2 \alpha} + \frac{\alpha}{2 b^2} \sum_{t=1}^{i} \E{\|\mathbf{e}nsuremath{\mathbf{x}}_t\|_*^2} \right\}\right)^2\\ & = \left(\inf_{\alpha}\left\{ \frac{K^2}{2 \alpha} + \frac{\alpha}{2 b^2} \sum_{t=1}^{i} \E{\|\mathbf{e}nsuremath{\mathbf{x}}_i\|_*^2} \right\}\right)^2 = \frac{K^2}{b^2} \sum_{t=1}^{i} \E{\|\mathbf{e}nsuremath{\mathbf{x}}_t\|_*^2} \mathbf{e}nd{align*} \mathbf{e}nd{proof} \begin{lemma}\label{ut:rec} Consider a sequence of non-negative number $a_{1},\ldots,a_n \in [0,a_0]$ that satisfy $$ a_n \le A(n) \sum_{i=1}^{n-1} a_i + B(n) $$ where $A$ is decreasing in $n$. For such a sequence, for any $m \in [n]$, as long as $A(i) \le 1/2$ for any $i \ge n-m-1$ and $\sum_{i=n-m-1}^{n} A(i) \le 1$ then \begin{align*} a_n & \le e A(n)\left( a_0 (n-m) + \sum_{i= n-m-1}^n B(i) \right) + B(n) \mathbf{e}nd{align*} \mathbf{e}nd{lemma} \begin{proof} We shall unroll this recursion. Note that \begin{align*} a_n & \le A(n) \sum_{i=1}^{n-1} a_i + B(n)\\ & = A(n) \left( \sum_{i=1}^{n-2} a_i + a_{n-1} \right) + B(n)\\ & \le A(n) \left( \sum_{i=1}^{n-2} a_i + A(n-1) \sum_{i=1}^{n-2} a_i + B(n-1) \right) + B(n)\\ & = A(n) (1 + A(n-1)) \sum_{i=1}^{n-2} a_i + B(n) + A(n) B(n-1)\\ & \le A(n) (1 + A(n-1)) \left(\sum_{i=1}^{n-3} a_i + A(n-2) \sum_{i=1}^{n-3} a_i + B(n-2) \right) + + B(n) + A(n) B(n-1)\\ & = A(n) (1 + A(n-1)) (1 + A(n-2)) \sum_{i=1}^{n-3} a_i + B(n) + A(n) B(n-1) + A(n) (1 + A(n-1)) B(n-2) \mathbf{e}nd{align*} Continuing so upto $m$ steps we get \begin{align}\label{eq:rec} a_n \le A(n) \left( \prod_{i=1}^{m-1} (1 + A(n-i)) \right) \sum_{i=1}^{n-m} a_i + B(n) + A(n) \left( \sum_{i=1}^{m-1} \left( \prod_{j=1}^{i-1} (1 + A(n-j)) \right) B(n-i) \right) \mathbf{e}nd{align} We would now like to bound in general the term $\prod_{i=1}^{m-1} (1 + A(n-i)) $. To this extant note that, $$ \prod_{i=1}^{m-1} (1 + A(n-i)) = \mathbf{e}xp\left( \sum_{i=1}^{m-1} \log(1 + A(n-i))\right) $$ Now assume $A(i) \le 1/2$ for all $i \ge n-m-1$ so that $\log(1 + A(n-i)) \le A(n-i)$. 
We get $$ \prod_{i=1}^{m-1} (1 + A(n-i)) \le \mathbf{e}xp\left( \sum_{i=1}^{m-1} A(n-i)\right) $$ Now if $\sum_{i=n-m-1}^{n} A(i) \le 1$ then we can conclude that $$ \prod_{i=1}^{m-1} (1 + A(n-i)) \le e $$ Plugging this in Equation \ref{ut:rec} we get \begin{align*} a_n & \le e A(n)\left( \sum_{i=1}^{n-m} a_i + \sum_{i=1}^{m-1} B(n-i) \right) + B(n) \\ & = e A(n)\left( \sum_{i=1}^{n-m} a_i + \sum_{i= n-m-1}^n B(i) \right) + B(n) \mathbf{e}nd{align*} Now if for each $i \le n$, $a_i \le a_0$ then we see that \begin{align*} a_n & \le e A(n)\left( a_0 (n-m) + \sum_{i= n-m-1}^n B(i) \right) + B(n) \mathbf{e}nd{align*} Hence we conclude that as long as $\sum_{i=n-m-1}^{n} A(i) \le 1$ \begin{align*} a_n & \le e A(n)\left( a_0 (n-m) + \sum_{i= n-m-1}^n B(i) \right) + B(n) \mathbf{e}nd{align*} \mathbf{e}nd{proof} \mathbf{e}nd{document}
\begin{document} \title{Efficient Indexing of Necklaces and Irreducible Polynomials over Finite Fields} \author{Swastik Kopparty\thanks{Department of Computer Science and Department of Mathematics, Rutgers University. Research supported in part by a Sloan Fellowship and NSF grant CCF-1253886. Email: \texttt{[email protected]}.}\and Mrinal Kumar\thanks{Department of Computer Science, Rutgers University. Research supported in part by NSF grant CCF-1253886. Email: \texttt{[email protected]}.}\and Michael Saks\thanks{Department of Mathematics, Rutgers University. Research supported in part by NSF grants CCF-0832787 and CCF-1218711. Email: \texttt{[email protected]}.}} \maketitle \begin{abstract} We study the problem of {\em indexing} irreducible polynomials over finite fields, and give the first efficient algorithm for this problem. Specifically, we show the existence of $\mathsf{poly}(n, \log q)$-size circuits that compute a bijection between $\{1, \ldots, |S|\}$ and the set $S$ of all irreducible, monic, univariate polynomials of degree $n$ over a finite field $\mathbb{F}_q$. This has applications in pseudorandomness, and answers an open question of Alon, Goldreich, H{\aa}stad and Peralta~\cite{AGHP}. Our approach uses a connection between irreducible polynomials and necklaces (equivalence classes of strings under cyclic rotation). Along the way, we give the first efficient algorithm for indexing necklaces of a given length over a given alphabet, which may be of independent interest. \end{abstract} \section{Introduction} For a finite field $\mathbb{F}_q$ and an integer $n$, let $S$ be the set of all irreducible polynomials in one variable over $\mathbb{F}_q$ of degree exactly $n$. There is a well-known formula for $|S|$ (which is approximately $\frac{q^n}{n}$). We consider the problem of giving an efficiently computable {\em indexing} of irreducible polynomials, i.e., finding a bijection $f: \{1, \ldots, |S| \} \to S$ such that $f(i)$ is computable in time $\mathsf{poly}(\log |S|) = \mathsf{poly}(n \log q)$. Our main result is that indexing of irreducible polynomials can be done efficiently given $O(n \log q)$ bits of advice. This answers a problem posed by Alon, Goldreich, H{\aa}stad and Peralta~\cite{AGHP}, and is the polynomial analogue of the well-known problem of ``giving a formula for the $n$-bit primes''. Note that today it is not even known (in general) how to produce a single irreducible polynomial of degree $n$ in time $\mathsf{poly}(n \log q)$ without the aid of either advice or randomness. The main technical result we show en route is an efficient indexing algorithm for {\em necklaces}. Necklaces are equivalence classes of strings modulo cyclic rotation. We give a $\mathsf{poly}(n \log |\Sigma|)$-time computable bijection $g: \{1, 2, \ldots, |\mathcal N|\} \to \mathcal N$, where $\mathcal N$ is the set of necklaces of length $n$ over the alphabet $\Sigma$. \iffalse{ For a positive integer $a$ and a prime $n$, Fermat's little theorem states that $n$ divides $a^n - a$. A beautiful elementary proof of this, due to Golomb~\cite{golomb-FLT}, goes as follows. Consider elements of $[a]^n$ as $n$-letter strings, and call two strings equivalent if one can be obtained from the other by cyclic rotation. There are $a$ strings which are in equivalence classes of size $1$; namely the constant strings. The primality of $n$ easily implies that the remaining $a^n - a$ strings of $[a]^n$ each lie in equivalence classes with exactly $n$ elements.
Thus $(a^n -a)$ is divisible by $n$, and the quantity $(a^n-a)/n$ counts the number of {\em necklaces} of length $n$ over an alphabet of size $a$. This paper gives the first efficient algorithms for indexing necklaces. For prime $n$, our results give a $\mathsf{poly}(n, \log a)$-time computable bijection between $\{1, 2, \ldots, (a^n -a)/n \}$ and the set of all necklaces of length $n$ over an alphabet of size $a$. As a consequence of our indexing algorithm for necklaces, we show the existence of polynomial size circuits for indexing {\em irreducible polynomials over finite fields} (this answers a problem posed by Alon, Goldreich, H{\aa}stad and Peralta~\cite{AGHP}). This latter problem is the polynomial analogue of the well-known problem of ``giving a formula for the $n$-bit primes'' (with a little advice). Next, we will formally define indexing problems, the specific indexing problems that we study, and then state our main results. }\fi \subsection{The indexing problem} We define an {\em indexing} of a finite set $S$ to be a bijection from the set $\{1,\ldots,|S|\}$ to $S$. Let us formalize indexing as a computational problem. Suppose that $L$ is an arbitrary language over alphabet $\Sigma$ and let $L^n$ be the set of strings of $L$ of length $n$. We want to ``construct'' an indexing function $A^n$ for each of the sets $L^n$. Formally, this means giving an algorithm $A$ which takes as input a size parameter $n$ and an index $j$ and outputs $A^n(j)$, so that the following properties hold for each $n$: \begin{itemize} \item $A^n$ maps the set $\{1,\ldots,|L^n|\}$ bijectively to $L^n$. \item If $j > |L^n|$ then $A^n(j)$ returns {\bf too large}. \end{itemize} An indexing algorithm is considered to be efficient if its running time is $\mathsf{poly}(n)$. A closely related problem is {\em reverse-indexing}. A reverse-indexing of $L$ is a bijection from $L^n$ to $\{1, \ldots, |L^n|\}$, and we say it is efficient if it can be computed in time $\mathsf{poly}(n)$. We can use the above formalism for languages to formulate the indexing and reverse-indexing problems for any combinatorial structure, such as permutations, graphs, partitions, etc., by using standard efficient encodings of such structures by strings. \subsection{Indexing, enumeration, counting and ranking} Indexing is closely related to the well-studied {\em counting}, {\em enumeration} and {\em ranking} problems for $L$. The counting problem is to give an algorithm that, on input $n$, outputs the size of $L^n$. The enumeration problem is to give an algorithm that, on input $n$, outputs a list containing all elements of $L^n$. A counting or enumeration algorithm is said to be efficient if it runs in time $\mathsf{poly}(n)$ or $|L^n| \cdot \mathsf{poly}(n)$, respectively. Other important algorithmic problems associated with combinatorial objects include the {\em ranking} and {\em unranking} problems. For the ranking problem, one is given an ordering of $L^n$ (such as the lexicographic order) and the goal is to compute the rank (under this order) of a given element of $L^n$. For the unranking problem, one has to compute the inverse of this ranking map.
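To make the indexing/unranking viewpoint concrete, here is a small illustrative sketch (not taken from this paper, and not about necklaces) that unranks $k$-element subsets of $\{0,\ldots,m-1\}$ in colexicographic order via the combinatorial number system; the point is that the counting recurrence behind $\binom{m}{k}$ immediately yields an efficient indexing algorithm.
\begin{verbatim}
from math import comb

def index_subset(j, m, k):
    """Map j in {1,...,C(m,k)} to the j-th k-subset of {0,...,m-1} in colex order."""
    assert 1 <= j <= comb(m, k), "too large"
    j -= 1                      # switch to 0-based rank
    subset = []
    for i in range(k, 0, -1):   # greedily choose the largest element first
        c = i - 1
        while comb(c + 1, i) <= j:
            c += 1
        subset.append(c)
        j -= comb(c, i)
    return sorted(subset)

# round-trip check: the map hits every 3-subset of {0,...,6} exactly once
assert len({tuple(index_subset(j, 7, 3)) for j in range(1, comb(7, 3) + 1)}) == comb(7, 3)
\end{verbatim}
The same pattern, a counting recurrence combined with a greedy descent, underlies the standard indexing algorithms for subsets, permutations, partitions and trees mentioned below.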
It is easy to see that unranking algorithms for any ordering are automatically indexing algorithms, and ranking algorithms for any ordering are automatically reverse-indexing algorithms\footnote{We use the terms indexing and reverse-indexing instead of the terms unranking and ranking to make an important distinction: in indexing and reverse-indexing the actual bijection between $\{1, \ldots, |S| \}$ and $S$ is of no importance whatsoever, but in ranking and unranking the actual bijection is part of the problem. We feel this difference is worth highlighting, and hence we introduced the new terms indexing and reverse-indexing for this purpose. Note that some important prior work on ranking/unranking distinguishes between these notions~\cite{MR-perm-rank}.}. There is well developed complexity theory for counting problems, starting with the fundamental work of Valiant~\cite{Valiant}. For combinatorial structures, counting problems are (of course) at the heart of combinatorics, and many basic identities in combinatorics (such as recurrence relations that express the number of structures of a particular size in terms of the number of such structures of smaller sizes) can also be viewed as giving efficient counting algorithms for these structures. The enumeration and ranking problems for combinatorial structures has also received a large amount of attention. See the books~\cite{NW78, stinson1999combinatorial, ruskey2003combinatorial, arndt2011matters} for an overview of some of the work on this topic. Counting and enumeration can be easily reduced to indexing: Given an indexing algorithm $A$ we can compute $|L^n|$ by calling $A^n(j)$ on increasing powers of 2 until we get the answer `{\bf too large}' and then do binary search to determine the largest $j$ for which $A^n(j)$ is not too large. Enumeration can be done by just running the indexing algorithm on the integers $1, 2, \ldots$ until we get the answer {\bf too large}. \iffalse{[NOTE FROM MIKE:{\bf As I went through this part, I was reminded of the Garsia-Milne involution principle. I never really learned this, but I recall that it is a way to construct ``explicit'' bijections when you have an inclusion-exclusion proof that two sets have the same size. Doron has written about and used this. At some point we should find out about it, and see what it has to say about the problems we're looking at.}] }\fi Conversely, in many cases, such as for subsets, permutations, set partitions, integer partitions, trees, spanning trees, (and many many more) the known counting algorithms can be modified to give efficient indexing (and hence enumeration) algorithms. This happens, for example, when the counting problem is solved by a recurrence relation that is proved via a bijective proof. However, it seems that not all combinatorial counting arguments lead to efficient indexing algorithms. A prime example of this situation is when we have a finite group acting on a finite set, and the set we want to count is the set of orbits of the action. The associated counting problem can be solved using the Burnside counting lemma, and there seems to be no general way to use this to get an efficient indexing algorithm. This leads us to one of the indexing problems studied here: Fix an alphabet $\Sigma$ and consider two strings $x$ and $y$ in $\Sigma^n$ to be equivalent if one is a rotation of the other, i.e. we can find strings $x^1,x^2$ such that $x=x^1 x^2$ and $y=x^2x^1$ (here $uv$ denotes the concatenation of the strings $u$ and $v$). 
The equivalence classes of strings are precisely the orbits under the natural action of the cyclic group $\mathbb{Z}_n$ on $\Sigma^n$. These equivalence classes are often called {\em necklaces} because if we view the symbols of a string as arranged in a circle, then equivalent strings give rise to the same arrangement. We are interested in the problem of efficiently indexing necklaces. We apply the indexing algorithm for necklaces to the problem of indexing irreducible polynomials over a finite field. \subsection{Main results} Our main result is an efficient algorithm for indexing irreducible polynomials. \begin{theorem} Let $q$ be a prime power, and let $n\geq 1$ be an integer. Let $I_{q,n}$ be the set of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$. There is an indexing algorithm for $I_{q,n}$, which takes $O(n \log q)$ bits of advice and runs in $\mathsf{poly}(n, \log q)$ time. \end{theorem} We remark that it is not known today how to deterministically produce (without the aid of advice or randomness) even a single irreducible polynomial of degree $n$ in time $\mathsf{poly}(n \log q)$ for all choices of $n$ and $q$. Our result shows that once we take a little bit of advice, we can produce not just one, but all irreducible polynomials. For constant $q$, where it is known how to deterministically construct a single irreducible polynomial in $\mathsf{poly}(n)$ time without advice~\cite{Shoupirred}, our indexing algorithm can be made to run with just $\mathsf{poly}(\log n)$ bits of advice. Using a known correspondence~\cite{Golumb} between necklaces and irreducible polynomials over finite fields, indexing irreducible polynomials reduces to the problem of indexing necklaces. Our main technical result (of independent interest) is an efficient algorithm for this latter problem. \begin{theorem}~\label{thm:indexing} There is an algorithm for indexing necklaces of length $n$ over the alphabet $\{1,\ldots,q\}$, which runs in time $\mathsf{poly}(n\log q)$. \end{theorem} Our methods also give an efficient reverse-indexing algorithm for necklaces (but unfortunately this does not lead to an efficient reverse-indexing algorithm for irreducible polynomials; this has to do with the open problem of efficiently computing the discrete logarithm). \begin{theorem}~\label{thm:reverse_indexing} There is an algorithm for reverse-indexing necklaces of length $n$ over the alphabet $\{1,\ldots,q\}$, which runs in time $\mathsf{poly}(n\log q)$. \end{theorem} The indexing algorithm for irreducible polynomials can be used to make a classical $\epsilon$-biased set construction from~\cite{AGHP}, based on linear-feedback shift register sequences, constructible with logarithmic advice (to put it on par with the other constructions in that paper). It can also be used to make the explicit subspace designs of~\cite{GK-subs-design} very explicit (with small advice). Agrawal and Biswas~\cite{AB03} gave a construction of a family of nearly-coprime polynomials, and used this to give randomness-efficient black-box polynomial identity tests. The ability to efficiently index irreducible polynomials enables one to do this even more randomness-efficiently (using a small amount of advice).
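To make the correspondence concrete: the number of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ and the number of aperiodic necklaces (equivalently, Lyndon words) of length $n$ over a $q$-letter alphabet are both given by $\frac{1}{n}\sum_{d \mid n}\mu(d)\,q^{n/d}$. The following illustrative sketch (not part of the paper's algorithm) checks this count against brute-force enumeration of Lyndon words for tiny parameters.
\begin{verbatim}
from itertools import product

def mobius(d):                  # Moebius function, by trial division (fine for tiny d)
    out, k = 1, 2
    while k * k <= d:
        if d % k == 0:
            d //= k
            if d % k == 0:
                return 0
            out = -out
        k += 1
    return -out if d > 1 else out

def lyndon_count_formula(q, n):
    return sum(mobius(d) * q**(n // d) for d in range(1, n + 1) if n % d == 0) // n

def lyndon_count_bruteforce(q, n):
    # a Lyndon word is strictly smaller than each of its proper rotations
    def is_lyndon(w):
        return all(w < w[i:] + w[:i] for i in range(1, n))
    return sum(1 for w in product(range(q), repeat=n) if is_lyndon(w))

for q, n in [(2, 6), (3, 4), (5, 3)]:
    assert lyndon_count_formula(q, n) == lyndon_count_bruteforce(q, n)
print("necklace-counting formula matches brute force")
\end{verbatim}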
Similarly, the string fingerprinting algorithm by Rabin~\cite{Rabin81}, which is based on choosing a random irreducible polynomial can be made more randomness efficient by choosing the random irreducible polynomial via first choosing a random index and then indexing the corresponding irreducible polynomial using our indexing algorithm. This application also requires a small amount of advice. As another application of the indexing algorithm for necklaces, we give a $\mathsf {Prefix}oly(n)$ time algorithm for computing any given entry of the $k \times 2^n$ generator matrix matrix or the $(2^n-k) \times 2^n$ parity check matrix of BCH codes for all values of the designed distance (this is the standard notion of strong explicitness for error-correcting codes). Earlier, it was only known how to compute this entry explicitly for very small values of the designed distance (which is usually the setting where BCH codes are used). \iffalse Conversely, indexing can be solved given the solution to a stronger enumeration problem. Consider the lexicographic order $\mathsf {Prefix}rec$ on $\Sigma^n$ and suppose that we have an algorithm $C$ that given $n$ and string $x \in \Sigma^n$ counts the number of strings in $L^n$ that are less than or equal to $x$. Then given a number $j$ we declare $j$ to be too large if $j>C^n(1^n)$ and otherwise using binary search (with $n$ calls to $C^n$) we can find the smallest string $x$ such that $C^n(x)=j$, and declare $x$ to be the string of $L^n$ indexed by $j$. This general approach can be used to given indexing algorithms for many combinatorial structures such as permutations, set partitions, partitions of an integer, trees,etc. However, for some types of combinatorial objects there is no simple using binary search then with $n$ calls to $C$ we can find the smallest string $x$ by $O(\log|L^n|)$ calls to an indexing algorithm: Call Consider a general type of combinatorial structure, such as permutations, graphs, and set partitions. Structures of a given type are typically class Let $(S_n:n \in mathbb{N})$ be a family of sets where $S_n$ is a finite set represents the set of objects of a given type having ``size'' $n$; for example $S_n$ might be the set of all permutations of the set $\{1,\ldot,n\}$. \fi \iffalse Let $S$ be a finite set of interest, with $|S| = N$. We define an {\em indexing} of $S$ is to be a bijection $f: [N] \to S$. We say $f$ is an efficient indexing of $S$ if $f$ can be computed by an algorithm that runs in time $\mathsf {Prefix}oly(\log(N))$. Our \fi \iffalse A very closely related algorithmic problem is that of {\em enumeration}. Here we want to design a fast algorithm to enumerate all the elements of $S$. There is a very extensive literature on fast enumeration algorithms for a variety of interesting combinatorial objects (see the books ~\cite{ , } and the references therein). It is natural to wonder whether efficient indexing is possible for all these combinatorial objects. For many natural combinatorial objects (sets, permutations, partitions, partitions of an integer, trees, etc.), efficient indexing turns out to be possible and quite easy: this follows almost immediately from the bijective proofs of the recurrence relations which count these objects. In some cases, the indexing algorithm is well known, such as the Prufer codes for indexing the $n^{n-2}$ trees on $n$ labelled nodes (ANOTHER EXAMPLE?). However for some kinds of combinatorial objects, an efficient indexing algorithm does not follow from the known counting arguments. 
The most prominent of these has to do with groups acting on sets, and the problem of indexing orbits of this group action. what does it mean to index the equivalence classes of an equivalence relation on a set special interesting case: indexing the orbits of a group acting on a set reverse indexing \fi \mathsf {Suffix}ubsection{Related Work} There is an extensive literature on enumeration algorithms for combinatorial objects (see the books \cite{ruskey2003combinatorial,knuth-4,stinson1999combinatorial, NW78,arndt2011matters}). Some of these references discuss necklaces in depth, and some also discuss the ranking/unranking problems for various combinatorial objects. The lexicographically smallest element of a rotation class is called a {\em Lyndon word}, and much is known about them. Algorithmically, the problem of enumerating/indexing necklaces is essentially equivalent to the problem of enumerating/indexing Lyndon words. Following a long line of work \cite{neckgen1,neckgen2,neckgen3,neckgen4,neckgen5,RS-necklace,CRSSM00}, we now know linear time enumeration algorithms for Lyndon words/necklaces. In~\cite{martinez2004efficient} and~\cite{ruskey2003combinatorial}, it was noted that the problem of efficient ranking/unranking of the lexicographic order on Lyndon words is an open problem. Our indexing algorithms in fact give a solution to this problem too: we get an efficient ranking/unranking algorithm for the lexicographic order on Lyndon words. Recent work of Andoni, Goldberger, McGregor and Porat~\cite{DBLP:conf/stoc/AndoniGMP13} studied a problem that may be viewed as an approximate version of reverse indexing of necklaces. They gave a randomized algorithm for producing short fingerprints of strings, such that the fingerprints of rotations of a string are determined by the fingerprint of the string itself. This fingerprinting itself was useful for detecting proximity of strings under misalignment. \mathsf {Prefix}aragraph{Recent independent work :} A preliminary version of this paper appeared as~\cite{KKS14}. At about the same time, similar results were published by Kociumaka, Radoszewski and Rytter \cite{KRR14}. The work in these two papers was done independently. The papers both have polynomial time algorithms for indexing necklaces; the authors in~\cite{KRR14} exercised more care in designing the algorithm to obtain a better polynomial running time. Their approach to alphabets of size more than 2 is cleaner than ours. On the other hand, we put the results in a broader context and have some additional applications (indexing irreducible polynomials and explicit constructions). \iffalse \mathsf {Suffix}ection{The Problem}~\label{sec:probdefn} We consider the set of all strings over an alphabet $\Sigma$ Given $n$ The general indexing problem We study the problem of indexing the orbits under the action of the group of cyclic rotations on the strings of length $n$. More concretely, we look at the equivalence class of strings of length $n$ under cyclic rotation and develop a polynomial time (in $n$) algorithm for indexing any particular equivalent class. On input $i$, the algorithm outputs an element of the $i^{th}$ equivalence class in time polynomial in $n$. \fi \mathsf {Suffix}ubsection{Organization of the paper} The rest of the paper is organized as follows. We give the algorithm to index necklaces in Section~\ref{sec:countequiv}. In Section~\ref{sec:irrind}, we use our indexing algorithm for necklaces to give an indexing algorithm for irreducible polynomials over finite fields. 
In Section~\ref{sec:BCH}, we give an application to the explicit construction of generator and parity check matrices of BCH codes. We conclude with some open problems in Section~\ref{sec:openprobs}. In Appendix~\ref{sec:magic}, we give an alternate algorithm for indexing binary necklaces of prime length. In Appendix~\ref{sec:complexity}, we give some prelimary observations about the complexity theory of indexing in general. \iffalse {\bf Basic Indexing Tool: Indexing accepting paths of a read-once (layered?) branching program. Identify a canonical element within each equivalence class. Hopefully the set of all canonical elements can be recognized by a read-once branching program. Can only do this for $n$ prime. A more elaborate way of describing the above idea is as follows. Pick an order on the canonical elements, and try to solve the counting problem: given $x \in \Sigma^n$, how many canonical elements $y$ are $< x$ in the chosen order. The set of these $y$ Instead we attack the problem indirectly. We still choose canonical elements for each equivalence class (breaking the symmetry of the class). However the counting problem we solve stays symmetric. We count: given $x \in \Sigma^n$, accept those $y \in \Sigma^n$ are such that the canonical element in the class of $y$ is $< x$. Given a solution to this counting problem, we can recover a solution to the A classic exercise in automata theory states that: $L^{rot}$ is regular. However, there is exponential blowup in the size of the automaton, and this result does not give us the polynomial size read-once branching program that we seek. Nevertheless, we show that for our particular language $L$, $L^{rot}$ can be recognized by a polynomial size read-once branching program. } One natural strategy for the problem would be to identify a unique canonical element in each equivalence class, and as its representative and then order the equivalence classes via some total order on the representatives. The goal then, would be to come up with an efficient algorithm, which on an input $i$ which is not larger than the number of equivalence classes, outputs the representative of the equivalence class of rank $i$. We follow this natural strategy and define the representative of each equivalence class as its smallest element in the ~lexicographic order. We then order the equivalence classes according to the lexicographic order of their representatives. The aim now is to design an efficient algorithm to index an element in the equivalence class of rank $i$ on this total order. Again, a natural way of doing this would be to do a binary search on the chain of representatives. Although, in order to do so, we need an efficient procedure to find an element which splits the chain into two parts each roughly a constant fraction smaller than the parent chain in size. Let us call this element a pivot. We do not know how to do this efficiently. In order to get around this problem, we do a somewhat hybrid binary search where our pivots themselves may not be representatives of their equivalence classes. In order to make the idea work, we solve the following problem. \begin{itemize} \item {\bf Input:} A string $x$ of length $n$ \item {\bf Output:} The number of equivalence classes smaller than $x$ in lexicographic order. \end{itemize} Here, we say that an equivalence class is smaller than a string $x$ in lexicographic order if the representative of the equivalence class is smaller than $x$ in lexicographic order. 
An efficient algorithm for this problem, coupled with the binary search strategy described above, would imply an efficient algorithm for the indexing problem we started with. With this in mind, for the rest of the write-up, we will focus on developing an efficient algorithm for the problem described above. \fi \section{Indexing necklaces}\label{sec:countequiv} \subsection{Strategy for the algorithm}~\label{sec:strategy} We first consider a very basic indexing algorithm which will inspire our algorithms. Given a directed acyclic graph $D$ on vertex set $V$ and distinguished subsets $S$ and $T$ of nodes, there is a straightforward indexing algorithm for the set of paths that start in $S$ and end in $T$: Fix an arbitrary ordering on the nodes, and consider the induced lexicographic ordering on paths (i.e. path $P_1P_2\ldots$ is less than path $Q_1Q_2\ldots$ if $P_i<Q_i$ where $i$ is the least integer such that $P_i \neq Q_i$). Our indexing function will map the index $j$ to the $j$th path from $S$ to $T$ in lexicographic order. There is a simple dynamic program which computes, for each node $v$, the number $N(v)$ of paths from $v$ to a vertex in $T$. Let $v_1,\ldots,v_r$ be the nodes of $S$ listed in order. Given the input index $j$, we find the first source $v_i$ such that the number of paths to $T$ starting at nodes $v_1,\ldots,v_i$ is at least $j$; if there is no such source then the index $j$ is larger than the number of paths being indexed. Otherwise, $v_i$ is the first node of the desired path, and we can proceed inductively by replacing the set $S$ by the set of children of $v_i$. This approach can be adapted to the following situation. Suppose the set $S$ we want to index is a set of strings of fixed length $n$ over alphabet $\Sigma$. A {\em read-once branching program} of length $n$ over alphabet $\Sigma$ is an acyclic directed graph with vertex layers numbered from 0 to $n$, where (1) layer 0 has a single start node, (2) there is a designated subset of accepting nodes at level $n$, and (3) every non-sink node has one outgoing arc corresponding to each alphabet symbol, and these arcs connect the node to nodes at the next level. For nodes $v$ and $w$ and alphabet symbol $\sigma$ we write $v\rightarrow_{\sigma} w$ to mean that there is an arc from $v$ to $w$ labelled by $\sigma$. Such a branching program takes words from $\Sigma^n$ and, starting from the start node, follows the path corresponding to the word to either an accepting or a rejecting node. Given a read-once branching program for $S$, there is a 1-1 correspondence between strings in $S$ and paths from the start node to an accepting node. We can use the indexing algorithm for paths given above to index $S$. \iffalse This approach can be applied in the following situation: Suppose the set $S$ we want to index is a subset of strings of some fixed length $n$. A {\em read-once branching program} of length $n$ over alphabet and we are able to build a deterministic automaton with $\mathsf{poly}(n)$ states such that the only strings of length $n$ accepted are those in $S$. From this we can construct a layered directed graph with a single source at level 0, and $n$ additional layers each containing copies of all of the states which represents the computation of the automaton over time, and whose source-to-sink paths correspond to strings of length $n$ accepted by the automaton. Thus we can index the set of strings accepted by the automaton.
This suggests the following approach to indexing necklaces. For each equivalence class of strings (necklace), identify a canonical representative string of the class (such as the lexicographically smallest element). Then build a branching program $B$ which, given a string $y$, determines whether $y$ is the canonical representative of its class. By the preceding paragraph, this would be enough to index all of the canonical representatives, which is equivalent to indexing equivalence classes. In fact, we are able to implement this approach provided that $q=2$ and $n$ is prime (see Appendix~\ref{sec:magic}). However, we have not been able to make it work in general. For this we need another approach, which still uses branching programs, but in a more involved way.

First, some notation. For a given string $y$, we write the string obtained from $y$ after cyclically rotating it rightwards by $i$ positions as $\mathbb{R}ot^i(y)$. We define $\mathsf{Orbit}(y)$ to be the set containing $y$ and all its distinct rotations. $\mathsf{Orbit}(y)$ will also be referred to as the {\it equivalence class} of $y$. A string $y$ of length $n$ is said to be periodic with period $p$ if it can be written as ${y_1}^{n/p}$ for some ${y_1}\in \Sigma^{p}$. A string is said to have fundamental period $p$ if it is periodic with period $p$ and not periodic with any period smaller than $p$. We will denote the fundamental period of a string $y$ by $\mathsf{FP}(y)$. Note that for any string $y$, $|\mathsf{Orbit}(y)| = \mathsf{FP}(y)$. If $E$ is an orbit and $x$ is a string, we say that $E<x$ if $E$ contains at least one string $y$ that is lexicographically less than $x$. (Notice that under our definition, if $x$ and $y$ are strings, then we might have both that the orbit of $x$ is less than $y$ and the orbit of $y$ is less than $x$.) Let $t$ be the total number of orbits, and let $\mathbb{C}lasses_x$ be the set of orbits that are less than $x$.

Our main goal will be to design an efficient algorithm which, given a string $x$, returns $|\mathbb{C}lasses_x|$. We now show that if we can do this, then we can solve both the indexing and reverse indexing problems. For the indexing problem, we want a 1-1 function $\psi$ that maps $j \in\{1,\ldots,t\}$ to a string so that all of the image strings are in different orbits. The map $\psi$ will be easily computable given a subroutine for $|\mathbb{C}lasses_x|$. Define the {\em minimal representative} of an orbit to be the lexicographically least string in the orbit. Let $y^1< \cdots < y^t$ denote the minimal representatives in lexicographic order. Our map $\psi$ will map $j$ to $y^j$. This clearly maps each index to a representative of a different orbit. It suffices to show how to compute $\psi(j)$. Note that $|\mathbb{C}lasses_x|$ is equal to the number of $y^i$ that precede $x$ (an orbit is less than $x$ exactly when its minimal representative is), and is thus a nondecreasing function of $x$. Therefore, $\psi(j)=y^j$ is equal to the lexicographically largest string $x$ with $|\mathbb{C}lasses_x| <j$. Furthermore, since $|\mathbb{C}lasses_x|$ is a nondecreasing function of $x$, we can find $\psi(j)$ by doing binary search on the set of strings according to the value of $|\mathbb{C}lasses_x|$. Similarly, we can solve the reverse indexing problem: given a string $x$, we can find the index of the orbit to which it belongs by first finding the lexicographically minimal representative $y^i$ of its orbit and then computing $|\mathbb{C}lasses_{y^i}|+1$.
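As an illustration, here is a minimal Python sketch (ours) of this binary-search reduction, which is summarized in the lemma below. The subroutine \texttt{classes\_below}, which should return $|\mathbb{C}lasses_x|$, is a placeholder for the algorithm developed in the rest of this section, and strings are identified with base-$q$ integers so that integer order agrees with lexicographic order.
\begin{verbatim}
# Sketch of the reduction: indexing orbits given |Classes_x| as a subroutine.

def to_string(v, n, q):
    """Base-q digits of v, most significant first, padded to length n."""
    digits = []
    for _ in range(n):
        digits.append(v % q)
        v //= q
    return tuple(reversed(digits))

def index_orbit(n, q, j, classes_below):
    """Return psi(j): the minimal representative of the j-th orbit (1-indexed),
    i.e. the lexicographically largest x with classes_below(x) < j."""
    lo, hi = 0, q ** n - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if classes_below(to_string(mid, n, q)) < j:
            lo = mid
        else:
            hi = mid - 1
    return to_string(lo, n, q)

def reverse_index_orbit(x, q, classes_below):
    """Return the rank of the orbit of the tuple x (1-indexed)."""
    rep = min(x[i:] + x[:i] for i in range(len(x)))   # minimal representative
    return classes_below(rep) + 1
\end{verbatim}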
\begin{lem}~\label{lem:reduction} To efficiently index and reverse index necklaces of length $n$ over an alphabet $\Sigma$, it suffices to have an efficient algorithm that takes as input a string $x \in \Sigma^n$ and outputs $|\mathbb{C}lasses_x|$.
\end{lem}

The next subsection gives our algorithm to determine $|\mathbb{C}lasses_x|$ for any input string $x$.

\subsection{Computing $|\mathbb{C}lasses_x|$}

Let us define:
\begin{itemize}
\item $G_{x,p} = \bigcup_{ E \in \mathbb{C}lasses_x: |E| = p} E$.
\item $G_{x, \leq p} = \bigcup_{ E \in \mathbb{C}lasses_x: |E| \mbox{ divides } p} E$.
\end{itemize}
In Section~\ref{sec:reduc} we reduce the computation of $|\mathbb{C}lasses_x|$ to the computation of $|G_{x,\leq p}|$ for various $p$. The main component of the indexing algorithm is a subroutine that computes $|G_{x, \leq p}|$ given a string $x$ and an integer $p$. This subroutine works by building a branching program with $n^{O(1)}$ nodes which, when given a string $y$, accepts if and only if (1) the orbit of $y$ has size dividing $p$ and (2) $\mathsf{Orbit}(y) < x$. The quantity we want to compute, $|G_{x, \leq p}|$, is therefore simply the number of $y$ accepted by this branching program (which, as noted above, can be computed in polynomial time via a simple dynamic program).

\subsubsection{Notation and Preliminaries}\label{sec:notations}

\noindent {\bf Preliminaries:} We state some basic facts about periodic strings without proof.
\begin{fact}\label{obs:periodic1} Let $y$ be a string of length $n$ and let $p$ be a positive integer dividing $n$. Then $|\mathsf{Orbit}(y)| = p$ if and only if $y$ has fundamental period $p$. In this case, $y$ can be written as ${y_1}^{\frac{n}{p}}$ for an aperiodic string $y_1 \in \Sigma^p$.
\end{fact}
\begin{fact}~\label{obs:periodic2} The fundamental period of a string is a divisor of any period of the string.
\end{fact}
In particular, the fundamental period of a string is unique. We denote the fundamental period of $y$ by $\mathsf{FP}(y)$.

\subsubsection{Reduction to computing $|G_{x, \leq p}|$} \label{sec:reduc}

We begin with some simple transformations that reduce the computation of $|\mathbb{C}lasses_x|$ to the computation of $|G_{x, \leq p}|$ (for various $p$).
\begin{lem}\label{lem: 1} For all $x \in \Sigma^n$,
$$ |\mathbb{C}lasses_x| = \sum_{y \in G_{x, \leq n}} \frac{1}{|\mathsf{Orbit}(y)|} = \sum_{y \in G_{x, \leq n}} \frac{1}{\mathsf{FP}(y)}.$$
\end{lem}
\begin{proof}
For $y \in G_{x, \leq n}$, $\mathbb{R}ot^i(y) \in G_{x, \leq n}$ for every positive integer $i$. Note that there are exactly $|\mathsf{Orbit}(y)|$ distinct strings of the form $\mathbb{R}ot^i(y)$. Thus for any orbit $E \in \mathbb{C}lasses_x$, we have $\sum_{y \in E} \frac{1}{|\mathsf{Orbit}(y)|} = 1$. Therefore:
$$ \sum_{y \in G_{x, \leq n}} \frac{1}{|\mathsf{Orbit}(y)|} = \sum_{E \in \mathbb{C}lasses_x} \sum_{y \in E} \frac{1}{|\mathsf{Orbit}(y)|} = \sum_{E \in \mathbb{C}lasses_x} 1 = |\mathbb{C}lasses_x|.$$
\end{proof}
The sum on the right hand side can be split on the basis of the period of $y$. From Lemma~\ref{lem: 1} and Fact~\ref{obs:periodic1}, we have the following lemma.
\begin{lem}\label{lem: 2} For all $x \in \Sigma^n$,
$$ |\mathbb{C}lasses_x| = \sum_{i\mid n} \frac{|G_{x,i}|}{i}.$$
\end{lem}
So, to count $|\mathbb{C}lasses_x|$ efficiently, it suffices to compute $|G_{x,i}|$ efficiently for each $i\mid n$.
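For very small $n$, the identity of Lemma~\ref{lem: 1} can be checked directly by brute force. The following Python fragment (ours, exponential in $n$, and intended only as a sanity check or as a stand-in for the \texttt{classes\_below} subroutine in the earlier sketch) enumerates all strings and applies the identity verbatim.
\begin{verbatim}
# Brute-force |Classes_x| via Lemma 1 (exponential; for tiny n only).
from fractions import Fraction
from itertools import product

def rotations(y):
    return {y[i:] + y[:i] for i in range(len(y))}

def classes_below_bruteforce(x, q=2):
    """x is a tuple over range(q); returns |Classes_x|."""
    total = Fraction(0)
    for y in product(range(q), repeat=len(x)):
        orbit = rotations(y)
        if any(r < x for r in orbit):          # Orbit(y) < x
            total += Fraction(1, len(orbit))   # 1 / FP(y)
    return int(total)
\end{verbatim}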
Now, from the definitions, we have the following lemma.
\begin{lem}~\label{lem: 3} For all $x \in \Sigma^n$,
$$|G_{x,\leq p}| = \sum_{i\mid p}|G_{x, i}|.$$
\end{lem}
From the M\"obius Inversion Formula (see Chapter 3 in~\cite{Stanley1} for more details), we have the following equality.
\begin{lem}~\label{lem:4}
$$|G_{x,p}| = \sum_{i\mid p}\mu\left(\frac{p}{i}\right)|G_{x,\leq i}|.$$
\end{lem}
Lemma~\ref{lem:4} implies that it suffices to compute $|G_{x,\leq p}|$ efficiently for every divisor $p$ of $n$. In the next few subsections, we design an efficient algorithm for this subproblem. We first describe the algorithm when the alphabet is binary, and then generalize to larger alphabets.

\subsubsection{Computing $|G_{x, \leq n}|$ efficiently for the binary alphabet}\label{sec:binalphabet}

In this section, we design an efficient algorithm that, given a string $x \in \{0,1\}^n$, computes $|G_{x, \leq n}|$. On input $x$, the algorithm constructs a branching program with the property that $|G_{x,\leq n}|$ is the number of accepting paths in the branching program. This number of accepting paths can be computed by a simple dynamic program, as described at the beginning of Section~\ref{sec:strategy}.
\begin{lem}~\label{lem:countautomaton}
Given as input a branching program $B$ of length $n$ over alphabet $\Sigma$, we can compute the size of the set of accepted strings in time $\mathrm{poly}(|B|,\log n)$.
\end{lem}
\begin{proof}
The number of accepted strings is the number of paths from the start node to an accepting node, and all such paths have length exactly $n$. Thus the number of accepted strings is the sum, over accepting nodes $t$, of the $(s,t)$ entries of the $n^{\rm{th}}$ power of the adjacency matrix of the graph, where $s$ is the start node; this can be computed in time polynomial in the size of the graph and $\log n$ (by repeated squaring).
\end{proof}
We now describe how to construct, for each fixed string $x \in \{0,1\}^n$, a branching program $B_x$ of size polynomial in $n$ such that the strings accepted by $B_x$ are exactly those in $G_{x, \leq n}$. Lemma~\ref{lem:countautomaton} then implies that we can compute $|G_{x,\leq n}|$ in time polynomial in $n$.

For strings $x,y \in \{0,1\}^n$, when is $y$ lexicographically less than $x$ (written $y <_{\mathrm{lex}} x$)? This happens if and only if there exists an $i\in \{0,1,\ldots, n-1\}$ such that $y_j = x_j$ for every $j\leq i$ and $y_{i+1} < x_{i+1}$. In the case of binary strings of length $n$, we must have $x_{i+1} = 1$ and $y_{i+1} = 0$.
\begin{definition}
The set of witnesses for $x$, denoted $L_x$, is defined by:
$$L_x = \{s0 : s1\text{ is a prefix of } x \}. $$
\end{definition}
We can summarize the discussion from the paragraph above as follows:
\begin{obs}~\label{obs:lexineq1}
For $x, y \in \{0,1\}^n$, we have $y <_{\mathrm{lex}} x$ if and only if some prefix of $y$ lies in $L_x$.
\end{obs}
We will now generalize this observation to strings under rotation. For strings $x, y$, when is $\mathsf{Orbit}(y) < x$?
Recall that $\mathsf{Orbit}(y) < x$ if for some $y' \in \mathsf{Orbit}(y)$, we have $y' <_{\mathrm{lex}} x$. From Observation~\ref{obs:lexineq1}, we know that this happens if and only if some $y' \in \mathsf{Orbit}(y)$ has some prefix $w$ in $L_x$. Rotating back to $y$, two situations can arise. Either $y$ contains $w$ as a contiguous substring, or $w$ appears as a ``split substring'' wrapped around the end of $y$. In the latter case, $y$ has a prefix $w_1$ and a suffix $w_2$ such that $w_2w_1 = w \in L_x$. Recall that $G_{x, \leq n}$ is the set of $y$ with $\mathsf{Orbit}(y) < x$. Thus, $y \in G_{x, \leq n}$ if and only if it has a contiguous substring as a witness, or it has a witness that is wrapped around its end. Let us separate these two cases out.
\begin{definition}~\label{def:partition}
For a string $x\in \{0,1\}^n$,
$$G_{x, \leq n}^c = \{y\in \{0,1\}^n: y \text{ contains a string in } L_x \text{ as a contiguous substring}\},$$
$$G_{x, \leq n}^w = \{y \in \{0,1\}^n: y \text{ has a prefix } w_1\text{ and suffix } w_2 \text{ such that } w_2w_1 \in L_x\}.$$
\end{definition}
From the discussion in the paragraph above, we have the following observation:
\begin{obs}~\label{obs:separate}
$$G_{x, \leq n} = G_{x, \leq n}^c \cup G_{x, \leq n}^w.$$
\end{obs}
The branching program $B_x$ will be obtained by combining two branching programs $B_x^c$ and $B_x^w$, where the first accepts the strings in $G_{x,\leq n}^c$ and the second accepts the strings in $G_{x,\leq n}^w$. Each layer $j$ of the branching program $B_x$ is the product of layer $j$ of $B_x^c$ and layer $j$ of $B_x^w$, and we have arcs $(v,w)\rightarrow_{\sigma} (v',w')$ when $v \rightarrow_{\sigma} v'$ and $w \rightarrow_{\sigma} w'$. The accepting nodes at level $n$ are the nodes $(v,w)$ where $v$ is an accepting node of $B_x^c$ or $w$ is an accepting node of $B_x^w$. The resulting branching program clearly accepts the set of strings accepted by $B_x^c$ or $B_x^w$. Note that the branching programs $B_x$ produced by the algorithm are never actually ``run'', but are given as input to the algorithm of Lemma~\ref{lem:countautomaton} in order to determine $|G_{x,\leq n}|$.

For a set of strings $W$, we will use $\mathsf{Prefix}(W)$ to denote the set of all prefixes of all strings in $W$ (including the empty string $\epsilon$). Similarly, $\mathsf{Suffix}(W)$ denotes the set of all suffixes of all strings in $W$ (including the empty string $\epsilon$), and $\mathsf{Sub}(W)$ denotes the set of all contiguous substrings of strings in $W$. For a string $r$, $Q(r)$ is the set of suffixes of $r$ that belong to $\mathsf{Prefix}(L_x)$.
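To make the bookkeeping concrete, here is a small Python sketch (ours, with binary strings represented as ordinary Python strings) of the witness set $L_x$, the prefix set $\mathsf{Prefix}(\cdot)$, and the ``longest suffix in $\mathsf{Prefix}(L_x)$'' update that the constructions below maintain via $Q(\cdot)$.
\begin{verbatim}
# Sketch of the witness-set bookkeeping used by B_x^c and B_x^w.

def witnesses(x):
    """L_x = { s+'0' : s+'1' is a prefix of x } for a binary string x."""
    return {x[:i] + "0" for i in range(len(x)) if x[i] == "1"}

def prefix_closure(words):
    """Prefix(W): all prefixes of strings in W, including the empty string."""
    return {w[:i] for w in words for i in range(len(w) + 1)}

def longest_suffix_in(r, P):
    """Longest suffix of r lying in P (the longest element of Q(r) when
    P = Prefix(L_x)); P is assumed to contain the empty string."""
    for i in range(len(r) + 1):
        if r[i:] in P:
            return r[i:]
    return ""
\end{verbatim}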
\mathsf {Prefix}aragraph{Constructing branching program $B_x^c$} We now present an algorithm which on input $x \in \{0,1\}^n$,runs in time polynomial in $n$ and outputs a branching program $B_x^c$ that recognizes $L_x^c$. \begin{definition}{\bf Branching program $B_x^c$} \begin{enumerate} \item Nodes at level $j$ are triples $(j,s,b)$ where $s \in \mathsf {Prefix}(L_x)$ and $b \in \{0,1\}$. (We want string $s$ to be the longest suffix of $z_1z_2\ldots z_j$ that belongs to $\mathsf {Prefix}(L_x)$, and $b=1$ iff $z_1z_2\ldots z_j$ contains a substring that belongs to $L_x$.) \item The start node is $(0,\Lambda,0)$ where $\Lambda$ is the empty string. \item The accepting nodes $(n,s,b)$ are those with $b=1$. \item For $j \leq n$, the arc out of nodes $(j-1,s,b)$ labeled by alphabet symbol $\alpha$ is $(j,s',b')$ where $s'$ is the longest string in $Q(s\alpha)$ and $b'=1$ if $s'$ contains a suffix in $L_x$ and otherwise $b'=b$. \end{enumerate} \end{definition} It is clear that the branching program can be constructed (as a directed graph) in time polynomial in $n$. It remains to show that it accepts those $z$ that have a substring that belongs to $(L_x)$. Fix a string $z \in \{0,1\}^n$. Let $(j,s_j,b_j)$ be the $j$th vertex visited by the branching program on input $z$. Note that $s_j$ is a suffix of $z_1 \ldots z_j$. Let $h_j$ be the index such that $s=z_{h_j} \ldots z_j$; if $s$ is empty, we set $h_j=j+1$. For $j$ between 1 and $n$ let $i_j$ be the least index such that $z_{i_j} \ldots z_j$ belongs to $\mathsf {Prefix}(L_x)$ (so $i_j=j+1$ if there is no such string). Note that $i_j \geq i_{j-1}$ since if $z_i \ldots z_j$ belongs to $\mathsf {Prefix}(L_x)$ so does $z_i \ldots z_{j-1}$. The branching program is designed to make the following true: \begin{claim} For $j$ between 1 and $n$, $h_j=i_j$ and $b_j=1$ if and only if a substring of $z_1 \ldots z_j$ belongs to $L_x$. \end{claim} The claim for $b_j=1$ implies that the branching program accepts the desired set of strings. \begin{proof} The claim follows easily by induction, where the basis $j=0$ is trivial. Assume $j>0$. First we show that $h_j=i_j$. By induction $h_{j-1}=i_{j-1}$ and by definition of $h_j$ and $i_j$ we have $i_j \leq h_j$. To show $h_j \leq i_j$, note that since $i_j \geq i_{j-1}=h_{j-1}$, the string $z_{i_{j}}\ldots z_j$ is in $Q(t_{j-1}\alpha)$ and so is considered in the choice of $s_j$ and thus $h_j=i_j$. For the claim on $b_j$, if $z$ has no substring in $L_x$ then $b_j$ remains 0 by induction. If $z$ has a substring in $L_x$ let $z_i \ldots z_k$ be such a substring with $k$ minimum. Then by the claim on $t_k$, $h_k\leq i$, and so $z_i \ldots z_k$ is a suffix of $s_k$ and so $b_k=1$, and for all $j \geq k$, $b_j$ continues to be 1. \end{proof} \mathsf {Prefix}aragraph{Constructing branching program $B_x^w$} We now present an algorithm which on input $x \in \{0,1\}^n$,runs in time polynomial in $n$ and outputs a branching program $B_x^w$ that accepts the set of strings $z$ that have a nonempty suffix $u$ and nonemtpy prefix $v$ such that $uv$ belongs to $L_x$. \begin{definition}{\bf Branching program $B_x^w$} \begin{enumerate} \item Nodes at level $j$ are triples $(j,s,p)$ where $p,s \in \mathsf {Prefix}(L_x)$. (String $s$ will be the longest suffix of $z_1z_2\ldots z_j$ that belongs to $\mathsf {Prefix}(L_x)$ (as in $B_x^c$) and $p$ is the longest prefix of $z_1z_2 \ldots z_j$ that belongs to $\mathsf {Prefix}(L_x)$. \item The start node is $(0,\Lambda,\Lambda)$ where $\Lambda$ is the empty string. 
\item The accepting nodes are those nodes $(n,s,p)$ such that $p$ has a nonempty prefix $p'$ and $s$ has a nonempty suffix $s'$ such that $s'p' \in L_x$.
\item For $j \leq n$, the arc out of node $(j-1,s,p)$ labeled by alphabet symbol $\alpha$ is $(j,s',p')$, where $s'$ is the longest string in $Q(s\alpha)$, and $p'=p\alpha$ if $|s|=j-1$ and $p\alpha \in \mathsf {Prefix}(L_x)$, and $p'=p$ otherwise.
\end{enumerate}
\end{definition}
It is clear that the branching program can be constructed (as a directed graph) in time polynomial in $n$. It remains to show that it accepts exactly the strings in $G_{x, \leq n}^w$. Fix a string $z \in \{0,1\}^n$. Let $(j,s_j,p_j)$ be the $j$th node visited by the branching program on input $z$. Notice that $s_j$ is calculated the same way in $B_x^w$ as in $B_x^c$, and so $s_j$ is the longest suffix of $z_1\ldots z_j$ that belongs to $\mathsf {Prefix}(L_x)$. An easy induction shows that $p_j$ is the longest prefix of $z_1\ldots z_j$ belonging to $\mathsf {Prefix}(L_x)$: let $k$ be the length of the longest prefix of $z$ belonging to $\mathsf {Prefix}(L_x)$. For $j \leq k$ we have $p_j=z_1 \ldots z_j$, and for $j>k$, $p_j=z_1 \ldots z_k$.

Finally, we need to show that the branching program accepts $z$ if and only if $z$ has a nonempty suffix $s'$ and a nonempty prefix $p'$ such that $s'p' \in L_x$. If the program accepts, then the acceptance condition and the fact that $s_n$ is a suffix of $z$ and $p_n$ is a prefix of $z$ imply that $z$ has the required suffix and prefix. Conversely, if $z$ has such a prefix $p'$ and suffix $s'$, then they each belong to $\mathsf {Prefix}(L_x)$. Since $p_n$ is the longest prefix of $z$ belonging to $\mathsf {Prefix}(L_x)$, $p'$ is a prefix of $p_n$, and since $s_n$ is the longest suffix of $z$ belonging to $\mathsf {Prefix}(L_x)$, $s'$ is a suffix of $s_n$. So the branching program will accept.
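As a specification-level counterpart to the two constructions, the following Python check (ours; quadratic time, not the branching-program construction itself) decides membership in $G_{x,\leq n}^c \cup G_{x,\leq n}^w$ directly, using the fact that contiguous and wraparound witnesses correspond exactly to occurrences of strings of $L_x$ in $yy$ starting within the first $n$ positions. It reuses the hypothetical \texttt{witnesses} helper from the earlier sketch.
\begin{verbatim}
# Direct check of Orbit(y) < x, i.e. membership in G^c union G^w.
def in_G_le_n(y, x):
    Lx = witnesses(x)           # from the earlier sketch
    doubled = y + y             # prefixes of rotations = substrings of y+y
    return any(doubled[i:i + len(w)] == w
               for w in Lx
               for i in range(len(y)))
\end{verbatim}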
\paragraph{Putting things together}
From the constructions, it is clear that the sizes of the branching programs $B_x^c$ and $B_x^w$ are polynomial in the size of $L_x$, and hence polynomial in $n = |x|$. Moreover, by the product construction described above, we can efficiently construct the branching program $B_x$ which accepts exactly the strings accepted by $B_x^c$ or $B_x^w$, i.e., the set $G_{x, \leq n}$. This observation, along with Lemma~\ref{lem:countautomaton}, implies the following lemma.
\begin{lem}~\label{lem:mainlem1}
There is an algorithm which takes as input a string $x$ in $\{0,1\}^n$ and outputs the size of $G_{x, \leq n}$ in time polynomial in $n$.
\end{lem}

\subsubsection{Computing $|G_{x, \leq p}|$ efficiently}

In this section, we show that for every $p\mid n$, we can compute the quantity $|G_{x, \leq p}|$ efficiently. The algorithm will be a small variation of our algorithm for computing $|G_{x, \leq n}|$ from the previous section. Let $p$ be a divisor of $n$ with $p < n$. Every string $y \in G_{x, \leq p}$ is of the form $a^{\frac{n}{p}}$ for some $a \in \{0,1\}^p$, and every string in $\mathsf{Orbit}(y)$ is of the form $(\mathbb{R}ot^i(a))^{\frac{n}{p}}$ for some $0 \leq i < p$. Let us write the string $x$ as $x_1x_2\ldots x_{\frac{n}{p}}$, where each $x_i$ is of length exactly $p$.

We will now characterize the strings in $G_{x,\leq p}$. From the definitions, $y = a^{\frac{n}{p}} \in G_{x, \leq p}$ if and only if there is a rotation $0 \leq i < p$ such that $(\mathbb{R}ot^i(a))^{\frac{n}{p}}$ has a prefix in $L_x$. This, in turn, can happen if and only if there is an $i < p$ such that one of the following is true.
\begin{itemize}
\item $\mathbb{R}ot^i(a) < x_1$ in lexicographic order, or
\item there is a $j$, $0 < j < \frac{n}{p}$, such that $\mathbb{R}ot^i(a) = x_1 = x_2 = \cdots = x_j$ and $\mathbb{R}ot^i(a) < x_{j+1}$ in lexicographic order.
\end{itemize}
The strings $y = a^{\frac{n}{p}}$ for which $a$ has a rotation which is less than $x_1$ in lexicographic order are exactly the strings of the form $c^{\frac{n}{p}}$ with $c \in G_{x_1, \leq p}$. Via the algorithm of the previous subsection, there is a polynomial-in-$n$ time algorithm which outputs a branching program recognizing $G_{x_1, \leq p}$. The only strings which satisfy the second condition are of the form ${c}^{\frac{n}{p}}$, where $c$ is a rotation of $x_1$ and $x_1 = x_2 = \cdots = x_j$, $x_1 < x_{j+1}$ in lexicographic order for some such $j$. There are at most $|\mathsf{Orbit}(x_1)|$ such strings, and we can count them directly given $x$. This gives us our algorithm for computing $|G_{x, \leq p}|$:\\

\noindent {\bf Computing $|G_{x, \leq p}|$:}\\
\noindent{\bf Input:}
\begin{itemize}
\item Integers $n, p$ such that $p\mid n$
\item A string $x \in \{0,1\}^n$
\end{itemize}
\noindent{\bf Algorithm:}
\begin{enumerate}
\item Write $x$ as $x = x_1x_2\ldots x_{\frac{n}{p}}$, where $|x_i| = p$ for all $i \in [\frac{n}{p}]$
\item Construct a branching program $A_{x_1}$ such that $L(A_{x_1}) \cap \{0,1\}^p = G_{x_1, \leq p}$
\item Let $M$ be the number of strings of length $p$ accepted by $A_{x_1}$
\item If there is an $i$, $0 < i < \frac{n}{p}$, such that $x_1 = x_2 = \cdots = x_i$ and $x_1 < x_{i+1}$ in lexicographic order, and $x_1 \notin L(A_{x_1})$, then output $M+|\mathsf{Orbit}(x_1)|$, else output $M$.
\end{enumerate}
From the construction in Section~\ref{sec:binalphabet} and Lemma~\ref{lem:mainlem1}, it follows that we can construct $A_{x_1}$ and count $M$ in time polynomial in $n$. We thus have the following lemma.
\begin{lem}~\label{lem:divisors}
For any divisor $p$ of $n$ and string $x\in \{0,1\}^n$, we can compute the size of the set $G_{x, \leq p}$ in time $\mathrm{poly}(n)$.
\end{lem}
We now have all the ingredients for the proof of the following theorem, which is a special case of Theorem~\ref{thm:indexing} when the alphabet under consideration is $\{0,1\}$.
\begin{theorem}
There is an algorithm for indexing necklaces of length $n$ over the alphabet $\{0,1\}$ which runs in time $\mathrm{poly}(n)$.
\end{theorem}
\begin{proof}
The proof follows by plugging together the conclusions of Lemma~\ref{lem: 2}, Lemma~\ref{lem: 3}, Lemma~\ref{lem:4}, Lemma~\ref{lem:countautomaton} and Lemma~\ref{lem:divisors}.
\end{proof}
It is not difficult to see that the indexing algorithm can be used to obtain a reverse indexing algorithm as well; hence, we also obtain a special case of Theorem~\ref{thm:reverse_indexing} for the binary alphabet.

\subsubsection{Indexing necklaces over large alphabets}

In this subsection we describe how to handle the case of general alphabets $\Sigma$ (with $|\Sigma| = q$). A direct generalization of the algorithm for the binary alphabet, where the set $L_x$ is appropriately defined, runs in time polynomial in $n$ and $q$. Our goal here is to improve the running time to polynomial in $n$ and $\log q$.

The basic idea is to represent the elements of $\Sigma$ by binary strings of length $t \defeq \lceil \log q \rceil$. Let $\mathsf{Bin} : \Sigma \to \{0,1\}^t$ be an injective map whose image is the set $\Gamma$ of the $q$ lexicographically smallest strings in $\{0,1\}^t$. Extend this to a map $\mathsf{Bin} : \Sigma^n \to \{0,1\}^{tn}$ in the natural way. We now use the map $\mathsf{Bin}$ to convert our indexing/counting problems over the large alphabet $\Sigma$ to a related problem over the small alphabet $\{0,1\}$. For $x \in \Sigma^n$, we have $\mathsf{Bin}(\mathbb{R}ot^i(x)) = \mathbb{R}ot^{ti}(\mathsf{Bin}(x))$. For an orbit $E \subseteq \Sigma^n$ and $x \in \{0,1\}^{tn}$, we say $E < x$ if some element $z \in E$ satisfies $\mathsf{Bin}(z) <_{\mathrm{lex}} x$. Let $\mathbb{C}lasses_x$ be the set of orbits $E \subseteq \Sigma^n$ which are less than $x$. For each $x \in \{0,1\}^{tn}$ and $p \mid n$, define:
\begin{enumerate}
\item $$G_{x, p} = \bigcup_{E < x, |E| = p} E.$$
\item $$ G_{x, \leq p} = \bigcup_{E < x, |E| \mbox{ divides } p} E.$$
\end{enumerate}
The following identity allows us to count $|G_{x, \leq n}|$:
$$|G_{x, \leq n}| = |\{ y \in \{0,1\}^{tn} \mid y \in \Gamma^n, \exists i < n \mbox{ s.t. } \mathbb{R}ot^{it}(y) <_{\mathrm{lex}} x \}|.$$
It is easy to efficiently produce a branching program $A_0$ such that $L(A_0) \cap \{0,1\}^{tn} = \Gamma^n$. As we will describe below, the methods of the previous section can be easily adapted to efficiently produce a branching program $A_x$ such that
$$L(A_x) \cap \{0,1\}^{tn} = \{ y \in \{0,1\}^{tn} \mid \exists i < n \mbox{ s.t. } \mathbb{R}ot^{it}(y) <_{\mathrm{lex}} x \}.$$
The following lemma will be crucial in the design of this branching program.
\begin{lem}~\label{lem:largesigma}
Let $y \in \{0,1\}^{tn}$. There exists $i < n$ such that $\mathbb{R}ot^{it}(y) <_{\mathrm{lex}} x$ if and only if at least one of the following events occurs:
\begin{enumerate}
\item there exists $w \in L_x$ such that $w$ appears as a contiguous substring of $y$ starting at a coordinate $j$ with $j \equiv 0 \mod t$ (where the coordinates of $y$ are numbered $0,1, \ldots, tn-1$).
\item there exist strings $w_1, w_2$ such that $w_1w_2 \in L_x$, $w_2$ is a prefix of $y$, $w_1$ is a suffix of $y$, and $|w_1| \equiv 0 \mod t$. \end{enumerate} \end{lem} Given this lemma, the construction of $A_x$ follows easily via the techniques of the previous subsections. The main addition is that one needs to remember the value of the current coordinate mod $t$, which can be done by blowing up the number of states of the branching program by a factor $t$. Intersecting the accepted sets of $A_x$ and $A_0$ gives us our desired branching program which allows us to count $|G_{x, \leq n}|$. This easily adapts to also count $|G_{x, \leq p}|$ for each $p \mid n$. We conclude using the ideas of Section~\ref{sec:reduc}. We can now compute $|G_{x, p}|$ for each $x$ and each $p \mid n$. From Lemma~\ref{lem: 2}, Lemma~\ref{lem: 3} and Lemma~\ref{lem:4}, it follows that for every $x$, we can compute $|\mathbb{C}lasses_x|$ efficiently. We thus get our main indexing theorem for necklaces from Lemma~\ref{lem:reduction}. \begin{theorem}~\label{thm:largesigma} There are $\mathsf {Prefix}oly(n, \log|\Sigma|)$-time indexing and reverse-indexing algorithms for necklaces of length $n$ over $\Sigma$. Furthermore, there are $\mathsf {Prefix}oly(n, \log|\Sigma|)$-time indexing and reverse-indexing algorithms for necklaces of length $n$ over $\Sigma$ with fundamental period exactly $n$. \end{theorem} \mathsf {Suffix}ection{Indexing irreducible polynomials}~\label{sec:irrind} In the previous section, we saw an algorithm for indexing necklaces of length $n$ over an alphabet $\Sigma$ of size $q$, which runs in time polynomial in $n$ and $\log q$. In this section, we will see how to use this algorithm to efficiently index irreducible polynomials over a finite field. More precisely, we will use an indexing algorithm for necklaces with fundamental period exactly equal to $n$ (which is also given by the methods of the previous sections). Let $q$ be a prime power, and let $\mathbb{F}_q$ denote the finite field of $q$ elements. For an integer $n > 0$, let $I_{q,n}$ denote the set of monic, irreducible polynomials of degree $n$ in $\mathbb{F}_q[T]$. \begin{theorem} For every $q, n$ as above, there is an algorithm that runs in $\mathsf {Prefix}oly(n, \log q)$ time, takes $O(n \log q)$ bits of advice, and indexes $I_{q,n}$. \end{theorem} \begin{proof} To prove this theorem, we start by first describing the connection between the tasks of indexing necklaces and indexing irreducible polynomials. Let $P(T) \in I_{q,n}$. Note that $P(T)$ has all its roots in the field $\mathbb{F}_{q^n}$. Let $\alpha \in \mathbb{F}_{q^n}$ be one of the roots of $P(T)$. Then we have that $\alpha, \alpha^q, \ldots, \alpha^{q^{n-1}}$ are all distinct, and: $$ P(T) = \mathsf {Prefix}rod_{i=0}^{n-1} (T - \alpha^{q^{i}}).$$ Conversely, if we take $\alpha \in \mathbb{F}_{q^n}$ such that $\alpha, \alpha^q, \ldots, \alpha^{q^{n-1}}$ are all distinct, then the polynomial $P(T)= \mathsf {Prefix}rod_{i=0}^{n-1} (T - \alpha^{q^i})$ is in $I_{q,n}$. Define an action of $\mathbb{Z}_n$ on $\mathbb{F}_{q^n}^*$ as follows: for $k \in \mathbb{Z}_n$ and $\alpha \in (\mathbb{F}_{q^n})^*$, define: $$k [\alpha] = \alpha^{q^k}.$$ This action partitions $\mathbb{F}_{q^n}^*$ into orbits. By the above discussion, $I_{q,n}$ is in one-to-one correspondence with the orbits of this action with size exactly $n$. Thus it suffices to index these orbits. Let $g$ be a generator of the the multiplicative group $(\mathbb{F}_{q^n})^*$. 
Define a map $E: \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q^n}^*$ by: $$ E(a) = g^a.$$ We have that $E$ is a bijection. Via this bijection, we have an action of $\mathbb{Z}_n$ on $\mathbb{Z}_{q^n-1}$, where for $k \in \mathbb{Z}_n$ and $a \in \mathbb{Z}_{q^n - 1}$, $$ k[a] = q^k \cdot a.$$ Now represent elements of $\mathbb{Z}_{q^n -1}$ by integers in $\{0, 1, \ldots, q^n - 2\}$. Define $\Sigma = \{0,1, \ldots, q-1\}$. For $a \in \mathbb{Z}_{q^n-1}$, consider its base-$q$ expansion $a_\mathsf {Suffix}igma \in \Sigma^n$. This gives us a bijection between $\mathbb{Z}_{q^n - 1}$ and $\Sigma^n \mathsf {Suffix}etminus \{(q-1, \ldots, q-1) \}$. Via this bijection, we get an action of $\mathbb{Z}_n$ on $\Sigma^n \mathsf {Suffix}etminus \{(q-1, \ldots, q-1) \}$. This action is precisely the standard rotation action! This motivates the following algorithm.\\ \noindent {\bf \underline{The Indexing Algorithm:}}\\ \noindent{\bf Input:} $q$ (a prime power), $n \geq 0$, $i \in [ |I_{q,n}| ]$\\ \noindent{\bf Advice:} 1. A description of $\mathbb{F}_q$\\ \noindent 2. An irreducible polynomial $F(T) \in \mathbb{F}_q[T]$ of degree $n$, whose root is a generator $g$ of $(\mathbb{F}_{q^n})^*$ (a.k.a. primitive polynomial). \begin{enumerate} \item Let $\Sigma = \{0,1, \ldots, q-1\}$. \item Use $i$ to index an necklace $\mathsf {Suffix}igma \in \Sigma^n \mathsf {Suffix}etminus \{ (q-1, q-1, \ldots, q-1) \}$ with fundamental period exactly $n$ (via Theorem~\ref{thm:largesigma}). \item View $\mathsf {Suffix}igma$ as the base $q$ expansion of an integer $a \in \{0,1, \ldots, q^n - 2\}$. \item Use $F(T)$ to construct the finite field $\mathbb{F}_{q^n}$ and the element $g \in \mathbb{F}_{q^n}^*$. (This can be done by setting $\mathbb{F}_{q^n} = \mathbb{F}_q[T]/F(T)$, and taking the class of the element $T$ in that quotient to be the element $g$.) \item Set $\alpha = g^a$. \item Set $P(T) = \mathsf {Prefix}rod_{i= 0}^{n-1} (T -\alpha^{q^i})$. \item Output $P(T)$. \end{enumerate} For constant $q$, this algorithm can be made to work with $\mathsf {Prefix}oly(\log n)$ advice. Indeed, one can construct the finite field $\mathbb{F}_{q^n}$ in $\mathsf {Prefix}oly(q, n)$ time, and a wonderful result of Shoup~\cite{Shoup} constructs a set of $q^{\mathsf {Prefix}oly(\log n)}$ elements in $\mathbb{F}_{q^n}$, one of which is guaranteed to be a generator. The advice is then the index of an element of this set which is a generator. \end{proof} \mathsf {Suffix}ection{Explicit Generator Matrices and Parity Check Matrices for BCH codes} \label{sec:BCH} In this section, we will apply the indexing algorithm for necklaces to give a strongly explicit construction for generator and the parity check matrices for BCH codes. More precisely, we use the fact that our indexing algorithm is in fact an unranking algorithm for the lexicgraphic ordering on (lexicographically least representatives of) necklaces. BCH codes~\cite{MS78} are classical algebraic error-correcting codes based on polynomials over finite extension fields. They have played a central role since the early days of coding theory due to their remarkable properties (they are one of the few known families of codes that has better rate/distance tradeoff than random codes in some regimes). Furthermore, their study motivated many advances in algebraic algorithms. Using our indexing algorithm for necklaces, we can answer a basic question about BCH codes: we construct strongly explicit explicit generator matrices and parity check matrices for BCH codes. 
For the traditionally used setting of parameters (constant designed distance), it is trivial to construct generator matrices and parity check matrices for BCH codes. But for large values of the designed distance, as far as we are aware, this problem was unsolved. Let $q$ be a prime power, and let $n\geq 1$ and $0 \leq d < q^n-1$. The BCH code associated with these parameters will be of length $q^n$ over the field $\mathbb{F}_q$, where the $q^n$ coordinates are identified with the big field $\mathbb{F}_{q^n}$. Let: $$ V = \{ \langle P(\alpha) \rangle_{\alpha \in \mathbb{F}_{q^n}} \mid P(X) \in \mathbb{F}_{q^n}[X], \deg(P) \leq d, \mbox{ s.t. } \forall \alpha \in \mathbb{F}_{q^n}, P(\alpha) \in \mathbb{F}_q \}.$$ In words: this is the $\mathbb{F}_q$-linear space of all $\mathbb{F}_{q^n}$-evaluations of $\mathbb{F}_{q^n}$-polynomials of low degree, which have the property that all their evaluations lie in $\mathbb{F}_q$. In coding theory terminology, this is a subfield subcode of Reed-Solomon codes. The condition that $P(\alpha) \in \mathbb{F}_q$ for each $\alpha \in \mathbb{F}_{q^n}$ can be expressed as follows: $$ P(X)^q = P(X) \mod X^{q^n} - X.$$ Thus, if $P(X) = \mathsf {Suffix}um_{i=0}^d a_i X^i$, then the above condition is equivalent to: $$ \mathsf {Suffix}um_{i=0}^d a_i^q X^{iq} = \mathsf {Suffix}um_{i=0}^d a_i X^i \mod X^{q^n} - X,$$ which simplifies to: $$ \forall i, a_{iq \mod (q^n - 1)} = a_i^q.$$ Thus: \begin{enumerate} \item For every $i$, if $\ell$ is the smallest integer such that $iq^\ell \mod (q^n-1) = i$, then $a_i \in V_\ell = \{ \alpha \in \mathbb{F}_{q^n} \mid \alpha^{q^\ell} = \alpha \}$, \item Specifying $a_i \in V_\ell$ automatically determines $a_{iq \mod (q^n - 1) }, a_{iq^2 \mod (q^n-1) }, \ldots $, \item $a_i$ can take any value in $V_\ell$. \end{enumerate} This motivates the following choice of basis for BCH codes. Let $\mathcal F = \{ S \mathsf {Suffix}ubseteq \{0,1,\ldots, d\} \mid i \in S \mathbb{R}ightarrow (iq \mod (q^n-1)) \in S \}.$ Let $\alpha_{S,1}, \ldots, \alpha_{S, |S|}$ be a basis for $V_{|S|}$ over $\mathbb{F}_q$ (note that when $j \mid n$, we have that $V_\ell = \{ \alpha \in \mathbb{F}_{q^n} \mid \alpha^{q^\ell} = \alpha \}$ is an $\mathbb{F}_q$-linear subspace of $\mathbb{F}_{q^n}$ of dimension $\ell$). For $S \in \mathcal F$, define $m_S = \min_{i \in S} i$. For $S \in \mathcal F$ and $j \in [|S|]$, define: $$ P_{S, j} (X) = \mathsf {Suffix}um_{k = 0}^{|S|-1} \alpha_j^{q^k} X^{m_S q^{k} \mod (q^n-1)}.$$ It is easy to see from the above description that $\left( P_{S, j} \right)_{S \in \mathcal F, j \in [n]}$ forms an $\mathbb{F}_q$ basis for the BCH code $V$. Thus it remains to show that one can index the sets of $\mathcal F$. If we write all the elements of $S \in \mathcal F$ in base $q$, we soon realize that the $S$ are precisely in one-to-one correspondence with those rotation orbits of $\Sigma^n$ (with $\Sigma = \{0,1, \ldots, q-1\}$) where all elements of the orbit are lexicographically $\leq$ some fixed string in $\Sigma^n$ (in this case the fixed string turns out to be the base $q$ representation of the integer $d$). By our indexing algorithm for orbits, $\mathcal F$ can be indexed efficiently. Thus we can compute any given entry of a generator matrix for BCH codes. The parity check matrices can be constructed similarly. For a given designed distance $d$, one starts with $d \times \mathbb{F}_{q^n}^*$ matrix $M$ whose $i, \alpha$ entry equals $\alpha^i$. 
Note that every $d$ columns of this matrix form a van der Monde matrix: thus they are linearly independent over $\mathbb{F}_{q^n}$ (and hence also over $\mathbb{F}_q$). Define an equivalence $\mathsf {Suffix}im$ relation on $[d]$ as follows: $i_1 \mathsf {Suffix}im i_2$ iff $i_2 = i_1 \cdot q^k \mod (q^n-1)$ for some $k$. Now amongst the rows of $M$, for each equivalance class $E \mathsf {Suffix}ubseteq d$, keep only one row from $E$ (i.e., for some $i \in E$, keep the $i$'th row of $M$ and delete the $j$'th row for all $j \in E \mathsf {Suffix}etminus \{i\}$). The remarkable dimension-distance tradeoff of BCH codes is based on the fact that this operation, while it reduces the dimension of the ambient space in which the columns of this matrix lie, preserves the property that every $d$ columns of this matrix are linearly independent over the small field $\mathbb{F}_q$. This reduced matrix $\tilde{M}$ is the parity-check matrix of the BCH code. We now give a direct construction of the parity-check matrix $\tilde{M}$. Let $\mathcal F = \{ S \mathsf {Suffix}ubseteq [q^n-1] \mid i \in S \implies iq \in S \}$. For $S \in \mathcal F$, let $m_S = \min_{i \in S} i$. Then the rows of $\tilde{M}$ are indexed by those $S \in \mathcal F$ for which $m_S \leq d$. The $(S, \alpha)$ entry of $\tilde{M}$ equals $\alpha^{m_S}$. Writing all the integers of $[q^n-1]$ in base $q$, we see that the elements of $\mathcal F$ are orbits of the $\mathbb{Z}_n$ action on $\Sigma^n$, where $\Sigma = \{0, 1, \ldots, q-1\}$. Furthermore, the $S$ with $m_S \leq d$ are precisely those orbits which have some element lexicographically at most a given fixed element $x$ (which in this case is the base $q$ representation of $d$). By our indexing algorithm, the rows of $\tilde{M}$ can be indexed efficiently, and hence each entry of the $\tilde{M}$ can be computed in time $\mathsf {Prefix}oly(n)$, as desired. \mathsf {Suffix}ection{Open Problems}\label{sec:openprobs} We conclude with some open problems. \begin{enumerate} \item Can the orbits of group actions be indexed in general? One formulation of this problem is as follows: Let $G$ be a finite group acting on a set $X$, both of size $\mathsf {Prefix}oly(n)$. Suppose $G$ and its action on $X$ are given as input explicitly. For a finite alphabet $\Sigma$, consider the action of $G$ on $\Sigma^X$ (by permuting coordinates according to the action on $X$). Can the orbits of this action be indexed? Can they be reverse-indexed? \item Let $G$ be the symmetric group $S_n$. Consider its action on $\{0,1\}^{{[n] \choose 2}}$, where $G$ acts by permuting coordinates. The orbits of this action correspond to the isomorphism classes of $n$-vertex graphs. Can these orbits be indexed? More ambitiously, can these orbits be reverse-indexed? This would imply that graph isomorphism is in $P$. \item It would be interesting to explore the complexity theory of indexing and reverse-indexing. Which languages can be indexed efficiently? Can this be characterized in terms of known complexity classes? In particular, it would be nice to disprove the conjecture: ``Every pair-language $L \in P$ for which the counting problem can be solved efficiently can be efficiently indexed". \end{enumerate} \mathsf {Suffix}ection*{Acknowledgements} We would like to thank Joe Sawada for making us aware of the work of Kociumaka et al~\cite{KRR14}. 
\appendix \mathsf {Suffix}ection{Alternative indexing algorithm for binary necklaces of prime length} \label{sec:magic} In this section we give another algorithm for indexing necklaces in $\{0,1\}^n$ in the special case where $n$ is prime. For convenience, we will denote the $n$ coordinates of $\{0,1\}^n$ by $0,1, \ldots, n-1$, and identify them with elements of $\mathbb{Z}_n$. \begin{definition} Let $x \in \{0,1\}^n$. We say $x$ is top-heavy if for every $j$, $0 \leq j < n$: $$ \mathsf {Suffix}um_{k = 0}^j \left( x_k - \frac{wt(x)}{n} \right) \geq 0.$$ \end{definition} In words: every prefix of $x$ has normalized Hamming weight at least as large as the normalized Hamming weight of $x$. The next lemma by Dvoretzky and Motzkin~\cite{DM47} shows that every string has a unique top-heavy rotation. \begin{lem}[\cite{DM47}] Let $n$ be prime. For each $x \in \{0,1\}^n \mathsf {Suffix}etminus \{0^n, 1^n\}$, there exists a unique $i$, $0\leq i < n$ such that $\mathbb{R}ot^i(x)$ is top-heavy. \end{lem} \begin{proof} Define $f: \{0,1 \}^n \times \mathbb{N} \to \mathbb R$ by: $$f(x, j) = \mathsf {Suffix}um_{k = 0}^j \left(x_{k \mod n} - \frac{{\mathsf{wt}}(x)}{n} \right).$$ Then the top-heaviness of $x$ is equivalent to $f(x, j) \geq 0$ for all $j \in \mathbb N$. We make two observations: \begin{enumerate} \item If $j = j' \mod n$, then $f(x, j) = f(x, j')$. This follows from the fact that: $$\mathsf {Suffix}um_{k = 0}^{n-1} \left(x_{k} - \frac{{\mathsf{wt}}(x)}{n} \right) = 0.$$ \item For nonnegative integers $j, \ell$ with $j < n$, we have: $$f(\mathbb{R}ot^j(x), \ell) = f(x, j + \ell) - f(x, j).$$ \end{enumerate} Putting these two facts together, we get that: \begin{align} \label{eqtop} f(\mathbb{R}ot^j(x), \ell) = f(x, (j+\ell) \mod n) - f(x, j). \end{align} Now fix $x \in \{0,1\}^n \mathsf {Suffix}etminus \{0^n, 1^n\}$. Define $i \in \{0,1, \ldots, n-1\}$ to be such that $f(x, i)$ is minimized. By Equation~\eqref{eqtop}, we get that $f(\mathbb{R}ot^i(x), \ell) \geq 0$ for all nonnegative integers $\ell$. This proves the existence of $i$. For uniqueness of $i$, we make two more observations: \begin{enumerate} \item If $f(x, j) > f(x, i)$, then $$f(\mathbb{R}ot^j(x), n + i-j) = f(x, n+i) - f(x,j) = f(x, i) - f(x,j) < 0,$$ and thus $\mathbb{R}ot^j(x)$ is not top-heavy. \item If $f(x, j) = f(x,j')$, then $j = j' \mod n$. To see this, first note that we may assume $j < j'$. Then: \begin{align*} 0 &= f(x, j') - f(x,j)\\ &= \mathsf {Suffix}um_{k = j+1}^{j'} \left(x_{k \mod n} - \frac{{\mathsf{wt}}(x)}{n} \right)\\ &= \left(\mathsf {Suffix}um_{k = j+1}^{j'} x_{k \mod n} \right) - (j'-j) \cdot\frac{{\mathsf{wt}}(x)}{n}. \end{align*} Thus, since the first term is an integer, we must have that $(j' - j) \cdot {\mathsf{wt}}(x)$ must be divisible by $n$, and by our hypothesis on $x$, we have that $j' = j \mod n$. \end{enumerate} Thus $i \in \{0,1, \ldots, n-1\}$, for which $\mathbb{R}ot^i(x)$ is top-heavy, is unique. \end{proof} The above lemma implies that each orbit $E$ contains a unique top-heavy string. We define the canonical element of $E$ to be that element. We now show that there is a branching program $A$ such that $L(A) \cap \{0, 1\}^n$ precisely equals the set of top-heavy strings. By the discussion in the introduction, this immediately gives an indexing algorithm for orbits of $E$. How does a branching program verify top-heaviness? 
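Before answering, note that the definition itself can be checked directly with exact integer arithmetic. The following Python sketch (ours, with $x$ a tuple of bits) does so, and recovers the canonical rotation by trying all $n$ rotations; the branching program described next must verify the same condition without knowing ${\mathsf{wt}}(x)$ in advance.
\begin{verbatim}
# Direct check of top-heaviness: every prefix x_0..x_j must satisfy
#   sum(prefix) * n >= wt(x) * (j+1).
def is_top_heavy(x):
    n, w = len(x), sum(x)
    partial = 0
    for j, bit in enumerate(x):
        partial += bit
        if partial * n < w * (j + 1):
            return False
    return True

def canonical_rotation(x):
    """The unique top-heavy rotation (n prime, x not constant)."""
    for i in range(len(x)):
        r = x[i:] + x[:i]
        if is_top_heavy(r):
            return r
    raise ValueError("no top-heavy rotation found")
\end{verbatim}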
In parallel, for each $\ell \in \{ 1, \ldots, n-1\}$, the branching program checks if condition $C_\ell$ holds, where $C_\ell$ is:
$$\mbox{``}\forall\, 0 \leq j < n,\ \ \sum_{k = 0}^j x_k \geq \frac{(j+1) \cdot \ell}{n}\mbox{''}.$$
At the same time, it also computes the weight of $x$. At the final state, it checks whether $C_{\mathsf{wt}(x)}$ holds; $x$ is top-heavy if and only if it does. This completes the description of the indexing algorithm.

We also know an extension of this approach that can handle those $n$ having $O(1)$ prime factors. The key additional ingredient of this extension is a new encoding of strings that enables verification of properties like top-heaviness by automata.

\section{Complexity of indexing}\label{sec:complexity}
In this section, we explore some basic questions about the complexity theory of indexing and reverse-indexing. We would like to understand which sets can be indexed/reverse-indexed efficiently. The outline of this section is as follows. We first deal with indexing and reverse-indexing in a nonuniform setting. Based on some simple observations about what cannot be indexed/reverse-indexed, we make some naive, optimistic conjectures characterizing what is efficiently indexable/reverse-indexable, and then proceed to disprove these conjectures. We then make some natural definitions for indexing and reverse-indexing in a uniform setting, and conclude with some analogous naive, optimistic conjectures.

\subsection{Indexing and reverse-indexing in the nonuniform setting}
By simple counting, most sets $S \subseteq \{0,1\}^n$ cannot be indexed or reverse-indexed by circuits of size $\mathrm{poly}(n)$. We now make two naive and optimistic conjectures:
\begin{itemize}
\item If $S \subseteq \{0,1\}^n$ has a $\mathrm{poly}(n)$-size circuit recognizing it, then there is a $\mathrm{poly}(n)$-size circuit for indexing $S$.
\item If $S \subseteq \{0,1\}^n$ has a $\mathrm{poly}(n)$-size circuit recognizing it, then there is a $\mathrm{poly}(n)$-size circuit for reverse-indexing $S$.
\end{itemize}
Note that the simple observations about indexing made in the introduction are consistent with these conjectures. We now show that these conjectures are false (unless the polynomial hierarchy collapses). Assuming the conjectures, we will give $\Sigma_4$ algorithms to count the number of satisfying assignments of a given boolean formula $\phi$. By Toda's theorem~\cite{Toda}, this would imply that the polynomial hierarchy collapses.

Let $S \subseteq \{0,1\}^n$ be the set of satisfying assignments of a given boolean formula $\phi$ of size $m$ ($m \geq n$). We know that $S$ can be recognized by a circuit of size $m$ (namely $\phi$). By the conjectures, there are circuits $C_i$ and $C_r$ of size $\mathrm{poly}(m)$ for indexing $S$ and reverse-indexing $S$. We will now see that a $\Sigma_4$ algorithm can get its hands on these circuits, and then use these circuits to count the number of elements in $S$.

\paragraph{Indexing}
Consider the $\Sigma_4$ algorithm that does the following on input $\phi$. Guess a circuit $C:\{0,1\}^n \to \{0,1\}^n \cup \{``\mbox{{\bf too large}}"\}$ of size $\mathrm{poly}(m)$ and an integer $K < 2^n$, and then verify the following properties:
\begin{itemize}
\item for all $i \in [K]$, $C(i) \neq$ {\bf too large} and $\phi(C(i)) = 1$.
\item for all $i \notin [K]$, $C(i) =$ {\bf too large}.
\item for all $x \in \{0,1\}^n$, if $\phi(x) = 1$, then there exists a unique $i \in [K]$ for which $C(i) = x$.
\end{itemize}
If $C = C_i$ and $K = |S|$, then these properties hold. It is also easy to see that if all these properties hold, then $C$ is an indexing circuit for $S$ and $K = |S|$. Thus the above gives a $\Sigma_4$ algorithm to compute $|S|$.

\paragraph{Reverse-indexing}
Consider the $\Sigma_4$ algorithm that does the following on input $\phi$. Guess a circuit $C:\{0,1\}^n \to \{0,1\}^n \cup \{``\mbox{{\bf false}}"\}$ of size $\mathrm{poly}(m)$ and an integer $K < 2^n$, and then verify the following properties:
\begin{itemize}
\item for all $x \in \{0,1\}^n$, either ($\phi(x) = 1$ and $C(x) \in [K]$) or ($\phi(x) = 0$ and $C(x)= $ {\bf false}).
\item for all $i \in [K]$, there exists a unique $x \in \{0,1\}^n$ such that $C(x) = i$.
\end{itemize}
If $C = C_r$ and $K = |S|$, then these properties hold. It is also easy to see that if all these properties hold, then $C$ is a reverse-indexing circuit for $S$ and $K = |S|$. Thus the above gives a $\Sigma_4$ algorithm to compute $|S|$.

\subsection{Indexing and reverse-indexing in the uniform setting}
We now introduce a natural framework for talking about indexing in the uniform setting. Let $L \subseteq \Sigma^* \times \Sigma^*$ be a pair-language. For $x \in \Sigma^*$, define $L_x = \{ y \mid (x,y) \in L \}$. An algorithm $M(x,i)$ is said to be an indexing algorithm for $L$ if for every $x \in \Sigma^*$, the function $M(x, \cdot)$ is an indexing of the set $L_x$. An algorithm $M(x,y)$ is said to be a reverse-indexing algorithm for $L$ if for every $x \in \Sigma^*$, the function $M(x, \cdot)$ is a reverse-indexing of the set $L_x$. Indexing/reverse-indexing algorithms are said to be efficient if they run in time $\mathrm{poly}(|x|)$.

We now make some preliminary observations about the limitations of efficient indexing/reverse-indexing.
\begin{enumerate}
\item If $L$ can be efficiently indexed, then the counting problem for $L$ can be solved efficiently (recall that the counting problem for $L$ is the problem of determining $|L_x|$ when given $x$ as input; the counting problem can be solved via binary search using an indexing algorithm).
\item If $L$ can be efficiently reverse-indexed, then $L$ must be in $P$. Indeed, the reverse-indexing algorithm $M(x,y)$ immediately tells us whether $(x,y) \in L$.
\end{enumerate}
In the absence of any other easy observations, we gleefully made the following optimistic conjectures.
\begin{enumerate}
\item Every pair-language $L \in P$ for which the counting problem can be solved efficiently can be efficiently indexed.
\item Every pair-language $L \in P$ can be efficiently reverse-indexed.
\end{enumerate}
Using ideas similar to those used in the nonuniform case, one can show that the second of these conjectures is false (unless the polynomial hierarchy collapses). However, we have been unable to say anything interesting about the first conjecture; we conjecture that it, too, is false, and leave this as an open problem.
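To make the uniform-setting definitions concrete, here is a minimal Python sketch (the toy pair-language and all names are our own illustrative choices) of efficient indexing and reverse-indexing for $L = \{(x,y) : y \in \{0,1\}^{|x|},\ \mathsf{wt}(y) = K\}$, using the standard combinatorial ranking of fixed-weight strings in lexicographic order; note that the counting problem here is trivial, since $|L_x| = \binom{|x|}{K}$.

\begin{verbatim}
from math import comb

K = 2  # L_x = { y in {0,1}^{|x|} : y has exactly K ones }, ordered lexicographically

def index_to_string(x, i):
    """M(x, i): the i-th (0-based, lexicographic) string in L_x, or 'too large'."""
    n, k = len(x), K
    if not (0 <= i < comb(n, k)):
        return "too large"
    bits = []
    for pos in range(n):
        rest = n - pos - 1
        zeros_here = comb(rest, k)   # completions that place a 0 at this position
        if i < zeros_here:
            bits.append("0")
        else:
            bits.append("1")
            i -= zeros_here
            k -= 1
    return "".join(bits)

def string_to_index(x, y):
    """M(x, y): the rank of y within L_x, or 'false' if y is not in L_x."""
    n, k = len(x), K
    if len(y) != n or y.count("1") != k:
        return "false"
    rank = 0
    for pos, bit in enumerate(y):
        rest = n - pos - 1
        if bit == "1":
            rank += comb(rest, k)    # strings that agree so far but put a 0 here
            k -= 1
    return rank

# Toy usage: only |x| = 5 matters for this pair-language.
x = "00000"
print(index_to_string(x, 0))         # 00011
print(index_to_string(x, 9))         # 11000
print(string_to_index(x, "01010"))   # 4
\end{verbatim}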
\end{document}
\begin{document}
\title[Invariable generation]{Invariable generation and the Chebotarev invariant of a finite group}
\thanks{The authors acknowledge partial support from NSF grant DMS~0753640 (W.\hspace{1pt}M.\hspace{1pt}K.), ERC Advanced Grants 226135 (A.\hspace{1pt}L.) and 247034 (A.\hspace{1pt}S.), and ISF grant 754/08 (A.\hspace{1pt}L. and A.\hspace{1pt}S.). The first author is grateful for the warm hospitality of the Hebrew University while this paper was being written.}
\author{W. M. Kantor}
\address{University of Oregon, Eugene, OR 97403}
\email{[email protected]}
\author{A. Lubotzky}
\address{Institute of Mathematics, Hebrew University, Jerusalem 91904}
\email{[email protected]}
\author{A. Shalev}
\address{Institute of Mathematics, Hebrew University, Jerusalem 91904}
\email{[email protected]}
\begin{abstract}
A subset $S$ of a finite group $G$ {\em invariably generates} $G$ if $G=\langle s^{g(s)}\mid s\in S \rangle$ for each choice of $g(s)\in G$, $s\in S$. We give a tight upper bound on the minimal size of an invariable generating set for an arbitrary finite group $G$. In response to a question in \cite{KZ} we also bound the size of a randomly chosen set of elements of $G$ that is likely to generate $G$ invariably. Along the way we prove that every finite simple group is invariably generated by two elements.
\end{abstract}
\maketitle
\centerline{\em Dedicated to Bob Guralnick in honor of his 60th birthday}
\section{Introduction}\label{Introduction}
For many years there has been a rapidly growing literature concerning the generation of finite groups. This has involved the number $d(G)$ of generators of a group $G$, or the expected number $E(G)$ of random choices of elements needed to probably generate $G$, among other group-theoretic invariants. In this paper we will study further invariants.

Dixon \cite{Di1} began the probabilistic direction for generating (almost) simple groups, and later he also introduced yet another direction based on the goal of determining Galois groups \cite{Di2}. This has led to the following notions:

{\noindent \bf Definition.} Let $G$ be a finite group.
\begin{itemize}
\item [\rm(a)] A subset $S$ of $G$ {\em invariably generates} $G$ if $G=\langle s^{g(s)}\mid s\in S \rangle$ for each choice of $g(s)\in G$, $s\in S$ \cite{Di2}.
\item [\rm(b)] Let $d_I(G):= \min\{ |S|\,\big| \,S \mbox{ invariably generates } G\}$.
\item [\rm(c)] The {\em Chebotarev invariant} $C(G)$ of $G$ is the expected value of the random variable $n$ that is minimal subject to the requirement that $n$ randomly chosen elements of $G$ invariably generate $G$ \cite{KZ}.
\end{itemize}
There have been several papers discussing (a) for specific groups (such as finite simple groups) \cite{LP,NP,Sh,FG,KZ}, but not for finite groups in general. Concerning (c), recall that Chebotarev's Theorem provides elements of a suitable Galois group $G$, where the elements are obtained only up to conjugacy in $G$; the interest in (c) comes from computational group theory, where there is a need to know how long one should expect to wait in order to ensure that choices of representatives from the conjugacy classes provided by Chebotarev's Theorem will generate $G$. This is discussed more carefully in \cite{Di2,KZ}.

Our main results are the next two theorems, which depend on the classification of the finite simple groups.
\begin{theorem}\label{Theorem 1}
Every finite group $G$ is invariably generated by at most $\log_2|G|$ elements.
\end{theorem}
This bound is best possible: we show that $d_I(G) = \log_2|G|$ if and only if $G$ is an elementary abelian $2$-group. It is trivial that $d(G)\le \log_2|G|$ using Lagrange's Theorem. However, $d_I(G)$ may be much larger than $d(G)$: Proposition~\ref{powers} states that, {\em for every $r\ge1,$ there is a finite group $G$ such that $d(G)=2$ but $d_I(G)\ge r$.}

Theorem~\ref{composition length} contains a more precise statement of Theorem~\ref{Theorem 1} involving the length and structure of a chief series of $G$.
\begin{theorem}\label{Theorem 2}
There exists an absolute constant $c$ such that
$$C(G) \le c|G|^{1/2}(\log|G| )^{1/2}$$
for all finite groups $G$.
\end{theorem}
This bound is close to best possible: it is easy to see that sharply $2$-transitive groups provide an infinite family of groups $G$ for which $C(G)\sim |G|^{1/2}$ (compare \cite[Sec.~4]{KZ}). In fact \cite[Sec.~9]{KZ} asks whether $C(G) = O(|G|^{1/2})$ for all finite groups $G$ (which we view as rather likely).

For an arbitrary finite group it is interesting to compare $d_I(G)$ with $d(G)$, and $C(G)$ with $E(G)$. The upper bounds for $d_I(G)$ and $d(G)$ are identical, although (as stated above) these quantities may be very different. On the other hand, $E(G)\le ed(G)+2e\log\log|G|+11 = O(\log|G|)$ ~\cite{Lu}, which is far smaller than the bound in Theorem~\ref{Theorem 2}.

We will need the following result of independent interest.
\begin{theorem}\label{Theorem 3}
Every nonabelian finite simple group is invariably generated by $2$ elements.
\end{theorem}
In fact, for the proofs of Theorems~\ref{Theorem 1} and \ref{Theorem 2} we will need slightly stronger results on simple groups involving automorphisms as well (cf. Theorems~\ref{Theorem 3A} and \ref{Theorem 3C}). The same week that we proved these results about simple groups, essentially the same result as Theorem~\ref{Theorem 3A}, with a roughly similar proof, was posted in \cite{GM2}.

Dealing with simple groups uses the rather large literature of known properties of those groups. The fact that, for finite simple groups $G$, $d_I(G)$ and $C(G)$ are bounded by some (unspecified) constant $c$ follows for alternating groups from \cite{LP} (cf. \cite{KZ}), and for Lie type groups from results announced in \cite{FG} related to ``Shalev's $\epsilon$-Conjecture'', which concerns the number of fixed-point-free elements in simple permutation groups (cf. Section~\ref{proof of Theorem 2}).

The proof of Theorem~\ref{Theorem 2} uses bounds in \cite{CC} and \cite{FG} on the number of fixed-point-free elements of a transitive permutation group, together with a recent bound on the number of maximal subgroups of a finite group \cite{LPS}. We note that an explicit formula for $C(G)$ is given in \cite[Proposition~2.7]{KZ}, but we have not been able to use it since it appears to be too difficult to evaluate its terms for most groups $G$.

The proofs of Theorems~\ref{Theorem 1}, \ref{Theorem 2} and \ref{Theorem 3} are given in Sections~\ref{proof of Theorem 1}, \ref{proof of Theorem 2} and \ref{proof of Theorem 3}, respectively. Section~\ref{Preliminaries} contains the aforementioned result on the non-relationship of $d(G)$ and $d_I(G)$, as well as a characterization of nilpotent groups as those finite groups all of whose generating sets invariably generate.

This paper is dedicated to Bob Guralnick, who has made fundamental contributions in the various areas involved in this and other papers of ours.
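Purely as an illustration of the definition of invariable generation (and not of any method used in this paper), the following brute-force Python sketch checks whether a pair of elements of a small permutation group invariably generates it, by exhausting over all conjugate choices; all names are ours, and the approach is feasible only for very small groups.

\begin{verbatim}
from itertools import product

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens, identity):
    """All elements of the subgroup generated by gens (naive closure)."""
    elems, frontier = {identity}, [identity]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(g, s)
            if h not in elems:
                elems.add(h)
                frontier.append(h)
    return elems

def conjugacy_class(x, group):
    return {compose(compose(g, x), inverse(g)) for g in group}

def invariably_generates(x, y, group, identity):
    """True iff <x', y'> = group for every x' conjugate to x and y' to y."""
    return all(closure([a, b], identity) == group
               for a, b in product(conjugacy_class(x, group),
                                   conjugacy_class(y, group)))

# Toy usage on S_4 (order 24), acting on {0, 1, 2, 3}.
identity = (0, 1, 2, 3)
s4 = closure([(1, 0, 2, 3), (1, 2, 3, 0)], identity)   # <(0 1), (0 1 2 3)>
four_cycle = (1, 2, 3, 0)
three_cycle = (1, 2, 0, 3)
print(len(s4))                                          # 24
# True: any 4-cycle and any 3-cycle together generate S_4.
print(invariably_generates(four_cycle, three_cycle, s4, identity))
\end{verbatim}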
\section{Preliminary results and examples} \label{Preliminaries} Unless otherwise stated, we assume that the group $G$ is finite. If $X, Y \subseteq G$, we say that $Y$ is \emph{similar} to $X$ if there is a function $f\colon X \rightarrow Y$ such that $f(X) = Y$ and, for each $x \in X$, $f(x)$ is conjugate in $G$ to $x$. Thus $X$ invariably generates $G$ if and only if $\langle Y \rangle = G$ for each $Y \subseteq G$ that is similar to $X$. Let ${\mathcal M}ax(G)$ denote the set of maximal subgroups of $G$. Let ${{\mathcal M}} ={\mathcal M}(G)$ be a set of representatives of conjugacy classes of maximal subgroups of $G$. If $M\in {\mathcal M}ax(G)$, write $$\widetilde{M} = \betaigcup_{g \in G} M^g \mbox{\ \ and \ } v( M ) = \frac{ |\widetilde{M}|}{|G|}\:.$$ Clearly $\widetilde{M_1} = \widetilde{M_2}$ if the maximal subgroups $M_1, M_2$ are conjugate in $G$. Also, $\widetilde{M}$ is the set of elements of $G$ having at least one fixed point in the primitive permutation representation of $G$ on the set $G/M$ of (left) cosets of $M$ in $G$. \betaegin{lemma} \label{invariable generation criterion} A subset $X \subseteq G$ generates $G$ invariably if and only if $X \not\subseteq \widetilde{M}$ for all $M \in {\mathcal M}$. \end{lemma} \noindent {\bf Proof.~} If $X \subseteq \widetilde{M}$ for some $M \in {\mathcal M}$ then each element of $X$ is conjugate to an element of $M$, and hence $X$ does not generate $G$ invariably. Conversely, if $X$ does not generate $G$ invariably, then there exists a set $Y$ similar to $X$ such that $\langle Y \rangle \ne G$. Hence (using the finiteness of $G$) there exist $M \in {\mathcal M}$ and $g\in G$ such that $\langle Y \rangle \subseteq M^g$, and hence $X \subseteq \widetilde{M}$. \hbox{~~\Aaa\char'003} The ``only if'' part of the above lemma also holds for infinite groups. Moreover, the proof shows that $X \subseteq G$ generates an arbitrary group $G$ invariably only if $X \not\subseteq \widetilde{H}$ for all $H<G$. This enables us to show that some infinite groups are not invariably generated by any set of elements. For example, there are countable groups $G$ all of whose nontrivial elements are conjugate \cite{HNN} (and even 2-generated groups with this property \cite{Os}), so that $\widetilde{H}=G$ for every nontrivial subgroup $H$ and hence even $G$ itself does not generate $G$ invariably. However, for finite groups there are no anomalies of this kind, since $\widetilde{H} \ne G$ for all proper subgroups $H$. In fact, if $k(G)$ denotes the number of conjugacy classes of (elements of) the finite group $G$, then we have \betaegin{lemma} For any finite group $G$ we have $d_I(G) \le k(G)$. Moreover$,$ $d_I(G)$ is at most the number of conjugacy classes of cyclic subgroups of $G$. \end{lemma} \noindent {\bf Proof.~} If $H$ is the subgroup of $G$ generated by a set of cyclic subgroups, one from each conjugacy class, then the union of all conjugates of $H$ is $G$, and hence $H=G$.~\hbox{~~\Aaa\char'003} For $k \ge 1$, let $P_I(G,k)$ be the probability that $k$ randomly chosen elements of $G$ generate $G$ invariably. \betaegin{lemma} \label{trivial bounds} $\displaystyle \max_{M\in {\mathcal M}}v(M)^k \le 1-P_I(G,k) \le \sum_{M \in {\mathcal M}} v(M)^k $. \end{lemma} \noindent {\bf Proof.~} Let $g_1, \ldots , g_k \in G$ be randomly chosen. Given $M \in {\mathcal M}$, the probability that $g_i \in \widetilde{M}$ for all $i$ is $v(M)^k$. 
Both inequalities now follow easily from Lemma~\ref{invariable generation criterion}.~\hbox{~~\Aaa\char'003} We next characterize nilpotent groups in terms of invariable generation. \betaegin{Proposition} \label{nilpotent} A finite group $G$ is nilpotent if and only if every generating set of $G$ invariably generates $G$. \end{Proposition} \noindent {\bf Proof.~} Let ${\rm P}hi(G)$ denote the Frattini subgroup of $G$. Then a subset of $G$ generates $G$ if and only if its image in $G/{\rm P}hi(G)$ generates $G/{\rm P}hi(G)$. Suppose $G$ is nilpotent. Then $G/{\rm P}hi(G)$ is abelian. Suppose $X \subseteq G$ generates $G$, and let $Y \subseteq G$ be similar to $X$. Clearly the images of $X$ and $Y$ in the abelian group $G/{\rm P}hi(G)$ coincide. Since the image of $X$ generates $G/{\rm P}hi(G)$, so does the image of $Y$. It follows that $Y$ generates $G$. We conclude that $X$ invariably generates $G$. Now suppose $G$ is not nilpotent. We shall construct a generating set $X$ for $G$ that does not generate $G$ invariably using a theorem of Wielandt \cite[p. ~132]{R}: if $G/{\rm P}hi(G)$ is abelian then $G$ is nilpotent. Then $G/{\rm P}hi(G)$ is not abelian, and hence some maximal subgroup $M$ of $G$ is not normal in $G$. Let $g \in G$ with $M^g \ne M$. Let $x \in M^g \setminus M$ and $X: = M \cup \{ x \}$. Then $\langle X \rangle = G$ since $M$ is maximal, so that $M \cup \{ x^{g^{-1}} \} = M$ is similar to $X$ and is proper in $G$. This implies that $X$ does not generate $G$ invariably. \hbox{~~\Aaa\char'003} In particular, for nilpotent $G$ we have $d_I(G)=d(G)$. For simple groups, by Theorem~\ref{Theorem 3} we also have the same equality (with both sides 2). However, our next result shows that, in general, $d_I(G)$ is not bounded above by any function of $d(G)$: \betaegin{proposition} \label{powers} For every $r\ge1$ there is a finite group $G$ such that $d(G)=2$ but $d_I(G)\ge r$. \end{proposition} This group $G$ will be a power $T^k$ of an alternating group $T$. For this purpose we recall an elementary criterion in \cite[Proposition~6]{KL}: \betaegin{proposition} \label{KL criterion} Let $G=T^k$ for a nonbelian finite simple group $T$. Let $S=\{s_1,\dots,s_r\}\subset G$, so that $s_i=(t_1^i,\dots,t_k^i), t_j^i\in T$. Form the matrix $$ A=\betaegin{pmatrix} t_1^1&\dots & t_k^1 \\ &\dots & \\ t_1^r&\dots & t_k^r \\ \end{pmatrix}. $$ Then $S$ generates $G$ if and only if the following both hold$:$ \betaegin{itemize} \item[\rm(a)] If $1\le j\le k$ then $T=\< t_j^1,\dots,t_j^r \>;$ and \item[\rm(b)] The columns of $A$ are in different ${\rm Aut}(T)$-orbits for the diagonal action of ${\rm Aut}(T)$ on $T^r$. \end{itemize} \end{proposition} {\noindent \betaf Proof of Proposition~\ref{powers}.} Fix $n$, let $T=A_n$ and let $k=k(n)$ be the largest integer such that $d(G)=2$, where $G:=G_n=T^{k}$. Then $k\ge n!/8$ \, (\cite[Example~2]{KL}, obtained from Proposition \ref{KL criterion}). Let $S$ be as in Proposition~\ref{KL criterion}, and assume that $S$ invariably generates $G$. Then we can arbitrarily conjugate each $t_j^i$ independently and still generate $G$. Let ${\betaf C}(T)$ denote the set of conjugacy classes of $T$. Project each column $\betaeta_j$ of $A$ to $\betaar \betaeta_j\in {\betaf C}(T)^{r} $. In view of conditions (a) and (b) in Proposition~\ref{KL criterion}, the $\betaar \betaeta_j$ are in different ${\rm Aut}(T)$-orbits of the diagonal action on ${\betaf C}(T)^{r} $. The number of conjugacy classes in $T$ is at most $c^{\sqrt n}$, so $|{\betaf C}(T)|^{r} \le c^{r\sqrt n}$. 
The number of projections $\betaar \betaeta_j$ is $k$ (since $1\le j\le k$), where $k\ge n!/8$. Then $c^{r\sqrt n} \geq n!/8$ by the Pigeon Hole Principle, so that $|S|=r \geq C \sqrt n\log n$.~\hbox{~~\Aaa\char'003} \section{Proof of Theorem~\ref{Theorem 1}} \label{proof of Theorem 1} Let $l(G)$ denote the length of a chief series of $G$. The following is a stronger version of Theorem~\ref{Theorem 1}: \betaegin{Theorem} \label{composition length} Let $G$ be a finite group having a chief series with $a$ abelian chief factors and $b$ non-abelian chief factors. Then $$d_I(G) \le a + 2b.$$ In particular$,$ $d_I(G) \le 2l(G),$ and if $G$ is solvable then $d_I(G) \le l(G)$. \end{Theorem} \noindent {\bf Proof.~} We use induction on $|G|$ (the case $|G|=1$ being trivial). Suppose $|G| > 1$ and let $N \lhd G$ be a minimal normal subgroup of $G$. It suffices to show that $$d_I(G) \le d_I(G/N) + c,$$ where $c=1$ if $N$ is abelian and $c=2$ if $N$ is non-abelian. In the latter case our proof relies on Theorem~\ref{Theorem 3A} (proved below). Let $X \subseteq G$ be a set of size $d_I(G/N)$ whose image in $G/N$ generates $G/N$ invariably. Suppose first that $N$ is abelian. Let $x \in N$ be any non-identity element of $N$. We claim that $Y = X \cup \{ x \}$ \emph{invariably generates $G$}. Indeed, suppose $Z \subseteq G$ is similar to $Y$. Then the image of $Z$ in $G/N$ generates $G/N$ (by the assumption on $X$). Moreover, $Z$ contains a conjugate $z = x^g$ that is a non-identity element of $N$. Since $G/N$ acts irreducibly on $N$, $\<Z\>\ge N$. It follows that $\<Z\>=G$, so $Y$ generates $G$ invariably. Thus $d_I(G) \le d_I(G/N) + 1$ in this case. Now suppose $N$ is non-abelian. Then $N = T_1 \times \cdots \times T_k$, where $k \ge 1$ and the $T_i$ are non-abelian finite simple groups such that the conjugation action of $G$ on $N$ induces a transitive action of $G/N$ on the set $\{ T_1, \ldots , T_k \}$. The group $A := N_G(T_1)/C_G(T_1)$ is an almost simple group with socle $T_1^\star:=T_1C_G(T_1)/C_G(T_1)\cong T_1$. By Theorem~\ref{Theorem 3A}, there are elements $x_1\in T_1^\star, $ $ x_2 \in A$ such that $\langle x_1^{a_1}, x_2^{a_2} \rangle \ge T_1^\star$ for all $a_1, a_2 \in A$. Let $y_1\in T_1, y_2 \in N_G(T_1)$, be pre-images of $x_1, x_2$, respectively. We claim that $Y: = X \cup \{ y_1, y_2 \}$ {\em invariably generates $G$.} To see this, let $Z$ be a set similar to $Y$, so $Z = X'\cup \{ y_1^{g_1},y_2^{g_2}\}$ where $X'$ is similar to $X$ and $g_i\in G$ ($i=1,2)$. We need to show that $Z$ generates $G$. Let $K = \langle Z \rangle$ and $H = \langle X' \rangle$. Since $X$ invariably generates $G$ modulo $N$ we have $HN = G$. Hence $H$ acts transitively (by conjugation) on $\{ T_1, \ldots , T_k \}$. Moreover, $T_1^{g_1} = T_i$ and $T_1^{g_2} = T_j$ for some $i, j$. By the transitivity of $H$ there are elements $h_1, h_2 \in H$ such that $T_i^{h_1} = T_1$ and $T_j^{h_2} = T_1$. Then $g_1h_1, g_2h_2 \in N_G(T_1)$. Clearly $y_1^{g_1h_1}\in T_1^{g_1h_1}=T_1$ and $y_2^{g_2h_2}\in N_G(T_1)^{g_2h_2}= N_G(T_1).$ Then $y_1^{g_1h_1}$ and $y_2^{g_2h_2}$ induce automorphisms of $T_1$ by conjugation. In view of our choice of $x_1$ and $x_2$, $\<y_1^{g_1h_1}, y_2^{g_2h_2}\>$ induces all inner automorphisms of $T_1$. In particular, the conjugates of the element $y_1^{g_1h_1}\in T_1$ under this group generate the simple group $T_1$. Thus, $K\ge \<y_1^{g_1h_1},y_2^{g_2h_2},H \>\ge T_1$, so that $K \ge T_i$ for all $i$ and hence $G=KN = K$, as required. 
We see that $d_I(G) \le d_I(G/N) + 2$ in the non-abelian case. This completes the proof of the first assertion in the theorem. The last two assertions follow immediately. \hbox{~~\Aaa\char'003} {\noindent \betaf We can now complete the proof of Theorem~\ref{Theorem 1}.} Let $G,a,b$ be as above. Every abelian chief factor of $G$ has order at least $2$, while every non-abelian chief factor has order at least $60$. This yields $|G| \ge 2^a 60^b$, so that $$\log_2|G| \ge a + (\log_2{60})b \ge a+2b \ge d_I(G),$$ as required. Moreover, if $d_I(G)= \log_2|G|$ then we must have $b=0$, and all chief factors of $G$ have order $2$. Thus $G$ is a 2-group, so that $d_I(G)=d(G)= \log_2|G|$ by Proposition \ref{nilpotent}. Now $d(G) = \log_2 |G|$ easily implies that $G$ is an elementary abelian $2$-group. \hbox{~~\Aaa\char'003} Note that the bound in Theorem~\ref{composition length} is tight both for non-abelian simple groups and for elementary abelian $p$-groups. \section{Proof of Theorem~\ref{Theorem 2}} \label{proof of Theorem 2} The main result of this section is the following. \betaegin{theorem} \label{square root theorem} For any $\epsilon > 0$ there exists $c = c(\epsilon)$ such that $P_I(G,k) \ge 1 - \epsilon$ for any finite group $G$ and any $k \ge c |G|^{1/2}(\log{|G|})^{1/2}$. \end{theorem} \noindent {\bf Proof.~} For $M \le G$ let $M_G=\cap_{g\in G}M^g$ denote the {\em core} of $M$ in $G$, the kernel of the permutation action of $G$ on the set of conjugates of $M$. Divide the set ${\mathcal M}$ of representatives of conjugacy classes of maximal subgroups of $G$ into three subsets ${\mathcal M}_1, {\mathcal M}_2, {\mathcal M}_3$ as follows. The set ${\mathcal M}_1$ consists of the subgroups $M \in {\mathcal M}$ such that the primitive group $G/M_G$ is not of affine type. The set ${\mathcal M}_2$ consists of the subgroups $M \in {\mathcal M}$ such that the primitive group $G/M_G$ is of affine type and $|G\colon\! M| \le |G|^{1/2}/(\log{|G|})^{1/2}.$ Finally, ${\mathcal M}_3$ consists of the remaining subgroups in ${\mathcal M}$, namely the subgroups $M$ such that $G/M_G$ is affine and $|G \colon \!M| > |G|^{1/2}/(\log{|G|})^{1/2}.$ By \cite[Theorem~1.3]{LPS}, for any finite group $G$ we have $|{\mathcal M}ax(G)| \le c_1 |G|^{3/2},$ where $c_1$ is an absolute constant. In particular, for $i=1,2,3$, $$|{\mathcal M}_i| \le |{\mathcal M}| \le c_1 |G|^{3/2}.$$ Fix $k \ge 1$ and let $g_1, \ldots , g_k\in G$ be randomly chosen (we will restrict $k$ in later parts of the proof). By Lemma 2.1, $$1-P_I(G,k) \le P_1 + P_2 + P_3,$$ where $P_i$ is the probability that $g_1, \ldots , g_k \in \widetilde{M}$ for some $M \in {\mathcal M}_i$ ($i=1,2,3$). It suffices to show that, for $k$ as in the statement of the theorem, $P_i < \epsilon/3$ for $i = 1,2,3$. We bound each of the probabilities $P_i$ separately. By increasing the constant $c$ we may assume that $|G|$ is as large as required in various parts of the proof. \para{The set ${\mathcal M}_1$.}\ To bound $P_1$ we use \cite[Theorem~8.1]{FG}: the proportion of fixed-point-free permutations in a non-affine primitive group of degree $n$ is at least $c_2/\log{n}$, for some absolute constant $c_2 > 0$. 
This shows that, for $M \in {\mathcal M}_1$, $$v(M) \le 1 - c_2/\log{|G \colon \!M|} \le 1 - c_2/\log{|G|}.$$ By Lemma 2.3 and its proof, $$P_1 \le \sum_{M \in {\mathcal M}_1} v(M)^k \le |{\mathcal M}_1| (1 - c_2/\log{|G|})^k \le c_1 |G|^{3/2} (1 - c_2/\log{|G|})^k.$$ Since $(1-x)^k \le \exp(-kx)$ for $0<x<1$, for any $c_3> \log c_1+3/2$ the right hand side is bounded above by $\exp(c_3 \log{|G|} - c_2k/{\log{|G|}})$. If $k >c_4(\log{|G|})^2$ for a suitable absolute constant $c_4$, then the latter expression tends to zero as $|G| \rightarrow \infty$, and hence so does $P_1$. In particular we have $P_1 < \epsilon/3$ for $|G|$ large enough. \para{The set ${\mathcal M}_2$. }\ We next bound $P_2$. Here our main tool is the theorem that the proportion of fixed-point-free elements in any transitive permutation group of degree $n$ is at least $1/n$ \cite{CC}. This implies that, if $M \in {\mathcal M}_2$, then $$v(M) \le 1-|G:M|^{-1} \le 1-(|G|/\log{|G|})^{-1/2}.$$ Therefore $$P_2 \! \le \!\!\sum_{M \in {\mathcal M}_2} \!\! v(M)^k \le |{\mathcal M}_2| \betaig ( 1 - (|G|/\log{|G|})^{-1/2} \betaig)^k \! \le c_1 |G|^{3/2} \betaig( 1 - ( |G|/\log{|G|} )^{-1/2} \betaig )^k.$$ As before the right side is bounded above by $\exp(c_3\log{|G|} - k \betaig(|G|/\log{|G|})^{-1/2} ) \betaig)$ for suitable $c_3>3/2$. This in turn tends to zero as $|G| \rightarrow \infty$ for any $k > c_5|G|^{1/2}(\log{|G|})^{1/2}$, for arbitrary $c_5>c_3$. Therefore $P_2 \rightarrow 0$ for such $k$, and $P_2 < \epsilon/3$ for all sufficiently large $|G|$. \para{The set ${\mathcal M}_3$. }\ Finally we bound $P_3$. If $M \in {\mathcal M}_3$ then $G/M_G = V\hbox{^2\kern-.8pt Bbb o} H$, where $V$ is an elementary abelian $p$-group for some prime $p$, acting regularly on the set of cosets of $M$ in $G$, and $H$ is a point-stabilizer acting irreducibly on $V$. Fix a chief series $\{ G_i \}$ of $G$. Fix ${M \in {\mathcal M}_3}$, and let ${\pi \colon G \rightarrow G/M_G}$ be the canonical projection. The series $\{ \pi(G_i) \}$ of normal subgroups of $\pi(G) = G/M_G$ descends from $G/M_G = V \hbox{^2\kern-.8pt Bbb o} H$ to $1$. If $i$ is minimal such that $\pi(G_{i+1}) = 1$, then $\pi(G_i)$ is a minimal normal subgroup of $G/M_G$, and hence is $V$, the unique minimal normal subgroup of $G/M_G$. In this situation we shall say that $M$ {\it uses} $G_i/G_{i+1}$, in which case $G_i/G_{i+1} \cong V$. (For, since $\pi(G_i)=\pi(G_i)/\pi(G_{i+1})$ is a nontrivial $G$-homomorphic image of $G_i/G_{i+1}$ it is isomorphic to $G_i/G_{i+1}$.) We have seen that every $M \in {\mathcal M}_3$ uses $G_i/G_{i+1}$ for a unique $i$. Moreover, since $M\in {\mathcal M}_3$, $$|G_i\!:\!G_{i+1}| = |V|=|G\!:\! M| > (|G|/\log{|G|})^{1/2}.$$ We claim that, {\em if $G$ is sufficiently large$,$ then it has at most two abelian chief factors used by any maximal subgroups in ${\mathcal M}_3$.} Indeed, if there were (at least) three such chief factors, appearing at places $i>j>l$ in our chief series, then we would obtain the contradiction $$|G| \ge |G_i \colon\! G_{i+1}||G_j \colon\! G_{j+1}||G_l \colon\! G_{l+1}| > \betaig((|G|/\log{|G|})^{1/2} \betaig)^3.$$ Fix an abelian chief factor $V=G_i/G_{i+1}$ of $G$ as above. Then each $g \in G_i \setminus G_{i+1}$ acts fixed-point-freely on the cosets of any $M$ that uses $G_i/G_{i+1}$ (since $gM_G \in V \setminus \{ 1 \}$). 
For each such $M$ we have $\widetilde{M} \subseteq G \setminus (G_i \setminus G_{i+1}).$ Since $$|G \colon \!G_i| \le |G|/|G_i \colon\!G_{i+1}| =|G|/|V|\le (|G| \log{|G|} )^{1/2}$$ by the definition of ${\mathcal M}_3$, the proportion of elements $g \in G_i \setminus G_{i+1}$ inside $G$ is at least ${1 \over 2} |G\colon\! G_i|^{-1} \ge {1 \over 2} (|G| \log{|G|})^{-1/2}$. Since the union of $\widetilde M^k$ over all $M$ using $G_i/G_{i+1}$ is contained in $\betaig (G \setminus (G_i \setminus G_{i+1})\betaig)^k$, it follows that the probability that randomly chosen elements $g_1, \ldots , g_k$ of $G$ all lie in $\widetilde{M}$ for some such $M$ is at most $ (1 - {1 \over 2} (|G| \log{|G|})^{-1/2} )^k $. Although there may be many choices for $M$ in ${\mathcal M}_3$, there are at most two choices for the chief factor $G_i/G_{i+1}$. Thus, $$P_3 \le 2 \betaig(1 - {1 \over 2} (|G| \log{|G|})^{-1/2} \betaig)^k \le 2 \exp \betaig( -{k \over 2} (|G| \log{|G|})^{-1/2} \betaig), $$ where the right hand side is less than $\epsilon/3$ for $k \ge c (|G| \log{|G|})^{1/2}$ for some $c = c(\epsilon)$. Our bounds on the three probabilities $P_i$ complete the proof. ~\hbox{~~\Aaa\char'003} \noindent{\betaf Remark.~} Recall that the {\em $\epsilon$-conjecture}, posed by the third author of this paper, states that there exists an absolute constant $\epsilon > 0$ such that the proportion of fixed-point-free elements in any finite simple transitive permutation group is at least $\epsilon$. This amounts to saying that $v(M) \le 1-\epsilon$ for any finite simple group $G$ and any $M\in {\mathcal M}ax(G)$. This conjecture holds for alternating groups \cite{LP} and for Lie type groups of bounded rank \cite[Secs. 3 and 4]{FG}. Moreover, in \cite[Theorem~1.3]{FG} it is announced that the $\epsilon$-conjecture holds in general, and proofs in some additional cases appear in \cite{FG2}. When ${M \in {\mathcal M}_1}$ our proof of Theorem~\ref{square root theorem} uses \cite[Theorem 8.1]{FG}, which in turn relies on the $\epsilon$-conjecture. However, we now show that Theorem~\ref{Theorem 3C} below easily yields a weaker version of \cite[Theorem 8.1]{FG} that still suffices for our purpose. \para{The set ${\mathcal M}_1$ revisited.}\ Namely, we claim that there exists $c_2>0$ such that $$v(M) \le 1 - c_2 (\log |G|)^{-2} |G|^{-1/3},$$ {\em where $G$ is any non-affine primitive permutation group and $M$ is a point-stabilizer.} For, if $s_1, s_2$ generate $G$ invariably, and if ${M\in{\mathcal M}ax(G),}$ then $\widetilde {M} \cap s_i^G = \emptyset$ for $i = 1$ or $2$, in which case $v(M) \le 1 - |s_i^G|/|G|.$ Then $v(M) \le 1 - {1 \over 2}|G|^{-1/3}$ for each sufficiently large finite simple group $G$ and each such $M$, by Theorem~\ref{Theorem 3C}. This implies that, for all finite simple groups $G$ and all $M\in {\mathcal M}ax(G)$, we have $v(M) \le 1 - c_3|G|^{-1/3}$ for some constant $c_3 > 0$. Consequently, if $G$ is an almost simple group with socle $T$ then, since $|{\rm Out}(T)| \le c_4 \log|T|$ (cf. \cite[Sec.~2.5]{GLS}), we easily obtain $$v(M) \le 1 - c_5 (\log |G|)^{-1} |G|^{-1/3}$$ for all $M\in {\mathcal M}ax(G)$ not containing $T$, for some $c_5 > 0$. Our claim follows by combining this inequality with the reduction to almost simple groups given in the proof of \cite[Theorem~8.1]{FG}. 
Thus, if $M \in {\mathcal M}_1$, then the above claim yields $$P_1 \le \sum_{M \in {\mathcal M}_1} v(M)^k \le c_1 \log|G| (1 - c_2 (\log |G|)^{-2} |G|^{-1/3})^k.$$ The right hand side tends to zero when $k \ge c_6 (\log|G|)^3|G|^{1/3}$; but for the proof of Theorem~\ref{square root theorem} we can assume the stronger inequality $k \ge c_7 |G|^{1/2} (\log|G|)^{1/2}$. Consequently $P_1\to 0$, as required. \para{Completion of proof of Theorem \ref{Theorem 2}}. Apply Theorem \ref{square root theorem} with $\epsilon = 1/2$ and let $c = c(1/2)$. Let $k=\lceil c|G|^{1/2} (\log|G|)^{1/2}\rceil$. Then $k$ randomly chosen elements of $G$ invariably generate $G$ with probability at least $1/2$. This implies that $$C(G) \le 2k \le(2c+1)|G|^{1/2} (\log|G|)^{1/2}. \ \ \ \hbox{~~\Aaa\char'003}$$ \betaegin{corollary} \betaegin{itemize} \item[(a)] If $G$ is a finite group without abelian composition factors$,$ then $C(G) = O((\log{|G|})^2)$. \item[(b)] If $G$ is an almost simple group$,$ then $C(G) = O(\log{|G|} \log \log |G|)$. \end{itemize} \end{corollary} \noindent {\bf Proof.~} We have already seen (a) in our first treatment of the non-affine case ($M\in {\mathcal M}_1$) of Theorem~\ref{square root theorem}. To prove (b) we first note that, for some $c > 0$ and all $M \in {\mathcal M}$, we have $v(M) \le 1 - c/\log|G|$. Indeed, if $M$ has trivial core then this follows from \cite[Theorem 8.1]{FG} (and hence from the correctness of the $\epsilon$-conjecture stated above). Otherwise, $M$ contains the simple socle $T$ of $G$, and $|G/T| \le |{\rm Out}(T)| \le c_4 \log|T| \le c_4 \log|G|$ as noted above. In this situation, if $g \in G$ acts fixed-point-freely on the cosets of $M$ in $G$, so do all the elements of $gT$, so that $v(M) \le 1 - c_4^{-1}/\log|G|$. By \cite[Theorem~1.3]{GLT}, $|{\mathcal M}| \le c_1 (\log{|G|})^3$ when $G$ is almost simple. This yields $$\sum_{M \in {\mathcal M}} v(M)^k \le c_1 (\log{|G|})^3 (1- c/\log|G|)^k \le c_1 (\log{|G|})^3 \exp({- ck/\log|G|} ).$$ The right hand side tends to zero as $|G| \rightarrow \infty$ when $k \ge c_2 \log|G| \log \log |G|$. This proves part (b). \hbox{~~\Aaa\char'003} We observe that {\em the bound in} (b) {\em is almost best possible, up to the $\log \log{|G|}$ factor.} To show this we use the following example \cite[p.~115]{FG}. Fix any prime $p$. Let $G = {\rm PSL}(2,p^b).b$, the extension of the simple group by the group $B$ of $b$ field automorphisms, where $b$ is a prime not dividing $p(p^2-1)$. Let $G$ act on the cosets of the maximal subgroup $N_G(B)$ of $G$. Then all fixed-point-free elements are contained in the socle of $G$, so their proportion is less than $1/b$. Therefore $v(M) \ge 1- 1/b$. Hence, by Lemma~\ref{trivial bounds}, $P_I(G,k) \le 1- (1-1/b)^k$, so that for sufficiently large $b$ we obtain $$P_I(G,k) \le 1 - (1-c_1/\log|G|)^k \le 1 - \exp(-c_2k/\log|G|),$$ where $c_1, c_2$ are suitable constants. Thus $P_I(G,k) \le 1/2$ for all $k \le c_3 \log|G|$, where $c_3 > 0$ is an absolute constant. The probability that it takes at least $k+1$ random choices of elements to invariably generate $G$ is $1-P_I(G,k)$. By the definition of the expectancy $C(G)$ we have $C(G) \ge (k+1)(1-P_I(G,k))$. If $k=[c_3 \log|G|]$ then $1-P_I(G,k) \ge 1/2$ and $k+1 \ge c_3 \log|G|$. This yields $C(G) \ge (k+1) (1 / 2) \ge (c_3/2)\log|G|.$ \section{Simple groups} \label{proof of Theorem 3} We will prove the following slightly stronger version of Theorem~\ref{Theorem 3}: \betaegin{Theorem} \label{Theorem 3A} Let $G$ be a finite simple group. 
\betaegin{itemize} \item[\rm(a)] If $G$ is not one of the groups ${\rm P\Omega}^+(8,q),$ $q=2$ or $3,$ then there are two elements $s_1,s_2\in G$ such that $G=\<s_1^{g_1}, s_2^{g_2} \>$ for each choice of $g_i\in {\rm Aut}(G).$ \item[\rm(b)] If $G$ is ${\rm P\Omega}^+(8,q),$ $q=2$ or $3,$ and if $G\le G^\star\le {\rm Aut}(G),$ then there are elements $s_1\in G,s_2\in G^\star$ such that $G\le \<s_1^{g_1}, s_2^{g_2} \>$ for each choice of $g_i\in G^\star.$ \end{itemize} \end{Theorem} Of course, Theorem~\ref{Theorem 3} is just (a) using inner automorphisms. This theorem is also obtained in \cite[Theorem~7.1]{GM2}, along with the fact that ${\rm P\Omega}^+(8,2) $ is an actual exception. We begin with the easiest case: \betaegin{Lemma} \label{alternating} {\rm Theorem~\ref{Theorem 3A}} holds for each alternating group $A_n,$ $n\ge5$. \end{Lemma} \noindent {\bf Proof.~} If $n\ne6$ then ${\rm Aut}(A_n)=S_n$. For even $n>6$ use the product of a disjoint $2$-cycle and $(n-2)$-cycle, and the product of a disjoint $p$-cycle and $(n-p)$-cycle for a prime $p\le n-3$ not dividing $n$; it is easy to check that such a prime exists. These two elements generate a group $H$ that is readily seen to be transitive and even primitive. Since $H$ contains a $p$-cycle, $H=A_n$ by a classical result of Jordan \cite[Theorem~13.9]{Wie}. If $n$ is odd then an $n$-cycle and a $p$-cycle can be used in the same manner, for an odd prime $p\le n-3$ not dividing $n$. Finally, $A_6$ is generated by any elements of order 4 and 5. \hbox{~~\Aaa\char'003} For groups of Lie type we will use the knowledge of all maximal overgroups $M$ of a carefully chosen semisimple element $t_1$. Then, by Lemma~\ref{invariable generation criterion}, we only need to choose an ${\rm Aut}(G)$-conjugacy class of elements that does not meet the union of the corresponding sets $\widetilde M$. Our arguments differ from those in \cite{GM2} primarily due to that paper using \cite{GM} whereas we rely more on the earlier paper \cite{MSW}. \betaegin{Lemma} \label{classical} {\rm Theorem~\ref{Theorem 3A}} holds for each classical simple group other than ${\rm P\Omega}^+(8,q)$. \end{Lemma} { \font\sevenroman=cmr8 \font\seventemp=cmsy8 \font\sevenital=cmmi8 \textfont0=\sevenroman \textfont2=\seventemp \textfont1=\sevenital \betaegin{table}[t] \caption{Classical groups} \label{classical torus} $$\betaegin{array}{|l|l|l|l|l|} \hline \mbox{\sevenroman quasisimple } G\!\!& |t_1|& t_1 ~\mbox{\sevenroman on }V& |t_2|& t_2 ~\mbox{\sevenroman on }V\\ \hline {\rm SL }(n,\!q) & (q^n-1)/(q-1)& n&(q^{n-1}-1)/(q-1) &(n-1)\oplus1 \\\ \ n \text{ \sevenroman odd}&&&& \\ \hline {\rm SL }(n,\!q) &(q^{n-1}\!-\!1)/(q\!-\!1)\!\! &(n-1)\oplus1 & (q^n-1)/(q-1)& n \\\ \ n\ge4\text{ \sevenroman even}&&&&\\ \hline {\rm Sp}(2m,q)& q^m+1&2m& {\rm lcm}(q^{m-1}+1,q+1) &(2m-2)\perp 2\!\! \\\ \ m\ge2 &&&& \\ \hline {\rm O}mega(2m+1,q)& (q^m+1)/2 & 2m^-\perp 1 & (q^m-1)/2& (m\oplus m)\perp 1 \\\ \ q \text{ \sevenroman odd}&&&& \\ \hline {\rm O}mega^+(4k,q) &( q^{n'-1}+1)/\delta_1 \raisebox{2.3ex} {~} &(n-2)^-\!\!\perp\! 2^-\!\!\! &{\rm lcm}(q^{n'-2}\!+\!1,q^{2}\!+\!1)/\delta_2\!\! & (n-4)^-\perp 4^-\!\! \\ \,\, n=2n'=4k\!&&&& \\ \hline {\rm O}mega^+(4k+2,q)\! &( q^{n'-1}+1)/\delta_1 \raisebox{2.3ex} {~} &(n-2){}^-\!\!\perp 2^-\!\!& (q^{n'}-1)/\delta_2 & n'\oplus n' \\ \,\, 2n'\!=4k+2\!&&&& \\ \hline {\rm O}mega^-(4k,q)& (q^{n'}+1)/\delta_1 \raisebox{2.3ex} {~} &n^- & (q^{{n'}-1}-1)/\delta_2 &(n-2)^+\perp 2^- \!\! 
\\ \,\, n=2n'\!=4k&&&& \\ \hline {\rm O}mega^-(4k+2,q)\!& (q^{2k+1}+1)/\delta_1 \raisebox{2.3ex} {~} &(4k+2)^- & (q^{2k}+1)/\delta_2 &4k^-\perp 2^+ \\ \hline {\rm SU}(2m,q) &q^{2m-1}+1&(2m-1)\perp 1 &(q^{2m}-1)/(q+1)&2m \\ \hline {\rm SU}(2m+1,q) \!& (q^{n}+1)/(q+1)&n&q^{n-1}-1&n-1\perp 1 \\ \hline \end{array} $$ \end{table} } \noindent {\bf Proof.~} We will consider the corresponding quasisimple linear group $G$, using semisimple elements $t_1$ and $t_2$ in Table~\ref{classical torus} that decompose the space as indicated in the table. (Here $\delta_i$ is 1 or 2, $n$ is the dimension of the underlying vector space $V$, and $n'=n/2$. If an entry involves ${\rm lcm}(q^i+1,q^j+1)$ for some $i,j$, then $t_2$ induces irreducible elements of order $q^i+1$ or $q^j+1$ on the indicated subspaces of dimension $2i$ or $2j$.) In each case, $t_1$ is the element called ``$s$'' in \cite[Theorem~1.1]{MSW}; if there is a $1-$ or $2-$space indicated then it is centralized. For each group $G$, all maximal overgroups of $t_1$ are listed in \cite[Theorem~1.1]{MSW}. Until the end of the proof we will exclude the case $G={\rm Sp}(4,q)$. Then all automorphisms of $G$ act on $V$, preserving the underlying geometry \cite[Sec.~2.5]{GLS}. It follows that all ${\rm Aut}(G)$-conjugates of $t_i$ act on $V$ as $t_i$ does (for $i=1,2$). We always use conjugates of $t_1$ and $t_2$ that have no assumed relationship to one another, so if the two elements studied generate $G$ then they invariably generate $G$. If $G$ is not ${\rm SL }(2,q)$, ${\rm Sp}(4,q)$ or ${\rm Sp}(8,2)$, then $t_1$ and $t_2$ invariably generate $G$ by \cite[Theorem 1.1]{MSW}: all of the exceptions in that theorem do not arise here due to the behavior of {\em both} $t_1$ and $t_2$ on $V$. If $G={\rm Sp}(8,2)$ then we replace $t_2$ by another element, as follows. Let $f\in G$ have order 5 and centralize a nondegenerate $4-$space. Then $C_G(f)=\<f\>\times {\rm Sp}(4,2)$. Let $c=(1,2)(3,4,5,6)\in S_6\cong {\rm Sp}(4,2)<C_G(f)$. Then $c\notin S_5\cong {\rm O}^-(4,2)$, and hence $fc$ is not in an overgroup ${\rm O}^-(8,2)$ of $t_1$. Since its order implies that $fc$ is also not in any of the other maximal overgroups of $t_1$ \cite[Theorem~1.1]{MSW}, it follows that $t_1$ and $fc$ invariably generate $G$. Case ${\rm SL }(2,q)$. When $q$ is $4, 5$ or 9, see Lemma~\ref{alternating}. When $q=7$, elements of order 7 and 4 invariably generate $G$. For all other $q\ge4$, the same $t_1$ and $t_2$ as indicated in the table (but with $t_1$ acting irreducibly on each $1-$space) invariably generate $G$ by \cite[Ch.~XII]{Di}. Case ${\rm Sp}(4,q)$. We may assume that $q\ge4 $ since ${\rm Sp}(4,2)$ is not simple and ${\rm PSp}(4,3)\cong {\rm PSU}(4,2)$. We again use $t_1$ and $t_2$ as in the table, such that $t_2$ induces an element of order $q+1$ inside the ${\rm Sp}(2,q)$ produced by each factor in the decomposition $4=2\perp 2$. Once again $t_1$ and $t_2$ invariably generate $G$ by \cite[Theorem~1.1]{MSW}.~\hbox{~~\Aaa\char'003} We note that classical groups were considered in \cite[Section~10]{NP} from a probabilistic point of view: a large number of pairs of elements was described that invariably generate various classical groups. The group $^2\kern-.8pt GL(n,q)$ was also handled in \cite{Sh} for large $n$. All groups of Lie type also were dealt with probabilistically, at least for bounded rank, in \cite[Theorem~5.3]{FG}. \betaegin{Lemma} \label{O+8} {\rm Theorem~\ref{Theorem 3A}} holds for ${\rm P\Omega}^+(8,q)$. 
\end{Lemma} \noindent {\bf Proof.~} Once again we will consider the corresponding linear group $G={\rm O}mega^+(8,q)$, using the properties of ${\rm Aut}(G/Z(G))$ contained in \cite[Sec.~2.5]{GLS}. We have $G/Z(G) \betareak\le G^\star\le {\rm Aut}(G/Z(G))$. (a) Suppose first that $q>3$. We will use the same $\<t_1\>$ as above (mod $Z(G)$), of order $ (q^3+1)/(2,q-1)$. It acts on our space as $8^+=6^-\perp 2^-$, centralizing the $2-$space. We also use an element $t_3 \in G$ of order $(q^3-1)/(2,q-1)$. Here $t_3 $ decomposes our space as $8^+=(3\oplus 3)\perp (1\oplus 1)$ using totally singular $3-$ and $1-$spaces, inducing isometries of order $q-1$ on the subspace $1\oplus 1$ and of order $q^3-1$ on the subspace $3\oplus 3$, and hence acting irreducibly on the indicated $3-$spaces. Then $t_3 $ fixes exactly two singular $1-$spaces, and two totally singular $4-$spaces in each $G$-orbit of such 4-spaces (each of the latter fixed subspaces has the form $3\perp 1$). If $\tau $ is any automorphism of $G/Z(G)$, then $t_3 ^\tau$ has the same properties. In particular, neither $t_3 $ nor $t_3 ^\tau$ fixes any anisotropic $1-$ or $2-$space for any $\tau\in {\rm Aut}(G/Z(G))$. (N.\:B.--This requires that $q>3$: if $q=3$ then the analogous element $t_3 $ induces $-1$ on the $2^+-$space $1\oplus 1$ and hence fixes all of its $1-$spaces.) However, by \cite[Theorem 1.1]{MSW} each maximal subgroup of $G/Z(G)$ that contains $t_1$ (mod $Z(G))$ either fixes such a $1-$ or $2-$space or its image under a triality automorphism behaves that way. Hence, there is no maximal subgroup containing $t_1 $ and $t_3$ mod $Z(G)$, and we have invariably generated $G/Z(G)$. (b) From now on $q\le 3$. First consider the case where $G^\star$ acts (projectively) on $V$ (this includes the situation in Theorem~\ref{Theorem 3}). We use elements $t_3$ and $t_4$ of $G/Z(G)$ of order $(q^4-1)/(4, q^4-1)$ arising from a decomposition $8^+=4^-\perp 2^-\perp 2^+$ and from a decomposition $8^+=4\oplus 4$ into totally singular $4-$spaces (the corresponding cyclic groups $\<t_i\>$ are conjugate under ${\rm Aut}(G/Z(G))$ but not under $G^\star$). The Sylow $5$-subgroups of $\<t_3\>$ and $\<t_4\>$ behave differently on the vector space, and $\<t_4\>$ is an element of order $(q^4-1)/(4, q^4-1)$ that acts fixed-point-freely on $V$. Hence, by \cite{Kl}, $\<t_3,t_4\>$ is contained in no proper subgroup of $G$, so that $\<t_3,t_4\> = G$. Finally, suppose that $G^\star$ does not have any ${\rm Aut}(G/Z(G))$-conjugate that acts on $V$. Here we return to the original setting of the theorem, now letting $G$ denote the simple group ${\rm P\Omega}^+(8,q)$. Since Out$(G)\cong S_4$ or $S_3$, we may assume that $G^\star$ contains a triality outer automorphism. Consequently, there is a subgroup $\hbox{^2\kern-.8pt Bbb Z}_3\times{\rm SL }(3,q)$ of $G^\star$ that contains an element $t_5$ of order $3(q^2 + q +1)$ such that $\tau=t_5^{q^2+q+1}$ is a triality automorphism and $t_5^3$ acts projectively on $V$ as $8^+=(3\oplus 3)\perp(1\oplus 1)$. By \cite[Theorem 1.1]{MSW}, $\<t_1,t_5\>\cap G \ge \<t_1,t_5^3\>$ is either $G$, ${\rm O}mega(7,q)$ or lies in $A_9<{\rm O}mega^+(8,2)$. Since $\<t_1,t_5\>\cap G $ is invariant under $\tau$, only the first of these can occur (for example, $\<t_1,t_5\>\cap G $ cannot be $A_9$ or ${\rm PSL}(2,8)<A_9$). 
Thus, $t_1$ and $t_5$ invariably generate $G \<\tau\>$.~\hbox{~~\Aaa\char'003} {\noindent \betaf Completion of proof.} In \cite[Tables~6~and~9]{GM} there are lists of carefully chosen cyclic subgroups of exceptional and sporadic simple groups, as well as all of the maximal overgroups $M$ of those subgroups. It is straightforward to use those tables to handle these final cases of Theorem~\ref{Theorem 3A}. This amounts to exhibiting an element order for $G$ not appearing in any of the listed subgroups $M$. We provide some details for the exceptional groups. Table~\ref{exceptional torus} reproduces part of \cite[Table~6]{GM}. Here $T_1$ is a cyclic maximal torus and $M$ runs through the isomorphism types of maximal overgroups of $T_1$. (Notation: $\epsilon =\pm1$, ${\rm P}hi_n ={\rm P}hi_n (q)$ is the $n$th cyclotomic polynomial evaluated at~$q$, ${\rm P}hi_8'={\rm P}hi_8'(q)=q^2+\sqrt{2}q+1$, ${\rm P}hi_{12}'={\rm P}hi_{12}'(q)=q^2+\sqrt{3}q+1$ and ${\rm P}hi_{24}'={\rm P}hi_{24}'(q)=q^4+\sqrt{2}q^3+q^2+\sqrt{2}q+1$.) In each case, the order of $t_2$ guarantees that it is not contained in any of the listed maximal overgroups $M$ (there are also other choices for $t_2$). Hence, a generator of $T_1$ together with $t_2$ behave as required in the theorem. \hbox{~~\Aaa\char'003} In Section~\ref{proof of Theorem 2} we needed a bit more information than in the preceding theorem for an alternative proof of Theorem~\ref{square root theorem} and hence of Theorem 1.2: \betaegin{Theorem} \label{Theorem 3B} \label{Theorem 3C} For all sufficiently large $G$ in {\rm Theorem~\ref{Theorem 3A}}$,$ the elements $s_i$ can be chosen so that $|s_i^G|>|G|^{2/3} /2$ for $i=1,2$. \end{Theorem} \noindent {\bf Proof.~} This is a straightforward matter of examining each part of the proof of Theorem~\ref{Theorem 3A}. In each case we need to check that $|C_G(s_i)|< 2|G|^{1/3}\,$ for $i=1,2$ and all sufficiently large $|G|$. For alternating groups, when $n$ is even each of the groups $C_G(s)$ is the direct product of two cyclic groups, and hence has order satisfying the required bound. When $n$ is odd the same holds if we replace the $p$-cycle by the product of a disjoint $p$-cycle and an $(n-p)$-cycle (a power of which is a $p$-cycle). In Lemma~\ref{classical} -- excluding ${\rm SL }(2,q)$~-- we have $|C_G(T_1) | \sim q^r$ and $|C_G(t_2) | \sim q^r$, where $r$ is the rank of the corresponding algebraic group. (For example, for ${\rm SL }(n,q) $ we have $|C_G(T_1) | = (q^n-1)/(q-1)$ or $q^{n-1}-1$, for ${\rm Sp}(2m,q)$ we have $|C_G(t_2) | \le (q^{m-1}+1) (q+1)$, and for ${\rm O}mega^+(4k+2,q)$ we have $|C_G(T_1) | \le (q^{2k}+1)(q+1)$.) A straightforward calculation using $|G|$ verifies that these bounds are small enough for our purposes. When $G={\rm SL }(2,q)$ we have $|C_G(T_1) | =q+1$, so that $|s_i^G|>|G|^{2/3} /2$ and a denominator larger than 1 is essential. \hbox{~~\Aaa\char'003} { \font\sevenroman=cmr8 \font\seventemp=cmsy8 \font\sevenital=cmmi8 \textfont0=\sevenroman \textfont2=\seventemp \textfont1=\sevenital \betaegin{table}[t] \caption{Exceptional groups} \label{exceptional torus} $$\betaegin{array}{|l|l|l|l|l|} \hline G& |T_1|& M\ge T_1& \mbox{\sevenroman further max.} & |t_2|\\ \hline ^2B_2(q^2)& {\rm P}hi_8'& N_G(T_1)& - \raisebox{2ex} {~}\raisebox{-1.2ex} {~} &{\rm P}hi_8'(-q) \\ \ \ q^2\ge8 & & & & \\\hline ^2G_2(q^2)\!\!& {\rm P}hi_{12}'& N_G(T_1)& -\raisebox{2ex} {~}\raisebox{-1.2ex} {~}& {\rm P}hi_{12}'(-q) \\ \ \ q^2\ge27 & & & & \\\hline G_2(q),\: 3|q+\epsilon\!\! 
& q^2+\epsilon q+1 \raisebox{2.2ex} {~} & {\rm SL } ^\epsilon(3,q).2& {\rm PSL}(2,13) & q^2-\epsilon q+1\\ & & & \ (q=4) & \\\hline G_2(q),\: 3|q& q^2+q+1& {\rm SL } (3,q).2& {\rm PSL}(2,13) & q^2- q+1\\ & & & \ (q=3) & \\\hline ^3D_4(q)& {\rm P}hi_{12}& N_G(T_1)& - \raisebox{2.2ex} {~}\raisebox{-1.2ex} {~} & (q^3 \!+\! 1)(q \!- \!1)/(2,q\!-\!1)\!\! \\ \hline ^2F_4(q^2)& {\rm P}hi_{24}'& N_G(T_1)& - \raisebox{2.2ex} {~}\raisebox{-1.2ex} {~} & {\rm P}hi_{24}'(-q) \\ \ \ q^2\ge8 & & & & \\\hline F_4(q) & {\rm P}hi_{12}& ^3D_4(q).3 & {\rm PSL}(4,3).2_2, & q^4+1 \\ & & & ^2F_4(2) ~ (q\!=\!2),\!\! & \\ & & & {\rm PSL}(4,3).2_2 \, (q\!=\!2)\!\! & \\\hline E_6(q)& {\rm P}hi_9/(3,q-1)& {\rm SL }(3,q^3).3& - & (q\! + \!1)(q^5 \!-\! 1)/(6,q\!-\!1)\!\!\\ \hline ^2\hspace{-.5pt}E_6(q)& {\rm P}hi_{18}/(3,q+1)& {\rm SU}(3,q^3).3& - \raisebox{2.2ex} {~}\raisebox{-1.2ex} {~}& (q \!-\! 1)(q^5\! +\! 1)/(6,q\!+\!1)\!\!\\ \hline E_7(q)& {\rm P}hi_2{\rm P}hi_{18}/(2,q\!-\!1)\!\!& ^2\!E_6(q)_{sc}.D_{q+1} \!\! \! & - & {\rm P}hi_7/(2,q-1)\!\! \\ \hline E_8(q)& {\rm P}hi_{30} & N_G(T_1)& - &{\rm P}hi_{24} \\ \hline \end{array} $$ \end{table} } \para{Random generation.} We conclude with remarks concerning the random generation of finite simple groups. All finite simple groups $G$ are generated by two randomly chosen elements with probability tending to 1 as $|G| \rightarrow \infty$ \cite{Di1,KL,LS}. We claim that this does not hold for invariable generation: {\em the probability that two~--~or any bounded number of~--~random elements of a finite simple group $G$ invariably generate $G$ is bounded away from $1$.} To show this we need the following result that is implicit in \cite{FG}. \betaegin{lemma} There exists an absolute constant $\epsilon > 0$ such that any finite simple group $G$ has a maximal subgroup $M$ for which $v(M) \ge \epsilon$. \end{lemma} \noindent {\bf Proof.~} This is trivial for alternating groups $A_n$, where we take $M$ to be a point-stabilizer in the natural action, so $v(M) \sim 1-e^{-1}$. For groups $G$ of Lie type of bounded rank over a field with $q$ elements we may assume $q$ is large, and then the result follows with $M$ a maximal subgroup containing a maximal torus (see the discussion in \cite[start of Sec.~4]{FG}). For classical groups of large rank the result follows from \cite[Theorem~1.7]{FG}. Sporadic simple groups satisfy the conclusion trivially. \hbox{~~\Aaa\char'003} This lemma can be considered as a kind of weak analogue of the $\epsilon$-conjecture (stated above) but in the opposite direction. We can now deduce \betaegin{corollary} There is an absolute constant $\epsilon > 0$ such that $P_I(G,k) \le 1 - \epsilon^k$ for all finite simple groups $G$ and positive integers $k$.\end{corollary} \noindent {\bf Proof.~} This follows by combining the above lemma with Lemma~\ref{trivial bounds}. \hbox{~~\Aaa\char'003} In \cite[p.~114]{FG} it is announced that, for any $\epsilon > 0$, there is $c = c(\epsilon)$ such that $P_I(G,k) \ge 1-\epsilon$ whenever $G$ is a finite simple group of Lie type and $k \ge c$. The case of bounded rank is proved in \cite[Theorem 4.4]{FG}, and a similar result for alternating groups was proved earlier in \cite{LP}. Using these results it follows that, for any function $f\colon \!\hbox{^2\kern-.8pt Bbb N} \rightarrow \hbox{^2\kern-.8pt Bbb N}$ such that ${f(n) \rightarrow \infty}$ as $n \rightarrow \infty$ (even if arbitrarily slowly), we have $P_I(G,f(|G|)) \rightarrow 1$ for finite simple groups $G$ whose orders tend to infinity. 
\begin{thebibliography}{[MSW]} \bibitem[CC]{CC} P.~J. Cameron and A.~M. Cohen, On the number of fixed point free elements in a permutation group, Discrete Math. 106/107 (1992) 135--138. \bibitem[Di]{Di} L. E. Dickson, Linear groups with an exposition of the Galois field theory. Dover (reprint), New York 1958. \bibitem[Di1]{Di1} J.~D.~Dixon, The probability of generating the symmetric group. Math. Z. 110 (1969) 199--205. \bibitem[Di2]{Di2} J.~D.~Dixon, Random sets which invariably generate the symmetric group. Discrete Math. 105 (1992) 25--39. \bibitem[FG1]{FG} J. Fulman and R.~M.~Guralnick, Derangements in simple and primitive groups. Groups, combinatorics \& geometry (Durham, 2001; Eds. A.~A. Ivanov, M.~W. Liebeck and J.~Saxl), 99--121, World Sci. Publ., River Edge, NJ 2003. \bibitem[FG2]{FG2} J. Fulman and R.~M.~Guralnick, Bounds on the number and sizes of conjugacy classes in finite Chevalley groups with applications to derangements (to appear in Trans. AMS; preprint arXiv:0902.2238v1). \bibitem[GLS]{GLS} D. Gorenstein, R. Lyons and R. Solomon, The classification of the finite simple groups. Number 3. Part I. Chapter~A. Almost simple K-groups. AMS, Providence 1998. \bibitem[GLT]{GLT} R.~M. Guralnick, M. Larsen and P.~H. Tiep, Representation growth in positive characteristic and conjugacy classes of maximal subgroups (preprint arXiv:1009.2437). \bibitem[GM1]{GM} R.~M. Guralnick and G. Malle, Products of conjugacy classes and fixed point spaces (preprint arXiv:1005.3756v2). \bibitem[GM2]{GM2} R.~M. Guralnick and G. Malle, Simple groups admit Beauville structures (preprint arXiv:1009.6183). \bibitem[HNN]{HNN} G. Higman, B. H. Neumann and H. Neumann, Embedding theorems for groups. J. London Math. Soc. 24 (1949) 247--254. \bibitem[Kl]{Kl} P.~B. Kleidman, The maximal subgroups of the finite $8$-dimensional orthogonal groups $P\Omega^+_8(q)$ and of their automorphism groups. J. Algebra 110 (1987) 173--242. \bibitem[KL]{KL} W.~M. Kantor and A. Lubotzky, The probability of generating a finite classical group. Geom. Ded. 36 (1990) 67--87. \bibitem[KZ]{KZ} E. Kowalski and D. Zywina, The Chebotarev invariant of a finite group (preprint arXiv:1008.4909v). \bibitem[LPS]{LPS} M. W. Liebeck, L. Pyber and A. Shalev, On a conjecture of G.~E. Wall. J. Algebra 317 (2007) 184--197. \bibitem[LS]{LS} M.~W. Liebeck and A. Shalev, The probability of generating a finite simple group. Geom. Ded. 56 (1995) 103--113. \bibitem[Lu]{Lu} A. Lubotzky, The expected number of random elements to generate a finite group. J. Algebra 257 (2002) 452--459. \bibitem[LuP]{LP} T. \L{}uczak and L. Pyber, On random generation of the symmetric group. Combin. Probab. Comput. 2 (1993) 505--512. \bibitem[MSW]{MSW} G. Malle, J. Saxl and T. Weigel, Generation of classical groups. Geom. Ded. 49 (1994) 85--116. \bibitem[NP]{NP} A. Niemeyer and C.~E. Praeger, A recognition algorithm for classical groups over finite fields. Proc. London Math. Soc. 77 (1998) 117--169. \bibitem[Os]{Os} D. Osin, Small cancellations over relatively hyperbolic groups and embedding theorems. Ann. Math. 172 (2010) 1--39. \bibitem[Rob]{R} D. J. Robinson, A Course in the Theory of Groups. Springer, New York 1982. \bibitem[Sh]{Sh} A. Shalev, A theorem on random matrices and some applications. J. Algebra 199 (1998) 124--141. \bibitem[Wie]{Wie} H. Wielandt, Finite Permutation Groups. Academic Press, New York and London 1964. \end{thebibliography} \end{document}
\begin{document} \begin{abstract} This note studies the asymptotic behavior of global solutions to the fourth-order generalized Hartree equation $$i\dot u+\Delta^2 u\pm(I_\alpha*|u|^p)|u|^{p-2}u=0.$$ Indeed, for both the attractive and the repulsive sign, scattering is obtained in the mass super-critical and energy sub-critical regime, in the radial setting. \end{abstract} \maketitle \renewcommand{\theequation}{\thesection.\arabic{equation}} \section{Introduction} This note is concerned with the energy scattering theory of the Cauchy problem for the following Choquard equation \begin{equation} \left\{ \begin{array}{ll} i\dot u+\Delta^2 u+\epsilon(I_\alpha*|u|^p)|u|^{p-2}u=0 ;\\ u(0,.)=u_0. \label{S} \end{array} \right. \end{equation} Here and hereafter $u: \mathbb{R}\times\mathbb{R}^N \to \mathbb{C}$, for some $N\geq5$. The defocusing regime corresponds to $\epsilon=1$ and the focusing regime to $\epsilon=-1$. The source term satisfies $p\geq2$. The Riesz potential is defined on $\mathbb{R}^N$ by $$I_\alpha:x\to\frac{\Gamma(\frac{N-\alpha}2)}{\Gamma(\frac\alpha2)\pi^\frac{N}22^\alpha|x|^{N-\alpha}},\quad 0<\alpha<N.$$ The bi-harmonic Schr\"odinger problem was first considered in \cite{Karpman,Karpman 1} to take into account the role of small fourth-order dispersion terms in the propagation of intense laser beams in a bulk medium with a Kerr non-linearity.\\ The equation \eqref{S} satisfies the scaling invariance $$u_\lambda=\lambda^\frac{4+\alpha}{2(p-1)}u(\lambda^4.,\lambda .),\quad\lambda>0.$$ This gives the critical Sobolev index $$s_c:=\frac N2-\frac{4+\alpha}{2(p-1)}.$$ In this note, one focuses on the mass super-critical and energy sub-critical regime $0<s_c<2$, which is equivalent to $1+\frac{\alpha+4}N<p<1+\frac{\alpha+4}{N-4}$.\\ To the author's knowledge, there is only a small literature treating the fourth-order Hartree equation. Indeed, some local and global well-posedness results in $H^s$ for the Cauchy problem associated to the fourth-order non-linear Schr\"odinger-Hartree equation with variable dispersion coefficients were obtained in \cite{cb}. Moreover, a sharp threshold of global well-posedness and scattering of energy solutions versus finite time blow-up was given in \cite{st0} in the mass super-critical and energy sub-critical regime. See also \cite{cd} for the stationary case.\\ It is the aim of this note to investigate the asymptotic behavior of global solutions to the fourth-order generalized Hartree equation \eqref{S}. Indeed, for the defocusing (repulsive) sign, by use of a Morawetz estimate and a decay result in the spirit of \cite{NV}, one obtains the scattering of global solutions in the energy space. For the focusing (attractive) sign, thanks to the small data theory, a Morawetz estimate and a variational analysis, the scattering of global solutions is established.\\ The rest of this paper is organized as follows. The second section contains the main results and some technical estimates. Section three is devoted to the scattering of global solutions in the defocusing case. The last section establishes the scattering of global solutions in the focusing regime.\\ Here and hereafter, $C$ denotes a constant which may change from line to line. Denote the Lebesgue space $L^r:=L^r({\mathbb{R}^N})$ with the usual norm $\|\cdot\|_r:=\|\cdot\|_{L^r}$ and $\|\cdot\|:=\|\cdot\|_2$.
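For orientation, the following small Python sketch (ours; the helper names are not from this note) evaluates the critical index $s_c$ and the inter-critical range $p_*<p<p^*$ for a few sample values of $(N,\alpha)$, illustrating that $s_c$ runs from $0$ to $2$ as $p$ runs from the mass-critical to the energy-critical exponent.
\begin{verbatim}
# Critical Sobolev index and inter-critical range for the fourth-order
# Hartree equation (sketch; helper names are ours).
def s_c(N, alpha, p):
    return N / 2 - (4 + alpha) / (2 * (p - 1))

def p_mass(N, alpha):      # mass-critical exponent p_*
    return 1 + (alpha + 4) / N

def p_energy(N, alpha):    # energy-critical exponent p^* (N >= 5)
    return 1 + (alpha + 4) / (N - 4)

for N, alpha in [(5, 1), (6, 2), (7, 3)]:
    lo, hi = p_mass(N, alpha), p_energy(N, alpha)
    p = (lo + hi) / 2      # a sample inter-critical exponent
    print(N, alpha, round(lo, 3), round(hi, 3),
          round(s_c(N, alpha, lo), 3), round(s_c(N, alpha, p), 3),
          round(s_c(N, alpha, hi), 3))   # s_c equals 0, lies in (0,2), equals 2
\end{verbatim}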
The inhomogeneous Sobolev space $H^2:=H^2({\mathbb{R}^N})$ is endowed with the norm $$ \|\cdot\|_{H^2} := \Big(\|\cdot\|^2 + \|\Delta\cdot\|^2\Big)^\frac12.$$ Let us also denote $C_T(X):=C([0,T],X)$ and write $X_{rd}$ for the set of radial elements in $X$. Finally, for a maximal solution to \eqref{S}, $T^*>0$ denotes its lifespan. \section{Background and main results} This section contains the contribution of this paper and some standard estimates needed in the sequel. \subsection{Preliminaries} The mass-critical and energy-critical exponents for the Choquard problem \eqref{S} read respectively $$p_*:=1+\frac{\alpha+4}N\quad\mbox{and}\quad p^*:=\left\{ \begin{array}{ll} 1+\frac{4+\alpha}{N-4},\quad\mbox{if}\quad N\geq5;\\ \infty,\quad\mbox{if}\quad 1\leq N\leq4. \end{array} \right.$$ The above fourth-order Schr\"odinger problem \eqref{S} has a local solution \cite{st0} in the energy space for the energy sub-critical regime $2\leq p<p^*$. Moreover, the solution satisfies the following conservation laws \begin{gather*} Mass:=M[u(t)]:=\int_{\mathbb{R}^N}|u(t,x)|^2dx = M[u_0];\\ Energy:=E[u(t)] :=\int_{\mathbb{R}^N}\Big(|\Delta u(t)|^2+\frac\epsilon p (I_\alpha *|u(t)|^p)|u(t)|^p\Big)dx= E[u_0]. \end{gather*} \begin{rem} Thanks to the inequality \eqref{ineq}, the energy is well-defined for $1+\frac\alpha N\leq p \leq p^*$. So the condition $p\geq2$, which gives a restriction on the space dimension, seems to be technical. \end{rem} For $u\in H^2$ and $\epsilon=-1$, take the action, the constraint and two positive real numbers \begin{gather*} S[u]:=M[u]+E[u]=\|u\|_{H^2}^2-\frac1p\int_{\mathbb{R}^N}(I_\alpha*|u|^p)|u|^p\,dx;\\ K[u]:=\|\Delta u\|^2-\frac B{2p}\int_{\mathbb{R}^N}(I_\alpha*|u|^p)|u|^p\,dx;\\ B:=\frac{Np-N-\alpha}2\quad\mbox{and} \quad A:=2p-B. \end{gather*} \begin{defi} Let us recall that a ground state of \eqref{S} is a solution to \begin{equation}\label{grnd} \phi+\Delta^2\phi-(I_\alpha*|\phi|^p)|\phi|^{p-2}\phi=0,\quad0\neq\phi\in H^2, \end{equation} which minimizes the problem $$m:=\inf_{0\neq u\in H^2}\Big\{S[u] \quad\mbox{s.t.}\quad K[u]=0\Big\}.$$ \end{defi} In the focusing regime, one denotes, for $u\in H^2$ and $\phi$ a ground state solution to \eqref{grnd}, the scale invariant quantities \begin{gather*} \mathcal{ME} [u]:=\frac{E[u]^{s_c}M[u]^{2-s_c}}{E[\phi]^{s_c}M[\phi]^{2-s_c}};\\ {\mathcal M\mathcal G}[u]:=\frac{\|\Delta u\|^{s_c}\|u\|^{2-s_c}}{\|\Delta\phi\|^{s_c}\|\phi\|^{2-s_c}}. \end{gather*} There exists a sharp threshold of global existence versus finite time blow-up of solutions \cite{st0}. \begin{prop}\label{Blow-up} Let $N\geq2$, $0<\alpha<N<8+\alpha$, $0<s_c<2$, let $\phi$ be a ground state solution to \eqref{grnd} and let ${u}\in C_{T^*}(H^2_{rd})$ be a maximal solution of \eqref{S}. Suppose that \begin{equation} \mathcal{ME}[u]<1.\label{ss} \end{equation} \begin{enumerate} \item[1.] Assume that $p<3$ and $${\mathcal M\mathcal G}[u]>1.$$ Then, ${u}$ blows up in finite time, i.e., $0<T^*<\infty$ and $$\limsup_{t\to T^*}\|\Delta u(t)\|= +\infty;$$ \item[2.] Assume that $E(u_0)\geq0$ and \begin{equation} {\mathcal M\mathcal G}[u]<1.\label{ss2}\end{equation} Then, $T^*=\infty$ and $u$ scatters. Precisely, there exists $\psi\in H^2$ such that $$\limsup_{t\to\infty}\|u(t)-e^{it\Delta^2}\psi\|_{H^2}=0.$$ \end{enumerate} \end{prop} \begin{rems} \begin{enumerate} \item[1.] The finite time blow-up part seems to be a partial result because of the restriction $p<3$; \item[2.]
the previous result is inspired by works on the NLS case \cite{km,Holmer}. \end{enumerate} \end{rems} Let us close this sub-section with a sharp Gagliardo-Nirenberg inequality \cite{st0} related to the Choquard problem \eqref{S}. \begin{prop}\label{gag} Let $0<\alpha<N\geq1$ and $1+\frac\alpha N< p< p^*$. Then, \begin{enumerate} \item[1.] there exists a positive constant $C(N,p,\alpha)$, such that for any $u\in H^2$, \begin{equation}\label{ineq} \int_{\mathbb{R}^N}(I_\alpha*|u|^p)|u|^p\,dx\leq C(N,p,\alpha)\|u\|^A\|\Delta u\|^B; \end{equation} \item[2.] the minimization problem $$\frac1{C(N,p,\alpha)}=\inf\Big\{J(u):=\frac{\|u\|^A\|\Delta u\|^B}{\int_{\mathbb{R}^N}(I_\alpha*|u|^p)|u|^p\,dx},\quad0\neq u\in H^2\Big\}$$ is attained in some $Q\in H^2$ satisfying ${C(N,p,\alpha)}=\int_{\mathbb{R}^N}(I_\alpha*|Q|^{p})|Q|^p\,dx$ and $$B\Delta^2Q+AQ-\frac{2p}{C(N,p,\alpha)}(I_\alpha*|Q|^p)|Q|^{p-2}Q=0;$$ \item[3.] furthermore $$C(N,p,\alpha)=\frac{2p}{A}(\frac AB)^{\frac{B}2}\|\phi\|^{-2(p-1)},$$ where $\phi$ is a ground state solution to \eqref{grnd}. \end{enumerate} \end{prop} \subsection{Main results} This sub-section contains the contribution of this note. The first main goal is to prove the following scattering result in the defocusing radial regime. \begin{thm}\label{sctr} Let $N\geq5$, $0<\alpha<N<8+\alpha$ and $p_*< p<p^*$ such that $p\geq2$. Take $\epsilon=1$ and let $u\in C(\mathbb{R},H^2_{rd})$ be a global solution to \eqref{S}. Then, there exist $u_\pm\in H^2$ such that $$\lim_{t\to\pm\infty}\|u(t)-e^{it\Delta^2}u_\pm\|_{H^2}=0.$$ \end{thm} In order to prove the scattering, one needs a decay property of global solutions to the Choquard equation \eqref{S}. \begin{prop}\label{dcy} Let $N\geq5$, $0<\alpha<N<8+\alpha$ and $p_*< p<p^*$ such that $p\geq2$. Take $\epsilon=1$ and let $u\in C(\mathbb{R},H^2_{rd})$ be a global solution to \eqref{S}. Then, $$\lim_{t\to\pm\infty}\|u(t)\|_r=0,\quad\mbox{for all}\quad 2<r<\frac{2N}{N-4}.$$ \end{prop} The following Morawetz estimate is the standard tool used to prove the previous decay result. \begin{prop}\label{cr} Let $N\geq5$, $0<\alpha<N<8+\alpha$ and $p_*< p<p^*$ such that $p\geq2$. Take $\epsilon=1$ and let $u\in C(\mathbb{R},H^2_{rd})$ be a global solution to \eqref{S}. Then, $$\int_\mathbb{R}\int_{\mathbb{R}^N}|x|^{-1}(I_\alpha*|u(t)|^{p})|u(t,x)|^p\,dx\,dt\lesssim \|u_0\|_{H^2}.$$ \end{prop} \begin{rems} \quad{}\\ \begin{enumerate} \item[1.] The condition $N\geq5$ is required because of the Morawetz estimate; \item[2.] the radial assumption is required in one step of the proof of the Morawetz estimate; \item[3.] the decay of solutions is weaker than the scattering, but it is available in the mass-sub-critical case. \end{enumerate} \end{rems} The second main goal of this manuscript is to prove the next scattering result in the focusing radial regime. \begin{thm}\label{sctr2} Let $\epsilon=-1$, $N\geq5$, $\frac{24}5<\frac{24+\alpha}5<N<8+\alpha$ and $p_*< p<p^*$ such that $p\geq2$. Let $\phi$ be a ground state solution to \eqref{grnd} and let ${u}\in C_{T^*}(H^2_{rd})$ be a maximal radial solution of \eqref{S} satisfying $E(u_0)\geq0$ together with \eqref{ss} and \eqref{ss2}. Then, $T^*=\infty$ and $u$ scatters. Precisely, there exists $\psi\in H^2$ such that $$\limsup_{t\to\infty}\|u(t)-e^{it\Delta^2}\psi\|_{H^2}=0.$$ \end{thm} \begin{rem} \begin{enumerate} \item[1.]
The scattering of \eqref{S} in the focusing sign was proved in \cite{st0} with the concentration-compactness method due to Kenig and Merle \cite{km}. In this note, one proves the same result with recent arguments of Dodson and Murphy \cite{dm};\\ \item[2.] the condition $\frac{24+\alpha}5<N$ is technical and related to the method used here. \end{enumerate} \end{rem} \subsection{Useful estimates} Let us gather some classical tools needed in the sequel. \begin{defi}\label{adm} A pair of real numbers $(q,r)$ is said to be admissible if $$2\leq r<\frac{2N}{N-4}\quad\mbox{and}\quad N\Big(\frac12-\frac1r\Big)=\frac4q,$$ where $\frac{2N}{N-4}=\infty$ if $1\leq N\leq4$. Denote the set of admissible pairs by $\Gamma$ and the Strichartz spaces $$S(I):=\cap_{(q,r)\in\Gamma}L^q(I,L^r)\quad\mbox{and}\quad S'(I):=\cap_{(q,r)\in\Gamma}L^{q'}(I,L^{r'}).$$ \end{defi} Recall the Strichartz estimates \cite{bp,guo,vdd}. \begin{prop}\label{prop2} Let $N \geq 1$, $I\subset \mathbb{R}$ an interval and $t_0\in I$. Then, \begin{enumerate} \item[1.] $\sup_{(q,r)\in\Gamma}\|u\|_{L^q(I,L^r)}\lesssim\|u(t_0)\|+\inf_{(\tilde q,\tilde r)\in\Gamma}\|i\dot u+\Delta^2 u\|_{L^{\tilde q'}(I,L^{\tilde r'})}$; \item[2.] $\sup_{(q,r)\in\Gamma}\|\Delta u\|_{L^q(I,L^r)}\lesssim\|\Delta u(t_0)\|+\|i\dot u+\Delta^2 u\|_{L^2(I,\dot W^{1,\frac{2N}{2+N}})}, \quad\forall N\geq3$; \item[3.] Let $(q,r)\in\Gamma$ and $k>\frac q2$ such that $\frac1k+\frac1m=\frac2q$. Then, $$\|u-e^{i\cdot\Delta^2}u_0\|_{L^k(I,L^r)}\lesssim \|i\dot u+\Delta^2 u\|_{L^{m'}(I,L^{r'})}.$$ \end{enumerate} \end{prop} Let us recall a Hardy-Littlewood-Sobolev inequality \cite{el}. \begin{lem}\label{Hardy-Littlewwod-Sobolev} Let $0 <\lambda < N\geq1$ and $1<s,r<\infty$ be such that $\frac1r +\frac1s +\frac\lambda N = 2$. Then, $$\int_{\mathbb{R}^N\times\mathbb{R}^N} \frac{f(x)g(y)}{|x-y|^\lambda}\,dx\,dy\leq C(N,s,\lambda)\|f\|_{r}\|g\|_{s},\quad\forall f\in L^r,\,\forall g\in L^s.$$ \end{lem} The next consequence \cite{st} is adapted to the Choquard problem. \begin{cor}\label{cor}\label{lhs2} Let $0 <\lambda < N\geq1$ and $1<s,r,q<\infty$ be such that $\frac1q+\frac1r+\frac1s=1+\frac\alpha N$. Then, $$\|(I_\alpha*f)g\|_{r'}\leq C(N,s,\alpha)\|f\|_{s}\|g\|_{q},\quad\forall f\in L^s, \,\forall g\in L^q.$$ \end{cor} Finally, let us give an abstract result. \begin{lem}\label{abs} Let $T>0$ and $X\in C([0,T],\mathbb{R}_+)$ such that $$X\leq a+bX^{\theta}\mbox{ on } [0,T],$$ where $a$, $b>0$, $\theta>1$, $a<(1-\frac{1}{\theta})(\theta b)^{\frac{1}{1-\theta}}$ and $X(0)\leq (\theta b)^{\frac{1}{1-\theta}}$. Then $$X\leq\frac{\theta}{\theta -1}a \mbox{ on } [0,T].$$ \end{lem} \begin{proof} The function $f(x):=bx^\theta-x +a$ is decreasing on $[0,(b\theta)^{\frac1{1-\theta}}]$ and increasing on $[(b\theta)^\frac1{1-\theta} ,\infty)$. The assumptions imply that $f((b\theta)^\frac1{1-\theta})< 0$ and $f(\frac\theta{\theta-1}a)\leq0$. As $f(X(t))\geq 0$, $f(0) > 0$ and $X(0)\leq(b\theta)^\frac1{1-\theta}$, we conclude the proof by a continuity argument. \end{proof} \section{The defocusing regime $\epsilon=1$} This section is concerned with the defocusing regime, so one takes $\epsilon=1$. Moreover, one denotes the source term by $\mathcal N:=(I_\alpha*|u|^p)|u|^{p-2}u$. Also, one adopts the convention that repeated indices are summed.
Finally, if $f,g$ are two differentiable functions, one defines the momentum brackets by $$ \{f,g\}_p:=\mathbb{R}e(f\nabla\bar g-g\nabla\bar f).$$ \subsection{Morawetz identity} This subsection is devoted to prove Proposition \ref{cr} about a classical Morawetz estimate satisfied by the energy global solutions to the defocusing Choquard problem \end{equation}ref{S}. Let us start with an auxiliary result. \begin{prop}\label{mrwtz} Take $N\geq5$, $0<\alpha<N<8+\alpha$, $2\leq p<p^*$ and $u\in C_{T}(H^2)$ be a local solution to \end{equation}ref{S}. Let $a:\mathbb{R}^N\to\mathbb{R}$ be a convex smooth function and the real function defined on $[0,T)$, by $$M:t\to2\int_{\mathbb{R}^N}\nabla a(x)\Im(\nabla u(t,x)\bar u(t,x))\,dx.$$ Then, the following equality holds on $[0,T)$, \begin{eqnarray*} M' &=&2\int_{\mathbb{R}^N}\Big(2\partial_{jk}\Delta a\partial_ju\partial_k\bar u-\frac12(\Delta^3a)|u|^2-4\partial_{jk}a\partial_{ik}u\partial_{ij}\bar u\\ &+&\Delta^2a|\nabla u|^2-\partial_ja\{(I_\alpha*|u|^p)|u|^{p-2}u,u\}_p^j\Big)\,dx\\ &=&2\int_{\mathbb{R}^N}\Big(2\partial_{jk}\Delta a\partial_ju\partial_k\bar u-\frac12(\Delta^3a)|u|^2-4\partial_{jk}a\partial_{ik}u\partial_{ij}\bar u+\Delta^2a|\nabla u|^2\Big)\\ &+&2\Big((-1+\frac2p)\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^p)|u|^p\,dx+\frac2{p}\int_{\mathbb{R}^N}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx\Big). \end{eqnarray*} \end{prop} \begin{proof} Let us compute \begin{eqnarray*} \partial_t\Im(\partial_k u\bar u) &=&\Im(\partial_k\dot u\bar u)+\Im(\partial_k u\bar{\dot u})\\ &=&\mathbb{R}e(i\dot u\partial_k\bar u)-\mathbb{R}e(i\partial_k \dot u\bar{u})\\ &=&\mathbb{R}e(\partial_k\bar u(-\Delta^2 u-\mathcal N))-\mathbb{R}e(\bar u\partial_k(-\Delta^2 u-\mathcal N))\\ &=&\mathbb{R}e(\bar u\partial_k\Delta^2 u-\partial_k\bar u\Delta^2 u)+\mathbb{R}e(\bar u\partial_k\mathcal N-\partial_k\bar u\mathcal N). \end{eqnarray*} Thus, \begin{eqnarray*} M' &=&2\int_{\mathbb{R}^N}\partial_ka\mathbb{R}e(\bar u\partial_k\Delta^2 u-\partial_k\bar u\Delta^2 u)\,dx-2\int_{\mathbb{R}^N}\partial_ka\{\mathcal N,u\}_p^k\,dx\\ &=&-2\int_{\mathbb{R}^N}\Delta a\mathbb{R}e(\bar u\Delta^2 u)\,dx-4\int_{\mathbb{R}^N}\mathbb{R}e(\partial_ka\partial_k\bar u\Delta^2 u)\,dx-2\int_{\mathbb{R}^N}\partial_ka\{\mathcal N,u\}_p^k\,dx. \end{eqnarray*} The first equality in the above Lemma follows as in Proposition 3.1 in \cite{mwz}. On the other hand \begin{eqnarray*} (I) &:=&\int_{\mathbb{R}^N}\partial_ka\mathbb{R}e(\bar u\partial_k\mathcal N-\partial_k\bar u\mathcal N)\,dx\\ &=&\int_{\mathbb{R}^N}\partial_ka\mathbb{R}e(\partial_k[\bar u\mathcal N]-2\partial_k\bar u\mathcal N)\,dx\\ &=&-\int_{\mathbb{R}^N}\Big(\Delta a\bar u\mathcal N+2\partial_ka\mathbb{R}e(\partial_k\bar u\mathcal N)\Big)\,dx\\ &=&-\int_{\mathbb{R}^N}\Big(\Delta a(I_\alpha*|u|^p)|u|^p+2\partial_ka\mathbb{R}e(\partial_k\bar u\mathcal N)\Big)\,dx\\ &=&-\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^p)|u|^p\,dx-\frac2{p}\int_{\mathbb{R}^N}\partial_ka\partial_k(|u|^{p})(I_\alpha*|u|^p)\,dx. \end{eqnarray*} Moreover, {\begin{eqnarray*} (A) &:=&\int_{\mathbb{R}^N}\partial_ka\partial_k(|u|^{p})(I_\alpha*|u|^p)\,dx\\ &=&-\int_{\mathbb{R}^N}div(\partial_ka(I_\alpha*|u|^p))|u|^{p}\,dx\\ &=&-\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^p)|u|^{p}\,dx-\int_{\mathbb{R}^N}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx. 
\end{eqnarray*}} Then, \begin{eqnarray*} (I) &=&-\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^p)|u|^p\,dx-\frac2{p}(A)\\ &=&-\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^p)|u|^p\,dx+\frac2{p}\Big(\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^p)|u|^{p}\,dx+\int_{\mathbb{R}^N}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx\Big)\\ &=&(-1+\frac2p)\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^p)|u|^p\,dx+\frac2{p}\int_{\mathbb{R}^N}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx. \end{eqnarray*} This closes the proof. \end{proof} Now, one proves the Morawetz estimate. \begin{proof}[Proof of Proposition \ref{cr}] For a vector $e\in\mathbb{R}^N$, denote $$\nabla_eu:=(\frac e{|e|}.\nabla u)\frac e{|e|}\quad \mbox{and}\quad \nabla_e^\bot u:=\nabla u-\nabla_eu.$$ Compute, for $a:=|\cdot|$ and taking account of \cite{sw}, \begin{gather*} 2\partial_{jk}\Delta a\partial_ju\partial_k\bar u=\frac{2(N-1)}{|\cdot|^3}\Big(2|\nabla_e u|^2-|\nabla_e^\bot u|\Big);\\ \partial_{jk}a\partial_{ij}\bar u\partial_{ik} u=\frac1{|\cdot|}\sum_i\Big(|\nabla\partial_iu|^2-|\nabla_e\partial_iu|^2\Big)\geq\frac{N-1}{|\cdot|^3}|\nabla_eu|^2. \end{gather*} Compute for $N\geq5$, the derivatives \begin{gather*} \nabla a=\frac.{|\cdot|},\quad \Delta a=\frac{N-1}{|\cdot|};\\ \Delta^2a=-\frac{(N-1)(N-3)}{|\cdot|^3}. \end{gather*} Moreover, $$\Delta^3a=\left\{ \begin{array}{ll} C\delta_0,\quad\mbox{if}\quad N=5;\\ \frac{3(N-1)(N-3)(N-5)}{|\cdot|^5},\quad\mbox{if}\quad N\geq6. \end{array} \right.$$ Thus, one gets \begin{eqnarray*} M' &=&2\int_{\mathbb{R}^N}\Big(2\partial_{jk}\Delta a\partial_ju\partial_k\bar u-\frac12(\Delta^3a)|u|^2-4\partial_{jk}a\partial_{ik}u\partial_{ij}\bar u+\Delta^2a|\nabla u|^2\Big)\,dx\\ &+&2(-1+\frac2p)\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^{p})|u|^p\,dx+\frac4{p}\int_{\mathbb{R}^N}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx\\ &\leq&2\int_{\mathbb{R}^N}\Big(\frac{2(N-1)}{|x|^3}\Big(2|\nabla_e u|^2-|\nabla_e^\bot u|\Big)-4\frac{N-1}{|x|^3}|\nabla_eu|^2\Big)\,dx\\ &+&2(-1+\frac2p)\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^{p})|u|^p\,dx+\frac4{p}\int_{\mathbb{R}^N}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx\\ &\leq&2(-1+\frac2p)\int_{\mathbb{R}^N}\Delta a(I_\alpha*|u|^{p})|u|^p\,dx+\frac4{p}\int_{\mathbb{R}^N}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx. \end{eqnarray*} This gives $$\int_0^T\int_{\mathbb{R}^N}\Big(2(1-\frac2p)\Delta a(I_\alpha*|u|^p)|u|^p-\frac4{p}\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\Big)\,dx\lesssim \sup_{[0,T]}|M|.$$ Thus, \begin{eqnarray*} \|u_0\|_{H^2} &\gtrsim&\sup_{[0,T]}|M|\\ &\gtrsim&\int_0^T\int_{\mathbb{R}^N}\Big(\Delta a(I_\alpha*|u|^p)|u|^p-\partial_ka\partial_k(I_\alpha*|u|^p)|u|^{p}\Big)\,dx\,dt\\ &\gtrsim&\int_0^T\int_{\mathbb{R}^N}\Big((I_\alpha*|u|^p)|x|^{-1}|u|^p+(N-\alpha)\frac x{|x|}[\frac.{|\cdot|^2}I_\alpha*|u|^p]|u|^{p}\Big)\,dx\,dt\\ &\gtrsim&\int_0^T\int_{\mathbb{R}^N}\Big([I_\alpha*|u|^p])|x|^{-1}|u|^{p}+\frac x{|x|}[\frac.{|\cdot|^2}I_\alpha*|u|^p]|u|^{p}\Big)\,dx\,dt. \end{eqnarray*} Now, write \begin{eqnarray*} (D) &:=&\int_{\mathbb{R}^N}\frac{x}{|x|}[\frac.{|\cdot|^2}I_\alpha*|u|^p]|u(x)|^{p}\,dx\\ &=&\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\frac{x}{|x|}\frac{x-z}{|x-z|^2}I_\alpha(x-z)|u(z)|^p|u(x)|^{p}\,dx\,dz\\ &=&\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\frac{z}{|z|}\frac{z-x}{|x-z|^2}I_\alpha(x-z)|u(z)|^p|u(x)|^{p}\,dx\,dz\\ &=&\frac12\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\frac{I_\alpha(x-z)}{|x-z|^2}|u(z)|^p|u(x)|^{p}(x-z)\Big(\frac x{|x|}-\frac{z}{|z|}\Big)\,dx\,dz. 
\end{eqnarray*} Then, $(D)\geq0$ because $$(x-z)\Big(\frac{x}{|x|}-\frac{z}{|z|}\Big)=(|x||z|-xz)\Big(\frac{|x|+|z|}{|x||z|}\Big)\geq0.$$ The proof is closed. \end{proof} \subsection{Decay of global solutions} The goal of this subsection is to prove the long time decay of the energy global solutions to the defocusing Choquard problem \end{equation}ref{S}. Let us give an intermediate result. \begin{lem}\label{dcyy2} Take $N\geq5$, $0<\alpha<N<8+\alpha$, $2\leq p<p^*$. Let $\chi\in C_0^{\infty}(\mathbb{R}^N)$ to be a cut-off function and $(\varphi_n)$ be a sequence in $H^2$ satisfying $\displaystyle\sup_{n}\|\varphi_n\|_{H^2}<\infty$ and $\varphi_n\rightharpoonup\varphi$ in $H^2$. Let $u_n$ $($respectively $u )$ be the solution in $C(\mathbb{R},H^2)$ to \end{equation}ref{S} with initial data $\varphi_n$ $($respectively $\varphi )$. Then, for every $\varepsilon>0$, there exist $T_{\varepsilon}>0$ and $n_{\varepsilon}\in\mathbb{N}$ such that $$\|\chi(u_n-u)\|_{L^{\infty}_{T_{\varepsilon}}(L^2)}<\varepsilon ,\quad\forall n>n_{\varepsilon}.$$ \end{lem} \begin{proof} Let $v_n:=\chi u_n$ and $v:=\chi u$. Denote $w_n:=v_n-v$ and $\mathcal N_u:=(I_\alpha*|u|^p)|u|^{p-2}u$. Using Strichartz estimate and Corollary \ref{cor}, assuming that $supp(\chi)\subset\{|x|<1\}$, one has \begin{eqnarray*} \|\chi(\mathcal N_{u_n}-\mathcal N_{u})\|_{S'(0,T)} &\lesssim& \|(I_\alpha*[|u_n|^p-|u|^p)]|u_n|^{p-2}v_n\|_{L^{q'}_T(L^{r'}(|x|<1))}\\ &+&\|(I_\alpha*|u|^p)(|u_n|^{p-2}v_n-|u|^{p-2}v)\|_{L^{q'}_T(L^{r'}(|x|<1))}\\ &\lesssim&(I)+(II), \end{eqnarray*} where $(q,r)\in\Gamma$. Take $r:=\frac{2Np}{\alpha+N}$. Then, $1+\frac\alpha N=\frac{2p}{r}$ and using H\"older and Hardy-Littlewood-Sobolev inequalities, one gets \begin{eqnarray*} (II) &=&\|(I_\alpha*|u|^p)(|u_n|^{p-2}v_n-|u|^{p-2}v)\|_{L^{q'}_T(L^{r'}(|x|<1))}\\\\ &\lesssim&\|(I_\alpha*|u|^p)(|u_n|^{p-2}+|u|^{p-2})w_n\|_{L^{q'}_T(L^{r'}(|x|<1))}\\ &\lesssim&\|(\|u_n\|_{r}^{2(p-1)}+\|u\|_{r}^{2(p-1)})\|w_n\|_{r}\|_{L^{q'}(0,T)}. \end{eqnarray*} Because $2\leq p<p^*$, there exists $\delta>0$ such that $\frac1{q'}=\frac{1}q+\frac1\delta$ and $2<r<\frac{2N}{N-4}$. Then, taking account of Sobolev embeddings and H\"older inequality, one obtains \begin{eqnarray*} (II) &\lesssim&T^{\frac1\delta}\Big(\|u_n\|_{L_T^\infty(L^r)}^{2(p-1)}+\|u\|_{L_T^\infty(L^r)}^{2(p-1)}\Big)\|w_n\|_{S(0,T)}\\ &\lesssim&T^{\frac1\delta}\Big(\|u_n\|_{L_T^\infty(H^2)}^{2(p-1)}+\|u\|_{L_T^\infty(H^2)}^{2(p-1)}\Big)\|w_n\|_{S(0,T)}\\ &\lesssim&T^{\frac1\delta}\|w_n\|_{S(0,T)}. \end{eqnarray*} Similarly, one estimates $(I)$. Now, taking account of computation done in the proof of Lemma 2.2 in \cite{st1}, one gets $$\|w_n\|_{S(0,T)}\lesssim \|\chi(\varphi_n-\varphi)\|+T+T^{\frac1\delta}\|w_n\|_{S(0,T)}.$$ The proof is achieved via Rellich Theorem. \end{proof} Now, let us prove the long time decay for global solutions to \end{equation}ref{S}. \begin{proof}[Proof of Proposition \ref{dcy}] By an interpolation argument, it is sufficient to establish the equality $$\lim_{t\to\infty}\|u(t)\|_{2+\frac4N}=0.$$ Recall the localized Gagliardo-Nirenberg inequality \cite{st1}, $$\|u\|_{2+\frac4N}^{2+\frac4N}\lesssim \Big(\sup_{x\in\mathbb{R}^N}\|u\|_{L^2(Q_1(x))}\Big)^{1+\frac4N}\|u\|_{H^2}.$$ Here $Q_r(x)$ denotes the cubic in $\mathbb{R}^N$ with center $x$ and radius $r>0$. One proceeds by contradiction. 
Assume that there exist a sequence $(t_n)$ of positive real numbers and $\varepsilon>0$ such that $\displaystyle\lim_{n\rightarrow\infty}t_n=\infty$ and $$\|u(t_n)\|_{L^{2+\frac4N}}>\varepsilon,\quad\forall n\in\mathbb{N}.$$ Thanks to the conservation laws and the localized Gagliardo-Nirenberg inequality above, there exist a sequence $(x_n)$ in $\mathbb{R}^N$ and a positive real number denoted also by $\varepsilon>0$ such that \begin{equation}\label{..} \|u(t_n)\|_{L^2(Q_1(x_n))}\geq\varepsilon,\quad\forall n\in\mathbb{N}. \end{equation} Following the paper \cite{st1}, via the previous Lemma, there exist $T,n_\epsilon>0$ such that $t_{n+1}-t_n>T$ for $n\geq n_{\varepsilon}$ and $$\|u(t)\|_{L^2(Q_2(x_n))}\geq\frac{\varepsilon}{4},\quad\forall t\in [t_n,t_n+T],\quad\forall n\geq n_{\varepsilon}.$$ Thanks to Morawetz estimate in Proposition \ref{cr}, one gets \begin{eqnarray*} \|u_0\|_{H^2} &\gtrsim&\int_\mathbb{R}\int_{\mathbb{R}^N}(I_\alpha*|u(t)|^p)|x|^{-1}|u(t,x)|^{p}\,dx\,dt\\ &\gtrsim&\sum_n\int_{t_n}^{t_n+T}\int_{Q_2(x_n)}\int_{Q_2(x_n)}\frac{|x|^{-1}}{|x-y|^{N-\alpha}}|u(t,y)|^p|u(t,x)|^{p}\,dy\,dx\,dt. \end{eqnarray*} Now, with the radial assumption via the equation \end{equation}ref{..}, the sequence $(x_n)$ is bounded. Thus, \begin{eqnarray*} \|u_0\|_{H^2} &\gtrsim&\sum_n\int_{t_n}^{t_n+T}\Big(\int_{Q_2(x_n)}|u(t,x)|^{p}\,dx\Big)^2\,dt\\ &\gtrsim&\sum_n\int_{t_n}^{t_n+T}\|u(t)\|_{L^2(Q_2(x_n))}^{2p}\,dt\\ &\gtrsim&\sum_n(\frac\varepsilon4)^{2p}T=\infty. \end{eqnarray*} This contradiction achieves the proof. \end{proof} \subsection{Scattering} This subsection is concerned with the proof of the scattering of energy global solutions to the defocusing Choquard problem \end{equation}ref{S}. Here and hereafter, one denotes the operator $$\left\langle\cdot\right\rangle :=(1+\Delta) \cdot.$$ Let us give an intermediate result. \begin{lem}\label{twl2} Let $N\geq5$, $0<\alpha<N<8+\alpha$ and $p_*< p<p^*$ such that $p\geq2$. Take $u\in C_{T}(H^2)$ be a local solution to \end{equation}ref{S}. Then, there exist $2<p_1,p_2<\frac{2N}{N-4}$ and $0<\theta_1,\theta_2<2(p-1)$ such that $$\|\left\langle u-e^{i.\Delta^2}u_0\right\rangle\|_{S(0,T)}\lesssim\|u\|_{L^\infty_T(L^{p_1})}^{\theta_1}\|\left\langle u\right\rangle\|_{S(0,T)}^{2p-1-\theta_1}+\|u\|_{L^\infty_T(L^{p_2})}^{\theta_2}\|\left\langle u\right\rangle\|_{S(0,T)}^{2p-1-\theta_2}.$$ \end{lem} \begin{proof} With Duhamel formula and Strichartz estimates, one writes {\begin{eqnarray*} &&\|\left\langle u-e^{i.\Delta^2}u_0\right\rangle\|_{S(0,T)}\\ &\lesssim&\|(I_\alpha*|u|^p)|u|^{p-2}u\|_{S_T'(|x|<1)}+\|\nabla((I_\alpha*|u|^p)|u|^{p-2}u)\|_{L_T^2(L^\frac{2N}{2+N}(|x|<1))}\\ &:=&(A)+(B). \end{eqnarray*}} Now, let us deal with the quantity $(B)$. \begin{eqnarray*} (B) &:=&\|\nabla((I_\alpha*|u|^p)|u|^{p-2}u)\|_{L^{2}_T(L^{\frac{2N}{2+N}})}\\ &\lesssim&\|(I_\alpha*|u|^p)|u|^{p-2}\nabla u\|_{L^{2}_T(L^{\frac{2N}{2+N}})}+\|(I_\alpha*|u|^{p-1}\nabla u)|u|^{p-1}\|_{L^{2}_T(L^{\frac{2N}{2+N}})}\\ &\lesssim& (B_1)+(B_2). 
\end{eqnarray*} Thanks to H\"older, Hardy-Littlewood-Sobolev and Sobolev inequalities, one has \begin{eqnarray*} (B_1) &:=&\|(I_\alpha*|u|^p)|u|^{p-2}\nabla u\|_{L^{2}_T(L^{\frac{2N}{2+N}})}\\ &\lesssim&\|\|u\|_{r_1}^{2(p-1)}\|\Delta u\|_{a_1}\|_{L^{2}(0,T)}\\ &\lesssim&\|u\|_{L^\infty_T(L^{r_1})}^{\theta_1}\|\|u\|_{r_1}^{2(p-1)-\theta_1}\|\Delta u\|_{r_1}\|_{L^{2}(0,T)}\\ &\lesssim&\|u\|_{L^\infty_T(L^{r_1})}^{\theta_1}\|\|u\|_{W^{2,r_1}}^{2p-1-\theta_1}\|_{L^{2}(0,T)}\\ &\lesssim&\|u\|_{L^\infty_T(L^{r_1})}^{\theta_1}\|u\|_{L^{q_1}_T(W^{2,r_1})}^{2p-1-\theta_1}. \end{eqnarray*} Here $q_1:=2(2p-1-\theta_1)$, $(q_1,r_1)\in\Gamma$, $\frac1{a_1}=\frac1{r_1}-\frac1N$ and \begin{gather*} \frac{4+2\alpha+N}{2N}-\frac{2p-1}{r_1} =0;\\ N(\frac12-\frac1{r_1})=\frac4{q_1}=\frac2{2p-1-\theta_1}. \end{gather*} A computation gives that the condition $\theta_1\in(0,2(p-1))$ is equivalent to $$2<\frac{8(2p-1)}{N(2p-1)-(4+2\alpha+N)}<2(2p-1).$$ This is satisfied because $p_*<p<p^*$. The second term is controlled similarly. The estimate of $(A)$ follows as $(B_1)$. This finishes the proof. \end{proof} Now, let us prove the main result of this section. \begin{proof}[Proof of Theorem \ref{sctr}] Taking account of Lemma \ref{twl2} via the decay of solutions and the absorption Lemma \ref{abs}, one gets $$\left\langle u\right\rangle\in S(\mathbb{R}):=\cap_{(q,r)\in\Gamma}{L^q(\mathbb{R},L^r(\mathbb{R}^N))}.$$ This implies that, via Strichartz estimate and the proof of the previous Lemma, that when $s,t\to\infty$, \begin{eqnarray*} \|e^{-it\Delta^2}u(t)-e^{-is\Delta^2}u(s)\|_{H^2} &\lesssim& \|(I_\alpha*|u|^p)|u|^{p-2}u\|_{L^2((t,s),W^{1,\frac{2N}{2+N}})}\\ &\lesssim&\|u\|_{L^\infty(\mathbb{R},L^{q_1})}^{\theta_1}\|\left\langle u\right\rangle\|_{S(s,t)}^{2p-1-\theta_1}+\|u\|_{L^\infty(\mathbb{R},L^{q_2})}^{\theta_2}\|\left\langle u\right\rangle\|_{S(s,t)}^{2p-1-\theta_2}\\ &&\to0. \end{eqnarray*} Take $u_\pm:=\lim_{t\to\pm\infty}e^{-it\Delta^2}u(t)$ in $H^2$. Thus, $$\lim_{t\to\pm\infty}\|u(t)-e^{it\Delta^2}u_\pm\|_{H^2}=0.$$ The scattering is proved. \end{proof} \section{The focusing regime $\epsilon=-1$} This section deals with the focusing sign. Thus, one takes $\epsilon=-1$. Moreover, here and hereafter one denotes the real numbers $$a:=\frac{4p(p-1)}{2p-B},\quad r:=\frac{2Np}{\alpha+N}.$$ Take also $T>0$ and the time slab $I:=(T,\infty)$. \subsection{Small data theory} Let us start with a global existence and scattering result for small data. \begin{lem}\label{sdt} Let $N\geq5$, $4<4+\alpha<N<8+\alpha$ and $p_*< p<p^*$ such that $p\geq2$. Let $A> 0$ such that $\|u(T)\|_{H^2}< A$. Then, there exists $\delta:=\delta(A) > 0$ such that if $$\|e^{i(\cdot-T)\Delta^2} u(T)\|_{L^a((T,\infty),L^r)}<\delta,$$ the solution to \end{equation}ref{S} exists globally in time and satisfies \begin{gather*} \|u\|_{L^a((T,\infty),L^r)}<2\|e^{i\cdot\Delta^2} u(T)\|_{L^a((T,\infty),L^r)};\\ \|\left\langle u\right\rangle\|_{S(T,\infty)}< 2C\|u(T)\|_{H^2}. \end{gather*} Moreover, if $\|u\|_{L^\infty(\mathbb{R},H^2)}<A$, then $u$ scatters. 
\end{lem} \begin{proof} Define the function $$\phi(u):=e^{i(\cdot-T)\Delta^2}u(T)-i\int_T^\cdot e^{i(\cdot-s)\Delta^2}[(I_\alpha*|u|^p)|u|^{p-2}u]\,dx.$$ Let, for $T,R,R'>0$, the space $$X_{T,R,R'}:=\{\left\langle u\right\rangle\in L^q(I,L^r),\quad \|\left\langle u\right\rangle\|_{S(I) }\leq R,\quad\|u\|_{L^a(I,L^r)}\leq R'\},$$ endowed with the complete distance $$d(u,v):=\|u-v\|_{S(I)\cap L^a(I,L^r)}.$$ Take the admissible pairs \begin{gather*} (q,r):=\Big(\frac{4p}{B},\frac{2Np}{\alpha+N}\Big);\\ (q_1,r_1):=\Big(\frac{4p}{(N-2)p-(\alpha+N)},\frac{2Np}{2(\alpha+N)-p(N-4)}\Big). \end{gather*} With Strichartz and Hardy-Littlewood-Sobolev estimates \begin{eqnarray*} \|\phi(u)-\phi(v)\|_{S(I)} &\lesssim&\|(I_\alpha*|u|^p)|u|^{p-2}u-(I_\alpha*|v|^p)|v|^{p-2}v\|_{L^{q'}((T,\infty),L^{r'})}\\ &\lesssim&\|(I_\alpha*|u|^p)[|u|^{p-2}+|v|^{p-2}](u-v)\|_{L^{q'}((T,\infty),L^{r'})}\\ &+&\||v|^{p-1}[I_\alpha*(|u|^p-|v|^p)]\|_{L^{q'}((T,\infty),L^{r'})}\\ &\lesssim&\|u\|_{L^a((T,\infty),L^{r})}^p[\|u\|_{L^a((T,\infty),L^{r})}^{p-2}+\|v\|_{L^a((T,\infty),L^{r})}^{p-2}]\|u-v\|_{L^q((T,\infty),L^{r})}\\ &+&\|v\|_{L^a((T,\infty),L^{r})}^{p-1}\sum_{k=0}^{p-1}\|u\|_{L^a((T,\infty),L^{r})}^k\|v\|_{L^a((T,\infty),L^{r})}^{p-k-1}\|u-v\|_{L^q((T,\infty),L^{r})}\\ &\lesssim&R'^{2(p-1)}d(u,v). \end{eqnarray*} Take the real number satisfying $\frac1a+\frac1m=\frac2q$, $$m:=\frac{4p(p-1)}{2p(B-1)-B}.$$ With Strichartz and Hardy-Littlewood-Sobolev estimates, via the identity $1=\frac2q+\frac{2(p-1)}a$, \begin{eqnarray*} \|\phi(u)-\phi(v)\|_{L^a(I,L^r)} &\lesssim&\|(I_\alpha*|u|^p)|u|^{p-2}u-(I_\alpha*|v|^p)|v|^{p-2}v\|_{L^{m'}((T,\infty),L^{r'})}\\ &\lesssim&\|(I_\alpha*|u|^p)[|u|^{p-2}+|v|^{p-2}](u-v)\|_{L^{m'}((T,\infty),L^{r'})}\\ &+&\||v|^{p-1}[I_\alpha*(|u|^p-|v|^p)]\|_{L^{m'}((T,\infty),L^{r'})}\\ &\lesssim&(\|u\|_{L^a((T,\infty),L^r)}^{2(p-1)}+\|u\|_{L^a((T,\infty),L^r)}^{2(p-1)})\|u-v\|_{L^a((T,\infty),L^{r})}\\ &\lesssim&R'^{2(p-1)}\|u-v\|_{L^a((T,\infty),L^{r})}. \end{eqnarray*} Then, $$d(\phi(u),\phi(v))\leq CR'^{2(p-1)}d(u,v).$$ Let us prove that the space $X_{T,R,R'}$ is stable under the above function. Taking $v=0$ in the previous computation, one gets \begin{eqnarray*} \|\phi(u)\|_{L^a(I,L^r)} &\lesssim&\|e^{i(\cdot-T)\Delta^2}u(T)\|_{L^a(I,L^r)}+R'^{2p-1}. \end{eqnarray*} Thanks to Strichartz estimate \begin{eqnarray*} \|\left\langle \phi(u)\right\rangle\|_{S(I)} &\lesssim&\|u(T)\|_{H^2}+\|(I_\alpha*|u|^p)|u|^{p-2}u\|_{L^{q'}(I,L^{r'})}+\|\nabla[(I_\alpha*|u|^p)|u|^{p-2}u]\|_{L^{2}(I,L^{\frac{2N}{2+N}})}\\ &\lesssim&\|u(T)\|_{H^2}+\|u\|_{L^a(I,L^r)}^{2(p-1)}\|u\|_{L^q(I,L^r)}+\|\nabla[(I_\alpha*|u|^p)|u|^{p-2}u]\|_{L^{2}(I,L^{\frac{2N}{2+N}})}\\ &\lesssim&\|u(T)\|_{H^2}+R'^{2(p-1)}R+\|\nabla[(I_\alpha*|u|^p)|u|^{p-2}u]\|_{L^{2}(I,L^{\frac{2N}{2+N}})}. \end{eqnarray*} Moreover, \begin{eqnarray*} &&\|\nabla[(I_\alpha*|u|^p)|u|^{p-2}u]\|_{L^{2}(I,L^{\frac{2N}{2+N}})}\\ &\leq&\|(I_\alpha*|u|^p)|u|^{p-2}\nabla u\|_{L^{2}(I,L^{\frac{2N}{2+N}})}+\|(I_\alpha*\nabla(|u|^p))|u|^{p-1}]\|_{L^{2}(I,L^{\frac{2N}{2+N}})}\\ &\leq&(A_1)+(A_2). 
\end{eqnarray*} Moreover, by H\"older and Hardy-Littlewood-Sobolev inequalities via Sobolev injection and the identities \begin{gather*} \frac\alpha N+\frac{2+N}{2N}=\frac{2(p-1)}r+\frac1{r_1}-\frac1N;\\ \frac12=\frac{2(p-1)}a+\frac1{q_1}, \end{gather*} one has \begin{eqnarray*} (A_1)+(A_2) &\lesssim&\|\|u\|_r^{2(p-1)}\|\nabla u\|_{\frac{r_1N}{N-r_1}}\|_{L^2(I)}\\ &\lesssim&\|\|u\|_r^{2(p-1)}\|\Delta u\|_{r_1}\|_{L^2(I)}\\ &\lesssim&\|u\|_{L^a(I,L^r)}^{2(p-1)}\|\Delta u\|_{L^{q_1}(I,L^{r_1})}\\ &\lesssim&R'^{2(p-1)}R. \end{eqnarray*} Thus, \begin{eqnarray*} \|\left\langle \phi(u)\right\rangle\|_{S(I)} &\lesssim&\|u(T)\|_{H^2}+R'^{2(p-1)}R. \end{eqnarray*} Taking $R'=2\|e^{i(\cdot-T)\Delta^2}u(T)\|_{L^a(I,L^r)}<<1$ and $R=2C\|u(T)\|_{H^2}$, the proof of the first part follows with Picard fixed point Theorem. Now, let us prove the scattering. Take $v(t):=e^{-it\Delta^2}u(t)$ and $0<t_1<t_2<\infty$. With the integral formula, one has \begin{eqnarray*} \|v(t_1)-v(t_2)\|_{H^2} &=&\|\int_{t_1}^{t_2}e^{-is\Delta^2}[(I_\alpha*|u|^p)|u|^{p-2}u]\,ds\|_{H^2}\\ &\lesssim&\|u\|_{L^a((t_1,t_2),L^r)}^{2(p-1)}\|\Delta u\|_{S(t_1,t_2)}\to0. \end{eqnarray*} Take $u_\pm:=\lim_{t\to\pm\infty}v(t)$ in $H^2$. Then, $$\|u(t)-e^{it\Delta^2}u_\pm\|_{H^2}\to0.$$ The proof is complete. \end{proof} \subsection{Variational Analysis} In this section, one collects some estimates needed in the proof of the scattering of global solutions to the focusing Choquard problem \end{equation}ref{S}. Take a radial smooth function $0\leq\psi\leq1$ satisfying for $R>0$, $$\psi\in C_0^\infty(\mathbb{R}^N),\quad supp(\psi)\subset \{|x|<1\}, \quad\psi=1\,\,\mbox{on}\,\,\{|x|<\frac12\},\quad\psi_R:=\psi(\frac\cdot R).$$ \begin{lem}\label{bnd} Take $N\geq1$, $0<\alpha<N<8+\alpha$ and $p_*<p<p^*$ such that $p\geq2$ and $u_0\in H^2$ satisfying $$\max\{\mathcal M\mathcal E(u_0),{\mathcal M\mathcal G}\mathcal M(u_0)\}<1.$$ Then, there exists $\delta>0$ such that the solution $u\in C(\mathbb{R},H^2)$ satisfies $$\max\{\sup_{t\in\mathbb{R}}\mathcal M\mathcal E(u(t)),\sup_{t\in\mathbb{R}}{\mathcal M\mathcal G}\mathcal M(u(t))\}<1-\delta.$$ \end{lem} \begin{proof} Denote $C_{N,p,\alpha}:=C(N,p,\alpha)$ given by Proposition \ref{gag}. The inequality $\mathcal M\mathcal E(u_0)<1$ gives the existence of $\delta>0$ such that \begin{eqnarray*} 1-\delta &>&\frac{M(u_0)^{\frac{2-s_c}{s_c}}E(u_0)}{M(\phi)^{\frac{2-s_c}{s_c}}E(\phi)}\\ &>&\frac{M(u_0)^{\frac{2-s_c}{s_c}}}{M(\phi)^{\frac{2-s_c}{s_c}}E(\phi)}\Big(\|\Delta u(t)\|^2-\frac1p\int_{\mathbb{R}^N}(I_\alpha*|u|^p)|u|^p\,dx\Big)\\ &>&\frac{M(u_0)^{\frac{2-s_c}{s_c}}}{M(\phi)^{\frac{2-s_c}{s_c}}E(\phi)}\Big(\|\Delta u(t)\|^2-\frac{C_{N,p,\alpha}}p\|u\|^A\|\Delta u(t)\|^B\Big). 
\end{eqnarray*} Thanks to Pohozaev identities, one has $$E(\phi)=\frac{B-2}B\|\Delta\phi\|^2=\frac{B-2}A\|\phi\|^2.$$ Thus, {\small\begin{eqnarray*} 1-\delta &>&\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}\Big(\|\Delta u(t)\|^2-\frac{C_{N,p,\alpha}}p\|u\|^A\|\Delta u(t)\|^B\Big)\\ &>&\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|^2}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}-\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}\frac{C_{N,p,\alpha}}p\|u\|^A\|\Delta u(t)\|^B\\ &>&\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|^2}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}-\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}\frac2A(\frac AB)^\frac{B}2\|\phi\|^{-2(p-1)}\|u\|^A\|\Delta u(t)\|^B\\ &>&\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|^2}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}-\frac B{B-2}\frac2A\frac{M(u_0)^{\frac{2-s_c}{s_c}}}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}(\frac{\|\phi\|}{\|\Delta\phi\|})^{B}\|\phi\|^{-2(p-1)}\|u\|^A\|\Delta u(t)\|^B\\ &>&\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|^2}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}-\frac B{B-2}\frac2A\frac{\|u_0\|^{A+2\frac{1-s_c}{s_c}}}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}(\frac{\|\phi\|}{\|\Delta\phi\|})^{B}\|\phi\|^{-2(p-1)}\|\Delta u(t)\|^B. \end{eqnarray*}} Using the equalities $s_c=\frac{B-2}{p-1}$ and $\frac BA=(\frac{\|\Delta\phi\|}{\|\phi\|})^2$, one has \begin{eqnarray*} 1-\delta &>&\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|^2}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}-\frac B{B-2}\frac2A\frac{(\|u_0\|^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|)^B}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}(\frac{\|\phi\|}{\|\Delta\phi\|})^{B}\|\phi\|^{-2(p-1)}\\ &>&\frac B{B-2}\frac{M(u_0)^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|^2}{M(\phi)^{\frac{2-s_c}{s_c}}\|\Delta\phi\|^2}-\frac2{B-2}\frac{(\|u_0\|^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|)^B}{\|\phi\|^{2\frac{2-s_c}{s_c}-B+2p}\|\Delta\phi\|^{B}}\\ &>&\frac B{B-2}\Big(\frac{\|u_0\|^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|}{\|\phi\|^{\frac{2-s_c}{s_c}}\|\Delta\phi\|}\Big)^2-\frac2{B-2}\Big(\frac{\|u_0\|^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|}{\|\phi\|^{\frac{2-s_c}{s_c}}\|\Delta\phi\|}\Big)^B. \end{eqnarray*} Take the real function defined on $[0,1]$ by $f(x):=\frac B{B-2}x^2-\frac2{B-2}x^B$, with first derivative $f'(x)=\frac{2B}{B-2}x(1-x^{B-2})$. Thus, with the table change of $f$ and the continuity of $t\to X(t):=\frac{\|u_0\|^{\frac{2-s_c}{s_c}}\|\Delta u(t)\|}{\|\phi\|^{\frac{2-s_c}{s_c}}\|\Delta\phi\|}$, it follows that $X(t)<1$ for any $t<T^*$. Thus, $T^*=\infty$ and there exists $\epsilon>0$ near to zero such that $X(t)\in f^{-1}([0,1-\delta])=[0,1-\epsilon]$. This finishes the proof. \end{proof} Let us prove a coercivity estimate on centered balls with large radials. 
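Before turning to that coercivity estimate, here is a quick numerical sanity check (ours, not part of the original proof) of the trapping argument just used: the function $f(x)=\frac B{B-2}x^2-\frac2{B-2}x^B$ is increasing on $[0,1]$ with $f(1)=1$, so a bound $f(X(t))\leq1-\delta$ confines $X(t)$ to an interval $[0,1-\epsilon]$.
\begin{verbatim}
# Sketch: the trapping function f(x) = B/(B-2) x^2 - 2/(B-2) x^B on [0,1]
# is increasing with f(1) = 1, for any B > 2 (here B = (Np - N - alpha)/2).
def f(x, B):
    return B / (B - 2) * x**2 - 2 / (B - 2) * x**B

B = 4.0                                   # sample value; any B > 2 works
xs = [i / 1000 for i in range(1001)]
vals = [f(x, B) for x in xs]
assert all(vals[i] <= vals[i + 1] + 1e-12 for i in range(1000))  # monotone
assert abs(f(1.0, B) - 1.0) < 1e-12                              # f(1) = 1
delta = 0.1
x_max = max(x for x, v in zip(xs, vals) if v <= 1 - delta)
print("on this grid, f(x) <= 1 - delta forces x <=", x_max)
\end{verbatim}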
\begin{lem}\label{crcv} There exists $R_0:=R_0(\delta,M(u),\phi)>0$ such that for any $R>R_0$, $$\sup_{t\in\mathbb{R}}\|\psi_R u(t)\|^{2-s_c}\|\Delta(\psi_Ru(t))\|^{s_c}<(1-\delta)\|\phi\|^{2-s_c}\|\Delta\phi\|^{s_c}.$$ In particular, there exists $\delta'>0$ such that $$\|\Delta(\psi_Ru)\|^2-\frac B{2p}\int_{\mathbb{R}^N}(I_\alpha*|\psi_Ru|^p)|\psi_Ru|^p\,dx\geq\delta'\|\psi_Ru\|_{\frac{2Np}{N+\alpha}}^2.$$ \end{lem} \begin{proof} Taking account of Proposition \ref{gag}, one gets \begin{eqnarray*} E(u) &=&\|\Delta u\|^2-\frac1p\int_{\mathbb{R}^N}(I_\alpha*|u|^p)|u|^p\,dx\\ &\geq&\|\Delta u\|^2\Big(1-\frac{C_{N,p,\alpha}}p\|u\|^A\|\Delta u\|^{B-2}\Big)\\ &\geq&\|\Delta u\|^2\Big(1-\frac{C_{N,p,\alpha}}p[\|u\|^{2-s_c}\|\Delta u\|^{s_c}]^{p-1}\Big). \end{eqnarray*} So, with the previous Lemma \begin{eqnarray*} E(u) &\geq&\|\Delta u\|^2\Big(1-(1-\delta)\frac2A(\frac AB)^{\frac B2}\|\phi\|^{-2(p-1)}[\|\phi\|^{2-s_c}\|\Delta\phi\|^{s_c}]^{p-1}\Big)\\ &\geq&\|\Delta u\|^2\Big(1-(1-\delta)\frac2A(\frac AB)^{\frac B2}[\frac{\|\Delta\phi\|}{\|\phi\|}]^{s_c(p-1)}\Big)\\ &\geq&\|\Delta u\|^2\Big(1-(1-\delta)\frac2B(\frac{\|\phi\|}{\|\Delta\phi\|})^{B-2}[\frac{\|\Delta\phi\|}{\|\phi\|}]^{B-2}\Big)\\ &\geq&\|\Delta u\|^2\Big(1-(1-\delta)\frac2B\Big). \end{eqnarray*} Thus, using Sobolev injections with the fact that $p<p^*$, one gets $$\|\Delta u\|^2-\frac B{2p}\int_{\mathbb{R}^N}(I_\alpha*|u|^p)|u|^p\,dx\geq\delta\|\Delta u\|^2\geq\delta'\|u\|_{\frac{2Np}{N+\alpha}}^2.$$ This gives the second part of the claimed Lemma provided that the first point is proved. A direct computation \cite{vdd} gives $$\|\Delta(\psi_R u)\|^2-\|\psi_R\Delta u\|^2\leq C(u_0,\phi)R^{-2}.$$ Then, one gets the proof of the first point and so the Lemma. \end{proof} \subsection{Morawetz estimate} In this sub-section, one proves the next result. \begin{lem}\label{bnd} Take $N\geq1$, $0<\alpha<N<8+\alpha$ and $p_*<p<p^*$ such that $p\geq2$ and $u_0\in H^2_{rd}$ satisfying $$\max\{\mathcal M\mathcal E(u_0),{\mathcal M\mathcal G}\mathcal M(u_0)\}<1.$$ Then, for any $T>0$, one has $$\int_0^T\|u(t)\|_{L^\frac{2Np}{N+\alpha}}^2\,dt\leq CT^{\frac13}.$$ \end{lem} \begin{proof} Take a smooth real function such that $0\leq f''\leq1$ and $$f:r\to\left\{ \begin{array}{ll} \frac{r^2}2,\,\,\mbox{if}\,\, 0\leq r\leq\frac12;\\ 1,\,\,\mbox{if}\,\, r\geq1. \end{array} \right. $$ Moreover, for $R>0$, let the smooth radial function defined on $\mathbb{R}^N$ by $f_R:=R^2f(\frac{|\cdot|}R)$. 
One can check that $$0\leq f_R''\leq1,\quad f'(r)\leq r,\quad N\geq\Delta f_R.$$ Let the real function $$M_R:t\to2\int_{\mathbb{R}^N}\nabla f_R(x)\Im(\nabla u(t,x)\bar u(t,x))\,dx.$$ By Morawetz estimate in Proposition \ref{mrwtz}, one has \begin{eqnarray*} M_R' &=&2\int_{\mathbb{R}^N}\Big(2\partial_{jk}\Delta f_R\partial_ju\partial_k\bar u-\frac12(\Delta^3f_R)|u|^2-4\partial_{jk}f_R\partial_{ik}u\partial_{ij}\bar u+\Delta^2f_R|\nabla u|^2\Big)\,dx\\ &-&2\Big((-1+\frac2p)\int_{\mathbb{R}^N}\Delta f_R(I_\alpha*|u|^p)|u|^p\,dx+\frac2{p}\int_{\mathbb{R}^N}\partial_kf_R\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx\Big)\\ &=&2\int_{\mathbb{R}^N}\Big(2\partial_{jk}\Delta f_R\partial_ju\partial_k\bar u-\frac12(\Delta^3f_R)|u|^2-\frac2{p}\partial_kf_R\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx+\Delta^2f_R|\nabla u|^2\Big)\,dx\\ &+&2\Big(-(-1+\frac2p)N\int_{\{|x|<\frac R2\}}(I_\alpha*|u|^p)|u|^p\,dx-4\int_{\{|x|<\frac R2\}}|\Delta u|^2\,dx\Big)\\ &+&2\Big((-1+\frac2p)\int_{\{\frac R2<|x|<R\}}\Delta f_R(I_\alpha*|u|^p)|u|^p\,dx-4\int_{\{\frac R2<|x|<R\}}\partial_{jk}f_R\partial_{ik}u\partial_{ij}\bar u\,dx\Big). \end{eqnarray*} Using the estimate $\||\nabla|^kf_R\|_\infty\lesssim R^{2-k}$, one has \begin{gather*} |\int_{\mathbb{R}^N}\partial_{jk}\Delta f_R\partial_ju\partial_k\bar u\,dx|\lesssim R^{-2};\\ |\int_{\mathbb{R}^N}(\Delta^3f_R)|u|^2\,dx|\lesssim R^{-4};\\ |\int_{\mathbb{R}^N}\Delta^2f_R|\nabla u|^2\,dx|\lesssim R^{-2}. \end{gather*} Moreover, by the radial setting, one writes $$\int_{\{\frac R2<|x|<R\}}\partial_{jk}f_R\partial_{ik}u\partial_{ij}\bar u\,dx\geq (N-1)\int_{\{\frac R2<|x|<R\}}\frac{f'_R(r)}{r^3}|\partial_ru|^2\,dx=\mathcal O(R^{-2}).$$ Now, by Hardy-Littlewood-Sobolev and Strauss inequalities \begin{eqnarray*} |\int_{\{\frac R2<|x|<R\}}\Delta f_R(I_\alpha*|u|^p)|u|^p\,dx| &\lesssim&\|u\|_{L^{\frac{2Np}{\alpha+N}}(|x|>R)}^{2p}\\ &\lesssim&\Big(\int_{|x|>R}|u(x)|^{\frac{2Np}{\alpha+N}-2}|u(x)|^2\,dx\Big)^{\frac{\alpha+N}N}\\ &\lesssim&\|u\|_{L^\infty(|x|>R)}^\frac{4B}N\Big(\int_{|x|>R}|u(x)|^2\,dx\Big)^{\frac{\alpha+N}N}\\ &\lesssim&R^{-\frac{2B(N-1)}N}. \end{eqnarray*} Thus, since $\|\nabla u\|^2\lesssim \|\Delta u\|\lesssim 1$, one gets \begin{eqnarray*} M_R' &\leq&2\Big(-(-1+\frac2p)N\int_{\{|x|<\frac R2\}}(I_\alpha*|u|^p)|u|^p\,dx-4\int_{\{|x|<\frac R2\}}|\Delta u|^2\,dx\Big)\\ &-&\frac4{p}\int_{\mathbb{R}^N}\partial_kf_R\partial_k(I_\alpha*|u|^p)|u|^{p}\,dx+\mathcal O(R^{-2}). \end{eqnarray*} Now, let us define the sets \begin{gather*} \Omega:=\{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N,\,\,\mbox{s\,.t}\,\,\frac R2<|x|<R\}\cup \{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N,\,\,\mbox{s\,.t}\,\,\frac R2<|y|<R\};\\ \Omega':=\{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N,\,\,\mbox{s\,.t}\,\,|x|>R,|y|<\frac R2\}\cup \{(x,y)\in\mathbb{R}^N\times\mathbb{R}^N,\,\,\mbox{s\,.t}\,\,|x|<\frac R2, |y|> R\}. \end{gather*} Consider the term \begin{eqnarray*} (I) &:=&\int_{\mathbb{R}^N}\nabla f_R(\frac.{|\cdot|^2}I_\alpha*|u|^p)|u|^{p}\,dx\\ &=&\frac12\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}(\nabla f_R(x)-\nabla f_R(y))(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\,dx\,dy\\ &=&\Big(\int_{\Omega}+\int_{\Omega'}+\int_{|x|,|y|<\frac R2}+\int_{|x|,|y|>R}\Big)\Big(\nabla f_R(x)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\,dx\,dy\Big). 
\end{eqnarray*} Compute \begin{eqnarray*} (a) &:=&\int_{\Omega'}\Big(\nabla f_R(x)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&\int_{\{|x|>R,|y|<\frac R2\}}\Big(\nabla f_R(x)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &+&\int_{\{|y|>R,|x|<\frac R2\}}\Big(\nabla f_R(x)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&\int_{\{|x|>R,|y|<\frac R2\}}\Big((\nabla f_R(x)-\nabla f_R(y))(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&2\int_{\{|x|>R,|y|<\frac R2\}}\Big(y(y-x)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy. \end{eqnarray*} Moreover, \begin{eqnarray*} (b) &:=&\frac12\int_{\{|x|<\frac R2,|y|<\frac R2\}}\Big((\nabla f_R(x)-\nabla f_R(y))(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&\frac12\int_{\{|x|<\frac R2,|y|<\frac R2\}}\Big((x-y)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&\frac12\int_{\{|x|<\frac R2,|y|<\frac R2\}}\Big(I_\alpha(x-y)|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&\frac12\int_{\mathbb{R}^N}(I_\alpha*|\psi_Ru|^p)|\psi_Ru|^{p}\,dx. \end{eqnarray*} Furthermore, \begin{eqnarray*} (c) &:=&\int_{\{\frac R2<|x|<R\}}\int_{\mathbb{R}^N}\Big(\nabla f_R(x)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&\int_{\{\frac R2<|x|<R,|y-x|>\frac R4\}}\Big(\nabla f_R(x)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &+&\int_{\{\frac R2<|x|<R,|y-x|<\frac R4\}}\Big(\nabla f_R(x)(x-y)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &=&\mathcal O\Big(\int_{\{|x|>\frac R2\}}(I_\alpha*|u|^p)|u|^{p}\,dx\Big). \end{eqnarray*} Moreover, since for large $R>0$ on $\{|x|>R,|y|<\frac R2\}$, $|x-y|\simeq |x|>R>>\frac R2>|y|$, one has \begin{eqnarray*} (a) &=&\int_{\{|x|>R,|y|<\frac R2\}}\Big(y(y-x)\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &\lesssim&\int_{\{|x|>R,|y|<\frac R2\}}\Big(|y||x-y|\frac{I_\alpha(x-y)}{|x-y|^2}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &\lesssim&\int_{\{|x|>R,|y|<\frac R2\}}\Big({I_\alpha(x-y)}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &\lesssim&\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\Big({I_\alpha(x-y)}\chi_{|x|>R}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &\lesssim&\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\Big({I_\alpha(x-y)}\chi_{|x|>R}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy. \end{eqnarray*} Taking account of Hardy-Littlewood-Sobolev inequality, H\"older and Strauss estimates and Sobolev injections via the fact that $p_*<p<p^*$, write \begin{eqnarray*} (a) &\lesssim&\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}\Big({I_\alpha(x-y)}\chi_{|x|>R}|u(y)|^p|u(x)|^{p}\Big)\,dx\,dy\\ &\lesssim&\|u\|_{L^{\frac{2Np}{N+\alpha}}(|x|>R)}^p\|u\|_{L^{\frac{2Np}{N+\alpha}}}^p\\ &\lesssim&\Big(\int_{\{|x|>R\}}|u|^{\frac{2Np}{N+\alpha}}\,dx\Big)^{\frac{N+\alpha}{2N}}\\ &\lesssim&\Big(\int_{\{|x|>R\}}|u|^2(|x|^{-\frac{N-1}2}\|u\|^\frac12\|\nabla u\|^\frac12)^{-2+\frac{2Np}{N+\alpha}}\,dx\Big)^{\frac{N+\alpha}{2N}}\\ &\lesssim&\|u\|^{\frac{N+\alpha}{N}}\frac1{R^{\frac{B(N-1)}{2N}}}(\|u\|\|\nabla u\|)^{\frac B{2N}}\\ &\lesssim&R^{-\frac{B(N-1)}{2N}}. \end{eqnarray*} Then, \begin{eqnarray*} M' &\leq&2\Big(-(-1+\frac2p)N\int_{\{|x|<\frac R2\}}(I_\alpha*|u|^p)|u|^p\,dx-4\int_{\{|x|<\frac R2\}}|\Delta u|^2\,dx\Big)\\ &+&\frac2{p}(N-\alpha)\int_{\mathbb{R}^N}(I_\alpha*|\psi_Ru|^p)|\psi_Ru|^{p}\,dx +\mathcal O(R^{-2})\\ &\leq&\frac{4B}p\int_{\{|x|<\frac R2\}}(I_\alpha*|\psi_Ru|^p)|\psi_Ru|^p\,dx-8\int_{\{|x|<\frac R2\}}|\Delta(\psi_R u)|^2\,dx +\mathcal O(R^{-2}). 
\end{eqnarray*} So, with Lemma \ref{crcv}, one gets \begin{eqnarray*} \sup_{[0,T]}|M| &\geq&8\int_0^T\Big(\int_{\{|x|<\frac R2\}}|\Delta(\psi_R u)|^2\,dx-\frac{B}{2p}\int_{\{|x|<\frac R2\}}(I_\alpha*|\psi_Ru|^p)|\psi_Ru|^p\,dx\Big)\,dt+\mathcal O(R^{-2})T\\ &\geq&8\delta'\int_0^T\|\psi_Ru(t)\|_{\frac{2Np}{N+\alpha}}^2\,dt+\mathcal O(R^{-2})T\\ &\geq&8\delta'\int_0^T\|u(t)\|_{L^\frac{2Np}{N+\alpha}(|x|<\frac R2)}^2\,dt+\mathcal O(R^{-2})T. \end{eqnarray*} Thus, with previous computation \begin{eqnarray*} \int_0^T\|u(t)\|_{\frac{2Np}{N+\alpha}}^2\,dt &\leq& C\Big(\sup_{[0,T]}|M|+T(R^{-2}+R^{-\frac{B(N-1)}{2N}}\Big)\\ &\leq& C\Big(R+TR^{-2}\Big). \end{eqnarray*} Taking $R=T^\frac13>>1$, one gets the requested estimate $$\int_0^T\|u(t)\|_{\frac{2Np}{N+\alpha}}^2\,dt\leq CT^{\frac13}.$$ For $0<T<<1$, the proof follows with Sobolev injections. \end{proof} As a consequence, one has the following energy evacuation. \begin{lem}\label{evac} Take $N\geq1$, $0<\alpha<N<8+\alpha$ and $p_*<p<p^*$ such that $p\geq2$ and $u_0\in H^2_{rd}$ satisfying $$\max\{\mathcal M\mathcal E(u_0),{\mathcal M\mathcal G}\mathcal M(u_0)\}<1.$$ Then, there exists a sequence of real numbers $t_n\to\infty$ such that $$\lim_n\int_{\{|x|<R\}}|u(t_n,x)|^2\,dx=0,\quad\mbox{for all}\quad R>0.$$ \end{lem} \begin{proof} Take $t_n\to\infty$. By H\"older estimate $$\int_{\{|x|<R\}}|u(t_n,x)|^2\,dx\leq R^{\frac{2B}p}\|u(t_n)\|_{\frac{2Np}{N+\alpha}}^2\to0.$$ Indeed, by the previous Lemma $$\|u(t_n)\|_{\frac{2Np}{N+\alpha}}\to0.$$ \end{proof} \subsection{Scattering} This sub-section is devoted to prove Theorem \ref{sctr2}. Let us start with an auxiliary result. \begin{prop}\label{fn} Take $N\geq5$, $\frac{24}5<\frac{24+\alpha}5<N<8+\alpha$ and $p_*<p<p^*$ such that $p\geq2$ and $u_0\in H^2_{rd}$ satisfying $$\max\{\mathcal M\mathcal E(u_0),{\mathcal M\mathcal G}\mathcal M(u_0)\}<1.$$ Then, for any $\varepsilon>0$, there exist $T,\mu>0$ such that $$\|e^{i(\cdot-T)\Delta^2}u(T)\|_{L^a((T,\infty),L^r)}\lesssim \varepsilon^\mu.$$ \end{prop} \begin{proof} Let $\beta>0$ and $T>\varepsilon^{-\beta}>0$. By the integral formula \begin{eqnarray*} e^{i(\cdot-T)\Delta^2}u(T) &=&e^{i\cdot\Delta^2}u_0+i\int_0^Te^{i(\cdot-s)\Delta^2}[(I_\alpha*|u|^p)|u|^{p-2}u]\,ds\\ &=&e^{i\cdot\Delta^2}u_0+i\Big(\int_0^{T-\varepsilon^{-\beta}}+\int_{T-\varepsilon^{-\beta}}^T\Big)e^{i(\cdot-s)\Delta^2}[(I_\alpha*|u|^p)|u|^{p-2}u]\,ds\\ &:=&e^{i\cdot\Delta^2}u_0+F_1+F_2. \end{eqnarray*} $\bullet$ The linear term. Take the real number $\frac1b:=\frac1r+\frac{s_c}N$. Since $(a,b)\in\Gamma$, by Strichartz estimate and Sobolev injections, one has \begin{eqnarray*} \|e^{i\cdot\Delta^2}u_0\|_{L^a((T,\infty),L^r)} &\lesssim&\||\nabla|^{s_c}e^{i\cdot\Delta^2}u_0\|_{L^a((T,\infty),L^b)}\lesssim\|u_0\|_{H^2}. \end{eqnarray*} $\bullet$ The term $F_2$. By Strichartz estimate \begin{eqnarray*} \|F_2\|_{L^a((T,\infty),L^r)} &\lesssim&\|(I_\alpha*|u|^p)|u|^{p-1}\|_{L^{m'}((T-\varepsilon^{-\beta},T),L^{r'})}\\ &\lesssim&\|u\|_{L^a((T-\varepsilon^{-\beta},T),L^{r})}^{2p-1}\\ &\lesssim&\varepsilon^{-\frac{(2p-1)\beta} a}\|u\|_{L^\infty((T-\varepsilon^{-\beta},T),L^{r})}^{2p-1}. 
\end{eqnarray*} Now, by Lemma \ref{evac}, one has $$\int_{\mathbb{R}^N}\psi_R(x)|u(T,x)|^2\,dx<\epsilon^2.$$ Moreover, a computation with use of \end{equation}ref{S} and the properties of $\psi$ give $$|\frac d{dt}\int_{\mathbb{R}^N}\psi_R(x)|u(t,x)|^2\,dx|\lesssim R^{-1}.$$ Then, for any $T-\varepsilon^{-\beta}\leq t\leq T$ and $R>\varepsilon^{-2-\beta}$, yields $$\|\psi_Ru(t)\|\leq\Big( \int_{\mathbb{R}^N}\psi_R(x)|u(T,x)|^2\,dx+C\frac{T-t}R\Big)^\frac12\leq C\varepsilon.$$ This gives, for $R>\varepsilon^{-\frac{N(N-4)(p^*-p)}{4B(N-1)}}$, \begin{eqnarray*} \|u\|_{L^\infty((T,\infty),L^r)} &\leq&\|\psi_Ru\|_{L^\infty((T,\infty),L^r)}+\|(1-\psi_R)u\|_{L^\infty((T,\infty),L^r)}\\ &\lesssim&\|\psi_Ru\|_{L^\infty((T,\infty),L^2)}^{\frac{N-4}{4p}(p^*-p)}\|\psi_Ru\|_{L^\infty((T,\infty),L^\frac{2N}{N-4})}^{1-\frac{N-4}{4p}(p^*-p)}\\ &+&\|(1-\psi_R)u\|_{L^\infty((T,\infty),L^\infty)}^{\frac{2B}{Np}}\|(1-\psi_R)u\|_{L^\infty((T,\infty),L^2)}^{1-\frac{2B}{Np}}\\ &\lesssim&\varepsilon^{\frac{N-4}{4p}(p^*-p)}+R^{-\frac{B(N-1)}{Np}}\\ &\lesssim&\varepsilon^{\frac{N-4}{4p}(p^*-p)}. \end{eqnarray*} So, \begin{eqnarray*} \|F_2\|_{L^a((T,\infty),L^r)} &\lesssim&\varepsilon^{-\frac{(2p-1)\beta} a}\|u\|_{L^\infty((T-\varepsilon^{-\beta},T),L^{r})}^{2p-1}\\ &\lesssim&\varepsilon^{-\frac{(2p-1)\beta} a}\varepsilon^{\frac{N-4}{4p}(p^*-p)(2p-1)}\\ &\lesssim&\varepsilon^{-\frac{2p-1}{4p}((N-4)(p^*-p)+\beta\frac{4p}a)}. \end{eqnarray*} $\bullet$ The term $F_1$. Take $\frac1r=\frac\lambda b$. By interpolation \begin{eqnarray*} \|F_1\|_{L^a((T,\infty),L^r)} &\lesssim&\|F_1\|_{L^a((T,\infty),L^b)}^\lambda\|F_1\|_{L^a((T,\infty),L^\infty)}^{1-\lambda}\\ &\lesssim&\|e^{i(\cdot-(T-\varepsilon^{-\beta}))\Delta^2}u(T-\varepsilon^{-\beta})-e^{i\cdot\Delta^2}u_0\|_{L^a((T,\infty),L^b)}^\lambda\|F_1\|_{L^a((T,\infty),L^\infty)}^{1-\lambda}\\ &\lesssim&\|F_1\|_{L^a((T,\infty),L^\infty)}^{1-\lambda}. \end{eqnarray*} With the free Schr\"odinger operator decay $$\|e^{it\Delta^2}\cdot\|_r\leq\frac C{t^{\frac N2(\frac12-\frac1r)}}\|\cdot\|_{r'}, \quad \forall r\geq2,$$ for $T\leq t$, and $$2\leq d:=\frac{2p-1}{1+\frac\alpha N}\leq\frac{2N}{N-4},$$ one gets \begin{eqnarray*} \|F_1\|_{\infty} &\lesssim&\int_0^{T-\varepsilon^{-\beta}}\frac1{(t-s)^{\frac N4}}\|(I_\alpha*|u|^p)|u|^{p-2}u\|_1\,ds\\ &\lesssim&\int_0^{T-\varepsilon^{-\beta}}\frac1{(t-s)^{\frac N4}}\|u(s)\|_{d}^{2p-1}\,ds\\ &\lesssim&(t-T+\varepsilon^{-\beta})^{1-\frac N4}. \end{eqnarray*} Thus, if $\frac N4>1+\frac1a$, it follows that \begin{eqnarray*} \|F_1\|_{L^a((T,\infty),L^r)} &\lesssim&\|F_1\|_{L^a((T,\infty),L^\infty)}^{1-\lambda}\\ &\lesssim&\Big(\int_T^\infty(t-T+\varepsilon^{-\beta})^{a[1-\frac N4]}\,dt\Big)^{\frac{1-\lambda}a}\\ &\lesssim&\varepsilon^{(1-\lambda)\beta[\frac N4-1-\frac1a]}. \end{eqnarray*} Since the above condition is satisfied for $N>\frac{24+\alpha}N$, one concludes the proof by collecting the previous estimates. \end{proof} \begin{proof}[Proof of Theorem \ref{sctr2}] The scattering of energy global solutions to the focusing problem \end{equation}ref{S} follows with Proposition \ref{fn} via Lemmas \ref{sdt}. \end{proof} \end{document}
\begin{document} \title{Variations of Weyl's tube formula} \author[A.~Burtscher]{Annegret Burtscher$^\dagger$} \author[G.~Heckman]{Gert Heckman$^\dagger$} \makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother \subjclass[2020]{53A07 (primary), 53A55, 20G05 (secondary)} \keywords{Tubes, volumes, curvature, invariant theory, finite reflection groups} \thanks{$^\dagger$Department of Mathematics, IMAPP, Radboud University Nijmegen, The Netherlands, \textit{Emails}: \texttt{[email protected]}, \texttt{[email protected]}.\\ \indent \emph{Acknowledgements:} Research of the first author supported by the Dutch Research Council (NWO), Project number VI.Veni.192.208. Part of this material is based upon work supported by the Swedish Research Council under grant no.\ 2016-06596 while the first author was in residence at Institut Mittag-Leffler in Djursholm, Sweden in the Fall of 2019.} \maketitle \begin{abstract} In 1939 Weyl showed that the volume of spherical tubes around compact submanifolds $M$ of Euclidean space depends solely on the induced Riemannian metric on $M$. Can this intrinsic nature of the tube volume be preserved for tubes with more general cross sections $\mathbb{D}$ than the round ball? Under sufficiently strong symmetry conditions on $\mathbb{D}$ the answer turns out to be yes. \end{abstract} \section{Introduction}\label{Introduction} Let us be given a compact connected manifold $M$ (possibly with a boundary) of dimension $n$ embedded in $\mathbb{R}^{n+m}$ as submanifold of codimension $m$. For each $r\in M$ we have an orthogonal decomposition $T_rM\oplus N_rM$ of $\mathbb{R}^{m+n}$ into tangent space and normal space at $r$ of $M$. It was shown by Weyl~\cite{Weyl 1939a} that the Euclidean volume of the spherical tube \[ \{r+n \, ; r\in M, \, n\in N_rM, \, |n|\leq a\} \] around $M$ with radius $a>0$ sufficiently small is equal to \[ V_M(a)=\Omega_m\;\sum_{d=0}^n\frac{k_d(M)\,a^{m+d}}{(m+2)\cdots(m+d)} \qquad (d\;\mathrm{even}) \] with $\Omega_m$ the volume of the unit ball $\mathbb{B}^m=\{t \, ; |t|\leq1\}$ in $\mathbb{R}^m$. The remarkable insight of Weyl is that the coefficients $k_d(M)$ are integral invariants of $M$ only determined by the \emph{intrinsic} metric nature of $M$. For example, the initial coefficient $k_0(M)=\int_M ds$ is the Riemannian volume of $M$ and the next coefficient is $k_2(M)=\tfrac12\int_MS\,ds$ with $S$ the scalar curvature of $M$. If $M$ has empty boundary and is of even dimension it was proved by Allendoerfer and Weil~\cite{Allendoerfer--Weil 1943} in their approach towards the Gauss--Bonnet theorem that the top coefficient $k_n(M)=(2\pi)^{n/2}\chi(M)$ with $\chi(M)$ the Euler characteristic of $M$ is even of topological nature. See also the text books of Gray on tubes~\cite{Gray 2004} and of Morvan on generalized curvatures~\cite{Morvan 2008} for further details. Due to the \emph{local} nature of the tube formula we can assume that the submanifold $M$ of $\mathbb{R}^{n+m}$ comes with a chosen orthonormal frame in the normal bundle $NM$ of $M$ in $\mathbb{R}^{n+m}$. In turn this gives for all $r\in M$ an identification of the normal space $N_rM$ with $\mathbb{R}^m$, and so for $\mathbb{D}^m$ a compact domain around $0$ in $\mathbb{R}^m$ we can consider the \emph{generalized tube} \[ \{r+n \, ; r\in M, \, n\in a\mathbb{D}^m\} \] around $M$ of type $a\mathbb{D}^m$ for $a>0$ sufficiently small. 
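For orientation, here is a simple instance of this notion. Let $M$ be a circle of radius $\rho$ in the plane $\{x_3=0\}$ of $\mathbb{R}^3$ (so $n=1$ and $m=2$), with the normal bundle framed by the outward radial and the vertical unit normal, and let $\mathbb{D}^2=[-1,1]^2$ be the square. For $0<a<\rho$ the generalized tube of type $a\mathbb{D}^2$ is the solid torus with square cross section of side $2a$, and by the classical Pappus centroid theorem its volume equals
\[
V_M(a)=2\pi\rho\cdot(2a)^2=\mathrm{length}(M)\,\mathrm{vol}(a\mathbb{D}^2),
\]
an intrinsic quantity depending on $M$ only through its length.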
The main result of this paper is that under sufficiently strong (relative to the dimension $n$ of $M$) symmetry requirements on the domain $\mathbb{D}^m$ a similar intrinsic formula for the volume $V(a)$ of the above generalized tube remains valid as in Weyl's case where $\mathbb{D}^m$ equals the unit ball $\mathbb{B}^m$. Our generalized tubes share the feature that the domains $\mathbb{D}^m$ are invariant under the following subgroups of the orthogonal group. \begin{definition}\label{orthogonal of degree n definition} A subgroup $G_m$ of the orthogonal group $\mathrm{O}_m(\mathbb{R})$ on $\mathbb{R}^m$ is called \emph{orthogonal of degree $n$} if any polynomial $p(t)\in\mathbb{R}[t]$ on $\mathbb{R}^m$ of degree $\leq n$ that is invariant under $G_m$ is in fact invariant under the full orthogonal group $\mathrm{O}_m(\mathbb{R})$. \end{definition} Our principal result is the following generalized tube formula. \begin{theorem}\label{abstract tube formula} Let $M$ be a compact connected manifold of dimension $n$ embedded in Euclidean space $\mathbb{R}^{n+m}$. If the compact domain $\mathbb{D}^m$ around $0$ in $\mathbb{R}^m$ has a symmetry group $G_m$ inside $\mathrm{O}_m(\mathbb{R})$ that is orthogonal of degree $n$ then the volume of the generalized tube of type $a\mathbb{D}^m$ for $a>0$ sufficiently small is given by \[ V_M(a)= \sum_{d=0}^n \frac{\{\int_{\mathbb{D}^m}|t|^d\,dt\}\,k_d(M)\,a^{m+d}}{m(m+2)\cdots(m+d-2)} \qquad (d\;\mathrm{even}) \] with intrinsic coefficients $k_d(M)=\int_M H_d\,ds$ as specified in Theorem~\ref{integrand average theorem}. \end{theorem} In Sections~\ref{The volume of tubes}--\ref{Averaging the integrand} we shall review the proof of the tube formula following Weyl's original approach and along the way obtain variations of the tube volume formula for polyhedral (and related) tubes rather than spherical tubes. We discuss several examples in Section~\ref{Polyhedral examples} and counterexamples in Section~\ref{No go results for diamant domains}, as well as causal tubes in Minkowski space in Section~\ref{Riemannian submanifolds of a Lorentzian vector space}. After this paper was finished we learned that tube volume formulas with more general cross sections $\mathbb{D}^m$ had already been studied before by Domingo-Juan and Miquel ~\cite{Domingo-Juan--Miquel 2004}. We decided to leave our paper as it was, but add Section~\ref{Pappus type theorems} in order to briefly survey their approach and compare their results with ours. \section{The volume of tubes}\label{The volume of tubes} Locally a submanifold $M$ of dimension $n$ in $\mathbb{R}^{n+m}$ is given in the Gaussian approach by a parametrization \[ r\colon U^n\rightarrow M \subset \mathbb{R}^{n+m},\quad u\mapsto r(u) \] with $u=(u^1,\ldots,u^n)\in U^n\subset\mathbb{R}^n$ and $\partial_i=\partial/\partial u^i$ in the usual notation of differential geometry. A summation sign $\sum$ without explicit mention of indices always means a summation over all indices, which occur both as upper and as lower index. The first fundamental form (or Riemannian metric) is given by \[ ds^2=\sum g_{ij}du^idu^j,\quad g_{ij}=\partial_ir\cdot\partial_jr, \] with $\cdot$ the scalar product on the ambient Euclidean space $\mathbb{R}^{n+m}$ and $g_{ij}=g_{ij}(u)$ a positive definite symmetric matrix for all $u\in U^n$. Let us choose an orthonormal frame field $u\mapsto n_1(u),\ldots,n_m(u)$ in the normal bundle of $M$, and so $\partial_ir\cdot n_p=0$ and $n_p\cdot n_q=\delta_{pq}$ along $M$ for all $i=1,\ldots,n$ and $p,q=1,\ldots,m$. 
Let $t=(t^1,\ldots,t^m)$ be Cartesian coordinates on $\mathbb{R}^m$. Let us be given a compact domain $\mathbb{D}^m$ around $0$ in $\mathbb{R}^m$ such that the map \[ x \colon U^n\times\mathbb{D}^m \to\mathbb{R}^{n+m}, \quad (u,t) \mapsto x(u,t)=r(u)+\sum t^pn_p(u) \] is a diffeomorphism of $U^n\times\mathbb{D}^m$ onto its image in $\mathbb{R}^{n+m}$. This image is called a tube of type $\mathbb{D}^m$ around $r(U^n)$. We are interested in the Euclidean volume $V_{U^n}(a)$ of the local tube \[ \{x(u,t)=r(u)+\sum t^pn_p(u) \, ; u\in U^n,\;t\in a\mathbb{D}^m\} \subset\mathbb{R}^{n+m} \] of type $a\mathbb{D}^m$ as a function of a small positive parameter $a>0$. By the Jacobi substitution theorem we have \[ V_{U^n}(a)=\int_{U^n}\{\int_{a\mathbb{D}^m}J(u,t)\,dt\}\,du \] with $J$ the absolute value of the determinant \[ \det(\partial_1x\;\cdots\;\partial_nx\;n_1\;\cdots\;n_m ) \] and $\partial_ix=\partial_ir+\sum t^p\partial_in_p$ for $i=1,\ldots,n$. Recall that \[ \partial_i\partial_jr=\sum\Gamma_{ij}^k\partial_kr+ \sum h_{ij}^pn_p \] with $\Gamma_{ij}^k=\Gamma_{ij}^k(u)$ the Christoffel symbols and $h_{ij}^p=h_{ij}^p(u)$ the coefficients of the second fundamental form $h_{ij}$ relative to the orthonormal normal frame $n_p$. Here indices $p,q=1,\ldots,m$ are coordinate indices in the normal direction, while the other indices $i,j,k=1,\ldots,n$ are coordinate indices on the submanifold $M$. Since $\partial_jr\cdot n_p=0$ we get \[ \partial_in_p\cdot\partial_jr=-n_p\cdot\partial_i\partial_jr=-\sum\delta_{pq}h_{ij}^q \] with $\delta_{pq}$ the Kronecker symbol. Writing $t_p=\sum\delta_{pq}t^q$ we find \[ \partial_ix=\partial_ir-\sum \delta_{pq}t^ph_{ik}^qg^{kj}\partial_jr+\ldots= \sum(\delta_i^j-\sum t_ph_{ik}^{p}g^{kj})\partial_jr+\ldots \] with $g_{ij}=\partial_ir\cdot\partial_jr$, $g^{ij}$ its inverse matrix, $\det g = \det (g_{ij})$ its determinant and $\ldots$ stands for a linear combination of the normal fields $n_p$. If in the usual notation we write $h_{i}^{jp}=\sum g^{jk}h_{ik}^p$ for $i,j=1,\ldots,n$ and $p=1,\ldots,m$ then we get \begin{gather*} \det(\partial_1x\;\cdots\;\partial_nx\;n_1\;\cdots\;n_m )= \\ \det(\delta_i^j-\sum t_ph_{i}^{jp}) \det(\partial_1r\;\cdots\;\partial_nr\;n_1\;\cdots\;n_m) \end{gather*} which in turn implies \[ V_{U^n}(a)=\int_{U^n}\{\int_{a\mathbb{D}^m} \det(\delta_i^j-\sum t_ph_{i}^{jp})\,dt\}\sqrt{\det g_{ij}} \, du \] for all $a>0$ sufficiently small. For fixed $u\in U^n$ the integrand $\det(\delta_i^j-\sum t_ph_{i}^{jp})$ is a polynomial in $t$ of degree $n$ and so, after integration and patching together the locally defined tubes, we conclude that \[ V_M(a)=\sum_{d=0}^n v_d\,a^{m+d} \] is a polynomial in $a$ of degree $m+n$ and with coefficient $v_0= \mathrm{vol}(\mathbb{D}^m)\mathrm{vol}(M)$. If the domain $\mathbb{D}^m$ is centrally symmetric with respect the origin, that is if $-\mathbb{D}^m=\mathbb{D}^m$, then the integrals of odd degree monomials in $t$ over $\mathbb{D}^m$ vanish and so $v_d=0$ for $d$ odd. In order to show that the volume $V_M(a)$ of a generalized tube of type $\mathbb{D}^m$ depends only on intrinsic quantities of $M$, two steps are necessary. Firstly, by assuming that $M$ is embedded in flat $\mathbb{R}^{n+m}$, one observes that certain combinations of the second fundamental forms are intrinsic curvature quantities (this was already done by Weyl). 
Secondly, by imposing certain symmetry conditions on $\mathbb{D}^m$, we show that only those intrinsic combinations remain in the volume formula $V_M(a)$ for the generalized tube (done by Weyl for the ball $\mathbb{B}^m$). These steps are carried out in Sections~\ref{The Gauss equations} and \ref{Averaging the integrand}, respectively. \section{The Gauss equations}\label{The Gauss equations} As before, we write \[ \partial_i\partial_jr=\sum \Gamma_{ij}^k\partial_kr+h_{ij} \] with $\Gamma_{ij}^k$ the Christoffel symbols given by \[ \tfrac{1}{2} \sum g^{kl}(\partial_ig_{jl}+\partial_jg_{il}-\partial_lg_{ij}) \] and $h_{ij}=\sum h_{ij}^pn_p$ the second fundamental form relative to the orthonormal frame $n_p$ in the normal bundle along $M$. Given scalar functions $g_{ij}$ and $h_{ij}^p$, the integrability conditions for the existence of an embedding of $M$ into flat Euclidean space with these functions as coefficients of the first and second fundamental forms are given by \[ \partial_i(\sum \Gamma_{jk}^l\partial_lr+\sum h_{jk}^pn_p)- \partial_j(\sum \Gamma_{ik}^l\partial_lr+\sum h_{ik}^pn_p)=0 \] for all $i,j,k$ (by working out $\partial_i(\partial_j\partial_kr)- \partial_j(\partial_i\partial_kr)=0$). In the normal directions this leads to the Codazzi--Mainardi equations \[ \partial_ih_{jk}^p-\partial_jh_{ik}^p+ \sum(\Gamma_{jk}^lh_{il}^p-\Gamma_{ik}^lh_{jl}^p)=0 \] for all $i,j,k$ and all $p$. In the tangential directions this amounts to the Gauss equations \[ R_{kij}^l=\sum \delta_{pq}g^{ln}(h_{in}^ph_{jk}^q-h_{jn}^ph_{ik}^q) \] for all $i,j,k,l$ with \[ R_{kij}^l=\partial_i\Gamma_{kj}^l-\partial_j\Gamma_{ki}^l+ \sum (\Gamma_{kj}^m\Gamma_{mi}^l-\Gamma_{ki}^m\Gamma_{mj}^l) \] the coefficients of the Riemann curvature tensor. As mentioned earlier, by raising indices $h_{i}^{jp}=\sum g^{jk}h_{ki}^p$ and $R_{ij}^{kl}=\sum g^{ln}R_{nij}^k$ the Gauss equations take the form \[ R_{ij}^{kl}=\sum \delta_{pq}(h_{i}^{kp}h_{j}^{lq}-h_{j}^{kp}h_{i}^{lq})= h_i^{k}\cdot h_j^l-h_j^k\cdot h_i^l \] for all $i,j,k,l$ and $h_i^j=\sum h_i^{jp}n_p=\sum g^{jk}h_{ki}$ normal vectors along $M$. \section{Averaging the integrand} \label{Averaging the integrand} For $p(t)\in\mathbb{R}[t_1,\cdots,t_m]$ a polynomial on Euclidean space $\mathbb{R}^m$ and $G_m$ a closed subgroup of the orthogonal group $\mathrm{O}_m(\mathbb{R})$ let us write \[ \langle p(t)\rangle_{G_m}=\int_{G_m}\;p(gt)\,d\mu(g)\] for the average of $p$ over $G_m$, with $\mu$ the normalized Haar measure on $G_m$. Clearly \[ \langle p(t)\rangle_{\mathrm{O}_m(\mathbb{R})}\in\mathbb{R}[t\cdot t] \] with $t\cdot t=|t|^2$ the norm squared of $t\in\mathbb{R}^m$. The crucial step for the intrinsic nature of the coefficients of the tube volume formula is the following result (see the Lemma on page 470 of Weyl's paper \cite{Weyl 1939a}). \begin{theorem}\label{integrand average theorem} We have (with $1\leq i,j\leq n$ and $1\leq p\leq m$) \[ \langle\det(\delta_i^j-\sum t_ph_{i}^{jp}) \rangle_{\mathrm{O}_m(\mathbb{R})}= \sum_{d=0}^n\frac{H_d\,|t|^d}{m(m+2)\cdots(m+d-2)} \] with $H_d$ intrinsic functions on $M$ given by \begin{align*} H_d = \begin{cases} 0 & \text{if}~d~\text{odd}, \\ 1 & \text{if}~d=0, \\ \sum \varepsilon_{i_1\ldots i_d}^{j_1\ldots j_d}\; R_{i_1i_2}^{j_1j_2}\cdots R_{i_{d-1}i_d}^{j_{d-1}j_d} & \text{if}~d>0~\text{even}, \end{cases} \end{align*} and \[ R_{ij}^{kl}=H_{ij}^{kl}-H_{ji}^{kl}\;\;,\;\; H_{ij}^{kl}=h_i^k\cdot h_j^l=\sum \delta_{pq}h_i^{kp}h_{j}^{lq} \] for $i,j,k,l=1,\ldots,n$. 
In the expression for $H_d$ with $d>0$ even the sum runs over all cardinality $d$ subsets $\mathcal{D}$ of $\{1,\ldots,n\}$ and over all possible couplings of pairs \[ _{i_1i_2}^{j_1j_2}\,|\, _{i_3i_4}^{j_3j_4}\,|\,\cdots\,|_{i_{d-1}i_d}^{j_{d-1}j_d}\,| \] taken from $\mathcal{D}=\{i_1,\dots,i_d\}=\{j_1,\dots,j_d\}$. Here a pair $i_1i_2$ means two distinct numbers $i_1,i_2$ irrespective of their order. \end{theorem} \begin{proof} Averaging the characteristic polynomial $\det(\lambda\delta_i^j-\sum t_ph_{i}^{jp})$ over the orthogonal group $\mathrm{O}_m(\mathbb{R})$ acting on $t\in\mathbb{R}^m$ yields \[ \langle\det(\lambda\delta_i^j-\sum t_ph_{i}^{jp})\rangle_{\mathrm{O}_m(\mathbb{R})}= \sum_{d=0}^n\,{\lambda}^{n-d}\,|t|^d\sum_{\mathcal{D}}\,A_{\mathcal{D}}(h_i^j) \] with the sum over all even $d$ and all cardinality $d$ subsets $\mathcal{D}$ of $\{1,\ldots,n\}$ and with $A_{\mathcal{D}}(h_i^j)$ given by \[ \langle\det(t\cdot h_i^j)_{i,j\in\mathcal{D}}\rangle_{\mathrm{O}_m(\mathbb{R})}= |t|^dA_{\mathcal{D}}(h_i^j) \] as degree $d$ polynomial on $\mathbb{R}^{m\times d^2}$, which is invariant under the diagonal action of $\mathrm{O}_m(\mathbb{R})$. By the first fundamental theorem of invariant theory for $\mathrm{O}_m(\mathbb{R})$ (see Corollary 4.2.3 of \cite{Goodman--Wallach 1998}, which is a modern reincarnation of Weyl's classic \cite{Weyl 1939b}) we have \[ A_{\mathcal{D}}(h_i^j)=B_{\mathcal{D}}(H_{ij}^{kl}) \] with $H_{ij}^{kl}=h_i^k\cdot h_j^l$ and $i,j,k,l\in\mathcal{D}$ with $i\neq j,k\neq l$. From the explicit determinantal form it follows that $B_{\mathcal{D}}(H_{ij}^{kl})$ is in fact a linear combination of monomials of the form \[ H_{i_1i_2}^{j_1j_2}\cdots H_{i_{d-1}i_d}^{j_{d-1}j_d} \] with $\mathcal{D}=\{i_1,\dots,i_d\}=\{j_1,\dots,j_d\}$. Moreover under the action of the symmetric group $\mathfrak{S}_d$ acting on both the lower and the upper indices $B_{\mathcal{D}}(H_{ij}^{kl})$ transforms under the sign character. Therefore $B_{\mathcal{D}}(H_{ij}^{kl})=C_{\mathcal{D}}(R_{ij}^{kl})$ with $R_{ij}^{kl}=H_{ij}^{kl}-H_{ji}^{kl}$ and by symmetry for $\mathfrak{S}_d$ we arrive at \[ C_{\mathcal{D}}(R_{ij}^{kl})=c(m,d)\sum \varepsilon_{i_1\ldots i_d}^{j_1\ldots j_d}\; R_{i_1i_2}^{j_1j_2}\cdots R_{i_{d-1}i_d}^{j_{d-1}j_d} \] with the sum over all possible couplings of pairs from $\mathcal{D}$ and $c(m,d)$ a constant depending solely on $m$ and $d$. The conclusion is that \[ \langle\det(\delta_i^j-\sum t_ph_{i}^{jp})\rangle_{\mathrm{O}_m(\mathbb{R})}= \sum_{d=0}^n\,c(m,d)H_d\,|t|^d \] and all that is left is the computation of the constant $c(m,d)$. For this computation we take the special choice $h_i^{jp}=\delta_i^j$ for $p=1$ and $h_i^{jp}=0$ for $p\geq 2$. In that case \[ A_{\mathcal{D}}(h_i^j)=\frac{\int_{S^{m-1}}\,t_1^d\,d\mu(t)}{\int_{S^{m-1}}\,d\mu(t)} \] with $\mu$ the Euclidean measure on the unit sphere $S^{m-1}$ in $\mathbb{R}^m$. The integral in the numerator becomes \[ \int_{-1}^1\,r^d(1-r^2)^{(m-3)/2}dr=\int_0^1\,s^{(d-1)/2}(1-s)^{(m-3)/2}ds \] (apart from a factor volume $\omega_{m-1}$ of $S^{m-2}$) and so equals \[ \mathrm{B}((d+1)/2,(m-1)/2)=\frac{\Gamma((d+1)/2)\Gamma((m-1)/2)}{\Gamma((d+m)/2)}\,. \] In turn this implies \[ A_{\mathcal{D}}(h_i^j)=\frac{\Gamma((d+1)/2)\Gamma(m/2)}{\Gamma(1/2)\Gamma((d+m)/2)}= \frac{1\cdot3\cdots(d-1)}{m(m+2)\cdots(m+d-2)}\,. 
\] On the other hand $R_{ij}^{kl}=\delta_i^k\delta_j^l-\delta_j^k\delta_i^l$ and so equal to $\varepsilon_{ij}^{kl}$ if the pairs $ij$ and $kl$ coincide and $0$ otherwise, and hence \[ C_{\mathcal{D}}(R_{ij}^{kl})=c(m,d)\,\frac{d!}{2^{d/2}(d/2)!}=c(m,d)\cdot1\cdot3\cdots(d-1)\,. \] Hence $c(m,d)=1/m(m+2)\cdots(m+d-2)$ as desired. \end{proof} Recall that the volume $\omega_m$ of the unit sphere $S^{m-1}$ and the volume $\Omega_m$ of the unit ball $\mathbb{B}^m$ are related by $\Omega_m=\omega_m/m$. The tube formula of Weyl can now be easily derived. \begin{corollary}\label{Weyl tube formula} If the domain $\mathbb{D}^m$ in $\mathbb{R}^m$ is equal to the unit ball $\mathbb{B}^m$ then the tube volume is given by \[ V_M(a)=\Omega_m\sum_{d=0}^n\frac{k_d(M)\,a^{m+d}}{(m+2)\cdots(m+d)} \qquad (d\;\mathrm{even}) \] for $a>0$ small and $k_d(M)=\int_M H_d\,ds$ with $H_d$ the intrinsic expression on $M$ in the previous theorem and $ds$ the Riemannian measure on $M$. \end{corollary} \begin{proof} By Section~\ref{The volume of tubes} and the symmetry of $\mathbb{B}^m$ we have \begin{align*} V_{U^n}(a) &=\int_{U^n}\{\int_{a\mathbb{B}^m} \det(\delta_i^j-\sum t_ph_{i}^{jp})\,dt\}\sqrt{\det g_{ij}}\, du \\ &=\int_{U^n}\{\int_{a\mathbb{B}^m} \sum_{d=0}^n \frac{H_d\,|t|^d}{m(m+2)\cdots(m+d-2)}\,dt\}\sqrt{\det g_{ij}} \, du \\ &=\int_{U^n}\{\omega_m \int_0^a \sum_{d=0}^n\frac{H_d\,r^{m+d-1}}{m(m+2)\cdots(m+d-2)}\,dr\}\sqrt{\det g_{ij}} \, du \\ &=\Omega_m\sum_{d=0}^n\frac{\{\int_{U^n} H_d\,ds\}\,a^{m+d}}{(m+2)\cdots(m+d-2)(m+d)} \end{align*} and the result follows. \end{proof} If we consider domains $\mathbb{D}^m$ with symmetry groups $G_m < \mathrm{O}_m(\mathbb{R})$ such that the invariant polynomials of degree $\leq n = \dim M$ for both groups agree, then we can prove Theorem~\ref{abstract tube formula}. \begin{proof}[Proof of Theorem~\ref{abstract tube formula}] By the Fubini theorem we have for $H<G$ compact groups and $f$ a continuous function on $G$ that \[ \int_G\,f(g)\,d\mu_G(g)=\int_{G/H}\{\int_H\,f(gh)\,d\mu_H(h)\}\,d\mu_{G/H}(gH) \] with $\mu_G,\mu_H$ and $\mu_{G/H}$ the normalized invariant measures on $G,H$ and $G/H$ respectively. Hence by the assumption on $G_m$ we have \[ \langle\det(\delta_i^j-\sum t_ph_{i}^{jp})\rangle_{G_m}=\langle \det(\delta_i^j-\sum t_ph_{i}^{jp})\rangle_{\mathrm{O}_m(\mathbb{R})} \] and so we can just argue as in the previous proof. \end{proof} Using the discussion in Section~\ref{The volume of tubes}, for $n=1$ the tube formula is intrinsic as long as $\mathbb{D}^m$ is centrally symmetric, the case already covered by Hotelling~\cite{Hotelling 1939} if $\mathbb{D}^m=\mathbb{B}^m$. \begin{corollary}\label{Curves} If $M$ is a curve of finite length in $\mathbb{R}^{m+1}$ and $\mathbb{D}^m$ is centrally symmetric, then for $a>0$ sufficiently small \[ V_M(a) = \mathrm{length}(M) \, \mathrm{vol}(\mathbb{D}^m) \, a^m \] and hence is intrinsic. \qed \end{corollary} The following example shows that central symmetry is not a necessary condition. \begin{example} Let $\mathbb{D}^m$ be the union of half the unit ball $\{|t|\leq1,t_1\leq0\}$ and the cone $\{t_2^2+\dots+t_{m}^2\leq (1-t_1/b)^2,0\leq t_1\leq b\}$ with top $(b,0,\ldots,0)$ for some $b>0$. By symmetry the average of any linear function of $t_2,\ldots,t_m$ over $\mathbb{D}^m$ equals zero. By direct computation the average of $t_1$ over $\mathbb{D}^m$ is equal to zero if $b=\sqrt{m}$, and so the previous corollary remains valid for this domain as well. 
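For completeness, the direct computation can be carried out by slicing perpendicular to the $t_1$-axis: the slice at height $t_1$ is an $(m-1)$-dimensional ball of radius $(1-t_1^2)^{1/2}$ for $-1\leq t_1\leq0$ and of radius $1-t_1/b$ for $0\leq t_1\leq b$, and hence
\[
\int_{\mathbb{D}^m}t_1\,dt=\Omega_{m-1}\Big(\int_{-1}^0t_1(1-t_1^2)^{\frac{m-1}2}\,dt_1+\int_0^bt_1(1-t_1/b)^{m-1}\,dt_1\Big)=
\Omega_{m-1}\Big(-\frac1{m+1}+\frac{b^2}{m(m+1)}\Big),
\]
which vanishes precisely for $b=\sqrt{m}$.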
\end{example} Indeed, by the generalized Pappus centroid theorem, for curves it is sufficient that the center of mass of $\mathbb{D}^m$ lies at the origin. See Section~\ref{Pappus type theorems} for the higher dimensional case. \section{Examples of polyhedral domains $\mathbb{D}^m$} \label{Polyhedral examples} If we are looking for domains $\mathbb{D}^m$ in $\mathbb{R}^m$ with a sufficiently large symmetry group $G_m<\mathrm{O}_m(\mathbb{R})$ it is natural to consider regular polytopes $\mathbb{D}^m$ in $\mathbb{R}^m$. It is well known that the symmetry group $G_m$ in that case is an irreducible finite reflection group. Such groups are classified by their Coxeter diagrams or by letters $\mathrm{X}_m$ with $\mathrm{X}=\mathrm{A},\mathrm{B}, \mathrm{D},\mathrm{E},\mathrm{F},\mathrm{H},\mathrm{I}(k)$ for $k\geq5$. The corresponding reflection groups are denoted by $G_m=W(\mathrm{X}_m)$. It is a well known theorem due to Shephard and Todd \cite{Shephard--Todd 1954} (with a case-by-case proof) and Chevalley \cite{Chevalley 1955} (with a proof from the Book) that the algebra of polynomial invariants for a finite reflection group $W<\mathrm{O}(\mathbb{R}^m)$ is itself a polynomial algebra. \begin{theorem}\label{Chevalley theorem} The algebra $\mathbb{R}[\mathbb{R}^m]^{W}$ of polynomial invariants for $W$ is of the form $\mathbb{R}[p_1,\ldots,p_m]$ with $p_1,\ldots,p_m$ algebraically independent homogeneous invariants of degrees $d_1,d_2,\ldots,d_m$ respectively. \end{theorem} For each of the irreducible types these degrees can be calculated and are given in the next table. The proof of these results can be found in the standard text books by Bourbaki \cite{Bourbaki 1968} or by Humphreys \cite{Humphreys 1990}. \begin{center} \begin{tabular}{|l|l|l|} \hline $\mathrm{type}$ & $m$ & $d_1,d_2,\ldots,d_m$ \\ \hline $\mathrm{A}_m$ & $\geq 1$ & $2,3,\ldots,m+1$ \\ $\mathrm{B}_m$ & $\geq 2$ & $2,4,\ldots,2m$ \\ $\mathrm{D}_m$ & $\geq 4$ & $2,4,\ldots,2m-2,m$ \\ $\mathrm{E}_6$ & $6$ & $2,5,6,8,9,12$ \\ $\mathrm{E}_7$ & $7$ & $2,6,8,10,12,14,18$ \\ $\mathrm{E}_8$ & $8$ & $2,8,12,14,18,20,24,30$ \\ $\mathrm{F}_4$ & $4$ & $2,6,8,12$ \\ $\mathrm{H}_3$ & $3$ & $2,6,10$ \\ $\mathrm{H}_4$ & $4$ & $2,12,20,30$ \\ $\mathrm{I}_2(k)$ & $2$ & $2,k\;(k\geq5)$ \\ \hline \end{tabular} \end{center} So for $m\geq 2$ the irreducible finite reflection group $W(\mathrm{X}_m)<\mathrm{O}(\mathbb{R}^m)$ is orthogonal of degree $d_2-1$ in the sense of Definition~\ref{orthogonal of degree n definition}. \begin{corollary}\label{polyhedral tube formula} If $\mathbb{D}^m$ is a domain in $\mathbb{R}^m$ invariant under a finite reflection group $W(\mathrm{X}_m)$ then the tube formula of Theorem~\ref{abstract tube formula} does hold with intrinsic coefficients if $n=\dim M < d_2$, that is, the dimension $n$ of $M$ is strictly smaller than the second fundamental degree $d_2$. \qed \end{corollary} For example, if $\mathbb{D}^3$ is an icosahedron with symmetry group $W(\mathrm{H}_3)$ then the tube formula is intrinsic for submanifolds $M$ of dimension $n\leq5$ in $\mathbb{R}^{n+3}$, and if $\mathbb{D}^4$ is a $600$-cell with symmetry group $W(\mathrm{H}_4)$ then the tube formula is intrinsic for $n\leq11$. For any dimension $n$ of $M\hookrightarrow\mathbb{R}^{n+2}$ with $\mathbb{D}^2$ a regular $k$-gon with $k>n$ the tube formula is intrinsic, since its symmetry group is $W(\mathrm{I}_2(k))$.
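As a concrete illustration of Corollary~\ref{polyhedral tube formula}, take for $\mathbb{D}^m$ the cube $[-1,1]^m$ ($m\geq2$) with hyperoctahedral symmetry group $W(\mathrm{B}_m)$, so that the tube formula is intrinsic for $n\leq3$. For a surface $M$ (so $n=2$) in $\mathbb{R}^{2+m}$ one finds, using $\int_{[-1,1]^m}dt=2^m$ and $\int_{[-1,1]^m}|t|^2\,dt=\tfrac m3\,2^m$ in Theorem~\ref{abstract tube formula},
\[
V_M(a)=2^m\,\mathrm{vol}(M)\,a^m+\frac{2^m}3\,k_2(M)\,a^{m+2}
\]
with $k_2(M)=\tfrac12\int_MS\,ds$ as in the introduction.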
For dimension $n=2$ or $3$ we find in this way examples of intrinsic tube formulas for arbitrary codimension $m$ via domains $\mathbb{D}^m$ with symmetry groups $W(\mathrm{A}_m)$ ($n=2$) and $W(\mathrm{B}_m)$ ($n=2,3$), respectively. However, for dimension $n\geq 4$ we obtain in this way only examples of intrinsic tube formulas with relatively small codimension $m\leq8$. Examples with larger codimension $m$ can be obtained by the following construction. \begin{corollary} Let $G$ be a noncompact simple Lie group acting on its Lie algebra $\mathfrak{g}$, and let $\theta$ be a Cartan involution of $G$ and $\mathfrak{g}$ and $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ the decomposition in $+1$ and $-1$ eigenspaces of $\theta$ on $\mathfrak{g}$. If the domain $\mathbb{D}\subset\mathfrak{p}$ is the convex hull of a nonzero orbit of $K =G^\theta$ on $\mathfrak{p}$ then the tube formula of Theorem~\ref{abstract tube formula} does hold with intrinsic coefficients under the assumption that the dimension $n$ of $M\hookrightarrow\mathbb{R}^{n+m}$ (with $m=\dim\mathfrak{p}$) is strictly smaller than the second fundamental degree $d_2$ of the Weyl group $W$ of the pair $(\mathfrak{g},\theta)$. \end{corollary} \begin{proof} The Killing form $(\cdot,\cdot)$ on $\mathfrak{p}$ is positive definite and the fixed point group $K=G^{\theta}$ of $\theta$ on $G$ acts on $\mathfrak{p}$ as a subgroup of $\mathrm{SO}(\mathfrak{p})$. If $\mathfrak{a}\subset\mathfrak{p}$ is a maximal Abelian subspace then each orbit of $K$ on $\mathfrak{p}$ intersects $\mathfrak{a}$ in an orbit of the Weyl group $W=\mathrm{N}_K(\mathfrak{a})/\mathrm{Z}_K(\mathfrak{a})$ of the pair $(\mathfrak{g},\theta)$. Hence each invariant polynomial $p\in\mathbb{R}[\mathfrak{p}]^K$ for $K$ on $\mathfrak{p}$ restricts to a Weyl group invariant polynomial on $\mathfrak{a}$. It is a theorem of Chevalley (see Lemma $7$ in \cite{Harish-Chandra 1958}) that the restriction map \[ \mathbb{R}[\mathfrak{p}]^K\rightarrow \mathbb{R}[\mathfrak{a}]^W \] is an isomorphism of algebras. Since $W$ acts on $\mathfrak{a}$ as a finite reflection group the latter algebra is described by Theorem {\ref{Chevalley theorem}}. The possible finite reflection groups that can occur as such a Weyl group $W$ are those reflection groups, which can be defined over $\mathbb{Z}$. This means that $\mathrm{H}_3$ and $\mathrm{H}_4$ are excluded and only the dihedral types $\mathrm{I}_2(k)=\mathrm{A}_2,\mathrm{B}_2,\mathrm{G}_2$ for $k=3,4,6$ respectively are allowed. The text books \cite{Helgason 1978} and \cite{Helgason 1984} by Helgason give a thorough exposition of the theory. Using the convexity theorem of Kostant \cite{Kostant 1973} it is easy to see that the convex hull of an orbit of $K$ on $\mathfrak{p}$ intersects $\mathfrak{a}$ in the convex hull of an orbit of $W$ on $\mathfrak{a}$. \end{proof} For example, if $G$ is the complex Lie group of type $\mathrm{E}_8$ (and so $K$ is the compact Lie group of type $\mathrm{E}_8$ acting on $\mathfrak{p}=i\mathfrak{k}$) then we do find in this way examples of local submanifolds $M$ of Euclidean space of dimension $n \leq 7$ and of codimension $m=248$ for which the tube formula of Theorem~\ref{abstract tube formula} has intrinsic coefficients. Presumably this large codimension relative to the small dimension of $M$ allows for an abundance of room for isometric deformations for the embedding of $M$ in such a Euclidean space. 
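Note also that when $\dim\mathfrak{a}=1$ (the rank one case) the construction simply returns Weyl's spherical tubes: for instance, for $G=\mathrm{SO}_0(m,1)$ with $m\geq2$ the group $K=\mathrm{SO}_m(\mathbb{R})$ acts on $\mathfrak{p}\cong\mathbb{R}^m$ in the standard way, its nonzero orbits are round spheres with respect to the Killing form, and the convex hull of such an orbit is a round ball.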
\section{No-go results for diamond domains $\widehat{\mathbb{D}}^m$} \label{No go results for diamant domains} In this section we shall denote by $\widehat{\mathbb{D}}^m$ the convex hull of the subset \[ \{t_1^2+\ldots+t_{m-1}^2\leq1,t_m=0\}\sqcup\{(0,\ldots,0,\pm1)\} \] in $\mathbb{R}^m$, $m\geq 2$. Any multiple $a\widehat{\mathbb{D}}^m$ for $a>0$ will be called a diamond domain. The symmetry group $G_m$ of $\widehat{\mathbb{D}}^m$ is equal to $\mathrm{O}_{m-1}(\mathbb{R})\times\mathrm{O}_1(\mathbb{R})$ for $m\geq 3$ while for $m=2$ the symmetry group $G_2$ is equal to the dihedral group $W(\mathrm{B}_2)$ of order $8$. The essential point of Weyl's argument for the intrinsic nature of the volume formula for tubes is the computation of the integral \[ \int_{a\widehat{\mathbb{D}}^m}\det(\delta_i^j-\sum t_ph_{i}^{jp})\,dt \] as a polynomial in $a$, by first averaging over the symmetry group $G_m$ of the domain $\widehat{\mathbb{D}}^m$ in $\mathbb{R}^m$. The outcome should hopefully be a polynomial expression in Riemann curvature components $R_{ij}^{kl}$ as in Theorem~\ref{integrand average theorem}. We will work out two examples, one with $n=2$ and $m\geq3$ and the other with $n=4$ and $m=2$, where this does not work. \begin{example}\label{example n=2, m>2} Let us first consider the case of surfaces of codimension $m$ at least $3$. Since the symmetry group of $\widehat{\mathbb{D}}^m$ is then $G_m = \mathrm{O}_{m-1}(\mathbb{R}) \times \mathrm{O}_1(\mathbb{R})$ the invariants of degree $2$ in $\mathbb{R} [t_1,\ldots,t_m]$ are linear combinations of $R=(t_1^2+\ldots+t_{m-1}^2)/(m-1)$ and $S=t_m^2$. The above determinant for $n=2$ becomes \[ \det(\delta_i^j-\sum t_ph_{i}^{jp}) = 1-\sum t_p(h_{1}^{1p}+h_{2}^{2p})+\sum t_pt_q (h_{1}^{1p}h_{2}^{2q}-h_{1}^{2p}h_{2}^{1q}) \] and averaging over the symmetry group $G_m$ yields \[ 1+A(h_{i}^j)R(t)+B(h_{i}^j)S(t) \] with \[ A = \sum_{p=1}^{m-1}(h_{1}^{1p}h_{2}^{2p}-h_{1}^{2p}h_{2}^{1p}) \qquad B = (h_{1}^{1m}h_{2}^{2m}-h_{1}^{2m}h_{2}^{1m}) \] and $A+B= R_{12}^{12}$ intrinsic. Thus by the above, if the integrals of $R$ and $S$ over $\widehat{\mathbb{D}}^m$ agree, then the generalized tube volume is intrinsic as well. The integrals of $R(t)$ and $S(t)$ over $\widehat{\mathbb{D}}^m$ amount respectively to (put $r=\sqrt{R}$ and $s=\sqrt{S}$) \[ \frac{2\,\omega_{m-1}}{m-1}\int r^2r^{m-2}\,dr\,ds\quad \mathrm{and}\quad 2\,\omega_{m-1}\int s^2 r^{m-2}\,dr\,ds, \] integrated over the triangle $\{(r,s)\,;r,s\geq0,r+s\leq1\}$, and we will show that for $m\geq3$ these are distinct. Apart from the factor $2\,\omega_{m-1}$ the left integral becomes \[ \frac{1}{m-1}\int_0^1(1-r)r^m\,dr=\frac{1}{(m-1)(m+1)(m+2)} \] while the right integral equals \[ \int_0^1\tfrac13(1-r)^3r^{m-2}\,dr=\frac{2}{(m-1)m(m+1)(m+2)} \] and for the difference we find \[ \frac{m-2}{(m-1)m(m+1)(m+2)} \] which is nonzero for $m\geq3$, as claimed. Hence the tube volume formula for a general surface $M$ in $\mathbb{R}^{2+m}$ with diamond domain $\widehat{\mathbb{D}}^m$ is no longer intrinsic for $m\geq3$. For $m=2$ it still is intrinsic as should, because $G_2=W(\mathrm{B}_2)$ is orthogonal of degree $3$ (in fact, it is not only intrinsic for $n=2$ but also for $n=3$ since odd exponents vanish for the centrally symmetric diamond $\widehat{\mathbb{D}}^m$). \end{example} \begin{example}\label{example n=4, m=2} Let us next consider the case that $n=4$ and $m=2$. 
The symmetry group of the diamond domain $\widehat{\mathbb{D}}^2=\{(t_1,t_2) \, ; |t_1|+|t_2|\leq1\}$ is the dihedral group $W(\mathrm{B}_2)$ of order $8$ generated by the two reflections $s_1(t_1,t_2)=(-t_1,t_2)$ and $s_{2}(t_1,t_2)=(t_2,t_1)$. The invariant polynomials for this group $W(\mathrm{B}_2)$ are generated as an algebra by the quadratic invariant $P(t)=t_1^2+t_2^2$ and the quartic invariant $Q(t)=t_1^2t_2^2$. Hence any quartic invariant is a unique linear combination of $Q$ and $R(t)=t_1^4+t_2^4=P^2-2Q$. We would like to know if Weyl's averaging trick (over the dihedral group $W(\mathrm{B}_2)$ this time) remains valid for any pencil of second fundamental forms. In order to keep the calculation as simple as possible we look at the special case that $h_{i}^{jp}=0$ for $i\neq j$ and $p=1,2$. If we write $h_{i}^{i1}=a_i$ and $h_{i}^{i2}=b_i$ we get \[ \det(\delta_i^j-\sum t_p h_{i}^{jp})=\prod_{i=1}^4\;(1-t_1a_i-t_2b_i) \] and averaging over the dihedral group $W(\mathrm{B}_2)$ yields \[ 1+A(a,b)P(t)+B(a,b)Q(t)+C(a,b)R(t) \] with $A,B,C$ homogeneous polynomials of degree $2,4,4$ respectively. A direct calculation gives \begin{align*} A &= \tfrac12\sum_{i<j}\;(a_ia_j+b_ib_j) = \tfrac12 \sum_{i<j} R_{ij}^{ij}\\ B &= \sum_{i<j,k<l}\;a_ia_jb_kb_l = \sum_{i<j,k<l} R_{ij}^{ij} R_{kl}^{kl} - 3 (a_1a_2a_3a_4 + b_1b_2b_3b_4) \\ C &= \tfrac12(a_1a_2a_3a_4+b_1b_2b_3b_4) \end{align*} with $\{i,j,k,l\}=\{1,2,3,4\}$ in the sum for $B$ and $R_{ij}^{ij}=h_i^i\cdot h_j^j$ as before. Note that $A$ as well as $B+6C$ are intrinsic quantities. For the integrals of $Q$ and $R$ over $\widehat{\mathbb{D}}^2$ we find (put $r=|t_1|$ and $s=|t_2|$) \[ \int Q(t) \, dt_1 \, dt_2 = 4\int_0^1 \{\int_0^{1-r} r^2 s^2 \, ds\} \, dr = \tfrac{1}{45} \] and \[ \int R(t) \, dt_1 \, dt_2 = 4\int_0^1 \{\int_0^{1-r} (r^4 + s^4) \, ds\} \, dr = \tfrac{4}{15} \neq \tfrac{6}{45}. \] Hence for fourfolds in $\mathbb{R}^6$ with diamond domain $\widehat{\mathbb{D}}^2$ we see that the tube volume formula need no longer be intrinsic. \end{example} The conclusion therefore is that the tube formula for submanifolds $M$ in $\mathbb{R}^{n+m}$ of dimension $n$ with cross section the diamond $\widehat{\mathbb{D}}^m$ will in general no longer be intrinsic, unless we are in one of the cases of the following table. \begin{center} \begin{tabular}{|l|l|l|} \hline $m=\operatorname{codim} M$ & symmetry group of the diamond in $\mathbb{R}^{m}$ & $n=\dim M$ \\ \hline $1$ & $\mathrm{O}_1(\mathbb{R})$ & any \\ $2$ & $W(\mathrm{B}_2)$ & $\leq 3$ \\ any & $\mathrm{O}_{m-1}(\mathbb{R}) \times \mathrm{O}_{1}(\mathbb{R})$ & $1$ \\ \hline \end{tabular} \end{center} Our motivation for looking at diamond tubes in a Euclidean vector space came from the analogous causal tubes in a Lorentzian vector space, which are discussed in the next section. \section{Riemannian submanifolds of a Lorentzian vector space} \label{Riemannian submanifolds of a Lorentzian vector space} Let us suppose that $M$ is a compact connected $n$-dimensional Riemannian submanifold of an ambient Cartesian space $\mathbb{R}^{n+m}$, equipped with a nondegenerate but possibly indefinite scalar product denoted by a dot. Let $\mathbb{D}^m$ be a compact domain around $0$ in $\mathbb{R}^m$. 
Say we have a local parametrization around $M$ given by \[ x:U^n\times \mathbb{D}^m\rightarrow\mathbb{R}^{n+m},\quad (u,t)\mapsto x(u,t)=r(u)+\sum t^pn_p(u) \] with $u=(u^1,\ldots,u^n)\in U^n$, $t=(t^1,\ldots,t^m)\in \mathbb{D}^m$ while $n_1(u),\ldots,n_m(u)$ are vectors in $\mathbb{R}^{n+m}$ depending smoothly on $u\in U^n$ and \[ \partial_ir(u)\cdot n_p(u)=0,\quad n_p(u)\cdot n_q(u)=\eta_{pq} \] for all $u\in U^n$, all $i=1,\ldots,n$, all $p,q=1,\ldots,m$ and $\eta_{pq}$ a $m\times m$ diagonal matrix with entries $\pm1$ (so in particular constant, that is independent of $u\in U^n$). Observe that the choice of such an orthonormal frame for the normal bundle of $M$ in $\mathbb{R}^{n+m}$ is in principle only possible locally. Indeed if $0\in U^n$ then by linear algebra we can choose a basis $n_1(0),\ldots,n_m(0)$ for the orthogonal complement of the tangent vectors $\partial_1r(0),\ldots,\partial_nr(0)$ with $n_p(0)\cdot n_q(0)=\eta_{pq}$ and subsequently apply Gram--Schmidt to the vectors $\partial_1r(u),\ldots,\partial_nr(u),n_1(0),\ldots,n_m(0)$ for $u$ small. As in Section~\ref{The volume of tubes} we can write \[ \partial_ix=\sum_j(\delta_i^j-\sum t^pn_p\cdot h_i^j)\partial_jr+\ldots \] with second fundamental form normal vectors $h_i^j=\sum g^{jk}h_{ik}$ and the dots $\ldots$ stand for a linear combination of the normal fields $n_p$. Likewise writing $t_p=\sum \eta_{pq}t^q$ we arrive at the generalized tube volume formula \[ V_{U^n}(a)=\int_{U^n}\{\int_{a\mathbb{D}^m}\det(\delta_i^j- \sum t_ph_i^{jp})\,dt\}\sqrt{\det g_{ij}}\,du \] with $\mathbb{D}^m$ a compact domain around $0$ and $a>0$ sufficiently small. Hence $V_{M}(a)$ is a polynomial in $a$ of degree $m+n$ with $\mathrm{vol}(M)\mathrm{vol}(\mathbb{D}^m)a^m$ as lowest order term. The Gauss equations \[ R_{ij}^{kl}=h_i^k\cdot h_j^l-h_j^k\cdot h_i^l \] as derived in Section~\ref{The Gauss equations} remain valid for an indefinite scalar product. For Riemannian curves $M$ of dimension $n=1$ and a centrally symmetric domain $\mathbb{D}^m$ around $0$ in $\mathbb{R}^m$ we get \[ V_{M}(a)=\mathrm{length}(M)\mathrm{vol}(\mathbb{D}^m)a^m \] just like the original case of Hotelling~\cite{Hotelling 1939}. Also, if $\eta_{pq}=\delta_{pq}$ then we are essentially in the original setting of Weyl and his spherical tube formula and our variations hold without change. Let us suppose for the rest of this section that $M$ is a compact Riemannian submanifold of a Lorentzian vector space $\mathbb{R}^{n+m-1,1}$ with scalar product $\cdot$ of signature $(n+m-1,1)$ and thus $\eta_{pq}=\mathrm{diag}(1,\ldots,1,-1)$ in $\mathbb{R}^m$. If we denote by $\mathbb{J}=\{x\in \mathbb{R}^{n+m-1,1}\,;x\cdot x\leq0\}$ the causal future and past of the origin then for $e$ a unit timelike vector the domain $\widehat{\mathbb{D}}^{n+m}(e)=\{e+\mathbb{J}\}\cap\{-e+\mathbb{J}\}$ is called the \emph{causal diamond} around $0$ with unit timelike normal $e$. It is the locus traced out by all causal curves between $e$ and $-e$. Any two causal diamonds around $0$ can be transformed into each other by an element of the Lorentz group $\mathrm{O}_{n+m-1,1}(\mathbb{R})$, while the symmetry group of a causal diamond is isomorphic to $\mathrm{O}_{n+m-1}(\mathbb{R})\times\mathrm{O}_1(\mathbb{R})$. The set \[ \{r+n\,;r\in M,\,n\in N_rM\cap a\widehat{\mathbb{D}}^{n+m}(n_m(r))\} \] will be called the \emph{causal tube} with radius $a>0$ (sufficiently small) around $M$ relative to the unit timelike normal field $n_m$. 
Its volume is given by \[ V_M(a)=\int_M\{\int_{a\widehat{\mathbb{D}}^m}\det(\delta_i^j-\sum t_ph_i^{jp})\,dt\}\,ds \] with $\widehat{\mathbb{D}}^m$ the diamond domain in $\mathbb{R}^m$ in the notation of the previous section. In accordance with Weyl's tube formula, apart from the $\pm$ sign, we obtain the following version of the tube formula for Riemannian hypersurfaces. \begin{corollary} For a spacelike hypersurface $M$ of codimension $m=1$ in a Lorentzian vector space $\mathbb{R}^{n,1}$ the causal tube volume formula takes the form \[ V_{M}(a)=2\sum_{d=0}^n \frac{(-1)^{d/2}k_d(M)a^{1+d}} {3\cdot5\cdots(1+d)}\qquad (d\;\mathrm{even}). \] \end{corollary} Indeed if $h_{ij}$ is the scalar valued second fundamental form then $H_{ij}^{kl}=-h_i^kh_j^l$ and so $R_{ij}^{kl}$ in Theorem~\ref{integrand average theorem} also picks up a minus sign, that is $H_d$ and $k_d(M)=\int_M H_d\,ds$ pick up a factor $(-1)^{d/2}$. There is yet another case, where the causal tube formula has an intrinsic form, namely in case $M\hookrightarrow\mathbb{R}^{n+m-1}\hookrightarrow\mathbb{R}^{n+m-1,1}$. This can be checked easily using Weyl's tube formula in a straightforward way. The next example shows, however, that the positive result for diamond tubes for $\dim M = \operatorname{codim} M = 2$ of Section~\ref{No go results for diamant domains} cannot be extended to the Lorentzian setting. \begin{example}\label{example n=2, m=2} If we specialize to the case $n=m=2$ and $\eta_{pq}=\mathrm{diag}(1,-1)$ of a compact spacelike surface $M$ in Minkowski spacetime $\mathbb{R}^{3,1}$ then the integrand \[ \det(\delta_i^j-\sum t_ph_{i}^{jp}) = 1-\sum t_p(h_{1}^{1p}+h_{2}^{2p})+\sum t_pt_q (h_{1}^{1p}h_{2}^{2q}-h_{1}^{2p}h_{2}^{1q}) \] averages over the symmetry group $W(\mathrm{B}_2)$ of the square $\widehat{\mathbb{D}}^2$ as in Example~\ref{example n=2, m>2} to the expression \[ 1+(A(h_i^j)+B(h_i^j))(t_1^2+t_2^2)/2 \] with $A=h_1^{11}h_2^{21}-h_1^{21}h_2^{11}$ and $B=h_1^{12}h_2^{22}-h_1^{22}h_2^{12}$. Since \[ \int_{a\widehat{\mathbb{D}}^2}dt_1\,dt_2=2a^2, \quad\int_{a\widehat{\mathbb{D}}^2}t_1^2\,dt_1\,dt_2= \int_{a\widehat{\mathbb{D}}^2}t_2^2\,dt_1\,dt_2=a^4/3 \] we find \[ V_M(a)=\int_M\{2a^2+(A+B)a^4/3\}\,ds = \mathrm{area}(M)\,2a^2+\int_M(A+B)\,ds\,a^4/3 \] for the volume of the causal tube along $M$. On the other hand, the Gauss equation (for n=2 there is just a single one) in this particular case of spacelike surfaces in Minkowski spacetime becomes \[ R_{12}^{12}=h_1^1\cdot h_2^2-h_1^2\cdot h_2^1=A-B. \] Since $\int_M(A+B)\,ds$ enters in the tube volume formula while $\int_M(A-B)\,ds$ is the total Gauss curvature for $M$, the volume formula for causal tubes around surfaces need not be intrinsic. \end{example} The conclusion is that for spacelike submanifolds of Minkowski spacetime $\mathbb{R}^{3,1}$ the causal tube volume formula will in general no longer be intrinsic, except for the obvious cases of spacelike curves ($n=1$) or hypersurfaces ($m=1$). This question about the intrinsic nature of causal tube volume formulas was the starting point for our work. \section{Pappus type theorems}\label{Pappus type theorems} Let us denote the graded commutative algebra $\mathbb{R}[t_1,\ldots,t_m]$ by $P=\oplus\,P^d$. The subalgebra of invariants for $\mathrm{O}_m(\mathbb{R})$ is equal to $\mathbb{R}[t_1^2+\ldots+t_m^2]$ and is denoted $I=\oplus\,I^d$. The graded subspace \[ C=\{p\in P\,;\int_{\mathrm{O}_m(\mathbb{R})}\,g(p)\,d\mu(g)=0\}=\oplus\,C^d \] is the unique invariant complement of $I$ in $P$. 
Here $\mu$ is the normalized Haar measure on $\mathrm{O}_m(\mathbb{R})$. Hence $P=I\,\oplus\,C$ and clearly $C^d=P^d$ for $d$ odd while $C^d$ has codimension one in $P^d$ for $d$ even. \begin{definition} A compact domain $\mathbb{D}^m$ in $\mathbb{R}^m$ is called \emph{symmetric of degree $n$} if \[ \int_{\mathbb{D}^m}\,p(t)\,dt=0 \] for all polynomials $p\in C^1\oplus\ldots\oplus C^n$. \end{definition} If the compact domain $\mathbb{D}^m$ has a symmetry group $G_m$ that is orthogonal of degree $n$ (in the sense of our Definition~\ref{orthogonal of degree n definition}) then \[ \int_{\mathbb{D}^m}\,p(t)\,dt=\int_{\mathbb{D}^m}\,\langle p(t)\rangle_{G_m}\,dt= \int_{\mathbb{D}^m}\,\langle p(t)\rangle_{\mathrm{O}_m(\mathbb{R})}\,dt \] for all polynomials $p(t)$ of degree $\leq n$. In particular, if the symmetry group $G_m$ of $\mathbb{D}^m$ is orthogonal of degree $n$ then the domain $\mathbb{D}^m$ is necessarily symmetric of degree $n$. From the discussions in Section~\ref{The volume of tubes} and Section~\ref{Averaging the integrand} it follows that our Theorem~\ref{abstract tube formula} holds with the condition on the symmetry group $G_m$ of $\mathbb{D}^m$ being orthogonal of degree $n$ replaced by the condition on $\mathbb{D}^m$ being symmetric of degree $n$. This more general form of Theorem~\ref{abstract tube formula} was obtained as Theorem~4.4 in \cite{Domingo-Juan--Miquel 2004}. A compact domain $\mathbb{D}^m$ in $\mathbb{R}^m$ is symmetric of degree $1$ if and only if the center of mass of $\mathbb{D}^m$ lies at the origin. Hence the condition for $\mathbb{D}^m$ to be symmetric of degree $1$ is a good deal more general than the condition for the symmetry group $G_m$ to be orthogonal of degree $1$. If the manifold $M$ is a circle in $\mathbb{R}^3$ then the tube volume formula boils down to the ancient Pappus's centroid theorem. For this reason the higher dimensional tube volume formulas are sometimes also called Pappus type theorems. The next example shows that for a planar domain $\mathbb{D}^2$ and for all $n\geq 1$ the notion for $\mathbb{D}^2$ to be symmetric of degree $n$ is strictly weaker than the notion for the symmetry group $G_2$ of $\mathbb{D}^2$ being orthogonal of degree $n$. \begin{example} Consider in polar coordinates $t_1=r\cos\phi,t_2=r\sin\phi$ the planar domain $\mathbb{D}^2=\{(r,\phi)\,;0\leq r\leq a(\phi),\phi\in\mathbb{R}/2\pi\mathbb{Z}\}$ for some continuous function $a\colon\mathbb{R}/2\pi\mathbb{Z}\rightarrow(0,\infty)$. The space $C^d$ is spanned by the functions $r^d\cos(e\phi)$ and $r^d\sin(e\phi)$ with $1\leq e\leq d$ and $e\equiv d$ (mod $2$). The condition that $\mathbb{D}^2$ is symmetric of degree $n$ amounts to \[ \int_0^{2\pi}\int_0^{a(\phi)}\,r^{d+1}\,dr\cos(e\phi)\,d\phi= \int_0^{2\pi}\int_0^{a(\phi)}\,r^{d+1}\,dr\sin(e\phi)\,d\phi=0 \] or equivalently \[ \int_0^{2\pi}(a(\phi))^{d+2}\cos(e\phi)\,d\phi= \int_0^{2\pi}(a(\phi))^{d+2}\sin(e\phi)\,d\phi=0 \] for all $1\leq d\leq n$, $1\leq e\leq d$ and $e\equiv d$ (mod $2$). Clearly these conditions are satisfied if for some $k>n$ the function $a(\phi)$ is invariant under the cyclic group $C_k$ of order $k$ acting on the circle $\mathbb{R}/2\pi\mathbb{Z}$ by rotations. Indeed, in that case the Fourier coefficients of all functions $a(\phi)^{d+2}$ vanish for modes not contained in $k\mathbb{Z}$. This is in accordance with our Theorem~\ref{abstract tube formula} since the symmetry group $C_k$ of this domain $\mathbb{D}^2$ is orthogonal of degree $k>n$. 
However, if for a fixed $n\geq1$ one chooses integers $p>n$ and $q>(n+3)p$ then the function $a(\phi)=b(\phi)(2+\cos(p\phi))$ with $b>0$ invariant under $C_q$ has the property that the Fourier coefficients of all functions $a(\phi)^{d+2}$ for $1\leq d\leq n$ vanish for modes $\pm1,\dots,\pm n$. Hence this domain $\mathbb{D}^2$ is certainly symmetric of degree $n$. On the other hand, if we pick $p$ and $q$ relatively prime then the symmetry group $G_2$ of $\mathbb{D}^2$ will be trivial in case $b(\phi)$ is chosen sufficiently general (so that the symmetry group for $b(\phi)$ is not larger than $C_q$), and $G_2=\{1\}$ is not orthogonal of any degree $n\geq 1$. \end{example} The examples obtained in Proposition~4.3 of \cite{Domingo-Juan--Miquel 2004} of compact domains $\mathbb{D}^m$ in $\mathbb{R}^m$ that are symmetric of degree $n$ are for $n\geq2$ domains $\mathbb{D}^2$ with dihedral symmetry and for $n=2,3$ domains $\mathbb{D}^m$ with hyperoctahedral symmetry, besides of course the unit ball $\mathbb{B}^m$ for all $n$. Hence apart from giving a pedestrian exposition of Weyl's tube volume formula and also a discussion of tube volume formulas for Riemannian submanifolds of a Lorentzian vector space our paper gives a more complete and transparent discussion in Section~\ref{Polyhedral examples} of examples based on symmetry of cross sections $\mathbb{D}^m$ for which the intrinsic tube volume formula holds. \end{document}
\begin{document} \title{On ring-like event systems in quantum logic} \begin{abstract} A class of ring-like event systems (RLSEs) is studied that generalizes Boolean rings. Quantum logics represented by orthomodular lattices are characterized within this class and the correspondence between Boolean algebras and Boolean rings is enlarged to orthomodular lattices. The structure of RLSEs and of various subclasses is analysed and, in particular, classical logics are identified. Moreover, sets of numerical events within different contexts of physical problems are described. A numerical event is defined as a function $p$ from a set $S$ of states of a physical system to $[0,1]$ such that $p(s)$ is the probability of the occurrence of an event when the system is in state $s\in S$. In particular, the question is answered whether a given (small) set of numerical events justifies the assumption that one deals with a classical physical system or with a quantum mechanical one. \end{abstract} {\bf AMS Subject Classification:} 06C15, 03G12, 81P16 {\bf Keywords:} Quantum logic, orthomodular lattice, ring-like structure of events, numerical event \section{Introduction} In quantum mechanics so-called quantum logics, also referred to as event systems, are an essential tool for theoretical reasoning and practical computations. The most common event systems are orthomodular lattices (and generalizations of them), in particular, the lattices of closed subspaces of a separable Hilbert space, known as Hilbert logics. Orthomodular lattices can be viewed as a generalization of Boolean algebras, which are characteristic of event systems in classical physics. So the question arises whether the logic behind an experiment might be a Boolean algebra, which, being in one-to-one correspondence with a Boolean ring, can also be understood as a Boolean ring. As is common in electrical engineering, calculations within rings are sometimes preferred to calculations within lattices. Then the question is whether a logic is a Boolean ring, which entails the problem of finding an appropriate generalization of Boolean rings corresponding to orthomodular lattices, a problem we will answer in this paper. To this end we first recall the definition of ring-like structures of events (RLSEs), which we will later reformulate for a wide class of RLSEs $\mathbf R=(R,+,\cdot,0,1)$ of characteristic $2$ (which means that they satisfy the identity $x+x\approx0$), showing that these structures can be obtained simply by weakening customary axioms of Boolean rings, or also by other, more suggestive laws. \begin{definition}\label{def1} {\rm(}cf.\ {\rm\cite{DL21})} A {\em ring-like structure of events (RLSE)} is an algebra $(R,+,\cdot,0,1)$ of type $(2,2,0,0)$ such that $(R,\cdot,0,1)$ is a bounded meet-semilattice and the following identities are satisfied: \begin{enumerate}[{\rm(R1)}] \item $x+y\approx y+x$, \item $(xy+1)(x+1)+1\approx x$, \item $\big((xy+1)x+1\big)x\approx xy$, \item $xy+(x+1)\approx(xy+1)x+1$. \end{enumerate} \end{definition} As one can easily verify, all Boolean rings are RLSEs. \begin{definition} An {\rm RLSE} $(R,+,\cdot,0,1)$ is called {\em specific} if it satisfies the identity \begin{enumerate} \item[{\rm(R5)}] $x+y\approx x(y+1)+(x+1)y$. \end{enumerate} \end{definition} As we will note below, specific RLSEs are of characteristic $2$ (which, for general RLSEs, need not be the case). Moreover, we will show that there is a one-to-one correspondence between orthomodular lattices and specific RLSEs.
This means that the class of all RLSEs is larger than the class of specific RLSEs. In this paper we will consider various classes of RLSEs, answer the question when an RLSE is a Boolean ring, study structural properties of RLSEs and link RLSEs to so-called {\em algebras of numerical events} (which are sets of probabilities that can be gained by measurements -- cf.\ \cite{BM91} and \cite{MT}). Next we will apply results obtained for RLSEs to algebras of numerical events. We will show that under certain conditions the operation $+$ of RLSEs will coincide with the summation of real functions and the order of the elements of an RLSE with the order of functions. Further, we will give answers to the question, whether a (small) set of numerical events obtained by measurements will justify that one deals with a classical physical system or not. Finally we will weaken the concept of RLSEs by omitting axiom (R3) and associate these structures to sets of numerical events endowed with operations which are relevant for experiments. To some extent our research is related to the one-to-one correspondence of arbitrary bounded lattices with an antitone involution and so-called pGBQRs (partial generalized Boolean quasirings). -- For further results on pGBQRs cf.\ \cite{BDM} and \cite{DDL10b} -- \cite{DLM01}. \section{Elementary properties of ring-like structures of \\ events} Dealing with orthomodular lattices we will denote the supremum of two of its elements $x,y$ by $x\vee y$, their infimum by $x\wedge y$, the complement of an element $x$ by $x'$ and write $x\perp y$ if $x$ and $y$ are orthogonal, i.e.\ if $x\wedge y'=x$. Further we agree to define for $x,y$ of an RLSE $R$, $x\leq y$ if and only if $xy=x$ and to call $x$ and $y$ orthogonal to each other (as ring-like elements), if $x(1+y)=x$. For every algebra $\mathbf R=(R,+,\cdot,0,1)$ of type $(2,2,0,0)$ let $\mathbb L(\mathbf R)$ denote the algebra $(R,\vee,\wedge,{}',0,1)$ defined by \begin{align*} x\vee y & :=(x+1)(y+1)+1, \\ x\wedge y & :=xy, \\ x' & :=x+1 \end{align*} for all $x,y\in R$. As already shown in \cite{DL21} (cf.\ Theorem~2.1) if $\mathbf R$ is an RLSE then $\mathbb L(\mathbf R)$ is an orthomodular lattice. We will likewise use the operations of this lattice within RLSEs. Obviously, the lattice-theoretic orthogonality relation and the one defined above for RLSEs coincide. -- We notice that for RLSEs the orthomodular lattice $\mathbb L(\mathbf R)$ can be a Boolean algebra without $\mathbf R$ being a Boolean ring (cf.\ \cite{DL21}), but this will not be the case with specific RLSEs, as we will show below. In the following we will make use of the following theorem. \begin{theorem}\label{th0} {\rm(}cf.\ {\rm\cite{DL21})} Let $\mathbf R=(R,+,\cdot,0,1)$ be an algebra of type $(2,2,0,0)$. Then $\mathbf R$ is an {\rm RLSE} if and only if $\mathbb L(\mathbf R)=(R,\vee,\wedge,{}',0,1)$ is an orthomodular lattice and $+$ satisfies the following conditions for all $x,y\in R$: \begin{enumerate}[{\rm(a)}] \item $x+y=y+x$, \item $x+1=x'$, \item $x+y=x\vee y$ if $x\leq y'$. 
\end{enumerate} \end{theorem} Further, by means of the lattice structure of RLSEs one can easily see: \begin{proposition}\label{prop1} {\rm(}cf.\ {\rm\cite{DL21})} An {\rm RLSE} $\mathbf R=(R,+,\cdot,0,1)$ has the following properties for all $x,y\in R$: \begin{enumerate}[{\rm(i)}] \item $(x+1)+1=x$, \item $x(x+1)=0$, and as a consequence $1+1=0$, \item $x+0=x$, \item $x+(x+1)=1$, \item $x\le y$ if and only if $y+1\le x+1$, \item $x\perp y$ implies $(x+y)+1=(x+1)(y+1)$, \item $x+y=x\vee y$ if $x\leq y'$, \item $x+x=0$ if $\mathbf R$ is specific. \end{enumerate} \end{proposition} Recalling that two elements $x,y$ of an ortholattice are said to {\em commute} (abbreviated by $x\mathrel{\C}y$) if $(x\wedge y)\vee (x\wedge y')= x$ and that $c(x,y):=(x\wedge y)\vee(x\wedge y')\vee(x'\wedge y)\vee(x'\wedge y')$ is called the {\em commutator} of $x$ and $y$, we define analogous concepts for RLSEs $(R,+,\cdot,0,1)$: We say that the elements $x,y$ of $R$ {\em commute} (also indicated by $x\mathrel{\C}y$) if $xy+x(y+1)=x$ and we call the element $c(x,y):=\big(xy+x(y+1)\big)+\big((x+1)y+(x+1)(y+1)\big)$ the {\em commutator} of $x$ and $y$. That these definitions are justified is asserted by the following proposition. \begin{proposition}\label{prop2} Let $\mathbf R=(R,+,\cdot,0,1)$ be an {\rm RLSE} and $a,b\in R$. Then the following hold: \begin{enumerate} \item[\rm(ix)] $a\mathrel{\C}b$ in $\mathbf R$ if and only if $a\mathrel{\C}b$ in $\mathbb L(\mathbf R)$, \item[\rm(x)] The commutator of $a$ and $b$ in $\mathbf R$ coincides with the commutator of $a$ and $b$ in $\mathbb L(\mathbf R)$. \end{enumerate} \end{proposition} \begin{proof} \ \begin{enumerate} \item[(ix)] Since $a\wedge b\perp a\wedge b'$ we have $(a\wedge b)\vee(a\wedge b')=(a\wedge b)+(a\wedge b')=ab+a(b+1)$. Now $a\mathrel{\C}b$ in $\mathbb L(\mathbf R)$ if and only if $(a\wedge b)\vee(a\wedge b')=a$. \item[(x)] From (vii) of Proposition~\ref{prop1} we know that $(a\wedge b)\vee(a\wedge b')=ab+a(b+1)$. Replacing $a$ by $a'$ we obtain $(a'\wedge b)\vee(a'\wedge b')=(a+1)b+(a+1)(b+1)$. Since $(a\wedge b)\vee(a\wedge b')\perp(a'\wedge b)\vee(a'\wedge b')$ we then have \begin{align*} & (a\wedge b)\vee(a\wedge b')\vee(a'\wedge b)\vee(a'\wedge b')= \\ & =\big((a\wedge b)\vee(a\wedge b')\big)\vee\big((a'\wedge b)\vee(a'\wedge b')\big)= \\ & =\big((a\wedge b)\vee(a\wedge b')\big)+\big((a'\wedge b)\vee(a'\wedge b')\big)= \\ & =\big(ab+a(b+1)\big)+\big((a+1)b+(a+1)(b+1)\big). \end{align*} The last expression is exactly the commutator of $a$ and $b$ in $\mathbf R$. \end{enumerate} \end{proof} For every algebra $\mathbf L=(L,\vee,\wedge,{}',0,1)$ of type $(2,2,1,0,0)$ let $\mathbb R(\mathbf L)$ denote the algebra $(L,+,\cdot,0,1)$ of type $(2,2,0,0)$ defined by \begin{align*} x+y & :=(x\wedge y')\vee(x'\wedge y), \\ xy & :=x\wedge y \end{align*} for all $x,y\in L$. \begin{theorem}\label{th2} Let $\mathbf R$ be a specific {\rm RLSE} and $\mathbf L$ an orthomodular lattice. Then the following hold: \begin{enumerate}[{\rm(i)}] \item $\mathbb L(\mathbf R)$ is an orthomodular lattice, \item $\mathbb R(\mathbf L)$ is a specific {\rm RLSE}, \item $\mathbb R\big(\mathbb L(\mathbf R)\big)=\mathbf R$, \item $\mathbb L\big(\mathbb R(\mathbf L)\big)=\mathbf L$. \end{enumerate} \end{theorem} \begin{proof} Let \begin{align*} \mathbf R & =(R,+,\cdot,0,1), \\ \mathbb L(\mathbf R) & =(R,\vee,\wedge,{}',0,1), \\ \mathbb R\big(\mathbb L(\mathbf R)\big) & =(R,\oplus,\odot,0,1), \\ \mathbf L & =(L,\vee,\wedge,{}',0,1), \\ \mathbb R(\mathbf L) & =(L,+,\cdot,0,1), \\ \mathbb L\big(\mathbb R(\mathbf L)\big) & =(L,\cup,\cap,{}^*,0,1).
\end{align*} \begin{enumerate} \item[(i)] follows from Theorem~\ref{th0}. \item[(iii)] Using Proposition~\ref{prop1} (vii) we get \begin{align*} x\oplus y & \approx(x\wedge y')\vee(x'\wedge y)\approx x(y+1)+(x+1)y\approx x+y, \\ x\odot y & \approx x\wedge y\approx xy. \end{align*} \item[(iv)] We have \begin{align*} x+1 & \approx(x\wedge1')\vee(x'\wedge1)\approx x', \\ x\cup y & \approx(x+1)(y+1)+1\approx(x'\wedge y')'\approx x\vee y, \\ x\cap y & \approx xy\approx x\wedge y, \\ x^* & \approx x+1\approx x'. \end{align*} \item[(ii)] follows from Theorem~\ref{th0} and (iv) since for all $x,y\in L$\begin{enumerate}[(a)] \item $x+y=(x\wedge y')\vee(x'\wedge y)=(y\wedge x')\vee(y'\wedge x)=y+x$, \item $x+1=(x\wedge1')\vee(x'\wedge1)=x'$, \item $x+y=(x\wedge y')\vee(x'\wedge y)=x\vee y$ if $x\leq y'$. \end{enumerate} \end{enumerate} \end{proof} \begin{corollary} For fixed base set $A$, the mappings $\mathbb L$ and $\mathbb R$ are mutually inverse bijections between the set of all specific {\rm RLSEs} over $A$ and the set of all orthomodular lattices over $A$. \end{corollary} \begin{corollary}\label{cor1} For a specific {\rm RLSE} $\mathbf R$ the associated orthomodular lattice $\mathbb L(\mathbf R)$ is a Boolean algebra if and only if $\mathbf R$ is a Boolean ring. \end{corollary} \begin{corollary}\label{cor2} For a specific {\rm RLSE} $(R,+,\cdot,0,1)$ the condition $x(y+1)=xy+x$ for some $x,y\in R$ is equivalent to $x\mathrel{\C}y$. \end{corollary} \begin{proof} The equation $x(y+1)=xy+x$ is equivalent to $x\wedge(x'\vee y')=x\wedge y'$ which according to results in \cite K is equivalent to $x\mathrel{\C}y'$ and hence to $x\mathrel{\C}y$. \end{proof} \begin{corollary}\label{cor3} A specific {\rm RLSE} $(R,+,\cdot,0,1)$ is a Boolean ring if and only if it satisfies the identity $x(y+1)\approx xy+x$. \end{corollary} \begin{proof} This follows from Corollary~\ref{cor2} and from the fact that an orthomodular lattice $(L,\vee,\wedge,{}',0,1)$ is a Boolean algebra if and only if $x\mathrel{\C}y$ for all $x,y\in L$ (cf.\ \cite K). \end{proof} \section{Structure theory of RLSEs} \begin{definition}\label{def2} An {\rm RLSE} $(R,+,\cdot,0,1)$ is called {\em weakly distributive} if it satisfies the identity \begin{enumerate} \item[{\rm(R6)}] $(xy+1)x\approx xy+x$. \end{enumerate} \end{definition} Obviously, any specific RLSE $(R,+,\cdot,0,1)$ is weakly distributive, because according to Proposition~\ref{prop1} (ii) and (R5) we have \[ (xy+1)x\approx xy(x+1)+(xy+1)x\approx xy+x. \] Moreover, any weakly distributive RLSE is of characteristic $2$ since \[ x+x\approx x1+x\approx(x1+1)x\approx(x+1)x\approx0 \] according to Proposition~\ref{prop1} (ii). \begin{example} The specific {\rm RLSE} corresponding to the orthomodular lattice $\mathbf{MO}_2$ is weak\-ly distributive. \end{example} According to Corollary~\ref{cor1} this example shows that in general a specific RLSE $\mathbf R$ is not a Boolean ring. The next theorem explains how the initially introduced axioms for RLSEs can be rephrased by weakening the customary axioms of associativity and distributivity known from Boolean rings in case of weakly distributive RLSEs. \begin{theorem}\label{th4} Let $\mathbf R=(R,+,\cdot,0,1)$ be an algebra of type $(2,2,0,0)$ such that $(R,\cdot,0,1)$ is a bounded meet-semilattice. 
Then the following are equivalent: \begin{enumerate}[{\rm(i)}] \item $\mathbf R$ is a weakly distributive {\rm RLSE}, \item $\mathbf R$ satisfies the following identities: \begin{enumerate}[{\rm(W1)}] \item $0+1\approx 1$, \item $x+y\approx y+x$, \item $(xy+x)+1\approx xy+(x+1)$, \item $(xy+x)+x\approx xy+(x+x)$, \item $(xy+1)x\approx xy+x$, \item $(xy+1)(x+1)\approx xy(x+1)+(x+1)$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} $\text{}$ \\ (i) $\Rightarrow$ (ii): \begin{enumerate}[(W1)] \item follows from Proposition~\ref{prop1} (iii) and (R1). \item equals (R1). \item We have \[ (xy+x)+1\approx(xy+1)x+1\approx xy+(x+1) \] according to (R6) and (R4). \item We find \[ (xy+x)+x\approx(xy+1)x+x\approx\big((xy+1)x+1\big)x\approx xy\approx xy+0\approx xy+(x+x) \] according to (R6), (R3), Proposition~\ref{prop1} (iii) and the fact that every weakly distributive RLSE is of characteristic $2$. \item equals (R6). \item We notice \[ (xy+1)(x+1)\approx\big((xy+1)(x+1)+1\big)+1\approx x+1\approx0+(x+1)\approx xy(x+1)+(x+1) \] according to Proposition~\ref{prop1} (i), (R2), Proposition~\ref{prop1} (iii), (R1) and Proposition~\ref{prop1} (ii). \end{enumerate} (ii) $\Rightarrow$ (i): \\ Putting $y=0$ in (W5) and using (W1) and (W2) yields $x+0\approx0+x\approx x$. \\ Putting $y=0$ in (W4) yields $x+x\approx0$. \\ Setting $y=1$ in (W5) we get $x(x+1)\approx0$. \\ Setting $x=1$ in (W4) we obtain $(y+1)+1\approx y$. \\ In the sequel we often use these identities without mentioning them. \begin{enumerate} \item[(R1)] equals (W2). \item[(R2)] We have \[ (xy+1)(x+1)+1\approx\big(xy(x+1)+(x+1)\big)+1\approx\big(0+(x+1)\big)+1\approx(x+1)+1\approx x \] due to (W6). \item[(R3)] We get \[ \big((xy+1)x+1\big)x\approx(xy+1)x+x\approx(xy+x)+x\approx xy \] according to (W5) and (W4). \item[(R4)] We see that \[ xy+(x+1)\approx(xy+x)+1\approx(xy+1)x+1 \] accordingly to (W3) and (W5). \item[(R6)] equals (W5). \end{enumerate} \end{proof} Recalling that for an RLSE $\mathbf R=(R,+,\cdot,0,1)$ $a\leq b$ for $a,b\in R$ means that $ab=a$, a notion which coincides with $a\leq b$ for the associated lattice $\mathbb L(\mathbf R)$, we have $\{(xy,x)\mid x,y\in R\}=\{(x,y)\in R^2\mid x\leq y\}$ and we can now rephrase Theorem~\ref{th4} as follows: \begin{theorem}\label{th5} Let $\mathbf R=(R,+,\cdot,0,1)$ be an algebra of type $(2,2,0,0)$ such that $(R,\cdot,0,1)$ is a bounded meet-semilattice. Then the following are equivalent: \begin{enumerate}[{\rm(i)}] \item $\mathbf R$ is a weakly distributive {\rm RLSE}, \item $\mathbf R$ satisfies the following identities and conditions: \begin{enumerate}[{\rm(1)}] \item $0+1\approx 1$, \item $x+y\approx y+x$, \item if $x\leq y$ then $(x+y)+1=x+(y+1)$, \item if $x\leq y$ then $(x+y)+y=x+(y+y)$, \item if $x\leq y$ then $(x+1)y=x+y$, \item if $x\leq y$ then $(x+1)(y+1)=x(y+1)+(y+1)$. \end{enumerate} \end{enumerate} \end{theorem} The identities (W3) and (W4) in Theorem~\ref{th4} are special cases of associativity. A further version of associativity is the following. \begin{definition} {\rm(}cf.\ {\rm\cite{DL21})} An {\rm RLSE} $(R,+,\cdot,0,1)$ is called {\em weakly associative} if it satisfies the identity \begin{enumerate} \item[{\rm(R7)}] $(x+y)+1\approx x+(y+1)$. \end{enumerate} \end{definition} Of course, every Boolean ring is weakly associative. The converse does not hold. 
The RLSE $\mathbf R$ of characteristic $2$ with $\mathbb L(\mathbf R)=\mathbf{MO}_2$, $a+b=a'+b'=c$ and $a+b'=a'+b=c'$ for $a\neq b$, $0<a,b<1$ and an arbitrary $c\in{\rm MO}_2$ is weakly associative, but not a Boolean ring. As shown in \cite{DL21} a weakly associative specific RLSE is a Boolean ring. We now note that a weakly associative RLSE is weakly distributive (and hence of characteristic $2$) since \[ (xy+1)x\approx\big((xy+1)x+1\big)+1\approx\big(xy+(x+1)\big)+1\approx\big((xy+x)+1\big)+1\approx xy+x \] according to Proposition~\ref{prop1} (i), (R4) and weak associativity. The converse does not hold. The specific RLSE corresponding to the orthomodular lattice $\mathbf{MO}_2$ is weakly distributive, but not weakly associative since for two incomparable elements $a$ and $b$ \begin{align*} (a+b)+1 & =\big((a\wedge b')\vee(a'\wedge b)\big)'=(0\vee0)'=0'=1\neq0=0\vee0=(a\wedge b)\vee(a'\wedge b')= \\ & =a+b'=a+(b+1). \end{align*} We close this section with a purely algebraic remark about the structure of RLSEs. Let $\mathbf A=(A,F)$ be an algebra. Then by $\Con\mathbf A$ we denote the set of all congruences on $\mathbf A$ and by $\BCon\mathbf A=(\Con\mathbf A,\subseteq)$ the congruence lattice of $\mathbf A$. The algebra $\mathbf A$ is called \begin{itemize} \item {\em congruence permutable} if $\Theta\circ\Phi=\Phi\circ\Theta$ for all $\Theta,\Phi\in\Con\mathbf A$, \item {\em congruence distributive} if $\BCon\mathbf A$ is distributive, \item {\em arithmetical} if it is both congruence permutable and congruence distributive, \item {\em congruence regular} if for all $a\in A$ and $\Theta,\Phi\in\Con\mathbf A$, $[a]\Theta=[a]\Phi$ implies $\Theta=\Phi$, \item {\em congruence uniform} if for every $\Theta\in\Con\mathbf A$ all classes of $\Theta$ have the same cardinality. \end{itemize} \begin{remark} {\rm RLSEs} are arithmetical, congruence regular and congruence uniform. \end{remark} \begin{proof} Let $\mathbf R$ be an RLSE. Since the fundamental operations of $\mathbb L(\mathbf R)$ are terms in $\mathbf R$ we have $\Con\mathbf R\subseteq\Con\mathbb L(\mathbf R)$. Now the theorem follows from the fact (see e.g.\ \cite{CEL}) that orthomodular lattices are arithmetical, congruence regular and congruence uniform. \end{proof} \section{Algebras of numerical events} Let $S$ be a set of states of a physical system and $p(s)$ the probability of the occurrence of an event when the system is in state $s\in S$. The function $p$ from $S$ to $[0,1]$ is called a {\em numerical event}, or more precisely, an {\em S-probability} (cf.\ \cite{BM91} and \cite{BM93}). Let $P$ be a set of S-probabilities including the constant functions $0$ and $1$. We denote the order of real functions by $\leq$, write $p':=1-p$ for the counter probability of $p\in P$ and $p\perp q$ if $p$ and $q$ are orthogonal in $P$, i.e.\ $p\leq q'$. If the infimum or supremum of $p,q\in P$ exists in $P$, we denote this by $p\wedge q$ and $p\vee q$, respectively. Finally we agree to write $p+q$, $p-q$ and $pq$ for the sum, difference and product of functions $p,q\in P$. Not to mix up the sum and product of functions with the sum and product within RLSEs, we will use with RLSEs $\oplus$ and $\odot$ instead of $+$ and $\cdot$, respectively. \begin{definition}{\rm(}cf.\ {\rm\cite{BM91})}\label{def3} A set $P$ of S-probabilities is called an {\em algebra of S-probabilities} if \begin{enumerate}[{\rm(S1)}] \item $0,1\in P$, \item $p'\in P$ for every $p\in P$, \item if $p\perp q\perp r\perp p$ for $p,q,r\in P$ then $p+q+r\in P$. 
\end{enumerate} \end{definition} Putting $r=0$ in axiom (S3) one obtains that $p\perp q$ implies $p+q\in P$ in which case one can show that $p+q=p\vee q$ (in respect to the order $\leq$ of the functions of $P$). In general, $(P,\vee,\wedge,{}',0,1)$ is an orthomodular poset in respect to the partial order of functions, but from now on we will assume with good cause that $P$ is a lattice. That an algebra of S-probabilities is a lattice and hence an orthomodular lattice, is a typical feature of many quantum logics. In particular, every Hilbert-space logic can be considered as a lattice-ordered algebra of S-probabilities (cf.\ \cite{BM93}), and in the important case that $|S|=2$ every algebra of S-probabilities is a lattice (cf.\ \cite{DDL10a}). Of course, also all classical logics whose order $\leq$ correspond to Boolean algebras (cf.\ \cite{MT})can be understood as lattice-ordered algebras of numerical events. If measurements are available in the context of a set $P_n$ of numerical events it is often crucial to get to know whether one deals with a classical situation or a quantum-mechanical one which means that one has to decide whether $P_n$ can be embedded into an algebra of S-probabilities $P$ in such a way that the elements of $P_n$ lie within a Boolean subalgebra of $P$. If this is the case, $P_n$ is called {\em Boolean embeddable}, or for short, only {\em embeddable} (cf.\ \cite{DLM20}). Let $P$ be a lattice-ordered algebra of S-probabilities and \begin{align*} p\oplus q & :=(p\wedge q')\vee(p'\wedge q), \\ p\odot q & :=p\wedge q \end{align*} for all $p,q\in P$. Then, as shown in \cite{DL21}, $\mathbf R=(P,\oplus,\odot,0,1)$ is an RLSE with $\mathbb L(\mathbf R)=P$. We call $\mathbf R$ the {\em {\rm RLSE} associated to P}. $\mathbf R$ has characteristic $2$ and is weakly distributive, because $\mathbf R$ obviously satisfies identity (R5). If a set $P_n$ of numerical events is Boolean embeddable into $P$ we will also say that it is Boolean embeddable into $\mathbf R$. Next we express $\oplus$, $\odot$ and $\leq$ of RLSEs associated to algebras of S-probabilities by the sum, difference and $\leq$ of real functions. \begin{proposition}\label{prop3} Let $\mathbf R$ be the RLSE associated to a lattice-ordered algebra of S-prob\-a\-bil\-i\-ties $P$ and $p,q\in P$. Then \begin{enumerate}[{\rm(i)}] \item $p\oplus q=p\odot(1-q)+(1-p)\odot q$, \item $p\oplus q=q-p$ if $p\leq q$, \item $p\oplus1=1-p$, \item $p\oplus q=p+q$ if $p\perp q$. \end{enumerate} \end{proposition} \begin{proof} \ \begin{enumerate}[(i)] \item holds since the operations $\odot$ and $\wedge$ coincide, $p'=1-p$ and $(p\odot q')\perp(p'\odot q)$. \item is a consequence of Theorem \ref{th5} and Proposition 2.1 in \cite{DDL10a}. \item follows from (ii). \item results from Proposition~\ref{prop1} (vii). \end{enumerate} \end{proof} When we say that a set $P_n$ of S-probabilities is Boolean embeddable into an RLSE $\mathbf R=(R,\oplus,\odot,0,1)$ we assume that there exists an (arbitrary) lattice-ordered algebra $P$ of S-probabilities such that $P=\mathbb L(\mathbf R)$. If the elements of $P_n$ can only have two values, namely $0$ and $1$, then we also assume this for the elements of $P$. Such an algebra of S-probabilities then is a so-called concrete logic, that is a quantum logic which can be represented by sets. \begin{theorem}\label{th6} Let $P_2=\{p,q\}$. Then the following holds: \begin{enumerate}[{\rm(i)}] \item $P_2$ is Boolean embeddable if and only if $p\odot(1-q)=p-p\odot q$. 
\item If $p$ and $q$ are two-valued then $P_2$ is Boolean embeddable if and only if $p\odot q=pq$. \end{enumerate} \end{theorem} \begin{proof} \ \begin{enumerate}[(i)] \item $p$, $q$ are Boolean embeddable into an orthomodular lattice if and only if $p\mathrel{\C}q$. According to Corollary~\ref{cor2} this is equivalent to $p\odot(q\oplus1)=p\odot q\oplus p$ which by Proposition~\ref{prop3} means $p\odot(1-q)=p-p\odot q$. \item We assume that $p$ and $q$ can only have the values $0$ and $1$. For orthomodular lattices $p\mathrel{\C}q$ is equivalent to $p\mathrel{\C}q'$, hence by what we have already proved $p$ and $q$ are Boolean embeddable if and only if $p\odot q=p-p\odot(1-q)$. Within a Boolean subalgebra $p(s)\wedge q(s)=\min\big(p(s),q(s)\big)=p(s)q(s)$ with $\min$ short for minimum. Hence $pq$ must be the element $p\odot q$ of $\mathbf R$. Conversely, if $pq=p\odot q$ then $p\odot q\oplus p=p-p\odot q=p-pq=p(1-q)=p\odot(1-q)$ since $p(1-q)$ is an element of $\mathbf R$ that coincides with $p\wedge(1-q)$. \end{enumerate} \end{proof} Let $A$ be a finite subset of an RLSE $\mathbf R$. We denote by $\prod_{\bf R}A$ the product in $\mathbf R$ of all elements of $A$ and by $\bigwedge A$ the infimum of these elements in $\mathbb L(\mathbf R)$. Moreover, we will denote the product within the reals of all functions belonging to $A$ by $\prod A$ and the set-theoretic union of $A$ and $B$ by $A\cup B$. As proven in \cite{DL14}, for $n>1$ an $n$-element subset $T$ of an orthomodular poset is Boolean embeddable if and only if $\bigwedge A$ and $\bigwedge B$ commute for every $k\in\{1,\ldots,n-1\}$ and all $k$-element subsets $A$ and $B$ of $T$. Taking this into account one can derive from Theorem~\ref{th6} a rough procedure to find out whether a set $P_n=\{p_1,\ldots,p_n\}$ of S-probabilities is Boolean embeddable, namely: \\ For $k=1$ to $n-1$, check for all $k$-element subsets $A$ and $B$ of $P_n$ whether $\prod_{\bf R}A\odot(1-\prod_{\bf R}B)=\prod_{\bf R}A-\prod_{\bf R}A\odot\prod_{\bf R}B$, or rather whether $\prod_{\bf R}A\odot\prod_{\bf R}B=\prod(A\cup B)$, if all S-probabilities are two-valued. We conclude this section by weakening two of the concepts introduced above. \begin{definition} Omitting axioms {\rm(R3)} and {\rm(R4)} in the definition of an {\rm RLSE} we will call the structure arising this way a {\em near-RLSE}, and if a {\em near-RLSE} satisfies axiom {\rm(R5)} we will call it {\em specific}. Moreover, substituting axiom {\rm(S3)} in the definition of algebras of S-probabilities by its special case $r=0$, i.e.\ if $p\perp q$ for $p,q\in P$ then $p+q\in P$, we obtain a so-called {\em generalized field of events} {\rm(GFE)} {\rm(}cf.\ {\rm\cite{D})}. \end{definition} If $\mathbf R=(R,\oplus,\odot,0,1)$ is a near-RLSE then $\mathbb L(\mathbf R)$ is a lattice with an antitone involution $'$ that in general is not a complementation. Such a lattice could be considered as a quantum logic; however, we will now focus on a different approach to near-RLSEs. We consider a set $Q$ of S-probabilities containing $0$ and $1$ endowed with the operations $\oplus$ and $\odot$ defined for $p,q\in Q$ by \begin{align*} (p\oplus q)(s) & :=\max\big(p(s),q(s)\big)-\min\big(p(s),q(s)\big), \\ (p\odot q)(s) & :=\min\big(p(s),q(s)\big) \end{align*} for all $s\in S$ with $\min$ and $\max$ having the obvious meanings.
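To make these operations and the embeddability test above concrete, the following minimal sketch (in Python; the helper names are ours and purely illustrative) represents S-probabilities as arrays over a finite state set $S$, implements the pointwise operations $\oplus$ and $\odot$ just defined, and performs the rough check for two-valued S-probabilities, under the assumption that the product in the associated RLSE acts pointwise as a minimum (as it does when the numerical events are represented by sets).
\begin{verbatim}
import numpy as np
from itertools import combinations

# S-probabilities over a finite state set S, stored as numpy arrays.
def oplus(p, q):   # (p (+) q)(s) = max(p(s), q(s)) - min(p(s), q(s))
    return np.maximum(p, q) - np.minimum(p, q)

def odot(p, q):    # (p (.) q)(s) = min(p(s), q(s))
    return np.minimum(p, q)

def boolean_embeddable_two_valued(P):
    """Rough test for a list P of 0/1-valued S-probabilities: for every k and
    all k-element subsets A, B, check prod_R(A) (.) prod_R(B) = prod(A u B),
    where the product in R is assumed to act as a pointwise minimum."""
    n = len(P)
    for k in range(1, n):
        for A in combinations(P, k):
            for B in combinations(P, k):
                prodA = np.minimum.reduce(A)
                prodB = np.minimum.reduce(B)
                # 0/1-valued, so repeated factors do not change the real product
                prodAB = np.prod(np.array(A + B), axis=0)
                if not np.array_equal(odot(prodA, prodB), prodAB):
                    return False
    return True

p = np.array([1, 0, 1]); q = np.array([1, 1, 0])
print(oplus(p, q), odot(p, q))                 # [0 1 1] [1 0 0]
print(boolean_embeddable_two_valued([p, q]))   # True for this pair
\end{verbatim}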
As for these operations one could think of repeating an experiment several times for the same states $s\in S$, $p\oplus q$ giving the bandwidth between the lowest and highest values of two repetitions $p$ and $q$, $p\odot q$ the obtained lowest values and $p\oplus1=1-p$ providing the counter probability to $p$. In the proof of the next theorem we use the following two lemmas. \begin{lemma}\label{lem1} Let $p,q\in Q$ and $s\in S$ and assume $p(s),q(s)\in\{0,1\}$. Then \begin{align*} p(s)\oplus q(s) & =p(s)+q(s)-2p(s)q(s), \\ p(s)\odot q(s) & =p(s)q(s). \end{align*} \end{lemma} \begin{proof} Consider the four cases $\big(p(s),q(s)\big)\in\{0,1\}^2$. \end{proof} \begin{lemma}\label{lem2} Put \begin{align*} x\oplus y & :=\max(x,y)-\min(x,y), \\ x\odot y & :=\min(x,y) \end{align*} for all $x,y\in[0,1]$ and let $a,b\in[0,1]$. Then the following holds: \begin{enumerate}[{\rm(i)}] \item $a\oplus b=b\oplus a$, \item $(a\odot b\oplus1)\odot(a\oplus1)\oplus1=a$, \item $\big((a\odot b\oplus1)\odot a\oplus1\big)\odot a=a\odot b$ if and only if $b\geq\min(a,1-a)$, \item $a\odot b\oplus(a\oplus1)=(a\odot b\oplus1)\odot a\oplus1$ if and only if $a\in\{0,1\}$ or $b=0$, \item $a\oplus b=a\odot(b\oplus1)\oplus(a\oplus1)\odot b$. \end{enumerate} \end{lemma} \begin{remark} The equalities in {\rm(i)} -- {\rm(v)} correspond exactly to the identities {\rm(R1)} -- {\rm(R5)}. \end{remark} \begin{proof}[Proof of Lemma~\ref{lem2}] \ \begin{enumerate}[(i)] \item $a\oplus b=\max(a,b)-\min(a,b)=\max(b,a)-\min(b,a)=b\oplus a$ \item $(a\odot b\oplus1)\odot(a\oplus1)\oplus1=1-\min\big(1-\min(a,b),1-a\big)=\max(\min(a,b),a)=a$ \item If $b\geq a$ then \begin{align*} \big((a\odot b\oplus1)\odot a\oplus1\big)\odot a & =\min\big(1-\min(1-a,a),a\big)=\min\big(\max(a,1-a),a\big)= \\ & =a=\min(a,b)=a\odot b. \end{align*} If $b<a$ and $b\geq1-a$ then \[ \big((a\odot b\oplus1)\odot a\oplus1\big)\odot a=\min(b,a)=\min(a,b)=a\odot b. \] If, finally, $b<a$ and $b<1-a$ then \[ \big((a\odot b\oplus1)\odot a\oplus1\big)\odot a=\min(1-a,a)>b=\min(a,b)=a\odot b. \] \item Because of \begin{align*} a\odot b\oplus(a\oplus1) & =\max\big(\min(a,b),1-a\big)-\min\big(\min(a,b),1-a\big)= \\ & =\max\big(\min(a,b),1-a\big)-\min(a,b,1-a) \end{align*} and \[ (a\odot b\oplus1)\odot a\oplus1=1-\min\big(1-\min(a,b),a\big)=\max\big(\min(a,b),1-a\big), \] $a\odot b\oplus(a\oplus1)=(a\odot b\oplus1)\odot a\oplus1$ if and only if $\min(a,b,1-a)=0$ which means $a\in\{0,1\}$ or $b=0$. \item If $a\leq1-b$ then \begin{align*} a\oplus b & =\max(a,b)-\min(a,b)= \\ & =\max\big(\min(a,1-b),\min(1-a,b)\big)-\min\big(\min(a,1-b),\min(1-a,b)\big)= \\ & =a\odot(b\oplus1)\oplus(a\oplus1)\odot b. \end{align*} If $a\geq1-b$ then \begin{align*} a\oplus b & =\max(a,b)-\min(a,b)=1-\min(a,b)-\big(1-\max(a,b)\big)= \\ & =\max(1-b,1-a)-\min(1-b,1-a)= \\ & =\max\big(\min(a,1-b),\min(1-a,b)\big)-\min\big(\min(a,1-b),\min(1-a,b)\big)= \\ & =a\odot(b\oplus1)\oplus(a\oplus1)\odot b. \end{align*} \end{enumerate} \end{proof} Now we can prove our final result. \begin{theorem} For $\mathbf Q=(Q,\oplus,\odot,0,1)$ the following hold: \begin{enumerate}[{\rm(i)}] \item $\mathbf Q$ is a specific near-{\rm RLSE}, \item $\mathbf Q$ is a {\rm GFE} in respect to the order $\leq$ of functions, \item the following are equivalent: \begin{enumerate}[{\rm(a)}] \item $Q\subseteq\{0,1\}^S$, \item $\mathbf Q$ satisfies identity {\rm(R3)}, \item $\mathbf Q$ satisfies identity {\rm(R4)}, \item $\mathbf Q$ is an {\rm RLSE}, \item $\mathbf Q$ is a Boolean ring. 
\end{enumerate} \end{enumerate} \end{theorem} \begin{proof} Let $p,q\in Q$. \begin{enumerate}[(i)] \item follows immediately from Lemma~\ref{lem2}. \item If $p\perp q$ (in respect to $\leq$, not in the sense of RLSEs) then $p\leq1-q$ and hence \[ p+q=1-(1-q-p)=1-\big(\max(p,1-q)-\min(p,1-q)\big)=\big(p\oplus(q\oplus1)\big)\oplus1\in Q. \] \item The equivalence of (a) -- (d) can be directly derived from Lemma~\ref{lem2}. \\ \big((a) and (d)\big) $\Rightarrow$ (e): \\ Using Lemma~\ref{lem1} we obtain \[ p\odot(q\oplus1)=p(1-q)=p-pq=pq+p-2pqp=p\odot q\oplus p \] which by Corollary~\ref{cor3} is equivalent to (e). \\ (e) $\Rightarrow$ (d): \\ This is already well-known. \end{enumerate} \end{proof} Authors' addresses: Dietmar Dorninger \\ TU Wien \\ Faculty of Mathematics and Geoinformation \\ Institute of Discrete Mathematics and Geometry \\ Wiedner Hauptstra\ss e 8-10 \\ 1040 Vienna \\ Austria \\ [email protected] Helmut L\"anger \\ TU Wien \\ Faculty of Mathematics and Geoinformation \\ Institute of Discrete Mathematics and Geometry \\ Wiedner Hauptstra\ss e 8-10 \\ 1040 Vienna \\ Austria, and \\ Palack\'y University Olomouc \\ Faculty of Science \\ Department of Algebra and Geometry \\ 17.\ listopadu 12 \\ 771 46 Olomouc \\ Czech Republic \\ [email protected] \end{document}
\begin{document} \title{A Fitted Multi-Point Flux Approximation Method for Pricing Two-Asset Options} \titlerunning{Fitted Multi-Point Flux Approximation method for pricing options} \author{Rock Stephane Koffi, Antoine Tambue} \authorrunning{R. S. Koffi, A. Tambue} \institute{A. Tambue (Corresponding author) \at Western Norway University of Applied Sciences, Inndalsveien 28, 5063 Bergen, Norway,\\ The African Institute for Mathematical Sciences (AIMS), 6-8 Melrose Road, Muizenberg 7945, South Africa\\ Center for Research in Computational and Applied Mechanics (CERECAM), and Department of Mathematics and Applied Mathematics, University of Cape Town, 7701 Rondebosch, South Africa.\\ Tel.: +47 55 58 70 06, \email{[email protected], [email protected], [email protected]} \\ \and R.S. Koffi \at The African Institute for Mathematical Sciences (AIMS), 6-8 Melrose Road, Muizenberg 7945, South Africa\\ Department of Mathematics and Applied Mathematics, University of Cape Town, 7701 Rondebosch, South Africa\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} In this paper, we develop novel numerical methods based on the Multi-Point Flux Approximation (MPFA) method to solve the degenerate partial differential equation (PDE) arising from pricing two-asset options. The standard MPFA method is used as our first method; in our second method it is coupled with a fitted finite volume method to handle the degeneracy of the PDE, and the corresponding scheme is called the fitted MPFA method. The convection part is discretized using upwind methods (first and second order) that we derive on non-uniform grids. The time discretization is performed with $\theta$-Euler methods. Numerical simulations show that our new schemes can be more accurate than the current fitted finite volume method proposed in the literature. \keywords{Finite volume methods, Multi-Point Flux Approximation, Degenerate PDEs, Option pricing, Multi-asset options} \end{abstract} \section{Introduction} \label{intro} Pricing multi-asset options is of great interest in the financial industry (see \cite{persson2007pricing}). Multi-asset options are options based on more than one underlying. There are several kinds of multi-asset options; among them are exchange options, rainbow options, basket options, best-or-worst options, quotient options, foreign exchange options, quanto options, spread options, dual-strike options and out-performance options. Pricing these options leads to the resolution of the following second-order degenerate Black-Scholes partial differential equation (PDE) (see \cite{persson2007pricing}) \begin{eqnarray} \label{multi} \frac{\partial U}{\partial\tau}=\frac{1}{2}\sum_{i,j=1}^n \sigma_i\sigma_j\rho_{ij}S_iS_j\frac{\partial^2U}{\partial S_i \partial S_j}+r\sum_{i=1}^nS_i\frac{\partial U}{\partial S_i}-rU \end{eqnarray} where $r$ is the risk-free interest rate, $U$ is the option value at time $\tau$, $\tau=T-t$ with $t$ and $T$ respectively the current time and the maturity time, $S_i$ represents the price of asset $i$, $\sigma_i$ represents the volatility of asset $i$, and $\rho_{ij}$ represents the correlation between the assets $i$ and $j$, where $i, j=1,...,n$. The main difference between multi-asset options lies in their payoff functions, which provide the initial condition of the corresponding backward PDE. The spatial domain of the PDE is infinite, but for its numerical resolution, a truncation is required (see \cite{duffy2013finite}, Chapter 3).
It has been observed that when the stock price $S$ approaches zero, the Black-Scholes PDE becomes degenerate (see \cite{duffy2013finite}, Chapter 30.3). Moreover, the initial condition of the PDE has a discontinuity in its first derivative when the stock price is equal to the strike $K$. This discontinuity has an adverse impact on the accuracy when the finite difference method is used (see \cite{wilmott2005best}, Chapter 26). Therefore, for the spatial discretization of the PDE, it is suitable to use non-uniform grids with more points in the region around $S=0$ and $S=K$ in order to handle the degeneracy and the discontinuity. To overcome the above challenges, many methods have been proposed in the literature. In this direction, \cite{wang2004novel} proposed a fitted finite volume method for the one-dimensional Black-Scholes PDE, and a rigorous convergence proof is provided by \cite{angermann2007convergence}. Furthermore, \cite{huang2006fitted} adapted the fitted finite volume discretization to the two-dimensional Black-Scholes PDE, and its rigorous convergence analysis is given by \cite{huang2009convergence}. Although these two fitted finite volume methods are stable, they are only first-order accurate with respect to the asset price variables. In this paper, we present two novel discretization methods for the two-dimensional Black-Scholes PDE based on a special kind of finite volume method, the so-called Multi-Point Flux Approximation (MPFA) method. This method was introduced by \cite{aavatsmark2002introduction} and has been used in fluid dynamics for flow and transport equations (see \cite{sandve2012efficient} and references therein). The MPFA method was designed to give a correct discretization of the flow equation for general grids, including grids with fractures (see \cite{aavatsmark2002introduction,sandve2012efficient}). The MPFA method is essentially based on the approximation of the gradient of a linear function over a triangle, and on the computation and continuity of the flux through the edges of this triangle. The convergence of the MPFA method is usually second order in space, even on rough grids (see \cite{aavatsmark2007multipoint,stephansen2012convergence}). Our first numerical method here is the standard MPFA method, which is used to approximate the second-order operator. To the best of our knowledge, this method has not yet been used to solve the degenerate Black-Scholes PDE in finance. To build our new fitted MPFA method, we couple the standard MPFA method with upwind methods (first and second order) to approximate two-dimensional option prices. The fitted finite volume method proposed by \cite{wang2004novel} is used to handle the degeneracy of the PDE in the region where the stock prices approach zero (the degeneracy region). In the region where the PDE is not degenerate, we apply the MPFA method. The novel numerical technique resulting from this combination is called the fitted MPFA method and is expected to improve the accuracy of the current fitted finite volume method in the literature, since more of the approximations involved are second order in space. Naturally, these two methods are applicable to other types of multi-asset options and also to financial models such as those of \cite{heston1993closed} and \cite{bates1996jumps} on non-uniform grids. Another advantage of our novel fitted MPFA method is that, like the standard MPFA method, it can easily be incorporated into structured commercial or open-source software (see \cite{lie2012open}). \\ The rest of the paper is organized as follows.
In Section 2, we start by introducing the Black-Scholes model for an option on two stocks and the corresponding partial differential equation. Afterwards, we set up the numerical domain of study in a form suitable for the application of the finite volume method. Section 3 is devoted to the spatial discretization of the PDE. We describe the Multi-Point Flux Approximation method for the discretization of the diffusion term of the PDE. The upwind methods (first and second order) are used for the discretization of the convection term. We end Section 3 with the fitted MPFA method, which is a combination of a fitted finite volume method and the MPFA method. The time discretization is performed using the $\theta$-Euler methods in Section 4. In Section 5, we perform numerical experiments. These numerical simulations show that the two proposed schemes (the standard MPFA method and the fitted MPFA method) can be more accurate than the current fitted finite volume method proposed in the literature. A general conclusion is given in Section 6. \section{Formulation of the problem} \subsection{Black-Scholes model with two underlying assets} The Black-Scholes model for an option with two underlying assets is formulated as follows \begin{align} \left\lbrace \begin{array}{l} dx(t)=\mu_1x\,dt+\sigma_1 x\,dW_1\\ \\ dy(t)=\mu_2y\,dt+\sigma_2 y\,dW_2\\ \\ dW_1(t)\,dW_2(t) =\rho\, dt \end{array}\right. \end{align} where $\mu_i$, $\sigma_i$ and $W_i$ are respectively the drift, the volatility and the Wiener process governing the stocks $x$ and $y$, and $\rho$ is the correlation coefficient between the two Wiener processes. By applying It\^o's formula and using the standard arbitrage argument, it is well known (see \cite{hull2003options,kwok2008mathematical,wilmott1993option}) that the value of the option $U$ satisfies the following two-dimensional Black-Scholes partial differential equation on the domain $D=[0,+\infty) \times[0,+\infty) \times[0,T]$ \begin{equation} \label{twoop} \frac{\partial U}{\partial \tau}= \frac{1}{2} \sigma^2_1 x^2 \frac{\partial^2 U}{\partial x^2} +\rho \sigma_1 \sigma_2 xy\frac{\partial^2 U}{\partial x \partial y} + \frac{1}{2} \sigma_2^2 y^2 \frac{\partial^2 U}{\partial y^2}+rx\frac{\partial U}{\partial x} +ry\frac{\partial U}{\partial y}-rU \end{equation} where $\tau=T-t$, $T$ is the maturity time, $t$ the current time and $r$ is the risk-free interest rate. For a European rainbow option on the maximum of two risky assets, the following initial and boundary conditions are used \begin{align} \left\lbrace \begin{array}{l} U(x,y,0)=\max\left(\max(x,y)-K,0\right) \\ \\ U(0,y,\tau)=0\\ \\ U(x,0,\tau)=0\\ \end{array}\right. \end{align} with $K$ the strike price. However, to compare our numerical solution with the existing fitted finite volume method, the exact solution will be used at the boundary.
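Since the exact solution is used at the boundary for comparison purposes, a simple reference value at interior points can also be obtained by Monte Carlo simulation of the risk-neutral dynamics. The following sketch (with purely illustrative parameter values and function names that are not part of the paper) prices the European rainbow call on the maximum of two assets.
\begin{verbatim}
import numpy as np

def rainbow_max_call_mc(x0, y0, K, r, sigma1, sigma2, rho, T,
                        n_paths=200_000, seed=0):
    """Monte Carlo price of the European call on max(x, y) under correlated
    risk-neutral geometric Brownian motions (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    xT = x0 * np.exp((r - 0.5 * sigma1**2) * T + sigma1 * np.sqrt(T) * z1)
    yT = y0 * np.exp((r - 0.5 * sigma2**2) * T + sigma2 * np.sqrt(T) * z2)
    payoff = np.maximum(np.maximum(xT, yT) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative parameters only:
print(rainbow_max_call_mc(x0=100, y0=100, K=100, r=0.05,
                          sigma1=0.3, sigma2=0.3, rho=0.5, T=1.0))
\end{verbatim}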
In order to apply the finite volume method, it is convenient to rewrite the partial differential equation \eqref{twoop} in the following divergence form \begin{eqnarray} \label{conservation} \frac{\partial U}{\partial \tau }= \nabla \cdot(\mathbf{M}\nabla U)+\nabla (f U)+\lambda U \end{eqnarray} where \begin{eqnarray*} & & \mathbf{M}=\frac{1}{2}\left(\begin{array}{lr} \sigma_1^2 x^2 & \rho\sigma_1\sigma_2xy \\ & \\ \rho\sigma_1\sigma_2xy & \sigma_2^2 y^2 \end{array}\right), f=\left(\begin{array}{c} (r-\sigma_1^2-\frac{1}{2}\rho\sigma_1\sigma_2)x \\ \\ (r-\sigma_2^2-\frac{1}{2}\rho\sigma_1\sigma_2)y \end{array}\right)\\ & & \\ & &\\ & &~~~~~~~~~~~~~~~~~~~~~~~\lambda = -3r+\sigma_1^2+\sigma^2_2+\rho\sigma_1\sigma_2 \end{eqnarray*} Note that $\mathbf{M}$ does not satisfy the standard ellipticity condition (see \cite[(3)]{tambue2016exponential}), so the PDE \eqref{conservation} is degenerate. We will assume Dirichlet boundary conditions on the entire boundary. \subsection{Finite volume method} Let us consider the new domain of study $\Omega$ obtained by truncating $D$ such that $\Omega=I_x\times I_y\times [0,T]$ where $I_x=[0,x_{\max}]$ and $I_y=[0,y_{\max}]$. In the sequel of this work, the Black-Scholes partial differential equation \eqref{twoop} is considered over the truncated domain $\Omega$. At $x=x_{\max}$ and $y=y_{\max}$, the linear boundary condition will be applied (see \cite{huang2006fitted}). The intervals $I_x$ and $I_y$ will be subdivided into $N+1$ sub-intervals in the following way (see \cite{huang2006fitted,huang2009convergence}), without loss of generality, as irregular grids such as triangular grids can also be used: \begin{eqnarray} I_{x_i}=[x_{i-1};x_i], \,\,I_{y_j}=[y_{j-1};y_j]\quad i,j =1,...,N+1. \end{eqnarray} Let us set the mid-points $x_{i-\frac{1}{2}}$ and $y_{j-\frac{1}{2}}$ as follows \begin{eqnarray} x_{i-\frac{1}{2}}=\frac{x_{i-1}+x_i}{2}, \,\,\, y_{j-\frac{1}{2}}=\frac{y_{j-1}+y_j}{2} \qquad i,j =1,...,N, \end{eqnarray} with $h_i=x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}},~~~l_j=y_{j+\frac{1}{2}}-y_{j-\frac{1}{2}}$~~~~~and \begin{eqnarray*} x_{-\frac{1}{2}}=x_0=0, \qquad x_{N+\frac{3}{2}}=x_{N+1}=x_{\max},\, y_{-\frac{1}{2}}=y_0=0 \,, \;\,y_{N+\frac{3}{2}}=y_{N+1}=y_{\max}. \end{eqnarray*} For $i,j=1,\ldots,N$, we denote by $\mathcal{C}_{ij}=[x_{i-\frac{1}{2}};x_{i+\frac{1}{2}}]\times[y_{j-\frac{1}{2}};y_{j+\frac{1}{2}}]$ a control volume associated with our subdivision. \begin{figure} \caption{The control volume $\mathcal{C}_{ij}$.} \end{figure} Note that the control volume $\mathcal{C}_{ij}$ is the area surrounding the grid point $(x_i,y_j)$. Our goal is to approximate the option function $U$ at $(x_i,y_j)$ \footnote{the center of the control volume $\mathcal{C}_{ij}$} by a function denoted $\mathcal{U}$. The matrix $\mathbf{M}$ in \eqref{conservation} will be replaced by its average value within each control volume \begin{equation} \mathbf{M}^{ij}=\frac{1}{\mathrm {meas}(\mathcal{C}_{i,j})}\int_{\mathcal{C}_{i,j}}\mathbf{M}\, dxdy,\,\,\, i,j=1,...,N, \end{equation} where $\mathrm{meas}(\mathcal{C}_{ij})$ is the measure of $\mathcal{C}_{ij}$.
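As an illustration of the grid construction and of the cell averaging just introduced, the following sketch (hypothetical helper names; uniform nodes are used only for brevity, non-uniform nodes can be supplied instead) builds the nodes and midpoints and computes the average of $\mathbf{M}$ over one control volume; since the entries of $\mathbf{M}$ are polynomials, these averages are exact and can be compared with the closed-form expressions derived next.
\begin{verbatim}
import numpy as np

def make_grid(a, b, N):
    """Nodes x_0, ..., x_{N+1} of [a, b] and the midpoints x_{i-1/2}."""
    nodes = np.linspace(a, b, N + 2)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    return nodes, mid

def averaged_M(xlo, xhi, ylo, yhi, sigma1, sigma2, rho):
    """Average of M over the control volume [xlo, xhi] x [ylo, yhi]."""
    ax2 = (xhi**3 - xlo**3) / (3.0 * (xhi - xlo))   # average of x^2
    ay2 = (yhi**3 - ylo**3) / (3.0 * (yhi - ylo))   # average of y^2
    ax, ay = 0.5 * (xlo + xhi), 0.5 * (ylo + yhi)   # averages of x and y
    cross = rho * sigma1 * sigma2 * ax * ay
    return 0.5 * np.array([[sigma1**2 * ax2, cross],
                           [cross, sigma2**2 * ay2]])
\end{verbatim}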
Evaluating the averages exactly, we have \begin{equation*} \mathbf{M}^{ij}= \left[\begin{array}{lcr} \frac{\sigma_1^2}{6}\frac{x_{i+\frac{1}{2}}^3-x_{i-\frac{1}{2}}^3}{x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}}}& &\frac{\rho\sigma_1\sigma_2}{8}(x_{i+\frac{1}{2}}+x_{i-\frac{1}{2}})(y_{j+\frac{1}{2}}+y_{j-\frac{1}{2}}) \\ & & \\ \frac{\rho\sigma_1\sigma_2}{8}(x_{i+\frac{1}{2}}+x_{i-\frac{1}{2}})(y_{j+\frac{1}{2}}+y_{j-\frac{1}{2}}) & & \frac{\sigma_2^2}{6}\frac{y_{j+\frac{1}{2}}^3 -y_{j-\frac{1}{2}}^3}{y_{j+\frac{1}{2}}-y_{j-\frac{1}{2}}} \end{array}\right]. \end{equation*} Now let us consider the divergence form given in \eqref{conservation}. Following the principle of the finite volume method, we integrate the partial differential equation \eqref{conservation} over each control volume $\mathcal{C}_{ij}$ and we have \begin{align} \label{eqfinvol} \int_{\mathcal{C}_{ij}}\frac{\partial{U}}{\partial{\tau}}d\mathcal{C}=\int_{\mathcal{C}_{ij}}\nabla \cdot(\mathbf{M}\nabla U) d\mathcal{C}+\int_{\mathcal{C}_{ij}}\nabla (f U)d\mathcal{C} +\int_{\mathcal{C}_{ij}}\lambda U d\mathcal{C}. \end{align} The next section will be dedicated to the spatial discretization of equation \eqref{eqfinvol}. For the term on the left-hand side of \eqref{eqfinvol} and for the last term on its right-hand side, we use the mid-point quadrature rule. More precisely, \begin{eqnarray} \int_{{\mathcal{C}}_{ij}}\frac{\partial{U}}{\partial{\tau}}d\mathcal{C} \approx \mathrm {meas}(\mathcal{C}_{ij})\frac{d\mathcal{U}}{d\tau}(x_i,y_j,\tau) \end{eqnarray} \begin{eqnarray} \label{linearterm} \int_{{\mathcal{C}}_{ij}}\lambda U d\mathcal{C} \approx \mathrm{meas}(\mathcal{C}_{ij})\lambda \mathcal{U}(x_i,y_j,\tau). \end{eqnarray} The diffusion term \begin{equation} \label{diffusionterm} \int_{\mathcal{C}_{ij}}\nabla \cdot(\mathbf{M}\nabla \mathcal{U}) d\mathcal{C} \end{equation} of \eqref{eqfinvol} will be approximated using the \textbf{Multi-Point Flux Approximation} (MPFA) method or our novel \textbf{fitted Multi-Point Flux Approximation} method. More details will be given in the next section. Besides, the convection term \begin{equation} \label{convectionterm} \int_{\mathcal{C}_{ij}}\nabla (f \mathcal{U})d\mathcal{C} \end{equation} of \eqref{eqfinvol} will be approximated using the upwind methods (first or second order). Note that the standard two-point flux approximation in \cite{tambue2016exponential} is consistent in the approximation of \eqref{diffusionterm} only if the grid is $\mathbf{M}$-orthogonal. \section{Space discretization} The spatial discretization of \eqref{conservation} consists of approximating all terms in \eqref{eqfinvol} over the control volumes of the study domain. \subsection{Discretization of the diffusion term} Let us start by applying the divergence theorem to the diffusion term \eqref{diffusionterm} as follows, for $i,j=1,...,N$, \begin{equation} \label{diffterm-disc} \mathcal{F}^{ij}=\int_{\mathcal{C}_{ij}}\nabla \cdot(\mathbf{M}^{ij}\nabla \mathcal{U})=\int_{\partial \mathcal{C}_{ij}}(\mathbf{M}^{ij}\nabla \mathcal{U})\cdot\vec{n}\,d\partial\mathcal{C} \end{equation} where $\vec{n}$ is the outward normal vector to the boundary of the control volume.\\ \\ Now, we can apply the so-called \textbf{Multi-Point Flux Approximation (MPFA)} method to approximate the integral defined in \eqref{diffterm-disc}. \subsubsection{Multi-Point Flux Approximation (MPFA) method} There exist several types of Multi-Point Flux Approximation methods. The best known MPFA methods are the O-method and the L-method.
In our study, we focus on the O-method because it is the classical MPFA method and more intuitive than the L-method, which is fairly new (see \cite{aavatsmark2002introduction}). Here, we follow the description of the O-method developed by \cite{aavatsmark2002introduction}.\\ We will start by giving an approximation of the gradient in the integral expression \eqref{diffterm-disc}. \begin{itemize} \item[] Let us consider a triangle $x_1x_2x_3$, $\nu_i$ the outer normal vector of the edge located opposite to vertex $x_i$, $i=1,2,3$, and $f$ a linear function over this triangle (see \figref{fig:triangle}). The length of $\nu_i$ is equal to the length of the edge to which it is normal. \begin{figure} \caption{A triangle $x_1x_2x_3$ with outer normal vectors $\nu_i$, and the triangle $x_1\bar{x}_1\bar{x}_2$ inside a control volume.} \label{fig:triangle} \label{fig:triangle in CV} \end{figure} The gradient of the function $f$ in the triangle may be written in the form \begin{equation} \label{grad-tri} \nabla f=-\frac{1}{2\mathcal{A}}\left[\Big(f(x_2)-f(x_1)\Big)\nu_2+\Big(f(x_3)-f(x_1)\Big)\nu_3\right] \end{equation} where $\mathcal{A}$ is the area of the triangle. Thereby, assuming that our solution $\mathcal{U}$ is linear over the control volume $\mathcal{C}_{ij}$ with center $x_1(x_{i},y_{j})$, and applying \eqref{grad-tri} in the triangle $x_1\bar{x}_1\bar{x}_2$ (see \figref{fig:triangle in CV}), we have \begin{equation} \label{grad-cv} \nabla \mathcal{U}=\frac{1}{2 \mathcal{A}}\left[(\bar{\mathcal{U}}_1-\mathcal{U}_{ij})\omega_1+(\bar{\mathcal{U}}_2-\mathcal{U}_{ij})\omega_2\right] \end{equation} where $\mathcal{U}_{ij}=\mathcal{U}(x_1)=\mathcal{U}(x_{i},y_{j}),\bar{\mathcal{U}}_1=\mathcal{U}(\bar{x}_1),\bar{\mathcal{U}}_2=\mathcal{U}(\bar{x}_2)$, the vectors $\omega_1$ and $\omega_2$ are respectively the inner normal vectors to the edges $x_1\bar{x}_1$ and $x_1\bar{x}_2$, with lengths equal to the lengths of those edges, and $\mathcal{A}$ is the area of the triangle $x_1\bar{x}_1\bar{x}_2$.\\ Let us call an \textbf{interaction volume} $\mathcal{R}_{ij}$ a grid cell defined as follows \begin{equation} ~~~\mathcal{R}_{ij}=[x_{i-1};x_i]\times[y_{j-1};y_j],\,\,i,j=1,\ldots,N+1. \end{equation} We may notice that an interaction volume $\mathcal{R}_{ij}$ covers an area at the intersection of the control volumes $\mathcal{C}_{i-1,j-1},\mathcal{C}_{i-1,j},\mathcal{C}_{i,j-1}$ and $\mathcal{C}_{ij}$. Here, we follow closely \cite{aavatsmark2007multipoint}. \item[] We denote respectively by $x_1(x_{i-1},y_{j-1}),x_2(x_{i},y_{j-1}),x_3(x_{i-1},y_{j})$ and $x_4(x_{i},y_{j})$ the centres of the control volumes $\mathcal{C}_{i-1,j-1},\mathcal{C}_{i,j-1},\mathcal{C}_{i-1,j}$ and $\mathcal{C}_{i,j}$. We also denote by $\bar{x}_1,\bar{x}_2,\bar{x}_3$ and $\bar{x}_4$ the midpoints of the segments $x_1x_2$, $x_3x_4$, $x_1x_3$ and $x_2x_4$. \begin{figure} \caption{Interaction volume} \label{fig:intervol} \end{figure} Our goal in an interaction volume is to compute the flux through the half edges $1,2,3$ and $4$ inside the interaction volume (see \figref{fig:intervol}). The flux through the half edge $p$ seen from the centre $x_1=(x_{i-1},y_{j-1})$ of the control volume $\mathcal{C}_{i-1,j-1}$ is denoted $f_p^{i-1,j-1}$. By using the expression \eqref{diffterm-disc}, we have \begin{equation} \label{flux-expr} f_p^{i-1,j-1}=\Gamma_p\vec{n}_p^T\mathbf{M}^{i-1,j-1}\nabla \mathcal{U} \end{equation} where $\Gamma_p$ is the length of the half edge $p$ and $\vec{n}_p$ is the outward unit normal vector to the half edge $p$. It is convenient to let $\vec{n}_p$ point in the direction of increasing global cell indices.
In that case, we have two kinds of inner normal vectors. The vertical ones denoted $\omega_1$ and the horizontal ones denoted $\omega_2$.\\ By considering the triangle $x_1\bar{x}_1\bar{x}_3$ (\figref{fig:intervol}) in the control volume $\mathcal{C}_{i-1,j-1}$, using the expression of gradient \eqref{grad-cv} and the flux expression \eqref{flux-expr}, we have for $i,j=1,...,N$ \begin{equation} \label{flux-vec} \left[\begin{array}{c} f_1^{i-1,j-1} \\ \\ f_3^{i-1,j-1} \end{array}\right] = G^{i-1,j-1}\left[\begin{array}{c} \bar{\mathcal{U}}_1-\mathcal{U}_{i-1,j-1} \\ \\ \bar{\mathcal{U}}_3-\mathcal{U}_{i-1,j-1} \end{array} \right] \end{equation} with \begin{equation*} G^{i-1,j-1}=\begin{bmatrix} \Gamma_1n_1^T M^{i-1,j-1} \omega_1 & & & & \Gamma_1n_1^T M^{i-1,j-1} \omega_2 \\ & & & & \\ \Gamma_2n_1^T M^{i-1,j-1} \omega_2 & & & & \Gamma_2n_2^T M^{i-1,j-1} \omega_2 \end{bmatrix} \end{equation*} By applying \eqref{flux-vec} in the triangles $x_2\bar{x}_1\bar{x}_4,x_3\bar{x}_2\bar{x}_3$ and $x_4\bar{x}_4\bar{x}_2$ (see \figref{fig:intervol}), we have \begin{eqnarray} \label{flux-vec1} \left[\begin{array}{c} f_1^{i,j-1} \\ \\ f_4^{i,j-1} \end{array}\right] = G^{i,j-1}\left[\begin{array}{c} \mathcal{U}_{i,j-1}-\bar{\mathcal{U}}_1 \\ \\\bar{\mathcal{U}}_4-\mathcal{U}_{i,j-1} \end{array} \right]~~~~~~~~~~~~~\left[\begin{array}{c} f_2^{i-1,j} \\ \\ f_3^{i-1,j} \end{array}\right] = G^{i-1,j}\left[\begin{array}{c} \bar{\mathcal{U}}_2-\mathcal{U}_{i-1,j} \\ \\ \mathcal{U}_{i-1,j}-\bar{\mathcal{U}}_3 \end{array} \right] \nonumber\\ \nonumber \\ \\ \nonumber \\ \left[\begin{array}{c} f_2^{ij} \\ \\ f_4^{ij} \end{array}\right] = G^{ij}\left[\begin{array}{c} \mathcal{U}_{ij}-\bar{\mathcal{U}}_2 \\ \\ \mathcal{U}_{ij}-\bar{\mathcal{U}}_4 \end{array} \right] \nonumber ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{eqnarray} Since the flux through an edge is continuous, from \eqref{flux-vec} and \eqref{flux-vec1} we have \begin{equation} \begin{array}{ccccc} f_1 & = & f_1^{i-1,j-1} & = & f_1^{i-1,j} \\ & & & & \\ f_2 & = & f_2^{ij} & = & f_2^{i-1,j} \\ & & & & \\ f_3 & = & f_3^{i-1,j} & = & f_3^{i-1,j-1} \\ & & & & \\ f_4 & = & f_4^{i,j-1} & = & f_4^{ij}. 
\end{array} \end{equation} It follows that \begin{eqnarray} \label{flux-cont} f_1 & = & g_{11}^{i-1,j-1}(\bar{\mathcal{U}}_1-\mathcal{U}_{i-1,j-1})+g_{12}^{i-1,j-1}(\bar{\mathcal{U}}_3-\mathcal{U}_{i-1,j-1}) = -g_{11}^{i,j-1}(\bar{\mathcal{U}_1}-\mathcal{U}_{i,j-1})+g_{12}^{i,j-1}(\bar{\mathcal{U}}_4-\mathcal{U}_{i,j-1}) \nonumber \\ & & \nonumber \\ f_2 & = & -g_{11}^{ij} (\bar{\mathcal{U}}_2-\mathcal{U}_{ij})-g_{12}^{ij}(\bar{\mathcal{U}}_4-\mathcal{U}_{ij}) = g_{11}^{i-1,j}(\bar{\mathcal{U}}_2-\mathcal{U}_{i-1,j})-g_{12}^{i-1,j}(\bar{\mathcal{U}}_3-\mathcal{U}_{i-1,j}) \nonumber \\ & & \\ f_3 & = & g_{21}^{i-1,j}(\bar{\mathcal{U}}_2-\mathcal{U}_{i-1,j})-g_{22}^{i-1,j}(\bar{\mathcal{U}}_3-\mathcal{U}_{i-1,j}) = g_{21}^{i-1,j-1}(\bar{\mathcal{U}}_1-\mathcal{U}_{i-1,j-1})+g_{22}^{i-1,j-1}(\bar{\mathcal{U}}_3-\mathcal{U}_{i-1,j-1})\nonumber \\ & & \nonumber \\ f_4 & = & -g_{21}^{i,j-1}(\bar{\mathcal{U}}_1-\mathcal{U}_{i,j-1})+g_{22}^{i,j-1}(\bar{\mathcal{U}}_4-\mathcal{U}_{i,j-1}) = -g_{21}^{ij}(\bar{\mathcal{U}}_2-\mathcal{U}_{ij})-g_{22}^{ij}(\bar{\mathcal{U}}_4-\mathcal{U}_{ij}) \nonumber \end{eqnarray} Let us set \begin{equation} f=\left[\begin{array}{c} f_1 \\ f_2 \\ f_3\\ f_4\end{array}\right],~~~~~~~~~~\mathcal{U}=\left[\begin{array}{c} \mathcal{U}_{i-1,j-1} \\ \mathcal{U}_{i,j-1} \\ \mathcal{U}_{i-1,j} \\ \mathcal{U}_{ij}\end{array}\right],~~~~~~~\mathcal{V}=\left[\begin{array}{c} \bar{\mathcal{U}}_1 \\ \bar{\mathcal{U}}_2 \\ \bar{\mathcal{U}}_3 \\ \bar{\mathcal{U}}_4\end{array}\right] \end{equation} The equation \eqref{flux-cont} allows to have \begin{equation} \label{flux-eq1} f=C^{ij}\mathcal{V}+F^{ij}\mathcal{U} \end{equation} where \begin{eqnarray*} C^{ij} & = & \left[\begin{array}{ccccccc} g_{11}^{i-1,j-1} & & 0 & & g_{12}^{i-1,j-1} & & 0 \\ & & & & & & \\ 0 & & -g_{11}^{ij} & & 0 & & -g_{12}^{ij} \\ & & & & & & \\ 0 & & g_{21}^{i-1,j} & & -g_{22}^{i-1,j} & & 0 \\ & & & & & & \\ -g_{21}^{i,j-1} & & 0 & & 0 & & g_{22}^{i,j-1} \end{array}\right] \end{eqnarray*} \begin{eqnarray*} F^{ij}=\left[\begin{array}{ccccccc} -g_{11}^{i-1,j-1}-g_{12}^{i-1,j-1} & & 0 & & 0 & & 0 \\ & & & & & & \\ 0 & & 0 & & 0 & & g_{11}^{ij}+g_{12}^{ij}\\ & & & & & & \\ 0 & & 0 & & -g_{21}^{i-1,j}+g_{22}^{i-1,j} & & 0 \\\\ & & & & & & \\ 0 & & g_{21}^{i,j-1}-g_{22}^{i,j-1} & & 0 & & 0 \end{array}\right] \end{eqnarray*} From \eqref{flux-cont}, we can also have \begin{eqnarray} \label{flux-eq2} A^{ij}\mathcal{V}=B^{ij}\mathcal{U} \end{eqnarray} where \begin{eqnarray*} A^{ij} & = &\left[\begin{array}{ccccccc} g_{11}^{i-1,j-1}+g_{11}^{i,j-1} & & 0 & & g_{12}^{i-1,j-1} & & -g_{12}^{i,j-1}\\ & & & & & & \\ 0 & & -g_{11}^{ij}-g_{11}^{i-1,j} & & g_{12}^{i-1,j} & & -g_{12}^{ij}\\ & & & & & & \\ -g_{21}^{i-1,j-1} & & g_{21}^{i-1,j} & & -g_{22}^{i-1,j}-g_{22}^{i-1,j-1} & & 0 \\ & & & & & & \\ -g_{21}^{i,j-1} & & g_{21}^{ij} & & 0 & & g_{22}^{i,j-1}+g_{22}^{ij} \end{array}\right] \end{eqnarray*} \begin{eqnarray*} B^{ij} & = & \left[\begin{array}{ccccccc} g_{11}^{i-1,j-1}+g_{12}^{i-1,j-1} & & g_{11}^{i,j-1}-g_{12}^{i,j-1} & & 0 & & 0\\ & & & & & & \\ 0 & & 0 & & -g_{11}^{i-1,j}+g_{12}^{i-1,j} & & -g_{11}^{ij}-g_{12}^{ij}\\ & & & & & & \\ -g_{21}^{i-1,j-1}-g_{22}^{i-1,j-1} & & 0 & & g_{21}^{i-1,j}-g_{22}^{i-1,j} & & 0 \\ & & & & & & \\ 0 & & -g_{21}^{i,j-1}+g_{22}^{i,j-1} & & 0 & & g_{21}^{ij}+g_{22}^{ij}\end{array}\right] \end{eqnarray*} Thereby, $\mathcal{V}$ can be eliminated from \eqref{flux-eq1} by solving \eqref{flux-eq2} with respect to $\mathcal{V}$. 
This gives the following expression of the flux through the 4 half edges inside the interaction volume $\mathcal{R}_{ij}$ \begin{eqnarray} \label{flux-trans} f=T^{ij}\mathcal{U}, \;\;\;\,\,\,\, i,j=1,...,N+1, \end{eqnarray} where \begin{equation} \label{trans} T^{ij}=C^{ij}\left[A^{ij}\right]^{-1}B^{ij}+F^{ij}. \end{equation} $T^{ij}$ is called the transmissibility matrix of the interaction volume $\mathcal{R}_{ij}$.\\ From \eqref{flux-trans}, we are now able to get the flux through the half edges 1, 2, 3 and 4 inside the interaction volume $\mathcal{R}_{ij}$.\\ Let us recall that to approximate the integral in \eqref{diffterm-disc}, we need to compute the flux through the edges of a control volume $\mathcal{C}_{ij}$. We may notice that we need four interaction volumes, centred at the four vertices of the control volume, in order to cover all the edges of the considered control volume (see \figref{fig:intervol1}). \begin{figure} \caption{The four interaction volumes covering the edges of the control volume $\mathcal{C}_{ij}$.} \label{fig:intervol1} \end{figure} For the control volume $\mathcal{C}_{ij}$, we denote by ${}_{\mathcal{E}}f_{l}^{ij}$ the flux through the lower half eastern edge and by ${}_{\mathcal{E}}f_{u}^{ij}$ the flux through the upper half eastern edge. The flux ${}_{\mathcal{E}}f^{ij}$ through the east edge of the control volume $\mathcal{C}_{ij}$ is calculated as follows: The lower half eastern edge is contained in the interaction volume $\mathcal{R}_{i+1,j}$ and it is in position 2 in the interaction volume (see \figref{fig:intervol1}). So by using \eqref{flux-trans} we have: \begin{equation*} {}_{\mathcal{E}}f_{l}^{ij}=T_{21}^{i+1,j}\mathcal{U}_{i,j-1}+T_{22}^{i+1,j}\mathcal{U}_{i+1,j-1} +T_{23}^{i+1,j}\mathcal{U}_{ij}+T_{24}^{i+1,j}\mathcal{U}_{i+1,j}. \end{equation*} Similarly, the upper half eastern edge is contained in the interaction volume $\mathcal{R}_{i+1,j+1}$ and it is in position 1 in the interaction volume. So by using \eqref{flux-trans} we have: \begin{equation*} {}_{\mathcal{E}}f_u^{ij}=T_{11}^{i+1,j+1}\mathcal{U}_{ij}+T_{12}^{i+1,j+1}\mathcal{U}_{i+1,j}+T_{13}^{i+1,j+1}\mathcal{U}_{i,j+1} +T_{14}^{i+1,j+1}\mathcal{U}_{i+1,j+1}. \end{equation*} Finally, the flux through the east edge of the control volume $\mathcal{C}_{ij}$ is the sum of ${}_{\mathcal{E}}f_{l}^{ij}$ and ${}_{\mathcal{E}}f_u^{ij}$. Thereby we have \begin{eqnarray*} {}_{\mathcal{E}}f_{}^{ij} & = & {}_{\mathcal{E}}f_{l}^{ij}+{}_{\mathcal{E}}f_u^{ij} \\ & & \\ & = & T_{21}^{i+1,j}\mathcal{U}_{i,j-1}+T_{22}^{i+1,j}\mathcal{U}_{i+1,j-1} +T_{23}^{i+1,j}\mathcal{U}_{ij}+T_{24}^{i+1,j}\mathcal{U}_{i+1,j}+ T_{11}^{i+1,j+1}\mathcal{U}_{ij}\\ & & \\ & & +T_{12}^{i+1,j+1}\mathcal{U}_{i+1,j}+T_{13}^{i+1,j+1}\mathcal{U}_{i,j+1}+T_{14}^{i+1,j+1}\mathcal{U}_{i+1,j+1} \\ & & \\ {}_{\mathcal{E}}f_{}^{ij} & = & (T_{11}^{i+1,j+1}+T_{23}^{i+1,j})\mathcal{U}_{ij}+(T_{12}^{i+1,j+1}+T_{24}^{i+1,j})\mathcal{U}_{i+1,j}+ T_{14}^{i+1,j+1}\mathcal{U}_{i+1,j+1}\\ & &\\ & & +T_{13}^{i+1,j+1}\mathcal{U}_{i,j+1}+T_{21}^{i+1,j}\mathcal{U}_{i,j-1}+T_{22}^{i+1,j}\mathcal{U}_{i+1,j-1}. \end{eqnarray*} Similarly, we compute the fluxes through the northern, western and southern edges of the control volume $\mathcal{C}_{ij}$. Afterwards, we sum up the fluxes through the four edges of the control volume to get the outflux $\mathcal{F}^{ij}$ through the boundary of the control volume $\mathcal{C}_{ij}$.
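The elimination leading to the transmissibility matrix \eqref{trans} and the assembly of the eastern-edge flux described above can be sketched as follows (Python, zero-based indices; the function names are ours, and the routine assumes the four $4\times4$ matrices of an interaction volume have already been formed).
\begin{verbatim}
import numpy as np

def transmissibility(C, A, B, F):
    """T = C A^{-1} B + F for one interaction volume (all inputs 4x4)."""
    return C @ np.linalg.solve(A, B) + F

def east_edge_flux(T_right, T_upper_right, U):
    """Flux through the eastern edge of cell (i, j), using the interaction
    volumes R_{i+1,j} (lower half, row 2) and R_{i+1,j+1} (upper half, row 1).
    U maps offsets (di, dj) relative to (i, j) to the cell values."""
    f_lower = (T_right[1, 0] * U[(0, -1)] + T_right[1, 1] * U[(1, -1)]
               + T_right[1, 2] * U[(0, 0)] + T_right[1, 3] * U[(1, 0)])
    f_upper = (T_upper_right[0, 0] * U[(0, 0)] + T_upper_right[0, 1] * U[(1, 0)]
               + T_upper_right[0, 2] * U[(0, 1)] + T_upper_right[0, 3] * U[(1, 1)])
    return f_lower + f_upper
\end{verbatim}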
Therefore we have for $i,j=1,...,N$ \begin{eqnarray} \label{flux-mpfa} \mathcal{F}^{ij}& = & a_{ij}\mathcal{U}_{ij}+b_{ij}\mathcal{U}_{i+1,j}+c_{ij}\mathcal{U}_{i+1,j+1}+d_{ij}\mathcal{U}_{i,j+1} +e_{ij}\mathcal{U}_{i-1,j+1}+\alpha_{ij}\mathcal{U}_{i-1,j}+\beta_{ij}\mathcal{U}_{i-1,j-1} \nonumber \\ & & +\gamma_{ij}\mathcal{U}_{i,j-1}+\lambda_{ij}\mathcal{U}_{i+1,j-1}. \end{eqnarray} where \begin{eqnarray*} & &a_{ij}=T_{11}^{i+1,j+1}+T_{23}^{i+1,j}+T_{31}^{i+1,j+1}+T_{42}^{i,j+1}-T_{12}^{i,j+1}-T_{24}^{ij}-T_{33}^{i+1,j}-T_{44}^{ij};\\ & & \\ & & b_{ij}=T_{12}^{i+1,j+1}+T_{24}^{i+1,j}+T_{32}^{i+1,j+1}-T_{34}^{i+1,j}\\ & & \\ & &c_{ij}=T_{14}^{i+1,j+1}+T_{34}^{i+1,j+1}; d_{ij}=T_{13}^{i+1,j+1}+T_{33}^{i+1,j+1}+T_{44}^{i,j+1}-T_{14}^{i,j+1}; e_{ij}=T_{43}^{i,j+1}-T_{13}^{i,j+1};\\ & & \\ & & \alpha_{ij}=T_{41}^{i,j+1}-T_{11}^{i,j+1}-T_{23}^{ij}-T_{43}^{ij}; \beta_{ij}=-T_{21}^{ij}-T_{41}^{ij};\\ & & \\ & & \gamma_{ij}=T_{21}^{i+1,j}-T_{22}^{ij}-T_{31}^{i+1,j}-T_{42}^{ij};\\ & & \\ & & \lambda_{ij}=T_{22}^{i+1,j}-T_{32}^{i+1,j}. \end{eqnarray*} Let us notice that for the control volumes near to the boundary of the our domain, some terms from the boundary conditions will be involved in \eqref{flux-mpfa} .\\ Hence \eqref{diffterm-disc} becomes \begin{equation} \label{flux-mpfa-mat} \mathcal{F}=A_{mp}\mathcal{U}+F_{mp} \end{equation} where $A_{mp}$ is a $N^2\times N^2$ matrix and \begin{equation*} \mathcal{F}=\begin{bmatrix} \mathcal{F}_{11}\\ \mathcal{F}_{12}\\ \vdots\\ \mathcal{F}_{1N}\\ \mathcal{F}_{21}\\ \mathcal{F}_{22}\\ \vdots\\ \vdots\\ \mathcal{F}_{NN} \end{bmatrix},~~~ \mathcal{U}=\begin{bmatrix} \mathcal{U}_{11}\\ \mathcal{U}_{12}\\ \vdots\\ \mathcal{U}_{1N}\\ \mathcal{U}_{21}\\ \mathcal{U}_{22}\\ \vdots\\ \vdots\\ \mathcal{U}_{NN} \end{bmatrix},~~~ A_{mp}=\begin{bmatrix} W_1 & X_1 & 0_N & \ldots & \ldots & \ldots & \ldots & 0_N\\ Y_2 & W_2 & X_2 & \ddots & & & & \vdots \\ 0_N & Y_3 & W_3 & X_3 & \ddots & & & \vdots \\ \vdots & \ddots & Y_4 & W_4 & X_4 & \ddots & & \vdots\\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & & \ddots& \ddots & \ddots & \ddots & 0_N \\ \vdots & & & & \ddots & Y_{N-1} & W_{N-1} & X_{N-1} \\ 0_N & \ldots & \ldots & \ldots & \ldots & 0_N & Y_N & W_N \end{bmatrix} \end{equation*} with $0_{N}$ is $N\times N$ null matrix , $W_i,Y_i,X_i$ are tridiagonal matrices, and $F_{mp}$ is a $N^2$ vector coming from the boundary conditions. The structure of the diffusion matrix $A_{mp}$ can be viewed in \figref{diffusion1} \begin{figure} \caption{Structure of diffusion matrix coming from standard MPFA} \label{diffusion1} \end{figure} \end{itemize} \subsection{Discretization of the convection term} In this section, the convection term \begin{equation*} \int_{\mathcal{C}_{ij}}\nabla (f \mathcal{U})d\mathcal{C} \end{equation*} with \begin{equation*} f=\left(\begin{array}{c} (r-\sigma_1^2-\frac{1}{2}\rho\sigma_1\sigma_2)x \\ \\ (r-\sigma_2^2-\frac{1}{2}\rho\sigma_1\sigma_2)y \end{array}\right)=\left(\begin{array}{c} p \\ \\ q \end{array}\right) \end{equation*} will be approximated by the upwind methods (first and second order). \subsubsection{First order upwind} The \textbf{first order upwind method} discussed by \cite[chapter 4.8]{leveque2004finite} or \cite{tambue2016exponential} will be applied to approximate the second term of \eqref{eqfinvol}. 
Using the divergence theorem, we have for $i,j=1,...,N$ \begin{equation} I^{ij}=\int_{\mathcal{C}_{ij}}\nabla (f \mathcal{U})d\mathcal{C}=\int_{\partial \mathcal{C}_{ij}}(f\mathcal{U})\cdot\vec{n}\,d\partial \mathcal{C}. \end{equation} Note that $I^{ij}$ is calculated by summing up the fluxes through the edges of the control volume $\mathcal{C}_{ij}$. The flux through an edge computed with the first order upwind method depends on the sign of $f\cdot\vec{n}$ on this edge. If the sign of $f\cdot\vec{n}$ is positive, $\mathcal{U}_{ij}$ will be used to approximate $\mathcal{U}$ in the expression $(f\cdot \vec{n})\,\mathcal{U}$; otherwise we will use the value of $\mathcal{U}$ on the other side of the edge. Note that an edge may be the interface of two control volumes. By doing so, we have for $i,j=1,...,N$ \begin{eqnarray} \label{flux-up1} I^{ij} & = & \epsilon_{ij}\mathcal{U}_{i-1,j}+\mu_{ij}\mathcal{U}_{i,j-1}+\Omega_{ij}\mathcal{U}_{ij}+ \phi_{ij}\mathcal{U}_{i,j+1}+\Psi_{ij}\mathcal{U}_{i+1,j}, \end{eqnarray} where \begin{eqnarray*} & &\epsilon_{ij}=-l_jf_x^{i-1}\max(f_x^{i-1},0);~~~~~\mu_{ij}=-h_if_y^{j-1}\max(f_y^{j-1},0)\\ & & \\ & & \Omega_{ij}=l_j\Bigg(f_x^{i}\max(f_x^{i},0)- f_x^{i-1}\min(f_x^{i-1},0)\Bigg) +h_i\Bigg(f_y^j\max(f_y^j,0)-f_y^{j-1}\min(f_y^{j-1},0)\Bigg)\\ & & \\ & & \\ & & \phi_{ij}=h_if_y^j\min(f_y^j,0);~~~~~~~~ \Psi_{ij}=l_jf_x^{i}\min(f_x^{i},0), \end{eqnarray*} with \begin{eqnarray*} f_x^i=(r-\sigma_1^2-\frac{1}{2}\rho\sigma_1\sigma_2)x_{i+1}, \qquad f_y^j=(r-\sigma_2^2-\frac{1}{2}\rho\sigma_1\sigma_2)y_{j+1}. \end{eqnarray*} Let us notice that for the control volumes near the boundary of our domain, some terms from the boundary conditions will be involved in \eqref{flux-up1}. Hence, \eqref{flux-up1} gives \begin{equation} \label{flux-up1-mat} I=A_{up} \mathcal{U}+F_{up} \end{equation} where $A_{up}$ is an $N^2\times N^2$ matrix and \begin{equation*} I=\begin{bmatrix} I^{11}\\ I^{12}\\ \vdots\\ I^{1N}\\ I^{21}\\ I^{22}\\ \vdots\\ \vdots\\ I^{NN}\\ \end{bmatrix}, ~~~~~~\mathcal{U}=\begin{bmatrix} \mathcal{U}_{11}\\ \mathcal{U}_{12}\\ \vdots\\ \mathcal{U}_{1N}\\ \mathcal{U}_{21}\\ \mathcal{U}_{22}\\ \vdots\\ \vdots\\ \mathcal{U}_{NN}\\ \end{bmatrix}, A_{up}=\begin{bmatrix} H_1 & P_1 & 0_N & \ldots & \ldots & \ldots & 0_N \\ Q_2 & H_2 & P_2& \ddots & & & \vdots \\ 0_N & Q_3 & H_3 & P_3 & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & Q_{N-2} & H_{N-2} & P_{N-2} & 0_N \\ \vdots & & & \ddots & Q_{N-1} & H_{N-1} & P_{N-1}\\ 0_N & \dots & \ldots & \ldots & 0_N & Q_N & H_N \end{bmatrix} \end{equation*} where $0_{N}$ is the $N\times N$ null matrix, the $H_i$ are tridiagonal matrices, the $P_i,Q_i$ are diagonal matrices and $F_{up}$ is a vector coming from the boundary conditions. Therefore, combining the MPFA method \eqref{flux-mpfa-mat} and the first order upwind method \eqref{flux-up1-mat}, we have \begin{equation} \label{mpfa-up1} \frac{d\mathcal{U}}{d\tau}=A\mathcal{U}+F \end{equation} with \begin{eqnarray*} A=L^{-1}\Bigg(A_{mp}+A_{up}+A_L\Bigg)~~~~~~F=L^{-1}\Bigg(F_{mp}+F_{up}\Bigg) \end{eqnarray*} where $A_L$ is a diagonal matrix of size $N^2\times N^2$ coming from the discretisation of \eqref{linearterm}; the diagonal entry of $A_L$ associated with the cell $(i,j)$ is $h_il_j\lambda$, $i,j=1,...,N$, with $\lambda$ given in \eqref{conservation}. The matrix $L$ is also a diagonal matrix of size $N^2\times N^2$ whose diagonal entry associated with the cell $(i,j)$ is $h_il_j$, $i,j=1,\ldots,N$. \subsubsection{Second order upwind} We start by applying the mid-point quadrature rule as follows.
\begin{eqnarray} \label{quad-upw2} J^{ij}=\int_{\mathcal{C}_{ij}}\nabla (f\mathcal{U}) d\mathcal{C}& = & \mathrm{meas}(\mathcal{C}_{ij})\nabla (f\mathcal{U})|_{(x_i,y_j)}\nonumber\\ & & \nonumber \\ & = & (x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}})(y_{j+\frac{1}{2}}-y_{j-\frac{1}{2}})\Bigg[p_i\frac{\partial \mathcal{U}_{ij}}{\partial x}+q_j\frac{\partial \mathcal{U}_{ij}}{\partial y}+\Bigg(\frac{\partial p_i}{\partial x}+\frac{\partial q_j}{\partial y}\Bigg)\mathcal{U}_{ij}\Bigg]\nonumber\\ & & \nonumber \\ & = & h_il_j\Bigg[\Bigg(p_i\frac{\partial \mathcal{U}_{ij}}{\partial x}+q_j\frac{\partial \mathcal{U}_{ij}}{\partial y}\Bigg)+\omega\mathcal{U}_{ij}\Bigg],\;\;\,i,j=1,\ldots,N, \end{eqnarray} where $p_i=(r-\sigma_1^2-\frac{1}{2}\rho\sigma_1\sigma_2)x_i$, $q_j=(r-\sigma_2^2-\frac{1}{2}\rho\sigma_1\sigma_2)y_j$ and $\omega=2r-\sigma_1^2-\sigma_2^2-\rho\sigma_1\sigma_2$. Let us use the second order upwind method to approximate the first derivatives in \eqref{quad-upw2} at the point $(x_i,y_j)$.\\ \textbf{Approximation of the first derivative using a three-point stencil.} Here, we want to express the first derivative $\frac{\partial \mathcal{U}_{ij}}{\partial x}$ in terms of $\mathcal{U}_{i+2,j},\mathcal{U}_{i+1,j}$ and $\mathcal{U}_{ij}$. Set $h=\underset{1\leq i\leq N}{\max} h_i$. Let us find $a,b$ and $c$ such that \begin{equation} \label{first-dev} \frac{\partial \mathcal{U}_{ij}}{\partial x}=a\mathcal{U}_{i+2,j}+b\mathcal{U}_{i+1,j}+c\mathcal{U}_{ij}. \end{equation} Thereby, using a second-order Taylor expansion of $\mathcal{U}_{i+2,j}$ and $\mathcal{U}_{i+1,j}$ at the point $(x_i,y_j)$, we have \begin{eqnarray*} \frac{\partial \mathcal{U}_{ij}}{\partial x} & = & a\mathcal{U}_{i+2,j}+b\mathcal{U}_{i+1,j}+c\mathcal{U}_{ij}\\ & & \\ & = & a\Bigg(\mathcal{U}_{ij}+(h_{i+1}+h_{i+2})\frac{\partial \mathcal{U}_{ij}}{\partial x}+ \frac{1}{2}(h_{i+1}+h_{i+2})^2\frac{\partial^2 \mathcal{U}_{ij}}{\partial x^2}+\mathcal{O}(h^3)\Bigg)+b\Bigg(\mathcal{U}_{ij}+h_{i+1}\frac{\partial \mathcal{U}_{ij}}{\partial x}+\frac{1}{2}h_{i+1}^2\frac{\partial^2 \mathcal{U}_{ij}}{\partial x^2}+\mathcal{O}(h^3)\Bigg)\\ & & \\ & & +c\mathcal{U}_{ij}.\\ & & \\ \frac{\partial \mathcal{U}_{ij} }{\partial x} & = & \Big(a+b+c\Big)\mathcal{U}_{ij}+\Bigg(a(h_{i+1}+h_{i+2})+bh_{i+1}\Bigg)\frac{\partial \mathcal{U}_{ij}}{\partial x}+\Bigg(\frac{1}{2}a\Big(h_{i+1}+h_{i+2}\Big)^2+\frac{1}{2}bh_{i+1}^2\Bigg)\frac{\partial^2 \mathcal{U}_{ij}}{\partial x^2}+\mathcal{O}(h^3). \end{eqnarray*} By matching the coefficients, we have \begin{align} \label{sys-first-dev} \left\lbrace \begin{array}{l} a+b+c=0\\ \\ a(h_{i+1}+h_{i+2})+bh_{i+1}~=1\\ \\ \frac{1}{2}a\Big(h_{i+1}+h_{i+2}\Big)^2+\frac{1}{2}bh_{i+1}^2=0 \end{array}\right. \end{align} Solving \eqref{sys-first-dev}, we have \begin{equation} a=-\frac{h_{i+1}}{h_{i+2}(h_{i+1}+h_{i+2})}~~~~~~~~~~~~~~~~b=\frac{h_{i+1}+h_{i+2}}{h_{i+1}h_{i+2}}~~~~~~~~~~~~~~~ c=\frac{h_{i+1}^2-\Big(h_{i+1}+h_{i+2}\Big)^2}{h_{i+1}h_{i+2}\Big(h_{i+1}+h_{i+2}\Big)}.
\end{equation} Therefore we have \begin{equation} \label{first-dev-2nd} \frac{\partial \mathcal{U}_{ij} }{\partial x} \approx \frac{-h_{i+1}^2\mathcal{U}_{i+2,j}+(h_{i+1}+h_{i+2})^2\mathcal{U}_{i+1,j}+(h_{i+1}^2-(h_{i+1}+h_{i+2})^2)\mathcal{U}_{ij}}{h_{i+1}h_{i+2}(h_{i+1}+h_{i+2})}.\\ \end{equation}\\ \textbf{Application to the $2^{nd}$ order upwind method on non uniform grids}\\ By analogy with the procedure to get the expression in \eqref{first-dev-2nd}, the term $p_i\frac{\partial\mathcal{U}_{ij}}{\partial x}$ is approximated as follows: \begin{itemize} \item[(i)] $p_i>0$ then \begin{equation*} p_i\frac{\partial \mathcal{U}_{ij}}{\partial x} \approx p_i \frac{(h_{i+1}+h_{i+2})^2\mathcal{U}_{i+1,j}+\Big[h_{i+1}^2-(h_{i+1}+h_{i+2})^2\Big]\mathcal{U}_{ij}-h_{i+1}^2\mathcal{U}_{i+2,j}}{h_{i+1}h_{i+2}(h_{i+1}+h_{i+2})} \end{equation*} \item[(ii)] $p_i<0$ then \begin{equation*} p_i\frac{\partial \mathcal{U}_{ij}}{\partial x} \approx p_i \frac{-(h_i+h_{i-1})^2\mathcal{U}_{i-1,j}+\Big[(h_{i}+h_{i-1})^2-h_i^2\Big]\mathcal{U}_{ij}+h_i^2\mathcal{U}_{i-2,j}}{h_ih_{i-1}(h_i+h_{i-1})} \end{equation*} \end{itemize} Similarly for the first derivative $\frac{\partial \mathcal{U}_{ij}}{\partial y}$, we have \begin{itemize} \item[(iii)] when $q_j>0$ then \begin{equation*} q_j\frac{\partial \mathcal{U}_{ij}}{\partial y} \approx q_j \frac{(l_{j+1}+l_{j+2})^2\mathcal{U}_{i,j+1}+\Big[l_{j+1}^2-(l_{j+1}+l_{j+2})^2\Big]\mathcal{U}_{ij}-l_{j+1}^2\mathcal{U}_{i,j+2}}{l_{j+1}l_{j+2}(l_{j+1}+l_{j+2})} \end{equation*} \item[(iv)] when $q_j<0$ \begin{equation*} q_j\frac{\partial \mathcal{U}_{ij}}{\partial y} \approx q_j \frac{-(l_j+l_{j-1})^2\mathcal{U}_{i,j-1}+\Big[(l_j+l_{j-1})^2-l_j^2\Big]\mathcal{U}_{ij}+l_j^2\mathcal{U}_{i,j-2}}{l_{j}l_{j-1}(l_{j}+l_{j-1})}. \end{equation*} \end{itemize} By combining $(i),(ii),(iii),(iv)$ in \eqref{quad-upw2}, for $i,j=2,\ldots,N-1$, we have \begin{eqnarray} \label{flux-upw2} & & \\ J^{ij} & = & \epsilon_{ij}\mathcal{U}_{i-2,j}+\eta_{ij}\mathcal{U}_{i-1,j}+\kappa_{ij}\mathcal{U}_{i,j-2} +\mu_{ij}\mathcal{U}_{i,j-1}+\Omega_{ij}\mathcal{U}_{ij}+\phi_{ij}\mathcal{U}_{i,j+1} +\Psi_{ij}\mathcal{U}_{i,j+2}+\Delta_{ij}\mathcal{U}_{i+1,j}+\Pi_{ij}\mathcal{U}_{i+2,j} \nonumber \end{eqnarray} where \begin{eqnarray*} & & \epsilon_{ij}=\frac{h_i^2}{h_ih_{i-1}(h_i+h_{i-1})}\min(p_i,0)~~~~~~~~~\eta_{ij}=-\frac{(h_i+h_{i-1})^2}{h_ih_{i-1}(h_i+h_{i-1})}\min(p_i,0)~~~~~~~~\kappa_{ij}=\frac{l_j^2}{l_jl_{j-1}(l_j+l_{j-1})}\min(q_j,0)\\ & & \\ & & \\ & & ~~~~~~~~~\mu_{ij}=-\frac{(l_j+l_{j-1})^2}{l_jl_{j-1}(l_j+l_{j-1})}\min(q_j,0)\\ & & \\ & & \\ & & \Omega_{ij}=\omega+\frac{(h_i+h_{i-1})^2-h_i^2}{h_{i}h_{i-1}(h_{i}+h_{i-1})}\min(p_i,0) +\frac{h_{i+1}^2-(h_{i+1}+h_{i+2})^2}{h_{i+1}h_{i+2}(h_{i+1}+h_{i+2})}\max(p_i,0)+\frac{(l_j+l_{j-1})^2-l_j^2}{l_jl_{j-1}(l_j+l_{j-1})}\min(q_j,0)\\ & & \\ & & ~~~~~~~+\frac{l_{j+1}^2-(l_{j+1}+l_{j+2})^2}{l_{j+1}l_{j+2}(l_{j+1}+l_{j+2})}\max(q_j,0)\\ & & \\ & & \phi_{ij}=\frac{(l_{j+1}+l_{j+2})^2}{l_{j+1}l_{j+2}(l_{j+1}+l_{j+2})}\max(q_j,0)~~~~~~~ \Psi_{ij}=-\frac{l_{j+1}^2}{l_{j+1}l_{j+2}(l_{j+1}+l_{j+2})}\max(q_j,0)~~~~~~~\\ & &\\ & &\\ & & \Delta_{ij}=\frac{(h_{i+1}+h_{i+2})^2}{h_{i+1}h_{i+2}(h_{i+1}+h_{i+2})}\max(p_i,0)~~~~~~~~~~ \Pi_{ij}=-\frac{h_{i+1}^2}{h_{i+1}h_{i+2}(h_{i+1}+h_{i+2})}\max(p_i,0). \end{eqnarray*} For the control volumes near the boundary of the study domain, two ghost points or the first order upwind method can be used. 
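The one-sided weights in \eqref{first-dev-2nd} are easy to check numerically. The following Python sketch is illustrative only: the spacings and the test polynomial are arbitrary choices (not data from the scheme), and $h_{i+1},h_{i+2}$ are interpreted as the distances $x_{i+1}-x_i$ and $x_{i+2}-x_{i+1}$ used in the Taylor expansion above. The stencil obtained from \eqref{sys-first-dev} must be exact for quadratics, which the script verifies.
\begin{verbatim}
import numpy as np

def onesided_weights(h1, h2):
    """Weights (a, b, c) such that U'(x_i) ~ a*U(x_i+h1+h2) + b*U(x_i+h1) + c*U(x_i),
    obtained from the 3x3 system matching the constant, first and second order terms."""
    a = -h1 / (h2 * (h1 + h2))
    b = (h1 + h2) / (h1 * h2)
    c = -(a + b)                       # consistency condition a + b + c = 0
    return a, b, c

# non-uniform spacings (arbitrary test values)
h1, h2 = 0.10, 0.17
a, b, c = onesided_weights(h1, h2)

# the one-sided stencil must differentiate any quadratic exactly
x0 = 1.3
poly  = lambda x: 2.0 - 0.7 * x + 0.4 * x ** 2
dpoly = lambda x: -0.7 + 0.8 * x

approx = a * poly(x0 + h1 + h2) + b * poly(x0 + h1) + c * poly(x0)
print(approx, dpoly(x0))               # the two values agree up to round-off
\end{verbatim}
The same weights, mirrored to the left neighbours, give the formulas used in items (ii) and (iv) above when $p_i<0$ or $q_j<0$.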
Finally, we have the following matrix form \begin{equation} \label{flux-up2-mat} J=A_{2up} \mathcal{U}+F_{2up} \end{equation} where \begin{equation*} J=\begin{bmatrix} J^{11}\\ J^{12}\\ \vdots\\ J^{1N}\\ J^{21}\\ J^{22}\\ \vdots\\ J^{2N}\\ \vdots\\ \vdots\\ J^{NN}\\ \end{bmatrix}, F_{2up}=\begin{bmatrix} F^{11}_{up}\\ F^{12}_{up}\\ \vdots\\ F^{1N}_{up}\\ F^{21}_{up}\\ F^{22}_{up}\\ \vdots\\ F^{2N}_{up}\\ \vdots\\ \vdots\\ F^{NN}_{up} \end{bmatrix},~~~~~~ \mathcal{U}=\begin{bmatrix} \mathcal{U}_{11}\\ \mathcal{U}_{12}\\ \vdots\\ \mathcal{U}_{1N}\\ \mathcal{U}_{21}\\ \mathcal{U}_{22}\\ \vdots\\ \mathcal{U}_{2N}\\ \vdots\\ \vdots\\ \mathcal{U}_{NN}\\ \end{bmatrix} \end{equation*} and \begin{equation*} A_{2up}=\begin{bmatrix} H_1 & P_1 & 0_N & 0 & \ldots & \ldots & \ldots & & 0_N & 0_N\\ Q_2 & H_2 & P_2& R_2 & 0_N & & & & & 0_N \\ W_3& Q_3 & H_3 & P_3 & R_3 & 0_N & & & & \vdots \\ 0_N & W_4 & Q_4 & H_4 & P_4 & R_4 & \ddots\\ 0_N & 0_N & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ \vdots & & & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0_N \\ & & & & \ddots & W_{N-2} & Q_{N-2} & H_{N-2} & P_{N-2} & R_{N-2} \\ \vdots & & & & & \ddots & W_{N-1} & Q_{N-1} & H_{N-1} & P_{i,N-1}\\ 0_N & \dots & \ldots & \ldots & & \ldots & 0_N & 0_N & Q_{N} & H_{N} \\ \end{bmatrix} \end{equation*} where $H_1,H_N$ are tridiagonal matrices, for $i=2,\ldots,N-1, ~H_i$ are penta-diagonal matrices and $P_i,R_i,W_i,Q_i$ are diagonal matrices, and $F_{up}$ is a vector coming from the boundary conditions. A structure of the advection matrix using the second order upwind method can be viewed in \figref{upw22}. \begin{figure} \caption{A structure of the advection matrix using 2nd order upwind method.} \label{upw22} \end{figure} As for the first order upwinding, combining the MPFA method \eqref{flux-mpfa-mat} and the second order upwind method \eqref{flux-up2-mat}, we have \begin{equation} \label{mpfa-up2} \frac{d\mathcal{U}}{d\tau}=A\mathcal{U}+F \end{equation} \begin{eqnarray*} A=L^{-1}\Bigg(A_{mp}+A_{2up}+A_L\Bigg)~~~~~~F=L^{-1}\Bigg(F_{mp}+F_{2up}\Bigg), \end{eqnarray*} where $A_L$ is a diagonal matrix of size $N^2\times N^2$ coming from the discretisation of \eqref{linearterm}. The elements of $A_L$ are $h_il_j\lambda$ for $i,j=1,...,N$ with $\lambda$ given in \eqref{conservation}. The matrix $L$ is also a diagonal matrix of size $N^2\times N^2$ whose diagonal elements are $L_{ii}=h_il_i$ for $i=1,\ldots,N$ Actually, the PDE \eqref{twoop} is degenerated when the stock price is approaching zero $(x\rightarrow 0, y\rightarrow 0)$ which has an adverse impact on the accuracy of the numerical method. However, to overcome the degeneracy, we are going to apply a fitted finite volume method in the degeneracy region $(x\rightarrow 0, y\rightarrow 0)$. More details about this fitted method is given in the next section. \subsection{Fitted Multi-Point Flux Approximation } The fitted Multi-Point Flux Approximation is a combination of the fitted finite volume method ( see \cite{huang2006fitted,huang2009convergence}) and the Multi-Point Flux Approximation method. The fitted finite volume helps to deal with the degeneracy of the PDE \eqref{twoop}. We approximate simultaneously the diffusion term and the convection term in the degeneracy region by solving a two-points boundary problem. 
In the region where the PDE is not degenerated, we apply the standard Multi-point flux approximation to the diffusion term as described in the previous section.\\ Let us set \begin{equation} \label{diff-conv} k(\mathcal{U})=\nabla \cdot (\mathbf{M}\nabla \mathcal{U}+ f\mathcal{U}) \end{equation} where $\mathbf{M}$ and $f$ are defined in \eqref{conservation}. Thereby, we have the following decomposition over a control volume $\mathcal{C}_{ij}$, for $~~i,j=1,...,N$ \begin{eqnarray} \label{diff-conv-int} \int_{\mathcal{C}_{ij}}\nabla k(\mathcal{U})d\mathcal{C} & = & \int_{\mathcal{C}_{ij}}\nabla \cdot (M\nabla\mathcal{U}+f\mathcal{U})d\mathcal{C} \nonumber\\ & &\nonumber \\ & = & \int_{\partial \mathcal{C}_{ij}}(M\nabla\mathcal{U}+f\mathcal{U}) \cdot\vec n d\partial\mathcal{C}\nonumber\\ & & \nonumber\\ & = & \int_{(x_{i+\frac{1}{2}},y_{j-\frac{1}{2}})}^{(x_{i+\frac{1}{2}},y_{j+\frac{1}{2}})}\Bigg(m_{11}\frac{\partial \mathcal{U}}{\partial x}+m_{12}\frac{\partial \mathcal{U}}{\partial y}+p\mathcal{U}\Bigg)dy\\ & & \nonumber\\ & & -\int_{(x_{i-\frac{1}{2}},y_{j-\frac{1}{2}})}^{(x_{i-\frac{1}{2}},y_{j+\frac{1}{2}})}\Bigg(m_{11}\frac{\partial \mathcal{U}}{\partial x}+m_{12}\frac{\partial \mathcal{U}}{\partial y}+p\mathcal{U}\Bigg)dy\nonumber\\ & & \nonumber\\ & & +\int_{(x_{i-\frac{1}{2}},y_{j+\frac{1}{2}})}^{(x_{i+\frac{1}{2}},y_{j+\frac{1}{2}})}\Bigg(m_{21}\frac{\partial \mathcal{U}}{\partial x}+m_{22}\frac{\partial \mathcal{U}}{\partial y}+q\mathcal{U}\Bigg)dx\nonumber\\ & & \nonumber\\ & & -\int_{(x_{i-\frac{1}{2}},y_{j-\frac{1}{2}})}^{(x_{i+\frac{1}{2}},y_{j-\frac{1}{2}})}\Bigg(m_{21}\frac{\partial \mathcal{U}}{\partial x}+m_{22}\frac{\partial \mathcal{U}}{\partial y}+q\mathcal{U}\Bigg)dx \nonumber \end{eqnarray} with $\vec{n}$ is the outward unit normal vector, $m_{11},m_{12},m_{21},m_{22}$ the coefficients of the matrix $\mathbf{M}$ and $p,q$ coefficients of vector $f$ defined in \eqref{conservation}.\\ In their work, \cite{huang2006fitted,huang2009convergence} showed how the fitted finite method is used to approximate each of the integral in \eqref{diff-conv-int}. \subsubsection{Fitted Finite volume method in the degeneracy region} Following \cite{huang2006fitted}, the fitted finite volume method is used to approximate the flux through the edges which are effectively in the degeneracy region notably the western edge of the control volume $\mathcal{C}_{1,j}$ for $j=1,\ldots,N$ and the southern edge of the control volume $\mathcal{C}_{i,1}$ for $i=1,\ldots,N$ .\\ Thereby, the flux through the southern edge of the control volume $\mathcal{C}_{i,1}$ for $i=1,\ldots,N$ is calculated as follows.\\ The fitted finite volume method is applied to approximate the integral along the southern edge of control volume $\mathcal{C}_{i,1}$. The idea is to approximate the integral over $[x_{i-\frac{1}{2}};x_{i+\frac{1}{2}}]$ by a constant. 
We start by applying the mid-quadrature rule as follows: \begin{equation} \label{south-appr} \int_{(x_{i-\frac{1}{2}},y_{\frac{1}{2}})}^{(x_{i+\frac{1}{2}},y_{\frac{1}{2}})}\Bigg(m_{21}\frac{\partial \mathcal{U}}{\partial x}+m_{22}\frac{\partial \mathcal{U}}{\partial y}+q\mathcal{U}\Bigg)dx\approx \Bigg(m_{21}\frac{\partial \mathcal{U}}{\partial x}+m_{22}\frac{\partial \mathcal{U}}{\partial y}+q\mathcal{U}\Bigg)_{\vert_{x_i,y_{\frac{1}{2}}}}\cdot h_i \end{equation} Besides we have \begin{equation} m_{21}\frac{\partial \mathcal{U}}{\partial x}+m_{22}\frac{\partial \mathcal{U}}{\partial y}+q\mathcal{U}=y\Bigg(ey\frac{\partial \mathcal{U}}{\partial y}+h'\frac{\partial \mathcal{U}}{\partial x}+k\mathcal{U}\Bigg) \end{equation} with $e=\frac{1}{2}\sigma_2^2,~~~k=r-\sigma_2^2-\frac{1}{2}\rho\sigma_1\sigma_2$ and $h'=\frac{1}{2}\rho\sigma_1\sigma_2x$.\\ \\ We want to approximate \begin{equation*} f(\mathcal{U})=ey\frac{\partial \mathcal{U}}{\partial y}+k\mathcal{U} \end{equation*} by a linear function over $I_{y_1}=(0,y_{1})$ satisfying the following two-points boundary value problem \begin{align} \label{two-bvp} \left\lbrace \begin{array}{l} f'(\mathcal{U})~~~~=\Bigg(ey\frac{\partial \mathcal{U}}{\partial y}+k\mathcal{U}\Bigg)'=K_1\\ \\ \mathcal{U}(x_i,0) =\mathcal{U}_{i,0}~~~~~~~~~\mathcal{U}(x_i,y_1)=\mathcal{U}_{i,1} \end{array}\right. \end{align} By solving this problem we get \begin{eqnarray} \label{sol-bvp} \mathcal{U}=\mathcal{U}_{i,0}+(\mathcal{U}_{i,1}-\mathcal{U}_{i,0})\frac{y}{y_{1}} \end{eqnarray} Thereby, by using \eqref{south-appr}, \eqref{two-bvp}, \eqref{sol-bvp} and the forward difference for approximating the first partial derivative $\frac{\partial \mathcal{U}}{\partial x}$ we get \begin{equation} \label{south-appr-int} \int_{(x_{i-\frac{1}{2}},y_{\frac{1}{2}})}^{(x_{i+\frac{1}{2}},y_{\frac{1}{2}})}\Bigg(m_{21}\frac{\partial \mathcal{U}}{\partial x}+m_{22}\frac{\partial \mathcal{U}}{\partial y}+q\mathcal{U}\Bigg)dx \approx \frac{1}{2}y_1\Big[\frac{1}{2}h_i(e+k)-h_i'\Big]\mathcal{U}_{i,1}+\frac{1}{2}h_i'y_1\mathcal{U}_{i+1,1}-\frac{1}{4}y_1h_i(e-k)\mathcal{U}_{i,0} \end{equation} where \begin{eqnarray*} e=\frac{1}{2}\sigma_2^2,~~~~~~~~~k=r-\sigma_2^2-\frac{1}{2}\rho\sigma_1\sigma_2 ~~~~~~~h_i'=\frac{1}{2}\rho\sigma_1\sigma_2x_i~~~~~~h_i=x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}} \end{eqnarray*} Similarly, for the western edge of the control volume $\mathcal{C}_{1,j},~~for~~j=1,...,N$, we have \begin{equation} \label{west-appr-int} \int_{(x_{\frac{1}{2}},y_{j-\frac{1}{2}})}^{(x_{\frac{1}{2}},y_{j+\frac{1}{2}})}\Bigg(m_{11}\frac{\partial \mathcal{U}}{\partial x}+m_{12}\frac{\partial \mathcal{U}}{\partial y}+p\mathcal{U}\Bigg)dy \approx \frac{1}{2}x_1\Big[\frac{1}{2}l_j(a+b)-d_j\Big]\mathcal{U}_{1,j}+\frac{1}{2}d_jx_1\mathcal{U}_{1,j+1}-\frac{1}{4}l_jx_1(a-b)\mathcal{U}_{0,j} \end{equation} with \begin{eqnarray*} a=\frac{1}{2}\sigma_1^2~~~~~~~~~b=r-\sigma_1^2-\frac{1}{2}\rho\sigma_1\sigma_2~~~~~~~~~d_j=\frac{1}{2}\rho\sigma_1\sigma_2y_j~~~~~~~~l_j=y_{j+\frac{1}{2}}-y_{j-\frac{1}{2}} \end{eqnarray*} \subsubsection{Fitted Multi-Point Flux Approximation } The fitted Multi-Point Approximation method consists of calculating the flux through the edges which are totally in the degeneracy region using the fitted finite volume method as described in the previous paragraph. For the edges which are not totally in the degeneracy region, the flux is approximated using simultaneously the Multi-point flux approximation and the upwind methods (first order or second order). 
On the other hand, the MPFA method and the upwind methods are used to approximate the diffusion term and the convection term, respectively, over the control volumes which are not in the degeneracy region. \\ Considering \eqref{diff-conv-int}, in the control volume $\mathcal{C}_{11}$ the southern and western edges are in the degeneracy region, while the northern and eastern edges are not. Thereby, the fluxes through the southern and western edges are approximated using the fitted finite volume method, while the fluxes through the eastern and northern edges are approximated using the MPFA method combined with the upwind method. This gives \begin{eqnarray} \label{appr-intc11} \int_{{\mathcal{C}}_{11}}\nabla k(\mathcal{U}) & \approx & a_{11}^1\mathcal{U}_{11}+b_{11}^1\mathcal{U}_{21}+c_{11}^1\mathcal{U}_{22}+d_{11}^1\mathcal{U}_{12} +\omega_{11}^1\mathcal{U}_{02}+\phi_{11}^1\mathcal{U}_{01} \nonumber\\ & & \nonumber\\ & & +r_{11}^1\mathcal{U}_{10}+s_{11}^1\mathcal{U}_{20} \end{eqnarray} with \begin{eqnarray*} && a_{11}^1=T_{11}^{22}+T_{23}^{21}+T_{31}^{22}+T_{42}^{12}+l_1\max(f_x^2,0) +h_1\max(f_y^2,0)-\frac{1}{2}x_1\Big[\frac{1}{2}l_1(a+b)-d_1\Big]\\ & & \\ & & ~~~~~~~~~-\frac{1}{2}y_1\Big[\frac{1}{2}h_1(e+k)-h_1'\Big]\\ & & \\ & & b_{11}^1=T_{12}^{22} +T_{24}^{21}+T_{32}^{22}+l_1\min(f_x^2,0)-\frac{1}{2}h_1'y_1;~~~~~~~~c^1_{11}=T_{14}^{22}+T_{34}^{22}\\ & & \\ & & d_{11}^1=T_{13}^{22}+T_{33}^{22}+T_{44}^{12}+h_1\min(f_y^2,0)-\frac{1}{2}d_1x_1;~~~~~~~~ \omega_{11}^1=T_{43}^{12}~~~~~~~~~~~\\ & & \\ & & \phi_{11}^1=T_{41}^{12}+\frac{1}{4}l_1x_1(a-b)~~~~~~~~~~~r_{11}^1=T_{21}^{21}+\frac{1}{4}h_1y_1(e-k)~~~~~~~s_{11}^1=T_{22}^{21} \end{eqnarray*} Similarly, for the control volume $\mathcal{C}_{1,j}$, $j=1,\ldots,N$, we have \begin{eqnarray} \label{appr-intc1j} \int_{{\mathcal{C}}_{1,j}}\nabla k(\mathcal{U}) & \approx & a_{1,j}^1\mathcal{U}_{1,j}+b_{1,j}^1\mathcal{U}_{2,j}+c_{1,j}^1\mathcal{U}_{2,j+1}+d_{1,j}^1\mathcal{U}_{1,j+1} +\gamma_{1,j}^1\mathcal{U}_{1,j-1}+\lambda_{1,j}^1\mathcal{U}_{2,j-1} \nonumber\\ & & \nonumber\\ & & +\omega_{1,j}^1\mathcal{U}_{0,j+1}+\phi_{1,j}^1\mathcal{U}_{0,j} +\Upsilon_{1,j}^1\mathcal{U}_{0,j-1} \end{eqnarray} with \begin{eqnarray*} & & a_{1,j}^1=T_{11}^{2,j+1}+T_{23}^{2,j}+T_{31}^{2,j+1}+T_{42}^{1,j+1}-T_{33}^{2,j}-T_{44}^{1,j}-\frac{1}{2}x_1\Big(\frac{1}{2}l_j(a+b)-d_j\Big)\\ & & \\ & &~~~~~~~~~+l_j\max(f_x^2,0)+h_1\max(f^{j+1}_y,0)-h_1\min(f^j_y,0)\\ & & \\ & & b_{1,j}^1=T_{12}^{2,j+1}+T_{24}^{2,j}+T_{32}^{2,j+1}-T_{34}^{2,j}+l_j\min(f_x^2,0);~~~~~~~~~~~~c_{1,j}^1=T_{14}^{2,j+1}+T_{34}^{2,j+1};\\ & & \\ & &d_{1,j}^1=T_{13}^{2,j+1}+T_{33}^{2,j+1}+T_{44}^{1,j+1}+h_1\min(f_y^{j+1},0)-\frac{1}{2}d_jx_1\\ & & \\ & & \gamma_{1,j}^1=T_{21}^{2,j}-T_{31}^{2,j}-T_{42}^{1,j}-h_1\max(f_y^{j},0);~~~~~~~~~~~~~\lambda_{1,j}^1=T_{22}^{2,j}-T_{32}^{2,j};\\ & & \\ & & \omega_{1,j}^1=T_{43}^{1,j+1};~~~~~~~~~~~~~~\phi_{1,j}^1=T_{41}^{1,j+1}-T_{43}^{1,j}+\frac{1}{4}l_jx_1(a-b);~~~~~~~~ \Upsilon_{1,j}^1=-T_{41}^{1,j}; \end{eqnarray*} For the control volume $\mathcal{C}_{i,1}$, $i=2,\ldots,N$, we have: \begin{eqnarray} \label{appr-intci1} \int_{{\mathcal{C}}_{i,1}}\nabla k(\mathcal{U}) & \approx & a_{i,1}^1\mathcal{U}_{i,1}+b_{i,1}^1\mathcal{U}_{i+1,1}+c_{i,1}^1\mathcal{U}_{i+1,2}+d_{i,1}^1\mathcal{U}_{i,2}+e_{i,1}^1\mathcal{U}_{i-1,2} +\alpha_{i,1}^1\mathcal{U}_{i-1,1}+t_{i,1}^1\mathcal{U}_{i-1,0} \nonumber\\ & & \nonumber\\ & & +r_{i,1}^1\mathcal{U}_{i,0}+s_{i,1}^1\mathcal{U}_{i+1,0} \end{eqnarray} with \begin{eqnarray*} & &
a_{i,1}^1=T_{11}^{i+1,2}+T_{23}^{i+1,1}+T_{31}^{i+1,2}+T_{42}^{i,2}-T_{12}^{i,2}-T_{24}^{i,1} -\frac{1}{2}y_1\Big[\frac{1}{2}h_i(e+k)-h_i'\Big]\\ & & \\ & & ~~~~~~~~~~~+l_1\max(f_x^{i+1},0)+h_i\max(f_y^2,0)\Big)-l_1\min(f_x^{i},0)\\ & & \\ & & b_{i,1}^1=T_{12}^{i+1,2}+T_{24}^{i+1,1}+T_{32}^{i+1,2}+l_i\min(f_x^{i+1},0)-\frac{1}{2}h_i'y_1;~~~~~~~~~~~~~~~c_{i,1}^1=T_{14}^{i+1,2}+T_{34}^{i+1,2}\\ & & \\ & & d_{i,1}^1=T_{13}^{i+1,2}+T_{33}^{i+1,2}+T_{44}^{i,2}-T_{14}^{i,2} +h_i\min(f_y^2,0);~~~~~~~~~~~~~~~~~~~~~~e_{i,1}^1=T_{43}^{i,2}-T_{13}^{i,2}\\ & & \\ & & \alpha_{i,1}^1=T_{41}^{i,2}-T_{11}^{i,2}-T_{23}^{i,1}-l_1\max(f_x^{i},0);~~~~~~~~~~~t_{i,1}^1=-T_{21}^{i,1};\\ & & \\ & & r_{i,1}^1=T_{21}^{i+1,1}-T_{22}^{i,1}+\frac{1}{4}y_1h_i(e-k)~~~~~~~~s_{i,1}^1=T_{22}^{i+1,1} \end{eqnarray*} As we already mentioned, for the control volumes which are not in the degeneracy region, we use the multi-Point flux approximation to approximate the diffusion term and the upwind methods (first and second order) to approximate the convection term. So by combining as before, we obtain the following ODE \begin{equation} \label{fit-mpfa-up1} \frac{d\mathcal{U}}{d\tau}=A\mathcal{U}+F \end{equation} where \begin{equation*} \mathcal{U}=\begin{bmatrix} \mathcal{U}_{11}\\ \mathcal{U}_{12}\\ \vdots\\ \mathcal{U}_{1N}\\ \mathcal{U}_{21}\\ \mathcal{U}_{22}\\ \vdots\\ \mathcal{U}_{2N}\\ \vdots\\ \vdots\\ \mathcal{U}_{N,1}\\ \mathcal{U}_{N,2}\\ \vdots\\ \mathcal{U}_{NN} \end{bmatrix} ~~~~A=L^{-1}\Big(Z+A_L\Big)~~ \end{equation*} with $F$ the vector of boundary conditions, $A_L$ is a diagonal matrix of size $N^2\times N^2$ coming from the discretisation of \eqref{linearterm}. The elements of $A_L$ are $h_il_j\lambda$ for $ i,j=1,...,N$ with $\lambda$ given in \eqref{conservation}. The matrix $L$ is also a diagonal matrix of size $N^2\times N^2$ whose diagonal elements are $h_il_j$ for $i,j=1,\ldots,N$ and \begin{equation*} Z=\begin{bmatrix} D_1 & K_1 & 0_N & \ldots & \ldots & \ldots & \ldots & 0_N\\ L_2 & D_2 & K_2 & \ddots & & & & \vdots \\ 0_N & L_3 & D_3 & K_3 & \ddots & & & \vdots\\ \vdots & \ddots & L_4 & D_4 & K_4 & \ddots & & \vdots\\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & & \ddots& \ddots & \ddots & \ddots & 0_N& \\ \vdots & & & & \ddots & L_{N-1} & D_{N-1} & K_{N-1} \\ 0_N & \ldots & \ldots & \ldots & \ldots & 0_N & L_N & D_N \end{bmatrix} \end{equation*} The fitted matrix $Z$ uses the first order upwind method. The matrices $D_i,K_i,L_i$ are tri-diagonal matrices defined as follows. 
For $i=1,N$ \begin{eqnarray*} k=1,\ldots,N~~(D_i)_{kk}=a_{1,k}^1~~~~~~~~~k=1,\ldots,N-1~~(D_i)_{k,k+1}=d_{1,k}^1,~~~~~~~~~~~~ k=2,\ldots,N~~(D_i)_{k,k-1}=\gamma_{1,k}^1\\ & & \\ k=1,\ldots,N~~(K_1)_{kk}=b_{1,k}^1~~~~~~~~~k=1,\ldots,N-1~~(K_1)_{k,k+1}=c_{1,k}^1,~~~~~~~~~~~~ k=2,\ldots,N~~(K_1)_{k,k-1}=\lambda_{1,k}^1\\ & & \\ (L_N)_{11}=\alpha_{N,1}^1~~~(L_N)_{12}=e_{N,1}^1~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ & & \\ k=2,\ldots,N~~(L_N)_{kk}=\alpha_{N,k}+\epsilon_{N,k}~~~~~~~~~k=1,\ldots,N-1~~(L_N)_{k,k+1}=e_{N,k},~~~~~~~~~k=2,\ldots,N~~(L_N)_{k,k-1}=\beta_{N,k} \end{eqnarray*} For $i=2,\ldots,N-1$ \begin{eqnarray*} & & \\ & & ~(D_i)_{11}=a_{i,1}^1~;~(D_i)_{12}=d_{i,1}^1;~~~~~(K_i)_{11}=b_{i,1}^1~ ;~(K_i)_{12}=c_{i,1}^1~~~~(L_i)_{11}=\alpha_{i,1}~;~(L_i)_{12}=e_{i,1}^1\\ & & \\ & & k=2,\ldots,N~~~~~(D_i)_{kk}=a_{i,k}+\Omega_{i,k};~~~~~~~ (K_i)_{kk}=b_{i,k}+\psi_{i,k};~~~~~~~~ (L_i)_{kk}=\alpha_{i,k}+\epsilon_{i,k}\\ & & \\ & & k=2,\ldots,N-1~~~~~(D_i)_{k,k+1}=d_{i,k}+\phi_{i,k};~~~~~~~ (K_i)_{k,k+1}=c_{i,k};~~~~~~~~ (L_i)_{k,k+1}=e_{i,k}\\ & & \\ & & k=2,\ldots,N~~~~~(D_i)_{k,k-1}=\gamma_{i,k}+\mu_{i,k};~~~~~~~ (K_i)_{k,k-1}=\lambda_{i,k};~~~~~~~~ (L_i)_{k,k-1}=\beta_{i,k} \end{eqnarray*} where all the elements $a_{i,j}^1,b_{i,j}^1,c_{i,j}^1,d_{i,j}^1,e_{i,j}^1,\gamma_{i,j}^1,\lambda_{i,j}^1$ are defined in \eqref{appr-intc11},\eqref{appr-intc1j},\eqref{appr-intci1} and the others elements are defined in \eqref{flux-mpfa} and \eqref{flux-up1}. Similarly, combining the fitted finite volume method, the MPFA and the second order upwind method we have \begin{equation} \label{fit-mpfa-up2} \frac{d\mathcal{U}}{d\tau}=A\mathcal{U}+F \end{equation} where \begin{equation*} \mathcal{U}=\begin{bmatrix} \mathcal{U}_{11}\\ \mathcal{U}_{12}\\ \vdots\\ \mathcal{U}_{1N}\\ \mathcal{U}_{21}\\ \mathcal{U}_{22}\\ \vdots\\ \mathcal{U}_{2N}\\ \vdots\\ \vdots\\ \mathcal{U}_{N,1}\\ \mathcal{U}_{N,2}\\ \vdots\\ \mathcal{U}_{NN} \end{bmatrix} ~~~~A=L^{-1}\Big(Y+A_L\Big)~~ \end{equation*} with $G$ the vector of boundary conditions, $A_L$ is a diagonal matrix of size $N^2\times N^2$ coming from the discretisation of \eqref{linearterm}. The elements of $A_L$ are $h_il_j\lambda$ for $i,j=1,...,N$ with $\lambda$ given in \eqref{conservation}. The matrix L is also a diagonal matrix of size $N^2\times N^2$ whose elements are $h_il_j$ for $i,j=1,\ldots,N$ and \begin{equation*} Y=\begin{bmatrix} H_1 & P_1 & 0_N & 0 & \ldots & \ldots & \ldots & & 0_N & 0_N\\ Q_2 & H_2 & P_2& R_2 & 0_N & & & & & 0_N \\ W_3& Q_3 & H_3 & P_3 & R_3 & 0_N & & & & \vdots \\ 0_N & W_4 & Q_4 & H_4 & P_4 & R_4 & \ddots\\ 0_N & 0_N & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ \vdots & & & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0_N \\ & & & & \ddots & W_{N-2} & Q_{N-2} & H_{N-2} & P_{N-2} & R_{N-2} \\ \vdots & & & & & \ddots & W_{N-1} & Q_{N-1} & H_{N-1} & P_{i,N-1}\\ 0_N & \dots & \ldots & \ldots & & \ldots & 0_N & 0_N & Q_{N} & H_{N} \\ \end{bmatrix} \end{equation*} The elements of matrix Y are matrices. Indeed $0_N$ is a zeros matrix of size $N\times N$. 
The matrices $H_i,P_i,Q$ are tri-diagonal matrices and $W_i,R_i$ are diagonal matrices defined as follows: \begin{eqnarray*} & & (H_1)_{11}=a_{11}^1~;~(H_1)_{12}=d_{11}^1~~~~~~(P_1)_{11}=b_{11}^1~;~ (P_1)_{12}=c_{11}^1\\ & & \\ & & k=2,\ldots, N~~(H_1)_{kk}=a_{1,k}^1;~~~~~k=2,\ldots, N-1~~(H_1)_{k,k+1}=d_{1,k}^1;~~~~~~k=2,\ldots ,N ~~(H_1)_{k,k-1}=\gamma_{1,k}^1\\ & & \\ & & k=2,\ldots, N~~(P_1)_{kk}=b_{1,k}^1;~~~~~k=2,\ldots, N-1~~(P_1)_{k,k+1}=c_{1,k}^1;~~~~~~k=2,\ldots ,N ~~(P_1)_{k,k-1}=\lambda_{1,k}^1 \end{eqnarray*} For $i=2,\ldots,N-1$ \begin{eqnarray*} & & ~(H_i)_{11}=a_{i,1}^1~;~(H_i)_{12}=d_{i,1}^1;~~~~~(P_i)_{11}=b_{i,1}^1+\Delta_{i,1}~ ;~(P_i)_{12}=c_{i,1}^1~~~~(Q_i)_{11}=\alpha_{i,1}+\eta_{i,1}~;~(Q_i)_{12}=e_{i,1}^1\\ & & \\ & & k=2,\ldots, N,~~~(H_i)_{kk}=a_{i,k}+\Omega_{i,k};~~~~~~~~ (P_i)_{kk}=b_{i,k}+\Delta_{i,k};~~~~~~~~~ (Q_i)_{kk}=\alpha_{i,k}+\eta_{i,k}~~~~~~ \\ & & \\ & & k=2,\ldots,N-1, ~~~~~ (H_i)_{k,k+1}=d_{i,k}+\phi_{i,k};~~~~~~~ (P_i)_{k,k+1}=c_{i,k};~~~~~~~~ (Q_i)_{k,k+1}=e_{i,k}\\ & & \\ & & k=2,\ldots,N,~~~~~(H_i)_{k,k-1}=\lambda_{i,k}+\mu_{i,k};~~~~~~~ (P_i)_{k,k-1}=\lambda_{i,k};~~~~~~~~ (Q_i)_{k,k-1}=\beta_{i,k}\\ & & \\ & & k=2,\ldots, N-2, ~~~~~(H_i)_{k,k+2}=\Psi_{i,k};~~~~~~~~~~~~~~~~~k=3,\ldots,N~~~(H_i)_{k,k-2}=\kappa_{i,k}\\ & & \\ \end{eqnarray*} and \begin{eqnarray*} & &~~ (R_i)_{kk}=\Pi_{ik},~~i=2,\ldots, N-2,\,\,k=2,\ldots, N-1\\ & &~ (W_i)_{kk}=\epsilon_{ik},~~~i=3,\ldots, N-1, ~~=2,\ldots, N-1, \end{eqnarray*} where all the elements $a_{i,j}^1,b_{i,j}^1,c_{i,j}^1,d_{i,j}^1,e_{i,j}^1,\gamma_{i,j}^1,\lambda_{i,j}^1$ are defined \eqref{appr-intc11},\eqref{appr-intc1j},\eqref{appr-intci1}, and the others elements are defined in \eqref{flux-mpfa} and \eqref{flux-upw2}. \section{Time discretization} Let us consider the ODE stemming from the spatial dicretization and given by \eqref{mpfa-up1},\eqref{mpfa-up2},\eqref{fit-mpfa-up1} and \eqref{fit-mpfa-up2} \begin{equation*} \frac{d\mathcal{U}}{d\tau}=A\mathcal{U}+F \end{equation*} Using the $\theta$-method for the time discretization, we have \begin{eqnarray} \frac{\mathcal{U}^{n+1}-\mathcal{U}^n}{\Delta\tau}=\theta\Big(A\mathcal{U}^{n+1}+F^{n+1}\Big)+(1-\theta)\Big(A\mathcal{U}^n+F^n\Big) \end{eqnarray} Hence \begin{equation} \mathcal{U}^{n+1}=\Big(I-\theta\Delta\tau A\Big)^{-1}\Bigg[\Big(I+(1-\theta)\Delta\tau A\Big)\mathcal{U}^n+\theta \Delta \tau F^{n+1}+(1-\theta)\Delta\tau F^n\Bigg] \end{equation} with \begin{eqnarray*} & & \mathcal{U}^n=\begin{bmatrix} \mathcal{U}_{11}(\tau_n)~~ \mathcal{U}_{12}(\tau_n)~~ \ldots~~ \mathcal{U}_{1N}(\tau_n)~~ \mathcal{U}_{21}(\tau_n)~~ \mathcal{U}_{22}(\tau_m)~~ \ldots~~ \mathcal{U}_{2N}(\tau_n)~~ \ldots~~ \mathcal{U}_{N,1}(\tau_n)~~ \mathcal{U}_{N,2}(\tau_n)~ \ldots\ldots~ \mathcal{U}_{NN}(\tau_n) \end{bmatrix}^T\\ & & \\ & & F^n=F(\tau_n),\,\,\;\;\tau_n=n\Delta \tau. \end{eqnarray*} \section{Numerical experiments} In this section, we perform some numerical simulations and compare different numerical schemes developed in this work. More precisely, we compare the novel fitted MPFA method combined to the upwind methods, first method (fitted MPFA-$1^{st}$ upw) and second order (fitted MPFA-$2^{nd}$ upw), with the fitted finite volume method by \cite{huang2006fitted} (fitted FV) and the standard MPFA method combined to the upwind methods, first (MPFA-$1^{st}$ upw) and second order (MPFA-$2^{nd}$ upw). 
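All schemes below are advanced in time with the $\theta$-method of the previous section. As a minimal illustration (not the actual assembled solver), the following Python sketch performs one $\theta$-step of the semi-discrete system $\frac{d\mathcal{U}}{d\tau}=A\mathcal{U}+F$; the matrix $A$, the vector $F$ and the initial data are placeholders standing in for the (fitted) MPFA--upwind operators, and a dense solve is used where a sparse factorisation would be preferred in practice.
\begin{verbatim}
import numpy as np

def theta_step(U, A, F_new, F_old, dtau, theta=0.5):
    """One step of the theta-scheme:
    (I - theta*dtau*A) U^{n+1} = (I + (1-theta)*dtau*A) U^n
                                 + dtau*(theta*F^{n+1} + (1-theta)*F^n)."""
    n = U.size
    I = np.eye(n)
    lhs = I - theta * dtau * A
    rhs = (I + (1.0 - theta) * dtau * A) @ U \
          + dtau * (theta * F_new + (1.0 - theta) * F_old)
    return np.linalg.solve(lhs, rhs)

# placeholder data (NOT the assembled MPFA matrices): a small stable system
rng = np.random.default_rng(0)
n = 16
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
F = rng.standard_normal(n)
U = rng.standard_normal(n)

dtau, n_steps = 1.0 / 100, 50
for _ in range(n_steps):
    U = theta_step(U, A, F, F, dtau, theta=0.5)  # Crank-Nicolson when theta = 1/2
\end{verbatim}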
The analytical solution of the PDE (\ref{twoop}) is well known (see \cite{haug2007complete} ) and given as \begin{eqnarray} \label{analsol} C(x,y,K,T) & = & xe^{-rT}M(y_1,d;\rho_1)+ye^{-rT}M(y_2,-d+\sigma\sqrt{T},\rho_2) \nonumber\\ & & \\ & & -Ke^{-rT}\times\left(1-M(-y_1+\sigma_1\sqrt{T},-y_2+\sigma_2\sqrt{T},\rho)\right) \nonumber\ \end{eqnarray} where \begin{eqnarray*} & & d=\frac{\ln(x/y)+(b_1-b_2+\sigma_1^2/2)T}{\sigma\sqrt{T}},\\ & & \\ & & y_1 = \frac{\ln(x /K)+(b_1+\sigma_1^2/2)T}{\sigma_1\sqrt{T}},~~~~~~y_2=\frac{\ln(y/K)+(b_1+\sigma_2^2/2)T}{\sigma_2\sqrt{T}},\\ & & \\ & & \sigma=\sqrt{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2},~~~~~\rho_1=\frac{\sigma_1-\rho\sigma_2}{\sigma}~~~~~~~\rho_2=\frac{\sigma_2-\rho\sigma_1}{\sigma}, \end{eqnarray*} and \begin{equation*} M(a,b,\rho)=\frac{1}{2\pi\sqrt{1-\rho^2}}\int_{-\infty}^a\int_{-\infty}^b \exp\left(-\frac{u^2-2\rho uv+v^2}{2(1-\rho^2)}\right)dudv. \end{equation*} Note that in all our numerical schemes, the Dirichlet Boundary condition is used with the value equal to the analytical solution. \begin{figure} \caption{ Analytical solution for option price at final time $T$. The computational domain of the problem is $\Omega=[0;300]\times [0;300]\times[0;T]$ with $T=1/12$, $K=100$, the volatilities $\sigma_1=\sigma_2=0.3$. The correlation coefficient is $\rho=0.5$, the risk free interest $r=0.03$ and $\Delta \tau=1/100$.} \label{fig1} \end{figure} The graphs of option price with different methods are given in \figref{fig1},\figref{fig2} and \figref{fig3} \begin{figure} \caption{ Option price for MPFA-upwind methods at final time $T$. The computational domain of the problem is $\Omega=[0;300]\times [0;300]\times[0;T]$ with $T=1/12$, $K=100$, the volatilities $\sigma_1=\sigma_2=0.3$. The correlation coefficient is $\rho=0.5$, the risk free interest $r=0.03$ and $\Delta \tau=1/100$.} \label{fig2} \end{figure} \begin{figure} \caption{Option price for fitted MPFA-upwind methods at final time $T$.The computational domain of the problem is $\Omega=[0;300]\times [0;300]\times[0;T]$ with $T=1/12$, $K=100$, the volatilities $\sigma_1=\sigma_2=0.3$. The correlation coefficient is $\rho=0.5$, the risk free interest $r=0.03$ and $\Delta \tau=1/100$. } \label{fig3} \end{figure} In this paragraph, we consider the four numerical methods illustrated in the previous sections and the fitted finite volume method \cite{huang2006fitted}. We evaluate the error of these numerical method with respect to the analytical solution \eqref{analsol}. The $L^2$-norm is used to compute the error as follows: \begin{equation} \label{error-l2} err=\frac{\sqrt{\sum_{i,j=1}^N meas(\mathcal{C}_{ij}) \big(\mathcal{U}_{ij}-U_{ij}^{ana}\big)^2}}{\sqrt{\sum_{i,j=1}^n meas(\mathcal{C}_{ij}) \big(U_{ij}^{ana}\big)^2}} \end{equation} where $\mathcal{U}$ is the numerical solution, $U^{ana}$ the analytical solution and $meas(\mathcal{C}_{i,j})$ is the measure of the control volume $\mathcal{C}_{ij}$. 
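A minimal Python sketch of this error computation is given below. The grid interfaces and the two solution arrays are placeholders (any monotone interfaces and any pair of cell-centred fields can be substituted); the function is exactly the discrete relative $L^2$ quotient of \eqref{error-l2}, with $meas(\mathcal{C}_{ij})=h_i\,l_j$.
\begin{verbatim}
import numpy as np

def relative_l2_error(U_num, U_ana, x_half, y_half):
    """Relative L2 error of eq. (error-l2).
    U_num, U_ana : (N, N) arrays of numerical / analytical values at cell centres.
    x_half, y_half : arrays of length N+1 with the interfaces x_{i+1/2}, y_{j+1/2},
    so that meas(C_ij) = (x_{i+1/2}-x_{i-1/2}) * (y_{j+1/2}-y_{j-1/2})."""
    hx = np.diff(x_half)                   # control-volume widths h_i
    hy = np.diff(y_half)                   # control-volume widths l_j
    meas = np.outer(hx, hy)                # meas(C_ij)
    num = np.sqrt(np.sum(meas * (U_num - U_ana) ** 2))
    den = np.sqrt(np.sum(meas * U_ana ** 2))
    return num / den

# placeholder example on a 50 x 50 grid over [0, 300]^2
N = 50
x_half = np.linspace(0.0, 300.0, N + 1)
y_half = x_half.copy()
U_ana = np.random.default_rng(1).random((N, N))  # stands in for the closed-form price
U_num = U_ana + 1e-3                             # stands in for a numerical solution
print(relative_l2_error(U_num, U_ana, x_half, y_half))
\end{verbatim}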
This gives the following table: \begin{table}[!h] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline \backslashbox{Nb of grid pts}{Num method}& Fitted fin vol & MPFA-$1^{st}$ upw & MPFA-$2^{nd}$ upw & fitted MPFA-$1^{st}$ upw & fitted MPFA -$2^{nd}$ upw\\ \hline $50\times 50$ & 0.0134& 0.0060& 0.0059 & 0.0060 & 0.0060 \\ \hline $70\times 70$ & 0.0133 & 0.0044 & 0.0044 & 0.0044 & 0.0044 \\ \hline $85\times 85$ & 0.0132 & 0.0037 & 0.0037 & 0.0037 & 0.0037\\ \hline $100\times 100$ & 0.0132 & 0.0032 & 0.0032& 0.0032 & 0.0032 \\ \hline $150\times 150$ & 0.0131 & 0.0024 & 0.0023& 0.0023 & 0.0023 \\ \hline \end{tabular} \caption{Table of errors. The computational domain of the problem is $\Omega=[0;300]\times [0;300]\times[0;T]$ with $T=1/6$, $K=100$, the volatilities $\sigma_1=\sigma_2=0.3$. The correlation coefficient is $\rho=0.5$, the risk free interest $r=0.1$ and $\Delta \tau=1/100$.} \label{errorss1} \end{table} \begin{table}[!h] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline \backslashbox{Nb of grid pts}{Num method}& Fitted fin vol & MPFA-$1^{st}$ upw & MPFA-$2^{nd}$ upw & fitted MPFA-$1^{st}$ upw & fitted MPFA -$2^{nd}$ upw\\ \hline $50\times 50$ & 0.0134& 0.0060& 0.0059 & 0.0060 & 0.0060 \\ \hline $100\times 100$ & 0.0104 & 0.0064 & 0.0063& 0.0064 & 0.0063 \\ \hline $150\times 150$ & 0.0131 & 0.0056 & 0.0055& 0.0056 & 0.0055 \\ \hline \end{tabular} \caption{Table of errors. The computational domain of the problem is $\Omega=[0;300]\times [0;300]\times[0;T]$ with $T=1/6$, $K=100$, the volatilities $\sigma_1=\sigma_2=0.3$. The correlation coefficient is $\rho=0.5$, the risk free interest $r=0.08$ and $\Delta \tau=1/100$.} \label{errorss2} \end{table} \begin{table}[!h] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline \backslashbox{Nb of grid pts}{Num method}& Fitted fin vol & MPFA-$1^{st}$ upw & MPFA-$2^{nd}$ upw & fitted MPFA-$1^{st}$ upw & fitted MPFA -$2^{nd}$ upw\\ \hline $100\times 100$ & 0.0152 & 0.0239 & 0.0235& 0.0240 & 0.0229 \\ \hline $150\times 150$ & 0.0151 & 0.0231 & 0.0228& 0.0232 & 0.0229 \\ \hline \end{tabular} \caption{Table of errors. The computational domain of the problem is $\Omega=[0;300]\times [0;300]\times[0;T]$ with $T=1/6$, $K=100$, the volatilities $\sigma_1=\sigma_2=0.3$. The correlation coefficient is $\rho=0.5$ , the risk free interest $r=0$ and $\Delta \tau=1/100$. } \label{errorss3} \end{table} \begin{table}[!h] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline \backslashbox{Nb of grid pts}{Num method}& Fitted fin vol & MPFA-$1^{st}$ upw & MPFA-$2^{nd}$ upw & fitted MPFA-$1^{st}$ upw & fitted MPFA -$2^{nd}$ upw\\ \hline $50 \times 50$ & 0.1208 & 0.0631 & 0.0669& 0.0623 & 0.0659 \\ \hline $100\times 100$ & 0.1203 & 0.0572 & 0.0648& 0.0559 & 0.0629 \\ \hline \end{tabular} \caption{Table of errors. The computational domain of the problem is $\Omega=[0;4]\times [0;4]\times[0;T]$ with $T=2$, $K=1$, the volatilities $\sigma_1=\sigma_2=1$. The correlation coefficient is $\rho=0.3$, the risk free interest $r=0.5$ and $\Delta \tau=1/100$.} \label{errorss3} \end{table} \begin{table}[!h] \centering \begin{tabular}{|c||c|c|c|c|c|} \hline \backslashbox{Nb of grid pts}{Num method}& Fitted fin vol & MPFA-$1^{st}$ upw & MPFA-$2^{nd}$ upw & fitted MPFA-$1^{st}$ upw & fitted MPFA -$2^{nd}$ upw\\ \hline $50 \times 50$ & 0.1196 & 0.0562 & 0.0643& 0.0555 & 0.0624 \\ \hline $100\times 100$ & 0.1201 & 0.0626 & 0.0664& 0.0618 & 0.0654 \\ \hline \end{tabular} \caption{Table of errors. 
The computational domain of the problem is $\Omega=[0;4]\times [0;4]\times[0;T]$ with $T=2$, $K=1$, the volatilities $\sigma_1=\sigma_2=1$. The correlation coefficient is $\rho=0.3$, the risk free interest $r=0.5$ and $\Delta \tau=1/10$.} \label{errorss4} \end{table} As we can observe in \tabref{errorss1}-\tabref{errorss4}, the errors from our fitted MPFA and MPFA methods are smaller than those of the fitted finite volume method in \cite{huang2006fitted}. We can also note that as $r$ becomes smaller, the gaps between the errors of the fitted finite volume method in \cite{huang2006fitted} and those of our fitted MPFA and MPFA methods shrink. \section{Conclusion} In this paper, we have presented the Multi-Point Flux Approximation (MPFA) method to approximate the diffusion term of the Black-Scholes partial differential equation in its divergence form. The MPFA method coupled with the upwind methods (first and second order) has been used to solve the Black-Scholes PDE numerically. To handle the degeneracy of the Black-Scholes PDE, we have proposed a novel method based on a combination of the MPFA method and the fitted finite volume method of \cite{huang2006fitted}. We have performed some numerical simulations, which show that our fitted MPFA method coupled with the first or second order upwind method is more accurate than the fitted finite volume method of \cite{huang2006fitted}. A rigorous convergence proof of the fitted MPFA method is left for future work. \end{document}
\begin{document} \newtheorem{thm}{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}{Lemma} \theoremstyle{remark}\newtheorem{rem}{Remark} \theoremstyle{definition}\newtheorem{defn}{Definition} \title{Unconditional convergence of the differences of Fej\'er kernels on $L^2(\mathbb{R})$} \author{Sakin Demir\\ Agri Ibrahim Cecen University\\ Faculty of Education\\ Department of Basic Education\\ 04100 A\u{g}r{\i}, Turkey\\ E-mail: [email protected] } \maketitle \renewcommand{\thefootnote}{} \footnote{2020 \emph{Mathematics Subject Classification}: Primary 42A55, 26D05; Secondary 42A24.} \footnote{\emph{Key words and phrases}: Unconditional Convergence, Fej\'er Kernel.} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \begin{abstract} Let $K_n(x)$ denote the Fej\'er kernel given by $$K_n(x)=\sum_{j=-n}^n\left(1-\frac{|j|}{n+1}\right)e^{-ijx}$$ and let $\sigma_nf(x)=(K_n\ast f)(x)$, where as usual $f\ast g$ denotes the convolution of $f$ and $g$.\\ Let the sequence $\{n_k\}$ be lacunary. Then the series $$\mathcal{G}f(x)=\sum_{k=1}^\infty \left(\sigma_{n_{k+1}}f(x)-\sigma_{n_k}f(x)\right)$$ converges unconditionally for all $f\in L^2(\mathbb{R})$.\\ Let $(n_k)$ be a lacunary sequence, and $\{c_k\}_{k=1}^\infty \in \ell^\infty$. Define $$\mathcal{R}f(x)=\sum_{k=1}^\infty c_k\left(\sigma_{n_{k+1}}f(x)-\sigma_{n_k}f(x)\right).$$ Then there exists a constant $C>0$ such that $$\|\mathcal{R}f\|_2\leq C\|f\|_2$$ for all $f\in L^2(\mathbb{R})$, i.e., $\mathcal{R}f$ is of strong type $(2,2)$. As a special case it follows that $\mathcal{G}f$ also is of strong type $(2,2)$. \end{abstract} \section{Preliminaries} Even though the Fej\'er kernel has a long history in Fourier analysis, a quick literature review shows that this subject has not been studied extensively. For example, variation inequalities for the Fej\'er kernel were studied in 2004 by R. L. Jones and G. Wang~\cite{rljgw}. Since then we have not seen any remarkable work on this subject. In this research we study the unconditional convergence associated with the Fej\'er kernel: we prove that the series of differences of convolutions with Fej\'er kernels along a lacunary sequence converges unconditionally for all $f\in L^2(\mathbb{R})$. In order to prove our result we first control the Fourier transform and then use this control to prove the required inequality for unconditional convergence.\\ \begin{defn} The series $\sum_{n=1}^\infty x_n$ in a Banach space $X$ is said to converge unconditionally if the series $\sum_{n=1}^\infty\epsilon_nx_n$ converges for all $\epsilon_n$ with $\epsilon_n=\pm 1$ for $n=1,2,3,\dots$.\\ The series $\sum_{n=1}^\infty x_n$ in a Banach space $X$ is said to be weakly unconditionally convergent if for every functional $x^\ast\in X^\ast$ the scalar series $\sum_{n=1}^\infty x^\ast ( x_n)$ is unconditionally convergent. \end{defn} \begin{prop}\label{wuc} For a series $\sum_{n=1}^\infty x_n$ in a Banach space $X$ the following conditions are equivalent: \begin{enumerate}[label=\upshape(\roman*), leftmargin=*, widest=iii] \item The series $\sum_{n=1}^\infty x_n$ is weakly unconditionally convergent; \item There exists a constant $C$ such that for every $\{c_n\}_{n=1}^\infty\in \ell^\infty$ $$\sup_N\left\|\sum_{n=1}^N c_n x_n\right\|\leq C\|\{c_n\}\|_{\infty}.$$ \end{enumerate} \end{prop} \begin{proof}See page 59 in P.~Wojtaszczyk~\cite{pwoj}. \end{proof} \begin{cor}\label{ucc}Let $X$ be a Banach space.
If $\sum_{n=1}^\infty f_n$ is a series in $L^p(X)$, $1<p<\infty$, the following are equivalent: \begin{enumerate}[label=\upshape(\roman*), leftmargin=*, widest=iii] \item The series $\sum_{n=1}^\infty f_n$ is unconditionally convergent; \item There exists a constant $C$ such that for every $\{c_n\}_{n=1}^\infty\in \ell^\infty$ $$\sup_N\left\|\sum_{n=1}^N c_n f_n\right\|_p\leq C\|\{c_n\}\|_{\infty}.$$ \end{enumerate} \end{cor} \begin{proof}It is known (see page 66 in P.~Wojtaszczyk~\cite{pwoj}) that every weakly unconditionally convergent series in a weakly sequentially complete space is unconditionally convergent. Since $L^p(X)$ is a weakly sequentially complete space for $1<p<\infty$, the corollary follows from Proposition~\ref{wuc}. \end{proof} \begin{defn}\label{lacunary} A sequence $(n_k)$ of integers is called lacunary if there is a constant $\alpha >1$ such that $$\frac{n_{k+1}}{n_k}\geq\alpha$$ for all $k=1,2,3,\dots$. \end{defn} \section{The Results} We denote by $K_n(x)$ the Fej\'er kernel given by $$K_n(x)=\sum_{j=-n}^n\left(1-\frac{|j|}{n+1}\right)e^{-ijx}.$$ We let $\sigma_nf(x)=(K_n\ast f)(x)$, where as usual $f\ast g$ denotes the convolution of $f$ and $g$.\\ \noindent Our first result is the following: \begin{thm}\label{occfk}Let the sequence $\{n_k\}$ be lacunary. Then the series $$\mathcal{G}f(x)=\sum_{k=1}^\infty \left(\sigma_{n_{k+1}}f(x)-\sigma_{n_k}f(x)\right)$$ converges unconditionally for all $f\in L^2(\mathbb{R})$. \end{thm} \begin{proof} Let $\{c_k\}_{k=1}^\infty \in \ell^\infty$ and define $$T_Nf(x)=\sum_{k=1}^Nc_k \left(\sigma_{n_{k+1}}f(x)-\sigma_{n_k}f(x)\right).$$ In order to prove that $\mathcal{G}f$ converges unconditionally for all $f\in L^2(\mathbb{R})$ we have to show that for every $\{c_n\}_{n=1}^\infty\in \ell^\infty$ there exists a constant $C>0$ such that $$\sup_N\|T_Nf\|_2\leq C\|\{c_n\}\|_{\infty}$$ for all $f\in L^2(\mathbb{R})$, since this will verify the condition of Corollary~\ref{ucc} for $\mathcal{G}f$.\\ Let $$S_N(x)=\sum_{k=1}^N\left(K_{n_{k+1}}(x)-K_{n_k}(x)\right).$$ \noindent We clearly have \begin{align*} |\widehat{S}_N(x)|&=\left|\sum_{k=1}^N\left(\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right)\right|\\ &\leq \sum_{k=1}^N\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|. \end{align*} We first want to show that there exists a constant $C>0$ such that $$ |\widehat{S}_N(x)|\leq C$$ for all $x\in\mathbb{R}$.\\ The Fej\'er kernel has a Fourier transform given by $$ \widehat{K}_n(x) = \left\{ \begin{array}{ll} 1-\frac{|x|}{n+1}&\;\;\textrm{if}\:\:|x|\leq n;\\ 0&\;\;\textrm{if}\:\:|x|>n. \end{array} \right. $$ Fix $x\in\mathbb{R}$, let $k_0$ be the first $k$ such that $|x|\leq n_k$, and let $$I(x)=\sum_{k=1}^N\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|.$$ Then we have \begin{align*} I(x)&=\sum_{k=1}^{k_0-1}\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|+\sum_{k=k_0}^N\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|\\ &=I_1(x)+I_2(x).
\end{align*} \noindent We clearly have $I_1(x)=0$ since $\widehat{K}_n(x)=0$ for $|x|>n$, so in order to control $|\widehat{S}_N(x)|$ it suffices to control $$I_2(x)=\sum_{k=k_0}^N\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|.$$ We have \begin{align*} I_2(x)&=\sum_{k=k_0}^N\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|\\ &=\sum_{k=k_0}^N\left|1-\frac{|x|}{n_{k+1}+1}+\frac{|x|}{n_k+1}-1\right|\\ &=\sum_{k=k_0}^N\left|-\frac{|x|}{n_{k+1}+1}+\frac{|x|}{n_k+1}\right|\\ &\leq \sum_{k=k_0}^N\frac{|x|}{n_{k+1}+1}+\sum_{k=k_0}^N\frac{|x|}{n_k+1}\\ &\leq \sum_{k=k_0}^N\frac{|x|}{n_{k+1}}+\sum_{k=k_0}^N\frac{|x|}{n_k}\\ &\leq \sum_{k=k_0}^N\frac{n_{k_0}}{n_{k+1}}+\sum_{k=k_0}^N\frac{n_{k_0}}{n_k}. \end{align*} On the other hand, since the sequence $\{n_k\}$ is lacunary there is a real number $\alpha >1$ such that $$\frac{n_{k+1}}{n_k}\geq \alpha$$ for all $k\in\mathbb{N}$. Hence we have $$\frac{n_{k_0}}{n_k}=\frac{n_{k_0}}{n_{k_0+1}}\cdot\frac{n_{k_0+1}}{n_{k_0+2}}\cdot\frac{n_{k_0+2}}{n_{k_0+3}}\cdots \frac{n_{k-1}}{n_k}\leq\frac{1}{\alpha^{k-k_0}}.$$ Thus we get $$\sum_{k=k_0}^N\frac{n_{k_0}}{n_k}\leq \sum_{k=k_0}^N\frac{1}{\alpha^{k-k_0}}\leq \frac{\alpha}{\alpha -1},$$ and similarly, we have $$\sum_{k=k_0}^N\frac{n_{k_0}}{n_{k+1}}\leq \frac{\alpha}{\alpha -1},$$ and this proves that $$I_2(x)\leq \frac{2\alpha}{\alpha -1}.$$ Since the bound does not depend on the choice of $x\in \mathbb{R}$, the estimate holds for all $x\in \mathbb{R}$.\\ We conclude that there exists a constant $C>0$ such that $$|\widehat{S}_N(x)|\leq C\;\;\;\;\;\;\;\;\;\;\;\;\; \textrm{($\ast$)}$$ for all $x\in\mathbb{R}$ and $N\in\mathbb{N}$.\\ We now have \begin{align*} \|T_Nf\|_2^2&=\int_{\mathbb{R}}\left|\sum_{k=1}^Nc_k\left(\sigma_{n_{k+1}}f(x)-\sigma_{n_k}f(x)\right)\right|^2\, dx\\ &=\int_{\mathbb{R}}\left|\sum_{k=1}^Nc_k\left(K_{n_{k+1}}\ast f(x)-K_{n_k}\ast f(x)\right)\right|^2\, dx\\ &\leq \|\{c_n\}\|_{\infty}^2 \int_{\mathbb{R}}\left|\sum_{k=1}^N\left(K_{n_{k+1}}\ast f(x)-K_{n_k}\ast f(x)\right)\right|^2\, dx\\ &= \|\{c_n\}\|_{\infty}^2 \int_{\mathbb{R}}|S_N\ast f(x)|^2\, dx\\ &= \|\{c_n\}\|_{\infty}^2 \int_{\mathbb{R}}|\widehat{S_N\ast f}(x)|^2\, dx\;\;\;\textrm{(by Plancherel's theorem)}\\ &= \|\{c_n\}\|_{\infty}^2\int_{\mathbb{R}}|\widehat{S}_N(x)|^2\cdot|\hat{ f}(x)|^2\, dx\\ &\leq C^2 \|\{c_n\}\|_{\infty}^2\int_{\mathbb{R}}|\hat{ f}(x)|^2\, dx\;\;\;\textrm{(by ($\ast$))}\\ &=C^2\|\{c_n\}\|_{\infty}^2\int_{\mathbb{R}}| f(x)|^2\, dx\;\;\;\textrm{(by Plancherel's theorem)}\\ &=C^2\|\{c_n\}\|_{\infty}^2\|f\|_2^2 \end{align*} and thus we get $$\sup_N\|T_Nf\|_2\leq C\|\{c_n\}\|_{\infty}\|f\|_2$$ which completes our proof. \end{proof} \begin{thm}\label{rfl2} Let $(n_k)$ be a lacunary sequence, and $\{c_k\}_{k=1}^\infty \in \ell^\infty$. Define $$\mathcal{R}f(x)=\sum_{k=1}^\infty c_k\left(\sigma_{n_{k+1}}f(x)-\sigma_{n_k}f(x)\right).$$ Then there exists a constant $C>0$ such that $$\|\mathcal{R}f\|_2\leq C\|f\|_2$$ for all $f\in L^2(\mathbb{R})$, i.e., $\mathcal{R}f$ is of strong type $(2,2)$.
\end{thm} \begin{proof}We have proved in the proof of Theorem~\ref{occfk} that, given $N\in\mathbb{N}$, there exists a constant $C_1>0$ such that $$\sum_{k=1}^N\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|\leq C_1$$ for all $x\in\mathbb{R}$. Since each term satisfies $\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|\leq 1$, letting $N\to\infty$ we also have \begin{align*} \sum_{k=1}^\infty\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|^2&\leq \sum_{k=1}^\infty\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|\\ &\leq C_1 \end{align*} for all $x\in\mathbb{R}$.\\ Then we obtain \begin{align*} \|\mathcal{R}f\|_2^2&=\int_{\mathbb{R}}\left|\sum_{k=1}^{\infty}c_k\left(\sigma_{n_{k+1}}f(x)-\sigma_{n_k}f(x)\right)\right|^2\, dx\\ &=\int_{\mathbb{R}}\left|\sum_{k=1}^{\infty}c_k\left(K_{n_{k+1}}\ast f(x)-K_{n_k}\ast f(x)\right)\right|^2\, dx\\ &\leq \|\{c_n\}\|_{\infty}^2 \int_{\mathbb{R}}\left|\sum_{k=1}^\infty\left(K_{n_{k+1}}\ast f(x)-K_{n_k}\ast f(x)\right)\right|^2\, dx\\ &= \|\{c_n\}\|_{\infty}^2 \int_{\mathbb{R}}\sum_{k=1}^\infty\left|\left(K_{n_{k+1}}\ast f(x)-K_{n_k}\ast f(x)\right)\right|^2\, dx\\ &= \|\{c_n\}\|_{\infty}^2 \sum_{k=1}^\infty\int_{\mathbb{R}}\left|\left(K_{n_{k+1}}\ast f(x)-K_{n_k}\ast f(x)\right)\right|^2\, dx\\ &= \|\{c_n\}\|_{\infty}^2 \sum_{k=1}^\infty\int_{\mathbb{R}}\left|\left(\widehat{K_{n_{k+1}}\ast f}(x)-\widehat{K_{n_k}\ast f}(x)\right)\right|^2\, dx\;\;\;\textrm{(by Plancherel's theorem)}\\ &= \|\{c_n\}\|_{\infty}^2 \sum_{k=1}^\infty\int_{\mathbb{R}}\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|^2|\hat{f}(x)|^2\, dx\\ &= \|\{c_n\}\|_{\infty}^2\int_{\mathbb{R}} \sum_{k=1}^\infty\left|\widehat{K}_{n_{k+1}}(x)-\widehat{K}_{n_k}(x)\right|^2|\hat{f}(x)|^2\, dx\\ &\leq \|\{c_n\}\|_{\infty}^2C_1\int_{\mathbb{R}} |\hat{f}(x)|^2\, dx\\ &=\|\{c_n\}\|_{\infty}^2C_1\int_{\mathbb{R}}| f(x)|^2\, dx\;\;\;\textrm{(by Plancherel's theorem)}\\ &=\|\{c_n\}\|_{\infty}^2C_1\|f\|_2^2. \end{align*} This means that there exists a constant $C>0$ such that $$\|\mathcal{R}f\|_2\leq C\|f\|_2$$ for all $f\in L^2(\mathbb{R})$, i.e., $\mathcal{R}f$ is of strong type $(2,2)$. \end{proof} \begin{cor}Let $(n_k)$ be a lacunary sequence. Then there exists a constant $C>0$ such that $$\|\mathcal{G}f\|_2\leq C\|f\|_2$$ for all $f\in L^2(\mathbb{R})$, i.e., $\mathcal{G}f$ is of strong type $(2,2)$. \end{cor} \begin{proof}When we choose $c_k=1$ for all $k$ in the definition of $\mathcal{R}f$ we obtain $$\mathcal{G}f=\mathcal{R}f$$ and the proof follows from Theorem~\ref{rfl2}. \end{proof} \end{document}
\begin{document} \RUNAUTHOR{Truong, Wang} \RUNTITLE{} \TITLE{Prophet Inequality with Correlated Arrival Probabilities, with Application to Two Sided Matchings} \ARTICLEAUTHORS{ \AUTHOR{Van-Anh Truong, Xinshang Wang} \AFF{Department of Industrial Engineering and Operations Research, Columbia University, New York, NY, USA, \EMAIL{[email protected], [email protected]} \URL{}} } \ABSTRACT{The classical Prophet Inequality arises from a fundamental problem in optimal-stopping theory. In this problem, a gambler sees a finite sequence of independent, non-negative random variables. If he stops the sequence at any time, he collects a reward equal to the most recent observation. The Prophet Inequality states that, knowing the distribution of each random variable, the gambler can achieve at least half as much reward in expectation, as a prophet who knows the entire sample path of random variables \citep{krengel1978semiamarts}. \textcolor{black}{In this paper, we prove a corresponding bound for \emph{correlated} non-negative random variables.} We analyze two methods for proving the bound, a constructive approach, which produces a worst-case instance, and a reductive approach, which characterizes a certain submartingale arising from the reward process of our online algorithm. We apply this new prophet inequality to the design of algorithms for a class of two-sided bipartite matching problems that underlie \emph{online task assignment problems}. In these problems, demand units of various types arrive randomly and sequentially over time according to some stochastic process. Tasks, or supply units, arrive according to another stochastic process. Each demand unit must be irrevocably matched to a supply unit or rejected. The match earns a reward that depends on the pair. The objective is to maximize the total expected reward over the planning horizon. The problem arises in mobile crowd-sensing and crowd sourcing contexts, where workers and tasks must be matched by a platform according to various criteria. We derive the first online algorithms with worst-case performance guarantees for our class of two-sided bipartite matching problems. } \maketitle \section{Introduction} The classical Prophet Inequality arises from a fundamental problem in optimal-stopping theory. In this problem, a gambler sees a finite sequence of independent, non-negative random variables. If he stops the sequence at any time, he collects a reward equal to the most recent observation. The Prophet Inequality states that, knowing the distribution of each random variable, the gambler can achieve at least half as much reward in expectation, as a prophet who knows the entire sample path of random variables \citep{krengel1978semiamarts}. The classical prophet inequality with independent random variables was proved by \cite{krengel1977}. Its importance arises from its role as a primitive in a wide range of decision problems. Since the appearance of the first result, various versions of the prophet inequality has been proved. \cite{hill1983stop} study the inequality for independent, uniformly bounded random variables. \cite{rinott1987comparisons} prove a version for bounded negatively-dependent random variables. \cite{samuel1991prophet} obtain general results for negatively dependent random variables, and provide some examples for the case of positively dependent variables. 
\textcolor{black}{In this paper, we study a version of prophet inequality where the sequence of random variables are modeled as a customer arrival process and can be arbitrarily correlated. The specification of our prophet inequality will be made clear in the problem formulation.} We apply this new prophet inequality to the design of algorithms for a class of two-sided bipartite matching problems underlying \emph{online task assignment problems} (OTA). In these problems, demand units of various types arrive randomly and sequentially over time according to some stochastic process. Tasks, or supply units, arrive according to another stochastic process. Each demand unit must be irrevocably matched to a supply unit or rejected. The match earns a reward that depends on the pair. The objective is to maximize the total expected reward over the planning horizon. The problem arises in mobile crowd-sensing and crowd-sourcing contexts, where workers and tasks that arrive randomly overtime and must be matched by a platform according to various criteria. For example, the marketplaces Upwork, Fiverr, and Freelancer match providers with customers of professional services. Walmart evaluated a proposal to source its own customers to deliver orders \citep{barr2013walmart}. The mobile platforms Sensorly, Vericell, VTrack, and PIER outsource the task of collecting analyzing data, called \emph{sensing}, to millions of mobile users. Enabled by information technology, these crowd-sourcing and crowd-sensing businesses are revolutionalizing the traditional marketplace. For example, Freelancers constitute 35\% of the U.S. workforce and have generated a trillion dollars in income as of 2015 \citep{pofeldt2016freelancers}. A survey found that 73\% of freelancers have found work more easily because of technology \citep{pofeldt2016freelancers}. The \emph{two-sided matching problem} that underlies many examples of OTA is very difficult to solve optimally, due to three main reasons. First, given the many characteristics of both demand and supply types and their importance in determining the quality of a match, the decision problem must keep track of a vast amount of information, including the current state of supply and future demand and supply arrivals. Second, both demand and supply processes may change over time, so that the decision-making environment might be constantly changing. Third, demand units tend to be time-sensitive and unmatched supply units might also leave the system after a time, so that they have a finite period of availability. Our contributions in this paper are as follow: \begin{itemize} \item \textcolor{black}{We prove the first prophet inequality for a class of arbitrarily correlated non-negative random variables.} We analyze two methods for proving the bound, a constructive approach, which produces a worst-case instance, and a reductive approach, which characterizes a certain submartingale arising from the reward process of our online algorithm. \item We formulate a new model of bipartite matching with non-homogenous Poisson arrivals for both demand and supply units. Supply units can wait a deterministic amount of time, whereas demand units must be matched irrevocably upon arrival. Decisions are not batched and must be made for one demand unit at at time. Our model underlies an important class of online task assignment problems for crowd-sourcing and crowd-sensing applications. 
\item We derive the first online algorithms with worst-case performance guarantees for our class of two-sided bipartite matching problems. We prove that our algorithms have expected reward no less than $\frac{1}{4}$ times that of an optimal offline policy, which knows all demand and supply arrivals upfront and makes optimal decisions given this information. \item We provide numerical experiments showing that despite the conservative provable ratio of $1/4$, our online algorithm captures about half of the offline expected reward. We propose improved algorithms that \textcolor{black}{in the experiments} capture $65\%$ to $70\%$ of the offline expected reward. Moreover, the improved algorithms outperform the greedy and the bid-price heuristics in all scenarios. These results demonstrate the advantage of using our online algorithms as they have not only optimized performance in the worst-case scenario, but also satisfactory performance in average-case scenarios. \end{itemize} \section{Literature Review} We review five streams of literature that are most closely related to our problem class. \subsection{Static matching} A variety of matching problems have been studied in static settings, for example, college-admissions problems, marriage problems, and static assignment problems. In these problems, the demand and supply units are known. The reward of matching each demand with each supply unit is also known. The objective is to find a maximum-reward matching. See \cite{abdulkadiroglu2013matching} for a recent review. Our setting differs in that demand units arrive randomly over time and decisions must be made before all the units have been fully observed. \subsection{Dynamic assignment} Dynamic-assignment problems are a class of problems in which a set of resources must be dynamically assigned to a stream of tasks that randomly arrive over time. These problems have a long history, beginning with Derman, Lieberman and Ross (1972)\nocite{derman1972sequential}. See \cite{su2005patient} for a recent review of this literature. \cite{spivey2004dynamic} study a version of the dynamic assignment problem in which the resources may arrive randomly over time. They develop approximate-dynamic-programming heuristics for the problem. They do not derive performance bounds for their heuristics. \cite{anderson2013efficient} study a specialized model in which supply and demand units are identical, and arrivals are stationary over time. They characterize the performance of the greedy policy under various structures for the demand-supply graph, where the objective is to minimize the total waiting time for all supply units. \cite{akbarpour2014dynamic} analyze a dynamic matching problem for which they derive several broad insights. They analyze two algorithms for which they derive bounds on the relative performance under various market conditions. Their work differs from ours in four ways. First, they assume that arrivals are stationary where as we allow non-stationary arrivals. Second, they assume that demand units are identical except for the time of arrivals and supply units are also identical except for the time of arrival, whereas we allow heterogeneity among the units. Finally, they study an unweighted matching problem in which each match earns a unit reward, whereas we study a more general weighted matching problem. Finally, their results hold in asymptotic regimes, where the market is large and the horizon is long, whereas our results hold in any condition. 
More recently \cite{hu2015dynamic} study a dynamic assignment problem for two-sided markets similar to ours. They also allow for random, non-stationary arrivals of demand and supply units. They derive structural results for the optimal policy and asymptotic bounds. We depart from both of the above papers in focusing on providing algorithms with theoretical performance guarantees on all problem instances. \cite{baccara2015optimal} study a dynamic matching problem in which demand units can wait, and there is a tradeoff between waiting for a higher-quality match, and incurring higher waiting costs. Their setting is limited to just two types of units (demand or supply), whereas we allow arbitrarily many types. They also assume stationary arrivals whereas we allow non-stationary arrivals. \subsection{Online Matching} In online matching, our work fundamentally extends the class of problems that have been widely studied. In existing online matching problems, the set of available supply units is known and corresponds to one set of nodes. Demand units arrive one by one, and correspond to a second set of nodes. As each demand node arises, its adjacency to the resource nodes is revealed. Each edge has an associated weight. The system must match each demand node irrevocably to an adjacent supply node. The goal is to maximize the total weighted or unweighted size of the matching. When demands are chosen by an adversary, the online \emph{unweighted} bipartite matching problem is originally shown by Karp, Vazirani and Vazirani (1990)\nocite{karp1990optimal} to have a worst-case relative reward of $0.5$ for deterministic algorithms and $1-1/e$ for randomized algorithms. The \emph{weighted} this problem cannot be bounded by any constant \citep{mehta2012online}. Many subsequent works have tried to design algorithms with bounded relative reward for this problem under more regulated demand processes. Three types of demand processes have been studied. The first type of demand processes studied is one in which each demand node is independently and identically chosen with replacement from a \emph{known} set of nodes. Under this assumption, \citep{jaillet2013online, manshadi2012online, bahmani2010improved, feldman2009online} propose online algorithms with worst-case relative reward higher than $1-1/e$ for the unweighted problem. Haeupler, Mirrokni, Vahab and Zadimoghaddam (2011)\nocite{haeupler2011online} study online algorithms with worst-case relative reward higher than $1-1/e$ for the weighted bipartite matching problem. The second type of demand processes studied is one in which the demand nodes are drawn randomly without replacement from an unknown set of nodes. This assumption has been used in the secretary problem (Kleinberg 2005, Babaioff, Immorlica, Kempe, and Kleinberg 2008)\nocite{kleinberg2005multiple,babaioff2008online}, ad-words problem \citep{goel2008online} and bipartite matching problem (Mahdian and Yan 2011, Karande, Mehta, and Tripathi 2011)\nocite{mahdian2011online,karande2011online}. A variation to the second type of demand processes studied is one in which each demand node requests a very small amount of resource. 
This assumption, called the \emph{small bid} assumption, together with the assumption of randomly drawn demands, leads to polynomial-time approximation schemes (PTAS) for problems such as ad-words \citep{Devanur09theadwords}, stochastic packing (Feldman, Henzinger, Korula, Mirrokni, and Stein 2010)\nocite{feldman2010online}, online linear programming (Agrawal, Wang and Ye 2009)\nocite{agrawal2009dynamic}, and packing problems \citep{molinaro2013geometry}. Typically, the approximation schemes proposed in these works use dual prices to make allocation decisions. Devanur, Jain, Sivan, and Wilkens (2011)\nocite{devanur2011near} study a resource-allocation problem in which the distribution of nodes is allowed to change over time, but still needs to follow a requirement that the distribution at any moment induce a small enough offline objective value. They then study the asymptotic performance of their algorithm. In our model, the amount of capacity requested by each customer is not necessarily small relative to the total amount of capacity available. The third type of demand processes studied consists of \emph{independent}, non-homogeneous Poisson processes. \textcolor{black}{Alaei, Hajiaghayi and Liaghat (2012)\nocite{alaei2012online}, Wang, Truong and Bank (2015)\nocite{wangTB2015} and Stein, Truong and Wang (2017)\nocite{SteinTW2015} propose online algorithms for online allocation problems. } \textcolor{black}{ We depart from these papers in two major ways. The algorithms in these papers consist of two main steps. In the first step, they solve a deterministic assignment LP to find the probabilities of routing each demand to each supply unit. Given this routing, in the second step, they make an online decision} to determine whether to match a routed demand unit to a supply unit at any given time. In contrast, in the first step, we find \emph{conditional probabilities} of routing demand to supply units, given the set of supply units that have arrived at any given time. We approximate these conditional probabilities because they are intractable to compute directly. In the second step, after routing demands to supply units according to these conditional probabilities, \textcolor{black}{we design an admission algorithm based on the solution of our prophet inequality that, unlike existing admission techniques, deals with \emph{correlated demand arrivals}.} \subsection{Online Task Assignment} This is a subclass of online matching problems that has seen an explosion of interest in recent years. Almost all of these works model either the tasks or the workers as being fixed. \cite{ho2012online, assadi2015online, hassan2014multi, manshadi2012online} study variations of OTA problems. \cite{singer2013pricing} consider both pricing and allocation decisions for OTA. \cite{singla2013truthful} study both learning and allocation decisions for OTA. \cite{zhao2014crowdsource, subramanian2015online} study auction mechanisms for OTA. \cite{tong2016online} study OTA when the arrivals of both workers and tasks are in random order. Their algorithms achieve a competitive ratio of $1/4$. Concurrent with our work, \cite{dickerson2018assigning} study a similar model with two-sided, i.i.d. arrivals. They prove that a non-adaptive algorithm achieves a competitive ratio of 0.295. Further, they show that no online algorithm can achieve a ratio better than 0.581, even if all rewards are the same. Note that both the models of \cite{tong2016online} and \cite{dickerson2018assigning} are more restrictive than ours. 
In a model with time-varying arrivals such as ours, non-adaptive algorithms such as the one proposed by \cite{dickerson2018assigning}, or a greedy algorithm such as one of those proposed by \cite{tong2016online}, are unlikely to perform well. \subsection{Revenue Management} Our work is also related to the revenue management literature. We refer to \cite{TalluriV2004} for a comprehensive review of this literature. In particular, our work is related to the still limited literature on designing revenue-management policies that have worst-case performance guarantees. \cite{ball2009toward} analyze online algorithms for the single-leg revenue-management problem. Their performance metric compares online algorithms with an optimal offline algorithm under the worst-case instance of demand arrivals. They prove that the competitive ratio cannot be bounded by any constant when there are arbitrarily many customer types. Qin, Zhang, Hua and Shi (2015)\nocite{CongApproximationRM} study approximation algorithms for an admission control problem for a single resource when customer arrival processes can be correlated over time. They prove a constant approximation ratio for the case of two customer types, and also for the case of multiple customer types with specific restrictions. They allow only one type of resource to be allocated. Gallego, Li, Truong and Wang (2015)\nocite{gallegoLTW2015} study online algorithms for a personalized choice-based revenue-management problem. They allow multiple customer types and products, and non-stationary independent demand arrivals. They allow customers to select from assortments of offered products according to a general choice model. They prove that an LP-based policy earns at least half of the expected revenue of an optimal policy that has full hindsight. \section{Prophet Inequality with Correlated Arrivals} \subsection{Problem Formulation} \label{sec:prophetModel} Throughout this paper, we let $[k]$ denote the set $\{1,2,\ldots,k\}$ for any positive integer $k$. Consider a finite planning horizon of $T$ periods. There are $I$ customer types and one unit of a single resource that is managed by some platform. In each period $t$, depending on exogenous state information $S_t$ that is observable by time $t$, a customer of type $i$ will arrive with some probability $p_{it}(S_t)$. Upon the arrival of a customer, the platform can either sell the resource to the customer, or irrevocably reject the customer. The reward earned by the platform for selling the resource to a customer of type $i \in [I]$ in period $t \in [T]$ is $r_{it}\geq 0$. The goal of the platform is to maximize the expected total reward collected over the planning horizon. Unlike existing research assuming independent or stationary arrival distributions, we allow the sequence of arrival probabilities $(p_{1t}(S_t), p_{2t}(S_t),\ldots, p_{It}(S_t))_{t=1,\ldots,T}$ to be a correlated stochastic process, which depends on the sample path $(S_1, S_2,\ldots,S_T)$ that is realized. We assume that we know the joint distribution of this stochastic process of arrival probabilities, that is, the distribution of $\{S_t\}$, and the distribution of arrivals conditional upon $\{S_t\}$. As a simple example, $S_t$ may represent the weather history at times $\{1,\ldots, t\}$. Our model would capture any correlation in the weather forecast, and assumes that the customer arrival probabilities $p_{it}(S_t)$ in each period $t$ are determined by the weather history $S_t$ up to time $t$. 
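To make the correlation structure concrete, the following minimal sketch (in Python) simulates one sample path $(S_1,\ldots,S_T)$ and the induced arrival probabilities $p_{it}(S_t)$. The two-state ``weather'' chain and all numerical values are hypothetical choices made purely for illustration; they are not part of the model.
\begin{verbatim}
import random

T, I = 5, 2                  # planning horizon and number of customer types
P_STAY = 0.8                 # hypothetical weather persistence probability
# hypothetical arrival probabilities per customer type, given today's weather
P_ARRIVAL = {"sunny": [0.3, 0.1], "rainy": [0.1, 0.4]}

def simulate_sample_path():
    """Simulate S_1,...,S_T (weather histories) with at most one arrival per period."""
    weather, history, arrivals = "sunny", [], []
    for t in range(T):
        history = history + [weather]    # S_t is the whole history up to period t
        p = P_ARRIVAL[weather]           # p_{it}(S_t) for i = 1,...,I
        u, cum, arrival = random.random(), 0.0, None
        for i in range(I):               # partition [0,1) into arrival intervals
            if cum <= u < cum + p[i]:
                arrival = i
            cum += p[i]
        arrivals.append(arrival)
        # the weather evolves as a two-state Markov chain, so the p's are correlated
        if random.random() > P_STAY:
            weather = "rainy" if weather == "sunny" else "sunny"
    return history, arrivals

print(simulate_sample_path())
\end{verbatim}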
Let $X_{it} \in \{0,1\}$ be the random variable indicating whether a customer of type $i \in [I]$ will arrive in period $t \in [T]$, in state $S_t$. We must have $\mathbb{E}[X_{it} | S_t] = p_{it}(S_t)$. We assume the time increments are sufficiently fine so that, similar to many standard models in revenue management \citep{van2005introduction}, we can assume that at most one customer arrives in any given period. More precisely, we apply the common practice in revenue management assuming that $\sum_{i \in [I]} p_{it}(S_t) \leq 1$ almost surely for all $t \in [T]$, and \begin{equation}\label{eq:XitDefinition} X_{it} = \mathbf{1}\{u_t \in [\sum_{k=1}^{i-1} p_{kt}(S_t), \sum_{k=1}^i p_{kt}(S_t))\}, \end{equation} where $u_t$ is an independent $[0,1]$ uniform random variable associated with period $t$. As a result, we have $\sum_{i \in [I]} X_{it} \leq 1$ almost surely for all $t \in [T]$. In each period $t \in [T]$, events take place in the following order: \begin{enumerate} \item The platform observes $S_t$, and thus knows $p_{it}(S_t)$ for all $i \in [I]$. \item The arrivals $(X_{1t}, \ldots, X_{It})$ are realized according to \eqref{eq:XitDefinition}. \item The platform decides whether to sell the resource to the arriving customer if any. \end{enumerate} Let $\cal{S}_t$ denote the support of $S_t$, for all $t \in [T]$. \textcolor{black}{Without loss of generality and for ease of notation, we define $S_t$ as the set of external information in all the periods $1,2,\ldots,t$. As a result, given any $S_t$, the path $(S_1,\ldots, S_t)$ is uniquely determined. We call a realization of $S_T$ a \emph{sample path}. We assume that we know the joint distribution of $(S_1,\ldots, S_T)$ in the sense that we are able to simulate the sample paths, and able to estimate the expected values of functions of the sample paths. } \textcolor{black}{ We can use a tree structure to represent the process $(S_t)_{t \in [T]}$. Let $S_0$ be a dummy root node of the tree. Every realization of $S_1$ is a direct descendant of $S_0$. Recursively, for any tree node that is a realization of $S_t$, its direct descendants are all the different realizations of $S_{t+1}$ conditional on the value of $S_t$. } \subsection{Definition of Competitive Ratios} \textcolor{black}{We will state the prophet inequality with correlated arrivals by proving the \emph{competitive ratio} of an algorithm used by the platform. Specifically,} define an \emph{optimal offline algorithm} $\mathsf{OFF}$ as an algorithm that knows $(X_{it})_{i \in [I]; t \in [T]}$ at the beginning of period $1$ and makes optimal decisions to sell the resource given this information. \textcolor{black}{By contrast, the platform can use an \emph{online algorithm} to make decisions in each period $t \in [T]$ based on only $S_t$ and $(X_{it'})_{i \in [I]; t' \in [t]}$.} We use $V^\mathsf{OFF}$ to denote the reward of $\mathsf{OFF}$, and $V^\mathsf{ON}$ the reward of an online algorithm $\mathsf{ON}$. \begin{definition} An online algorithm $\mathsf{ON}$ is \emph{$c$-competitive} if \[ \mathbb{E}[V^\mathsf{ON}] \geq c \,\mathbb{E}[V^\mathsf{OFF}],\] where the expectation is taken over both $S_T$ and the random arrivals $(X_{it})_{i \in [I]; t \in [T]}$. 
\end{definition} \subsection{Offline Algorithm and Its Upper Bound} \textcolor{black}{We first show that the expected reward of the offline algorithm can be bounded from above by the expected total reward collected from the entire sample path.} \begin{proposition} \label{prop:prophetUpperBound} $\mathbb{E}[ V^\mathsf{OFF}] \leq \mathbb{E}[ \sum_{t=1}^T \sum_{i=1}^I r_{it}p_{it}(S_{t})]$. \end{proposition} \begin{proof}{Proof.} Suppose the resource has infinitely many units, so that the platform sells one unit of the resource to every arriving customer. The resulting expected total reward \[ \mathbb{E}[\sum_{i \in [I]} \sum_{t \in [T]} r_{it}X_{it}] = \sum_{i \in [I]} \sum_{t \in [T]} r_{it}\mathbb{E}[X_{it}] = \sum_{i \in [I]} \sum_{t \in [T]} r_{it} \mathbb{E}[ p_{it}(S_t)] \] is clearly an upper bound on $\mathbb{E}[ V^\mathsf{OFF}]$. \halmos \end{proof} \section{Online Algorithm} \label{sec:prophetAlg} \textcolor{black}{In this section, we propose a simulation-based threshold policy ($\mathsf{STP}$) for the model of prophet inequality with correlated arrivals and prove its performance guarantee. } Conditioned on any sample path $S_T$, define \[ \cal{T}(S_T):= \sum_{i \in [I]} \sum_{t \in [T]} p_{it}(S_t)\] as the total expected number of customer arrivals. $\mathsf{STP}$ needs to know a uniform upper bound $\bar \cal{T}$ on $\cal{T}(\cdot)$ (i.e., $\bar \cal{T} \geq \cal{T}(S_T)$ with probability one). Given such a $\bar \cal{T}$, $\mathsf{STP}$ computes the threshold \begin{equation}\label{eq:hnew} h(S_t) := \mathbb{E}\!\!\left[\frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{1 + \bar \cal{T} -\sum_{t'=1}^t \sum_{i=1}^I p_{it'}(S_{t'})}\ \big|\ S_t\right] \end{equation} for deciding whether to sell the resource in period $t$. Specifically, upon an arrival of a type-$i$ customer in period $t$, if the resource is still available, $\mathsf{STP}$ sells the resource to the customer if $r_{it} \geq h(S_t)$ and rejects the customer otherwise. Note that $h(\cdot)$ can be computed for each scenario $S_t$ by simulating the sample paths that potentially arise conditional upon $S_t$. In particular, by Proposition \ref{prop:prophetUpperBound} and the definition of $h(\cdot)$, we have \textcolor{black}{(recall that $S_0$ is a (deterministic) dummy variable)} \begin{equation} \label{eq:hS0} h(S_0) = \mathbb{E}\!\!\left[\frac{\sum_{t'=1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{1 + \bar \cal{T}} \big|\ S_0\right] = \frac{\mathbb{E}[\sum_{t'=1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})]}{1 + \bar \cal{T}} \geq \frac{\mathbb{E}[V^\mathsf{OFF}]}{1 + \bar \cal{T}}. \end{equation} Given that $\mathbb{E}[V^\mathsf{OFF}]$ can be upper-bounded by $h(S_0)$ with a multiplicative factor, the goal of our analysis is to establish a relationship between $h(S_0)$ and the expected reward $\mathbb{E}[V^\mathsf{STP}]$ of our online algorithm. \subsection{Performance Guarantee} In this section, we provide two methods for proving the prophet inequality\textcolor{black}{, i.e., the competitive ratio of $\mathsf{STP}$, under correlated arrival probabilities.} The first method is constructive in the sense that it reasons about the structure of a worst-case problem instance, and exhibits this structure explicitly. The second method is deductive in that it proves the existence of the bound without shedding light on the worst-case problem instance. 
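As noted above, the threshold $h(\cdot)$ in \eqref{eq:hnew} can be estimated by straightforward Monte Carlo simulation over continuations of the sample path conditional on $S_t$. The following minimal sketch (in Python) illustrates this estimation and the resulting accept/reject rule of $\mathsf{STP}$; the simulator interface \texttt{simulate\_future()}, the data layout of \texttt{r}, and the sample size are hypothetical choices for illustration only.
\begin{verbatim}
def estimate_threshold(simulate_future, past_mass, r, T_bar, n_samples=100):
    """Monte Carlo estimate of the threshold h(S_t).

    simulate_future() -- returns one simulated continuation of the sample path,
                         as a list of pairs (t', {i: p_{it'}(S_{t'})}) for
                         t' = t+1,...,T, drawn conditionally on the current S_t
    past_mass         -- sum_{t'<=t} sum_i p_{it'}(S_{t'}) on the realized path
    r[i][t]           -- reward r_{it}
    T_bar             -- uniform upper bound on the expected number of arrivals
    """
    total = 0.0
    for _ in range(n_samples):
        future = simulate_future()
        numer = sum(p_t[i] * r[i][tp] for tp, p_t in future for i in p_t)
        total += numer / (1.0 + T_bar - past_mass)
    return total / n_samples

def stp_decision(r_it, threshold, resource_available):
    """STP sells the unit iff it is still available and the reward clears h(S_t)."""
    return resource_available and r_it >= threshold
\end{verbatim}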
\subsubsection{Constructive method for proving performance bound.} \textcolor{black}{With the constructive method, we focus on proving the competitive ratio of $\mathsf{STP}$ for the case $\bar \cal{T} = 1$. Since all the decisions made by an online algorithm are based on the realized value of $S_1$, we can assume without loss of generality that $S_1$ is deterministic (i.e., the competitive ratio holds when $S_1$ is set to be any of its possible realizations). } \textcolor{black}{Then we fix the upper bound \begin{equation}\label{eq:OfflineReward} R:= \sum_{t=1}^T \sum_{i=1}^I \mathbb{E}[r_{it}p_{it}(S_t)|S_1] \end{equation} on $V^\mathsf{OFF}$. } We will transform problem data progressively, each time making the expected total reward of $\mathsf{STP}$ smaller on this instance. Then we will show that at some point, the expected total reward of $\mathsf{STP}$ is easily bounded below by a constant. Conditioned on $S_t$ and the event that the resource has not been sold by the beginning of period $t$, let $V^{\mathsf{STP}}(S_t)$ be the expected reward that $\mathsf{STP}$ earns from selling the resource during periods $t$ through $T$. We can express $V^{\mathsf{STP}}(S_t)$ explicitly by the following recursion: \begin{equation}\label{eq:costh} V^{\mathsf{STP}}(S_t) = \sum_{i=1}^I p_{it}(S_t)\mathbf{1}(r_{it} \geq h(S_t)) \left(r_{it}-\mathbb{E}[V^{\mathsf{STP}}(S_{t+1})|S_t]\right) + \mathbb{E}[V^{\mathsf{STP}}(S_{t+1})|S_t], \end{equation} and $V^{\mathsf{STP}}(S_{T+1})=0$. We will work with the tree representation of the stochastic process $S_1, S_2,\ldots, S_T$. We call a node $S_t$ in the tree a \emph{terminal node} if $p_{it}(S_t)>0$ for some $i$ but $p_{it'}(S_{t'})=0$ for all $i=1,2,...,I$ and all descendants $S_{t'}$ of $S_t$. We will arrive at our bound by proving two sets of structural results for the worst-case instance of the problem. The first set of structural results concerns the reward process. \begin{lemma}\label{lem:StructureRewards} Assume that the given problem instance achieves the worst-case ratio $V^{\mathsf{STP}}/V^{\mathsf{OFF}}$. Then without loss of generality, \begin{enumerate} \item $\sum_{t'=1}^T \sum_{i=1}^I p_{it'}(S_{t'}) = \bar \cal{T}$ almost surely. \item The reward is scenario dependent. That is, demand unit $(i,t)$ has reward $r_{it}(S_t)$ at each scenario $S_t$. \item In each scenario $S_t$, there is at most one customer type with positive arrival probability. We thus use $p(S_t)$ and $r(S_t)$ to denote the arrival probability and the reward of that customer type. \item $r(S_t) = h(S_t)$ or $r(S_t)= (h(S_t))^-$ \textcolor{black}{for all non-terminal nodes $S_t$}. \end{enumerate} \end{lemma} \proof{Proof.} The first property is easy to see, since we can always add nodes with $0$ reward and positive arrival probabilities to paths in the tree to ensure that the property holds. This transformation does not change \textcolor{black}{either $R$ or the outcome of $\mathsf{STP}$.} By adding demand types if necessary, we can assume without loss of generality that the reward is scenario dependent. That is, demand unit $(i,t)$ has reward $r_{it}(S_t)$ at each scenario $S_t$. We can split up each period into several periods if necessary, such that each scenario $S_t$ has at most one arrival with probability $p(S_t)$ and reward $r(S_t)$. This change preserves \textcolor{black}{$R$} and, according to \eqref{eq:costh}, decreases the expected reward of $\mathsf{STP}$, if the rewards are chosen to be increasing with time. 
In the tree representation, if there is some highest-level non-terminal node $S_t$ for which $r(S_t) > h(S_t)$, then, conditioned on $S_t$, we can decrease $r(S_t)$ and scale up $r(S_{t'})$ by some factor for all scenarios $S_{t'}$ that descend from $S_t$, using the same factor for all $S_{t'}$, such that the value of \[ \sum_{t'=t}^T \mathbb{E}[r(S_{t'})p(S_{t'})|S_t] \] is unchanged. As a result, the equality in \eqref{eq:OfflineReward} is maintained. Do this until $r(S_t)=h(S_t)$. We claim that this change decreases $V^{\mathsf{STP}}(S_t)$, hence $V^{\mathsf{STP}}(S_1)$. To see the claim, note that according to \eqref{eq:costh}, the change reduces the immediate reward given $S_t$ by some amount $\Delta$ and increases the future reward given $S_t$ by no more than $\Delta$. Thus the net effect is to reduce $V^{\mathsf{STP}}(S_t)$. Also, since the value of $h(S_s)$ and the rewards stay the same for every node $S_s$ preceding $S_t$, $V^{\mathsf{STP}}(S_1)$ is reduced. In the tree representation, if there is some highest-level non-terminal node $S_t$ for which $r(S_t) < h(S_t)$, then, conditioned on $S_t$, we can increase $r(S_t)$ by a small amount and scale down $r(S_{t'})$ by some factor for all scenarios $S_{t'}$ descending from $S_t$, using the same factor for all $S_{t'}$, such that the equality in \eqref{eq:OfflineReward} is maintained and all rewards remain non-negative. Do this until $r(S_t)= (h(S_t))^-$, where $(h(S_t))^-$ denotes a value infinitesimally smaller than $h(S_t)$. It is easy to see that this change decreases $V^{\mathsf{STP}}(S_t)$. Hence $V^{\mathsf{STP}}(S_1)$ is decreased as we argued just above. Repeat the previous transformations until at all non-terminal nodes $S_t$, we have $r(S_t) = h(S_t)$ or $r(S_t)= (h(S_t))^-$. \halmos \endproof Lemma \ref{lem:StructureRewards} implies that $\cal{T}(S_T) = \bar \cal{T}$ in the worst-case instance. We will assume for the rest of the subsection that our worst-case data has the structure imposed by Lemma \ref{lem:StructureRewards}. Therefore, for the rest of this subsection, we write the threshold function $h$ in the following alternative way \begin{equation}\label{eq:h} h(S_t) = \mathbb{E}\!\!\left[\frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{1+\sum_{t'=t+1}^T \sum_{i=1}^I p_{it'}(S_{t'})}\ \big|\ S_t\right]. \end{equation} Our second set of structural results concerns the arrival probabilities. \begin{lemma}\label{lem:StructureArrivals} Let $\bar \cal{T}=1$. Assume that the data achieves the worst-case ratio \textcolor{black}{$V^{\mathsf{STP}}/R$}. Without loss of generality, for $T\geq 2$, the following hold: \begin{enumerate} \item There is a unique path $S_1, S_2, \ldots, S_T$ with positive arrival probabilities. \item At every node $S_t$, $t=1,\ldots,T-1$, $r(S_t)=h(S_T)=\mathbb{E}[r(S_T)p(S_T)|S_t]$; \item $p(S_t)=0$, $t=2,\ldots,T$; \item $V^{\mathsf{STP}}(S_1) \geq\frac{R}{2}$. \end{enumerate} \end{lemma} \proof{Proof.} We will prove the lemma by induction on $T$. First, we prove the result for $T=2$. By the tree reward simplifications, $$r(S_1)\approx h(S_1)=\sum_{u \in\mathcal{S}_2} \mathbb{P}(S_2 =u | S_1) \frac{r(u)p(u)}{1+p(u)} \leq \sum_{u \in\mathcal{S}_2}\mathbb{P}(S_2 =u | S_1) r(u)p(u) = \mathbb{E}[V^{\mathsf{STP}}(S_2) | S_1].$$ Therefore $r(S_1)=h(S_1)$ to make $V^{\mathsf{STP}}(S_1)$ as small as possible. Define \begin{eqnarray*} c_2&=&\sum_{u \in \mathcal{S}_2} \mathbb{P}[S_2=u]p(u),\\ R_2&=&\sum_{u \in \mathcal{S}_2} \mathbb{P}[S_2=u]p(u)r(u),\\ c &=&c_2 + p(S_1). 
\end{eqnarray*} \textcolor{black}{Notice that by definition of $R$ we have \[ R = R_2 +p(S_1)r(S_1).\]} Fix $R$, $R_2$, and $c$. Consider what happens when we scale down $c_2$ by a factor $\alpha$, scale up $r(u)$ for leaf nodes $u$ to maintain $R_2$ constant, and scale down $r(S_1)$ to maintain $p(S_1)r(S_1)=R-R_2$ constant. We argue that we will reduce $V^{\mathsf{STP}}(S_1)$ while keeping \textcolor{black}{$R$} constant. Indeed, since the total expected arrival mass must remain $\bar \cal{T}$, after the scaling we have \begin{eqnarray*} p(S_1)&=&\bar \cal{T}-\alpha c_2. \end{eqnarray*} This implies that \begin{eqnarray*} V^{\mathsf{STP}}(S_1) &=& p(S_1)r(S_1) + (1-p(S_1))\sum_{u} \mathbb{P}[S_2=u] p(u)r(u),\\ &=& R-R_2 + (1-\bar \cal{T}+\alpha c_2)R_2. \end{eqnarray*} Here we have used $p(S_1)r(S_1)=R-R_2$ and $p(S_1)=\bar \cal{T}-\alpha c_2$. Hence, \begin{eqnarray*} \frac{\partial V^{\mathsf{STP}}(S_1)}{\partial \alpha} =c_2 R_2\geq 0. \end{eqnarray*} Therefore, \textcolor{black}{$V^{\mathsf{STP}}/R$} is minimized when $\alpha=0$, i.e., $p(S_2)=0$ for all $S_2\in\mathcal{S}_2$. Therefore, the base case is proved. Assume the lemma holds for $T-1$. Fix $c^+ = \sum_{t=2}^T \mathbb{P}(S_t|S_1)p(S_t)$ and $R^+ = \sum_{t=2}^T \mathbb{P}(S_t|S_1)p(S_t)r(S_t)$. Let the immediate successors of $S_1$ be $S_2^k$, $k=1,\ldots,K$. Let $$R^k=\mathbb{E}[\sum_{t=2}^T r(S_t)p(S_t)|S_2^k],$$ $k=1,\ldots,K$. Since the instance that minimizes $V^{\mathsf{STP}}(S_1)$ must minimize $\mathbb{E}[V^{\mathsf{STP}}(S_2)|S_1]$ subject to $c^+$ and $R^+$, we have, by the induction hypothesis, that \begin{eqnarray*} \mathbb{E}[V^{\mathsf{STP}}(S_2)|S_1] &\geq& \sum_{k=1}^K \mathbb{P}(S_2^k)\frac{R^k}{2}\\ &=& \frac{R^+}{2}. \end{eqnarray*} This lower bound is attained when $K=1$, $\sum_{t=2}^T \mathbb{P}(S_t|S^K_2)p(S_t)=1$, $p(S_t)=0$ for all $t=3,\ldots,T$, and $\mathbb{P}(S^K_2|S_1)=c-p(S_1)$. By the induction hypothesis, $V^{\mathsf{STP}}(S_2)=\mathbb{E}[p(S_T)r(S_T)|S_2]=h(S_1)$. Since we also know that $r(S_1) \approx h(S_1)$ by the tree reward simplifications, we conclude that $r(S_1)=h(S_1)$, since the impact on $V^{\mathsf{STP}}(S_1)$ is the same in either case. Thus, \begin{eqnarray*} V^{\mathsf{STP}}(S_1) &=& \mathbb{E}[p(S_T)r(S_T)|S_1]. \end{eqnarray*} We know that \begin{eqnarray*} R &=&p(S_T)r(S_T)(\mathbb{P}(S_T|S_1)+\sum_{s=1}^{T-1}\mathbb{P}(S_s|S_1)\mathbb{P}(S_T|S_s)p(S_s))\\ &=&p(S_T)r(S_T)(\mathbb{P}(S_T|S_1)+\mathbb{P}(S^K_2|S_1)\mathbb{P}(S_T|S^K_2)p(S^K_2)+\mathbb{P}(S_T|S_1)p(S_1))\\ &=&p(S_T)r(S_T)\mathbb{P}(S_T|S_1)(1+p(S^K_2)+p(S_1)). \end{eqnarray*} This implies that \begin{eqnarray*} V^{\mathsf{STP}}(S_1) &=& p(S_T)r(S_T)\mathbb{P}(S_T|S_1)\\ &=& \frac{R}{1+p(S^K_2)+p(S_1)}\\ &\geq&\frac{R}{2}, \end{eqnarray*} with the lower bound being realizable when $p(S^K_2)=0$ and $p(S_1)=1$. By induction, the lemma holds for all $T$. \halmos \endproof Lemmas \ref{lem:StructureRewards} and \ref{lem:StructureArrivals} \textcolor{black}{and Proposition \ref{prop:prophetUpperBound} combine to give us the competitive ratio of $\mathsf{STP}$} directly: \textcolor{black}{ \begin{theorem} \label{thm:admission} For $\bar\cal{T} = 1$, we have \begin{equation} \label{eq:ratio} V^{\mathsf{STP}}(S_1) \geq \frac{R}{2} = \frac{1}{2} \sum_{t=1}^T \sum_{i=1}^I \mathbb{E}[r_{it}p_{it}(S_t)|S_1] \geq \frac{1}{2}\mathbb{E}[ V^\mathsf{OFF}] . \end{equation} \end{theorem} } \subsubsection{Deductive martingale method for proving performance bound.} In this section, we provide an alternative, deductive proof for the performance bound of $\mathsf{STP}$ that works for a more general case, when \textcolor{black}{$\bar \cal{T} >0$}. 
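Before developing this argument, it is instructive to see why a guarantee of essentially $1/2$ is the best one can hope for, even when $\bar \cal{T}=1$. The following two-period instance is included purely as an illustration (it is the standard tight example for prophet-type bounds and is not part of the preceding analysis). Fix $\epsilon\in(0,1)$, and suppose a customer with reward $1$ arrives in period $1$ with probability $1-\epsilon$, and a customer with reward $1/\epsilon$ arrives in period $2$ with probability $\epsilon$, independently across periods, so that the total expected arrival mass equals $\bar \cal{T}=1$ and $R=(1-\epsilon)\cdot 1+\epsilon\cdot\tfrac{1}{\epsilon}=2-\epsilon$. Every online algorithm earns exactly $1$ in expectation here, since both selling to the period-$1$ customer (reward $1$) and waiting for period $2$ (expected reward $\epsilon\cdot\tfrac{1}{\epsilon}=1$) are worth $1$; in particular, $\mathsf{STP}$ earns exactly $1$. Hence the ratio of any online algorithm to $R$ is $1/(2-\epsilon)$, which tends to $1/2$ as $\epsilon\to 0$, so the constant in Theorem \ref{thm:admission} cannot be improved in general.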
Define $\tau \in [T+1]$ as the random period in which the resource is sold to a customer under $\mathsf{STP}$. If the resource is not sold at the end of the last period $T$, we set $\tau = T+1$. In this way, $\tau$ is a stopping time bounded from above by $T+1$. We define $p_{i,T+1}(S_{T+1}) = 0$ for all $i \in [I]$. Define a stochastic process $\{Z(S_t)\}_{t=0,1,\ldots,T+1}$ as \[ Z(S_t) = h(S_t) + \sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})(r_{it'} - h(S_{t'}))^+.\] \begin{proposition}\label{prop:ZReward} $\mathbb{E}[V^\mathsf{STP}] = \mathbb{E}[Z(S_\tau)]$. \end{proposition} \begin{proof}{Proof.} Recall that in any period $t \in [T]$, at most one customer can arrive, i.e., $\sum_{i=1}^I X_{it} \leq 1$ with probability one. For any $t \in [T]$, conditioned on $\tau = t$, i.e., $\mathsf{STP}$ sells the resource in period $t$, the following two conditions must hold: \begin{enumerate} \item Exactly one customer arrives in period $t$, i.e., $\sum_{i=1}^I X_{it} = 1$. \item The type $i$ of the customer who arrives in period $t$ must satisfy the threshold condition $r_{it} \geq h(S_t)$ (so that $\mathsf{STP}$ sells the resource), or more precisely, $\sum_{i=1}^I X_{it} (r_{it} - h(S_t)) \geq 0$. \end{enumerate} Altogether, using the fact that $X_{it}$'s are indicators, we can obtain \begin{equation}\label{eq:ZRewardProof1} \sum_{i=1}^I X_{it} (r_{it} -h(S_t)) = \sum_{i=1}^I X_{it} (r_{it} -h(S_t))^+. \end{equation} The expected reward of $\mathsf{STP}$ is \begin{align*} \mathbb{E}[V^\mathsf{STP}] = & \mathbb{E}\!\left[\sum_{t=1}^T \sum_{i=1}^I X_{it} r_{it} \mathbf{1}(\tau = t)\right]\\ =& \mathbb{E}\!\left[\sum_{t=1}^T \left(\sum_{i=1}^I X_{it} (r_{it} - h(S_t)) +\sum_{i=1}^I X_{it} h(S_t)\right) \mathbf{1}(\tau = t)\right]\\ \overset{\text{\ding{172}}}{=}& \mathbb{E}\!\left[\sum_{t=1}^T \left(\sum_{i=1}^I X_{it} (r_{it} - h(S_t)) + h(S_t)\right) \mathbf{1}(\tau = t)\right]\\ \overset{\text{\ding{173}}}{=}& \mathbb{E}\!\left[\sum_{t=1}^T \left(\sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ + h(S_t)\right) \mathbf{1}(\tau = t)\right]\\ =& \mathbb{E}\!\left[\sum_{t=1}^T \sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ \mathbf{1}(\tau = t)\right] + \mathbb{E}\left[ \sum_{t=1}^T h(S_t) \mathbf{1}(\tau = t)\right]\\ \overset{\text{\ding{174}}}{=}& \mathbb{E}\!\left[\sum_{t=1}^T \sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ \mathbf{1}(\tau = t)\right] + \mathbb{E}[h(S_\tau)]. \end{align*} Above, \ding{172} is because $\sum_{i \in [I]} X_{it} = 1$ conditioned on $\tau = t \in [T]$; \ding{173} is by equation \eqref{eq:ZRewardProof1}; \ding{174} is because the definition of $h(\cdot)$ naturally gives $h(S_{T+1}) = 0$. If $\tau > t$, i.e., $\mathsf{STP}$ does not sell the resource in period $1,2,\ldots,t$, then any customer who arrives in period $t$ must not satisfy the threshold condition. Thus, conditioned on $\tau > t$, we must have $\sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+=0$. 
Consequently, \[ \sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ \mathbf{1}(\tau > t) = 0\] \[\Longrightarrow \sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ \mathbf{1}(\tau = t) = \sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ \mathbf{1}(\tau \geq t).\] Then the expected reward can be further written as \begin{align*} \mathbb{E}[V^\mathsf{STP}] =& \mathbb{E}\!\left[\sum_{t=1}^T \sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ \mathbf{1}(\tau = t)\right] + \mathbb{E}[h(S_\tau)]\\ = & \mathbb{E}\!\left[\sum_{t=1}^T \sum_{i=1}^I X_{it} (r_{it} - h(S_t))^+ \mathbf{1}(\tau \geq t)\right] + \mathbb{E}[h(S_\tau)]\\ = & \mathbb{E}\!\left[\sum_{t=1}^T \mathbb{E}\!\left[\sum_{i=1}^I X_{it}(r_{it} - h(S_t))^+ \mathbf{1}(\tau \geq t)\big| S_t, \{X_{i't'}\}_{i'=1,2,...,I; t'=1,2,...,t-1}\right] \right] + \mathbb{E}[h(S_\tau)]\\ = & \mathbb{E}\!\left[\sum_{t=1}^T \mathbb{E}\!\left[\sum_{i=1}^I X_{it}(r_{it} - h(S_t))^+ \big| S_t, \{X_{i't'}\}_{i'=1,2,...,I; t'=1,2,...,t-1}\right] \mathbf{1}(\tau \geq t)\right] + \mathbb{E}[h(S_\tau)]\\ & \text{(the event $\tau \geq t$ depends only on the information from periods $1$ to $t-1$)}\\ = & \mathbb{E}\!\left[\sum_{t=1}^T \mathbb{E}\!\left[\sum_{i=1}^I X_{it}(r_{it} - h(S_t))^+ | S_t\right] \mathbf{1}(\tau \geq t)\right] + \mathbb{E}[h(S_\tau)]. \end{align*} Finally, we use $p_{it}(S_t) = \mathbb{E}[X_{it}|S_t]$ and $p_{i,T+1}(S_{T+1}) = 0$ to obtain \begin{align*} \mathbb{E}[V^\mathsf{STP}]= & \mathbb{E}\!\left[\sum_{t=1}^T \mathbb{E}\!\left[\sum_{i=1}^I X_{it}(r_{it} - h(S_t))^+ | S_t\right] \mathbf{1}(\tau \geq t)\right] + \mathbb{E}[h(S_\tau)]\\ = & \mathbb{E}\!\left[\sum_{t=1}^T \sum_{i=1}^I p_{it}(S_t)(r_{it} - h(S_t))^+ \mathbf{1}(\tau \geq t)\right] + \mathbb{E}[h(S_\tau)]\\ = & \mathbb{E}\!\left[\sum_{t=1}^\tau \sum_{i=1}^I p_{it}(S_t)(r_{it} - h(S_t))^+ \right] + \mathbb{E}[h(S_\tau)]\\ = & \mathbb{E}[Z(S_\tau)]. \end{align*} \halmos \end{proof} \begin{lemma} \label{lm:ratio} For any $b\geq 1$, $a \geq 0$, $r_1,...,r_n \geq 0$ and $p_1,...,p_n \geq 0$, \[ \frac{a + \sum_{i=1}^n p_i r_i}{b + \sum_{i=1}^n p_i} \leq \frac{a}{b} + \sum_{i=1}^n p_i (r_i - \frac{a}{b})^+.\] \end{lemma} \begin{proof}{Proof.} Let $I \subseteq \{1,2,...,n\}$ be the set such that $r_i \geq a/b$ for all $i \in I$. \begin{align*} & \frac{a}{b} + \sum_{i=1}^n p_i (r_i - \frac{a}{b})^+\\ = & \frac{a}{b} + \sum_{i \in I} p_i (r_i - \frac{a}{b})\\ = & \frac{\left( \frac{a}{b} + \sum_{i \in I} p_i (r_i - \frac{a}{b})\right) ( b + \sum_{i \in I} p_i) }{ b + \sum_{i \in I} p_i}\\ = & \frac{ a + \sum_{i \in I} p_i r_i + (b-1+\sum_{i \in I} p_i)( \sum_{i \in I}p_i (r_i - a/b))}{b + \sum_{i \in I} p_i}\\ \geq & \frac{ a + \sum_{i \in I} p_i r_i }{b + \sum_{i \in I} p_i}\\ \geq & \frac{ a + \sum_{i =1}^n p_i r_i }{b + \sum_{i=1}^n p_i}. \end{align*} The last inequality follows from the fact that for any $j \not\in I$, \[r_j < \frac{a}{b} \leq \frac{ a + \sum_{i \in I} p_i r_i }{b + \sum_{i\in I} p_i}.\] \halmos \end{proof} \begin{proposition} \label{prop:submartingale} The process $\{Z(S_t)\}_{t\geq 0}$ is a sub-martingale with respect to $S_t$. \end{proposition} \begin{proof}{Proof.} For any $t \geq 1$, by definition of $Z(S_t)$ and $Z(S_{t-1})$, we can obtain \begin{align*} & \mathbb{E}[Z(S_{t}) | S_{t-1}]\\ = & \mathbb{E}[h(S_t) + \sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})(r_{it'} - h(S_{t'}))^+ | S_{t-1}]\\ =& \mathbb{E}[h(S_t) - h(S_{t-1}) + \sum_{i=1}^Ip_{it}(S_{t})(r_{it} - h(S_{t}))^+ | S_{t-1}] + Z(S_{t-1}). 
\end{align*} It suffices to prove that $h(S_{t-1})\leq \mathbb{E}\!\left[\, h(S_t) + \sum_{i=1}^I p_{it}(S_{t})(r_{it} - h(S_{t}))^+ \,\big|\, S_{t-1}\right]$. We can derive \begin{align*} & h(S_{t-1}) \\ = & \mathbb{E}\!\!\left[\frac{\sum_{t'=t}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{1 + \bar \cal{T} -\sum_{t'=1}^{t-1} \sum_{i=1}^I p_{it'}(S_{t'})}\ \big|\ S_{t-1}\right]\\ = & \mathbb{E}\!\!\left[ \frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'}) + \sum_{i=1}^I r_{it}p_{it}(S_{t}) }{1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})+ \sum_{i=1}^I p_{it}(S_{t})} \ \big|\ S_{t-1}\right]\\ = & \mathbb{E}\!\!\left[ \mathbb{E}\!\!\left[ \frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'}) + \sum_{i=1}^I r_{it}p_{it}(S_{t}) }{1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})+ \sum_{i=1}^I p_{it}(S_{t})} \big| S_t\right] \ \big|\ S_{t-1}\right]\\ = & \mathbb{E}\!\!\left[ \frac{\mathbb{E}\!\!\left[ \sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'}) | S_t \right] + \sum_{i=1}^I r_{it}p_{it}(S_{t}) }{1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})+ \sum_{i=1}^I p_{it}(S_{t})} \ \big|\ S_{t-1}\right]\\ \leq & \mathbb{E}\!\!\left[ \frac{\mathbb{E}\!\!\left[ \sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'}) | S_t \right]}{1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})} + \sum_{i=1}^I p_{it}(S_t)\left(r_{it} -\frac{\mathbb{E}\!\!\left[ \sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'}) | S_t \right] }{1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})} \right)^+\ \big|\ S_{t-1}\right]\\ = & \mathbb{E}\!\!\left[ \mathbb{E}\!\!\left[\frac{ \sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'}) }{1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})}| S_t \right] + \sum_{i=1}^I p_{it}(S_t)\left(r_{it}-\mathbb{E}\!\!\left[\frac{ \sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'}) }{1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'})}| S_t \right] \right)^+\ \big|\ S_{t-1}\right]\\ = & \mathbb{E}\!\!\left[ h(S_t) + \sum_{i=1}^I p_{it}(S_t)\left(r_{it} -h(S_t) \right)^+\ \big|\ S_{t-1}\right], \end{align*} where the inequality follows from Lemma \ref{lm:ratio} and the fact that $\bar \cal{T}$ is an upper bound on the expected total number of arrivals on any sample path: \[\bar \cal{T} \geq \sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'}) \Longrightarrow 1 + \bar \cal{T}-\sum_{t'=1}^{t} \sum_{i=1}^I p_{it'}(S_{t'}) \geq 1.\] \halmos \end{proof} With Propositions \ref{prop:ZReward} and \ref{prop:submartingale} established, we apply the optional stopping theorem to obtain our main result, namely the prophet inequality under correlated arrival probabilities: \begin{theorem}\label{thm:prophet} \[ \mathbb{E}[V^\mathsf{STP}] \geq \mathbb{E}[h(S_0)] = \frac{\mathbb{E}[\sum_{t=1}^T \sum_{i=1}^I r_{it}p_{it}(S_{t})]}{1 + \bar \cal{T}} \geq \frac{\mathbb{E}[V^\mathsf{OFF}]}{1 + \bar \cal{T}}. \] \end{theorem} \begin{proof}{Proof.} \[ \mathbb{E}[V^\mathsf{STP}] = \mathbb{E}[Z(S_\tau)] \geq \mathbb{E}[Z(S_0)] = \mathbb{E}[h(S_0)] = \frac{\mathbb{E}[\sum_{t=1}^T \sum_{i=1}^I r_{it}p_{it}(S_{t})]}{1 + \bar \cal{T}} \geq \frac{\mathbb{E}[V^\mathsf{OFF}]}{1 + \bar \cal{T}},\] where the first inequality is by the optional stopping theorem for sub-martingales, and the last two relations are given by \eqref{eq:hS0}. \halmos \end{proof} \section{Application to Two-sided Matching Problems} In this section, we describe how our results can be applied to design algorithms for a basic matching problem in two-sided markets. 
\subsection{Model} Again consider a finite planning horizon of $T$ periods. There are $I$ types of demand units and $J$ types of supply units. Both demand and supply units arrive randomly over the $T$ periods. The demand unit of type $i \in [I]$ that arrives at time $t \in [T]$ if any, can be identified using the pair $(i,t)$. Similarly, the supply unit of type $j \in [J]$ that arrives at time $s \in [T]$ if any, can be identified with the pair $(j,s)$. Each demand unit $(i,t)$ has a known non-negative reward $r_{ijts}$ when matched with a supply unit $(j,s)$. This reward can capture how far apart the units are in time and how compatible their respective types are. If types $i$ and $j$ are incompatible, then the reward $r_{ijts}$ could be very small or $0$. If $i$ and $j$ are compatible, then $r_{ijts}$ can decrease with the length of the interval $[s, t]$ to capture the diminishing value of the match when the supply unit must wait for a long time for the demand unit. We assume that supply units can wait but demand units cannot. At the end of each period $t$, after arrivals of demand and supply units have been observed, the demand unit that arrives in period $t$, if any, must be matched immediately to an existing supply unit or rejected. Note that if a supply unit $(j,s)$ can only wait a finite amount of time, then we can require that $r_{ijts}=0$ for any $t$ that is sufficiently large compared to $s$. Let $\Lambda_{it}\in\{0,1\}$ be a random indicator of whether demand unit $(i,t)$ arrives, and $M_{js}\in\{0,1\}$ a random indicator of whether supply unit $(j,s)$ arrives. We assume that all the random indicators $\Lambda_{it}$, $\forall i \in [I], t \in [T]$, and $M_{js}$, $\forall j \in [J], s \in [T]$, are mutually independent. The arrival probabilities $\lambda_{it} := \mathbb{E}[\Lambda_{it}]$ and $\mu_{js} := \mathbb{E}[M_{js}]$ are deterministic and known to the platform a priori. To avoid trivialities in the analysis, we assume all the $\lambda_{it}$ and $\mu_{js}$ are strictly positive, but their values can be arbitrarily small. Thus, regardless of the availability of supply units, the demand arrival processes are \emph{not} correlated a priori. However, for each particular supply unit, the best demand unit that should be assigned to it must depend on the availability of other supply units. When there are many supply units, some of them might not even be assigned to any demand unit. By contrast, when there are very few supply units, each of them can be matched to some demand unit. We also assume that the time increments are sufficiently fine, so that at most one demand unit and one supply unit arrive in any period $t \in [T]$. More precisely, we require $\sum_{i \in [I]} \lambda_{it} \leq 1$ and $\sum_{j \in [J]} \mu_{js} \leq 1$ for all $ t,s \in [T]$. Also similar to \eqref{eq:XitDefinition}, the arrival events can be defined as \begin{equation}\label{eq:LambdaMuDefinition} \Lambda_{it} = \mathbf{1}\{u_t \in [\sum_{k=1}^{i-1} \lambda_{kt}, \sum_{k=1}^i \lambda_{kt})\}, \quad M_{js} = \mathbf{1}\{v_s \in [\sum_{k=1}^{j-1} \mu_{ks}, \sum_{k=1}^j \mu_{ks})\}, \end{equation} where $u_1,\ldots,u_T$ and $v_1,\ldots,v_T$ are mutually independent $[0,1]$ uniform random variables. In any period $t$, the platform first observes $\Lambda_{1t},\ldots,\Lambda_{It}$ and $M_{1t},\ldots, M_{Jt}$. Then, if there is any arriving demand unit in period $t$, the platform uses an online algorithm to make an assignment decision. 
In other words, a demand unit can be matched to any supply unit arriving in the same period or earlier. The objective of the problem is to match demand and supply units in an online manner to maximize the expected total reward earned over the horizon. We do not allow fractional matchings. That is, each demand unit must be matched in whole to a supply unit. \subsection{Offline Algorithm and Its Upper Bound} An optimal offline algorithm $\mathsf{OFF}$ can see the arrivals of all the demand and supply units $(\Lambda,M)$ at the beginning of period 1. Given $(\Lambda,M)$, the maximum offline reward $V^\mathsf{OFF}(\Lambda,M)$ is equal to the value of the following maximum-weight matching problem. \begin{align} \begin{split}\label{eq:LP1} V^\mathsf{OFF}(\Lambda,M) =& \max_{x_{ijts}(\Lambda,M), i \in [I]; j \in [J]; t,s \in [T]} \quad \sum_{i,j,t,s} x_{ijts}(\Lambda,M) r_{ijts}\\ \text{s.t.} & \sum_{i,t} x_{ijts}(\Lambda,M) \leq M_{js}, \quad \forall j \in [J]; s \in [T],\\ & \sum_{j,s} x_{ijts}(\Lambda,M) \leq \Lambda_{it}, \quad \forall i \in [I]; t \in [T],\\ & x_{ijts}(\Lambda,M) \leq \Lambda_{it}M_{js}, \quad \forall i \in [I]; j \in [J]; t,s \in [T],\\ & x_{ijts}(\Lambda,M) = 0, \quad \forall i \in [I]; j \in [J]; t \in [T]; s = t+1,\ldots, T,\\ & x_{ijts}(\Lambda,M) \geq 0, \quad \forall i \in [I]; j \in [J]; t,s \in [T]. \end{split} \end{align} In the above LP, the variable $x_{ijts}(\Lambda,M)$ indicates whether demand unit $(i,t)$ and supply unit $(j,s)$ both arrive \emph{and} $(i,t)$ is assigned to $(j,s)$. The fourth constraint requires that a demand unit in period $t$ cannot be matched to any supply unit arriving later than $t$. The competitive ratio $c$ of an online algorithm $\mathsf{ON}$ is similarly defined as \[c = \mathbb{E}[V^\mathsf{ON}]/\mathbb{E}[V^\mathsf{OFF}(\Lambda,M)],\] where $V^\mathsf{ON}$ is the total reward of the online algorithm, and the expectation is taken over $(\Lambda, M)$. Note that \eqref{eq:LP1} cannot be solved without a priori access to the realizations of $(\Lambda,M)$. Thus, we are interested in finding an upper bound on the expected optimal offline reward $\mathbb{E}[V^\mathsf{OFF}(\Lambda,M)]$ when we \emph{do not} have such a priori access. The following LP solves for the total probabilities $x_{ijts}$ that demand unit $(i,t)$ arrives and is assigned to $(j,s)$. \begin{align} \begin{split}\label{eq:LP3} \max_{x_{ijts}} & \,\, \sum_{i,j,t,s} x_{ijts} r_{ijts}\\ \mbox{s.t. }& \sum_{i,t} x_{ijts} \leq \mu_{js}, \quad \forall j \in [J]; s \in [T],\\ & \sum_{j,s} x_{ijts} \leq \lambda_{it}, \quad \forall i \in [I]; t \in [T],\\ & x_{ijts} \leq \lambda_{it}\mu_{js}, \quad \forall i \in [I]; j \in [J]; t,s \in [T],\\ & x_{ijts} = 0, \quad \forall i \in [I]; j \in [J]; t \in [T]; s = t+1,\ldots, T,\\ & x_{ijts} \geq 0, \quad \forall i \in [I]; j \in [J]; t,s \in [T]. \end{split} \end{align} The constraints above are derived from those of \eqref{eq:LP1}. \begin{theorem}\label{thm:upperbound} The optimal objective value of \eqref{eq:LP3} is an upper bound on $\mathbb{E}[V^\mathsf{OFF}(\Lambda,M)]$. \end{theorem} \proof{Proof.} Let $x^*_{ijts}(\Lambda,M)$ be an optimal solution to LP \eqref{eq:LP1}. 
Define a solution to LP \eqref{eq:LP3} as \[ \bar x_{ijts} := \mathbb{E}[x^*_{ijts}(\Lambda,M)].\] Since $\sum_{i,t} x^*_{ijts}(\Lambda,M) \leq M_{js}$, $\sum_{j,s} x^*_{ijts}(\Lambda,M) \leq \Lambda_{it}$, and $x^*_{ijts}(\Lambda,M) \leq \Lambda_{it}M_{js}$ are required in \eqref{eq:LP1}, we must have \begin{align*} &\sum_{i,t} \bar x_{ijts} =\sum_{i,t} \mathbb{E}[x^*_{ijts}(\Lambda,M)] \leq \mathbb{E}[M_{js}] = \mu_{js},\\ &\sum_{j,s} \bar x_{ijts} =\sum_{j,s} \mathbb{E}[x^*_{ijts}(\Lambda,M)] \leq \mathbb{E}[\Lambda_{it}] = \lambda_{it},\\ &\bar x_{ijts} = \mathbb{E}[x^*_{ijts}(\Lambda,M)] \leq \mathbb{E}[\Lambda_{it}M_{js}] = \lambda_{it} \mu_{js}. \end{align*} Also, $x^*_{ijts}(\Lambda,M) = 0$ for all $s > t$ implies $\bar x_{ijts} = 0$ for all $s > t$. Thus, $\bar x_{ijts}$ is a feasible solution to LP (\ref{eq:LP3}). It follows that the optimal value of LP (\ref{eq:LP3}) is an upper bound on \[ \sum_{i,j,t,s} \bar x_{ijts} r_{ijts} = \sum_{i,j,t,s} \mathbb{E}[x^*_{ijts}(\Lambda,M)] r_{ijts} = \mathbb{E}[V^\mathsf{OFF}(\Lambda,M)]. \] \halmos \endproof \subsection{Online Algorithm for Two-Sided Matching} In this section, we describe and analyze a matching algorithm. The algorithm is composed of two simpler sub-routines, a Separation Subroutine and an Admission Subroutine. The Separation Subroutine randomly samples a supply unit for each incoming demand unit. This sampling splits the demand arrivals into separate arrival streams, each coming to a separate supply unit. Subsequently, for each supply unit independently, the Admission Subroutine uses the algorithm in Section \ref{sec:prophetAlg} to control the matching of at most one among all incoming demand units to it. Let $\mathcal{S}_t := \{0,1\}^{J\times t}$. Define $S_t \in \mathcal{S}_t$ as the information set that records the arrivals of supply units up to period $t$. That is, \[S_t= (\{M_{j1}\}_{j=1,2,...,J}, \{M_{j2}\}_{j=1,2,...,J},..., \{M_{jt}\}_{j=1,2,...,J}).\] For convenience, let $S_0$ be a dummy constant. In our analysis, $S_T = M$ is the \emph{sample path} of scenarios defined in Section \ref{sec:prophetModel}. The matching algorithm first needs to compute an optimal solution $x^*$ to LP \eqref{eq:LP3}. Then, the Separation Subroutine calculates a probability \begin{equation}\label{eq:pijts} p_{ijts}(S_t) := \frac{ \min(\lambda_{it}, \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} )}{ \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} } \cdot M_{js} \frac{x^*_{ijts}}{\mu_{js}} \end{equation} of choosing $(j,s)$ as a candidate supply unit to be matched to $(i,t)$. Note that if $s > t$, then we must have $x^*_{ijts} = 0$ (see LP \eqref{eq:LP3}) and thus $p_{ijts}(S_t) = 0$. That is, our algorithm never tries to match a demand unit in period $t$ to a supply unit arriving later than $t$. When applying the prophet inequality theory developed in previous sections, we will fix a supply unit $(j,s)$, and think of $(p_{ijts}(S_t))_{i,t}$ as the probabilities that demand units ``arrive'' at $(j,s)$. We first establish some important properties regarding the arrival probabilities $p_{ijts}(\cdot)$. \begin{proposition}\label{prop:pijts} \begin{enumerate} \item[] \item[1.] $\sum_{i=1}^I \sum_{t=1}^T p_{ijts}(S_t) \leq 1$, for all $j \in [J]$ and $s \in [T]$. \item[2.] $\sum_{j=1}^J\sum_{s=1}^t p_{ijts}(S_t) \leq \lambda_{it}$, for all $i \in [I]$ and $t \in [T]$. 
\end{enumerate} \end{proposition} \begin{proof}{Proof.} \begin{align*} \sum_{i=1}^I \sum_{t=1}^T p_{ijts}(S_t) & = \sum_{i=1}^I \sum_{t=1}^T \frac{ \min(\lambda_{it}, \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} )}{ \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} } \cdot M_{js} \frac{x^*_{ijts}}{\mu_{js}}\\ & \leq M_{js} \sum_{i=1}^I \sum_{t=1}^T \frac{x^*_{ijts}}{\mu_{js}}\\ & \leq M_{js}\\ & \leq 1, \end{align*} where the second inequality is given by the first constraint of LP (\ref{eq:LP3}). We can then derive \begin{align*} & \sum_{j=1}^J\sum_{s=1}^t p_{ijts}(S_t) \\ = & \sum_{j=1}^J\sum_{s=1}^t \frac{ \min(\lambda_{it}, \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} )}{ \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} } \cdot M_{js} \frac{x^*_{ijts}}{\mu_{js}} \\ = & \frac{ \min(\lambda_{it}, \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} )}{ \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} } \cdot \sum_{j=1}^J\sum_{s=1}^t M_{js} \frac{x^*_{ijts}}{\mu_{js}}\\ = &\min(\lambda_{it}, \sum_{j'=1}^J \sum_{s'=1}^t M_{j's'}\frac{x^*_{ij'ts'}}{\mu_{j's'}} )\\ \leq & \lambda_{it}. \end{align*} \halmos \end{proof} \noindent {\bf Online Matching Algorithm:} \begin{itemize} \item (Initialization) Solve \eqref{eq:LP3} for an optimal solution $x^*$. \item Upon an arrival of a demand unit $(i,t)$ in period $t$: \begin{enumerate} \item (Separation Subroutine) Randomly pick a supply unit $(j,s)$ with probability $p_{ijts}(S_t)/\lambda_{it}$, for all $j \in [J]$, $s \in [t]$ (recall that we assume $\lambda_{it}$ to be strictly positive). Notice that by definition of $p_{ijts}(\cdot)$, only those supply units that have arrived (i.e., satisfy $M_{js}=1$ and $s \leq t$) can have a positive probability to be picked. Also, since Proposition \ref{prop:pijts} gives $\sum_{j,s} p_{ijts}(S_t) / \lambda_{it} \leq 1$, if the inequality is strict (i.e., $\sum_{j,s} p_{ijts}(S_t) / \lambda_{it} < 1$), then it is possible that no supply unit is picked. In such a case, reject the demand unit directly. \item (Admission Subroutine) Let $X_{ijts}$ be the indicator of whether demand unit $(i,t)$ arrives \emph{and} the Separation Routine picks supply unit $(j,s)$. We have \[ \mathbb{E}[ X_{ijts} | S_t] = \mathbb{P}(\Lambda_{it}=1) \cdot p_{ijts}(S_t)/\lambda_{it} = \lambda_{it} \cdot p_{ijts}(S_t)/\lambda_{it} = p_{ijts}(S_t).\] For the supply unit $(j,s)$ picked by the Separation Subroutine (i.e., $X_{ijts}=1$), we apply algorithm $\mathsf{STP}$ by viewing $(p_{ijts}(S_t))_{i,t}$ as the sequence of correlated arrival probabilities. Specifically, match $(i,t)$ to $(j,s)$ if $(j,s)$ is still available and $r_{ijts} \geq h_{js}(S_t)$, where \[ h_{js}(S_t) := \mathbb{E}\!\!\left[\frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{ijt's}p_{ijt's}(S_{t'})}{2 -\sum_{t'=1}^t \sum_{i=1}^I p_{ijt's}(S_{t'})}\ \big|\ S_t\right].\] Notice that we have chosen $\bar \cal{T} = 1$ (see \eqref{eq:hnew}) for this two-sided online matching problem. This is because the first property of Proposition \ref{prop:pijts} guarantees $\sum_{i=1}^I \sum_{t=1}^T p_{ijts}(S_t) \leq 1 = \bar \cal{T}$. \end{enumerate} \end{itemize} \subsection{Performance of the Online Algorithm} We first establish an approximation bound for the Separation Subroutine, which relates the arrival probabilities $p_{ijts}(\cdot)$ to the LP upper bound \eqref{eq:LP3}. We start with a technical lemma. 
\begin{lemma}\label{lm:routinglm1} For any $\lambda > 0$ and $x > 0$, \[ \frac{\min(\lambda,x)}{x} \geq 1 - \frac{1}{4\lambda} x.\] \end{lemma} \begin{proof}{Proof.} For $x \leq \lambda$, $\min(\lambda, x) / x = 1 \geq 1 - \frac{1}{4\lambda} x$. For $x > \lambda$, \begin{align*} & \frac{\min(\lambda,x)}{x} - (1 - \frac{1}{4\lambda} x)\\ = & \frac{\lambda}{x} - 1 + \frac{1}{4\lambda} x\\ = & \frac{\lambda}{x} + \frac{1}{4} \cdot \frac{x}{\lambda} - 1\\ \geq & 2 \cdot \sqrt{\frac{\lambda}{x}} \cdot \frac{1}{2} \sqrt{\frac{x}{\lambda}} - 1\\ \geq & 0. \end{align*} \halmos \end{proof} Using the above lemma and the definition of $p_{ijts}(\cdot)$, we are ready to relate $\mathbb{E}[p_{ijts}(S_t)]$ to $x^*_{ijts}$. \begin{theorem}\label{thm:pijtsBound} $\mathbb{E}[p_{ijts}(S_t)] \geq 0.5 x^*_{ijts}$. \end{theorem} \begin{proof}{Proof.} Fix any $i,j,t,s$; $M$ (and thus $S_t$) remains random. For any supply unit $(j',s')$, define \[ Y_{j's'} \equiv M_{j's'}\cdot x^*_{ij'ts'} / \mu_{j's'}.\] Note that $\mathbb{E}[Y_{j's'}] = x^*_{ij'ts'} / \mu_{j's'} \cdot \mathbb{P}(M_{j's'}=1) = x^*_{ij'ts'} / \mu_{j's'} \cdot \mu_{j's'} = x^*_{ij'ts'}$. We can then deduce \begin{align*} & \mathbb{E}[p_{ijts}(S_t)]\\ = & \mathbb{E}[\frac{\min(\lambda_{it}, \sum_{j',s'} Y_{j's'})}{\sum_{j',s'} Y_{j's'}} \cdot Y_{js}]\\ \geq & \mathbb{E}[ (1 - \frac{1}{4\lambda_{it}} \sum_{j',s'} Y_{j's'}) \cdot Y_{js}] \\ & \,\,\,\,\,\text{(by Lemma \ref{lm:routinglm1}; if $\sum_{j',s'} Y_{j's'} = 0$, we have $Y_{js} = 0$ so the inequality still holds)}\\ = & \mathbb{E}[1 - \frac{1}{4\lambda_{it}} \sum_{(j',s') \neq (j,s)} Y_{j's'}] \mathbb{E}[Y_{js}] - \frac{1}{4 \lambda_{it}}\mathbb{E}[Y_{js}^2] \\ = & (1 - \frac{1}{4\lambda_{it}} \sum_{(j',s') \neq (j,s)}x^*_{ij'ts'})x^*_{ijts} - \frac{1}{4\lambda_{it}} x^*_{ijts}\cdot x^*_{ijts} / \mu_{js}\\ \geq & (1 - \frac{1}{4})x^*_{ijts} - \frac{1}{4} x^*_{ijts}\\ & \,\,\,\,\,\text{(because the constraints of LP \eqref{eq:LP3} require $\sum_{j's'}x^*_{ij'ts'} \leq \lambda_{it}$ and $x^*_{ijts} \leq \mu_{js} \lambda_{it}$)}\\ =& \frac{1}{2} x^*_{ijts}. \end{align*} \halmos \end{proof} Now, we tie together the above approximation bound with the prophet inequality established in Section \ref{sec:prophetAlg}: \begin{theorem} The total reward $V^\mathsf{ON}$ of our matching algorithm satisfies \[\mathbb{E}[V^\mathsf{ON}] \geq \frac{1}{4} \mathbb{E}[V^\mathsf{OFF}(\Lambda,M)].\] \end{theorem} \proof{Proof.} Fix any $(j,s)$ for $j \in [J]$ and $s \in [T]$. A demand unit $(i,t)$ is matched to $(j,s)$ if and only if $X_{ijts}=1$ and $r_{ijts} \geq h_{js}(S_t)$. This is exactly the single-resource problem presented in Section \ref{sec:prophetModel}. 
Therefore, by Theorem \ref{thm:prophet}, the expected total reward earned from $(j,s)$ is at least (recall that we choose $\bar \cal{T} = 1$ for this two-sided online matching problem) \[ \mathbb{E}[h_{js}(S_0)] = \frac{ \mathbb{E}[ \sum_{t=1}^T \sum_{i=1}^I r_{ijts}p_{ijts}(S_{t})]}{2}.\] We then use Theorem \ref{thm:pijtsBound} to obtain \[ \mathbb{E}[h_{js}(S_0)] = \frac{ \mathbb{E}[ \sum_{t=1}^T \sum_{i=1}^I r_{ijts}p_{ijts}(S_{t})]}{2} = \frac{ \sum_{t=1}^T \sum_{i=1}^I r_{ijts}\mathbb{E}[ p_{ijts}(S_{t})]}{2} \geq \frac{ \sum_{t=1}^T \sum_{i=1}^I r_{ijts} x^*_{ijts}}{4}.\] Consequently, the total expected reward summed over all supply units is at least \[ \sum_{j \in [J]} \sum_{s \in [T]} \mathbb{E}[h_{js}(S_0)] \geq \sum_{j \in [J]} \sum_{s \in [T]}\frac{ \sum_{t \in [T]} \sum_{i \in [I]} r_{ijts} x^*_{ijts}}{4} \geq \frac{1}{4} \mathbb{E}[V^\mathsf{OFF}(\Lambda,M)], \] where the final inequality follows from Theorem \ref{thm:upperbound}. \halmos \endproof Note that the ratio of $1/4$ above results from a loss of a factor of $1/2$ from the solution of Prophet Inequalities, and another factor of $1/2$ from the tractable approximation to the upper-bound deterministic LP. \textcolor{black}{Since $1/2$ is an upper bound on the competitive ratio for the standard prophet inequality (which is a special case of our prophet inequality with correlated arrival probabilities), any improvement to the bound of the online algorithm must come from refining the solution to the LP.} \section{Numerical Studies} \label{sec:numerical} In this section, we conduct numerical experiments to explore the performance of our algorithms. We model our experiments on applications that match employers with freelancers for short-term projects, such as web design, art painting, and data entry. We assume there are $30$ employer (demand) types and $30$ worker (supply) types. We set the reward $r_{ijts}$ according to the formula \[ r_{ijts} = s_{ij} \cdot f_{ts} \cdot g_{ij},\] where we use $s_{ij}, f_{ts}$ and $g_{ij}$ to capture three different aspects of a matching: \begin{itemize} \item \textbf{Ability to accomplish tasks.} $s_{ij}$ represents the ability of workers of type $j$ to work for employers of type $i$. We randomly draw $s_{ij}$ from a normal distribution $\cal{N}(0,1)$ for each pair $(i,j)$. In particular, if $s_{ij} < 0$, the reward of the matching will be negative, and thus no algorithm will ever match worker type $j$ to employer type $i$. \item \textbf{Idle time of workers.} It may be wise to limit the total time that a worker is idle in the system before being assigned a job. Thus, we set \[ f_{ts} = 1-\alpha + \alpha e^{-(t-s)/\tau}\] so that the reward of a matching is discounted by $\alpha$ when the idle time of the worker exceeds $\tau$. \item \textbf{Geographical distance.} Certain freelance jobs may require a short commute distance between workers and employers. For demonstration purpose, we assume that each worker type and employer type is associated with a random zip code in Manhattan, with probability proportional to the total population in the zip code zone. Let $d(i,j)$ be the Manhattan distance between the centers of zip code zones of worker type $j$ and employer type $i$. We assume that \[ g_{ij} = 1 - \beta + \beta e^{-d(i,j) / \omega},\] so the reward is discounted by $\beta$ when the commute distance exceeds $\omega$. \end{itemize} We consider a horizon of 60 periods. Depending on the application, one period may correspond to a day or a 10-minute span. 
In any period, a random number of workers may sign in to be ready to provide service. The type of a worker is uniformly drawn from all the worker types. Let $\mu(t)$ be the rate at which workers appear in the system. Similarly, when an employer arrives, the type of the employer is uniformly drawn from all the employer types. Let $\lambda(t)$ be the arrival rate of employers. We randomly generate multiple test scenarios. In each scenario, we independently draw $\mu(t)$ and $\lambda(t)$, for every period $t$, from a uniform distribution over $[0,1]$. Given the rates $\mu(t)$ and $\lambda(t)$, we further vary other model parameters by first choosing the base case to be $\alpha = 0.5$, $\tau = 10 \text{ (periods)}$, $\beta = 0.5$, $\omega = 0.05^\circ$, and then each time varying one of these parameters. We test the following algorithms: \begin{itemize} \item ($\mathsf{ON}$) Our online algorithm without resource sharing. \item ($\mathsf{ON}_+^1$) A variant of our online algorithm with resource sharing. When $\mathsf{ON}_+^1$ rejects a customer in the admission subroutine, $\mathsf{ON}_+^1$ offers another resource with the largest non-negative margin \[ \mathbb{E}\left[ \frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{2 - \sum_{t'=1}^t \sum_{i=1}^I p_{it'}(S_{t'})} \big|\ S_t\right] - r_{ijts}.\] \item ($\mathsf{ON}_+^2$) A variant of our online algorithm with resource sharing. When $\mathsf{ON}_+^2$ rejects a customer in the admission subroutine, $\mathsf{ON}_+^2$ offers another resource with the largest non-negative margin \[ 130\% \times \mathbb{E}\left[ \frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{2 - \sum_{t'=1}^t \sum_{i=1}^I p_{it'}(S_{t'})} \big|\ S_t\right] - r_{ijts}.\] \item ($\mathsf{ON}_+^3$) A variant of our online algorithm with resource sharing. When $\mathsf{ON}_+^3$ rejects a customer in the admission subroutine, $\mathsf{ON}_+^3$ offers another resource with the largest non-negative margin \[ 160\% \times\mathbb{E}\left[ \frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{2 - \sum_{t'=1}^t \sum_{i=1}^I p_{it'}(S_{t'})} \big|\ S_t\right] - r_{ijts}.\] \item ($\mathsf{ON}_+^4$) A variant of our online algorithm with resource sharing. When $\mathsf{ON}_+^4$ rejects a customer in the admission subroutine, $\mathsf{ON}_+^4$ offers another resource with the largest non-negative margin \[ 200\% \times \mathbb{E}\left[ \frac{\sum_{t'=t+1}^T \sum_{i=1}^I r_{it'}p_{it'}(S_{t'})}{2 - \sum_{t'=1}^t \sum_{i=1}^I p_{it'}(S_{t'})} \big|\ S_t\right] - r_{ijts}.\] \item A greedy algorithm that always offers a resource with the highest reward. \item A bid-price heuristic based on the optimal dual prices of LP (\ref{eq:LP1}) \end{itemize} We report numerical results in Tables \ref{tab:simulation1} to \ref{tab:simulation4}, where the performance of each algorithm is simulated using 1000 replicates. For our online algorithms, in each period we compute the threshold $h_{js}(S_t)$ by simulating 100 future sample paths. We find that, despite the $1/4$ provable ratio, the algorithm $\mathsf{ON}$ captures about half of the offline expected reward, and the improved algorithms $\mathsf{ON}_+^1$ and $\mathsf{ON}_+^2$ capture $65\%$ to $70\%$ of the offline expected reward. Moreover, the improved algorithms outperform the greedy and the bid-price heuristics in all scenarios. These results demonstrate the advantage of using our online algorithms as they have not only optimized performance in the worst-case scenario, but satisfactory performance on average as well. 
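For concreteness, the following minimal sketch (in Python) shows how a reward tensor of the form $r_{ijts}=s_{ij}\cdot f_{ts}\cdot g_{ij}$ used in these experiments can be generated. The base-case constants follow the description above, while the distance function \texttt{d} is a generic placeholder assumed purely for illustration (the experiments use Manhattan distances between zip-code centers).
\begin{verbatim}
import math, random

I = J = 30                  # employer (demand) and worker (supply) types
T = 60                      # planning horizon
alpha, tau = 0.5, 10.0      # idle-time discount parameters (base case)
beta, omega = 0.5, 0.05     # distance discount parameters (base case)

s = [[random.gauss(0.0, 1.0) for _ in range(J)] for _ in range(I)]  # s_ij ~ N(0,1)
# d[i][j]: commute distance between types; a placeholder assumed for illustration
d = [[random.random() for _ in range(J)] for _ in range(I)]

def f(t, s_period):
    """Idle-time discount f_{ts}; only t >= s_period is meaningful."""
    return 1.0 - alpha + alpha * math.exp(-(t - s_period) / tau)

def g(i, j):
    """Geographical-distance discount g_{ij}."""
    return 1.0 - beta + beta * math.exp(-d[i][j] / omega)

def reward(i, j, t, s_period):
    """r_{ijts} = s_ij * f_ts * g_ij; zero when the supply unit arrives after t."""
    if s_period > t:
        return 0.0
    return s[i][j] * f(t, s_period) * g(i, j)
\end{verbatim}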
\begin{table} \begin{center} \caption{Scenario 1. Performance of different algorithms relative to LP (\ref{eq:LP1}).} \label{tab:simulation1} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $\mathsf{ON}$ & Greedy & BPH & $\mathsf{ON}_+^1$ & $\mathsf{ON}_+^2$ & $\mathsf{ON}_+^3$ & $\mathsf{ON}_+^4$\\ \hline Base &$ 49.8\% $&$ 62.2\% $&$ 63.7\% $&$ 66.1\% $&$ 67.9\% $&$ 67.9\% $&$ 67.5\% $\\ $ \alpha = 0 $&$ 48.0\% $&$ 56.9\% $&$ 63.4\% $&$ 65.3\% $&$ 67.6\% $&$ 67.9\% $&$ 67.4\% $\\ $ \alpha = 0.2 $&$ 48.6\% $&$ 58.8\% $&$ 64.0\% $&$ 65.7\% $&$ 67.8\% $&$ 67.9\% $&$ 67.5\% $\\ $ \alpha = 0.8 $&$ 51.8\% $&$ 64.9\% $&$ 62.9\% $&$ 66.1\% $&$ 67.6\% $&$ 67.6\% $&$ 67.2\% $\\ $ \alpha = 1 $&$ 53.4\% $&$ 65.3\% $&$ 64.0\% $&$ 65.7\% $&$ 66.9\% $&$ 66.8\% $&$ 66.4\% $\\ $ \tau = 2 $&$ 50.3\% $&$ 62.8\% $&$ 65.7\% $&$ 66.9\% $&$ 68.8\% $&$ 69.0\% $&$ 68.3\% $\\ $ \tau = 5 $&$ 50.3\% $&$ 63.1\% $&$ 64.8\% $&$ 66.6\% $&$ 68.2\% $&$ 68.5\% $&$ 67.9\% $\\ $ \tau = 20 $&$ 49.3\% $&$ 60.8\% $&$ 63.2\% $&$ 65.8\% $&$ 67.5\% $&$ 67.8\% $&$ 67.4\% $\\ $ \tau = 30 $&$ 49.1\% $&$ 60.0\% $&$ 63.2\% $&$ 65.6\% $&$ 67.6\% $&$ 67.7\% $&$ 67.1\% $\\ $ \beta = 0 $&$ 49.9\% $&$ 62.6\% $&$ 63.6\% $&$ 66.1\% $&$ 68.0\% $&$ 68.1\% $&$ 67.5\% $\\ $ \beta = 0.2 $&$ 49.9\% $&$ 62.6\% $&$ 63.7\% $&$ 66.1\% $&$ 68.0\% $&$ 67.9\% $&$ 67.7\% $\\ $ \beta = 0.8 $&$ 49.9\% $&$ 60.4\% $&$ 63.2\% $&$ 65.3\% $&$ 67.0\% $&$ 67.1\% $&$ 66.6\% $\\ $ \beta = 1 $&$ 49.6\% $&$ 56.9\% $&$ 62.4\% $&$ 64.0\% $&$ 65.7\% $&$ 65.8\% $&$ 65.4\% $\\ $ \omega = 0.005 $&$ 49.7\% $&$ 61.1\% $&$ 63.0\% $&$ 65.6\% $&$ 67.4\% $&$ 67.4\% $&$ 66.9\% $\\ $ \omega = 0.02 $&$ 49.8\% $&$ 61.6\% $&$ 63.5\% $&$ 65.8\% $&$ 67.6\% $&$ 67.6\% $&$ 67.1\% $\\ $ \omega = 0.08 $&$ 49.9\% $&$ 62.5\% $&$ 63.8\% $&$ 66.2\% $&$ 68.0\% $&$ 68.1\% $&$ 67.5\% $\\ $ \omega = 0.15 $&$ 50.1\% $&$ 62.7\% $&$ 63.7\% $&$ 66.1\% $&$ 68.0\% $&$ 68.2\% $&$ 67.6\% $\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Scenario 2. 
Performance of different algorithms relative to LP (\ref{eq:LP1}).} \label{tab:simulation2} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $\mathsf{ON}$ & Greedy & BPH & $\mathsf{ON}_+^1$ & $\mathsf{ON}_+^2$ & $\mathsf{ON}_+^3$ & $\mathsf{ON}_+^4$\\ \hline Base &$ 50.3\% $&$ 65.4\% $&$ 66.6\% $&$ 67.7\% $&$ 69.6\% $&$ 69.4\% $&$ 69.2\% $\\ $ \alpha = 0 $&$ 48.2\% $&$ 61.2\% $&$ 67.3\% $&$ 67.9\% $&$ 70.1\% $&$ 70.3\% $&$ 70.0\% $\\ $ \alpha = 0.2 $&$ 49.2\% $&$ 63.1\% $&$ 67.3\% $&$ 67.8\% $&$ 69.9\% $&$ 70.1\% $&$ 69.7\% $\\ $ \alpha = 0.8 $&$ 51.7\% $&$ 66.5\% $&$ 65.1\% $&$ 67.0\% $&$ 68.7\% $&$ 68.4\% $&$ 68.2\% $\\ $ \alpha = 1 $&$ 52.5\% $&$ 66.5\% $&$ 65.2\% $&$ 66.1\% $&$ 67.7\% $&$ 67.5\% $&$ 67.1\% $\\ $ \tau = 2 $&$ 51.3\% $&$ 67.4\% $&$ 69.3\% $&$ 69.4\% $&$ 71.5\% $&$ 71.3\% $&$ 71.0\% $\\ $ \tau = 5 $&$ 50.9\% $&$ 66.6\% $&$ 67.8\% $&$ 68.3\% $&$ 70.1\% $&$ 70.0\% $&$ 69.7\% $\\ $ \tau = 20 $&$ 49.7\% $&$ 64.1\% $&$ 66.3\% $&$ 67.4\% $&$ 69.6\% $&$ 69.5\% $&$ 69.2\% $\\ $ \tau = 30 $&$ 49.4\% $&$ 63.5\% $&$ 66.3\% $&$ 67.6\% $&$ 69.6\% $&$ 69.5\% $&$ 69.4\% $\\ $ \beta = 0 $&$ 50.5\% $&$ 66.5\% $&$ 67.1\% $&$ 68.6\% $&$ 70.5\% $&$ 70.4\% $&$ 69.9\% $\\ $ \beta = 0.2 $&$ 50.5\% $&$ 66.4\% $&$ 66.9\% $&$ 68.3\% $&$ 70.3\% $&$ 70.0\% $&$ 69.8\% $\\ $ \beta = 0.8 $&$ 50.1\% $&$ 62.4\% $&$ 65.9\% $&$ 66.2\% $&$ 68.1\% $&$ 67.8\% $&$ 67.7\% $\\ $ \beta = 1 $&$ 50.4\% $&$ 58.2\% $&$ 64.8\% $&$ 64.5\% $&$ 66.5\% $&$ 66.3\% $&$ 66.1\% $\\ $ \omega = 0.005 $&$ 50.2\% $&$ 64.6\% $&$ 66.0\% $&$ 67.1\% $&$ 68.9\% $&$ 68.7\% $&$ 68.6\% $\\ $ \omega = 0.02 $&$ 50.2\% $&$ 64.9\% $&$ 66.2\% $&$ 67.3\% $&$ 69.3\% $&$ 69.0\% $&$ 68.8\% $\\ $ \omega = 0.08 $&$ 50.4\% $&$ 65.8\% $&$ 66.7\% $&$ 68.0\% $&$ 69.8\% $&$ 69.7\% $&$ 69.5\% $\\ $ \omega = 0.15 $&$ 50.3\% $&$ 66.1\% $&$ 67.0\% $&$ 68.2\% $&$ 70.2\% $&$ 70.1\% $&$ 69.8\% $\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Scenario 3. 
Performance of different algorithms relative to LP (\ref{eq:LP1}).} \label{tab:simulation3} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $\mathsf{ON}$ & Greedy & BPH & $\mathsf{ON}_+^1$ & $\mathsf{ON}_+^2$ & $\mathsf{ON}_+^3$ & $\mathsf{ON}_+^4$\\ \hline Base &$ 51.2\% $&$ 63.5\% $&$ 65.2\% $&$ 67.0\% $&$ 68.2\% $&$ 67.7\% $&$ 67.1\% $\\ $ \alpha = 0 $&$ 48.6\% $&$ 58.1\% $&$ 65.8\% $&$ 66.7\% $&$ 68.2\% $&$ 67.7\% $&$ 66.8\% $\\ $ \alpha = 0.2 $&$ 49.7\% $&$ 60.2\% $&$ 66.1\% $&$ 66.8\% $&$ 68.1\% $&$ 67.6\% $&$ 66.9\% $\\ $ \alpha = 0.8 $&$ 53.2\% $&$ 65.9\% $&$ 63.8\% $&$ 66.3\% $&$ 67.6\% $&$ 67.1\% $&$ 66.6\% $\\ $ \alpha = 1 $&$ 54.2\% $&$ 66.2\% $&$ 64.7\% $&$ 65.4\% $&$ 66.5\% $&$ 66.0\% $&$ 65.6\% $\\ $ \tau = 2 $&$ 52.1\% $&$ 64.9\% $&$ 68.1\% $&$ 68.2\% $&$ 69.5\% $&$ 69.0\% $&$ 68.5\% $\\ $ \tau = 5 $&$ 52.0\% $&$ 64.8\% $&$ 66.7\% $&$ 67.4\% $&$ 68.7\% $&$ 68.2\% $&$ 67.6\% $\\ $ \tau = 20 $&$ 50.4\% $&$ 61.9\% $&$ 64.7\% $&$ 66.6\% $&$ 68.1\% $&$ 67.4\% $&$ 66.9\% $\\ $ \tau = 30 $&$ 50.1\% $&$ 61.0\% $&$ 64.9\% $&$ 66.7\% $&$ 68.0\% $&$ 67.5\% $&$ 66.9\% $\\ $ \beta = 0 $&$ 51.0\% $&$ 64.4\% $&$ 65.3\% $&$ 67.4\% $&$ 68.9\% $&$ 68.2\% $&$ 67.6\% $\\ $ \beta = 0.2 $&$ 51.4\% $&$ 64.4\% $&$ 65.1\% $&$ 67.4\% $&$ 68.6\% $&$ 68.1\% $&$ 67.5\% $\\ $ \beta = 0.8 $&$ 50.8\% $&$ 60.5\% $&$ 64.8\% $&$ 64.9\% $&$ 66.3\% $&$ 65.8\% $&$ 65.0\% $\\ $ \beta = 1 $&$ 50.8\% $&$ 56.3\% $&$ 64.0\% $&$ 63.4\% $&$ 64.7\% $&$ 64.2\% $&$ 63.4\% $\\ $ \omega = 0.005 $&$ 50.6\% $&$ 62.8\% $&$ 65.2\% $&$ 66.4\% $&$ 67.7\% $&$ 67.2\% $&$ 66.5\% $\\ $ \omega = 0.02 $&$ 51.0\% $&$ 62.9\% $&$ 65.0\% $&$ 66.4\% $&$ 67.8\% $&$ 67.3\% $&$ 66.5\% $\\ $ \omega = 0.08 $&$ 51.4\% $&$ 63.9\% $&$ 65.2\% $&$ 67.0\% $&$ 68.3\% $&$ 67.8\% $&$ 67.4\% $\\ $ \omega = 0.15 $&$ 51.3\% $&$ 64.2\% $&$ 65.1\% $&$ 67.2\% $&$ 68.6\% $&$ 68.2\% $&$ 67.4\% $\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Scenario 4. 
Performance of different algorithms relative to LP (\ref{eq:LP1}).} \label{tab:simulation4} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $\mathsf{ON}$ & Greedy & BPH & $\mathsf{ON}_+^1$ & $\mathsf{ON}_+^2$ & $\mathsf{ON}_+^3$ & $\mathsf{ON}_+^4$\\ \hline Base &$ 50.4\% $&$ 64.1\% $&$ 66.5\% $&$ 68.6\% $&$ 69.9\% $&$ 69.5\% $&$ 68.8\% $\\ $ \alpha = 0 $&$ 48.1\% $&$ 59.5\% $&$ 66.4\% $&$ 68.8\% $&$ 70.0\% $&$ 69.9\% $&$ 69.1\% $\\ $ \alpha = 0.2 $&$ 49.0\% $&$ 61.2\% $&$ 67.0\% $&$ 68.6\% $&$ 70.1\% $&$ 69.7\% $&$ 68.9\% $\\ $ \alpha = 0.8 $&$ 51.9\% $&$ 65.9\% $&$ 65.3\% $&$ 67.9\% $&$ 69.1\% $&$ 68.8\% $&$ 68.1\% $\\ $ \alpha = 1 $&$ 53.2\% $&$ 65.7\% $&$ 65.3\% $&$ 67.2\% $&$ 68.1\% $&$ 67.8\% $&$ 67.3\% $\\ $ \tau = 2 $&$ 50.9\% $&$ 65.5\% $&$ 69.2\% $&$ 70.0\% $&$ 71.4\% $&$ 70.8\% $&$ 70.3\% $\\ $ \tau = 5 $&$ 50.9\% $&$ 65.2\% $&$ 67.5\% $&$ 69.0\% $&$ 70.4\% $&$ 70.0\% $&$ 69.4\% $\\ $ \tau = 20 $&$ 49.9\% $&$ 62.6\% $&$ 66.0\% $&$ 68.3\% $&$ 69.6\% $&$ 69.2\% $&$ 68.6\% $\\ $ \tau = 30 $&$ 49.5\% $&$ 61.9\% $&$ 65.9\% $&$ 68.3\% $&$ 69.7\% $&$ 69.4\% $&$ 68.7\% $\\ $ \beta = 0 $&$ 50.4\% $&$ 64.5\% $&$ 66.4\% $&$ 68.8\% $&$ 69.9\% $&$ 69.6\% $&$ 68.9\% $\\ $ \beta = 0.2 $&$ 50.3\% $&$ 64.4\% $&$ 66.5\% $&$ 68.7\% $&$ 70.0\% $&$ 69.6\% $&$ 69.1\% $\\ $ \beta = 0.8 $&$ 50.2\% $&$ 62.2\% $&$ 66.4\% $&$ 67.7\% $&$ 68.8\% $&$ 68.4\% $&$ 67.8\% $\\ $ \beta = 1 $&$ 50.3\% $&$ 59.1\% $&$ 65.6\% $&$ 66.4\% $&$ 67.5\% $&$ 67.3\% $&$ 66.5\% $\\ $ \omega = 0.005 $&$ 50.3\% $&$ 63.8\% $&$ 66.4\% $&$ 68.5\% $&$ 69.7\% $&$ 69.2\% $&$ 68.7\% $\\ $ \omega = 0.02 $&$ 50.7\% $&$ 63.9\% $&$ 66.3\% $&$ 68.4\% $&$ 69.7\% $&$ 69.5\% $&$ 68.9\% $\\ $ \omega = 0.08 $&$ 50.3\% $&$ 64.1\% $&$ 66.6\% $&$ 68.6\% $&$ 69.9\% $&$ 69.6\% $&$ 68.9\% $\\ $ \omega = 0.15 $&$ 50.3\% $&$ 64.3\% $&$ 66.6\% $&$ 68.8\% $&$ 69.9\% $&$ 69.6\% $&$ 69.0\% $\\ \hline \end{tabular} \end{center} \end{table} \end{document}
\begin{document} \title{Operational characterization of quantumness of unsteerable bipartite states} \author{Debarshi Das} \email{[email protected]} \affiliation{Centre for Astroparticle Physics and Space Science (CAPSS), Bose Institute, Block EN, Sector V, Salt Lake, Kolkata 700 091, India} \author{Bihalan Bhattacharya} \email{[email protected]} \affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700 098, India} \author{Chandan Datta} \email{[email protected]} \affiliation{Institute of Physics, Sachivalaya Marg, Bhubaneswar 751005, Odisha, India} \affiliation{Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400085, India} \author{Arup Roy} \email{[email protected]} \affiliation{Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, India} \author{C. Jebaratnam} \email{[email protected]} \affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700 098, India} \author{A. S. Majumdar} \email{[email protected]} \affiliation{S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700 098, India} \author{R. Srikanth} \email{[email protected]} \affiliation{Poornaprajna Institute of Scientific Research, Bangalore 560 080, Karnataka, India} \begin{abstract} Recently, the quantumness of local correlations arising from separable states in the context of a Bell scenario has been studied and linked with superlocality [\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.032120}{Phys. Rev. A {\bf 95}, 032120 (2017)}]. Here we investigate the quantumness of unsteerable correlations in the context of a given steering scenario. Generalizing the concept of superlocality, we define \textit{super-correlation} as the requirement for a larger dimension of the preshared randomness to simulate the correlations than that of the quantum states that generate them. Since unsteerable states form a subset of Bell local states, it is an interesting question whether certain unsteerable states can be super-correlated. Here, we answer this question in the affirmative. In particular, the quantumness of certain unsteerable correlations can be pointed out by the notion of \textit{super-unsteerability}, the requirement for a larger dimension of the classical variable that the steering party has to preshare with the trusted party for simulating the correlations than that of the quantum states which reproduce them. This provides a generalized approach to quantify the quantumness of unsteerable correlations in convex operational theories. \end{abstract} \pacs{} \maketitle \section{INTRODUCTION} Ideas and concepts of classical physics significantly differ from those of quantum mechanics (QM). A pioneering contribution showing an incompatibility between local realism (which is a classical concept) and QM is the Bell-CHSH (Bell-Clauser-Horne-Shimony-Holt) inequality \cite{Bell, chsh, bell2}, which shows that measurements on certain spatially separated systems can lead to nonlocal correlations which cannot be explained by a local hidden variable (LHV) theory. The Bell-CHSH inequality puts an upper bound on the correlations admitting any LHV model. Violation of the Bell-CHSH inequality is, however, not a defining nonclassical feature of QM, since there are post-quantum correlations obeying the no-signalling (NS) principle which also violate a Bell inequality. 
Nonlocality in QM is limited by the Tsirelson bound \cite{tsi}. Motivated by this fact, Popescu and Rohrlich proposed NS correlations which are more nonlocal than quantum correlations \cite{pr}.\\ In generalized NS theory, the only constraint on the correlations is the NS principle \cite{barrett, masanes}. The set of NS correlations forms a polytope whose vertices can be categorized as nonlocal and local, in contrast to the set of correlations arising from QM, which forms a convex set but fails to form a polytope \cite{pp}. Since QM correlations are contained within the NS polytope, any QM correlation can be written as a convex combination of the extremal boxes of the NS polytope. One of the goals of studying generalized NS theory is to find out how one can single out QM from other NS theories \cite{bell2, ps, ps2}.\\ The seminal argument by Einstein, Podolsky and Rosen (EPR) \cite{epr} to demonstrate the incompleteness of QM motivated Schr\"{o}dinger to introduce the concept of `quantum steering' \cite{scro}. The concept of steering in the form of a task has been introduced recently \cite{steer, steer2}. The task of quantum steering is to prepare different ensembles at one part of a bipartite system by performing local quantum measurements on another part of the bipartite system in such a way that these ensembles cannot be explained by a local hidden state (LHS) model. This implies that steerable correlations cannot be reproduced by a local hidden variable-local hidden state (LHV-LHS) model. In recent years, investigations related to quantum steering have been acquiring considerable significance, as evidenced by a wide range of studies \cite{st8, st10, steer22, steer3, st4, st9, st5, steer24, s6, new}. \\ Bell-nonlocal states form a subset of steerable states, which in turn form a subset of entangled states \cite{steer, st11}. However, unlike quantum nonlocality \cite{bell2} and entanglement \cite{ent}, the task of quantum steering is inherently asymmetric \cite{st7}. In this case, the outcome statistics of one subsystem (which is being `steered') are due to valid QM measurements on a valid QM state. On the other hand, there is no such constraint for the other subsystem. The study of quantum steering also finds applications in the semi-device-independent scenario where the party which is being `steered' trusts his/her quantum device but the other party's device is untrusted. Secure quantum key distribution (QKD) using quantum steering has been demonstrated \cite{st12}, where one party cannot trust his/her devices.\\ Quantum discord \cite{disc, disc2, disc3} and local broadcasting \cite{bc} indicate the existence of quantumness even in separable states, and these concepts can be linked with the non-commutativity of measurements \cite{disc4}. On the other hand, quantum nonlocality and steering are also associated with the incompatibility of measurements \cite{com1, com2, com, com3}. From an operational perspective, simulating nonlocal or steerable states requires pre-shared randomness together with a non-zero communication cost \cite{cc1, cc2}. Here we are concerned with the question of how to give such an operational characterization to the quantumness of local or unsteerable correlations. \\ Such a question has been partially addressed in the case of local correlations. It has been demonstrated that there exist some local correlations for which the dimension of the pre-shared randomness required to simulate the correlations exceeds the dimension of the quantum system reproducing them by applying suitable measurements. 
This is known as superlocality \cite{sl1, sl2, sl3, sl4, sl5, sl6}. On the other hand, any unsteerable correlation can be reproduced by appropriate measurements on an appropriate separable state \cite{nsst, guhne}. Moreover, Moroder et al. \cite{guhne} have shown that the unsteerable correlations arising from the $2 \times 2 \times 2$ experimental scenario ($2$ parties, $2$ measurement settings per party, $2$ outcomes per measurement setting) can be reproduced by classical-quantum states (which form a subset of the set of separable states) with dimension at the untrusted party $d \leq 4$.\\ In this work, our motivation is to analyze the resource requirement for simulating unsteerable correlations in the context of a given steering scenario. Extending the concept of superlocality, we define \textit{super-correlation} as the requirement for a larger dimension of the preshared randomness to simulate the correlations than that of the quantum states that generate them. Superlocality is an instance of super-correlation. Since unsteerable states form a subset of Bell local states, it is an interesting question whether certain unsteerable states can be super-correlated. Here, we find that certain unsteerable correlations evidence super-correlation, a phenomenon we term ``super-unsteerability''. In other words, we show that quantumness is necessary to reproduce certain unsteerable correlations in the scenario where the dimension of the resource reproducing the correlations is restricted. More specifically, we show that there are certain unsteerable correlations in the $2 \times 2 \times 2$ experimental scenario ($2$ parties, $2$ measurement settings per party, $2$ outcomes per measurement setting) whose simulation with an LHV-LHS model requires the steering party to preshare hidden variables with dimension exceeding the local Hilbert space dimension of the quantum systems (generating the given unsteerable correlation) at the steering party's side. This is termed ``super-unsteerability''.\\ The plan of the paper is as follows. In Section II, the basic notions of the NS polytope and the fundamental ideas of quantum steering are presented. Our purpose is to decompose the given NS correlation in terms of convex combinations of extremal boxes of the NS polytope, which leads to an LHV-LHS decomposition of the given correlation \cite{DDJ+17}. In Section III, we present the formal definition of super-unsteerability and demonstrate some specific examples of it. In Section IV, we illustrate how quantumness is captured by the notion of super-unsteerability. The inequivalence between superlocality and super-unsteerability is demonstrated in Section V. Finally, in the concluding Section VI, we elaborate on the significance of the results obtained. \section{Framework} \subsection{No-signalling Boxes} In this work, we are interested in generalized NS bipartite correlations which are treated as ``boxes'' shared between two parties, say Alice and Bob. The input variables on Alice's and Bob's sides are denoted by $x$ and $y$ respectively, and the outputs are denoted by $a$ and $b$ respectively. We restrict ourselves to the probability space in which the boxes have binary inputs and binary outputs, i.e., $x, y, a, b \in \{0, 1\}$. In this case, the state of every box is given by the set of 16 joint probability distributions $p(ab|xy)$. A bipartite box $P$ = $P(ab|xy)$ := $\{ p(ab|xy) \}_{a,x,b,y}$ is the set of joint probability distributions $p(ab|xy)$ for all possible $a$, $x$, $b$, $y$. 
The single-partite box $P(a|x)$ := $\{p(a|x)\}_{a,x}$ of a NS box $P(ab|xy)$ is the set of marginal probability distributions $p(a|x)$ for all possible $a$ and $x$, which are given by \begin{equation} p(a|x)=\sum_b p(ab|xy), \quad \forall a,x,y. \end{equation} The single-partite box $P(b|y)$ := $\{p(b|y)\}_{b,y}$ of a NS box $P(ab|xy)$ is the set of marginal probability distributions $p(b|y)$ for all possible $b$ and $y$, which are given by \begin{equation} p(b|y)=\sum_a p(ab|xy), \quad \forall x,b,y. \end{equation} \\ A NS box $P(ab|xy)$ is nonlocal if it cannot be reproduced by a LHV model, \begin{equation} p(ab|xy)=\sum_\lambda p(\lambda) p(a|x,\lambda)p(b|y,\lambda) \hspace{0.3cm} \forall a,b,x,y; \end{equation} where $\lambda$ denotes shared randomness which occurs with probability $p(\lambda)$, and each $p(a|x,\lambda)$ and $p(b|y,\lambda)$ are conditional probabilities. The set of local boxes which have a LHV model forms a convex polytope called the local polytope. In the case of the two-binary-input and two-binary-output Bell scenario, the local polytope has $16$ extremal boxes which are the local-deterministic boxes given by \begin{equation} \label{LDB} P_{D}^{\alpha \beta \gamma \epsilon} (ab|xy) = \begin{dcases} 1,& \text{if } a = \alpha x \oplus \beta, b = \gamma y \oplus \epsilon \\ 0, & \text{otherwise}. \end{dcases} \end{equation} Here, $\alpha, \beta, \gamma, \epsilon \in \{0,1\}$ and $\oplus$ denotes addition modulo $2$. Any local box can be written as a convex mixture of the local-deterministic boxes. All the local-deterministic boxes as defined above can be written as the product of marginals corresponding to Alice and Bob, i.e., $P_D^{\alpha\beta\gamma\epsilon}(ab|xy)=P^{\alpha\beta}_D(a|x)P^{\gamma\epsilon}_D(b|y)$, with the deterministic box on Alice's side given by \begin{equation} P_D^{\alpha\beta}(a|x)=\left\{ \begin{array}{lr} 1, & a=\alpha x\oplus \beta\\ 0 , & \text{otherwise}\\ \end{array} \right. \label{} \end{equation} and the deterministic box on Bob's side given by \begin{equation} P_D^{\gamma\epsilon}(b|y)=\left\{ \begin{array}{lr} 1, & b=\gamma y\oplus \epsilon\\ 0 , & \text{otherwise}.\\ \end{array} \right. \label{} \end{equation} A local box satisfies the complete set of Bell inequalities \cite{werner}. In the case of two binary inputs and two binary outputs, the Bell-CHSH inequalities \cite{chsh}, given by \begin{eqnarray} \label{chsh} \mathcal{B}_{\alpha \beta \gamma} =&& (-1)^{\gamma} \langle A_0 B_0 \rangle + (-1)^{\beta \oplus \gamma} \langle A_0 B_1 \rangle \nonumber\\ &&+ (-1)^{\alpha \oplus \gamma} \langle A_1 B_0 \rangle + (-1)^{\alpha \oplus \beta \oplus \gamma \oplus 1} \langle A_1 B_1 \rangle \leq 2, \end{eqnarray} where $\alpha, \beta, \gamma \in \{0, 1 \}$ and $\langle A_x B_y \rangle = \sum_{a,b} (-1)^{a\oplus b} p(ab|x y)$, form the complete set of Bell inequalities. All these tight Bell inequalities form the nontrivial facets of the local polytope. All nonlocal boxes lie outside the local polytope and violate a Bell inequality. 
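Whether a given two-input-two-output box admits such a convex decomposition over the $16$ local-deterministic boxes can be checked directly with a small feasibility linear program. The sketch below is only an illustration (it is not part of the original analysis) and uses \texttt{scipy.optimize.linprog}; the isotropic box used in the example is local iff its CHSH value is at most $2$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def deterministic_box(alpha, beta, gamma, eps):
    """P_D^{alpha beta gamma eps}(ab|xy): a = alpha*x XOR beta, b = gamma*y XOR eps."""
    P = np.zeros((2, 2, 2, 2))                 # indices a, b, x, y
    for x in range(2):
        for y in range(2):
            P[(alpha * x) ^ beta, (gamma * y) ^ eps, x, y] = 1.0
    return P.ravel()

def is_local(box):
    """box[a, b, x, y]; True iff the box lies in the local polytope."""
    D = np.column_stack([deterministic_box(a, b, g, e)
                         for a in range(2) for b in range(2)
                         for g in range(2) for e in range(2)])   # 16 x 16
    A_eq = np.vstack([D, np.ones((1, 16))])
    b_eq = np.concatenate([box.ravel(), [1.0]])
    res = linprog(c=np.zeros(16), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 16)
    return res.success

def pr_box():                                  # P_PR^{000}
    P = np.zeros((2, 2, 2, 2))
    for x in range(2):
        for y in range(2):
            for a in range(2):
                P[a, a ^ (x * y), x, y] = 0.5
    return P

V = 0.4
noisy = V * pr_box() + (1 - V) * np.full((2, 2, 2, 2), 0.25)
print(is_local(noisy))    # True exactly when 4V <= 2, i.e. V <= 0.5
\end{verbatim}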
The set of all bipartite two-input-two-output NS boxes forms an $8$ dimensional convex polytope with $24$ extremal boxes \cite{barrett}, which can be divided into two classes: i) nonlocal boxes, having the $8$ Popescu-Rohrlich (PR) boxes as extremal boxes, which are given by \begin{equation} P_{PR}^{\alpha \beta \gamma} (ab|xy) = \begin{dcases} \frac{1}{2},& \text{if } a \oplus b = x.y \oplus \alpha x \oplus \beta y \oplus \gamma \\ 0, & \text{otherwise}, \end{dcases} \end{equation} and ii) local boxes, having the $16$ local-deterministic boxes as extremal boxes, which are given in Eq. (\ref{LDB}). The extremal boxes in a given class are equivalent under ``local reversible operations'' (LRO). By using LRO, Alice and Bob can convert any extremal box in one class into any other extremal box within the same class. LRO is defined \cite{barrett} as follows: Alice may relabel her inputs: $x \rightarrow x \oplus 1$, and she may relabel her outputs (conditionally on the input): $a \rightarrow a \oplus \alpha x \oplus \beta$; Bob can perform similar operations. \subsection{Quantum Steering} Let us consider a steering scenario where two spatially separated parties, say Alice and Bob, share an unknown quantum system $\rho_{AB}\in \mathcal{B}(\mathcal{H}_A \otimes \mathcal{H}_B)$. Here $\mathcal{B}(\mathcal{H}_A \otimes \mathcal{H}_B)$ stands for the set of all bounded linear operators acting on the Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B$. Alice performs a set of black-box measurements and the Hilbert-space dimension of Bob's subsystem is known. Such a scenario is called one-sided device-independent since Alice's measurement operators $\{M_{a|x}\}_{a,x}$ are unknown. The steering scenario is completely characterized by the set of unnormalized conditional states on Bob's side $\{\sigma_{a|x}\}_{a,x}$, which is called an unnormalized assemblage. Each element in the unnormalized assemblage is given by $\sigma_{a|x}=p(a|x)\rho_{a|x}$, where $p(a|x)$ is the conditional probability of getting the outcome $a$ when Alice performs the measurement $x$, and $\rho_{a|x}$ is the normalized conditional state on Bob's side. Quantum theory predicts that all valid assemblages should satisfy the following criterion: \begin{equation} \sigma_{a|x}=\operatorname{Tr}_A [( M_{a|x} \otimes \openone) \rho_{AB}] \hspace{0.5cm} \forall \sigma_{a|x} \in \{\sigma_{a|x}\}_{a,x}. \end{equation} Let $\Sigma^{S}$ denote the set of all valid assemblages. \\ In the above scenario, Alice demonstrates steerability to Bob if the assemblage does not have a local hidden state (LHS) model, i.e., if for all $a$, $x$, there is no decomposition of $\sigma_{a|x}$ in the form \begin{equation} \sigma_{a|x}=\sum_\lambda p(\lambda) p(a|x,\lambda) \rho_\lambda, \end{equation} where $\lambda$ denotes a classical random variable which occurs with probability $p(\lambda)$, and the $\rho_{\lambda}$ are called local hidden states, which satisfy $\rho_\lambda\ge0$ and $\operatorname{Tr}\rho_\lambda=1$. Let $\Sigma^{US}$ denote the set of all unsteerable assemblages. 
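For concreteness, an assemblage $\sigma_{a|x}=\operatorname{Tr}_A [( M_{a|x} \otimes \openone) \rho_{AB}]$ can be computed numerically as in the following sketch. It is a minimal two-qubit illustration only; the Werner state and the $\sigma_z$, $\sigma_x$ measurements on Alice's side are example choices, not objects introduced at this point of the text.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def projectors(op):
    """Projectors onto the eigenvectors of a 2x2 Hermitian observable,
    ordered so that outcome 0 corresponds to eigenvalue +1."""
    vals, vecs = np.linalg.eigh(op)
    order = np.argsort(-vals)
    return [np.outer(vecs[:, k], vecs[:, k].conj()) for k in order]

def assemblage(rho, alice_observables):
    """sigma[(a, x)] = Tr_A[(M_{a|x} (x) I) rho]; Alice holds the first qubit."""
    sigma = {}
    for x, obs in enumerate(alice_observables):
        for a, M in enumerate(projectors(obs)):
            big = np.kron(M, I2) @ rho
            sigma[(a, x)] = big.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    return sigma

# example: two-qubit Werner state with V = 0.5
V = 0.5
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_V = V * np.outer(psi_minus, psi_minus) + (1 - V) * np.eye(4) / 4

sig = assemblage(rho_V, [sz, sx])
print(np.trace(sig[(0, 0)]))   # p(a=0|x=0) = 1/2
\end{verbatim}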
Any element in a given assemblage $\{\sigma_{a|x}\}_{a,x} \in \Sigma^{US}$ can be decomposed in terms of deterministic distributions as follows: \begin{equation} \sigma_{a|x}=\sum_\chi D(a|x,\chi) \sigma_\chi, \end{equation} where $D(a|x,\chi):=\delta_{a,f(x,\chi)}$ is the single-partite extremal conditional probability for Alice determined by the variable $\chi$ through the function $f(x,\chi)$, and the $\sigma_\chi$ satisfy $\sigma_\chi\ge0$ and $\operatorname{Tr} \sum_{\chi} \sigma_\chi=1$ \cite{newpusey}.\\ Suppose Bob performs a set of projective measurements $\{\Pi_{b|y}\}_{b,y}$ on $\{\sigma_{a|x}\}_{a,x}$. Then the scenario is characterized by the set of measurement correlations, or box, between Alice and Bob, $P(ab|xy)$ := $\{p(ab|xy)\}_{a,x,b,y}$, where $p(ab|xy)$ = $\operatorname{Tr} ( \Pi_{b|y} \sigma_{a|x} )$. The box $P(ab|xy)$ detects steerability from Alice to Bob iff it does not have a decomposition as follows \cite{steer, steer2}: \begin{equation} p(ab|xy)= \sum_\lambda p(\lambda) p(a|x,\lambda) p(b|y, \rho_\lambda) \hspace{0.3cm} \forall a,x,b,y; \label{LHV-LHS} \end{equation} where $\sum_{\lambda} p(\lambda) = 1$, $p(a|x, \lambda)$ denotes an arbitrary probability distribution arising from the local hidden variable (LHV) $\lambda$ ($\lambda$ occurs with probability $p(\lambda)$), and $p(b|y, \rho_{\lambda})$ denotes the quantum probability of outcome $b$ when measurement $y$ is performed on the local hidden state (LHS) $\rho_{\lambda}$. Hence, the box $P(ab|xy)$ will be called steerable iff it does not have an LHV-LHS model. In a given steering scenario, correlations having an LHV-LHS model form a convex subset of the set of all correlations in that scenario.\\ To date, various criteria for demonstrating quantum steering have been proposed \cite{stt, stt2, stt3, stt5}, but none of these criteria is a necessary and sufficient condition for quantum steering. Only recently, a necessary and sufficient condition for quantum steering in the $2 \times 2 \times 2$ experimental scenario with mutually unbiased measurements at the trusted party has been established \cite{stt6}. Suppose two spatially separated parties Alice and Bob each have a choice between two dichotomic measurements to perform: $\{ A_1, A_2\}$, $\{ B_1, B_2 \}$, with the outcomes of $A_1$ and $A_2$ labeled $a \in \{0, 1 \}$, and similarly for the other measurements. Furthermore, suppose that $B_1$ and $B_2$ are two mutually unbiased measurements. In this scenario, the necessary and sufficient condition for quantum steering from Alice to Bob is the violation of \begin{align} &\sqrt{\langle (A_1 + A_2) B_1 \rangle^2 + \langle (A_1 + A_2) B_2 \rangle^2 } \nonumber \\ &+\sqrt{\langle (A_1 - A_2) B_1 \rangle^2 + \langle (A_1 - A_2) B_2 \rangle^2 } \leq 2, \label{chshst} \end{align} where $\langle A_x B_y \rangle = \sum_{a,b} (-1)^{a\oplus b} p(ab|xy)$. This inequality is called the analogous CHSH inequality for quantum steering.\\ Now, we are in a position to establish our results, i.e., to demonstrate the notion of \textit{``super-unsteerability''} for certain unsteerable correlations. \section{Super-unsteerability} In this Section we present the formal definition of the notion of \textit{``super-unsteerability''}, followed by some of its examples. Before that, we recall the definition of \textit{``superlocality''} \cite{sl1, sl2, sl3, sl4, sl5, sl6}. Consider the Bell scenario, where both parties perform black-box measurements. 
In this scenario, superlocality is defined as follows: \begin{definition} Suppose we have a quantum state in $\mathbb{C}^{d_A}\otimes\mathbb{C}^{d_B}$ and measurements which produce a local bipartite box $P(ab|xy)$ := $\{ p(ab|xy) \}_{a,x,b,y}$. Then, superlocality holds iff there is no decomposition of the box in the form \begin{equation} p(ab|xy)=\sum^{d_\lambda-1}_{\lambda=0} p(\lambda) p(a|x, \lambda) p(b|y, \lambda) \hspace{0.3cm} \forall a,x,b,y, \end{equation} with dimension of the shared randomness/hidden variable $d_\lambda\le \min(d_A, d_B)$. Here $\sum_{\lambda} p(\lambda) = 1$, and $p(a|x, \lambda)$ and $p(b|y, \lambda)$ denote arbitrary probability distributions arising from the LHV $\lambda$ ($\lambda$ occurs with probability $p(\lambda)$). \end{definition} Now, consider a different scenario where one of the parties (say, Alice) performs black-box measurements and the other party (say, Bob) performs quantum measurements. In this steering scenario, we define the notion of ``super-unsteerability'' as follows: \begin{definition} Suppose we have a quantum state in $\mathbb{C}^{d_A}\otimes\mathbb{C}^{d_B}$ and measurements which produce an unsteerable bipartite box $P(ab|xy)$ := $\{ p(ab|xy) \}_{a,x,b,y}$. Then, super-unsteerability holds iff there is no decomposition of the box in the form \begin{equation} p(ab|xy)=\sum^{d_\lambda-1}_{\lambda=0} p(\lambda) p(a|x, \lambda) p(b|y, \rho_{\lambda}) \hspace{0.3cm} \forall a,x,b,y, \end{equation} with dimension of the shared randomness/hidden variable $d_\lambda\le d_A$. Here $\sum_{\lambda} p(\lambda) = 1$, $p(a|x, \lambda)$ denotes an arbitrary probability distribution arising from the LHV $\lambda$ ($\lambda$ occurs with probability $p(\lambda)$), and $p(b|y, \rho_{\lambda})$ denotes the quantum probability of outcome $b$ when measurement $y$ is performed on the LHS $\rho_{\lambda}$ in $\mathbb{C}^{d_B}$. \end{definition} Hence, in order to demonstrate super-unsteerability of a given unsteerable correlation, we have to consider an LHV-LHS model of the given correlation with the minimum dimension of the shared randomness, and check whether this minimum dimension is greater than the local Hilbert space dimension of the shared quantum system (reproducing the given unsteerable correlation) at the untrusted party's side (the party who performs the steering; the steered party is Bob in the present case). In the following we describe the procedure adopted in the present study to minimize the dimension of the shared randomness associated with the LHV-LHS model of a given unsteerable correlation.\\ First, we decompose the given unsteerable correlation (which is local as well) in terms of local-deterministic boxes. Then these local-deterministic boxes are written as products of marginals corresponding to the two parties. This decomposition produces a classical simulation protocol with an LHV model of the given unsteerable correlation. In order to reduce the dimension of the shared randomness in this decomposition of the given correlation, we make each probability distribution at Bob's side non-deterministic and keep each probability distribution at Alice's side deterministic. If each non-deterministic distribution at Bob's side can be produced by performing quantum measurements, in the given steering scenario, on some pure state, then this decomposition gives an LHV-LHS model in the given steering scenario. 
However, the dimension of the shared randomness in the above decomposition of the given correlation may be further reduced if there are several sets of equal non-deterministic probability distributions (which have some quantum realization as described earlier) at Bob's side. In this case, by taking each of the equal non-deterministic probability distributions at Bob's side as common and by making the corresponding probability distributions at Alice's side non-deterministic, the dimension of the shared randomness can be further reduced. This minimizes the dimension of the shared randomness in the above decomposition of the unsteerable correlation with different probability distributions (deterministic/non-deterministic) at Alice's side and with non-deterministic probability distributions having quantum realizations at Bob's side. \subsection{Super-unsteerability: Example 1} Consider the white noise-BB84 family defined as \begin{equation} \label{bb84} P_{BB84}(ab|xy) = \frac{1 + (-1)^{a \oplus b \oplus x.y} \delta_{x,y} V }{4}, \end{equation} where $V$ is a real number such that $0 < V \leq 1$; $x$, $y$ denote the input variables on Alice's and Bob's sides respectively; and $a$, $b$ denote the outputs on Alice's and Bob's sides respectively. We restrict ourselves to the probability space in which the boxes have binary inputs and binary outputs, i.e., $x, y, a, b \in \{0, 1\}$. The above box is local as it does not violate any Bell-CHSH inequality (\ref{chsh}). Therefore, it can be reproduced by sharing classical randomness. We now give an example of a simulation of the white noise-BB84 family by using a quantum state which has quantumness. Consider that the two spatially separated parties (say, Alice and Bob) share the two-qubit Werner state \begin{equation} \label{w} \rho_V = V | \psi^- \rangle \langle \psi^-| + \frac{1-V}{4} \mathbb{I}_4, \end{equation} where $|\psi^- \rangle = \frac{1}{\sqrt{2}} (|01 \rangle - |10 \rangle)$ ($|0\rangle$ and $|1\rangle$ are the eigenstates of $\sigma_z$), $\mathbb{I}_4$ is the $4 \times 4$ identity matrix, and $0 < V \leq 1$. The above states are entangled iff $V > \frac{1}{3}$. The Werner states $\rho_V$ have nonzero quantumness (as quantified by quantum discord \cite{disc, disc2, disc3, disc4}) for any $V>0$. The white noise-BB84 family can be produced from the two-qubit Werner state if Alice performs the projective measurements of observables corresponding to the operators $A_0 = - \sigma_z$ and $A_1 = \sigma_x$, and Bob performs projective measurements of observables corresponding to the operators $B_0 = \sigma_z$ and $B_1 = \sigma_x$.\\ The BB84 family (\ref{bb84}) violates the analogous CHSH inequality for steering (\ref{chshst}) iff $V > \frac{1}{\sqrt{2}}$. Therefore, in this range the white noise-BB84 family detects steering in the $2 \times 2 \times 2$ experimental scenario where Alice performs black-box (uncharacterized) measurements and Bob performs two mutually unbiased qubit measurements. For instance, it detects steerability of the two-qubit Werner state in this range, because the two-qubit Werner state is steerable in the above steering scenario iff $V>1/\sqrt{2}$. In the following, we will demonstrate that for $0<V\le1/\sqrt{2}$, the BB84 box demonstrates super-unsteerability in the $2 \times 2 \times 2$ steering scenario. 
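The claim that these measurements on the Werner state reproduce the white noise-BB84 family can be verified numerically. The sketch below is only an illustrative check (using the same state and measurement choices as in the text, with outcome $0$ assigned to the $+1$ eigenvalue); it computes $p(ab|xy) = \operatorname{Tr} [( M_{a|x} \otimes \Pi_{b|y}) \rho_V]$ and compares it with Eq. (\ref{bb84}).
\begin{verbatim}
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def projectors(op):
    vals, vecs = np.linalg.eigh(op)
    order = np.argsort(-vals)          # outcome 0 <-> eigenvalue +1
    return [np.outer(vecs[:, k], vecs[:, k].conj()) for k in order]

V = 0.6
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_V = V * np.outer(psi_minus, psi_minus) + (1 - V) * np.eye(4) / 4

A = [projectors(-sz), projectors(sx)]   # A_0 = -sigma_z, A_1 = sigma_x
B = [projectors(sz), projectors(sx)]    # B_0 =  sigma_z, B_1 = sigma_x

for x in range(2):
    for y in range(2):
        for a in range(2):
            for b in range(2):
                p = np.trace(np.kron(A[x][a], B[y][b]) @ rho_V).real
                bb84 = (1 + (-1) ** (a ^ b ^ (x * y)) * (x == y) * V) / 4
                assert abs(p - bb84) < 1e-12
print("white noise-BB84 box reproduced")
\end{verbatim}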
\subsubsection*{Simulating the unsteerable white noise-BB84 family with an LHV at one side and an LHS at the other side} In the context of the no-signalling polytope, the white noise-BB84 distribution can be decomposed as \begin{equation} P_{BB84}(ab|xy) = V \bigg(\frac{P_{PR}^{000} + P_{PR}^{110}}{2}\bigg) + (1-V) P_N, \label{bb84box} \end{equation} where $P_N$ is the maximally mixed box, i.e., $P_N(a b| x y) = \frac{1}{4}$ $\forall a, b, x, y$. We obtain \begin{eqnarray} P_{BB84}(ab|xy) = &&2V \frac{1}{2} \bigg(\frac{1}{2} P_{PR}^{000} + \frac{1}{2} P_N \bigg) + 2V \frac{1}{2} \bigg(\frac{1}{2} P_{PR}^{110} + \frac{1}{2} P_N \bigg)\nonumber\\&& + (1-2V) P_N . \end{eqnarray} Each box in the above decomposition can be decomposed in terms of the local-deterministic boxes as follows: \begin{equation} \frac{1}{2} P_{PR}^{000} + \frac{1}{2} P_N = \frac{1}{8} \sum_{\alpha, \beta, \gamma} P_D^{\alpha \beta \gamma (\alpha \gamma \oplus \beta)} (a b|x y), \end{equation} \begin{equation} \frac{1}{2} P_{PR}^{110} + \frac{1}{2} P_N = \frac{1}{8} \sum_{\alpha, \beta, \gamma} P_D^{\alpha \beta \gamma (\bar{\alpha} \bar{\gamma} \oplus \beta)} (a b|x y), \end{equation} where $\bar{\alpha} = \alpha \oplus 1$, $\bar{\gamma}= \gamma \oplus 1$; and \begin{equation} P_N = \frac{1}{16} \sum_{\alpha, \beta, \gamma, \epsilon} P_{D}^{\alpha \beta \gamma \epsilon}(ab|xy). \end{equation} Using the above decompositions and the relation $P_{D}^{\alpha \beta \gamma \epsilon} (ab|xy) = P_D^{\alpha \beta}(a|x) P_D^{\gamma \epsilon}(b|y)$, one obtains \begin{align} P_{BB84}&(ab|xy)\nonumber\\ &= \frac{1}{4} P_D^{00} \bigg[ 2V \bigg( \frac{P_D^{00} + P_D^{10} + P_D^{01} + P_D^{10}}{4} \bigg) \nonumber\\ &\quad+ (1-2V) \bigg( \frac{P_D^{00} + P_D^{10} + P_D^{01} + P_D^{11}}{4} \bigg) \bigg] \nonumber\\ &\quad+ \frac{1}{4} P_D^{01} \bigg[ 2V \bigg( \frac{P_D^{01} + P_D^{11} + P_D^{00} + P_D^{11}}{4} \bigg) \nonumber\\ &\quad+ (1-2V) \bigg( \frac{P_D^{00} + P_D^{10} + P_D^{01} + P_D^{11}}{4} \bigg) \bigg] \nonumber \\ &\quad+ \frac{1}{4} P_D^{10} \bigg[ 2V \bigg( \frac{P_D^{00} + P_D^{11} + P_D^{00} + P_D^{10}}{4} \bigg) \nonumber\\ &\quad+ (1-2V) \bigg( \frac{P_D^{00} + P_D^{10} + P_D^{01} + P_D^{11}}{4} \bigg) \bigg] \nonumber\\ &\quad+ \frac{1}{4} P_D^{11} \bigg[ 2V \bigg( \frac{P_D^{01} + P_D^{10} + P_D^{01} + P_D^{11}}{4} \bigg) \nonumber\\ &\quad+ (1-2V) \bigg( \frac{P_D^{00} + P_D^{10} + P_D^{01} + P_D^{11}}{4} \bigg) \bigg] \nonumber\\ &= \sum_{\lambda=0}^{3} p(\lambda) P(a|x, \lambda) P(b|y, \rho_{\lambda}) , \label{bblhvlhs} \end{align} where $P(a|x, \lambda)$ := $\{p(a|x, \lambda)\}_{a,x}$ is the set of conditional probabilities $p(a|x,\lambda)$ for all possible $a$ and $x$, and $P(b|y, \rho_{\lambda})$ := $\{p(b|y, \rho_{\lambda})\}_{b,y}$ is the set of conditional probabilities $p(b|y,\rho_{\lambda})$ for all possible $b$ and $y$.\\ In the decomposition (\ref{bblhvlhs}), $p(0)$ = $p(1)$ = $p(2)$ = $p(3)$ = $\frac{1}{4}$, and \\ $P(a|x,0)$ = $P_D^{00}$, $P(a|x,1)$ = $P_D^{01}$, $P(a|x,2)$ = $P_D^{10}$, $P(a|x,3)$ = $P_D^{11}$. 
Now let us set \begin{equation} P(b|y,\rho_0) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ (1) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ \end{tabular}=\langle \psi _0 | \{\Pi_{b|y}\}_{b,y} | \psi_0 \rangle, \end{equation} where each row and column corresponds to a fixed measurement $(y)$ and a fixed outcome $(b)$ respectively. This convention is presented in \cite{not}. Throughout the paper we will follow the same convention. We set the other $P(b|y,\rho_{\lambda})$'s as \begin{eqnarray} &&P(b|y,\rho_1) = \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ (1) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ \end{tabular}=\langle \psi _1 | \{\Pi_{b|y}\}_{b,y} | \psi_1 \rangle,\\ &&P(b|y,\rho_2) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ (1) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ \end{tabular}=\langle \psi _2 | \{\Pi_{b|y}\}_{b,y} | \psi_2 \rangle,\\ &&P(b|y,\rho_3) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ (1) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ \end{tabular}=\langle \psi _3 | \{\Pi_{b|y}\}_{b,y} | \psi_3 \rangle, \end{eqnarray} where $\{\Pi_{b|y}\}_{b,y}$ corresponds to the set of projective measurements of two observables corresponding to the operators $B_0 = |\uparrow_0 \rangle \langle \uparrow_0|$ $-$ $|\downarrow_0 \rangle \langle \downarrow_0|$ and $B_1 = |\uparrow_1 \rangle \langle \uparrow_1|$ $-$ $|\downarrow_1 \rangle \langle \downarrow_1|$. Here $\{ |\uparrow_0 \rangle$, $|\downarrow_0 \rangle \}$ is an arbitrary orthonormal basis in the Hilbert space $\mathcal{C}^2$, and the orthonormal basis $\{ |\uparrow_1 \rangle, |\downarrow_1 \rangle \}$ in the Hilbert space $\mathcal{C}^2$ is such that the aforementioned two measurements define two arbitrary projective mutually unbiased measurements in the Hilbert space $\mathcal{C}^2$. The quantum states $|\psi_{\lambda} \rangle$ in the Hilbert space $\mathcal{C}^2$ that produce the $P(b|y,\rho_{\lambda})$'s ($\lambda=0,1,2,3$) are given as follows: \begin{equation} |\psi_0 \rangle = \sqrt{\frac{1+V}{2}} |\uparrow_0 \rangle + e^{i \phi_0} \sqrt{\frac{1-V}{2}} |\downarrow_0 \rangle, \end{equation} where $\cos \phi_0 = - \frac{V}{\sqrt{1+V}\sqrt{1-V}}$; \begin{equation} |\psi_1 \rangle = \sqrt{\frac{1-V}{2}} |\uparrow_0 \rangle + e^{i \phi_1} \sqrt{\frac{1+V}{2}} |\downarrow_0 \rangle, \end{equation} where $\cos \phi_1 = \frac{V}{\sqrt{1+V}\sqrt{1-V}}$; \begin{equation} |\psi_2 \rangle = \sqrt{\frac{1+V}{2}} |\uparrow_0 \rangle + e^{i \phi_2} \sqrt{\frac{1-V}{2}} |\downarrow_0 \rangle, \end{equation} where $\cos \phi_2 = \frac{V}{\sqrt{1+V}\sqrt{1-V}}$; and \begin{equation} |\psi_3 \rangle = \sqrt{\frac{1-V}{2}} |\uparrow_0 \rangle + e^{i \phi_3} \sqrt{\frac{1+V}{2}} |\downarrow_0 \rangle, \end{equation} where $\cos \phi_3 = - \frac{V}{\sqrt{1+V}\sqrt{1-V}}$. Now, $|\cos \phi_i| \leq 1$ ($i = 0,1,2,3$) implies that $V \leq \frac{1}{\sqrt{2}}$.\\ Hence, the LHV-LHS decomposition of $P_{BB84}(ab|xy)$ for $ V \leq \frac{1}{\sqrt{2}}$ can be realized with a hidden variable having dimension $4$ (with two arbitrary projective mutually unbiased measurements at the trusted party). 
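The dimension-$4$ decomposition above can be checked mechanically. The sketch below is a plain numerical verification (not part of the proof): it combines the four deterministic boxes on Alice's side with the four tables $P(b|y,\rho_\lambda)$, each with weight $1/4$, and recovers $P_{BB84}$.
\begin{verbatim}
import numpy as np

V = 0.6    # any 0 < V <= 1/sqrt(2)

def alice_box(alpha, beta):
    """P_D^{alpha beta}(a|x): a = alpha*x XOR beta; indices [a, x]."""
    P = np.zeros((2, 2))
    for x in range(2):
        P[(alpha * x) ^ beta, x] = 1.0
    return P

PA = [alice_box(0, 0), alice_box(0, 1), alice_box(1, 0), alice_box(1, 1)]

p, m = (1 + V) / 2, (1 - V) / 2        # Bob's tables, rows y, columns b
PB = [np.array([[p, m], [m, p]]),      # P(b|y, rho_0)
      np.array([[m, p], [p, m]]),      # P(b|y, rho_1)
      np.array([[p, m], [p, m]]),      # P(b|y, rho_2)
      np.array([[m, p], [m, p]])]      # P(b|y, rho_3)

bb84 = np.array([[(1 + (-1) ** (a ^ b ^ (x * y)) * (x == y) * V) / 4
                  for a in range(2) for b in range(2)]
                 for x in range(2) for y in range(2)])

model = np.array([[sum(0.25 * PA[l][a, x] * PB[l][y, b] for l in range(4))
                   for a in range(2) for b in range(2)]
                  for x in range(2) for y in range(2)])

assert np.allclose(model, bb84)
print("dimension-4 LHV-LHS decomposition reproduces P_BB84")
\end{verbatim}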
\begin{thm} The LHV-LHS decomposition of the unsteerable white noise-BB84 box cannot be realized with a hidden variable having dimension $3$ or $2$ for the whole range $ V \leq \frac{1}{\sqrt{2}}$. \end{thm} \begin{proof} Note that the noisy BB84 box corresponds to the following joint probabilities: \begin{equation} \label{matrix form BB84} P_{BB84}(ab|xy) =\begin{tabular}{c|cccc} \backslashbox{(x,y)}{(a,b)} & (0,0) & (0,1) & (1,0) & (1,1)\\\hline (0,0) & $\frac{1+V}{4}$ & $\frac{1-V}{4}$ & $\frac{1-V}{4}$ & $\frac{1+V}{4}$ \\ (0,1) & $\frac{1}{4}$ & $\frac{1}{4}$ & $\frac{1}{4}$ & $\frac{1}{4}$ \\ (1,0) & $\frac{1}{4}$ & $\frac{1}{4}$ & $\frac{1}{4}$ & $\frac{1}{4}$ \\ (1,1) & $\frac{1-V}{4}$ & $\frac{1+V}{4}$ & $\frac{1+V}{4}$ & $\frac{1-V}{4}$ \\ \end{tabular}, \end{equation} where each row and column corresponds to a fixed measurement setting $(xy)$ and a fixed outcome $(ab)$ respectively \cite{not}. The marginal probabilities for Alice's and Bob's sides are \begin{equation} \label{marginal} p(a|x) = \frac{1}{2} \hspace{0.4cm} \forall a,x \end{equation} and \begin{equation} \label{marginalB} p(b|y) = \frac{1}{2} \hspace{0.4cm} \forall b,y \end{equation} respectively.\\ Now, let us try to construct an LHV-LHS decomposition of the noisy BB84 box which requires a hidden variable of dimension $3$. Henceforth, we will denote an LHV-LHS decomposition of an unsteerable correlation having \textit{different deterministic probability distributions} at Alice's side and non-deterministic probability distributions (with quantum realization) at Bob's side simply by the term ``DLHV-LHS decomposition''. Note that in the $2 \times 2 \times 2$ Bell scenario, a hidden variable with dimension $d_{\lambda} \leq 4$ is sufficient for reproducing any local correlation \cite{sl1}. Since unsteerable correlations form a subset of the local correlations, in the $2 \times 2 \times 2$ steering scenario a hidden variable with dimension $d_{\lambda} \leq 4$ is sufficient for reproducing any unsteerable correlation. Hence, an LHV-LHS decomposition of the noisy BB84 box with hidden variable dimension $3$ could be realized in the following two possible ways:\\ i) One has to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $4$ as in Eq.(\ref{bblhvlhs}). Then taking equal non-deterministic distributions at Bob's side as common and making the corresponding probability distributions at Alice's side non-deterministic can reduce the dimension of the hidden variable to $3$. However, all the non-deterministic probability distributions (with quantum realization) at Bob's side $P(b|y, \rho_{\lambda})$ ($\lambda=0,1,2,3$) in the decomposition (\ref{bblhvlhs}) are unequal. In fact, it can be easily checked that it is impossible to construct a DLHV-LHS decomposition of the noisy BB84 box for the whole range $V \leq \frac{1}{\sqrt{2}}$ with a hidden variable of dimension $4$ with some/all non-deterministic probability distributions at Bob's side being equal to each other. Hence, the dimension of the hidden variable cannot be reduced from $4$ to $3$ in the DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$).\\ ii) One has to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $3$. 
In the following we will check such a possibility.\\ In this case the noisy BB84 box can be decomposed in the following way: \begin{equation} P_{BB84}(ab|xy) = \sum_{\lambda=0}^{2} p(\lambda) P(a|x, \lambda) P(b|y, \rho_{\lambda}). \end{equation} Here, $p(0)= q$, $p(1) = r$, $p(2) = s$ ($0 <q<1$, $0 <r<1$, $0 <s<1$, $q+r+s =1$). Since Alice's strategy is a deterministic one, the three probability distributions $P(a|x, \lambda)$ $(\lambda = 0, 1, 2)$ must be equal to any three among $P_D^{00}$, $P_D^{01}$, $P_D^{10}$ and $P_D^{11}$. But no such combination will satisfy the marginal probabilities for Alice given by Eq. (\ref{marginal}). So it is impossible to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $3$.\\ Hence, one can conclude that it is impossible to construct an LHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $3$ shared between Alice and Bob with deterministic/non-deterministic probability distributions at Alice's side and non-deterministic probability distributions (with quantum realization) at Bob's side.\\ Now, let us try to construct an LHV-LHS decomposition of the noisy BB84 box which requires a hidden variable of dimension $2$. Since in the $2 \times 2 \times 2$ steering scenario a hidden variable with dimension $d_{\lambda} \leq 4$ is sufficient for reproducing any unsteerable correlation, this can be realized in the following three possible ways:\\ i) One has to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $4$ as in Eq.(\ref{bblhvlhs}). Then taking equal non-deterministic distributions at Bob's side as common and making the corresponding probability distributions at Alice's side non-deterministic can reduce the dimension of the hidden variable to $2$. However, as mentioned earlier, all the non-deterministic probability distributions at Bob's side $P(b|y, \rho_{\lambda})$ ($\lambda=0,1,2,3$) in the decomposition (\ref{bblhvlhs}) are unequal. In fact, it can be easily checked that it is impossible to construct a DLHV-LHS decomposition of the noisy BB84 box for the whole range $V \leq \frac{1}{\sqrt{2}}$ with a hidden variable of dimension $4$ with some/all non-deterministic probability distributions at Bob's side being equal to each other. Hence, the dimension of the hidden variable cannot be reduced from $4$ to $2$ in the DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$).\\ ii) One has to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $3$. Then, adapting the aforementioned procedure, one can reduce the dimension of the hidden variable to $2$. However, it has already been shown that it is impossible to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $3$.\\ iii) One has to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $2$. In the following we will check such a possibility.\\ In this case the noisy BB84 box can be decomposed in the following way: \begin{equation} P_{BB84}(ab|xy) = \sum_{\lambda=0}^{1} p(\lambda) P(a|x, \lambda) P(b|y, \rho_{\lambda}). \end{equation} Here, $p(0)=q$, $p(1)=r$ ($0 <q<1$, $0 <r<1$, $q+r =1$). 
Since Alice's strategy is a deterministic one, the two probability distributions $P(a|x, \lambda)$ $(\lambda = 0, 1)$ must be equal to any two among $P_D^{00}$, $P_D^{01}$, $P_D^{10}$ and $P_D^{11}$. In order to satisfy the marginal probabilities for Alice given by Eq. (\ref{marginal}), the only two possible choices of $P(a|x, 0)$ and $P(a|x, 1)$ are:\\ 1) $P_D^{00}$ and $P_D^{01}$ with $q=r=\frac{1}{2}$,\\ 2) $P_D^{10}$ and $P_D^{11}$ with $q=r=\frac{1}{2}$.\\ \underline{1st Choice}\\ Now consider the first choice, i.e., $P(a|x, 0)$ = $P_D^{00}$ and $P(a|x, 1)$ = $P_D^{01}$ (with $q=r=\frac{1}{2}$). In order to satisfy the 1st and 4th rows given in Eq.(\ref{matrix form BB84}), the only possible choices for $P(b|y, \rho_{0})$ and $P(b|y, \rho_{1})$ are: \begin{eqnarray} P(b|y,\rho_0) = \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ (1) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ \end{tabular} = \langle \psi _0 | \{\Pi_{b|y}\}_{b,y} | \psi_0 \rangle \end{eqnarray} and \begin{eqnarray} P(b|y,\rho_1) = \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ (1) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ \end{tabular} = \langle \psi _1 | \{\Pi_{b|y}\}_{b,y} | \psi_1 \rangle. \end{eqnarray} In this case, the marginal probabilities for Bob given by Eq.(\ref{marginalB}) are satisfied, but the joint probabilities given in the 2nd and 3rd rows of Eq.(\ref{matrix form BB84}) are not satisfied. In a similar way, it can be shown that, in the case of the first choice, if one wants to satisfy the 2nd and 3rd rows in Eq.(\ref{matrix form BB84}), then the marginal probabilities for Bob given by Eq.(\ref{marginalB}) will be satisfied, but the 1st and 4th rows in Eq.(\ref{matrix form BB84}) will not be satisfied. In this way it can be shown that, with the choice $P(a|x, 0)$ = $P_D^{00}$ and $P(a|x, 1)$ = $P_D^{01}$, the joint probabilities cannot all be satisfied simultaneously. \underline{2nd Choice}\\ Now consider the second choice, i.e., $P(a|x, 0)$ = $P_D^{10}$ and $P(a|x, 1)$ = $P_D^{11}$ (with $q=r=\frac{1}{2}$). In order to satisfy the 1st and 4th rows given in Eq.(\ref{matrix form BB84}), the only possible choices for $P(b|y, \rho_{0})$ and $P(b|y, \rho_{1})$ are: \begin{eqnarray} P(b|y,\rho_0) = \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ (1) & $\frac{1+V}{2}$ & $\frac{1-V}{2}$ \\ \end{tabular} = \langle \psi _2 | \{\Pi_{b|y}\}_{b,y} | \psi_2 \rangle \end{eqnarray} and \begin{eqnarray} P(b|y,\rho_1) = \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ (1) & $\frac{1-V}{2}$ & $\frac{1+V}{2}$ \\ \end{tabular} = \langle \psi _3 | \{\Pi_{b|y}\}_{b,y} | \psi_3 \rangle. \end{eqnarray} In this case, the marginal probabilities for Bob given by Eq.(\ref{marginalB}) are satisfied, but the joint probabilities given in the 2nd and 3rd rows of Eq.(\ref{matrix form BB84}) are not satisfied. In a similar way, it can be shown that, in the case of the second choice, if one wants to satisfy the 2nd and 3rd rows of Eq.(\ref{matrix form BB84}), then the marginal probabilities for Bob given by Eq.(\ref{marginalB}) will be satisfied, but the 1st and 4th rows of Eq.(\ref{matrix form BB84}) will not be satisfied. 
In this way it can be shown that, with the choice $P(a|x, 0)$ = $P_D^{10}$ and $P(a|x, 1)$ = $P_D^{11}$, the joint probabilities cannot all be satisfied simultaneously. It is, therefore, impossible to construct a DLHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $2$.\\ Hence, we can conclude that it is impossible to have an LHV-LHS decomposition of the noisy BB84 box (for $V \leq \frac{1}{\sqrt{2}}$) with a hidden variable of dimension $2$ shared between Alice and Bob having deterministic/non-deterministic probability distributions at Alice's side and non-deterministic probability distributions (with quantum realization) at Bob's side.\\ \end{proof} The above theorem implies the following. \begin{cor} The unsteerable white noise-BB84 family demonstrates super-unsteerability. \end{cor} \begin{proof} We have shown that the unsteerable white noise-BB84 family can have an LHV-LHS model with the minimum dimension of the hidden variable being $4$ for the whole range $ V \leq \frac{1}{\sqrt{2}}$. On the other hand, we have seen that this white noise-BB84 family can be simulated by using a $2 \otimes 2$ quantum system (\ref{w}). This is an instance of super-unsteerability since the minimum dimension of the shared randomness needed for simulating the LHV-LHS model of the unsteerable white noise-BB84 family is greater than the local Hilbert space dimension of the shared quantum system (reproducing the unsteerable white noise-BB84 family) at the untrusted party's side (the party who performs the steering; the steered party is Bob in the present case). \end{proof} As discussed before, in the $2 \times 2 \times 2$ steering scenario a hidden variable with dimension $d_{\lambda} \leq 4$ is sufficient for reproducing any unsteerable correlation. Previously we have shown an example of super-unsteerability in the $2 \times 2 \times 2$ experimental scenario where the classical simulation protocol (with an LHV-LHS model) requires hidden variables having minimum dimension $4$. Hence, there may be another form of super-unsteerability where the classical simulation protocol (with an LHV-LHS model) requires a minimum hidden-variable dimension of $3$. In the following subsection we are going to present an example of it.\\ \subsection{Super-unsteerability: Example 2} Consider that the two spatially separated parties (say, Alice and Bob) share the following separable two-qubit state, \begin{equation} \label{state2n} \rho = \frac{1}{2} \Big( |00\rangle \langle 00| + |++ \rangle \langle ++| \Big) , \end{equation} where $|0\rangle$ and $|+\rangle$ are the eigenstates of the operators $\sigma_z$ and $\sigma_x$, respectively, corresponding to the eigenvalue $+1$. The above state has nonzero quantum discord from both Alice to Bob and Bob to Alice since it is neither a classical-quantum state nor a quantum-classical state \cite{disc, disc2, disc3, disc4}. 
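The box that this state produces under the measurements specified in the next paragraph can be generated numerically. The following sketch is only an illustrative check (it assumes, as stated below, $A_0 = B_0 = \sigma_x$ and $A_1 = B_1 = \sigma_z$, with outcome $0$ assigned to the $+1$ eigenvalue) and computes $p(ab|xy) = \operatorname{Tr} [( M_{a|x} \otimes \Pi_{b|y}) \rho]$ for the state (\ref{state2n}).
\begin{verbatim}
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def projectors(op):
    vals, vecs = np.linalg.eigh(op)
    order = np.argsort(-vals)          # outcome 0 <-> eigenvalue +1
    return [np.outer(vecs[:, k], vecs[:, k].conj()) for k in order]

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * (np.outer(np.kron(ket0, ket0), np.kron(ket0, ket0))
             + np.outer(np.kron(ketp, ketp), np.kron(ketp, ketp)))

A = [projectors(sx), projectors(sz)]   # A_0 = sigma_x, A_1 = sigma_z
B = [projectors(sx), projectors(sz)]   # B_0 = sigma_x, B_1 = sigma_z

table = np.array([[np.trace(np.kron(A[x][a], B[y][b]) @ rho).real
                   for a in range(2) for b in range(2)]
                  for x in range(2) for y in range(2)])
print(table)
# rows (x,y) = (0,0),(0,1),(1,0),(1,1); columns (a,b) = (0,0),(0,1),(1,0),(1,1)
# expected: [5/8 1/8 1/8 1/8], [1/2 1/4 1/4 0], [1/2 1/4 1/4 0], [5/8 1/8 1/8 1/8]
\end{verbatim}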
If Alice performs the projective measurements of observables corresponding to the operators $A_0 = \sigma_x$ and $A_1 = \sigma_z$, and Bob performs projective measurements of observables corresponding to the operators $B_0 = \sigma_x$ and $B_1 = \sigma_z$, then the following correlation is produced from the above quantum-quantum state, \begin{equation} \label{corr2} P(ab|xy) =\begin{tabular}{c|cccc} \backslashbox{(x,y)}{(a,b)} & (0,0) & (0,1) & (1,0) & (1,1)\\\hline\\[0.05cm] (0,0) & $\dfrac{5}{8}$ & $\dfrac{1}{8}$ & $\dfrac{1}{8}$ & $\dfrac{1}{8}$ \\[0.5cm] (0,1) & $\dfrac{1}{2}$ & $\dfrac{1}{4}$ & $\dfrac{1}{4}$ & $0$ \\[0.5cm] (1,0) & $\dfrac{1}{2}$ & $\dfrac{1}{4}$ & $\dfrac{1}{4}$ & $0$ \\[0.5cm] (1,1) & $\dfrac{5}{8}$ & $\dfrac{1}{8}$ & $\dfrac{1}{8}$ & $\dfrac{1}{8}$ \\ \end{tabular} \end{equation} Here $x$, $y$ denote the input variables on Alice's and Bob's sides respectively; and $a$, $b$ denote the outputs on Alice's and Bob's sides respectively. The above box does not violate the analogous CHSH inequality for steering (\ref{chshst}). Hence, the box (\ref{corr2}) is unsteerable in the scenario where Alice performs black-box (uncharacterized) measurements and Bob performs two mutually unbiased qubit measurements. In the following, we demonstrate that the box (\ref{corr2}) detects super-unsteerability of the quantum-quantum state (\ref{state2n}). \subsubsection*{Simulating the correlation given by Eq.(\ref{corr2}) with LHV at one side and LHS at another side} The correlation given by Eq.(\ref{corr2}) can be written as \begin{equation} P(ab|xy) = \frac{1}{8}\Big( 2 P_D^{0000} + P_D^{0010} + P_D^{0011} + P_D^{1000} + P_D^{1100} + P_D^{1010} + P_D^{1111} \Big) \nonumber \end{equation} \begin{equation} = \frac{1}{2} P_D^{00} \Big( \frac{2 P_D^{00} + P_D^{10} + P_D^{11}}{4} \Big) \nonumber \end{equation} \begin{equation} + \frac{1}{4} P_D^{10} \Big( \frac{P_D^{00} + P_D^{10}}{2} \Big) + \frac{1}{4} P_D^{11} \Big( \frac{P_D^{00} + P_D^{11}}{2} \Big) \nonumber \end{equation} \begin{equation} = \sum_{\lambda=0}^{2} p(\lambda) P(a|x, \lambda) P(b|y, \rho_{\lambda}), \label{eee} \end{equation} where $p(0)$ = $\frac{1}{2}$, $p(1)$ = $p(2)$ = $\frac{1}{4}$;\\ $P(a|x,0)$ = $P_D^{00}$, $P(a|x,1)$ = $P_D^{10}$, $P(a|x,2)$ = $P_D^{11}$;\\ and \begin{align} P(b|y,\rho_0) \!=\! \frac{2 P_D^{00} + P_D^{10} + P_D^{11}}{4} \! &=\! \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{3}{4}$ & $\frac{1}{4}$ \\ (1) & $\frac{3}{4}$ & $\frac{1}{4}$ \\ \end{tabular} \nonumber\\ &=\! \langle \psi^{'}_0 | \{\Pi_{b|y}\}_{b,y} | \psi^{'}_0 \rangle, \end{align} \begin{align} P(b|y,\rho_1) \!=\! \frac{P_D^{00} + P_D^{10}}{2} \! &=\! \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $1$ & $0$ \\ (1) & $\frac{1}{2}$ & $\frac{1}{2}$ \\ \end{tabular} \nonumber\\ &=\! \langle \psi^{'}_1 | \{\Pi_{b|y}\}_{b,y} | \psi^{'}_1 \rangle, \end{align} \begin{align} P(b|y,\rho_2) \!=\! \frac{P_D^{00} + P_D^{11}}{2} \! &=\! \begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $\frac{1}{2}$ & $\frac{1}{2}$ \\ (1) & $1$ & $0$ \\ \end{tabular} \nonumber\\ &=\! 
\langle \psi^{'}_2 | \{\Pi_{b|y}\}_{b,y} | \psi^{'}_2 \rangle, \end{align} where $\{\Pi_{b|y}\}_{b,y}$ corresponds to two arbitrary projective mutually unbiased measurements in the Hilbert space $\mathcal{C}^2$ corresponding to the operators $B_0 = |\uparrow_0 \rangle \langle \uparrow_0|$ $-$ $|\downarrow_0 \rangle \langle \downarrow_0|$ and $B_1 = |\uparrow_1 \rangle \langle \uparrow_1|$ $-$ $|\downarrow_1 \rangle \langle \downarrow_1|$ as described earlier. The $|\psi^{'}_{\lambda}\rangle$s that produce the $P(b|y,\rho_{\lambda})$s given above are given by \begin{equation} |\psi^{'}_0 \rangle = \frac{\sqrt{3}}{2} |\uparrow_0 \rangle + e^{i \phi^{'}_0} \frac{1}{2} |\downarrow_0 \rangle, \end{equation} where $\cos \phi^{'}_0 = \frac{1}{\sqrt{3}}$, \begin{equation} |\psi^{'}_1 \rangle = |\uparrow_0 \rangle \end{equation} and \begin{equation} |\psi^{'}_2 \rangle = \frac{1}{\sqrt{2}} |\uparrow_0 \rangle + \frac{1}{\sqrt{2}} |\downarrow_0 \rangle, \end{equation} which are all valid states in the Hilbert space $\mathcal{C}^2$.\\ Hence, the LHV-LHS decomposition of the correlation given by Eq.(\ref{corr2}) can be realized with a hidden variable having dimension $3$ (with two arbitrary projective mutually unbiased measurements at the trusted party's side). \begin{thm} The LHV-LHS decomposition of the correlation given by Eq.(\ref{corr2}) cannot be realized with a hidden variable having dimension $2$. \end{thm} \begin{proof} Let us try to construct an LHV-LHS decomposition of the correlation given by Eq.(\ref{corr2}) which requires a hidden variable of dimension $2$. Since in the $2 \times 2 \times 2$ steering scenario hidden variables with dimensions $d_{\lambda} \leq 4$ are sufficient for reproducing any unsteerable correlation, this can be attempted in the following three possible ways:\\ i) One has to construct a DLHV-LHS decomposition of the correlation given by Eq.(\ref{corr2}) with a hidden variable of dimension $4$. Then taking equal non-deterministic distributions at Bob's side as common and making the corresponding probability distributions at Alice's side non-deterministic can reduce the dimension of the hidden variable to $2$. However, it can be shown that it is impossible to construct a DLHV-LHS decomposition of the correlation given by Eq.(\ref{corr2}) with a hidden variable of dimension $4$ (for detailed calculations, see the Appendix).\\ ii) One has to construct a DLHV-LHS decomposition of the correlation given by Eq.(\ref{corr2}) with a hidden variable of dimension $3$ as in Eq.(\ref{eee}). Then by taking equal non-deterministic distributions at Bob's side as common and making the corresponding probability distributions at Alice's side non-deterministic one can reduce the dimension of the hidden variable to $2$. However, all the non-deterministic probability distributions at Bob's side $P(b|y, \rho_{\lambda})$ ($\lambda=0,1,2$) in the decomposition (\ref{eee}) are unequal. In fact it can be easily checked that it is impossible to construct a DLHV-LHS decomposition of the correlation (\ref{corr2}) with a hidden variable of dimension $3$ with some/all non-deterministic probability distributions at Bob's side being equal to each other. Hence, the dimension of the hidden variable cannot be reduced from $3$ to $2$ in the DLHV-LHS decomposition of the correlation (\ref{corr2}).\\ iii) One has to construct a DLHV-LHS decomposition of the correlation (\ref{corr2}) with a hidden variable of dimension $2$.
In the following we will check such a possibility.\\ In this case the correlation given by Eq.(\ref{corr2}) can be decomposed in the following way: \begin{equation} P(ab|xy) = \sum_{\lambda=0}^{1} p(\lambda) P(a|x, \lambda) P(b|y, \rho_{\lambda}). \end{equation} Here, $p(0)=q$, $p(1)=r$ ($0 <q<1$, $0 <r<1$, $q+r =1$). Since Alice's strategy is a deterministic one, the two probability distributions $P(a|x, \lambda)$ $(\lambda = 0, 1)$ must be equal to any two among $P_D^{00}$, $P_D^{01}$, $P_D^{10}$ and $P_D^{11}$. But it can be easily checked that none of these choices will satisfy all the joint probability distributions mentioned in Eq.(\ref{corr2}) simultaneously. It is, therefore, impossible to construct a DLHV-LHS decomposition of the correlation (\ref{corr2}) with a hidden variable of dimension $2$.\\ Hence, we can conclude that it is impossible to have an LHV-LHS decomposition of the correlation (\ref{corr2}) with a hidden variable of dimension $2$ shared between Alice and Bob having deterministic/non-deterministic probability distributions at Alice's side and non-deterministic probability distributions (with quantum realization) at Bob's side.\\ \end{proof} The above theorem implies the following. \begin{cor} The correlation given by Eq.(\ref{corr2}) demonstrates super-unsteerability. \end{cor} \begin{proof} We have shown that the unsteerable correlation given by Eq.(\ref{corr2}) can have an LHV-LHS model with the minimum dimension of the hidden variable being $3$. On the other hand, we have seen that the unsteerable correlation given by Eq.(\ref{corr2}) can be simulated by using a $2 \otimes 2$ quantum system (\ref{state2n}). This is an instance of super-unsteerability since the minimum dimension of shared randomness needed to simulate the LHV-LHS model of the correlation (\ref{corr2}) is greater than the local Hilbert space dimension of the shared quantum system (reproducing the given unsteerable correlation) at the untrusted party's side (who steers the other party, in the present case Bob). \end{proof} \section{Quantumness as captured by super-unsteerability} Here we argue that the unsteerable boxes (\ref{bb84}) and (\ref{corr2}) in the given steering scenario, where the dimension of the steering party is bounded, have nonclassicality beyond steering which can be operationally identified with super-unsteerability.\\ In Example $1$ of super-unsteerability, we have shown that the unsteerable BB84 box (for $ V \leq \frac{1}{\sqrt{2}}$) given by Eq.(\ref{bb84}) can be simulated by an LHV-LHS model with a random variable having minimum dimension $4$, where each LHS is a $2$-dimensional quantum system. In other words, the unsteerable BB84 box (for $ V \leq \frac{1}{\sqrt{2}}$) given by Eq.(\ref{bb84}) cannot be reproduced by a classical-quantum state with dimension $d \otimes 2$, where $d < 4$. Hence, the super-unsteerable BB84 box (for $ V \leq \frac{1}{\sqrt{2}}$) certifies the quantumness of $d \otimes 2$ dimensional resources producing it, where $d <4$.
For example, the super-unsteerable BB84 box certifies the quantumness of the $2 \otimes 2$ states given by Eq.(\ref{w}) for $ V \leq \frac{1}{\sqrt{2}}$.\\ Consider that Alice and Bob share the following qutrit-qubit state: \begin{equation} \label{erasure} \rho^{'}_V = V |\psi^- \rangle \langle \psi^- | + \frac{1- V}{2} |2 \rangle \langle 2 | \otimes \mathbb{I}_2, \end{equation} where $|\psi^- \rangle$ = $\frac{1}{\sqrt{2}} (|01 \rangle - |10 \rangle)$ is the singlet state, $\mathbb{I}_2$ is the identity in the $|0\rangle$, $|1\rangle$ qubit subspace and $0 < V \leq 1$. This state is known as the ``Erasure state'', as it can be obtained by sending Alice's qubit of a bipartite singlet state through an erasure channel; with probability $V$ the singlet state remains the same, and with probability $1-V$ Alice's qubit is replaced by the state $|2 \rangle \langle 2 |$ (orthogonal to the qubit subspace). This state is entangled for any non-zero $V$, which can be checked through the positive-partial-transpose criterion \cite{ppt}. Therefore, the Erasure state $\rho^{'}_V$ has nonzero quantumness (as quantified by quantum discord \cite{disc, disc2, disc3, disc4}) for any $V>0$. If Alice and Bob perform appropriate measurements on the Erasure state, the white-noise BB84 family can also be reproduced (for detailed calculations, see the Appendix). Hence, the super-unsteerable BB84 box certifies the quantumness of the $3 \otimes 2$ states given by Eq.(\ref{erasure}) for $ V \leq \frac{1}{\sqrt{2}}$.\\ Thus, super-unsteerability provides an operational characterization of the quantumness of the unsteerable (in the given steering scenario) $2 \otimes 2$ state (\ref{w}) and $3 \otimes 2$ state (\ref{erasure}) for $V \leq 1/\sqrt{2}$ if the local Hilbert-space dimension of the steering party is bounded.\\ In Example 2 we have considered a $2 \otimes 2$ separable mixed state (\ref{state2n}) having both nonzero Alice to Bob and nonzero Bob to Alice discord. We have shown that the correlation (\ref{corr2}) produced by performing some particular local noncommuting measurements on this state can be simulated with an LHV-LHS model with a random variable having minimum dimension $3$. This implies that the correlation (\ref{corr2}) cannot be simulated by a classical-quantum state with dimension $2 \otimes 2$. Hence, the super-unsteerable correlation (\ref{corr2}) certifies quantumness of certain $2 \otimes 2$ dimensional resources producing it and provides an operational characterization of the quantumness of the state (\ref{state2n}).\\ Consider the classical-quantum or quantum-classical states \cite{bc} given by, \begin{equation} \rho_{CQ} = \sum_{i=0}^{1} p_i |i\rangle\langle i| \otimes \chi_i \label{eq:cq} \end{equation} and \begin{equation} \label{cq:eq} \rho_{QC} = \sum_{j=0}^{1} p_{j} \phi_j \otimes | j \rangle \langle j|, \end{equation} where $\{ | i \rangle \}$ and $\{ | j \rangle \}$ are orthonormal sets, and $\chi_i$ and $\phi_j$ are arbitrary quantum states. These correlations can manifestly be simulated by presharing randomness of dimension 2, as shown explicitly below. Since Eqs. (\ref{eq:cq}) and (\ref{cq:eq}) represent a family of states that are not super-correlated, they cannot be used to demonstrate super-unsteerability. The zero-discord states having classical-classical correlations (corresponding to orthogonal $\chi_i$ and $\phi_j$ in Eqs.(\ref{eq:cq}, \ref{cq:eq}) respectively) are also not super-unsteerable.
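Explicitly, if Alice performs an arbitrary measurement $\{\Pi_{a|x}\}_{a,x}$ and Bob an arbitrary measurement $\{\Pi_{b|y}\}_{b,y}$ on the classical-quantum state (\ref{eq:cq}), the resulting box takes the form \begin{equation*} P(ab|xy) = \mathrm{Tr}\big[\rho_{CQ}\,\big(\Pi_{a|x}\otimes \Pi_{b|y}\big)\big] = \sum_{i=0}^{1} p_i \, \langle i|\Pi_{a|x}|i\rangle \, \mathrm{Tr}\big[\chi_i\,\Pi_{b|y}\big], \end{equation*} which is already an LHV-LHS decomposition with the hidden variable $\lambda = i$ taking only two values; an analogous decomposition holds for the quantum-classical states (\ref{cq:eq}).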
Therefore one can conclude that the super-unsteerable states form a subset of states having \textit{quantum-quantum} correlations, and thus form a strict subset of the discordant states. Further, in the $2 \times 2 \times 2$ experimental scenario super-unsteerability classifies any bipartite state having quantum-quantum correlations into one of three types:\\ (i) quantum-quantum states which demonstrate super-unsteerability with unsteerable boxes having shared randomness of minimum dimension $4$,\\ (ii) quantum-quantum states which demonstrate super-unsteerability with unsteerable boxes having shared randomness of minimum dimension $3$, and \\ (iii) quantum-quantum states which do not demonstrate super-unsteerability. \section{Inequivalence between superlocality and super-unsteerability} Superlocality \cite{sl1, sl2, sl3, sl4, sl5, sl6} of bipartite quantum correlations, which do not violate a Bell inequality, refers to the higher dimensionality of shared randomness needed to reproduce them in the classical simulation scenarios compared to that of the quantum systems producing the correlations. In this classical simulation scenario the local hidden variables are used by both the parties to generate the shared randomness. On the other hand, super-unsteerability of bipartite quantum correlations, which do not violate a steering inequality, refers to the higher dimensionality of shared randomness needed to reproduce them compared to the local Hilbert space dimension of the quantum system (reproducing the correlation) at the untrusted party's side. This notion is defined in the classical simulation scenarios where local hidden variables are used by one of the parties and the other party uses local hidden states. Thus, the classical simulation scenarios in which superlocality and super-unsteerability are defined are completely inequivalent; the former corresponds to black-box models on both sides as it corresponds to the Bell scenario, whereas the latter corresponds to a black-box model on one side and the quantum model on the other side as it corresponds to the steering scenario. \\ For instance, the BB84 family is superlocal for $0 < V \leq 1$ (see Proposition $3$ in Ref. \cite{sl1}, where superlocality of the BB84 family with $V=1$ was shown) and detects steerability for $V>1/\sqrt{2}$. On the other hand, it detects super-unsteerability for $0 < V \leq \frac{1}{\sqrt{2}}$. Hence, these two different regions where the BB84 family is superlocal and super-unsteerable demonstrate the inequivalence between superlocality and super-unsteerability. \section{Discussion and Conclusion} In this work we have introduced the notion of super-unsteerability by showing that there are certain unsteerable correlations whose simulation with an LHV-LHS model requires preshared randomness of dimension higher than the local Hilbert space dimension of the quantum system (reproducing the given unsteerable correlation) at the untrusted party's side. Two examples of super-unsteerability have been presented. These two examples are inequivalent to each other with respect to the minimum dimension of the shared randomness required for simulating the correlations with LHV-LHS models.
Note that in the present study we have restricted ourselves to the $2 \times 2 \times 2$ experimental scenario ($2$ parties, $2$ measurement settings per party, $2$ outcomes per measurement setting), and in this scenario shared randomness with dimension $d_{\lambda} \leq 4$ is sufficient to simulate any local as well as unsteerable correlation \cite{sl1}. Hence, there are two possible classes of super-unsteerability in the $2 \times 2 \times 2$ experimental scenario: one with shared randomness of minimum dimension $4$ and the other with shared randomness of minimum dimension $3$. We have presented examples of both of these two possible classes of super-unsteerability in the $2 \times 2 \times 2$ experimental scenario. Further, our study of simulating unsteerable boxes with the minimum dimension of the shared randomness in the $2 \times 2 \times 2$ experimental scenario classifies any bipartite state into three types: (i) States which do not demonstrate super-unsteerability. The classical-quantum and quantum-classical states belong to this class. (ii) Quantum-quantum states which demonstrate super-unsteerability with unsteerable boxes having minimum hidden variable dimension $3$, and (iii) quantum-quantum states which demonstrate super-unsteerability with unsteerable boxes having minimum hidden variable dimension $4$. The present study also provides an efficient procedure to minimize the dimension of the shared randomness needed to construct the LHV-LHS model of an unsteerable correlation.\\ In Ref. \cite{sl5}, the authors have shown that the nonclassicality of a family of local correlations in the Bell-CHSH scenario can be characterized by super-correlation, in this case, superlocality. Extending this approach, we have shown here that the nonclassicality of certain unsteerable states in the related steering scenario can also be pointed out by super-correlations, i.e., the phenomenon of super-unsteerability. \\ Before concluding, we note that nonlocality or steerability of any correlation in QM or in any convex operational theory can be characterized by the non-zero communication cost that must supplement pre-shared randomness in order to simulate the correlations. The question of an analogous operational characterization of the quantumness of unsteerable correlations has been addressed here, and associated with super-unsteerability. The idea of super-unsteerability in the context of multipartite unsteerable boxes \cite{munbox, munbox2} would be worth probing in future studies. It would be interesting to study how to quantify super-unsteerability and whether there exists any quantum informational application of the quantumness of unsteerable correlations as witnessed by super-unsteerability. \begin{thebibliography}{99} \bibitem{Bell} J. S. Bell, \emph{On the Einstein-Podolsky-Rosen paradox}, \href{https://cds.cern.ch/record/111654/files/vol1p195-200_001.pdf}{Physics \textbf{1}, 195 (1964)}. \bibitem{chsh} J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, \emph{Proposed Experiment to Test Local Hidden-Variable Theories}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.23.880}{Phys. Rev. Lett. {\bf 23}, 880 (1969).} \bibitem{bell2} N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, \emph{Bell nonlocality}, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.419}{Rev. Mod. Phys. {\bf 86}, 419 (2014).} \bibitem{tsi} B. S.
Tsirel'son, \emph{Quantum generalizations of Bell's inequality}, \href{https://link.springer.com/article/10.1007/BF00417500}{Lett. Math. Phys. {\bf 4}, 93 (1980).} \bibitem{pr} S. Popescu, and D. Rohrlich, \emph{Quantum nonlocality as an axiom}, \href{https://link.springer.com/article/10.1007/BF02058098}{Found. Phys. {\bf 24}, 379 (1994).} \bibitem{barrett} J. Barrett, N. Linden, S. Massar, S. Pironio, S. Popescu, and D. Roberts, \emph{Nonlocal correlations as an information-theoretic resource}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.71.022101}{Phys. Rev. A {\bf 71}, 022101 (2005).} \bibitem{masanes} Ll. Masanes, A. Acin, and N. Gisin, \emph{General properties of nonsignaling theories}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.73.012112}{Phys. Rev. A {\bf 73}, 012112 (2006).} \bibitem{pp} I. Pitowsky, \emph{Range Theorems for Quantum Probability and Entanglement}, \href{https://arxiv.org/abs/quant-ph/0112068}{arXiv:quant-ph/0112068 (2001).} \bibitem{ps2} M. Pawlowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, and M. Zukowski, \emph{Information causality as a physical principle}, \href{https://www.nature.com/articles/nature08400}{Nature \textbf{461}, 1101 (2009).} \bibitem{ps} S. Popescu, \emph{Nonlocality beyond quantum mechanics}, \href{https://www.nature.com/articles/nphys2916}{Nature Phys. \textbf{10}, 264 (2014).} \bibitem{epr} A. Einstein, B. Podolsky, and N. Rosen, \emph{Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?}, \href{https://journals.aps.org/pr/abstract/10.1103/PhysRev.47.777}{Phys. Rev. {\bf 47}, 777 (1935).} \bibitem{scro} E. Schrodinger, \emph{Discussion of Probability Relations between Separated Systems}, \href{https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/discussion-of-probability-relations-between-separated-systems/C1C71E1AA5BA56EBE6588AAACB9A222D}{Proc. Cambridge Philos. Soc. {\bf 31}, 555 (1935)}; E. Schrodinger, \emph{Probability relations between separated systems}, \href{https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/probability-relations-between-separated-systems/641DDDED6FB033A1B190B458E0D02F22}{Proc. Cambridge Philos. Soc. {\bf 32}, 446 (1936).} \bibitem{steer} H. M. Wiseman, S. J. Jones, and A. C. Doherty, \emph{Steering, Entanglement, Nonlocality, and the Einstein-Podolsky-Rosen Paradox}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.140402}{Phys. Rev. Lett. {\bf 98}, 140402 (2007).} \bibitem{steer2} S. J. Jones, H. M. Wiseman, and A. C. Doherty, \emph{Entanglement, Einstein-Podolsky-Rosen correlations, Bell nonlocality, and steering}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.76.052116}{Phys. Rev. A. {\bf 76}, 052116 (2007).} \bibitem{st8} B. Wittmann, S. Ramelow, F. Steinlechner, N. K. Langford, N. Brunner, H. Wiseman, R. Ursin, and A. Zeilinger, \emph{Loophole-free Einstein-Podolsky-Rosen experiment via quantum steering}, \href{http://iopscience.iop.org/article/10.1088/1367-2630/14/5/053030/meta}{New. J. Phys. {\bf 14}, 053030 (2012).} \bibitem{st10} Q. Y. He, and M. D. Reid, \emph{Genuine Multipartite Einstein-Podolsky-Rosen Steering}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.250403}{Phys. Rev. Lett. {\bf 111}, 250403 (2013).} \bibitem{steer22} P. Chowdhury, T. Pramanik, A. S. Majumdar, and G. S.
Agarwal, \emph{Einstein-Podolsky-Rosen steering using quantum correlations in non-Gaussian entangled states}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.89.012104}{Phys. Rev. A {\bf 89}, 012104 (2014).} \bibitem{steer3} A. Milne, S. Jevtic, D. Jennings, H. Wiseman, and T. Rudolph, \emph{Quantum steering ellipsoids, extremal physical states and monogamy}, \href{http://iopscience.iop.org/article/10.1088/1367-2630/16/8/083017/meta}{New. J. Phys. {\bf 16}, 083017 (2014).} \bibitem{st4} S. Jevtic, M. Pusey, D. Jennings, and T. Rudolph, \emph{Quantum Steering Ellipsoids}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.020402}{Phys. Rev. Lett. {\bf 113}, 020402 (2014).} \bibitem{st9} D. A. Evans, and H. M. Wiseman, \emph{Optimal measurements for tests of Einstein-Podolsky-Rosen steering with no detection loophole using two-qubit Werner states}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.90.012114}{Phys. Rev. A. {\bf 90}, 012114 (2014).} \bibitem{st5} S. Jevtic, M. J. W. Hall, M. R. Anderson, M. Zwierz, and H. M. Wiseman, \emph{Einstein-Podolsky-Rosen steering and the steering ellipsoid}, \href{https://www.osapublishing.org/josab/abstract.cfm?uri=josab-32-4-A40}{JOSA B \textbf{32}, A40 (2015).} \bibitem{s6} T. Pramanik, M. Kaplan, and A. S. Majumdar, \emph{Fine-grained Einstein-Podolsky-Rosen-steering inequalities}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.90.050305}{Phys. Rev. A \textbf{90}, 050305(R) (2014).} \bibitem{steer24} P. Chowdhury, T. Pramanik, and A. S. Majumdar, \emph{Stronger steerability criterion for more uncertain continuous-variable systems}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.042317}{Phys. Rev. A {\bf 92}, 042317 (2015).} \bibitem{new} D. Cavalcanti, and P. Skrzypczyk, \emph{Quantum steering: a review with focus on semidefinite programming}, \href{http://iopscience.iop.org/article/10.1088/1361-6633/80/2/024001/meta}{Rep. Prog. Phys. \textbf{80}, 024001 (2017).} \bibitem{st11} M. T. Quintino, T. Vertesi, D. Cavalcanti, R. Augusiak, M. Demianowicz, A. Acin, and N. Brunner, \emph{Inequivalence of entanglement, steering, and Bell nonlocality for general measurements}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.032107}{Phys. Rev. A {\bf 92}, 032107 (2015).} \bibitem{ent} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, \emph{Quantum entanglement}, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.81.865}{Rev. Mod. Phys {\bf 81}, 865 (2009).} \bibitem{st7} J. Bowles, T. Vertesi, M. T. Quintino, and N. Brunner, \emph{One-way Einstein-Podolsky-Rosen Steering}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.200402}{Phys. Rev. Lett. {\bf 112}, 200402 (2014).} \bibitem{st12} C. Branciard, E. G. Cavalcanti, S. P. Walborn, V. Scarani, and H. M. Wiseman, \emph{One-sided device-independent quantum key distribution: Security, feasibility, and the connection with steering}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.85.010301}{Phys. Rev. A. {\bf 85}, 010301(R) (2012).} \bibitem{disc} H. Ollivier, and W. H. Zurek, \emph{Quantum Discord: A Measure of the Quantumness of Correlations}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.88.017901}{Phys. Rev. Lett. {\bf 88}, 017901 (2001).} \bibitem{disc2} L. Henderson, and V. Vedral, \emph{Classical, quantum and total correlations}, \href{http://iopscience.iop.org/article/10.1088/0305-4470/34/35/315/meta}{J. Phys. A: Math. Gen. 
{\bf 34}, 6899 (2001).} \bibitem{disc3} K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral, \emph{The classical-quantum boundary for correlations: Discord and related measures}, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.84.1655}{Rev. Mod. Phys. {\bf 84}, 1655 (2012).} \bibitem{bc} M. Piani, P. Horodecki, and R. Horodecki, \emph{No-Local-Broadcasting Theorem for Multipartite Quantum Correlations}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.100.090502}{Phys. Rev. Lett. {\bf 100}, 090502 (2008).} \bibitem{disc4} Y. Guo, \emph{Non-commutativity measure of quantum discord}, \href{https://www.nature.com/articles/srep25241}{Sci. Rep. {\bf 6}, 25241 (2016).} \bibitem{com2} L. A. Khalfin, and B. S. Tsirelson, Symposium on the Foundations of Modern Physics, 441-460 (1985). \bibitem{com1} M. M. Wolf, D. Perez-Garcia, and C. Fernandez, \emph{Measurements Incompatible in Quantum Theory Cannot Be Measured Jointly in Any Other No-Signaling Theory}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.103.230402}{Phys. Rev. Lett. \textbf{103}, 230402 (2009).} \bibitem{com} M. T. Quintino, T. Vertesi, and N. Brunner, \emph{Joint Measurability, Einstein-Podolsky-Rosen Steering, and Bell Nonlocality}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.160402}{Phys. Rev. Lett. \textbf{113}, 160402 (2014).} \bibitem{com3} R. Uola, T. Moroder, and O. Guhne, \emph{Joint Measurability of Generalized Measurements Implies Classicality}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.160403}{Phys. Rev. Lett. \textbf{113}, 160403 (2014).} \bibitem{cc1} B. F. Toner, and D. Bacon, \emph{Communication Cost of Simulating Bell Correlations}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.91.187904}{Phys. Rev. Lett. {\bf 91}, 187904 (2003).} \bibitem{cc2} A. B. Sainz, L. Aolita, N. Brunner, R. Gallego, and P. Skrzypczyk, \emph{Classical communication cost of quantum steering}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.94.012308}{Phys. Rev. A {\bf 94}, 012308 (2016).} \bibitem{sl1} J. M. Donohue, and E. Wolfe, \emph{Identifying nonconvexity in the sets of limited-dimension quantum correlations}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.062120}{Phys. Rev. A {\bf 92}, 062120 (2015).} \bibitem{sl2} K. T. Goh, J.-D. Bancal, and V. Scarani, \emph{Measurement-device-independent quantification of entanglement for given Hilbert space dimension}, \href{http://iopscience.iop.org/article/10.1088/1367-2630/18/4/045022}{New J. Phys. {\bf 18} 045022 (2016).} \bibitem{sl3} S. Zhang, Proc. 3rd Innov. Theo. Comput. Sci. -ITCS '12 (ACM, 2012) pp. 39–59. \bibitem{sl4} Z. Wei, and J. Sikora, \emph{Device-independent characterizations of a shared quantum state independent of any Bell inequalities}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.032103}{Phys. Rev. A {\bf 95}, 032103 (2017).} \bibitem{sl5} C. Jebaratnam, S. Aravinda, and R. Srikanth, \emph{Nonclassicality of local bipartite correlations}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.032120}{Phys. Rev. A {\bf 95}, 032120 (2017).} \bibitem{sl6} C. Jebaratnam, D. Das, S. Goswami, R. Srikanth, and A. S. Majumdar, \emph{Operational nonclassicality of local multipartite correlations}, \href{https://arxiv.org/abs/1701.04363}{arXiv:1701.04363 [quant-ph] (2017).} \bibitem{nsst} M. Piani, and J. 
Watrous, \emph{Necessary and Sufficient Quantum Information Characterization of Einstein-Podolsky-Rosen Steering}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.060404}{Phys. Rev. Lett. {\bf 114}, 060404 (2015).} \bibitem{guhne} T. Moroder, O. Gittsovich, M. Huber, R. Uola, and O. Guhne, \emph{Steering Maps and Their Application to Dimension-Bounded Steering}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.090403}{Phys. Rev. Lett. {\bf 116}, 090403 (2016).} \bibitem{DDJ+17} D. Das, S. Datta, C. Jebaratnam, A. S. Majumdar, \emph{Cost of Einstein-Podolsky-Rosen steering in the context of extremal boxes}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.022110}{Phys. Rev. A \textbf{97}, 022110 (2018).} \bibitem{werner} R. F. Werner, and M. M. Wolf, \emph{Bell inequalities and entanglement}, \href{https://arxiv.org/abs/quant-ph/0107093}{Quantum Inf. Comput. {\bf 1}, 1 (2001).} \bibitem{newpusey} M. F. Pusey, \emph{Negativity and steering: A stronger Peres conjecture}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.88.032313}{Phys. Rev. A \textbf{88}, 032313 (2013).} \bibitem{stt} E. G. Cavalcanti, S. J. Jones, H. M. Wiseman, and M. D. Reid, \emph{Experimental criteria for steering and the Einstein-Podolsky-Rosen paradox}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.80.032112}{Phys. Rev. A {\bf 80}, 032112 (2009).} \bibitem{stt2} M. D. Reid, \emph{Demonstration of the Einstein-Podolsky-Rosen paradox using nondegenerate parametric amplification}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.40.913}{Phys. Rev. A. {\bf 40}, 913 (1989).} \bibitem{stt3} Z. Y. Ou, S. F. Pereira, H. J. Kimble, and K. C. Peng, \emph{Realization of the Einstein-Podolsky-Rosen paradox for continuous variables}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.68.3663}{Phys. Rev. Lett. {\bf 68}, 3663 (1992).} \bibitem{stt5} S. P. Walborn, A. Salles, R. M. Gomes, F. Toscano, and P. H. Souto Ribeiro, \emph{Revealing Hidden Einstein-Podolsky-Rosen Nonlocality}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.106.130402}{Phys. Rev. Lett. {\bf 106}, 130402 (2011).} \bibitem{stt6} E. G. Cavalcanti, C. J. Foster, M. Fuwa, and H. M. Wiseman, \emph{Analog of the Clauser-Horne-Shimony-Holt inequality for steering}, \href{https://www.osapublishing.org/josab/abstract.cfm?uri=josab-32-4-A74}{J. Opt. Soc. Am. B {\bf 32}, A74 (2015).} \bibitem{not} S. Abramsky, and L. Hardy, \emph{Logical Bell inequalities}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.85.062114}{Phys. Rev. A {\bf 85}, 062114 (2012).} \bibitem{ppt} A. Peres, \emph{Separability Criterion for Density Matrices}, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.77.1413}{Phys. Rev. Lett. \textbf{77}, 1413 (1996).} \bibitem{munbox} C. Jebaratnam, \emph{Detecting genuine multipartite entanglement in steering scenarios}, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.93.052311}{Phys. Rev. A \textbf{93}, 052311 (2016).} \bibitem{munbox2} C. Jebaratnam, D. Das, A. Roy, A. Mukherjee, S. S. Bhattacharya, B. Bhattacharya, A. Riccardi, D. 
Sarkar, \emph{Tripartite entanglement detection through tripartite quantum steering in one-sided and two-sided device-independent scenarios}, \href{https://arxiv.org/abs/1704.08162}{arXiv:1704.08162 [quant-ph] (2017).} \end{thebibliography} \appendix \section{Demonstrating that the correlation given by Eq.(\ref{corr2}) cannot have a DLHV-LHS decomposition with a hidden variable of dimension $4$} If the correlation given by Eq.(\ref{corr2}) has a DLHV-LHS decomposition with a hidden variable of dimension $4$, i.e., an LHV-LHS decomposition with a hidden variable of dimension $4$ having different deterministic probability distributions at Alice's side and non-deterministic probability distributions (with quantum realization) at Bob's side, then the correlation (\ref{corr2}) can be written as follows, \begin{equation} \label{a1} P(ab|xy) = \sum_{\lambda=0}^{3} p(\lambda) P(a|x, \lambda) P(b|y, \rho_{\lambda}), \end{equation} where $0<p(\lambda) <1$ ($\lambda = 0, 1, 2, 3$), $P(a|x,0) = P_D^{00}$, $P(a|x,1) = P_D^{01}$, $P(a|x,2) = P_D^{10}$, $P(a|x,3) = P_D^{11}$. Let us assume that the non-deterministic probability distributions (with quantum realization) at Bob's side are given by \begin{equation} P(b|y,\rho_0) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $t_1$ & $1-t_1$ \\ (1) & $t_2$ & $1-t_2$ \\ \end{tabular} \end{equation} \begin{equation} P(b|y,\rho_1) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $t_3$ & $1-t_3$ \\ (1) & $t_4$ & $1-t_4$ \\ \end{tabular} \end{equation} \begin{equation} P(b|y,\rho_2) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $t_5$ & $1-t_5$ \\ (1) & $t_6$ & $1-t_6$ \\ \end{tabular} \end{equation} \begin{equation} P(b|y,\rho_3) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $t_7$ & $1-t_7$ \\ (1) & $t_8$ & $1-t_8$ \\ \end{tabular} \end{equation} with $0 < t_i < 1$ ($i=1,2,3,4,5,6,7,8$). Comparing Eqs.(\ref{a1}) and (\ref{corr2}), we get \begin{equation} \label{a2} p(11|01) = p(1)(1-t_4) + p(3)(1-t_8) =0 \end{equation} and \begin{equation} \label{a3} p(11|10) = p(1)(1-t_3) + p(2)(1-t_5) =0. \end{equation} Since $0<p(\lambda) <1$ ($\lambda = 0, 1, 2, 3$) and $0 < t_i < 1$ ($i=1,2,3,4,5,6,7,8$), from Eqs.(\ref{a2}) and (\ref{a3}) we get \begin{equation} \label{a4} p(1)(1-t_4) = p(1)(1-t_3) = 0. \end{equation} Hence, we get either \begin{equation} \label{a5} p(1)=0 \end{equation} or \begin{equation} t_4 =1 \hspace{0.4cm} \text{and} \hspace{0.4cm} t_3 =1. \end{equation} Now, if $p(1) =0$, then the decomposition (\ref{a1}) becomes a DLHV-LHS decomposition of the correlation (\ref{corr2}) with a hidden variable of dimension $3$. On the other hand, if $t_4 =1$ and $t_3 =1$, then $P(b|y, \rho_1)$ becomes \begin{equation} P(b|y,\rho_1) =\begin{tabular}{c|cc} \backslashbox{(y)}{(b)} & (0) & (1) \\\hline (0) & $1$ & $0$ \\ (1) & $1$ & $0$ \\ \end{tabular}, \end{equation} which has no quantum realization, i.e., $P(b|y, \rho_1) \neq \langle \phi | \{\Pi_{b|y}\}_{b,y}|\phi\rangle$ for any quantum state $|\phi\rangle$.\\ Hence, one can conclude that the correlation given by Eq.(\ref{corr2}) cannot have a DLHV-LHS decomposition with a hidden variable of dimension $4$.
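The consistency of the explicit dimension-$3$ model (\ref{eee}) with the quantum prediction of the state (\ref{state2n}) can also be confirmed numerically. The following minimal Python/NumPy sketch is an illustration only; it assumes the convention that $P_D^{\alpha\beta}$ denotes the deterministic strategy $a=\alpha x\oplus\beta$ and that outcome $0$ corresponds to the $+1$ eigenvalue of the measured observable. Under these assumptions it reproduces every entry of the table (\ref{corr2}) from both sides.
\begin{verbatim}
import numpy as np

# Qubit vectors; outcome 0 <-> +1 eigenvector of the measured operator
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v)

# Settings x, y = 0 -> sigma_x and x, y = 1 -> sigma_z, on both sides
P = {0: (proj(ketp), proj(ketm)), 1: (proj(ket0), proj(ket1))}

# Separable state rho = (|00><00| + |++><++|)/2 of Eq. (state2n)
rho = 0.5 * (np.kron(proj(ket0), proj(ket0)) + np.kron(proj(ketp), proj(ketp)))

def quantum_box(a, b, x, y):
    # Joint probability predicted by rho and the projective measurements
    return np.trace(rho @ np.kron(P[x][a], P[y][b])).real

# LHV-LHS model of Eq. (eee): weights p(lambda), deterministic strategies
# P_D^{alpha beta} (a = alpha*x XOR beta) on Alice's side, and Bob's
# response functions P(b|y, rho_lambda) read off from the tables above
p_lam = [0.5, 0.25, 0.25]
alice = [lambda a, x: a == 0,          # P_D^{00}
         lambda a, x: a == x,          # P_D^{10}
         lambda a, x: a == (x ^ 1)]    # P_D^{11}
bob = [lambda b, y: (3/4, 1/4)[b],                  # P(b|y, rho_0)
       lambda b, y: ((1, 0), (1/2, 1/2))[y][b],     # P(b|y, rho_1)
       lambda b, y: ((1/2, 1/2), (1, 0))[y][b]]     # P(b|y, rho_2)

def lhv_lhs_box(a, b, x, y):
    return sum(p * A(a, x) * B(b, y) for p, A, B in zip(p_lam, alice, bob))

for x, y, a, b in np.ndindex(2, 2, 2, 2):
    assert np.isclose(quantum_box(a, b, x, y), lhv_lhs_box(a, b, x, y))
print("dimension-3 LHV-LHS model reproduces the correlation of Eq. (corr2)")
\end{verbatim}
The assertion loop runs over all sixteen input-output combinations, so the agreement with the table (\ref{corr2}) is checked entry by entry.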
\section{Reproducing the white-noise BB84 box using a qutrit-qubit system} Consider that Alice and Bob share the following qutrit-qubit state: \begin{equation} \rho_E = V |\psi^- \rangle \langle \psi^- | + \frac{1- V}{2} |2 \rangle \langle 2 | \otimes \mathbb{I}_2, \end{equation} where $|\psi^- \rangle$ = $\frac{1}{\sqrt{2}} (|01 \rangle - |10 \rangle)$ is the singlet state; $0 <V \leq 1$; $|0\rangle$, $|1\rangle$ and $|2\rangle$ form an orthonormal basis of the Hilbert space $\mathcal{C}^3$; $|0\rangle$ and $|1\rangle$ form an orthonormal basis of the Hilbert space $\mathcal{C}^2$ (they are eigenvectors of the operator $\sigma_z$); $\mathbb{I}_2 = |0\rangle \langle 0| + |1\rangle \langle 1|$.\\ Now consider the following two dichotomic POVMs $E^1 \equiv \{ E_i^1 (i=0,1) | \sum_i E_i^1 = \mathbb{I}, 0 \leq E_i^1 \leq \mathbb{I} \}$ and $E^2 \equiv \{ E_j^2 (j=0,1) | \sum_j E_j^2 = \mathbb{I}, 0 \leq E_j^2 \leq \mathbb{I} \}$, where\\ $E_0^1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \frac{1}{2} \\ \end{pmatrix}$ and let us assume that the corresponding outcome is $0$,\\ $E_1^1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \frac{1}{2} \\ \end{pmatrix}$ and let us assume that the corresponding outcome is $1$. \\ On the other hand,\\ $E_0^2 = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & \frac{1}{2} \\ \end{pmatrix}$ and let us assume that the corresponding outcome is $0$, \\ $E_1^2 = \begin{pmatrix} \frac{1}{2} & -\frac{1}{2} & 0 \\ -\frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & \frac{1}{2} \\ \end{pmatrix}$ and let us assume that the corresponding outcome is $1$. \\ Here, the matrices $E_0^1$, $E_1^1$, $E_0^2$ and $E_1^2$ are written in the basis \{$|0\rangle$, $|1\rangle$, $|2\rangle$\}. Now if Alice performs the POVMs corresponding to $A_0 = E^1$ and $A_1 = E^2$, and Bob performs the projective measurements corresponding to $B_0 = \sigma_z$ and $B_1 = \sigma_x$, then the white-noise BB84 family can be reproduced. \end{document}
\begin{document} \textbf{Liouville property for solutions of the linearized degenerate thin film equation of fourth order in a halfspace.} S. P. Degtyarev \textbf{Institute for applied mathematics and mechanics of Ukrainian National academy of sciences, Donetsk } E-mail: [email protected] \begin{abstract} We consider a boundary value problem in the half-space for a linear parabolic equation of fourth order with a degeneration on the boundary of the half-space. The equation under consideration is essentially a linearized thin film equation. We prove that, if the right hand side of the equation and the boundary condition are polynomials in the tangential variables and time, then any solution of power growth has the same property. It is also shown that this property does not extend to the normal variable. As an application, we present a uniqueness theorem for the problem in the class of functions of power growth. The final version is available on Springer at DOI: 10.1007/s00025-015-0467-x \end{abstract} Key words: \ Liouville theorem, fourth order, degenerate parabolic equation, thin film equation MSC: \ 35B53, 35K35, 35K65, 35Q35 \section{Introduction} \label{s1} The importance of the classical Liouville theorem is well known. This theorem in complex analysis states that an entire function growing at infinity no faster than a power is a polynomial. In particular, a bounded entire function is a constant. It is also known that a similar property is inherited by solutions of many linear and nonlinear elliptic and parabolic problems. The presence of such a property for solutions of a problem is an extremely important tool for the investigation of qualitative properties of such solutions. As a simple example of the use of the partial Liouville theorem of this paper (Theorem \ref{T1.1}) we present a result on the uniqueness for the initial-boundary value problem in a halfspace for a fourth order degenerate linear equation in the class of solutions of power growth. The literature on the subject is so vast and diverse that we do not attempt to give even a brief overview in this introduction. Among the papers on the Liouville properties of solutions of degenerate equations we mention only the papers \cite{1} - \cite{10}, and this is definitely not a complete list. We note only that all such papers on degenerate equations are devoted mainly to second-order equations. The author is not aware of Liouville theorems for degenerate parabolic equations of order higher than the second. In this paper, we consider in the halfspace a boundary value problem for a fourth order parabolic equation with strong degeneration on the boundary of the halfspace. This problem is obtained as a model case under linearization of a boundary value problem for the quasilinear degenerate thin film equation in the formulation of the papers \cite{11} - \cite{15}. Denote $R_{+}^{N}=\{x=(x_{1},...,x_{N})\in R^{N}:\ x_{N}>0\}$, $Q=R_{+}^{N}\times R^{1}=\{(x,t):\ x\in R_{+}^{N},t\in R^{1}\}$. For $R>0$ denote also $B_{R}=\left\{ x=(x^{\prime },x_{N})\in R_{+}^{N}:\quad 0<x_{N}<R,|x^{\prime }|<R\right\} $, $x^{\prime }=(x_{1},...,x_{N-1})$, $Q_{R}=\left\{ (x,t)\in Q:\quad x\in B_{R},-R^{2}<t<R^{2}\right\} $.
Let a function $u(x,t)$ be defined in $Q$, $u(x,t)\in W_{\infty ,loc}^{4,1}(Q)$, and let $u(x,t)$ satisfy in this domain the equation \begin{equation} \frac{\partial u}{\partial t}+\nabla \left( x_{N}^{2}\nabla \Delta u-\beta \nabla u\right) =f(x,t), \label{1.1} \end{equation} where $\nabla =(\partial /\partial x_{1},...,\partial /\partial x_{N})$, $\Delta $ is the Laplace operator in the space variables, $\beta \geq 0$ is a given nonnegative constant, and $f(x,t)$ is a given function. We suppose that the function $u(x,t)$ possesses the following weighted regularity \begin{equation} \sum_{|\alpha |=4}\max_{Q_{R}}\left\vert x_{N}^{2}D_{x}^{\alpha }u\right\vert +\sum_{|\alpha |=3}\max_{Q_{R}}\left\vert x_{N}D_{x}^{\alpha }u\right\vert \leq C(R)<\infty , \label{1.2} \end{equation} where $\alpha =(\alpha _{1},...,\alpha _{N})$ is a multiindex, $D_{x}^{\alpha }u=\partial ^{|\alpha |}u/\partial x_{1}^{\alpha _{1}}...\partial x_{N}^{\alpha _{N}}$, and $C(R)$ is a constant depending on $R$. We suppose also that the function $u(x,t)$ satisfies on the boundary of the domain $Q$, that is, at $x_{N}=0$, the Dirichlet condition \begin{equation} u(x,t)|_{x_{N}=0}=u(x^{\prime },0,t)=g(x^{\prime },t), \label{1.3} \end{equation} where $g(x^{\prime },t)$ is a given function. Note here that, as was shown, for example, in the paper \cite{11} in the one-dimensional setting, conditions \eqref{1.2}, \eqref{1.3} uniquely determine the solution $u(x,t)$. Thus, the restriction on the class of solutions in the form \eqref{1.2} serves in some sense as a replacement for the second boundary condition, which is necessary for uniformly parabolic fourth order equations of the form \eqref{1.1} (in this regard see Remark \ref{R1.1}). We also note that condition \eqref{1.2} can actually be relaxed, for example, to local weighted integrability conditions for the highest-order derivatives. It is not our purpose in this paper to find the exact form of such conditions. Suppose finally that the function $u(x,t)$ has power growth at infinity \begin{equation} \max_{Q_{R}}|u(x,t)|\leq CR^{M},\quad R\geq 1,\quad M>0, \label{1.4} \end{equation} where $C$ is some positive constant and $M$ is a given nonnegative exponent. We agree here that in what follows we denote by the same symbols $C$, $\nu$ all absolute constants or constants depending only on the fixed initial data of the problem. Let us now formulate the main result. \begin{theorem} \label{T1.1} Let the function $f(x,t)$ in \eqref{1.1} be a polynomial with respect to the ``tangent'' variables $x^{\prime }$ and $t$ of degree $M_{f}$, and let the function $g(x^{\prime },t)$ be a polynomial of degree $M_{g}$. \ Then, under conditions \eqref{1.1}--\eqref{1.4}, the function $u(x,t)$ is a polynomial with respect to the variables $x^{\prime }$ and $t$ of degree not greater than $M_{u}=[M]$. \end{theorem} \begin{remark} \label{R1.1} Note that the function $u(x,t)$ is not in general a polynomial in the variable $x_{N}$, as is shown by the following example. Consider a function $v(x,t)\equiv v(x_{N})$ that depends only on the variable $x_{N}$ and satisfies the simplest inhomogeneous equation \eqref{1.1}, that is, \begin{equation} l_{\beta }v\equiv \frac{d}{dx_{N}}\left( x_{N}^{2}\frac{d^{3}v}{dx_{N}^{3}}-\beta \frac{dv}{dx_{N}}\right) =b,\quad b=const.
\label{1.5} \end{equation} A direct calculation shows that for $\beta >0$ the general solution has the form \begin{equation} v(x_{N})=C_{1}x_{N}^{a_{1}}+C_{2}x_{N}^{a_{2}}+C_{3}x_{N}+C_{4}-\frac{b}{2\beta }x_{N}^{2}, \label{1.6} \end{equation} where $C_{i}$, $i=1,...,4$ are arbitrary constants and the exponents $a_{1}$, $a_{2}$ are equal to \begin{equation} a_{1}=-\left( \frac{1}{2}+\sqrt{\frac{1}{4}+\beta }\right) <-1,\quad a_{2}=-\frac{1}{2}+\sqrt{\frac{1}{4}+\beta }. \label{1.7} \end{equation} At the same time for $\beta =0$ \begin{equation} v(x_{N})=C_{1}x_{N}\ln x_{N}+C_{2}x_{N}^{2}+C_{3}x_{N}+C_{4}-\frac{b}{2}\left( x_{N}^{2}\ln x_{N}-\frac{3}{2}x_{N}^{2}\right) . \label{1.8} \end{equation} In this case the conditions on the class of solutions in \eqref{1.2} are used to determine one of the arbitrary constants in \eqref{1.6}, \eqref{1.8} (it is exactly here that they serve as a replacement of the boundary condition). It follows from \eqref{1.2} that the constant $C_{1}$ in relations \eqref{1.6}, \eqref{1.8} should be chosen to be zero, since the corresponding terms do not satisfy \eqref{1.2}. Thus, \begin{equation} v(x_{N})=\left\{ \begin{array}{c} C_{2}x_{N}^{a_{2}}+C_{3}x_{N}+C_{4}-\frac{b}{2\beta }x_{N}^{2},\quad \beta >0, \\ C_{2}x_{N}^{2}+C_{3}x_{N}+C_{4}-\frac{b}{2}\left( x_{N}^{2}\ln x_{N}-\frac{3}{2}x_{N}^{2}\right) ,\quad \beta =0. \end{array} \right. \label{1.9} \end{equation} This relation gives an example of a function that satisfies all the conditions of Theorem \ref{T1.1} and is not a polynomial in the variable $x_{N}$. \end{remark} As a simple application of the Liouville theorem \ref{T1.1} we give a corollary on the uniqueness of the solution to the initial-boundary value problem in a halfspace for equation \eqref{1.1} in the class of functions of power growth. \begin{corollary} \label{C1.1} (Uniqueness.) Let a function $u(x,t)$ satisfy the homogeneous equation \eqref{1.1} with $f\equiv 0$ in the halfspace $Q_{+}=Q\cap \left\{ t>0\right\} $, conditions \eqref{1.2}, the homogeneous condition \eqref{1.3} with $g\equiv 0$, and the homogeneous initial condition \begin{equation} u(x,0)\equiv 0,\quad t=0. \label{1.10} \end{equation} If $u(x,t)$ has at most power growth at infinity (that is, $u(x,t)$ satisfies \eqref{1.4}), then this function is identically equal to zero, $u(x,t)\equiv 0$. \end{corollary} \begin{proof} Extend the function $u(x,t)$ to the whole domain $Q$ by setting it identically zero in $\left\{ t\leq 0\right\} $, and keep its previous designation $u(x,t)$. In view of the homogeneity of equation \eqref{1.1} and boundary condition \eqref{1.3}, it follows from \eqref{1.10} that the extended function satisfies the homogeneous equation \eqref{1.1} with $f\equiv 0$ and the homogeneous condition \eqref{1.3} with $g\equiv 0$ in the whole domain $Q$. Furthermore, the extended function inherits the power growth at infinity. Consequently, the function $u(x,t)$ satisfies all the conditions of Theorem \ref{T1.1} and is therefore a polynomial in the variable $t$ for all values of the variables $x$. Since this polynomial is identically equal to zero in $\left\{t \leq 0 \right\}$, we see that it is identically equal to zero for all $t$. Thus, the function $u(x,t)$ is identically equal to zero. \end{proof} The further content of the article is organized as follows. In the second section we present the proof of Theorem \ref{T1.1}.
This proof is based on local integral estimates for derivatives of the solution $u(x,t)$ with respect to the ``tangent'' variables $x^{\prime} = (x_{1},...,x_{N-1})$ and with respect to the variable $t$ in terms of a local integral norm of the solution itself. The proof of the necessary local integral estimates is presented for convenience in the third and final section. \section{The proof of Theorem \ref{T1.1}.} Denote for $R>0$, as before, $B_{R}=\left\{ x=(x^{\prime },x_{N})\in R_{+}^{N}:\quad 0<x_{N}<R,|x^{\prime }|<R\right\} $, $x^{\prime }=(x_{1},...,x_{N-1})$, $Q_{R}=\left\{ (x,t)\in Q:\quad x\in B_{R},-R^{2}<t<R^{2}\right\} $. The proof of Theorem \ref{T1.1} is based on the following lemma. \begin{lemma} \label{L2.1} Let the function $u(x,t)$ satisfy the conditions of Theorem \ref{T1.1} except for the power growth at infinity. Let, moreover, the function $u(x,t)$ be infinitely differentiable in the variables $x^{\prime}$, $t$ in $\overline{Q}$, and let its derivatives with respect to these variables belong to the same space as $u(x,t)$ itself. Suppose, in addition, that $f \equiv 0$ in equation \eqref{1.1} and $g \equiv 0$ in condition \eqref{1.3}. Then for all $R>1$, $q>1$ the following estimates are valid \begin{equation} \int\limits_{Q_{R}}|\nabla u|^{2}dxdt\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt,\quad \int\limits_{Q_{R}}u_{t}^{2}dxdt\leq \frac{C_{q}}{R^{4}}\int\limits_{Q_{qR}}u^{2}dxdt. \label{2.1} \end{equation} \end{lemma} The proof of this lemma will be given in the next section; for now we proceed to the proof of Theorem \ref{T1.1}. Suppose first that the function $u(x,t)$ is infinitely differentiable with respect to the variables $x^{\prime}$, $t$ in $\overline{Q}$ and that its derivatives with respect to these variables belong to the same space as $u(x,t)$ itself. Let also $f \equiv 0$ in equation \eqref{1.1} and $g \equiv 0$ in condition \eqref{1.3}, that is, $u(x,t)$ satisfies the conditions of Lemma \ref{L2.1}. As the coefficients of equation \eqref{1.1} do not depend on $x^{\prime}$, $t$, differentiating this equation with respect to these variables and using estimates \eqref{2.1} iteratively, we obtain by induction \begin{equation} \int\limits_{Q_{R}}\left\vert D_{x^{\prime }}^{\alpha }D_{t}^{\beta }u\right\vert ^{2}dxdt\leq \frac{C(q,\alpha ,\beta )}{R^{2|\alpha |+4\beta }}\int\limits_{Q_{q^{|\alpha |+\beta }R}}u^{2}dxdt, \label{2.2} \end{equation} where $\alpha =(\alpha _{1},...,\alpha _{N-1})$ is a multiindex, $D_{x^{\prime }}^{\alpha }=D_{x_{1}}^{\alpha _{1}}...D_{x_{N-1}}^{\alpha _{N-1}}$. At the same time, condition \eqref{1.4} implies the following estimate of the integral on the right-hand side of \eqref{2.2} \begin{equation*} \int\limits_{Q_{q^{|\alpha |+\beta }R}}u^{2}dxdt\leq C(q,\alpha ,\beta )R^{2M+N+2}. \end{equation*} Then we obtain from \eqref{2.2} \begin{equation*} \int\limits_{Q_{R}}\left\vert D_{x^{\prime }}^{\alpha }D_{t}^{\beta }u\right\vert ^{2}dxdt\leq C(q,\alpha ,\beta )R^{2M+N+2-(2|\alpha |+4\beta )}. \end{equation*} Choose now in the last estimate sufficiently large $\left\vert \alpha \right\vert $ and $\beta $. Letting $R$ tend to infinity, we obtain $D_{x^{\prime }}^{\alpha }D_{t}^{\beta }u\equiv 0$ in $Q $ for $2|\alpha |+4\beta >2M+N+2$. This means that the function $u(x,t)$ is a polynomial in the variables $x^{\prime}$, $t$, and the degree of this polynomial satisfies $M_{u} \leq \lbrack M]$ by estimate \eqref{1.4}. Thus, Theorem \ref{T1.1} is proved under our additional assumptions. Let us now remove these additional assumptions.
Let $u(x,t)$ satisfy the conditions of Theorem \ref{T1.1} and let $\omega (x^{\prime },t)$ be a function of the class $C_{0}^{\infty }(R^{N-1}\times R^{1})$ with compact support in the set $\left\{ (x^{\prime },t):|x^{\prime }|+|t|<1\right\} $ and with the property (a mollifier) \begin{equation*} \int\limits_{R^{N-1}\times R^{1}}\omega (x^{\prime },t)dx^{\prime }dt=\int\limits_{|x^{\prime }|+|t|<1}\omega (x^{\prime },t)dx^{\prime }dt=1. \end{equation*} For $\varepsilon \in (0,1)$ denote $\omega _{\varepsilon }(x^{\prime },t)=\varepsilon ^{-N}\omega (x^{\prime }/\varepsilon ,t/\varepsilon )$ and consider the function (the smoothing with respect to the ``tangent'' variables) \begin{equation*} u_{\varepsilon }(x,t)=\omega _{\varepsilon }\ast u=\int\limits_{R^{N-1}\times R^{1}}\omega _{\varepsilon }(x^{\prime }-\xi ^{\prime },t-\tau )u(\xi ^{\prime },x_{N},\tau )d\xi d\tau . \end{equation*} It follows from well-known properties of the convolution that the function $u_{\varepsilon}(x,t)$ has the following properties. 1. \ The function $u_{\varepsilon}(x,t)$ is infinitely differentiable in the variables $x^{\prime}$ and $t$, its derivatives in these variables belong to the same class as $u(x,t)$ and satisfy \eqref{1.2} with a constant independent of $\varepsilon \in (0,1)$. 2. \ Since the coefficients of equation \eqref{1.1} for $u(x,t)$ do not depend on $x^{\prime }$ and $t$, the function $u_{\varepsilon }(x,t)$ satisfies equation \eqref{1.1} with the right hand side $f_{\varepsilon }(x,t)$ and boundary condition \eqref{1.3} with the function $g_{\varepsilon }(x^{\prime },t)$, where \[ f_{\varepsilon }(x,t)=\int\limits_{R^{N-1}\times R^{1}}\omega _{\varepsilon }(\xi ^{\prime },\tau )f(x^{\prime }-\xi ^{\prime },x_{N},t-\tau )d\xi d\tau ,\quad \] \begin{equation} g_{\varepsilon }(x^{\prime },t)=\int\limits_{R^{N-1}\times R^{1}}\omega _{\varepsilon }(\xi ^{\prime },\tau )g(x^{\prime }-\xi ^{\prime },t-\tau )d\xi d\tau . \label{2.3} \end{equation} At the same time it follows from \eqref{2.3} that the functions $f_{\varepsilon }(x,t)$ and $g_{\varepsilon }(x^{\prime },t)$ are polynomials in the variables $x^{\prime }$, $t$ and the degrees of these polynomials coincide with those of $f(x,t)$ and $g(x^{\prime },t)$. So the degrees of these polynomials do not depend on $\varepsilon \in (0,1)$. 3. \ It follows from the properties of the function $\omega _{\varepsilon }(x^{\prime },t)$ and from the definition of the function $u_{\varepsilon }(x,t)$ that $u_{\varepsilon }(x,t)$ satisfies condition \eqref{1.4} with a constant $C$ and the exponent $M$, which do not depend on $\varepsilon \in (0,1)$. Moreover, \begin{equation*} D_{x^{\prime }}^{\alpha }D_{t}^{\beta }u_{\varepsilon }(x,t)=\int\limits_{R^{N-1}\times R^{1}}D_{x^{\prime }}^{\alpha }D_{t}^{\beta }\omega _{\varepsilon }(x^{\prime }-\xi ^{\prime },t-\tau )u(\xi ^{\prime },x_{N},\tau )d\xi d\tau \end{equation*} and so \begin{equation} \max_{Q_{R}}\left\vert D_{x^{\prime }}^{\alpha }D_{t}^{\beta }u_{\varepsilon }(x,t)\right\vert \leq \frac{C}{\varepsilon ^{|\alpha |+\beta }}R^{M}. \label{2.4} \end{equation} Suppose now that the multiindex $\alpha _{0}$ and the number $\beta_{0}$ are chosen so large in comparison with the degrees $M_{f}$ and $M_{g}$ of the polynomials $f_{\varepsilon }(x,t)$ and $g_{\varepsilon }(x^{\prime },t)$ that $D_{x^{\prime }}^{\alpha _{0}}D_{t}^{\beta _{0}}f_{\varepsilon }(x,t)\equiv 0$, $D_{x^{\prime }}^{\alpha _{0}}D_{t}^{\beta _{0}}g_{\varepsilon }(x^{\prime },t)\equiv 0$.
\ Denote $v_{\varepsilon }(x,t)=D_{x^{\prime }}^{\alpha _{0}}D_{t}^{\beta _{0}}u_{\varepsilon }(x,t)$. It follows from the above mentioned properties of the function $u_{\varepsilon }(x,t)$ that $v_{\varepsilon }(x,t)$ has all the same properties as the function $u_{\varepsilon }(x,t)$, including \eqref{2.4}. The difference is that $v_{\varepsilon }(x,t)$ satisfies equation \eqref{1.1} with zero right hand side and satisfies the zero boundary condition \eqref{1.3}. Thus, by the above, the function $v_{\varepsilon}(x,t)$ is a polynomial in the variables $x^{\prime}$ and $t$. But then, by the definition of $v_{\varepsilon}(x,t)$, the function $u_{\varepsilon}(x,t)$ is also a polynomial in these variables, and its degree does not depend on $\varepsilon \in (0,1)$ and does not exceed $[M]$, by virtue of \eqref{2.4}. Let $\varphi (x,t)$ be an arbitrary function from $C_{0}^{\infty }(Q) $ and let $m=[M]+1$. Since $D_{x_{1}}^{m}...D_{x_{N-1}}^{m}D_{t}^{m}u_{\varepsilon }(x,t)\equiv 0$, multiplying this identity by $\varphi (x,t)$ and integrating by parts over the domain $Q$, we obtain \begin{equation*} \int\limits_{Q}u_{\varepsilon }(x,t)D_{x_{1}}^{m}...D_{x_{N-1}}^{m}D_{t}^{m}\varphi (x,t)dxdt=0. \end{equation*} Since $u(x,t)$ $\in L_{2,loc}(Q)$, the averaged functions $u_{\varepsilon }(x,t)$ tend in this space to the original function $u(x,t)$ as $\varepsilon \rightarrow 0$, as is well known. Therefore, taking in the last equality the limit as $\varepsilon \rightarrow 0 $, we get \begin{equation*} \int\limits_{Q}u(x,t)D_{x_{1}}^{m}...D_{x_{N-1}}^{m}D_{t}^{m}\varphi (x,t)dxdt=0. \end{equation*} As the function $\varphi (x,t)$ is arbitrary, we infer that $D_{x_{1}}^{m}...D_{x_{N-1}}^{m}D_{t}^{m}u(x,t)\equiv 0$ in the sense of distributions. But it is well known that this means that the function $u(x,t)$ is a polynomial with respect to the variables $x^{\prime }$ and $t$. Moreover, in view of \eqref{1.4} the degree of this polynomial does not exceed $[M]$. Thus Theorem \ref{T1.1} is proved under the assumption that Lemma \ref{L2.1} is valid. \section{Proof of Lemma \ref{L2.1}.} In this section we prove Lemma \ref{L2.1}, and this will complete the proof of Theorem \ref{T1.1}. In what follows we will need a corollary of the following statement (\cite{16}, Lemma 3.1). \begin{lemma} \label{L3.1} Let $f(t)$ be a nonnegative bounded function with the domain $[r_{0},r_{1}]$, $r_{0}\geq 0$. Suppose that for $r_{0}\leq t<s\leq r_{1}$ \begin{equation*} f(t)\leq \theta f(s)+[A(s-t)^{-a}+B], \end{equation*} where $A$, $B$, $a$, $\theta $ are some nonnegative constants and $0\leq \theta <1$. Then for all $r_{0}\leq t<s\leq r_{1}$ \begin{equation*} f(t)\leq C_{\theta ,a}[A(s-t)^{-a}+B]. \end{equation*} \end{lemma} We will use this lemma in the following particular case. Recall that for $R>0$ we denote $B_{R}=\left\{ x=(x^{\prime },x_{N})\in R_{+}^{N}:\quad 0<x_{N}<R,|x^{\prime }|<R\right\} $, $x^{\prime }=(x_{1},...,x_{N-1})$, $Q_{R}=\left\{ (x,t)\in Q:\quad x\in B_{R},-R^{2}<t<R^{2}\right\} $. \begin{lemma} \label{L3.2} Let functions $u(x,t)\geq 0$, $U(x,t)\geq 0$ be locally integrable in $Q$ and suppose that for all $R>0$, $q\in (1,3)$ we have the estimate \begin{equation} \int\limits_{Q_{R}}u(x,t)dxdt\leq \varepsilon \int\limits_{Q_{qR}}u(x,t)dxdt+\frac{C}{(q-1)^{a}R^{b}}\int \limits_{Q_{qR}}U(x,t)dxdt+B, \label{3.1} \end{equation} where $\varepsilon \in (0,1)$, $B\geq 0$.
Then for all $R>0$, $q\in (1,3)$ \begin{equation} \int\limits_{Q_{R}}u(x,t)dxdt\leq \frac{C_{a,\varepsilon }}{(q-1)^{a}R^{b}}\int\limits_{Q_{qR}}U(x,t)dxdt+B. \label{3.2} \end{equation} \end{lemma} This lemma is an immediate consequence of Lemma \ref{L3.1}. Keeping in mind that $R$ and $q$ in \eqref{3.1} are arbitrary, it is enough to consider on the interval $[1,q]$ the function $f(t)=\int\limits_{Q_{tR}}u(x,\tau )dxd\tau $ and take into account that for $t\in \lbrack 1,q]$ we have $\int\limits_{Q_{tR}}U(x,\tau )dxd\tau \leq A=\int\limits_{Q_{qR}}U(x,t)dxdt$. Note that assertions analogous to Lemma \ref{L3.2} were implicitly used before, for example, in the papers \cite{17} - \cite{25}. We will also use the well-known Nirenberg-Gagliardo inequality in the following particular case (see, for example, \cite{26} or \cite{27}, Theorem 5.2) \begin{equation} \int\limits_{B_{R}}|\nabla v|^{2}dx\leq C\left( \int\limits_{B_{R}}|D_{x}^{2}v|^{2}dx\right) ^{\frac{1}{2}}\left( \int\limits_{B_{R}}v^{2}dx\right) ^{\frac{1}{2}}, \label{3.3} \end{equation} where $|D_{x}^{2}v|^{2}\equiv \sum_{|\alpha |=2}|D_{x}^{\alpha }v|^{2}$ and the constant $C$ does not depend on $R$. This inequality is valid for functions $v(x,t)$ that vanish on the boundary $\partial B_{R}$ together with their first derivatives. It is well known from the theory of the Dirichlet problem for the Poisson equation that for such functions \begin{equation} \int\limits_{B_{R}}|D_{x}^{2}v|^{2}dx\leq C\int\limits_{B_{R}}\left( \Delta v\right) ^{2}dx, \label{3.4} \end{equation} where the constant $C$ again does not depend on $R $. Substituting \eqref{3.4} in \eqref{3.3} and using the Cauchy inequality with $\varepsilon $ to estimate the product on the right hand side of \eqref{3.3}, we obtain \begin{equation*} \int\limits_{B_{R}}|\nabla v|^{2}dx\leq \varepsilon \int\limits_{B_{R}}\left( \Delta v\right) ^{2}dx+\frac{C}{\varepsilon }\int\limits_{B_{R}}v^{2}dx. \end{equation*} Integrating this inequality in $t$ from $-R^{2}$ to $R^{2}$, we arrive at the inequality \begin{equation} \int\limits_{Q_{R}}|\nabla v|^{2}dxdt\leq \varepsilon \int\limits_{Q_{R}}\left( \Delta v\right) ^{2}dxdt+\frac{C }{\varepsilon }\int\limits_{Q_{R}}v^{2}dxdt,\quad \varepsilon >0. \label{3.5} \end{equation} We will also use the well-known Hardy inequality in the form (see, for example, \cite{26} or \cite{28}, formula (0.3)) \begin{equation*} \int\limits_{0}^{R}w^{2}dx_{N}\leq 4\int\limits_{0}^{R}x_{N}^{2}\left( \frac{\partial w}{\partial x_{N}}\right) ^{2}dx_{N}. \end{equation*} This inequality is valid for functions $w(x,t)$ such that $w|_{x_{N}=R}=0$. Integrating this inequality in $x^{\prime }\in \left\{ |x^{\prime }|<R\right\} $ and in $t$ from $-R^{2}$ to $R^{2}$, we obtain \begin{equation} \int\limits_{Q_{R}}w^{2}dxdt\leq 4\int\limits_{Q_{R}}x_{N}^{2}\left( \frac{\partial w}{\partial x_{N}}\right) ^{2}dxdt. \label{3.6} \end{equation} Turning to the proof of Lemma \ref{L2.1}, let us agree to denote everywhere below for brevity $C_{q}=C/(q-1)^{a}$, where $a$ is a nonnegative number. The proof of Lemma \ref{L2.1} will be obtained from a collection of local integral estimates with the use of inequalities \eqref{3.5}, \eqref{3.6} and equation \eqref{1.1} together with the boundary condition \eqref{1.3}. In what follows $u(x,t)$ is a fixed function satisfying the conditions of Lemma \ref{L2.1}. Let $q\in (1,3)$, $R>1$, and let $s>0$ be sufficiently large.
Let also $\eta (x,t)$ be a nonnegative function of the class $C^{\infty }(\overline{Q})$ such that \[ 0\leq \eta \leq 1,\quad \eta (x,t)|_{Q_{R}}\equiv 1,\quad \eta (x,t)|_{Q\setminus Q_{qR}}\equiv 0,\quad \] \begin{equation} |D_{x}^{\alpha }D_{t}^{\beta }\eta |\leq \frac{C_{\alpha ,\beta }}{[(q-1)R]^{|\alpha |+2\beta }}\equiv \frac{C_{q}}{R^{|\alpha |+2\beta }}. \label{3.7} \end{equation} Let us agree for brevity to call such a function a cut-off function for the cylinder $Q_{R}$. Consider the function $v(x,t)=u(x,t)\eta ^{s}(x,t)$. Applying inequality \eqref{3.5} to this function, after simple calculations using \eqref{3.7} we obtain \begin{equation*} \int\limits_{Q_{R}}|\nabla u|^{2}dxdt\leq \varepsilon \int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}dxdt+\varepsilon \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}|\nabla u|^{2}dxdt+\left( \frac{C}{\varepsilon }+\frac{C_{q}}{R^{4}}\right) \int\limits_{Q_{qR}}u^{2}dxdt. \end{equation*} Choosing in this estimate $\varepsilon =\varepsilon _{1}R^{2}/C_{q}$, $\varepsilon _{1}\in (0,1)$ and taking into account that $R>1$, we arrive at the inequality \begin{equation*} \int\limits_{Q_{R}}|\nabla u|^{2}dxdt\leq \varepsilon _{1}R^{2}C\int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}dxdt+\varepsilon _{1}\int\limits_{Q_{qR}}|\nabla u|^{2}dxdt+\frac{C_{q}}{\varepsilon _{1}R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt. \end{equation*} Now it follows from the last inequality and Lemma \ref{L3.2} that \begin{equation} \int\limits_{Q_{R}}|\nabla u|^{2}dxdt\leq \varepsilon _{1}R^{2}C\int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}dxdt+\frac{C_{q}}{\varepsilon _{1}R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt. \label{3.8} \end{equation} Let now $\eta (x,t)$ be a cut-off function for the cylinder $Q_{q^{2}R}$ which is identically equal to $1$ on $Q_{qR}$. Denote $w(x,t)=\eta ^{s}(x,t)\Delta u$ and apply inequality \eqref{3.6} to this function in the domain $Q_{q^{2}R}$. After simple calculations using \eqref{3.7}, we obtain \begin{equation*} \int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}dxdt\leq C\int\limits_{Q_{q^{2}R}}x_{N}^{2}\left( \frac{\partial }{\partial x_{N}}\Delta u\right) ^{2}dxdt+\frac{C}{R^{2}}\int\limits_{Q_{q^{2}R}}x_{N}^{2}\left( \Delta u\right) ^{2}dxdt. \end{equation*} As $q\in (1,3)$ is arbitrary, substituting this estimate in \eqref{3.8} and denoting $q^{2}$ again by $q$, we get the inequality \[ \int\limits_{Q_{R}}|\nabla u|^{2}dxdt\leq \varepsilon _{1}R^{2}C\int\limits_{Q_{qR}}x_{N}^{2}\left\vert \nabla \Delta u\right\vert ^{2}dxdt+\varepsilon _{1}C\int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}dxdt+ \] \begin{equation} +\frac{C_{q}}{\varepsilon _{1}R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt. \label{3.9} \end{equation} This is the first of several integral inequalities we need. It is obtained from Sobolev embeddings, and we have not yet used equation \eqref{1.1}. Now our goal is to use the equation to estimate the first and the second terms on the right hand side of \eqref{3.9} and also to obtain an analogous estimate for the time derivative. In doing so we will also use boundary condition \eqref{1.3} and the fact that it follows from \eqref{1.2} that \begin{equation} x_{N}^{2}D_{x}^{\alpha }u|_{x_{N}=0}=0,\quad |\alpha |\leq 3;\quad x_{N}D_{x}^{\alpha }u|_{x_{N}=0}=0,\quad |\alpha |\leq 2.
\label{3.10} \end{equation} Recall also that, according to the conditions of the lemma, the function $u$ is infinitely differentiable with respect to the ``tangent'' variables $x^{\prime }$ and $t$ in combination with derivatives with respect to $x_{N}$ up to the fourth order. Let $\eta (x,t)$ be defined in \eqref{3.7}. Multiply equation \eqref{1.1} by the function $u(x,t)\eta ^{s}(x,t)$ and integrate by parts. Taking into account \eqref{1.3}, \eqref{3.10} and the fact that the support of $\eta (x,t)$ is a compact set, we obtain \begin{equation*} -s\int\limits_{Q_{qR}}u^{2}\eta _{t}\eta ^{s-1}dxdt-\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla u\eta ^{s}dxdt-s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla \eta u\eta ^{s-1}dxdt+ \end{equation*} \begin{equation*} +\beta \int\limits_{Q_{qR}}\left\vert \nabla u\right\vert ^{2}\eta ^{s}dxdt+\beta s\int\limits_{Q_{qR}}\nabla u\nabla \eta u\eta ^{s-1}dxdt=0. \end{equation*} Integrating once more by parts in the terms with the expression $\nabla \Delta u$ and taking all terms without the expressions $\left( \Delta u\right) ^{2}\eta ^{s}$ and $\beta \left\vert \nabla u\right\vert ^{2}\eta ^{s}$ to the right hand side, we get \begin{equation*} \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}\eta ^{s}dxdt+\beta \int\limits_{Q_{qR}}\left\vert \nabla u\right\vert ^{2}\eta ^{s}dxdt=-2\int\limits_{Q_{qR}}x_{N}\Delta u\frac{\partial u}{\partial x_{N}}\eta ^{s}dxdt- \end{equation*} \begin{equation} -s\int\limits_{Q_{qR}}x_{N}^{2}\Delta u\left[ \Delta \eta \eta ^{s-1}+(s-1)(\nabla \eta )^{2}\eta ^{s-2}\right] udxdt-2s\int\limits_{Q_{qR}}x_{N}^{2}\Delta u\nabla u\nabla \eta \eta ^{s-1}dxdt- \label{3.11} \end{equation} \begin{equation*} -2s\int\limits_{Q_{qR}}x_{N}\Delta u\frac{\partial \eta }{\partial x_{N}}u\eta ^{s-1}dxdt+s\int\limits_{Q_{qR}}u^{2}\eta _{t}\eta ^{s-1}dxdt- \end{equation*} \begin{equation*} -\beta s\int\limits_{Q_{qR}}\nabla u\nabla \eta u\eta ^{s-1}dxdt\equiv I_{1}+I_{2}+I_{3}+I_{4}+I_{5}+I_{6}. \end{equation*} We estimate the integrals $I_{1}-I_{6}$ on the right hand side of \eqref{3.11} with the help of the integral H\"{o}lder inequality with the exponents $p=p^{\prime }=2$ and with the help of Cauchy's inequality with $\varepsilon $, taking into account properties \eqref{3.7} of the function $\eta (x,t)$. For example, we have for $I_{4}$ \begin{equation*} |I_{4}|\leq C\int\limits_{Q_{qR}}\left( |x_{N}\Delta u|\eta ^{s/2}\right) \left( u\left\vert \frac{\partial \eta }{\partial x_{N}}\right\vert \eta ^{s/2-1}\right) dxdt\leq \end{equation*} \begin{equation*} \leq C\left( \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}\eta ^{s}dxdt\right) ^{\frac{1}{2}}\left( \int\limits_{Q_{qR}}u^{2}\left\vert \frac{\partial \eta }{\partial x_{N}}\right\vert ^{2}\eta ^{s-2}dxdt\right) ^{\frac{1}{2}}\leq \end{equation*} \begin{equation*} \leq C\left( \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}\eta ^{s}dxdt\right) ^{\frac{1}{2}}\left( \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt\right) ^{\frac{1}{2}}\leq \end{equation*} \begin{equation} \leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}\eta ^{s}dxdt+\frac{C_{q}}{\varepsilon R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt.
\label{3.12} \end{equation} Completely analogous estimates for the other integrals, together with the fact that $x_{N}\leq CR$ on $Q_{qR}$ and with \eqref{3.11}, give \begin{equation*} \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}\eta ^{s}dxdt+\beta \int\limits_{Q_{qR}}\left\vert \nabla u\right\vert ^{2}\eta ^{s}dxdt\leq \end{equation*} \begin{equation*} \leq \varepsilon C\int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}\eta ^{s}dxdt+\frac{C_{q}}{\varepsilon }\int\limits_{Q_{qR}}\left\vert \nabla u\right\vert ^{2}dxdt+\frac{C_{q}}{\varepsilon R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt. \end{equation*} Choose in this estimate $\varepsilon $ such that $\varepsilon C=1/2$ and move the first term on the right hand side to the left. Then, taking into account the properties of $\eta $, we finally obtain \begin{equation} \int\limits_{Q_{R}}x_{N}^{2}\left( \Delta u\right) ^{2}dxdt+\beta \int\limits_{Q_{R}}\left\vert \nabla u\right\vert ^{2}dxdt\leq C_{q}\int\limits_{Q_{qR}}\left\vert \nabla u\right\vert ^{2}dxdt+\frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt. \label{3.12+1} \end{equation} We now turn to the next estimate. Using the same function $\eta $ as before, multiply equation \eqref{1.1} by $\left( \Delta u\right) \eta ^{s}$, integrate over $Q_{qR}$, and integrate by parts in the variables $x$ in the terms without $\beta $. Bearing in mind that $u_{t}|_{x_{N}=0}=0$ in view of boundary condition \eqref{1.3}, we have \begin{equation*} -\int\limits_{Q_{qR}}\nabla u_{t}\nabla u\eta ^{s}dxdt-s\int\limits_{Q_{qR}}u_{t}\nabla u\nabla \eta \eta ^{s-1}dxdt-\int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}\eta ^{s}dxdt- \end{equation*} \begin{equation*} -s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla \eta \Delta u\eta ^{s-1}dxdt-\beta \int\limits_{Q_{qR}}(\Delta u)^{2}\eta ^{s}dxdt=0. \end{equation*} Substituting in the first term $\nabla u_{t}\nabla u=\left[ (\nabla u)^{2}/2\right] _{t}$ and integrating by parts in $t$ in this term, we can represent the last equality as \begin{equation*} \int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}\eta ^{s}dxdt+\beta \int\limits_{Q_{qR}}(\Delta u)^{2}\eta ^{s}dxdt=-s\int\limits_{Q_{qR}}u_{t}\nabla u\nabla \eta \eta ^{s-1}dxdt- \end{equation*} \begin{equation} -s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla \eta \Delta u\eta ^{s-1}dxdt+\frac{s}{2}\int\limits_{Q_{qR}}(\nabla u)^{2}\eta _{t}\eta ^{s-1}dxdt\equiv I_{1}+I_{2}+I_{3}. \label{3.12+2} \end{equation} The integrals $I_{1}-I_{3}$ are estimated as before. In particular, \begin{equation*} |I_{1}|\leq C_{q}\int\limits_{Q_{qR}}|u_{t}|\frac{|\nabla u|}{R}dxdt\leq \varepsilon _{2}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt, \end{equation*} \begin{equation*} |I_{2}|\leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}\eta ^{s}dxdt+\frac{C_{q}}{\varepsilon R^{2}}\int\limits_{Q_{qR}}x_{N}^{2}(\Delta u)^{2}dxdt, \end{equation*} \begin{equation*} |I_{3}|\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt. \end{equation*} Thus, we get from \eqref{3.12+2} \begin{equation*} \int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}\eta ^{s}dxdt+\beta \int\limits_{Q_{qR}}(\Delta u)^{2}\eta ^{s}dxdt\leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}\eta ^{s}dxdt+ \end{equation*} \begin{equation*} +\frac{C_{q}}{\varepsilon R^{2}}\int\limits_{Q_{qR}}x_{N}^{2}(\Delta u)^{2}dxdt+\varepsilon _{2}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt.
\end{equation*} The first term on the right can be moved to the left side with the choice $\varepsilon =1/2$. As for the second term on the right, we use \eqref{3.12+1} with $qR$ instead of $R$ to estimate it. This gives \begin{equation*} \int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}\eta ^{s}dxdt+\beta \int\limits_{Q_{qR}}(\Delta u)^{2}\eta ^{s}dxdt\leq \varepsilon _{2}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+ \end{equation*} \begin{equation*} +\frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}\left\vert \nabla u\right\vert ^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. \end{equation*} Now we the properties of the function $\eta $ ($\eta \equiv 1$ on $Q_{R}$), then we estimate the integrals over $Q_{qR}$ on the right hand side by the same integrals over $Q_{q^{2}R}$ and as $q$ is arbitrary, we denote $q^{2}$ again by $q$. We obtain finally \begin{equation*} \int\limits_{Q_{R}}x_{N}^{2}(\nabla \Delta u)^{2}dxdt+\beta \int\limits_{Q_{R}}(\Delta u)^{2}dxdt\leq \end{equation*} \begin{equation} \leq \varepsilon _{2}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{qR}}u^{2}dxdt. \label{3.15} \end{equation} We now turn to the following estimate. Recall that the function $u$ is infinitely differentiable in the variables $(x^{\prime },t)$ and it satisfies the boundary conditions $u|_{x_{N}=0}=u_{t}|_{x_{N}=0}=0$. Let $\eta $ be the same function as above. Multiply equation \eqref{1.1} by $\Delta u_{t}\eta ^{s}$ and integrate by parts with respect to the space variables in the first and in the second terms. We obtain \begin{equation*} \int\limits_{Q_{qR}}(\nabla u_{t})^{2}\eta ^{s}dxdt+\int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u,\nabla \Delta u_{t})\eta ^{s}dxdt+\beta \int\limits_{Q_{qR}}\Delta u\Delta u_{t}\eta ^{s}dxdt= \end{equation*} \begin{equation*} =-s\int\limits_{Q_{qR}}\nabla u_{t}\nabla \eta u_{t}\eta ^{s-1}dxdt-s\int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u,\nabla \eta )\Delta u_{t}\eta ^{s-1}dxdt. \end{equation*} Integrating now by parts with respect to $t$ in the second and in the third terms on the left and moving the results to the right hand side, we obtain \begin{equation*} \int\limits_{Q_{qR}}(\nabla u_{t})^{2}\eta ^{s}dxdt=-s\int\limits_{Q_{qR}}\nabla u_{t}\nabla \eta u_{t}\eta ^{s-1}dxdt-s\int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u,\nabla \eta )\Delta u_{t}\eta ^{s-1}dxdt+ \end{equation*} \begin{equation} +\frac{1}{2}\int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}\eta _{t}\eta ^{s-1}dxdt+\frac{\beta }{2}\int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}\eta _{t}\eta ^{s-1}dxdt\equiv I_{1}+I_{2}+I_{3}+I_{4}. \label{3.16} \end{equation} The integrals $I_{1}-I_{4}$ are estimated in the same way as before. This gives \begin{equation} |I_{1}|\leq \frac{1}{2}\int\limits_{Q_{qR}}(\nabla u_{t})^{2}\eta ^{s}dxdt+\frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}(u_{t})^{2}dxdt, \label{3.17} \end{equation} \begin{equation} |I_{2}|\leq \varepsilon _{3}\int\limits_{Q_{qR}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+\frac{C_{q}}{\varepsilon _{3}R^{2}}\int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}dxdt, \label{3.18} \end{equation} \begin{equation} |I_{3}|\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}x_{N}^{2}(\nabla \Delta u)^{2}dxdt, \label{3.19} \end{equation} \begin{equation} |I_{4}|\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}dxdt. \label{3.20} \end{equation} Consider the integral $I_{4}$. 
Let $\eta _{q}$ be a function analogous to $\eta $ with $R$ replaced by $qR$. In particular, $\eta _{q}\equiv 1$ on $Q_{qR}$ and $\eta _{q}\equiv 0$ outside $Q_{q^{2}R}$. Using \eqref{3.6} and \eqref{3.7} for $\eta _{q}$, we proceed with the estimate for $I_{4}$ in the following way \begin{equation*} |I_{4}|\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}\left( \Delta u\eta _{q}^{s}\right) ^{2}dxdt\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}\left( \Delta u\eta _{q}^{s}\right) ^{2}dxdt\leq \end{equation*} \begin{equation*} \leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}x_{N}^{2}\left( \nabla \left( \Delta u\eta _{q}^{s}\right) \right) ^{2}dxdt\leq \end{equation*} \begin{equation} \leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}x_{N}^{2}\left( \nabla \Delta u\right) ^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}x_{N}^{2}\left( \Delta u\right) ^{2}dxdt. \label{3.21} \end{equation} We now estimate the right hand side of \eqref{3.16} with the help of the estimates for the integrals $I_{1}-I_{4}$. In doing so we move the first integral in \eqref{3.17} to the left hand side of \eqref{3.16}, and in estimates \eqref{3.18}, \eqref{3.19}, and \eqref{3.21} we estimate the integrals with $x_{N}^{2}\left( \nabla \Delta u\right) ^{2}$ and with $x_{N}^{2}\left( \Delta u\right) ^{2}$ by means of relations \eqref{3.15} and \eqref{3.12+1} respectively. We obtain \begin{equation*} \int\limits_{Q_{qR}}(\nabla u_{t})^{2}\eta ^{s}dxdt\leq \varepsilon _{3}\int\limits_{Q_{q^{3}R}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation*} +\frac{C_{q}}{\varepsilon _{3}R^{2}}\int\limits_{Q_{q^{3}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}\varepsilon _{3}R^{4}}\int\limits_{Q_{q^{3}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{6}\varepsilon _{3}}\int\limits_{Q_{q^{3}R}}u^{2}dxdt. \end{equation*} Or, in view of the properties of $\eta $ and since $q$ is arbitrary, we finally obtain \begin{equation} \int\limits_{Q_{R}}(\nabla u_{t})^{2}\eta ^{s}dxdt\leq \varepsilon _{3}\int\limits_{Q_{qR}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \label{3.22} \end{equation} \begin{equation*} +\frac{C_{q}}{\varepsilon _{3}R^{2}}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}\varepsilon _{3}R^{4}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{6}\varepsilon _{3}}\int\limits_{Q_{qR}}u^{2}dxdt. \end{equation*} Let us proceed. Let $\eta $ again be the same function as before. Multiply equation \eqref{1.1} by $u_{t}\eta ^{s}$ and integrate by parts with respect to the $x$-variables: \begin{equation*} \int\limits_{Q_{qR}}u_{t}^{2}\eta ^{s}dxdt-\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla u_{t}\eta ^{s}dxdt-s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla \eta u_{t}\eta ^{s-1}dxdt+ \end{equation*} \begin{equation*} +\beta \int\limits_{Q_{qR}}\nabla u\nabla u_{t}\eta ^{s}dxdt+\beta s\int\limits_{Q_{qR}}\nabla u\nabla \eta u_{t}\eta ^{s-1}dxdt=0.
\end{equation*} Integrating once again by parts in the second term with respect to the $x$-variables, we can represent this equality in the form \begin{equation*} \int\limits_{Q_{qR}}u_{t}^{2}\eta ^{s}dxdt+\int\limits_{Q_{qR}}x_{N}^{2}\Delta u\Delta u_{t}\eta ^{s}dxdt+\beta \int\limits_{Q_{qR}}\nabla u\nabla u_{t}\eta ^{s}dxdt= \end{equation*} \begin{equation*} =s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla \eta u_{t}\eta ^{s-1}dxdt-\beta s\int\limits_{Q_{qR}}\nabla u\nabla \eta u_{t}\eta ^{s-1}dxdt-2\int\limits_{Q_{qR}}x_{N}\Delta u\frac{\partial u_{t}}{\partial x_{N}}\eta ^{s}dxdt- \end{equation*} \begin{equation*} -s\int\limits_{Q_{qR}}x_{N}^{2}\Delta u\nabla u_{t}\nabla \eta \eta ^{s-1}dxdt. \end{equation*} Integrating by parts with respect to the $t$-variable in the second and in the third terms on the left and moving the results to the right, we obtain \begin{equation*} \int\limits_{Q_{qR}}u_{t}^{2}\eta ^{s}dxdt=s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla \eta u_{t}\eta ^{s-1}dxdt-s\int\limits_{Q_{qR}}x_{N}^{2}\Delta u\nabla u_{t}\nabla \eta \eta ^{s-1}dxdt- \end{equation*} \begin{equation*} -2\int\limits_{Q_{qR}}x_{N}\Delta u\frac{\partial u_{t}}{\partial x_{N}}\eta ^{s}dxdt+\frac{1}{2}\int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}\eta _{t}\eta ^{s-1}dxdt-\beta s\int\limits_{Q_{qR}}\nabla u\nabla \eta u_{t}\eta ^{s-1}dxdt+ \end{equation*} \begin{equation} +\frac{\beta }{2}\int\limits_{Q_{qR}}\left( \nabla u\right) ^{2}\eta _{t}\eta ^{s-1}dxdt\equiv I_{1}+I_{2}+I_{3}+I_{4}+I_{5}+I_{6}. \label{3.23} \end{equation} We estimate the integrals $I_{1}-I_{6}$ in the same way as before. We have: \begin{equation*} |I_{1}|\leq \varepsilon \int\limits_{Q_{qR}}u_{t}^{2}\eta ^{s}dxdt+\frac{C_{q}}{\varepsilon }\int\limits_{Q_{qR}}x_{N}^{2}\left( \nabla \Delta u\right) ^{2}dxdt, \end{equation*} where we took into account that $x_{N}\leq CR$ on $Q_{qR}$. Choosing, for example, $\varepsilon =\frac{1}{10}$ and estimating the second integral by inequality \eqref{3.15}, we obtain \begin{equation} |I_{1}|\leq \frac{1}{10}\int\limits_{Q_{qR}}u_{t}^{2}\eta ^{s}dxdt+\varepsilon _{2}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. \label{3.24} \end{equation} Further, for $I_{2}$, taking again into account that $x_{N}\leq CR$ on $Q_{qR}$, we have \begin{equation*} |I_{2}|\leq \theta \int\limits_{Q_{qR}}\left( \nabla u_{t}\right) ^{2}dxdt+\frac{C_{q}}{\theta }\int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}dxdt.
\end{equation*} Choosing here $\theta =\varepsilon _{4}R^{2}$ and using \eqref{3.22}, \eqref{3.12+1}, we obtain \begin{equation*} |I_{2}|\leq \frac{\varepsilon _{4}C_{q}}{\varepsilon _{3}}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\varepsilon _{3}\varepsilon _{4}R^{2}\int\limits_{Q_{q^{2}R}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation*} +\left( \frac{\varepsilon _{4}}{\varepsilon _{2}\varepsilon _{3}}+\frac{1}{\varepsilon _{4}}\right) \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\left( \frac{\varepsilon _{4}}{\varepsilon _{3}}+\frac{1}{\varepsilon _{4}}\right) \frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt, \end{equation*} or, choosing $\varepsilon _{4}=\varepsilon _{5}\frac{\varepsilon _{3}}{C_{q}}$, \begin{equation*} |I_{2}|\leq \varepsilon _{5}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\varepsilon _{5}\varepsilon _{3}^{2}C_{q}R^{2}\int\limits_{Q_{q^{2}R}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation} +\left( \frac{\varepsilon _{5}}{\varepsilon _{2}}+\frac{1}{\varepsilon _{5}\varepsilon _{3}}\right) \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\left( \varepsilon _{5}+\frac{1}{\varepsilon _{5}\varepsilon _{3}}\right) \frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. \label{3.25} \end{equation} The integral $I_{3}$ is estimated completely analogously to $I_{2}$, which gives \begin{equation*} |I_{3}|\leq \varepsilon _{5}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\varepsilon _{5}\varepsilon _{3}^{2}C_{q}R^{2}\int\limits_{Q_{q^{2}R}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation} +\left( \frac{\varepsilon _{5}}{\varepsilon _{2}}+\frac{1}{\varepsilon _{5}\varepsilon _{3}}\right) \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\left( \varepsilon _{5}+\frac{1}{\varepsilon _{5}\varepsilon _{3}}\right) \frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. \label{3.26} \end{equation} For $I_{4}$, using again \eqref{3.12+1}, we have \begin{equation*} |I_{4}|\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u\right) ^{2}dxdt\leq \end{equation*} \begin{equation} \leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. \label{3.27} \end{equation} Further, \begin{equation} |I_{5}|\leq \frac{1}{10}\int\limits_{Q_{qR}}u_{t}^{2}\eta ^{s}dxdt+\frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt, \label{3.28} \end{equation} \begin{equation} |I_{6}|\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt. \label{3.29} \end{equation} We use relations \eqref{3.24}- \eqref{3.29} to estimate the right hand side of \eqref{3.23} and move the terms with $u_{t}^{2}\eta ^{s}$ to the left. As $q$ is arbitrary, taking into account the properties of $\eta $, we obtain \begin{equation*} \int\limits_{Q_{R}}u_{t}^{2}dxdt\leq \varepsilon _{5}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\varepsilon _{5}\varepsilon _{3}^{2}C_{q}R^{2}\int\limits_{Q_{qR}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation*} +\left( \frac{\varepsilon _{5}}{\varepsilon _{2}}+\frac{1}{\varepsilon _{5}\varepsilon _{3}}\right) \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\left( \varepsilon _{5}+\frac{1}{\varepsilon _{5}\varepsilon _{3}}\right) \frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. 
\end{equation*} Choosing here $\varepsilon _{5}<1$ and using Lemma \ref{L3.2}, we arrive at the estimate \begin{equation*} \int\limits_{Q_{R}}u_{t}^{2}dxdt\leq \varepsilon _{3}^{2}C_{q}R^{2}\int\limits_{Q_{qR}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation} +\frac{1}{\varepsilon _{2}\varepsilon _{3}}\frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+\left( 1+\frac{1}{\varepsilon _{3}}\right) \frac{C_{q}}{R^{4}}\int\limits_{Q_{qR}}u^{2}dxdt. \label{3.30} \end{equation} To obtain one more integral inequality, multiply equation \eqref{1.1} by $\left[ \nabla \left( x_{N}^{2}\nabla \Delta u\right) _{t}\right]\eta ^{s}$, integrate, and represent the result as \begin{equation*} \int\limits_{Q_{qR}}u_{t}\nabla \left( x_{N}^{2}\nabla \Delta u_{t}\right) \eta ^{s}dxdt+\frac{1}{2}\int\limits_{Q_{qR}}\left( \left[ \nabla \left( x_{N}^{2}\nabla \Delta u\right) \right] ^{2}\right) _{t}\eta ^{s}dxdt- \end{equation*} \[ -\beta \int\limits_{Q_{qR}}\nabla \left( x_{N}^{2}\nabla \Delta u\right) _{t}\Delta u\eta ^{s}dxdt=0. \] Integrating by parts with respect to the $x$-variables in the first and in the third integrals and integrating by parts with respect to the $t$-variable in the second integral, we obtain \begin{equation*} -\int\limits_{Q_{qR}}x_{N}^{2}\nabla u_{t}\nabla \Delta u_{t}\eta ^{s}dxdt+\frac{\beta }{2}\int\limits_{Q_{qR}}x_{N}^{2}\left( \nabla \Delta u\right) _{t}^{2}\eta ^{s}dxdt= \end{equation*} \begin{equation*} =s\int\limits_{Q_{qR}}x_{N}^{2}u_{t}\nabla \Delta u_{t}\nabla \eta \eta ^{s-1}dxdt+\frac{s}{2}\int\limits_{Q_{qR}}\left[ \nabla \left( x_{N}^{2}\nabla \Delta u\right) \right] ^{2}\eta _{t}\eta ^{s-1}dxdt- \end{equation*} \[ -\beta s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u_{t}\nabla \eta \eta ^{s-1}\Delta udxdt. \] We transform the resulting equation as follows. First we integrate by parts with respect to the $x$-variables in the first term on the left and in the first term on the right. The integral over the surface $\left\{ x_{N}=0\right\} $ vanishes in view of condition \eqref{1.2}, and we recall that this condition is valid under the assumptions of the lemma for the function $u$ and for its derivative $u_{t}$. Besides, we integrate by parts with respect to the $t$-variable in the second term on the left and in the last term on the right. Finally, using equation \eqref{1.1}, we just replace in the second term on the right $\nabla \left( x_{N}^{2}\nabla \Delta u\right) =-u_{t}+\beta \Delta u$.
As a result we get the equality \begin{equation*} \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}\eta ^{s}dxdt=-2\int\limits_{Q_{qR}}x_{N}\left( u_{t}\right) _{x_{N}}\Delta u_{t}\eta ^{s}dxdt-s\int\limits_{Q_{qR}}x_{N}^{2}\nabla u_{t}\Delta u_{t}\nabla \eta \eta ^{s-1}dxdt+ \end{equation*} \begin{equation*} +\frac{\beta s}{2}\int\limits_{Q_{qR}}x_{N}^{2}\left( \nabla \Delta u\right) ^{2}\eta _{t}\eta ^{s-1}dxdt-2s\int\limits_{Q_{qR}}x_{N}u_{t}\Delta u_{t}\eta _{x_{N}}\eta ^{s-1}dxdt-s\int\limits_{Q_{qR}}x_{N}^{2}\nabla u_{t}\Delta u_{t}\nabla \eta \eta ^{s-1}dxdt- \end{equation*} \begin{equation*} -s\int\limits_{Q_{qR}}x_{N}^{2}u_{t}\Delta u_{t}\nabla \left( \nabla \eta \eta ^{s-1}\right) dxdt+\frac{s}{2}\int\limits_{Q_{qR}}\left[ -u_{t}+\beta \Delta u\right] ^{2}\eta _{t}\eta ^{s-1}dxdt+ \end{equation*} \begin{equation} +\beta s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\left( \nabla \eta \eta ^{s-1}\right) _{t}\Delta udxdt+\beta s\int\limits_{Q_{qR}}x_{N}^{2}\nabla \Delta u\nabla \eta \eta ^{s-1}\Delta u_{t}dxdt\equiv \sum\limits_{k=1}^{9}I_{k}. \label{3.31} \end{equation} The integrals $I_{k}$ are estimated according to the same scheme as above. We have \begin{equation} \left\vert I_{1}\right\vert +\left\vert I_{2}\right\vert +\left\vert I_{5}\right\vert \leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}\eta ^{s}dxdt+\frac{C}{\varepsilon }\int\limits_{Q_{qR}}\left( \nabla u_{t}\right) ^{2}dxdt\leq \label{3.32} \end{equation} \begin{equation*} \leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}\eta ^{s}dxdt+\varepsilon _{3}\frac{C_{q}}{\varepsilon }\int\limits_{Q_{q^{2}R}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation*} +\frac{C_{q}}{\varepsilon \varepsilon _{3}R^{2}}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon \varepsilon _{2}\varepsilon _{3}R^{4}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{\varepsilon R^{6}\varepsilon _{3}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt, \end{equation*} where we made use of \eqref{3.22}. Further, \begin{equation*} \left\vert I_{3}\right\vert \leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}x_{N}^{2}\left( \nabla \Delta u\right) ^{2}dxdt\leq \end{equation*} \begin{equation} \leq \frac{\varepsilon _{2}C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{4}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{6}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt, \label{3.33} \end{equation} where we took into account estimate \eqref{3.15}. Further, \begin{equation} \left\vert I_{4}\right\vert +\left\vert I_{6}\right\vert \leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}\eta ^{s}dxdt+\frac{C_{q}}{\varepsilon R^{2}}\int\limits_{Q_{qR}}u_{t}^{2}dxdt, \label{3.34} \end{equation} \begin{equation*} \left\vert I_{7}\right\vert \leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\beta \frac{C_{q}}{R^{2}}\left( \beta \int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}dxdt\right) \leq \end{equation*} \begin{equation} \leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{4}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{6}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt, \label{3.35} \end{equation} where we made use of \eqref{3.15}.
For the next integral we have \begin{equation*} \left\vert I_{8}\right\vert \leq \frac{C_{q}}{R^{2}}\left( \int\limits_{Q_{qR}}x_{N}^{2}\left( \nabla \Delta u\right) ^{2}dxdt+\beta \int\limits_{Q_{qR}}\left( \Delta u\right) ^{2}dxdt\right) \leq \end{equation*} \begin{equation} \leq \frac{\varepsilon _{2}C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{4}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{6}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt, \label{3.36} \end{equation} where we again made use of \eqref{3.15}. Further, as before \begin{equation*} \left\vert I_{9}\right\vert \leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}\eta ^{s}dxdt+\frac{C_{q}}{\varepsilon R^{2}}\int\limits_{Q_{qR}}x_{N}^{2}\left( \nabla \Delta u\right) ^{2}dxdt\leq \end{equation*} \begin{equation*} \leq \varepsilon \int\limits_{Q_{qR}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}\eta ^{s}dxdt+ \end{equation*} \begin{equation} +\frac{\varepsilon _{2}C_{q}}{\varepsilon R^{2}}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}\varepsilon R^{4}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{\varepsilon R^{6}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. \label{3.37} \end{equation} The estimate for the left hand side of \eqref{3.31} follows from estimates \eqref{3.32}--\eqref{3.37}. We substitute these estimates in \eqref{3.31}, then we choose sufficiently small $\varepsilon $ in \eqref{3.32}, \eqref{3.34}, and \eqref{3.37} and move the corresponding integrals with $x_{N}^{2}\left( \Delta u_{t}\right) ^{2}\eta ^{s}$ to the left hand side of \eqref{3.31}. We also estimate the integrals over $Q_{qR}$ on the right hand side of \eqref{3.31} by the same integrals over $Q_{q^{2}R}$ and we denote $q^{2}$ again by $q$ (as $q$ is arbitrary). As a result, we obtain the estimate \begin{equation*} \int\limits_{Q_{R}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}dxdt\leq \varepsilon _{3}C_{q}\int\limits_{Q_{qR}}x_{N}^{2}(\Delta u_{t})^{2}dxdt+ \end{equation*} \begin{equation*} +\frac{(1+\varepsilon _{2})C_{q}}{\varepsilon _{3}R^{2}}\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{3}\varepsilon _{2}R^{4}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+\frac{C_{q}}{\varepsilon _{3}R^{6}}\int\limits_{Q_{qR}}u^{2}dxdt. \end{equation*} Choosing here and in \eqref{3.30} $\varepsilon _{3}=\theta C_{q}^{-1}$, $\theta \in (0,1/2)$, and making use of Lemma \ref{L3.2}, we arrive at the estimate \begin{equation*} \int\limits_{Q_{R}}x_{N}^{2}\left( \Delta u_{t}\right) ^{2}dxdt\leq \end{equation*} \begin{equation} \leq \frac{(1+\varepsilon _{2})C_{q}}{R^{2}\theta }\int\limits_{Q_{qR}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{4}\theta }\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{6}\theta }\int\limits_{Q_{qR}}u^{2}dxdt. \label{3.38} \end{equation} Now we will combine our estimates to prove \eqref{2.1}.
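Before carrying this out, we indicate the route schematically (this outline is for orientation only; the detailed computation follows):
\begin{equation*}
\bigl\{\eqref{3.30},\,\eqref{3.38}\bigr\}\ \Longrightarrow\ \eqref{3.39},\qquad \bigl\{\eqref{3.9},\,\eqref{3.12+1},\,\eqref{3.15},\,\eqref{3.39}\bigr\}\ \Longrightarrow\ \text{the first inequality in \eqref{2.1}},
\end{equation*}
where each implication also uses Lemma \ref{L3.2}; the second inequality in \eqref{2.1} then follows directly from \eqref{3.39}.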
Under our choice of $\varepsilon _{3}=\theta C_{q}^{-1}$ it follows from \eqref{3.30} and \eqref{3.38} that for $\varepsilon _{2}<1$ \begin{equation*} \int\limits_{Q_{R}}u_{t}^{2}dxdt\leq \frac{\theta ^{2}}{C_{q}}R^{2}\left( \frac{(1+\varepsilon _{2})C_{q}}{R^{2}\theta }\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}C}{\varepsilon _{2}R^{4}\theta }\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{6}\theta }\int\limits_{Q_{qR}}u^{2}dxdt\right) + \end{equation*} \begin{equation*} +\frac{1}{\varepsilon _{2}\theta }\frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+\left( 1+\frac{1}{\theta }\right) \frac{C_{q}}{R^{4}}\int\limits_{Q_{qR}}u^{2}dxdt\leq \end{equation*} \begin{equation*} \leq \theta \int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}\theta R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{\theta R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt. \end{equation*} Denoting here $q^{2}$ again by $q$ and applying Lemma \ref{L3.2}, we get the estimate \begin{equation} \int\limits_{Q_{R}}u_{t}^{2}dxdt\leq \frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{qR}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{qR}}u^{2}dxdt. \label{3.39} \end{equation} Now substitute estimates \eqref{3.12+1} and \eqref{3.15} in estimate \eqref{3.9} and take into account \eqref{3.39}. We obtain \begin{equation*} \int\limits_{Q_{R}}|\nabla u|^{2}dxdt\leq \end{equation*} \begin{equation*} \leq \varepsilon _{1}R^{2}\left( \varepsilon _{2}\int\limits_{Q_{q^{2}R}}u_{t}^{2}dxdt+\frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt\right) + \end{equation*} \begin{equation*} +\varepsilon _{1}\left( C_{q}\int\limits_{Q_{q^{2}R}}\left\vert \nabla u\right\vert ^{2}dxdt+\frac{C_{q}}{R^{2}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt\right) +\frac{C_{q}}{\varepsilon _{1}R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt\leq \end{equation*} \begin{equation*} \leq \varepsilon _{1}R^{2}\varepsilon _{2}\left( \frac{C_{q}}{\varepsilon _{2}R^{2}}\int\limits_{Q_{q^{3}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{R^{4}}\int\limits_{Q_{q^{3}R}}u^{2}dxdt\right) + \end{equation*} \begin{equation*} +\frac{\varepsilon _{1}C_{q}}{\varepsilon _{2}}\int\limits_{Q_{q^{2}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{\varepsilon _{1}R^{2}}\int\limits_{Q_{q^{2}R}}u^{2}dxdt\leq \end{equation*} \begin{equation*} \leq \frac{\varepsilon _{1}C_{q}}{\varepsilon _{2}}\int\limits_{Q_{q^{3}R}}(\nabla u)^{2}dxdt+\frac{C_{q}}{\varepsilon _{1}R^{2}}\int\limits_{Q_{q^{3}R}}u^{2}dxdt. \end{equation*} Denoting here $q^{3}$ again by $q$, choosing $\varepsilon _{1}=\varepsilon \varepsilon _{2}C_{q}^{-1}$, and applying Lemma \ref{L3.2}, we finally obtain \begin{equation*} \int\limits_{Q_{R}}|\nabla u|^{2}dxdt\leq \frac{C_{q}}{R^{2}}\int\limits_{Q_{qR}}u^{2}dxdt, \end{equation*} which is exactly the first of the inequalities \eqref{2.1}. The second of these inequalities now follows from \eqref{3.39}. Thus, Lemma \ref{L2.1} is proved, and this also completes the proof of Theorem \ref{T1.1}. \end{document}
\begin{document} \title{Hardness of Finding Independent Sets in $2$-Colorable \\ and Almost $2$-Colorable Hypergraphs} \author{Subhash Khot\thanks{Department of Computer Science, University of Chicago, USA. email: \texttt{[email protected]}} \and Rishi Saket\thanks{IBM T. J. Watson Research Center, USA. email: \texttt{[email protected]} }} \maketitle \thispagestyle{empty} \setcounter{page}{0} \begin{abstract} This work studies the hardness of finding independent sets in hypergraphs which are either $2$-colorable or are \emph{almost} $2$-colorable, i.e. can be $2$-colored after removing a small fraction of vertices and the incident hyperedges. To be precise, say that a hypergraph is $(1-\varepsilon)$-almost $2$-colorable if removing an $\varepsilon$ fraction of its vertices and all hyperedges incident on them makes the remaining hypergraph $2$-colorable. In particular we prove the following results. \begin{itemize} \item For an arbitrarily small constant $\gamma > 0$, there is a constant $\xi > 0$ such that, given a $4$-uniform hypergraph on $n$ vertices which is $(1 - \varepsilon)$-almost $2$-colorable for $\varepsilon = 2^{-(\log n)^\xi}$, it is quasi-NP-hard\footnote{A problem is quasi-NP-hard if it admits an $n^{\ensuremath{\textnormal{poly}}\xspace(\log n)}$ time reduction from $3$SAT.} to find an independent set of $\slfrac{n}{\left(2^{(\log n)^{1-\gamma}}\right)}$ vertices. \item For any constants $\varepsilon, \delta > 0$, given as input a $3$-uniform hypergraph on $n$ vertices which is $(1-\varepsilon)$-almost $2$-colorable, it is NP-hard to find an independent set of $\delta n$ vertices. \item Assuming the \emph{$d$-to-$1$ Games Conjecture} the following holds. For any constant $\delta > 0$, given a $2$-colorable $3$-uniform hypergraph on $n$ vertices, it is NP-hard to find an independent set of $\delta n$ vertices. \end{itemize} The hardness result on independent set in almost $2$-colorable $3$-uniform hypergraphs was earlier known only assuming the Unique Games Conjecture. In this work we prove the result \emph{unconditionally}, combining Fourier analytic techniques with the Multi-Layered PCP of \cite{DGKR03}. For independent sets in $2$-colorable $3$-uniform hypergraphs we prove the first strong hardness result, albeit assuming the $d$-to-$1$ Games Conjecture. Our reduction uses the $d$-to-$1$ Game as a starting point to construct a Multi-Layered PCP with the \emph{smoothness} property. We use analytical techniques based on the Invariance Principle of Mossel~\cite{Mossel}. The smoothness property is crucially exploited in a manner similar to recent work of H\aa stad~\cite{Hastad12} and Wenner~\cite{Wenner12}. Our result on almost $2$-colorable $4$-uniform hypergraphs gives the first nearly polynomial hardness factor for independent set in hypergraphs which are (almost) colorable with constantly many colors. It partially bridges the gap between the previous best lower bound of $\ensuremath{\textnormal{poly}}\xspace(\log n)$ and the algorithmic upper bounds of $n^{\Omega(1)}$. This also exhibits a bottleneck to improving the algorithmic techniques for hypergraph coloring. \end{abstract} \section{Introduction} A $k$-uniform hypergraph consists of a set of vertices and a set of hyperedges, where each hyperedge is a subset of exactly $k$ vertices. For $k=2$ this defines the usual notion of a graph. An \emph{independent set} in a $k$-uniform hypergraph is a subset of vertices such that no hyperedge has all of its $k$ vertices from this subset.
In other words, an independent set does not contain any hyperedge. The problem of finding independent sets of maximum size in (hyper)graphs is a fundamental one in combinatorial optimization. Note that the complement of an independent set is a \emph{vertex cover}, i.e. a subset of vertices that contains at least one vertex from each hyperedge. Thus, finding a maximum sized independent set is same as finding a minimum vertex cover, an equally important problem in combinatorics. Throughout this paper, we shall frequently use the size of a set of vertices to mean its \emph{relative} size, i.e. as a fraction of the total weight of the vertices. The study of independent sets is closely related to that of hypergraph coloring. A hypergraph is $q$-colorable if its vertices can each be assigned one of $q$ distinct colors so that no hyperedge is monochromatic. The problem in hypergraph coloring is to determine the minimum possible value of $q$, which is known as the \emph{chromatic number} of the hypergraph. Note that the color classes in a $q$-coloring form a partition of the vertices into $q$ disjoint independent sets. Thus, a $q$-colorable hypergraph has an independent set of size at least $\slfrac{1}{q}$. On the other hand, if a hypergraph does not have an independent set of size $\slfrac{1}{q}$ then it is not $q$-colorable either. Thus, the absence of large independent sets implies a large chromatic number. This connection can also be studied with a relaxed notion of hypergraph coloring. Say that a hypergraph is \emph{ almost $q$-colorable} if there is a subset of vertices of size at most $\varepsilon$ such that removing this subset and all hyperedges containing a vertex from this subset makes the hypergraph $q$-colorable. Here $\varepsilon$ can be an arbitrarily small positive constant. It is easy to see that an almost $q$-colorable hypergraph contains $q$ pairwise disjoint independent sets containing within them at least $(1-\varepsilon)$ fraction of vertices. Thus, there is at least one independent set of size $\slfrac{(1-\varepsilon)}{q}$. The problem of finding independent sets in (almost) $q$-colorable $k$-uniform hypergraphs is most interesting for small values of $q$ and $k$ and has been studied extensively from the complexity perspective in a sequence of works including \cite{GHS, Khot-color,Holmerin,Khot-3,DRS,BK09,BK10,GSinop,KS12,Chan13}. For constant $q$ and $k$, the strongest hardness result in terms of the relative size of the independent set is by Khot~\cite{Khot-color} who showed the hardness of finding independent sets of size $(\log n)^{-c}$ in $5$-colorable $4$-uniform hypergraphs on $n$ vertices. On the other hand, the best algorithms for these problems yield independent sets of size $n^{-\Omega(1)}$. In this work we focus on the case of (almost) $2$-colorable $3$-uniform and $4$-uniform hypergraphs. The motivation for our first result stems from the gap between the algorithmic and complexity results mentioned above. We prove the following. \begin{theorem}\label{thm-main1} For any arbitrarily small constant $\gamma > 0$, there is a constant $\xi > 0$ such that given a $4$-uniform hypergraph $G(V, E)$ on $n$ vertices such that removing $2^{-(\log n)^\xi}$ fraction of vertices and all hyperedges incident on them makes the remaining hypergraph $2$-colorable, it is quasi-NP-hard to find an independent set in $G$ of $\slfrac{n}{\left(2^{(\log n)^{1-\gamma}}\right)}$ vertices. 
\end{theorem} This is the first result showing an almost polynomial factor hardness for independent set in (almost) $q$-colorable $k$-uniform hypergraphs. While existing algorithms are for the case of exact colorability, they rely on the presence of a small number of pairwise disjoint independent sets covering almost all the vertices, and are also applicable to the case of almost colorability. Thus, the above result indicates a bottleneck in the improvement of existing algorithms. The hardness factor obtained is exponentially stronger than the previous lower bound of $\ensuremath{\textnormal{poly}}\xspace(\log n)$ by Khot~\cite{Khot-color}, albeit for the case of exact colorability. Our next result is an analogue of the result of Bansal and Khot~\cite{BK09,BK10} who showed, assuming the Unique Games Conjecture (UGC), that it is NP-hard to find an independent set of $\delta$ fraction of vertices (for any constant $\delta > 0$) in an almost $2$-colorable graph (i.e. almost bipartite graph). The related work of Guruswami and Sinop~\cite{GSinop} showed a similar result for almost $2$-colorable $3$-uniform hypergraphs (with the hardness factor depending on the degree), assuming UGC. We show that it is possible to prove the result for $3$-uniform hypergraphs \emph{without} assuming UGC. \begin{theorem}\label{thm-main2} For any constants $\varepsilon, \delta > 0$, given a $3$-uniform hypergraph on $n$ vertices such that removing at most $\varepsilon$ fraction of vertices and the hyperedges incident on them makes the remaining hypergraph $2$-colorable, it is NP-hard to find an independent set of $\delta n$ vertices. \end{theorem} The instances constructed in the Theorems \ref{thm-main1} and \ref{thm-main2} are degree regular, and thus also work for an alternate definition of almost colorability -- which involves removing $\varepsilon$ fraction of the hyperedges instead of vertices -- used in \cite{GSinop}. Our final result proves the first strong hardness factor for finding independent sets in $2$-colorable $3$-uniform hypergraphs, assuming the \emph{$d$-to-$1$ Games Conjecture} of Khot~\cite{Khot02}. \begin{theorem}\label{thm-main3} Assuming the \emph{$d$-to-$1$ Games Conjecture} the following holds. For any constant $\delta > 0$, given a $2$-colorable $3$-uniform hypergraph on $n$ vertices, it is NP-hard to find an independent set of $\delta n$ vertices. \end{theorem} We note that Dinur, Regev and Smyth~\cite{DRS} showed that $2$-colorable $3$-uniform hypergraphs are NP-hard to color with constantly many colors. However, their reduction produced instances with linear sized independent sets in the NO Case, and thus did not yield any hardness for finding independent sets in such hypergraphs. Our result therefore proves a stronger property, albeit assuming the conjecture. In the remainder of this section we shall formally state the problems we study in this work, give an overview of previous related work and describe the techniques used in our results. \subsection{Problem Definition} Given a hypergraph $G$, let ${\sf IS}(G)$ be the size of the maximum independent set in $G$ and let $\chi(G)$ be its chromatic number, i.e. the minimum number of colors required to color the hypergraph such that every hyperedge is non-monochromatic. We define the problem of finding independent sets in $q$-colorable hypergraphs as follows. \noindent {\bf {\sc ISColor}$(k, q, Q)$} : Given a $k$-uniform hypergraph $G(V, E)$, decide between, \begin{itemize} \item YES Case: $\chi(G) \leq q$. 
\item NO Case: ${\sf IS}(G) < \frac{|V|}{Q}$. \end{itemize} It is easy to see that if {\sc ISColor}$(k, q, Q)$ is NP-hard for some parameters $q, Q \in \mathbb{Z}^+$ then it is NP-hard to color a $q$-colorable $k$-uniform hypergraph with $Q$ colors. In this paper we also study a slight variant of this problem, in which the goal is to find independent sets in almost colorable hypergraphs. For parameters $k, q, Q$, and a parameter $\varepsilon >0$ it is defined as follows. \noindent {\bf {\sc ISAlmostColor}$_\varepsilon(k, q, Q)$}: Given a $k$-uniform hypergraph $G(V, E)$, decide between, \begin{itemize} \item YES Case: There is a subset containing a $(1-\varepsilon)$ fraction of the vertices such that the $k$-uniform hypergraph $G'$ on this subset of vertices, containing the hyperedges which lie completely inside it, satisfies $\chi(G') \leq q$. We also denote this by $\chi_\varepsilon(G) \leq q$. \item NO Case: ${\sf IS}(G) < \frac{|V|}{Q}$. \end{itemize} Note that the second property above, i.e. ${\sf IS}(G) < \frac{|V|}{Q}$, implies that $\chi_\varepsilon(G) \geq Q-1$ for sufficiently small $\varepsilon > 0$. Using the above definitions the results of this paper can be concisely restated as follows. The number of vertices in the hypergraph is denoted by $n$. \subsubsection*{Our Results} \begin{thm2}(Theorem \ref{thm-main1}) For an arbitrarily small constant $\gamma > 0$, there is a constant $\xi > 0$ such that {\sc ISAlmostColor}$_\varepsilon(4, 2, Q)$ is quasi-NP-hard, where $\varepsilon = 2^{-(\log n)^\xi}$ and $Q = 2^{(\log n)^{1-\gamma}}$. \end{thm2} \begin{thm2} (Theorem \ref{thm-main2}) For any constant $Q > 0$ and arbitrarily small constant $\varepsilon > 0$,\\ {\sc ISAlmostColor}$_\varepsilon(3, 2, Q)$ is NP-hard. \end{thm2} \begin{thm2} (Theorem \ref{thm-main3}) Assuming the $d$-to-$1$ Games Conjecture the following holds. For any constant $Q > 0$, {\sc ISColor}$(3, 2, Q)$ is NP-hard. \end{thm2} \subsection{Previous Work} The problem of finding independent sets in (almost) colorable graphs and hypergraphs has been studied extensively from algorithmic as well as complexity perspectives. On $2$-colorable, i.e. bipartite, graphs the maximum independent set can be computed in polynomial time. On the other hand, a significant body of work -- including \cite{Wigderson}, \cite{Blum}, \cite{KMS}, \cite{BK}, \cite{ACC}, and \cite{KT12} -- has shown that a $3$-colorable graph can be efficiently colored with $n^\alpha$ colors, where the currently best value of $\alpha \approx 0.2038$ was shown in \cite{KT12}. In particular, this shows that {\sc ISColor}$(2, 3, n^{\alpha})$ can be efficiently solved. For $2$-colorable $3$-uniform hypergraphs Krivelevich et al.~\cite{KNS} gave a coloring algorithm using $O(n^{1/5})$ colors, thus solving {\sc ISColor}$(3, 2, O(n^{1/5}))$. Chen and Frieze~\cite{CF} and Kelsen, Mahajan and Ramesh~\cite{KMR} independently gave algorithms for coloring $2$-colorable $4$-uniform hypergraphs using $O(n^{3/4})$ colors, which implies an algorithm for {\sc ISColor}$(4, 2, O(n^{3/4}))$. In related work Chlamtac and Singh~\cite{CS} gave an algorithm that, on a $3$-uniform hypergraph which has an independent set of $\gamma n$ vertices, efficiently computes an independent set of $n^{\Omega(\gamma^2)}$ vertices. While the algorithmic approaches have studied the case of exactly colorable hypergraphs, they rely on the existence of disjoint independent sets and are also applicable to almost colorable hypergraphs.
Several hardness results for these problems have been obtained using either the PCP Theorem or well known conjectures as the starting point. Under standard complexity assumptions, Khot \cite{Khot-color} showed the hardness of finding independent sets of size $(\log n)^{-c}$ in $5$-colorable $4$-uniform hypergraphs on $n$ vertices. Building upon similar work of Guruswami, H\aa stad and Sudan~\cite{GHS}, Holmerin~\cite{Holmerin} showed that it is NP-hard to find an independent set of size $\delta$ in a $2$-colorable $4$-uniform hypergraph, where $\delta > 0$ is any constant. For $3$-uniform hypergraphs which are $3$-colorable, Khot~\cite{Khot-3} showed a hardness of finding independent sets of size $(\log\log n)^{-c}$. On $3$-colorable graphs, assuming the so called \emph{Alpha Conjecture}, Dinur et al.~\cite{DMR} showed it is NP-hard to find independent sets of size $\delta$. Bansal and Khot~\cite{BK09, BK10} assumed the more well known Unique Games Conjecture to show that it is NP-hard to find independent sets of size $\delta$ in \emph{almost bipartite} (i.e. almost $2$-colorable) graphs. Guruswami and Sinop~\cite{GSinop} showed a similar result for almost $2$-colorable $3$-uniform hypergraphs, the focus of their work being the case of bounded degree hypergraphs. It is pertinent to note that while the algorithmic results have $\ensuremath{\textnormal{poly}}\xspace(n)$ factors, the previous best inapproximability was a $\ensuremath{\textnormal{poly}}\xspace(\log n)$ factor~\cite{Khot-color}. Our result for independent set in almost $2$-colorable $4$-uniform hypergraphs -- Theorem \ref{thm-main1} -- partially bridges this gap by showing an almost polynomial factor $2^{-(\log n)^{1-\varepsilon}}$, an exponential improvement over the previous lower bound. Theorem \ref{thm-main2} unconditionally proves the hardness result for independent set in almost $2$-colorable $3$-uniform hypergraphs, which was earlier known only assuming the Unique Games Conjecture. We also show -- in Theorem \ref{thm-main3} -- the first inapproximability for the case of $2$-colorable $3$-uniform hypergraphs assuming the $d$-to-$1$ Games Conjecture. In the rest of this section we give an informal overview of the techniques used to proves our results. \subsection{Overview of Techniques} The results of this work follow a common template of reductions from an instance of a NP-hard constraint satisfaction problem -- the so called \emph{Outer Verifier} -- via its combination with a proof encoding -- the \emph{Inner Verifier}. However, the techniques used to prove Theorems \ref{thm-main1}, \ref{thm-main2} and \ref{thm-main3} are somewhat varied and we describe them separately. \subsubsection*{Almost $2$-Colorable $4$-Uniform Hypergraphs} The goal of this result is to prove an almost polynomial hardness factor for independent set in almost $2$-colorable $4$-uniform hypergraphs. To accomplish this, the size of the hardness reduction needs to be bounded. Thus, one cannot use \emph{Long Codes} which have an unmanageable blowup for our purpose. Instead, we use Hadamard Codes which are exponentially shorter and have been used in previous works~\cite{KP06, KS08} for a similar reason. The Hadamard Code $H^v$ of an element $v \in \ensuremath{\mathbb{F}[2]}^m$ is indexed by all $x \in \ensuremath{\mathbb{F}[2]}^m$ such that $H^v(x) := x\cdot v \in \ensuremath{\mathbb{F}[2]}$. The ``gadget'' used for the reduction is as follows. Consider the following $4$-uniform hypergraph. The vertex set is $\ensuremath{\mathbb{F}[2]}^m$. 
Let $e_1 \in \ensuremath{\mathbb{F}[2]}^m$ be the element which has $1$ in the first coordinate and $0$ everywhere else. For any $x, y, z \in \ensuremath{\mathbb{F}[2]}^m$, add a hyperedge between the elements $x, y, x+z$ and $y+z+e_1$, where the addition is done in the vector space $\ensuremath{\mathbb{F}[2]}^m$. This is (essentially) a $4$-uniform hypergraph. Consider any element $v \in \ensuremath{\mathbb{F}[2]}^m$ such that $v_1 = 1$. It is easy to see that $H^v(x) + H^v(x+z) + H^v(y) + H^v(y + z + e_1) = 1$, and thus the coloring to $\ensuremath{\mathbb{F}[2]}^m$ given by the value of $H^v$ is a valid $2$-coloring of this hypergraph. On the other hand it can be shown that any independent set $S \subseteq \ensuremath{\mathbb{F}[2]}^m$ of size $\delta 2^m$ can be \emph{decoded} into a list of elements $v$ such that $v_1 = 1$. This analysis uses only some basic tools from Fourier Analysis. The above gadget can be combined with a parallel repetition of an appropriate linear constraint system. In our case, we choose a specialized instance of {\sc Max-$3$Lin} constructed by Khot and Ponnuswami~\cite{KP06}. The main idea in this combination is to do the \emph{folding} only over the homogeneous constraints and use the non-homogeneous constraints to play the role of $e_1$ in the above gadget. The almost polynomial hardness factor is obtained by an appropriate number of rounds of parallel repetition which is afforded by the parameters of the {\sc Max-$3$Lin} instance used in the reduction. \subsection*{Almost $2$-Colorable $3$-Uniform Hypergraphs} This reduction uses as the Outer Verifier a layered constraint satisfaction problem, referred to as the \emph{Multi-Layered PCP}. This PCP was used earlier by Khot~\cite{Khot-3} for similar results for $3$-Colorable $3$-Uniform Hypergraphs and by Dinur, Guruswami, Khot and Regev~\cite{DGKR03} and Sachdeva and Saket~\cite{SS11} in their hardness results for hypergraph vertex cover. Due to some fundamental limitations of existing techniques, the use of this PCP is necessitated for proving results for independent sets in $3$-uniform hypergraphs. The Inner Verifier uses a \emph{biased} Long Code encoding similar to the reductions of Dinur, Khot, Perkins and Safra~\cite{DKPS}, Khot and Saket~\cite{KS12} and Sachdeva and Saket~\cite{SS13}. The following gadget encapsulates the Inner Verifier. Consider the biased Long Code $\mc{H} = \{1,2,*\}^m$. The associated measure is induced by sampling each coordinate independently to be $1$ or $2$ with probability $\frac{1-\varepsilon}{2}$ and $*$ with probability $\varepsilon$. Let $\mc{H}_0, \mc{H}_1, \dots, \mc{H}_d$ be $d+1$ identical copies of $\mc{H}$. A vertex weighted $3$-uniform hypergraph is constructed by taking the union of the $d+1$ Long Codes with weights given by the measure. Consider $x \in \mc{H}_0$ and $y, z \in \mc{H}_k$ ($1\leq k \leq d$), such that for any $i \in [m]$ the tuple $(x_i, y_i, z_i)$ is not $(1,1,1)$ or $(2,2,2)$. Add a hyperedge between $x, y$ and $z$ for all such choices. It is easy to see that for any $j \in [m]$, removing all the vertices $x$ such that $x_j = *$ and all hyperedges incident on these vertices makes the hypergraph $2$-colorable by coloring the rest of the vertices $y$ according to whether $y_j = 1$ or $2$. 
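To spell out the completeness claim just made: fix $j \in [m]$ and remove every vertex whose $j$-th coordinate is $*$; each surviving vertex $w$ has $w_j \in \{1,2\}$ and is colored by $w_j$. For any hyperedge $\{x, y, z\}$ all of whose vertices survive, the construction of the hyperedges guarantees
\begin{equation*}
(x_j, y_j, z_j) \in \{1,2\}^{3}\setminus \{(1,1,1),(2,2,2)\},
\end{equation*}
so no surviving hyperedge is monochromatic and the remaining hypergraph is indeed $2$-colored.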
On the other hand, using Russo's Lemma and Friedgut's Junta Theorem one can show that if there is an independent set $\mc{I}$ which has at least $\delta$ fraction of measure from each of the $d+1$ Long Codes, then it can be decoded into a distinguished coordinate $\ell \in [m]$. This Inner Verifier is robust enough to be combined with the Multi-Layered PCP to yield the desired result. The hardness factor obtained, however, is much weaker than in the previous reduction, due to our use of Long Codes and also due to the structure of the Multi-Layered PCP. \subsubsection{$2$-Colorable $3$-Uniform Hypergraphs} For independent set in $2$-colorable $3$-uniform hypergraphs, the existing PCP techniques seem insufficient to yield the desired results. Thus, we rely on the $d$-to-$1$ Games Conjecture of Khot~\cite{Khot02}. This conjecture was earlier used to establish hardness results for independent sets in $4$-colorable graphs~\cite{DMR}. Our use of this conjecture is similar to that of O'Donnell and Wu~\cite{OW} who showed an optimal $\frac{5}{8} + \varepsilon$ factor hardness for a satisfiable instance of {\sc Max-$3$CSP}. In a recent work H\aa stad~\cite{Hastad12} showed the same result unconditionally. We also make use of certain techniques used in \cite{Hastad12}. The Outer Verifier in our reduction is a multi-layered PCP constructed using the $d$-to-$1$ games problem. The construction of this PCP ensures a \emph{smoothness} property which has been used in several previous works~\cite{Khot-3, KS06, KS08a, GRSW} including the above mentioned work of H\aa stad~\cite{Hastad12} and a related work of Wenner~\cite{Wenner12}. The Inner Verifier yields a $3$-uniform hypergraph with hyperedges corresponding to a $3$-query PCP test over Long Codes which is in a same vein as the test used in \cite{OW} and \cite{Hastad12}. The analysis is based in large part on the Invariance Principle of Mossel~\cite{Mossel}, the application of which follows an approach used by O'Donnell and Wu~\cite{OW}, while avoiding certain complications they face. The smoothness property is crucial for the analysis and is leveraged in a manner similar to \cite{Hastad12}. \noindent {\bf Organization of Paper.} The next section contains the known PCP constructions which shall be the starting points in our reductions for Theorems \ref{thm-main1} and \ref{thm-main2}. We shall also state the $d$-to-$1$ Games Conjecture that we shall require for proving Theorem \ref{thm-main3} and describe the smooth layered PCP we construct based on this assumption, a sketch of the construction being deferred to Section \ref{sec-dto1multi}. Sections \ref{sec-main1}, \ref{sec-main2} and \ref{sec-main3} contain the hardness reduction and proofs for Theorems \ref{thm-main1}, \ref{thm-main2} and \ref{thm-main3} respectively along with a description of the mathematical tools needed to complete the analyses. \section{Preliminaries} In this section we shall describe some useful results in PCPs and hardness of approximation along with the description of the $d$-to-$1$ Games Conjecture. For proving Theorem \ref{thm-main1} we shall begin with the following theorem of Khot and Ponnuswami~\cite{KP06} on the hardness of a specific {\it gap} version of {\sc Max-$3$Lin} with a desirable setting of the parameters. An instance of {\sc Max-$3$Lin} consists of a system of linear equations over $\ensuremath{\mathbb{F}[2]}$ where each equation has exactly $3$ variables, the goal being to find an assignment to the variables satisfying the maximum number of equations. 
The instance is said to be $d$-regular if each variable occurs in exactly $d$ equations. \begin{theorem} \label{thm-KP} \cite{KP06} Given a $7$-regular instance $\mathcal{A}$ of {\sc Max-$3$Lin} over $\ensuremath{\mathbb{F}[2]}$ on $n$ variables, unless $\textnormal{NP} \subseteq \textnormal{DTIME}(2^{O(\log^2 n)})$, there is no polynomial time algorithm to distinguish between the following two cases: \begin{itemize} \item \textnormal{{\bf YES} Case.} There is an assignment to the variables of $\mathcal{A}$ that satisfies a $1 - c(n) := 1 - 2^{-\Omega(\sqrt{\log n})}$ fraction of the equations (completeness). \item \textnormal{{\bf NO} Case.} No assignment to the variables of $\mathcal{A}$ satisfies more than a $1 - s(n) := 1 - \Omega(\log^{-3}n)$ fraction of the equations (soundness). \end{itemize} \end{theorem} The usefulness of the above theorem is due to the fact that the completeness is very close to $1$, while the soundness is bounded away from $1$ to allow for $\ensuremath{\textnormal{poly}}\xspace(\log n)$ rounds of parallel repetition. The rest of this section describes PCP constructions -- required for Theorems \ref{thm-main2} and \ref{thm-main3} -- which are somewhat more complicated. \subsection{Multi-Layered PCP}\label{sec-multi} The Multi-Layered PCP described here was constructed by Dinur, Guruswami, Khot and Regev~\cite{DGKR03}, who also proved its useful properties. An instance $\Phi$ of the Multi-Layered PCP is parametrized by integers $L, R > 1$. The PCP consists of $L$ sets of variables $V_1, \dots, V_L$. The label set (or range) of the variables in the $l^\textrm{th}$ set $V_l$ is a set $R_l$ where $|R_l| = R^{O(L)}$. For any two integers $1 \leq l < l' \leq L$, the PCP has a set of constraints $\Phi_{l,l'}$ in which each constraint depends on one variable $v \in V_l$ and one variable $v' \in V_{l'}$. The constraint (if it exists) between $v \in V_l$ and $v' \in V_{l'}$ ($l < l'$) is denoted and characterized by a projection $\pi_{v\rightarrow v'} : R_l\to R_{l'}$. A labeling to $v$ and $v'$ satisfies the constraint $\pi_{v\rightarrow v'}$ if the projection (via $\pi_{v\rightarrow v'}$) of the label assigned to $v$ coincides with the label assigned to $v'$. The following useful `weak-density' property of the Multi-Layered PCP was defined in \cite{DGKR03}, which (roughly speaking) states that any significant subset of variables induces a significant fraction of the constraints between some pair of layers. \begin{definition}\label{def-weakly-dense} An instance $\Phi$ of the Multi-Layered PCP with $L$ layers is \textnormal{weakly-dense} if for any $\delta > 0$, given $m \geq \lceil\frac{2}{\delta}\rceil$ layers $l_1 < l_2 < \dots < l_m$ and given any sets $S_i\subseteq V_{l_i}$, for $i\in[m]$ such that $|S_i|\geq \delta|V_{l_i}|$; there always exist two layers $l_{i'}$ and $l_{i''}$ such that the number of constraints between the variables in the sets $S_{i'}$ and $S_{i''}$ is at least a $\frac{\delta^2}{4}$ fraction of the number of constraints between the sets $V_{l_{i'}}$ and $V_{l_{i''}}$. \end{definition} The following inapproximability of the Multi-Layered PCP was proven by Dinur et al. \cite{DGKR03} based on the PCP Theorem (\cite{AS}, \cite{ALMSS}) and Raz's Parallel Repetition Theorem (\cite{Raz}).
\begin{theorem}\label{thm-multi} There exists a universal constant $\gamma >0$ such that for any parameters $L > 1$ and $R$, there is a weakly-dense $L$-layered PCP $\Phi = \cup \Phi_{l,l'}$ such that it is NP-hard to distinguish between the following two cases: \begin{itemize} \item \textnormal{{\bf YES} Case:} There exists an assignment of labels to the variables of $\Phi$ that satisfies all the constraints. \item \textnormal{{\bf NO} Case:} For every $1\leq l < l' \leq L$, not more than a $1/R^\gamma$ fraction of the constraints in $\Phi_{l,l'}$ can be satisfied by any assignment. \end{itemize} \end{theorem} \subsection{The $d$-to-$1$ Games Conjecture}\label{sec-dto1} Before we state the conjecture we need to define a $d$-to-$1$ Game. \begin{definition} For a positive integer $d$, a $d$-to-$1$ Game $\mc{L}$ consists of two sets of variables $\mc{U}$ and $\mc{V}$, label sets $[k]$ and $[m]$, and a set of constraints $\mc{E}$ where each constraint $\pi_{v\rightarrow u} : [m] \rightarrow [k]$ is between a variable $v \in \mc{V}$ and a variable $u \in \mc{U}$, and for any $i \in [k]$, $\left|\pi_{v \rightarrow u}^{-1}(i)\right| = d$. A labeling $\sigma$ to the variables in $\mc{U}$ from $[k]$ and $\mc{V}$ from $[m]$ satisfies a constraint $\pi_{v\rightarrow u}$ iff $\pi_{v\rightarrow u}(\sigma(v)) = \sigma(u)$. \end{definition} Note that the definition of a $d$-to-$1$ Game in \cite{Khot02} had the condition that $\left|\pi_{v \rightarrow u}^{-1}(i)\right| \leq d$. All of our proofs go through analogously with this relaxed condition, but to avoid notational complications we stick to assuming that the pre-image of every singleton is of size exactly $d$. We now state the $d$-to-$1$ Games Conjecture. \begin{conjecture} \label{conj-dto1bireg} \textnormal{($d$-to-$1$ Games Conjecture~\cite{Khot02})} There is a fixed positive integer $d$ such that for any $\zeta > 0$, there exist integers $k$ and $m$ such that given a $d$-to-$1$ Game instance $\mc{L}$ with label sets $[k]$ and $[m]$ it is NP-hard to distinguish between the following two cases: \begin{itemize} \item \textnormal{{\bf YES} Case.} There is a labeling to the variables that satisfies all the constraints. \item \textnormal{{\bf NO} Case.} Any labeling to the variables satisfies at most a $\zeta$ fraction of the constraints. \end{itemize} In addition we make the assumption\footnote{It is not known whether this assumption can be made WLOG. However, all known Label Cover constructions are bi-regular which makes the assumption, in the authors' opinion, a reasonable one.} that the instance $\mc{L}$ is \emph{bi-regular}, i.e. for any variable $v \in \mc{V}$ the number of constraints containing $v$ is the same, and similarly for any variable $u \in \mc{U}$ the number of constraints containing $u$ is the same. \end{conjecture} Using Conjecture \ref{conj-dto1bireg} we have the following layered PCP with an additional \emph{smoothness} property. \subsection{Smooth $d$-to-$1$ Multi-Layered PCP}\label{sec-dto1multidef} The following is an analogue of the Multi-Layered PCP based on the $d$-to-$1$ Games Conjecture, also incorporating the \emph{smoothness} property. We shall refer to it as the \emph{Smooth $d$-to-$1$ MLPCP}. An instance $\Phi$ of the Smooth $d$-to-$1$ MLPCP is parametrized by integers $d, L, R, T > 1$. The PCP consists of $L$ sets of variables $V_1, \dots, V_L$. The label set (or range) of the variables in the $l^\textrm{th}$ set $V_l$ is a set $R_l$ where $|R_l| = R^{O(TL)}$.
For any two integers $1 \leq l < l' \leq L$, the PCP has a set of constraints $\Phi_{l,l'}$ in which each constraint depends on one variable $v \in V_l$ and one variable $v' \in V_{l'}$. The constraint (if it exists) between $v \in V_l$ and $v' \in V_{l'}$ ($l < l'$) is denoted and characterized by a projection $\pi_{v\rightarrow v'} : R_l\to R_{l'}$. The projection $\pi_{v\rightarrow v'}$ has the property that for every $j \in R_{l'}$, $\left|\pi_{v\rightarrow v'}^{-1}(j)\right| = d^{l'-l}$. A labeling to $v$ and $v'$ satisfies the constraint $\pi_{v\rightarrow v'}$ if the projection (via $\pi_{v\rightarrow v'}$) of the label assigned to $v$ coincides with the label assigned to $v'$. We have a similar weak density property as in the previous section. \begin{definition}\label{def-weakly-dense-dto1} An instance $\Phi$ of the Smooth $d$-to-$1$ MLPCP with $L$ layers is \textnormal{weakly-dense} if for any $\delta > 0$, given $m \geq \lceil\frac{2}{\delta}\rceil$ layers $l_1 < l_2 < \dots < l_m$ and given any sets $S_i\subseteq V_{l_i}$, for $i\in[m]$ such that $|S_i|\geq \delta|V_{l_i}|$; there always exist two layers $l_{i'}$ and $l_{i''}$ such that the number of constraints between the variables in the sets $S_{i'}$ and $S_{i''}$ is at least a $\frac{\delta^2}{4}$ fraction of the number of constraints between the sets $V_{l_{i'}}$ and $V_{l_{i''}}$. \end{definition} We also have the \emph{smoothness} property as defined below. \begin{definition}\label{def-smoothness} An instance $\Phi$ of the Smooth $d$-to-$1$ MLPCP with $L$ layers and parameter $T$ has the \emph{smoothness} property if for any two layers $l < l'$, any variable $v \in V_l$ and any two distinct labels $i, j \in R_l$, $$\Pr_{v' \in N(v)\cap V_{l'}}\left[\pi_{v\rightarrow v'}(i) = \pi_{v\rightarrow v'}(j)\right] \leq \frac{1}{T},$$ where the probability is taken over a uniformly random variable $v'$ in $V_{l'}$ which has a constraint with $v$. \end{definition} The following inapproximability of the Smooth $d$-to-$1$ MLPCP essentially follows from combining Conjecture \ref{conj-dto1bireg} with the layered construction of \cite{Khot-3}. A sketch of the construction is provided in Section \ref{sec-dto1multi}. \begin{theorem}\label{thm-dto1multi} Assuming Conjecture \ref{conj-dto1bireg} the following holds. There exists a universal positive integer $d$ such that for any arbitrarily small constant $\zeta > 0$, there exists a positive integer $R$, such that for every $L, T > 1$, there is a weakly-dense and smooth $L$-layered PCP with parameters $d, T, R$, $\Phi = \cup \Phi_{l,l'}$, such that it is NP-hard to distinguish between the following two cases: \begin{itemize} \item \textnormal{{\bf YES} Case.} There exists an assignment of labels to the variables of $\Phi$ that satisfies all the constraints. \item \textnormal{{\bf NO} Case.} For every $1\leq l < l' \leq L$, not more than a $\zeta$ fraction of the constraints in $\Phi_{l,l'}$ can be satisfied by any assignment. \end{itemize} \end{theorem} \section{Independent Set in Almost $2$-Colorable $4$-Uniform Hypergraphs}\label{sec-main1} This section presents a hardness reduction from Theorem \ref{thm-KP} to an instance of {\sc ISAlmostColor}$_\varepsilon(4, 2, Q)$. The reduction employs an Inner Verifier based on Hadamard Codes. The Hadamard Code of an element $v \in \ensuremath{\mathbb{F}[2]}^m$ is an $\ensuremath{\mathbb{F}[2]}$-valued code indexed by the elements of $\ensuremath{\mathbb{F}[2]}^m$ and its value at $x \in \ensuremath{\mathbb{F}[2]}^m$ is the dot-product $x\cdot v \in \ensuremath{\mathbb{F}[2]}$.
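As a quick sanity check, the following Python sketch (with an illustrative value of $m$ only) evaluates Hadamard codewords and verifies the four-point identity behind the gadget discussed in the overview: for any $v$ with $v_1 = 1$ and any $x, y, z$, the values of $H^v$ at $x$, $x+z$, $y$ and $y+z+e_1$ sum to $1$ over $\ensuremath{\mathbb{F}[2]}$, so they cannot all be equal.
\begin{verbatim}
from itertools import product

m = 4                                    # illustrative dimension only

def dot(u, v):
    # Dot product over F_2.
    return sum(a & b for a, b in zip(u, v)) % 2

def hadamard_code(v):
    # The codeword of v, indexed by all x in F_2^m; its value at x is x . v.
    return {x: dot(x, v) for x in product((0, 1), repeat=m)}

def add(u, v):
    # Coordinate-wise addition in F_2^m.
    return tuple((a + b) % 2 for a, b in zip(u, v))

e1 = (1,) + (0,) * (m - 1)
v = (1, 0, 1, 1)                         # any v with first coordinate 1
H = hadamard_code(v)
for x, y, z in product(product((0, 1), repeat=m), repeat=3):
    s = (H[x] + H[add(x, z)] + H[y] + H[add(add(y, z), e1)]) % 2
    assert s == 1
print("four-point identity verified for all", 2 ** (3 * m), "triples (x, y, z)")
\end{verbatim}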
\subsection{Hardness Reduction} Let $\mc{A}$ be the {\sc Max-$3$Lin} instance given by Theorem \ref{thm-KP}. The reduction begins with choosing a positive integer $r$ which we shall set later. In the first part of the reduction we shall construct an Outer Verifier which shall be an $r$-round parallel repetition of a verifier-prover game obtained from the instance $\mc{A}$. \subsubsection{Outer Verifier} Let $\Phi_r$ be the collection of all blocks of $r$ variables each from $\mc{A}$, and $\Psi_r$ be the collection of all blocks of $r$ equations each. Consider the following $2$-prover $1$-round game {\sc $2$P$1$R}$(\mc{A}, r)$: \begin{enumerate} \item The Verifier chooses one block $W$ uniformly at random from $\Psi_r$. From each equation in $W$, the verifier chooses one out of the three variables at random to construct a block $U$ of $\Phi_r$. \item The Verifier sends $U$ to Prover-1 and $W$ to Prover-2 and expects from each prover an assignment to all the variables that it received. \item The Verifier accepts if the assignment given by Prover-2 satisfies all equations of $W$ and is consistent with the assignment given to the variables of $U$ by Prover-1. \end{enumerate} The Parallel Repetition Theorem of Raz~\cite{Raz} and its subsequent strengthening by Holenstein~\cite{Hol} and Rao~\cite{Rao} imply the following. \begin{theorem}\label{thm-2P1R} The $2$ prover $1$ round game {\sc $2$P$1$R}$(\mc{A}, r)$, where $\mc{A}$ is an instance on $n$ variables given by Theorem \ref{thm-KP}, has the following properties: \begin{itemize} \item \textnormal{{\bf YES} Case.} If $\mc{A}$ is a YES instance then the Verifier accepts with probability at least $(1 - c(n))^r$. \item \textnormal{{\bf NO} Case.} If $\mc{A}$ is a NO instance then the Verifier accepts with probability at most $(1 - s(n)^\kappa)^{r/\kappa}$ for some universal constant $\kappa > 1$. \end{itemize} \end{theorem} For the rest of the reduction we shall assume that none of the blocks $W$ or $U$ contain a repeated variable. This omits only a tiny fraction of blocks which does not change any parameter noticeably. \subsubsection{Inner Verifier} Consider a block $W$ of $r$ equations. It contains $3r$ distinct variables say $x_1, x_2 \dots, x_{3r-1}, x_{3r}$. We may assume without loss of generality that the $i$th equation consists of the variables $x_{3i-2}, x_{3i-1}$, and $x_{3i}$, for $i=1, \dots, r$. We shall now associate an element of $\ensuremath{\mathbb{F}[2]}^{3r+1}$ with each of the $r$ equations of $W$. Note that the $(3r+1)$th coordinate is extra and added to help with ensuring consistency. Suppose that the $i$th (for some $i \in [r]$) equation is of the form $x_{3i-2} + x_{3i-1} + x_{3i} = 0$, then let $h_i \in \ensuremath{\mathbb{F}[2]}^{3r+1}$ be such that the dot-product $h_i\cdot x = x_{3i-2} + x_{3i-1} + x_{3i}$ for any $x \in \ensuremath{\mathbb{F}[2]}^{3r + 1}$. Otherwise, if the $i$th equation is of the form $x_{3i-2} + x_{3i-1} + x_{3i} = 1$, then let $h_i$ be such that $h_i\cdot x = x_{3i-2} + x_{3i-1} + x_{3i} + x_{3r + 1}$. Our assumption that the block $W$ does not contain a repeated variable implies that the set of elements $\{h_i\}_{i=1}^r$ is linearly independent. Let $H_W$ be the $r$ dimensional space spanned by $\{h_i\}_{i=1}^r$. For completing the reduction we also define an element $h_W \in \ensuremath{\mathbb{F}[2]}^{3r+1}$ so that $h_W\cdot x = x_{3r+1}$ for any $x \in \ensuremath{\mathbb{F}[2]}^{3r+1}$. 
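The following minimal sketch (with a toy block size $r$ and hypothetical right-hand sides) makes this association concrete: it builds $h_1, \dots, h_r$ and $h_W$ for a block $W$, and checks that a satisfying assignment, extended by a $1$ in the $(3r+1)$th coordinate, is orthogonal to every $h_i$ (and hence to $H_W$) while having inner product $1$ with $h_W$.
\begin{verbatim}
r = 2                          # toy block size; the reduction uses r = polylog(n)
b = [0, 1]                     # hypothetical right-hand sides of the r equations

def h_vector(i):
    # h_i . x equals x_{3i-2} + x_{3i-1} + x_{3i}, plus x_{3r+1} when b_i = 1,
    # so h_i . (sigma, 1) = 0 exactly when sigma satisfies equation i.
    h = [0] * (3 * r + 1)
    for j in (3 * i, 3 * i + 1, 3 * i + 2):   # 0-indexed variables of equation i
        h[j] = 1
    if b[i] == 1:
        h[3 * r] = 1           # the extra (3r+1)-th coordinate
    return tuple(h)

def dot(u, v):
    return sum(a & c for a, c in zip(u, v)) % 2

h_list = [h_vector(i) for i in range(r)]
h_W = tuple([0] * (3 * r) + [1])              # h_W . x = x_{3r+1}

sigma = (0, 0, 0, 1, 1, 1)     # satisfies x1+x2+x3 = 0 and x4+x5+x6 = 1
assignment = sigma + (1,)
assert all(dot(assignment, h) == 0 for h in h_list)
assert dot(assignment, h_W) == 1
print("h_i and h_W behave as intended on a satisfying assignment")
\end{verbatim}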
Let $\ol{C}[W]$ be a $\{0,1\}$ \emph{code} indexed by the elements of $\ensuremath{\mathbb{F}[2]}^{3r+1}/H_W$, i.e. the set of cosets of the subspace $H_W$ in the space $\ensuremath{\mathbb{F}[2]}^{3r+1}$. Since $H_W$ is a $r$ dimensional subspace, the size of the code $\ol{C}[W]$ is $2^{2r+1}$. We say that $\ol{C}[W]$ is \emph{folded} over $H_W$. It is easy to see that any $\ol{C}[W] : \ensuremath{\mathbb{F}[2]}^{3r+1}/H_W \mapsto \{0,1\}$ can be \emph{unfolded} into $C[W] : \ensuremath{\mathbb{F}[2]}^{3r+1} \mapsto \{0,1\}$ such that, $C[W](x + y) = \ol{C}[W](x + H_W)$ for any $x \in \ensuremath{\mathbb{F}[2]}^{3r+1}$ and $y \in H_W$. For notational convenience we shall represent the coset $x + H_W$ simply by $x$, and this shall be clear from the context. Ideally, $C[W]$ is supposed to be the Hadamard Code of a satisfying assignment to the variables in $W$ with the $(3r+1)$th coordinate set to $1 \in \ensuremath{\mathbb{F}[2]}$, so that the code $\ol{C}[W]$ is well defined and folded over the subspace $H_W$. We are now ready to define the vertices and hyperedges of the instance $G(V, E)$ of {\sc AlmostColHyp}$(2,4)$. \noindent {\bf Vertices.} The vertex set $V$ consists of all the locations of $\ol{C}[W]$ for each $W \in \Psi_r$, i.e. each block $W$ of $r$ equations. \noindent {\bf Hyperedges.} Consider any choice of $U \in \Phi_r$ and $W \in \Psi_r$ by the verifier in the game {\sc $2$P$1$R}$(\mc{A}, r)$ in Step 1. Let $U$ and $W' \in \Psi_r$ be another choice with the same block of $r$ variables $U$. Let $\pi_W : \ensuremath{\mathbb{F}[2]}^{3r+1} \mapsto \ensuremath{\mathbb{F}[2]}^r$ be a projection onto the coordinates of the $r$ variables of $U$ from the block of $(3r+1)$ coordinates corresponding to the $3r$ variables of $W$ and the extra coordinate as defined above. The extra coordinate plays no part in this projection. We shall also use the notation $\pi^{-1} : \ensuremath{\mathbb{F}[2]}^r \mapsto \ensuremath{\mathbb{F}[2]}^{3r + 1}$, which extends a vector by filling in zeros in the rest of the coordinates. Similarly, $\pi_{W'}$ be the projection for $W'$. Let $\ol{C}[W]$ and $\ol{C}[W']$ be the codes of $W$ and $W'$. For all such choices of $U$, $W$ and $W'$ do the following. \begin{enumerate} \item For all choices of elements $x, y \in \ensuremath{\mathbb{F}[2]}^{3r + 1}$ and $z \in \ensuremath{\mathbb{F}[2]}^r$ such that $z \neq 0$, do step 2. \item Add a hyperedge between the vertices (or locations of the codes): $\ol{C}[W](x), \ol{C}[W](x + \pi_W^{-1}(z) + h_W), \ol{C}[W'](y)$ and $\ol{C}[W'](y + \pi_{W'}^{-1}(z))$. It is easy to see that since $z \neq 0$ the four vertices chosen above are distinct. \end{enumerate} This completes the hardness reduction and we move to its analysis. \subsection{YES Case} In the YES Case the instance $\mc{A}$ has an assignment $\sigma^*$ to its variables that satisfies $(1 - c(n))$ fraction of its equations. Call the equations satisfied by $\sigma^*$ as \emph{good}. Similarly, call a block of $r$ equations as \emph{good} if all of its equations are good. Clearly, at least $(1 - c(n))^r$ fraction of the blocks are good. For any good block $W$ let $C[W] : \ensuremath{\mathbb{F}[2]}^{3r+1} \mapsto \ensuremath{\mathbb{F}[2]}$ be the Hadamard Code of the assignment $\sigma^*(W)\in \ensuremath{\mathbb{F}[2]}^{3r}$ to the variables in $W$, concatenated with a $1$ in the $(3r+1)$th coordinate. Let us denote this concatenated vector as $(\sigma^*(W), 1)$. 
In other words, $C[W](x) = (\sigma^*(W), 1)\cdot x \in \ensuremath{\mathbb{F}[2]}$, for $x \in \ensuremath{\mathbb{F}[2]}^{3r+1}$. Since $\sigma^*$ satisfies all equations in $W$, it is easy to see that $C[W]$ is invariant over the cosets of $H_W$, i.e. $C[W](x + y) = C[W](x)$ for $x \in \ensuremath{\mathbb{F}[2]}^{3r+1}$ and $y \in H_W$. Thus this can be folded into the code $\ol{C}[W]$ by defining $\ol{C}[W](x+H_W) = C[W](x)$. As before, we shall use $\ol{C}[W](x)$ to represent the value over the coset $x + H_W$. The above defines a $2$-coloring of the locations of the codes of all good blocks, each location colored by its value in $\ensuremath{\mathbb{F}[2]}$. We shall show that any hyperedge completely induced by these locations is non-monochromatic. Consider a choice of $U, W$ and $W'$ in the construction of the hyperedges where $W$ and $W'$ are good blocks. Let $x, y$ and $z$ be chosen as in Step 1. We shall show that, \begin{equation} \ol{C}[W](x)+ \ol{C}[W](x + \pi_W^{-1}(z) + h_W)+ \ol{C}[W'](y) + \ol{C}[W'](y + \pi_{W'}^{-1}(z)) = 1, \end{equation} which implies that the corresponding hyperedge is non-monochromatic. To see this, observe that the LHS of the above equation is, \begin{eqnarray} & & (\sigma^*(W),1)\cdot x + (\sigma^*(W),1)\cdot(x + \pi_W^{-1}(z) + h_W) + (\sigma^*(W'),1)\cdot y + (\sigma^*(W'),1)\cdot(y + \pi_{W'}^{-1}(z)) \nonumber \\ & = & (\sigma^*(W),1)\cdot (\pi_W^{-1}(z)) + (\sigma^*(W'),1)\cdot (\pi_{W'}^{-1}(z)) + (\sigma^*(W),1)\cdot h_W \nonumber \\ & = & \left(\pi_W(\sigma^*(W)) + \pi_{W'}(\sigma^*(W'))\right)\cdot z + 1 \nonumber \\ & = & 1, \end{eqnarray} where the second-to-last equality follows from the definition of $h_W$ and the last equality follows from the fact that $\sigma^*$ is a global assignment, so its projection onto $U$ from $W$ or $W'$ is the same. Thus, after removing a $1 - (1-c(n))^r$ fraction of vertices corresponding to the blocks which are not good and all hyperedges incident on them, the rest of the hypergraph is $2$-colorable. \subsection{NO Case}\label{sec-multi-No} Let $\mc{I}$ be an independent set in $G$. For every block $W$ of $r$ equations, let $\ol{C}[W]$ be the indicator of $\mc{I}$ restricted to the locations of the code $\ol{C}[W]$. Here, $\ol{C}[W]$ is thought of as a real-valued code taking values in $\{0,1\}$. Let $U, W$ and $W'$ be the choices in the construction of the hyperedges. For all choices of $x, y$ and $z$ in Step 1 of the construction, we have, \begin{equation} \ol{C}[W](x)\cdot \ol{C}[W](x+ \pi_W^{-1}(z) + h_W)\cdot \ol{C}[W'](y) \cdot \ol{C}[W'](y + \pi_{W'}^{-1}(z)) = 0. \end{equation} As mentioned earlier, we can unfold the codes into $C[W]$ and $C[W']$ to rewrite the above as, \begin{equation} C[W](x)\cdot C[W](x+ \pi_W^{-1}(z) + h_W)\cdot C[W'](y)\cdot C[W'](y + \pi_{W'}^{-1}(z)) = 0. \end{equation} For convenience of notation, we shall refer to $C[W]$ as $A$ and $C[W']$ as $B$. Doing the usual Fourier expansion and using standard tools from Fourier Analysis over folded codes (refer to Section \ref{sec-Fourier} for an overview) we get the following.
\begin{eqnarray} & & \displaystyle\sum_{\substack{\alpha, \alpha', \beta, \beta' \in \ensuremath{\mathbb{F}[2]}^{3r+1}\\ \alpha, \alpha'\perp H_W \\ \beta, \beta'\perp H_{W'}}}\wh{A}_\alpha\chi_\alpha(x) \wh{A}_{\alpha'}\chi_{\alpha'}(x+ \pi_W^{-1}(z) + h_W) \wh{B}_{\beta}\chi_\beta(y) \wh{B}_{\beta'}\chi_{\beta'}(y + \pi_{W'}^{-1}(z)) = 0 \nonumber \\ &\Rightarrow & \displaystyle\sum_{\substack{\alpha, \alpha', \beta, \beta' \\ \alpha, \alpha'\perp H_W \\ \beta, \beta'\perp H_{W'}}}\wh{A}_\alpha \wh{A}_{\alpha'}\wh{B}_{\beta} \wh{B}_{\beta'} \chi_{(\alpha+\alpha')}(x)\chi_{\alpha'}(h_W)\chi_{\pi_W(\alpha')}(z) \chi_{(\beta+\beta')}(y)\chi_{\pi_{W'}(\beta')}(z) = 0 \label{eqn-zneq0} \end{eqnarray} The above is true for all $x, y$ and $z$ such that $z \neq 0$ which are independent of each other. Thus, for a fixed value of $x$ and $y$, the expectation of the LHS of Equation \eqref{eqn-zneq0} over all $z \in \ensuremath{\mathbb{F}[2]}^r$ is equal to $2^{-r}$ times its value at $z = 0$. Observing that in the expectation over all $z \in \ensuremath{\mathbb{F}[2]}^r$ only terms satisfying $\pi_W(\alpha') = \pi_{W'}(\beta')$ survive, we obtain, \begin{eqnarray} & & \displaystyle\sum_{\substack{ \alpha, \alpha'\perp H_W \\ \beta, \beta'\perp H_{W'} \\ \pi_W(\alpha') = \pi_{W'}(\beta')}}\wh{A}_\alpha \wh{A}_{\alpha'}\wh{B}_{\beta} \wh{B}_{\beta'} \chi_{(\alpha+\alpha')}(x)\chi_{\alpha'}(h_W) \chi_{(\beta+\beta')}(y) \nonumber \\ & = & 2^{-r} \sum_{\substack{ \alpha, \alpha'\perp H_W \\ \beta, \beta'\perp H_{W'}}}\wh{A}_\alpha \wh{A}_{\alpha'}\wh{B}_{\beta} \wh{B}_{\beta'} \chi_{(\alpha+\alpha')}(x)\chi_{\alpha'}(h_W) \chi_{(\beta+\beta')}(y). \end{eqnarray} Taking a further expectation over $x$ and $y$, we observe that the only terms that survive on the LHS are those in which $\alpha = \alpha'$, $\beta = \beta'$ and $\pi_W(\alpha) = \pi_{W'}(\beta)$, while the terms that survive on the RHS have $\alpha = \alpha'$ and $\beta = \beta'$. Thus we obtain, \begin{equation} \displaystyle\sum_{\substack{\alpha\perp H_W, \beta \perp H_{W'} \\ \pi_W(\alpha) = \pi_{W'}(\beta)}} \wh{A}_\alpha^2\wh{B}_\beta^2\chi_\alpha(h_W) = 2^{-r}\displaystyle\sum_{\substack{\alpha\perp H_W, \beta \perp H_{W'}}} \wh{A}_\alpha^2\wh{B}_\beta^2\chi_\alpha(h_W) \leq 2^{-r}, \end{equation} where the last inequality is because the sum of squares of the Fourier coefficients is at most $1$. Now, $\chi_\alpha(h_W) = -1$ if $\alpha\cdot h_W = 1$ and $1$ otherwise. Thus, \begin{eqnarray} \displaystyle\sum_{\substack{\alpha\perp H_W, \beta \perp H_{W'} \\ \pi_W(\alpha) = \pi_{W'}(\beta) \\ \alpha\cdot h_W = 1}} \wh{A}_\alpha^2\wh{B}_\beta^2 & \geq & \displaystyle\sum_{\substack{\alpha\perp H_W, \beta \perp H_{W'} \\ \pi_W(\alpha) = \pi_{W'}(\beta) \\ \alpha\cdot h_W = 0}} \wh{A}_\alpha^2\wh{B}_\beta^2 - 2^{-r} \nonumber \\ & \geq & \wh{A}_\emptyset^2\wh{B}_\emptyset^2 - 2^{-r}. \end{eqnarray} The above gives a strategy for the provers of {\sc $2$P$1$R}$(\mc{A}, r)$. Suppose Prover-1 receives a block of variables $U$ and Prover-2 receives a block of equations $W$. \noindent Strategy of Prover-2: It chooses a vector $\alpha \in \ensuremath{\mathbb{F}[2]}^{3r+1}$ satisfying: (i) $\alpha \perp H_W$, and (ii) $\alpha\cdot h_W = 1$ with probability $\wh{A}_\alpha^2$, where $A = C[W]$. Since $\alpha$ satisfies (i) and (ii), the first $3r$ coordinates give a satisfying assignment to the variables in $W$. This assignment is returned to the Verifier by Prover-2. 
\noindent Strategy of Prover-1: It chooses a block $W'$ according to the Verifier's distribution in {\sc $2$P$1$R}$(\mc{A}, r)$, conditioned on the chosen block of variables being $U$. It then chooses $\beta \in \ensuremath{\mathbb{F}[2]}^{3r+1}$ satisfying: $\beta \perp H_{W'}$ with probability $\wh{B}_\beta^2$ where $B = C[W']$. The assignment to the variables in $U$ contained in the first $3r$ coordinates of $\beta$ is returned to the Verifier. Suppose that the independent set $\mc{I}$ contains a $\delta$ fraction of the vertices of $G$, i.e. locations of the codes. Recall that we set the value of the code $\ol{C}[W]$ to be the indicator of $\mc{I}$ restricted to its locations. Thus, for at least a $\delta/2$ fraction of the blocks $W$, $\E_x[\ol{C}[W](x)] \geq \delta/2$. Call such blocks \emph{heavy}. Conditioned on the block of variables $U$, let $p_U$ be the fraction of choices of the block of equations $W$ by the verifier of {\sc $2$P$1$R}$(\mc{A}, r)$ such that $W$ is heavy. From the above $\E_U[p_U] \geq \delta/2$, by the regularity of $\mc{A}$. Thus, the probability that both $W$ and $W'$ are heavy -- where $W'$ is obtained from the strategy of Prover-1 -- is $\E_U[p_U^2] \geq \E_U[p_U]^2 \geq \delta^2/4$. Noting that the weight of $\ol{C}[W]$ is the same as that of $C[W]$, which is given by the Fourier coefficient indexed by the empty set, we obtain that the verifier accepts with probability at least, $$\frac{\delta^2}{4}\left(\frac{\delta^2}{4} - 2^{-r}\right) \geq \left(\frac{\delta^2}{4} - 2^{-r}\right)^2.$$ From Theorem \ref{thm-2P1R} this implies that $\delta^2/4 \leq (1-s(n)^\kappa)^{r/2\kappa} + 2^{-r}$. {\bf Setting the Parameters.} We set $r = \log^\ell n$ for a large enough constant $\ell$. The size of the hypergraph is $N = 2^{\ensuremath{\textnormal{poly}}\xspace(\log n)}$. In the YES case, the fraction of vertices to be removed is at most $c(n)r \leq 2^{-(\log N)^\xi}$ for some positive constant $\xi$ (depending on $\ell$). Further, we obtain that $(1-s(n)^\kappa)^{r/2\kappa} + 2^{-r} \leq 2^{-(\log N)^{1-\gamma}}$ for an arbitrarily small $\gamma$ by an appropriately large choice of the constant $\ell$. The above analysis yields a bound of $2^{-(\log N)^{1-\gamma}}$ on the relative size of the largest independent set in the NO case, for arbitrarily small $\gamma > 0$. \section{Independent Set in Almost $2$-Colorable $3$-Uniform Hypergraphs}\label{sec-main2} We first need a few useful definitions and results for our analysis, which follows a pattern similar to previous works~\cite{DKPS, KS12, SS13}, and we shall use their notation. \subsection{Preliminaries} A family $\mc{F} \subseteq \{*, 1, 2\}^m$ is called \emph{monotone} if for any $F \in \mc{F}$ and $F'$ obtained by changing a $*$ to either $1$ or $2$ in any coordinate, $F'\in \mc{F}$. For a parameter $p \in [0,1]$, define the measure $\mu_p$ on $\{1, 2, *\}^m$ by $\mu_{p}(F) = \left(\frac{p}{2}\right)^{m-m'}(1-p)^{m'}$, where $m'$ is the number of coordinates of $F$ with $*$ in them, for any $F \in \{1, 2, *\}^m$. In other words, $\mu_p$ is the product measure assigning in each coordinate a measure $1-p$ to $*$ and $\frac{p}{2}$ to each of $1$ and $2$. The measure of a family $\mc{F} \subseteq \{1,2,*\}^m$ is $\mu_p(\mc{F}) = \sum_{F \in \mc{F}}\mu_p(F)$. A set $C \subseteq [m]$ is a $(\delta, p)$-{\em core} for a family $\mc{F}$, if there exists a family $\mc{F}'$ such that $\mu_p(\mc{F}\triangle\mc{F}' )\leq \delta$ and ${\cal F}'$ depends only on the coordinates in $C$. Let $t \in (0,1)$ be a given parameter and $C \subseteq [m]$.
A \emph{core-family} $[\mc{F}]^t_C$ is a family on the set of coordinates $C$ which resembles $\mc{F}$ restricted to $C$. Formally, $$[\mc{F}]^t_C \defeq \left\{ F\in \{*, 1, 2\}^C\ \middle|\ \Pr_{F' \in \mu_p^{[m]\setminus C}}\left[(F, F')\in {\cal F}\right] > t\right\},$$ where $(F, F')$ is an element in $\{*,1,2\}^m$ by combining $F$ on coordinates in $C$ and $F'$ on $[m]\setminus C$. The \emph{influence} of a coordinate $i \in [m]$ for a family ${\cal F}$ under the measure $\mu_p$ is defined as follows: $$\tn{Inf}^p_i({\cal F}) := \mu_p\left(\left\{F : F\mid_{i=*}\not\in {\cal F}\textnormal{ and }F\mid_{i=j}\in {\cal F}\textnormal{ for some } j\in \{1,2\}\right \}\right),$$ where $F\mid_{i=*}$ is an element identical to $F$ except on the $i^\textrm{th}$ coordinate where it is $*$, and $F\mid_{i=r},$ for $r \in \{1,2\}$ is similarly defined. The \emph{average sensitivity} of ${\cal F}$ at $p$ is the sum of influence of all coordinates: $\tn{as}_{p}({\cal F}) := \sum_{i=1}^m\tn{Inf}^p_i({\cal F})$. Let $D^p$ be a distribution on $\{*, 1, 2\}^2$ defined by first sampling $(1, 2)$ and $(2,1)$ uniformly with probability $\slfrac{1}{2}$ each and then changing each coordinate to $*$ independently with probability $1 - p$. It is easy to see that both the marginals of $D^p$ are identical to $\mu_p$. \subsubsection{Useful Results} The following variant of Russo's Lemma was proved in \cite{DKPS} (as Lemma 1). \begin{lemma} [Russo's Lemma \cite{Russo}] \label{lem-russo} Let ${\cal F}\subseteq \{*, 1, 2\}^m$ be monotone, then $\mu_p({\cal F})$ is increasing with $p$. In fact, $$\frac{1}{2}\cdot \tn{as}_p({\cal F}) \leq \frac{d\mu_p({\cal F})}{dp} \leq \tn{as}_p({\cal F}).$$ \end{lemma} The following corollary follows from the above and is proved in \cite{SS13}. \begin{corollary}\label{cor-follow} For a monotone family $\mc{F}\subseteq \{*, 1,2\}^m$, \begin{enumerate} \item For any $p' \geq p$, $\mu_{p'}({\cal F}) \geq \mu_{p}({\cal F})$. \item For any $\varepsilon > 0$, there is a $p' \in [1 - \varepsilon, 1 - \varepsilon/2]$ such that $\tn{as}_{p'}({\cal F}) \leq \frac{4}{\varepsilon}$. \end{enumerate} \end{corollary} The following is a generalization of Friedgut's Junta Theorem which is proved in \cite{ST11}. \begin{theorem}[Friedgut's Theorem \cite{Friedgut, ST11}] \label{thm-Friedgut} Fix $\delta > 0$. Let $\mc{F} \subseteq \{*, 1, 2\}^m$ be monotone with $a = \tn{as}_p({\cal F})$, for $p \in [0,1]$. There exists a function $C_{Friedgut}(p, \delta, a) \leq c_p^{a/\delta}$, for a constant $c_p$ depending only on $p$, so that $\mc{F}$ has a $(\delta, p)$-core $C$ of size $|C| \leq C_{Friedgut}(p, \delta, a)$. \end{theorem} The above theorem shall be used along with the following generalization of Lemma 3.1 in \cite{DS} proved in \cite{SS13}. \begin{proposition}\label{prop-coremass} If $C$ is a $(\delta, p)$-core of $\mc{F}$, then $\mu_p^C\left([\mc{F}]_C^{\slfrac{3}{4}}\right) \geq \mu_p({\cal F}) - 3\delta.$ \end{proposition} Using the above one can prove the following lemma. \begin{lemma} \label{lem-2element} For a fixed parameter $p \in (0,1)$ and a positive constant $\delta$, given a monotone family $\mc{F}\subseteq \{*,1,2\}^m$ such that $\mu_p(\mc{F}) \geq \delta$, there exists a subset $S \subseteq [m]$ such that $|S|\leq \bar{c}_p^{\slfrac{16}{(1-p)\delta}}$, for some constant $\ol{c}_p$ depending only on $p$, and two elements $F, F' \in \mc{F}$ such that for all $j \not\in S$, $(F(j), F'(j))$ is not $(1,1)$ or $(2,2)$. 
\end{lemma} \begin{proof} We first choose $\bar{c}_p = \max\{c_{p'} \mid p' \in [1-\varepsilon, 1-\slfrac{\varepsilon}{2}]\}$ where $p := 1 - \varepsilon$. By Corollary \ref{cor-follow} there is a $p' \in [1-\varepsilon, 1-\slfrac{\varepsilon}{2}]$ such that $a := \tn{as}_{p'}(\mc{F}) \leq \slfrac{4}{\varepsilon} = \slfrac{4}{(1-p)}$. Using Theorem \ref{thm-Friedgut} one can obtain a $(\slfrac{\delta}{4}, p')$-core $S$ of $\mc{F}$ of size $|S|\leq \bar{c}_p^{\slfrac{16}{\delta(1-p)}}$. By Proposition \ref{prop-coremass} and using the fact that $\mu_{p'}(\mc{F}) \geq \mu_p(\mc{F}) \geq \delta$, we get that, $$\mu_{p'}^S\left([\mc{F}]_S^{\slfrac{3}{4}}\right)\geq \delta - \slfrac{3\delta}{4} = \slfrac{\delta}{4} > 0,$$ where $[\mc{F}]_S^{\slfrac{3}{4}}$ is the core-family with respect to the measure $\mu_{p'}$. Choose an element $\ol{F} \in [\mc{F}]_S^{\slfrac{3}{4}}$. Probabilistically construct $F, F' \in \mc{F}$ as follows. For $j \in S$, set $F(j)$ and $F'(j)$ to the corresponding value $\ol{F}(j)$. For $j \not\in S$ independently sample $(F(j), F'(j))$ from $D^{p'}$. Since the marginals of $D^{p'}$ are distributed as $\mu_{p'}$, by the definition of a core-family, we have, $$\Pr[F \in \mc{F}\textnormal{ and } F' \in\mc{F}] \geq 1 - 2\left(\frac{1}{4}\right) \geq \frac{1}{2}.$$ Moreover, since $(1,1)$ and $(2,2)$ do not lie in the support of $D^{p'}$, the elements $F, F'$ satisfy the condition of the lemma. \end{proof} \subsection{Hardness Reduction} Let $\delta, \varepsilon > 0$ be parameters that we shall set later. We begin with an instance $\Phi$ of the Multi-Layered PCP from Theorem \ref{thm-multi}. The number of layers $L$ of $\Phi$ is chosen to be $\lceil 32\delta^{-2}\rceil$. The parameter $R$ shall be set later to be large enough. In the following paragraphs we describe the construction of a weighted $3$-uniform hypergraph $G$ with vertex set $\mc{H}$ a hyperedge set $\mc{E}$ and a weight function $\textnormal{wt}$ on the vertices, as an instance of {\sc ISAlmostColor}$_\varepsilon(3, 2, \slfrac{1}{\delta})$. \noindent {\bf Vertices.} Consider a variable $v \in V_l$, i.e. in the $l$th layer of $\Phi$. Let a \emph{Long Code} $\mc{H}^v$ be a copy of the set $\{1,2,*\}^{R_l}$ equipped with the measure $\mu_p$ where $p:=1-\varepsilon$. The set of vertices $\mc{H} := \cup_{1\leq l\leq L}\cup_{v\in V_l}\mc{H}^v$. The weight of any $x \in \mc{H}^v$ is, $$\textnormal{wt}(x) = \frac{\mu_p(x)}{L|V_l|}.$$ Thus, the total weight of the vertices corresponding to any layer of the PCP is $1/L$, which is equally distributed over the Long Codes of all the variables in that layer. \noindent {\bf Hyperedges.} For all variables $v \in V_l$ and $u \in V_{l'}$ ($l < l'$) such that there is a constraint $\pi_{v\rightarrow u}$ between them, add a hyperedge between all $x \in \mc{H}^{u}$ and $y, z \in \mc{H}^v$ which satisfy the following condition: For any $j \in R_l$ and $i = \pi_{v\rightarrow u}(j) \in R_{l'}$, the tuple $(x_i, y_j, z_j)$ is not $(1,1,1)$ or $(2,2,2)$. \subsection{YES Case} In the YES case, there is an assignment $\sigma$ of labels to the variables of $\Phi$ that satisfies all the constraints. Construct a partition of $\mc{H}$ into disjoint subsets $\mc{H}_1, \mc{H}_2$ and $\mc{H}_*$ as follows. For any variable $v$ of $\Phi$, add $x \in \mc{H}^v$ to $\mc{H}_{x_{\sigma(v)}}$. It is easy to see that $\textnormal{wt}(\mc{H}_*) = \varepsilon$ and $\textnormal{wt}(\mc{H}_1) = \textnormal{wt}(\mc{H}_2) = \left(\frac{1-\varepsilon}{2}\right)$. 
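The weight split claimed above can also be checked numerically; the short sketch below (with illustrative values of $\varepsilon$ and $m$ only) computes, inside a single Long Code, the $\mu_p$-mass of the points whose coordinate $\sigma(v)$ equals $*$, $1$ and $2$ respectively.
\begin{verbatim}
from itertools import product

eps, m = 0.2, 4                          # illustrative parameters only
p = 1 - eps

def mu_p(point):
    # Product measure: each coordinate is '*' w.p. 1-p and 1 or 2 w.p. p/2 each.
    w = 1.0
    for c in point:
        w *= (1 - p) if c == '*' else p / 2
    return w

sigma_v = 0                              # a hypothetical label assigned to v
mass = {'*': 0.0, 1: 0.0, 2: 0.0}
for x in product((1, 2, '*'), repeat=m):
    mass[x[sigma_v]] += mu_p(x)

print(mass)                              # ~ {'*': eps, 1: (1-eps)/2, 2: (1-eps)/2}
assert abs(mass['*'] - eps) < 1e-9
assert abs(mass[1] - (1 - eps) / 2) < 1e-9
assert abs(mass[2] - (1 - eps) / 2) < 1e-9
\end{verbatim}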
Furthermore, let $v, u$ be variables such that there is a constraint $\pi_{v\rightarrow u}$ between them. Suppose there is a hyperedge between $x \in \mc{H}^u$ and $y, z \in \mc{H}^v$. Since $\sigma$ is a satisfying assignment, $\pi_{v\rightarrow u}(\sigma(v)) = \sigma(u)$. By the construction of the hyperedges, this implies that the tuple $(x_{\sigma(u)}, y_{\sigma(v)}, z_{\sigma(v)})$ is not $(1,1,1)$ or $(2,2,2)$, and thus the hyperedge $(x,y,z)$ is not contained in $\mc{H}_1$ or in $\mc{H}_2$. Therefore, removing the set of vertices $\mc{H}_*$ of weight $\varepsilon$ and the hyperedges incident on it makes the hypergraph $2$-colorable. \subsection{NO Case} In the NO Case, assume that there is a maximal independent set $\mc{I}\subseteq \mc{H}$ of weight $\textnormal{wt}(\mc{I}) \geq \delta$. From the construction of the hyperedges, it is easy to see that any maximal independent set is monotone. Let $\mc{I}^v := \mc{I}\cap \mc{H}^v$ for any variable $v$ of $\Phi$. Thus, each $\mc{I}^v$ is a monotone family. Consider the set of variables $$U := \left\{ u \in V \mid \mu_p(\mc{I}^u) = \frac{\textnormal{wt}(\mc{I}^u)}{\textnormal{wt}(\mc{H}^u)} \geq \frac{\delta}{2}\right\}.$$ By averaging, it is easy to see that, $$\sum_{u\in U}\textnormal{wt}(\mc{H}^u)\geq \frac{\delta}{2}.$$ Another averaging shows that for at least $\frac{\delta}{4}L \geq \frac{8}{\delta}$ layers $l$, at least a $\frac{\delta}{4}$ fraction of the variables in layer $l$ belong to $U$. Applying the weak density property we obtain two layers $l < l'$ such that at least a $\frac{\delta^2}{64}$ fraction of the constraints between $V_l$ and $V_{l'}$ are between the variables in $U_l := U\cap V_l$ and $U_{l'} := U\cap V_{l'}$. The following key lemma follows from Lemma \ref{lem-2element}. \begin{lemma}\label{lem-key} For any variable $v \in U_l$ there is a subset $S^v \subseteq R_l$ of size $|S^v| \leq t(\varepsilon,\delta) := c_\varepsilon^{\slfrac{1}{\delta}}$ for some constant $c_\varepsilon > 0$ depending on $\varepsilon$, and elements $y^v, z^v \in \mc{I}^v$ such that for all $j \in R_l\setminus S^v$, the tuple $(y^v_j, z^v_j)$ is not $(1,1)$ or $(2,2)$. \end{lemma} Note that in the above, if $S^v$ is empty then $y^v$ and $z^v$ will trivially ensure a hyperedge in $\mc{I}$, so we may assume it is non-empty. Using the above lemma we can now define the labeling for each of the variables in $U_{l}$ and $U_{l'}$. \noindent {\bf Labeling for $v \in U_{l}$:} Choose a label $\rho(v) \in R_l$ uniformly at random from $S^v$. \noindent {\bf Labeling for $u \in U_{l'}$:} This choice is made depending on the labeling of variables in $U_l$. Let $N(u) \subseteq V_l$ be all the variables in $V_l$ which have a constraint with $u$. Choose a label $\lambda(u)$ defined below, $$\lambda(u) := \textnormal{argmax}_{a \in R_{l'}} \left|\{v \in N(u)\cap U_l \mid \pi_{v\rightarrow u}(\rho(v)) = a\}\right|.$$ In other words, $\lambda(u)$ is the label in $R_{l'}$ which is the projection of the maximum number of labels of the neighbors of $u$ in $U_l$. For the rest of the analysis we shall focus on one variable $u \in U_{l'}$ and its neighborhood in $U_l$, $N(u)\cap U_l$. To complete the analysis we need the following lemma proved in \cite{DGKR03}. \begin{lemma}\label{lem-inter} Let $A_1, A_2, \dots, A_N$ be a collection of $N$ sets, each of size at most $T\geq 1$. If there are not more than $D$ pairwise disjoint subsets in the collection then there must exist an element which belongs to at least $\frac{N}{TD}$ sets.
\end{lemma} Consider the collection $\{\pi_{v\rightarrow u}(S^v) \mid v \in N(u)\cap U_l\}$. Each subset in this collection is of size at most $t(\varepsilon, \delta)$. Each such subset $\pi_{v \rightarrow u}(S^v)$ rules out $\mc{I}^u$ containing any element $x^u$ with $*$ in all coordinates corresponding to $\pi_{v\rightarrow u}(S^v)$. Otherwise, $(x^u, y^v, z^v)$ would be a hyperedge in $\mc{I}$. Suppose there are $r$ pairwise disjoint subsets in this collection. This would imply that $\mu_p(\mc{I}^u) \leq \left(1 - \varepsilon^{t(\varepsilon, \delta)}\right)^r$. However, $\mu_p(\mc{I}^u) \geq \frac{\delta}{2}$. Thus, $r$ is at most $\log \left(\frac{\delta}{2}\right)/\log \left(1 - \varepsilon^{t(\varepsilon, \delta)}\right)$. Applying Lemma \ref{lem-inter}, there is an element $a$ contained in at least a $$\frac{\log \left(1 - \varepsilon^{t(\varepsilon, \delta)}\right)}{\left(t(\varepsilon, \delta)\log\left(\frac{\delta}{2}\right)\right)}$$ fraction of the subsets in the collection $\{\pi_{v\rightarrow u}(S^v) \mid v \in N(u)\cap U_l\}$. This implies that in expectation, over the choice of $\{\rho(v) \mid v \in N(u)\cap U_l\}$, $\pi_{v\rightarrow u}(\rho(v)) = a$ for at least a $$\xi(\varepsilon, \delta) := \frac{\log \left(1 - \varepsilon^{t(\varepsilon, \delta)}\right)}{\left(t(\varepsilon, \delta)^2\log\left(\frac{\delta}{2}\right)\right)}$$ fraction of $N(u)\cap U_l$. Thus, in expectation the labelings $\rho$ and $\lambda$ satisfy a $\xi(\varepsilon, \delta)\left(\frac{\delta^2}{64}\right)$ fraction of the constraints between the layers $l$ and $l'$. Choosing the parameter $R$ of $\Phi$ to be large enough, so that the soundness $1/R^\gamma$ in Theorem \ref{thm-multi} is smaller than this fraction, gives a contradiction. \section{Independent Set in $2$-Colorable $3$-Uniform Hypergraphs}\label{sec-main3} We begin with a few useful definitions and results, which can also be found in greater detail in \cite{Mossel}. The correlation between two correlated probability spaces is defined as follows. \begin{definition}\label{def-corr} Suppose $(\Omega^{(1)} \times \Omega^{(2)}, \mu)$ is a finite correlated probability space with the marginal probability spaces $(\Omega^{(1)}, \mu)$ and $(\Omega^{(2)}, \mu)$. The \emph{correlation} between these spaces is, $$\rho(\Omega^{(1)}, \Omega^{(2)}; \mu) = \tn{sup} \left\{\left|\E_{\mu}[fg]\right| \mid f\in L^2(\Omega^{(1)}, \mu), g\in L^2(\Omega^{(2)}, \mu), \E[f]= \E[g] = 0; \E[f^2], \E[g^2] \leq 1\right\}.$$ Let $(\Omega^{(1)}_i\times \Omega^{(2)}_i, \mu_i)_{i=1}^n$ be a sequence of correlated spaces. Then, $$\rho(\prod_{i=1}^n\Omega^{(1)}_i, \prod_{i=1}^n\Omega^{(2)}_i; \prod_{i=1}^n\mu_i) \leq \max_{i} \rho(\Omega^{(1)}_i, \Omega^{(2)}_i; \mu_i).$$ Further, the correlation of $k$ correlated spaces $(\prod_{j=1}^k\Omega^{(j)}, \mu)$ is defined as follows: $$\rho(\Omega^{(1)},\Omega^{(2)}, \dots, \Omega^{(k)}; \mu) := \max_{1\leq i\leq k} \rho\left(\prod_{j=1}^{i-1}\Omega^{(j)}\times \prod_{j=i+1}^k\Omega^{(j)}, \Omega^{(i)}; \mu\right).$$ \end{definition} \begin{lemma}\label{lem-corbd} Let $(\Omega^{(1)}\times\Omega^{(2)}, \mu)$ be two correlated spaces such that the probability of the smallest atom in $(\Omega^{(1)}\times\Omega^{(2)}, \mu)$ is at least $\alpha \in (0,\slfrac{1}{2}]$. Define a bipartite graph between $\Omega^{(1)}$ and $\Omega^{(2)}$ with an edge between $(a,b) \in \Omega^{(1)}\times\Omega^{(2)}$ if $\mu(a,b) > 0$.
If this graph is connected then, $$\rho(\Omega^{(1)}, \Omega^{(2)}; \mu) \leq 1 - \slfrac{\alpha^2}{2}.$$ \end{lemma} \noindent We shall also refer to the following Gaussian stability measures in our analysis. \begin{definition} Let $\Phi : \R\mapsto [0,1]$ be the cumulative distribution function of the standard Gaussian. For a parameter $\rho$, define, $$\underline{\Gamma}_\rho(\mu, \nu) = \Pr[X \leq \Phi^{-1}(\mu), Y \geq \Phi^{-1}(1- \nu)],$$ $$\ol{\Gamma}_\rho(\mu, \nu) = \Pr[X \leq \Phi^{-1}(\mu), Y \leq \Phi^{-1}(\nu)],$$ where $X$ and $Y$ are two standard Gaussian variables with covariance $\rho$. \end{definition} The Bonami-Beckner operator is defined as follows. \begin{definition} Given a probability space $(\Omega, \mu)$ and $\rho \geq 0$, consider the space $(\Omega\times \Omega, \mu')$ where $\mu'(x,y) = (1-\rho)\mu(x)\mu(y) + \rho\mathbf{1}_{\{x=y\}}\mu(x)$, where $\mathbf{1}_{\{x=y\}} = 1$ if $x=y$ and $0$ otherwise. The Bonami-Beckner operator $T_\rho$ is defined by, $$ (T_\rho f)(x) = \E_{(X,Y)\leftarrow \mu'}\left[f(Y) \mid X = x\right].$$ For product spaces $(\prod_{i=1}^n \Omega_i, \prod_{i=1}^n \mu_i)$, the Bonami-Beckner operator $T_\rho = \otimes_{i=1}^n T^i_\rho$, where $T^i_\rho$ is the operator for the $i$th space $(\Omega_i, \mu_i)$. \end{definition} \noindent By Proposition 2.12 and 2.13 of \cite{Mossel} and using Lemma 2.4 of \cite{Hastad12} we have the following lemma. \begin{lemma}\label{lem-Efron-damp} Let $(\Omega^{(1)}_i\times \Omega^{(2)}_i, \mu_i)_{i=1}^n$ be a sequence of correlated spaces with $\rho_i = \rho(\Omega^{(1)}_i, \Omega^{(2)}_i; \mu_i)$. Let $f : \prod_{i=1}^n\Omega^{(1)}_i \mapsto \R$ and $g : \prod_{i=1}^n\Omega^{(2)}_i \mapsto \R$, and let $g = \sum_{S} g_S$ be the Efron-Stein decomposition of $g$ (refer to \cite{Mossel} for a definition). Then, $$ \E[f(x)g_S(y)] \leq \|f\|_2\|g\|_2\prod_{i\in S}\rho_i.$$ If the Efron-Stein decomposition of $g$ contains only functions of weight at least $s$ and $\rho = \max_i\rho_i$, then, $$ \E[f(x)g(y)] \leq \rho^s\|f\|_2\|g\|_2.$$ The above also implies for the Bonami-Beckner operator $T_\rho$ that, $$\|T_\rho f\|_2 \leq \rho^s\|f\|_2,$$ if the Efron-Stein decomposition of $f$ contains functions of weight at least $s$. \end{lemma} The influence of a function on a product space is defined as follows. \begin{definition} Let $f$ be a measurable function on $(\prod_{i=1}^n\Omega_i, \prod_{i=1}^n\mu_i)$. The influence of the $i$th coordinate on $f$ is: $$\tn{Inf}_i(f) = \displaystyle \E_{\{x_j | j \neq i\}}\left[\tn{Var}_{x_i}\left[f(x_1, x_2, \dots, x_i, \dots, x_n)\right]\right].$$ \end{definition} In particular, if $f : \{-1,1\}^n \mapsto \R$, and $f = \sum_{\alpha\subseteq [n]}\wh{f}_\alpha\chi_\alpha$ is its Fourier decomposition, then $\tn{Inf}_i(f) = \sum_{\alpha: i \in \alpha}\wh{f}_\alpha^2$. The following key results in Mossel's work~\cite{Mossel} shall be used in the analysis of our reduction. We first restate Lemma 6.2 of \cite{Mossel}. \begin{lemma} \label{lem-Mossel-big} Let $(\Omega_1^{(j)}, \dots, \Omega_n^{(j)})_{j=1}^k$ be $k$ collections of finite probability spaces such that $\{\prod_{j=1}^k\Omega^{(j)}_i \mid i=1,\dots, n\}$ are independent. Suppose further that it holds for all $i = 1, \dots, n$ that $\rho(\Omega^{(j)}_i : 1\leq j\leq k) \leq \rho$. 
Then there exists an absolute constant $C$ such that for $$\gamma = C\frac{(1- \rho)\nu}{\log \left(\slfrac{1}{\nu}\right)},$$ and $k$ functions $\left\{f_j \in L^2(\prod_{i=1}^n\Omega^{(j)}_i) \right\}_{j=1}^k$, the following holds, $$\displaystyle \left|\E\left[\prod_{j=1}^kf_j\right] - \E\left[\prod_{j=1}^kT_{1-\gamma}f_j\right]\right| \leq \nu\sum_{j=1}^k\sqrt{\tn{Var}[f_j]}\sqrt{\tn{Var}\left[\prod_{j'<j} T_{1-\gamma} f_{j'}\prod_{j'>j}f_{j'}\right]}.$$ \end{lemma} Our analysis shall also utilize the following bi-linear Gaussian stability bound from \cite{Mossel} to locate influential coordinates. \begin{theorem}\label{thm-Mossel-bilinear} Let $(\Omega^{(1)}_i\times \Omega^{(2)}_i, \mu_i)$ be a sequence of correlated spaces such that for each $i$, the probability of any atom in $(\Omega^{(1)}_i\times \Omega^{(2)}_i, \mu_i)$ is at least $\alpha \leq \slfrac{1}{2}$ and such that $\rho(\Omega^{(1)}_i, \Omega^{(2)}_i; \mu_i) \leq \rho$ for all $i$. Then there exists a universal constant $C$ such that, for every $\nu > 0$, taking $$ \tau = \tn{exp}\left(-C\frac{\log(\slfrac{1}{\alpha}) \log(\slfrac{1}{\nu})}{\nu(1-\rho)}\right),$$ for functions $f: \prod_{i=1}^n\Omega^{(1)}_i\mapsto [0,1]$ and $g: \prod_{i=1}^n\Omega^{(2)}_i\mapsto [0,1]$ that satisfy, $$\min\left(\tn{Inf}_i(f), \tn{Inf}_i(g)\right) \leq \tau,$$ for all $i$, we have, $$\underline{\Gamma}_\rho(\E[f], \E[g]) - \nu \leq \E[fg] \leq \ol{\Gamma}_\rho(\E[f], \E[g]) + \nu.$$ \end{theorem} Before describing the hardness reduction we define the following useful distribution and state its properties. \subsubsection*{Distribution $\mc{D}_{\delta, r}$} We define the probability measure $\mc{D}_{\delta,r}$ over the random variables $(X, Y = \{Y_i\}_{i=1}^r, Z = \{Z_i\}_{i=1}^r)$, where $X, Y_i, Z_i \in \{-1,1\}$. A tuple $(X, Y, Z)$ is sampled from $\mc{D}_{\delta, r}$ by first choosing $X, Y_1, \dots, Y_r \in \{-1,1\}$ independently and uniformly at random, and setting each $Z_i = -Y_i$. Finally, with probability $\delta$, $j \in [r]$ is chosen u.a.r. and $Y_j$ and $Z_j$ are both set to $-X$. Let $X$, $Y$ and $Z$ define the correlated probability spaces $\Omega^{(1)}$, $\Omega^{(2)}$ and $\Omega^{(3)}$ respectively with the joint probability measure $\mc{D}_{\delta, r}$. Note that the marginal probability spaces $(\Omega^{(2)}, \mc{D}_{\delta, r})$ and $(\Omega^{(3)}, \mc{D}_{\delta, r})$ are identical. Also, for $i \neq j \in [r]$, $Y_i$ is independent of $Y_j$ and $Z_j$. It is easy to see the following lemma. \begin{lemma}\label{lem-correlation} For any probability $\delta$ and integer $r > 0$,\\ (i) The minimum probability of an atom in $\mc{D}_{\delta, r}$ is at least $\xi := \frac{\delta}{r2^r}$. \\ (ii) $\rho(\Omega^{(1)}, \Omega^{(2)}\times \Omega^{(3)}; \mc{D}_{\delta, r}) \leq \delta$. \\ (iii) $\rho(\Omega^{(1)} \times \Omega^{(2)}, \Omega^{(3)}; \mc{D}_{\delta, r}) \leq 1 - \slfrac{\xi^2}{2} = 1 - \frac{\delta^2}{r^2 2^{2r+1}}$. \\ (iv) $\rho(\Omega^{(2)}, \Omega^{(3)}; \mc{D}_{\delta, r}) \leq 1 - \xi^2/2$. \\ (v) $\rho(\Omega^{(1)}, \Omega^{(2)}, \Omega^{(3)}; \mc{D}_{\delta, r}) \leq 1 - \slfrac{\xi^2}{2}$. \end{lemma} \begin{proof} The first part can be computed by observing that the atom in $\mc{D}_{\delta, r}$ with minimum probability is the one in which there is a $j \in [r]$ such that $Y_j = Z_j$, and this atom has probability $\xi$ as defined. The second part is immediate since $X$ is independent of $(Y, Z)$ with probability $1-\delta$.
The third and fourth parts follow from (i) and by showing that Lemma \ref{lem-corbd} is applicable, which can be inferred in a manner similar to the proof of connectedness in \cite{OW}. We omit the details here. The fifth part follows from Definition \ref{def-corr}. \end{proof} In the rest of this section we shall sometimes omit writing the joint distribution along with $\Omega^{(1)}, \Omega^{(2)}$ and $\Omega^{(3)}$, as it will be clear from the context. \subsection{Hardness Reduction}\label{sec-redn-2color} We begin with an instance $\Phi$ from Theorem \ref{thm-dto1multi} with the number of layers $L = \lceil 32\varepsilon^{-2}\rceil$, for a parameter $\varepsilon > 0$ which denotes the weight of the independent set in the NO Case. \subsubsection{Construction of $G(H, E)$} We continue with the construction of the instance $G(H, E)$, a $3$-uniform hypergraph. The construction uses a parameter $\delta$ which we shall fix later. \noindent {\bf Vertices.} Consider a variable $v$ of $\Phi$ in layer $l$. Let $H^v$ be a copy of $\{-1, 1\}^{R_l}$. The vertex set $H := \cup_{l\in [L]}\cup_{v \in V_l}H^v$. The weight of a vertex $x \in H^v$ for $v \in V_l$ is $2^{-|R_l|}/(L|V_l|)$. Thus, the total weight of all the vertices corresponding to a particular layer is $1/L$. \noindent {\bf Hyperedges.} Consider two variables $v \in V_l$ and $u \in V_{l'}$ with a constraint $\pi_{v\rightarrow u}$ between them. Note that for every $i \in R_{l'}$, $\left|\pi_{v\rightarrow u}^{-1}(i)\right| = d^{l'-l}$. For convenience, we let $r = d^{l'-l}$, and dropping the subscript we shall refer to the projection simply as $\pi$. Let $x \in H^{u}$ and $y, z \in H^{v}$ be chosen by sampling $(x_i, y|_{\pi^{-1}(i)}, z|_{\pi^{-1}(i)})$ from $(\Omega^{(1)}\times \Omega^{(2)}\times \Omega^{(3)}; \mc{D}_{\delta, r})$ independently for each $i \in R_{l'}$. Let $\mc{D}^{vu}$ denote the probability distribution of the choice of $(x, y, z)$. For all such $(x, y, z)$ in the support of $\mc{D}^{vu}$ add a hyperedge between these three vertices $x, y$ and $z$. \subsection{YES Case} In the YES Case, let $\sigma$ be the labeling to the variables that satisfies all constraints in $\Phi$. For every vertex $x \in H^v$ for a variable $v$ in layer $l$, color $x$ with $x_{\sigma(v)}$. It is easy to see from the above construction of the hyperedges that this is a valid $2$-coloring of the hypergraph. \subsection{NO Case} Suppose that there is an independent set of weight $\varepsilon > 0$. For a variable $v$ of $\Phi$, let $f_v$ be the indicator of the independent set in the Long Code $H^v$. Call a variable $v$ \emph{heavy} if $\E[f_v] \geq \frac{\varepsilon}{2}$. After averaging and arguments analogous to those in Section \ref{sec-multi-No} we obtain two layers $l < l'$ such that the \emph{heavy} variables in these two layers induce at least a $\frac{\varepsilon^2}{64}$ fraction of the constraints between these two layers. As before, we set $r = d^{l'-l}$. Also, we shall denote $R_{l'}$ by $R_1$ and $R_{l}$ by $R_2$. We need to show that, \begin{equation} \E_{v,u}\left[\E_{(x,y,z)\leftarrow \mc{D}^{vu}}\left[f_u(x)f_v(y)f_v(z)\right]\right] > 0, \label{eqn-toshow} \end{equation} where the outer expectation is over pairs of heavy variables $v \in V_l$ and $u \in V_{l'}$ which share a constraint. The analysis consists of two main steps.
In the first step we show that unless $f_u$ and $f_v$ share influential coordinates, one can re-randomize the $x$ variable to be independent in the inner expectation of Equation \eqref{eqn-toshow}. However, the notion of influence of $f_v$ used in this step depends on the choice of $u$. The second step shows that for a non-trivial fraction of heavy neighbors $u$ of $v$, the notion of influence used in the first step can be made independent of $u$. In addition it shows that for these $u$, the marginal expectation $\E[f_v(y)f_v(z)]$ induced by $\mc{D}^{vu}$ is bounded away from zero. This step crucially uses the smoothness property of the PCP. \subsubsection{Making $x$ independent} Let us fix a pair of heavy variables $v, u$ which share a constraint $\pi$. For convenience we shall think of the distribution $\mc{D}^{vu}$ as being on $\otimes_{i \in R_1}(x_i, y|_{\pi^{-1}(i)}, z|_{\pi^{-1}(i)})$, where each $(x_i, y|_{\pi^{-1}(i)}, z|_{\pi^{-1}(i)})$ is sampled independently from $(\Omega^{(1)}\times \Omega^{(2)}\times \Omega^{(3)}; \mc{D}_{\delta, r})$. We represent the space of $(x_i, y|_{\pi^{-1}(i)}, z|_{\pi^{-1}(i)})$ by the correlated space $(\Omega^{(1)}_i \times \Omega^{(2)}_i\times \Omega^{(3)}_i)$, which is an independent copy of $(\Omega^{(1)}\times \Omega^{(2)}\times \Omega^{(3)})$. Thus, the space of $\otimes_{i\in R_1}(x_i, y|_{\pi^{-1}(i)}, z|_{\pi^{-1}(i)})$ is $\prod_{i \in R_1} (\Omega^{(1)}_i \times \Omega^{(2)}_i\times \Omega^{(3)}_i)$. The $i$th coordinate influence of a function $f$ on $\prod_{i \in R_1}\Omega^{(2)}_i = \prod_{i \in R_1}\Omega^{(3)}_i$ is denoted by $\ol{\tn{Inf}}_i(f)$. The probability measure on all these spaces is induced by $\mc{D}^{vu}$. Using the above and since the functions $f_u$ and $f_v$ are all in the range $[0,1]$, we have the following lemma which follows from Lemma \ref{lem-correlation} and Lemma \ref{lem-Mossel-big}. \begin{lemma}\label{lem-addnoise} There is a universal constant $C$ such that for an arbitrarily small choice of $\nu > 0$, letting $\gamma = C\frac{\nu\xi^2}{2\log(1/\nu)}$, the following holds, \begin{equation} \left| \E[f_u(x)f_v(y)f_v(z)] - \E[T_{1-\gamma}f_u(x)\ol{T}_{1-\gamma}f_v(y)\ol{T}_{1-\gamma}f_v(z)]\right| \leq \nu, \end{equation} \begin{equation} \left| \E[f_v(y)f_v(z)] - \E[\ol{T}_{1-\gamma}f_v(y)\ol{T}_{1-\gamma}f_v(z)]\right| \leq \nu, \end{equation} where $T_{1-\gamma}$ is the Bonami-Beckner operator over $\{-1,1\}^{R_1} = \prod_{i\in R_1}\Omega^{(1)}_i$ and $\ol{T}_{1-\gamma}$ is the Bonami-Beckner operator over the space $\prod_{i\in R_1}\Omega^{(2)}_i = \prod_{i \in R_1}\Omega^{(3)}_i$. To be precise, $\ol{T}_{1-\gamma}$ resamples from each $\Omega^{(2)}_i$ independently with probability $\gamma$. Note that $\ol{T}_{1-\gamma}$ depends on the constraint $\pi$ and hence on the choice of $u$. \end{lemma} Using a value of $\gamma$ which we shall obtain from the above lemma, consider the function $ F(y,z) = \ol{T}_{1-\gamma}f_v(y)\ol{T}_{1-\gamma}f_v(z)$ over the space $\prod_{i \in R_1}(\Omega^{(2)}_i\times \Omega^{(3)}_i)$. For the time being let $f'$ denote $\ol{T}_{1-\gamma}f_v$ and $f'_i$ denote the function $f'$ depending only on the $i$th space $\Omega^{(2)}_i = \Omega^{(3)}_i$, with the fixing of the rest of the coordinates being clear from the context. Thus, $F(y,z) = f'(y)f'(z)$.
The $i$th influence of $F$ in the space $\prod_{i \in R_1}(\Omega^{(2)}_i\times \Omega^{(3)}_i)$ can be written as: \begin{eqnarray} \displaystyle \ol{\tn{Inf}}_i(F) & = & \frac{1}{2}\E_{\substack{(y|_{\pi^{-1}(j)}, z|_{\pi^{-1}(j)})\leftarrow (\Omega^{(2)}_j\times \Omega^{(3)}_j) \\ j \in R_1\setminus\{i\}}} \Big[ \nonumber \\ & & \E_{((Y_1, Z_1), (Y_2, Z_2))\leftarrow (\Omega^{(2)}\times \Omega^{(3)})^2}\left[(f_i'(Y_1)f_i'(Z_1) - f_i'(Y_2)f_i'(Z_2))^2\right]\Big]. \label{eqn-infprod} \end{eqnarray} The following inequality was proved in Lemma 4 of the work of Samorodnitsky and Trevisan~\cite{ST-Gowers}. \begin{lemma}\label{lem-aibi} Let $a_1, a_2, b_1, b_2 \in [-1,1]$. Then, $(a_1a_2 - b_1b_2)^2 \leq 2\left((a_1 - b_1)^2 + (a_2 - b_2)^2\right)$. \end{lemma} Using the above lemma we obtain the following bound. \begin{lemma}\label{lem-prodinfbd} From the definitions used above, $$\displaystyle \ol{\tn{Inf}}_i(F) \leq 4\ol{\tn{Inf}}_i(f').$$ \end{lemma} \begin{proof} Using Lemma \ref{lem-aibi} and the fact that $f'$ is bounded in $[0,1]$ we can upper bound $\ol{\tn{Inf}}_i(F)$ in Equation \eqref{eqn-infprod} by \begin{eqnarray} & & \frac{1}{2}\E_{\substack{(y|_{\pi^{-1}(j)}, z|_{\pi^{-1}(j)})\leftarrow (\Omega^{(2)}_j\times \Omega^{(3)}_j) \\ j \in R_1\setminus\{i\}}} \Big[ \nonumber \\ && \E_{((Y_1, Z_1), (Y_2, Z_2))\leftarrow (\Omega^{(2)}\times \Omega^{(3)})^2}\left[(f_i'(Y_1) - f_i'(Y_2))^2 + (f_i'(Z_1) - f_i'(Z_2))^2\right]\Big] \nonumber \\ & = & 4\ol{\tn{Inf}}_i(f').\nonumber \end{eqnarray} \end{proof} We also have the following lemma. \begin{lemma}\label{lem-blocktobinary} Let $\tn{Inf}_j$ be the $j$th coordinate influence over the space $\{-1,1\}^{R_2}$ equipped with the uniform measure. Then, for $i \in R_1$, $\ol{\tn{Inf}}_i(f') \leq r\sum_{j\in \pi^{-1}(i)} \tn{Inf}_j(f')$. \end{lemma} \begin{proof} By the definition of influence, the LHS of the assertion can be written as, \begin{equation} \frac{1}{2}\E_{\substack{y|_{\pi^{-1}(j)}\leftarrow \Omega^{(2)}_j \\ j \in R_1\setminus\{i\}}} \left[ \E_{(Y^0, Y^r)\leftarrow (\Omega^{(2)})^2}\left[(f'_i(Y^0) - f'_i(Y^r))^2\right]\right]. \end{equation} Order the coordinates in $\pi^{-1}(i)$ as $1,\dots, r$ and define (depending on the choice of $Y^0$ and $Y^r$) a sequence $Y^1, \dots, Y^{r-1}$ where $Y^k$ contains the value of the first $r - k$ coordinates from $Y^0$ and the rest from $Y^r$. Letting $R_2' := R_2\setminus \pi^{-1}(i)$, the above expression can be rewritten as, \begin{eqnarray} & & \frac{1}{2}\E_{y|_{R_2'} \leftarrow \{-1,1\}^{R_2'}}\left[\E_{(Y^0, Y^r)\leftarrow (\{-1,1\}^r)^2}\left[\left(\sum_{k=0}^{r-1}\left(f_i'(Y^k) - f_i'(Y^{k+1})\right)\right)^2\right]\right] \nonumber \\ & \leq & \frac{1}{2}\E_{y|_{R_2'} \leftarrow \{-1,1\}^{R_2'}}\left[\E_{(Y^0, Y^r)\leftarrow (\{-1,1\}^r)^2}\left[r\sum_{k=0}^{r-1}\left(f_i'(Y^k) - f_i'(Y^{k+1})\right)^2\right]\right] \nonumber \\ & = & r\sum_{j\in \pi^{-1}(i)} \tn{Inf}_j(f'), \end{eqnarray} where we used the Cauchy-Schwarz inequality to obtain the first inequality. \end{proof} The following lemma uses the above analysis to show that $x$ can be made independent of $y$ and $z$ without incurring much loss, unless $f_u$ and $f_v$ have matching influential coordinates.
\begin{lemma}\label{lem-main-inf} There is a universal constant $C$ such that for an arbitrarily small constant $\nu > 0$, and $$\gamma = \frac{\nu\xi^2}{2\log(1/\nu)} \ \ , \ \ \tau = \nu^{C\frac{\log(1/\xi)\log(1/\nu)}{\nu(1-\delta)}},$$ unless there is $i \in R_1$ such that, \begin{equation} \min(\tn{Inf}_i(T_{1-\gamma}f_u), 4r\sum_{j\in \pi^{-1}(i)} \tn{Inf}_j(\ol{T}_{1-\gamma}f_v)) \geq \tau, \label{eqn-inf-cond} \end{equation} we have, $$\E[f_u(x)f_v(y)f_v(z)] \geq \underline{\Gamma}_{\delta}(\E[f_u], \E[f_v(y)f_v(z)] - \nu) - 2\nu.$$ \end{lemma} \begin{proof} Suppose that there exists no $i \in R_1$ as in the condition of the lemma. Using Lemmas \ref{lem-prodinfbd} and \ref{lem-blocktobinary} our supposition implies that there exists no $i \in R_1$ such that, $$\min(\tn{Inf}_i(T_{1-\gamma}f_u), \ol{\tn{Inf}}_i(F)) \geq \tau,$$ where $F(y,z)$ was defined as $\ol{T}_{1-\gamma}f_v(y)\ol{T}_{1-\gamma}f_v(z)$. Using Theorem \ref{thm-Mossel-bilinear} and Lemma \ref{lem-correlation} the above implies, \begin{eqnarray} \displaystyle \E[T_{1-\gamma}f_u(x)\ol{T}_{1-\gamma}f_v(y)\ol{T}_{1-\gamma}f_v(z)] & \geq & \underline{\Gamma}_{\delta} \left(\E[T_{1-\gamma}f_u(x)], \E[\ol{T}_{1-\gamma}f_v(y)\ol{T}_{1-\gamma}f_v(z)]\right) - \nu \nonumber \\ & = & \underline{\Gamma}_{\delta} \left(\E[f_u(x)], \E[\ol{T}_{1-\gamma}f_v(y)\ol{T}_{1-\gamma}f_v(z)]\right) - \nu. \end{eqnarray} Using Lemma \ref{lem-addnoise} the above implies that \begin{equation} \E[f_u(x)f_v(y)f_v(z)] \geq \underline{\Gamma}_{\delta}\left(\E[f_u(x)], \E[f_v(y)f_v(z)] - \nu \right) - 2\nu. \end{equation} \end{proof} Note that there are two issues that are left to resolve. Firstly, we need to lower bound $\E[f_v(y)f_v(z)]$. Secondly, $\tn{Inf}_i(\ol{T}_{1-\gamma}(f_v))$ depends on the choice of $u$. We shall identify a significant fraction of heavy neighbors $u$ of $v$, for which the expectation is bounded as well as $\tn{Inf}_i(\ol{T}_{1-\gamma}(f_v)) \approx \tn{Inf}_i(T_{1-\gamma}(f_v))$, the latter being independent of $u$. For this we shall utilize the smoothness property of the PCP. \subsubsection{Identifying \emph{good} neighbors $u$} Let us first set a parameter $s$ as, $$s := \max\left(\frac{r}{\xi}\ln\left(\frac{1}{\nu}\right), \frac{r}{2\gamma}\ln\left(\frac{32r^2}{\tau}\right)\right).$$ Let the Efron-Stein decomposition of $f_v$ with respect to $\{-1,1\}^{R_2}$ be, \begin{equation} f_v = \sum_{\alpha \subseteq R_2}\wh{f}_{v,\alpha}\chi_\alpha. \label{eqn-Efron1} \end{equation} It can be seen (see \cite{Hastad12}) that the Efron-Stein decomposition of $f_v$ with respect to $\prod_{i\in R_1}\Omega^{(2)}_i$ is, \begin{equation} f_v = \sum_{\beta \subseteq R_1}f_v^\beta, \label{eqn-Efron2} \end{equation} where, \begin{equation} f_v^\beta = \sum_{\substack{\alpha \subseteq R_2 \\ \pi(\alpha) = \beta}} \wh{f}_{v, \alpha}\chi_\alpha \label{eqn-Efron3} \end{equation} We say that a subset $\alpha$ is \emph{shattered} by $\pi = \pi_{v\rightarrow u}$ if $|\pi(\alpha)| = |\alpha|$. Using this we decompose $f_v$ into three functions, depending on the choice of $u$, as follows \begin{eqnarray} f_1 & = & \sum_{\alpha : |\alpha|\geq s}\wh{f}_{v,\alpha}\chi_\alpha \label{eqn-f1} \\ f_2 & = & \sum_{\substack{\alpha : |\alpha| < s \\ \alpha \tn{ not shattered}}}\wh{f}_{v,\alpha}\chi_\alpha \label{eqn-f2} \\ f_3 & = & \sum_{\substack{\alpha : |\alpha| < s \\ \alpha \tn{ shattered}}}\wh{f}_{v,\alpha}\chi_\alpha \label{eqn-f3} \end{eqnarray} To identify the \emph{good} neighbors of $v$, we need the following key lemma. 
\begin{lemma}\label{lem-shatter} With expectation taken over a random neighbor $u \in V_{l'}$ which shares a constraint with $v$, $\E[\|f_2\|_2] \leq (s/\sqrt{T})$. Here $T$ is the smoothness parameter from Theorem \ref{thm-dto1multi}. \end{lemma} \begin{proof} For a given $\alpha \subseteq R_2$ such that $|\alpha| < s$, the probability (over $u$) that it is not shattered is at most $$\sum_{i\neq j \in \alpha} \Pr[\pi_{v\rightarrow u}(i) = \pi_{v\rightarrow u}(j)] \leq \frac{s^2}{T}.$$ Since $\sum \wh{f}_{v, \alpha}^2 \leq 1$, we obtain that, $$\E[\|f_2\|_2] \leq \left(\E[\|f_2\|_2^2]\right)^{1/2} \leq \frac{s}{\sqrt{T}}.$$ \end{proof} The above lemma implies that for at least a $1 - (s^2/T)^{1/4}$ fraction of the neighbors $u \in V_{l'}$ of $v$, we have $\|f_2\|_2 \leq (s^2/T)^{1/4}$. We call the neighbors $u$ of $v$ which satisfy this bound \emph{good}. \subsubsection*{Lower bounding $\E[f_v(y)f_v(z)]$} We first set $\eta = \frac{2\delta}{r}$. It is easy to see that for any $j \in R_2$, $\E[y_jz_j] = -1\left(1-\frac{\eta}{2}\right) + \frac{\eta}{2} = -1 + \eta$. We shall first lower bound $\E[f_v(y)T_{1 - \eta}f_v(-y)]$. We shall need the following lemma from \cite{MORSS06}, which is obtained using the \emph{reverse} hypercontractive inequality over the boolean domain. \begin{lemma}\label{lem-Hatami} Let $A, B \subseteq \{-1, 1\}^n$ have relative densities, $$\frac{|A|}{2^n} = e^{-a^2/2} \ \ \ \ \ \ \ \ \ \ \frac{|B|}{2^n} = e^{-b^2/2},$$ and let $y \in \{-1,1\}^n$ be uniform and $y'$ be a $\rho$-correlated copy of $y$, i.e. $\E[y_iy'_i] = \rho,$ independently for each $i \in [n]$, for some $\rho > 0$. Then, \begin{equation} \Pr[y \in A, y' \in B] \geq \exp\left[-\frac{1}{2}\cdot \frac{a^2 + b^2 + 2\rho ab}{1 - \rho^2}\right]. \end{equation} \end{lemma} Since $f_v$ is an indicator function, let $A = \{y \mid f_v(y) = 1\}$. As $v$ was chosen to be heavy, we have $\E[f_v] \geq \frac{\varepsilon}{2}$. Let $B = -A$, i.e. $B = \{-y \mid y \in A\}$. It is easy to see that \begin{equation} \E[f_v(y)T_{1 - \eta}f_v(-y)] = \Pr[y \in A, y' \in B], \end{equation} where $y'$ is a $(1-\eta)$-correlated copy of $y$. Using Lemma \ref{lem-Hatami} we obtain, \begin{equation} \E[f_v(y)T_{1 - \eta}f_v(-y)] \geq \left(\frac{\varepsilon}{2}\right)^{4/\eta}. \label{eqn-folk} \end{equation} The following two lemmas decompose two expectations we are interested in. \begin{lemma}\label{lem-decomp-1} Using the decompositions above, \begin{equation} \left|\E[f_v(y)T_{1 - \eta}f_v(-y)] - \E[f_3(y)T_{1 - \eta}f_3(-y)]\right| \leq 2\|f_2\|_2 + 2\nu. \end{equation} \end{lemma} \begin{proof} By Lemma \ref{lem-Efron-damp} and Equation \eqref{eqn-Efron1}, we have $$|\E[f_v(y)T_{1 - \eta}f_1(-y)]| \leq \|f_v\|_2\|f_1\|_2(1-\eta)^s \leq \nu,$$ by our setting of $s$ and since $\|f_v\|_2, \|f_1\|_2 \leq 1$. Furthermore, $$|\E[f_v(y)T_{1 - \eta}f_2(-y)]| \leq \|f_v\|_2\|f_2\|_2 \leq \|f_2\|_2.$$ We can repeat the above with $\E[f_v(y)T_{1 - \eta}f_3(-y)]$, using the fact that $\|T_{1 - \eta}f_3(-y)\|_2 \leq 1$, to obtain the lemma. \end{proof} \begin{lemma}\label{lem-decomp-2} Using the decompositions above and having $(y, z)$ sampled from $(\prod_{i \in R_1}(\Omega^{(2)}_i\times \Omega^{(3)}_i); \mc{D}^{vu})$, \begin{equation} \left|\E[f_v(y)f_v(z)] - \E[f_3(y)f_3(z)]\right| \leq 2\|f_2\|_2 + 2\nu.
\end{equation} \end{lemma} \begin{proof}Using the bound (iv) of Lemma \ref{lem-correlation}, the decomposition in Equations \eqref{eqn-Efron2} and \eqref{eqn-Efron3}, and Lemma \ref{lem-Efron-damp} we obtain, $$|\E[f_v(y)f_1(z)]| \leq \|f_v\|_2\|f_1\|_2(1-\eta)^{s/r} \leq \nu,$$ by our setting of $s$. The rest of the proof is analogous to that of Lemma \ref{lem-decomp-1}. \end{proof} Note that $y_i$ is independent of $y_j$ and $z_j$ for $i \neq j \in R_2$. Also, when sampling $z$ given $y$ the coordinates in a shattered subset $\alpha$ are flipped independently with probability $1 - \frac{\eta}{2}$. Thus, $$ \E[f_3(y)f_3(z)] = \sum_{\substack{\alpha : |\alpha| < s \\ \alpha \tn{ shattered}}}\wh{f}_{v,\alpha}^2(-1 + \eta)^{|\alpha|} = \E[f_3(y)T_{1 - \eta}f_3(-y)].$$ From the above analysis, Lemma \ref{lem-shatter}, and Equation \eqref{eqn-folk}, we have that for all good neighbors $u$ of $v$, \begin{equation} \E[f_v(y)f_v(z)] \geq \left(\frac{\varepsilon}{2}\right)^{4/\eta} - 4\left(\frac{s^2}{T}\right)^{1/4} - 4\nu, \end{equation} where $y$ and $z$ are sampled according to $\mc{D}^{vu}$. \subsubsection*{Showing $\tn{Inf}_i(\ol{T}_{1-\gamma}f_v) \approx \tn{Inf}_i(T_{1-\gamma}f_v)$} Recall that $\ol{T}_{1-\gamma}$ is the Bonami-Beckner operator on the space $\prod_{i \in R_1}\Omega^{(2)}_i$ and $T_{1-\gamma}$ is over $\{-1,1\}^{R_2}$ equipped with the uniform measure. Let $h = \ol{T}_{1-\gamma}f_v$ and $g = T_{1-\gamma}f_v$. Define the functions $h_i := \ol{T}_{1-\gamma}f_i$ and $g_i := T_{1-\gamma}f_i$ for $i=1,2,3$. Since the operators $\ol{T}_{1-\gamma}$ and $T_{1-\gamma}$ are contractions, by Lemma \ref{lem-shatter} we have that for good neighbors $u$, $\|h_2\|_2, \|g_2\|_2 \leq (s^2/T)^{1/4}$. Also, by Lemma \ref{lem-Efron-damp} and the Efron-Stein decompositions of $f_v$ (Equations \eqref{eqn-Efron1}, \eqref{eqn-Efron2} and \eqref{eqn-Efron3}), we obtain: $\|h_1\|_2 \leq (1- \gamma)^{s/r}$ and $\|g_1\|_2 \leq (1- \gamma)^s$. By our setting of $s$, we get $\|h_1\|_2^2, \|g_1\|_2^2 \leq \frac{\tau}{32r^2}$. For a subset $\alpha$ which is shattered, it is easy to see that $\wh{h}_\alpha = \wh{g}_\alpha = \wh{f}_{v, \alpha}(1 - \gamma)^{|\alpha|}$. Using the definition of influence over the domain $\{-1,1\}^{R_2}$ we obtain the following lemma. \begin{lemma}\label{lem-infsame} For any $i \in R_2$, $$\left|\tn{Inf}_i(\ol{T}_{1-\gamma}f_v) - \tn{Inf}_i(T_{1-\gamma}f_v)\right| \leq 2\left(\frac{s^2}{T}\right)^{1/4} + \frac{\tau}{16r^2}.$$ \end{lemma} \noindent {\bf Choice of Parameters.} Given $\varepsilon > 0$, fix $\delta \in (0, 1/2)$, which also fixes $\eta$. The choice of $L$ made at the beginning of Section \ref{sec-redn-2color} is fixed and therefore the maximum possible value of $r$ is also fixed. Choose $\nu$ small enough so that \begin{equation} \underline{\Gamma}_{\delta} \left(\frac{\varepsilon}{2}, \frac{1}{2}\left(\frac{\varepsilon}{2}\right)^{4/\eta} - 5\nu \right) - 2\nu > 0.\label{eqn-nonzero} \end{equation} This also fixes the choice of $\gamma$ and $\tau$ by Lemma \ref{lem-main-inf}, and the choice of $s$ as defined above.
Then choose $T$ to be large enough so that $$4\left(\frac{s^2}{T}\right)^{1/4} \leq \min\left\{\frac{1}{2}\left(\frac{\varepsilon}{2}\right)^{4/\eta}, \frac{\varepsilon^2}{128}\right\},$$ and, $$2\left(\frac{s^2}{T}\right)^{1/4} \leq \frac{\tau}{16r^2}.$$ The above setting implies that for all good neighbors $u$ of $v$, \begin{equation} \E[f_v(y)f_v(z)] \geq \frac{1}{2}\left(\frac{\varepsilon}{2}\right)^{4/\eta} - 4\nu, \label{eqn-expec} \end{equation} and for any $i \in R_2$, using Lemma \ref{lem-infsame}, \begin{equation} \left|\tn{Inf}_i(\ol{T}_{1-\gamma}f_v) - \tn{Inf}_i(T_{1-\gamma}f_v)\right| \leq \frac{\tau}{8r^2}.\label{eqn-inf-same2} \end{equation} Using Equations \eqref{eqn-nonzero}, \eqref{eqn-expec} and \eqref{eqn-inf-same2} along with Lemma \ref{lem-main-inf} for a heavy \emph{and} good neighbor $u$ of $v$ yields an $i^* \in R_1$ such that, \begin{equation} \min(\tn{Inf}_{i^*}(T_{1-\gamma}f_u), 4r\sum_{j\in \pi^{-1}(i^*)} \tn{Inf}_j(T_{1-\gamma}f_v)) \geq \tau/2. \label{eqn-inf-cond2} \end{equation} \noindent {\bf Labeling.} The label of a heavy variable $u \in V_{l'}$ is assigned by choosing a label $i \in R_1$ independently with probability proportional to $\tn{Inf}_i(T_{1-\gamma}f_u)$. The label of a heavy variable $v \in V_{l}$ is similarly assigned by choosing $j \in R_2$ independently with probability proportional to $\tn{Inf}_j(T_{1-\gamma}f_v)$. Note that the sum of all influences of $T_{1-\gamma}f_u$ (resp.\ $T_{1-\gamma}f_v$) is bounded by $1/\gamma$. Suppose $u$ is a good and heavy neighbor of a heavy variable $v$. Then the analysis above, along with Lemma \ref{lem-main-inf} and Equation \eqref{eqn-inf-cond2}, implies that the labeling strategy will succeed for $v$ and $u$ with probability at least $\tau^2\gamma^2/16r$. Additionally, from the above analysis, at least a $\frac{\varepsilon^2}{128}$ fraction of the constraints between layers $l$ and $l'$ are between heavy variables $v \in V_l$ and $u \in V_{l'}$ such that $u$ is a good neighbor of $v$. Thus, the probabilistic labeling strategy satisfies in expectation at least a $\frac{\varepsilon^2\tau^2\gamma^2}{2048r}$ fraction of the constraints. By choosing the soundness $\zeta$ to be small enough we obtain a contradiction. \appendix \section{Construction of Smooth $d$-to-$1$ MLPCP}\label{sec-dto1multi} The construction of the Smooth $d$-to-$1$ Multi-Layered PCP $\Phi$ closely follows the construction used in \cite{Khot-3}. We shall only give the construction here. We begin with an instance $\mc{L}$ of the $d$-to-$1$ Game given by Conjecture \ref{conj-dto1bireg}, with variable sets $\mc{U}$, $\mc{V}$ and label sets $[k]$ and $[m]$. For convenience we refer to the variables in $\mc{V}$ as $\mc{V}$-variables and those in $\mc{U}$ as $\mc{U}$-variables. The variables of $\Phi$ in the $l$th layer are sets of $(TL + L - l)$ $\mc{V}$-variables and $(l-1)$ $\mc{U}$-variables. The label set $R_l$ of layer $l$ is the set of all $(TL + L-1)$-tuples of labelings to the $TL + L - l$ $\mc{V}$-variables and $l-1$ $\mc{U}$-variables. There is a constraint between a variable $v$ in layer $l$ and a variable $u$ in layer $l'$ of $\Phi$ if replacing $(l - l')$ $\mc{V}$-variables $q_1,\dots, q_{l-l'}$ from the set associated with $v$, with $\mc{U}$-variables $p_1, \dots, p_{l-l'}$ such that $p_r$ has a constraint with $q_r$ in $\mc{L}$ for $r = 1, \dots, l-l'$, yields the set associated with $u$.
The constraint $\pi_{v\rightarrow u}$ is a projection which checks the consistency of the labels: the variables of $\mc{L}$ common to both $u$ and $v$ must be assigned identically, and the assignments to $p_1, \dots, p_{l-l'}$ and $q_1,\dots, q_{l-l'}$ must be consistent. It is easy to see that $|\pi_{v\rightarrow u}^{-1}(i)| = d^{l-l'}$ for any $i \in R_{l'}$. The proof of weak density follows from the bi-regularity property of $\Phi$ in a manner identical to the proof in \cite{DGKR03}. The proof of soundness is identical to the proof in \cite{Khot-3}. The proof of hardness in Theorem \ref{thm-dto1multi} follows from standard arguments as given in \cite{DGKR03}. We omit these proofs. \section{Fourier Analysis}\label{sec-Fourier} We will be working over the field $\ensuremath{\mathbb{F}[2]}$. Define the following homomorphism $\phi$ from $(\ensuremath{\mathbb{F}[2]}, +)$ to the multiplicative group $(\{-1,1\}, \cdot)$, by $\phi(a) := (-1)^a$. We now consider the vector space $\ensuremath{\mathbb{F}[2]}^m$ for some positive integer $m$. We define the `characters' $\chi_\alpha : \ensuremath{\mathbb{F}[2]}^m \mapsto \{-1,1\}$ for every $\alpha \in \ensuremath{\mathbb{F}[2]}^m$ as, $$ \chi_\alpha(f) := \phi(\alpha\cdot f), \ \ \ \ \ \ \ f \in \ensuremath{\mathbb{F}[2]}^m$$ where `$\cdot$' is the inner product in the vector space $\ensuremath{\mathbb{F}[2]}^m$. The characters $\chi_\alpha$ satisfy the following properties, \begin{eqnarray*} \chi_0(f) = 1 &\ \ \ \ \ & \forall f \in \ensuremath{\mathbb{F}[2]}^m\\ \chi_\alpha(0) = 1 &\ \ \ \ \ & \forall \alpha \in \ensuremath{\mathbb{F}[2]}^m\\ \chi_{\alpha + \beta}(f) = \chi_\alpha(f)\chi_\beta(f)\\ \chi_{\alpha}(f + g) = \chi_\alpha(f)\chi_\alpha(g) \end{eqnarray*} and, \begin{equation*} \E_{f \in \ensuremath{\mathbb{F}[2]}^m}\left[\chi_\alpha(f)\right] = \begin{cases} 1 & \text{ if } \alpha = 0 \\ 0 & \text{ otherwise } \end{cases} \end{equation*} The characters $\chi_\alpha$ form an orthonormal basis for $L^2(\ensuremath{\mathbb{F}[2]}^m)$. We have, $$ \left<\chi_\alpha, \chi_\beta\right> = \begin{cases} 1 & \text{ if }\alpha = \beta \\ 0 & \text{ otherwise } \end{cases} $$ where, $$ \left<\chi_\alpha, \chi_\beta\right> := \E_{f \in \ensuremath{\mathbb{F}[2]}^m}\left[ \chi_\alpha(f)\chi_\beta(f)\right]. $$ Let $A:\ensuremath{\mathbb{F}[2]}^m \mapsto \R$ be any real-valued function. Then the Fourier expansion of $A$ is given by, $$A(x) = \sum_{\alpha \in \ensuremath{\mathbb{F}[2]}^m}\widehat{A}_\alpha\chi_\alpha(x),$$ where, $$\widehat{A}_\alpha = \E_{x\in \ensuremath{\mathbb{F}[2]}^m}[A(x)\chi_\alpha(x)].$$ A useful equality is: $$\widehat{A}_0 = \E_{x\in \ensuremath{\mathbb{F}[2]}^m}[A(x)].$$ \subsection*{Folding} The following lemma gives a property of the Fourier coefficients of any homogeneously folded function. \begin{lemma} Let $A : \ensuremath{\mathbb{F}[2]}^m\mapsto \R$ be any function such that $A(x + y) = A(x)$ for some $y \in \ensuremath{\mathbb{F}[2]}^m$ and all $x\in \ensuremath{\mathbb{F}[2]}^m$. If $\widehat{A}_\alpha \neq 0$, then $\alpha\cdot y = 0$. \end{lemma} \begin{proof} Assume $\widehat{A}_\alpha \neq 0$.
By definition and using the folding property, \begin{eqnarray*} \widehat{A}_\alpha & = & \E_{x\in \ensuremath{\mathbb{F}[2]}^m}[A(x)\chi_\alpha(x)] \\ & = &\E_{x\in \ensuremath{\mathbb{F}[2]}^m}[A(x + y) \chi_\alpha(x + y)] \\ & = & \E_{x\in \ensuremath{\mathbb{F}[2]}^m}[A(x) \chi_\alpha(x + y)] \\ & = & \E_{x\in \ensuremath{\mathbb{F}[2]}^m}[A(x) \chi_\alpha(x)]\chi_{\alpha}(y) \\ & = & \widehat{A}_\alpha \chi_\alpha(y). \end{eqnarray*} Thus, if $\widehat{A}_\alpha \neq 0$, then $\chi_\alpha(y) = 1$. Thus, $\phi(\alpha\cdot y) = 1$. This implies that $\alpha\cdot y = 0$. \end{proof} \end{document}
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{question}[theorem]{Question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{cor}[theorem]{Corollary} \newtheorem{obs}[theorem]{Observation} \newtheorem{proc}[theorem]{Procedure} \title{Revisiting Toom's proof \\ of Bulgarian Solitaire} \author{Therese A. Hart} \address{TAH: Department of Mathematics and Computer Science, Eastern Connecticut State University, Willimantic, CT 06226} \email{[email protected]} \author{Gabriel J. H. Khan} \address{GJHK: Boston University, Boston, MA 02215} \email{GK:[email protected]} \author{Mizan R. Khan} \address{MRK: Department of Mathematics and Computer Science, Eastern Connecticut State University, Willimantic, CT 06226} \email{[email protected]} \date{\today} \subjclass[2000]{05A17,11P81} \begin{abstract} In this article we give an exposition of Toom's proof of Bulgarian Solitaire that appeared in \emph{Kvant}. We provide more details. We also show how an application of the Chinese Remainder Theorem allows us to generalize the proof. \end{abstract} \maketitle \section{Introduction} The following \emph{literary} version of Bulgarian Solitaire appeared as a problem (numbered M655) in the Russian high school journal \emph{Kvant}~\cite{K1} in 1980. Its solution by Andrei Toom was published in the same journal~\cite{K2} in 1981. We refer the interested reader who knows Russian to the additional source~\cite[144-147]{VGRT}, where the same solution appears. M655. On the table of a clerk at the Circumlocution Office there are $n$ volumes of Encyclopedia Britannica, ordered in several piles. Every day, when arriving at work, the clerk takes one volume from each pile and puts all of these in a new pile, after which he rearranges the piles according to the number of volumes in each (in non-increasing order), and fills out a form, recording the number of volumes in each pile. He never does anything else, except this. \begin{enumerate} \item What record will enter the form after a month, if the total number of volumes is $n=3$, $n=6$, $n=10$ (the initial distribution of volumes is arbitrary)? \item Prove that if the total number of volumes is $n=k(k+1)/2$, where $k$ is a natural number, then after a certain number of days the form will start filling up with identical records. \item* Investigate what will happen after many working days for other values of $n$. \end{enumerate} \hspace{3.5in} S. Limanov, A. Toom We note that the idea of the \emph{Circumlocution Office} is from the Dickens novel \emph{Little Dorrit}~\cite[Chapter 10, Book 1]{D}. Martin Gardner popularized Bulgarian Solitaire in the West in his Scientific American column~\cite{MG}. We quote the following from~\cite[page 18]{MG}: ``Our last example of a task that ends suddenly in a counterintuitive way is one you will enjoy modeling with a deck of cards. Its origin is unknown, but Graham, who told me about it, says that European mathematicians call it Bulgarian solitaire for reasons he has not been able to discover.'' Later in the column Gardner credits Jorgen Brandt~\cite{JB} for the first proof of the solution. However, the proof by Toom predates this, and, more importantly, is much simpler. In this article we describe Toom's proof, but include more detail, and introduce a generalization.
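Before turning to the proof, readers may enjoy experimenting with the clerk's procedure directly. The following short Python sketch is purely illustrative and is ours rather than part of any of the cited sources (in particular, the function names \texttt{step} and \texttt{eventual\_record} are our own). It performs one day of the clerk's work and iterates until the recorded configuration starts repeating; for $n=6$ every starting arrangement settles on the record $(3,2,1)$, and for $n=10$ on $(4,3,2,1)$, in accordance with parts (1) and (2) of M655.
\begin{verbatim}
def step(piles):
    # One day of the clerk's work: take one volume from each pile,
    # start a new pile with them, drop empty piles, sort non-increasingly.
    new_piles = [p - 1 for p in piles if p - 1 > 0] + [len(piles)]
    return tuple(sorted(new_piles, reverse=True))

def eventual_record(piles):
    # Iterate the daily operation until a configuration repeats,
    # and return the cycle that the form keeps recording.
    seen, history = {}, []
    current = tuple(sorted(piles, reverse=True))
    while current not in seen:
        seen[current] = len(history)
        history.append(current)
        current = step(current)
    return history[seen[current]:]

print(eventual_record([4, 1, 1]))   # [(3, 2, 1)]      (n = 6)
print(eventual_record([5, 3, 2]))   # [(4, 3, 2, 1)]   (n = 10)
\end{verbatim}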
A very nice discussion of the origins of Bulgarian Solitaire and subsequent literature is given in the preprint~\cite{H}. In particular, the mystery behind its name is solved there. We would like to thank both Z. Pozdynakova and C. Yankov for their superb translating services, especially since none of the three authors have the faintest idea of Russian. \section{Definitions, Notation, Pictures and Statement of Results} We recast the problem of Bulgarian solitaire into the language of partitions and dynamical systems. Consequently, we begin by stating some basic definitions and introducing some notation. In this context we follow the usage in~\cite{S}. A partition $\pi$ of a positive integer $n$ is a nonincreasing sequence of nonnegative integers such that the sum of the terms is $n$, that is, $$ \pi=\left(\pi_1,\pi_2,\pi_3,\ldots\right) $$ where $ \pi_i $ is a nonnegative integer, $\pi_i \geq \pi_{i+1}$ for all $i$, and $\sum_{i=1}^\infty \pi_i = n.$ Since the $\pi_i$ are nonnegative integers with finite sum, only a finite number of terms of $\pi$ are non-zero. These are called the parts of $\pi$, and the number of parts of $\pi$ is called the length of $\pi$ and is denoted $l(\pi)$. In writing specific partitions one typically omits the infinite tail of zeros. For example, the partition $(3,3,2,2,1,0,0,\ldots)$ is written as $(3,3,2,2,1)$. We denote the set of all partitions by $\mathcal P$, and the subset of the partitions of $n$ by ${\mathcal P}_n$. For $\lambda,\pi \in {\mathcal P}$ we define the union $\lambda \cup \pi$ to be the partition obtained by merging the entries of $\lambda$ with those of $\pi$ and arranging the resulting entries in nonincreasing order, for example, $$(4,3,3,2,1) \cup (5,4,3,1,1) = (5,4,4,3,3,3,2,1,1,1).$$ We now define a map $ T: {\mathcal P} \rightarrow {\mathcal P}$ via \begin{equation} \label{eq:defn-T} T(\pi) = (\pi_1 -1, \pi_2 -1, \ldots, \pi_{l(\pi)} -1,0,0,\ldots) \cup (l(\pi),0,0,\dots). \end{equation} We note that if $\pi \in {\mathcal P}_n$ then $T(\pi) \in {\mathcal P}_n$. In this paper we study the dynamics of this map $T$. If we start with a partition $\pi \in {\mathcal P}_n$ and look at the iterates of $T$ we get the orbit \begin{equation*} {\mathcal O}_T(\pi) = \{ \pi, T(\pi), T^2(\pi), T^3(\pi), \ldots \}. \end{equation*} For a fixed value of $n$, the number of partitions of $n$ is finite. Consequently the orbit ${\mathcal O}_T(\pi)$ is finite and eventually enters an $N$-cycle for some $N\geq 1$. The problem of Bulgarian solitaire is characterizing the periodic points of $T$. We visualize a partition in a couple of different ways. The first is as a Ferrers graph. A Ferrers graph represents a partition as a pattern of dots with the $k$-th row having the same number of dots as the $k$-th term of the partition. So for example, the Ferrers graph of $(5,4,1,1,1)$ is \begin{equation*} \begin{array}{ccccc} \bullet & \bullet & \bullet & \bullet & \bullet \\ \bullet & \bullet & \bullet & \bullet & \\ \bullet & & & & \\ \bullet & & & & \\ \bullet & & & & \end{array} \end{equation*} We denote the Ferrers graph of $\pi$ by $F_\pi$. The solution to Bulgarian solitaire involves viewing $F_\pi$ as an arrangement of checkers in the upper triangle of an infinite checkerboard. For example, the partition $(5,4,1,1,1)$ has the following arrangement.
\begin{equation*} \begin{array}{|c|c|c|c|c|c|c} \hline \bullet & \bullet & \bullet & \bullet & \bullet & \ & \ \\ \hline \bullet & \bullet & \bullet & \bullet & & & \\ \hline \bullet & & & & & & \\ \hline \bullet & & & & & & \\ \hline \bullet & & & & & \\ \hline & & & & & & \\ \hline & & & & & & \\ \end{array} \end{equation*} We use the ordered pair $(i,j)$ to denote the square on the $i$-th row and $j$-th column of the checkerboard, and we use $diag[k]$ to denote the $k$-th upper diagonal of the infinite checkerboard, that is, $diag[k]$ is the set of squares $$diag[k] = \{ \, (1,k),(2,k-1),(3,k-2), \ldots, (k-1,2), (k,1) \}. $$ For example, in the picture below, $diag[3]$ is the set of three squares containing the checkers. \begin{equation*} \begin{array}{|c|c|c|c|c} \hline \ & \ & \bullet & \ & \ \\ \hline \ & \bullet & \ & \ & \\ \hline \bullet & \ & \ & \ & \\ \hline \ & \ & \ & \ & \\ \hline \ & \ & \ & \ & \\ \end{array} \end{equation*} \textbf{Note:} For the remainder of this paper we use the word \emph{diagonal} in a very restrictive sense. It will only be used to refer to the sets $diag[k]$. \noindent We now state the solution to Bulgarian solitaire. \begin{theorem}[Bulgarian Solitaire] \label{Main-Result} Let $\pi$ be a partition of $n$, and let $m$ be the integer such that $m(m+1)/2 \le n < (m+1)(m+2)/2$. Then $\pi$ is a periodic point of $T$ if and only if \begin{equation}\label{eq:per cond} \# \textrm{ of checkers on } diag[k] = \left\{ \begin{array}{ll} k, & k=1,2, \ldots, m \\ n-m(m+1)/2, & k = m+1 \\ 0, & k > m+1. \end{array}\right. \end{equation} So in the special case of $n=m(m+1)/2$ we have that, with respect to $T$, ${\mathcal P}_n$ contains only one periodic point, namely, $$(m, m-1, m-2, \dots, 2,1).$$ \end{theorem} Below is the Ferrers graph of $(5,4,4,2,2,1)$. The above theorem says that this is a periodic point of $T$. \begin{equation*} \begin{array}{ccccc} \bullet & \bullet & \bullet & \bullet & \bullet \\ \bullet & \bullet & \bullet & \bullet & \\ \bullet & \bullet & \bullet & \bullet & \\ \bullet & \bullet & & & \\ \bullet & \bullet & & & \\ \bullet & & & & \end{array} \end{equation*} \section{Reinterpreting the action of $T$} The reason we like to view $F_\pi$ as an arrangement of checkers on a checkerboard is that it introduces a co-ordinate system. This permits a convenient way to decompose the action of $T$ into different components, which then leads to a useful visualization of the action of $T$. \begin{obs}[Moving Checkers] Given a partition $\pi$ we arrive at $T(\pi)$ in the following way. \begin{enumerate} \item We remove all of the checkers in the first column of the checkerboard. \item We translate the remaining checkers down one square and then one square to the left. \item We now place the checkers that we had removed from the first column into the first row. In doing so we take care to place the checker that had originally been on the $(k,1)$-th square onto the $(1,k)$-th square. \item Finally, if $l(\pi) < \pi_1-1$, then we move the checkers in the columns numbered $l(\pi) +1$ through $\pi_1-1$ up one square. \end{enumerate} \end{obs} We illustrate the above procedure using the partition $(5,4,1)$.
\begin{equation*} \begin{array}{|c|c|c|c|c|c|c} \hline \bullet & \bullet & \bullet & \bullet & \bullet & \ & \ \\ \hline \bullet & \bullet & \bullet & \bullet & & & \\ \hline \bullet & & & & & & \\ \hline & & & & & & \\ \hline & & & & & & \\ \end{array} \end{equation*} Our first step gives us the pattern \begin{equation*} \begin{array}{|c|c|c|c|c|c|c} \hline \ & \bullet & \bullet & \bullet & \bullet & \ & \ \\ \hline & \bullet & \bullet & \bullet & & & \\ \hline & & & & & & \\ \hline & & & & & & \\ \end{array} \end{equation*} The second step is \begin{equation*} \begin{array}{|c|c|c|c|c|c} \hline \ & \ & \ & \ & \ & \ \\ \hline \bullet & \bullet & \bullet & \bullet & \ & \ \\ \hline \bullet & \bullet & \bullet & & & \\ \hline & & & & & \\ \hline & & & & & \\ \end{array} \end{equation*} Our third step is \begin{equation*} \begin{array}{|c|c|c|c|c|c} \hline \bullet & \bullet & \bullet & \ & \ & \ \\ \hline \bullet & \bullet & \bullet & \bullet & \ & \ \\ \hline \bullet & \bullet & \bullet & & & \\ \hline & & & & & \\ \hline & & & & & \\ \end{array} \end{equation*} Our final step is \begin{equation*} \begin{array}{|c|c|c|c|c|c} \hline \bullet & \bullet & \bullet & \bullet & \ & \ \\ \hline \bullet & \bullet & \bullet & \ & \ & \ \\ \hline \bullet & \bullet & \bullet & & & \\ \hline & & & & & \\ \hline & & & & & \\ \end{array} \end{equation*} \begin{obs} By describing the action of $T$ in this way we infer the following. \begin{itemize} \item A checker in the $(k,1)$-th square moves to the $(1,k)$-th square. \item A checker in the $(i,j)$-th square, with $ j-1 \leq l(\pi)$ and $1 < j $, moves to the $(i+1,j-1)$-th square. \item Finally, a checker on the $(i,j)$-th square, with $ j-1 > l(\pi)$ and $1 < j $, moves to the $(i,j-1)$-th square. \end{itemize} \end{obs} Consequently, a checker on the $k$-th diagonal may move to the $(k-1)$-th diagonal, but it cannot move to the $(k+1)$-th diagonal. With these remarks we now arrive at the following description of the action of $T$. \begin{obs}[Action of $T$]\label{Act-T} We can view the action of $T$ as moving checkers down the diagonals of the checkerboard. When a checker reaches the square $(k,1)$, then on the next move it jumps up to the $(1,k)$ square. When a checker moves down one square on its diagonal and the square above it is then empty, it moves up vertically one square and thus moves into a smaller diagonal. \end{obs} For this problem we found it convenient visually to rotate the Ferrers graph clockwise by $45^\circ$, with the center of rotation being the top left hand point of the graph. So for example we view $(5,4,1)$ as \begin{equation*} \begin{array}{ccccccccccc} & & & & & \bullet & & & & \\ & & & & \bullet & & \bullet & & & \\ & & & \bullet & & \bullet & & \bullet & & \\ & & \circ & & \circ & & \bullet & & \bullet & \\ & \circ & & \circ & & \circ & & \bullet & & \bullet \\ \end{array} \end{equation*} \begin{obs} Each row of the above triangular array represents a diagonal on the checkerboard. We use circles to represent empty squares on the diagonal. When working with such a triangular array we will refer to the $(i,j)$ square on the checkerboard as the $[i+j-1,j]$ cell of the triangular array, that is, the first co-ordinate of the cell refers to the diagonal to which the square belongs. We use the word \emph{square} when we visualize the checkers lying on a checkerboard and we use the word \emph{cell} when we visualize the checkers lying in a triangular array.
We view the action of $T$ as first moving each checker one cell to the left (with the condition that a checker in cell $[s,1]$ moves into the cell $[s,s]$, that is, it loops around to the other side), and then possibly moving a checker diagonally up one cell to the right. \end{obs} Let us illustrate with $(5,4,1)$ and show how we get that $T((5,4,1))= (4,3,3)$. We start with \begin{equation*} \begin{array}{ccccccccccc} & & & & & \bullet & & & & \\ & & & & \bullet & & \bullet & & & \\ & & & \bullet & & \bullet & & \bullet & & \\ & & \circ & & \circ & & \bullet & & \bullet & \\ & \circ & & \circ & & \circ & & \bullet & & \bullet \\ \end{array} \end{equation*} We now move the checkers one cell to the left to obtain \begin{equation*} \begin{array}{ccccccccccc} & & & & & \bullet & & & & \\ & & & & \bullet & & \bullet & & & \\ & & & \bullet & & \bullet & & \bullet & & \\ & & \circ & & \bullet & & \bullet & & \circ & \\ & \circ & & \circ & & \bullet & & \bullet & & \circ \\ \end{array} \end{equation*} and then move the checker in cell $[5,4]$ to cell $[4,4]$ to obtain \begin{equation*} \begin{array}{ccccccccccc} & & & & & \bullet & & & & \\ & & & & \bullet & & \bullet & & & \\ & & & \bullet & & \bullet & & \bullet & & \\ & & \circ & & \bullet & & \bullet & & \bullet & \\ & \circ & & \circ & & \bullet & & \circ & & \circ \\ \end{array} \end{equation*} This is the rotated Ferrers graph of $(4,3,3)$. On occasion we adopt the following shortcut in presenting the rotated Ferrers graph: we omit the filled rows. So for example we permit the following to represent $(4,3,3)$. \begin{equation*} \begin{array}{ccccccccccc} & & \circ & & \bullet & & \bullet & & \bullet & \\ & \circ & & \circ & & \bullet & & \circ & & \circ \\ \end{array} \end{equation*} \section{Proof of Theorem~\ref{Main-Result}} We begin with a small lemma that we will require in our proof. It is an immediate consequence of the Chinese Remainder Theorem. \begin{lemma}\label{CRTApp} Let $a,b,m,n,u,v \in \mathbb Z$ with $\gcd(m,n)=\gcd(u,m)=\gcd(v,n)=1$. Then for any $c \in \mathbb Z$, there exists a corresponding $k \in \mathbb Z$ such that \begin{equation} c \equiv (a + ku) \!\!\!\! \pmod{m} \textrm{ and } c \equiv (b+kv) \!\!\!\! \pmod{n}. \end{equation} \end{lemma} \begin{proof} Let $u^\prime,v^\prime \in \mathbb Z$ such that $uu^\prime \equiv 1 \pmod{m}$ and $vv^\prime \equiv 1 \pmod{n}$. Consider the simultaneous congruences $$ x \equiv (c-a)u^\prime \!\!\!\! \pmod{m} \textrm{ and } x \equiv (c-b)v^\prime \!\!\!\! \pmod{n}.$$ By the Chinese Remainder Theorem, this system of simultaneous congruences has a solution $k$. \end{proof} \begin{obs}[Key idea of proof] The key idea that we exploit for the proof of Theorem~\ref{Main-Result} is that a checker can never move from a smaller diagonal to a larger diagonal. The reader may first want to check the action of $T$ on $(4,3,3)$. We found this to be an instructive example illustrating our proof. \end{obs} \begin{proof}[Proof of Theorem~\ref{Main-Result}] $(\Rightarrow)$ We will prove this direction by proving the contrapositive. We begin by observing that if $diag[d]$ contains an empty square then so does every higher diagonal. Let $\pi \in {\mathcal P}_n$ be a partition that does not satisfy~\eqref{eq:per cond}. Consequently there exists an integer $l$ such that $diag[l]$ and $diag[l+1]$ (of $F_\pi$) both contain checkers as well as empty squares. (Such an $l$ exists: take $l$ minimal with $diag[l]$ not full. If no diagonal beyond $diag[l]$ contained a checker, then $\pi$ would satisfy~\eqref{eq:per cond}; since a checker on any diagonal forces a checker on the previous diagonal, it follows that $diag[l+1]$ contains a checker, and hence so does $diag[l]$, while both contain empty squares by the observation above.)
Without loss of generality we can assume that $l$ is minimal, that is, for every $d<l$ the diagonal $diag[d]$ of $F_\pi$ contains $d$ checkers and no empty squares. Let cell $[l,a]$ be empty and let cell $[l+1,b]$ contain a checker. By Lemma~\ref{CRTApp}, for each $c\in \mathbb Z$ with $1 \leq c \leq l$ there corresponds a smallest positive integer $k(c)$ such that $$ a-k(c) \equiv c\!\!\!\! \pmod{l}, \, b-k(c) \equiv c \!\!\!\! \pmod{(l+1)}. $$ Let $C\in\{1,2,\ldots,l\}$ be a value of $c$ at which $k(c)$ is smallest, and set $K := k(C).$ We claim that a checker will move from $diag[l+1]$ to $diag[l]$ after \emph{at most} $K$ iterations. To see this we assume that we have iterated $T$ $(K-1)$ times and at no point has a checker moved from $diag[l+1]$ to $diag[l]$. Now when we take the $K$-th iteration we find that after translating the checkers to the left (in the triangular array), there will be a checker in the $[l+1,C]$ cell and an empty space in the $[l,C]$ cell. At this point the checker in the $[l+1,C]$ cell will move to the $[l,C]$ cell. As no checker will ever move from $diag[l]$ to $diag[l+1]$, no number of iterations of $T$ will yield the original partition, and consequently $\pi$ is not a periodic point of $T$. \noindent $(\Leftarrow)$ If $\pi$ satisfies~\eqref{eq:per cond} then $T^{(m+1)}(\pi)=\pi$, and therefore $\pi$ is periodic. Indeed, each full diagonal is carried to itself, and the checkers on $diag[m+1]$ simply rotate one cell per iteration, so they return to their original positions after $m+1$ iterations. \end{proof} \section{Counting the number of orbits of ${\mathcal P}_n$} In~\cite{JB} it was shown how to apply Polya enumeration to count the number of distinct orbits of $T$ for a fixed value of $n$. We give a short exposition of this calculation. The idea is to identify each orbit of ${\mathcal P}_n$ with a necklace of black and white beads and then invoke Polya enumeration. If $n$ is triangular, then there is exactly one periodic point. So we assume that $n$ is not triangular. Therefore we assume that for some $m \in \mathbb N$, $n$ satisfies the inequality $$\frac{m(m+1)}{2} < n < \frac{(m+1)(m+2)}{2},$$ and we set $l = n-m(m+1)/2$, the number of checkers on $diag[m+1]$ of any periodic point. Let $\pi$ be a periodic point of ${\mathcal P}_n$. By Theorem~\ref{Main-Result} we can identify $\pi$ with the arrangement of checkers and empty squares on the $(m+1)$-th diagonal of the Ferrers graph. Furthermore, by thinking of each empty square as being a white bead and each checker as a black bead, we can identify $\pi$ with an arrangement of $l$ black beads and $(m+1-l)$ white beads. For example we identify the periodic point $(5,4,4,2,2,1)$ with the following arrangement of black beads ($\bullet$) and white beads ($\circ$): \begin{equation*} (5,4,4,2,2,1) \,\, \leftrightarrow \,\, \begin{array}{ccccccccccc} \bullet & \bullet & \circ & \bullet & \circ & \circ \end{array} \end{equation*} Furthermore, we can view the action of $T$ on $\pi$ as rotating these black and white beads. Thus we have the cyclic group of order $(m+1)$, $C_{m+1}$, acting upon a line of $l$ black beads and $(m+1-l)$ white beads, and consequently we can identify the orbit, ${\mathcal O}_T(\pi)$, with a necklace consisting of $l$ black beads and $(m+1-l)$ white beads. For example, in the case of the periodic point $(5,4,4,2,2,1)$ we have the identification \begin{equation*} {\mathcal O}((5,4,4,2,2,1)) \,\, \leftrightarrow \,\, \left. \begin{array}{ccc} & \circ & \\ \bullet & & \circ \\ \circ & & \bullet \\ & \bullet & \end{array} \right. \end{equation*} These observations lead to the following theorem.
\begin{theorem} The number of distinct orbits of ${\mathcal P}_n$ under the action of $T$ is equal to the number of necklaces consisting of $l$ black beads and $(m+1-l)$ white beads, where the symmetry group is the cyclic group of order $(m+1)$. \end{theorem} Counting the number of necklaces consisting of beads of two distinct colors, where the symmetry group is cyclic, is a standard exercise in Polya enumeration. The interested reader is referred to~\cite[Chapter 27]{B} for the details. We simply state the basic result. \begin{cor} \label{No-Orbits} Let $\varphi(d)$ denote the Euler phi function. The number of distinct orbits of ${\mathcal P}_n$ under the action of $T$ is the coefficient of the $b^lw^{m+1-l}$ term of the bivariate polynomial \begin{equation}\label{eq:count} \frac{1}{m+1}\sum_{d|(m+1)}\varphi(d)\left(b^d+w^d\right)^{(m+1)/d}. \end{equation} \end{cor} \section{A Slight Generalization of Bulgarian Solitaire} Let $s=(s_k)$ be a sequence with $s_k \in \mathbb Z$. We define a map $T_s: {\mathcal P} \rightarrow {\mathcal P}$ in the following way, using as a model our description of the map $T$; see Observation~\ref{Act-T}. \begin{proc}[Action of $T_s$] We start with a partition whose Ferrers graph is arranged on an infinite checkerboard. For each checker on $diag[k]$ we move the checker $(s_k \mod k)$ squares down the diagonal, looping around to the top of $diag[k]$ as needed. Once we have moved all of the checkers diagonally, we check to see if there are any rows of checkers with spaces between checkers. For each such row we move the checkers to the left until all of the checkers are contiguous starting from the left side of the board. We now check to see if there are columns of checkers with spaces between checkers. If there are, we move the checkers up until we have a contiguous set of checkers starting at the top of the board. We now have the Ferrers graph of a partition, possibly the one with which we started. We note that our map $T$ (for Bulgarian Solitaire) is simply the special case $T_{(1,1,1,\ldots)}$. \end{proc} Our proof of Bulgarian solitaire immediately generalizes to give us the following result. The condition that $\gcd(k,s_k)=1$ is needed so that we can apply Lemma~\ref{CRTApp}. \begin{theorem}[Generalized Bulgarian Solitaire] \label{Gen-Result} Let $\pi$ be a partition of $n$, and let $m$ be the integer such that $m(m+1)/2 \le n < (m+1)(m+2)/2$. Let $s=(s_k)$ be a sequence, with $s_k \in \mathbb Z$, satisfying the condition that $\gcd \left(k,s_k\right)=1$ for all $k$. Then $\pi$ is a periodic point of $T_s$ if and only if \begin{equation} \# \textrm{ of checkers on } diag[k] = \left\{ \begin{array}{ll} k, & k=1,2, \ldots, m \\ n-m(m+1)/2, & k = m+1 \\ 0, & k > m+1. \end{array}\right. \end{equation} \end{theorem} It is an interesting question to determine the periodic points and fixed points of $T_s$ when $s= (s_k)$, $s_k\in \mathbb Z$, is a sequence with $\gcd(k,s_k) \not=1 $ for some $k$. In this case one can have periodic points (and fixed points) that do not satisfy the condition of Theorem~\ref{Gen-Result}. The trivial example is the sequence $s=(1,2,3,4, \ldots)$. In this case every partition is a fixed point. A more interesting example is when $s$ is any integer-valued sequence satisfying the conditions $s_{15}=5$ and $s_{16}=8$. In this case the partition $$ (15,14,13,11,11,10,9,9,6,6,5,4,3,1,1,1)$$ is a fixed point for $T_s$. This can be seen by looking at the last two rows of its rotated Ferrers graph.
\begin{equation*} \begin{array}{c} \bullet \circ \bullet \bullet \bullet \bullet \circ \bullet \bullet \bullet \bullet \circ \bullet \bullet \bullet \\ \bullet \circ \circ \circ \circ \circ \circ \circ \bullet \circ \circ \circ \circ \circ \circ \circ \\ \end{array} \end{equation*} \end{document}
\begin{document} \title{The Poisson cohomology of $\mathfrak{sl}_2^*(\mathbb{R})$} \begin{abstract} We compute the smooth Poisson cohomology of the linear Poisson structure associated with the Lie algebra $\spl$. \end{abstract} \tableofcontents \section{Introduction} Let $M$ be a smooth manifold. The space of $C^{\infty}$-multi-vector fields on $M$: \begin{align*} \mathfrak{X}^{\bullet}(M)&:= \Gamma (\wedge ^{\bullet} TM) \end{align*} carries a natural extension of the Lie bracket, called the Schouten-Nijenhuis bracket (see e.g.\ \cite{Laurent2013}), which makes $\mathfrak{X}^{\bullet}(M)$ into a graded Lie algebra: \begin{equation*} \begin{array}{cccc} [\cdot,\cdot]:&\mathfrak{X}^{p+1}(M)\times \mathfrak{X}^{q+1}(M)&\to &\mathfrak{X}^{p+q+1}(M). \end{array} \end{equation*} A Poisson structure on $M$ is a bivector field $\pi \in \mathfrak{X}^2(M)$ satisfying \[[\pi,\pi]=0.\] By the graded Jacobi identity, this equation is equivalent to \[\dif_{\pi}^2=0,\ \ \textrm{where}\ \ \dif_{\pi}:=[\pi,\cdot]:\mathfrak{X}^{\bullet}(M)\to \mathfrak{X}^{\bullet +1}(M).\] The cohomology of the resulting chain complex \[(\mathfrak{X}^{\bullet}(M), \dif_{\pi})\] is called the Poisson cohomology of $(M,\pi)$, and was introduced by Lichnerowicz \cite{Lich77}. The Poisson cohomology groups, denoted by \[H^{\bullet}(M,\pi),\] encode infinitesimal information about the Poisson structure. In low degrees they have the following interpretations: $H^0(M,\pi)$ consists of so-called Casimir functions, which are the ``smooth functions'' on the leaf-space; $H^1(M,\pi)$ plays the role of the Lie algebra of the ``Lie group'' of outer automorphisms of the Poisson manifold; $H^2(M,\pi)$ is the ``tangent space'' to the Poisson-moduli-space, or the space of infinitesimal deformations of the Poisson structure; and in $H^3(M,\pi)$ we can find obstructions to extending infinitesimal deformations to actual deformations. However, these interpretations are mostly of a heuristic or formal nature, since there are no general results asserting them, and their validity is poorly understood. Poisson cohomology is hard to compute due to the lack of general methods. The existing techniques are specialized to certain classes of Poisson structures, which we briefly outline below. \begin{itemize} \item \underline{Mildly degenerate Poisson structures}. As noticed already in \cite{Lich77}, for symplectic structures (i.e.\ non-degenerate Poisson) the Poisson complex is isomorphic to the de Rham complex, and so it computes the usual (real) cohomology of the manifold; this is also the only case when the Poisson differential is elliptic. Similar techniques apply also to Poisson structures which are almost everywhere non-degenerate and have ``mild'' singularities. For these one can use singular de Rham forms. This was first worked out in dimension 2: for linear singularities in \cite{Radko}, for quadratic singularities in \cite{Nakanishi_97}, and for general singularities in \cite{Monnier}, and in general dimension: for log-symplectic structures in \cite{MO14,GMP,Lanius_1} and for higher order singularities in \cite{Lanius_2}. \item \underline{Regular Poisson structures} have a non-singular symplectic foliation, which induces a filtration on the Poisson complex; the first pages of the resulting spectral sequence are described in terms of foliated cohomology \cite{Vaisman}.
For simple foliations, this technique can be used to obtain explicit results as in \cite{Xu92}, or as in \cite{Gammella}, where the Poisson cohomology of the regular part of certain duals of low-dimensional Lie algebras is calculated. \item \underline{Compact-type}. For the linear Poisson structure on the dual of a compact semi-simple Lie algebra, Conn showed that the Poisson cohomology vanishes in first and second degree, and he used this in the proof of the linearization theorem \cite{Conn85}. The full Poisson cohomology associated to compact Lie algebras was calculated in \cite{GW92}. The proof therein uses averaging over the fibers of a source compact Lie groupoid. This technique has been extended to ``compact-type'' Poisson manifolds, already in \cite{Xu92} for simple, regular foliations, and more recently in \cite{PMCT} for Poisson manifolds which admit a source-proper Lie groupoid integrating them. \item \underline{Other categories}. There are several calculations of Poisson cohomology in categories different from $C^{\infty}$, such as: formal, analytic, holomorphic, or algebraic Poisson cohomology. We will not discuss these results here, because the techniques involved are usually quite different and rarely of use in the $C^{\infty}$-setting. \end{itemize} In this paper we calculate the Poisson cohomology of the linear Poisson structure on the dual of the Lie algebra $\spl$ (see Theorem \ref{main theorem}): \[(\mathfrak{sl}_2^*(\mathbb{R}),\pi).\] Our interest in calculating this cohomology originates in our study of the local structure of ``generic Poisson structures'' in odd dimensions, which transversely to the singular leaves are linearly approximated by $\mathfrak{sl}_2^*(\mathbb{R})$ or $\mathfrak{so}_3^*(\R)$; the second case being well understood \cite{Conn85}. There are several other reasons to consider specifically this example. First, $\spl$ does not fit into any of the classes above for which techniques are known, and therefore its calculation requires some new insights and ideas. Secondly, semisimple Lie algebras have been considered many times in the Poisson framework, especially in relation to the problem of linearization \cite{Wein83,Conn84,Conn85,Wein87,GW92}. Finally, the Poisson cohomology of $(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ has a representation theoretic flavor, as it is isomorphic to the Chevalley-Eilenberg cohomology of $\spl$ with coefficients in the infinite-dimensional representation $C^{\infty}(\mathfrak{sl}_2^*(\mathbb{R}))$. As we will see in Section \ref{section: geometric interpretation}, all classes in $H^{\bullet}(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ have a clear geometric meaning, and therefore the construction of representatives is quite intuitive. The difficult part, which will occupy most of the paper, is proving that the elements we construct cover all cohomology classes. This is also reflected in the literature, as, in one way or another, representatives for all classes have appeared in various contexts. In \cite[Prop 6.3]{Wein83}, Weinstein constructed a non-analytic deformation of $(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ which is non-linearizable. In fact, by our main result, all infinitesimal deformations in $H^2(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ are obtained by a similar procedure.
Also with the aim of constructing deformations of semi-simple Lie algebras, the results in \cite{Wein87} and in \cite[Thm 4.3.9]{Zung_Book} yield non-trivial classes in $H^1(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$, and to some extent, these classes have appeared also in \cite{Nakanishi_91}. Let us mention also that the Poisson cohomology of the regular part of $\mathfrak{sl}_2^*(\mathbb{R})$ was considered in \cite{Gammella}, where it is proven to be infinite dimensional. The formal and polynomial Poisson cohomology of $\spl$ can be calculated using standard representation theory (see e.g.\ \cite{Laurent2013,Pichereau, Nakanishi_91}), and, most likely, the analytic Poisson cohomology can be deduced using methods from \cite{Conn84}. The paper is structured as follows. In Section \ref{section results} we state our main result, and build representatives for all Poisson cohomology classes. In Section \ref{section: geometric interpretation} we discuss the algebra of Casimir functions, we calculate the induced Schouten-Nijenhuis bracket in cohomology, we construct groups of outer automorphisms and deformations, and we study the action of outer-automorphisms on deformations. In Section \ref{section: formal}, by using the formal Poisson cohomology, we reduce the problem to that of calculating flat Poisson cohomology. In Section \ref{section: flat PC} we introduce the flat foliated complex, which, as in the regular case, can be used to compute flat Poisson cohomology. In Section \ref{section flat foli} we calculate the flat foliated cohomology. For this, we construct a retraction of the foliation to a subset, which represents the ``cohomological skeleton'' of the foliation, and show that the retraction is a homotopy equivalence. The retraction is built as the infinite flow of a vector field; the analysis of the flow of this vector field is left for Section \ref{section: analysis}. Summarizing the main steps of the proof, we extract a general strategy for calculating Poisson cohomology (a version of this scheme was used in \cite{Ginz96} for a specific 2-dimensional Poisson structure): \begin{itemize} \item[1.] Calculate the formal Poisson cohomology at the singularity of the foliation, and reduce the problem to calculating the cohomology of the ``flat Poisson complex''; \item[2.] Treat this subcomplex as if it came from a regular Poisson structure, and reduce the problem to calculating flat foliated cohomology; \item[3.] For calculating (flat) foliated cohomology, try to build a contraction to a ``cohomological skeleton''. \end{itemize} In future work, we will attempt to use this strategy for other Poisson structures. In particular, we will continue the study of the Poisson cohomology associated to other semi-simple Lie algebras. \section{The main result}\label{section results} To state the results we identify $\mathfrak{sl}_2^*(\mathbb{R})$ with $\R^3$ in such a way that the linear Poisson structure is given by \begin{align*} \pi &:=x\partial _y \wedge \partial _z +y\partial_z \wedge \partial_x -z\partial_x\wedge \partial_y. \end{align*} The symplectic foliation of $\pi$ can be described using the following basic Casimir function, which will be used throughout the paper: \begin{align}\label{Casimir} f(x,y,z)&:= x^2+y^2-z^2.
\end{align} \begin{wrapfigure}{r}{4cm} \includegraphics[scale=0.3]{levelsets.png} \caption{The level sets of $f$} \end{wrapfigure} The symplectic leaves are the following families of submanifolds: the one-sheeted hyperboloids \[S_{\lambda}:=f^{-1}(\lambda), \ \ \lambda>0;\] the two sheets of the hyperboloids \[S^{\pm}_{\lambda}:=f^{-1}(\lambda)\cap\{\pm z>0\}, \ \ \lambda<0;\] and the three leaves into which the cone $f^{-1}(0)$ decomposes: \[S^{\pm}_{0}:=f^{-1}(0)\cap\{\pm z>0\}, \ \ S_0:=\{0\}.\] The leaf-space, denoted by $\Y$, is obtained by identifying points belonging to the same leaf, and taking the quotient topology. The regular part of $\Y$: \[\mathcal{Y}^{\mathrm{reg}}:=\Y\backslash\{S_0\}\] is a smooth 1-dimensional non-Hausdorff manifold. Its smooth structure is determined by the quotient map being a submersion. Explicitly, a smooth atlas is given by: \[\big\{(U^+,\varphi^+),(U^{-},\varphi^-)\big\}\] \[ U^{\pm}=\big\{S^{\pm}_{\lambda} \ | \ \lambda\leq 0\big\}\cup \big\{S_{\lambda}\ | \ \lambda>0\big\}\subset \Yr,\] \[\varphi^{\pm}:U^{\pm}\xrightarrow{\raisebox{-0.2 em}[0pt][0pt]{\smash{\ensuremath{\sim}}}} \R,\ \ \ \ S^{\pm}_{\lambda}\mapsto \lambda, \ \ S_{\lambda}\mapsto \lambda.\] \begin{wrapfigure}{r}{4.5cm} \begin{tikzpicture} \draw (-1.5,0) -- (0,0); \draw (-3,0.2) -- (-1.5,0.2); \draw (-3,-0.2) -- (-1.5,-0.2); \draw[fill=black] (-1.5,0.2) circle (0.2ex); \draw[fill=white] (-1.5,0) circle (0.2ex); \draw[fill=black] (-1.5,-0.2) circle (0.2ex); \end{tikzpicture} \caption{$\Yr$} \end{wrapfigure} This atlas allows us to identify $\Yr$ with two copies of $\R$ glued along $(0,\infty)$: \[\Yr\simeq \R\sqcup_{(0,\infty)}\R.\] The algebra of smooth functions on $\Yr$ is \[C^{\infty}(\Yr)=\{(h_1,h_2)\in C^{\infty}(\R)\times C^{\infty}(\R)\ | \ h_1|_{(0,\infty)}=h_2|_{(0,\infty)}\}.\] The 0-th Poisson cohomology group consists of the smooth functions constant along the symplectic leaves, also called Casimir functions, denoted by: \[\Cas(\mathfrak{sl}_2^*(\mathbb{R})):=H^0(\mathfrak{sl}_2^*(\mathbb{R}),\pi).\] In Subsection \ref{sub: Casimirs}, we will prove the following: \begin{proposition}\label{prop:Casimirs} The algebra of Casimir functions is isomorphic to the algebra of smooth functions on the regular part of the leaf-space: \begin{equation}\label{eq: functions on the orbit space} C^{\infty}(\Yr)\simeq \Cas(\mathfrak{sl}_2^*(\mathbb{R})), \end{equation} and the isomorphism is given by \[h=(h_1,h_2)\mapsto \widetilde{h},\] \[ \widetilde{h}(x,y,z):= \begin{cases} h_1 (f(x,y,z)), & z\geq 0 \\ h_2(f(x,y,z)), & z < 0 \end{cases}. \] \end{proposition} Denote the singular cone, its ``outside'' and its ``inside'', respectively, by \[Z:=\{f=0\},\ \ \ O:=\{f>0\}, \ \ \ I:=\{f<0\}.\] We introduce two Poisson vector fields $T$ and $N$ on $O\cup I$: \begin{align*} T|_O&:=\partial_z+\frac{z}{x^2+y^2}(x\partial_x+y\partial_y),\ \ \ \ \ \ T|_I:=0,\\ N|_O&:=\frac{1}{2(x^2+y^2)}(x\partial_x+y\partial_y), \ \ \ \ \ \ N|_I:=\frac{1}{2(y^2-z^2)}(y\partial_y + z\partial_z).
\end{align*} These formulas come from the special coordinate systems: \begin{align}\label{coordinates O} &\textrm{on } O: &\theta=\mathrm{tan}^{-1}\left(\frac{y}{x}\right),\ \ \ \ \ \ \ \ &w=z, &f=x^2+y^2-z^2,\\ &\textrm{on }I:\label{coordinates I} &\xi=\mathrm{tanh}^{-1}\left(\frac{y}{z}\right),\ \ \ \ \ \ \ \ &v=x, &f=x^2+y^2-z^2, \end{align} in which: \begin{equation}\label{coordinates pi O} \pi|_O=\partial_{\theta}\wedge\partial_w, \ \ \ \ \ T|_O=\partial_w,\ \ \ \ \ N|_O=\partial_f, \end{equation} \begin{equation}\label{coordinates pi I} \pi|_I=\partial_{\xi}\wedge\partial_v,\ \ \ \ \ \ T|_I=0, \ \ \ \ \ \ \ N|_I=\partial_f. \end{equation} We will use the collection of flat Casimir functions: \[C^{flat}=\big\{\chi\in \Cas(\mathfrak{sl}_2^*(\mathbb{R}))\ \ |\ \ \chi \textrm{ vanishes flatly at } 0 \big\}.\] Proposition \ref{prop:Casimirs} implies that any $\chi \in C^{flat}$ vanishes flatly along the entire cone $Z$ (see Subsection \ref{sub: Casimirs}). Since the singularities of $T$ and $N$ along $Z$ are given by rational functions, it follows that, for all $\chi\in C^{flat}$, \[\chi T, \ \ \chi N,\] extend to smooth vector fields on $\R^3$ that vanish flatly on $Z$. We will also use the collection of Casimir functions with support outside of the cone: \[C^{out}=\big\{\eta \in \Cas(\mathfrak{sl}_2^*(\mathbb{R}))\ \ |\ \ \supp(\eta) \subset \overline{O}\big\}.\] We state now the main result of the paper: \begin{theorem}\label{main theorem} The Poisson cohomology of $(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ is given by: \begin{itemize} \item Every class in $H^1(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ can be represented as \[ \chi N + \eta T\] for unique functions $\chi \in C^{flat}$ and $\eta \in C^{out}$. \item Every class in $H^2(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ can be represented as \begin{align*} \eta N\wedge T \end{align*} for a unique function $\eta \in C^{out}$. \item For the third Poisson cohomology group we have \begin{align*} H^3(\mathfrak{sl}_2^*(\mathbb{R}),\pi)&\simeq \R[[f]]\ \partial_x\wedge\partial_y\wedge\partial_z, \end{align*} where $\R[[f]]$ denotes the ring of formal power series in $f$. \end{itemize} \end{theorem} \section{Geometric interpretation}\label{section: geometric interpretation} In this section we prove Proposition \ref{prop:Casimirs} and we explore the geometric meaning of our calculation of the Poisson cohomology of $(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$. In particular, we calculate the Schouten-Nijenhuis bracket in cohomology, we describe groups of Poisson-diffeomorphisms ``integrating'' the first Poisson cohomology Lie algebra, we build deformations corresponding to the second Poisson cohomology, and describe some identifications between the deformations. The entire discussion leaves many open questions, which hopefully will be answered in the future. \subsection{The algebra of Casimir functions}\label{sub: Casimirs} \begin{wrapfigure}{l}{3cm} \includegraphics[scale=0.2]{curves.eps} \caption{$\gamma_1$ and $\gamma_2$ intersecting the level sets of $f$ in the $y$-$z$-plane} \end{wrapfigure} We begin with \begin{proof}[Proof of Proposition \ref{prop:Casimirs}] First, we show that the map is well defined, i.e.\ for any $h=(h_1,h_2)\in C^{\infty}(\Yr)$, the function $\widetilde{h}$ is indeed smooth. By subtracting $h_2\circ f$ from $\widetilde{h}$, we may assume that $h_2=0$. So let $h_1\in C^{\infty}(\R)$, with $h_1|_{(0,\infty)}=0$, and note that: \[ \widetilde{h}(x,y,z)= \begin{cases} h_1 (f(x,y,z)), & z\geq 0\ \\ 0, & z < 0 \end{cases}.
\] On $z\geq 0$, $h_1(f(x,y,z))$ is smooth and it vanishes on $f\geq 0$; in particular it vanishes flatly on the plane $z=0$. Therefore its extension by $0$ on $z<0$ is a smooth function on $\R^3$. Next, we show that any Casimir function comes from an element in $C^{\infty}(\Yr)$. For this, we define two straight lines $\gamma_{1},\gamma_{2}:\R\to \mathfrak{sl}_2^*(\mathbb{R})$: \begin{equation}\label{eq:curves} \gamma_{1}(t)=\frac{1}{2}\big(0,-t-1,1-t\big), \ \ \gamma_{2}(t)=-\gamma_{1}(t). \end{equation} Both lines are transverse to the leaves of the foliation and satisfy: \begin{equation}\label{eq:transversals} f\circ \gamma_{1}(t)=t, \ \ \ \ \ f\circ \gamma_{2}(t)=t. \end{equation} Both lines cut leaves at most once; their intersections are indicated below: \begin{center} \begin{tabular}{||c||c|c|c|c|} \hline & $S_0$ & $S_{\lambda}$, $\lambda>0$ & $S^{-}_{\lambda}$, $\lambda\leq 0$ & $S^{+}_{\lambda}$,\ $\lambda\leq 0$ \\ \hline \hline $\gamma_1$ & \ding{55} & \ding{51} & \ding{55} & \ding{51}\\ \hline $\gamma_2$ & \ding{55} & \ding{51} & \ding{51} & \ding{55}\\ \hline \end{tabular} \end{center} Consider a Casimir function $g\in \Cas(\mathfrak{sl}_2^*(\mathbb{R}))$, and denote by \[ h_1:=g\circ \gamma_1\in C^{\infty}(\R), \ \ h_2:=g\circ \gamma_2\in C^{\infty}(\R).\] Since for $t>0$, $\gamma_1(t),\gamma_2(t)\in S_t$, it follows that $ h_1(t)= h_2(t)$, and so \[h=(h_1,h_2)\in C^{\infty}(\Yr).\] We have that $g=\widetilde{h}$. This follows because both are Casimir functions, their compositions with $\gamma_1$ and $\gamma_2$, respectively, yield the same result, and the two lines cut all regular leaves. \end{proof} Next, let us note that under the isomorphism \eqref{eq: functions on the orbit space}, we have that \[C^{flat}\simeq C^{\infty}_0(\Yr),\] where $C^{\infty}_0(\Yr)$ consists of pairs $(h_1,h_2)\in C^{\infty}(\Yr)$ with the property that $h_1$ and $h_2$ vanish flatly at $0$. This follows by comparing the Taylor series at $0$ of $h_{1}(t^2)=\widetilde{h}(t,0,0)$; and similarly for $h_2$. Hence Casimirs which vanish flatly at the origin actually vanish flatly along $Z$. Note also that, under the isomorphism \eqref{eq: functions on the orbit space}, we have that \[C^{out}\simeq C^{\infty}_{\to}(\Yr),\] where $C^{\infty}_{\to}(\Yr)$ consists of pairs $(h_1,h_2)\in C^{\infty}(\Yr)$ satisfying $h_1=h_2$ and $\supp(h_1)\subset [0,\infty)$. \subsection{The Schouten-Nijenhuis bracket} The Schouten-Nijenhuis bracket on multi-vector fields descends to a bracket on Poisson cohomology, which can be easily calculated for $(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$. First, note that $T$ is tangent to the symplectic foliation, therefore \[\Lie_T(g)=0, \ \ \textrm{for all}\ \ g\in\Cas(\mathfrak{sl}_2^*(\mathbb{R})).\] This implies that \[\mathfrak{g}_T:=\{\eta T\ | \ \eta\in C^{out}\}\] is an abelian subalgebra. Next, note that $N$ is transverse to the symplectic foliation on $\R^3 \setminus Z$, with \[\Lie_N(f|_{\R^3\setminus Z})=1.\] Moreover, for $g\in \Cas(\mathfrak{sl}_2^*(\mathbb{R}))$ we have that $\Lie_N(g)$ extends to a smooth Casimir on $\R^3$. This follows because locally $N=\partial_f$, and so if $g$ corresponds to the pair $h=(h_1,h_2)\in C^{\infty}(\Yr)$, then $\Lie_N(g)$ corresponds to the pair $\partial_t h=(\partial_t h_1,\partial_t h_2)\in C^{\infty}(\Yr)$. 
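As a consistency check, the normalization $\Lie_N(f)=1$ can also be verified directly in the original Cartesian coordinates:
\[\Lie_{N|_O}(f)=\frac{2x^2+2y^2}{2(x^2+y^2)}=1, \qquad \Lie_{N|_I}(f)=\frac{2y^2-2z^2}{2(y^2-z^2)}=1.\]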
Thus, although $N$ is only smooth on the set $\R^3 \setminus Z$, its Lie derivative $\Lie_N$ induces a derivation of the algebra of Casimir functions, denoted by \[\partial_f:\Cas(\mathfrak{sl}_2^*(\mathbb{R}))\to \Cas(\mathfrak{sl}_2^*(\mathbb{R})),\] which corresponds under the isomorphism \eqref{eq: functions on the orbit space} to $\partial_t$. We obtain that \[\mathfrak{g}_N:=\{\chi N \ | \ \chi\in C^{flat}\}\] is a Lie subalgebra, which is isomorphic to the Lie algebra of vector fields on $\Yr$ which are flat at the origin(s): \begin{equation}\label{g_N} (\mathfrak{g}_N,[\cdot,\cdot])\simeq (\mathfrak{X}_0^1(\Yr),[\cdot,\cdot]). \end{equation} In the coordinates \eqref{coordinates pi O}, it is obvious that on $O\cup I$ \[[N,T]=0.\] Using the Leibniz rule, this allows us to calculate other brackets, for example: \begin{equation}\label{Schouten} [\chi N, \eta N\wedge T]=(\chi\partial_f(\eta)-\eta\partial_f(\chi))N\wedge T. \end{equation} \begin{remark} It is somewhat surprising that the representatives we found for Poisson cohomology in degree $\leq 2$ are closed under the Schouten-Nijenhuis bracket. \end{remark} Since all 3-vector fields that are flat at 0 are trivial in cohomology, we obtain the following: \begin{corollary}\label{bracket} The bracket induced from the Schouten-Nijenhuis bracket on Poisson cohomology \begin{equation*} \begin{array}{cccc} [\cdot,\cdot]:&H^p(\mathfrak{sl}_2^*(\mathbb{R}),\pi)\times H^q(\mathfrak{sl}_2^*(\mathbb{R}),\pi)&\to&H^{p+q-1}(\mathfrak{sl}_2^*(\mathbb{R}),\pi) \end{array} \end{equation*} is non-zero only for $p+q\leq 3$, and in these degrees it is determined by the Leibniz identity and the following relations: \[[N, g]=\partial_f(g), \ \ \ \ [T, g]=0,\ \ \ \ [ N, T]=0,\] for all $g\in \Cas(\mathfrak{sl}_2^*(\mathbb{R}))$. \end{corollary} In particular, note that $\mathfrak{g}_N$ and $\mathfrak{g}_T$ span a Lie subalgebra, which is a semi-direct product $\mathfrak{g}_T\rtimes \mathfrak{g}_N$, because: \[[\chi N, \eta T]=\chi\partial_f(\eta) T.\] \subsection{Poisson-diffeomorphisms} Denote the Lie algebra of Poisson vector fields by: \[\mathfrak{poiss}:=\{X\in \mathfrak{X}(\mathfrak{sl}_2^*(\mathbb{R}))\ | \ \Lie_X\pi=0 \},\] and the ideal of Hamiltonian vector fields by: \[\mathfrak{ham}:=\{X_{h}:=\pi^{\sharp}(\dif h)\ | \ h\in C^{\infty}(\mathfrak{sl}_2^*(\mathbb{R}))\}.\] The quotient Lie algebra is the first Poisson cohomology: \[H^1(\mathfrak{sl}_2^*(\mathbb{R}),\pi)=\mathfrak{poiss}/\mathfrak{ham}\simeq \mathfrak{g}_T\rtimes \mathfrak{g}_N.\] Note that $\mathfrak{poiss}$ has a 3-term filtration by ideals: \[0\, \unlhd\, \mathfrak{ham}\, \unlhd\, \mathfrak{ham}\rtimes \mathfrak{g}_{T} \, \unlhd\, (\mathfrak{ham}\rtimes \mathfrak{g}_{T})\rtimes \mathfrak{g}_N=\mathfrak{poiss},\] and $\mathfrak{ham}\rtimes \mathfrak{g}_{T}$ consists of the Poisson vector fields tangent to the foliation. Next, we describe groups corresponding to these Lie algebras. It would be interesting to understand to what extent these groups are smooth or integrate the Lie algebras. 
Denote the Poisson-diffeomorphism group by \[\mathrm{Poiss}:=\{\varphi\in \mathrm{Diff}(\mathfrak{sl}_2^*(\mathbb{R})), \ \ \varphi_*(\pi)=\pi\},\] and the (normal) Hamiltonian subgroup by: \[\mathrm{Ham}\, \unlhd\, \mathrm{Poiss}.\] The group $\mathrm{Ham}$ consists of diffeomorphisms $\varphi$ that can be connected to the identity by a smooth family of diffeomorphisms \[\{\varphi_t\}_{t\in[0,1]}, \ \ \ \varphi_0=\mathrm{id}, \ \ \ \varphi_1=\varphi\] that is generated by a smooth family of Hamiltonians $\{h_t\in C^{\infty}(\mathfrak{sl}_2^*(\mathbb{R}))\}_{t\in [0,1]}$: \[\varphi'_t=X_{h_t}\circ \varphi_t, \ \ \ X_{h_t}=\pi^{\sharp}(\dif h_t).\] Next, we associate to $\mathfrak{g}_{T}$ an abelian group: \[G_T:=\big\{\, \mathrm{exp}(\eta T):=\textrm{time-one flow of }\eta T\ | \ \eta \in C^{out}\big\}.\] To see that the vector fields $\eta T$ are indeed complete, and that $G_T$ is indeed an abelian group, we use the coordinates from \eqref{coordinates pi O} $(\theta,w,f)\in S^1\times \R\times (0,\infty)$ on $O=\{f>0\}$, in which: \[\pi|_O=\partial_{\theta}\wedge\partial_{w}, \ \ T|_O=\partial_{w}.\] Then the flow of $\eta T$, with $\eta=h\circ f\in C^{out}$, and $\mathrm{supp}(h)\subset [0,\infty)$, is \[\exp(t\eta T)(\theta,w,f)=(\theta,w+t\, h(f),f),\] in particular, it is defined for all $t\in \R$. Because it vanishes on $\overline{I}$, $\eta T$ is complete. Note that the flow preserves the leaves. The leaves in $O$ are sent by the chart symplectomorphically to the cotangent bundle of the circle: \[S_{\lambda}\simeq T^*S^1.\] Under this identification, $\exp(\eta T)$ acts by translation with $h(\lambda)\dif \theta\in \Omega^1(S^1)$. By the formula for the flow, the exponential is a group isomorphism: \[\exp:\mathfrak{g}_T\xrightarrow{\raisebox{-0.2 em}[0pt][0pt]{\smash{\ensuremath{\sim}}}} G_T, \ \ \exp(\eta_1T+\eta_2T)=\exp(\eta_1T)\circ \exp(\eta_2T).\] The subgroup corresponding to $\mathfrak{ham}\rtimes \mathfrak{g}_T$ is the semi-direct product: \[\mathrm{Ham}\rtimes G_T.\] It would be interesting to know whether $\mathrm{Ham}\rtimes G_T$ can be characterized geometrically by the following property: \begin{question} Does a Poisson diffeomorphism that sends each leaf to itself belong to $\mathrm{Ham}\rtimes G_T$? \end{question} Recall that $\mathfrak{g}_N$ is isomorphic to the Lie algebra of vector fields on $\Yr$ that are flat at the origin(s) \eqref{g_N}. Next, we build a group $G_N$ corresponding to $\mathfrak{g}_N$, which will be isomorphic to the group of diffeomorphisms of $\Yr$ which are flat at the origin(s). First consider the group \[\mathrm{Diff}^0_0(\Yr)\subset \mathrm{Diff}(\Yr)\] consisting of pairs $(\phi_1,\phi_2)$ of diffeomorphisms of $\R$ which fix the origin up to infinite jet (i.e.\ $\phi_i-\mathrm{id}_{\R}$ vanishes flatly at $0$), and such that $\phi_1|_{(0,\infty)}=\phi_2|_{(0,\infty)}$. For $\phi=(\phi_1,\phi_2)\in \mathrm{Diff}^0_0(\Yr)$, we build an element $\widetilde{\phi}\in G_N$. In the chart \eqref{coordinates O} on $O$, define: \[\widetilde{\phi}(\theta,w,f)=(\theta,w,\phi_1(f))=(\theta,w,\phi_2(f)),\] in the chart \eqref{coordinates I} on $I\cap \{z> 0\}$, define: \[\widetilde{\phi}(\xi,v,f)=(\xi,v,\phi_1(f)),\] and in the chart \eqref{coordinates I} on $I\cap \{z< 0\}$, define: \[\widetilde{\phi}(\xi,v,f)=(\xi,v,\phi_2(f)).\] Note that these three expressions extend to $Z=\{f=0\}$ as the identity map, and they coincide along $Z$ with the identity up to infinite jet. 
Therefore $\widetilde{\phi}$ is indeed smooth (one can also transform $\widetilde{\phi}$ to usual coordinates to check this). The local expression of $\pi$ in the charts \eqref{coordinates pi O} and \eqref{coordinates pi I}, implies that $\widetilde{\phi}$ is a Poisson diffeomorphism. Let $G_N^{0}$ be the collection of all $\widetilde{\phi}\in \mathrm{Poiss}$, with $\phi\in \mathrm{Diff}_{0}^0(\Yr)$. Since $\widetilde{\phi}$ induces $\phi$ on the regular leaf-space, we have that: \[G_N^0\simeq \mathrm{Diff}_{0}^0(\Yr).\] \begin{wrapfigure}{r}{5cm} \begin{tikzpicture} \draw (-2,0) -- (0,0); \draw (-4.5,0.5) -- (-2.5,0.5); \draw (-4.5,-0.5) -- (-2.5,-0.5); \draw[fill=white] (-2,0) circle (0.2ex); \draw[fill=black] (-2.5,0) circle (0.2ex); \draw[fill=black] (-2.5,0.5) circle (0.2ex); \draw[fill=black] (-2.5,-0.5) circle (0.2ex); \draw (-3.5,0.4) edge[ <->, bend right=10] node[auto] {$\tau$} (-3.5,-0.4) ; \end{tikzpicture} \caption{$\tau$ acting on $\Y$} \end{wrapfigure} Next, consider the reflection: \begin{equation*} \begin{array}{cccc} \tau:&\R^3&\to&\R^3\\ &(x,y,z)&\mapsto&(x,-y,-z), \end{array} \end{equation*} and note that $\tau$ is a Poisson involution, i.e.\ \[\tau_*(\pi)=\pi,\ \ \tau^2=\mathrm{id},\] and it interchanges the leaves $S^{+}_{\lambda}$ and $S^{-}_{\lambda}$, $\lambda \leq 0$. We denote also by $\tau$ the diffeomorphism induced on $\Yr$. Note that $\tau$ normalizes $G_N^0$ and $\mathrm{Diff}_0^0(\Yr)$, and in fact: \[\tau\cdot \widetilde{\phi}\cdot \tau=\widetilde{\tau \cdot \phi\cdot \tau}, \ \ \tau \cdot (\phi_1,\phi_2)\cdot \tau= (\phi_2,\phi_1).\] Therefore the following groups are isomorphic: \begin{align*} G_N&:=G_N^0\, \cup\, G_N^0\cdot \tau, \\ \mathrm{Diff}_0(\Yr)&:= \mathrm{Diff}_0^0(\Yr)\, \cup\, \mathrm{Diff}_0^0(\Yr)\cdot \tau. \end{align*} Note that $G_N$ normalizes $G_T$: \begin{align*} \exp(\eta T)\cdot \widetilde{\phi}&=\widetilde{\phi}\cdot \exp(\widetilde{\phi}^*(\eta)\, T),\\ \exp(\eta T)\cdot \tau& =\tau\cdot \exp(-\eta T), \end{align*} where $\widetilde{\phi}^*(\eta)= \widetilde{\phi^*h}$, for $\eta=\widetilde{h}$. We obtain the group $G_T\rtimes G_N$, which in principle is the group of outer-automorphisms of the Poisson manifold. It would be interesting to know whether this is a correct interpretation: \begin{question} Is the natural map $G_T\rtimes G_N\to \mathrm{Poiss}/\mathrm{Ham}$ an isomorphism? Equivalently, is it true that: \[\mathrm{Poiss}=\mathrm{Ham}\rtimes G_T\rtimes G_N?\] \end{question} \subsection{Deformations} The second Poisson cohomology has the heuristic interpretation of being the ``tangent space" to the Poisson-moduli space. In our case, by Theorem \ref{main theorem}, every class in $H^2(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ has a unique representative of the form \[\eta N\wedge T,\ \textrm{with} \ \ \eta\in C^{out}.\] Since the Schouten bracket is trivial on these elements, it follows that \[\pi_{\eta}:=\pi+\eta N\wedge T, \ \ \textrm{with}\ \ \eta\in C^{out}\] is a Poisson structure. In other words, infinitesimal deformations are unobstructed. Note that these are precisely the deformations of $\pi$ constructed by Weinstein in \cite[Prop 6.3]{Wein83} to show that $\spl$ is smoothly degenerate. The Poisson structure $\pi_{\eta}$ differs from $\pi$ only on $O=\{f> 0\}$. 
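A concrete, purely illustrative choice of such a deformation parameter is $\eta=h\circ f$ with
\[h(t)=\begin{cases} e^{-1/t}, & t>0,\\ 0, & t\leq 0;\end{cases}\]
this $h$ is smooth, vanishes flatly at $0$ and satisfies $\supp(h)\subset[0,\infty)$, so indeed $\eta\in C^{out}$.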
Using the coordinates $(\theta,w,f)$ from \eqref{coordinates O} on $O$ and writing $\eta=h\circ f$, with $\supp h\subset [0,\infty)$, we have that: \[\pi_{\eta}|_O=\big(\partial_{\theta}+h\circ f\,\partial_{f} \big)\wedge\partial_w.\] Note that the leaves of $\pi_{\eta}$ are perturbations of the cylinders $S_{\lambda}$. In order to understand their shape, note that the leaves of $\pi_{\eta}$ cut the plane $z=0$ in the flow lines of the Hamiltonian vector field of $-z$: \begin{align*} -\pi_{\eta}^{\sharp}(\dif z)|_{z=0}&=\left(\partial_{\theta}+h\circ f\,\partial_{f}\right)\big|_{z=0}=\partial_{\theta}+\frac{h(r^2)}{2r^2}\, r\partial_{r}\\ &=\left(x\partial_y-y\partial_x \right)+u(x^2+y^2)\left(x\partial_x+y\partial_y \right), \end{align*} where $u(t):=h(t)/(2t)$ is smooth and vanishes flatly at $t=0$. The shape of the flow lines depends on the behaviour of $u$; for example, if $u(r^2)=0$, then the circle of radius $r$ is an orbit; if $u(t)<0$ for $t\in (0,r^2)$, then the flow lines in the disk of radius $r$ spiral towards the origin. It would be interesting to know if all deformations are of this type: \begin{question} Is every Poisson structure near $\pi$ isomorphic to $\pi_{\eta}$ for some $\eta\in C^{out}$? \end{question} There are options in how to formulate this question precisely: for example, one can consider deformations on a small ball around $0$, or one can consider global Poisson structures which are close with respect to the Whitney (open-open) $C^{\infty}$-topology. A related problem is: \begin{question} Is every Poisson structure with isotropy Lie algebra $\spl$ at a zero isomorphic to $\pi_{\eta}$, for some $\eta\in C^{out}$? \end{question} Further, we note that different functions $\eta\in C^{out}$ can yield isomorphic Poisson structures $\pi_{\eta}$. Infinitesimally, this phenomenon arises because the Schouten-Nijenhuis bracket in cohomology is non-trivial (see \eqref{Schouten}) in degrees $(1,2)\mapsto 2$, and this operation encodes the infinitesimal action of outer-automorphisms on deformations. In fact, only the elements $\chi N\in \mathfrak{g}_{N}$ with $\chi\in C^{out}$ act non-trivially. Via the isomorphism \eqref{g_N}, this subalgebra corresponds to the following subalgebra of vector fields on $\Yr$: \[\mathfrak{X}^1_{\to}(\R)=\big\{h\partial_t\ |\ h|_{(-\infty,0]}=0\big\}\subset \mathfrak{X}_0^1(\Yr),\] with corresponding subgroup: \[\mathrm{Diff}_{\to}(\R)=\big\{\phi\in \mathrm{Diff}(\R)\ | \ \phi|_{(-\infty,0]}=\mathrm{id}\big\}\subset \mathrm{Diff}_{0}(\Yr),\] where in both cases we use the diagonal inclusion. The action of $\widetilde{\phi}$, with $\phi\in \mathrm{Diff}_{\to}(\R)$, on $\pi_{\eta}$, with $\eta=h\circ f$, is given by: \[\widetilde{\phi}^*(\pi_{\eta})= \pi_{\eta'}, \ \ \ \eta'=\frac{h\circ \phi}{\phi'}\circ f \in C^{out}.\] Note also that $\tau$ acts non-trivially: \[\tau^*(\pi_{\eta})=\pi_{-\eta}.\] It would be interesting to know whether these are all the identifications: \begin{question} For $i=1,2$, consider $h_i\in C^{\infty}(\R)$, with $\supp(h_i)\subset [0,\infty)$, and let $\eta_i:=h_i\circ f$. If the Poisson structures $\pi_{\eta_1}$ and $\pi_{\eta_2}$ are isomorphic, does there exist $\phi\in \mathrm{Diff}_{\to}(\R)$ such that \[h_2=\frac{h_1\circ \phi}{\phi'}\ \ \ \textrm{or}\ \ \ h_2=-\frac{h_1\circ \phi}{\phi'}\ ?\] \end{question} These formulas come from the adjoint action of $\mathrm{Diff}_{\to}(\R)$. 
Namely, if $\phi\in \mathrm{Diff}_{\to}(\R)$ and $h\partial_t\in \mathfrak{X}_{\to}^1(\R)$, then \[\phi^*(h\partial_t)=\frac{h\circ \phi}{\phi'}\partial_t.\] Thus, we obtain a bijection: \[H^2(\mathfrak{sl}_2^*(\mathbb{R}),\pi)/G_N\simeq \big(\mathfrak{X}_{\to}^1(\R)/\mathrm{Diff}_{\to}(\R)\big)/\{\pm 1\},\] where the right hand-side can be thought of as adjoint orbits of $\mathrm{Diff}_{\to}(\R)$, up to $\pm 1$. Assuming that the answers to the last two questions are positive, this space is a model for the Poisson-moduli space around $\pi$. \subsection{The Koszul-Brylinski double complex} Dual to the Poisson complex of a Poisson manifold $(M,\pi)$, Koszul \cite{Kos85} introduced a differential on differential forms: \begin{align*} \delta_{\pi}:=\iota_{\pi}\circ \dif -\dif \circ \iota_{\pi}: \Omega^{\bullet}(M)\to \Omega^{\bullet-1}(M), \ \ \ \ \delta_{\pi}^2 =0, \end{align*} which yields the Poisson homology groups: $H_{\bullet}(M,\pi)$. Moreover, one has \begin{align*} \dif\circ \delta_{\pi} +\delta_{\pi}\circ \dif =0, \end{align*} and therefore we have a bidifferential complex $(\Omega^{\bullet}(M),\dif,\delta_{\pi})$. In \cite{Bry88}, Brylinski gave a more explicit formula of $\delta_{\pi}$ and studied this complex in more detail. Moreover, by \cite{Xu99} and \cite{ELuWein}, for an oriented, unimodular Poisson manifold, the contraction with an $\mathfrak{ham}$-invariant volume form $\mu$ gives an isomorphism between the Poisson cohomology complex and the Poisson homology complex: \begin{align*} \mu^{\flat}: (\mathfrak{X}^{k}(M),\dif_{\pi})\xrightarrow{\raisebox{-0.2 em}[0pt][0pt]{\smash{\ensuremath{\sim}}}} (\Omega^{m-k}(M),\delta_{\pi}). \end{align*} For $(\mathfrak{sl}_2^*(\mathbb{R}) ,\pi)$ the standard volume form $\mu=\dif x\wedge \dif y \wedge \dif z$ is invariant. Applying contraction with $\mu$ on the representatives for Poisson cohomology from Theorem \ref{main theorem} we obtain: \begin{align*} H_3(\mathfrak{sl}_2^*(\mathbb{R}),\pi)\ \ &\simeq\ \ \Cas(\mathfrak{sl}_2^*(\mathbb{R}))\cdot \mu,\\ H_2(\mathfrak{sl}_2^*(\mathbb{R}),\pi)\ \ &\simeq\ \ C^{out}\cdot\dif f\wedge \dif \theta\ \oplus\ C^{flat}\cdot \overline{\omega},\\ H_1(\mathfrak{sl}_2^*(\mathbb{R}),\pi)\ \ &\simeq\ \ C^{out}\cdot \dif \theta,\\ H_0(\mathfrak{sl}_2^*(\mathbb{R}),\pi)\ \ &\simeq\ \ \R[[f]], \end{align*} where $\overline{\omega}:=\mu^{\flat}(N)$ is a closed extension to $\R^3\backslash Z$ of the leaf-wise symplectic form. Using this, we calculate the de Rham cohomology of the Poisson homology: \begin{corollary} We have that: \[ H_{DR}^{k}(H_{\bullet}(\mathfrak{sl}_2^*(\mathbb{R}),\pi),\dif)\simeq \begin{cases} \R[[f]], & k=0, \ \textrm{or}\ \ k=3;\\ 0, & k=1, \ \textrm{or}\ \ k=2.\\ \end{cases}\] \end{corollary} \section{Formal Poisson cohomology of $\mathfrak{sl}_2^*(\mathbb{R})$}\label{section: formal} We begin this section by introducing flat and formal Poisson cohomology. For the linear Poisson structure on the dual of a Lie algebra, we identify these cohomologies with the Chevalley-Eilenberg cohomology of the Lie algebra with coefficients in certain representations. Then we specialize to semi-simple Lie algebras, for which, using standard results from Lie theory, we calculate the formal Poisson cohomology (Proposition \ref{generators}); and explicitly, for $\spl$. An important consequence (Corollary \ref{ses smooth formal}) is that, for semi-simple Lie algebras, the calculation of Poisson cohomology can be reduced to that of flat Poisson cohomology. 
\subsection{Flat and formal Poisson cohomology} Let $(M,\pi)$ be a Poisson manifold. Its Poisson cohomology $H^{\bullet}(M,\pi)$ is the cohomology of the chain complex: \[(\mathfrak{X}^{\bullet}(M),\dif_{\pi}:=[\pi,\cdot]).\] For $p\in M$, let $\mathfrak{X}_p^{\bullet}(M)$ denote the set of multivector fields that are flat at $p$. Since $\mathfrak{X}_p^{\bullet}(M)$ is a Lie ideal in $\mathfrak{X}^{\bullet}(M)$, it is also a subcomplex with respect to $\dif_{\pi}$. The cohomology of this complex, denoted $H^{\bullet}_{p}(M,\pi)$, will be called the flat Poisson cohomology at $p$. By Borel's Lemma on the existence of smooth functions with a prescribed Taylor series, we have the following identification for the quotient: \[\mathfrak{X}^{\bullet}(M)/\mathfrak{X}_p^{\bullet}(M)\simeq \wedge^{\bullet}T_pM\otimes \R[[T^*_pM]],\] where $\R[[T^*_pM]]$ denotes the algebra of formal power series of functions at $p$. Thus, we obtain a short exact sequence of complexes \begin{align*} 0\to(\mathfrak{X}^{\bullet}_p(M),\dif_{\pi})\to(\mathfrak{X}^{\bullet}(M),\dif_{\pi})\stackrel{j^{\infty}_p}{\longrightarrow} (\wedge^{\bullet}T_pM\otimes \R[[T^*_pM]],\dif_{j^{\infty}_{p}\pi})\to 0, \end{align*} where $j^{\infty}_p$ is the infinite jet map. The cohomology of the quotient complex, denoted by $H^{\bullet}_{F,p}(M,\pi)$, will be called the formal Poisson cohomology at $p$. The short exact sequence induces a long exact sequence in cohomology: \begin{equation}\label{jet} \ldots \stackrel{j^{\infty}_p}{\to} H^{q-1}_{F,p}(M,\pi)\stackrel{\partial}{\to} H^{q}_{p}(M,\pi)\to H^{q}(M,\pi)\stackrel{j^{\infty}_p}{\to}H^{q}_{F,p}(M,\pi )\stackrel{\partial}{\to}\ldots. \end{equation} \subsection{Poisson cohomology of linear Poisson structures} A Poisson structure on a vector space is called linear if the set of linear functions is closed under the Poisson bracket. Such Poisson structures are in one-to-one correspondence with Lie algebra structures on the dual vector space. Namely, let $(\mathfrak{g},[\cdot,\cdot])$ be a real, finite-dimensional Lie algebra. The associated linear Poisson structure $\pi$ on $\mathfrak{g}^*$ is determined by the condition that the map $l:\mathfrak{g}\to C^{\infty}(\mathfrak{g}^*)$, which identifies $\mathfrak{g}$ with $(\mathfrak{g}^*)^*$, is a Lie algebra homomorphism: \[\{l_X,l_Y\}=l_{[X,Y]},\] where $\{\cdot,\cdot\}$ is the Poisson bracket on $C^{\infty}(\mathfrak{g}^*)$ corresponding to $\pi$. In particular, $C^{\infty}(\mathfrak{g}^*)$ becomes a $\mathfrak{g}$-representation, with $X\cdot f:=\{l_X,f\}$. Moreover, the Poisson complex of $(\mathfrak{g}^*,\pi)$ is isomorphic to the Chevalley-Eilenberg complex of $\mathfrak{g}$ with coefficients in $C^{\infty}(\mathfrak{g}^*)$ \cite[Prop 7.14]{Laurent2013} \begin{equation}\label{iso_complexes} (\mathfrak{X}^{\bullet}(\mathfrak{g}^*),\dif_{\pi})\simeq (\wedge^{\bullet}\mathfrak{g}^*\otimes C^{\infty}(\mathfrak{g}^*),\dif_{EC}). \end{equation} This identification allows for the use of techniques from Lie theory in the calculation of Poisson cohomology, as we will do in the sequel. Suppose that $R$ is a representation of $\g$, and denote by $R^{\g}\subset R$ the set of $\g$-invariant elements. Since $R^{\g}$ is a trivial subrepresentation, we can identify \[H^q(\mathfrak{g})\otimes R^{\mathfrak{g}} \simeq H^q(\mathfrak{g},R^{\mathfrak{g}}).\] Moreover, the inclusion $\iota:R^{\g}\hookrightarrow R$ induces a map in cohomology: \begin{align}\label{subrep} \iota_{*} : H^q(\mathfrak{g})\otimes R^{\mathfrak{g}} \to H^q(\mathfrak{g},R). 
\end{align} For $q=0$ this map is always an isomorphism: $R^{\g}\simeq H^0(\g,R)$. In general, $\iota_{*}$ need not be injective or surjective. However, by \cite[Thm 13]{Hoch1953}, if $\g$ is semisimple and $R$ is finite-dimensional, then \eqref{subrep} is an isomorphism for all $q\geq 0$. The same conclusion holds also in the following more general situation, which we will use in the next subsection: \begin{lemma} \label{iso cohomology} Let $\mathfrak{g}$ be a semisimple Lie algebra. Let $R=\prod_{\alpha\in A}R_{\alpha}$ be a direct product of finite-dimensional $\g$-representations. Then the map in \eqref{subrep} is an isomorphism for all $q\geq 0$. \end{lemma} \begin{proof} Since $\g$ is finite-dimensional, the Chevalley-Eilenberg complex of $R$ is canonically isomorphic to the direct product of complexes: \begin{equation}\label{eq:prod_complexes} (\wedge^{\bullet}\g\otimes R, \dif_{EC})\simeq \prod_{\alpha\in A}(\wedge^{\bullet}\g\otimes R_{\alpha}, \dif_{EC}), \end{equation} which yields an isomorphism in cohomology $H^{\bullet}(\g, R)\simeq \prod_{\alpha\in A}H^{\bullet}(\g, R_{\alpha})$. Moreover, under the isomorphism \eqref{eq:prod_complexes}, the subcomplexes of invariant elements are in one-to-one correspondence: \[(\wedge^{\bullet}\g\otimes R^{\g}, \dif_{EC})\simeq \prod_{\alpha\in A}(\wedge^{\bullet}\g\otimes R^{\g}_{\alpha}, \dif_{EC}),\] and therefore $H^{\bullet}(\g)\otimes R^{\g}\simeq \prod_{\alpha\in A}H^{\bullet}(\g)\otimes R^{\g}_{\alpha}$. This identifies the map $\iota_*$ for $R$ with the direct product of the maps $\iota_{\alpha *}$ for $R_{\alpha}$: \[\iota_*\simeq \prod_{\alpha\in A}\iota_{\alpha *}: \prod_{\alpha\in A}H^{\bullet}(\g)\otimes R^{\g}_{\alpha}\to \prod_{\alpha\in A}H^{\bullet}(\g, R_{\alpha}).\] Since $\g$ is semisimple and all $R_{\alpha}$'s are finite-dimensional, each $\iota_{\alpha *}$ is an isomorphism \cite[Thm 13]{Hoch1953}, and therefore so is their product $\iota_*$. \end{proof} We are interested in the representation $R=C^{\infty}(\g^*)$, whose space of invariants is the space of Casimir functions: \begin{align*} H^0(\g^*,\pi)=\Cas(\g^*)=C^{\infty}(\g^*)^{\g}. \end{align*} For a semisimple Lie algebra $\g$, the map \eqref{subrep} with $R=C^{\infty}(\g^*)$ is an isomorphism for all $q\geq 0$ if and only if $\g$ is compact. Namely, by the construction in \cite{Wein87}, it fails at $q=1$ for every non-compact semisimple Lie algebra. For compact Lie algebras, this was proven in \cite[Thm 3.2]{GW92}, and so, by \eqref{iso_complexes}, we have that: \[H^{q}(\g^*,\pi)=H^{q}(\g,C^{\infty}(\g^*))\simeq H^q(\g)\otimes\Cas(\g^*).\] \subsection{Formal Poisson cohomology of linear Poisson structures} Under the isomorphism \eqref{iso_complexes}, the subcomplex of multivector fields on $\g^*$ that are flat at $0$ corresponds to the Chevalley-Eilenberg complex of $\g$ with coefficients in the subrepresentation $C_0^{\infty}(\g^*)\subset C^{\infty}(\g^*)$ consisting of smooth functions that are flat at zero: \[\mathfrak{X}^{\bullet}_0(\mathfrak{g}^*)\simeq \wedge^{\bullet}\mathfrak{g}^*\otimes C^{\infty}_0(\mathfrak{g}^*).\] Therefore, the quotient complex is naturally identified with \[\wedge^{\bullet} \g^*\otimes \R[[\g]],\] where $\R[[\g]]=C^{\infty}(\mathfrak{g}^*)/C^{\infty}_0(\mathfrak{g}^*)$ is the ring of formal power series of functions on $\g^*$. 
Thus, the formal Poisson cohomology at $0\in \g^*$, which for simplicity we denote by $H^{\bullet}_{F}(\g^* ,\pi )$, is naturally isomorphic to the cohomology of $\g$ with coefficients in the representation $\R[[\g]]$ \[H^{\bullet}_{F}(\g^*)\simeq H^{\bullet}(\g,\R[[\g]]).\] As a representation, $\R[[\g]]$ is isomorphic to the product of the symmetric powers of the adjoint representation: \[\R[[\g]]\simeq \prod_{k\geq 0} S^k(\g).\] Thus Lemma \ref{iso cohomology} implies: \begin{proposition}\label{formal cohomology} For the formal Poisson cohomology at $0$ of a semisimple Lie algebra $\g$ we have that: \begin{align*} H^{\bullet}_{F}(\g^*,\pi)&\simeq H^{\bullet}(\g)\otimes \Cas _{F}(\g^*), \end{align*} where $\Cas_{F}(\g^*)=\R[[\g]]^{\g}$ is the set of formal Casimir functions. \end{proposition} On the other hand, the space of formal Casimir functions is well-understood: \begin{proposition}\label{generators} Let $(\g^* ,\pi ) $ be the linear Poisson structure associated to a semisimple Lie algebra $\g$. Then there exist $n=\dim \g - \max \mathrm{rank}(\pi)$ algebraically independent homogeneous polynomials $f_1 ,\dots , f_n$ such that \begin{align*} \Cas_F(\g^*)=\R[[f_1 ,\dots ,f_n ]] \subset \R[[\g]], \end{align*} where $\R[[f_1,\dots ,f_n]]$ denotes formal power series in the polynomials $f_i$. \end{proposition} \begin{proof} \cite[Thm 7.3.8]{Dixmier96} gives such polynomials $f_1 ,\dots ,f_n$ which generate the algebra of $\g$-invariant polynomials $\R[\g]^{\g }$. Clearly $\R[[f_1 ,\dots ,f_n ]]\subset \Cas_F(\g^*)$. For the other inclusion, let $g\in \Cas_{F}(\g^*)$. Let $g_k\in S^k(\g)$ denote the homogeneous component of degree $k$ of $g$. Since $g_k\in S(\g)^{\g}$ and the $f_i$'s are algebraically independent, there is a unique polynomial $p_k\in \R[x_1,\ldots,x_n]$ such that $g_k=p_k(f_1,\ldots,f_n)$. Note that each monomial of $p_k$ has total degree at least $k/D$, where $D:=\mathrm{max}\ \mathrm{degree}(f_i)$. Therefore, $p=\sum_{k\geq 0}p_k$ represents an element in $\R[[x_1,\ldots,x_n]]$, which satisfies $g=p(f_1,\ldots,f_n)$. \end{proof} The invariant polynomials on $\mathfrak{sl}_2^*(\mathbb{R})$ are generated by the function $f$ from \eqref{Casimir}. We conclude: \begin{corollary}\label{formal cohomology sl2} The formal Poisson cohomology of $\mathfrak{sl}_2^*(\mathbb{R})$ at 0 is given by \[H^0_F(\mathfrak{sl}_2^*(\mathbb{R}),\pi)=\R[[f]], \ \ H^1_F(\mathfrak{sl}_2^*(\mathbb{R}),\pi)=0,\] \[H^2_F(\mathfrak{sl}_2^*(\mathbb{R}),\pi)=0, \ \ H^3_F(\mathfrak{sl}_2^*(\mathbb{R}),\pi)=\R[[f]] \otimes \partial_x\wedge\partial_y\wedge \partial_z.\] \end{corollary} \begin{proof} By the previous propositions $H^{\bullet}_F(\mathfrak{sl}_2^*(\mathbb{R}),\pi)= H^{\bullet}(\spl)\otimes \R[[f]]$. Clearly $H^{0}(\spl)=\R$, and it is easy to see that $H^{3}(\spl)=\R \partial_x\wedge\partial_y\wedge \partial_z$. In degrees 1 and 2, the conclusion follows by the Whitehead lemma, which states that for a semisimple Lie algebra $\g$, $H^1(\g)=0$ and $H^2(\g)=0$. 
\end{proof} The previous propositions reduce the calculation of Poisson cohomology of a semisimple Lie algebra to that of flat Poisson cohomology at 0: \begin{corollary}\label{ses smooth formal} For a semi-simple Lie algebra $\g$, the Poisson cohomology of $(\g^*,\pi)$ fits into the short exact sequence \[0\to H^{\bullet}_0(\g^*,\pi)\to H^{\bullet}(\g^*,\pi)\stackrel{j^{\infty}_0}{\to} H^{\bullet}_F(\g^*,\pi)\to 0.\] \end{corollary} \begin{proof} Using the long exact sequence \eqref{jet}, it suffices to show that $j^{\infty}_0$ is surjective in cohomology. By Proposition \ref{formal cohomology}, every element in $H^{\bullet}_F(\g^*,\pi)$ has a representative of the form $w=\sum_i \omega_i\otimes g_i$, where $\omega_i\in \wedge^{\bullet}\g$ are closed elements, and $g_i\in \Cas_F(\g^*)$. By Proposition \ref{generators} we can write $g_i=h_i(f_1,\dots ,f_n)$, for some $h_i\in \R[[x_1,\dots ,x_n ]]$. By Borel's lemma, there are smooth functions $\overline{h}_i\in C^{\infty}(\R^n)$ such that $j^{\infty}_0 \overline{h}_i=h_i$. Therefore \[\overline{w}=\sum_i \omega_i\otimes \overline{h}_i(f_1,\dots ,f_n)\in \wedge^{\bullet}\g\otimes C^{\infty}(\g^*)\simeq \mathfrak{X}^{\bullet}(\g^*)\] is a closed element satisfying $j^{\infty}_0 \overline{w}=w$. \end{proof} \begin{remark} The versions of Propositions \ref{formal cohomology} and \ref{generators} with polynomials instead of formal power series are also valid (see \cite{Laurent2013} and \cite[Thm 7.3.8]{Dixmier96}). We expect these results to hold also for the algebra of analytic functions $C^{\omega}(\g^*)$, with a possible proof using the techniques from \cite{Conn84}. \end{remark} \section{Flat Poisson cohomology of $\mathfrak{sl}_2^*(\mathbb{R})$}\label{section: flat PC} The Poisson manifold $\mathfrak{sl}_2^*(\mathbb{R})\backslash\{0\}$ is regular, of corank one, and unimodular. Such Poisson structures can be described in terms of foliated cohomology \cite{Vaisman,Gammella}. We will explain this in the first two subsections. In the third subsection, we introduce the flat foliated cohomology of $\mathfrak{sl}_2^*(\mathbb{R})$, which we use in the last subsection to calculate the flat Poisson cohomology. This, together with Corollaries \ref{formal cohomology sl2} and \ref{ses smooth formal}, complete the description of the Poisson cohomology $\mathfrak{sl}_2^*(\mathbb{R})$ from Theorem \ref{main theorem}. The calculation of the flat foliated cohomology will be left for Sections \ref{section flat foli} and \ref{section: analysis}, and is the most technically involved part of the paper. \subsection{Foliated cohomology} We will follow \cite[Chp 1]{Torres15}. For a regular foliation $\mathcal{F}\subset TM$ on a manifold $M$, we denote the complex of foliated forms by \[(\Omega^{\bullet}(\mathcal{F}), \dif_{\mathcal{F}});\] i.e.\ $\Omega^{\bullet}(\mathcal{F}):=\Gamma(\wedge^{\bullet} \mathcal{F}^*)$ consists of smooth families of differential forms on the leaves of $\mathcal{F}$, and $\dif_{\mathcal{F}}$ is the leafwise de Rham differential. The resulting cohomology is called the foliated cohomology of $\F$, and is denoted by $H^{\bullet}(\F)$. The normal bundle to $\F$, denoted by $\nu:=TM/\F$, carries the Bott-connection \[ \nabla:\Gamma(\F)\times \Gamma(\nu)\to \Gamma(\nu),\ \ \ \ \nabla_X(\overline{V}):=\overline{[X,V]}, \] which, via the usual Koszul-type formula, induces a differential on $\nu$-valued forms $(\Omega^{\bullet}(\F,\nu),\dif_{\nabla})$, which yields the cohomology of $\F$ with values in $\nu$, denoted $H^{\bullet}(\F,\nu)$. 
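Explicitly, the Koszul-type formula reads: for $\omega\in\Omega^{k}(\F,\nu)$ and $X_0,\ldots,X_k\in\Gamma(\F)$,
\begin{align*}
(\dif_{\nabla}\omega)(X_0,\ldots,X_k)=&\sum_{i=0}^{k}(-1)^{i}\nabla_{X_i}\big(\omega(X_0,\ldots,\widehat{X_i},\ldots,X_k)\big)\\
&+\sum_{0\leq i<j\leq k}(-1)^{i+j}\omega\big([X_i,X_j],X_0,\ldots,\widehat{X_i},\ldots,\widehat{X_j},\ldots,X_k\big).
\end{align*}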
Similarly, the dual connection on $\nu^*$ gives rise to the complex $(\Omega^{\bullet}(\F,\nu^*),\dif_{\nabla})$, with cohomology groups $H^{\bullet}(\F,\nu^*)$. Assume now that $\F$ has codimension one. Then we have the short exact sequence of complexes: \begin{equation}\label{ses fol-deRham} 0\to (\Omega^{\bullet-1}(\F,\nu^*),\dif_{\nabla})\to (\Omega^{\bullet}(M),\dif)\stackrel{r}{\to}(\Omega^{\bullet}(\F),\dif_{\F})\to 0, \end{equation} where $r$ is the restriction map, and an element $\eta$ in the kernel of $r$ is canonically identified with the element $u(\eta)\in \Omega^{\bullet-1}(\F,\nu^*)$, defined by: \[\langle u(\eta),\overline{V}\rangle= r(i_{V}\eta), \ \ \overline{V}\in \nu.\] Assume in addition that $\nu^*$ is orientable, and let $\varphi\in \Gamma(\nu^*)\subset \Omega^1(M)$ be a defining 1-form for $\F$, i.e.\ $\varphi$ is nowhere zero and $\F=\ker (\varphi)$. Then $\ker(r)$ is the differential ideal generated by $\varphi$ \begin{equation}\label{the complex} (\varphi\wedge\Omega^{\bullet-1}(M),\dif) \end{equation} The foliation is called unimodular, if there exists a defining one-form $\varphi$ which is closed: $\dif \varphi=0$. Such a one-form is parallel for the dual of the Bott-connection, and it induces an isomorphism of complexes: \[(\Omega^{\bullet}(\F),\dif_{\F})\xrightarrow{\raisebox{-0.2 em}[0pt][0pt]{\smash{\ensuremath{\sim}}}} (\Omega^{\bullet}(\F,\nu^*),\dif_{\nabla}), \ \ \eta\mapsto \varphi \otimes \eta.\] Similarly, the dual of $\varphi$ gives a parallel section of $\nu$, and we obtain an isomorphism of complexes: \[(\Omega^{\bullet}(\F,\nu),\dif_{\nabla})\xrightarrow{\raisebox{-0.2 em}[0pt][0pt]{\smash{\ensuremath{\sim}}}} (\Omega^{\bullet}(\F),\dif_{\F}), \ \ \eta\mapsto \langle \varphi, \eta\rangle.\] \subsection{Cohomology of codimension one symplectic foliations} Let $(M,\pi)$ be a regular Poisson manifold of corank one, and denote its symplectic foliation by $(\mathcal{F},\omega)$, where $\F=\pi^{\sharp}(T^*M)$ and $\omega\in \Omega^2(\F)$ is the leafwise symplectic structure. The Poisson complex of $\pi$ fits into a short exact sequence: \begin{align}\label{ses poisson} 0\to (\Omega^{\bullet}(\mathcal{F}),\dif_{\mathcal{F}})\xrightarrow{j} (\mathfrak{X}^{\bullet}(M),\dif_{\pi})\xrightarrow{p} (\Omega^{\bullet-1}(\mathcal{F},\nu),\dif_{\nabla})\to 0. \end{align} Regarding the Poisson complex as the de Rham complex of the Lie algebroid $T^*M$, the map $j$ is obtained by pulling back Lie algebroid forms via the Lie algebroid map $\pi^{\sharp}:T^*M\to \F$; explicitly, \[j(\eta)=(-\pi^{\sharp})(\eta),\] where we denoted by \[(-\pi^{\sharp}):\wedge^{\bullet} \F^*\xrightarrow{\raisebox{-0.2 em}[0pt][0pt]{\smash{\ensuremath{\sim}}}} \wedge^{\bullet}\F\] the isomorphism induced by $-\pi^{\sharp}$. For the cokernel, we have the canonical isomorphism $\wedge^{\bullet} TM/\wedge^{\bullet} \F\simeq \nu\otimes \wedge^{\bullet-1}\F$; and the map $p$ is obtained by using the isomorphism \[(-\omega_b)=(-\pi^{\sharp})^{-1}:\wedge^{\bullet} \F \xrightarrow{\raisebox{-0.2 em}[0pt][0pt]{\smash{\ensuremath{\sim}}}} \wedge^{\bullet} \F^*.\] Explicitly, \[ \langle \alpha, p(w)\rangle:=(-\omega_b) (i_{\alpha}w), \ \ \alpha\in \nu^*,\] where we note that, $i_{\alpha}w\in \wedge^{k-1}\F$. 
Therefore there is a long exact sequence \begin{equation}\label{les poisson}\arraycolsep=1.4pt \begin{array}{ccccccccccc} \dots &\xrightarrow{\partial}& H^k(\mathcal{F}) &\xrightarrow{j}&H^k(M,\pi)&\xrightarrow{p}&H^{k-1}(\mathcal{F},\nu)&\xrightarrow{\partial}&H^{k+1}(\mathcal{F})&\to&\dots \end{array} \end{equation} The boundary morphism $\partial$ is up to a sign given by the cup-product with the class $\dif_{\nu}[\omega]\in H^2(\mathcal{F},\nu^*)$, where $\dif_{\nu}:H^{\bullet}(\F)\to H^{\bullet}(\F,\nu^*)$ is the boundary map of the long exact sequence associated to \eqref{ses fol-deRham}. This class has the geometric interpretation of being the transverse variation of the leafwise symplectic form. Assume now that $\F$ is coorientable and unimodular, and let $\varphi$ be a closed defining one-form. Then $\varphi$ gives flat trivializations of the bundles $\nu$ and $\nu^*$, and the cohomology of $\F$ with trivial coefficients and with coefficients in these bundles can all be calculated using the subcomplex \eqref{the complex} of the de Rham complex, which will be useful in our situation. Thus \eqref{ses poisson} can be rewritten as the short exact sequence: \begin{align}\label{ses poisson unimod} 0\to (\varphi\wedge \Omega^{\bullet}(M),\dif)\xrightarrow{j_{\varphi}} (\mathfrak{X}^{\bullet}(M),\dif_{\pi})\xrightarrow{p_{\varphi}} (\varphi\wedge \Omega^{\bullet-1}(M),\dif)\to 0. \end{align} Unravelling the identifications made above, the maps $j_{\varphi}$ and $p_{\varphi}$ can be made explicit. Namely, let $V$ be a vector field on $M$ such that $i_V\varphi=1$, and let $\widetilde{\omega}\in \Omega^2(M)$ be the unique extension of $\omega$ such that $i_{V}\widetilde{\omega}=0$. Then \[j_{\varphi}=(-\pi^{\sharp})\circ i_{V}, \ \ \ \ p_{\varphi}=e_{\varphi}\circ (-\widetilde{\omega}_b)\circ i_{\varphi},\] where $e_{\varphi}(-)=\varphi\wedge (-)$ is the exterior product with $\varphi$. Even though it was convenient to use $V$ (and $\widetilde{\omega}$) to write these formulas, the maps $j_{\varphi}$ and $p_{\varphi}$ are independent of this choice. However, $V$ allows us to build dual maps: \[ 0\leftarrow \varphi\wedge \Omega^{\bullet}(M)\xleftarrow{p_{V}} \mathfrak{X}^{\bullet}(M)\xleftarrow{j_{V}} \varphi\wedge \Omega^{\bullet-1}(M)\leftarrow 0,\] with \begin{equation}\label{dual_maps} p_V=e_{\varphi}\circ (-\widetilde{\omega}_b), \ \ \ j_V=e_V\circ (-\pi^{\sharp})\circ i_V, \end{equation} which satisfy the homotopy relations: \begin{equation}\label{dual_maps_relations} p_V\circ j_V=0,\ \ p_V\circ j_{\varphi}=\mathrm{Id}, \ \ p_{\varphi}\circ j_{V}=\mathrm{Id}, \ \ \mathrm{Id}=j_V\circ p_{\varphi}+j_{\varphi}\circ p_V. \end{equation} It can be checked that the maps $p_V$ and $j_V$ are chain morphisms precisely when $V$ is a Poisson vector field, which is also equivalent to $\widetilde{\omega}$ being closed; in this case the pair $(\varphi,\widetilde{\omega})$ is a cosymplectic structure on $M$. In general, we can write $\dif\widetilde{\omega}=\varphi\wedge \xi$, where $\xi=i_V\dif\widetilde{\omega}$. 
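Indeed, since $\omega$ is leafwise closed, $r(\dif\widetilde{\omega})=\dif_{\F}\omega=0$, so $\dif\widetilde{\omega}$ lies in the ideal $\varphi\wedge\Omega^{2}(M)$; writing $\dif\widetilde{\omega}=\varphi\wedge\beta$ and contracting with $V$ gives $\xi=i_V\dif\widetilde{\omega}=\beta-\varphi\wedge i_V\beta$, which satisfies $\varphi\wedge\xi=\varphi\wedge\beta=\dif\widetilde{\omega}$ and $i_V\xi=0$.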
Even though we will not use this later, let us remark that the boundary map for the long exact sequence in cohomology induced by \eqref{ses poisson unimod} is given up to sign by the chain map: \[ e_{\xi}:(\varphi\wedge \Omega^{\bullet-1}(M),\dif)\to (\varphi\wedge \Omega^{\bullet+1}(M),\dif), \ \ \eta\mapsto \xi\wedge \eta.\] \subsection{A short exact sequence for the flat Poisson complex} If we remove the origin from the Poisson manifold $(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$, we obtain a codimension one symplectic foliation $(\F,\omega)$ which is unimodular with defining one-form $\dif f$. Therefore, the techniques from the previous section can be used to describe its cohomology. Consider the vector field on $\R^3\backslash\{0\}$ \begin{align*} V:=&\frac{1}{2(x^2+y^2+z^2)}(x\partial_x+y\partial_y-z\partial_z), \end{align*} and note that $\dif f (V)=1$. The unique extension $\widetilde{\omega}$ of the leafwise symplectic structure satisfying $i_V\widetilde{\omega}=0$ is given by \begin{align*} \widetilde{\omega}:=-\frac{1}{x^2+y^2+z^2}(x\dif y\wedge \dif z +y\dif z\wedge \dif x -z\dif x \wedge \dif y). \end{align*} Therefore, the Poisson complex of $(\mathfrak{sl}_2^*(\mathbb{R})\backslash\{0\},\pi)$ fits into the short exact sequence \eqref{ses poisson unimod}, with $\varphi=\dif f$. However, since the singularities of $V$ and $\widetilde{\omega}$ are of finite order, we can apply the same reasoning and obtain a similar short exact sequence for the flat Poisson cohomology. Denote by $\Omega_0^{\bullet}(\R^3)$ the space of differential forms on $\R^3$ which are flat at zero. The following holds: \begin{proposition}\label{ses for flat things} The flat Poisson complex of $\mathfrak{sl}_2^*(\mathbb{R})$ fits into the short exact sequence: \[ 0\to (\dif f\wedge \Omega_0^{\bullet}(\R^3),\dif)\xrightarrow{j_{\dif f}} (\mathfrak{X}^{\bullet}_0(\mathfrak{sl}_2^*(\mathbb{R})),\dif_{\pi})\xrightarrow{p_{\dif f}} (\dif f\wedge \Omega^{\bullet-1}_0(\R^3),\dif)\to 0, \] where $j_{\dif f}=(-\pi^{\sharp})\circ i_{V}$ and $p_{\dif f}=e_{\dif f}\circ (-\widetilde{\omega}_b)\circ i_{\dif f}$. \end{proposition} \begin{proof} First, note that $i_V$ is indeed well-defined: if $\eta$ is a flat form at $0$, then $i_V\eta$ extends smoothly at zero and is also flat. The same applies also to the map $\widetilde{\omega}_b$, hence the maps $j_{\dif f}$ and $p_{\dif f}$ are well-defined. They are chain maps because they satisfy this condition away from $0$. In order to show that the sequence is exact, note that also the maps $p_V$ and $j_V$ defined in \eqref{dual_maps} induce maps on flat forms/multi-vector fields. Relations \eqref{dual_maps_relations} (which still hold, because they hold away from the origin) imply that the sequence in the statement is indeed exact. 
\end{proof} We call the cohomology of the complex \begin{equation}\label{ffcomplex} (\dif f\wedge\Omega^{\bullet}_0(\R^3),\dif) \end{equation} the flat foliated cohomology, and denote it by \[H^{\bullet}_0(\F,\nu^*).\] Consider the angular one-form on $\R^3\backslash\{x=y=0\}$ given by \[\dif \theta=\frac{1}{x^2+y^2}(-y\dif x+x\dif y).\] The proof of the following result will occupy Sections \ref{section flat foli} and \ref{section: analysis}: \begin{theorem}\label{flat foliated cohomology} Cohomology classes in $H^{k}_0(\mathcal{F},\nu^*)$ have unique representatives of the form: \begin{itemize} \item for $k=0$ \[ \chi \dif f, \ \ \textrm{where} \ \ \chi \in C^{flat},\] \item for $k=1$ \[\eta \dif f \wedge \dif \theta, \ \ \textrm{where} \ \ \eta\in C^{out},\] \item and for $k\geq 2$, $H^{k}_0(\mathcal{F},\nu^*)=0$. \end{itemize} \end{theorem} \subsection{Higher Poisson cohomology groups} In this subsection we finish the proof of Theorem \ref{main theorem}. Since $H^2_0(\F,\nu^*)=0$, the long exact sequence in cohomology induced by the short exact sequence in Proposition \ref{ses for flat things} yields:\\ \noindent\underline{In degree $0$}: $j_{\dif f}$ induces an isomorphism: \[H^{0}_0(\F,\nu^*)\simeq C^{flat}.\] This isomorphism is simply $j_{\dif f}(\chi \dif f)=\chi$.\\ \noindent\underline{In degree $1$}: $j_{\dif f}$ and $p_{\dif f}$ induce a short exact sequence: \begin{equation}\label{ses in coho} 0\to H^{1}_0(\F,\nu^*)\to H^1_0(\mathfrak{sl}_2^*(\mathbb{R}),\pi)\to H^0_0(\F,\nu^*)\to 0. \end{equation} The map $j_{\dif f}$ acts on the representatives of $H^1_0(\F,\nu^*)$ as follows: \[j_{\dif f}(\eta \dif f \wedge \dif \theta)=-\eta\pi^{\sharp}(\dif\theta), \ \ \textrm{where}\ \eta\in C^{out}.\] Since $T|_O=\pi^{\sharp}(\dif \theta)|_O$, the image of $j_{\dif f}$ consists of all the elements \[[\eta T],\ \ \ \textrm{with}\ \ \eta \in C^{out}.\] On the other hand, since $\Lie_Nf=1$, note that \[p_{\dif f}(\chi N)= \chi\dif f, \ \ \ \textrm{for all } \ \chi\in C^{flat}.\] Therefore the set consisting of classes of the form $[\chi N]$ is sent by $p_{\dif f}$ onto $H^0_0(\F,\nu^*)$. Exactness of the sequence \eqref{ses in coho} and Theorem \ref{flat foliated cohomology} imply that elements in $H^1_{0}(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ can be uniquely represented as \[\eta T+\chi N,\] with $\eta\in C^{out}$ and $\chi\in C^{flat}$. By Corollaries \ref{formal cohomology sl2} and \ref{ses smooth formal}, \[H^1(\mathfrak{sl}_2^*(\mathbb{R}),\pi)=H^1_{0}(\mathfrak{sl}_2^*(\mathbb{R}),\pi);\] thus we obtain the description of $H^1(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ from Theorem \ref{main theorem}. \\ \noindent\underline{In degree $2$}: $p_{\dif f}$ induces an isomorphism: \[H^2_0(\mathfrak{sl}_2^*(\mathbb{R}),\pi)\simeq H^1_0(\F,\nu^*).\] We note that, for all $\eta\in C^{out}$, \[p_{\dif f}(\eta N\wedge T)=-\eta \dif f\wedge \dif \theta.\] Hence, reasoning as in the previous case, we obtain the description of the second Poisson cohomology group $H^2(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$ from Theorem \ref{main theorem}.\\ \noindent\underline{In degree $3$}: we have that: \[H^3_0(\mathfrak{sl}_2^*(\mathbb{R}),\pi)= 0.\] By using again Corollaries \ref{formal cohomology sl2} and \ref{ses smooth formal}, we obtain the description from Theorem \ref{main theorem} of $H^3(\mathfrak{sl}_2^*(\mathbb{R}),\pi)$. 
\section{Flat foliated cohomology}\label{section flat foli} In this section we reduce the proof of Theorem \ref{flat foliated cohomology} to two technical results, which will be proven in Section \ref{section: analysis}. \subsection{Averaging over $S^1$} In order to compute the flat foliated cohomology, it will be more convenient to work with $S^1$-invariant forms, where we consider the natural action of $S^1$ on $\R^3$ by rotations around the $z$-axis. Since $f$ is $S^1$-invariant, and $0$ is fixed by the action, the invariant part of \eqref{ffcomplex} forms a subcomplex, denoted: \begin{equation*} (\CC^{\bullet},\dif):=(\dif f\wedge \Omega^{\bullet}_0(\R^3)^{S^1},\dif). \end{equation*} Note that the averaging operator \[Av:(\dif f\wedge \Omega^{\bullet}_0(\R^3),\dif) \to (\CC^{\bullet},\dif),\] \[Av(\alpha)=\frac{1}{2\pi}\int_0^{2\pi} \big(e^{i\theta}\big)^*\alpha\dif \theta,\] is a chain map and a projection onto $\CC^{\bullet}$. \begin{lemma}\label{lemma:averaging} The map $Av$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} Let $\partial_{\theta}=x\partial_{y}-y\partial_x$ be the rotational vector field generating the $S^1$-action. Then, for all $\alpha\in \dif f\wedge \Omega^{\bullet}_0(\R^3)$, \begin{align}\label{eq:pull back} (e^{i\theta})^*\alpha-\alpha&=\int_0^{\theta}\frac{\dif}{\dif t}(e^{it})^*\alpha\dif t=\int_0^{\theta}\Lie_{\partial_{\theta}}(e^{it})^*\alpha\dif t\\\nonumber &=\dif\int_0^{\theta}i_{\partial_{\theta}}(e^{it})^*\alpha\dif t+ \int_0^{\theta}i_{\partial_{\theta}}(e^{it})^*\dif \alpha\dif t. \end{align} Integrating this equation from $0$ to $2\pi$, we obtain \begin{equation}\label{eq:averaging} Av(\alpha)-\alpha=\dif \circ h (\alpha)+h\circ \dif (\alpha), \end{equation} where $h$ denotes the homotopy operator: \[h:\dif f\wedge \Omega^{\bullet}_0(\R^3) \to \dif f\wedge \Omega^{\bullet-1}_0(\R^3),\] \[h(\alpha)=\frac{1}{2\pi}\int_0^{2\pi}\dif \theta \int_0^{\theta} i_{\partial_{\theta}}(e^{it})^*\alpha \dif t= \frac{1}{2\pi}\int_0^{2\pi}(2\pi-t)i_{\partial_{\theta}}(e^{it})^*\alpha \dif t.\] The homotopy relation \eqref{eq:averaging} implies the statement. \end{proof} \subsection{Retraction to the ``cohomological skeleton''} In order to calculate the cohomology of $\CC^{\bullet}$, we use a retraction onto the set \begin{align*} X:=\{ x=y=0\} &\cup \{z=0\} \end{align*} along the leaves of the foliation. We think about $X$ as a ``cohomological skeleton'' of the singular foliation. The leaves in $I$ are diffeomorphic to $\R^2$, and $X$ intersects each of them in exactly one point; the leaves in $O$ are diffeomorphic to $S^1\times \R$, and $X$ intersects each of these in one circle. On the other hand, $X$ does not intersect the two leaves in the cone, but, as we will see, these will not contribute to the flat cohomology. Define the retraction as follows: \begin{align}\label{retract} p_{X} (x,y,z)&:= \begin{cases} (\frac{x}{r}\sqrt{f},\frac{y}{r}\sqrt{f},0)&\text{ on } O\\ (0,0,0)&\text{ on }Z\\ (0,0,\sign(z)\sqrt{-f})&\text{ on }I \end{cases} \end{align} Note that $p_X$ preserves $\R^3\backslash Z$, and it satisfies: \[p_X(\R^3)=X, \ \ \textrm{and}\ \ p_X|_{X}=\id_X.\] Also, note that $p_X$ is continuous on $\R^3$ and smooth on $\R^3\backslash Z$, but it is not smooth along $Z$. 
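For example, in the half-plane $\{y=0,\ x>0\}$ the first component of $p_X$ equals $\sqrt{x^2-z^2}$ on $O$ and $0$ on $I$; this function is continuous, but it fails to be differentiable across the lines $x=\pm z$, which is where the half-plane meets $Z$.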
However, since we are working with forms that are flat at $0$, the following holds: \begin{lemma}\label{infinite time pullback} For every $\alpha \in \Omega_{0}^{\bullet}(\R^3)$, the form $p_{X}^*(\alpha)|_{\R^3\backslash Z}$ extends to a smooth form on $\R^3$, which satisfies $p_{X}^*(\alpha)\in \Omega_{0}^{\bullet}(\R^3)$. \end{lemma} The proof of this result is given at the end of Subsection \ref{subsection: homotopy operators}. We have that $p_X^*$ is $S^1$-equivariant and commutes with $\dif$ and with $e_{\dif f}$; these properties hold because they are closed conditions and they hold on $\R^3\backslash Z$. Therefore, $p_X^*$ induces a chain map: \[p_X^*:(\CC^{\bullet},\dif)\to (\CC^{\bullet},\dif).\] In the next subsection, we will show that this is an isomorphism in cohomology, more precisely: \begin{proposition}\label{lemma: homotopies} There are linear maps \[h:\CC^{\bullet}\to \CC^{\bullet-1}\] which satisfy the homotopy relation: \[p_X^*(\alpha)-\alpha=\dif \circ h (\alpha)+h\circ \dif (\alpha).\] \end{proposition} Hence, $p_X^*(\CC^{\bullet})$ has the same cohomology as $\CC^{\bullet}$. However: \begin{lemma}\label{lemma: closed} For all $\alpha\in \CC^{\bullet}$, we have that \[\dif p_X^*(\alpha)=0.\] \end{lemma} \begin{proof} Let $\alpha\in \CC^{k}$. On $I$, we have that \[\dif p_X^*(\alpha)|_I=p_X^*(\dif \alpha|_{\{0\}\times \R})=0,\] because $\dif \alpha$ is at least a 2-form. Similarly, if $k>0$, also on $O$ we have that: \[\dif p_X^*(\alpha)|_O=p_X^*(\dif \alpha|_{\R^2\times\{0\}})=0.\] For $k=0$, write $\alpha=\chi\dif f$. Note that $f|_{\R^2\times \{0\}}=r^2$. Since $\chi$ is $S^1$-invariant, we can write $\chi|_{\R^2\times\{0\}}=h(r^2)$, for some flat function $h$; hence $\alpha|_{\R^2\times \{0\}}=h(r^2)\dif r^2$, and so $\dif \alpha|_{\R^2\times \{0\}}=0$. \end{proof} We are now ready to prove Theorem \ref{flat foliated cohomology}. \begin{proof}[Proof of Theorem \ref{flat foliated cohomology}] By Lemma \ref{lemma:averaging} and Proposition \ref{lemma: homotopies}, the subcomplex $p_X^*(\CC^{\bullet})$ computes $H^{\bullet}_0(\F,\nu^*)$. By Lemma \ref{lemma: closed}, the differential on this subcomplex is trivial, and so every class in $H^{\bullet}_0(\F,\nu^*)$ has a unique representative in $p_X^*(\CC^{\bullet})$. Thus it suffices to determine the image of $p_X^*$.\\ \noindent\underline{In degree $0$}: this follows from Proposition \ref{ses for flat things}.\\ \noindent\underline{In degree $1$}: let $\alpha\in \CC^1$. As in the proof of Lemma \ref{lemma: closed}, we have that $p_X^*(\alpha)|_I=0$. Note that $\dif f\wedge \dif\theta|_{\R^2\times\{0\}}=2\dif x\wedge\dif y$, therefore we can write uniquely $\alpha|_{\R^2\times \{0\}}=g\dif f\wedge \dif\theta|_{\R^2\times\{0\}}$, and since $\alpha$ is $S^1$-invariant and flat at $0$, we can further decompose $g=h(r^2)$, where $h\in C^{\infty}(\R)$ with $\supp h \subset [0,\infty)$. Thus, for $\eta=h\circ f\in C^{out}$, we obtain: \[p_X^*(\alpha)= \eta \dif f\wedge \dif \theta.\] \noindent\underline{In degree $2$}: as in the proof of Lemma \ref{lemma: closed}, $p_X^*(\alpha)=0$ for any $\alpha\in \CC^2$. \end{proof} \subsection{Homotopy operators}\label{subsection: homotopy operators} \begin{wrapfigure}{r}{5cm} \includegraphics[scale=0.6]{contraction.png} \caption{Flow of $W$ along the leaves to the ``cohomological skeleton''} \end{wrapfigure} In order to construct the homotopy operators from Proposition \ref{lemma: homotopies}, we build a foliated homotopy between the identity map and the retraction $p_X$. 
We will do this using the flow of the vector field: \begin{align}\label{W} W=& -r^2z\pi^{\sharp}(\dif \theta)=-r^2 z\partial_z -z^2 r\partial_r\\ =&-z^2x\partial_x -z^2y\partial_y -(x^2+y^2)z\partial_z\nonumber \end{align} where $r=\sqrt{x^2+y^2}$. Note that $W$ has the following properties: \begin{itemize} \item $W$ vanishes precisely on $X$, \item $W$ is tangent to the foliation:\\ $\Lie_Wf=0,$ \item $W$ is $S^1$-invariant, \item $\Lie_W(\theta)=\dif \theta(W)=0$, \item $\Lie_W(R^2)=-4r^2z^2\leq 0$,\\ where $R^2=r^2+z^2.$ \end{itemize} In particular $i_W$ preserves $\CC^{\bullet}$. The last property implies that the flow lines starting in a point inside the closed ball $\overline{B}_n(0)$ are trapped inside that ball; hence the flow is defined for all positive time, and will be denoted by: \begin{equation*} \begin{array}{cccc} \phi:&[0,\infty)\times \R^3 &\to&\R^3\\ &(t,x,y,z)&\mapsto&\phi_t(x,y,z). \end{array} \end{equation*} The above properties of $W$ imply that: \begin{itemize} \item $\phi_t$ fixes $X$; \item $\phi_t^*(f)=f$; \item $\phi_t$ is $S^1$-equivariant; \item $\phi_t^*(\dif \theta)=\dif \theta$. \end{itemize} In particular, $\phi_t^*$ preserves the complex $(\CC^{\bullet},\dif)$. By a similar calculation as \eqref{eq:pull back}, for all $\alpha\in \CC^{\bullet}$, we have that \begin{equation}\label{eq: at time t} \phi_t^*(\alpha)-\alpha=\dif \circ h_t(\alpha)+h_t\circ \dif(\alpha), \end{equation} where \[h_t:\CC^{\bullet}\to \CC^{\bullet-1}, \ \ h_t(\alpha)=\int_0^{t}i_W\phi_s^*(\alpha)\dif s.\] In order to prove Proposition \ref{lemma: homotopies} we will take the limit as $t\to\infty$ in \eqref{eq: at time t}. In the following subsection we will give explicit formulas for $\phi_t$, which imply the point-wise limit: \begin{align}\label{infinity now} \lim_{t\to\infty}\phi_t(x,y,z)=p_X(x,y,z), \ \ \ \forall\ (x,y,z)\in \R^3. \end{align} The following results are much more involved, and their proofs will occupy the last section of the paper: \begin{lemma}\label{convergence of the flow} On $\R^3\backslash Z$, we have that \[\lim_{t\to\infty}\phi_t=p_X\] with respect to the compact-open $C^{\infty}$-topology. \end{lemma} \begin{lemma}\label{well-defined} For any $\alpha\in \Omega_{0}^{\bullet}(\R^3)$, the following limit exists: \begin{align}\label{limit h} h(\alpha):=\lim_{t \to \infty}{h_{t}(\alpha)}\in \Omega_{0}^{\bullet-1}(\R^3), \end{align} with respect to the compact-open $C^{\infty}$-topology. \end{lemma} Recall that the existence of a limit with respect to the compact-open $C^{\infty}$-topology means that all partial derivatives converge uniformly on compact subsets; more details are given in the following section. The results above suffice to complete our proofs: \begin{proof}[Proofs of Lemma \ref{infinite time pullback} and Proposition \ref{lemma: homotopies}] Since the limit \eqref{limit h} is uniform on compact subsets with respect to all $C^k$-topologies, $h$ satisfies \begin{align*} \dif h(\alpha)&=\lim_{t \to \infty}{\dif h_{t}(\alpha)}. \end{align*} From \eqref{eq: at time t}, we obtain that for any $\alpha\in \Omega_0^{\bullet}(\R^3)$ \[\lim_{t\to\infty}\phi_t^*(\alpha)=\alpha+\dif\circ h(\alpha)+h\circ \dif (\alpha)\] holds for the compact-open $C^{\infty}$-topology. 
On the other hand, on $\R^3\backslash Z$, $\lim_{t\to\infty}\phi_t=p_X$, and since this limit is also with respect to the compact-open $C^{\infty}$-topology, we have that \[\lim_{t\to\infty}\phi_t^*(\alpha)|_{\R^3\backslash Z}=p_X^*(\alpha)|_{\R^3\backslash Z}.\] Therefore, $p_X^*(\alpha)|_{\R^3\backslash Z}$ extends to a smooth form on $\R^3$. This implies Lemma \ref{infinite time pullback} and the equation: \[p_X^*(\alpha)-\alpha=\dif\circ h(\alpha)+h\circ \dif (\alpha).\] Finally, since $h_t$ is $S^1$-equivariant and commutes with $e_{\dif f}$, and these conditions are closed, we have that $h(\CC^{\bullet})\subset \CC^{\bullet-1}$. Thus, the above relation holds on $\CC^{\bullet}$, and so we obtain also Proposition \ref{lemma: homotopies}. \end{proof} \subsection{Explicit formula for the flow}\label{subsection:explicit flow} Recall that in cylindrical coordinates $W=-(r^2z\partial_z+z^2r\partial_r)$. Therefore, its flow $\phi_t(r,\theta,z)=(r_t,\theta_t,z_t)$ satisfies \[r'_t=-r_tz^2_t,\ \ \ \ \theta'_t=0,\ \ \ \ z'_t=-z_tr^2_t.\] As remarked before, $\theta_t=\theta$. The above system is equivalent to \begin{equation*} (r^2_t)'=-2r^2_tz^2_t,\ \ \ \ (z^2_t)'=-2r^2_tz^2_t. \end{equation*} Note that $(r^2_t-z_t^2)'=0$, hence, as remarked before, $f=r^2-z^2$ is constant along the flow lines. Therefore, the system above is equivalent to a single ODE in $R_t^2=r^2_t+z^2_t$. Solving this ODE, we obtain the explicit formulas: \begin{align}\label{explicit_flow} r_t&=r\sqrt{\frac{r^2-z^2}{r^2-z^2e^{-2t(r^2-z^2)}}}=\frac{r}{\sqrt{1+tz^2\kappa(tf)}},\\ z_t&=z\sqrt{\frac{z^2-r^2}{z^2-r^2e^{2t(r^2-z^2)}}}=\frac{z}{\sqrt{1+tr^2\kappa(-tf)}}, \nonumber \end{align} where we have denoted by $\kappa$ the following smooth function: \begin{equation*} \begin{array}{cccc} \kappa:&\R&\to&\R_{\ge 0}\\ &x&\mapsto&\int_0^2e^{-sx}\dif s=\frac{1-e^{-2x}}{x} \end{array} \end{equation*} These formulas give the point-wise limit $\lim_{t\to\infty}\phi_t=p_X$ claimed in \eqref{infinity now}: \begin{equation*} \lim_{t\to\infty}(r_t,z_t)=\begin{cases} (\sqrt{f},0)&\text{ if }\ 0\leq f\\ (0,\sign(z)\sqrt{-f})&\text{ if }\ f\leq 0 \end{cases}. \end{equation*} In Cartesian coordinates, we obtain: \begin{lemma}\label{flow lines} For $t\geq 0$, the flow $\phi_t(x,y,z)=(x_t,y_t,z_t)$ of $W$ is given by \begin{align}\label{flow in Cartesian} x_t&= \frac{x}{\sqrt{1+tz^2\kappa(tf)}},\nonumber \\ y_t&= \frac{y}{\sqrt{1+tz^2\kappa(tf)}},\\ z_t&= \frac{z}{\sqrt{1+tr^2\kappa(-tf)}}.\nonumber \end{align} \end{lemma} \section{The analysis}\label{section: analysis} In this last section we prove Lemmas \ref{convergence of the flow} and \ref{well-defined}. 
\subsection{Partial derivatives, Leibniz rule, chain rule} Denote the partial derivative corresponding to a multi-index \[a=(a_1,\ldots,a_n)\in \N^n \ \ \textrm{by}\ \ D^{a}:= \frac{1}{a_1!}\partial^{a_1}_{x_1}\ldots \frac{1}{a_n!}\partial^{a_n}_{x_n}.\] We will often use the general Leibniz rule: \[D^a(f_1\cdot \ldots\cdot f_k)=\sum_{a^1+\ldots+a^k=a}D^{a^1}(f_1)\cdot\ldots \cdot D^{a^k}(f_k),\] for $f_1,\ldots,f_k\in C^{\infty}(\R^n)$, and the general chain rule: \[D^a(g(f_1,\ldots,f_k))=\sum_{1\leq |b|\leq |a|}D^b(g)(f_1,\ldots,f_k){\sum}' \prod_{i=1}^k\prod_{j=1}^{b_i} D^{a^{ij}}(f_i),\] where $b=(b_1,\ldots,b_k)$ and $\sum'$ is the sum over all non-trivial decompositions \[a=\sum_{i=1}^k\sum_{j=1}^{b_i}a^{ij}, \ \ \ a, a^{ij}\in \N^n.\] \subsection{Limits of families of smooth functions}\label{subsection: limits} We discuss some standard facts about the existence of limits of families of smooth functions. Let $U\subset \R^n$ be an open set. For a compact subset $K\subset U$, we define the corresponding $C^k$-semi-norm on $C^{\infty}(U,\R^m)$ as: \[\norm{F}^K_k:=\sum_{|a|\leq k}\sup_{x\in K}\left|D^aF(x)\right|,\] where $|a|=a_1+\ldots+a_n$. The semi-norms \[\big\{\, \norm{\cdot}_k^K\ | \ k\geq 0,\ K\subset U \big\}\] endow $C^{\infty}(U,\R^m)$ with the structure of a Fr\'echet space, and the resulting topology is called the compact-open $C^{\infty}$-topology. Consider a family $F_t\in C^{\infty}(U,\R^m)$, defined for $t\geq t_0$. Assume that, for each compact $K\subset U$ and each $k\geq 0$, we can find a function \[l^K_{k}:[t_0,\infty)\to [0,\infty),\] such that \[\forall \ s\geq t\ :\ \ \norm{F_s-F_t}^K_{k}\leq l^K_{k}(t) \ \ \ \ \textrm{and} \ \ \ \ \lim_{t\to \infty}l_k^K(t)=0.\] Then, since $C^{\infty}(U,\R^m)$ is a Fr\'echet space, the limit $t\to \infty$ exists: \[F_{\infty}:=\lim_{t\to \infty}F_{t}\in C^{\infty}(U,\R^m).\] So all partial derivatives of $F_t$ converge uniformly on all compact subsets to those of $F_{\infty}$. Assume further that $F_t$ is smooth also in $t$. Writing \[D^a(F_t-F_s)=\int_s^tD^a(F'_h)\dif h,\] we obtain that \[\norm{F_s-F_t}^K_{k}\leq \int_s^t\norm{F'_h}_k^K\dif h.\] So, if we find a function \[b^K_{k}:[t_0,\infty)\to [0,\infty),\] such that \begin{equation}\label{conditions for limit} \forall \ t\geq t_0\ :\ \ \norm{F'_t}^K_{k}\leq b^K_{k}(t)\ \ \ \ \textrm{and} \ \ \ \ \int_{t_0}^{\infty}b_k^K(t)\dif t<\infty, \end{equation} then we can conclude that $\lim_{t\to\infty} F_t$ exists. We will use this criterion in the following subsections. \subsection{Polynomial-type estimates}\label{subsection: flow estimates} In the following subsections, we will prove several inequalities, and in order to keep track of what is essential, we discuss here the nature of these estimates. The following two families of functions play a key role: \[g_t, \overline{g}_t:\R^3\to (0,1],\ \ \ t\in [0,\infty),\] \[g_t:=\frac{1}{\sqrt{1+tz^2\kappa(tf)}},\ \ \ \ \overline{g}_t:=\frac{1}{\sqrt{1+tr^2\kappa(-tf)}}.\] These functions appeared in the explicit formula for the flow $\phi_t=(x_t,y_t,z_t)$: \[x_t=xg_t, \ \ \ y_t=yg_t,\ \ \ r_t=rg_t,\ \ \ z_t=z\overline{g}_t.\] Note that they satisfy the following property: \begin{equation}\label{g_t} g_t=\overline{g}_te^{tf}. \end{equation} As discussed in the previous subsection, given a smooth family $F_t\in C^{\infty}(\R^3)$, $t\geq 0$, in order to show that $\lim_{t\to \infty}F_t$ exists, we need to estimate the partial derivatives of $F'_t$. 
We will find bounds which are polynomials in the variables $R$, $t$, $g_t$ and $\overline{g}_{t}$, and obtain inequalities of the form: \begin{equation}\label{polynomial bounds} |D^a F'_t|\leq C\sum R^d\, t^k\, g_t^p\, \overline{g}_{t}^q, \ \ \ \ \ \ \forall\ (x,y,z)\in \R^3, \ \ t\geq 0, \end{equation} where $C>0$ is a constant, $R=\sqrt{x^2+y^2+z^2}$, and the sum is over a finite set of degrees $(d,k,p,q)$. Which degrees actually appear in this sum will play a crucial role. Namely, note first that, by \eqref{g_t}, $g_t$ is rapidly decreasing on $I$ and $\overline{g}_t$ is rapidly decreasing on $O$. So, for $p>0$ and $q>0$, $R^d\, t^k\, g^p_t\,\overline{g}_t^q$ goes rapidly to zero away from $Z$. However, higher exponents $d$, $p$ and $q$ are needed to obtain estimates as in \eqref{conditions for limit} also along $Z$. For this we will use the following: \begin{lemma}\label{lemma: finite integrals} For $p>0$ and $q>0$ there is $C=C(p,q)$ such that: \[g_t^p\, \overline{g}_t^q\leq C R^{-(p+q)}t^{-\frac{p+q}{2}},\] for all $t>0$ and all $(x,y,z)\neq 0\in \R^3$. \end{lemma} \begin{proof} First we prove that, for $0\leq \epsilon \leq 1$, $0\le s$, $0\leq v$, the following holds: \begin{equation}\label{bounded function} \frac{\epsilon}{1+v\kappa(s)} e^{- 2\epsilon s}\le \frac{1}{1+2(v+s)}. \end{equation} This is equivalent to \[l(v,s):=(s+v(1-e^{-2s}))e^{2\epsilon s}-\epsilon s(1+2(v+s))\geq 0.\] Since $l$ is linear in $v$, we need to check that $l(0,s)\geq 0$ and $\partial_vl(0,s)\geq 0$: \begin{align*} l(0,s)&=s\big(e^{2\epsilon s}-\epsilon (1+2s)\big)\\ &\geq s\big(1+2\epsilon s-\epsilon (1+2s)\big)\\ &=s(1-\epsilon)\geq 0,\\ \partial_vl(0,s)&=(1-e^{-2s})e^{2\epsilon s}-2\epsilon s\\ &=e^{2\epsilon s}-e^{-2(1-\epsilon)s}-2\epsilon s\\ &\geq e^{2\epsilon s}-1-2\epsilon s\geq 0. \end{align*} Next, we prove the statement in the case $0\leq |z|\leq r$, the other case follows similarly. Using \eqref{g_t}, we write: \[g_t^p\,\overline{g}_t^q=g_t^{p+q}e^{-tqf}=\left(g_t^2e^{-t\frac{2q}{p+q}f}\right)^{\frac{p+q}{2}}.\] By applying \eqref{bounded function} with $s:=t(r^2-z^2)$, $v:=tz^2$ and $\epsilon:=\frac{q}{p+q}$, we obtain: \[ g_t^2e^{-t\frac{2q}{p+q}f}=\frac{e^{-2\epsilon s}}{1+v\kappa(s)}\leq \epsilon^{-1}\frac{1}{1+2(s+v)}= \frac{p+q}{q}\frac{1}{1+2tr^2}, \] and so: \[g_t^p\, \overline{g}_t^q\leq \left(\frac{p+q}{q}\right)^{\frac{p+q}{2}} (1+2tr^2)^{-\frac{p+q}{2}}.\] Using that $t R^2\leq (1+2 t r^2)$ we obtain the inequality from the statement. \end{proof} For the estimates that will follow, we introduce polynomials $\sigma_{k,l}$ defined for $k\in \N$ and $l\in \Z$ as the sum of all monomials $t^jR^d$ with \[d-2j=l \ \ \textrm{and}\ \ 0\leq j\leq k,\] or, in a closed formula: \[\sigma_{k,l}=\sum_{m\leq j\leq k} t^jR^{2j+l}, \ \ m:=\mathrm{max}\big(0,-l/2 \big),\] and we set $\sigma_{k,l}=0$ if $m> k$ and $\sigma_{0,0}=1$. By comparing terms, the following is immediate: \begin{equation}\label{submult} \sigma_{k,l}\, \sigma_{k',l'}\leq C \sigma_{k+k', l+l'}, \end{equation} for some constant $C=C(k,k',l,l')$. In particular: \begin{equation*} R^d\sigma_{k,l}=\sigma_{0,d}\sigma_{k,l}\leq \sigma_{k, l+d}. \end{equation*} We will use these inequalities later on. \subsection{Estimates for $g_t$ and $\overline{g}_t$} Here we will evaluate the partial derivatives of $g_t$ and $\overline{g}_t$. Let \[g(u,v):=\frac{1}{\sqrt{1+v\kappa(u-v)}}, \ \ \ \ u,v\geq 0.\] First, we prove an intermediate result about the function $g$. 
\begin{lemma}\label{lemma gg} For every $a\in \N^2$ there exists $C=C(a)$ such that: \begin{equation}\label{gg} \lvert D^a g(u,v)\rvert \leq C g(u,v), \ \ \forall \ u,v\geq 0. \end{equation} \end{lemma} \begin{proof} Consider the function: \[\iota(u,v):=1+v\kappa(u-v).\] First note that \begin{align*} \lvert \kappa ^{(k)}(x)\rvert=\int_0^2s^ke^{-xs}\dif s\leq 2^k\kappa(x). \end{align*} Using this, that $\kappa(s)\leq 2$ for $0\leq s$, and the Leibniz rule, one finds for any $b\in \N^2$ a constant $C=C(b)>0$ such that, for all $0\leq v\leq u$: \begin{align*} \lvert D^b (\iota(u,v))\rvert &\le C\iota(u,v). \end{align*} Next, using that \[\left(x^{-\frac{1}{2}}\right)^{(k)}=c_kx^{-k-\frac{1}{2}}, \ \ c_k=(-1)^k\ \frac{2k-1}{2}\cdot \frac{2k-3}{2}\cdot \ldots \cdot \frac{1}{2},\] and the chain rule, we obtain the following estimate for $0\leq v\leq u$: \begin{align*} \lvert D^a \iota^{-\frac12}\rvert &=\lvert \sum_{1\le k \le |a|}{ c_k\iota^{-k-\frac{1}{2}}\sum_{a^1+\dots +a^k=a}{D^{a^1}(\iota)\dots D^{a^k}(\iota)}}\rvert\leq C\iota^{-\frac12}. \end{align*} Since $g=\iota^{-\frac{1}{2}}$, we obtain \eqref{gg} on the domain $0\leq v\leq u$. In order to prove the estimate also for $0\leq u\leq v$, consider the function $\overline{g}(u,v):=g(v,u)$. Clearly, $\overline{g}$ satisfies the version of inequality \eqref{gg} for $0\leq u\leq v$. Note the following relation (which is equivalent to \eqref{g_t}): \[g(u,v)=e^{u-v}\overline{g}(u,v).\] Using this, we obtain \eqref{gg} also for $0\leq u\leq v$: \begin{align*} |D^a(g(u,v))|&=|\sum_{a^1+a^2=a}D^{a^1}(e^{u-v})D^{a^2}(\overline{g}(u,v))|\\ &\leq Ce^{u-v}\overline{g}(u,v)=Cg(u,v).\qedhere \end{align*} \end{proof} We provide now estimates for the partial derivatives of $g_t$ and $\overline{g}_t$. \begin{lemma}\label{lemma:bounds on g} For $a\in \N^3$, with $|a|=k$, there is $C=C(a)$ such that \[|D^ag_t|\leq C \sigma_{k,-k}g_t,\ \ \ \ \textrm{and} \ \ \ \ |D^a\overline{g}_t|\leq C \sigma_{k,-k}\overline{g}_t.\] \end{lemma} \begin{proof} We have that $g_t=g(tr^2,tz^2)$. Therefore, by the chain rule: \begin{alignat*}{2} \lvert D^a g_t\rvert &=\; && \lvert \sum_{1\le |b| \le |a|}{D^b(g)(tr^2,tz^2){\sum}' {\Pi_{i=1}^{b_1}{D^{a^{1i}}(tr^2)}\Pi_{i=1}^{b_2}{D^{a^{2i}}(tz^2)}}}\rvert, \end{alignat*} where $b=(b_1,b_2)$ and the sum ${\sum}'$ is over all decompositions \[a=a^{11} +\dots+ a^{1b_1} +a^{21} +\dots +a^{2b_2},\] with $a^{ji}\in \N^3$ and $1\leq |a^{ji}|$. Note that $|a^{ji}|\le 2$ since otherwise $D^{a^{1i}}(tr^2)$ and $D^{a^{2i}}(tz^2)$ are zero, respectively. Hence we have \begin{align*} b_j &\le \sum_{i=1}^{b_j}{\lvert a^{ji}\rvert} \le 2b_j, \end{align*} and therefore $\lvert b \rvert \le k \le 2\lvert b\rvert$, which means $\frac{k}{2}\le \lvert b\rvert \le k$. Moreover, \begin{align*} |D^{a^{1i}}(tr^2)|\le 2tR^{2-\lvert a^{1i}\rvert} \end{align*} and similarly for $tz^2$. Thus we obtain the estimate: \[|{\Pi_{i=1}^{b_1}{D^{a^{1i}}(tr^2)}\Pi_{i=1}^{b_2}{D^{a^{2i}}(tz^2)}}|\leq C t^{|b|}R^{2|b|-k}.\] Using now the previous lemma, we obtain the first inequality: \[\left\lvert D^a g_t \right\rvert \leq C\sum_{k/2\leq j\leq k}t^jR^{2j-k}g_t=C \sigma_{k,-k}g_t.\] The statement for $\overline{g}_t$ is proven similarly. 
\end{proof} \subsection{Estimates on the flow (proof of Lemma \ref{convergence of the flow})} Next, we estimate the partial derivatives of the flow: \begin{lemma}\label{lemma:bounds on the flow} For $a\in \N^3$, with $k=|a|$, there is $C=C(a)$ such that: \[|D^ax_t|\leq C \sigma_{k,1-k} g_t ,\ \ \ \ |D^ay_t|\leq C \sigma_{k,1-k} g_t , \ \ \ \ |D^az_t|\leq C \sigma_{k,1-k}\overline{g}_t.\] \end{lemma} \begin{proof} Recall that $x_t=xg_t$. Therefore: \[D^a(x_t)=xD^a(g_t)+D^{b}(g_t),\] where $b=a-(1,0,0)$ and $D^{b}=0$ if $b\notin\N^3$. Using that $|x|\leq R$ and \eqref{submult}, we obtain \[\left\lvert D^a x_t \right\rvert \leq C\big(R\sigma_{k,-k}+\sigma_{k-1,1-k}\big)g_t\leq C\sigma_{k,1-k}g_t. \] The other two estimates are proven in the same way. \end{proof} We are now ready to show convergence of the flow away from $Z$: \begin{proof}[Proof of Lemma \ref{convergence of the flow}] It suffices to consider compact sets of the form: \[K_n^{\epsilon}:=\overline{B}_n(0)\cap \{|f|\geq \epsilon\}.\] By the discussion in Subsection \ref{subsection: limits} we need to bound the partial derivatives of $\phi'_t := \frac{\dif}{\dif s}\phi_s |_{s=t}$ on $K_{n}^{\epsilon}$ by a positive integrable function. Since $\phi_t$ is the flow of $W=-z^2x\partial_x -z^2y\partial_y -(x^2+y^2)z\partial_z$, we have that \begin{align*} x'_t&=-z_t^2x_t,\\ y'_t&=-z_t^2y_t,\\ z'_t&=-(x_t^2+y_t^2)z_t. \end{align*} Note that, for $R\leq n$ and $1\leq t$, there is $C=C(n,i,l)$ such that \[\sigma_{i,l}\leq C t^i.\] Therefore, using the Leibniz identity and Lemma \ref{lemma:bounds on the flow}, we find for any $a\in \N^3$ a constant $C=C(a)$ such that, for $t\geq 1$, the following hold on $\overline{B}_n(0)$ \begin{align*} \lvert D^a x'_t\rvert &\le C\, t^{k}\, g_t\, \overline{g}^2_t,\\ \lvert D^a y'_t\rvert &\le C\, t^{k}\, g_t\, \overline{g}^2_t,\\ \lvert D^a z'_t\rvert &\le C\, t^{k}\, g^2_t\, \overline{g}_t. \end{align*} Next, note that \eqref{g_t} and $g_t\leq 1$ give: \[g_t\overline{g}_t=e^{-tf}g_t^2\leq e^{-ft},\] and similarly, by exchanging their role, we obtain: \[g_t\overline{g}_t=e^{tf}\overline{g}_t^2\leq e^{ft}.\] Thus, the following estimate holds: \[g_t\overline{g}_t\leq e^{-t|f|}.\] Therefore, we obtain for $t\geq 1$: \[\norm{\phi_t'}_k^{K_{n}^{\epsilon}} \leq C\, t^{k}\, e^{-\epsilon t}.\] Since the right hand side is integrable, the conclusion follows. \end{proof} \subsection{Estimates for the pull-back (proof of Lemma \ref{well-defined})} We will prove all estimates on the closed ball of radius $n\geq 1$, denoted: \[K:=\overline{B}_n(0)\] First, we estimate the pullback under $\phi_t$ of flat forms. Flatness will be used to increase the degrees of $R$, $g_t$ and $\overline{g}_{t}$ in our estimates. \begin{lemma}\label{pullback estimates} For every $d\in\N$ and $a\in \N^3$ with $|a|=k$, there is a constant $C=C(d,a)$ such that, for any flat form $\alpha\in \Omega^{i}_0(K)$, and any $t\geq 0$: \[|D^a(\phi_t^*\alpha)|\leq C\norm{\alpha}^K_{k+d}\, \sigma_{k+i,d}\, (g_t+\overline{g}_t)^{k+d+i},\] holds on $K$. \end{lemma} \begin{proof} Step 1: we first prove the estimate for $k=i=0$, i.e.\ for a function $\chi\in C^{\infty}_0(K)$, and for $a=(0,0,0)$. If also $d=0$, the estimate is obvious, since $\sigma_{0,0}=1$. So let $d>0$. Since $\chi$ is flat at $0$, the Taylor formula with integral remainder gives: \[\chi(v)=\sum_{|b|=d}v^b\int_{0}^1d(1-s)^{d-1}(D^b\chi)(sv)\dif s,\] where for $v=(x,y,z)$ and $b=(b_1,b_2,b_3)$ we denoted $v^b=x^{b_1}y^{b_2}z^{b_3}$. 
Thus: \[\phi_t^*(\chi)(v)=\sum_{|b|=d}(\phi_t(v))^b\int_{0}^1d(1-s)^{d-1}(D^b\chi)(s\phi_t(v))\dif s.\] Since $\phi_t(v)=(xg_t,yg_t,z \overline{g}_t)$, we have that \[|(\phi_t(v))^b|\leq R^d g_t^{b_1+b_2}\, \overline{g}_t^{b_3}\leq R^d(g_t+ \overline{g}_t)^d.\] On the other hand, since $s\phi_t(v)\in K$, we have that: \[\Big| \int_{0}^1d(1-s)^{d-1}(D^b\chi)(s\phi_t(v))\dif s\Big|\leq C\norm{\chi}^K_{d}.\] Using these inequalities and $\sigma_{0,d}=R^d$ we obtain the estimate in this case. Step 2: we prove now the estimate for a flat function $\chi$, and $a\in \N^3$ with $|a|=k\geq 1$. We use the chain rule to write: \[D^a(\chi\circ \phi_t)=\sum_{1\leq |b|\leq k}D^b(\chi)\circ \phi_t\ {\sum}'\prod_{j=1}^{b_1}D^{a^{1j}}(x_t) \prod_{j=1}^{b_2}D^{a^{2j}}(y_t) \prod_{j=1}^{b_3}D^{a^{3j}}(z_t), \] where $b=(b_1,b_2,b_3)$ and $\sum'$ is the sum over all non-trivial decompositions: \[a=\sum_{i=1}^3\sum_{j=1}^{b_i}a^{ij}.\] Since $D^b(\chi)$ is flat at zero, we apply Step 1 with $d\leftarrow k-|b|+d$: \[|D^b(\chi)\circ \phi_t|\leq C\norm{\chi}^K_{k+d}\ \sigma_{0,k-|b|+d}(g_t+\overline{g}_t)^{k-|b|+d}.\] Next, by applying Lemma \ref{lemma:bounds on the flow} and \eqref{submult}, we obtain: \begin{align*} \big|\prod_{j=1}^{b_1}D^{a^{1j}}(x_t) \prod_{j=1}^{b_2}D^{a^{2j}}(y_t) \prod_{j=1}^{b_3}D^{a^{3j}}(z_t)\big|&\leq C\sigma_{k,|b|-k}g_t^{b_1+b_2}\,\overline{g}_t^{b_3}\\ &\leq C\sigma_{k,|b|-k}(g_t+\overline{g}_t)^{|b|}. \end{align*} Using again \eqref{submult}, we obtain the estimate in this case. Step 3: let $\alpha\in \Omega^i_0(K)$, $i\geq 1$, and $a\in \N^3$. Note that the coefficients of $\phi_t^*(\alpha)$ are sums of elements of the form \[\chi\circ \phi_t\cdot M(\phi_t),\] where $\chi\in C^{\infty}_0(K)$ is a coefficient of $\alpha$ and $M(\phi_t)$ denotes the determinant of a minor of rank $i$ of the Jacobian matrix of $\phi_t$. By the Leibniz rule: \[D^a(\chi\circ \phi_t\cdot M(\phi_t))=\sum_{b+c=a}D^b(\chi\circ \phi_t)D^c(M(\phi_t)).\] For the first term, we apply Step 2 with $d\leftarrow k-|b|+d=|c|+d$: \[|D^b(\chi\circ \phi_t)|\leq C\norm{\alpha}^K_{k+d}\sigma_{|b|,|c|+d}(g_t+\overline{g}_t)^{k+d}.\] Note that $M(\phi_t)$ is a homogeneous polynomial of degree $i$ in the first order partial derivatives of $x_t$, $y_t$ and $z_t$. By Lemma \ref{lemma:bounds on the flow} each such partial derivatives satisfies: \[\Big|D^{e}\Big(\frac{\partial u_t}{\partial v}\Big)\Big|\leq C\sigma_{|e|+1,-|e|}(g_t+\overline{g}_t),\] where $u_t\in \{x_t,y_t,z_t\}$ and $v\in\{x,y,z\}$. Therefore, by applying the Leibniz rule and \eqref{submult}, we obtain: \[|D^c M(\phi_t)|\leq C \sigma_{|c|+i,-|c|}(g_t+\overline{g}_t)^i.\] These inequalities imply now the estimates from the statement. \end{proof} Finally, we prove estimates for the derivative of the homotopy operator: \begin{lemma}\label{derivative of h} For every $d\in \N$ and $a\in \N^3$ with $|a|=k$, there is a constant $C=C(d,a)$ such that, for any flat form $\alpha\in \Omega^{i}_0(K)$, and all $t\geq 0$: \[|D^a(h'_t(\alpha))|\leq C\norm{\alpha}^K_{k+d}\, \sigma_{k+i-1,d+3}\, g_t\, \overline{g}_t\, (g_t+\overline{g}_t)^{k+d+i},\] holds on $K$. 
\end{lemma} \begin{proof} Recall that \[h'_t(\alpha)=i_W\phi_t^*(\alpha)=\phi_t^*(i_W\alpha)=\sum_{j=1}^3u_t^j\,\phi_t^*(\alpha_j),\] where \[u_t^1=-x_tz_t^2,\ \ \ \ u_t^2=-y_tz_t^2,\ \ \ \ u_t^3=-(x_t^2+y_t^2)z_t\] and \[\alpha_1=i_{\partial_x}\alpha,\ \ \ \ \alpha_2=i_{\partial_y}\alpha,\ \ \ \ \alpha_3=i_{\partial_z}\alpha.\] First we apply the Leibniz rule: \[D^a(u_t^j\,\phi_t^*(\alpha_j))=\sum_{b+c=a}D^b(u_t^j) D^c(\phi_t^*(\alpha_j)).\] By using Lemma \ref{lemma:bounds on the flow}, the Leibniz rule and \eqref{submult}, we obtain: \[|D^b u_t^j|\leq C \sigma_{|b|,3-|b|}\, g_t\, \overline{g}_t\,(g_t+\overline{g}_t).\] By applying Lemma \ref{pullback estimates} with $d\leftarrow d+|b|$, we obtain: \[|D^c(\phi_t^*(\alpha_j))|\leq C\norm{\alpha}^K_{k+d}\,\sigma_{|c|+i-1,d+|b|}\, (g_t+\overline{g}_t)^{k+d+i-1}.\] These inequalities imply now the estimates from the statement. \end{proof} Finally, we obtain: \begin{proof}[Proof of Lemma \ref{well-defined}] Let $\alpha\in \Omega^i_0(\R^3)$ and $a\in \N^3$, with $|a|=k$. Lemma \ref{derivative of h} with $d=k+i-1$ gives the following on $K$: \begin{align*} |D^a(h'_t(\alpha))|&\leq C\norm{\alpha}^K_{2k+i-1}\, \sigma_{k+i-1,k+i+2}\, g_t\, \overline{g}_t\, (g_t+\overline{g}_t)^{2(k+i)-1}\\ &= C\norm{\alpha}^K_{2k+i-1}\, \sigma_{d,d+3}\, g_t\, \overline{g}_t\, (g_t+\overline{g}_t)^{2d+1}. \end{align*} Using the definition of the polynomials $\sigma_{k,l}$, that $g_t+\overline{g}_{t}\leq 2$, and Lemma \ref{lemma: finite integrals}, we evaluate the last term for $t>0$: \begin{align*} \sigma_{d,d+3}\, g_t\, \overline{g}_t\, (g_t+\overline{g}_t)^{2d+1}&\leq C\sum_{j=0}^{d}t^jR^{2j+d+3}g_t\, \overline{g}_t\, (g_t+\overline{g}_t)^{2j+1}\leq CR^{d}t^{-\frac{3}{2}}. \end{align*} Since $R^d\leq n^d$, we obtain that there exists $C=C(n,k)$ such that for $t>0$: \[\norm{h'_t(\alpha)}^K_{k}\leq C \norm{\alpha}^K_{2k+i-1} t^{-\frac{3}{2}}.\] Thus \eqref{conditions for limit} holds, and therefore $\lim_{t\to\infty}h_t(\alpha)$ exists with respect to the compact-open $C^{\infty}$-topology. \end{proof} \end{document}
\begin{document} \title[Dirichlet-Laplacian with a drift]{Existence of minimizers for eigenvalues of the Dirichlet-Laplacian with a drift} \author[\textsf{B.~Brandolini}]{Barbara Brandolini$^{1}$} \author[\textsf{F.~Chiacchio}]{Francesco Chiacchio$^{1}$} \author[\textsf{A.~Henrot}]{Antoine Henrot$^{2}$} \author[\textsf{C.~Trombetti}]{Cristina Trombetti$^{1}$} \begin{abstract} This paper deals with the eigenvalue problem for the operator $L=-\Delta -x\cdot \nabla$ with Dirichlet boundary conditions. We are interested in proving the existence of a set minimizing any eigenvalue $\lambda_k$ of $L$ under a suitable measure constraint suggested by the structure of the operator. More precisely we prove that for any $c>0$ and $k\in \mathbb N$ the following minimization problem $$ \min\left\{\lambda_k(\Omega): \> \Omega \>\mbox{quasi-open set}, \> \int_\Omega e^{|x|^2/2}dx\le c\right\} $$ has a solution. \end{abstract} \maketitle \section{Introduction} In this paper we are interested in the following eigenvalue problem for the Dirichlet-Laplacian with a drift term \begin{equation} \label{problem1} \left\{ \begin{array}{ll} -\Delta u - x\cdot \nabla u=\lambda u & \mbox{in}\> \Omega \\ u=0 & \mbox{on}\> \partial\Omega, \end{array} \right. \end{equation} or equivalently in the weighted eigenvalue problem \begin{equation} \label{problem} \left\{ \begin{array}{ll} -\mathrm{div} \left(e^{|x|^2/2} \nabla u\right)=\lambda e^{|x|^2/2} u & \mbox{in}\> \Omega \\ u=0 & \mbox{on}\> \partial\Omega, \end{array} \right. \end{equation} where $\Omega$ is an open subset of $\mathbb R^N$ ($N \ge 2$). Let us denote $$ dm_N=\prod_{i=1}^N e^{\frac{x_i^2}{2}}dx_i, \qquad x=(x_1,...,x_N) \in \mathbb{R}^N, $$ and let $H_0^1(\Omega;m_N)$ be the closure of $C_0^\infty(\Omega)$ with respect to the norm $$ ||u||_{H_0^1(\Omega;m_N)}=\left(||u||^2_{L^2(\Omega;m_N)}+||\nabla u||^2_{L^2(\Omega;m_N)}\right)^{1/2}. $$ The operator $\mathcal{R}:f \in L^2(\Omega; m_N) \to \varphi \in H_0^1(\Omega; m_N)$, where $\varphi$ is the unique solution to \begin{equation}\label{1} \left\{ \begin{array}{ll} -\mathrm{div} \left(e^{|x|^2/2} \nabla \varphi \right)=f e^{|x|^2/2} & \mbox{in}\> \Omega \\ \varphi=0 & \mbox{on}\> \partial\Omega, \end{array} \right. \end{equation} is compact, self-adjoint and nonnegative (see Section 2); hence the spectrum of $\mathcal R$ is purely discrete and consists only of eigenvalues, which can be ordered (according to their multiplicity): $$ 0< \lambda_1(\Omega) \le \lambda_2(\Omega) \le ... \le \lambda_k(\Omega) \le ... $$ Our main result is the following \begin{theorem} \label{existence} For any $c>0$ the minimum \begin{equation}\label{min} \min \{ \lambda_k(\Omega): \> \Omega \subset \mathbb R^N, \> \Omega \>\mbox{quasi-open set},\> m_N(\Omega) \leq c\} \end{equation} is achieved. \end{theorem} For the definition of quasi-open sets we refer the reader to Remark \ref{rem-qo1} and the references mentioned therein. Let us briefly discuss how our result fits into the existing literature. In the case of the Laplace operator, the analogous minimization problem with a Lebesgue measure constraint was first addressed by Buttazzo and Dal Maso in \cite{BD}. 
Their key assumption is that $\Omega$ varies in the class of sets contained in the same box $D$. Replacing $D$ with $\mathbb R^N$ is far from simple, due to the lack of compactness for generic sequences of sets. Very recently, this problem has been overcome independently by Mazzoleni and Pratelli in \cite{MP} and by Bucur in \cite{B}, with different techniques. In our case, the set $\Omega$ is allowed to vary in the whole $\mathbb R^N$, since the structure of the operator and the $m_N$ measure constraint allow us to gain the compact embedding of the weighted Sobolev space $H_0^1(\Omega; m_N)$ into the weighted Lebesgue space $L^2(\Omega; m_N)$ (see Theorem \ref{compact} below). \noindent On the other hand, problem \eqref{problem1} can be viewed as a prototype of a more general class of eigenvalue problems. For instance in \cite{HNR} (see also the references therein), among other things, the problem of minimizing the first eigenvalue of $$ \left\{ \begin{array}{ll} -\mathrm {div}\left(A(x)\nabla u\right)+v\cdot \nabla u+Vu=\lambda u &\mbox{in}\> \Omega \\ u=0 &\mbox{on}\>\partial\Omega \end{array} \right. $$ is addressed under various constraints on $A, v, V, \Omega$ by using a new notion of rearrangement. To our knowledge, the existence of a domain minimizing a generic eigenvalue of problem \eqref{problem1} has not been established yet. In this paper we solve this question under the natural ``weighted volume constraint''. Using an appropriate notion of rearrangement, see for instance \cite{BMP,RCBM}, a Faber-Krahn type inequality can be proved: the ball centered at the origin is the optimal domain for the first eigenvalue (i.e. the case $k=1$). All the other cases are open. In the classical situation (the Dirichlet-Laplacian with a constraint on the Lebesgue measure) only two cases are solved: for $k=1$, the minimizer is any ball (Faber-Krahn inequality), while for $k=2$, it is the union of two identical balls (Krahn-Sz\"ego inequality), see \cite{H} for more details. In our situation, even the case $k=2$ is not clear because of the measure $m_N$. In \cite{BCHOT} we study this problem and prove, among other things, that the optimal domain is not composed of two identical balls. The proof of Theorem \ref{existence} can be summarized as follows. We consider a minimizing sequence of quasi-open sets $\Omega_n$ and we construct the sequence of functions $w_n\in H_0^1(\Omega_n;m_N)$ solving problem \eqref{1} in $\Omega_n$ with $f \equiv 1$. We prove that $w_n$ converges strongly to a function $w$ in $L^2(\mathbb R^N;m_N)$ and we define $\hat \Omega=\{w>0\}$. We prove that the eigenfunctions $u_j^n$ corresponding to $\lambda_j(\Omega_n)$ converge weakly to $u_j\in H_0^1(\hat \Omega;m_N)$. We conclude that $\lambda_j(\hat \Omega)$ is the minimum of problem \eqref{min}. The paper is organized as follows. In Section 2 we recall the compact embedding of $H_0^1(\mathbb R^N;m_N)$ into $L^2(\mathbb R^N;m_N)$ and we provide a Hardy type inequality which in turn gives an improved embedding theorem. In Section 3 we prove a sharp reverse H\"older inequality for eigenfunctions that will be used to ensure the suitable convergence of $u_j^n$. Note that these results may be of interest in their own right. 
Finally Section 4 contains the proof of Theorem \ref{existence}. \section{Some properties of weighted Sobolev spaces} \subsection{Weighted isoperimetric inequalities and rearrangements} We start this section by recalling the isoperimetric inequality with respect to the measure $m_N$. Let $\Omega\subset \mathbb R^N$ be a Lebesgue measurable set, we define the weighted perimeter of $\Omega$ with respect to $m_N$ by $$ P_{m_N}(\Omega)=\sup\left\{\int_\Omega \mathrm {div} \left({\bf k}(x)e^{|x|^2/2}\right)dx:\> {\bf k }\in C_0^1(\mathbb R^N,\mathbb R^N),\> |{\bf k}|\le 1\right\}. $$ For any smooth set $\Omega \subset \mathbb R^N$ it reduces to $$ P_{m_N}(\Omega)=\int_{\partial \Omega} e^{|x|^2/2}d\mathcal{H}^{N-1}. $$ In \cite{BMP} (see also \cite{BCM,BCM1}) the authors prove the following result. \begin{theorem} For any set $\Omega \subset \mathbb R^N$ with finite $m_N$-measure, \begin{equation}\label{II} P_{m_N}(\Omega) \ge P_{m_N}(\Omega^{\bigstar}), \end{equation} where $\Omega^{\bigstar}$ is the ball centered at the origin, having the same $m_N$-measure as $\Omega$. Equality sign holds in \eqref{II} if and only if $\Omega=\Omega^{\bigstar}$. \end{theorem} \noindent As well-known, \eqref{II} turns out to be the key ingredient for a Faber-Krahn type inequality to hold (see Proposition \ref{faber}). To this aim we give the notion of rearrangement with respect to the measure $m_N$. \noindent Let $\phi $ be a measurable real function defined in $\Omega$. The distribution function of $ \phi $ with respect to the $m_N$-measure is defined by \begin{equation*} \mu(t)=m_{N}\left( \{x\in \Omega:\>|\phi (x)|>t\}\right) ,\qquad t\geq 0, \end{equation*} while the decreasing rearrangement of $\phi $ with respect to the $m_N$-measure is the function \begin{equation*} \phi ^{\ast }(s)=\sup \left\{ t\geq 0:\>\mu(t)>s\right\} ,\quad s\in (0,m_N(\Omega)). \end{equation*} It is easy to see that $\phi ^{\ast }$ is a nonincreasing, right-continuous function defined in $(0,m_N(\Omega))$, equidistributed with $\phi $, that means $\phi $ and $\phi ^{\ast }$ have corresponding superlevel sets with the same $m_N$-measure. This feature implies that \begin{equation*} ||\phi ||_{L^{p}(\Omega;m_N)}=||\phi ^{\ast }||_{L^{p}(0,m_N(\Omega))},\quad \forall p\geq 1. \end{equation*} Now we set \begin{equation*} h(r)=N\omega _{N}e^{r^{2}/2}r^{N-1},\quad H(r)=\int_{0}^{r}h(t)dt, \end{equation*} where $\omega_N$ is the Lebesgue measure of the unit ball in $\mathbb R^N$. Then $$ P_{m_N}(\Omega^{\bigstar})=h\left(H^{-1}(m_N(\Omega))\right) $$ and \eqref{II} reads as \begin{equation*}\label{II2} P_{m_N}(\Omega)\ge h \left(H^{-1}(m_N(\Omega))\right). \end{equation*} We finally define $\phi ^{\bigstar }$, the $m_N$-symmetrization of $ \phi $, as follows \begin{equation*} \phi ^{\bigstar }(x)=\phi ^{\ast }\left( H(\left\vert x\right\vert )\right), \quad x \in \Omega^{\bigstar}. \end{equation*} $\phi^{\bigstar}$ is the only spherically symmetric function, nonincreasing along the radii, whose level sets are balls centered at the origin, with the same $m_N$ measure as the corresponding level sets of $|\phi|$. This definition immediately implies $$ ||\phi||_{L^p(\Omega;m_N)}=||\phi^{\bigstar}||_{L^p(\Omega^{\bigstar};m_N)},\qquad \forall p \ge 1. 
$$ The following inequalities hold true. \begin{proposition}[Hardy-Littlewood inequality] Let $\phi,\psi \in L^2(\Omega; m_N)$; then \begin{equation}\label{hl} \int_\Omega |\phi \psi| dm_N \le \int_0^{m_N(\Omega)}\phi^\ast(s) \psi^\ast(s)ds=\int_{\Omega^{\bigstar}}\phi^{\bigstar} \psi^{\bigstar} dm_N. \end{equation} \end{proposition} \begin{proposition}[P\'olya-Sz\"ego principle]\label{prop3.1} If $\phi \in H^1(\mathbb R^N; m_N)$, $\phi \ge 0$, then $\phi^{\bigstar} \in H^1(\mathbb R^N;m_N)$ and \begin{equation}\label{polya} \int_{\mathbb R^N}|\nabla \phi|^2 dm_N \ge \int_{\mathbb R^N}|\nabla \phi^{\bigstar}|^2 dm_N. \end{equation} \end{proposition} \noindent For an exhaustive treatment on rearrangements see, for instance, \cite{K,Ta,H,Ke,Ba}. \medskip \subsection{Weighted Sobolev spaces and embedding theorems} In order to prove the existence of the optimal set in \eqref{min} let us introduce the natural Sobolev spaces associated with problem \eqref{problem}. Let $\Omega$ be an arbitrary open subset of $\mathbb R^N$; let us consider the weighted Lebesgue space $$ L^q(\Omega;m_N)=\left\{u:\Omega\to \mathbb R: \> \int_\Omega |u|^q dm_N<+\infty\right\}, \qquad q \ge 1, $$ endowed with the norm $$ ||u||_{L^q(\Omega;m_N)}=\left(\int_\Omega |u|^q dm_N\right)^{1/q}, $$ and let $H_0^1(\Omega;m_N)$ be the closure of $C_0^\infty(\Omega)$ with respect to the norm $$ ||u||_{H_0^1(\Omega;m_N)}=\left(||u||^2_{L^2(\Omega;m_N)}+||\nabla u||^2_{L^2(\Omega;m_N)}\right)^{1/2}. $$ We initially observe that, for any $\Omega \subseteq \mathbb R^N$, if $u \in H_0^1(\Omega; m_N)$ and we define $v=ue^{|x|^2/4}$, we get the following equivalence. \begin{proposition} $$u \in H_0^1(\Omega; m_N) \Longleftrightarrow v \in H_0^1(\Omega), \quad |x|v \in L^2(\Omega).$$ \end{proposition} The following Poincar\'e inequality is well-known (see for instance \cite{EK}). \begin{proposition} For every $u \in H^1(\mathbb R^N;m_N)$ it holds \begin{equation} \label{poincare} \int_{\mathbb R^N} |\nabla u|^2dm_N \ge N \int_{\mathbb R^N} u^2dm_N. \end{equation} \end{proposition} The following theorem provides the compact embedding of $H^1(\mathbb R^N; m_N)$ into $L^2(\mathbb R^N;m_N)$. Although this result can be found in \cite{EK}, for the reader's convenience we recall it here. \begin{theorem}\label{compact} The weighted Sobolev space $H^1(\mathbb R^N; m_N)$ is compactly embedded into the weighted Lebesgue space $L^2(\mathbb R^N;m_N)$. \end{theorem} \begin{proof} Let $u_n \in H^1(\mathbb R^N; m_N)$ be such that \begin{equation}\label{111} \int_{\mathbb R^N}|\nabla u_n|^2 dm_N \le C,\qquad n \in \mathbb N, \end{equation} and let $v_n=u_ne^{|x|^2/4}$. Integrating by parts we get $$ \int_{\mathbb R^N}|\nabla v_n|^2dx+\frac{1}{4}\int_{\mathbb R^N}|x|^2v_n^2dx \le \int_{\mathbb R^N}|\nabla u_n|^2 dm_N $$ and \eqref{111} immediately gives \begin{eqnarray} \int_{\mathbb R^N}|\nabla v_n|^2dx & \le C \label{a} \\ \int_{\mathbb R^N}|x|^2v_n^2dx & \le C. 
\label{aa} \end{eqnarray} In order to prove the compactness of the sequence $\{v_n\}$ in $L^2(\mathbb R^N)$ it is enough to show that for any $\varepsilon >0$ there exist a constant $\delta >0$ and a set $D\subset \mathbb R^N$ such that \begin{equation} \int_{\mathbb R^N} |v_n(x+\tau)-v_n(x)|^2 dx <\varepsilon^2, \qquad \forall n \in \mathbb N, \> \forall \tau \in \mathbb R^N: \> |\tau|<\delta, \label{2} \end{equation} \begin{equation} \int_{\mathbb R^N \setminus \bar D} v_n^2 dx <\varepsilon^2, \qquad \forall n \in \mathbb N. \label{3} \end{equation} \eqref{2} is an immediate consequence of \eqref{a}. In order to prove \eqref{3} let us consider a ball $B_R$ centered at the origin, with radius $R$; by \eqref{aa} we get $$ \int_{\mathbb R^N \setminus B_R} v_n^2 dx \le \frac{C}{R^2} $$ and choosing $R$ in such a way that $\frac{C}{R^2}<\varepsilon^2$ we have \eqref{3}. Then, up to a subsequence, $v_n$ strongly converges to a function $v$ in $L^2(\mathbb R^N)$. Let $u=ve^{-|x|^2/4}$. Clearly $u \in L^2(\mathbb R^N; m_N)$ and $u_n$ strongly converges to $u$ in $L^2(\mathbb R^N; m_N)$. \end{proof} By the above result, as mentioned in the Introduction, the operator $\mathcal{R}:f \in L^2(\Omega; m_N) \to \varphi \in H_0^1(\Omega; m_N)$, where $\varphi$ is the unique solution to \begin{equation*} \left\{ \begin{array}{ll} -\mathrm{div} \left(e^{|x|^2/2} \nabla \varphi \right)=f e^{|x|^2/2} & \mbox{in}\> \Omega \\ \varphi=0 & \mbox{on}\> \partial\Omega, \end{array} \right. \end{equation*} is compact; it is clearly self-adjoint and nonnegative, hence the spectrum of $\mathcal R$ consists only of eigenvalues, which can be ordered (according to their multiplicity): $$ 0< \lambda_1(\Omega) \le \lambda_2(\Omega) \le ... \le \lambda_k(\Omega) \le ... $$ Moreover, for every $k \in \mathbb N$ and every $\lambda_k(\Omega)$ the following min-max formula holds \begin{equation}\label{minmax} \lambda_k(\Omega)= \min_{\begin{array}{c} V \mbox{ subspace of dim-}\\ \mbox{ension $k$ of } H_0^1(\Omega;m_N) \end{array}} \max_{v\in V}\;\frac{\int_\Omega |\nabla v|^2 dm_N}{\int_\Omega v^2 dm_N}. \end{equation} Problem \eqref{min} is completely solved when $k=1$. Indeed, using P\'olya-Sz\"ego inequality \eqref{polya} and the variational characterization of the first eigenvalue, arguing as for the Dirichlet-Laplacian, the following result can be easily proven. \begin{proposition}\label{faber} Let $\Omega$ be an open subset of $\mathbb R^N$ with finite $m_N$ measure. Then $$ \lambda_1(\Omega) \ge \lambda_1(\Omega^{\bigstar}). 
$$ \end{proposition} \medskip \subsection{A Hardy type inequality and consequences} In \cite{EK} the authors, among other things, prove that, if $N \ge 3$ and $2^\ast=\frac{2N}{N-2}$, then \begin{eqnarray*} S&:=&\inf\left\{ \frac{\int_{\mathbb R^N}|\nabla \varphi|^2 dx}{\left(\int_{\mathbb R^N}|\varphi|^{2^\ast}dx\right)^{2/2^\ast}}:\> \varphi \in H^1(\mathbb R^N)\setminus\{0\} \right\} \\ & =&\inf\left\{ \frac{\int_{\mathbb R^N}|\nabla \varphi|^2 dm_N}{\left(\int_{\mathbb R^N}|\varphi|^{2^\ast}dm_N\right)^{2/2^\ast}}:\> \varphi \in H^1(\mathbb R^N;m_N)\setminus\{0\} \right\} \end{eqnarray*} and, as a corollary, they get that for every $u \in H^1(\mathbb R^N;m_N)$ $$ \int_{\mathbb R^N}|\nabla u|^2 dm_N \ge S \left(\int_{\mathbb R^N}u^{2^\ast}dm_N\right)^{2/2^\ast}+\frac N 2 \int_{\mathbb R^N}u^2dm_N. $$ Moreover, by interpolation between $L^2(\mathbb R^N;m_N)$ and $L^{2^\ast}(\mathbb R^N;m_N)$ they obtain that for any $2 \le q \le 2^\ast$ there exists a positive constant $C$ such that if $\frac 1 q= \frac a 2 + \frac{1-a}{2^\ast}$ and $u \in H^1(\mathbb R^N;m_N)$ then $$ ||u||_{L^q(\mathbb R^N;m_N)}\le C ||u||_{L^2(\mathbb R^N;m_N)}^a ||\nabla u||_{L^2(\mathbb R^N;m_N)}^{1-a}. $$ We go further by proving a Hardy type inequality with respect to the measure $m_N$ (see for example \cite{T,BCT1}). This inequality, as in the classical case, will imply that, if $N \ge 3$, $H^1(\mathbb R^N;m_N)$ is continuously embedded into the weighted Lorentz space $L^{2^\ast,2}(\mathbb R^N;m_N)$ and a fortiori in $L^{2^\ast}(\mathbb R^N;m_N)$. When $N=2$ we obtain that $H^1(\mathbb R^2;m_2)$ is continuously embedded into a suitable Orlicz space. \noindent Let \begin{equation*}\label{rho} \rho_N(r)=\frac{r^{1-N}e^{-r^2/2}}{\int_r^{+\infty}t^{1-N}e^{-t^2/2}dt}, \qquad r \ge 0. \end{equation*} Clearly $$ \lim_{r \to 0^+} \rho_N(r)=+\infty,\qquad \lim_{r \to +\infty}\rho_N(r)=+\infty, $$ and $$ \lim_{r \to 0^+}r\rho_N(r)=N-2 \quad \mbox{if}\> N\ge 3, \quad \lim_{r\to 0^+}r\log \left(\frac 1 r\right)\rho_N(r)=1 \quad \mbox{if}\> N=2. $$ Moreover $\rho_N$ solves the following differential equation \begin{equation}\label{eqrho} \rho_N'(r)+\frac{N-1}{r}\rho_N(r)+r\rho_N(r)=\rho_N^2(r); \end{equation} by differentiating \eqref{eqrho} we immediately get that $\rho_N$ cannot have positive maxima. Thus it has a unique minimum point $T>0$ where $\rho_N(T)>0$. \begin{lemma}\label{unidim} For every $\psi \in C_0^\infty(0,+\infty)$ it holds \begin{equation*} \int_0^{+\infty}(\psi'(r))^2 r^{N-1}e^{r^2/2}dr \ge \frac 1 4 \int_0^{+\infty}\psi(r)^2 \rho_N(r)^2r^{N-1}e^{r^2/2}dr. \end{equation*} Moreover $\frac 1 4$ is sharp. \end{lemma} \begin{proof} Since $\left(\psi'(r)+\frac 1 2\rho_N(r)\psi(r)\right)^2 \ge 0$, integrating by parts we get $$ \int_0^{+\infty}(\psi'(r))^2 r^{N-1}e^{r^2/2}dr \ge \frac 1 2 \int_0^{+\infty} \psi(r)^2 r^{N-1}e^{r^2/2} \left[\rho_N'(r)+\frac{N-1}{r}\rho_N(r)+r\rho_N(r)-\frac 1 2 \rho_N(r)^2 \right]dr $$ and from \eqref{eqrho} we immediately deduce the claim. 
In order to prove that the constant $\frac 1 4$ is sharp it suffices to consider the following sequence of functions $$ \psi_k(r)=\left\{\begin{array}{ll} \left(\displaystyle\int_{1/k}^{+\infty}t^{1-N}e^{-t^2/2}dt\right)^{1/2} & r \in \left(0,\frac 1 k\right) \\ \left(\displaystyle\int_r^{+\infty}t^{1-N}e^{-t^2/2}dt\right)^{1/2} & r \in \left(\frac 1 k,+\infty\right) \end{array} \right. $$ and verify that $$ \lim_{k \to +\infty}\frac{\int_0^{+\infty}(\psi_k'(r))^2 r^{N-1}e^{r^2/2}dr}{\int_0^{+\infty}\psi_k(r)^2 \rho_N(r)^2r^{N-1}e^{r^2/2}dr}=\frac 1 4. $$ \end{proof} The following Hardy inequality holds true. \begin{theorem}\label{hardy} For every $u \in H^1(\mathbb R^N; m_N)$ it holds \begin{equation}\label{hardy1} \int_{\mathbb R^N} |\nabla u|^2 dm_N \ge \frac 1 4 \int_{\mathbb R^N} u^2 \rho_{N,T}(|x|)^2 dm_N, \end{equation} where $$ \rho_{N,T}(r)=\left\{\begin{array}{ll} \rho_N(r) & 0<r<T \\ \rho_N(T) & r \ge T. \end{array} \right. $$ Moreover $\frac 1 4 $ is sharp. \end{theorem} \begin{proof}[Proof of Theorem \ref{hardy}] Let $u \in H^1(\mathbb R^N;m_N)$. Taking into account \eqref{polya} and \eqref{hl} it is enough to prove the claim when $u=u^{\bigstar}$. In this case $$ \int_{\mathbb R^N}|\nabla u|^2 dm_N =N\omega_N\int_0^{+\infty} \left(u^\ast(H(r))'\right)^2r^{N-1}e^{r^2/2}dr $$ and $$ \int_{\mathbb R^N} u^2 \rho_{N,T}^2 dm_N=N\omega_N\int_0^{+\infty} u^\ast(H(r))^2 \rho_{N,T}(r)^2r^{N-1}e^{r^2/2}dr. $$ We get \eqref{hardy1} by applying Lemma \ref{unidim} and $0 <\rho_{N,T}(r) \le \rho_N(r),\> r>0.$ \end{proof} Let $N \ge 3$ and $u \in H^1(\mathbb R^N; m_N)$. P\'olya-Sz\"ego principle \eqref{polya} together with Hardy inequality \eqref{hardy1} yield \begin{eqnarray*} \int_{\mathbb R^N}|\nabla u|^2 dm_N \ge \int_{\mathbb R^N}|\nabla u^{\bigstar}|^2 dm_N &\ge& \frac 1 4 \int_{\mathbb R^N} \left(u^{\bigstar}\right)^2 \rho_{N,T}(|x|)^2dm_N \\ &=& \frac 1 4 \int_0^{+\infty}u^\ast(t)^2 \rho_{N,T}(H^{-1}(t))^2 dt. \end{eqnarray*} Observing that $2/2^\ast-1=-2/N$ and $$ \lim_{t \to 0^+} \frac{H^{-1}(t)}{t^{1/N}}=\omega_N^{-1/N} $$ we get $$ \int_{\mathbb R^N} |\nabla u|^2dm_N \ge C \int_0^{+\infty} u^\ast(t)^2t^{2/2^\ast-1}dt, $$ that is $H^1(\mathbb R^N;m_N)$ is continuously embedded in the Lorentz space $L^{2^\ast,2}(\mathbb R^N;m_N)$ (see for instance \cite{Hu,KK} for the definition). On the other hand, when $N=2$ we obtain $$ \int_{\mathbb R^N} |\nabla u|^2dm_N \ge C \int_0^{+\infty} \frac{u^\ast(t)^2}{t\max\{1,\log (1/t)\}^2}dt. $$ By \cite{OP} (see Theorem 4.2 and Theorem 8.8) we deduce that there exist $\gamma_0,\gamma_\infty>0$ such that $$ \int_{0^+}\exp\left(\gamma_0 u^\ast(t)^2\right)dt<+\infty, \qquad \int^{+\infty}\exp\left(-\gamma_\infty u^\ast(t)^{-2}\right)dt<+\infty. $$ \section{A reverse H\"older inequality}\label{sec3} Let $u_{j}$ be an eigenfunction corresponding to the eigenvalue $\lambda _{j} $ of the problem under consideration, i.e. \begin{equation}\label{P} \left\{ \begin{array}{ll} -\mathrm{div}\left( e^{|x|^{2}/2}\nabla u_{j}\right) =\lambda _{j}(\Omega )e^{|x|^{2}/2}u_{j} & \mbox{in}\>\Omega \\ u_{j}=0 & \mbox{on}\>\partial \Omega, \end{array} \right. 
\end{equation} where $\Omega$ is an open subset of $\mathbb{R}^{N}$ ($N \ge 2$) with finite $m _{N}$-measure. The main result of this section is a sharp reverse H\"{o}lder inequality for $u_{j}$. In the case of the Dirichlet-Laplacian, this kind of estimate has been proved in \cite{PR,C}. The first step in our argument consists in introducing a ball $B_{\tilde r}$ such that $\lambda_j(\Omega)=\lambda_j(B_{\tilde r})$. Since the explicit value of $\lambda_j(B_{\tilde r})$ is not known, we estimate it from above and below in terms of $\tilde r$. To this aim observe that, if $u_j$ is a solution to \eqref{P}, the function $v_j=u_je^{|x|^2/4}$ satisfies the following Dirichlet problem for the harmonic oscillator \begin{equation} \label{problem2} \left\{ \begin{array}{ll} -\Delta v_j +\frac{1}{4}|x|^2 v_j=\nu_j(\Omega) v_j & \mbox{in}\> \Omega \\ v_j=0 & \mbox{on}\> \partial\Omega \end{array} \right. \end{equation} with $\nu_j(\Omega) = \lambda_j(\Omega) -\frac{N}{2}$. \noindent When $\Omega=\mathbb R^N$ the spectrum and the eigenfunctions of \eqref{problem2} are explicitly known (see for instance \cite{EK}). In particular the spectrum is given by $\{\nu_j(\mathbb R^N)=N+j-1:\>j=1,2,...\}$. When $j=1$, $\nu_1(\mathbb R^N)=N$ is simple and a corresponding eigenfunction is $e^{-|x|^2/2}$. \noindent When $\Omega \subset \mathbb R^N$, since the eigenvalues of \eqref{problem2} are decreasing with respect to inclusion of sets, we get \begin{equation}\label{RN} \nu_j(\Omega) \ge \nu_1(\mathbb R^N) \Longleftrightarrow \lambda_j(\Omega) \ge \frac{3}{2}N. \end{equation} Moreover, using $v_j$ as test function in \eqref{problem2} we get \begin{equation*} \lambda_j(\Omega)-\frac{N}{2}\ge \frac{\int_\Omega |\nabla v_j|^2 dx}{\int_\Omega v_j^2 dx}\ge \min\left\{\frac{\int_\Omega |\nabla \varphi|^2 dx}{\int_\Omega \varphi^2 dx}: \> \varphi \in H_0^1(\Omega)\setminus\{0\}\right\}=\lambda_1^{-\Delta}(\Omega) \end{equation*} where $\lambda_1^{-\Delta}(\Omega)$ is the first eigenvalue of the Dirichlet-Laplacian in $\Omega$. Whenever $\Omega$ has finite $m_N$-measure, by the well-known Faber-Krahn inequality, if $B_R$ is a ball with the same Lebesgue measure as $\Omega$, it holds $$ \lambda_1^{-\Delta}(\Omega) \ge \lambda_1^{-\Delta}(B_R)=\frac{j_{N/2-1,1}^2}{R^2}, $$ where $j_{N/2-1,1}$ is the first zero of the Bessel function of the first kind of order $\frac{N}{2}-1$. Thus \begin{equation}\label{delta} \lambda_j(\Omega)\ge \frac{N}{2}+\frac{j_{N/2-1,1}^2}{R^2}. \end{equation} Incidentally we note that $$ \lambda_1(\Omega) \le \frac{N}{2}+\lambda_1^{-\Delta}(\Omega)+\frac{R_\Omega^2}{4}, $$ where $R_\Omega$ is the radius of the smallest ball centered at the origin and containing $\Omega$. Now, let us come back to problem \eqref{P}. From now on we will suppose that $\Omega$ has finite $m_N$-measure. By integrating the equation in \eqref{P} on the superlevel sets of $u_j$, using isoperimetric inequality \eqref{II}, the co-area formula and H\"older inequality, according to a technique introduced by Talenti in \cite{Ta1}, in \cite{BMP} it is proved that \begin{equation*} \left\{ \begin{array}{ll} -U_{j}^{\prime \prime }(s)\leq \lambda _{j}(\Omega )I^{-2}(s)U_{j}(s) & \mbox{in}\>(0,m_{N}(\Omega)) \\ & \\ U_{j}(0)=U_{j}^{\prime }(m_{N}(\Omega))=0, & \end{array} \right. 
\end{equation*} where \begin{equation*} U_{j}(s)=\int_{0}^{s}u_{j}^{\ast }(t)dt \end{equation*} and $$I(s)=\inf\{P_{m_N}(E): \> E \>\mbox{smooth}, \> m_N(E)=s \}=h\left(H^{-1}(s)\right)$$ is the isoperimetric function associated to the measure $m_N$. \noindent For any fixed $L>0$ we consider the following Sturm-Liouville problem \begin{equation} \left\{ \begin{array}{ll} -\varphi^{\prime \prime }(s)=\sigma I^{-2}(s)\varphi(s) & \mbox{in}\>(0,L) \\ & \\ \varphi(0)=\varphi^{\prime }(L)=0. & \end{array} \right. \label{SL} \end{equation} Since \begin{equation*} \lim_{s\rightarrow 0^{+}}\frac{I(s)}{s^{1-\frac{1}{N}}}=N\omega _{N}^{\frac{1 }{N}}>0, \end{equation*} the Sobolev space $\{\varphi \in H^{1}(0,L):\varphi (0)=0\}$ is compactly embedded into the weighted Lebesgue space \begin{equation*} L^{2}\left((0,L);I^{-2}\right)=\left\{ \varphi :\left( 0,L\right) \rightarrow \mathbb{R} :\>\left\Vert \varphi \right\Vert _{L^{2}\left((0,L);I^{-2}\right)}^{2}\equiv \int_{0}^{L}(\varphi (t))^{2}I^{-2}(t)dt<+\infty \right\} \end{equation*} (see \cite{N, BCT2}). Therefore spectral theory of selfadjoint compact operators ensures that the first eigenvalue $\sigma _{1}(0,L)$ of (\ref{SL}) is simple and it can be found as the minimum of the Rayleigh quotient \begin{equation*} \min_{{\tiny \begin{array}{l} \varphi \in H^{1}(0,L), \\ \varphi (0)=0,\>\varphi \not\equiv 0 \end{array} }}\frac{\displaystyle\int_{0}^{L}\left( \varphi ^{\prime }(t)\right) ^{2}dt}{ \displaystyle\int_{0}^{L}\left( \varphi (t)\right) ^{2}I^{-2}(t)dt}. \end{equation*} \bigskip \noindent Now we claim that there exists a value of $L$, say $\widetilde{L}$, such that \begin{equation*} \sigma _{1}(0,\widetilde{L})=\lambda _{j}(\Omega). \label{L} \end{equation*} To this aim consider the problem \begin{equation} \left\{ \begin{array}{ll} -\mathrm{div}\left( e^{|x|^{2}/2}\nabla z\right) =\lambda e^{|x|^{2}/2}z & \mbox{in}\>B_{r} \\ z=0 & \mbox{on}\>\partial B_{r}, \end{array} \right. \label{Radial_P} \end{equation} where $B_{r}$ denotes the ball centered at the origin having radius $r$. By Theorem \ref{compact} the first eigenvalue $\lambda _{1}(B_{r})$ of (\ref{Radial_P}) fulfills \begin{equation*} \lambda _{1}(B_{r})=\min \left\{ \frac{\int_{B_{r}}|\nabla \psi |^{2}dm_N}{\int_{B_{r}}\psi ^{2}dm_N}:\>\psi \in H_{0}^{1}(B_{r};m_N)\backslash \left\{ 0\right\} \right\} . \end{equation*} By \eqref{polya} we know that such a minimum is achieved on a function $z$ such that \begin{equation*} z(x)=z^{\bigstar }(x). \end{equation*} At this point it is easy to verify that $\lambda _{1}(B_{r})$ is a continuous and strictly decreasing function with respect to $r$. Moreover an easy consequence of \eqref{RN}, \eqref{delta} is \begin{equation*} \lim_{r\rightarrow 0^{+}}\lambda _{1}(B_{r})=+\infty \text{ \ and } \lim_{r\rightarrow +\infty }\lambda _{1}(B_{r})=\frac{3}{2}N. \end{equation*} Therefore there exists a unique value of $r$, say $\widetilde{r}$, such that \begin{equation*} \lambda _{1}(B_{\widetilde{r}})=\lambda _{j}(\Omega). 
\end{equation*} Let us denote with $\widetilde{z}$ an eigenfunction corresponding to $ \lambda _{1}(B_{\widetilde{r}})$; it is easy to verify that the function \begin{equation*} Z(s)=\int_{0}^{s}\widetilde{z}^{\ast }(t)dt \end{equation*} satisfies \eqref{SL} with $\sigma _{1}(0,\widetilde{L})=\lambda_1(B_{\widetilde r})=\lambda _{j}(\Omega).$ Note that if $\widetilde L=m_N(\Omega)$ the results we are going to state become trivial since in this case $U_{j}$ and $Z$ are proportional. So from now on we will assume that $\widetilde L<m_N(\Omega)$ and we will define the function $Z(s)$ on the whole interval $(0,m_N(\Omega))$ by setting its value constantly equal to $Z(\widetilde L)$ on $(\widetilde L,m_N(\Omega))$. The following comparison result holds true. We omit the proof, since it can be obtained following, for instance, the lines of \cite{AFT,BCT}. \begin{proposition} If $u_j$ and $\widetilde z$ are defined as above and \begin{equation*} \int_{0}^{m_{N}(\Omega)}\left( u_{j}^{\ast }(t)\right) ^{q}dt=\int_{0}^{\widetilde L}\left( \widetilde z^{\ast }(t)\right) ^{q}dt\qquad \text{ with }q>0 \end{equation*} then \begin{equation*} \int_{0}^{s}\left( u_{j}^{\ast }(t)\right) ^{q}dt\leq \int_{0}^{s}\left( \widetilde z^{\ast }(t)\right) ^{q}dt, \qquad s \in (0,\widetilde L). \end{equation*} \end{proposition} The above result immediately implies the following reverse H\"{o}lder inequality. \begin{theorem}\label{ReverseHolder} For any $0<r<q\le \infty$, $u_j$ satisfies the inequality \begin{equation}\label{chiti} \left( \int_{\Omega}\left\vert u_{j}\right\vert ^{q}dm_{N}\right) ^{1/q}\leq C(N,r,q,\lambda_j(\Omega))\left( \int_{\Omega}\left\vert u_{j}\right\vert ^{r}dm_{N}\right) ^{1/r}, \end{equation} where $$ C(N,r,q,\lambda_j(\Omega))=\frac{\left( \displaystyle\int_0^{\widetilde L}\widetilde z^\ast(t)^{q}dt\right) ^{1/q}}{\left( \displaystyle\int_0^{\widetilde L} \widetilde z^\ast(t)^{r}dt\right)^{1/r}} $$ with $\widetilde z$ defined as above. \end{theorem} \begin{remark}\label{rem-qo1} \rm The previous inequality is stated for any open set $\Omega$ with finite $m_N$-measure. Actually, it also holds for any quasi-open set $\Omega$ with finite $m_N$-measure. To see that, we can use the definition of a quasi-open set (see for example \cite[chapter 3]{HP}) to approach $\Omega$ by a sequence of open sets $\Omega_\varepsilon$ such that $\Omega\subset \Omega_\varepsilon$ and the capacity of the difference $\Omega_\varepsilon\setminus\Omega$ is less than $\varepsilon$. Then $\Omega_\varepsilon$ $\gamma$-converges to $\Omega$ and therefore the eigenfunctions and the eigenvalues of $\Omega_\varepsilon$ converge to the corresponding eigenfunctions and eigenvalues of $\Omega$ allowing to pass to the limit in (\ref{chiti}). \end{remark} \section{Proof of Theorem \ref{existence}} Let $\Omega_n$ be a minimizing sequence, and let $w_n$ be the solution to \begin{equation} \label{pb1} \left\{ \begin{array}{ll} -\Delta w_n - x\cdot \nabla w_n=1 & \mbox{in}\> \Omega_n \\ w_n=0 & \mbox{on}\> \partial\Omega_n. \end{array} \right. \end{equation} Choosing $w_n$ as test function in \eqref{pb1} and using Poincar\'e inequality \eqref{poincare} we get that the sequence $w_n$ is bounded in $H^1(\mathbb{R}^N; m_N)$. 
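More precisely (we sketch the standard computation for the reader's convenience), testing the weighted formulation of \eqref{pb1} with $w_n$, extended by zero outside $\Omega_n$, and using the Cauchy-Schwarz inequality together with the constraint $m_N(\Omega_n)\le c$, we get \begin{equation*} \int_{\Omega_n}|\nabla w_n|^2 dm_N=\int_{\Omega_n}w_n\, dm_N\le m_N(\Omega_n)^{1/2}\left(\int_{\Omega_n}w_n^2\, dm_N\right)^{1/2}\le \left(\frac c N\right)^{1/2}\left(\int_{\Omega_n}|\nabla w_n|^2 dm_N\right)^{1/2}, \end{equation*} where in the last step we used \eqref{poincare}; hence $\int_{\Omega_n}|\nabla w_n|^2 dm_N\le c/N$ and, again by \eqref{poincare}, $\int_{\Omega_n}w_n^2\, dm_N\le c/N^2$.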
The compact embedding of $H^1(\mathbb{R}^N; m_N)$ in $L^2(\mathbb{R}^N; m_N)$ ensures the existence of a subsequence, still denoted by $w_n$, and the existence of a function $w \in H^1(\mathbb{R}^N; m_N)$ such that \begin{eqnarray} & w_n \rightharpoonup w \quad & \mbox{in } \quad H^1(\mathbb{R}^N; m_N) \notag \\ & w_n \rightarrow w & \mbox{in} \quad L^2(\mathbb{R}^N; m_N) \notag \\ & w_n \rightarrow w & \mbox{a.e. in } \Omega. \notag \end{eqnarray} Let us consider the quasi-open set \begin{equation}\label{ast} \hat\Omega = \{x \in \mathbb{R}^N : w(x) > 0\}. \end{equation} Since $w_n$ converges a.e. to $w$, we have for a.e. $x \in \mathbb{R}^N$ \begin{equation*} \chi_{\hat\Omega}(x) \leq \liminf_n \chi_{\Omega_n}(x); \end{equation*} therefore Fatou's Lemma gives \begin{equation*} \label{lim1} m_N(\hat\Omega)=\int_{\mathbb{R}^N} \chi_{\hat\Omega}(x) dm_N \leq \liminf_n \int_{\mathbb{R}^N} \chi_{\Omega_n}(x) dm_N \leq c. \end{equation*} We want to prove that $$\lambda_k(\hat\Omega)=\min \{ \lambda_k(\Omega):\> m_N(\Omega) \leq c\}.$$ Let $u_n^j$ be an eigenfunction corresponding to $\lambda_j(\Omega_n)$, $1 \le j\le k$, normalized as follows \begin{equation}\label{uno} \int_{\Omega_n} (u_n^j)^2dm_N=1. \end{equation} By Theorem \ref{compact} there exist $k$ functions $u^j$ such that \begin{eqnarray} & u_n^j \rightharpoonup u^j \quad & \mbox{in } \quad H^1(\mathbb{R}^N; m_N) \notag \\ & u_n^j \rightarrow u^j & \mbox{in} \quad L^2(\mathbb{R}^N; m_N). \notag \end{eqnarray} First of all we observe that $u^j \in H_0^1(\hat\Omega;m_N)$ (see Proposition \ref{conv1} below). \noindent Let us consider the set $V = \mathrm{Span} [u^1, u^2,\cdots, u^k]$, which is a vector subspace of $H_0^1(\hat\Omega; m_N)$ with dimension $k$. \noindent Let $v = \sum_{j=1}^{k} \alpha_j u^j \in V$ and $v_n = \sum_{j=1}^{k} \alpha_j u_n^j \in H_0^1(\Omega_n; m_N)$; then we have \begin{eqnarray} v_n \rightharpoonup v & \quad \mbox{in } \quad H^1(\mathbb{R}^N; m_N) \notag \\ v_n \rightarrow v & \quad \mbox{in} \quad L^2(\mathbb{R}^N; m_N). \notag \end{eqnarray} Thus \begin{equation} \label{lim2} \dfrac{\displaystyle\int_{\mathbb{R}^N} |\nabla v|^2 dm_N}{\displaystyle \int_{\mathbb{R}^N} v^2 dm_N} \leq \liminf_n\dfrac{\displaystyle\int_{\mathbb{R} ^N} |\nabla v_n|^2 dm_N}{\displaystyle\int_{\mathbb{R}^N} v_n^2 dm_N} \end{equation} with \begin{equation*} \int_{\mathbb{R}^N} v^2_n dm_N = \sum_{j=1}^{ k} \alpha_j^2; \quad \int_{\mathbb{R}^N} |\nabla v_n|^2 dm_N = \sum_{j=1}^{ k} \alpha_j^2 \lambda_j(\Omega_n). 
\end{equation*} Being $$\dfrac{\alpha_1^2}{\displaystyle\sum_{j=1}^{ k} \alpha_j^2}\lambda_1(\Omega_n)+...+\dfrac{\alpha_k^2}{\displaystyle\sum_{j=1}^{ k} \alpha_j^2}\lambda_k(\Omega_n)\le \lambda_k(\Omega_n)$$ yields \begin{equation*} \dfrac{\displaystyle\int_{\mathbb{R}^N} |\nabla v_n|^2 dm_N}{ \displaystyle\int_{\mathbb{R}^N} v_n^2 dm_N} \leq \lambda_k(\Omega_n) \end{equation*} and then, for every $v \in V$, by \eqref{lim2} we get \begin{equation*} \dfrac{\displaystyle\int_{\mathbb{R}^N} |\nabla v|^2 dm_N}{\displaystyle \int_{\mathbb{R}^N} v^2 dm_N} \leq \liminf_n \lambda_k(\Omega_n). \end{equation*} By the min-max formula \eqref{minmax} for $\lambda_k(\Omega)$ we have \begin{equation*} \lambda _{k}(\hat\Omega)\leq \max_{v\in V}\dfrac{\displaystyle\int_{\mathbb{R} ^{N}}|\nabla v|^{2}dm_N}{\displaystyle\int_{\mathbb{R} ^{N}}v^{2}dm_N}\leq \liminf_{n}\lambda _{k}(\Omega _{n}), \end{equation*} which concludes the proof. \begin{proposition} \label{conv1} Let $\Omega_n$ be a minimizing sequence in problem \eqref{min} and let $u_n^j$ be an eigenfunction associated to $ \lambda_j(\Omega_n),$ $1 \le j \le k$, satisfying \eqref{uno}. For every $1 \le j \le k$ there exists $u^j \in H_0^1(\hat\Omega; m_N)$, with $\hat\Omega$ as in \eqref{ast}, such that \begin{eqnarray} & u_n^j \rightharpoonup u^j \quad & \mbox{in } \quad H^1(\mathbb{R}^N; m_N) \label{4} \\ & u_n^j \rightarrow u^j & \mbox{in} \quad L^2(\mathbb{R}^N; m_N). \label{5} \end{eqnarray} \end{proposition} \begin{proof} We first observe that \eqref{4} and \eqref{5} are easy consequences of Theorem \ref{compact}. It remains to prove that $u^j \in H_0^1(\hat\Omega; m_N)$. By \eqref{chiti} (see Remark \ref{rem-qo1}) there exists a constant $M>0$, whose value is independent from $n$, such that $$ ||u_n^j||_\infty\le M. $$ Suppose $\lambda_k(\Omega_n)\le \Lambda$ for every $n$ and, as in the proof of Theorem \ref{existence}, let $w_n$ be the solution to problem \eqref{pb1}. The function $\psi_n^j=\Lambda M w_n-u_n^j$ satisfies $$ \left\{ \begin{array}{ll} -\Delta \psi_n^j-x\cdot \nabla \psi_n^j =f_n^j & \mathrm{in}\> \Omega_n \\ \psi_n^j=0 & \mathrm{on}\> \partial \Omega_n \end{array} \right. $$ with $f_n^j=\Lambda M -\lambda_j(\Omega_n)u_n^j$ positive and bounded. By the maximum principle (see Proposition \ref{maxprinc} below) $$ \psi_n^j \ge 0 \quad \mathrm{in} \>\Omega_n, $$ that is $$ u_n^j \le \Lambda M w_n. $$ Analogously we can prove that $-\Lambda M w_n \le u_n^j$, and then $$ |u_n^j|\le \Lambda M w_n. $$ Passing to the limit, we get \begin{equation}\label{6} |u^j|\le \Lambda M w \quad \mathrm{a.e.}. \end{equation} Since $w=0$ quasi-everywhere in $\mathbb R^N \setminus \hat\Omega$, \eqref{6} implies that $u^j \in H_0^1(\hat\Omega;m_N)$. \end{proof} \begin{proposition}\label{maxprinc} Let $D$ be an open set in $\mathbb R^N$ with finite $m_N$-measure and let $\psi$ be a solution to \begin{equation}\label{eq-mp} \left\{ \begin{array}{ll} -\Delta \psi -x \cdot \nabla \psi =f & \mathrm{in}\> D \\ \psi=0 & \mathrm{on}\> \partial D \end{array} \right. \end{equation} with $f\in L^\infty(D)$. If $f \ge 0$ in $D$ then $\psi \ge 0$ in $D$. 
\end{proposition} \begin{proof} Let $\eta=\psi e^{|x|^2/4}$. Clearly $\eta$ satisfies $$ \left\{ \begin{array}{ll} -\Delta \eta+\left(\frac{|x|^2}{4}+\frac{N}{2}\right)\eta=fe^{-|x|^2/4} & {\mathcal L}^nathrm{in}\> D \\ \eta=0 & {\mathcal L}^nathrm{on}\> \partial D. \end{array} \right. $$ Since $f\in L^\infty(D)$, by Talenti's theorem (see (n-k+1)\omega_n^{1/(n-k+1)}ite{Ta1}) $\eta \in L^\infty(D)$. Let us consider the sequence of sets $D_k=D (n-k+1)\omega_n^{1/(n-k+1)}ap B_k$, where $B_k$ is the ball centered at the origin, with radius $k$; it holds $$ D_1 \subset D_2\subset ...,\qquad D \subseteq (n-k+1)\omega_n^{1/(n-k+1)}up_{k\in {\mathcal L}^nathbb N} D_k,\qquad \partial D_k=\Gamma_k (n-k+1)\omega_n^{1/(n-k+1)}up \Gamma_k' \quad {\mathcal L}^nathrm{with}\> \Gamma_k \subseteq \partial D, \Gamma_k' \subset D. $$ Let $$ \Phi_N(t)=\left\{ \begin{array}{ll} \ln |t| & {\mathcal L}^nbox{if}\> N=2 \\ \\ -|t|^{2-N} & {\mathcal L}^nbox{if}\> Nm_Ne 3. \end{array} \right. $$ We distinguish two cases: $(i) \>0 \notin \bar D$, $(ii) \>0 \in \bar D$. \noindent $(i)$ Let $r_0>0$ be such that $|x|>r_0$ for every $x \in \bar D$. For any $k\in {\mathcal L}^nathbb N$ such that $k>r_0$ we define $$ w_k(x)=\left\{ \begin{array}{ll} \Phi_N(|x|)-\Phi_N(r_0),\quad x\in D_k \\ \\ \Phi_N(k)-\Phi_N(r_0),\quad x\in D\setminus D_k \end{array} \right. $$ and $$ w(x)=\Phi_N(|x|)-\Phi_N(r_0),\quad x\in D. $$ By construction $w_k(x) \le w(x)$ for every $x \in D$ and $$\limsup_k\left(\inf_{\Gamma_k'}\frac{\eta}{w_k}\right)=\limsup_k\left(\frac{1}{\Phi_N(k)-\Phi_N(r_0)}\inf_{\Gamma_k'}\eta\right)=0.$$ Moreover it satisfies $$ \left\{ \begin{array}{ll} -\Delta w_k+\left(\frac{|x|^2}{4}+\frac{N}{2}\right)w_km_Ne 0 & {\mathcal L}^nathrm{in}\> D_k \\ w_k>0 & {\mathcal L}^nathrm{on}\> D_k (n-k+1)\omega_n^{1/(n-k+1)}up \partial D_k. \end{array} \right. $$ By Theorem 19, p. 97 in (n-k+1)\omega_n^{1/(n-k+1)}ite{PW} we get that $\eta m_Ne 0$ and hence $\psi m_Ne 0$ in $D$. $(ii)$ If $0 \notin \bar D$ it is enough to consider $\tilde D_k=D_k\setminus B_{2\varepsilon}(0)$ for $\varepsilon >0$ and $$ \tilde w_k(x)=\left\{ \begin{array}{ll} \Phi_N(|x|)-\Phi_N(\varepsilon),\quad x\in \tilde D_k \\ \\ \Phi_N(k)-\Phi_N(\varepsilon),\quad x\in D\setminus D_k. \end{array} \right. $$ Reasoning as in the case $(i)$ we get that $\psi m_Ne 0$ in $D \setminus B_{2\varepsilon}(0)$ for every $\varepsilon >0$ small enough. By continuity $ \psi m_Ne 0$ in D. \end{proof} \begin{remark}\rm The previous maximum principle also holds for a quasi-open set $D$ with finite $m_N$-measure. To see that, we proceed by external approximation $D_\varepsilon$, exactly as in Remark \ref{rem-qo1}, and we use the fact that the (non-negative) solutions to problem (\ref{eq-mp}) on $D_\varepsilon$ converge to the solution of the same problem on $D$, thus this one is non negative. \end{remark} \end{document}
\begin{document} \title[Variational principle for neutralized Bowen topological entropy of NDSs]{Variational principle for neutralized Bowen topological entropy on subsets of non-autonomous dynamical systems} \author[J. Nazarian Sarkooh]{Javad Nazarian Sarkooh$^{*}$} \address{Department of Mathematics, Ferdowsi University of Mashhad, Mashhad, IRAN.} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}} \author[A. Ehsani]{Azam Ehsani} \address{Department of Mathematics, Ferdowsi University of Mashhad, Mashhad, IRAN.} \email{\textcolor[rgb]{0.00,0.00,0.84}{aza\[email protected]}} \author[Z. Pashaei]{Zeynal Pashaei} \address{Department of Mathematics, Ferdowsi University of Mashhad, Mashhad, IRAN.} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}} \author[R. Abdi]{Roghayeh Abdi} \address{Department of Mathematics, Ferdowsi University of Mashhad, Mashhad, IRAN.} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}} \subjclass[2010] {37B55; 37B40; 37A50; 37D35.} \keywords{Non-autonomous dynamical system, neutralized Bowen topological entropy, neutralized weighted Bowen topological entropy, lower neutralized Brin-Katok's local entropy, neutralized Katok's entropy, variational principle.} \thanks{$^*$Corresponding author} \begin{abstract} Ovadia and Rodriguez-Hertz \cite{SBOFRH} defined the neutralized Bowen open ball for an autonomous dynamical system $(X,f)$ on a compact metric space $(X,d)$ as \begin{equation*} B_{n}(x,\text{e}^{-n\epsilon})=\{y\in X: d(f^{j}(x),f^{j}(y))<\text{e}^{-n\epsilon}\ \text{for all}\ 0\leq j\leq n-1\}, \end{equation*} where $\epsilon>0$, $n\in\mathbb{N}$, and $x\in X$. Replacing the usual Bowen open ball with neutralized Bowen open ball, we introduce the notions of neutralized Bowen topological entropy of subsets, neutralized weighted Bowen topological entropy of subsets, lower neutralized Brin-Katok's local entropy of Borel probability measures, and neutralized Katok's entropy of Borel probability measures for a non-autonomous dynamical system $(X,f_{1,\infty})$ on a compact metric space $(X,d)$. Then, we establish the following variational principles for neutralized Bowen topological entropy and neutralized weighted Bowen topological entropy of non-empty compact subsets in terms of lower neutralized Brin-Katok's local entropy and neutralized Katok's entropy. \begin{eqnarray*} h_{top}^{NB}(f_{1,\infty},Z)=h_{top}^{NWB}(f_{1,\infty},Z) &=& \lim_{\epsilon\to 0}\sup\{\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon):\mu\in\mathcal{M}(X), \mu(Z)=1\}\\ &=& \lim_{\epsilon\to 0}\sup\{h_{\mu}^{NK}(f_{1,\infty},\epsilon):\mu\in\mathcal{M}(X), \mu(Z)=1\}, \end{eqnarray*} where $\mathcal{M}(X)$ denotes the set of all Borel probability measures on $X$ and $Z$ is a non-empty compact subset of $X$. Moreover, $h_{top}^{NB}(f_{1,\infty},Z)$, $h_{top}^{NWB}(f_{1,\infty},Z)$, $\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon)$, and $h_{\mu}^{NK}(f_{1,\infty},\epsilon)$ are the neutralized Bowen topological entropy of $Z$, neutralized weighted Bowen topological entropy of $Z$, lower neutralized Brin-Katok's local entropy of $\mu$, and neutralized Katok's entropy of $\mu$, respectively. In particular, this extends the main result of \cite{YRCEZX} for autonomous dynamical systems. 
\end{abstract} \maketitle \thispagestyle{empty} \section{Introduction and statement of main results} Throughout this paper $X$ is a compact metric space with metric $d$ and $f_{1,\infty}=(f_n)_{n=1}^\infty$ is a sequence of continuous maps $f_n: X\to X$; such a pair $(X, f_{1,\infty})$ is called a non-autonomous (or time-dependent) dynamical system (NDS for short). By $\mathcal{M}(X)$ we denote the set of all Borel probability measures on $X$. Non-autonomous systems yield more flexible models than autonomous ones for the study and description of real-world processes. They may be used to describe the evolution of a wider class of phenomena, including systems that are forced or driven. Recently, there have been major efforts in establishing a general theory of such systems (see \cite{BV,JNSFHG1,K1,K2,K3,KR,KS,JNSFHG,JNS000,JNS0000,O,DTRD}), but a global theory is still out of reach. In the theory of dynamical systems, entropies are nonnegative extended real numbers measuring the complexity of a dynamical system. Topological entropy was first introduced by Adler et al. \cite{AKM} via open covers for continuous maps in compact topological spaces. In 1970, Bowen \cite{RB} gave another definition in terms of separated and spanning sets for uniformly continuous maps in metric spaces, and this definition is equivalent to Adler's definition for continuous maps in compact metric spaces. Later, for noncompact sets, Bowen \cite{RB4} gave a characterization of dimension type for the entropy, which was further investigated by Pesin and Pitskel \cite{PYBPBS}. Also, the measure-theoretic (metric) entropy for an invariant measure was introduced by Kolmogorov \cite{KAAAA}. The basic relation between topological entropy and measure-theoretic entropy is the variational principle, see \cite{PW}. Topological entropy, which is an important tool to understand the complexity of dynamical systems, plays a vital role in topological dynamics, ergodic theory, mean dimension theory, and other fields of dynamical systems. Moreover, it has close relationships with many important dynamical properties, such as chaos, Lyapunov exponents, the growth of the number of periodic points, and so on. Our main goal in this paper is to focus on the interplay between ergodic theory and topological entropy of non-autonomous dynamical systems. In 1996, Kolyada and Snoha extended the concept of topological entropy to non-autonomous systems, based on open covers, separated sets, and spanning sets, and obtained a series of important properties of these systems \cite{KS}. Also, Kawan \cite{K1,K2,K3} introduced and studied the notion of metric entropy for non-autonomous dynamical systems and showed that it is related via a variational inequality (principle) to the topological entropy as defined by Kolyada and Snoha. More precisely, for equicontinuous topological non-autonomous dynamical systems $(X_{1,\infty},f_{1,\infty})$, he proved the variational inequality \begin{equation*} \sup_{\mu_{1,\infty}}h_{\mathcal{E}_{M}}(f_{1,\infty},\mu_{1,\infty})\leq h_{\text{top}}(f_{1,\infty}), \end{equation*} where $h_{\text{top}}(f_{1,\infty})$ and $h_{\mathcal{E}_{M}}(f_{1,\infty},\mu_{1,\infty})$ are the topological and metric entropy of $(X_{1,\infty},f_{1,\infty})$, respectively, the supremum is taken over all invariant measure sequences $\mu_{1,\infty}$, and $\mathcal{E}_{M}$ is the Misiurewicz class of partitions.
Also, he proved the following variational principle \begin{equation*} \sup_{\mu_{1,\infty}}h_{\mathcal{E}_{M}}(f_{1,\infty},\mu_{1,\infty})=h_{\text{top}}(f_{1,\infty}) \end{equation*} for non-autonomous dynamical systems $(M,f_{1,\infty})$ built from $C^{1}$ expanding maps $f_{n}$ on a compact Riemannian manifold $M$ with uniform bounds on expansion factors and derivatives that act in the same way on the fundamental group of $M$. Moreover, Nazarian Sarkooh \cite{JNS0000} showed the following variational principle \begin{equation*} h_{top}^{B}(f_{1,\infty},Z)=h_{top}^{WB}(f_{1,\infty},Z)=\sup\{\underline{h}_{\mu}(f_{1,\infty}):\mu\in\mathcal{M}(X), \mu(Z)=1\} \end{equation*} for a non-autonomous dynamical system $(X, f_{1,\infty})$ on a compact metric space $X$, which links the Bowen topological entropy $h_{top}^{B}(f_{1,\infty},Z)$ and the weighted Bowen topological entropy $h_{top}^{WB}(f_{1,\infty},Z)$ of a non-empty compact subset $Z$ to the measure-theoretic lower entropy $\underline{h}_{\mu}(f_{1,\infty})$ of Borel probability measures. Recently, Ovadia and Rodriguez-Hertz \cite{SBOFRH} defined the neutralized Bowen open ball for an autonomous dynamical system $(X,f)$ on a compact metric space $X$ with metric $d$ as \begin{equation*} B_{n}(x,\text{e}^{-n\epsilon})=\{y\in X: d(f^{j}(x),f^{j}(y))<\text{e}^{-n\epsilon}\ \text{for}\ 0\leq j<n\}, \end{equation*} where $\epsilon>0$, $n\in\mathbb{N}$, and $x\in X$. As the Brin-Katok entropy formula shows, the usual Bowen open balls $\{B_{n}(x,\epsilon)\}$ allow us to control their measure and their image under the dynamics along the iterations, while they may possess a complicated geometric shape in a central direction. Ovadia and Rodriguez-Hertz, by replacing the usual Bowen open balls $\{B_{n}(x,\epsilon)\}$ with the neutralized Bowen open balls $\{B_{n}(x,\text{e}^{-n\epsilon})\}$, defined the lower neutralized Brin-Katok's local entropy to estimate the asymptotic measure of sets with a distinctive geometric shape by neutralizing the subexponential effects. Moreover, the neutralized Bowen open balls are better suited for describing a neighborhood on which the dynamics admits a local linearization. Motivated by \cite{RB4,DFWH,JNS000,SBOFRH}, a natural question is whether the previous variational principles for Bowen topological entropy and weighted Bowen topological entropy on non-empty compact subsets of a non-autonomous dynamical system still hold when the usual Bowen open balls are replaced by neutralized Bowen open balls. Hence, we introduce the notions of neutralized Bowen topological entropy of subsets, neutralized weighted Bowen topological entropy of subsets, lower neutralized Brin-Katok's local entropy of Borel probability measures, and neutralized Katok's entropy of Borel probability measures through neutralized Bowen open balls for non-autonomous dynamical systems. Then, by using some ideas from geometric measure theory and utilizing the dimensional approach, we establish the following variational principles for neutralized Bowen topological entropy and neutralized weighted Bowen topological entropy of non-empty compact subsets.
In particular, different from the classical variational principles for topological entropy in terms of Kolmogorov-Sinai entropy \cite{PW} and for Bowen topological entropy of non-empty compact subsets in terms of lower Brin-Katok's local entropy \cite{DFWH}, the form of our variational principles for neutralized Bowen topological entropy is closer to Lindenstrauss-Tsukamoto's variational principle \cite{LETM} for metric mean dimension in terms of rate-distortion functions. \begin{theorem A} Let $(X,f_{1,\infty})$ be a non-autonomous dynamical system on a compact metric space $(X,d)$ and $Z$ be a non-empty compact subset of $X$. Then \begin{eqnarray*} h_{top}^{NB}(f_{1,\infty},Z)=h_{top}^{NWB}(f_{1,\infty},Z) &=& \lim_{\epsilon\to 0}\sup\{\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon):\mu\in\mathcal{M}(X), \mu(Z)=1\}\\ &=& \lim_{\epsilon\to 0}\sup\{h_{\mu}^{NK}(f_{1,\infty},\epsilon):\mu\in\mathcal{M}(X), \mu(Z)=1\}, \end{eqnarray*} where $h_{top}^{NB}(f_{1,\infty},Z)$, $h_{top}^{NWB}(f_{1,\infty},Z)$, $\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon)$, and $h_{\mu}^{NK}(f_{1,\infty},\epsilon)$ denote the neutralized Bowen topological entropy of $Z$, neutralized weighted Bowen topological entropy of $Z$, lower neutralized Brin-Katok's local entropy of $\mu$, and neutralized Katok's entropy of $\mu$, respectively. \end{theorem A} As a direct consequence of Theorem $A$, we have the following corollary, which is the main result of \cite{YRCEZX} for autonomous dynamical systems. \begin{corollary B} Let $(X,f)$ be an autonomous dynamical system on a compact metric space $(X,d)$ and $Z$ be a non-empty compact subset of $X$. Then \begin{eqnarray*} h_{top}^{NB}(f,Z)=h_{top}^{NWB}(f,Z) &=& \lim_{\epsilon\to 0}\sup\{\underline{h}_{\mu}^{NBK}(f,\epsilon):\mu\in\mathcal{M}(X), \mu(Z)=1\}\\ &=& \lim_{\epsilon\to 0}\sup\{h_{\mu}^{NK}(f,\epsilon):\mu\in\mathcal{M}(X), \mu(Z)=1\}, \end{eqnarray*} where $h_{top}^{NB}(f,Z)$, $h_{top}^{NWB}(f,Z)$, $\underline{h}_{\mu}^{NBK}(f,\epsilon)$, and $h_{\mu}^{NK}(f,\epsilon)$ denote the neutralized Bowen topological entropy of $Z$, neutralized weighted Bowen topological entropy of $Z$, lower neutralized Brin-Katok's local entropy of $\mu$, and neutralized Katok's entropy of $\mu$, respectively. \end{corollary B} \textbf{This paper is organized as follows.} In Section \ref{section2}, we give a precise definition of a non-autonomous dynamical system and introduce the notions of neutralized Bowen topological entropy of subsets, neutralized weighted Bowen topological entropy of subsets, lower neutralized Brin-Katok's local entropy of Borel probability measures, and neutralized Katok's entropy of Borel probability measures for non-autonomous dynamical systems. Moreover, we give a few preparatory results needed for the proof of the main results. The proof of Theorem A is given in Section \ref{section3}. \section{Definitions and Preliminaries}\label{section2} A \emph{non-autonomous} or \emph{time-dependent} dynamical system (an \emph{NDS} for short) is a pair $(X_{1,\infty}, f_{1,\infty})$, where $X_{1,\infty}=(X_n)_{n=1}^\infty$ is a sequence of sets and $f_{1,\infty}=(f_n)_{n=1}^\infty$ is a sequence of maps $f_n: X_n \to X_{n+1}$. If all the sets $X_n$ are compact metric spaces and all the $f_n$ are continuous, we say that $(X_{1,\infty}, f_{1,\infty})$ is a \emph{topological NDS}. Here, we assume that $X$ is a compact metric space with metric $d$, all the sets $X_{n}$ are equal to the set $X$, and we abbreviate $(X_{1,\infty}, f_{1,\infty})$ by $(X, f_{1,\infty})$.
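For instance (a simple example given only to fix ideas), taking every $X_{n}$ equal to the unit circle $X=\mathbb{R}/\mathbb{Z}$ with its standard metric and \begin{equation*} f_{n}(x)=2x+a_{n} \pmod 1,\qquad n\in\mathbb{N}, \end{equation*} for a fixed sequence $(a_{n})_{n=1}^{\infty}$ in $[0,1]$ yields a topological NDS; it reduces to an autonomous dynamical system precisely when the sequence $(a_{n})_{n=1}^{\infty}$ is constant.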
Throughout this paper, we work with topological NDSs and use NDS instead of topological NDS for simplicity. The time evolution of the system is defined by composing the maps $f_{n}$ in an obvious way. In general, we define \begin{equation*} f_i^n:=f_{i+n-1}\circ\cdots\circ f_{i+1}\circ f_i \ \ \text{for} \ i,n\in \mathbb{N},\ \text{and} \ f_i^0:=\text{id}_X. \end{equation*} Subset $Z$ of $X$ is said to be \emph{forward} $f_{1,\infty}$-\emph{invariant} if $f_{n}(Z)\subset Z$ for all $n\in\mathbb{N}$. In what follows, we introduce the notions of neutralized Bowen topological entropy of subsets, neutralized weighted Bowen topological entropy of subsets, lower neutralized Brin-Katok's local entropy of Borel probability measures, and neutralized Katok's entropy of Borel probability measures for NDSs. Moreover, we give a few preparatory results needed for the proof of the main results. \subsection{Neutralized Bowen topological entropy} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$. Given $i,n\in\mathbb{N}$, $x,y\in X$, the $(i,n)$-\emph{Bowen metric} $d_{i,n}$ on $X$ is defined as \begin{equation*} d_{i,n}(x,y):=\max_{0\leq j\leq n-1}d(f_{i}^{j}(x),f_{i}^{j}(y)). \end{equation*} Then the \emph{Bowen open ball} of radius $\epsilon>0$ and order $n$ with initial time $i$ around $x$ is given by \begin{equation*} B(x;i,n,\epsilon):=\{y \in X: d_{i,n}(x,y)<\epsilon\}. \end{equation*} Following the idea of \cite{SBOFRH}, we define the neutralized Bowen open ball for NDSs by replacing the radius $\epsilon$ in the usual Bowen open ball with $\text{e}^{-n\epsilon}$. Precisely, the \emph{neutralized Bowen open ball} of radius $\epsilon>0$ and order $n$ with initial time $i$ around $x$ is given by \begin{equation*} B(x;i,n,\text{e}^{-n\epsilon}):=\{y \in X: d_{i,n}(x,y)<\text{e}^{-n\epsilon}\}. \end{equation*} For simplicity, denote $B(x;1,n,\epsilon)$, $B(x;1,n,\text{e}^{-n\epsilon})$, and $d_{1,n}$ by $B_{n}(x,\epsilon)$, $B_{n}(x,\text{e}^{-n\epsilon})$, and $d_{n}$, respectively. Now, by following the idea of \cite{BMKA,DFWH,JNS0000}, we define the so-called neutralized Bowen topological entropy of a subset that is not necessarily compact or forward $f_{1,\infty}$-invariant. Let $Z$ be a non-empty subset of $X$. Given $n\in\mathbb{N}$, $\alpha\in\mathbb{R}$, and $\epsilon>0$, define \begin{equation*} M_{f_{1,\infty}}(n,\alpha,\epsilon,Z):=\inf\bigg\{\sum_{i\in I}\text{e}^{-\alpha n_{i}}\bigg\}, \end{equation*} where the infimum is taken over all finite or countable collections $\{B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})\}_{i\in I}$ such that $x_{i}\in X$, $n_{i}\geq n$, and $Z\subseteq\bigcup_{i\in I}B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})$. The quantity $M_{f_{1,\infty}}(n,\alpha,\epsilon,Z)$ does not decrease as $n$ increases, hence the following limit exists \begin{equation*} M_{f_{1,\infty}}(\alpha,\epsilon,Z):=\lim_{n\to\infty}M_{f_{1,\infty}}(n,\alpha,\epsilon,Z). \end{equation*} By the construction of Carath\'{e}odory dimension characteristics (see \cite{PYB}), when $\alpha$ goes from $-\infty$ to $\infty$, the quantity $M_{f_{1,\infty}}(\alpha,\epsilon,Z)$ jumps from $\infty$ to $0$ at a unique critical value. Hence we can define the number \begin{equation*} M_{f_{1,\infty}}(\epsilon,Z):=\sup\{\alpha:M_{f_{1,\infty}}(\alpha,\epsilon,Z)=\infty\}=\inf\{\alpha:M_{f_{1,\infty}}(\alpha,\epsilon,Z)=0\}. 
\end{equation*} \begin{definition}[Neutralized Bowen topological entropy] Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$ and $Z$ be a non-empty subset of $X$. Then, the \emph{neutralized Bowen topological entropy} of NDS $(X,f_{1,\infty})$ on the set $Z$ is defined as \begin{equation*} h_{top}^{NB}(f_{1,\infty},Z):=\lim_{\epsilon\to 0} M_{f_{1,\infty}}(\epsilon,Z). \end{equation*} \end{definition} \begin{remark} Denote by $h_{top}^{B}(f_{1,\infty},Z)$ the classical Bowen topological entropy of NDS $(X, f_{1,\infty})$ on a non-empty subset $Z$ of $X$ defined by Bowen open balls $\{B_{n_{i}}(x_{i},\epsilon)\}_{i\in I}$ (see \cite{JNS0000}). Then, it is easy to see that $h_{top}^{B}(f_{1,\infty},Z)\leq h_{top}^{NB}(f_{1,\infty},Z)$. \end{remark} The following proposition presents some basic properties related to neutralized Bowen topological entropy of NDSs. \begin{proposition} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$ and $Z$, $Z_{1}$, $Z_{2}$ be non-empty subsets of $X$. Then, the following statements hold. \begin{itemize} \item[1)] The value of $h_{top}^{NB}(f_{1,\infty},Z)$ is independent of the choice of metrics on $X$. \item[2)] If $Z_{1}\subset Z_{2}$, then $h_{top}^{NB}(f_{1,\infty},Z_{1})\leq h_{top}^{NB}(f_{1,\infty},Z_{2})$. \item[3)] If $Z$ is a countable union of $Z_{i}$, then $M_{f_{1,\infty}}(\epsilon,Z)=\sup_{i}M_{f_{1,\infty}}(\epsilon,Z_{i})$ for every $\epsilon>0$. \item[4)] If $Z$ is a finite union of $Z_{i}$, then $h_{top}^{NB}(f_{1,\infty},Z)=\max_{i}h_{top}^{NB}(f_{1,\infty},Z_{i})$. \end{itemize} \end{proposition} \begin{proof} We only show the first statement, because the other statements are a direct consequence of the definition. Let $d_{1}$, $d_{2}$ be two compatible metrics on $X$. Then for every $\epsilon^{\prime}>0$ there is $\delta^{\prime}>0$ such that for all $x,y\in X$, $d_{1}(x,y)<\delta^{\prime}$ implies $d_{2}(x,y)<\epsilon^{\prime}$. Now, fix $\epsilon>0$ and let $0<\delta<\epsilon$. For every $n\geq 1$, there is $n_{0}\geq n$ (depends only on $\epsilon$, $\delta$, and $n$) so that for all $x,y\in X$, $d_{1}(x,y)<\text{e}^{-n_{0}\delta}$ implies $d_{2}(x,y)<\text{e}^{-n\epsilon}$. Hence $M_{f_{1,\infty}}^{d_{2}}(n,\alpha,\epsilon,Z)\leq M_{f_{1,\infty}}^{d_{1}}(n_{0},\alpha,\delta,Z)\leq M_{f_{1,\infty}}^{d_{1}}(\alpha,\delta,Z)$ and so $M_{f_{1,\infty}}^{d_{2}}(\alpha,\epsilon,Z)\leq M_{f_{1,\infty}}^{d_{1}}(\alpha,\delta,Z)$. Consequently, $M_{f_{1,\infty}}^{d_{2}}(\epsilon,Z)\leq M_{f_{1,\infty}}^{d_{1}}(\delta,Z)$. As $\epsilon\to 0$, one has $h_{top}^{NB}(f_{1,\infty},d_{2},Z)\leq h_{top}^{NB}(f_{1,\infty},d_{1},Z)$. Now, by exchanging the role of $d_{1}$ and $d_{2}$ we get the converse inequality. \end{proof} \subsection{Neutralized weighted Bowen topological entropy} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$ and $Z$ be a non-empty subset of $X$. For any $n\in\mathbb{N}$, $\alpha\in\mathbb{R}$, $\epsilon>0$, and any bounded function $g:X\to\mathbb{R}$, define \begin{equation*} \mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,g):=\inf\bigg\{\sum_{i\in I}c_{i}\text{e}^{-\alpha n_{i}}\bigg\}, \end{equation*} where the infimum is taken over all finite or countable families $\{(B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon}),c_{i})\}_{i\in I}$ such that $0<c_{i}<\infty$, $x_{i}\in X$, $n_{i}\geq n$ for all $i$, and $\sum_{i\in I}c_{i}\chi_{B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})}\geq g$, where $\chi_{B}$ denotes the characteristic function of subset $B$ of $X$. 
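In particular (a simple observation spelled out here for the reader's convenience), if $\{B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})\}_{i\in I}$ is a finite or countable collection with $x_{i}\in X$ and $n_{i}\geq n$ covering a set $Z\subseteq X$, then the family $\{(B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon}),1)\}_{i\in I}$ is admissible for $g=\chi_{Z}$, and hence \begin{equation*} \mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,\chi_{Z})\leq\sum_{i\in I}\text{e}^{-\alpha n_{i}}. \end{equation*} Taking the infimum over all such collections shows that $\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,\chi_{Z})\leq M_{f_{1,\infty}}(n,\alpha,\epsilon,Z)$; this observation is behind the right-hand inequality in Theorem \ref{propj} below.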
Now, set \begin{equation*} \mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z):=\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,\chi_{Z}). \end{equation*} The quantity $\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z)$ does not decrease as $n$ increases, hence the following limit exists \begin{equation*} \mathcal{W}_{f_{1,\infty}}(\alpha,\epsilon,Z):=\lim_{n\to\infty}\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z). \end{equation*} By the construction of Carath\'{e}odory dimension characteristics (see \cite{PYB}), when $\alpha$ goes from $-\infty$ to $\infty$, the quantity $\mathcal{W}_{f_{1,\infty}}(\alpha,\epsilon,Z)$ jumps from $\infty$ to $0$ at a unique critical value. Hence we can define the number \begin{equation*} \mathcal{W}_{f_{1,\infty}}(\epsilon,Z):=\sup\{\alpha:\mathcal{W}_{f_{1,\infty}}(\alpha,\epsilon,Z)=\infty\}=\inf\{\alpha:\mathcal{W}_{f_{1,\infty}}(\alpha,\epsilon,Z)=0\}. \end{equation*} \begin{definition}[Neutralized weighted Bowen topological entropy] Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$ and $Z$ be a non-empty subset of $X$. Then, the \emph{neutralized weighted Bowen topological entropy} of NDS $(X,f_{1,\infty})$ on the set $Z$ is defined as \begin{equation*} h_{top}^{NWB}(f_{1,\infty},Z):=\lim_{\epsilon\to 0}\mathcal{W}_{f_{1,\infty}}(\epsilon,Z). \end{equation*} \end{definition} In what follows, as the role of Kolmogorov-Sinai entropy played in the classical variational principle of topological entropy of autonomous dynamical systems (see \cite{PW}), we define the lower neutralized Brin-Katok's local entropy and the neutralized Katok's entropy of Borel probability measures for NDSs, because we need them to establish the variational principle for neutralized Bowen topological entropy and neutralized weighted Bowen topological entropy on non-empty compact subsets of NDSs. \subsection{Lower neutralized Brin-Katok's local entropy} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$, and let $\mathcal{M}(X)$ denote the set of Borel probability measures on $X$. By following the idea of \cite{BMKA,DFWH,JNS0000}, one can employ the neutralized Bowen open balls to define the local lower neutralized Brin-Katok's entropy for NDSs. For $\mu\in\mathcal{M}(X)$, $\epsilon>0$, and $x\in X$, we define \begin{equation*} \underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon,x):=\liminf_{n\to\infty}\dfrac{-\log\mu(B_{n}(x,\text{e}^{-n\epsilon}))}{n} \end{equation*} and \begin{equation*} \underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon):=\int\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon,x) d\mu. \end{equation*} Then the \emph{lower neutralized Brin-Katok's local entropy} of $\mu$ is defined as \begin{equation*} \underline{h}_{\mu}^{NBK}(f_{1,\infty}):=\lim_{\epsilon\to 0}\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon). \end{equation*} \begin{remark} Note that $\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon,x)$ is integrable and so $\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon)$ is well-defined. Indeed, for every $\alpha>0$ the set $\{x\in X: \mu(B_{n}(x,\text{e}^{-n\epsilon}))>\alpha\}$ is open, that implies the mappings $x\in X\mapsto\mu(B_{n}(x,\text{e}^{-n\epsilon}))$ and $x\in X\mapsto\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon,x)$ are measurable. \end{remark} \subsection{Neutralized Katok's entropy.} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$. Then, the neutralized Katok's entropy of $(X, f_{1,\infty})$ can be defined similarly to the classical Katok's entropy defined by spanning sets and separated sets \cite{KA}. 
However, to pursue a variational principle for $(X, f_{1,\infty})$ we need to define neutralized Katok's entropy of Borel probability measures by Carath\'{e}odory Pesin structure \cite{PYB}. Let $n\in\mathbb{N}$, $\epsilon>0$, $\alpha\in\mathbb{R}$, $\mu\in\mathcal{M}(X)$, and $0<\delta<1$. Put \begin{equation*} \Lambda_{f_{1,\infty}}(n,\alpha,\epsilon,\mu,\delta):=\inf\bigg\{\sum_{i\in I}\text{e}^{-\alpha n_{i}}\bigg\}, \end{equation*} where the infimum is taken over all finite or countable collections $\{B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})\}_{i\in I}$ such that $x_{i}\in X$, $n_{i}\geq n$, and $\mu(\bigcup_{i\in I}B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon}))>1-\delta$. The quantity $\Lambda_{f_{1,\infty}}(n,\alpha,\epsilon,\mu,\delta)$ does not decrease as $n$ increases, hence the following limit exists \begin{equation*} \Lambda_{f_{1,\infty}}(\alpha,\epsilon,\mu,\delta):=\lim_{n\to\infty}\Lambda_{f_{1,\infty}}(n,\alpha,\epsilon,\mu,\delta). \end{equation*} By the construction of Carath\'{e}odory dimension characteristics (see \cite{PYB}), when $\alpha$ goes from $-\infty$ to $\infty$, the quantity $\Lambda_{f_{1,\infty}}(\alpha,\epsilon,\mu,\delta)$ jumps from $\infty$ to $0$ at a unique critical value. Hence we can define the number \begin{equation*} \Lambda_{f_{1,\infty}}(\epsilon,\mu,\delta):=\sup\{\alpha:\Lambda_{f_{1,\infty}}(\alpha,\epsilon,\mu,\delta)=\infty\}=\inf\{\alpha:\Lambda_{f_{1,\infty}}(\alpha,\epsilon,\mu,\delta)=0\}. \end{equation*} Put $h_{\mu}^{NK}(f_{1,\infty},\epsilon):=\lim_{\delta\to 0}\Lambda_{f_{1,\infty}}(\epsilon,\mu,\delta)$. Then, the \emph{neutralized Katok's entropy} of $\mu$ is defined as \begin{equation*} h_{\mu}^{NK}(f_{1,\infty}):=\lim_{\epsilon\to 0}h_{\mu}^{NK}(f_{1,\infty},\epsilon). \end{equation*} The following proposition presents some basic properties related to lower neutralized Brin-Katok's local entropy and the neutralized Katok's entropy of Borel probability measures for NDSs. \begin{proposition}\label{propjj} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$ and $\mu\in\mathcal{M}(X)$. Then, for every $\epsilon>0$, one has $\underline{h}_{\mu}^{NBK}(f_{1,\infty},\dfrac{\epsilon}{2})\leq h_{\mu}^{NK}(f_{1,\infty},\epsilon)$. \end{proposition} \begin{proof} Let $\underline{h}_{\mu}^{NBK}(f_{1,\infty},\dfrac{\epsilon}{2})>\alpha>0$. For each $n\in\mathbb{N}$, set \begin{equation*} E_{n}:=\big\{x\in X: \mu(B_{m}(x,\text{e}^{\frac{-m\epsilon}{2}}))<\text{e}^{-m\alpha}\ \text{for all}\ m\geq n\big\}. \end{equation*} Then, there is $n_{0}\in\mathbb{N}$ such that $\text{e}^{\frac{n_{0}\epsilon}{2}}>2$ and $\mu(E_{n_{0}})>0$. Put $\delta_{0}:=\frac{1}{2}\mu(E_{n_{0}})$. Let $\{B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})\}_{i\in I}$ be a finite or countable collection such that $x_{i}\in X$, $n_{i}\geq n_{0}$, and $\mu(\bigcup_{i\in I}B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon}))>1-\delta_{0}$. Then $\mu(E_{n_{0}}\cap\bigcup_{i\in I}B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon}))\geq\delta_{0}$. Put $I_{1}:=\{i\in I: E_{n_{0}}\cap B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})\neq\emptyset\}$. For every $i\in I_{1}$, take $y_{i}\in E_{n_{0}}\cap B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})$ such that \begin{equation*} E_{n_{0}}\cap B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})\subset B_{n_{i}}(y_{i},2\text{e}^{-n_{i}\epsilon})\subset B_{n_{i}}(y_{i},\text{e}^{\frac{-n_{i}\epsilon}{2}}). 
\end{equation*} Thus, \begin{equation*} \sum_{i\in I}\text{e}^{-\alpha n_{i}}\geq\sum_{i\in I_{1}}\text{e}^{-\alpha n_{i}}\geq\sum_{i\in I_{1}}\mu(B_{n_{i}}(y_{i},\text{e}^{\frac{-n_{i}\epsilon}{2}}))\geq\delta_{0}, \end{equation*} which implies $\Lambda_{f_{1,\infty}}(n_{0},\alpha,\epsilon,\mu,\delta_{0})\geq\delta_{0}$, hence $\Lambda_{f_{1,\infty}}(\alpha,\epsilon,\mu,\delta_{0})\geq\delta_{0}>0$ and therefore $\Lambda_{f_{1,\infty}}(\epsilon,\mu,\delta_{0})\geq\alpha$. Consequently, $h_{\mu}^{NK}(f_{1,\infty},\epsilon)\geq\alpha$. This finishes the proof by letting $\alpha\to \underline{h}_{\mu}^{NBK}(f_{1,\infty},\dfrac{\epsilon}{2})$. \end{proof} \section{Proof of main results}\label{section3} In this section, we prove Theorem A. To do so, we use ideas from geometric measure theory \cite{MP} and the works of Feng and Huang \cite{DFWH} and Nazarian Sarkooh \cite{JNS0000} to show that the neutralized Bowen topological entropy and the neutralized weighted Bowen topological entropy of NDSs are equivalent. This allows us to define a positive linear functional $L$ on $\mathcal{C}(X,\mathbb{R})$ by applying a dynamical Frostman's lemma, which is an analog of Feng and Huang's approximation, where $\mathcal{C}(X,\mathbb{R})$ denotes the Banach space of all continuous real-valued functions on $X$ equipped with the supremum norm. Then the \emph{Riesz representation theorem} can be applied to produce a Borel probability measure $\mu\in\mathcal{M}(X)$ with $\mu(Z)=1$ so that $M_{f_{1,\infty}}(\epsilon,Z)\leq\underline{h}_{\mu}^{NBK}(f_{1,\infty},2\epsilon)$. In Theorem \ref{propj}, we prove that the neutralized Bowen topological entropy is equal to the neutralized weighted Bowen topological entropy, i.e., $h_{top}^{NB}(f_{1,\infty},Z)=h_{top}^{NWB}(f_{1,\infty},Z)$. To do this, we need the following lemma \cite[Lemma 6.3]{TW} which is called the \emph{Vitali covering lemma} (see \cite[Theorem 2.1]{MP}). \begin{lemma}\label{lemmaj} Let $(X,d)$ be a compact metric space, $r>0$, and $\{B(x_{i},r)\}_{i\in I}$ be a family of open balls in $X$. Set $I(i):=\{j\in I: B(x_{i},r)\cap B(x_{j},r)\neq\emptyset\}$. Then there exists a finite subset $J\subset I$ so that for any $i,j\in J$ with $i\neq j$, $I(i)\cap I(j)=\emptyset$ and $\bigcup_{i\in I}B(x_{i},r)\subseteq\bigcup_{j\in J}B(x_{j},5r)$. \end{lemma} \begin{theorem}\label{propj} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$ and $Z$ be a non-empty subset of $X$. Then, for $\epsilon>0$, $\alpha\in\mathbb{R}$, $\theta>0$, and sufficiently large $n$, we have $M_{f_{1,\infty}}(n,\alpha+\theta,\frac{\epsilon}{2},Z)\leq\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z)\leq M_{f_{1,\infty}}(n,\alpha,\epsilon,Z)$. Consequently, $h_{top}^{NB}(f_{1,\infty},Z)=h_{top}^{NWB}(f_{1,\infty},Z)$. \end{theorem} \begin{proof} The inequality $\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z)\leq M_{f_{1,\infty}}(n,\alpha,\epsilon,Z)$ follows by definitions. We show $M_{f_{1,\infty}}(n,\alpha+\theta,\frac{\epsilon}{2},Z)\leq\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z)$ by modifying the proof of \cite[Proposition 6.4]{TW}. Let $n$ be a sufficiently large integer so that $\text{e}^{\frac{m\epsilon}{2}}>5$ and $\frac{m^{2}}{\text{e}^{m\theta}}<1$ for all $m\geq n$. Let $\{(B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon}),c_{i})\}_{i\in I}$ with $0<c_{i}<\infty$, $x_{i}\in X$, and $n_{i}\geq n$ for all $i$, be a finite or countable family satisfying $\sum_{i\in I}c_{i}\chi_{B_{n_{i}}(x_{i},\text{e}^{-n_{i}\epsilon})}\geq\chi_{Z}$. For $m\in\mathbb{N}$, put $I_{m}:=\{i\in I: n_{i}=m\}$. Let $t>0$ and $m\geq n$.
Set \begin{equation*} Z_{m,t}:=\{x\in Z: \sum_{i\in I_{m}}c_{i}\chi_{B_{m}(x_{i},\text{e}^{-m\epsilon})}(x)>t\} \end{equation*} and \begin{equation*} I_{m}^{t}:=\{i\in I_{m}: B_{m}(x_{i},\text{e}^{-m\epsilon})\cap Z_{m,t}\neq\emptyset\}. \end{equation*} Then, $Z_{m,t}\subset\bigcup_{i\in I_{m}^{t}}B_{m}(x_{i},\text{e}^{-m\epsilon})$. Let $\mathcal{B}:=\{B_{m}(x_{i},\text{e}^{-m\epsilon})\}_{i\in I_{m}^{t}}$. By Lemma \ref{lemmaj}, there is a finite subset $J\subset I_{m}^{t}$ such that \begin{equation*} \bigcup_{i\in I_{m}^{t}}B_{m}(x_{i},\text{e}^{-m\epsilon})\subset\bigcup_{j\in J}B_{m}(x_{j},5\text{e}^{-m\epsilon})\subset\bigcup_{j\in J}B_{m}(x_{j},\text{e}^{\frac{-m\epsilon}{2}}), \end{equation*} and $I_{m}^{t}(i)\cap I_{m}^{t}(j)=\emptyset$ for any $i,j\in J$ with $i\neq j$, where $I_{m}^{t}(i):=\{j\in I_{m}^{t}: B_{m}(x_{j},\text{e}^{-m\epsilon})\cap B_{m}(x_{i},\text{e}^{-m\epsilon})\neq\emptyset\}$. Now, for each $j\in J$, we choose $y_{j}\in B_{m}(x_{j},\text{e}^{-m\epsilon})\cap Z_{m,t}$. Then $\sum_{i\in I_{m}^{t}}c_{i}\chi_{B_{m}(x_{i},\text{e}^{-m\epsilon})}(y_{j})>t$ and hence $\sum_{i\in I_{m}^{t}(j)}c_{i}>t$. Consequently, \begin{equation*} |J|<\frac{1}{t}\sum_{j\in J}\sum_{i\in I_{m}^{t}(j)}c_{i}\leq\frac{1}{t}\sum_{i\in I_{m}^{t}}c_{i}. \end{equation*} It follows that $M_{f_{1,\infty}}(n,\alpha+\theta,\frac{\epsilon}{2},Z_{m,t})\leq|J|\cdot\text{e}^{-m(\alpha+\theta)}\leq\frac{1}{m^{2}t}\sum_{i\in I_{m}}c_{i}\text{e}^{-m\alpha}$. Note that for any $0<t<1$, $Z=\cup_{m\geq n}Z_{m,\frac{t}{m^{2}}}$, and so \begin{equation*} M_{f_{1,\infty}}(n,\alpha+\theta,\frac{\epsilon}{2},Z)\leq\sum_{m\geq n}M_{f_{1,\infty}}(n,\alpha+\theta,\frac{\epsilon}{2},Z_{m,\frac{t}{m^{2}}})\leq\frac{1}{t}\sum_{i\in I}c_{i}\text{e}^{-n_{i}\alpha}. \end{equation*} Hence, as $t\to 1$, we have $M_{f_{1,\infty}}(n,\alpha+\theta,\frac{\epsilon}{2},Z)\leq\sum_{i\in I}c_{i}\text{e}^{-n_{i}\alpha}$. So $M_{f_{1,\infty}}(n,\alpha+\theta,\frac{\epsilon}{2},Z)\leq\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z)$, which implies the desired result. \end{proof} To prove Theorem A, we need the following dynamical Frostman's lemma which is an analog of Feng and Huang's approximation. We omit the details of its proof because the proofs of \cite[Lemma 3.4]{DFWH} and \cite[Lemma 4.3]{JNS0000}, which are adapted from Howroyd's elegant argument (see \cite[Theorem 2]{HJD} and \cite[Theorem 8.17]{MP}), also work for it if the Bowen open ball $B_{m}(x,\epsilon)$ is replaced with the neutralized Bowen open ball $B_{m}(x,\text{e}^{-m\epsilon})$. \begin{lemma}\label{lemmajjj} Let $(X, f_{1,\infty})$ be an NDS on a compact metric space $(X,d)$ and $Z$ be a non-empty compact subset of $X$. Let $\alpha\in\mathbb{R}$, $n\in\mathbb{N}$, and $\epsilon>0$. Set $c:=\mathcal{W}_{f_{1,\infty}}(n,\alpha,\epsilon,Z)>0$. Then there is $\mu\in\mathcal{M}(X)$ such that $\mu(Z)=1$ and \begin{equation*} \mu(B_{m}(x,\text{e}^{-m\epsilon}))\leq\dfrac{1}{c}\text{e}^{-\alpha m},\ \ \ \forall x\in X,\ m\geq n. \end{equation*} \end{lemma} \textbf{Proof of Theorem A}. Note that for every $\mu\in\mathcal{M}(X)$ with $\mu(Z)=1$ and $\epsilon>0$, $h_{\mu}^{NK}(f_{1,\infty},\epsilon)\leq M_{f_{1,\infty}}(\epsilon,Z)$. Hence, by Proposition \ref{propjj}, we have \begin{eqnarray*} && \lim_{\epsilon\to 0}\sup\{\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon): \mu\in\mathcal{M}(X),\mu(Z)=1\}\\ &\leq & \lim_{\epsilon\to 0}\sup\{h_{\mu}^{NK}(f_{1,\infty},\epsilon): \mu\in\mathcal{M}(X),\mu(Z)=1\}\\ &\leq & h_{top}^{NB}(f_{1,\infty},Z).
\end{eqnarray*} Now, fix $\epsilon>0$ and assume that $M_{f_{1,\infty}}(\epsilon,Z)>\alpha>0$. Then, by Theorem \ref{propj}, one has $\mathcal{W}_{f_{1,\infty}}(\alpha,2\epsilon,Z)>0$. So there is $n\in\mathbb{N}$ such that $c:=\mathcal{W}_{f_{1,\infty}}(n,\alpha,2\epsilon,Z)>0$. By Lemma \ref{lemmajjj}, there is $\mu\in\mathcal{M}(X)$ such that $\mu(Z)=1$ and \begin{equation*} \mu(B_{m}(x,\text{e}^{-2m\epsilon}))\leq\dfrac{1}{c}\text{e}^{-\alpha m},\ \ \ \forall x\in X,\ m\geq n. \end{equation*} This implies that $\underline{h}_{\mu}^{NBK}(f_{1,\infty},2\epsilon)\geq\alpha$. Now, as $\alpha\to M_{f_{1,\infty}}(\epsilon,Z)$, we obtain that \begin{equation*} M_{f_{1,\infty}}(\epsilon,Z)\leq\underline{h}_{\mu}^{NBK}(f_{1,\infty},2\epsilon)\leq\sup\{\underline{h}_{\mu}^{NBK}(f_{1,\infty},2\epsilon): \mu\in\mathcal{M}(X),\mu(Z)=1\}. \end{equation*} The above statement together with Theorem \ref{propj} finishes the proof of Theorem A. \begin{remark} Note that one cannot directly exchange the order of $\lim_{\epsilon\to 0}$ and $\sup$ in Theorem A, since \begin{equation*} \underline{h}_{\mu}^{NBK}(f_{1,\infty})=\inf_{\epsilon>0}\underline{h}_{\mu}^{NBK}(f_{1,\infty},\epsilon)\ \text{and}\ h_{\mu}^{NK}(f_{1,\infty})=\inf_{\epsilon>0}h_{\mu}^{NK}(f_{1,\infty},\epsilon). \end{equation*} This obstacle, which comes from the definitions, does not allow us to establish a variational principle for neutralized Bowen topological entropy and neutralized weighted Bowen topological entropy whose form is closer to that of Nazarian Sarkooh \cite[Corollary D]{JNS0000}. Interested readers can also see \cite[Theorem 1.1]{YRCEZX} for analogous problems in autonomous dynamical systems, and \cite[Theorem 1.4]{TW} and \cite[Theorem 1.4]{YRCEZX1} for analogous problems in metric mean dimension theory. \end{remark} \section*{Acknowledgments} The authors would like to thank the referee for his/her comments on the manuscript. \section*{Declarations} \begin{itemize} \item[]\textbf{Ethical Approval}: Not applicable. \item[]\textbf{Competing interests}: There is no conflict of interest. \item[]\textbf{Authors' contributions}: All the authors were responsible for conceptualizing, writing, reviewing, and editing the manuscript. \item[]\textbf{Funding}: No funding. \item[]\textbf{Availability of data and materials}: Not applicable. \end{itemize} \end{document}
\begin{document} \title {Reconstruction of a source domain from the Cauchy data: II. Three dimensional case} \author{Masaru IKEHATA\footnote{ Laboratory of Mathematics, Graduate School of Advanced Science and Engineering, Hiroshima University, Higashihiroshima 739-8527, JAPAN} \footnote{Emeritus Professor at Gunma University} } \maketitle \begin{abstract} This paper is concerned with the reconstruction issue of some typical inverse problems and consists of three parts. First a framework of the enclosure method for an inverse source problem governed by the Helmholtz equation at a fixed wave number in three dimensions is introduced. It is based on the nonvanishing of the coefficient of the leading profile of an oscillatory integral over a domain having a conical singularity. Second an explicit formula of the coefficient for a domain having a circular cone singularity and its implication under the framework are given. Third, an application under the framework to an inverse obstacle problem governed by an inhomogeneous Helmholtz equation at a fixed wave number in three dimensions is given. \noindent AMS: 35R30 $\quad$ \noindent Key words: exponentially growing solution, enclosure method, inverse source problem, inverse obstacle problem, Helmholtz equation, conical singularity, circular cone singularity \end{abstract} \section{Introduction} More than twenty years ago, in \cite{Ik} the author obtained the extraction formula of the support function of an unknown polygonal source domain in an inverse source problem governed by the Helmholtz equation, and of a polygonal penetrable obstacle in an inverse obstacle problem governed by an inhomogeneous Helmholtz equation. All the problems considered therein are in two dimensions and employ only a single set of Cauchy data of a solution of the governing equation at a fixed wave number in a bounded domain. Those results can be considered as the first application of a single measurement version of the {\it enclosure method} introduced in \cite{Ik0}. Following \cite{Ik}, in \cite{IkC} the author found another, unexpected application of the enclosure method to the Cauchy problem for the stationary Schr\"odinger equation $$\displaystyle -\Delta u+V(x)u=0 \tag {1.1} $$ in a bounded domain $\Omega$ of ${\rm \bf R}^n$, $n=2,3$. Here $V\in L^{\infty}(\Omega)$ and both $u$ and $V$ can be complex-valued functions. We established an explicit representation or computation formula for an arbitrary solution $u\in H^2(\Omega)$ to the equation (1.1) in $\Omega$ in terms of its Cauchy data on a part of $\partial\Omega$. See also \cite{IS} for its numerical implementation. Note also that the idea in \cite{IkC} has been applied to an inverse source problem governed by the heat equation together with an inverse heat conduction problem in \cite{IkH}, \cite{IkHC}, respectively. The idea introduced therein is to make use of the complex geometrical optics solutions (CGO) with a large parameter $\tau$ for the modified equation instead of (1.1): $$\begin{array}{ll} \displaystyle -\Delta v+V(x)v=\chi_{D_y}(x)v, & x\in\Omega, \end{array} $$ where $y$ is a given point in $\Omega$, $D_y\subset\subset\Omega$ is the inside of a triangle or a tetrahedron for $n=2,3$, respectively, with a vertex at $y$, and $\chi_{D_y}(x)$ is the characteristic function of $D_y$.
The solution is of the same type as the one constructed in \cite{SU1} for $n=2$ and in \cite{SU2} for $n=3$, and has the following form as $\tau\rightarrow\infty$ $$\displaystyle v\sim e^{x\cdot z}, $$ where $z=\tau(\omega+i\vartheta)$ and both $\omega$ and $\vartheta$ are unit vectors perpendicular to each other. This right-hand side is just the complex plane wave used in the Calder\'on method \cite{C}. Note that in \cite{IkS} another, simpler idea is presented, which makes use of the CGO solutions of another modified equation described below: $$\begin{array}{ll} \displaystyle -\Delta v+V(x)v=\chi_D(x)e^{x\cdot z}, & x\in\Omega. \end{array} $$ Using integration by parts we reduced the problem of computing the value of $u$ at a given point $y$, essentially, to clarifying the leading profile of the following oscillatory integral as $\tau\rightarrow\infty$: $$\displaystyle \int_{D_y} e^{x\cdot z}\,\rho(x)dx, $$ where $\rho(x)$ is uniformly H\"older continuous on $\overline{D_y}$\footnote{In this case $\rho=u$.}. Note that the asymptotic behaviour of this type of oscillatory integral in {\it two dimensions} is the key point of the enclosure method developed in \cite{Ik}. In \cite{IkC} we clarified the leading profile in a more general setting as follows. Given a pair $(p,\omega)\in{\rm \bf R}^n\times S^{n-1}$ and $\delta>0$ let $Q$ be an arbitrary non empty bounded open subset of the plane $x\cdot\omega=p\cdot\omega-\delta$ with respect to the relative topology from ${\rm \bf R}^n$. Define the bounded open subset of ${\rm \bf R}^n$ by the formula $$\displaystyle D_{(p,\omega)}(\delta,Q) =\cup_{0<s<\delta}\, \left\{p+\frac{s}{\delta}(z-p)\,\left\vert\right.\,z\in Q\,\right\}. \tag {1.2} $$ This is a cone with base $Q$ and apex $p$, lying in the slab $\{x\in{\rm \bf R}^n\,\vert\,p\cdot\omega-\delta<x\cdot\omega<p\cdot\omega\,\}$. Note that $\delta=\mbox{dist}\,(\{p\},Q)$ is called the height. If $Q$ is given by the inside of a polygon, the cone (1.2) is called a {\it solid pyramid}. In particular, if $Q$ is given by the inside of a triangle, the cone (1.2) becomes a tetrahedron. In (2.2) in \cite{IkC} we introduced a special complex constant associated with the domain (1.2), which is given by $$\displaystyle C_{(p,\omega)}(\delta, Q,\vartheta)=2s\int_{Q_s}\frac{dS_z}{\{s-i(z-p)\cdot\vartheta\}^n}, \tag {1.3} $$ where $i=\sqrt{-1}$, $0<s<\delta$ and $Q_s=D_{(p,\omega)}(\delta,Q)\cap\{x\in{\rm \bf R}^n\,\vert x\cdot\omega=p\cdot\omega-s\,\}$ and the direction $\vartheta\in S^{n-1}$ is perpendicular to $\omega$. Note that in \cite{IkC} the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ is simply written as $C_D(\omega,\omega^{\perp})$ with $\omega^{\perp}=\vartheta$. As pointed out therein, this quantity is independent of the choice of $s\in\,]0,\,\delta[$ because of the one-to-one correspondence between $z\in Q_s$ and $z'\in Q_{s'}$ by the formula $$ \left\{ \begin{array}{l} \displaystyle z'=p+\frac{s'}{s}\,(z-p), \\ \\ \displaystyle dS_{z'}=(\frac{s'}{s})^{n-1}\,dS_z. \end{array} \right. $$ The following proposition describes the relationship between the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ and an integral over (1.2). \proclaim{\noindent Proposition 1.1 (Lemma 2 in \cite{IkC}).} Let $n=2, 3$. Let $D=D_{(p,\omega)}(\delta,Q)$ and $\rho\in C^{0,\alpha}(\overline D)$ with $0<\alpha\le 1$.
It holds that, for all $\tau>0$ $$\begin{array}{l} \displaystyle \,\,\,\,\,\, \left\vert e^{-\tau p\cdot(\omega+i\vartheta)} \int_D\rho(x)e^{\tau x\cdot(\omega+i\vartheta)}\,dx -\frac{n-1}{2\tau^n} \rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)\right\vert \\ \\ \displaystyle \le\vert\rho(p)\vert\frac{\vert Q\vert}{\delta^{n-1}} \{(\tau\delta+1)^{n-1}+n-2\} \frac{e^{-\tau\delta}}{\tau^n} +\Vert\rho\Vert_{C^{0,\alpha}(\overline D)} \frac{\vert Q\vert}{\delta^{n-1}} (\frac{\mbox{diam}\,D}{\delta})^{\alpha}\frac{C_{n,\alpha}}{\tau^{n+\alpha}}, \end{array} $$ where $\Vert\rho\Vert_{C^{0,\alpha}(\overline D)}= \sup_{x,y\in\overline D, x\not=y}\frac{\vert\rho(x)-\rho(y)\vert}{\vert x-y\vert^{\alpha}}$ and $$\displaystyle C_{n,\alpha} =\int_0^{\infty}s^{n-1+\alpha} e^{-s}ds. $$ \em \vskip2mm Thus we have, as $\tau\rightarrow\infty$ $$\displaystyle e^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\rho(x)e^{\tau x\cdot(\omega+i\vartheta)}\,dx =\frac{n-1}{2\tau^n} \rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)+O(\tau^{-(n+\alpha)}). $$ This is the meaning of the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$. Note that the remainder estimate $O(\tau^{-(n+\alpha)})$ is uniform with respect to $\vartheta$. Also, as a direct corollary, instead of (1.3) we have another representation of $C_{(p,\omega)}(\delta,Q,\vartheta)$: $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) =\frac{2}{n-1} \lim_{\tau\longrightarrow\infty}\tau^ne^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)} e^{\tau x\cdot(\omega+i\vartheta)}dx. \tag {1.4} $$ The convergence is uniform with respect to $\vartheta$. Proposition 1.1 is one of the two key points in \cite{IkC} and clarifies the role of the H\"older continuity of $\rho$. The other is the {\it non-vanishing} of $C_{(p,\omega)}(\delta,Q,\vartheta)$ as a part of the leading coefficient of the integral in Proposition 1.1 as $\tau\rightarrow\infty$. This is not trivial, in particular, in the three-dimensional case. For this we have shown therein the following fact. \proclaim{\noindent Proposition 1.2 (Theorem 2 in \cite{IkC}).} $\bullet$ If $n=2$ and $Q$ is given by the inside of an {\it arbitrary line segment}, then for all $\vartheta$ perpendicular to $\omega$ we have $C_{(p,\omega)}(\delta,Q,\vartheta)\not=0$. $\bullet$ If $n=3$ and $Q$ is given by the inside of an {\it arbitrary triangle}, then for all $\vartheta$ perpendicular to $\omega$ we have $C_{(p,\omega)}(\delta,Q,\vartheta)\not=0$. \em \vskip2mm The nonvanishing of the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ in the case $n=2$ has been shown in the proof of Lemma 2.1 in \cite{Ik}. The proof therein employs a local expression of the corner around the apex as the graph of a function on the line $x\cdot\omega=x\cdot p$, and so a proof viewing $D_{(p,\omega)}(\delta,Q)$ as a cone, as in \cite{IkC}, is not developed there. Note that, in the survey paper \cite{IkS} on the enclosure method, it is pointed out that ``the Helmholtz version'' of Proposition 1.1 is also valid. That is, roughly speaking, we have $$\displaystyle e^{-p\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\,\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\,\rho(x)e^{x\cdot(\tau \omega+i\sqrt{\tau^2+k^2}\,\vartheta)}\,dx =\frac{n-1}{2\tau^n} \rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)+O(\tau^{-(n+\alpha)}) \tag {1.5} $$ with the {\it same constant} $C_{(p,\omega)}(\delta,Q,\vartheta)$, where $k\ge 0$. See Lemma 3.2 therein. The proof can be done by using the same argument as that of Proposition 1.1.
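Before proceeding, it may be helpful to illustrate the constant in (1.3) and the first statement of Proposition 1.2 by the simplest possible example; the computation below is included only for orientation and is not used in the sequel. Let $n=2$, $p=0$ and let $Q$ be the symmetric open segment $\{-\delta\omega+t\vartheta\,\vert\,\vert t\vert<a\delta\}$ with an aperture parameter $a>0$, so that $D_{(p,\omega)}(\delta,Q)$ is an isosceles triangle with apex at the origin. Then $Q_s=\{-s\omega+t\vartheta\,\vert\,\vert t\vert<as\}$, one has $(z-p)\cdot\vartheta=t$ for $z=-s\omega+t\vartheta\in Q_s$, and (1.3) becomes $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) =2s\int_{-as}^{as}\frac{dt}{(s-it)^2} =2s\left[\frac{-i}{s-it}\right]_{t=-as}^{t=as} =\frac{4a}{1+a^2}, $$ which is independent of $s\in\,]0,\,\delta[$, as it should be, and never vanishes.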
Note that the function $v=e^{x\cdot(\tau \omega+i\sqrt{\tau^2+k^2}\,\vartheta)}$ satisfies the Helmholtz equation $\Delta v+k^2 v=0$ in ${\rm \bf R}^n$. \subsection{Role of nonvanishing in an inverse source problem} As an application of the nonvanishing of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$, we present here a direct application to the inverse source problem considered in \cite{Ik}, however, in {\it three dimensions}. Let $\Omega$ be a bounded domain of ${\rm \bf R}^3$ with $\partial\Omega\in C^2$. We denote by $\nu$ the normal unit outward vector field on $\partial\Omega$. Let $k\ge 0$. Let $u\in H^1(\Omega)$ be an arbitrary weak solution of the Helmholtz equation in $\Omega$ at the wave number $k$: $$\begin{array}{ll} \displaystyle \Delta u+k^2 u=F(x), & x\in\Omega, \end{array} \tag {1.6} $$ where $F(x)$ is an unknown source term such that $\mbox{supp}\,F\subset\Omega$. Both $u$ and $F$ can be complex-valued functions. See \cite{Ik} for the meaning of the solution and the formulation of the Cauchy data on $\partial\Omega$ in the weak sense. It is well known that, in general, one cannot obtain the uniqueness of the source term $F$ itself from the Cauchy data of $u$ on $\partial\Omega$. In fact, given $\varphi\in C^{\infty}_0(\Omega)$ let $G=F+\Delta\varphi+k^2\varphi$. We have $\mbox{supp}\,G\subset\Omega$ and the function $\tilde{u}=u+\varphi$ satisfies $$\begin{array}{ll} \displaystyle \Delta\tilde{u}+k^2\tilde{u}=G(x), & x\in\Omega. \end{array} $$ Both $u$ and $\tilde{u}$ have the same Cauchy data on $\partial\Omega$. It should be pointed out, however, that $F$ and $G$ coincide modulo a $C^{\infty}$ function. This means that the singularities of $F$ and $G$ coincide. This suggests a possibility of extracting some information about a singularity of $F$ or its support from the Cauchy data of $u$ on $\partial\Omega$. As done in \cite{Ik} in two dimensions, we introduce the special form of the unknown source $F$: $$F(x)=F_{\rho,D}(x)= \left\{\begin{array}{lr} \displaystyle 0, & \quad\mbox{if $x\in\Omega\setminus D$,}\\ \\ \displaystyle \rho(x), & \quad\mbox{if $x\in\,D$.} \end{array} \right. \tag {1.7} $$ Here $D$ is an unknown non empty open subset of $\Omega$ satisfying $\overline D\subset\Omega$ and $\rho\in L^{2}(D)$ is also unknown. We call $D$ the {\it source domain}; however, we assume the connectedness of neither $D$ nor $\Omega\setminus\overline D$. The function $\rho$ is called the strength of the source. We are interested in the following problem. $\quad$ {\bf\noindent Problem 1.} Extract information about a singularity of the source domain $D$ of $F$ having the form (1.7) from the Cauchy data $(u(x), \frac{\partial u}{\partial\nu}(x))$ for all $x\in\partial\Omega$. $\quad$ \noindent Note that we are seeking a {\it concrete procedure} for the extraction. Here we recall the notion of the regularity of a direction introduced in the enclosure method \cite{Ik}. The function $h_D(\omega)=\sup_{x\in D}\,x\cdot\omega$, $\omega\in S^{2}$ is called the {\it support function} of $D$. It belongs to $C(S^2,{\rm \bf R})$ because of the trivial estimate $\vert h_D(\omega_1)-h_D(\omega_2)\vert\le \sup_{x\in D}\,\vert x\vert\cdot\vert\omega_1-\omega_2\vert$ for all $\omega_1,\omega_2\in S^2$. Given $\omega\in S^{2}$, it is easy to see that the set $$\displaystyle H_{\omega}(D)\equiv\left\{x\in \overline D\,\left\vert\right. x\cdot\omega=h_D(\omega)\,\right\} $$ is non empty and contained in $\partial D$.
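For instance, if $D$ is an open ball centered at $x_0\in{\rm \bf R}^3$ with radius $R>0$ (a trivial illustration, recorded only to fix the notation), then $$\displaystyle h_D(\omega)=x_0\cdot\omega+R, \qquad H_{\omega}(D)=\{x_0+R\,\omega\} \qquad\mbox{for every $\omega\in S^{2}$,} $$ so that every direction turns out to be regular with respect to $D$ in the sense of the definition below.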
We say that $\omega$ is {\it regular} with respect to $D$ if the set $H_{\omega}(D)$ consists of only a single point. We denote the point by $p(\omega)$. We introduce a concept of a singularity of $D$ in (1.7). {\bf\noindent Definition 1.1.} Let $\omega\in S^{2}$ be regular with respect to $D$. We say that $D$ has a {\it conical singularity} from direction $\omega$ if there exist a positive number $\delta$ and an open set $Q$ of the plane $x\cdot\omega=h_D(\omega)-\delta$ with respect to the relative topology from ${\rm \bf R}^3$ such that $$\displaystyle D\cap\left\{x\in{\rm \bf R}^3\,\vert\,h_D(\omega)-\delta<x\cdot\omega<h_D(\omega)\,\right\}=D_{(p(\omega),\omega)}(\delta,Q). $$ Second we introduce a concept of an {\it activity} of the source term. {\bf\noindent Definition 1.2.} Given a point $p\in\partial D$ we say that the source $F=F_{\rho,D}$ given by (1.7) is {\it active} at $p$ if there exist an open ball $B_{\eta}(p)$ centered at $p$ with radius $\eta$, $0<\alpha\le 1$ and a function $\tilde{\rho}\in C^{0,\alpha}(\overline{B_{\eta}(p)})$ such that $\rho(x)=\tilde{\rho}(x)$ for almost all $x\in B_{\eta}(p)\cap D$ and $\tilde{\rho}(p)\not=0$. Note that $\rho$ together with $\tilde{\rho}$ can be a complex-valued function. Now let $u\in H^1(\Omega)$ satisfy the equation (1.6) in the weak sense with $F=F_{\rho, D}$ given by (1.7). Given a unit vector $\omega\in S^2$ define $S(\omega)=\{\vartheta\in S^2\,\vert \omega\cdot\vartheta=0\}$. Using the Cauchy data of $u$ on $\partial\Omega$, we define the indicator function as \cite{Ik} $$\displaystyle I_{\omega,\vartheta}(\tau)=\int_{\partial\Omega} \left(\frac{\partial u}{\partial\nu}v-\frac{\partial v}{\partial\nu} u\right)\,dS, $$ where $\vartheta\in S(\omega)$ and $$\displaystyle v=e^{x\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\vartheta)},\,\,\tau>0. $$ We also consider its derivative with respect to $\tau$, $$\displaystyle I_{\omega,\vartheta}'(\tau) =\int_{\partial\Omega}\left(\frac{\partial u}{\partial\nu}\,v_{\tau}-\frac{\partial\,v_{\tau}}{\partial\nu} u\right)\,dS, $$ where $$\displaystyle v_{\tau}=\partial_{\tau}v=\left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v. $$ The following theorem clarifies the role of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ in the asymptotic behaviour of the indicator function together with its derivative as $\tau\rightarrow\infty$. \proclaim{\noindent Theorem 1.1.} Let $\omega$ be regular with respect to $D$ and assume that $D$ has a conical singularity from direction $\omega$. Then, we have $$\displaystyle \tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}(\tau)= \tilde{\rho}(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) +O(\tau^{-\alpha}) \tag {1.8} $$ and $$\displaystyle \tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}'(\tau)= \tilde{\rho}(p(\omega))(h_D(\omega)+ip(\omega)\cdot\vartheta)\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) +O(\tau^{-\alpha}). \tag {1.9} $$ The remainder $O(\tau^{-\alpha})$ is uniform with respect to $\vartheta\in S(\omega)$. \em \vskip2mm {\it\noindent Proof.} Integration by parts yields $$\displaystyle I_{\omega,\vartheta}(\tau)=\int_D\rho(x)\,v\,dx $$ and thus $$\displaystyle I_{\omega,\vartheta}'(\tau)=\int_D\rho(x)\,v_{\tau}\,dx.
$$ Recalling Definition 1.1, one has the decomposition $$\displaystyle D=D_{(p(\omega),\omega)}(\delta,Q)\cup D', \tag {1.10} $$ where $$ D'=D\setminus D_{(p(\omega),\omega)}(\delta,Q)\subset\left\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega\le h_D(\omega)-\delta\,\right\}. \tag {1.11} $$ Besides, choosing $\delta$ smaller if necessary, one may assume that $D_{(p(\omega),\omega)}(\delta, Q)\subset B_{\eta}(p(\omega))$, where $\eta$ and $B_{\eta}(p(\omega))$ are the same as those in Definition 1.2. Hereafter we set $p=p(\omega)$ for simplicity of description. According to the decomposition (1.10), we have the decomposition of both $I_{\omega,\vartheta}(\tau)$ and $I_{\omega,\vartheta}'(\tau)$ as follows: $$\begin{array}{l} \displaystyle \,\,\,\,\,\, e^{-\tau h_D(\omega)} e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}I_{\omega,\vartheta}(\tau) \\ \\ \displaystyle =e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta} \int_{D_{(p,\omega)}(\delta, Q) }\tilde{\rho}(x)\,vdx +e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}\int_{D'}\rho v dx \end{array} \tag {1.12} $$ and $$\begin{array}{l} \displaystyle \,\,\,\,\,\, e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}I_{\omega,\vartheta}'(\tau) \\ \\ \displaystyle =e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta} \int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, v_{\tau}dx+ e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p\cdot\vartheta}\int_{D'}\rho v_{\tau} dx, \end{array} \tag {1.13} $$ where $p=p(\omega)$. By (1.11), we see that the second terms on the right-hand sides of (1.12) and (1.13) have the common bound $O(e^{-\tau\delta}\Vert\rho\Vert_{L^{2}(D)})$. Thus from (1.5) and (1.12) we obtain (1.8) with the remainder $O(\tau^{-\alpha})$ which is uniform with respect to $\vartheta\in S(\omega)$. For (1.13) we write $$\begin{array}{l} \displaystyle \,\,\,\,\,\, \int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, v_{\tau}dx \\ \\ \displaystyle =\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, \left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v\,dx \\ \\ \displaystyle =\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, x\cdot\omega\,v\,dx +i\frac{\tau}{\sqrt{\tau^2+k^2}}\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, x\cdot\vartheta\,v\,dx. \end{array} $$ Thus applying (1.5) to each of the last terms and using (1.13), we obtain (1.9) with the remainder $O(\tau^{-\alpha})$ which is uniform with respect to $\vartheta\in S(\omega)$. \noindent $\Box$ Thus under the same assumptions as Theorem 1.1, for each $\vartheta\in S(\omega)$ one can calculate $$\displaystyle I(\omega,\vartheta)\equiv \tilde{\rho}(p(\omega))\,\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) $$ via the formula $$\displaystyle I(\omega,\vartheta) =\lim_{\tau\rightarrow\infty}\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta} I_{\omega,\vartheta}(\tau) \tag {1.14} $$ by using the Cauchy data of $u$ on $\partial\Omega$ if $p(\omega)$ is known. As a direct corollary of formulae (1.8) and (1.9), we obtain a partial answer to Problem 1 and the starting point of the main purpose in this paper. \proclaim{\noindent Theorem 1.2.} Let $\omega$ be regular with respect to $D$. Assume that $D$ has a conical singularity from direction $\omega$, $F_{\rho,D}$ is active at $p=p(\omega)$ and that the direction $\vartheta\in S(\omega)$ satisfies the condition $$\displaystyle C_{(p(\omega),\,\omega)}(\delta,Q,\vartheta)\not=0.
\tag {1.15}
$$
Then, there exists a positive number $\tau_0$ such that, for all $\tau\ge\tau_0$, $\vert I_{\omega,\vartheta}(\tau)\vert>0$ and we have the following three asymptotic formulae.
The first formula is
$$\displaystyle
\lim_{\tau\longrightarrow\infty}\frac{\log\vert I_{\omega, \vartheta}(\tau)\vert}{\tau}=h_D(\omega)
\tag {1.16}
$$
and the second one is
$$\displaystyle
\lim_{\tau\rightarrow\infty}
\frac{I_{\omega,\vartheta}'(\tau)}{I_{\omega,\vartheta}(\tau)}
=h_D(\omega)+i\,p(\omega)\cdot\vartheta.
\tag {1.17}
$$
The third one is the so-called $0$-$\infty$ criterion:
$$\displaystyle
\lim_{\tau\longrightarrow\infty}e^{-\tau t}\vert I_{\omega, \vartheta}(\tau) \vert
=
\left\{
\begin{array}{ll}
0, & \mbox{if $t\ge h_D(\omega)$,}\\
\\
\displaystyle
\infty, & \mbox{if $t<h_D(\omega)$.}
\end{array}
\right.
\tag {1.18}
$$
\em \vskip2mm
This provides us with the framework of the approach using the enclosure method for a source domain with a conical singularity from a direction. Some remarks are in order.
$\bullet$ In two dimensions, by Proposition 1.2 the condition (1.15) is redundant and we have the same conclusion as Theorem 1.2.
$\bullet$ The formula (1.17) is an application of the idea ``taking the logarithmic derivative of the indicator function'' introduced in \cite{IkL}. Therein inverse obstacle scattering problems at a fixed frequency in two dimensions are considered. Needless to say, formula (1.17) is not derived in \cite{Ik}.
The condition (1.15) is {\it stable} with respect to the perturbation of $\vartheta\in S(\omega)$ since from the expression (1.3) we see that the function $S(\omega)\ni\vartheta\longmapsto C_{(p(\omega),\,\omega)}(\delta,Q,\,\vartheta)$ is continuous, where the topology of $S(\omega)$ is the relative one from ${\rm \bf R}^3$. This fact yields a corollary as follows.
\proclaim{\noindent Corollary 1.1.}
Let $\omega$ be regular with respect to $D$. Under the same assumptions as those in Theorem 1.2 the point $p(\omega)$ is uniquely determined by the Cauchy data of $u$ on $\partial\Omega$.
\em \vskip2mm
{\it\noindent Proof.}
From (1.16) one has $h_D(\omega)=p(\omega)\cdot\omega$. Choose $\vartheta'\in S(\omega)$ sufficiently near $\vartheta$ in such a way that $C_{(p(\omega),\omega)}(\delta,Q,\vartheta')\not=0$. Then from the formula (1.17) for two linearly independent directions $\vartheta$ and $\vartheta'$ one gets $p(\omega)\cdot\vartheta$ and $p(\omega)\cdot\vartheta'$.
\noindent $\Box$
As another direct corollary of Theorem 1.2 and Proposition 1.2 in the case $n=3$ we have the following result.
\proclaim{\noindent Corollary 1.2.}
Assume that $D$ is given by the inside of a convex polyhedron, that in a neighbourhood of each vertex $p$ of $D$, $D$ coincides with the inside of a tetrahedron with apex $p$, and that the source $F=F_{\rho, D}$ given by (1.7) is active at $p$.
Then, we have all the formulae (1.16), (1.17) and (1.18) for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$.
\em \vskip2mm
{\it\noindent Proof.}
For each $\omega$ regular with respect to $D$, the domain $D$ has a conical singularity from direction $\omega$ with a triangle $Q$ at $p(\omega)$. Thus (1.15) is valid for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$. Therefore, we have all the formulae (1.16), (1.17) and (1.18) for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$.
\noindent $\Box$
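The asymptotic formulae (1.16) and (1.17) translate directly into a numerical recipe: evaluate the indicator function at a few large values of $\tau$ and read off the limits. The following fragment is only an illustrative sketch and is not part of the analysis above; it assumes a hypothetical user-supplied routine \verb|indicator(tau)| returning $I_{\omega,\vartheta}(\tau)$ computed from the Cauchy data, and it replaces $I_{\omega,\vartheta}'(\tau)$ by a central difference quotient.
\begin{verbatim}
import numpy as np

def estimate_from_indicator(indicator, taus, dtau=1e-3):
    """Estimate h_D(omega) via (1.16) and h_D(omega)+i p(omega).theta via (1.17).

    `indicator` is a hypothetical routine tau -> I_{omega,theta}(tau)
    obtained from the Cauchy data of u on the boundary.
    """
    taus = np.asarray(taus, dtype=float)
    I = np.array([indicator(t) for t in taus])
    # (1.16): log|I(tau)| / tau tends to h_D(omega) as tau grows
    h_estimates = np.log(np.abs(I)) / taus
    # (1.17): I'(tau)/I(tau) tends to h_D(omega) + i p(omega).theta
    dI = np.array([(indicator(t + dtau) - indicator(t - dtau)) / (2.0 * dtau)
                   for t in taus])
    log_derivative = dI / I
    return h_estimates[-1], log_derivative[-1]
\end{verbatim}
In practice the largest usable $\tau$ is limited by the exponential growth of $v$, so one would monitor the stabilization of both sequences rather than rely on a single value.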
{\bf\noindent Remark 1.1.}
Under the same assumptions as Corollary 1.2 one gets a uniqueness theorem: the Cauchy data of $u$ on $\partial\Omega$ uniquely determines $D$. The proof is as follows. From (1.16) one gets $h_D(\omega)$ for all $\omega$ regular with respect to $D$. The set of all $\omega$ that are not regular with respect to $D$ consists of finitely many points and arcs on $S^2$. This yields that the set of all $\omega$ that are regular with respect to $D$ is dense, and thus one gets $h_D(\omega)$ for all $\omega\in S^2$ because of the continuity of $h_D$. Therefore one obtains the convex hull of $D$ and thus $D$ itself by the convexity assumption.
This proof is remarkable since we make use of neither the {\it traditional contradiction argument} ``Suppose we have two different source domains $D_1$ and $D_2$ which yield the same Cauchy data, ...'' nor any {\it unique continuation argument} for the solution of the governing equation. One can see both arguments in \cite{N} in the case when $k=0$ for an inverse problem of detecting a source of {\it gravity anomaly}.
Some typical examples of $D$ covered by Corollary 1.2 are the tetrahedron, the regular hexahedron (cube) and the regular dodecahedron.
So now the central problem in applying Theorem 1.2 to Problem 1 for sources with various source domains under our framework is to clarify the condition (1.15) for general $Q$. In contrast to Proposition 1.2, when $Q$ is general, we do not know whether there exists a unit vector $\vartheta\in S(\omega)$ such that (1.15) is valid or not.
Going back to (1.3), we have an explicit vector equation for the constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ if $Q$ is given by the inside of a polygon. See Proposition 4 in \cite{IkC}. However, compared with the case when $Q$ is given by the inside of a triangle, it seems difficult to deduce the non-vanishing of $C_{(p,\omega)}(\delta,Q,\vartheta)$ for all $\vartheta\in S(\omega)$ from the equation directly. This is an open problem.
\subsection{Explicit formula and its implication}
In this paper, instead of considering general $Q$, we consider another special $Q$. It is the case when $Q$ is given by the section of the inside of a {\it circular cone} by a plane.
Given $p\in{\rm \bf R}^3$, $\mbox{\boldmath $n$}\in S^2$ and $\theta\in\,]0,\,\frac{\pi}{2}[$ let $V_p(-\mbox{\boldmath $n$},\theta)$ denote the inside of the {\it circular cone} with {\it apex} at $p$ and the opening angle $\theta$ around the direction $-\mbox{\boldmath $n$}$, that is
$$\displaystyle
V_p(-\mbox{\boldmath $n$},\theta)=\left\{x\in{\rm \bf R}^3\,\left\vert\right.
\,(x-p)\cdot(-\mbox{\boldmath $n$})>\vert x-p\vert\cos\theta\,\right\}.
$$
Given $\omega\in S^2$ set
$$\displaystyle
Q=\mbox{\boldmath $V$}_p(-\mbox{\boldmath $n$},\theta)
\cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\omega=p\cdot\omega-\delta\,\right\}.
\tag {1.19}
$$
To ensure that $Q$ is non empty and bounded, we impose the following restriction between $\omega$ and $\mbox{\boldmath $n$}$:
$$
\omega\cdot\mbox{\boldmath $n$}>\cos(\pi/2-\theta)=\sin\theta(>0).
\tag{1.20}
$$
This means that the angle between $\omega$ and $\mbox{\boldmath $n$}$ has to be less than $\frac{\pi}{2}-\theta$. Then it is known that $Q$ is an ellipse and we have
$$\displaystyle
D_{(p,\omega)}(\delta, Q)=\mbox{\boldmath $V$}_p(-\mbox{\boldmath $n$},\theta)
\cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\omega>p\cdot\omega-\delta\,\right\}.
\tag {1.21} $$ The problem here is to compute the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ with all $\vartheta\in S(\omega)$ for this domain $D_{(p,\omega)}(\delta,Q)$ with $Q$ given by (1.19). Instead of (1.3) we employ the formula (1.4) with $D=D_{(p,\omega)}(\delta,Q)$ with $n=3$: $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) = \lim_{\tau\longrightarrow\infty}\tau^3e^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\,e^{\tau x\cdot(\omega+i\vartheta)}dx. \tag {1.22} $$ Here we rewrite this formula. Choosing sufficiently small positive numbers $\delta'$ and $\delta''$ with $\delta''<\delta'$, we see that the set $$\displaystyle D_{(p,\omega)}(\delta, Q)\cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\mbox{\boldmath $n$}<p\cdot\mbox{\boldmath $n$}-\delta'\,\right\} $$ is containted in the half-space $x\cdot\omega<p\cdot\omega-\delta''$. This yields $$ \displaystyle e^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\,e^{\tau x\cdot(\omega+i\vartheta)}dx =e^{-\tau p\cdot(\omega+i\vartheta)} \int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx+O(e^{-\tau\delta''}), $$ where $$\displaystyle V=\mbox{\boldmath $V$}_p(-\mbox{\boldmath $n$},\theta) \cap\left\{x\in{\rm \bf R}^3\,\left\vert\right.\,x\cdot\mbox{\boldmath $n$}>p\cdot\mbox{\boldmath $n$}-\delta'\,\right\}. $$ Thus from (1.22) we obtain a more convenient expression $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) = \lim_{\tau\longrightarrow\infty}\tau^3e^{-\tau p\cdot(\omega+i\vartheta)} \int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx. \tag {1.23} $$ Using this expression we have the following explicit formula of $C_{(p,\omega)}(\delta,Q,\vartheta)$ for $D_{(p,\omega)}(\delta,Q)$ given by (1.21). \proclaim{\noindent Proposition 1.3.} We have $$\displaystyle C_{(p,\omega)}(\delta, Q,\vartheta) =6\,V(\theta)\, (\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)\,)^{-3}, \tag {1.24} $$ where $$\displaystyle V(\theta)=\frac{\pi}{3}\cos\,\theta\sin^2\,\theta. $$ \em \vskip2mm Note that the value $V(\theta)$ coincides with the volume of the circular cone with the height $\cos\theta$ and the opening angle $\theta$. This function of $\theta\in\,]0,\,\frac{\pi}{2}\,[$ is monotone increasing in $]0,\,\tan^{-1}\sqrt{2}[$ and decreasing in $]\tan^{-1}\sqrt{2},\,\frac{\pi}{2}[$; takes the maximum value $\frac{2\pi}{9\sqrt{3}}$ at $\theta=\tan^{-1}\sqrt{2}$. Now we describe an application to Problem 1. First we introduce a singularity of a circular cone type for the source domain. {\bf\noindent Definition 1.3.} Let $D$ be a non empty bounded open set of ${\rm \bf R}^3$. Let $p\in\partial D$. We say that $D$ has a {\it circular cone singularity} at $p$ if there exist a positive number $\epsilon$, unit vector $\mbox{\boldmath $n$}$ and number $\theta\in\,]0,\,\frac{\pi}{2}[$ such that $$\displaystyle D\cap B_{\epsilon}(p)=V_{p}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p). $$ It is easy to see that notion of the circular cone singularity is a special case of that of the conical one in the following sense. \proclaim{\noindent Lemma 1.1.} Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$. Then, $D$ has a conical singularity from direction $\omega$ at $p(\omega)$. 
More precisely, for a sufficiently small $\delta$ we have the expression
$$\displaystyle
D\cap\left\{x\in{\rm \bf R}^3\,\vert\, h_D(\omega)-\delta<x\cdot\omega<h_D(\omega)\,\right\}
=D_{(p(\omega),\omega)}(\delta, Q),
$$
where $Q$ is given by (1.19) with $V_{p}(-\mbox{\boldmath $n$},\theta)$ at $p=p(\omega)$ in Definition 1.3 satisfying (1.20).
\em \vskip2mm
As a direct corollary of Theorems 1.1-1.2, Proposition 1.3 and Lemma 1.1 we immediately obtain all the results in Theorem 1.2 without the condition (1.15). We summarize one of the results as Corollary 1.3 as follows.
\proclaim{\noindent Corollary 1.3 (Detecting the point $p(\omega)$).}
Let $u\in H^1(\Omega)$ be an arbitrary solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7). Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity at $p=p(\omega)$; the source $F$ is active at $p(\omega)$.
Choose two linearly independent vectors $\vartheta=\vartheta_1$ and $\vartheta_2$ in $S(\omega)$. Then, the point $p(\omega)$ itself and thus $h_D(\omega)=p(\omega)\cdot\omega$ can be extracted from the Cauchy data of $u$ on $\partial\Omega$ by using the formula
$$\displaystyle
p(\omega)\cdot\omega+i\,p(\omega)\cdot\vartheta_j
=\lim_{\tau\rightarrow\infty}
\frac{I_{\omega,\vartheta_j}'(\tau)}{I_{\omega,\vartheta_j}(\tau)},\,\,\,j=1,2.
\tag {1.25}
$$
\em \vskip2mm
By virtue of the formula (1.24), the function $I(\omega,\,\cdot\,)$ has the expression
$$\displaystyle
I(\omega,\vartheta)=6\,\tilde{\rho}(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^{-3}.
\tag {1.26}
$$
Formula (1.26) yields the following results.
\proclaim{\noindent Corollary 1.4.}
Let $u\in H^1(\Omega)$ be a solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7). Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ with an $\epsilon>0$.
\noindent
(i) Assume that $F$ is active at $p(\omega)$. The vector $\omega$ coincides with $\mbox{\boldmath $n$}$ if and only if the function $I(\omega,\,\cdot\,)$ is a constant function.
\noindent
(ii) The vector $\mbox{\boldmath $n$}$ and the angle $\theta$ of $V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)$ and the source strength $\tilde{\rho}(p(\omega))$ satisfy the following two equations:
$$\displaystyle
6\,\vert\tilde{\rho}(p(\omega))\vert\,V(\theta)=(\mbox{\boldmath $n$}\cdot\omega)^3
\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert;
\tag {1.27}
$$
$$\displaystyle
6\,\tilde{\rho}(p(\omega))
\,V(\theta)\,(3(\mbox{\boldmath $n$}\cdot\omega)^2-1)
=\frac{1}{\pi}\,\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta).
\tag {1.28}
$$
\em \vskip2mm
Using the equations (1.26), (1.27) and (1.28) one gets the following corollary.
\proclaim{\noindent Corollary 1.5.}
Let $u\in H^1(\Omega)$ be a solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7). Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ with an $\epsilon>0$. Assume that $F$ is active at $p(\omega)$ and that $\omega\approx\mbox{\boldmath $n$}$ in the sense that
$$\displaystyle
\mbox{\boldmath $n$}\cdot\omega>\frac{1}{\sqrt{3}}.
\tag {1.29}
$$
Then, the value $\gamma=\mbox{\boldmath $n$}\cdot\omega$ is the unique solution of the following quintic equation in $]\,\frac{1}{\sqrt{3}},\,1]$:
$$\displaystyle
\gamma^3(3\gamma^2-1)=
\frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert}.
\tag {1.30}
$$
Besides, for an arbitrary $\vartheta\in S(\omega)$ the value $\mu=\mbox{\boldmath $n$}\cdot\vartheta$ is given by the formulae
$$
\displaystyle
\mu^2=\frac{\displaystyle\gamma^3-\mbox{Re}\,T(\omega,\vartheta)}
{3\gamma}
\tag {1.31}
$$
and
$$\displaystyle
\mu=\frac{\displaystyle\mbox{Im}\,T(\omega,\vartheta)}{3\gamma^2-\mu^2},
\tag {1.32}
$$
where
$$\displaystyle
T(\omega,\vartheta)
=\frac{\displaystyle \int_{S(\omega)}\,I(\omega,\vartheta)\,ds(\vartheta)}
{\pi(3\gamma^2-1)I(\omega,\vartheta)}.
\tag{1.33}
$$
\em \vskip2mm
The condition (1.29) is equivalent to the statement\footnote{We have
$$\displaystyle
\frac{3\pi}{10}+\frac{\pi}{100}>\tan^{-1}\sqrt{2}>\frac{3\pi}{10}.
$$
}: the angle between $\omega$ and $\mbox{\boldmath $n$}$ is less than $\tan^{-1}\sqrt{2}$. Thus it is not so strict a condition.
The denominator of (1.32) is not zero because of $3\gamma^2-\mu^2\ge 3\gamma^2-1$ and (1.29).
Under the same assumptions as Corollary 1.5, one can finally calculate the quantity
$$\displaystyle
\tilde{\rho}(p(\omega))\,V(\theta)
\tag {1.34}
$$
and $\mbox{\boldmath $n$}$ from the Cauchy data of $u$ on $\partial\Omega$. This is the final conclusion. The procedure is as follows.
\noindent
{\bf Step 1.} Calculate $p(\omega)$ via the formula (1.25).
\noindent
{\bf Step 2.} Calculate $I(\omega,\vartheta)$ via the formula (1.14) and the computed $p(\omega)$ in Step 1.
\noindent
{\bf Step 3.} If $I(\omega,\vartheta)$ looks like a constant function, decide $\omega\approx\mbox{\boldmath $n$}$ in the sense (1.29). If not, search for another $\omega$ around the original one by trial and error in such a way that $\omega\approx\mbox{\boldmath $n$}$ as above, and finally fix it.
\noindent
{\bf Step 4.} Find the value $\gamma=\mbox{\boldmath $n$}\cdot\omega$ by solving the quintic equation (1.30).
\noindent
{\bf Step 5.} Find the value (1.34) via the formula (1.28) with the computed $\mbox{\boldmath $n$}\cdot\omega$ in Step 4.
\noindent
{\bf Step 6.} Choose linearly independent vectors $\vartheta_1, \vartheta_2\in S(\omega)$ and calculate $T(\omega,\vartheta_j)$, $j=1,2$ via the formula (1.33) using the computed value $\gamma$ in Step 4.
\noindent
{\bf Step 7.} Find $\mu=\mu_j=\mbox{\boldmath $n$}\cdot\vartheta_j$ by solving (1.31) and (1.32) using the computed $T(\omega,\vartheta_j)$ in Step 6.
\noindent
{\bf Step 8.} Find $\mbox{\boldmath $n$}$ by solving $\mbox{\boldmath $n$}\cdot\omega=\gamma$, $\mbox{\boldmath $n$}\cdot\vartheta_j=\mu_j$, $j=1,2$.
Note that, in addition, if the opening angle $\theta$/the source strength $\tilde{\rho}(p(\omega))$ is known, then one obtains the value of $\tilde{\rho}(p(\omega))$/the volume $V(\theta)$ via the computed value (1.34) in Step 5.
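The algebraic part of the procedure (Steps 4--8) can be made completely explicit. The following sketch is given only for illustration and is not part of the analysis above; it assumes that the quantities $\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert$, $\int_{S(\omega)}I(\omega,\vartheta)\,ds(\vartheta)$ and $I(\omega,\vartheta_j)$, $j=1,2$, have already been computed from the Cauchy data via (1.14). It then solves the quintic (1.30), evaluates (1.33), recovers $\mu_j$ from (1.31)--(1.32) and finally $\mbox{\boldmath $n$}$ from the linear system in Step 8.
\begin{verbatim}
import numpy as np

def recover_axis(omega, theta1, theta2, I_int, I_max, I_theta1, I_theta2):
    """Steps 4-8: recover gamma = n.omega, mu_j = n.theta_j and the axis n.

    I_int    : integral of I(omega, .) over S(omega)
    I_max    : maximum of |I(omega, .)| over S(omega)
    I_thetaj : values I(omega, theta_j) for two independent theta_j in S(omega)
    """
    # Step 4: solve the quintic (1.30), 3 g^5 - g^3 = rhs, on ]1/sqrt(3), 1]
    rhs = abs(I_int) / (np.pi * I_max)
    roots = np.roots([3.0, 0.0, -1.0, 0.0, 0.0, -rhs])
    gamma = [r.real for r in roots
             if abs(r.imag) < 1e-10
             and 1.0 / np.sqrt(3.0) < r.real <= 1.0 + 1e-12][0]
    mus = []
    for I_t in (I_theta1, I_theta2):
        # Step 6: T(omega, theta_j) via (1.33)
        T = I_int / (np.pi * (3.0 * gamma**2 - 1.0) * I_t)
        # Step 7: mu_j from (1.31) and (1.32)
        mu_sq = (gamma**3 - T.real) / (3.0 * gamma)
        mus.append(T.imag / (3.0 * gamma**2 - mu_sq))
    # Step 8: solve n.omega = gamma, n.theta_j = mu_j for the unit vector n
    A = np.vstack([omega, theta1, theta2])
    n = np.linalg.solve(A, np.array([gamma, mus[0], mus[1]]))
    return gamma, mus, n / np.linalg.norm(n)
\end{verbatim}
Step 5, namely the evaluation of (1.34) from (1.28), and the preliminary Steps 1--3 depend on how the indicator function is discretized and are therefore not included in this fragment.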
This paper is organized as follows. In the next section we give a proof of Proposition 1.3. It is based on the integral representation (2.8) of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ and the residue calculus. Proofs of Corollaries 1.4 and 1.5 are given in Section 3. In Section 4, an inverse obstacle problem for a penetrable obstacle in three dimensions is considered. The corresponding results in this case are given there, and in Section 5 a possible direction for extending all the results in this paper is commented on. The Appendix is devoted to an example covered by the results in Section 4.
\section{Proof of Proposition 1.3}
In order to compute the right-hand side of (1.23), we choose two unit vectors $\mbox{\boldmath $l$}$ and $\mbox{\boldmath $m$}$ perpendicular to each other in such a way that $\mbox{\boldmath $n$}=\mbox{\boldmath $l$}\times\mbox{\boldmath $m$}$.
We see that the intersection of $\partial V_p(-\mbox{\boldmath $n$},\theta)$ with the plane $(x-p)\cdot\mbox{\boldmath $n$}=-(1/\tan\,\theta)$ coincides with the circle with radius $1$ centered at the point $p-(1/\tan\,\theta)\mbox{\boldmath $n$}$ on the plane. The vector pointing from $p$ to an arbitrary point on the circle has the expression
$$\displaystyle
\vartheta(w)=\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}
-\frac{1}{\tan\,\theta}\,\mbox{\boldmath $n$}
\tag {2.1}
$$
with a parameter $w\in\,[0,2\pi]$. Besides, from the geometrical meaning of $\vartheta(w)$, we have
$$
\displaystyle\max_{w\in[0,\,2\pi]}\,\vartheta(w)\cdot\omega<0.
\tag {2.2}
$$
\proclaim{\noindent Lemma 2.1.}
We have the expression
$$\displaystyle
(\omega+i\vartheta)\,C_{(p,\omega)}(\delta,Q,\vartheta)
=\frac{1}{\tan\,\theta}
\int_0^{2\pi}
\frac{\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}+\tan\,\theta\,\mbox{\boldmath $n$}}
{\{\vartheta (w)\cdot(\omega+i\vartheta)\}^2}dw.
\tag {2.3}
$$
\em \vskip2mm
{\it\noindent Proof.}
Let $\mbox{\boldmath $a$}$ be an arbitrary three-dimensional complex vector. We have
$$\displaystyle
\int_{V}\,\nabla\cdot(e^{\tau x\cdot(\omega+i\vartheta)}\mbox{\boldmath $a$})\,dx
=\tau(\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\,\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx.
$$
The divergence theorem yields
$$\displaystyle
(\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\,\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx
=\tau^{-1}\int_{\partial V}\,e^{\tau x\cdot(\omega+i\vartheta)}\mbox{\boldmath $a$}\cdot\mbox{\boldmath $\nu$}\,dS(x),
\tag {2.4}
$$
where $\mbox{\boldmath $\nu$}$ denotes the outer unit normal vector to $\partial V$.
Decompose $\partial V=V_1\cup V_2$ with $V_1\cap V_2=\emptyset$, where
$$\begin{array}{l}
\displaystyle
V_1=\{x\,\vert\,-(x-p)\cdot\mbox{\boldmath $n$}=\vert x-p\vert\cos\,\theta,\,
-\delta'<(x-p)\cdot\mbox{\boldmath $n$}<0\},\\
\\
\displaystyle
V_2=\{x\,\vert\,\vert x-(p-\delta'\,\mbox{\boldmath $n$})\vert\le\delta'\,\tan\,\theta,\,
(x-p)\cdot\mbox{\boldmath $n$}=-\delta'\}.
\end{array}
$$
To compute the surface integral over $V_1$, we make use of the change of variables as follows:
$$\begin{array}{ll}
\displaystyle
x & \displaystyle =(p-\delta'\,\mbox{\boldmath $n$})+r(\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$})+\left(\delta'-\frac{r}{\tan\,\theta}\right)\,\mbox{\boldmath $n$}
\\
\\
\displaystyle
& \displaystyle =p+r\vartheta(w),
\end{array}
\tag {2.5}
$$
where $(r,w)\in\,[0,\,\delta'\tan\,\theta]\times[0,\,2\pi[$ and $\vartheta(w)$ is given by (2.1). Then the surface element has the expression
$$\displaystyle
dS(x)=\frac{r}{\sin\,\theta}\,drdw
$$
and the outer unit normal $\mbox{\boldmath $\nu$}$ to $V_1$ takes the form
$$\displaystyle
\nu=\sin\,\theta\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}
\right).
$$
Now from (2.4) and the decomposition $\partial V=V_1\cup V_2$, we have
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
e^{-\tau p\cdot(\omega+i\vartheta)}
(\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\int_{V} v\,dx\\
\\
\displaystyle
=e^{-\tau p\cdot(\omega+i\vartheta)}\tau^{-1}
\int_{V_1} v\mbox{\boldmath $a$}\cdot\nu\,dS(x)
-e^{-\tau p\cdot(\omega+i\vartheta)}\tau^{-1}
\int_{V_2} v\mbox{\boldmath $a$}\cdot\mbox{\boldmath $n$}\,dS(x)\\
\\
\displaystyle
\equiv I+II,
\end{array}
\tag {2.6}
$$
where $v=e^{\tau x\cdot(\omega+i\vartheta)}$.
Since the set $V_2$ is contained in the half-space $x\cdot\omega\le p\cdot\omega-\delta''$, one gets
$$
\displaystyle
II=O(\tau^{-1}e^{-\tau\delta''}).
\tag {2.7}
$$
On $I$, using the change of variables given by (2.5), one has
$$\begin{array}{c}
\displaystyle
x\cdot\omega=p\cdot\omega+r\,\vartheta(w)\cdot\omega,\\
\\
\displaystyle
x\cdot\vartheta=p\cdot\vartheta+r\,\vartheta(w)\cdot\vartheta.
\end{array}
$$
And also noting (2.2), one gets
$$\begin{array}{ll}
\displaystyle
\tau\,I
& \displaystyle =\int_0^{2\pi}dw
\int_0^{\delta'\tan\,\theta} rdr e^{\tau r\vartheta(w)\cdot\omega+i\tau\,r\vartheta(w)\cdot\vartheta}
\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}\\
\\
\displaystyle
& \displaystyle
=\frac{1}{\tau^2}
\int_0^{2\pi}dw
\int_0^{\tau\delta'\tan\,\theta} sds e^{s\vartheta(w)\cdot\omega+i\,s\vartheta(w)\cdot\vartheta}
\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}\\
\\
\displaystyle
& \displaystyle
=\frac{1}{\tau^2}
\int_0^{2\pi}dw
\int_0^{\infty} sds e^{s\vartheta(w)\cdot\omega+is\vartheta(w)\cdot\vartheta}
\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}+O(\tau^{-4}).
\end{array}
$$
Here one can apply the following formula to this right-hand side:
$$\displaystyle
\int_0^{\infty}se^{as}e^{ibs}ds=\frac{1}{(a+ib)^2},\,\,a<0.
$$
Then one gets
$$\displaystyle
I=\frac{1}{\tau^3\tan\,\theta}
\int_0^{2\pi}
\frac{\cos\,w\,\mbox{\boldmath $l$}
+\sin\,w\,\mbox{\boldmath $m$}+\tan\,\theta\,\mbox{\boldmath $n$}}
{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}\,dw+O(\tau^{-5}).
$$
Now this together with (1.23), (2.6) and (2.7) yields the desired formula.
\noindent $\Box$
Now from (1.20) and (2.3) we have the integral representation of $C_{(p,\omega)}(\delta,Q,\vartheta)$:
$$\displaystyle
C_{(p,\omega)}(\delta,Q,\vartheta)
=\frac{1}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
\int_0^{2\pi}\frac{dw}{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}.
\tag {2.8}
$$
This formula shows that the constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ is independent of $p$ and $\delta$ when $Q$ is given by (1.19).
By computing the integral on the right-hand side of (2.8) we obtain the explicit value of $C_{(p,\omega)}(\delta, Q,\vartheta)$.
\proclaim{\noindent Lemma 2.2.}
We have: $C_{(p,\omega)}(\delta, Q,\vartheta)\not=0$ if and only if
$$\displaystyle
\frac{\sin\,\theta}
{1+\cos\,\theta}<\left\vert
\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}
\right\vert<\frac{1+\cos\,\theta}{\sin\,\theta}
\tag {2.9}
$$
and then
$$\displaystyle
C_{(p,\omega)}(\delta, Q,\vartheta)
=2\pi\cos\theta\,\sin^2\,\theta\,
(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)\,)^{-3}.
\tag {2.10} $$ \em \vskip2mm {\it\noindent Proof.} Set $$\displaystyle A=\mbox{\boldmath $l$}\cdot(\omega+i\vartheta), \,\,B=\mbox{\boldmath $m$}\cdot(\omega+i\vartheta),\,\, C=-\frac{1}{\tan\,\theta}\mbox{\boldmath $n$}\cdot(\omega+i\vartheta) $$ and $z=e^{iw}$. One can write $$\begin{array}{ll} \displaystyle \vartheta(w)\cdot(\omega+i\vartheta) & \displaystyle =A\cos\,w+B\sin\,w+C\\ \\ \displaystyle & \displaystyle =\frac{A}{2}(z+z^{-1})-i\frac{B}{2}(z-z^{-1})+C\\ \\ \displaystyle & \displaystyle =\frac{1}{2z} \{(A-iB)z^2+2Cz+(A+iB)\}. \end{array} $$ Here we claim $$ \displaystyle A-iB\equiv(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\not=0. \tag {2.11} $$ Assume contrary that $A-iB=0$. Since we have $$\displaystyle A-iB =\mbox{\boldmath $l$}\cdot\omega+\mbox{\boldmath $m$}\cdot\vartheta +i(\mbox{\boldmath $l$}\cdot\vartheta-\mbox{\boldmath $m$}\cdot\omega), $$ it must hold that $$\displaystyle \mbox{\boldmath $l$}\cdot\omega=-\mbox{\boldmath $m$}\cdot\vartheta,\,\, \mbox{\boldmath $m$}\cdot\omega=\mbox{\boldmath $l$}\cdot\vartheta. \tag {2.12} $$ Then we have $$\begin{array}{ll} \displaystyle (\mbox{\boldmath $n$}\cdot\vartheta)^2 & \displaystyle =\vert\vartheta\vert^2-(\mbox{\boldmath $l$}\cdot\vartheta)^2-(\mbox{\boldmath $m$}\cdot\vartheta)^2 \\ \\ \displaystyle & \displaystyle =\vert\omega\vert^2-(\mbox{\boldmath $l$}\cdot\omega)^2-(\mbox{\boldmath $m$}\cdot\omega)^2 \\ \\ \displaystyle & \displaystyle =(\mbox{\boldmath $n$}\cdot\omega)^2. \end{array} \tag {2.13} $$ On the other hand, we have $$\displaystyle 0=\omega\cdot\vartheta =(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta) +(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta). $$ Here by (2.12) one has $(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)=0$. Thus one obtains $$ \displaystyle 0=(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta). $$ Now a combination of this and (2.13) yields $\mbox{\boldmath $n$}\cdot\omega=0$. However, by (1.20) this is impossible. Therefore we obtain the expression $$\displaystyle \vartheta(w)\cdot(\omega+i\vartheta) =\frac{A-iB}{2z}f(z)\vert_{z=e^{iw}}, \tag {2.14} $$ where $$\displaystyle f(z)= \left(z+\frac{C}{A-iB}\right)^2 -\frac{C^2-(A^2+B^2)}{(A-iB)^2}. 
$$ Here we write $$\begin{array}{ll} \displaystyle C^2-(A^2+B^2) & \displaystyle =\frac{1}{\tan^2\,\theta} (\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^2 -\{(\mbox{\boldmath $l$}\cdot(\omega+i\vartheta))^2 +(\mbox{\boldmath $m$}\cdot(\omega+i\vartheta))^2\}\\ \\ \displaystyle & \displaystyle = \frac{1}{\tan^2\,\theta} \{(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2 +2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta\}\\ \\ \displaystyle & \displaystyle \,\,\, -\{(\mbox{\boldmath $l$}\cdot\omega)^2 +(\mbox{\boldmath $m$}\cdot\omega)^2 -(\mbox{\boldmath $l$}\cdot\vartheta)^2 -(\mbox{\boldmath $m$}\cdot\vartheta)^2 +2i(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +2i(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)\}\\ \\ \displaystyle & \displaystyle =\left(\frac{1}{\tan^2\,\theta}+1\right)(\mbox{\boldmath $n$}\cdot\omega)^2 - \left(\frac{1}{\tan^2\,\theta}+1\right)(\mbox{\boldmath $n$}\cdot\vartheta)^2 +\frac{1}{\tan^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\\ \\ \displaystyle & \displaystyle \,\,\, -2i\{ (\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)\}\\ \\ \displaystyle & \displaystyle =\frac{1}{\sin^2\,\theta}\{(\mbox{\boldmath $n$}\cdot\omega)^2 -(\mbox{\boldmath $n$}\cdot\vartheta)^2\} +\frac{1}{\sin^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta) \\ \\ \displaystyle & \displaystyle \,\,\, -2i\{ (\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta) +(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\} \\ \\ \displaystyle & \displaystyle =\frac{1}{\sin^2\,\theta}\{(\mbox{\boldmath $n$}\cdot\omega)^2 -(\mbox{\boldmath $n$}\cdot\vartheta)^2\} +\frac{1}{\sin^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta) \\ \\ \displaystyle & \displaystyle \,\,\, -2i\omega\cdot\vartheta. \end{array} $$ Since $\omega\cdot\vartheta=0$, we finally obtain $$\displaystyle C^2-(A^2+B^2) =\left(\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} {\sin\,\theta}\right)^2. $$ Now set $$\displaystyle z_{\pm}=\frac{(\cos\,\theta\pm 1)}{\sin\,\theta} \frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} {(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}. \tag {2.15} $$ Then one gets the factorization $$\displaystyle f(z)=(z-z_+)(z-z_{-}). $$ By (2.15) we have $\vert z_+\vert>\vert z_{-}\vert$. Besides, from (2.2), (2.11) and (2.14) we have $f(e^{iw})\not=0$ for all $w\in\,[0,\,2\pi]$. This ensures that the complex numbers $z_{+}$ and $z_{-}$ are not on the circle $\vert z\vert=1$. Thus from (2.14) one gets $$ \displaystyle \int_0^{2\pi} \frac{dw} {\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2} =\frac{4}{i(A-iB)^2} \int_{\vert z\vert=1}\frac{zdz}{(z-z_{+})^2(z-z_{-})^2}. \tag {2.16} $$ The residue calculus yields $$\displaystyle \int_{\vert z\vert=1}\frac{zdz}{(z-z_{+})^2(z-z_{-})^2} = \left\{ \begin{array}{ll} \displaystyle 0 & \mbox{if $\vert z_{-}\vert>1$,} \\ \\ \displaystyle 0 & \mbox{if $\vert z_{-}\vert<1$ and $\vert z_{+}\vert<1$,} \\ \\ \displaystyle 2\pi i\frac{z_{+}+z_{-}}{(z_{+}-z_{-})^3}\not=0 & \mbox{if $\vert z_{-}\vert<1<\vert z_{+}\vert$.} \end{array} \right. 
$$
And also (2.15) gives
$$\begin{array}{ll}
\displaystyle
2\pi i\frac{z_{+}+z_{-}}{(z_{+}-z_{-})^3}
& \displaystyle
=2\pi i\cdot2\frac{\cos\theta}{\sin\theta}
\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}
\cdot
\left(\frac{\sin\theta}{2}\right)^3
\left\{\frac{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
\right\}^3
\\
\\
\displaystyle
& \displaystyle
=\frac{\pi i}{2}\cos\,\theta\sin^2\,\theta
\left\{\frac{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2
\\
\\
\displaystyle
& \displaystyle
=\frac{\pi i}{2}\cos\,\theta\sin^2\,\theta
\left\{\frac{A-iB}
{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2.
\end{array}
$$
Thus (2.16) yields
$$
\displaystyle
\int_0^{2\pi}
\frac{dw}
{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}
=2\pi\cos\,\theta\sin^2\,\theta
\left\{\frac{1}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2
$$
provided $\vert z_{-}\vert<1<\vert z_{+}\vert$. From these together with (2.8) we obtain the desired conclusion.
\noindent $\Box$
Note that (2.10) is nothing but (1.24).
Since (2.9) looks like a condition depending on the choice of $\mbox{\boldmath $l$}$ and $\mbox{\boldmath $m$}$, we further rewrite the number
$$\displaystyle
K(\vartheta;\omega,\mbox{\boldmath $n$})
=\left\vert
\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}
{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}
\right\vert.
$$
We have
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
\vert(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\vert^2
\\
\\
\displaystyle
=(\mbox{\boldmath $l$}\cdot\omega+\mbox{\boldmath $m$}\cdot\vartheta)^2+
(\mbox{\boldmath $l$}\cdot\vartheta-\mbox{\boldmath $m$}\cdot\omega)^2
\\
\\
\displaystyle
=2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2(\mbox{\boldmath $l$}\cdot\omega\,\mbox{\boldmath $m$}\cdot\vartheta-\mbox{\boldmath $l$}\cdot\vartheta\,\mbox{\boldmath $m$}\cdot\omega).
\end{array}
$$
Here we see that
$$\displaystyle
\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)
=\mbox{\boldmath $l$}\cdot\omega\,\mbox{\boldmath $m$}\cdot\vartheta-\mbox{\boldmath $l$}\cdot\vartheta\,\mbox{\boldmath $m$}\cdot\omega.
$$
Thus one has
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
\vert(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\vert^2
\\
\\
\displaystyle
=2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta).
\end{array}
$$
Therefore we obtain
$$\displaystyle
K(\vartheta;\omega,\mbox{\boldmath $n$})=\frac{\displaystyle
\sqrt{(\mbox{\boldmath $n$}\cdot\omega)^2+(\mbox{\boldmath $n$}\cdot\vartheta)^2
}}
{\displaystyle
\sqrt{
2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)}
}.
$$
Besides, we have
$$\displaystyle
\frac{1-\cos\,\theta}{\sin\,\theta}=\tan\,\frac{\theta}{2}
$$
and
$$\displaystyle
\frac{1+\cos\,\theta}{\sin\,\theta}=\frac{1}{\tan\,\frac{\theta}{2}}.
$$
Thus (2.9) is equivalent to the condition
$$\displaystyle
\tan\,\frac{\theta}{2}<
K(\vartheta;\omega,\mbox{\boldmath $n$})
<\frac{1}{\tan\,\frac{\theta}{2}}.
\tag {2.17}
$$
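Since $K(\vartheta;\omega,\mbox{\boldmath $n$})$ is now written purely in terms of inner products of $\omega$, $\vartheta$ and $\mbox{\boldmath $n$}$, condition (2.17) can be checked numerically for any concrete configuration. The fragment below is merely such a sanity check for randomly chosen admissible directions and plays no role in the proof; the particular values of the opening angle and of $\omega$ are arbitrary choices subject to (1.20), and the assertion reflects the conclusion, established below, that (2.17) holds for every $\vartheta\in S(\omega)$.
\begin{verbatim}
import numpy as np

def K(theta_vec, omega, n):
    """K(theta; omega, n) expressed via inner products, as derived above."""
    num = np.sqrt(np.dot(n, omega)**2 + np.dot(n, theta_vec)**2)
    den = np.sqrt(2.0 - np.dot(n, omega)**2 - np.dot(n, theta_vec)**2
                  + 2.0 * np.dot(n, np.cross(omega, theta_vec)))
    return num / den

rng = np.random.default_rng(0)
theta_cone = 0.3                          # opening angle of the cone
n = np.array([0.0, 0.0, 1.0])
phi = 0.5 * (np.pi / 2.0 - theta_cone)    # angle between omega and n, (1.20) holds
omega = np.array([np.sin(phi), 0.0, np.cos(phi)])
for _ in range(5):
    v = rng.normal(size=3)
    v -= np.dot(v, omega) * omega         # project onto the plane omega . x = 0
    v /= np.linalg.norm(v)                # now v lies on S(omega)
    k_val = K(v, omega, n)
    assert np.tan(theta_cone / 2.0) < k_val < 1.0 / np.tan(theta_cone / 2.0)
\end{verbatim}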
Here we consider the case $\mbox{\boldmath $\omega$}\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$.
Choose
$$\displaystyle
\vartheta=\frac{\mbox{\boldmath $\omega$}\times\mbox{\boldmath $n$}}
{\vert\mbox{\boldmath $\omega$}\times\mbox{\boldmath $n$}\vert}.
$$
We have $\vartheta\cdot\omega=\vartheta\cdot\mbox{\boldmath $n$}=0$ and $\vartheta\in S^2$.
Since we have
$$\displaystyle
\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)=-\vert\omega\times\mbox{\boldmath $n$}\vert
$$
and
$$\displaystyle
1=(\mbox{\boldmath $n$}\cdot\omega)^2+\vert\omega\times\mbox{\boldmath $n$}\vert^2,
$$
one gets
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2
+2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)
\\
\\
\displaystyle
=1+\vert\omega\times\mbox{\boldmath $n$}\vert^2-2\vert\omega\times\mbox{\boldmath $n$}\vert\\
\\
\displaystyle
=(1-\vert\omega\times\mbox{\boldmath $n$}\vert)^2.
\end{array}
$$
Therefore, we obtain
$$\displaystyle
K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$})
=\frac{\omega\cdot\mbox{\boldmath $n$}}{1-\vert\omega\times\mbox{\boldmath $n$}\vert}.
$$
Note that we are considering $\omega$ satisfying (1.20). Let $\varphi$ denote the angle between $\omega$ and $\mbox{\boldmath $n$}$.
Under the condition $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$, we see that (1.20) is equivalent to the condition
$$\displaystyle
0<\varphi<\frac{\pi}{2}-\theta.
\tag {2.18}
$$
Then one can write
$$\begin{array}{ll}
\displaystyle
K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$})
& \displaystyle
=\frac{\cos\,\varphi}{1-\sin\,\varphi}
\\
\\
\displaystyle
& \displaystyle
=\frac{1+\sin\,\varphi}{\cos\,\varphi}
\\
\\
\displaystyle
& \displaystyle
=\frac{\displaystyle 1+\cos\,(\frac{\pi}{2}-\varphi)}{\displaystyle \sin\,(\frac{\pi}{2}-\varphi)}
\\
\\
\displaystyle
& \displaystyle
=\frac{1}{\displaystyle\tan\,\frac{1}{2}\,(\frac{\pi}{2}-\varphi)}.
\end{array}
$$
Thus (2.18) gives
$$\displaystyle
1<K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$})<\frac{1}{\displaystyle \tan\,\frac{\theta}{2}}.
\tag {2.19}
$$
Since we have $\tan\,\frac{\theta}{2}<1$ for all $\theta\in\,]0,\,\frac{\pi}{2}[$, (2.19) yields the validity of (2.17).
Next consider the case $\omega\times\mbox{\boldmath $n$}=\mbox{\boldmath $0$}$. By (1.20) we have $\omega=\mbox{\boldmath $n$}$. Then, every $\vartheta$ perpendicular to $\mbox{\boldmath $n$}$ satisfies
$$\displaystyle
K(\vartheta;\mbox{\boldmath $n$}, \mbox{\boldmath $n$})
=1.
$$
This yields that (2.17) is valid for all $\theta\in\,]0,\,\frac{\pi}{2}[$.
The results above are summarized as follows. Given $\omega\in S^2$ with (1.20) define the subset of $S^2$
$$\displaystyle
{\cal K}(\omega;\mbox{\boldmath $n$},\theta)
=
\left\{
\vartheta\in S^2\,\left\vert\right.\,\vartheta\cdot\omega=0,
\,\,\mbox{$K(\vartheta;\omega,\mbox{\boldmath $n$})$ satisfies (2.17)\,}\,\right\}.
$$
Then, we have
$\bullet$ If $\omega\not=\mbox{\boldmath $n$}$, then $\omega\times\mbox{\boldmath $n$}\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$.
$\bullet$ If $\omega=\mbox{\boldmath $n$}$, then ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)=
\{\vartheta\in S^2\,\vert\,\vartheta\cdot\omega=0\}\equiv S(\omega)$.
Thus, in any case the set ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is non-empty and clearly open with respect to the topology of the set $S(\omega)$, which is the relative topology from $S^2$.
Besides, we can say more about ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$. We claim that the set ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is closed.
For this, it suffices to show that if a sequence $\{\vartheta_n\}$ of ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ converges to a point $\vartheta\in S(\omega)$, then $\vartheta\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$. This is proved as follows.
By assumption, each $\vartheta_n$ satisfies
$$\displaystyle
\tan\,\frac{\theta}{2}<
K(\vartheta_n;\omega,\mbox{\boldmath $n$})
<\frac{1}{\tan\,\frac{\theta}{2}}.
$$
Taking the limit, we have
$$\displaystyle
\tan\,\frac{\theta}{2}\le
K(\vartheta;\omega,\mbox{\boldmath $n$})
\le\frac{1}{\tan\,\frac{\theta}{2}}.
$$
By (2.15) this is equivalent to $\vert z_{+}\vert\ge 1$ and $\vert z_{-}\vert\le 1$. However, from the proof of Lemma 2.2 we know that $\vert z_{+}\vert\not=1$ and $\vert z_{-}\vert\not=1$. Thus we have $\vert z_{+}\vert>1$ and $\vert z_{-}\vert<1$. This is equivalent to $\vartheta\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$.
Since $S(\omega)$ is connected and ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is non-empty, open and closed, we conclude that ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)=S(\omega)$.
This completes the proof of Proposition 1.3.
\section{Proof of Corollaries 1.4 and 1.5}
Note that $\omega$ satisfies (1.20).
\subsection{On Corollary 1.4}
From (1.26) we have, if $\omega=\mbox{\boldmath $n$}$, then for all $\vartheta\in S(\omega)$
$$\displaystyle
I(\omega,\vartheta)=
6\tilde{\rho}(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot\omega)^{-3}.
$$
On the other hand, if $\omega\not=\mbox{\boldmath $n$}$, then we have $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$ (under the condition (1.20)) and
$$\displaystyle
S(\omega)\cap S(\mbox{\boldmath $n$})=\left\{\pm\frac{\omega\times\mbox{\boldmath $n$}}{\vert \omega\times\mbox{\boldmath $n$}\vert}\right\}.
$$
Thus one gets
$$\displaystyle
I(\omega,\vartheta)
=
\begin{array}{ll}
\displaystyle
6\tilde{\rho}(p(\omega))\,V(\theta)
\left(\mbox{\boldmath $n$}\cdot\omega\mp\,i\frac{\vert\omega\times\mbox{\boldmath $n$}\vert^2}{\vert\omega\times(\omega\times\mbox{\boldmath $n$})\vert}\right)^{-3}
& \mbox{for $\displaystyle\vartheta=\pm\frac{\omega\times(\omega\times\mbox{\boldmath $n$})}{\vert\omega\times(\omega\times\mbox{\boldmath $n$})\vert}$.}
\end{array}
$$
Thus one gets the assertion (i) and (1.27) in (ii). For (1.28) it suffices to prove the following fact.
\proclaim{\noindent Lemma 3.1.}
Let the unit vectors $\omega$ and $\mbox{\boldmath $n$}$ satisfy $\omega\cdot\mbox{\boldmath $n$}\not=0$. We have
$$\displaystyle
\int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3}
=\pi(3(\mbox{\boldmath $n$}\cdot\omega)^2-1).
\tag {3.1}
$$
\em \vskip2mm
{\it\noindent Proof.}
Since the right-hand side of (3.1) is invariant with respect to the change $\omega\rightarrow-\omega$, it is easy to see that the case $\omega\cdot\mbox{\boldmath $n$}<0$ can be derived from the result in the case $\omega\cdot\mbox{\boldmath $n$}>0$. Thus, hereafter we show the validity of (3.1) only for this case.
If $\mbox{\boldmath $n$}\cdot\omega=1$, then $\omega=\mbox{\boldmath $n$}$. Thus $S(\omega)=S(\mbox{\boldmath $n$})$. Then for all $\vartheta\in S(\omega)$ we have $\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)=1$. This yields
$$\displaystyle
\int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3}
=2\pi.
$$
Thus the problem is the case when $\mbox{\boldmath $n$}\cdot\omega\not=1$.
Choose an orthogonal $3\times 3$-matrix $A$ such that $A^T\omega=\mbox{\boldmath $e$}_3$. Introduce the change of variables $\vartheta=A\vartheta'$.
We have $\vartheta\in S(\omega)$ if and only if $\vartheta'\in S(\mbox{\boldmath $e$}_3)$ and
$$\begin{array}{ll}
\displaystyle
\mbox{\boldmath $n$}\cdot(\omega+iA\vartheta')
& \displaystyle
=\mbox{\boldmath $n$}'\cdot(\mbox{\boldmath $e$}_3+i\vartheta'),
\end{array}
$$
where $\mbox{\boldmath $n$}'=A^T\mbox{\boldmath $n$}\in S^{2}$.
Here we introduce the polar coordinates for $\vartheta'\in S(\mbox{\boldmath $e$}_3)$:
$$\begin{array}{ll}
\displaystyle
\vartheta'=(\cos\,\varphi,\sin\,\varphi, 0)^T, & \varphi\in\,[0,\,2\pi[.
\end{array}
$$
Then, we have
$$\begin{array}{ll}
\displaystyle
I\equiv\int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3}
& \displaystyle
=\int_0^{2\pi}\frac{d\varphi}
{(\mbox{\boldmath $n$}'\cdot(i\cos\,\varphi,i\sin\,\varphi,1)^T)^3}
\\
\\
\displaystyle
& \displaystyle
=-\frac{1}{i}\int_0^{2\pi}\frac{d\varphi}
{(\mbox{\boldmath $n$}'\cdot(\cos\,\varphi,\sin\,\varphi,-i)^T)^3}
\\
\\
\displaystyle
& \displaystyle
=i\int_0^{2\pi}\frac{d\varphi}
{(a\cos\,\varphi+b\sin\,\varphi-ic)^3},
\end{array}
\tag {3.2}
$$
where $\mbox{\boldmath $n$}'=(a,b,c)^T$. The numbers $a, b, c$ satisfy $a^2+b^2+c^2=1$ and $0<c<1$ since we have $c=\mbox{\boldmath $n$}'\cdot\mbox{\boldmath $e$}_3=\mbox{\boldmath $n$}\cdot\omega$. Thus $a^2+b^2\not=0$.
To compute the integral on the right-hand side of (3.2) we make use of the residue calculus. The change of variables $z=e^{i\varphi}$ gives
$$\begin{array}{l}
\displaystyle
\,\,\,\,\,\,
a\cos\,\varphi+b\sin\,\varphi-ic
\\
\\
\displaystyle
=\frac{1}{2}
\left\{a\left(z+\frac{1}{z}\right)+\frac{b}{i}\left(z-\frac{1}{z}\right)-2ic\right\}
\\
\\
\displaystyle
=\frac{1}{2z}
\left\{(a-ib)z^2-2icz+(a+ib)\right\}
\\
\\
\displaystyle
=\frac{a-ib}{2z}
\left\{\left(z-\frac{ic}{a-ib}\right)^2-\left(\frac{i}{a-ib}\right)^2\right\}
\\
\\
\displaystyle
=\frac{a-ib}{2z}(z-\alpha)(z-\beta),
\end{array}
\tag {3.3}
$$
where
$$\begin{array}{ll}\displaystyle
\alpha=\frac{i(c+1)}{a-ib}, & \displaystyle \beta=\frac{i(c-1)}{a-ib}.
\end{array}
$$
Since $1-c<1+c$ and $a\cos\,\varphi+b\sin\,\varphi-ic\not=0$ for $z=e^{i\varphi}$, we have $\vert\beta\vert<1<\vert\alpha\vert$.
Substituting (3.3) into (3.2) and using $d\varphi=\frac{dz}{iz}$, we have
$$\begin{array}{ll}
\displaystyle
I
& \displaystyle
=i\int_{\vert z\vert=1}\frac{2^3}{(a-ib)^3}\cdot\frac{z^3}{(z-\alpha)^3(z-\beta)^3}\cdot\frac{dz}{iz}
\\
\\
\displaystyle
& \displaystyle
=\left(\frac{2}{a-ib}\right)^3\int_{\vert z\vert=1}\,\frac{z^2 dz}{(z-\alpha)^3(z-\beta)^3}.
\end{array}
\tag {3.4}
$$
The residue calculus yields
$$\begin{array}{ll}
\displaystyle
\int_{\vert z\vert=1}\,\frac{z^2 dz}{(z-\alpha)^3(z-\beta)^3}
& \displaystyle
=2\pi i\,\mbox{Res}_{z=\beta}\,\left(\frac{z^2}{(z-\alpha)^3(z-\beta)^3}\right)
\\
\\
\displaystyle
& \displaystyle
=2\pi i\cdot\frac{1}{2}\frac{d^2}{dz^2}\left(\frac{z^2}{(z-\alpha)^3}\right)\vert_{z=\beta}
\\
\\
\displaystyle
& \displaystyle
=2\pi i\cdot\frac{\alpha^2+4\alpha\beta+\beta^2}{(\beta-\alpha)^5}.
\end{array}
\tag {3.5}
$$
Here we have the expression
$$\displaystyle
\alpha-\beta=\frac{2i}{a-ib}
$$
and
$$\begin{array}{ll}
\displaystyle
\alpha^2+4\alpha\beta+\beta^2
& \displaystyle
=-\frac{(c+1)^2+4(c^2-1)+(c-1)^2}{(a-ib)^2}
\\
\\
\displaystyle
& \displaystyle
=-\frac{2(3c^2-1)}{(a-ib)^2}.
\end{array}
$$
Thus from (3.4) and (3.5) we obtain
$$\begin{array}{ll}
\displaystyle
I
& \displaystyle
=-2\pi\left(\frac{a-ib}{2}\right)^2(\alpha^2+4\alpha\beta+\beta^2)
\\
\\
\displaystyle
& \displaystyle
=\pi(3c^2-1).
\end{array}
$$
This completes the proof of (3.1).
\noindent $\Box$
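Identity (3.1) can also be verified by direct numerical quadrature over the circle $S(\omega)$, which gives an independent consistency check of the residue computation above. The fragment below is only such a check and is not needed for the proof; the sample vectors are arbitrary, and the uniform-grid quadrature is very accurate here because the integrand is smooth and periodic.
\begin{verbatim}
import numpy as np

def lhs_of_31(omega, n, num=20000):
    """Numerically integrate ds(theta)/(n.(omega + i theta))^3 over S(omega)."""
    omega = omega / np.linalg.norm(omega)
    a = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(a, omega)) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    e1 = a - np.dot(a, omega) * omega     # orthonormal basis {e1, e2} of omega-perp
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(omega, e1)
    phi = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    thetas = np.outer(np.cos(phi), e1) + np.outer(np.sin(phi), e2)
    vals = (np.dot(n, omega) + 1j * thetas @ n) ** (-3)
    return vals.mean() * 2.0 * np.pi      # uniform quadrature on the circle

omega = np.array([0.3, -0.4, 0.87]); omega /= np.linalg.norm(omega)
n = np.array([0.1, 0.2, 0.97]);      n /= np.linalg.norm(n)
rhs = np.pi * (3.0 * np.dot(n, omega)**2 - 1.0)
assert abs(lhs_of_31(omega, n) - rhs) < 1e-8
\end{verbatim}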
\subsection{On Corollary 1.5}
Let us explain the uniqueness of the solution of the quintic equation (1.30) in $]\frac{1}{\sqrt{3}},\,1]$.
From (1.27), (1.28) and (1.29) we have
$$\displaystyle
\frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert}
=(\mbox{\boldmath $n$}\cdot\omega)^3(3(\mbox{\boldmath $n$}\cdot\omega)^2-1)
$$
and thus
$$\displaystyle
0<
\frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert}
\le 2.
$$
Since $]\,\frac{1}{\sqrt{3}},\,\,1]\ni\gamma\longmapsto\gamma^3(3\gamma^2-1)\in\,]0,\,2]$ is bijective, the solution of the quintic equation (1.30) in $]\frac{1}{\sqrt{3}},\,1]$ is unique and it is just $\gamma=\mbox{\boldmath $n$}\cdot\omega$.
The formulae (1.31) and (1.32) are derived as follows. A combination of (1.26) and (1.28) yields
$$\displaystyle
(\mbox{\boldmath $n$}\cdot\omega+i\mbox{\boldmath $n$}\cdot\vartheta)^3
=T(\omega,\vartheta).
$$
By expanding the left-hand side, we immediately obtain the desired formulae.
\section{Application to an inverse obstacle problem}
As pointed out in \cite{Ik} the enclosure method developed here can be applied also to an inverse obstacle problem in three dimensions governed by the equation
$$\begin{array}{ll}
\displaystyle
\Delta u+k^2 n(x)u=0, & x\in\Omega,
\end{array}
\tag {4.1}
$$
where $k$ is a fixed positive number. We assume that $\partial\Omega\in C^{\infty}$, for simplicity. Both $u$ and $n$ can be complex-valued functions.
In this section we assume that $n(x)$ takes the form $n(x)=1+F(x)$, $x\in\Omega$, where $F=F_{\rho,D}(x)$ is given by (1.7). We assume that $\rho\in L^{\infty}(D)$ instead of $\rho\in L^2(D)$ and that $u\in H^2(\Omega)$ is an arbitrary non-trivial solution of (4.1) at this stage. We never specify the boundary condition of $u$ on $\partial\Omega$. By the Sobolev imbedding theorem \cite{G} one may assume that $u\in C^{0,\alpha}(\overline\Omega)$ with $0<\alpha<1$.
In this section we consider
{\bf\noindent Problem 2.}
Extract information about the singularity of $D$ from the Cauchy data of $u$ on $\partial\Omega$.
We encounter this type of problem when, for example, $u$ is given by the restriction to $\Omega$ of the total wave defined in the whole space and generated by a point source located outside of $\Omega$ or by a single plane wave coming from infinity. The surface where the measurements are taken is given by $\partial\Omega$, which encloses the penetrable obstacle $D$ with a different refractive index $1+\rho$, $\rho\not\equiv 0$. See \cite{CK} for detailed information about the direct problem itself. In any case, we start with the Cauchy data of an arbitrary (non-trivial) $H^2(\Omega)$ solution of (4.1).
Using the Cauchy data of $u$ on $\partial\Omega$, we introduce the indicator function
$$\displaystyle
I_{\omega,\vartheta}(\tau)=\int_{\partial\Omega}
\left(\frac{\partial u}{\partial\nu}v-\frac{\partial v}{\partial\nu} u\right)\,dS,
\tag {4.2}
$$
where the function $v=v(x)$, $x\in{\rm \bf R}^3$, is given by
$$\displaystyle
v=e^{x\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\vartheta)},\,\,\tau>0
$$
and $\vartheta\in S(\omega)$.
And also its derivative with respect to $\tau$ is given by the formula
$$\displaystyle
I_{\omega,\vartheta}'(\tau)
=\int_{\partial\Omega}\left(\frac{\partial u}{\partial\nu}\,v_{\tau}-\frac{\partial\,v_{\tau}}{\partial\nu} u\right)\,dS,
\tag {4.3}
$$
where
$$\displaystyle
v_{\tau}=\partial_{\tau}v=\left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v.
$$
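In a numerical implementation, (4.2) and (4.3) are just weighted sums over a surface mesh of $\partial\Omega$ carrying the Cauchy data. The following sketch shows one possible discretization; it is purely illustrative and not part of the analysis, and the arrays of quadrature nodes, outward normals, weights and sampled Cauchy data are assumed to be supplied by some hypothetical mesh generator.
\begin{verbatim}
import numpy as np

def indicator_pair(points, normals, weights, u, du_dnu, omega, theta, tau, k):
    """Evaluate (4.2) and (4.3) by surface quadrature on a mesh of the boundary.

    points, normals : (N, 3) quadrature nodes and outward unit normals
    weights         : (N,)   quadrature weights (surface area elements)
    u, du_dnu       : (N,)   Cauchy data of u on the boundary
    """
    zeta = tau * omega + 1j * np.sqrt(tau**2 + k**2) * theta   # exponent vector
    v = np.exp(points @ zeta)
    dv_dnu = (normals @ zeta) * v
    # derivative of the exponent with respect to tau
    g_vec = omega + 1j * tau / np.sqrt(tau**2 + k**2) * theta
    g = points @ g_vec
    v_tau = g * v
    dvtau_dnu = (normals @ g_vec) * v + g * dv_dnu
    I = np.sum(weights * (du_dnu * v - dv_dnu * u))
    I_prime = np.sum(weights * (du_dnu * v_tau - dvtau_dnu * u))
    return I, I_prime
\end{verbatim}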
As in the proof of Theorem 1.1, integration by parts yields
$$\displaystyle
I_{\omega,\vartheta}(\tau)=-k^2\int_D\rho(x)u(x)v\,dx
$$
and
$$\displaystyle
I_{\omega,\vartheta}'(\tau)=-k^2\int_D\rho(x)u(x)v_{\tau}\,dx.
$$
Thus this can be viewed as the case where $\rho(x)$ in Problem 1 is given by $-k^2\rho(x)u(x)$ and $\tilde{\rho}(x)$ in Definition 1.2 is given by $-k^2\tilde{\rho}(x)u(x)$. Thus we obtain
\proclaim{\noindent Theorem 4.1.}
Let $\omega$ be regular with respect to $D$ and assume that $D$ has a conical singularity from direction $\omega$.
Then, we have
$$\displaystyle
\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}(\tau)=
-k^2\tilde{\rho}(p(\omega))\,u(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
+O(\tau^{-\alpha})
$$
and
$$\displaystyle
\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}'(\tau)=
-k^2\tilde{\rho}(p(\omega))\,u(p(\omega))(h_D(\omega)+ip(\omega)\cdot\vartheta)\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
+O(\tau^{-\alpha}).
$$
The remainder $O(\tau^{-\alpha})$ is uniform with respect to $\vartheta\in S(\omega)$.
\em \vskip2mm
Thus under the same assumptions as Theorem 4.1, for each $\vartheta\in S(\omega)$ one can calculate
$$\displaystyle
I(\omega,\vartheta)\equiv
-k^2\tilde{\rho}(p(\omega))\,u(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta)
$$
via the formula
$$\displaystyle
I(\omega,\vartheta)
=\lim_{\tau\rightarrow\infty}\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}
I_{\omega,\vartheta}(\tau)
\tag {4.4}
$$
by using the Cauchy data of $u$ on $\partial\Omega$ if $p(\omega)$ is known.
And also we have
\proclaim{\noindent Theorem 4.2.}
Let $\omega$ be regular with respect to $D$. Assume that $D$ has a conical singularity from direction $\omega$; $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies
$$\displaystyle
u(p(\omega))\not=0.
\tag {4.5}
$$
If the direction $\vartheta\in S(\omega)$ satisfies the condition (1.15), then all the formulae (1.16), (1.17) and (1.18) for the indicator function defined by (4.2) together with its derivative (4.3) are valid.
\em \vskip2mm
Note that the assumption (4.5) ensures $u\not\equiv 0$. See the Appendix for an example of $u$ satisfying (4.5).
The following corollaries correspond to Corollaries 1.1 and 1.2.
\proclaim{\noindent Corollary 4.1.}
Let $\omega$ be regular with respect to $D$. Under the same assumptions as those in Theorem 4.2 the point $p(\omega)$ is uniquely determined by the Cauchy data of $u$ on $\partial\Omega$.
\em \vskip2mm
\proclaim{\noindent Corollary 4.2.}
Let $u\in H^2(\Omega)$ be a solution of (4.1). Assume that $D$ is given by the inside of a convex polyhedron, that in a neighbourhood of each vertex $p$ of $D$, $D$ coincides with the inside of a tetrahedron with apex $p$, that $n-1=F_{\rho, D}$ given by (1.7) is active at $p$, and that the value of $u$ at $p$ satisfies (4.5).
Then, all the formulae (1.16), (1.17) and (1.18) for the indicator function defined by (4.2) together with its derivative (4.3) are valid for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$. Besides, the Cauchy data of $u$ on $\partial\Omega$ uniquely determines $D$.
\em \vskip2mm
The following result is an extension of Theorem 4.1 in \cite{Ik} to the three-dimensional case.
\proclaim{\noindent Corollary 4.3.}
Let $u\in H^2(\Omega)$ be a solution of (4.1). Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity at $p=p(\omega)$; $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5).
Choose two linearly independent vectors $\vartheta=\vartheta_1$ and $\vartheta_2$ in $S(\omega)$. Then, the point $p(\omega)$ itself and thus $h_D(\omega)=p(\omega)\cdot\omega$ can be extracted from the Cauchy data of $u$ on $\partial\Omega$ by using the formula
$$\displaystyle
p(\omega)\cdot\omega+i\,p(\omega)\cdot\vartheta_j
=\lim_{\tau\rightarrow\infty}
\frac{I_{\omega,\vartheta_j}'(\tau)}{I_{\omega,\vartheta_j}(\tau)},\,\,\,j=1,2.
\tag {4.6}
$$
\em \vskip2mm
By virtue of the formula (1.24), the function $I(\omega,\,\cdot\,)$ has the expression
$$\displaystyle
I(\omega,\vartheta)=-6k^2\,\tilde{\rho}(p(\omega))u(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^{-3}.
\tag {4.7}
$$
Similarly to Corollary 1.4, formula (4.7) immediately yields the following results.
\proclaim{\noindent Corollary 4.4.}
Let $u\in H^2(\Omega)$ be a solution of (4.1). Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ with an $\epsilon>0$.
\noindent
(i) Assume that $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5). The vector $\omega$ coincides with $\mbox{\boldmath $n$}$ if and only if the function $I(\omega,\,\cdot\,)$ is a constant function.
\noindent
(ii) The vector $\mbox{\boldmath $n$}$ and the angle $\theta$ of $V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)$ and $\tilde{\rho}(p(\omega))\,u(p(\omega))$ satisfy the following two equations:
$$\displaystyle
6k^2\,\vert\tilde{\rho}(p(\omega))\,u(p(\omega))\vert\,V(\theta)=(\mbox{\boldmath $n$}\cdot\omega)^3
\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert;
\tag {4.8}
$$
$$\displaystyle
-6k^2\,\tilde{\rho}(p(\omega))u(p(\omega))
\,V(\theta)\,(3(\mbox{\boldmath $n$}\cdot\omega)^2-1)
=\frac{1}{\pi}\,\int_{S(\omega)}\,I(\omega,\vartheta)
\,ds(\vartheta).
\tag {4.9}
$$
\em \vskip2mm
Using the equations (4.7), (4.8) and (4.9) one gets the following corollary.
\proclaim{\noindent Corollary 4.5.}
Let $u\in H^2(\Omega)$ be a solution of (4.1). Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ with an $\epsilon>0$. Assume that $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5). Assume also that $\omega\approx\mbox{\boldmath $n$}$ in the sense that (1.29) holds.
Then, we have exactly the same statement and formulae as those of Corollary 1.5.
\em \vskip2mm
Note that under the same assumptions as Corollary 4.5, one can finally calculate the quantity
$$\displaystyle
\tilde{\rho}(p(\omega))\,u(p(\omega))\,V(\theta)
\tag {4.10}
$$
and $\mbox{\boldmath $n$}$ from the Cauchy data of $u$ on $\partial\Omega$. Since the steps of the calculation are similar to the steps presented in Subsection 1.2 for the inverse source problem, we omit their description. However, it should be noted that, in addition, if $\tilde{\rho}(p(\omega))$ is known to be a {\it real number}, then one can recover the phase of the complex number $u(p(\omega))$ modulo $2\pi n$, $n=0,\pm 1,\pm 2,\cdots$ from the computed value (4.10).
{\bf\noindent Remark 4.1.}
One can apply the result in \cite{IkC} to the computation of the value $u(p(\omega))$ itself. For simplicity we assume that $\Omega$ is convex, as in the case when $\Omega=B_R(x_0)$ is a ball centered at a point $x_0$ with a large radius $R$. From formula (4.6) we know the position of $p(\omega)$ and thus the domain $\Omega\cap\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega>x\cdot p(\omega)\}$. Because of the continuity of $u$ on $\overline\Omega$, one has, for a sufficiently small $\epsilon>0$,
$$\displaystyle
u(p(\omega))\approx u(p(\omega)+\epsilon\,\omega).
$$
Since the point $p(\omega)+\epsilon\,\omega\in\Omega\cap\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega>x\cdot p(\omega)\}$ and therein $u\in H^2$ satisfies the Helmholtz equation $\Delta u+k^2u=0$, one can calculate the value $u(p(\omega)+\epsilon\,\omega)$ itself from the Cauchy data of $u$ on $\partial\Omega\cap\{x\in{\rm \bf R}^3\,\vert\,x\cdot\omega>x\cdot p(\omega)\}$ by using Theorem 1 in \cite{IkC}.
\section{Final remark}
All the results in this paper can be extended also to the case when the governing equation of the background medium is given by a Helmholtz equation with a known coefficient $n_0(x)$. It means that if one considers, instead of (1.6) and (4.1), the equations
$$\begin{array}{ll}
\displaystyle
\Delta u+k^2n_0(x)u=F_{\rho,D}(x), & x\in\Omega
\end{array}
$$
and
$$\begin{array}{ll}
\displaystyle
\Delta u+k^2(n_0(x)+F_{\rho,D}(x))u=0, & x\in\Omega,
\end{array}
$$
respectively, then one could obtain all the corresponding results.
\section{Appendix. On condition (4.5)}
As suggested in \cite{Ik} the condition (4.5) can be satisfied if $k$ is sufficiently small under the situation when $u$ is given by the restriction onto $\Omega$ of the total field $U$ in the whole space scattering problem generated by, for example, a point source located at a point $z$ in ${\rm \bf R}^3\setminus\overline\Omega$. The field $U$ has the expression $U=\Phi(x,z)+w_z(x)$, where
$$\begin{array}{ll}
\displaystyle
\Phi(x,z)=\frac{1}{4\pi}\frac{e^{ik\vert x-z\vert}}{\vert x-z\vert}, & x\in{\rm \bf R}^3\setminus\{z\}
\end{array}
$$
and $w_z\in H^2_{\mbox{local}}({\rm \bf R}^3)$ is the unique solution of the inhomogeneous Helmholtz equation
$$\begin{array}{ll}
\displaystyle
\Delta w_z+k^2w_z+k^2F(x)(w_z+\Phi(x,z))=0, & x\in{\rm \bf R}^3
\end{array}
$$
with the outgoing Sommerfeld radiation condition
$$\displaystyle
\lim_{r\rightarrow\infty}r\left(\frac{\partial}{\partial r}w_z(x)-ik w_z(x)\right)=0,
$$
where $r=\vert x\vert$ and $F=F_{\rho,D}$ is given by (1.7). See \cite{CK} for the solvability.
Here we claim
\proclaim{\noindent Proposition A.}
Let $0<R_1<R_2$ satisfy $D\subset B_{R_2}(z)\setminus\overline B_{R_1}(z)$. Let $M>0$ and $R>0$ satisfy $\vert D\vert\le M$ and $\Vert\rho\Vert_{L^{\infty}(D)}\le R$, respectively.
If $k$ satisfies the system of inequalities
$$\displaystyle
C\equiv
\frac{3k^2R}{2}\left(\frac{M}{4\pi}\right)^{2/3}<1
\tag {A.1}
$$
and
$$\displaystyle
\frac{C}{1-C}<\frac{R_1}{R_2},
\tag {A.2}
$$
then, for all $x\in\overline D$ we have
$$\displaystyle
\vert U(x)\vert\ge
\frac{1}{4\pi}\left(\frac{1}{R_2}-\frac{C}{1-C}\frac{1}{R_1}\,\right).
\tag {A.3}
$$
\em \vskip2mm
{\it\noindent Proof.}
Note that $w_z\in C^{0,\alpha}(\overline\Omega)$ with $0<\alpha<1$ by the Sobolev imbedding theorem. It is well known that the function $w_z$ satisfies the Lippmann-Schwinger equation
$$\begin{array}{ll}
\displaystyle
w_z(x)
&
\displaystyle
=k^2\int_{D}\Phi(x,y)\rho(y)w_z(y)\,dy+k^2\int_{D}\Phi(x,y)\Phi(y,z)\rho(y)\,dy
\end{array}
$$
and thus, for all $x\in\overline{D}$ we have
$$\displaystyle
\vert w_z(x)\vert
\le
\frac{k^2 R}{4\pi}
\left(\Vert w_z\Vert_{L^{\infty}(D)}
+\frac{1}{4\pi\,R_1}\right)\,
\int_D\frac{dy}{\vert x-y\vert}.
\tag {A.4}
$$
Let $\epsilon>0$. We have
$$\begin{array}{ll}
\displaystyle
\int_D\frac{dy}{\vert x-y\vert}
&
\displaystyle
=\int_{D\cap B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}+\int_{D\setminus B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}
\\
\\
\displaystyle
&
\displaystyle
\le
\int_{B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}+\frac{\vert D\vert}{\epsilon}
\\
\\
\displaystyle
&
\displaystyle
\le 2\pi\epsilon^2+\frac{M}{\epsilon}.
\end{array}
$$
Choose $\epsilon$ in such a way that this right-hand side is minimized, that is,
$$\displaystyle
\epsilon=\left(\frac{M}{4\pi}\right)^{1/3}.
$$
Then one gets
$$\begin{array}{ll}
\displaystyle
\int_D\frac{dy}{\vert x-y\vert}
&
\displaystyle
\le 6\pi
\left(\frac{M}{4\pi}\right)^{2/3}.
\end{array}
$$
Thus this together with (A.4) yields
$$\displaystyle
\left(1-C\,\right)\Vert w_z\Vert_{L^{\infty}(D)}
\le
\frac{C}{4\pi\,R_1}.
$$
This together with the estimate
$$\displaystyle
\vert U(x)\vert\ge \frac{1}{4\pi\,R_2}-\Vert w_z\Vert_{L^{\infty}(D)}
$$
yields the desired estimate (A.3) under the assumptions (A.1) and (A.2).
\noindent
$\Box$
Note that since $R_2>R_1$, the set of inequalities (A.1) and (A.2) is equivalent to the single inequality
$$\displaystyle
C<\frac{R_1}{R_1+R_2}.
\tag {A.5}
$$
Indeed, since $0<C<1$ by (A.1), the inequality (A.2) is equivalent to $CR_2<(1-C)R_1$, that is, to $C(R_1+R_2)<R_1$; conversely, (A.5) implies (A.1) because $R_1/(R_1+R_2)<1$. Thus, choosing $k^2$ sufficiently small in the sense of (A.5), we have, for all $x\in\overline D$,
$$\displaystyle
\vert u(x)\vert
\ge\frac{1}{4\pi}\left(\frac{1}{R_2}-\frac{C}{1-C}\frac{1}{R_1}\,\right)>0.
$$
Thus the condition (4.5) for $u=U\vert_{\Omega}$ is satisfied. The choice of $k$ depends only on the a priori information about $D$ and $\rho$ described by $R_1$, $R_2$, $M$ and $R$.
$$\quad$$
\centerline{{\bf Acknowledgments}}
This research was partially supported by Grant-in-Aid for Scientific Research (C) (No. 17K05331) and (B) (No. 18H01126) of the Japan Society for the Promotion of Science.
$$\quad$$
\end{document}
\begin{document}
\title{Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods}
\begin{abstract}
Detecting latent structure within a dataset is a crucial step in analyzing it. However, existing state-of-the-art techniques for subclass discovery are limited: either they can only detect very small numbers of outliers, or they lack the statistical power to deal with complex data such as image or audio. This paper proposes a solution to this subclass discovery problem: by leveraging instance explanation methods, an existing classifier can be extended to detect latent classes via differences in the classifier's internal decisions about each instance. This works not only with simple classification techniques but also with deep neural networks, allowing for a powerful and flexible approach to detecting latent structure within datasets. Effectively, this represents a projection of the dataset into the classifier's ``explanation space,'' and preliminary results show that this technique outperforms the baseline for the detection of latent classes even with limited processing. This paper also contains a pipeline for analyzing classifiers automatically, and a web application for interactively exploring the results from this technique.
\end{abstract}
\section{Introduction}
\subsection{Motivation}\label{section:intro-motivation}
Training classifiers for machine learning tasks requires that the data be accurately and completely labeled for a specific application. However, in the real world there is often more structure to the data than is labeled---and this can have real-world consequences for how the model performs in production. Within each labeled class, there can be significant variations that the model picks up on but that are invisible to the user---this is \textit{latent structure} within the class. For an example of this latent structure, consider a hypothetical classifier that determines whether an image contains apples or oranges. Oranges tend to be uniformly orange, but apples can come in more than one color: red, green, yellow, etc. If the labels for the dataset are just \texttt{apple} and \texttt{orange}, then the information about the color of the apples is lost. The intuition here is simple: the classifier may know about the color difference between two types of apples, but still label both as apples because of the training data available to it. Therefore, by using explainability techniques, a human can detect the different unlabeled subclasses by analyzing \textit{how} the classifier determines the class of an instance. As a less trivial but more impactful example, clinical trial results interpreting the efficacy of a drug's treatment via a classifier may not fully capture all the subgroups in the input. Are the reasons Person A and Person B respond to a specific round of treatment similar? How about why Person C did not respond? This may be down to some specific structure in the input data which may not be fully captured by the training labels. With current methods, it is difficult to determine this latent structure---especially with high-dimensional or complex data such as image, audio, or video inputs.
Current state-of-the-art methods generally require that an entirely new representation be trained (as in the case of mixture models \cite{bell_automatic_2021}, which must relearn the data distribution), require that instances be categorized manually (as in subgroup analysis \cite{lanza_latent_2013}), or are only suited to discovering individual anomalous instances (as in commonality metrics \cite{paterson_detection_2019}). These methods are discussed more fully in Section~\ref{section:background-latent-structure}, but in general they are not one-size-fits-all, and require a separate (new) model to detect latent structure. Instead, class introspection re-uses an existing classifier, allowing the analysis to inherit the classifier's statistical power.
\subsection{Objectives}
This project aims to provide a solution for the latent structure problem by leveraging explainability techniques to detect latent (unlabeled) subclasses in the input data, using a novel approach dubbed \textbf{class introspection}. This technique compares a classifier's decision-making process for each point in a dataset, and within each predicted class performs clustering over the local model explanations. Crucially, this \textit{re-uses the existing classifier}, and does not require a separate model for detecting latent structure (in contrast to the current state-of-the-art). At a high level, the decision-making process of a classifier model will be different for different inputs---leading to clusters corresponding to fragmentation within input classes. This provides a level of auditability of the model training process, both by ensuring that models produce results for the correct reasons and by allowing for the detection of deficiencies in the model setup; by detecting latent structure, possible labeling errors can be detected during model training. Additionally, this technique is agnostic to both the architecture of the particular classifier model used and the particular local explanation method, allowing for use even in black-box environments (where no knowledge of the classifier's internals is required).
\section{Background}\label{chapter:background}
\subsection{Overview}
In this section, we will discuss the algorithms and methods that are fundamental to class introspection (namely, explainability and clustering methods). Additionally, this section discusses several existing algorithms for detecting structure in data and their limitations as compared to class introspection.
\subsection{Explainability}\label{section:intro-explainability}
A key issue facing machine learning is explain\-ability (XAI). In current state-of-the-art machine learning models, the model outputs are generated opaquely---that is, the reasons that the model chooses to assign one label rather than another are inscrutable from the outside. While this may be fine in trivial applications such as face detection, in safety-critical applications the reasoning behind a model's outputs can be a literal matter of life and death. Explainability allows for a model to be audited, and for faulty behaviors to be explained and calibrated away \cite{ghai_explainable_2020}. Even more crucially, explainability allows for a model's decisions to be trusted by other agents (like humans) \cite{dosilovic_explainable_2018}.
\subsubsection{What is an explanation?}
Explanations are, at their core, simply an indication of which input features are relevant for a specific output from a network, and of how strongly (positively or negatively) each of these features contributed to the output. Typically this is represented numerically, where each input feature is weighted by its importance to the overall classification with respect to the other features \cite{dosilovic_explainable_2018}. In tabular data, numerically weighting input features trivially shows which inputs are important, as each feature is clearly defined (i.e.\ each input feature is labeled). Image data is more complex, as the features do not carry the same amount of information per feature as tabular data; explanations for this type of data show which regions of the image were important to a model's classification (see Figure~\ref{fig:intro-shap-explanation}).
\begin{figure}
\caption{Example of a series of explanations of image data. Here, regions of the image that contribute positively towards the classification are red, and similarly blue regions denote negative contributions. Note that SHAP creates explanations for each instance for each label, regardless of true label. For more information, see Section~\ref{section:intro-neural-nets}.}
\label{fig:intro-shap-explanation}
\end{figure}
\subsubsection{Inherently Interpretable Models}
The easiest way to generate explanations for models is by using models that are inherently interpretable. There are two common model architectures with this feature: regression models and decision trees. Decision tree outputs can be simply explained by following the chain of decisions from the root node down to the eventual leaf representing the classification. Regression models are similarly simple, with linear regression and logistic regression:
\[
\text{linreg}(\bold x; \bold w, b) = \bold w^\top \bold x + b \qquad \text{logreg}(\bold x; \bold w, b) = \sigma(\bold w^\top \bold x + b)
\]
We can see that for each case, there is a one-to-one correspondence between the weight vector \(\bold w\) and the input vector \(\bold x\). Therefore, the relative importance of each input feature in \(\bold x\) is directly encoded in the weights of the model---exactly an explanation. Of course, using regression models or decision trees may not be an optimal strategy, as these models are very limited in their capabilities. Another option is to convert an existing model as a whole into a more easily interpretable model architecture. For example, an interpretable model can be created with \textit{Self-Explanatory Neural Networks} (SENNs), which are an extension of logistic regression \cite{teso_toward_2019}. This can use very few weights, and it is easy to determine the reasoning for any given classification as the weights correspond to a positive or negative linear combination of inputs. This approach, however, is limited in that some algorithms are not well-suited to interpretability. Algorithms such as deep neural networks are uninterpretable, and their performance cannot easily be matched by interpretable models.
\subsubsection{Local Outcome Modeling}
If the model to be explained cannot be converted to an easily-interpretable model, it is still possible to create an explanation of its behavior using black-box explanation methods. The simplest black-box approach is the Leave-One-Out (LOO) algorithm \cite{abdalla_visual_2021}.
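Before describing LOO, here is a minimal sketch of how an explanation can be read directly off an inherently interpretable model, as discussed above (the dataset and variable names are placeholders, not drawn from the original experiments):
\begin{verbatim}
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fit a logistic regression; its weight vector doubles as an explanation.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Rank features by the magnitude of their learned weight w_i; the sign says
# whether the feature pushes the prediction towards the positive class.
weights = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, weights), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:25s} {w:+.3f}")
\end{verbatim}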
LOO simply segments the input features, and for each segment zeros out the segment and runs the model inference to determine how much the output changes. Despite LOO's apparent simplicity, this strategy is surprisingly effective at determining the salient regions of an input, and being a black-box algorithm means it is broadly applicable to a wide variety of models \cite{abdalla_visual_2021}. Careful selection of segmentation algorithms can lead to very accurate saliency maps (see Figure~\ref{fig:intro-loo-segmentation}). However, this flexibility comes at a cost. LOO is expensive to compute, as each segment requires a re-inference; additionally, this method does not take into account inter-dependencies between regions \cite{abdalla_visual_2021}. Additionally, it does not give a per-feature saliency mapping, instead operating over superpixels---useful for individual explanations, but less so for comparing explanations between instances.
\begin{figure}
\caption{Example of LOO segmentation.}
\label{fig:intro-loo-segmentation}
\end{figure}
A more robust approach is to train a smaller model to explain individual predictions of a dataset \cite{teso_why_2018}. This local approach has the benefit that the overarching model can be a black box while still providing explanations of the model behavior. This works because even if the decision surface is defined by many uninterpretable features, the surface around a single point can be approximated locally. The most popular implementation of this approach is Local Interpretable Model-agnostic Explanations (LIME) \cite{ribeiro_why_2016}. LIME fits a flat plane against the decision surface, and infers the boundary by perturbing the input point to generate a cloud of points which are then all inferred by the model. While not perfectly accurate, this is effective at broadly showing the explanation of the point inference and has the advantage of being model-agnostic. LIME still suffers from some of the same limitations as LOO:\ each explanation requires multiple inferences (due to the input point perturbation), and to work over high-dimensional data (such as images) superpixel segmentation must be used.
\begin{figure}
\caption{LIME local model. From Ribeiro et al.~\protect\cite{ribeiro_why_2016}.}
\label{fig:intro-lime-fitting}
\end{figure}
\subsubsection{Neural Network Approaches}\label{section:intro-neural-nets}
A final approach is specific to neural networks: a series of techniques are available to compute the gradient of the class score with respect to the input pixels, yielding an effective explanation of which inputs are salient to the model's ultimate classification \cite{abdalla_visual_2021}. These methods are typically white-box, and operate over the specific structure of the neural network; this means that explanation method implementations need to be aware of the implementation details of the networks they are explaining. There are several methods of achieving this, with the simplest being Gradient Ascent \cite{simonyan_deep_2014}. In gradient ascent, the gradient of the class score with respect to each input feature is computed via backpropagation in a single pass, leading to a per-feature importance score across the image \cite{abdalla_visual_2021}. However, this can cause a very grainy saliency map (see Figure~\ref{fig:intro-gradient-ascent-saliency}).
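As a rough sketch of this gradient-based saliency idea (assuming a trained Keras/TensorFlow classifier \texttt{model} whose output is a vector of class scores; this is an illustration, not code from the original experiments):
\begin{verbatim}
import tensorflow as tf

def gradient_saliency(model, x, class_index):
    # Gradient of one class score w.r.t. the input, in a single backward pass.
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)  # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[0, class_index]     # score of the class being explained
    grad = tape.gradient(score, x)[0]
    return tf.abs(grad).numpy()              # per-feature importance magnitude
\end{verbatim}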
A more nuanced approach is Deep Learning Important FeaTures (DeepLIFT), which compares the activations of network neurons to a ``reference activation'' (activations on a different instance) and can separate positive and negative contributions to give higher-quality results than Gradient Ascent \cite{abdalla_visual_2021,shrikumar_learning_2019}.
\begin{figure}
\caption{Example saliency map generated with gradient ascent, depicting the saliency map from the top class prediction from a ConvNet~\protect\cite{simonyan_deep_2014}.}
\label{fig:intro-gradient-ascent-saliency}
\end{figure}
A popular library for generating these explanations is Shapley Additive Explanations (SHAP) \cite{lundberg_unified_2017}. SHAP represents explanations as a set of Shapley values (a measure, borrowed from game theory, of how important a contribution is to an overall whole) of the overall model, and computes these using a unification of several techniques (notably LIME and DeepLIFT) \cite{lundberg_unified_2017}. This approach generates high-quality explanations and is packaged into an easy-to-use Python library \texttt{shap} \cite{slundberg_shap_2021}. SHAP is used heavily throughout this project, so it is worth examining its underlying principles of operation. SHAP generates explanations for each label for each instance; for example, for each numeral in MNIST, SHAP generates ten sets of explanations: one for label \texttt{0}, one for label \texttt{1}, and so on. This is visible in Figure~\ref{fig:intro-shap-explanation}, where each instance (rows) has ten different Shapley value sets (columns). These Shapley values are not necessarily the same magnitude between data points (though in this project they usually are), and are typically similar within a dataset. SHAP operates not only over image data (with DeepSHAP) but also over other types of data, such as tabular data.
\iffalse
\begin{figure}
\caption{Force plots of SHAP explanations of XGBoost classifications trained on the Iris dataset, one for each class in the dataset. Note how each feature contributes different amounts towards the classification, with the final classification being a sum of the contributions.}
\label{fig:intro-shap-iris-force}
\end{figure}
\fi
\subsection{Hierarchical Clustering}
Hierarchical clustering algorithms are a series of algorithms to form clusters over data where the number of target clusters is not necessarily known; this contrasts with partitioning algorithms (such as K-means clustering), which require knowledge of the cluster count to partition the dataset. Hierarchical clustering algorithms work by combining data points into clusters agglomeratively, and this property makes them incredibly powerful for analyzing unknown data as they are able to discover the structure of the data \cite{ester_density-based_1996}. A commonly-used clustering technique is Density Based Spatial Clustering of Applications with Noise (DBSCAN). DBSCAN is able to efficiently handle clustering over a dataset with arbitrary-shaped clusters, and is able to achieve this with minimal required knowledge of the underlying dataset (as it is parameterized by a single hyperparameter) \cite{ester_density-based_1996}. This is achieved by picking an arbitrary point in the dataset and building a list of points reachable within a distance \(\epsilon\) from that point (measured via a Euclidean distance metric). If that list is over a threshold number of points (typically 5), the list of points is given a cluster label---otherwise, it is marked as noise.
This process is repeated until all points are either assigned a cluster or marked as noise. However, due to DBSCAN's reliance on a Euclidean distance metric, it is not well suited to very high-dimensional data. This is due to the \textit{curse of dimensionality}: as the dimensionality goes up, the input space becomes increasingly sparse for nearest-neighbor searching \cite{marimont_nearest_1979}.
\iffalse
\subsubsection{Principal Component Analysis}
Over large datasets, the number of dimensions per instance can become cumbersome to work with. To counter this, dimensionality reduction techniques can be used to reduce the number of dimensions for each instance. Principal Component Analysis (PCA) is a technique used to reduce the dimensionality of a dataset by finding a set of vectors (\textit{principal components}) which form a basis for a new space into which the dataset can be transformed \cite{jolliffe_principal_2011}. These principal components are orthogonal to each other, and maximize the variance for the data. Principal components can be found by calculating the eigenvectors of the covariance matrix for the data, with the eigenvectors with the largest corresponding eigenvalues being the components with the maximum explained variance for the dataset \cite{murray_autoencoders_2020}. PCA is sensitive to outliers, however, as these outliers can greatly increase the variance and skew the maximum variance for a particular component \cite{murray_autoencoders_2020}.
\fi
\subsection{Latent Structure Detection}\label{section:background-latent-structure}
Class introspection is a novel technique, but the idea of identifying latent classes within a dataset is not new. A prominent application area of this concept is medical studies, where similarities between subgroups of patients need to be identified in order to determine the safety of a drug; for example, if certain members of the experimental group had different reactions to a research pharmaceutical, then it would be important both to identify fragmentation inside that group (i.e.\ reacted to the drug in different ways) and to identify common features between those individuals (e.g.\ similar ages, etc.) \cite{lanza_latent_2013}. Discovering these similarities is crucial, as treatment techniques can rely on a specific confounding variable in a population that is not readily captured by the available features.
\subsubsection{Subgroup Analysis}
Subgroup analysis is a simple method of detecting latent structure. Subgroup analyses are performed by manually categorizing gathered data into subgroups based on some common characteristic of the data, and are typically examined by incorporating some moderating variable into a regression and interpreting the results \cite{higgins_cochrane_2011}. However, because this grouping is inherently observational, this method is difficult to use effectively and is time-intensive. Additionally, the results of such analyses are subject to high Type I error rates, as false positives are easy to make when manually grouping data \cite{lanza_latent_2013}.
\subsubsection{Finite Mixture Models}
Finite mixture models are another way of determining latent structure (or structure in general) \cite{lanza_latent_2013}. Finite mixture models represent the data as a linear combination of component densities and maximize the likelihood of a specific configuration of models, and a common density function to use is the Gaussian normal distribution \cite{bell_automatic_2021}.
Gaussian mixture models optimize the probability:
\[
p(x) = \sum^M_{m=1} P(m) \mathcal{N} (x; \mathbf{\mu}_m, \Sigma_m)
\]
where the probability \(p(x)\) of a specific point is given by a sum of normal distributions moderated by mixing parameters \(P(m)\) \cite{bell_automatic_2021}. This probability is typically optimized by the expectation-maximization (EM) algorithm \cite{siedel_mixture_2011}. Finite mixture models can find latent structure, but they are limited in their effectiveness for this task. Finite mixture models are limited to discovering structure in the form of the basis functions chosen (e.g. Gaussians). Additionally, mixture models must re-learn the data distribution without any guidance from the original class labels, which is computationally expensive \cite{bell_automatic_2021}. FMMs also generalize poorly to high-dimensional data, as it is difficult to fit models in high-dimensional spaces. In general, these techniques are poorly suited to finding latent structure in complex data.
\subsubsection{Commonality metrics}\label{subsection:intro-commonality-metrics}
A cutting-edge technique in this area is the detection of rare subclasses via \textit{commonality metrics} \cite{paterson_detection_2019}. This technique analyzes the input training classes for a given dataset and for each class determines the average activation in the penultimate neural layer for that class, and then scores each instance based on how similar it is to that average activation. This technique does not identify latent classes in and of themselves; rather, it identifies single instances that may be mislabeled or are significantly different from the average. Additionally, because the commonality metric approach operates over an average activation, large numbers of far-from-average instances will skew the whole commonality metric---potentially rendering the method less effective at identifying singular anomalous instances.
\subsubsection{Clustering over neural networks}
Additionally, several recent techniques directly address the problem of unsupervised labeling of datasets. Broadly, these techniques introduce a clustering step during the training of a neural network to improve the labeling performance of the network. Two such techniques are DeepCluster \cite{caron2019deep}, and the self-labeling method proposed by Asano et al. \cite{asano2020selflabelling}, where the loss function jointly learns the neural network parameters and the cluster assignments of the inferred features. DeepCluster itself simply uses \(k\)-means clustering to achieve this, and Asano et al. use a more complex linear-programming-based method. Both, however, significantly outperform other feature-based learning approaches \cite{caron2019deep,asano2020selflabelling}.
\section{Methodology}
\subsection{Problem Formulation}
The central problem to be solved is the discovery of latent subclasses in the input space. The aim is to find unlabeled (latent) subclasses within labeled classes in an existing dataset, with the hypothesis that these latent subclasses can carry additional information that is relevant to the authors of the classifier (see the examples in Section~\ref{section:intro-motivation}). The latent subclasses should be differentiated from one another without human intervention; that is to say, this should be an unsupervised technique. This is, at its core, a clustering problem.
In the ideal case (with no latent structure), each discovered cluster should correspond to a single true label in the dataset. However, in the case where latent subclasses exist within a class, we expect to see two or more cluster labels assigned to a single class label. In general, this method should take a dataset and its labels as input and output a cluster label for each instance. As a baseline technique, simple hierarchical clustering could be performed; however, by using the explainability methods the resulting projection is hypothesized to be easier to cluster over.
\subsection{Method Overview}
Class introspection's central hypothesis is that even if the instances within a dataset are all labeled identically, the specifics of how a classifier \textit{interprets} each instance can change significantly between instances---and it is precisely that change that can be used to determine the existence of latent subclasses. In other words, this approach solves the subclass detection problem by leveraging an existing classifier, and through that the classifier's statistical power. Detecting these instance changes relies on the combination of several methods discussed in Section~\ref{chapter:background}. The general process is described below:
\begin{enumerate}
\item Generate classifications for all instances in the dataset.
\item For each instance, create a local explanation of the classifier's decisions.
\item For each class:
\begin{enumerate}
\item Select all instances in the dataset classified as the selected class, and their explanations.
\item Run a clustering algorithm over the class's explanations.
\item \textbf{Multiple clusters indicate the existence of latent structure.}
\end{enumerate}
\end{enumerate}
\begin{figure*}
\caption{Class introspection methodology overview.}
\label{fig:meth-methodology-overview}
\end{figure*}
Clustering over the instance explanations is possible as these explanations act as a \textit{proxy} for the underlying instance. The explainability methods discussed in Section~\ref{section:intro-explainability} produce saliency maps over the input features \cite{ribeiro_why_2016,lundberg_unified_2017}, and these saliencies are expressed as a tensor of saliency weights over the initial features. This is crucial: because these saliency maps are numeric, they can be interpreted just as easily as the original data---or even more easily, as discussed further on in this paper. Because the explanations are simply numeric they can be clustered over, but the clustering itself is a challenge. Because the total class count is unknown, partitioning algorithms (e.g. K-means clustering) are not suitable for the task---if the total class count were known, the problem would be solved as we'd already know which subclasses exist. Instead, agglomerative clustering methods are appropriate for this use case, as they can operate over an unknown number of clusters. In this paper, DBSCAN is used to apportion the explanations into clusters---although others can be used (see Section~\ref{section:future-work}). Before clustering can be applied, the dimensionality of the data must be reduced. Clustering algorithms rely on (typically Euclidean) distance metrics, and as the number of dimensions increases the number of unit cells in that space increases exponentially, and so in these high dimensional spaces the nearest point can be extremely far away \cite{scarpa_data_2011}.
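The per-class clustering step outlined above can be sketched roughly as follows (a sketch under stated assumptions: \texttt{explanations} is an array of per-instance explanation tensors, \texttt{predicted} holds the classifier's predicted labels, and the reducer, clusterer, and default values mirror choices described later in this paper; none of these names come from the project's codebase):
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def introspect(explanations, predicted, n_components=5, eps=0.004):
    # For each predicted class, cluster the explanation vectors; more than one
    # non-noise cluster within a class suggests latent structure.
    flat = explanations.reshape(len(explanations), -1)             # flatten per instance
    reduced = PCA(n_components=n_components).fit_transform(flat)   # global basis vectors
    report = {}
    for label in np.unique(predicted):
        members = reduced[predicted == label]
        clusters = DBSCAN(eps=eps, min_samples=5).fit_predict(members)
        report[label] = len(set(clusters) - {-1})                  # -1 marks DBSCAN noise
    return report
\end{verbatim}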
In this paper, principal component analysis is used to reduce the number of dimensions before clustering, yielding much better results. At the end of the clustering process, the assigned cluster labels within each class are displayed as a histogram. The key insight here is that the cluster labels correspond to the similarity of the input images to each other, and if a class contains large numbers of instances in two or more clusters, those high-count clusters are a candidate for a latent subclass. Ultimately, this method requires human interpretation of the results as some of the detected latent structure may be intentional (e.g.\ in the apples-oranges example the classifier may not care about the different kinds of apple). Even so, the output from this algorithm is much more readily interpretable than attempting to audit the dataset and labels by hand.
\subsection{Baseline}\label{section:meth-baseline}
A simple baseline technique for detecting this latent structure is to ignore the explanatory methods altogether and perform clustering over the raw instance data itself. PCA is used to reduce the dimensionality of the data before it is passed to the clustering algorithm, DBSCAN.\@ This approach is similar to the commonality metrics methods described in Section~\ref{subsection:intro-commonality-metrics}, and is nearly identical to the methodology in Figure~\ref{fig:meth-methodology-overview}. To generate a dataset with latent structure, artificial latent structure is induced as in Section~\ref{section:artificial-structure}. Running the baseline over the MNIST dataset yields a surprising result: running PCA and DBSCAN over the raw MNIST digits does not reliably determine when the artificial latent structure is present. Even with a hyperparameter sweep of the DBSCAN \(\epsilon\) parameter, in some cases an optimal configuration cannot be found where the bridged class has a significant split in cluster membership as opposed to the non-bridged classes. Figure~\ref{fig:results-baseline-1-8} contains the output for the \((1 \to 8)\) bridging, and it is clear that the bridged class is not identified: no class contains any split in cluster membership, and some are entirely noise. However, some splits do work: in the case of the \(0 \to 1\) split in Figure~\ref{fig:results-baseline-0-1}, the latent structure is successfully identified.
\begin{figure}
\caption{Detected class fragmentation from the baseline run against MNIST \(1\to 8\) with \(\epsilon = 250\). Each blue bar represents a detected class from DBSCAN. Note how none of the classes show a major split in the assigned cluster labels (ignoring gray noise bars).}
\label{fig:results-baseline-1-8}
\end{figure}
\begin{figure}
\caption{Detected class fragmentation from the baseline run against MNIST \(0\to 1\) with \(\epsilon = 275\). Note how here, the latent structure is successfully identified (top left).}
\label{fig:results-baseline-0-1}
\end{figure}
\section{Experimentation}\label{chapter:initial-explorations}
\subsection{Artificial Latent Structure}\label{section:artificial-structure}
To develop a pipeline to detect latent structure, we first need a dataset which contains latent structure. Unfortunately, it is difficult to find a dataset with such structure already in it. The challenge is twofold---either the dataset is simple enough that latent structure is virtually nonexistent, or the dataset is complex enough that detecting latent structure requires domain knowledge.
For a useful comparison of class introspection techniques, the dataset needs both the original class labels and labels for the latent structure to be detected. Creating these labels requires manually labeling every instance, which is both incredibly time-consuming and out of the scope of this project. Clearly, another solution is needed. Instead of relying on existing datasets to have exploitable latent structure, we can \textit{create} latent structure ourselves by ``bridging'' class labels together. This trivially creates latent structure, as both the instances for label \(A\) and label \(B\) are bound to the same class---and thus the new bridged class label has two distinct types of instance in it. This also has the benefit of giving labels corresponding to the original classes, as the original labels can be compared to the bridged labels to show which instances have been bridged. See Figure~\ref{fig:mnist-18-bridge} for an example with MNIST with labels \texttt{1} and \texttt{8} bridged together.
\begin{figure}
\caption{A demonstration of artificial latent structure with MNIST, here bridging the \texttt{1} and \texttt{8} classes together.}
\label{fig:mnist-18-bridge}
\end{figure}
This approach has limitations, however. The primary complication is the nature of the latent structure itself: bridging two classes can create a superclass with extremely obvious latent structure, which may not align well with real-life latent structure. However, for the purposes of this project these effects have been ignored, as it is useful to have obvious latent structure to detect rather than very subtle structure in order to test the pipeline. Additionally, bridging class labels can lead to unexpected behavior whilst training models over a dataset with a small number of classes. This is a fundamental issue, as bridging two classes effectively removes one of the classes. For simple datasets, this can cause certain models to not learn the structure of the superclass but instead simply learn one of the other classes and segment the input space into ``class and not class''.
\iffalse
This was encountered during experimentation with simple datasets, discussed below in Section~\ref{section:iris-bridged}.
\subsection{Iris plants dataset}\label{section:initial-iris-dataset}
The initial proof-of-concept of the class introspection pipeline needed to be as simple as possible, so the Iris dataset was chosen. The Iris dataset is a simple dataset, consisting of three classes characterized by four features. The petal width, petal length, sepal width, and sepal length are measured for three different species of iris; \textit{Iris setosa}, \textit{Iris virginica}, and \textit{Iris versicolor} \cite{fisher_use_1936,anderson_species_1936}. Based on these four features, it is possible to determine the class of an instance---and in fact, the petal length and width alone are highly correlated with the class.
\begin{figure}
\caption{\textit{Iris versicolor}
\label{fig:iris-versicolor-plant}
\end{figure}
Due to the relative simplicity of this dataset, it is a common dataset for multiclass classification scenarios. For the initial proof-of-concept, it was chosen specifically for its low class number and distinct class separation. With the petal width and petal length correlated highly with class and sepal width and sepal length not correlated, it is possible to ignore sepal attributes and only consider two dimensions for plotting---allowing for easy debugging of the pipeline.
\subsubsection{Non-bridged Performance} The first experiment was simply to generate explanations for each instance using simple models, and plot them against each other to see if any structure emerged. The models used in the analysis were off-the-shelf Scikit-learn models, using black-box explanations provided by LIME.\@ All experiments used the same training/testing split of \(25\%\). The first model tested was a random forest ensemble model (\texttt{sklearn.\allowbreak{}ensemble.\allowbreak{}RandomForest}) with \(n=15\) estimators. Training the model resulted in a perfect score against the test data, indicating a potential overfit; however, for class introspection overfit is not of particular concern. Figure~\ref{fig:iris-lime-rf-unbridged} shows the results from this analysis. The graph shows that while the \textit{Iris versicolor} instances are largely explained similarly, the \textit{Iris virginica} and \textit{Iris setosa} instances have some strange structure to the explanations---they are not stable within the class. Note that only the intra-class positions of the points matters, not the positions relative to other classes. \begin{figure} \caption{LIME explanations of random forest classifier over all irises (petal features only).} \label{fig:iris-lime-rf-unbridged} \end{figure} The next experiment used a multinomial na{\"\i}ve Bayes classifier (\texttt{sklearn.\allowbreak{}naive\_bayes.\allowbreak{}MultinomialNB}). This was trained exactly in the same manner as the random forest ensemble, and again fit well to the data. The graph (Figure~\ref{fig:iris-lime-mnb-unbridged}) again shows a tight clustering for the \textit{Iris versicolor} and more spread out explanations for \textit{Iris virginica} and \textit{Iris setosa}, though this time with different locations of the classes relative to each other. While this does not affect anything from a class introspection perspective, it is noteworthy that while the explanations are in different locations the overall class shapes are similar between the multinomial na{\"\i}ve Bayes classifier and the random forest classifier. \begin{figure} \caption{LIME explanations of multinomial na{\"\i} \label{fig:iris-lime-mnb-unbridged} \end{figure} \subsubsection{SHAP vs LIME} These experiments uncovered an issue with LIME---the explanations within each class were not clustered together, but rather exhibited a strange four-sided symmetry. Because the dataset is fairly simple, it was reasonable to assume this to be a limitation in LIME. To combat this, SHAP was used instead to generate explanations. The experimental setup was similar, with a tree-based model (XGBoost with \(100\) rounds and a learning rate of \(0.01\)) used instead of Scikit-Learn's random forest classifier. This resulted in better performance the Iris dataset (see Figure~\ref{fig:iris-shap-xgboost-unbridged}), with the majority of instances within classes having explanations within the same cluster. There are a few outliers, for example within the \textit{Iris versicolor} cluster which has a singular outlier. This shows SHAP's improved performance over LIME, as the explanations are much higher quality. 
\begin{figure} \caption{SHAP explanations of XGBoost over all irises (petal features only).} \label{fig:iris-shap-xgboost-unbridged} \end{figure} \subsubsection{Bridged Performance}\label{section:iris-bridged} With this improved performance of SHAP powering the explanations, the next experimental step was to introduce latent structure into the dataset as described in Section~\ref{section:artificial-structure}. For this run the \textit{Iris setosa} and \textit{Iris versicolor} were bridged together and the resulting dataset was trained as before, with an XGBoost classfier with \(100\) rounds and a learning rate of \(0.01\) and explanations generated via SHAP's tree explainer. Unfortunately, this exposed the issue with class bridging with low-class-count datasets: instead of learning the differences between the instances in the bridged class, the tree classifier simply split the inputs into two categories: \textit{Iris virginica} and not \textit{Iris virginica}. Figure~\ref{fig:iris-shap-xgboost-bridged} shows the full bridged class explanations, showing a split inside of the bridged class. However, further investigation of this split shows that the \textit{Iris versicolor} and \textit{Iris setosa} classes are not separate from each other within the class (see Figure~\ref{fig:iris-shap-xgboost-bridged-detail}). \begin{figure} \caption{SHAP explanations of XGBoost over irises with bridged \textit{Iris setosa} \label{fig:iris-shap-xgboost-bridged} \end{figure} \begin{figure} \caption{SHAP explanations of XGBoost over only bridged \textit{Iris setosa} \label{fig:iris-shap-xgboost-bridged-detail} \end{figure} \fi \subsection{MNIST dataset} An appropriate multiclass dataset for the development of the class introspection algorithm is the MNIST dataset \cite{lecun_mnist_2010} which contains \(70,000\) handwritten digits (\(0\) through \(9\)) as \(28\) pixel by \(28\) pixel monochrome images. The MNIST dataset is a common dataset in machine learning as it is easily understood by beginners while still being complex enough to allow for meaningful analysis. For the purposes of class introspection, it is an ideal dataset due to its relative simplicity and high class count. This high class count is crucial to avoid the issues with too few meaningful classes discussed above, as with a single bridged class there are still eight remaining classes. \iffalse \subsubsection{LIME and PyTorch} Because MNIST is image data, the models used for the tabular Iris dataset would not have been sufficient to classify MNIST digits. Instead, an artificial neural network (ANN) was used as a classifier, with three fully-connected hidden layers of \(784 \to 128 \to 128 \to 64 \to 10\) neurons connected by rectified linear units (ReLUs) with a logarithmic softmax on the output. For the initial MNIST experimentation, the ANN was implemented with PyTorch and explained with LIME.\@ PyTorch was chosen because of it's simple integration with LIME's default image explanation pipeline. First, as a sanity check the network was trained on the unbridged MNIST data, and then the test data instances were explained in bulk. The train/test split was the MNIST recommended \(60,000\) training instances and \(10,000\) testing instances \cite{lecun_mnist_2010}. The network reached an accuracy of \(97.6\%\) over the test set. 
In Figure~\ref{fig:mnist-pytorch-lime-unbridged}, the results of these explanations can be seen---the LIME explanations do highlight the structure of the instance as being salient to the classifier output, though this is noisy. In Figure~\ref{fig:mnist-pytorch-lime-unbridged-detail}, the output for a specific instance (a \(7\)) is shown. Again, it's clear that the bulk of the digit is marked as salient, but a lot of the background is also marked as salient. This noise is discussed below.
\begin{figure}
\caption{Unbridged MNIST explanations with LIME-generated (quickshift) superpixels.}
\label{fig:mnist-pytorch-lime-unbridged}
\end{figure}
\begin{figure}
\caption{Unbridged MNIST explanations with LIME-generated (quickshift) superpixels, detail on individual instance.}
\label{fig:mnist-pytorch-lime-unbridged-detail}
\end{figure}
The next step was to re-train the network with a pair of bridged labels. For this step, the \(1\) and \(8\) labels were chosen for bridging, as these digits are very distinct from one another giving the class introspection the best chance of succeeding. The labels were modified for both the train and test datasets, and the network was re-trained. In Figure~\ref{fig:mnist-pytorch-lime-bridged}, it is evident that the network marks both \(1\) and \(8\) as being in the same class.
\begin{figure}
\caption{Bridged MNIST explanations with LIME superpixels.}
\label{fig:mnist-pytorch-lime-bridged}
\end{figure}
While this is a positive result, this also exposes underlying issues with using LIME as an explainer. In image mode, LIME operates over superpixels generated by a generic segmentation algorithm (by default, scikit-image's Quickshift), running model inference over permuted superpixels rather than individual features for performance \cite{ribeiro_why_2016}. However, using superpixels leads to very coarse-grained explanations, as there are only a small number of superpixels by default---see Figure~\ref{fig:mnist-pytorch-lime-superpixel-detail} for a detail of a singular instance with very noisy explanations. This is not optimal for detecting latent structure, as more fine-grained feature importances are required.
\begin{figure}
\caption{Detail of LIME superpixels.}
\label{fig:mnist-pytorch-lime-superpixel-detail}
\end{figure}
\fi
\subsubsection{SHAP and Keras}\label{subsection:shap-and-keras}
Due to the fine-grained explanations required for class introspection, LIME's explanations were insufficient. A different explanation engine was required, and SHAP (with the DeepLIFT backend) is able to provide those explanations. Instead of operating over superpixels, SHAP uses DeepLIFT to backpropagate the contributions of every neuron to every feature in the input space \cite{shrikumar_learning_2019,lundberg_unified_2017}. To keep the experiment simple, a relatively simple network architecture was chosen: fully-connected \(784 \to 128 \to 128 \to 64 \to 10\) layers with ReLUs and a softmax activation. Training this network resulted in a \(97.91\%\) accuracy over the test dataset, which is expected over such a relatively simple dataset. Running SHAP explanations over the test dataset yielded a per-pixel saliency map, and this map is intuitively interpretable; as an example, for an instance with true label 7, the explanation for label 0 shows that the network expects a round shape, and because the instance does not match that shape the input features negatively contribute towards classification as a 0.
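A minimal sketch of producing such per-pixel SHAP explanations with the \texttt{shap} library (assuming a trained Keras model \texttt{model} and MNIST arrays \texttt{x\_train}, \texttt{x\_test}, \texttt{y\_train}; an illustration rather than the exact experiment code):
\begin{verbatim}
import numpy as np
import shap

# Optional "bridging" of labels 1 and 8 before training, mirroring the
# artificial latent structure described earlier (assumed relabeling):
# y_train = np.where(y_train == 8, 1, y_train)

# DeepExplainer backpropagates DeepLIFT-style contributions against a
# background sample drawn from the training data.
background = x_train[np.random.choice(len(x_train), 100, replace=False)]
explainer = shap.DeepExplainer(model, background)

# One array of per-pixel Shapley values per output label, for each instance.
shap_values = explainer.shap_values(x_test[:256])
\end{verbatim}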
For the same instance (true label 7), the explanation of label 7 shows the shape of the 7 positively influencing the classification of a 7, yielding a classification of 7 for the instance (see Figure~\ref{fig:mnist-keras-shap-detail-7-0-and-7-7}).
\begin{figure}
\caption{On the left: SHAP explanations over unbridged data, detail on explanation for label 0 and true label 7. Note the band of blue values in the shape of a zero; these are negative contributions (blue) from the features that would match label 0 and are missing on the 7. On the right: SHAP explanations over unbridged data, detail on explanation for label 7 and true label 7. Note the positive contributions (green) along the body of the glyph.}
\label{fig:mnist-keras-shap-detail-7-0-and-7-7}
\end{figure}
Again, the \(1\) and \(8\) digits were bridged together due to their dissimilarity, and the network was trained again (achieving an accuracy of \(97.92\%\)). Figure~\ref{fig:mnist-keras-shap-bridged} shows another grid of SHAP explanations; note that there are no explanations in the \(8\) column as they have been bridged with the \(1\) column. Filtering just for instances in the bridged category shows a clear difference between the explanations for the \(1\) instances and the \(8\) instances.
\begin{figure}
\caption{SHAP explanations over bridged MNIST data.}
\label{fig:mnist-keras-shap-bridged}
\end{figure}
The next step in the class introspection pipeline is to cluster the explanations to isolate the latent structure in the class. This poses several problems: the number of clusters is unknown, the data may not be linearly separable, and the data is high-dimensional. To handle the unknown cluster count, hierarchical clustering is used (DBSCAN). DBSCAN can handle clusters that are not linearly separable, making it suitable for this application \cite{ester_density-based_1996}. However, DBSCAN relies on a Euclidean distance metric, requiring dimensionality reduction to avoid the curse of dimensionality. In this case, PCA was used to reduce the dimensionality from \(784\) to \(5\) as a preprocessing step, yielding the principal components seen in Figure~\ref{fig:mnist-keras-shap-bridged-pca}. PCA was calculated from the set of all explanations for instances the network predicted, giving a global set of basis vectors. Running DBSCAN over this data with \(\epsilon = 0.004\) yielded two distinct classes. As a control, all other classes were calculated using the same method, and this yielded the class membership visible in Figure~\ref{fig:mnist-keras-shap-bridged-0-1-all}. This shows that it is possible to find latent structure with this method. Additionally, even without DBSCAN the latent class is visible in the variance explained by the PCA vectors, as seen in Figure~\ref{fig:mnist-keras-shap-bridged-0-1-variance}. To verify that this was not a fluke, this process was repeated for all possible pairings (\(45\) in total). To facilitate this computation, a pipeline was created to automatically run all cases, and a web application was produced in order to view results. The code for both of these is available on GitHub\footnote{\url{https://github.com/pkage/class_introspection_krhcai}}.
\begin{figure}
\caption{PCA vectors for the \(1\) to \(8\) bridge, with the explained variance of each component.}
\label{fig:mnist-keras-shap-bridged-pca}
\end{figure}
\begin{figure}
\caption{Intraclass fragmentation histogram viewed with DBSCAN on bridged MNIST data (bridged labels 0 and 1).
As before, the \(x\) axis is the class label and the \(y\) axis is the count of instances in that label. Note that in the bridged case \(0\to 1\) there are two main bars (+ a gray noise bar) and in the rest there is only one main bar.}
\label{fig:mnist-keras-shap-bridged-0-1-all}
\end{figure}
\begin{figure}
\caption{Variance inside classes in bridged MNIST data (bridged labels 0 and 1). Note that the variance is highest in class \(0\) (the bridged class).}
\label{fig:mnist-keras-shap-bridged-0-1-variance}
\end{figure}
\section{Discussion}\label{chapter:results-limitations}
\subsection{Comparison to Baseline}
Compared to the baseline method described in Section~\ref{section:meth-baseline}, the SHAP + PCA + DBSCAN method described above is very effective at determining the latent structure. This variation in performance is indicative of the difference in the type of data being processed. The baseline operates directly over the glyphs themselves, while the class introspection pipeline operates over the classifier's \textit{explanations} of the glyphs. While similar, the distinction is important: in the baseline, the precise positions of the pixels are salient to the class representation, whilst in the explanations it is the specific neuron firings (and their intensities) that are salient. This allows the class introspection pipeline to group positive classifications by the specific neurons that are firing; the intuition being that the exact structure of the glyph does not matter so long as the specific neurons associated with that class are active, something the classifier has already fine-tuned. Compare this to the baseline, which is relegated to determining the specific pixels that make up a class without the benefit of a trained neural network behind it.
\subsection{Limitations}
This experiment has shown that class introspection is a viable technique, but there are still several limitations in its current iteration. One main issue is that the explainability methods are imperfect. In most cases, SHAP or LIME produce saliencies that are reasonable, but in others they pick up on specific pixels that are assigned a saliency much higher than its neighbors---or in fact, any other pixel in any other instance---by several orders of magnitude. This is infrequent, but care must be taken to avoid these outliers skewing the PCA vectors. Another issue is the dependence on the DBSCAN hyperparameter \(\varepsilon\). Without careful tuning, this can very easily yield a ``collapsed solution'', where the entire class is lumped into a single cluster, or a largely noise-filled clustering solution where most points are in the noise class or in minimum-cluster-size clusters. Future work here should add checks to ensure that the class distribution is meaningful. In a similar vein, PCA is ill-suited to image data. PCA does not preserve the structure of the image, rather flattening it out into a single-dimensional vector. This flattening ruins any sort of spatial correlation between features, and is extremely fragile to image scaling, rotation, transposition, etc. A better choice for future work is a proper computer-vision-based feature extraction method, such as Scale-Invariant Feature Transform (SIFT) or Histogram of Oriented Gradients (HOG). These techniques would preserve the feature positions in the saliency map, and allow for a higher-quality dimensionality reduction. In its current iteration, class introspection is ill-suited to complex image data (such as CIFAR or ImageNet) due to this limitation with PCA.
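Regarding the \(\varepsilon\) sensitivity noted above, one simple check is to sweep \(\varepsilon\) and report, for each candidate, the cluster count and noise fraction so that collapsed or noise-dominated solutions can be recognized and discarded. A minimal sketch (the function and its thresholds are illustrative assumptions, not part of the original pipeline):
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def eps_diagnostics(points, candidates):
    # For each candidate eps, report (eps, number of clusters, noise fraction).
    # A single cluster suggests a collapsed solution; a high noise fraction
    # suggests eps is too small to be meaningful.
    rows = []
    for eps in candidates:
        labels = DBSCAN(eps=eps, min_samples=5).fit_predict(points)
        rows.append((eps, len(set(labels) - {-1}), float(np.mean(labels == -1))))
    return rows
\end{verbatim}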
Fundamentally, class introspection is limited by the characteristics (and statistical power) of the classifier being explained. If the classifier does not learn the differentiating features of a class with latent subclasses and instead just treats that class as a catch-all, then class introspection is less effective at discovering that subclass. This is why the neural-network-based approaches were effective: the MNIST data was complex enough that the neural network had to actually understand the input features, and this allowed the bridged class to be easily discovered. More fundamentally still, class introspection is an ``unknown-unknown problem'': namely, the number of latent classes is unknown---and even worse, the very existence of those latent classes is unknown. It is not unlike searching for a needle in a haystack without even knowing if the needle is there. Additionally, there is another problem: of these discovered subclasses, there is no way of knowing which are intentional and which are novel. This means that there will always have to be a human in the loop to determine which subclasses are relevant.
\subsection{Future Work}\label{section:future-work}
The class introspection pipeline as it stands performs well in this paper, but there are areas that can be improved for more robust performance on a wider variety of data. A potential area of exploration is replacing the clustering algorithm with a Gaussian mixture-based approach, where each point would be assigned a probability of belonging to each latent class instead of a single definite cluster label. This could help identify classes where the presence of latent structure is suspected but not certain. Another direction of exploration is the even more involved technique of comparing not only the input saliency but also the chain of neuron activations through the last layers of the network for any particular instance. This would limit the technique to only neural networks, but may be a powerful tool for identifying subclasses, as the last layers of the network can be effectively used for determining the similarity of instances \cite{paterson_detection_2019,caron2019deep}. This technique has far-reaching possibilities, as the ability to reliably discover latent classes has implications across diverse fields of data science and artificial intelligence research. For example, the ability to reliably detect latent structure in classifiers greatly benefits medical studies that may be looking for a robust alternative to subgroup analysis. More fundamentally, this technique may allow for the auditing of classifiers to ensure that the data they are trained on and the data that they work over are free of unknown and potentially unwanted subclasses that could reduce the overall effectiveness of the classifier.
\subsection{Conclusion}
Over the course of this project a technique for reliably extracting unlabeled (latent) subclasses from existing datasets has been developed, leveraging explainability techniques to take advantage of the statistical power of complex models. This provided an advantage over other approaches for capturing latent structure, and was demonstrated over an example dataset with artificially-induced latent structure.
\end{document}
\begin{document} \title{Compositions of Theta Correspondences} \author{Hongyu He \\ Department of Mathematics \& Statistics \footnote{email:[email protected] } \footnote{AMS Subject Primary 22E45, 22E46} } \date{} \maketitle \abstract{Theta correspondence $\theta$ over $\mathbb R$ is established by Howe in (~\cite{howe}). In ~\cite{unit}, we prove that $\theta$ preserves unitarity under certain restrictions, generalizing the result of Jian-Shu Li (~\cite{li2}). The goal of this paper is to elucidate the idea of constructing unitary representations through the propagation of theta correspondences. We show that under a natural condition on the sizes of the related dual pairs, which can be predicted by the orbit method (~\cite{pdk}, ~\cite{vogan3}, ~\cite{pan}), one can compose theta correspondences to obtain unitary representations. We call this process quantum induction.} \section{Introduction} An important problem in representation theory is the classification and construction of irreducible unitary representations. Let $G$ be a reductive group and $\Pi(G)$ be its admissible dual. For an algebraic semisimple group $G$, the admissible dual $\Pi(G)$ is known mostly due to the works of Harish-Chandra, R. Langlands, and Knapp-Zuckerman (see ~\cite{langlands}, ~\cite{kz}). Let $\Pi_u(G)$ be the set of equivalence classes of irreducible unitary representations of $G$, often called the unitary dual of $G$. The unitary dual of general linear groups is classified by Vogan (see ~\cite{vogan0}). The unitary dual of complex classical groups is classified by Barbasch (see ~\cite{barbasch}). Recently, Barbasch has classified all the spherical duals for split classical groups (see ~\cite{barbasch2}). The unitary duals $\Pi_u(O(p,q))$ and $\Pi_u(Sp_{2n}(\mathbb R))$ are not known in general. \\ \\ In ~\cite{howesmall}, Roger Howe constructs certain small unitary representations of the symplectic group using the Mackey machine. Later, Jian-Shu Li generalizes Howe's construction of small unitary representations to all classical groups. In particular, Li defines a sesquilinear form $(,)_{\pi}$ that relates these constructions to the theta correspondence (see ~\cite{howe79}, ~\cite{li2}). It then becomes clear to many people that some irreducible unitary representations can be constructed through the propagation of theta correspondences (see ~\cite{li97}, ~\cite{hl} and ~\cite{pr1} and the references within them). So far, constructions can only be carried out for ``complete small orbits'' (see ~\cite{li97}). The purpose of this paper is to make this work for nilpotent orbits in general, for real orthogonal groups and symplectic groups. \\ \\ Consider the groups $O(p,q)$ and $Sp_{2n}(\mathbb R)$. The theta correspondence with respect to $$O(p,q) \rightarrow Sp_{2n}(\mathbb R)$$ is formulated by Howe as a one-to-one correspondence $$\theta(p,q;2n): \mathcal R(MO(p,q),\omega(p,q;2n)) \rightarrow \mathcal R(MSp_{2n}(\mathbb R), \omega(p,q;2n))$$ where $MO(p,q)$ and $MSp_{2n}(\mathbb R)$ are certain double coverings of $O(p,q)$ and $Sp_{2n}(\mathbb R)$ respectively and $$\mathcal R(MO(p,q),\omega(p,q;2n)) \subseteq \Pi(MO(p,q)); \qquad \mathcal R(MSp_{2n}(\mathbb R), \omega(p,q;2n)) \subseteq \Pi(MSp_{2n}(\mathbb R))$$ (see ~\cite{howe}). We denote the inverse of $\theta(p,q;2n)$ by $\theta(2n;p,q)$. For the sake of simplicity, we define $$\theta(p,q;2n)(\pi)=0$$ if $\pi \notin \mathcal R(MO(p,q), \omega(p,q;2n))$. We define $\theta(p,q;2n)(0)=0$, and $0$ can be regarded as the NULL representation.
\\ \\ For example, given an ``increasing'' string $$O(p_1,q_1) \rightarrow Sp_{2n_1}(\mathbb R) \rightarrow O(p_2,q_2) \rightarrow Sp_{2n_2}(\mathbb R) \rightarrow \ldots \rightarrow Sp_{2n_m}(\mathbb R) \rightarrow O(p_m,q_m),$$ $$p_1+q_1 \equiv p_2+q_2 \equiv \ldots \equiv p_m+q_m \pmod 2,$$ consider the propagation of theta correspondences along this string: $$\theta(2n_m; p_m,q_m) \ldots \theta(2n_1;p_2,q_2) \theta(p_1,q_1;2n_1)(\pi).$$ Under some favorable conditions on $\pi \in \Pi_u(O(p_1,q_1))$, one hopes to obtain a unitary representation in $\Pi_u(O(p_m,q_m))$. In this paper, we supply a sufficient condition for $$\theta(2n_m; p_m,q_m) \ldots \theta(2n_1;p_2,q_2) \theta(p_1,q_1;2n_1)(\pi)$$ to be unitary. We denote the resulting representation of $MO(p_m,q_m)$ by $$Q(p_1,q_1; 2n_1; p_2,q_2;2n_2; \ldots ; p_m,q_m)(\pi).$$ We call $Q(p_1,q_1; 2n_1; p_2,q_2;2n_2; \ldots ; p_m,q_m)$ quantum induction. In addition to the assumption that certain Hermitian forms do not vanish, we must also assume that the matrix coefficients of $\pi$ satisfy a mild growth condition. \\ \\ Based on the work of Przebinda (~\cite{pr}), we further determine the behavior of infinitesimal characters under quantum induction. In certain limit cases, the infinitesimal character under quantum induction behaves exactly in the same way as under parabolic induction. In fact, in some limit cases, quantum induced representations can be obtained from unitarity-preserving parabolic induction (see ~\cite{quan}). Finally, motivated by the works of Przebinda and his collaborators, we make a precise conjecture regarding the associated variety of quantum induced representations (Conjecture 2). \\ \\ There is one problem we did not address in this paper, namely, the nonvanishing of certain Hermitian forms $(,)_{\pi}$ with $\pi \in \Pi(Mp_{2n}(\mathbb R))$. In a forthcoming article (~\cite{quan}), we partially address this problem and construct a set of special unipotent representations in the sense of Vogan (see ~\cite{vogan89}). I wish to thank Prof. Shou-En Lu for her encouragement and the referee for some very helpful comments. \section{Main Results} \subsection{Notations} In this paper, unless stated otherwise, all representations are regarded as Harish-Chandra modules. This should cause no problems since most representations in this paper are admissible with respect to a reductive group. Thus unitary representations in this paper mean unitarizable Harish-Chandra modules. ``Matrix coefficients'' of a representation $\pi$ of a real reductive group $G$ will refer to the $K$-finite matrix coefficients with respect to a maximal compact subgroup $K$. A vector $v$ in an admissible representation $\pi$ means that $v$ is in the Harish-Chandra module of $\pi$, which shall be evident from the context.\\ \\ Let $(G_1,G_2)$ be a reductive dual pair of type I (see ~\cite{howe} ~\cite{li2}). The dual pairs in this paper will be considered as ordered. For example, the pair $(O(p,q), Sp_{2n}(\mathbb R))$ is considered different from the pair $(Sp_{2n}(\mathbb R), O(p,q))$. Unless stated otherwise, we will, in general, assume that the size of $G_1(V_1)$ is less than or equal to the size of $G_2(V_2)$, i.e., $\dim_D(V_1) \leq \dim_D(V_2)$. Let $(G_1,G_2)$ be a dual pair in the symplectic group $Sp$. Let $Mp$ be the unique double covering of $Sp$. Let $\{1,\epsilon\}$ be the preimage of the identity element in $Sp$.
For a subgroup $H$ of $Sp$, let $MH$ be the preimage of $H$ under the double covering. Whenever we use the notation $MH$, $H$ is considered to be a subgroup of a certain $Sp$, which shall be evident from the context. Let $\omega(MG_1, MG_2)$ be a Schr\"odinger model of the oscillator representation of $Mp$ equipped with a dual pair $(MG_1,MG_2)$. The Harish-Chandra module of $\omega(MG_1, MG_2)$ consists of polynomials multiplied by the Gaussian function. Since the pair $(G_1,G_2)$ is ordered, we use $\theta(MG_1, MG_2)$ to denote the theta correspondence from $\mathcal R(MG_1, \omega(MG_1, MG_2))$ to $\mathcal R(MG_2, \omega(MG_1, MG_2))$. We use $\bold n$ to denote the constant vector $$(n,n, \ldots, n).$$ The dimension of $\bold n$ is determined by the context. Finally, we say a vector $$x=(x_1,x_2, \ldots, x_n) \prec 0$$ if $$\sum_{j=1}^k x_j < 0 \qquad \forall k \geq 1,$$ and $x \preceq 0$ if $$\sum_{j=1}^k x_j \leq 0 \qquad \forall k \geq 1.$$ \\ In this paper, the space of $m \times n$ matrices will be denoted by $M(m,n)$. The set of nonnegative integers will be denoted by $\mathbb N$. For the group $O(p,q)$, we assume that $p \leq q$ unless stated otherwise. For a reductive group $G$, $\Pi(G)$ and $\Pi_u(G)$ will be the admissible dual and the unitary dual respectively. \\ \\ We extend the definition of matrix coefficients to the NULL representation. The matrix coefficients of the NULL representation are defined to be the zero function. \subsection{Theta Correspondence in Semistable Range and Unitary Representations} Let $\pi \in \Pi(MG_1)$. Following ~\cite{li2}, for every $u,v \in \pi$ and $\phi, \psi \in \omega(MG_1, MG_2)$, we formally define \begin{equation}~\label{avera} (\phi \otimes v, \psi \otimes u)_{\pi}=\int_{MG_1}(\omega(MG_1, MG_2) (\tilde g_1) \phi, \psi)(u, \pi(\tilde g_1) v) d \tilde g_1. \end{equation} Roughly speaking, if the functions $$(\omega(MG_1, MG_2) (\tilde g_1) \phi, \psi)(u, \pi(\tilde g_1) v) \qquad (\forall \phi, \psi \in \omega(MG_1, MG_2); \forall u, v \in \pi)$$ are in $L^{1}(MG_1)$ and $\pi(\epsilon)=-1$, then $\pi$ is said to be in the semistable range of $\theta(MG_1, MG_2)$ (see ~\cite{theta}). We denote the semistable range of $\theta(MG_1, MG_2)$ by $\mathcal R_s(MG_1, MG_2)$. \\ \\ {\bf Suppose from now on} that $\pi \in \mathcal R_s(MG_1, MG_2)$. In ~\cite{theta}, we showed that if $(,)_{\pi}$ does not vanish, then $(,)_{\pi}$ descends to a Hermitian form on $\theta(MG_1, MG_2)(\pi)$. For $\pi \in \mathcal R_s(MG_1, MG_2)$, we {\bf define} \begin{equation} \theta_s(MG_1,MG_2)(\pi)=\left\{ \begin{array}{cc} \theta(MG_1,MG_2)(\pi) & \mbox{ if $(,)_{\pi} \neq 0$} \\ 0 & \mbox{ if $(,)_{\pi}=0$} \end{array} \right. \end{equation} $\theta_s(MG_1, MG_2)(\pi)$ as a real vector space is just $\omega(MG_1, MG_2) \otimes \pi$ modulo the radical of $(,)_{\pi}$ (see ~\cite{li2}, ~\cite{theta}). The main object of study in this paper is $\theta_s$. \\ \\ If $\pi$ is in $\mathcal R_s(MG_1, MG_2)$ but not in $\mathcal R(MG_1, \omega(MG_1, MG_2))$, our construction from ~\cite{theta} will result in a vanishing $(,)_{\pi}$. Thus $\theta_s(MG_1, MG_2)(\pi)$ ``vanishes''. In this case, $\theta_s=\theta$ trivially. The remaining question is whether $(,)_{\pi} \neq 0$ if $\pi \in \mathcal R(MG_1, \omega(MG_1, MG_2))$.
Conjecturally, $\theta_s(MG_1, MG_2)$ should agree with the restriction of $\theta(MG_1, MG_2)$ to $\mathcal R_s(MG_1, MG_2)$ (see ~\cite{theta}, ~\cite{li1}). \\ \\ For $\pi$ a Hermitian representation, it can be easily shown that $(,)_{\pi}$ is an invariant Hermitian form on $\theta(MG_1, MG_2)(\pi)$ if $(,)_{\pi}$ does not vanish. This is a special case of Przebinda's result in (~\cite{pr2}). For $\pi$ unitary, we do not know whether $(,)_{\pi}$ must be positive semidefinite in general. Nevertheless, in ~\cite{unit}, we have proved the semi-positivity of $(,)_{\pi}$ under certain conditions on the leading exponents of $\pi$ (see ~\cite{knapp}, ~\cite{wallach}). Fix a Cartan decomposition for $Sp_{2n}(\mathbb R)$ and $O(p,q)$. Fix the standard basis of $\mathfrak a$ for $Sp_{2n}(\mathbb R)$ and $O(p,q)$ (see 6.1). The leading exponents of an irreducible admissible representation lie in the complex dual of the Lie algebra $\mathfrak a$ of $A$. \begin{thm} Suppose $p+q \leq 2n+1$. Let $\pi$ be an irreducible unitary representation each of whose leading exponents satisfies \begin{equation}~\label{ss1} \Re(v)-(\bold{n}-\frac{\bold{p+q}}{2})+\rho(O(p,q)) \preceq 0. \end{equation} Then $(,)_{\pi}$ is positive semidefinite. Thus, $\theta_s(p,q;2n)(\pi)$ is either unitary or vanishes. \end{thm} We denote the set of representations in $\Pi(MO(p,q))$ satisfying (~\ref{ss1}) by $\mathcal R_{ss}(p,q;2n)$. The set $\mathcal R_s(MO(p,q),MSp_{2n}(\mathbb R))$ is written as $\mathcal R_s(p,q;2n)$ in short. \begin{thm} Suppose $n < p \leq q$. Let $\pi$ be an irreducible unitary representation each of whose leading exponents satisfies \begin{equation}~\label{ss2} \Re(v)-(\frac{\bold{p+q}}{2}-\bold{n}-\bold{1})+\rho(Sp_{2n}(\mathbb R)) \preceq 0. \end{equation} Then $(,)_{\pi}$ is positive semidefinite. Thus, $\theta_s(2n;p,q)(\pi)$ is either unitary or vanishes. \end{thm} We denote the set of representations in $\Pi(MSp_{2n}(\mathbb R))$ satisfying (~\ref{ss2}) by $\mathcal R_{ss}(2n; p,q)$. The set $\mathcal R_s(MSp_{2n}(\mathbb R), MO(p,q))$ is written as $\mathcal R_s(2n; p,q)$ in short.\\ \subsection{Estimates on Leading Exponents and $L(p,n)$} In this paper, we establish some estimates on the growth of the matrix coefficients of $\theta(p,q;2n)(\pi)$ and of $\theta(2n;p,q)(\pi)$ for $\pi$ in $\mathcal R_s(p,q;2n)$ and $\mathcal R_s(2n;p,q)$ respectively. We achieve this by studying the decay of the function $$L(a, \phi)=\int_{b_1 \geq b_2 \geq \ldots \geq b_p \geq 1} (\prod_{i=1,j=1}^{n,p}(a_i^2+ b_j^2)^{-\frac{1}{2}}) \phi(b_1,b_2,\ldots,b_p) d b_1 d b_2 \ldots d b_p$$ as a function of $a \in \mathbb R^n$. In general, the decay of $L(a, \phi)$ depends on the decay of $\phi$. In Section 5, we define a map $L(p,n)$ to describe this dependence. The map $L(p,n)$ is a continuous map from $$C(p)=\{\lambda \prec 0 \mid \lambda \in \mathbb R^p\}$$ to $$C(n)=\{ \mu \prec 0 \mid \mu \in \mathbb R^n \}.$$ Its algorithm is developed in Section 5. For some special vectors in $C(p)$, $L(p,n)$ is just a reordering plus an augmentation or truncation. In this paper, we prove \begin{thm} Let $L(n,p)$ be defined as in Section 5. Let $a(g_2)$ be the middle term of the $KA^+K$ decomposition of $g_2 \in Sp_{2n}(\mathbb R)$. Let $b(g_1)$ be the middle term of the $KA^+K$ decomposition of $g_1 \in O(p,q)$. \begin{enumerate} \item Suppose that $\pi \in \mathcal R_s(p,q;2n)$.
Suppose $\lambda \prec -2 \rho(O(p,q))+ \bold n$ and for every leading exponent $v$ of $\pi$, $\Re(v) \preceq \lambda$. Then the matrix coefficients of $\theta_s(p,q;2n)(\pi)$ are weakly bounded by $$a(g_2)^{L(p,n)(\lambda+2 \rho(O(p,q))-\bold n)-\bold{\frac{q-p}{2}}}.$$ \item Suppose that $\pi \in \mathcal R_s(2n;p,q)$. Suppose $\lambda \prec -2 \rho(Sp_{2n}(\mathbb R))+\frac{\bold{p+q}}{2}$ and for every leading exponent $v$ of $\pi$, $\Re(v) \preceq \lambda$. Then the matrix coefficients of $\theta_s(2n;p,q)(\pi)$ are weakly bounded by $$b(g_1)^{L(n,p)(\lambda+2 \rho(Sp_{2n}(\mathbb R))-\frac{\bold{p+q}}{2})}.$$ \end{enumerate} \end{thm} The definition of weak boundedness is given in Section 3. \subsection{Quantum Induction} The idea of composing two theta correspondences to obtain ``new'' representations has been known for years. For example, one can compose $\theta(p,q;2n)$ with $\theta(2n;p^{\prime},q^{\prime})$. The nature of $\theta(2n;p^{\prime},q^{\prime}) \theta(p,q;2n)(\pi)$ seems to be inaccessible except in the stable range cases. In this paper, we treat a somewhat more accessible object, namely, $$\theta_s(2n;p^{\prime},q^{\prime}) \theta_s(p,q;2n)(\pi).$$ Our construction is carried out through the study of the Hermitian form $(,)_{\pi}$. Due to the unitarity theorems we proved in (~\cite{unit}), under restrictions as specified in Equations (~\ref{ss1}) and (~\ref{ss2}), quantum induction preserves unitarity. Our main result can be stated as follows. \begin{thm}[Main Theorem] \begin{itemize} \item Suppose \begin{enumerate} \item $q^{\prime} \geq p^{\prime} > n$; \item $p^{\prime}+q^{\prime}-2n \geq 2n-(p+q)+2 \geq 1$; \item $p+q \equiv p^{\prime}+q^{\prime} \pmod 2$. \end{enumerate} Let $\pi$ be an irreducible unitary representation in $\mathcal R_{ss}(p,q;2n)$. Suppose that $(,)_{\pi}$ does not vanish. Then \begin{enumerate} \item $\theta_s(p,q;2n)(\pi)$ is unitary. \item $\theta_s(p,q;2n)(\pi) \in \mathcal R_{ss}(2n;p^{\prime},q^{\prime})$. \item $\theta_s(2n; p^{\prime},q^{\prime})\theta_s(p,q;2n)(\pi)$ is either an irreducible unitary representation or the NULL representation. \end{enumerate} \item Suppose \begin{enumerate} \item $2n^{\prime}-p-q+2 \geq p+q-2n$; \item $n < p \leq q$. \end{enumerate} Let $\pi$ be a unitary representation in $\mathcal R_{ss}(2n;p,q)$. Suppose $(,)_{\pi}$ does not vanish. Then \begin{enumerate} \item $\theta_s(2n;p,q)(\pi)$ is unitary. \item $\theta_s(2n;p,q)(\pi) \in \mathcal R_{ss}(p,q;2n^{\prime})$. \item $\theta_s(p,q;2n^{\prime})\theta_s(2n;p,q)(\pi)$ is either an irreducible unitary representation or the NULL representation. \end{enumerate} \end{itemize} \end{thm} The purpose of assuming $\pi \in \mathcal R_{ss}$ is to guarantee the unitarity of $Q(*)(\pi)$. In fact, for any $\pi$, the condition on the sizes of the related dual pairs can be computed easily to define nonunitary quantum induction. In general, the underlying Hilbert space of the induced representation is ``invisible'' under quantum induction except in certain limit cases where quantum induction becomes unitary parabolic induction (see Section 6 and ~\cite{quan}). \begin{conj} Suppose $\pi$ is a unitary representation in $\mathcal R_{ss}$.
\begin{itemize} \item The quantum induction $Q(p,q;2n;p^{\prime},q^{\prime})(\pi)$ for $2n-p-q+2=p^{\prime}+q^{\prime}-2n$ can be obtained via unitarity-preserving parabolic induction and cohomological induction from $\pi$. \item The quantum induction $Q(2n;p,q;2n^{\prime})(\pi)$ for $p+q-2n-2=2n^{\prime}-p-q$ can be obtained as a subfactor via unitarity-preserving parabolic induction from $\pi$. \end{itemize} \end{conj} For the cases $p+q=2n+1=p^{\prime}+q^{\prime}$ and $p+q=2n+1=2n^{\prime}+1$, by a theorem of Adams-Barbasch, $Q$ is either the identity map or vanishes (~\cite{ab}). Our conjecture holds trivially, i.e., no induction is needed. For the case $p+q+p^{\prime}+q^{\prime}=4n+2$ and $p-p^{\prime}=q-q^{\prime}$, our result in Section 6 gives some indication that $Q(p,q;2n;p^{\prime},q^{\prime})(\pi)$ can be obtained from $$Ind_{SO_0(p,q) GL_0(p^{\prime}-p) N}^{SO_0(p^{\prime},q^{\prime})}(\pi \otimes 1).$$ Let me make one remark regarding the nonvanishing of $(,)_{\pi}$. In ~\cite{non1} we prove \begin{thm}[~\cite{non1}] Suppose $p+q \leq 2n+1$. Let $\pi \in \mathcal R_s(p,q;2n)$. Then at least one of $$(,)_{\pi}, \qquad (,)_{\pi \otimes \det}$$ does not vanish. \end{thm} For $\pi \in \mathcal R_s(2n;p,q)$, the nonvanishing of $(,)_{\pi}$ is hard to detect since it depends on $p,q$ (~\cite{ab}, ~\cite{thesis}, ~\cite{moeglin}). A result of Jian-Shu Li says that $(,)_{\pi}$ does not vanish if $p,q \geq 2n$. We are not aware of any more general nonvanishing theorems. \\ \\ Finally, concerning associated varieties, Przebinda shows that associated varieties behave reasonably well under theta correspondence under certain strong hypotheses (~\cite{pr1}). We conjecture that quantum induction induces an induction on associated varieties and wave front sets. The exact description of the associated variety under quantum induction can be predicted based on ~\cite{pdk}. \begin{conj} \begin{itemize} \item Under the same assumptions as in the main theorem, let $\pi$ be a unitary representation in $\mathcal R_{ss}(p,q;2n)$. Let $\mathcal O_{\bold d}$ be the associated variety of $\pi$ with $\bold d$ a partition (see Ch 5, ~\cite{cm}). Let $\mathcal O_{\bold f}$ be the associated variety of $Q(p,q;2n;p^{\prime},q^{\prime})(\pi) \neq 0$. Then $\bold f^t=(p^{\prime}+q^{\prime}-2n, 2n-p-q, \bold d^t)$. \item Under the same assumptions as in the main theorem, let $\pi$ be a unitary representation in $\mathcal R_{ss}(2n;p,q)$. Let $\mathcal O_{\bold d}$ be the associated variety of $\pi$ with $\bold d$ a partition. Let $\mathcal O_{\bold f}$ be the associated variety of $Q(2n;p,q;2n^{\prime})(\pi) \neq 0$. Then $\bold f^t=(2n^{\prime}-p-q, p+q-2n, \bold d^t)$. \end{itemize} \end{conj} We remark that our situation is different from the situation treated in ~\cite{pr1}, with some overlaps. The description of the wave front set under quantum induction can be predicted based on ~\cite{pan}. \section{Theta Correspondence} Let $(O(p,q), Sp_{2n}(\mathbb R))$ be a reductive dual pair in $Sp_{2n(p+q)}(\mathbb R)$. Let $$j: Mp_{2n(p+q)}(\mathbb R) \rightarrow Sp_{2n(p+q)}(\mathbb R)$$ be the double covering. Let $\{1, \epsilon\}=j^{-1}(1)$. Let $MO(p,q)=j^{-1}(O(p,q))$ and $MSp_{2n}(\mathbb R)=j^{-1}(Sp_{2n}(\mathbb R))$.
Fix a maximal compact subgroup $U$ of $Sp_{2n(p+q)}(\mathbb R)$ such that $$U \cap Sp_{2n}(\mathbb R) \cong U(n), \qquad U \cap O(p,q) \cong O(p) \times O(q).$$ Then $MU$ is a maximal compact subgroup of $Mp_{2n(p+q)}(\mathbb R)$. Let $\omega(p,q;2n)$ be the oscillator representation of $Mp_{2n(p+q)}(\mathbb R)$. The representation $\omega(p,q;2n)$, or sometimes $\omega(2n;p,q)$, is regarded as an admissible representation of $Mp_{2n(p+q)}(\mathbb R)$ equipped with a fixed dual pair $(O(p,q), Sp_{2n}(\mathbb R))$. Let $\mathcal P$ be its Harish-Chandra module. Then $\omega(p,q;2n)$ can be restricted to $MO(p,q)$ and $MSp_{2n}(\mathbb R)$. Howe's theorem states that there is a one-to-one correspondence $$\theta(p,q;2n): \mathcal R(MO(p,q), \omega(p,q; 2n)) \rightarrow \mathcal R(MSp_{2n}(\mathbb R), \omega(p,q;2n)).$$ \subsection{$MO(p,q)$ and $MSp_{2n}(\mathbb R)$} The groups $MO(p,q)$ and $MSp_{2n}(\mathbb R)$ are double covers of $O(p,q)$ and $Sp_{2n}(\mathbb R)$. Depending on the parameters $n$, $p$ and $q$, they may be quite different. \begin{lem}~\label{msp2n} \begin{enumerate} \item If $p+q$ is odd, then the double cover $MSp_{2n}(\mathbb R)$ does not split. It is the metaplectic group $Mp_{2n}(\mathbb R)$. The representations in $\mathcal R(Mp_{2n}(\mathbb R), \omega(p,q; 2n))$ are genuine representations of $Mp_{2n}(\mathbb R)$. \item If $p+q$ is even, then the double cover $MSp_{2n}(\mathbb R)$ splits. It is the product of $Sp_{2n}(\mathbb R)$ and $\{1, \epsilon \}$. The representations in $\mathcal R(MSp_{2n}(\mathbb R), \omega(p,q;2n))$ can be identified with representations of $Sp_{2n}(\mathbb R)$ by tensoring with the nontrivial character of $\{1, \epsilon\}$. \item In both cases, any representation in $$\mathcal R(MSp_{2n}(\mathbb R), \omega(p,q; 2n))$$ can be identified with a representation of $Mp_{2n}(\mathbb R)$: in the former case, a genuine representation, and in the latter case, a nongenuine representation. \end{enumerate} \end{lem} We do not know the earliest reference. The details can be worked out easily and can be found in ~\cite{ab}. \begin{lem}~\label{mopq} \begin{enumerate} \item As a group, $$MO(p,q) \cong \{ (\xi, g) \mid g \in O(p,q), \xi^2=\det g^n \}.$$ \item $\xi$ is a character of $MO(p,q)$. Any representation in $\mathcal R(MO(p,q), \omega(p,q;2n))$ can be identified with a representation of $O(p,q)$ by tensoring with $\xi$. \item $MSO(p,q)$ can be identified with the group product $$SO(p,q) \times \{1, \epsilon\}.$$ \item If $n$ is even, $MO(p,q) \cong O(p,q) \times \{1, \epsilon\}$. \end{enumerate} \end{lem} The details can be found in ~\cite{ab} or ~\cite{unit}.
We must keep in mind that for $p+q$ odd, $$\mathcal R(MSp_{2n}(\mathbb R), \omega(p,q;2n)) \subset \Pi_{genuine}(Mp_{2n}(\mathbb R))$$ and for $p+q$ even $$\mathcal R(MSp_{2n}(\mathbb R), \omega(p,q;2n)) \subset \Pi(Sp_{2n}(\mathbb R)).$$ \subsection{Averaging Integral $(,)_{\pi}$} Let $O(p,q)$ be the orthogonal group preserving the symmetric form defined by $$I_{p,q}=\left( \begin{array}{ccc} 0_p & 0 & I_p \\ 0 & I_{q-p} & 0 \\ I_p & 0 & 0_p \end{array} \right).$$ Fix a Cartan decomposition with $$A=\{ {\rm diag}(a_1,a_2, \ldots, a_p,\overbrace{1,\ldots,1}^{q-p},a_1^{-1}, a_2^{-1}, \ldots, a_p^{-1}) \mid a_i >0 \}$$ and a positive Weyl chamber $$A^+=\{ {\rm diag}(a_1, a_2, \ldots, a_p,\overbrace{1,\ldots,1}^{q-p},a_1^{-1}, a_2^{-1}, \ldots, a_p^{-1}) \mid a_1 \geq a_2 \geq \ldots \geq a_p \geq 1 \}.$$ The half sum of the positive restricted roots of $O(p,q)$ is $$\rho(O(p,q))=\overbrace{(\frac{p+q-2}{2}, \frac{p+q-4}{2}, \ldots, \frac{q-p}{2})}^{p}.$$ Let $Sp_{2n}(\mathbb R)$ be the symplectic group that preserves the skew-symmetric form defined by $$W_n=\left( \begin{array}{cc} 0_n & -I_n \\ I_n & 0_n \end{array} \right).$$ Let $K$ be the intersection of $Sp_{2n}(\mathbb R)$ with the orthogonal group $O(2n)$ which preserves the Euclidean inner product on $\mathbb R^{2n}$. Let $$A=\{a= {\rm diag}(a_1, a_2, \ldots, a_n, a_1^{-1}, \ldots, a_n^{-1}) \mid a_i > 0 \},$$ $$A^+=\{a= {\rm diag}(a_1, a_2, \ldots, a_n, a_1^{-1}, \ldots, a_n^{-1}) \mid a_1 \geq a_2 \geq \ldots \geq a_n \geq 1\}.$$ The half sum of the positive restricted roots of $Sp_{2n}(\mathbb R)$ is $$\rho(Sp_{2n}(\mathbb R))=\overbrace{(n,n-1, \ldots, 1)}^n.$$ For each irreducible admissible representation of a semisimple group $G$ of real rank $r$, there are a number of $r$-dimensional complex vectors in $\mathfrak a^*$, called leading exponents, attached to it. Leading exponents are the main data used to produce the Langlands classification (see ~\cite{langlands} and ~\cite{knapp}). \begin{defn} An irreducible representation $\pi$ of $O(p,q)$ is said to be in the {\it semistable range} of $\theta(p,q;2n)$ if and only if each leading exponent $v$ of $\pi$ satisfies \begin{equation} \sum_{i=1}^{j} \left(\Re(v_i)+(p+q-2i)-n\right) < 0 \qquad (\forall \ \ j \in [1,p]), \end{equation} i.e., $$\Re(v)-\bold n+ 2 \rho(O(p,q)) \prec 0.$$ An irreducible representation $\pi$ of $Mp_{2n}(\mathbb R)$ is said to be in the semistable range of $\theta(2n;p,q)$ if and only if every leading exponent $v$ of $\pi$ satisfies \begin{equation} \sum_{i=1}^k \left(\Re (v_i)-\frac{p+q}{2}+2n+2-2i\right) < 0 \qquad (\forall \ \ k \in [1,n]), \end{equation} i.e., $$\Re(v) -\frac{\bold{p+q}}{2}+ 2 \rho(Sp_{2n}(\mathbb R)) \prec 0.$$ \end{defn} If $W$ is a complex linear space, we use a superscript $W^c$ to denote $W$ equipped with the conjugate complex linear structure. Let $\pi \in \mathcal R_s(MG_1,MG_2)$. We define a complex linear pairing $$(\mathcal P^c \otimes \pi, \mathcal P \otimes \pi^c) \rightarrow \mathbb C$$ as follows: for $\phi \in \mathcal P, \psi \in \mathcal P^c, v \in \pi^c, u \in \pi$, $$(\phi \otimes v , \psi \otimes u)_{\pi} =\int_{MO(p,q)} (\phi, \omega(g) \psi) (\pi(g)u, v) d g.$$ If $\pi$ is unitary, $(,)_{\pi}$ is an invariant Hermitian form with respect to the action of $MG_2$. \begin{thm}(see ~\cite{theta}) Suppose $(\pi, V)$ is a unitary representation in the semistable range of $\theta(MG_1,MG_2)$. Then $(,)_{\pi}$ is well-defined.
Suppose $\mathcal R_{\pi}$ is the radical of $(,)_{\pi}$ with respect to $\mathcal P \otimes V^c$. If $(,)_{\pi}$ does not vanish, then \begin{itemize} \item $\pi$ occurs in $\mathcal R(MG_1, \omega(MG_1,MG_2))$; \item $\mathcal P \otimes V^c / \mathcal R_{\pi}$ is irreducible; \item $\mathcal P \otimes V^c / \mathcal R_{\pi}$ is isomorphic to $\theta(MG_1, MG_2)(\pi)$; \item $\theta_s(MG_1,MG_2)(\pi)$ is a Hermitian representation of $MG_2$. \end{itemize} \end{thm} Thus the Harish-Chandra module of $\theta_s(MG_1, MG_2)(\pi)$ can be defined as $\mathcal P \otimes V^c/\mathcal R_{\pi}$. \subsection{Oscillator Representation} The oscillator representation, also known as the Segal-Shale-Weil representation, is a unitary representation of the metaplectic group $Mp$. The construction of the oscillator representation can be found in the papers of Segal, Shale and Weil (~\cite{shale},~\cite{segal}, ~\cite{weil}). In this section, we give a basic estimate of the matrix coefficients of the oscillator representation. A proof of Theorem 3.3.1 can also be found in ~\cite{howe82} (Prop. 8.1).\\ \\ Let $g \in Sp_{2n}(\mathbb R)$. Let $a(g)$ be the middle term of the $KAK$ decomposition of $g$ such that $a \in A^+$. Let $H(g)=\log a(g)$. Then $$H(g)={\rm diag}(H_1(g), H_2(g), \ldots , H_n(g), -H_1(g), \ldots, -H_n(g))$$ is in the Weyl chamber $\mathfrak a^+$.\\ \\ Let $Mp_{2n}(\mathbb R)$ be the double covering of $Sp_{2n}(\mathbb R)$. The middle term of the $KAK$ decomposition of $Mp_{2n}(\mathbb R)$ remains the same. Let $(\omega_n, L^2(\mathbb R^n))$ be the Schr\"odinger model of the oscillator representation of $Mp_{2n}(\mathbb R)$ as in ~\cite{theta}. Let $$\mu(x)=\exp (-\frac{1}{2}(x_1^2+x_2^2 + \ldots + x_n^2))$$ be the Gaussian function. The Harish-Chandra module $\mathcal P_n$ consists of the polynomial functions multiplied by the Gaussian function, as verified in ~\cite{theta}. We write $$x^{\alpha}=\prod_{i=1}^n x_i^{\alpha_i}.$$ Harish-Chandra's theory says that the $Mp_{2n}(\mathbb R)$ action on $\mathcal P_n$ can be controlled by the $A$ action on fixed $K$-types of $\omega_n$. \begin{thm}~\label{metaplectic} For any $a \in A$, we have $$(\omega_n(a) x^{\alpha} \mu(x), x^{\beta} \mu(x))=c_{\alpha,\beta} \prod_{i=1}^n a_i^{\alpha_i+\frac{1}{2}}(1+a_i^2)^{-\frac{\alpha_i+\beta_i+1}{2}}.$$ In addition, $$|(\omega_n(a) x^{\alpha} \mu(x), x^{\beta} \mu(x))| \leq c \prod_{i=1}^n (a_i+a_i^{-1})^{-\frac{1}{2}}.$$ In general, for every $\phi, \psi \in \mathcal P_n$, we have $$|(\omega_n(g) \phi, \psi) | \leq c \prod_{i=1}^n (a_i(g)+a_i^{-1}(g))^{-\frac{1}{2}}.$$ \end{thm} The proof of the first statement can be found in ~\cite{theta}. We observe that \begin{equation} \begin{split} & |(\omega_n(a) x^{\alpha} \mu(x), x^{\beta} \mu(x)) |\\ = & |c_{\alpha,\beta} \prod_{i=1}^n a_i^{\alpha_i+\frac{1}{2}}(1+a_i^2)^{-\frac{\alpha_i+\beta_i+1}{2}} | \\ = & |c_{\alpha,\beta} \prod_{i=1}^n (a_i+a_i^{-1})^{-\frac{1}{2}}(1+a_i^2)^{-\frac{\beta_i}{2}} (1+a_i^{-2})^{-\frac{\alpha_i}{2}} | \\ \leq & c_{\alpha,\beta} \prod_{i=1}^n (a_i +a_i^{-1})^{-\frac{1}{2}}. \end{split} \end{equation} The second statement is proved. The third statement follows immediately from the $K$-finiteness of $\phi$ and $\psi$. \\ \\ The estimates on the right-hand side are invariant under the Weyl group action, and thus do not depend on the choice of the Weyl chamber $\mathfrak a^+$.
\subsection{Growth of Matrix Coefficients} \begin{defn} Suppose $X$ is a Borel measure space equipped with a norm $\|.\|$ such that \begin{itemize} \item $\|x \| \geq 0$ for all $x \in X$; \item the set $\{ \|x \| \leq r \}$ is compact. \end{itemize} Let $f(x)$ and $\phi(x)$ be continuous functions defined over $X$. Suppose $\phi(x)$ approaches $0$ as $\|x \| \rightarrow \infty$. A function $f(x)$ is said to be weakly bounded by the function $\phi(x)$ if there exists a $\delta_0>0$ such that for every $\delta_0 > \delta >0$, there exists a $C >0$ depending on $\delta$ such that $$|f(x)| \leq C \phi(x)^{1-\delta} \qquad (\forall \ x \in X).$$ \end{defn} The typical case is when $f(x)$ does not decay as fast as $\phi(x)$ but decays faster than $\phi(x)^{1-\delta}$.\\ \\ Let $\pi$ be an irreducible representation of a reductive group $G$. Let $K$ be a maximal compact subgroup of $G$. We adopt the notation from Chapter VIII in ~\cite{knapp}. We equip $G$ with the norm $$g \rightarrow \|\log(a(g))\|=(\log a(g), \log a(g))^{\frac{1}{2}},$$ where $(,)$ is a real $\mathfrak g$-invariant symmetric form whose restriction to $\mathfrak a$ is positive definite. \\ \\ {\bf Example}: An irreducible representation $\pi$ of a reductive group $G$ is tempered if and only if its matrix coefficients are weakly bounded by $$a(g)^{ -\rho},$$ where $\rho$ is the half sum of positive restricted roots and $a(g)$ is the middle term of the $KAK$ decomposition with $a(g)$ in the positive Weyl chamber $A^+$ (see ~\cite{knapp}). \begin{thm}~\label{equ} Let $\pi$ be an irreducible unitary representation of $G$. Let $\lambda \prec 0$. The following are equivalent. \begin{enumerate} \item Every leading exponent $v$ of $\pi$ has $\Re(v) \preceq \lambda$. \item There is an integer $q \geq 0$ such that every $K$-finite matrix coefficient is bounded by a multiple of $(1+\|\log a(g)\|)^q \exp(\lambda(\log a(g)))$. \item Every $K$-finite matrix coefficient $\phi(g)$ of $\pi$ is bounded by $C a(g)^{\lambda+ \delta}$ for any $\delta \succ 0$. \item Every $K$-finite matrix coefficient of $\pi$ is weakly bounded by $a(g)^{\lambda}$. \end{enumerate} \end{thm} See Chapter VIII.8,13 ~\cite{knapp} or Chapter 4.3 ~\cite{wallach} for details. The first three statements are equivalent without assuming the unitarity of $\pi$ and $\lambda \prec 0$. \section{Twisted Integral} Let $A^+=\{a_1 \geq a_2 \geq \ldots \geq 1 \}$. In this section, we will study the following integrals $$L(a,\lambda)=\int_{B^+} \prod_{i=1}^p (\prod_{k=1}^n (a_k^2+b_i^2)^{-\frac{1}{2}}) b_i^{\lambda_i} d b_i$$ and $$L(a, \phi)=\int_{b_1 \geq b_2 \geq \ldots \geq b_p \geq 1} \prod_{i,j}(a_i^2+ b_j^2)^{-\frac{1}{2}} \phi(b_1,b_2,\ldots,b_p) d b_1 d b_2 \ldots d b_p.$$ The domain of $a$ will always be $A^+$ unless stated otherwise. We are interested in the growth of $L(a,\phi)$ as $a$ goes to infinity. Variables and parameters are assumed to be real in this section. \subsection{Single Variable Case $a \geq 1$} \begin{lem} Suppose that $a \geq 1$. The integral $$L(a, \lambda)=\int_{b \geq 1} (a^2+b^2)^{-\frac{1}{2}} b^{\lambda} {d b}$$ converges if and only if $\lambda < 0$. In addition, $L(a, \lambda)$ is weakly bounded by $a^{\lambda}$ if $-1 \leq \lambda<0$ and is bounded by a multiple of $a^{-1}$ if $\lambda < -1$. \end{lem} Proof: From classical analysis, the integral $$\int_{b \geq 1} b^{-1+\lambda} {d b}$$ converges if and only if $\lambda<0$. For a fixed $a$ and any $b>1$, $b^2 \leq a^2+b^2 \leq (1+a^2) b^2$.
Hence $$\int_{b \geq 1} b^{-1} b^{\lambda} d b \geq \int_{b \geq 1} (a^2+b^2)^{-\frac{1}{2}} b^{\lambda} {d b} \geq \int_{b \geq 1} (1+a^2)^{-\frac{1}{2}} b^{-1} b^{\lambda} d b.$$ Therefore, $L(a, \lambda)$ converges if and only if $\lambda<0$. \\ \\ For $a \geq 1$, \begin{equation} \begin{split} L(a, \lambda) = & \int_{b \geq 1} (a^2+b^2)^{-\frac{1}{2}} b^{\lambda} d b \\ = & \int_{ab \geq 1} (a^2+a^2 b^2)^{-\frac{1}{2}} a^{\lambda+1} b^{\lambda} d b \\ = & a^{\lambda} \int_{b \geq a^{-1}} (1+b^2)^{-\frac{1}{2}} b^{\lambda} d b \\ = & a^{\lambda} \int_{b \geq 1} (1+b^2)^{-\frac{1}{2}} b^{\lambda} d b+ a^{\lambda} \int_{a^{-1}}^{1} (1+b^2)^{-\frac{1}{2}} b^{\lambda} d b. \end{split} \end{equation} For $a \geq 1$ and $a^{-1} \leq b \leq 1$ and $\lambda \neq -1$, $$\frac{1}{\sqrt{2}} b^{\lambda} \leq (1+b^2)^{-\frac{1}{2}} b^{\lambda} \leq b^{\lambda}.$$ Taking $\int_{a^{-1}}^1 d b$, we obtain $$\frac{1}{\sqrt{2}(\lambda+1)}(a^{\lambda}-a^{-1}) \leq a^{\lambda} \int_{a^{-1}}^{1} (1+b^2)^{-\frac{1}{2}} b^{\lambda} d b \leq \frac{1}{\lambda+1}(a^{\lambda}-a^{-1}).$$ Therefore, for $-1 < \lambda < 0$, $L(a, \lambda)$ is bounded by a multiple of $a^{\lambda}$; for $\lambda < -1$, $L(a, \lambda)$ is bounded by a multiple of $a^{-1}$. For $\lambda=-1$, $$\frac{1}{\sqrt{2}}a^{-1} \ln a \leq a^{-1} \int_{a^{-1}}^{1} (1+b^2)^{-\frac{1}{2}} b^{-1} d b \leq a^{-1} \ln a.$$ Therefore, $L(a, -1)$ is weakly bounded by $a^{-1}$. Q.E.D. \\ \\ \begin{lem}~\label{delta} Suppose $\lambda_0 <0$. Suppose $f(a)$ is weakly bounded by $a^{\lambda}$ for every $0>\lambda > \lambda_0$. Then $f(a)$ is weakly bounded by $a^{\lambda_0}$. \end{lem} Combining these two lemmas, we obtain \begin{thm} Suppose that $a \geq 1$. Suppose $\phi(b)$ is weakly bounded by $b^{\lambda}$ for some $\lambda <0$. Then the integral $$L(a, \phi(b))=\int_{b \geq 1} (a^2+b^2)^{-\frac{1}{2}} \phi(b) d b$$ converges. In addition, $L(a, \phi)$ is weakly bounded by $a^{\lambda}$ if $-1 \leq \lambda$ and is bounded by a multiple of $a^{-1}$ if $\lambda < -1$. \end{thm} In conclusion, the growth rate of $L(a, \phi(b))$ is a ``truncation'' of the growth rate of $\phi(b)$. \subsection{Multivariate $b$} Let $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_p)$. Let $B^+=\{b_1 \geq b_2 \geq \ldots \geq b_p \geq 1\}$. Let us consider $$L(a, \lambda)=\int_{B^+} \prod_{i=1}^p (a^2+b_i^2)^{-\frac{1}{2}} b_i^{\lambda_i} d b_i.$$ First, we observe that $$a^2+b_i^2 \geq a^{2 \eta_i} b_i^{2-2 \eta_i}$$ for any $\eta_i \in [0,1]$. The $\eta_i$ are to be determined later. We obtain \begin{equation} \begin{split} L(a, \lambda) \leq & \int_{B^+} \prod_{i=1}^p a^{-\eta_i} b_i^{-1+\eta_i+\lambda_i} d b_i \\ = & a^{-\sum_{i=1}^p \eta_i} \int_{B^+} \prod_{i=1}^p b_i^{-1+\eta_i+\lambda_i} d b_i. \\ \end{split} \end{equation} Secondly, we change the coordinates and let $$r_i=\frac{b_i}{b_{i+1}} \qquad (i=1,\ldots, p-1),$$ $$r_p=b_p.$$ Then $$b_i=\prod_{j=i}^{p} r_j \qquad (i=1,\ldots, p).$$ In addition, $B^+$ is transformed into $[1, \infty)^p$.
The differential is $$\prod_{i=1}^p d b_i=\prod_{i=1}^p (\prod_{j=i}^p r_j) \frac{d r_i}{r_i}=\prod_{i=1}^p b_i \frac{ dr_i}{r_i}.$$ We obtain \begin{equation} \begin{split} L(a, \lambda) & \leq a^{-\sum_{i=1}^p \eta_i} \int_{[1, \infty)^p} \prod_{i=1}^p b_i^{ \eta_i+\lambda_i} \frac{ dr_i}{r_i} \\ & =a^{-\sum_{i=1}^p \eta_i} \int_{[1, \infty)^p} \prod_{i=1}^p (\prod_{j=i}^p r_j^{ \eta_i+\lambda_i}) \frac{ d r_i}{r_i} \\ &=a^{-\sum_{i=1}^p \eta_i} \int_{[1, \infty)^p} \prod_{j=1}^p r_j^{\sum_{i=1}^j (\eta_i+\lambda_i)} \frac{d r_j}{r_j}. \end{split} \end{equation} This integral converges if $$\sum_{i=1}^j (\eta_i+\lambda_i) < 0 \qquad (\forall \, \, j).$$ \begin{thm} Suppose $a \geq 1$. If $\lambda \prec 0$, then $L(a, \lambda)$ converges. Furthermore, $L(a, \lambda)$ is bounded by a multiple of $$a^{-\sum_{i=1}^p \eta_i}$$ for any $\eta_i$ satisfying the condition $$\{ 0 \leq \eta_j \leq 1, \ \sum_{i=1}^j \eta_i + \sum_{i=1}^j \lambda_i <0 \qquad (j=1, \ldots, p) \}.$$ \end{thm} The condition $$\sum_{i=1}^j \eta_i + \sum_{i=1}^j \lambda_i <0 \qquad (j=1, \ldots, p)$$ can be restated as $\eta+\lambda \prec 0$. Combined with Lemma ~\ref{delta}, we have \begin{thm} Suppose $\phi(b_1, b_2, \ldots, b_p)$ on $B^+$ is weakly bounded by $b^{\lambda}$ for some $\lambda \prec 0$. Then the function $$L(a, \phi)= \int_{B^+} (\prod_{i=1}^p (a^2+b_i^2)^{-\frac{1}{2}}) \phi(b) d b_1 \ldots d b_p$$ is weakly bounded by $a^{-\mu}$ with $$\mu=\max \{\sum_{i=1}^p \eta_i \mid 0 \leq \eta_j \leq 1, \ \lambda+\eta \preceq 0 \}.$$ \end{thm} We point out the second ingredient needed to carry out estimates on $L(a,\phi)$, namely, the coordinate transform from $b$ to $r$. \subsection{Multivariate $a \in [1,\infty)^n$} This case is more complicated since the function $L(a, \phi)$ is no longer a function of a single variable. Our result here is weaker than the results for a single variable $a$. \\ \\ First we consider $$L(a,\lambda)=\int_{B^+} \prod_{i=1}^p (\prod_{k=1}^n (a_k^2+b_i^2)^{-\frac{1}{2}}) b_i^{\lambda_i} d b_i.$$ We again set the parameters $\eta_{k,i}$ to be in $[0,1]$. We have $$a_k^2+b_i^2 \geq a_k^{2 \eta_{k,i}} b_i^{2-2\eta_{k,i}}.$$ Therefore, we obtain \begin{equation} \begin{split} & L(a, \lambda) \\ \leq & \int_{B^+} \prod_{i=1}^p (\prod_{k=1}^n a_k^{-\eta_{k,i}} b_i^{-1+\eta_{k,i}}) b_i^{\lambda_i} d b_i \\ = & \prod_{k=1}^n a_k^{-\sum_{i=1}^p \eta_{k,i}} \int_{B^+} \prod_{i=1}^p b_i^{\lambda_i-n+\sum_{k=1}^n \eta_{k,i}} d b_i. \end{split} \end{equation} Now we change the coordinates $b$ into $r$. We obtain \begin{equation} \begin{split} & L(a, \lambda) \\ \leq & \prod_{k=1}^n a_k^{-\sum_{i=1}^p \eta_{k,i}} \int_{[1, \infty)^p} \prod_{i=1}^p (\prod_{j=i}^p r_j^{\lambda_i-n+\sum_{k=1}^n \eta_{k,i}} \prod_{j=i}^p r_j) \frac{ d r_i}{r_i} \\ = & \prod_{k=1}^n a_k^{-\sum_{i=1}^p \eta_{k,i}} \int_{[1, \infty)^p} \prod_{i=1}^p (\prod_{j=i}^p r_j^{\lambda_i-n+1+\sum_{k=1}^n \eta_{k,i}}) \frac{ d r_i}{r_i} \\ =& \prod_{k=1}^n a_k^{-\sum_{i=1}^p \eta_{k,i}} \int_{[1, \infty)^p} \prod_{j=1}^p r_j^{\sum_{i=1}^j (\lambda_i-n+1+\sum_{k=1}^n \eta_{k,i})} \frac{ d r_j}{ r_j}. \end{split} \end{equation} This integral converges if $$\sum_{i=1}^j (\lambda_i-n+1+\sum_{k=1}^n \eta_{k,i}) <0 \qquad (\forall \ \ 1 \leq j \leq p).$$ Since $\eta_{k,i} \in [0,1]$, we obtain the following theorem. \begin{thm} Suppose $a \in [1,\infty)^n$.
The integral $L(a, \lambda)$ converges if $$\sum_{i=1}^j (\lambda_i-n+1) <0$$ for every integer $1 \leq j \leq p$. In this situation $L(a,\lambda)$ is bounded by a multiple of $$a^{-\mu}=\prod_{k=1}^n a_k^{-\mu_k},$$ where $\mu_k= \sum_{i=1}^p \eta_{k,i}$ and the $\{ \eta_{k,i} \}$ satisfy $$\eta_{k,i} \in [0,1] \ \ \forall k,i,$$ \begin{equation} \sum_{i=1}^j (\lambda_i-n+1+\sum_{k=1}^n \eta_{k,i}) < 0 \qquad \forall \ \ j. \end{equation} \end{thm} Similarly, we obtain \begin{thm}~\label{tw1} Suppose $a \in [1,\infty)^n$. Suppose $\phi(b)b^{\bold{-n+1}}$ on $B^+$ is bounded by $b^{\lambda}$ with $\lambda \prec 0$. Then the integral $L(a, \phi)$ converges. Furthermore, $L(a,\phi)$ is bounded by a multiple of $$a^{-\mu}=\prod_{k=1}^n a_k^{-\mu_k},$$ where $\mu_k= \sum_{i=1}^p \eta_{k,i}$ and the $\{ \eta_{k,i} \}$ satisfy $$\eta_{k,i} \in [0,1] \ \ \forall k,i,$$ \begin{equation}~\label{tw11} \sum_{i=1}^j (\lambda_i+\sum_{k=1}^n \eta_{k,i}) < 0 \qquad \forall \ \ j. \end{equation} \end{thm} \section{Algorithm and Examples} Suppose $\lambda \prec 0$. We are interested in finding the ``maximal'' $\eta$, where $$\mu_k= \sum_{i=1}^p \eta_{k,i}$$ with $\eta_{k,i}$ satisfying $$\eta_{k,i} \in [0,1] \ \ \forall k,i,$$ \begin{equation}~\label{basic} \sum_{i=1}^j (\lambda_i+\sum_{k=1}^n \eta_{k,i}) < 0 \qquad \forall \ \ j. \end{equation} \subsection{A Theorem for $a \in [1, \infty)^n$} Write (~\ref{basic}) as \begin{equation}~\label{basic1} \sum_{i=1}^j (\sum_{k=1}^n \eta_{k,i}) < -\sum_{i=1}^j \lambda_i \qquad \forall \ \ j. \end{equation} First of all, since $\eta_{k,i} \geq 0$, the sequence $$\{\sum_{i=1}^j \sum_{k=1}^n \eta_{k,i} \mid j \in [1, p] \}$$ is increasing. However, the sequence $$\{-\sum_{i=1}^j \lambda_i \mid j \in [1,p] \}$$ might not be increasing. Therefore, there are redundancies in the inequalities (~\ref{basic1}). Let $j_1$ be the greatest index such that $$\sum_{i=1}^{j_1} -\lambda_i= \min\{-\sum_{i=1}^j \lambda_i \mid j \in [1,p] \}.$$ Then we consider $j \geq j_1$. Let $j_2$ be the greatest number such that $$\sum_{i=1}^{j_2} -\lambda_i = \min \{-\sum_{i=1}^{j} \lambda_i \mid j \in [j_1,p] \}.$$ If $j_2=j_1$, we stop. Otherwise, we can continue and define a sequence $$j_0=0 < j_1 < j_2 < j_3 < \ldots \leq p$$ with \begin{equation}~\label{lambdase} 0< \sum_{i=1}^{j_1} -\lambda_i < \sum_{i=1}^{j_2} -\lambda_i < \ldots < \sum_{i=1}^{p} -\lambda_i. \end{equation} Our problem is equivalent to finding $\{ \eta_{k,i} \}$ such that $$\eta_{k,i} \in [0,1] \ \ \forall k,i,$$ $$\sum_{i=1}^{j_s} (\lambda_i+\sum_{k=1}^n \eta_{k,i}) < 0 \qquad (\forall \ \ j_s).$$ Once we determine the sequence $$j_0=0 < j_1 < j_2 < j_3 < \ldots \leq p,$$ we assign numbers in $[0,1]$ to $\eta_{k,i}$ for $j_{s-1} < i \leq j_s$ such that \begin{equation}~\label{al1} \sum_{i=1}^{j_s} \sum_{k=1}^n \eta_{k,i} < -\sum_{i=1}^{j_s} \lambda_i. \end{equation} \begin{thm}~\label{gr1} Suppose $a \in [1,\infty)^n$. Suppose $\phi(b)b^{\bold{-n+1}}$ on $B^+$ is weakly bounded by $b^{\lambda}$ with $\lambda \prec 0$. Then the integral $L(a, \phi)$ converges.
Furthermore, $L(a,\phi)$ is weakly bounded by $$a^{-\mu}=\prod_{k=1}^n a_k^{-\mu_k},$$ where $\mu_k= \sum_{i=1}^p \eta_{k,i}$ and, for each $j_s>0$, the $\{ \eta_{k,i} \in [0,1] \}$ satisfy one of the following: \begin{enumerate} \item \begin{equation}~\label{ar2} \sum_{i=1}^{j_s} (\lambda_i+\sum_{k=1}^n \eta_{k,i}) =0 ; \end{equation} \item \begin{equation}~\label{ar3} \sum_{i=1}^{j_s} (\lambda_i+\sum_{k=1}^n \eta_{k,i}) < 0 \,\,\,\, \mbox{and} \,\,\,\, \eta_{k,i}=1 \ \ \forall \ k \in [1,n], i \in [j_{s-1}+1, j_s]. \end{equation} \end{enumerate} \end{thm} Proof: It suffices to show that for any $0< t <1$, $t \eta_{k,i}$ satisfies the conditions in Theorem ~\ref{tw1}. Clearly, we have $$t \eta_{k,i} \in [0,1] \qquad (\forall \ i, k)$$ and $$\sum_{i=1}^{j_s} (\lambda_i+\sum_{k=1}^n \eta_{k,i}) \leq 0.$$ From (~\ref{lambdase}), for every $s \geq 1$, $$\sum_{i=1}^{j_s} (\lambda_i+\sum_{k=1}^n t \eta_{k,i}) \leq (1-t) \sum_{i=1}^{j_s}\lambda_i < 0.$$ We have shown that (~\ref{tw11}) holds for $j=j_s$. For $j_{s-1}+1 \leq j \leq j_{s}$, since $\eta_{k,i} \geq 0$, \begin{equation} \begin{split} & \sum_{i=1}^{j} \sum_{k=1}^n t \eta_{k,i} \\ \leq & \sum_{i=1}^{j_{s}} \sum_{k=1}^n t \eta_{k,i} \\ < & -\sum_{i=1}^{j_s} \lambda_i \\ \leq & -\sum_{i=1}^{j} \lambda_i. \end{split} \end{equation} Thus, (~\ref{tw11}) holds for all $1 \leq j \leq p$. By Theorem ~\ref{tw1}, $L(a, \phi)$ is bounded by $a^{-t \mu}$ with $\mu_k= \sum_{i=1}^p \eta_{k,i}$. Hence, $L(a, \phi)$ is weakly bounded by $a^{-\mu}$. Q.E.D. \subsection{$L(p,n)$ and the Algorithm for $a \in A^+$} Theorem ~\ref{gr1} only assumes $a \in [1, \infty)^n$. Suppose from now on that $$a \in A^+=\{a_1 \geq a_2 \geq \ldots \geq a_n \geq 1 \}.$$ In order to gain better control over $L(a, \phi)$, we just need to assign numbers to $\eta_{1,i}$ to make $\mu_1$ as big as possible, then assign numbers to $\eta_{2,i}$ to make $\mu_2$ as big as possible, and so on. The only requirement is either (~\ref{ar2}) or (~\ref{ar3}). Our algorithm can be stated as follows. \begin{defn} Fix $j_s$ and assume that $\{\eta_{k,i} \mid i \leq j_{s-1} \}$ are known. We assign numbers between $0$ and $1$ to $\eta_{k,i}$ for $j_{s-1} < i \leq j_{s}$ in the following way. If (~\ref{ar3}) holds, assign $\eta_{k,i}=1$ for all $k$ and all $j_{s-1}+1 \leq i \leq j_s$. We are done. If (~\ref{ar2}) holds, we choose $\{ \eta_{1,i} \mid j_{s-1}+1 \leq i \leq j_s \}$ satisfying (~\ref{ar2}) and maximizing $\sum_{i=j_{s-1}+1}^{j_s} \eta_{1,i}$. The order of assigning numbers to $\{\eta_{1,i} \}$ for $j_{s-1} < i \leq j_{s}$ is not of our concern. Update (~\ref{ar2}). If (~\ref{ar2}) becomes trivial, we assign zero to the rest of $\{\eta_{k,i} \mid j_{s-1}+1 \leq i \leq j_s \}$ and stop. If not, choose $\{\eta_{2,i} \mid j_{s-1}+1 \leq i \leq j_s \}$ satisfying (~\ref{ar2}) and maximizing $\sum_{i=j_{s-1}+1}^{j_s} \eta_{2,i}$. Update (~\ref{ar2}) and repeat this process. We do this for each $j_s$ until we reach $i=p$. Finally, we compute $$\mu_k=\sum_{i=1}^p \eta_{k,i} \qquad (1 \leq k \leq n)$$ and obtain a unique $\mu$. Write $$L(p,n)(\lambda)=-\mu.$$ \end{defn} The domain of $L(p,n)$ consists of the $p$-dimensional real vectors with $$\lambda \prec 0.$$ The range of $L(p,n)$ consists of $n$-dimensional real vectors with $$\mu \prec 0.$$ $L(p,n)$, in general, does not produce precise information on the Langlands parameters under theta correspondence; but for a special class of representations, $L(p,n)$ will be precise.
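To illustrate the algorithm, the following is a minimal numerical sketch (an illustrative reading of the greedy assignment above, under the assumption that the constraints are enforced at every prefix $j$ rather than only at the breakpoints $j_s$, which cuts out the same feasible set): it fills the rows $\eta_{1,\cdot}, \eta_{2,\cdot}, \ldots$ one at a time against the cumulative budgets $-\sum_{i \leq j} \lambda_i$ and returns $-\mu$. It reproduces the examples computed in the next subsection.
\begin{verbatim}
def L_map(lam, n):
    """Greedy computation of L(p, n)(lam) for a vector lam with all
    partial sums negative (lam "prec" 0).  Returns the n-vector -mu."""
    p = len(lam)
    # cumulative budget -sum(lam[:j+1]) still available at each prefix j
    budget = [-sum(lam[:j + 1]) for j in range(p)]
    mu = []
    for _ in range(n):              # fill eta_{k,1..p} for k = 1, ..., n
        row, prefix = [], 0.0
        for i in range(p):
            # largest eta in [0,1] that violates no remaining prefix budget
            cap = min(budget[j] for j in range(i, p)) - prefix
            eta = max(0.0, min(1.0, cap))
            row.append(eta)
            prefix += eta
        run = 0.0
        for j in range(p):          # charge this row against the budgets
            run += row[j]
            budget[j] -= run
        mu.append(sum(row))
    return [-m for m in mu]

# Example 2 of the next subsection with p = 3, n = 5:
print(L_map([-1, -2, -3], 5))       # [-3, -2, -1, 0, 0]
\end{verbatim}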
Now, Theorem ~\ref{gr1} can be restated as follows. \begin{thm}~\label{gr2} Suppose $a \in A^+$. Suppose $\phi(b)b^{\bold{-n+1}}$ on $B^+$ is weakly bounded by $b^{\lambda}$ with $\lambda \prec 0$. Then the integral $L(a, \phi)$ converges. Furthermore, $L(a,\phi)$ is weakly bounded by $a^{\mu}$ for $\mu= L(p,n)(\lambda)$. \end{thm} \subsection{Examples} Now let us compute a few examples. Suppose $p \leq n$. \\ \\ {\bf Example 1}: For $$\lambda=(-\frac{1}{2}, -\frac{3}{2}, \ldots, -p+\frac{1}{2}),$$ $$L(p,n)(\lambda)=(-p+\frac{1}{2}, -p+1+\frac{1}{2}, \ldots, -\frac{1}{2}, 0, \ldots, 0).$$ {\bf Example 2}: For $$\lambda=(-1, -2, \ldots, -p),$$ $$L(p,n)(\lambda)=(-p, -p+1, \ldots, -1, 0, \ldots, 0).$$ {\bf Example 3}: For $$\lambda=(-\frac{1}{2}, -\frac{3}{2}, \ldots, -n+\frac{1}{2}),$$ $$L(n,p)(\lambda)=(-n+\frac{1}{2}, -n+\frac{3}{2}, \ldots, -n-\frac{1}{2}+p).$$ {\bf Example 4}: For $$\lambda=(-1,-2,\ldots, -n),$$ $$L(n,p)(\lambda)=(-n,-n+1, \ldots, -n+p-1).$$ \section{Dual Pair $(O(p,q), Sp_{2n}(\mathbb R))$ and Estimates on $\theta_s(\pi)$} Let $O(p,q)$ be the orthogonal group preserving the symmetric form defined by $$I_{p,q}=\left( \begin{array}{ccc} 0_p & 0 & I_p \\ 0 & I_{q-p} & 0 \\ I_p & 0 & 0_p \end{array} \right)$$ and $Sp_{2n}(\mathbb R)$ the standard symplectic group. We define a symplectic form on $V=M(p+q, 2n)$ by $$\Omega(v_1, v_2)={\rm Trace}(v_1 W v_2^t I_{p,q}) \qquad (\forall \ \ v_1,v_2 \in V).$$ Now, as a dual pair in $Sp(V, \Omega)$, $O(p,q)$ acts by left multiplication and $Sp_{2n}(\mathbb R)$ acts by (inverse) right multiplication. We denote both actions on $M(p+q,2n)$ by $m$. \subsection{The dual pair representation $\omega(p,q;2n)$} Let $x_{i,j}$ be the entries in the first $n$ columns of $v \in V$ and $y_{i,j}$ be the entries in the last $n$ columns of $v$. Let $$X=\{ v \in V \mid y_{i,j}=0 \}, \qquad Y=\{v \in V \mid x_{i,j}=0\}.$$ Then $X$ and $Y$ are both Lagrangian subspaces of $(V, \Omega)$. We realize the Schr\"odinger model of $Mp(V, \Omega)$ on $L^2(X)$. Let $\mathcal P(p,q;2n)$ be the Harish-Chandra module. We call the admissible representation $$(\omega(p,q;2n), \mathcal P(p,q;2n))$$ the dual pair representation. \\ \\ Now let $b={\rm diag}(b_1,b_2,\ldots, b_p,1,\ldots,1,b_1^{-1},\ldots,b_p^{-1})$. Let $$B^+=\{b \mid b_1 \geq b_2 \geq \ldots \geq b_p \geq 1\} \subseteq O(p,q).$$ Let $a={\rm diag}(a_1^{-1},a_2^{-1},\ldots,a_n^{-1},a_1,\ldots, a_n)$. Let $$A^+=\{a \mid a_1 \geq a_2 \geq \ldots \geq a_n \geq 1\} \subseteq Sp_{2n}(\mathbb R).$$ For $1\leq j \leq n$, let \[ m(b)e_{i,j}= \left\{ \begin{array}{ll} b_i e_{i,j} & {i=1,\ldots,p} \\ e_{i,j} & {i=p+1,\ldots,q} \\ b_i^{-1} e_{i,j} & {i=q+1, \ldots, p+q} \end{array} \right. \] $$m(a) e_{i,j}=a_j e_{i,j} \qquad (i=1,\ldots,p+q).$$ These formulae indicate that the embeddings $m$ of $A$ and $B$ into $GL(X)$ are simply left multiplication and (inverse) right multiplication. In fact, $$m(ab) e_{i,j}=\left\{ \begin{array}{ll} b_i a_j e_{i,j} & {i=1,\ldots,p} \\ a_j e_{i,j} & {i=p+1,\ldots,q} \\ b_i^{-1} a_j e_{i,j} & {i=q+1, \ldots, p+q} \end{array} \right. $$ Let $b(g_1)$ be the middle term of the $KAK$ decomposition of $g_1$ with $b(g_1) \in B^+$. Let $a(g_2)$ be the middle term of the $KAK$ decomposition of $g_2$ with $a(g_2) \in A^+$.
Observe that $$(b_i a_j+b_i^{-1} a_j^{-1})(b_i^{-1} a_j+b_i a_j^{-1})= (b_i^2+b_i^{-2} +a_j^2+a_j^{-2}).$$ From Theorem ~\ref{metaplectic}, we obtain \begin{thm}~\label{h} For any $\phi, \psi \in \mathcal P(p,q;2n)$, $$| (\omega(p,q;2n)(m(ab)) \phi, \psi) | \leq C \prod_{i=1}^p \prod_{j=1}^n (b_i^2+b_i^{-2} +a_j^2+a_j^{-2})^{-\frac{1}{2}} \prod_{j=1}^n (a_j + a_j^{-1})^{-\frac{q-p}{2}}.$$ Furthermore, this estimate holds for $m(g_1 g_2)$ by substituting $b(g_1)$ and $a(g_2)$ into the right hand side. \end{thm} We denote $$\prod_{i=1}^p \prod_{j=1}^n (b_i^2+b_i^{-2} +a_j^2+a_j^{-2})^{-\frac{1}{2}}$$ by $H(a,b)$. \subsection{Growth Control on $\theta_s(p,q;2n)(\pi)$} Let $(\pi, V)$ be an irreducible Harish-Chandra module in $\mathcal R_{s}(p,q;2n)$. We are interested in the following integral $$\int_{MO(p,q)}(\omega(p,q;2n)(g_1g_2) \phi, \psi)(v, \pi(g_1)u) d g_1 \qquad (u, v \in V; \ \phi, \psi \in \mathcal P(p,q;2n)).$$ Our goal is to control the growth of this integral as a function on $MSp_{2n}(\mathbb R)$. From Theorem ~\ref{h} and Theorem ~\ref{equ}, we may as well consider \begin{equation}~\label{twisted0} \int_{B^+} \prod_{j=1}^n (a_j + a_j^{-1})^{-\frac{q-p}{2}} H(a,b) b^{\lambda} b^{2 \rho_1} \prod_{i=1}^p\frac{d b_i}{ b_i}. \end{equation} Here $\rho_1$ is the half sum of the restricted positive roots of $O(p,q)$: $$\rho_1=(\frac{p+q-2}{2}, \frac{p+q-4}{2}, \ldots, \frac{q-p}{2}),$$ and $(\pi(g_1)u, v)$ is bounded by a multiple of $b(g_1)^{\lambda}$. We observe that $$\prod_{j=1}^n (a_j + a_j^{-1})^{-\frac{q-p}{2}}\int_{B^+} H(a,b) b^{\lambda} b^{2 \rho_1} \prod_{i=1}^p \frac{d b_i}{ b_i} \leq C a(g_2)^{\bold{-\frac{q-p}{2}}} L(a, \lambda+2 \rho_1- \bold 1).$$ From Theorem ~\ref{gr2}, we obtain \begin{lem} Let $\pi \in \mathcal R_s(p,q;2n)$. Suppose the $K$-finite matrix coefficients of $\pi$ are bounded by some $C b(g_1)^{\lambda}$ with $$\lambda+ 2\rho(O(p,q))-\bold n \prec 0.$$ Then the matrix coefficients of $\theta_s(p,q;2n)(\pi)$ are weakly bounded by $$a(g_2)^{L(p,n)(\lambda+2 \rho(O(p,q))-\bold n)-\bold{\frac{q-p}{2}}}.$$ \end{lem} Recall that $\pi \in \mathcal R_{ss}(p,q;2n)$ if and only if $$\Re(v)-(\bold{n}-\frac{\bold{p+q}}{2})+\rho(O(p,q)) \preceq 0$$ for every leading exponent $v$ of $\pi$. Take $$\lambda=\bold{n}-\frac{\bold{p+q}}{2}-\rho(O(p,q))+(\delta,0,\ldots, 0)$$ with $\delta$ a small positive number. Then the matrix coefficients of $\pi$ are bounded by multiples of $b(g_1)^{\lambda}$. \begin{equation} \begin{split} & L(p,n)(\lambda+2 \rho(O(p,q))-\bold{n}) \\ = & L(p,n)(-\frac{\bold{p+q}}{2}+\rho(O(p,q))+(\delta,0, \ldots,0)) \\ = & L(p,n)(-1+\delta, -2, \ldots, -p) \\ = & \left\{ \begin{array}{ll} (-p+\delta, -p+1, \ldots, -1, 0, \ldots, 0) & n \geq p \\ (-p+\delta,-p+1, \ldots, -p+n-1) & n < p \end{array} \right. \end{split} \end{equation} From Lemma ~\ref{delta}, we obtain the following theorem. \begin{thm}~\label{thetaspq} Suppose that $\pi \in \mathcal R_{ss}(p,q;2n)$. Then the matrix coefficients of $\theta_s(p,q;2n)(\pi)$ are weakly bounded by $$a(g_2)^{(-\frac{p+q}{2}, -\frac{p+q-2}{2}, \ldots, -\frac{q-p}{2}, \ldots, -\frac{q-p}{2})} \qquad (\mbox{if } n \geq p),$$ $$a(g_2)^{(-\frac{p+q}{2},-\frac{p+q-2}{2}, \ldots, -\frac{p+q-2n+2}{2})} \qquad (\mbox{if } n < p).$$ \end{thm} \subsection{Growth Control on $\theta_s(2n;p,q)(\pi)$} Let $(\pi, V)$ be an irreducible Harish-Chandra module in $\mathcal R_{s}(2n;p,q)$.
We are interested in the following integral $$\int_{MSp_{2n}(\mathbb R)}(\omega(p,q;2n)(g_1g_2) \phi, \psi)(v, \pi(g_2)u) d g_2 \qquad (u, v \in V; \ \phi, \psi \in \omega(p,q;2n)).$$ Our goal is to control the growth of this integral as a function on $MO(p,q)$. From Theorem ~\ref{h} and Theorem 8.47 in ~\cite{knapp}, it suffices to consider \begin{equation}~\label{twisted1} \int_{A^+} H(a,b) a^{\lambda} a^{2 \rho_2} \prod_{j=1}^n (a_j + a_j^{-1})^{-\frac{q-p}{2}} \frac{d a_j}{ a_j}. \end{equation} Here $\rho_2$ is the half sum of the restricted positive roots of $Sp_{2n}(\mathbb R)$: $$\rho_2=(n, n-1, \ldots, 1),$$ and $(\pi(g_2)u, v)$ is bounded by a multiple of $a(g_2)^{\lambda}$. Clearly, the integral (~\ref{twisted1}) can be controlled by $C L(a, \lambda-\bold{\frac{q-p}{2}}-\bold{1}+ 2 \rho_2)$. From Theorem ~\ref{gr2}, we obtain \begin{lem}~\label{control} Suppose that $\pi \in \mathcal R_s(2n;p,q)$, i.e., the matrix coefficients of $\pi$ are bounded by multiples of $a(g_2)^{\lambda}$ for some $$\lambda+ 2 \rho_2- \frac{\bold{p+q}}{2} \prec 0.$$ Then the matrix coefficients of $\theta_s(2n;p,q)(\pi)$ are weakly bounded by $$b(g_1)^{L(n,p)(\lambda+2 \rho_2-\frac{\bold{p+q}}{2})}.$$ \end{lem} Recall that the representation $\pi$ is in $\mathcal R_{ss}(2n;p,q)$ if and only if $$\Re(v)+\bold{n+1}+\rho_2-\frac{\bold{p+q}}{2} \preceq 0$$ for every leading exponent $v$ of $\pi$. Now let $$\lambda=\bold{-n-1}-\rho_2+\frac{\bold{p+q}}{2}+(\delta,0, \ldots, 0),$$ where $\delta$ is a small positive number. Then the matrix coefficients of $\pi$ are bounded by multiples of $a(g_2)^{\lambda}$ and $$\lambda+2 \rho_2-\frac{\bold{p+q}}{2}=\bold{-n-1}+\rho_2+(\delta,0,\ldots,0)=(-1+\delta,-2, \ldots, -n).$$ Therefore $$L(n,p)(\lambda+2 \rho_2-\frac{\bold{p+q}}{2})=(-n+\delta,-n+1, \ldots, -1,0, \ldots, 0) \qquad (p>n),$$ $$L(n,p)(\lambda+2 \rho_2-\frac{\bold{p+q}}{2})=(-n+\delta,-n+1, \ldots, -n+p-1) \qquad (p \leq n).$$ From Lemma ~\ref{delta}, we obtain \begin{thm}~\label{matsp} Suppose that $\pi$ is in $\mathcal R_{ss}(2n;p,q)$. Then the matrix coefficients of $\theta_s(2n;p,q)(\pi)$ are weakly bounded by $$b(g_1)^{(-n,-n+1,\ldots,-1,0,\ldots,0)} \qquad (p>n),$$ $$b(g_1)^{(-n,-n+1, \ldots, -n+p-1)} \qquad (p \leq n).$$ \end{thm} \subsection{Applications to Unitary Representations} We may now combine our results from ~\cite{unit} with the results we established in the previous two sections. Let us start with a unitary representation in $\mathcal R_{ss}(p,q;2n)$. \begin{thm} Suppose $p+q \leq 2n+1$. Suppose $\pi$ is a unitary representation in $\mathcal R_{ss}(p,q;2n)$ and $(,)_{\pi}$ is nonvanishing. Then $\theta_s(p,q;2n)(\pi)$ is unitary. Furthermore, the matrix coefficients of $\theta_s(p,q;2n)(\pi)$ are weakly bounded by $$a(g_2)^{(\overbrace{-\frac{p+q}{2}, -\frac{p+q-2}{2}, \ldots, -\frac{q-p}{2}-1}^p,\overbrace{-\frac{q-p}{2}, \ldots, -\frac{q-p}{2}}^{n-p})}.$$ \end{thm} In ~\cite{unit}, we proved that for $p+q$ odd we can loosen the restrictions from $\mathcal R_{ss}(p,q;2n)$ a little and unitarity still holds for $\theta_s(p,q;2n)(\pi)$. The precise statement is as follows. \begin{thm}~\label{odd} Suppose $p+q \leq 2n+1$ and $p+q$ is odd. Suppose $\pi$ is a unitary representation in $\mathcal R_{s}(p,q;2n)$ such that each leading exponent $v$ of $\pi$ satisfies $$\Re(v)-(\bold{n-\frac{p+q-1}{2}})+\rho(O(p,q)) \preceq 0.$$ If $(,)_{\pi}$ is nonvanishing, then $\theta_s(p,q;2n)(\pi)$ is unitary.
Furthermore, the matrix coefficients of $\theta_s(p,q;2n)(\primei)$ is weakly bounded by $$a(g_2)^{(\overbrace{-\frac{p+q-1}{2}, -\frac{p+q-3}{2}, \ldots, -\frac{q- p+1}{2}}^p, \overbrace{-\frac{q-p}{2} \ldots, -\frac{q-p}{2}}^{n-p})}$$ \end{thm} Similarly, we obtain the following theorem regarding $\theta_s(2n;p,q)(\primei)$. \begin{thm} Suppose that $n < p \leq q$. Suppose that $\primei$ is a unitary representation in $\mathcal R_{ss}(2n;p,q)$. If $(,)_{\primei}$ is nonvanishing, then $\theta_s(2n; p,q)(\primei)$ is unitary. Furthermore, the matrix coefficients of $\theta_s(2n;p,q)(\primei)$ are weakly bounded by $$b(g_1)^{(\overbrace{-n,-n+1,\ldots,-1}^n,\overbrace{0,\ldots,0}^{p-n})}.$$ \end{thm} \section{The Idea of Quantum Induction} In this section, we will define quantum induction first. Then we compute the infinitesimal characters of quantum induced representations. Finally, we give some indication how the limit of quantum induction will become parabolic induction. \subsection{Quantum Induction on Orthogonal Group} Consider the composition of $\theta_s(p,q;2n)$ with $\theta_s(2n;p^{\prime},q^{\prime})$. Suppose $\primei \in \mathcal R_{ss}(p,q;2n)$ and $p+q \leq 2n+1$. If $(,)_{\primei}$ is nonvanishing, then $\theta_s(p,q;2n)(\primei)$ is unitary and its leading exponents satisfy $$ \Re(v) \primereceq (\overbrace{-\frac{p+q}{2},-\frac{p+q-2}{2}, \ldots, -\frac{q- p+2}{2}}^p,\overbrace{-\frac{q-p}{2}, \ldots -\frac{q-p}{2}}^{n-p}).$$ The representation $\theta_s(p,q;2n)(\primei)$ is in $\mathcal R_{ss}(2n;p^{\prime}, q^{\prime})$ if $$(\overbrace{-\frac{p+q}{2},-\frac{p+q-2}{2}, \ldots, -\frac{q-p+2}{2}}^p, \overbrace{-\frac{q-p}{2}, \ldots -\frac{q-p}{2}}^{n- p})+\bold{(n+1)}+\rho(Sp_{2n}(\mathbb R))-\frac{\bold{p^{\prime}+q^{\prime}}}{2} \primereceq 0. $$ This is true if and only if $$-\frac{p+q}{2}+n+1+n-\frac{p^{\prime}+q^{\prime}}{2} \leq 0.$$ We obtain \begin{thm}~\label{quan1} Suppose $$q^{\prime} \geq p^{\prime} > n$$ $$p^{\prime}+q^{\prime}-2n \geq 2n-(p+q)+2 \geq 1$$ $$p+q =p^{\prime}+q^{\prime} \qquad \primemod 2.$$ Let $\primei$ be an irreducible unitary representation in $\mathcal R_{ss}(p,q;2n)$. Suppose that $(,)_{\primei}$ does not vanish. Then $\theta_s(p,q;2n)(\primei)$ is unitary and $$\theta_s(p,q;2n)(\primei) \in \mathcal R_{ss}(2n;p^{\prime},q^{\prime}).$$ Furthermore, $\theta_s(2n; p^{\prime},q^{\prime})\theta_s(p,q;2n)(\primei)$ is either a unitary representation or the NULL representation. \end{thm} \begin{defn} Let $\primei$ be a unitary representation in $\mathcal R_{ss}(p,q;2n)$. Suppose that $$q^{\prime} \geq p^{\prime} > n$$ $$p^{\prime}+q^{\prime}-2n \geq 2n-(p+q)+2 \geq 1$$ $$p+q =p^{\prime}+q^{\prime} \qquad (\mod 2)$$ We call $$Q(p,q;2n;p^{\prime},q^{\prime}): \primei \rightarrow \theta_s(2n; p^{\prime},q^{\prime})\theta_s(p,q;2n)(\primei)$$ the (one-step) quantum induction. \end{defn} If one of $(,)_{\primei}$ and $(,)_{\theta(p,q;2n)(\primei)}$ vanishes, we define our quantum induction $Q(p,q;2n;p^{\prime}, q^{\prime})(\primei)$ to be the NULL representation. \subsection{Quantum Induction on Symplectic Group} Next, we consider the composition of $\theta_s(2n;p,q)$ with $\theta_s(p,q;2n^{\prime})$. Suppose $n < p \leq q$. Let $\primei$ be a unitary representation in $\mathcal R_{ss}(p,q;2n)$. Suppose $(,)_{\primei}$ is not vanishing. 
Then the leading exponents of $\theta(2n;p,q)$ satisfy $$\Re(v) \primereceq (-n,-n+1,\ldots,-1,0,\ldots,0).$$ Therefore, $\theta(2n;p,q)$ is in $\mathcal R_{ss}(MO(p,q), \omega(p,q;2n^{\prime}))$ if $$(-n,-n+1,\ldots,-1,0,\ldots,0) -\bold{n^{\prime}}+ \frac{\bold{p+q}}{2}+\rho(O(p,q)) \primereceq 0$$ This is true if and only if $$-n-n^{\prime}+p+q-1 \leq 0.$$ \begin{thm}~\label{quan2} Suppose $2n^{\prime}-p-q \geq p+q-2n-2$ and $n < p \leq q$. Suppose $\primei$ is a unitary representation in $\mathcal R_{ss}(2n;p,q)$. If $(,)_{\primei}$ does not vanish, then $\theta_s(2n;p,q)(\primei)$ is unitary and it is in $\mathcal R_{ss}(p,q;2n^{\prime})$. Furthermore, $\theta_s(p,q;2n^{\prime})\theta_s(2n;p,q)(\primei)$ is a unitary representation or the NULL representation. \end{thm} \begin{defn} Let $p,q,n,n^{\prime}$ be nonnegative integers such that $$n < p \leq q$$ $$p+q-2n-2 \leq 2n^{\prime}-p-q$$ Let $\primei$ be a unitary representation in $\mathcal R_{ss}(2n;p,q)$. We call $$Q(2n;p,q;2n^{\prime}): \primei \rightarrow \theta_s(p,q;2n^{\prime})\theta_s(2n;p,q)(\primei)$$ the (one-step) quantum induction. \end{defn} If one of $(,)_{\primei}$ and $(,)_{\theta_s(2n;p,q)(\primei)}$ vanishes, we define our quantum induction $Q(2n; p,q;2n^{\prime})(\primei)$ to be $0$. Thus the domain of our quantum induction is $\mathcal R_{ss}(2n;p,q)$. \subsection{Quantum Inductions} We can further define $2$-step quantum induction and so on. The general quantum induction $$Q(p_1,q_1; 2n_1; p_2,q_2; 2n_2; \ldots)(\primei)$$ is defined as the composition of $\theta_s$, under the following conditions: \begin{enumerate} \item Initial Conditions: \\ $p_1+q_1 \leq 2n_1+1$. \\ $\primei$ is a unitary representation in $\mathcal R_{ss}(p_1,q_2; 2n_1)$, i.e., its leading exponents satisfy $$\Re(v)-\bold{n_1}+\frac{\bold{p_1+q_1}}{2}+\rho(O(p_1,q_1)) \primereceq 0$$ \item Inductive Conditions: $\forall \ \ j$, \\ $$ n_j < p_{j+1} \leq q_{j+1}$$ $$ p_{j+1}+q_{j+1}-2 n_{j} \leq 2n_{j+1}-p_{j+1}-q_{j+1}+2$$ $$ 2 n_j -p_j-q_j+2 \leq p_{j+1}+q_{j+1} - 2 n_j$$ $$ p_j+q_j \equiv p_{j+1}+q_{j+1} \qquad (\mod 2).$$ \end{enumerate} \begin{thm} The representation $$Q(p_1,q_1; 2n_1; p_2,q_2; 2n_2; \ldots)(\primei)$$ is either an irreducible unitary representation or the NULL representation. \end{thm} The general quantum induction $$Q(2n_1; p_1,q_1; 2n_2; p_2,q_2; 2n_3; \ldots)(\primei)$$ is defined as the composition of $\theta_s$ under the following conditions: \begin{enumerate} \item Initial Conditions: \\ $n_1 < p_1 \leq q_1$ \\ $\primei$ is a unitary representation in $\mathcal R_{ss}(2n_1; p_1,q_1)$, i.e., its leading exponents satisfy $$\Re(v)-\frac{\bold{p_1+q_1}}{2}+\bold{n+1}+\rho(Sp_{2n_1}(\mathbb R)) \primereceq 0$$ \item Inductive Conditions: $\forall \ \ j$, \\ $$ n_j < p_{j} \leq q_{j}$$ $$ p_j+q_j-2 n_{j} \leq 2n_{j+1}-p_j-q_j+2$$ $$ 2 n_{j+1} -p_j-q_j+2 \leq p_{j+1}+q_{j+1} - 2 n_{j+1}$$ $$ p_j+q_j \equiv p_{j+1}+q_{j+1} \qquad (\mod 2).$$ \end{enumerate} \begin{thm}The representation $$Q(2n_1; p_1,q_1; 2n_2; p_2,q_2; 2n_3; \ldots)(\primei)$$ is either an irreducible unitary representation or the NULL representation. \end{thm} Our inductive conditions are natural within the frame work of orbit method (see ~\cite{vogan3}, ~\cite{thesis}, ~\cite{pan}, ~\cite{pr1}). The nonvanishing of $\theta_s$ has been studied in ~\cite{thesis} and ~\cite{non1}. It can be assumed as a working hypothesis in the framework of quantum induction. Notice that $Q$ is defined as a composition of $\theta_s$. 
Thus, it is not known that $Q$ is exactly the composition of theta correspondences over $\mathbb R$. This problem hinges on one earlier problem mentioned by Jian-Shu Li (see \cite{li1}): \\ \\ Is $(,)_{\primei}$ nonvanishing if $\primei \in \mathcal R(MG_1,MG_2) \cap \mathcal R_{s}(MG_1, MG_2)$? \\ \\ Our result in ~\cite{theta} which is derived from Howe's results in ~\cite{howe} confirms the converse:\\ \\ $\primei$ is in $\mathcal R(MG_1,MG_2)$ if $(,)_{\primei}$ does not vanish. \\ \\ Therefore, if $Q(*)(\primei) \neq 0$, $Q(*)$ is the composition of $\theta$. \subsection{Infinitesimal Characters} Infinitesimal characters under theta correspondence were studied by Przebinda (\cite{pr}). We denote the infinitesimal character of an irreducible representation $\primei$ by $\mathcal I(\primei)$. Przebinda's result can be stated as follows. \begin{thm}[Przebinda] \begin{enumerate} \item Suppose $p+q < 2n+1$. Then $$\mathcal I(\theta(p,q;2n)(\primei))=\mathcal I(\primei) \oplus (n-\frac{p+q}{2}, n- \frac{p+q}{2}-1, \ldots, 1+[\frac{p+q}{2}]-\frac{p+q}{2}).$$ \item Suppose $2n+1 < p+q$. Then $$\mathcal I(\theta(2n;p,q)(\primei)) =\mathcal I(\primei) \oplus (\frac{p+q}{2}-n-1, \frac{p+q}{2}-n-2,\ldots, \frac{p+q}{2}-[\frac{p+q}{2}]).$$ \item Suppose $p+q=2n$ or $p+q=2n+1$. Then $\mathcal I(\theta(p,q;2n)(\primei))=\mathcal I(\primei)$. \end{enumerate} \end{thm} Now we can compute the infinitesimal character under quantum induction. \begin{cor} Suppose $Q(*)(\primei) \neq 0$. \begin{enumerate} \item If $p+q$ is even, then $$\mathcal I(Q(2n;p,q;2n^{\prime})(\primei))=\mathcal I(\primei) \oplus (\frac{p+q}{2}-n-1, \frac{p+q}{2}-n-2, \ldots, 0) \oplus (n^{\prime}-\frac{p+q}{2}, n^{\prime}- \frac{p+q}{2}-1, \ldots, 1).$$ \item If $p+q$ is odd, then $$\mathcal I(Q(2n;p,q;2n^{\prime})(\primei))=\mathcal I(\primei) \oplus (\frac{p+q}{2}-n-1, \frac{p+q}{2}-n-2, \ldots, \frac{1}{2}) \oplus (n^{\prime}- \frac{p+q}{2}, n^{\prime}-\frac{p+q}{2}-1, \ldots, \frac{1}{2}).$$ \item If $p+q$ is even, then $$\mathcal I(Q(p,q;2n;p^{\prime},q^{\prime})(\primei))=\mathcal I(\primei) \oplus (n-\frac{p+q}{2}, n- \frac{p+q}{2}-1, \ldots, 1) \oplus (\frac{p^{\prime}+q^{\prime}}{2}-n-1, \frac{p^{\prime}+q^{\prime}}{2}-n-2, \ldots, 0).$$ \item If $p+q$ is odd, then $$\mathcal I(Q(p,q;2n;p^{\prime},q^{\prime})(\primei))=\mathcal I(\primei) \oplus (n-\frac{p+q}{2}, n- \frac{p+q}{2}-1, \ldots, \frac{1}{2}) \oplus (\frac{p^{\prime}+q^{\prime}}{2}-n-1, \frac{p^{\prime}+q^{\prime}}{2}-n-2, \ldots, \frac{1}{2}).$$ \end{enumerate} \end{cor} We shall now take a look at some "limit" cases under quantum induction. \\ \\ {\bf Example I}: $p+q+p^{\prime}+q^{\prime}=4n+2$. 
\\ \\ In this case, $$n-\frac{p+q}{2}=\frac{p^{\prime}+q^{\prime}}{2}-n-1.$$ Therefore, $$\mathcal I(Q(p,q;2n;p^{\prime},q^{\prime})(\primei))=\mathcal I(\primei) \oplus \overbrace{ (n- \frac{p+q}{2}, n-\frac{p+q}{2}-1, \ldots, 1+\frac{p+q}{2}-n, \frac{p+q}{2}-n)}^{2n-p-q+1}.$$ {\bf Example II}: $2n-p-q+2=p^{\prime}+q^{\prime}-2n$ and $p-p^{\prime}=q-q^{\prime}$.\\ \\ Notice first that $$p^{\prime}-p+q^{\prime}-q=(p^{\prime}+q^{\prime})-(p+q)=4n+2-2(p+q).$$ Therefore $$\frac{p^{\prime}-p}{2}=\frac{p^{\prime}-p+q^{\prime}+q}{4}=n-\frac{p+q}{2}+\frac{1}{2}$$ Recall from Prop 8.22 ~\cite{knapp} \begin{equation} \begin{split} & \mathcal I(Ind_{SO_0(p,q) GL_0(p^{\prime}-p) N}^{SO_0(p^{\prime},q^{\prime})}(\primei \otimes 1)) \\ = & \mathcal I(\primei \otimes 1) \\ = & \mathcal I(\primei) \oplus (\frac{p^{\prime}-p-1}{2}, \frac{p^{\prime}-p-3}{2}, \ldots, - \frac{p^{\prime}-p-3}{2}, -\frac{p^{\prime}-p-1}{2}) \\ = & \mathcal I(\primei) \oplus (n-\frac{p+q}{2},n-\frac{p+q}{2}-1, \ldots, 1+\frac{p+q}{2}-n, \frac{p+q}{2}-n) \\ = & \mathcal I(Q(p,q;2n;p^{\prime},q^{\prime})(\primei)). \end{split} \end{equation} This suggests that $Q(p,q;2n;p^{\prime},q^{\prime})(\primei)$ as a representation of $SO_0(p,q)$ can be decomposed as direct sum of some parabolically induced unitary representation (see Conjecture I). \\ \\ {\bf Example III}: $n+n^{\prime}+1=p+q$. \\ \\ In this case, $$\frac{p+q}{2}-n-1=n^{\prime}-\frac{p+q}{2},$$ $$\frac{n^{\prime}-n-1}{2}=\frac{p+q}{2}-n-1.$$ From Prop. 8.22 (~\cite{knapp}) and the Corollary, \begin{equation} \begin{split} &\mathcal I(Ind_{Sp_{2n}(\mathbb R) GL(n^{\prime}-n)N}^{Sp_{2n^{\prime}}(\mathbb R)} (\primei \otimes 1)) \\ = & \mathcal I(\primei) \oplus (\frac{n^{\prime}-n-1}{2}, \frac{n^{\prime}-n-3}{2}, \ldots, -\frac{n^{\prime}-n-3}{2}, -\frac{n^{\prime}-n-1}{2}) \\ = & \mathcal I(\primei) \oplus (\frac{p+q}{2}-n-1, \frac{p+q}{2}-n-2, \ldots, -\frac{p+q}{2}+n+2, - \frac{p+q}{2}+n+1) \\ = & \mathcal I(Q(2n;p^{\prime},q^{\prime};2n^{\prime})(\primei)) \end{split} \end{equation} This suggests that $Q(2n;p,q;2n^{\prime})(\primei)$ can be obtained as subfactors of certain parabolic induced representation. We prove this connection in ~\cite{quan}.\\ \\ Let me make some final remarks concerning the definition of quantum induction $Q$. Notice that $ Q(p,q;2n;p^{\prime},q^{\prime})(\primei)$ contains distributions of the following form \begin{equation} \begin{split} & \int_{MSp_{2n}(\mathbb R)} \omega(p^{\prime},q^{\prime};2n)(g_1) \primehi_1 \otimes \int_{MO(p,q)} \omega(p,q;2n)^c (g_1g_2) \primehi_2 \otimes \primei(g_2) v d g_2 dg_1 \\ = & \int_{MO(p,q)} \omega(p,q;2n)^c(g_2) [\int_{MSp_{2n}(\mathbb R)} \omega(p^{\prime}+q, q^{\prime}+p;2n)(g_1) (\primehi_1 \otimes \primehi_2 ) d g_1] \otimes \primei(g_2) v. \end{split} \end{equation} Our discussions in this paper guaranteed absolute integrability of this integral. Notice that the vectors in $[*]$ are in $\theta(2n;p^{\prime}+q,q+p^{\prime})(1)$. \begin{defn} Suppose $p^{\prime}+q \geq 2n$, $q^{\prime}+p \geq 2n$ and $p+q+p^{\prime}+q^{\prime}$ is even. Consider the dual pair $(O(p^{\prime}+q,q^{\prime}+p), Sp_{2n}(\mathbb R))$. This is a dual pair in the stable range (~\cite{li2}, ~\cite{howesmall}). Then $\theta(2n;p^{\prime}+q,q^{\prime}+p)(1)$ is an unitary representation of $MO(p^{\prime}+q,q^{\prime}+p)$ (see ~\cite{li2}, ~\cite{hz}). Let $O(p,q)$ and $O(p^{\prime},q^{\prime})$ be embedded diagonally in $O(p^{\prime}+q,q^{\prime}+p)$. Let $\primei \in \Pi(MO(p,q))$. 
Formally define a Hermitian form $(,)$ on $\theta(2n;p^{\prime}+q,q^{\prime}+p)(1) \otimes \primei$ by integrating the matrix coefficients of $\theta(2n;p^{\prime}+q,q^{\prime}+p)(1)$ against the matrix coefficients of $\primei$ over $MO(p,q)$ as in (~\ref{avera}). Suppose that $(,)$ converges. Define $\mathcal Q(p,q;2n;p^{\prime}, q^{\prime})(\primei)$ to be $\theta(2n;p^{\prime}+q,q^{\prime}+p)(1) \otimes \primei$ modulo the radical of $(,)$. $\mathcal Q(p,q;2n;p^{\prime},q^{\prime})(\primei)$ is thus a representation of $MO(p^{\prime},q^{\prime})$. \end{defn} One must assume that $p^{\prime}+q^{\prime} \equiv p+q \primemod 2$. Otherwise, $\theta(2n;p^{\prime}+q, q^{\prime}+p)(1)=0$. $\mathcal Q$ can be regarded as a more general definition of quantum induction. It is no longer clear that $\mathcal Q$ preserves unitarity. \begin{thm} Under the assumptions from Theorem ~\ref{quan1}, $$\mathcal Q(p,q;2n;p^{\prime},q^{\prime})(\primei) \cong Q(p,q;2n;p^{\prime},q^{\prime})(\primei).$$ \end{thm} Similarly, one can define nonunitary quantum induction $\mathcal Q(2n;p,q;2n^{\prime})(\primei)$. \begin{defn} Suppose that $p+q \leq n+n^{\prime}+1$. Consider the dual pair $(O(p,q), Sp_{2n+2n^{\prime}}(\mathbb R))$. Then $\theta(p,q;2n+2n^{\prime})(1)$ is a unitary representation of $MSp_{2n+2n^{\prime}}(\mathbb R)$ (see ~\cite{howesmall}, ~\cite{li2}, ~\cite{pr2}). Let $\primei \in \Pi(MSp_{2n}(\mathbb R))$. Formally define a Hermitian form $(,)$ on $\theta(p,q;2n+2n^{\prime})(1) \otimes \primei$ by integrating the matrix coefficients of $\theta(p,q;2n+2n^{\prime})(1)$ against the matrix coefficients of $\primei$ as in (~\ref{avera}). Suppose that $(,)$ converges. Define $\mathcal Q(2n;p,q;2n^{\prime})(\primei)$ to be $\theta(p,q;2n+2n^{\prime})(1) \otimes \primei$ modulo the radical of $(,)$. $\mathcal Q(2n;p,q;2n^{\prime})(\primei)$ is a representation of $MSp_{2n}(\mathbb R)$. \end{defn} For $p+q$ odd, the $MSp$ in this definition are metaplectic groups. For $p+q$ even, the $MSp$ in this definition split (see Lemma ~\ref{msp2n}). \begin{thm} Under the assumptions from Theorem ~\ref{quan2}, $$\mathcal Q(2n; p,q;2n^{\prime})(\primei) \cong Q(2n;p,q;2n^{\prime})(\primei).$$ \end{thm} There is a good chance that $\mathcal Q(*)(\primei)$ will be irreducible. \\ \\ Quantum induction fits well with the general philosophy of induction. On the one hand, similar to parabolic induced representation $Ind_{P}^G \tau$ whose vectors are in $$Hom_P(C^{\infty}_c(G), \tau),$$ quantum induced $\mathcal Q(p,q;2n;p^{\prime},q^{\prime})(\primei)$ lies in $$Hom_{\f o(p,q), O(p) \times O(q)}(\theta(2n; p^{\prime}+q,q^{\prime}+p)(1), \primei).$$ On the other hand, $Ind_P^G \tau$ has a nice geometric description. It consists of sections of the vector bundle $$ G \times_P \tau \rightarrow G/P.$$ In contrast, quantum induction does not possess this kind of classical interpretation except for some limit case. \end{document}
\begin{document}
\title{The diagonal slice of Schottky space}
\author{Caroline Series}
\address{\begin{flushleft} \rm {\texttt{[email protected] \\ http://www.maths.warwick.ac.uk/$\sim$masbb/}}\\ Mathematics Institute, University of Warwick \\ Coventry CV4 7AL, UK \end{flushleft}}
\author{Ser Peow Tan}
\address{\begin{flushleft} \rm {\texttt{[email protected] \\ http://www.math.nus.edu.sg/$\sim$mattansp/}}\\ Department of Mathematics, National University of Singapore, 10, Lower Kent Ridge Road, Singapore 119076 \end{flushleft}}
\author{Yasushi Yamashita}
\address{\begin{flushleft} \rm {\texttt{[email protected] \\ http://vivaldi.ics.nara-wu.ac.jp/$\sim$yamasita/}}\\ Department of Information and Computer Sciences, Nara Women's University, 630-8506 Kitauoyanishi-machi, Nara-City, Japan \end{flushleft}}
\thanks{The second author is partially supported by the National University of Singapore academic research grant R-146-000-186-112. The third author is partially supported by JSPS KAKENHI Grant Number 23540088.}
\date{\today}
\begin{abstract}
An irreducible representation of the free group on two generators $X,Y$ into $SL(2,\mathbb C)$ is determined up to conjugation by the traces of $X,Y$ and $XY$. If the representation is free and discrete, the resulting manifold is in general a genus-$2$ handlebody. We study the diagonal slice of the representation variety in which $\tr X = \tr Y = \tr XY$. Using the symmetry, we are able to compute the Keen-Series pleating rays and thus fully determine the locus of free and discrete groups. We also computationally determine the `Bowditch set' consisting of those parameter values for which no primitive elements in $\langle X,Y \rangle$ have traces in $[-2,2]$, and at most finitely many primitive elements have traces with absolute value at most $2$. The graphics make clear that this set is both strictly larger than, and significantly different from, the discreteness locus.

{\bf MSC classification: 30F40; 57M50}
\end{abstract}

\section{Introduction} \label{intro}

It is well known that an irreducible representation of the free group $F_2$ on two generators $X,Y$ into $SL(2,\mathbb C)$ is determined up to conjugation by the traces of $X,Y$ and $XY$. More generally, if we take the GIT quotient of all (not necessarily irreducible) representations, then the resulting $SL(2,\mathbb C)$ character variety of $F_2$ can be identified with $\mathbb C^3$ via these traces, see for example \cite{goldman2} and the references therein. If the representation is free, discrete, purely loxodromic and geometrically finite, the resulting manifold is a genus-$2$ handlebody. The collection of all such representations is known as \emph{Schottky space}, denoted $\mathcal{SCH}$. It is a consequence of Bers' density theorem that $\mathcal{SCH}$ is the interior of the discreteness locus, see for example~\cite{canary}. It is natural to ask, for which values of $x = \tr X$, $y = \tr Y$, $z = \tr XY$ is the corresponding representation in $\mathcal{SCH}$?

Let $\mathcal P$ denote the set of primitive elements in $F_2$. For $(x,y,z) \in \mathbb C^3$, let $\rho_{(x,y,z)}$ denote a choice of representation $F_2 \to SL(2,\mathbb C)$ in the conjugacy class determined by the trace triple.
The \emph{Bowditch set} (or $BQ$-set) $\mathcal B$ is defined in~\cite{tan_gen} as the set of $(x,y,z) \in \mathbb C^3$ corresponding to irreducible representations for which
\begin{equation*}
\begin{split}
& \tr \rho_{(x,y,z)}(g) \notin [-2,2] \ \ \forall g \in \mathcal P \ \ \mbox{\rm and} \\
& \{ g \in \mathcal P : |\tr \rho_{(x,y,z)}(g)| \leq 2 \} \ \mbox{\rm is finite}.
\end{split}
\end{equation*}
(The exceptional case in which $\tr [X,Y] = 2$ corresponds to reducible representations and is excluded from the discussion, see Remark~\ref{reducible}.) The Bowditch set is open and ${\rm Out}(F_2)$ acts properly discontinuously on it. Clearly $\mathcal{SCH} \subset \mathcal B$.

Bowditch's original work~\cite{bow_mar} was on the case in which the commutator $[X,Y] = XYX^{-1}Y^{-1}$ is parabolic and $\tr [X,Y] = -2$. He conjectured that the subsets of $\mathcal{SCH}$ and $\mathcal B$ corresponding to this restriction coincide. Although this has not been proven, computer pictures indicate his conjecture may well be true.

In this paper we restrict to the special case in which $x = y = z$, which we call the \emph{diagonal slice} of the character variety, denoted $\Delta$ and parametrized by the single complex variable $x$. We show that in this slice, the analogue of Bowditch's conjecture is far from being true. This is illustrated in Figure~\ref{Diagonal-and-Riley-mu-3-Ray-BQ}, which compares the intersections of $\Delta$ with $\mathcal{SCH}$ and $\mathcal B$. The discreteness locus is the outer region foliated by rays; these are the Keen-Series pleating rays which relate to the geometry of the convex hull boundary as explained in Section~\ref{sec:pleating} and whose closure is known to be $\overline{\Delta \cap \mathcal{SCH}}$, see Theorem~\ref{thm:density}. The Bowditch set, by contrast, is the complement of the black part. It is clear that $\mathcal B \cap \Delta$ contains a large open region not in $\Delta \cap \mathcal{SCH}$, and also has different symmetries. In particular, it is not hard to show that the interval $(2,3)$ is contained in $\mathcal B \setminus \mathcal{SCH}$, see the discussion in Section~\ref{sec:algorithm1}. The main content of this paper is an explanation and justification of how these plots were made, in particular to explain how we enumerated and computed the pleating rays for the symmetric genus $2$ handlebody corresponding to the trace triple $(x,x,x)$.

\begin{figure}[hbt]
\includegraphics[width=6cm]{Figs/Diagonal-and-Riley-mu-3-Ray-BQ}
\caption{Superposition of the discreteness locus for $\pi_1({\cal H})$ and the Bowditch set in the $x$-plane. The Bowditch set for the $(x,x,x)$-triple is the complement of the central black region, while the discreteness locus is the closure of the region foliated by rays. The rays are actually computed as the pleating rays for the quotient orbifold ${\mathcal S}$.}\label{Diagonal-and-Riley-mu-3-Ray-BQ}
\end{figure}

To compute the Bowditch set $\mathcal B$ we use an algorithm based on the ideas in~\cite{bow_mar} and developed further in~\cite{tan_gen}. This is explained in Section~\ref{sec:algorithm}. The discreteness problem is tackled as follows. If $(x,x,x) \in \mathcal{SCH}$ then the quotient $3$-manifold $\mathbb H^3/G$ is a handlebody ${\cal H}$ with order $3$ symmetry.
We use the symmetry to reduce the problem of finding $ \mathcal Diag \gammaammaap {\mathcal S}CH$ to a problem very similar to that of determining the so-called \emph{Riley slice of Schottky space}. This is actually a space of groups on the boundary of $ {\mathcal S}CH$, consisting of those free, discrete and geometrically finite groups for which the two generators $X,Y$ are parabolic, thus contained in the slice $(2,2,z) \sigmaubset \mathbb C^3$. The corresponding manifold is a handlebody whose conformal boundary is a sphere with four parabolic points. The problem of finding those $z$-values for which such a group is free, discrete and geometrically finite was solved using the method of pleating rays in~\gammaammaite{ksriley}. In the present case, the quotient of ${\cal H}$ by the symmetry is an orbifold ${\mathcal S}$ with two order $3$ cone axes, whose conformal boundary is a sphere with four order $3$ cone points. Thus similar methods enable us to find $ \mathcal Diag \gammaammaap {\mathcal S}CH$ here. Although Figure~\ref{Diagonal-and-Riley-mu-3-Ray-BQ} shows that in $ \mathcal Delta$, the analogue of Bowditch's conjecture fails since ${ \mathcal B}$ and the interior of the discreteness locus are plainly distinct, in many other slices, see for example Figure~\ref{Figs/Riley-Ray-BQ}, the (modified) Bowditch set and the interior of the discreteness locus appear to coincide. This is connected to the dynamics of the action of a suitable mapping class group on representations and raises many interesting questions which we hope to address elsewhere. The plan of the paper is as follows. We begin in Section~\ref{sec:markoff} with a discussion of the Markoff tree and the algorithm used to compute the Bowditch set. In Section~\ref{sec:basicconfig} we introduce a basic geometrical construction which conveniently encapsulates the $3$-fold symmetry. The quotient of the original handlebody ${\cal H}$ by the symmetry is a ball with two order $3$ cone axes. This orbifold ${\mathcal S}$ has a further $4$-fold symmetry group whose quotient is again a topological ball. Our construction allows us to write down specific ${\mathcal S}L$ representations of all the groups involved with ease. In Section~\ref{sec:discrete} we turn to the discreteness question. After reducing the problem to one on ${\mathcal S}$, we briefly review material from the Keen-Series theory of pleating rays and recall what is needed from~\gammaammaite{ksriley}, allowing us to apply a similar proof in the present context. Section~\ref{sec:torustree}, not strictly logically necessary for our development, explains how we did our trace computations in practice, by relating the problem to one on a commensurable torus with a single cone point of angle $4\pi/3$. \sigmaection{The Markoff tree and the Bowditch set} {\lambda}abel{sec:markoff} Let $A = \betaegin{pmatrix} a & b \gammaammar c & d\end{pmatrix} {\bf i}n {\mathcal S}L$ so that $ad-bc= 1$. As usual we define its trace ${\cal T}r A = a+d$. Let $F_2 = {\lambda}angle X,Y| \ \rangle$ be the free group on two generators. It is well known that a representation $\rho\gammaammao F_2 \tauo {\mathcal S}L$ is determined up to conjugation (modulo taking the GIT quotient under the conjugation action, see~\gammaammaite{goldman2}) by the three traces $ x = \taur X, y = \taur Y, z = \taur XY$. 
In fact, given $x,y,z {\bf i}n \mathbb C$ we can define a representation $\rho_{x,y,z}\gammaammao F_2 \tauo {\mathcal S}L $ by $\rho(X) = \betaegin{pmatrix} x & 1 \gammaammar -1 & 0 \end{pmatrix}, \ \rho(Y) = \betaegin{pmatrix} 0 & \xi \gammaammar -\xi^{-1} & y \end{pmatrix}$ where $ z = -(\xi + \xi^{-1})$. Clearly with this definition, $ {\cal T}r X = x, {\cal T}r Y = y$ and ${\cal T}r XY = z$. \sigmaubsection{The Markoff Tree} For matrices $U,V {\bf i}n {\mathcal S}L$ set $u = {\cal T}r U, v = {\cal T}r V, w = {\cal T}r UV$. Recall the trace relations: \betaegin{equation}{\lambda}abel{eqn:inverse} {\cal T}r UV^{-1} = uv-w \end{equation} and \betaegin{equation} {\lambda}abel{eqn:commreln} u^2+v^2+w^2 = uvw + {\cal T}r {[U,V]} +2. \end{equation} Setting $\mu = {\cal T}r {[U,V]} + 2$, this last equation takes the form $$u^2+v^2+w^2 - uvw = \mu.$$ Let $F_2 = {\lambda}angle X,Y| \ \ \rangle$ as above. An element $U {\bf i}n F_2$ is \emph{primitive} if it is a member of a generating pair; we denote the set of all primitive elements by ${\cal P}$. The conjugacy classes of primitive elements are enumerated by $\hat \mathbb Q = \mathbb Q \gammaammaup {\bf i}nfty$ and are conveniently organised relative to the Farey diagram ${ \mathcal F}$ as shown in Figure~\ref{fig:farey}. This consists of the images of the ideal triangle with vertices at $1/0,0/1$ and $1/1$ under the action of $SL(2,\mathbb Z)$ on the upper half plane, suitably conjugated to the position shown in the disk. The label $p/q$ in the disk is just the conjugated image of the actual point $p/q {\bf i}n \mathbb R$. \betaegin{figure}[ht] {\bf i}ncludegraphics[width=5.5cm]{Figs/fareydiagram.pdf} \hspace{1cm} {\bf i}ncludegraphics[width=5.5cm]{Figs/fareywords.pdf} \gammaammaaption{The Farey diagram, showing the arrangement of rational numbers on the left with the corresponding primitive words on the right.}{\lambda}abel{fig:farey} \end{figure} Since the rational points are precisely the images of ${\bf i}nfty$ under $SL(2,\mathbb Z)$, they correspond bijectively to the vertices of ${ \mathcal F}$. A pair $p/q , r/s {\bf i}n \hat \mathbb Q$ are the endpoints of an edge if and only if $pr-qs = \pm 1$; such pairs are called \emph{neighbours}. A triple of points in $\hat \mathbb Q$ are the vertices of a triangle precisely when they are the images of the vertices of the initial triangle $(1/0,0/1,1/1)$; such triples are always of the form $(p/q , r/s,(p+r )/( q+s))$ where $p/q , r/s$ are neighbours. In other words, if $p/q , r/s$ are the endpoints of an edge, then the vertex of the triangle on the side away from the centre of the disk is found by `Farey addition' to be $(p+r )/( q+s)$. Starting from $1/0 = -1/0= {\bf i}nfty$ and $0/1$, all points in $\hat \mathbb Q$ are obtained recursively in this way. (Note we need to start with $-1/0= {\bf i}nfty$ to get the negative fractions on the left side of the left hand diagram in Figure~\ref{fig:farey}.) The right hand picture in Figure~\ref{fig:farey} shows a corresponding arrangement of primitive elements in $F_2$, one in each conjugacy class, starting with initial triple $(A,B, AB)$. Each vertex is labelled by a certain cyclically shortest representative of the corresponding word. Pairs of primitive elements form a generating pair if and only if they are at the endpoints of an edge. Triples at the vertices of a triangle correspond to a generator triple of the form $(U,V, UV)$. 
Corresponding to the process of Farey addition, successive words can be found by juxtaposition as indicated on the diagram. Note that for this to work it is important to preserve the order: if $U, V$ are the endpoints of an edge with $U$ before $V$ in the anti-clockwise order round the circle, the correct concatenation is $UV$. Note also that the words on the left side of the diagram involve $B^{-1}A$, corresponding to starting with $\infty = -1/0$. We denote the particular representative of the conjugacy class corresponding to $p/q \in \hat{\mathbb Q}$ found by concatenation by $W_{p/q}$. Its word length in the generators $A,B$ is a function $F(p/q)$ of $p/q$. A function on $\hat{\mathbb Q}$ is said to have \emph{Fibonacci growth} if it is comparable with uniform upper and lower bounds to $F$.

\begin{figure}[ht]
\includegraphics[width=7cm]{Figs/markofftree.pdf}
\caption{The Markoff tree used to compute traces with an initial triple $(u,v,w)$.}\label{fig:markofftree}
\end{figure}

In this paper we are largely interested in computing traces of primitive elements. Following Bowditch~\cite{bow_mar}, these can also be easily computed by using the trivalent tree $\mathbb T$ dual to $\mathcal F$, see the left frame of Figure~\ref{fig:farey}, and Figure~\ref{fig:markofftree}. Let $\Omega$ denote the set of complementary regions of $\mathbb T$; abstractly, a complementary region is the closure of a connected component of the complement of $\mathbb T$. As is apparent from Figure~\ref{fig:farey}, there is a bijection between $\Omega$ and the set of vertices of $\mathcal F$. Thus the set $\Omega$ can be identified with conjugacy classes of primitive elements and hence with $\hat{\mathbb Q}$.

Given a representation $\rho \colon F_2 \to SL(2,\mathbb C)$, each $U \in \Omega$ is labelled by $u=\tr \rho(U)$, the trace of the corresponding generator, as shown in Figure~\ref{fig:markofftree}. Labels on opposite sides of an edge correspond to traces of a generator pair; the three labels round a vertex correspond to a generator triple $(U,V,UV)$. Crossing an edge adjacent to regions $U,V$ of $\mathcal F$ corresponds to changing the generator triple from $(U,V,UV)$ to $(U,V,UV^{-1})$. Suppose that $(U,V,W)$ are the labels of regions round a vertex with $u = \tr \rho(U)$, $v = \tr \rho(V)$, $w = \tr \rho(W)$. By~\eqref{eqn:commreln} we have $u^2+v^2+w^2-uvw = \mu$. By~\eqref{eqn:inverse}, the two vertices opposite the ends of an edge labelled $(U,V)$ have labels $w$, $uv-w$ respectively. More precisely, crossing the $3$ edges of a triangle of $\mathcal F$ gives rise to the three basic moves $(u,v,w) \to (u,v,uv-w)$, $(u,v,w) \to (u,uw-v,w)$, $(u,v,w) \to (vw-u,v,w)$, which generate the traces of all possible elements in $\Omega$ (and hence $\mathcal P$). Note that any of these three moves leaves $\tr [U,V]$, and hence $\mu$, invariant; in other words, $\mu$ is an invariant of the tree. Bowditch's original paper was mostly confined to the case $\mu = 0$.

In this way, the Markoff tree provides a fast way to compute traces of elements in $\mathcal P$ starting from an initial triple $(u,v,w)$. This is illustrated in Figure~\ref{fig:trace2} with the initial triple $(\sqrt{x+1}, 0, \sqrt{-x+2})$ which is used in Section~\ref{sec:torustree1}. We denote the tree of traces associated to an initial triple $(u,v,w)$ by $\mathbb T_{(u,v,w)}$.
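For readers who wish to experiment, the recursion is straightforward to implement. The following Python fragment is only an illustrative sketch (the helper names and the sample value of $x$ are ours, and it is not the program used to produce the figures): it builds the explicit representation $\rho_{x,y,z}$ of Section~\ref{sec:markoff}, checks the traces, and then applies the basic moves along one branch of $\mathbb T_{(x,y,z)}$, verifying at each step that $\mu$ is unchanged.
\begin{verbatim}
# Illustrative sketch: traces of primitive elements via the Markoff tree moves.
# All helper names (mat_mul, tr, rho, move_w) are ours, for this example only.
import cmath

def mat_mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def tr(A):
    return A[0][0] + A[1][1]

def rho(x, y, z):
    """Matrices rho(X), rho(Y) with tr X = x, tr Y = y, tr XY = z."""
    xi = (-z + cmath.sqrt(z*z - 4)) / 2      # solves z = -(xi + 1/xi)
    return ((x, 1), (-1, 0)), ((0, xi), (-1/xi, y))

x = y = z = 1.8 + 0.7j                       # a sample point of the diagonal slice
X, Y = rho(x, y, z)
XY = mat_mul(X, Y)
assert abs(tr(X) - x) < 1e-12 and abs(tr(Y) - y) < 1e-12 and abs(tr(XY) - z) < 1e-12

mu = x*x + y*y + z*z - x*y*z                 # = tr[X,Y] + 2, the tree invariant

def move_w(triple):
    """One basic move: (u, v, w) -> (u, v, uv - w), i.e. (U,V,UV) -> (U,V,UV^{-1})."""
    u, v, w = triple
    return (u, v, u*v - w)

t = (x, y, z)
for _ in range(5):
    # cyclically permute, then apply the move: this walks down one branch of the
    # tree and produces traces of further primitive elements
    t = move_w((t[1], t[2], t[0]))
    u, v, w = t
    assert abs((u*u + v*v + w*w - u*v*w) - mu) < 1e-6   # mu is invariant
print("traces along one branch:", t)
\end{verbatim}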
Later we will use a variant of this construction to compute traces of curves on a four pointed sphere, see Section~\ref{sec:traces}. \betaegin{figure}[hbt] \betaegin{center} {\bf i}ncludegraphics[height=7cm]{Figs/fareytraces} \gammaammaaption{The Farey tessellation used to compute traces. See Section~\ref{sec:torustree1} for a discussion of the choice of sign of the square roots.}{\lambda}abel{fig:trace2} \end{center} \end{figure} \sigmaubsection{The Bowditch set} {\lambda}abel{sec:tree} It is convenient to rephrase the above discussion using the terminology introduced in~\gammaammaite{bow_mar}. As above, let ${\kappa}mega$ denote the set of complementary regions of the tree $\mathbb T$. Define a \emph{Markoff map} to be a map $\phi: {\kappa}mega \tauo \mathbb C$ such that $\phi$ satisfies the trace relations~\eqref{eqn:inverse} and~\eqref{eqn:commreln}. The set of all Markoff maps is denoted ${\cal P}hi$. Since traces depend only on conjugacy classes, a representation $\rho \gammaammao F_2 \tauo {\mathcal S}L$ defines a Markoff map by setting $\phi(U) = {\cal T}r \rho(U)$ for $U {\bf i}n {\kappa}mega$. Fixing once and for all an identification of ${\kappa}mega$ with $\hat \mathbb Q$ (and recalling that ${\kappa}mega$ is identified with conjugacy classes of elements in ${\cal P}$), we have $\phi(p/q) = {\cal T}r \rho(W_{p/q}), p/q {\bf i}n \hat \mathbb Q$, where $W_{p/q}$ is the special word in the conjugacy class corresponding to $p/q {\bf i}n {\kappa}mega$. Thus as explained above, using the trace relations~\eqref{eqn:inverse} and~\eqref{eqn:commreln}, an initial triple $(x,y,z) {\bf i}n \mathbb C^3$ uniquely determines a Markoff map $\phi = \phi_{x,y,z}$ together with a corresponding labelling of $\mathbb T$. Conversely a Markoff map $\phi {\bf i}n {\cal P}hi$ determines $(x,y,z){\bf i}n \mathbb C^3$ by setting $ x = \phi(0/1), y = \phi(1/0), z = \phi(1/1)$. In this way, we can identify ${\cal P}hi$ with $\mathbb C^3$. For $\phi {\bf i}n {\cal P}hi$, denote the corresponding tree $\mathbb T_{\phi} = \mathbb T_{(\phi(0/1), \phi(1/0),\phi(1/1))}$. The Bowditch set $\mathcal B$ is the set of all $\phi {\bf i}n {\cal P}hi$ with $\mu \neq 4$ which satisfy the following conditions: \betaegin{equation}{\lambda}abel{eqn:B1} \phi (U) \notin [-2,2] \ \ \forall \ U {\bf i}n {\kappa}mega \ \ \ \mbox {\rm {and}} \end{equation} \betaegin{equation} {\lambda}abel{eqn:B2} \{U {\bf i}n {\kappa}mega: | \phi (U)| {\lambda}eq 2 \} \ \ \mbox {\rm {is finite}}. \end{equation} The Bowditch set $\mathcal B$ is open in $\mathbb C^3$ and ${\rm Out}(F_2)$ acts properly discontinuously on $\mathcal B$. Furthermore, if $\phi {\bf i}n \mathcal B$, then ${\lambda}og^+|\phi(U)|={\rm max}\{0, {\lambda}og|\phi(U)|\}$ has Fibonacci growth on ${\kappa}mega$ (see \gammaammaite{tan_gen}). \betaegin{remark}{\lambda}abel{reducible} \rm{The maps $\phi$ for which $\mu=4$ correspond to the reducible representations: our definition above automatically excludes them from ${ \mathcal B}$. For such $\phi$, there are infinitely many $U {\bf i}n {\kappa}mega$ such that $|\phi(U)|<m$ for $m>2$, they can alternatively be excluded from ${ \mathcal B}$ by relaxing condition (\ref{eqn:B2}) to the condition that $\{U {\bf i}n {\kappa}mega: | \phi (U)| {\lambda}eq 2+\epsilonsilon \} $ be finite for any $\epsilonsilon >0$. 
} \end{remark} \sigmaubsubsection{Background to the algorithm}{\lambda}abel{sec:algorithm} Our algorithm for computing which points lie in ${ \mathcal B}$ is based on results from~\gammaammaite{bow_mar, tan_gen} which we summarise here. We consider only $\phi$ for which $\mu \neq 4$. Following Bowditch~\gammaammaite{bow_mar}, we orient the edges of $\mathbb T_{\phi}$ in the following way. Suppose that labels of the regions adjacent to some edge $e$ are $u,v$ and the labels of the two remaining regions at the two end vertices are $w,t$, see Figure~\ref{fig:markofftree}. From the trace relations, $t = uv-w$. Orient $e$ by putting an arrow from $t$ to $w$ whenever $|t| > |w|$ and vice versa. If both moduli are equal, make either choice; if the inequality is strict, say that the edge is \emph{oriented decisively}. A \emph{sink region} of $\mathbb T_{\phi}$ is a connected non-empty subtree $T$ such that the arrow on any edge not in $T$ points towards $T$ decisively. A sink region may consist of a single \emph{sink vertex} $v$ (the three edges adjacent to $v$ point towards $v$) and no edges. Clearly a sink region is not unique: one can always add further vertices and edges around the boundary of $T_{\phi}$. For any $m \gammaeq 0$ and $\phi {\bf i}n {\cal P}hi$ define ${\kappa}mega_{\phi}(m) = \{ U {\bf i}n {\kappa}mega | |\phi( U)| {\lambda}eq m\}$. The following lemmas from~\gammaammaite{tan_gen} show that ${\kappa}mega_{\phi}(2)$ is connected, and that from any initial vertex not adjacent to regions in ${\kappa}mega_{\phi}(2)$, the arrows determine a descending path through $\mathbb T$ which either runs into a sink, or meets vertices adjacent to regions in ${\kappa}mega_{\phi}(2)$. Furthermore, if $\phi(U)$ takes values away from the exceptional set $E = [-2,2] \gammaammaup \{\pm \sigmaqrt{\mu}\} \sigmaubset \mathbb C$, then there exists a finite segment of $\partial U$ such that the edges adjacent to $U$ not in this segment are directed towards this segment. \betaegin{lemma}[\gammaammaite{tan_gen} Lemma 3.7]{\lambda}abel{forkvertex} Suppose $U,V,W {\bf i}n {\kappa}mega$ meet at a vertex $v$ with the arrows on both the edges adjacent to $U$ pointing away from $v$. Then either $|\phi(U)| {\lambda}eq 2$ or $\phi(V) = \phi(W) = 0$. \end{lemma} \betaegin{corollary}[\gammaammaite{tan_gen} Theorem 3.1(2)]{\lambda}abel{connected} Let $\phi {\bf i}n {\cal P}hi$. Then ${\kappa}mega_{\phi}(2)$ (more generally, ${\kappa}mega_{\phi}(m)$ for $m \gammae 2$) is connected. \end{corollary} \betaegin{lemma}[\gammaammaite{tan_gen} Lemma 3.11 and following comment] {\lambda}abel{infiniteray} Suppose $\betaeta$ is an infinite ray consisting of a sequence of edges of $\mathbb T_{\phi}$ all of whose arrows point away from the initial vertex. Then $\betaeta$ meets at least one region $U {\bf i}n {\kappa}mega$ with $|\phi( U)| < 2$. Furthermore, if the ray does not follow the boundary of a single region, it meets infinitely many regions with this property. \end{lemma} \betaegin{lemma}[\gammaammaite{tan_gen} Lemma 3.20] {\lambda}abel{finiteboundary} Suppose that $\phi(U) \notin E$ and consider the regions $V_i, i {\bf i}n \mathbb Z$ adjacent to $U$ in order round $\deltad U$. Then away from a finite subset, the values $|\phi(V_i)|$ are increasing and approach infinity as $ i \tauo {\bf i}nfty$ in both directions. Hence there exists a finite segment of $\partial U$ such that the edges adjacent to $U$ not in this segment are directed towards this segment. 
\end{lemma} We remark that if $\phi(U)= \pm \sigmaqrt{\mu}$ and $\sigmaqrt{\mu} \not{\bf i}n [-2,2]$, then the values of $|\phi(V_i)|$ in Lemma \ref{finiteboundary} approach zero in one direction round $\deltad U$ (\gammaammaite{tan_gen} Lemma 3.10) and hence $\phi \not{\bf i}n { \mathcal B}$ since condition (\ref{eqn:B2}) will not be satisfied. Hence, for $\phi {\bf i}n { \mathcal B}$, $\phi(U) \not{\bf i}n E$ for all $U {\bf i}n {\kappa}mega$. The set ${\kappa}mega_{\phi}(2)$ can be used to construct a sink region $T$ which is finite if and only if $\phi {\bf i}n \mathcal B$. Essentially, if $\phi {\bf i}n \mathcal B$, then $T$ consists of finite segments of the boundaries of the (finite number of) elements of ${\kappa}mega_{\phi}(2)$. These are the segments alluded to in Lemma \ref{finiteboundary}; they have to be large enough so the conclusion of the lemma holds, and also to contain all edges adjacent to $U,V$ with $U,V {\bf i}n {\kappa}mega_{\phi}(2)$ so that the union is connected. To do this, an explicit function $H_{\mu}:\mathbb C \rightarrow \mathbb R^+ \gammaammaup \{{\bf i}nfty\}$ is constructed (see~\gammaammaite{tan_gen} Lemma 3.20, the following remark and Lemma 3.23) as follows: \betaegin{enumerate} {\bf i}tem If $x {\bf i}n E$, define $H_{\mu}(x) ={\bf i}nfty$; {\bf i}tem For $x \not{\bf i}n E$, let $x={\lambda}ambda +{\lambda}ambda^{-1}$ with $|{\lambda}ambda|>1$ (note that $|{\lambda}ambda| \neq 1$ since $x \not{\bf i}n [-2,2]$). Define \betaegin{equation}{\lambda}abel{eqn:Hx} H_{\mu}(x) =\max {\lambda}eft\{ 2, \sigmaqrt{{\lambda}eft|\frac{x^2-\mu}{x^2-4}\right|}\frac{2|{\lambda}ambda|^2}{|{\lambda}ambda|-1} \right\} . \end{equation} \end{enumerate} Then $H_{\mu}$ is continuous on $\mathbb C \sigmaetminus E$. Now we can define a specific attracting subtree: \betaegin{definition} {\lambda}abel{sinkregion} Given $\phi {\bf i}n {\cal P}hi$, let $T$ be the subset of $\mathbb T_{\phi}$ defined as follows: \betaegin{enumerate} {\bf i}tem An edge with adjacent regions $U,V$ is in $T$ if and only if either $|\phi(U)| {\lambda}eq 2$ and $|\phi(V)| {\lambda}eq H_{\mu}(\phi(U))$, or vice versa. {\bf i}tem Any sink vertex is in $T$, as are any vertices which are the end points of two edges in $T$. \end{enumerate} \end{definition} Based on the above lemmas, we have the following theorem (see also the special properties of the function $H_{\mu}$ and Lemmas 3.21-3.24 in~\gammaammaite{tan_gen}). \betaegin{theorem} Given $\phi {\bf i}n {\cal P}hi$ (with $\mu \neq 4$), the set $T$ in Definition \ref{sinkregion} is a non-empty, connected subtree of $\mathbb T_{\phi}$. Moreover $T$ is a sink region for $\mathbb T_{\phi}$, that is, all edges not in $T$ are directed decisively towards $T$. Furthermore, $T$ is finite if and only if $\phi {\bf i}n \mathcal B$. \end{theorem} \sigmaubsubsection{The algorithm}{\lambda}abel{sec:algorithm1} Based on the above discussion, our algorithm to decide whether or not $\phi {\bf i}n { \mathcal B}$ is as follows. \betaegin{description} {\bf i}tem [Step 1] Starting at any vertex, follow the direction of decreasing arrows. On reaching a sink vertex, stop. This vertex is in $T$ by Definition~\ref{sinkregion}. If the input is ${ \mathcal B}$, then this method always finds a sink vertex in finite time because there is a finite sink region. Otherwise, the process may not terminate in (pre-specified) finite time and the algorithm is indecisive. 
{\bf i}tem [Step 2] Assuming a stopping point is found in Step 1, starting from this point, search outwards by a depth first search using Definition~\ref{sinkregion} to identify whether or not an edge is in $T$. This works because of the connectedness of $T$. If this search terminates in (pre-specified) finite time then $\phi_{x,y,z} {\bf i}n { \mathcal B}$. Otherwise, the algorithm is indecisive. \end{description} Note that if the starting point is a sink vertex and the three adjacent edges are not in $T$, then $T$ consists of just the sink vertex by the connectedness of $T$, hence $\phi_{x,y,z} {\bf i}n { \mathcal B}$. This occurs for example for the tree $\mathbb T_{(x,x,x)}$ with $x {\bf i}n (2,3)$. \betaegin{comment} The algorithm is as follows. It is justified by Lemmas 3.7, 3.11, 3.21, 3.22 in \gammaammaite{tan_gen}. \betaegin{description} {\bf i}tem [Step 1] Starting at any vertex, follow the direction of decreasing arrows. On reaching a vertex with two decreasing arrows (a \emph{fork vertex}) or a vertex with no decreasing arrows (a \emph{sink vertex}), stop. This vertex is in $T$ since neither a fork nor sink can be contained in the complement of $T$, we are at a vertex in $T$. {\bf i}tem [Step 2] We now need to check whether or not $T$ is finite. Starting from the stopping point found in Step 1, search outwards by a depth first search using Lemma~\ref{test} to identify whether or not an edge is in $T$. If this search terminates then $\phi_{(x,y,z)} {\bf i}n { \mathcal B}$. Otherwise, the process may not terminate in finite time. Note that in the case where the starting point is a sink, if the three edges adjacent to the sink are not in $T$, then $T$ consists of just the sink and $\phi_{(x,y,z)} {\bf i}n { \mathcal B}$. This occurs for example for the tree $\mathbb T_{(x,x,x)}$ with $x {\bf i}n (2,3)$. \end{description} \betaegin{remark} \rm{ In the actual program we used, rather than stopping at a fork vertex, we continue to follow any decreasing arrows until we reach a sink vertex, because in practice, it is easier to write a program which does this. If the input is ${ \mathcal B}$, then this method always finds a sink vertex in a finite time because there is a finite sink region. } \end{remark} \end{comment} Figure~\ref{Diagonal-BQ} shows the Bowditch set in the diagonal slice $ \mathcal Delta$ as determined by this algorithm. \betaegin{remark} \rm{We do not have an algorithm whose output is $\phi_{x,y,z} \notin { \mathcal B}$. When $\mu=0$, it was shown in \gammaammaite{NT} that if $|\phi(U)|{\lambda}e 0.5$ for some $U {\bf i}n {\kappa}mega$, then $\phi_{x,y,z} \notin { \mathcal B}$. Hence in Step 1 above, if $\mu=0$, we can stop when we hit a region satisfying this condition and conclude that $\phi_{x,y,z} \notin { \mathcal B}$. Using the same methods, a similar upper bound can be found for $\mu$ close to $0$. In particular, there is a neighbourhood of $(0,0,0)$ which is disjoint from ${ \mathcal B}$, as clearly illustrated in Figure ~\ref{Diagonal-BQ}. However, as shown in \gammaammaite{GMST}, no such universal positive bound exists for all $\mu$: precisely, for any $\epsilonsilon >0$ and $\mu>4$, there exist $\phi {\bf i}n { \mathcal B}_{\mu}$ and $U {\bf i}n {\kappa}mega$ such that $|\phi(U)|<\epsilonsilon$. Another issue is that the sink region may be extremely large so may not be detected in a program with a given finite number of steps, this occurs when we approach the boundary of ${ \mathcal B}$. 
Thus the algorithm is not completely decisive although it appears to give nice results. In particular, there may be false negatives; however points which are determined to be in ${ \mathcal B}$ are correctly marked. } \end{remark} \betaegin{figure}[ht] {\bf i}ncludegraphics[width=7cm]{Figs/Diagonal-BQ} \gammaammaaption{The Bowditch set ${ \mathcal B}$ for the Markoff maps $\phi_{(x,x,x)}$, plotted in the $x$-plane. The coloured (grey) points are in ${ \mathcal B}$ and the black ones are outside. The colours (shades of grey) indicate the size of the sink region $T$.} {\lambda}abel{Diagonal-BQ} \end{figure} \sigmaection{Groups, manifolds, symmetries and quotients} {\lambda}abel{sec:basicconfig} In this section we detail a construction which allows us conveniently to exploit the three-fold symmetry of groups in the diagonal slice $ \mathcal Delta$. As is well known, if the image of a representation $\rho \gammaammao F_2 \tauo {\mathcal S}L$ is free and discrete then $\mathbb H^3/\rho(F_2)$ is a genus two handlebody ${\cal H}$, see~\gammaammaite{hempel} Theorem 5.2. (Note that a hyperbolic $3$-manifold is irreducible, hence prime, and that $\pi_2(M) = 0$.) Rather than working with ${\cal H}$, however, it is much easier to work with the quotient ${\mathcal S}$ of ${\cal H}$ by the order $3$-symmetry ${\kappa}$ corresponding to cyclic permutation of the parameters. We also introduce a commensurable orbifold ${\cal T}$ with a torus boundary $ \deltad {\cal T}$. Both ${\mathcal S} = {\cal H} /{\kappa}$ and ${\cal T}$ surject to a $3$-orbifold ${\mathcal U}$ with fundamental group a so-called \emph{$(P,Q,R)$-group}. Its boundary $\deltad {\mathcal U}$ is a sphere with three order $2$ and one order $3$ cone points. A similar construction has been used extensively by Akiyoshi et al, see for example~\gammaammaite{akiyoshi}, and is the basis of Wada's program OPTi, hence was convenient for our computations. In this section we explain these constructions in detail, using them to find explicit representations of all four groups. \sigmaubsection{The handlebody and related orbifolds} {\lambda}abel{sec:mainhandlebody} The symmetric handlebody ${\cal H}$ can be thought of as made by gluing two solid pairs of pants each with order $3$-symmetry. More precisely, take a $3$-ball and remove three open disks from the boundary, placed so as to have order $3$ rotational symmetry. Gluing two such balls along the open disks produces a handlebody ${\cal H}$ with the required order three symmetry ${\kappa}$. Rather than write down a suitably symmetric representation of $\pi_1({\cal H})$ directly, we consider first the quotient orbifold ${\mathcal S} = {\cal H} /{\kappa}$. As will be justified in retrospect when we have identified the representations explicitly, this is a ball with two cone axes around each of which the angle is $2\pi/3$. Its boundary $\deltad {\mathcal S}$ is a sphere ${\mathcal S}igma_{0;3,3,3,3}$ with $4$ order $3$ cone points. We will call ${\mathcal S}$ the \emph{large coned ball}. The ball ${\mathcal S}$ has a further order $4$ symmetry group. Consider the two cone axes which form the singular locus of ${\mathcal S}$, together with their common perpendicular $C$. This configuration is invariant under the $\pi$-rotation about $C$, and also under $\pi$-rotations about a unique pair of orthogonal lines on the plane orthogonal to $C$ passing through its midpoint $O$, see Section~\ref{sec:largeball}. 
Denoting these latter rotations $P,Q$, the $\pi$-rotation about $C$ is $PQ$ and the entire configuration is invariant under ${\lambda}angle P,Q \rangle = \mathbb Z_2 \tauimes \mathbb Z_2$. Thus we obtain a further quotient orbifold ${\mathcal U} = {\mathcal S}/(\mathbb Z_2 \tauimes \mathbb Z_2)$, also topologically a ball, which we call the \emph{small coned ball}. The singular locus of ${\mathcal U} $ is as follows. Let $\betaar O$ and $\betaar E$ be the images in ${\mathcal U}$ of the midpoint $O$ of $C$ and the point where $C$ meets ${ A}x K$ respectively, where $K$ is one of the two order three rotations. Let $\betaar C$ be the image of $C$, so that $\betaar C$ is a line from $\betaar O$ to $\betaar E$. From $\betaar O$ emanate three mutually orthogonal lines corresponding to the order $2$ axes of $P,Q$ and $PQ$. One of these is the line $\betaar C$ corresponding to $PQ$ which ends at $\betaar E$. From $\betaar E$ also emanates an order $3$ singular line, the projection of ${ A}x K$, perpendicular to $\betaar C$. The boundary $\deltad {\mathcal U}$ is a sphere ${\mathcal S}igma_{0;2,2,2,3}$ with $3$ cone points of order $2$ and one of order $3$. The order $3$ cone point is the end point of the order 3 singular line and the order $2$ cone points are the endpoints on $\deltad U$ of the axes of $P, Q$ and a third involution $R$ defined below. Finally, there is a double cover of the small coned ball ${\mathcal U}$ by an orbifold ${\cal T}$ which is topologically a solid torus. Its boundary is a torus $ \deltad \gammaammaal T$ with a single cone point of angle $4 \pi/3$. Just as the quotient of a once punctured torus $ {\mathcal S}igma_{1;{\bf i}nfty}$ by the hypelliptic involution is the surface ${\mathcal S}igma_{0;2,2,2,{\bf i}nfty}$, so the quotient of $ \deltad \gammaammaal T $ by the hypelliptic involution ${\bf i}ota$ is the surface $\deltad {\mathcal U} = {\mathcal S}igma_{0;2,2,2,3}$. The involution ${\bf i}ota$ extends to an involution, also denoted ${\bf i}ota$, of $\gammaammaal T$ such that $\gammaammaal T/{\bf i}ota = {\mathcal U}$. The group $\pi_1({\mathcal U})$ is generated by $(P,Q, K)$. We can replace $K$ by a further involution $R$ such that $RQP = K$. To do this, let $R$ be an order $2$ rotation about an axis contained in the plane through $E$ orthogonal to $ { A}x K$, such that the axis makes an angle $\pi/3$ with $C$. (We will fix orientations more precisely below.) Then $R (QP)$ is a $2\pi/3$-rotation about $ { A}x K$, in other words, provided orientations have been chosen correctly, we can identify $\pi_1({\mathcal U}) $ with a group $ {\lambda}angle P,Q,R | P^2=Q^2=R^2= (RPQ)^3 = -\rm{id}, PQ=-QP \rangle \sigmaubset {\mathcal S}L$. (For a discussion on the choice of signs, see Remark~\ref{signchoices} below.) In~\gammaammaite{akiyoshi} and other papers by the same authors, groups generated by three involutions $P,Q,R$ with $R QP$ parabolic, are used as a convenient way of parameterizing representations of once punctured tori, where the torus in question is now a two-fold cover of the orbifold with fundamental group ${\lambda}angle P,Q,R \rangle$ with quotient induced by the hyperelliptic involution. A small modification of their parameterization allows us to write down a convenient general form for a representation $\pi_1({\mathcal U}) \tauo {\mathcal S}L$, from which we obtain explicit representations of $\pi_1({\cal H}), \pi_1({\mathcal S})$ and $\pi_1({\cal T})$. This we do in the next section. 
\subsection{The basic configuration and the small coned ball}\label{sec:basicconfig1}
We start with a general construction for representations $\pi_1({\mathcal U}) \to SL(2,\mathbb C)$, that is, of subgroups $\langle P,Q,R \mid P^2=Q^2=R^2= (RPQ)^3 = -{\rm id},\, PQ=-QP \rangle \subset SL(2,\mathbb C)$. For convenience we refer to such a group (or its image in $PSL(2,\mathbb C)$) as a \emph{$(P,Q,R)$-group}.

We will make our calculations using \emph{line matrices} following \cite{Fenchel}. Note this will define representations into $SL(2,\mathbb C)$, thus fixing the signs of traces. Let $u,u' \in \hat{\mathbb C}$, and denote the oriented line from $u$ to $u'$ by $[u,u']$. The associated line matrix $M([u,u']) \in SL(2,\mathbb C)$ is a matrix which induces an order two rotation about $[u,u']$ and such that $M([u,u'])^2 = -{\rm id}$, so that in particular $$M([0,\infty]) = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}.$$ By \cite{Fenchel}, p.~64, equation (1), we have, if $u,u' \in \mathbb C$: $$ M([u,u']) = \frac{i}{u-u'} \begin{pmatrix} u+u' & -2uu' \\ 2 & -u-u' \end{pmatrix}. $$

The representation we require is derived from the basic configuration shown in Figure~\ref{fig:basicconfig}. It depends on a single parameter $\zeta \in \mathbb C$ which we will relate to the original parameter $x$ in Section~\ref{sec:solidtorus} below.

\begin{figure}[hbt]
\includegraphics[height=4cm]{Figs/PQR}
\caption{The basic configuration for the $(P,Q,R)$-group $\pi_1({\mathcal U})$.}\label{fig:basicconfig}
\end{figure}

Let $\zeta \in \mathbb C$ and let $P, Q, R \in SL(2,\mathbb C)$ be $\pi$-rotations about the oriented lines $[\zeta, -\zeta]$, $[i\zeta, -i\zeta]$ and $[1,-3]$, respectively. By construction $P^2=Q^2= R^2 = -{\rm id}$. Moreover ${\rm Ax}\, P$ and ${\rm Ax}\, Q$ intersect at the point $|\zeta|j \in \mathbb H^3$ on the hemisphere of radius $|\zeta|$ centred at $0 \in \mathbb C$, where $z + tj$ represents the point at height $t>0$ above $z \in \mathbb C$ in the upper half space model of $\mathbb H^3$. Thus $PQ = -QP$ and $PQ$ is an order $2$ rotation about the vertical axis $0 + tj$, $t>0$. Let $V$ be the vertical plane above the real axis in $\mathbb H^3$. Note that the oriented axes of the order two rotations $PQ$ and $R$ both lie in $V$, intersecting in the point $\sqrt 3 j$ at angle $\pi/3$. The line $[-\sqrt{3}i, \sqrt{3}i]$ passes through this point and is orthogonal to $V$. It follows that $RPQ = -RQP$ is an anti-clockwise rotation through $2\pi/3$ about the line $[-\sqrt{3}i, \sqrt{3}i]$. Using line matrices as above, we can now easily write down the corresponding representation in $SL(2,\mathbb C)$:
\begin{align*}
P & = M([\zeta, -\zeta]) = \frac{i}{2\zeta} \begin{pmatrix} 0 & 2\zeta^2 \\ 2 & 0 \end{pmatrix} = \begin{pmatrix} 0 & i \zeta \\ i/\zeta & 0\end{pmatrix},\\
Q & = M([i\zeta, -i\zeta]) = \frac{1}{2\zeta} \begin{pmatrix} 0 & -2\zeta^2 \\ 2 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -\zeta \\ 1/\zeta & 0 \end{pmatrix}, \\
R & = M([1, -3]) = \frac{i}{4} \begin{pmatrix} -2 & 6 \\ 2 & 2 \end{pmatrix} = \begin{pmatrix} -i/2 & 3i/2 \\ i/2 & i/2 \end{pmatrix}.
\end{align*}
Let $K = RQP$. Then $$ K = \begin{pmatrix} -1/2 & -3/2 \\ 1/2 & -1/2 \end{pmatrix}, \quad K^3 = \begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix}, $$ so that, as expected, $K$ is an anticlockwise rotation about $[-\sqrt{3}i, \sqrt{3}i]$ by $2\pi/3$.
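As a quick sanity check on these formulae, the following Python fragment (again only an illustrative sketch with our own helper names and an arbitrary sample $\zeta$, not the software used for the figures) builds $P$, $Q$, $R$ from the line-matrix formula and verifies that $P^2=Q^2=R^2=-{\rm id}$, $PQ=-QP$, and that $K=RQP$ equals the constant matrix above with $K^3={\rm id}$.
\begin{verbatim}
# Illustrative check of the line-matrix construction (notation as in the text;
# the helper names mul, scal, close, line_matrix are ours, for this sketch only).
def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def scal(s, A):
    return tuple(tuple(s*x for x in row) for row in A)

def close(A, B, eps=1e-12):
    return all(abs(a - b) < eps for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def line_matrix(u, v):
    """M([u,v]) = (i/(u-v)) [[u+v, -2uv], [2, -(u+v)]] for u, v in C."""
    s = 1j / (u - v)
    return scal(s, ((u + v, -2*u*v), (2, -(u + v))))

ID = ((1, 0), (0, 1))
zeta = 0.9 + 0.4j                       # any generic complex parameter
P = line_matrix(zeta, -zeta)            # = [[0, i zeta], [i/zeta, 0]]
Q = line_matrix(1j*zeta, -1j*zeta)      # = [[0, -zeta], [1/zeta, 0]]
R = line_matrix(1, -3)                  # = [[-i/2, 3i/2], [i/2, i/2]]
K = mul(R, mul(Q, P))                   # K = RQP, independent of zeta

assert close(mul(P, P), scal(-1, ID)) and close(mul(Q, Q), scal(-1, ID))
assert close(mul(R, R), scal(-1, ID))
assert close(mul(P, Q), scal(-1, mul(Q, P)))     # PQ = -QP
assert close(mul(K, mul(K, K)), ID)              # K^3 = id
print("K =", K)   # ((-0.5, -1.5), (0.5, -0.5)), as displayed in the text
\end{verbatim}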
Note that as matrices in $\mathrm{SL}(2,\mathbb C)$, $P^2 = Q^2 = -\mathrm{id}$ and $PQ = -QP$. As isometries of $\mathbb H^3$, the signs are irrelevant. We could have chosen $K = RPQ$, in which case $K^3 = -\mathrm{id}$, but see Remark~\ref{signchoices} below. We denote the group generated by $P,Q,R$ by $G_{{\mathcal U}}(\zeta)$ and the corresponding representation $\pi_1({\mathcal U}) \to \mathrm{SL}(2,\mathbb C)$ by $\rho_{{\mathcal U}}(\zeta)$.

\subsubsection{The large coned ball ${\mathcal S}$}\label{sec:largeball}

To relate $\pi_1({\mathcal U})$ to $\pi_1({\mathcal S})$, start with two oriented axes $A_0, A_1$ about each of which we have order $3$ anticlockwise rotations $K_0,K_1$, measured with respect to the orientation of the axes. Let $C$ denote the common perpendicular between $A_0$ and $A_1$, oriented from $A_0$ to $A_1$. We denote this configuration, which is clearly well defined up to isometry, by ${\mathcal C}{\mathcal F}$.

As described in \ref{sec:mainhandlebody}, ${\mathcal C}{\mathcal F}$ has a further $\mathbb Z_2 \times \mathbb Z_2$ group of symmetries generated by the $\pi$-rotations $P,Q$ with axes through the mid-point of $C$: precisely, let $\Pi$ be the plane through the mid-point of $C$ and orthogonal to $C$. Then the axes of $P,Q$ are the two lines in $\Pi$ which bisect the angles between the projections of $\mathrm{Ax}\, K_0, \mathrm{Ax}\, K_1$ onto $\Pi$, chosen so that the angle bisected by $\mathrm{Ax}\, P$ is that between the projections of the lines $\mathrm{Ax}\, K_0, \mathrm{Ax}\, K_1$ with the same (say outward) orientation. This choice of $P$ ensures that $PK_0P^{-1} = K_1$ while $QK_0Q^{-1} = K_1^{-1}$. Also $PQ$ is the order $2$ rotation about $C$, and $PQK_i Q^{-1}P^{-1} = K_i^{-1}$, $i = 0,1$.

As in Section~\ref{sec:mainhandlebody}, ${\mathcal U} = {\mathcal S}/(\mathbb Z_2 \times \mathbb Z_2)$ and we can take $\pi_1({\mathcal U})$ to be the $(P,Q,R)$-group defined in Section~\ref{sec:basicconfig1}. In terms of $(P,Q,R)$, the generators of $\pi_1({\mathcal S})$ are $K_0 = RQP$, $K_1 = PK_0P^{-1}$. Thus
\begin{equation*} K_0 = -\begin{pmatrix} 1/2 & 3/2 \\ -1/2 & 1/2 \end{pmatrix}, \quad K_1 =- \begin{pmatrix} 1/2 & -\zeta^2/2 \\ \frac{3}{2\zeta^2} & 1/2 \end{pmatrix}. \end{equation*}
In terms of generators for $\pi_1(\partial {\mathcal S})$, we have additionally $K_2 = QK_0Q^{-1}$, $K_3 = RK_0R^{-1}$, where
\begin{equation*} K_2 = -\begin{pmatrix} 1/2 & \zeta^2/2 \\ -\frac{3}{2\zeta^2} & 1/2 \end{pmatrix}, \quad K_3 = -\begin{pmatrix} 1/2 & -3/2 \\ 1/2 & 1/2 \end{pmatrix}, \end{equation*}
so that $K_0 K_3 K_1 K_2 = \mathrm{id}$. We denote the group generated by $K_0,K_1$ by $G_{{\mathcal S}}(\zeta)$ and the corresponding representation $\pi_1({\mathcal S}) \to \mathrm{SL}(2,\mathbb C)$ by $\rho_{{\mathcal S}}(\zeta)$. From now on, we frequently drop the subscript and refer to $K_0$ as $K$.

\subsubsection{\textbf{The handlebody ${\mathcal H}$}}\label{sec:handlebody}

Observe that the generator $X \in \pi_1({\mathcal H})$ projects to the loop $K_0K_1$ in ${\mathcal H}/\kappa$. (This latter is a loop in $\partial {\mathcal H}/\kappa$ which separates one of each pair of the cone points of $K_0,K_1$ from the other pair.) We arrange that the action of $\kappa$ is induced by conjugation by $K_0^{-1} = K^{-1}$, so the generators of $\pi_1({\mathcal H})$ can be written in terms of the generators of $\pi_1({\mathcal S})$ as $X = K_0 K_1$, $Y = K_0^{-1}XK_0 = K_1 K_0$. Thus we have:
$$ K^{-1} X K = Y, \quad K^{-1} Y K = (XY)^{-1}, \quad K^{-1} (XY)^{-1} K = X. $$
Using the formulae from the previous section, this gives
$$ X = \begin{pmatrix} \frac{9}{4\zeta^2} + \frac{1}{4} & -\frac{\zeta^2}{4} + \frac{3}{4} \\ \frac{3}{4\zeta^2} - \frac{1}{4} & \frac{\zeta^2}{4} + \frac{1}{4} \end{pmatrix},\quad Y = \begin{pmatrix} \frac{\zeta^2}{4} + \frac{1}{4} & -\frac{\zeta^2}{4} + \frac{3}{4} \\ \frac{3}{4\zeta^2} - \frac{1}{4} & \frac{9}{4\zeta^2} + \frac{1}{4} \end{pmatrix}. $$
In particular this reveals the relation between the parameters $\zeta$ and $x$:
\begin{equation}\label{eqn:paramreln} x= \tr X = \tr Y = \tr XY = \frac{\zeta^2}{4}+\frac{9}{4\zeta^2}+\frac{1}{2}. \end{equation}
We denote the group generated by $X,Y$ by $G_{{\mathcal H}}(\zeta)$ and the corresponding representation $\pi_1({\mathcal H}) \to \mathrm{SL}(2,\mathbb C)$ by $\rho_{{\mathcal H}}(\zeta)$; we explain in Section~\ref{sec:whichparameter} why, up to conjugation, $\rho_{{\mathcal H}}(\zeta)$ in fact depends only on $x$.

\begin{remark}\label{signchoices} \rm{In the above discussion, we made choices of sign so that $K^3 = \mathrm{id}$ and $X = K_0K_1$ (where $K=K_0$ as above). Computing the discreteness locus of a family of representations only requires looking in $\mathrm{PSL}(2,\mathbb C)$; however, for computations involving traces we need a lift to $\mathrm{SL}(2,\mathbb C)$. By~\cite{Culler}, any $\mathrm{PSL}(2,\mathbb C)$ representation of a Kleinian group can be lifted to $\mathrm{SL}(2,\mathbb C)$ provided there are no elements of order $2$; in particular this applies to $\mathrm{PSL}(2,\mathbb C)$ representations of $\pi_1({\mathcal S})$ and $\pi_1({\mathcal H})$. Since the product of the three generating loops corresponding to $X,Y,Z$ is the identity in $\pi_1({\mathcal H})$, we should make a choice of lift in which $XYZ = \mathrm{id}$ in $\mathrm{SL}(2,\mathbb C)$. We could choose the element $K$ which represents the $3$-fold symmetry $\kappa$ to be such that either $K^3 = \mathrm{id}$ or $K^3 = -\mathrm{id}$; however, since we intend to work with representations of $\pi_1({\mathcal S}) \to \mathrm{SL}(2,\mathbb C)$, we should make the choice $K^3 = \mathrm{id}$, because in the quotient orbifold ${\mathcal S}$, $K$ corresponds to a loop round an order $3$ cone axis. In the representation we have written down we achieve $K^3 = \mathrm{id}$ with the choice $K = RQP = \begin{pmatrix} -1/2 & -3/2 \\ 1/2 & -1/2 \end{pmatrix}$. It is easy to check that, taking $K^3 = \mathrm{id}$, if we let $X = K_0K_1$ we get $XYZ = \mathrm{id}$ as required, but if we choose $X = -K_0K_1$ we get $XYZ = -\mathrm{id}$, which is wrong. } \end{remark}

\subsubsection{\textbf{The singular solid torus ${\mathcal T}$.}}\label{sec:solidtorus}

Finally we discuss the associated singular solid torus ${\mathcal T}$, which is constructed in a standard way from the $(P,Q,R)$-group. We do not logically need ${\mathcal T}$ in our further development; however, as explained in Section~\ref{sec:torustree}, in practice we used ${\mathcal T}$ for computations, and moreover the interpretation of the problem in the more familiar setting of a torus with a cone point may be helpful.

The boundary $\partial {\mathcal U}$ is a sphere with $4$ cone points $x_P, x_Q, x_R$ and $x_K$ corresponding to $P,Q,R$ and $K = RQP$. Thus we can take as generators of $\pi_1({\mathcal T})$ the loop $B=PQ$ separating $x_P, x_Q$ from $x_R, x_K$, and the loop $A = RQ$ separating $x_R, x_Q$ from $x_P, x_K$. Since $P,Q$ have a common fixed point, $B$ is an order $2$ elliptic, while since the axes of $R,Q$ are (generically) disjoint, $A$ is a loxodromic whose axis extends the common perpendicular to $\mathrm{Ax}\, R$ and $\mathrm{Ax}\, Q$.
Using the formulae above for the $(P,Q,R)$-group we compute:
\begin{equation*} RQ = A = \begin{pmatrix} \frac{3i}{2\zeta} & \frac{i\zeta}{2} \\ \frac{i}{2\zeta} & -\frac{i\zeta}{2} \end{pmatrix}, \quad PQ= B = \begin{pmatrix} i & 0 \\ 0 & -i\end{pmatrix}, \end{equation*}
so that
\begin{equation}\label{eqn:torustraces} \tr A = \frac{3i}{2\zeta} - \frac{i\zeta}{2}, \quad \tr B = 0, \quad \tr AB = -\frac{\zeta}{2} - \frac{3}{2\zeta}. \end{equation}
Note that $AB = RP$ and $A^2 = -K_0K_1$, $B^2=-\mathrm{id}$. We also deduce that
\begin{equation*} ABA^{-1} B^{-1} = [A, B] = \begin{pmatrix} 1/2 & -3/2 \\ 1/2 & 1/2 \end{pmatrix}, \quad \mbox{so that } \tr [A,B] = 1. \end{equation*}
Note that $\tr A \in [-2,2]$ if and only if $|\zeta| = \sqrt 3$ or $\zeta = it$ with $1 \leq |t| \leq \sqrt 3$, justifying the above remark that generically $A$ is loxodromic. Note also that $A^2 = -K_0K_1$ is consistent with the direct computation using \eqref{eqn:torustraces} that $\tr (A^2) = -\bigl( \frac{\zeta^2}{4}+\frac{9}{4\zeta^2}+\frac{1}{2}\bigr)$. Also note that $[A, B] = -K^2$, so that the commutator is rotation by $4\pi/3$ about $\mathrm{Ax}\, K$. Since $\tr K^2 = (\tr K)^2 -2$, we find also that $\tr [A,B] = 1$ independently of the choice of sign for $K$. This is consistent with $\tr [A,B] = -2 \cos (2\pi/3)$, the sign being negative by analogy with the well known fact that for any representation of a once punctured torus group for which the commutator is parabolic, we have $\tr [A,B]= -2$. We denote the group generated by $A,B$ by $G_{{\mathcal T}}(\zeta)$ and the corresponding representation $\pi_1({\mathcal T}) \to \mathrm{SL}(2,\mathbb C)$ by $\rho_{{\mathcal T}}(\zeta)$.

\begin{remark}\label{signchoices1} \rm{Once again there are questions of sign, which this time are a little more subtle. If $\alpha \in \mathrm{PSL}(2,\mathbb C)$ corresponds to an element of order $2$ in $\pi_1(M)$, then the corresponding representation cannot be lifted to $\mathrm{SL}(2,\mathbb C)$, because if $\alpha \in \mathrm{SL}(2,\mathbb C)$ projects to a non-trivial involution in $\mathrm{PSL}(2,\mathbb C)$ then necessarily $\alpha^2 = -\mathrm{id}$; see~\cite{kra} and \cite{Culler}. Since in $\pi_1({\mathcal T})$ the element $B^2$ is trivial, a $\mathrm{PSL}(2,\mathbb C)$ representation of $\pi_1({\mathcal T})$ cannot be lifted to $\mathrm{SL}(2,\mathbb C)$. Nevertheless, we can as above write down a group in $\mathrm{SL}(2,\mathbb C)$ which projects to a $\mathrm{PSL}(2,\mathbb C)$ representation for $\pi_1({\mathcal T})$. See Section~\ref{sec:torustree} for further discussion on this point. } \end{remark}

\subsubsection{More on the configuration for the large coned ball ${\mathcal S}$}\label{sec:morelargeball}

The relation~\eqref{eqn:paramreln} can be given a geometrical interpretation in terms of the perpendicular distance between the axes of $K_0,K_1$, which sheds light on the symmetries of the configuration ${\mathcal C}{\mathcal F}$ of Section~\ref{sec:largeball}. To measure complex distance, we use the conventions spelled out in detail in~\cite{ser-wolp}, Section 2.1. The signed complex distance ${\bf d}_{\alpha} (L_1,L_2)$ between two oriented lines $L_1,L_2$ along their oriented common perpendicular $\alpha$ is defined as follows. The signed real distance $d_{\alpha} (L_1,L_2)$ is the positive real hyperbolic distance between $L_1,L_2$ if $\alpha$ is oriented from $L_1$ to $L_2$, and its negative otherwise.
Let ${\bf v}_i$, $i=1,2$, be unit vectors along $L_i$ at the points $L_i \cap \alpha$, and let ${\bf w}_1$ be the parallel translate of ${\bf v}_1$ along $\alpha$ to the point $\alpha \cap L_2$. Then ${\bf d}_{\alpha} (L_1,L_2)= d_{\alpha} (L_1,L_2) + i\theta$, where $\theta$ is the angle, mod $2\pi$, from ${\bf w}_1$ to ${\bf v}_2$ measured anticlockwise in the plane spanned by ${\bf w}_1$ and ${\bf v}_2$ and oriented by $\alpha$.

Let $\sigma$ be the signed complex distance from the \emph{oriented} axis $\mathrm{Ax}\, K_0$ to the oriented axis $\mathrm{Ax}\, K_1$, measured along the common perpendicular $C$ oriented from $\mathrm{Ax}\, K_0$ to $\mathrm{Ax}\, K_1$. Then $\mathrm{Ax}\, K_0$, $\mathrm{Ax}\, K_1$ together with $\mathrm{Ax}\, K_0K_1$ form the alternate sides of a right angled skew hexagon whose other three sides are the common perpendiculars between the three axes taken in pairs. The cosine formula gives $\sigma$ in terms of the complex half translation lengths $\lambda_0, \lambda_1, \lambda_2$ of $K_0, K_1$ and $K_0K_1$ respectively. To get the sides oriented consistently round the hexagon we have to reverse the orientation of $\mathrm{Ax}\, K_0$, so that the complex distance $\sigma$ should be replaced by $\sigma' = \sigma+ i \pi$ and $\lambda_0$ by $\lambda'_0 = -\lambda_0$, see~\cite{ser-wolp}, so the formula gives
$$ \cosh \sigma' = \frac{\cosh \lambda_2 - \cosh \lambda'_0 \cosh \lambda_1}{ \sinh \lambda'_0 \sinh \lambda_1} .$$
As in~\ref{sec:handlebody}, we have $X = K_0K_1$, so $x = \tr K_0K_1 = 2 \cosh \lambda_2$, while for $i = 0,1$ we have $\cosh \lambda_i = \cos 2\pi/3 = -1/2$ and $\sinh \lambda_i = i\sin 2\pi/3 = i \sqrt 3/2$. (Note that since $K_0, K_1$ are conjugate we should take $\lambda_0= \lambda_1$, so the possible additive ambiguity of $i\pi$ in the definition of the $\lambda_i$ does not change the resulting equation.) Substituting, we find
\begin{equation}\label{eqn:cxdist} -\cosh \sigma = \frac{x/2 - 1/4}{( \sqrt {3}/2)^2} = \frac{2x-1}{3}.\end{equation}

We can also relate $\sigma$ directly to our parameter $\zeta$. By construction $\mathrm{Ax}\, K_0$ is the oriented line $[-\sqrt 3 i, \sqrt 3i]$, while $K_1 = PK_0P^{-1}$, so that $\mathrm{Ax}\, K_1$ is the oriented line $[ i\zeta^2/\sqrt 3, -i\zeta^2/\sqrt 3]$ and $C$ is the oriented line from $\infty$ to $0$. Thus the real part of the hyperbolic distance from $\mathrm{Ax}\, K_0$ to $\mathrm{Ax}\, K_1$ is $2\log (\sqrt 3 /|\zeta|)$ and the anticlockwise angle, measured in the plane oriented \emph{downwards} along the vertical axis $C$, is $-(\pi + 2\arg \zeta)$. Hence
$$\sigma = 2\log (\sqrt 3 /|\zeta|) - 2i\arg \zeta - i\pi= 2 \log \frac{\sqrt 3}{\zeta} - i\pi.$$
Comparing to~\eqref{eqn:cxdist}, we find
$$ \biggl [ \biggl(\frac{\sqrt 3}{\zeta}\biggr)^2 + \biggl(\frac{\zeta} {\sqrt 3}\biggr)^2\biggr ] = 2 \cosh (\sigma + i \pi) =\frac{2(2x-1)}{3}$$
or
\begin{equation}\label{eqn:paramreln1} x -1/2 = \frac{3}{4} \biggl [ \biggl(\frac{\sqrt 3}{\zeta}\biggr)^2 + \biggl(\frac{\zeta} {\sqrt 3}\biggr)^2\biggr ], \end{equation}
recovering and giving a more satisfactory geometrical meaning to~\eqref{eqn:paramreln}.
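The identities \eqref{eqn:paramreln}, \eqref{eqn:torustraces} and \eqref{eqn:paramreln1} can be verified symbolically. The following Python/\texttt{sympy} sketch is our own check (it is not code associated with the paper): it rebuilds $K_0=RQP$, $K_1=PK_0P^{-1}$, $A=RQ$ and $B=PQ$ from the matrices of Section~\ref{sec:basicconfig1} and confirms the trace relations, including the algebraic form $x-\tfrac12 = \tfrac34\bigl[(\sqrt3/\zeta)^2+(\zeta/\sqrt3)^2\bigr]$ of \eqref{eqn:paramreln1}.

\begin{verbatim}
# Symbolic verification of (paramreln), (torustraces) and (paramreln1).
# Illustrative sketch only, not code accompanying the paper.
import sympy as sp

z = sp.symbols('zeta', nonzero=True)
i = sp.I

P = sp.Matrix([[0, i*z], [i/z, 0]])
Q = sp.Matrix([[0, -z], [1/z, 0]])
R = sp.Matrix([[-i/2, 3*i/2], [i/2, i/2]])

K0 = R*Q*P                     # K_0 = RQP
K1 = P*K0*P.inv()              # K_1 = P K_0 P^{-1}
X, A, B = K0*K1, R*Q, P*Q

x = sp.simplify(X.trace())
# (paramreln): x = zeta^2/4 + 9/(4 zeta^2) + 1/2
assert sp.simplify(x - (z**2/4 + 9/(4*z**2) + sp.Rational(1, 2))) == 0
# (torustraces), together with tr[A,B] = 1
assert sp.simplify(A.trace() - (3*i/(2*z) - i*z/2)) == 0
assert B.trace() == 0
assert sp.simplify((A*B*A.inv()*B.inv()).trace()) == 1
# (paramreln1): x - 1/2 = (3/4) * [ (sqrt3/zeta)^2 + (zeta/sqrt3)^2 ]
assert sp.simplify(x - sp.Rational(1, 2)
                   - sp.Rational(3, 4)*(3/z**2 + z**2/3)) == 0
\end{verbatim}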
\sigmaubsubsection{Dependence on $x$ versus $\xi$.}{\lambda}abel{sec:whichparameter} It is not perhaps immediately obvious why the groups $G_{{\mathcal S}}(\xi), G_{{\cal H}}(\xi)$ as defined above depend up to conjugation only on our original parameter $x$. This is clarified by the above discussion, because up to conjugation $G_{{\mathcal S}}(\xi)$ depends only on the configuration ${ \mathcal C}{ \mathcal F}$ and hence on $\sigmaigma$ which is related to $x$ as in~\eqref{eqn:cxdist}. An alternative way to see this is the discussion on computing traces in Section~\ref{sec:traces}. Thus from now on, we shall alternatively write $G_{{\mathcal S}}(x), G_{{\cal H}}(x)$ in place of $G_{{\mathcal S}}(\xi), G_{{\cal H}}(\xi)$. \sigmaubsubsection{Symmetries}{\lambda}abel{sec:symmetries} The discussion in~\ref{sec:morelargeball} gives insight into various symmetries of the parameters $x$ and $\zeta$. Equation~\eqref{eqn:paramreln} shows that the map $ \zeta \mapsto x$ is a $4$-fold covering with branch points at $ \zeta = \pm\sigmaqrt{3}, \zeta = \pm i\sigmaqrt{3}$ and $z = 0,{\bf i}nfty$. Correspondingly, we have a Klein $4$-group $\mathbb Z_2 \tauimes \mathbb Z_2$ of symmetries which change $\zeta$ but not $x$: \betaegin{enumerate} {\bf i}tem replacing $\zeta$ by $-\zeta$ leaves the basic construction unchanged but the line matrices defining $P,Q$ change sign. {\bf i}tem replacing $\zeta$ by $-3/\zeta$ is an order $2$ rotation about the axis $[-\sigmaqrt 3i, \sigmaqrt 3 i]$. This fixes $K_0$ and moves $K_1$ into a position on the opposite side of $K_0$ along the vertical line $C$. This changes nothing other than the position we choose for the basic configuration in Section~\ref{sec:basicconfig1}. Note however that the line matrices defining $P,Q$ change sign. \end{enumerate} There is also a symmetry which changes $x$ as well as $\zeta$. Say we fix the orientation of one of the two axes ${ A}x K_0, { A}x K_1 $ while reversing the other. On the level of the configuration ${ \mathcal C} { \mathcal F}$ from~\ref{sec:largeball}, this interchanges $P$ and $Q$. Since $PK_0P^{-1} = K_1$ while $QK_0Q^{-1} = K_1^{-1}$, this is equivalent to fixing the orientation of one of the two axes $ { A}x K_0 , { A}x K_1 $ while reversing the other. This symmetry interchanges the marked group $P,Q, R$ with the marked group $Q,P, R$, so that one group is discrete if and only if so is the other. In terms of our parameters, the complex distance $\sigmaigma$ between the axes changes to $ \sigmaigma + i\pi$, so that $\gammaammaosh \sigmaigma \mapsto - \gammaammaosh \sigmaigma$ giving the symmetry $(x -1/2) \mapsto -(x -1/2)$ of \eqref{eqn:cxdist}. Note that the diagonal slice of the Bowditch set $ \mathcal Delta \gammaammaap { \mathcal B}$ does not possess this symmetry. Interchanging $P$ and $Q$ is induced by the map $ \zeta \mapsto i\zeta$; more precisely this map sends $P$ to $Q$ and $Q$ to $-P$. This clearly induces the same symmetry in equation \eqref{eqn:paramreln}. Note that by the definition, in this symmetry $R$ remains unchanged. On the level of the torus group $\pi_1({\cal T})$, we have by definition $RQ= A$, $PQ=B$ so that $AB = RP$. Thus sending $P$ to $Q$ and $Q$ to $-P$ while fixing $R$ sends $B$ to $-B$ and $A$ to $-AB$. (Recall that on the level of matrices, $PQ = -QP$.) The symmetry should therefore replace the trace triple $({\cal T}r A, {\cal T}r B, {\cal T}r AB)$ by the triple $(-{\cal T}r AB, -{\cal T}r B, {\cal T}r A)$. 
It is easily checked from~\eqref{eqn:torustraces} that this is exactly the change effected by $ \zeta \mapsto i\zeta$. Finally, we have the symmetry of complex conjugation induced by $ x \tauo \betaar x$ or equivalently $ \zeta \mapsto \betaar \zeta$. This sends $\sigmaigma \mapsto \betaar \sigmaigma$ thus replacing $G_{{\cal H}}(x)$ by a conjugate group in which the distance between $ { A}x K_0 $ and $ { A}x K_1 $ is unchanged but the angle measured along their common perpendicular changes sign. Clearly these are different groups but one is discrete if and only if the same is true of the other. The diagonal slice of the Bowditch set obviously also enjoys the symmetry by conjugation, however, that is its only symmetry. In particular $(x,x,x) \tauo (-x,-x,-x)$ is not a symmetry and the corresponding ${\mathcal S}L$ representations project to different representations of $F_2$ into ${\cal P}SL$. This is because any two distinct lifts of a representation from ${\cal P}SL$ to ${\mathcal S}L$ differ by multiplying exactly two of the parameters $x,y,z$ by $-1$. The allowed replacement $X \tauo -X$ and $Y \tauo -Y$ gives the group $(-x,-x,x)$ with parameters which are not in the diagonal slice $ \mathcal Delta$. The symmetries can be seen in our plots by comparing Figure~\ref{Diagonal-BQ}, the Bowditch set for the triple $\phi_{(x,x,x)}$ in the $x$-plane, with the right hand frame of Figure~\ref{fig:BQ-sets-comparison}, which shows the same set in the $\zeta$-plane. Note the symmetry of complex conjugation in both pictures. In addition, Figure~\ref{fig:BQ-sets-comparison} is invariant under the maps $ \zeta \mapsto -\zeta$ and $ \zeta \mapsto -3/\zeta$, neither of which are seen in Figure~\ref{Diagonal-BQ}. Thus the upper half plane in Figure~\ref{fig:BQ-sets-comparison} is a $4$-fold covering of the upper half plane in Figure~\ref{Diagonal-BQ}: as is easily checked from~\eqref{eqn:paramreln}, the imaginary axis in Figure~\ref{fig:BQ-sets-comparison} maps to the negative real axis in Figure~\ref{Diagonal-BQ} while the real axis in Figure~\ref{fig:BQ-sets-comparison} maps to the positive real axis in Figure~\ref{Diagonal-BQ}. In particular, note the following branch points and special values: if $x=3$, then $\zeta=\pm 1, \pm 3$; if $x=2$, then $\zeta=\pm\sigmaqrt{3}$; if $x=-1$, then $\zeta=\pm \sigmaqrt{3}i$; if $x=-2$, then $\zeta=\pm i, \pm 3i$. Finally, the symmetry $(x -1/2) \mapsto -(x -1/2) $ is not visible in either picture because it does not preserve the property of lying in the Bowditch set. As we shall see later, this symmetry is visible in pictures of the discreteness locus, see the right hand frame of Figure~\ref{Figs/Riley-Ray-BQ} below. \sigmaection{Discreteness} {\lambda}abel{sec:discrete} We now turn to the question of finding those values of the parameter $x$ for which the group ${\lambda}angle X,Y \rangle$ is free and discrete. Let $ \mathcal D_{{\mathcal S}}, \mathcal D_{{\cal H}} \sigmaubset \mathbb C$ denote the subsets of the complex $x$-plane on which the groups $G_{{\mathcal S}}(x), G_{{\cal H}}(x)$ respectively are discrete and the corresponding representations are faithful, so that in particular, $G_{{\cal H}}(x)$ is free. (See Section~\ref{sec:whichparameter} for the replacement of $G_{{\mathcal S}}(\xi), G_{{\cal H}}(\xi)$ by $G_{{\mathcal S}}(x), G_{{\cal H}}(x)$.) We first show that $ \mathcal D_{{\mathcal S}}= \mathcal D_{{\cal H}} $. 
We begin with the easy observation that since all the groups in Section~\ref{sec:basicconfig} are commensurable, they are either all discrete or all non-discrete together: \betaegin{lemma} {\lambda}abel{lem:commensurable} Suppose that $G, H$ are subgroups of ${\cal P}SL$ with $G \sigmaupset H$ and that $[G:H]$ is finite. Then $G$ is discrete if and only if the same is true of $H$.\end{lemma} \betaegin{proof} If $G$ is discrete, clearly so is $H$. Suppose that $H$ is discrete but $G$ is not. Then infinitely many distinct orbit points in $G \gammaammadot O$ accumulate in some compact set $D \sigmaubset \mathbb H^3$. Label the cosets of $[G:H]$ as $g_1H {\lambda}dots g_kH$. Then for some $i$ there are infinitely many points $g_i h_r \gammaammadot O {\bf i}n D$, which gives infinitely many distinct points $h_r {\bf i}n g_i^{-1}D$. This contradicts discreteness of $H$. \end{proof} \betaegin{lemma} {\lambda}abel{lem:faithfultogether} The representation $\rho_{{\mathcal S}}(x) \gammaammao \pi_1({\mathcal S}) \tauo G_{{\mathcal S}}(x)$ is faithful if and only if the same is true of $ \rho_{{\cal H}}(x) \gammaammao \pi_1({\cal H}) \tauo G_{{\cal H}}(x)$. \end{lemma} \betaegin{proof} Note that $\pi_1({\mathcal S})$ is isomorphic to $\mathbb Z/3\mathbb Z * \mathbb Z/3\mathbb Z = <k_0, k_1 | k_0^3 = k_1^3 = \rm{id}>$, while $\pi_1({\cal H})$ is the subgroup of $\pi_1({\mathcal S})$ generated by $k_0 k_1$ and $k_1 k_0$ and is isomorphic to a free group of rank $2$. By construction, $\rho_{{\cal H}}(x)$ is the restriction of $\rho_{{\mathcal S}}(x)$ to $\pi_1({\cal H})$. Thus, if $\rho_{{\mathcal S}}(x)$ is faithful, then so is $\rho_{{\cal H}}(x)$. Now $\pi_1({\cal H})$ has index three in $\pi_1({\mathcal S})$ and $ \pi_1({\mathcal S}) = \pi_1({\cal H}) \gammaammaup k_0 \pi_1({\cal H}) \gammaammaup k_0^{-1} \pi_1({\cal H})$. Suppose that $\rho_{{\cal H}}(x)$ is faithful but $\rho_{{\mathcal S}}(x)$ is not. Then, there exists $g {\bf i}n \pi_1(S)$ such that $\rho_{{\mathcal S}}(x)(g) = \rm{id}$. Now $g = k_0^e h$, where $e=\pm 1$ and $h {\bf i}n \pi_1({\cal H})$. Thus $ \mbox{\rm{id} }= \rho_{{\mathcal S}}(x)(g) = \rho_{{\mathcal S}}(x)(k_0^e) \rho_{{\cal H}}(x)(h) $ so that $\rho_{{\cal H}}(x)(h^3) = \rho_{{\mathcal S}}(x)(k_0^{-3e}) = \rm{id}$ contradicting the assumption that $\rho_{{\cal H}}(x)$ is faithful. \end{proof} \betaegin{corollary} {\lambda}abel{discretetogether} The representations $\rho_{{\mathcal S}}(x), \rho_{{\cal H}}(x)$ are faithful and discrete together, that is, $ \mathcal D_{{\mathcal S}}= \mathcal D_{{\cal H}} $. \end{corollary} Thus we may write $ \mathcal D = \mathcal D_{{\mathcal S}}= \mathcal D_{{\cal H}} $. Our aim is this section is to find $ \mathcal D \sigmaubset \mathbb C$. \sigmaubsubsection{Fundamental domains}{\lambda}abel{sec:dirichlet} We can make a rough estimate for $ \mathcal D $ by exhibiting a fundamental domain for $G_{{\mathcal S}}(x)$ for sufficiently large $x$. \betaegin{prop} {\lambda}abel{prop:dirichlet}Writing $x = u + iv$, the region $ \mathcal D$ contains the region outside the ellipse $(2u-1)^2/25 + v^2/4 = 1$ in the $x$-plane. \end{prop} \betaegin{proof} In view of Corollary~\ref{discretetogether}, we can work with the large cone manifold ${\mathcal S}$ with generators $K_0, K_1$ of Section~\ref{sec:largeball}. By construction, $P$, which conjugates $K_0$ to $K_1$, is $\pi$-rotation about $0+ j |\zeta|$ in the upper half space $\mathbb H^3$, which is the mid-point of the common perpendicular between the axes of $K_0$ and $ K_1$. 
The axis of $K_0$ is the line through $ j \sigmaqrt 3$ perpendicular to the real axis. Let $H, H'$ be the hemispheres which meet $\mathbb R$ orthogonally at points $ -3, 1$ and $-1,3$ respectively, and let $E, E'$ be the closed half spaces they cut out which contain infinity. Then $H, H'$ intersect in $ { A}x K_0 $, moreover the complement in $\mathbb H^3$ of $E \gammaammaup E'$ is a fundamental domain for the group ${\lambda}angle K_0\rangle$ acting on $\mathbb H^3$. Likewise the images of $H,H'$ under $P$ meet along $ { A}x K_1 $ and the complement of $P(E ) \gammaammaup P(E')$ is a fundamental domain for ${\lambda}angle K_1\rangle$. Since $P(z) = \zeta^2/z, z {\bf i}n \mathbb C$, $P(H),P( H')$ meet $\mathbb R$ orthogonally in points $-\zeta^2/3, \zeta^2$ and $\zeta^2/3, -\zeta^2$ respectively. Thus if $|\zeta| < 1$ then the circle of radius $|\zeta|$ separates the regions $ E \gammaammaap E'$ and $P(E ) \gammaammaap P(E')$. We conclude by Poincar\'e's theorem (or a suitable simple version of the Klein-Maskit combination theorem) that in this situation ${\lambda}angle K_0, K_1\rangle$ is discrete with presentation ${\lambda}angle K_0, K_1| K_0^3 = K_1^3 = \rm {id}\rangle$. Thus the representation $\rho_{{\mathcal S}}(x)$ with $ x = x(\zeta)$ as in~\eqref{eqn:paramreln}, is faithful, and hence $x {\bf i}n \mathcal D$. Suppose that $\zeta = e^{i\phi}$. Then from \eqref{eqn:cxdist}, $(2x-1)/3 = \gammaammaosh \sigmaigma = 1/2( e^{2i \phi}/3 + 3 e^{-2i \phi} )$ so that $x = u+iv$ lies on the ellipse $(2u-1)^2/25 + v^2/4 = 1$ as claimed. \end{proof} \betaegin{figure}[ht] {\bf i}ncludegraphics[width=6cm]{Figs/dirichletgeneral} \gammaammaaption{The shaded region illustrates the fundamental domain for $\pi_1({\mathcal S})$ acting in its regular set in $\hat {\mathbb C}$ when $|\zeta| >1$, so that $x$ is outside the ellipse of Proposition~\ref{prop:dirichlet}.}{\lambda}abel{fig:dirichletgeneral} \end{figure} The configuration when $x {\bf i}n \mathbb R$ is of particular interest since in this case $G_{{\cal H}}(x)$ is Fuchsian. The ellipse meets the real axis in points $-2,3$ so that $G_{{\cal H}}(x)$ is discrete and the representation is faithful on $({\bf i}nfty, -2]$ (corresponding to $|\zeta| >1, \zeta{\bf i}n i\mathbb R$) and $[3,{\bf i}nfty)$ (corresponding to $|\zeta| >1, \zeta {\bf i}n \mathbb R$). In these two cases the fundamental domains look the same, see Figure~\ref{fig:fuchsianconfigs}. Note that the interval $(-2,3)$ is definitely not in $ \mathcal D$: if $ -2 < x < 2$ then $K_0K_1 $ is elliptic since $x= {\cal T}r K_0K_1 $, while if $ -1 < x < 3$ then $K_0K_1^{-1} $ is elliptic since ${\cal T}r K_0K_1^{-1} = 1-x $, see also Section~\ref{sec:fuchsian}. In the general case, a fundamental domain can be found by a modification of Wada's program OPTi ~\gammaammaite{OPTi}. This program allows one to compute the limit set and fundamental domains for the $PQR$-group $G_{{\mathcal U}}$. A short python code for doing this is available at \url{http://vivaldi.ics.nara-wu.ac.jp/~yamasita/DiagonalSlice/} \sigmaubsection{The method of pleating rays} {\lambda}abel{sec:pleating} To determine $ \mathcal D$, we use the Keen-Series method of pleating rays applied to the large coned sphere ${\mathcal S}$. This is closely analogous to the problem of computing the Riley slice of Schottky space, that is the parameter space of free discrete groups generated by two parabolics, which was solved in~\gammaammaite{ksriley,koms}. We begin by briefly summarising the elements of pleating ray theory we need. 
For more details see various of the first author's papers, for example~\gammaammaite{ksriley, chois}. Suppose that $G \sigmaubset {\mathcal S}L$ is a geometrically finite Kleinian group with corresponding orbifold $M = \mathbb H^3/G$ and let ${ \mathcal C}/G$ be its convex core, where ${ \mathcal C}$ is the convex hull in $\mathbb H^3$ of the limit set of $G$, see~\gammaammaite{EpM}. Then $\deltad { \mathcal C}/G$ is a convex pleated surface (see for example~\gammaammaite{EpM}) also homeomorphic to $\deltad M$. The bending of this pleated surface is recorded by means of a measured geodesic lamination, the \emph{bending lamination} $\beta = \betaeta(G)$, whose support forms the bending lines of the surface and whose transverse measure records the total bending angle along short transversals. We say $\betaeta $ is \emph{rational} if it is supported on closed curves: note that closed curves in the support of $\betaeta $ are necessarily simple and pairwise disjoint. If a bending line is represented by a curve $\gammaamma {\bf i}n \pi_1({\mathcal S})$, then by definition it is the projection of a geodesic axis to $\deltad { \mathcal C}/G$, so in particular $\betaeta$ contains no peripheral curves in its support. Note that any two homotopically distinct non-peripheral simple closed curves on $\deltad {\mathcal S}$ intersect. Thus in this case, $\betaeta $ is rational only if its support is a single simple essential non-peripheral closed curve on $\deltad { \mathcal C}/G$. As above, we parameterise representations $ \rho_{{\mathcal S}}(x) \gammaammao \pi_1({\mathcal S}) \tauo {\mathcal S}L$ by $x {\bf i}n \mathbb C$ and denote the image group by $G_{{\mathcal S}}(x)$. From now on, we frequently write $ \rho_{x}$ for $ \rho_{{\mathcal S}}(x)$. \betaegin{definition} Let $\gammaamma$ be a homotopy class of simple essential non-peripheral closed curves on $\deltad {\mathcal S}$. The \emph{pleating ray} $ {\cal P}_{\gammaamma}$ of $\gammaamma$ is the set of points $ x {\bf i}n \mathcal D$ for which $ \beta(G_{{\mathcal S}}(x)) = \gammaamma$. \end{definition} Such rays are called \emph{rational pleating rays}; a similar definition can be made for general projective classes of bending lamination, see~\gammaammaite{chois}. The following key lemma is proved in~\gammaammaite{chois} Proposition 4.1, see also~\gammaammaite{kstop} Lemma 4.6. The essence is that because the two flat pieces of $\deltad { \mathcal C}/G$ on either side of a bending line are invariant under translation along the line, the translation can have no rotational part. \betaegin{lemma} {\lambda}abel{lemma:realtrace} If the axis of $g {\bf i}n G$ is a bending line of $\deltad { \mathcal C}/G_{{\mathcal S}}(x)$, then ${\cal T}r g(x) {\bf i}n \mathbb R$. \end{lemma} Notice that the lemma applies even when the bending angle $\tauh_{\gamma}$ along $\gamma$ vanishes, so the corresponding surface is flat, or when the angle is $\pi$, in which case either $\gammaamma$ is parabolic or $G_{{\mathcal S}}(x)$ is Fuchsian. If $g {\bf i}n G$ represents a curve $\gammaamma$ on $\deltad {\mathcal S}$, define the \emph{real trace locus} $\mathbb R_{\gamma}$ of $\gamma$ to be the locus of points in $\mathbb C$ for which ${\cal T}r g {\bf i}n (-{\bf i}nfty, -2] \gammaammaup [2,{\bf i}nfty)$. By the above lemma, ${\cal P}_{\gamma} \sigmaubset \mathbb R_{\gamma}$. Our aim is to compute the locus of faithful discrete representations $ \mathcal D_{{\mathcal S}}= \mathcal D$. 
In summary, we do this as follows:
\begin{enumerate}
\item Show that, up to homotopy in ${\mathcal S}$, the essential non-peripheral curves on $\partial {\mathcal S}$ are indexed by $\mathbb Q/\!\sim$, where $p/q \sim \pm (p+ 2kq)/q$, $k \in \mathbb Z$. (Proposition~\ref{curveequality}.)
\item Given $\gamma \in \pi_1(\partial{\mathcal S})$, give an algorithm for computing $\tr \rho_x(\gamma)$ as a polynomial in $x$, in particular identifying its two highest order terms in terms of $p,q$. (Section~\ref{sec:traces} and Proposition~\ref{prop:tracepoly}.)
\item Show that ${\mathcal P}_{0/1} = (-\infty, -2]$ and ${\mathcal P}_{1/1} = [3, \infty)$ (where ${\mathcal P}_{p/q}$ denotes the pleating ray of the curve $\gamma_{p/q} \in \pi_1(\partial{\mathcal S})$ identified with $p/q$). (Section~\ref{sec:fuchsian}.)
\item Show that ${\mathcal P}_{p/q}$ is a union of connected non-singular branches of $\mathbb R_{\gamma}$. (Theorem~\ref{thm:nonsing}.)
\item For $p/q \neq 0/1, 1/1$, identify ${\mathcal P}_{p/q}$ by showing it has two connected components, namely the branches of $\mathbb R_{\gamma}$ which are asymptotic to the directions $e^{\pm i \pi (p/q + 1)}$ as $|x| \to \infty$. (Proposition~\ref{prop:connectivity}.)
\item Prove that the rational rays ${\mathcal P}_{p/q}$ are dense in $\mathcal D_{{\mathcal S}}$. (Theorem~\ref{thm:density}.)
\end{enumerate}
One could carry all this out following almost word for word the arguments in~\cite{ksriley}. Rather than do this, we indicate as appropriate how more general results can be put together to provide a somewhat less ad hoc proof of the results. The claim that ${\mathcal P}_{p/q}$ has two connected components appears to contradict the results in~\cite{ksriley}; see, however, the following remark and Proposition~\ref{prop:connectivity} below. The pleating rays are shown on the left in Figure~\ref{Figs/Riley-Ray-BQ}, with the Riley slice rays from~\cite{ksriley} on the right for comparison.

\begin{remark}\label{Komori issue} \rm{There were two rather subtle errors in~\cite{ksriley}. The first was that, in the enumeration of curves on $\partial {\mathcal S}$, we omitted to note that $\gamma_{p/q}$ is homotopic to $\gamma_{-p/q}$ in ${\mathcal S}$. The second was that we found only one of the two components of ${\mathcal P}_{p/q}$. Since ${\mathcal P}_{p/q} = {\mathcal P}_{-p/q}$, these two errors in some sense cancelled each other out. They were discussed at length and resolved in~\cite{koms}, and we make corresponding corrections here.} \end{remark}

\begin{figure}[ht]
\includegraphics[width=5cm]{Figs/Riley-mu-3-Ray-BQ}\hspace{1cm}
\includegraphics[width=5cm]{Figs/Riley-Ray-BQ}
\caption{Left: Pleating rays for $G_{{\mathcal S}}(x)$. Right: Pleating rays for the Riley slice as described in~\cite{ksriley}. The coloured (grey) regions are the Bowditch sets for the initial triples discussed in~\ref{sec:discussion}; conjecturally these coincide with the closure of the regions filled by the pleating rays. For a discussion of how the rays were actually computed, see Section~\ref{sec:torustree1}.}\label{Figs/Riley-Ray-BQ}
\end{figure}

\subsection{Step 1: Enumeration of curves on $\partial {\mathcal S}$}\label{sec:enumeration}

We need to enumerate essential non-peripheral unoriented simple curves on $\partial {\mathcal S}$ up to homotopy equivalence in ${\mathcal S}$.
As is well known, such curves on $\deltad {\mathcal S}$ are, up to homotopy equivalence in $\deltad {\mathcal S}$, in bijective correspondence with lines of rational slope in the plane, that is, with $\mathbb Q \gammaammaup {\bf i}nfty$, see for example~\gammaammaite{ksriley, koms}. For $(p,q) $ relatively prime and $q \gammaeq0$, denote the class corresponding to $p/q$ by $\gamma_{p/q}$. We have: \betaegin{proposition}[\gammaammaite{koms} Theorem 1.2] {\lambda}abel{curveequality}The unoriented curves $\gammaamma_{p/q}, \gammaamma_{p'/q'}$ are homotopic in ${\mathcal S}$ if and only if $p'/q' = \pm p/q + 2k, k {\bf i}n \mathbb Z$. \end{proposition} Missing the identification $\gammaamma_{p/q} \sigmaim \gammaamma_{-p/q}$ was the first of the two errors in~\gammaammaite{ksriley} referred to in Remark~\ref{Komori issue}. Before proving the proposition, we need to explain the identification of curves on $\deltad S$ with $\mathbb Q \gammaammaup {\bf i}nfty$. In~\gammaammaite{ksriley, koms} this was done using the plane punctured at integer points as an intermediate covering between $\deltad S$ and its universal cover. The idea is sketched in Section~\ref{sec:correspondence}. Here we give a slightly different description of the curve $\gammaamma_{p/q} $ which leads to a nice proof of the above result. Cut ${\mathcal S}$ into two halves along the meridian disk $m$ which is the plane which perpendicularly bisects the common perpendicular $C$ to the two singular axes ${ A}x K_i , i=0,1$. Each half is a ball $\hat B_i$ with a singular axis ${ A}x K_i $. The boundary $\deltad B_i = \deltad \hat B_i \gammaammaap \deltad {\mathcal S}$ is a sphere with two cone points and a hole $\deltad m$. Since the axes of $K_i$ are oriented, we can distinguish one cone point on each $\deltad B_i$ as the positive end of ${ A}x K_i $. Now $\deltad {\mathcal S}$ has a hyperbolic structure inherited from the ordinary set (or from the pleated surface structure on $\deltad { \mathcal C}/G_{{\mathcal S}}(x)$), in which $\deltad m$ is geodesic. With respect to such a structure, each $\deltad B_i$ has a reflectional symmetry ${\bf i}ota$ in the plane containing ${ A}x K_i $ and $C$, which maps the cone points to themselves, is an involution on $\deltad m$ with two fixed points which maps the `front' to the `back' as shown in Figure~\ref{fig:fronttoback}. There is a preferred base point $P_i$ on $\deltad m$, namely the foot of the perpendicular from the negative end of ${ A}x K_i $ to $\deltad m$. \betaegin{figure}[ht] {\bf i}ncludegraphics[width=8cm]{Figs/fronttoback} \gammaammaaption{The arrangement of arcs on $\deltad {\mathcal S}$. The curve shown illustrates the case $p=1, q=3$.}{\lambda}abel{fig:fronttoback} \end{figure} Let $\gamma$ be an essential non-peripheral simple curve on $\deltad {\mathcal S}$. For each $i= 0,1$, $\gamma \gammaammaap \deltad B_i$ consists of $q$ arcs joining $\deltad m$ to itself. Start with the strands of $\gamma$ arranged symmetrically with respect to ${\bf i}ota$, that is, with front to back symmetry. Orient $\deltad m$ so that it points `upwards' on the front side of the figure, noticing that ${\bf i}ota$ reverses the orientation. Lifting $\deltad m $ to its cyclic cover $\mathbb R$, enumerate in order the endpoints $X^{k}_i, i=0,1; k {\bf i}n \mathbb Z$ of arcs of $\gammaamma$ starting (say) with the arc meeting $\deltad m$ nearest $P_i$, and so that increasing order is in the direction of the upwards orientation of $\deltad m$ viewed from the front side in the figure. 
Since in fact $X^{k}_i = X^{k+2q}_i $, the enumeration is really $\rm{mod} \ 2q\mathbb Z$. To reconstruct $\gamma$ we have to join the endpoints $X^{k}_0$ on $\deltad B_0$ to the endpoints $X^{k'}_1$ on $\deltad B_1$. Since the arcs have to be matched in order round $\deltad m$, if $X^i_0$ is joined to $X^j_1$ then $X^{i+k}_0$ is joined to $X^{j+k}_1$ for all $k {\bf i}n \mathbb Z$. Set $ p = j-i$. It is not hard to see that the resulting curve $\gammaamma_{p/q}$ is connected if and only if $(p,q)$ are relatively prime. Note that with this description, $\deltad m$ is the curve $q=0$, that is $\gamma_{1/0}$. The curve $\gamma_{0/1}$ is the curve $K_0K_1$ and $\gamma_{1/1} = K_0K_1^{-1}$. We leave it to the reader to see that this description is the same as that obtained from the lattice picture in~\gammaammaite{ksriley}, see also Section~\ref{sec:correspondence}. {\sigmac Proof of Proposition~\ref{curveequality}} Write $\gammaamma_{p/q} \sigmaim \gammaamma_{p'/q'}$ to indicate that $\gammaamma_{p/q} , \gammaamma_{p'/q'}$ are homotopic in ${\mathcal S}$. Since Dehn twisting round $\deltad m$ is trivial in ${\mathcal S}$ and sends $X^k_i \tauo X^{k+2q}_{i}$, we have $\gammaamma_{p/q} \sigmaim \gammaamma_{p/q +2}$. To see why $\gammaamma_{p/q} \sigmaim \gammaamma_{-p/q}$, we proceed as follows. Consider the arrangement shown in Figure~\ref{fig:fronttoback}, in which $B_0$ is on the left and $\deltad m$ is oriented `upwards'. Each $\deltad B_i$ has a natural `front' and `back' which are interchanged by the involution ${\bf i}ota$. We will prove the result by arranging $\gammaamma$ on $\deltad S$ in two different ways. Start with the strands of $\gamma$ arranged symmetrically front to back, that is, with respect to ${\bf i}ota$, as on the two sides of Figure~\ref{fig:fronttoback}. (1) Homotope all the arcs of $\gammaamma \gammaammaap \deltad B_i$ on $\deltad B_i$ by dragging the endpoints on the back side around one or other cusp so that they meet $\deltad m$ on the front, see Figure~\ref{fig:pullround}. In this case, the choice $p>0$ means that on an arc approaching $\deltad m$ from $\deltad B_0$ one turns left (up) for $p$ slots before joining up arcs. \betaegin{figure}[ht] {\bf i}ncludegraphics[width=4cm]{Figs/pullround} \gammaammaaption{Curves pulled round to the front of $B_0$ ready to be joined as in (1) of the proof of Proposition~\ref{curveequality}.}{\lambda}abel{fig:pullround} \end{figure} (2) Now for the second arrangement. Homotope all the arcs of $\gammaamma \gammaammaap \deltad B_i$ on $\deltad B_i$, by dragging the endpoints on the front side so that they meet $\deltad m$ on the back of $\deltad B_i$. Notice turning left for $p$ slots on the back side, when viewed through ${\mathcal S}$ from the front to the back, looks the same as turning right (down) for $p$ slots on the front side. Now, starting from situation (2), homotope $\gammaamma$ in ${\mathcal S}$ by pulling the arcs meeting $\deltad m$ on the back side through $m$ so that they meet $\deltad m$ on the front side keeping their `horizontal' level fixed, so that an endpoint $X$ moves to ${\bf i}ota(X)$. The endpoints on the front side now have the same up-down order as they did on the back side. This shows that the curve obtained connecting the arcs moving up $p$ slots is homotopic in ${\mathcal S}$ to the curve obtained by moving down $p$ slots, as claimed. 
We will show that if $p'/q' \neq \pm p/q + 2k$, $k \in \mathbb Z$, then $\gamma_{p/q} \nsim \gamma_{p'/q'}$ after computing traces; see Corollary~\ref{curveinequality}. \qed

\subsection{Step 2: Computation of traces}\label{sec:traces}

Let $V_{p/q} = \rho_x(\gamma_{p/q}) \in \mathrm{SL}(2,\mathbb C)$, where, since we want to compute $\tr V_{p/q}$, we only need consider $V_{p/q}$ up to cyclic permutation and inversion. Rather than using the associated torus tree, we will work directly with a $4$-holed sphere $\Sigma_{0,4}$ and the associated tree as described in~\cite{MPT}, see also \cite{goldman}. Let $\alpha, \beta, \gamma,\delta$ denote loops round the four holes, oriented so that $\alpha \beta \gamma \delta = \mathrm{id}$. The fundamental group is identified with the free group $F_3$ with generators $\alpha, \beta, \gamma$. A representation $\rho\colon F_3 \to \mathrm{SL}(2,\mathbb C)$ is determined up to conjugation by its values on seven elements as follows (where we use $\hat w$ in place of $w$ in \cite{MPT} etc.\ to distinguish it from a variable $w$ already in other use):
$$ \tr \rho(\alpha) = a, \quad \tr \rho(\beta) = b, \quad \tr \rho(\gamma) = c, \quad \tr \rho(\delta) = d,$$
$$ \tr \rho(\alpha\beta) = \hat x, \quad \tr \rho(\beta\gamma) = \hat y, \quad \tr \rho(\gamma\alpha) = \hat z,$$
related by the equation
\begin{equation} \label{conedspheretraces} \hat x^2 +\hat y^2 +\hat z^2 +\hat x\hat y\hat z = \hat p\hat x + \hat q\hat y+\hat r\hat z + \hat s,\end{equation}
where
$$ \hat p = ab + cd, \quad \hat q = bc+ad, \quad \hat r = ac+bd, \quad \hat s = 4-a^2 -b^2 - c^2 - d^2 - abcd.$$
We identify our generators $K_i$ as $\alpha = K_0$, $\beta = K_1$, $\gamma = K_2$, $\delta = K_3$. Thus we find:
$$ a = b = c = d = -1, \quad \hat x = \tr K_0K_1 = x, \quad \hat y = \tr K_1K_2 = 2, \quad \hat z = \tr K_2K_0 = -x+1.$$
As a check, it is easy to verify that the trace identity \eqref{conedspheretraces} holds. Notice that none of the expressions $\hat p, \ldots, \hat z$ depend on the sign choices made in Section~\ref{sec:handlebody}.

The traces can be arranged in a trivalent tree in the usual way. As explained above, we have $\gamma_{0/1} = K_0K_1$, $\gamma_{1/0} = \mathrm{id}$, $\gamma_{1/1} =K_0K_1^{-1}$. As explained in~\cite{MPT} Section 2.10, there are now $3$ moves, depending on the values of $\hat p,\hat q, \hat r$. In our case $\hat p=\hat q= \hat r = 2$, so the three moves described there coincide. Following~\cite{MPT}, if $u,v,w$ are the labels round a vertex, with $v,w$ the labels adjacent along a common edge $e$, then the label at the vertex at the opposite end of $e$ is $u' = 2-vw-u$; compare Figure~\ref{fig:markofftree}, in which $u' = vw-u$. Clearly this procedure gives an algorithm for arranging curves and computing traces on a trivalent tree by analogy with that described in Section~\ref{sec:markoff}. Curves generated in this way inherit a natural labelling from the usual procedure of Farey addition as described in Section~\ref{sec:markoff}. Denote the curve which inherits the label $p/q$ by $\delta_{p/q}$; we say this curve is in \emph{Farey position $p/q$} on the tree. We shall refer to this tree, together with its new rule for computing traces, as the ${\mathcal S}$-tree, to distinguish it from the Markoff tree of Section~\ref{sec:markoff}.
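The vertex rule $u' = 2-vw-u$, combined with Farey addition, gives a simple recursive procedure for computing $\tr V_{p/q}$ as a polynomial in $x$. The following Python/\texttt{sympy} sketch is our own illustration of that procedure (it is neither the program of~\cite{MPT} nor the code used to produce the figures): it descends the Farey tree from the initial triple $\tr V_{1/0} = 2$, $\tr V_{0/1} = x$, $\tr V_{1/1} = 1-x$, and, as a consistency check, confirms the top two terms $(-1)^{p-q-1}(x^q - px^{q-1})$ asserted in Proposition~\ref{prop:tracepoly} below for a few sample fractions.

\begin{verbatim}
# tr V_{p/q} as a polynomial in x via the S-tree rule u' = 2 - v*w - u.
# Illustrative sketch only, not code accompanying the paper.
from fractions import Fraction
import sympy as sp

x = sp.symbols('x')

def trace_V(p, q):
    """tr V_{p/q} for 0 <= p/q <= 1 in lowest terms; other values follow
    from tr V_{p/q} = tr V_{-p/q} = tr V_{(p+2q)/q}."""
    target = Fraction(p, q)
    left, right = Fraction(0, 1), Fraction(1, 1)
    t_left, t_right, t_diff = x, 1 - x, sp.Integer(2)  # traces at 0/1, 1/1, 1/0
    if target == left:
        return t_left
    if target == right:
        return t_right
    while True:
        med = Fraction(left.numerator + right.numerator,
                       left.denominator + right.denominator)
        t_med = sp.expand(2 - t_left*t_right - t_diff)   # vertex rule
        if target == med:
            return t_med
        if target < med:   # descend into the left Farey interval
            right, t_diff, t_right = med, t_right, t_med
        else:              # descend into the right Farey interval
            left, t_diff, t_left = med, t_left, t_med

# sanity check of the top two terms in Proposition prop:tracepoly
for p, q in [(1, 2), (1, 3), (2, 3), (3, 5)]:
    poly = sp.Poly(trace_V(p, q), x)
    sign = -1 if (p - q - 1) % 2 else 1
    assert poly.degree() == q
    assert poly.coeff_monomial(x**q) == sign
    assert poly.coeff_monomial(x**(q - 1)) == -sign*p
\end{verbatim}

For example, the sketch returns $\tr V_{1/2} = x^2-x$ and $\tr V_{2/3} = x^3-2x^2+2$.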
We need to show that $\deltaelta_{p/q}$ is the same as the curve $\gammaamma_{p/q}$ described in the previous section, namely the curve with $2q$ intersections with the meridian $\deltad m$ and a twist by $p$. \betaegin{lemma} {\lambda}abel{lem:treeisok} With the above notation, $\delta_{p/q} = \gamma_{p/q}$.\end{lemma} \betaegin{proof} (Sketch) By definition we have $\delta_{p/q} = \gamma_{p/q}$ for $p,q {\bf i}n \{0, 1 \}$. With the notation above, these are the curves $\alpha \beta, \beta \gamma, \gamma \alpha$, each of which separates the punctures in pairs. Call two essential simple non-peripheral curves on $\deltad {\mathcal S}$ \emph{neighbours} if they intersect exactly twice. Note that of the initial triple, each pair adjacent along an initial edge are neighbours, so that the triple round the initial vertex are neighbours in pairs. Now we check inductively that this is always the case. Give a pair of neighbours $\delta, \delta'$ along an edge, the remaining curves at the vertices at the opposite ends of this edge are obtained by Dehn twisting $\delta$ about $\delta'$ (or vice versa) in each of the two possible directions, see~\gammaammaite{MPT}. Thus for example $\delta_{1/1}$ is obtained by Dehn twisting $\delta_{0/1}$ about $\delta_{1/0}$ while $\delta_{-1/1}$ is obtained by Dehn twisting $\delta_{0/1}$ about $\delta_{1/0}$ in the opposite direction. Moreover if $\delta, \delta'$ are neighbours, then so are the pairs $\delta, D^{\pm}_{\delta}( \delta')$ and $\delta', D^{\pm}_{\delta}( \delta')$, where $D_{\delta}( \delta')$ denotes a Dehn twist of $\delta'$ about $\delta$ and by abuse of notation we write $D^{+}, D^{-}$ for $D, D^{-1}$ respectively. Now we show inductively that $\delta_{p/q} = \gamma_{p/q}$. Suppose that it is true for neighbours $p/q,r/s$ where $|ps-rq| = 1$. By induction we may assume that $\delta_{p/q} , \delta_{r/s}$ are adjacent along an edge $e$ of the tree. The two remaining curves $D^{\pm}_{\delta_{p/q}}(\delta_{r/s})$ at the vertices at the ends of $e$ result from Dehn twisting $\delta_{p/q}$ about $\delta_{r/s}$ in each of the two possible directions, hence are also neighbours. From the inductive hypothesis, we can assume that one of these two curves is $\delta_{p-r/q-s} = \gamma_{p-r/q-s}$ or $\delta_{r-p/s-q} = \gamma_{r-p/s-q}$ depending on whether $ ps-rq = \pm 1$. In accordance with the Farey labelling system, the curve at the other vertex is $\delta_{p+r/q+s} $. \betaegin{figure}[ht] {\bf i}ncludegraphics[width=8cm]{Figs/surgery} \gammaammaaption{Here $\gamma_{1/3}$ and $\gamma_{1/2}$ are surgered to give $\gamma_{2/5}$, see the proof of Lemma~\ref{lem:treeisok}. The inset circles show the direction of surgery. }{\lambda}abel{fig:surgery} \end{figure} Now the curves $D^{\pm}_{\delta_{p/q}}(\delta_{r/s})$ can be found by surgery. On each $B_i$, arrange both curves symmetrically with respect to the front and back of ${\mathcal S}$ as described above, then join the strands so that they have minimal intersection as in Figure~\ref{fig:surgery}. With $\gamma_{p/q}$ in this position, its twist $p$ is its intersection number with the geodesic joining the two positive cone points of the axes $K_i$, and likewise for $\gamma_{r/s}$. To perform the Dehn twist we have to cut the curves at their intersection points and then make a consistent choice of which direction to rejoin the resulting arcs. One of the two choices will give a curve with $2(q+s)$ intersection points with the meridian $\deltad m$. 
Clearly the curve with the `positive' surgery (see the inset circles in Figure~\ref{fig:surgery}) will have intersection number $p+r$ with this line, and hence is the curve $\gamma_{(p+r)/(q+s)}$. Since we already know the curve at the other vertex is $\delta_{(r-p)/(s-q)} =\gamma_{(p-r)/(q-s)}$ (or $\gamma_{(r-p)/(s-q)}$), this shows that $\delta_{(p+r)/(q+s)} = \gamma_{(p+r)/(q+s)}$. This completes the proof. \end{proof}

\begin{prop}\label{prop:tracepoly} Let $V_{p/q}(x) = \rho_{{\mathcal S}}(x) (\gamma_{p/q})$ as above. Then:
\begin{enumerate}
\item $\tr V_{p/q}$ is a polynomial in $x$ whose top two terms are $(-1)^{p-q-1} (x^{q} - px^{q-1} )$.
\item $\tr V_{p/q} = \tr V_{(p/q) + 2} = \tr V_{-p/q}$.
\item $\tr V_{p/q} (x) = \tr V_{(p+q)/q} (1-x)$.
\end{enumerate}
\end{prop}

\begin{remark}\rm{(1) should be compared to~\cite{ksriley} Corollary 4.3, in which we showed that the leading term is of the form $(-1)^{p-q-1}c x^q$ for some $c>0$; see also the remark following the corollary in that paper. }\end{remark}

\begin{proof} (1) Note that (1) holds for the three initial traces of $\gamma_{0/1}, \gamma_{1/0}, \gamma_{1/1}$. If curves $\gamma_{p/q}, \gamma_{r/s}$ are adjacent along an edge, then the two curves at the remaining vertices at the ends of the edge are $\gamma_{(p\pm r)/(q \pm s)}$. The result then follows easily by induction on the tree.

(2) This follows immediately from Proposition~\ref{curveequality} and can also be proved easily by looking at the symmetries of the ${\mathcal S}$-tree.

(3) This results from the symmetry $x \mapsto 1-x$ which interchanges $\gamma_{0/1}, \gamma_{1/1}$. \end{proof}

Now we can prove the `only if' assertion of Proposition~\ref{curveequality}:

\begin{corollary}\label{curveinequality} If $p'/q' \neq \pm p/q + 2k$, $k \in \mathbb Z$, then $\gamma_{p/q} \not\sim \gamma_{p'/q'}$. \end{corollary}

\begin{proof} This follows immediately by comparing the top two terms of $\tr V_{p/q}$ and $\tr V_{p'/q'}$. \end{proof}

\subsection{Step 3. The exceptional Fuchsian case: computation of ${\mathcal P}_{0/1}, {\mathcal P}_{1/1}$}\label{sec:fuchsian}

As above, let ${\mathcal P}_{p/q}$ denote the pleating ray of $\gamma_{p/q}$. The rays ${\mathcal P}_{0/1}, {\mathcal P}_{1/1}$ are exceptional. We have $\tr \gamma_{0/1}= x$, $\tr \gamma_{1/1} = 1-x$. Thus the real locus for both trace polynomials is exactly the real axis, and on this locus the group $G_{{\mathcal S}}(x)$, if discrete, is Fuchsian. This is exactly the situation discussed in~\cite{ksriley}, p.~84. In the ball model of $\mathbb H^3$, identify the extended real axis with the equatorial circle. Since the limit set is contained in $\hat{\mathbb R}$, the convex core (the Nielsen region) of $G_{{\mathcal S}}(x)$ is contained in the equatorial plane. We can think of the convex core as having been squashed flat, so that the bending lines are just the boundary of the Nielsen region, that is, the boundary of the surface $\mathbb H^2/G_{{\mathcal S}}(x)$. Thus to find the bending lamination we just have to determine the boundary of $\mathbb H^2/G_{{\mathcal S}}(x)$.

Now if $x \in \mathbb R$ then either $\zeta \in \mathbb R$ and $x>0$, or $\zeta \in i\mathbb R$ and $x<0$. In both cases, we find a fundamental domain for $G_{{\mathcal S}}(x)$ as described in~\ref{sec:dirichlet}, see Figure~\ref{fig:fuchsianconfigs}.
Thus regarded as a Fuchsian group acting on the upper half plane $\mathbb H$, $G_{{\mathcal S}}(x)$ represents a sphere with two order $3$ cone points and one hole. However the cases $ x<0, x>0$ are slightly different, because of the relative directions of rotation of $K_0$ and $K_1$. In both cases, the axis $K_0$ has fixed points $\pm i \sigmaqrt 3$ and its axis is oriented so that it is anticlockwise rotation about $ i \sigmaqrt 3$. Thus $K_1 = PK_0P^{-1}$ rotates anticlockwise about $P( i \sigmaqrt 3) = -i \zeta ^2/\sigmaqrt 3 $. If $x<0$ then $P( i \sigmaqrt 3)$ is in the upper half plane $\mathbb H$ while if $x>0$ then $P( i \sigmaqrt 3)$ is in the lower half plane. Hence if $x<0$ then $K_0, K_1$ rotate in the same sense about their fixed points in $\mathbb H$ while if $x>0$ their rotation directions are opposite. This leads to the two different configurations shown in Figure~\ref{fig:fuchsianconfigs}. As is easily checked, if $x>0$ the boundary of the hole is thus $K_0K_1^{-1}$ while if $x<0$ the boundary of the hole is $K_0K_1$. Since $K_0K_1 = \gammaamma_{0/1}$ and $K_0K_1^{-1} = \gammaamma_{1/1}$, combining this with information about the discreteness locus in the Fuchsian case from~\ref{sec:dirichlet}, we conclude that ${\cal P}_{0/1} = (-{\bf i}nfty, -2]$ and $ {\cal P}_{1/1}= [3, {\bf i}nfty)$. \betaegin{figure}[ht] {\bf i}ncludegraphics[width=6cm]{Figs/dirichletxleq0} {\bf i}ncludegraphics[width=6cm]{Figs/dirichletxgeq0} \gammaammaaption{Configurations for $x {\bf i}n \mathbb R$. Left: $\zeta {\bf i}n i\mathbb R, x{\lambda}eq -2$. $K_0$ and $ K_1$ rotate in the same directions $\mathbb H$ and the hole is $K_0K_1$. Right: $\zeta {\bf i}n \mathbb R, x\gammaeq 3$. $K_0$ and $ K_1$ rotate in opposite directions in $\mathbb H$ and the hole is $K_0K_1^{-1}$.}{\lambda}abel{fig:fuchsianconfigs} \end{figure} \sigmaubsection {Step 4. Non-singularity of pleating rays}{\lambda}abel{sec:nonsingular} This is the part of the argument which contains the deepest mathematics. Fortunately the results needed have been proved elsewhere. \betaegin{theorem}[\gammaammaite{ksqf, ksriley, chois}] {\lambda}abel{thm:nonsing} Suppose that $\gammaamma {\bf i}n \pi_1({\mathcal S})$. Then ${\cal P}_{\gammaamma}$ is open and closed in the real trace locus $\mathbb R_{\gamma}$. Moreover ${\cal T}r \rho_x(\gamma)$ is a local coordinate for $\mathbb C$ in a neighbourhood of ${\cal P}_{\gammaamma}$, and is a global coordinate for ${\cal P}_{\gammaamma}$ on any non-empty connected component of ${\cal P}_{\gammaamma}$. \end{theorem} \betaegin{proof} The statement that ${\cal P}_{\gammaamma}$ is open in $\mathbb R_{\gamma}$ is essentially \gammaammaite{ksriley} Proposition 3.1, see also \gammaammaite{ksqf} Theorems 15 and 26. The fact that ${\cal T}r \rho_x(\gamma)$ is a local parameter is equivalent to the fact, also proved in both \gammaammaite{ksriley} and \gammaammaite{kstop}, that ${\cal P}_{\gammaamma}$ is a non-singular $1$-manifold. The open-ness and the final statement are actually a special case of Theorems B and C of \gammaammaite{chois} which state that for general hyperbolic manifolds, if the support of the bending lamination is a union of closed curves is rational, then the traces of these curves are local parameters for the deformation space in a neighbourhood of the corresponding pleating variety. That ${\cal P}_{\gammaamma}$ is closed in $\mathbb R_{\gamma}$ can be proved as in~\gammaammaite{ksriley} Theorem 3.7. Here is a slightly more sophisticated version of the same idea. 
Suppose $x_n \tauo x_{{\bf i}nfty}$ with $x_n {\bf i}n {\cal P}_{\gammaamma}$. The limit group $G_{{\mathcal S}}(x_{{\bf i}nfty})$ is an algebraic limit of groups $G_{{\mathcal S}}(x_n)$ and hence the corresponding representation is discrete and faithful. Each of the two components of $(\deltad { \mathcal C}/G_{{\mathcal S}}(x_n)) \sigmaetminus \gamma $ is a flat surface corresponding to a conjugacy class of Fuchsian subgroup $F_j(x_n), j=1,2$ (the $F$-peripheral subgroups of~\gammaammaite{ksriley}). Since the limit is algebraic, $F_j(x_n)$ limits on a Fuchsian subgroup $F_j(x_{{\bf i}nfty})$, and similarly for all its conjugates in $G_{{\mathcal S}}(x_{{\bf i}nfty})$. The limit sets ${\cal L}ambda_{\alphalpha}$ of each of these subgroups $F_{\alphalpha}$ is spanned by a hyperbolic plane $H_{\alpha}$ in $\mathbb H^3$. The Nielsen regions of $F_{\alphalpha}$ in $H_{\alpha}$ fit together along the lifts of the bending line $\gamma$ to $\mathbb H^3$, forming a pleated surface ${\cal P}i$ in $\mathbb H^3$. We claim that ${\cal P}i = \deltad { \mathcal C} (x_{{\bf i}nfty})$. This follows since the closure of the union of the ${\cal L}ambda_{\alpha}$ is the limit set of $G(x_{{\bf i}nfty})$, see also Proposition 7.2 in~\gammaammaite{kshowtobend}. The result follows. \end{proof} \betaegin{remark} \rm{The closure of ${\cal P}_{\gamma}$ in $\mathbb R_{\gamma}$ is a simple case of both the `local limit theorem', Theorem 15 in~\gammaammaite{ksqf} and the `lemme de fermeture' of~\gammaammaite{BonO}. These much more sophisticated results allow that the bending lines may be part of an irrational lamination. Our argument above, in which the bending lamination is supported on closed curves, is very close to that in the first part of the proof of Theor{\`e}me 6 in~\gammaammaite{otal}. } \end{remark} \betaegin{corollary}[\gammaammaite{kstop, ksriley, chois}] If ${\cal P}_{\gammaamma} \neq \emptyset$, then it is a union of connected non-singular branches of the real trace locus $\mathbb R_{ \gammaamma}$. \end{corollary} \betaegin{proof} Suppose that ${\cal P}_{\gammaamma} \neq \emptyset$ and let $x {\bf i}n {\cal P}_{\gammaamma}$, so that by Lemma~\ref{lemma:realtrace}, $x {\bf i}n \mathbb R_{\gamma}$. By Theorem~\ref{thm:nonsing}, ${\cal P}_{\gamma}$ is open and closed in $\mathbb R_{\gamma}$. Since ${\cal T}r \gammaamma$ is a local coordinate, in a neighbourhood of $x$ the locus $\mathbb R_{\gamma}$ is a $1$-manifold. \end{proof} Notice that the theorem says that ${\cal T}r \rho_x(\gamma)$ is a local parameter even in a neighbourhood of a cusp where $ \rho_x(\gamma)$ is parabolic,~\gammaammaite{ chois} Theorem C. Thus we have \betaegin{corollary} Suppose that $ x {\bf i}n {\cal P}_{\gammaamma}$. Then there is a neighbourhood of $x$ in $\mathbb C$ on which $x {\bf i}n \mathbb R_{\gamma}$ implies that $ x {\bf i}n \mathcal D$. \end{corollary} \betaegin{corollary}{\lambda}abel{cor:unbounded} If ${\cal P}_{\gammaamma} \neq \emptyset$, then ${\cal T}r \rho_x(\gamma)$ is unbounded on ${\cal P}_{\gammaamma}$. \end{corollary} \betaegin{proof} Since ${\cal T}r \rho_x(\gamma)$ is a local coordinate on connected components of ${\cal P}_{\gammaamma}$, this follows from the maximum principle on the branch, see~\gammaammaite{ksriley} Theorem 4.1. \end{proof} \sigmaubsection{Step 5. Finding the non-empty pleating rays} {\lambda}abel{sec:general rays} Now we determine the pleating rays. 
As above, let ${\cal P}_{p/q}$ denote the ray corresponding to the curve $\gamma_{p/q}$ and write $\mathbb R_{p/q}$ for the real locus of $\mathrm{Tr}\, V_{p/q}$. From Proposition~\ref{curveequality} we have ${\cal P}_{p/q} = {\cal P}_{(p+2q)/q}= {\cal P}_{-p/q}$. By Section~\ref{sec:nonsingular}, ${\cal P}_{p/q}$ is a union of non-singular branches of $\mathbb R_{p/q}$. We now find those $p/q \notin \{0/1, 1/1\}$ for which ${\cal P}_{p/q} \neq \emptyset$, at the same time resolving the connectivity issue. We follow the method of \cite{ksriley}, using an inductive argument on the position of the pleating rays and their asymptotic directions as $|x| \to \infty$, and at the same time correcting the second of the two errors referred to in Remark~\ref{Komori issue}. We have:
\begin{proposition}[c.f.~\cite{ksriley} Theorem 4.1]\label{prop:asympdirn} The set ${\cal P}_{p/q}$ is the union of the two branches of $\mathbb R_{p/q}$ which are asymptotic to the half lines $\rho e^{\pm i \pi (p-q)/q}$ as $\rho \to \infty$. \end{proposition}
\begin{proof} Denote by $R(\theta)$ the ray $te^{i \theta}$, $t >0$, in the $x$-plane. By Proposition~\ref{prop:tracepoly}, $\mathrm{Tr}\, V_{p/q}$ is a polynomial in $x$ whose top term is $(-1)^{p-q-1} x^q$. Now $\mathrm{Tr}\, V_{p/q}$ takes real values on ${\cal P}_{p/q}$; moreover by Corollary~\ref{cor:unbounded} it is unbounded on ${\cal P}_{p/q}$. It follows that as $|x| \to \infty$, ${\cal P}_{p/q}$ must be asymptotic to one of the rays $R(k\pi/q)$, $k \in \mathbb Z$. We have already identified ${\cal P}_{0/1}$ and ${\cal P}_{1/1}$ as the real intervals $(-\infty, -2]$ and $[3,\infty)$ respectively. It follows from Section~\ref{sec:dirichlet} that the semicircular arc from $-4$ to $4$ (say) in $\mathbb H$ is a continuous path in $\mathcal D$ from ${\cal P}_{0/1}$ to ${\cal P}_{1/1}$. Hence by the continuity theorem of~\cite{kscont}, if $0 < p/q <1$ there is a point on ${\cal P}_{p/q}$ in the upper half plane $\mathbb H$. Likewise there is a point on ${\cal P}_{p/q}$ in the lower half plane. (This was missed in~\cite{ksriley}.) Since ${\cal P}_{0/1} \cup {\cal P}_{1/1}$ separates $\mathcal D$ into two connected components, this shows in particular that ${\cal P}_{p/q}$ must have at least two connected components. Now we proceed by induction on the Farey tree. Suppose we have shown the result for two Farey neighbours $p/q, r/s$. Consider the locus ${\cal P}_{(p+r)/(q+s)}$. By the inductive hypothesis, $\mathbb H$ contains exactly one component of each of ${\cal P}_{p/q}, {\cal P}_{r/s}$, asymptotic to the rays $R(\pi (p-q)/q), R( \pi (r-s)/s)$ respectively. Exactly as in~\cite{ksriley} it is easy to check that there is exactly one integer $k \in \{0, 1, \ldots, 2(q+s)-1 \}$ for which $R(k\pi/(q+s))$ lies between $R(\pi (p-q)/q)$ and $R( \pi (r-s)/s)$, namely $k= p+r-q-s$ taken mod $2(q+s)$. By the same continuity theorem as before, a path in this sector joining suitable points on ${\cal P}_{p/q}, {\cal P}_{r/s}$ must meet ${\cal P}_{(p+r)/(q+s)}$. Thus ${\cal P}_{(p+r)/(q+s)}$ has at least one connected component asymptotic to $R( \pi (p + r -q-s)/(q+s))$. A similar argument in the lower half plane gives another connected component asymptotic to $R( \pi (p + r +q+s)/(q+s))$. Since ${\cal P}_{(p+r)/(q+s)}$ has exactly two components by Proposition~\ref{prop:connectivity} below, the result follows. \end{proof}
The issue of connectivity of ${\cal P}_{\gamma}$ is a bit subtle.
In the general theory, see~\cite{BonO, chois}, one shows that ${\cal P}_{\gamma}$ has one connected component. However this result holds in a space of manifolds which are consistently oriented throughout the space and all of whose convex cores have non-zero volume. In our case we have:
\begin{proposition} \label{prop:connectivity} If $\gamma \neq \gamma_{0/1}, \gamma_{1/1}$ and ${\cal P}_{\gamma} \neq \emptyset$, then ${\cal P}_{\gamma}$ has exactly two connected components in $\mathcal D$. \end{proposition}
\begin{proof} The usual argument that the pleating ray of a rational lamination has one connected component goes as follows. Given a point on ${\cal P}_{\gamma}$, double the convex core along its boundary to obtain a cone manifold with a singular axis of cone angle $2(\pi - \theta)$ along $\gamma$, where $\theta$ is the bending angle along $\gamma$. (Notice that the convention on defining bending angles differs between papers by the first author and~\cite{BonO}. In our convention, a bending line contained in a flat subsurface has bending angle $0$ but cone angle $2 \pi$, whereas in~\cite{BonO} the bending angle along a line in a flat surface is defined to be $\pi$.) By \cite{HK}, such a hyperbolic cone manifold is parametrized by its cone angle. One shows that one can continuously deform the cone angle to $0$, at which point the curve whose axis is the bending line has to become parabolic. The doubled manifold is an oriented hyperbolic manifold with a rank two cusp and finite volume. As long as we are working in a space in which all manifolds have consistent orientation, such a manifold is unique up to orientation preserving isometry, from which one deduces that ${\cal P}_{\gamma}$ is connected. In our case, the parameter space $\mathcal D$ is separated by two lines along which $G$ is Fuchsian, so that ${\mathcal C}(G)/G$ has zero volume and the above argument fails. Note however that, provided that $G$ is not Fuchsian, ${\mathcal S}$ can be oriented by the triple consisting of the \emph{oriented} axes of $P,Q$ and the oriented line $C$ from $\mathrm{Ax}\, K_0$ to $\mathrm{Ax}\, K_1$. The map $\zeta \to \bar \zeta$ reverses the relative orientations of $\mathrm{Ax}\, P$, $\mathrm{Ax}\, Q$ while fixing that of $C$. Thus $\mathcal D \setminus \mathbb R$ has two connected components in which ${\mathcal S}$ has naturally opposite orientations. The above argument shows that ${\cal P}_{\gamma}$ has at most one component in each component of $\mathcal D \setminus \mathbb R$. Since we have already shown in Proposition~\ref{prop:asympdirn} that ${\cal P}_{\gamma}$ has at least one component in each of the upper and lower half planes, this completes the proof. This proposition can alternatively be proved by the more \emph{ad hoc} methods used in~\cite{ksriley}. \end{proof}
\begin{remark} \rm{Proposition~\ref{prop:asympdirn} shows that ${\cal P}_{p/q} \neq \emptyset$ for all $p/q \in \mathbb Q$. This can be viewed as a special case of the general result of~\cite{BonO} Theorem 1, see also~\cite{chois} Theorem 2.4. We have to be careful to include the case, excluded in \cite{BonO}, that the group $G_{{\mathcal S}}(x)$ is Fuchsian so that ${\mathcal C}/G$ has zero volume. The conclusion is the following:
\begin{proposition} Let $\gamma$ be an essential simple non-peripheral closed curve on $\partial {\mathcal S}$.
Then ${\cal P}_{\gamma} \neq \emptyset$ if and only if $\gamma$ is non-trivial in $\pi_1({\mathcal S})$ and intersects the meridian disk $\gamma_{1/0}$ at least twice. If $\gamma$ meets $\gamma_{1/0}$ exactly twice then the bending angle is identically $\pi$ and $G_{{\mathcal S}}(x)$ is Fuchsian. \end{proposition} } \end{remark}
\subsection{Step 6. Density of rational pleating rays}
Finally, we justify the claim that the rational pleating rays are dense in $\mathcal D$:
\begin{theorem}[\cite{kstop} Corollary 6.2, \cite{ksriley} Theorem 5.2] \label{thm:density} Rational pleating rays are dense in $\mathcal D_{{\mathcal S}}$. \end{theorem}
\begin{proof} The proof of this result in any one complex dimensional parameter space is the same. Here is a quick sketch. Suppose that $\nu$ is an irrational lamination with corresponding pleating variety ${\cal P}_{\nu}$, and that $x \in {\cal P}_{\nu} \cap \mathcal D$. Pick a sequence of rational measured laminations $\nu_n = c_n \delta_{\gamma_n}$, where $c_n \in \mathbb R^+$ and $\delta_{\gamma_n}$ is the unit point mass on $\gamma_n$, so that $\nu_n \to \nu$ in the space of projective measured laminations on $\partial {\mathcal S}$. Replace the traces of $\gamma_n$ by complex length functions $\lambda_n$ and scale to get complex analytic functions $c_n \lambda_n$. One shows that in a neighbourhood of $x \in {\cal P}_{\nu}$ these functions form a normal family which converges to a non-constant analytic function (\cite{kstop} Theorem 20), whose real locus contains the pleating ray ${\cal P}_{\nu}$ (\cite{kstop} Theorem 23). By Hurwitz' theorem, there are nearby points $y$ at which the approximating functions $c_n \lambda_n$ must take on real values. In a small enough neighbourhood of $x$, this is enough to force $y \in {\cal P}_{\gamma_n}$ (\cite{kstop} Theorem 31). This gives density in $\mathrm{Int}\,\mathcal D$. By the result quoted in the introduction that $\mathcal D = \overline{\mathrm{Int}\,\mathcal D}$ we are done. \end{proof}
\subsection{The pleating rays for ${\cal H}$} \label{sec:rays}
By Corollary~\ref{discretetogether}, $\mathcal D_{{\cal H}}= \mathcal D_{{\mathcal S}}$. Thus the rational rays for $\mathcal D_{{\mathcal S}}$ are also dense in $\mathcal D_{{\cal H}}$. However it is easy to see that rational pleating laminations on $\partial {\cal H}(x)$ correspond exactly to those on ${\mathcal S}(x)$, and that although the actual bending curves differ, their traces are related by a simple formula.
\begin{lemma} \label{bendlinesagree} Suppose that the bending lamination $\beta_{{\cal H}}(x)$ of ${\cal H}(x)$ is rational, so that its support $\lambda$ is a union of disjoint simple closed curves on $\partial {\cal H}$. Let $\gamma$ be a connected component of $\lambda$. Then either $\kappa(\gamma) = \gamma$ or the three curves $\gamma, \kappa(\gamma), \kappa^2(\gamma)$ are disjoint. The support of the bending lamination $\beta_{{\mathcal S}}(x)$ is exactly the projection of $\gamma$ to ${\mathcal S}$, and all rational bending laminations of ${\mathcal S}$ arise in this way. \end{lemma}
\begin{proof} The limit set of $G_{{\cal H}}(x)$, and hence its convex core, are invariant under the symmetry $\kappa$. Hence the support $\lambda$ of $\beta_{{\cal H}}(x)$ is also $\kappa$-invariant.
Let $\gamma$ be a connected component of $\lambda$. Since connected components of $\lambda$ are pairwise disjoint, either $\kappa(\gamma) = \gamma$ or the three curves $\gamma, \kappa(\gamma), \kappa^2(\gamma)$ are disjoint. In either case, $\gamma$ cannot pass through a fixed point of $\kappa$: at the fixed point $P$ the images of $\gamma$ would meet at angles $2 \pi/3$, so that $\gamma, \kappa(\gamma), \kappa^2(\gamma)$ would intersect at $P$, which is impossible. Let $\pi_{\kappa}$ be the projection ${\cal H} \to {\mathcal S}$. In a neighbourhood of a bending line, $\pi_{\kappa}$ is a covering map, hence a local isometry. Since being a bending line can be characterised locally, $\beta_{{\mathcal S}}(x)$ is the projection of $\gamma$ to ${\mathcal S}$. Let $\gamma$ be a simple closed curve on $\partial {\mathcal S}$. Clearly, by the same observation about local characterisation of bending lines, if $\gamma$ is a bending line then so is any connected component of its lift to $\partial {\cal H}$. This proves the converse. \end{proof}
We remark that if $p/q$ is congruent to $1/0$ or $0/1$ mod $\mathbb Z_2$ then the lift of $\gamma_{p/q}$ has three connected components which are permuted among themselves by $\kappa$, while if $p/q$ is congruent to $1/1$ then its lift has one $\kappa$-invariant connected component. To see this, check by hand for the curves $\gamma_{1/0}, \gamma_{0/1}, \gamma_{1/1}$ and then note that the lifting property is invariant under the mapping class group of $\partial {\mathcal S}$, which at the same time acts transitively on $p/q$ congruence classes mod $\mathbb Z_2$. To actually compute the pleating rays for $\mathcal D_{{\mathcal S}}$, we computed the traces $\mathrm{Tr}\, V_{p/q}(x)$ corresponding to the curves $\gamma_{p/q} \in \pi_1({\mathcal S})$. The above discussion shows that it is unnecessary to compute traces of lifted curves in $\pi_1({\cal H})$. If for some reason one wanted to do this, either one could start again enumerating the curves on ${\cal H}$, or one could note that the complex length of a lift of $\gamma_{p/q}$ in ${\cal H}$ would be either the same as or three times that of the curve $\gamma_{p/q}$ in ${\mathcal S}$, depending on the $\mathbb Z_2$-parity of $p/q$.
\section{Computing traces} \label{sec:torustree}
To compute traces of the elements $V_{p/q}$, rather than use the ${\mathcal S}$-tree as in Section~\ref{sec:traces}, we actually performed computations on the Markoff tree corresponding to the associated torus ${\cal T}$ of Section~\ref{sec:solidtorus}, referred to in this section as the ${\cal T}$-tree. To justify this, we need to compare the curves in Farey position $p/q$ on the two trees to ensure that they do indeed correspond geometrically as expected. We also need to address the issue about lifting representations to $SL(2,\mathbb C)$ raised in Remark~\ref{signchoices1}.
\subsubsection{Correspondence of curves}\label{sec:correspondence}
Homotopy classes of essential simple non-peripheral loops on $\partial {\cal T}$ are well known to be in bijective correspondence with unoriented lines of rational slope in the plane, see for example~\cite{serint,kstop}.
In fact the word $W_{p/q}$ generated by the concatenation process following the ${\cal T}$-tree described in Section~\ref{sec:markoff} is the cutting sequence of a line of slope $p/q \in \hat{\mathbb Q}$ across the lattice, see~\cite{serint}.
\begin{figure}[hbt] \begin{center} \includegraphics[height=6cm]{Figs/slope} \caption{Lattice representation of a cover of $\partial {\mathcal S}$. The integer vertices (white circles) correspond to the end points of the order $3$ axes on $\partial {\mathcal S}$; the endpoints of the order $2$ elliptics $P,Q,R$ are coloured blue, green and red respectively.} \label{fig:lattice} \end{center}\end{figure}
The key point here is that the plane with a cone singularity of angle $2\pi/3$ at the integer lattice points, see Figure~\ref{fig:lattice}, is an intermediate covering between the universal cover $\mathbb H$ of $\partial {\cal T}$ and $\partial {\cal T}$ itself. As described in for example~\cite{ksriley}, the same lattice can also be viewed as an intermediate covering between $\mathbb H$ and $\partial {\mathcal S}$: the rectangle with vertices at $0, 1, 2i, 2i+1$ projects bijectively to $\partial {\mathcal S}$, while the rectangle with vertices $0, 1, i/2, 1+ i/2$ projects bijectively to $\partial {\mathcal U}$ and the unit square projects bijectively to the torus $\partial {\cal T}$. The lattice points correspond to the cone points belonging to $K_i$, $i=1, \ldots, 4$, arranged as shown. Thus there is also a bijective correspondence between lines of rational slope in the punctured plane and simple essential non-peripheral curves on $\partial {\mathcal S}$. In this way, one can easily relate the words $W_{p/q}$ (on $\partial {\cal T}$) and $V_{p/q}$ (on $\partial {\mathcal S}$); this is explained in detail in~\cite{ksriley}. In this picture, the meridian loop $\partial m$ of Section~\ref{sec:enumeration} is identified as the `vertical' line of slope $1/0$. One sees easily that the line of slope $p/q$ in the plane projects to a curve on $\partial {\mathcal S}$ which has exactly $2q$ intersections with $\partial m$ and a twist of $p$ as described in Section~\ref{sec:enumeration}. It follows from Lemma~\ref{lem:treeisok} that the labelling of curves by lines of rational slope $p/q$ exactly corresponds to the Farey labelling of curves by their position on the ${\mathcal S}$-tree. As above, the curve in Farey position $p/q$ on the ${\mathcal S}$-tree is denoted $\gamma_{p/q}$, corresponding to a word $V_{p/q}$; while the curve in Farey position $p/q$ on the ${\cal T}$-tree is denoted $\omega_{p/q}$, corresponding to a word $W_{p/q}$. Now ${\mathcal S}$ projects to ${\mathcal U}$ by a four-fold cover and ${\cal T}$ projects to ${\mathcal U}$ by a two-fold cover. Hence we have:
\begin{proposition}\label{coverings} The complex length of $\gamma_{p/q}$ is twice that of $\omega_{p/q}$, hence $\mathrm{Tr}\, V_{p/q}(\zeta) = \pm \bigl( (\mathrm{Tr}\, W_{p/q}(\zeta))^2 -2 \bigr)$. \end{proposition}
Note that this allows for an ambiguity in the signs of the traces since the two lifts of $\pi_1({\cal T})$ and $\pi_1({\mathcal S})$ to $SL(2,\mathbb C)$ are not (indeed cannot be) chosen consistently.
\begin{corollary} Up to sign, the trace of $\gamma_{p/q} \in \pi_1({\mathcal U})$ may be computed using the formula of Proposition~\ref{coverings} and the ${\cal T}$-tree.
\end{corollary}
Since we are aiming to compute pleating rays, which are a geometrical construct and hence only depend on a $PSL(2,\mathbb C)$ representation, this would be sufficient for our purposes. However it is more satisfying to prove the following more precise result, which shows that working with the $SL(2,\mathbb C)$ lift of the representation of $\pi_1({\cal T})$ described in Section~\ref{sec:solidtorus}, we can fix the choice of sign.
\begin{prop} \label{relatingtraces} With $W_{p/q}, V_{p/q}$ as above, let $f_{p/q}(\zeta) = \mathrm{Tr}\, V_{p/q}(\zeta)$ and $g_{p/q}(\zeta) = \mathrm{Tr}\, W_{p/q}(\zeta)$. Then $-f_{p/q}(\zeta) =(g_{p/q}(\zeta))^2 -2$ for all $p/q \in \hat{\mathbb Q}$. \end{prop}
\begin{proof} It is easy to check that this is correct for $p/q = 0/1, 1/0, 1/1$. In detail: $\omega_{0/1} = A$, $\gamma_{0/1} = K_0K_1$ and we have shown that $A^2 = -K_0K_1$. Thus $f_{0/1}(\zeta) =x$ and $(g_{0/1}(\zeta))^2 -2 = (-x+2)-2 = -x$. Next, $\omega_{1/0} =B$, $\gamma_{1/0} = \mathrm{id}$ and $B^2 = - \mathrm{id}$. So $f_{1/0}(\zeta) =2$ and $(g_{1/0}(\zeta))^2 -2 = -2$. Finally, $\omega_{1/1} =AB$ and $\gamma_{1/1} = K_0K_1^{-1}$. So $f_{1/1}(\zeta) =1-x$ and $(g_{1/1}(\zeta))^2 -2 = x-1$. Now we do an inductive proof. Suppose that in the ${\mathcal S}$-tree labels $u,v$ are adjacent along an edge $e$ with $w$ the remaining label at one of the two vertices at the ends of $e$. By the formula in Section~\ref{sec:traces} the label at the other vertex is $2 - uv -w$. Suppose that the corresponding labels on the ${\cal T}$-tree are $u',v',w'$. Then the remaining label at the vertex at the other end of $e$ is $u'v'-w'$. Replace these labels by the negatives of the traces of the doubled curves to get labels $2-u'^2, 2-v'^2, 2-w'^2, 2- (u'v'-w')^2$ around the same $4$ vertices. If we can show that $$2 -(2-u'^2)(2-v'^2) - (2-w'^2) = 2- (u'v'-w')^2 $$ we will be done. This is easily checked by multiplying out, noting that the trace identity~\eqref{conedspheretraces} round a vertex of the ${\cal T}$-tree gives $$ u'^2 + v'^2 + w'^2 = u'v'w' + \mathrm{Tr}\,[A,B] +2 = u'v'w' + 3.$$ \end{proof}
\subsubsection{The actual computations}\label{sec:torustree1}
The above discussion justifies the method we actually used to perform computations involving traces on ${\mathcal S}$. Instead of computing on the ${\mathcal S}$-tree with initial traces $\mathrm{Tr}\, \gamma_{0/1} = \mathrm{Tr}\, K_0K_1 = x$, $\mathrm{Tr}\, \gamma_{1/0}= \mathrm{Tr}\, \mathrm{id}= 2$, $\mathrm{Tr}\, \gamma_{1/1} = \mathrm{Tr}\, K_0K_1^{-1} = 1-x$, we used the ${\cal T}$-tree with initial triple $\mathrm{Tr}\, A = \pm i(3/\zeta - \zeta)$, $\mathrm{Tr}\, B =0$ and $\mathrm{Tr}\, AB = \pm i(3/\zeta + \zeta)$ corresponding to the generators $A,B$ of $G_{{\cal T}}$. As in Section~\ref{sec:solidtorus}, observe that $A^2 = -K_0K_1$, so that $\mathrm{Tr}\, A^2 = -x$. Since $\mathrm{Tr}\, B = 0$, we can find $\mathrm{Tr}\, AB$ from the identity $(\mathrm{Tr}\, A)^2 + (\mathrm{Tr}\, AB)^2 = \mathrm{Tr}\,[A,B] +2 = 3$. Thus setting $(a,b,c) = (\mathrm{Tr}\, A, \mathrm{Tr}\, B, \mathrm{Tr}\, AB)$ we have $$ a^2 - 2 = -x, \quad c^2 = 1 + x. $$ It is easily checked that this is in accord with~\eqref{eqn:torustraces}. Thus associated to $G_{{\cal T}}(x)$ we have the torus tree $(a,0,c) = (\sqrt{-x+2}, 0, \sqrt{x+1})$. This is the method we actually used to compute the pleating rays shown in Figure~\ref{Diagonal-and-Riley-mu-3-Ray-BQ}.
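For concreteness, here is a minimal numerical sketch in Python of this Farey descent on the ${\cal T}$-tree; it is our own illustration (function names and all) rather than the code used for the figures. It uses the initial triple $(\sqrt{-x+2},0,\sqrt{x+1})$ stated above, the vertex rule which replaces the far label $w'$ by $u'v'-w'$, and Proposition~\ref{relatingtraces} in the form $\mathrm{Tr}\, V_{p/q} = 2-(\mathrm{Tr}\, W_{p/q})^2$. The branch of the square root is an arbitrary choice here, in line with the remark that follows.
\begin{verbatim}
# Illustrative sketch (not the authors' code): compute Tr W_{p/q} by Farey
# descent on the T-tree, then Tr V_{p/q} = 2 - (Tr W_{p/q})^2.
from fractions import Fraction
from cmath import sqrt

def mediant(a, b):
    # b may be None, standing for 1/0 = infinity
    if b is None:
        return Fraction(a.numerator + 1, a.denominator)
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

def trace_W(p, q, x):
    # Tr W_{p/q} for the initial triple (sqrt(2-x), 0, sqrt(1+x));
    # an arbitrary branch of the square root is used.
    target = Fraction(p, q)
    lo, hi, mid = Fraction(0), None, Fraction(1)      # Farey triple 0/1, 1/0, 1/1
    t_lo, t_hi, t_mid = sqrt(2 - x), 0j, sqrt(1 + x)  # traces of A, B, AB
    if target == lo:
        return t_lo
    while mid != target:
        if target < mid:   # new mediant of (lo, mid); it replaces hi
            new, t_new = mediant(lo, mid), t_lo * t_mid - t_hi
            hi, t_hi = mid, t_mid
        else:              # new mediant of (mid, hi); it replaces lo
            new, t_new = mediant(mid, hi), t_mid * t_hi - t_lo
            lo, t_lo = mid, t_mid
        mid, t_mid = new, t_new
    return t_mid

def trace_V(p, q, x):
    return 2 - trace_W(p, q, x) ** 2

# Example: trace_V(1, 2, x) equals x^2 - x, whose top term x^2 matches
# the leading term (-1)^{p-q-1} x^q of Tr V_{p/q}.
\end{verbatim}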
\begin{remark} \label{rmk:signs} \rm{The sign of the square roots in the above can be uniquely determined by the formulae for traces in terms of $\xi$. What we actually did was to make an arbitrary choice and plot rays corresponding to curves in the range $0 \leq p/q \leq 1$, thus making a picture in the upper half plane which we could then reflect. As can be seen from Figure~\ref{fig:trace2}, the signs of the square roots in fact alternate periodically with period $4$ rather than period $2$, so that, for example, $\mathrm{Tr}\, \gamma_{3} = -\mathrm{Tr}\, \gamma_{1}$.} \end{remark}
\subsubsection{Computations for the Riley slice} \label{sec:discussion}
The traces needed to find the pleating rays for the Riley slice on the right in Figure~\ref{Figs/Riley-Ray-BQ} were computed by a method similar to that described above. Our parameter $x$ can be related to the parameter $\rho$ of~\cite{ksriley} by comparing the traces of the word in Farey position $0/1$: these are $K_0K_1$ in our case and $XY$ in the notation of~\cite{ksriley}. Thus we find the correct correspondence is $x \leftrightarrow \rho + 2$. For the Riley group a similar computation to the one above, with $\mathrm{Tr}\,[A,B] = -2$, gives immediately $(\mathrm{Tr}\, A)^2 = -(\rho + 2)$ and $(\mathrm{Tr}\, AB)^2 = \rho + 2$. Thus writing in terms of the $x$-coordinate we find the initial triple $(\sqrt{-x}, 0, \sqrt{x})$.
\subsubsection{Comparison of Bowditch sets}
It is interesting to compare the Bowditch sets associated to the two initial triples $(x,x,x)$ and $(\sqrt{-x+2}, 0, \sqrt{x+1})$. In the latter case, one needs to modify the definition of the Bowditch set: since $\phi(U)=0$ for some $U \in \Omega$, there is a trace preserving $\mathbb Z$-action on the associated tree $\mathbb T_{(\sqrt{-x+2}, 0, \sqrt{x+1})}$ corresponding to the action of a subgroup of ${\rm Aut}(F_2)$ generated by a parabolic, see for example \cite{tan_gd} Theorem 1.9. The Bowditch condition should actually be specified on $\Omega \setminus \{U\} / \sim$, where $\sim$ is the equivalence coming from this symmetry. The results, plotted in the $\zeta$-plane, are shown in Figure~\ref{fig:BQ-sets-comparison}. On the right the initial triple is $(x,x,x)$ (with $x$ related to $\xi$ as in \eqref{eqn:paramreln}), corresponding to the handlebody group $G_{{\cal H}}(x)$. On the left, the initial triple is $(\sqrt{-x+2}, 0, \sqrt{x+1})$, corresponding to the torus group $G_{{\cal T}}(x)$. The two regions are clearly distinct: the grey region on the right contains that on the left. Conjecturally, the left hand grey region is also the discreteness locus for the groups $G_{{\cal T}}(x)$; see Figure~\ref{Figs/Riley-Ray-BQ} for the parametrization in terms of $x$. Note the various symmetries as discussed in Section~\ref{sec:symmetries}, in particular note how Figure~\ref{Diagonal-BQ} loses the left-right reflectional symmetry seen in Figure~\ref{fig:BQ-sets-comparison}. The coloured region in Figure~\ref{Diagonal-BQ} is the same region as the right frame of Figure~\ref{fig:BQ-sets-comparison}, drawn in the $x$-plane.
\begin{figure}[ht] \includegraphics[width=4cm]{Figs/z.pdf} \hspace{1cm} \includegraphics[width=4cm]{Figs/xxx.pdf} \caption{Bowditch sets (grey) plotted in the $\zeta$-plane with range $[-4,4]\times[-4i,4i]$. Left: Initial triple $(\sqrt{-x+2}, 0, \sqrt{x+1})$ corresponding to the torus group $G_{{\cal T}}(x)$. Right: Initial triple $(x,x,x)$ corresponding to the handlebody group $G_{{\cal H}}(x)$.
The two regions are clearly distinct: the grey region on the right contains that on the left.} \label{fig:BQ-sets-comparison} \end{figure}
\bibliographystyle{alpha}
\begin{thebibliography}{000}
\bibitem{akiyoshi} H. Akiyoshi, M. Sakuma, M. Wada and Y. Yamashita. \newblock Punctured torus groups and $2$-bridge knot groups I. \newblock {\em Springer Lecture Notes in Math. 1909}. Springer, 2007.
\bibitem{BonO} F. Bonahon and J-P. Otal. \newblock Laminations mesur\'ees de plissage des vari\'et\'es hyperboliques de dimension 3. \newblock {\em Ann. of Math. 160}, 1013--1055, 2004.
\bibitem{bow_mar} B.~H. Bowditch. \newblock {M}arkoff triples and quasi-{F}uchsian groups. \newblock {\em Proc. London Math. Soc. 77}, 697--736, 1998.
\bibitem{canary} R. Canary. \newblock Pushing the boundary. \newblock In {\em In the Tradition of Ahlfors and Bers, III. Contemporary Math. Vol 355}, W. Abikoff and A.~Haas eds., AMS Publications, 109--121, 2004.
\bibitem{chois} Y. Choi and C. Series. \newblock Lengths are coordinates for convex structures. \newblock {\em J. Diff. Geom. 73}, 75--116, 2006.
\bibitem{Culler} M. Culler. \newblock Lifting representations to covering groups. \newblock {\em Advances in Math. 59}, 64--70, 1986.
\bibitem{EpM} D. B. A. Epstein and A. Marden. \newblock Convex hulls in hyperbolic space, a theorem of Sullivan, and measured pleated surfaces. \newblock In {\em Analytical and geometric aspects of hyperbolic space}, London Math. Soc. Lecture Note Ser. 111, Cambridge Univ. Press, Cambridge, 113--253, 1987.
\bibitem{Fenchel} W. Fenchel. \newblock {\em Elementary geometry in hyperbolic space}, Vol.~11 of {\em de Gruyter Studies in Mathematics}. \newblock Walter de Gruyter \& Co., Berlin, 1989.
\bibitem{goldman} W. Goldman. \newblock The modular group action on real $SL(2)$-characters of a one-holed torus. \newblock {\em Geometry and Topology 7}, 443--486, 2003.
\bibitem{goldman2} W. Goldman. \newblock Trace coordinates on Fricke spaces of some simple hyperbolic surfaces. \newblock In {\em Handbook of Teichm\"uller theory Vol. II}, IRMA Lect. Math. Theor. Phys. 13, Euro. Math. Soc., Z\"urich, 611--684, 2009.
\bibitem{GMST} W. Goldman, G. McShane, G. Stantchev and S.P. Tan. \newblock Dynamics of the automorphism group of the two-generator free group on the space of isometric actions on the hyperbolic plane. \newblock {\em In preparation}, 2014.
\bibitem{hempel} J. Hempel. \newblock {\em $3$-manifolds}. \newblock Ann. of Math. Studies 86. \newblock Princeton Univ. Press, 1976.
\bibitem{HK} C. Hodgson and S. Kerckhoff. \newblock Rigidity of hyperbolic cone-manifolds and hyperbolic Dehn surgery. \newblock {\em J. Differential Geometry 48}, 1--59, 1998.
\bibitem{kstop} L.~Keen and C.~Series. \newblock Pleating coordinates for the {M}askit embedding of the {T}eichm{\"u}ller space of punctured tori. \newblock {\em Topology 32}, 719--749, 1993.
\bibitem{ksriley} L. Keen and C. Series. \newblock The Riley slice of Schottky space. \newblock {\em Proc. London Math. Soc. 69}, 72--90, 1994.
\bibitem{kscont} L. Keen and C. Series. \newblock Continuity of convex hull boundaries. \newblock {\em Pacific J. Math. 168}, 183--206, 1995.
\bibitem{kshowtobend} L. Keen and C. Series. \newblock How to bend pairs of punctured tori. \newblock In {\em Lipa's Legacy}, J.~Dodziuk and L.~Keen eds., Contemporary Math. 211, 359--387, 1997.
\bibitem{ksqf} L.~Keen and C.~Series. \newblock Pleating invariants for punctured torus groups.
\newblock {\em Topology 43}, 447--491, 2004.
\bibitem{koms} Y. Komori and C. Series. \newblock The Riley slice revisited. \newblock In {\em The Epstein Birthday Schrift}, I.~Rivin, C.~Rourke and C.~Series eds., Geom. and Top. Monographs Vol. 1, International Press, 303--316, 1999.
\bibitem{kra} I.~Kra. \newblock On lifting Kleinian groups to $SL(2,\mathbb C)$. \newblock In {\em Differential Geometry and Complex Analysis}, I.~Chavel and H.~Farkas eds., Springer, 181--193, 1985.
\bibitem{MPT} S.~Maloni, F. Palesi and S.P. Tan. \newblock On the character variety of the four-holed sphere. \newblock {\em Groups, Geometry and Dynamics}, to appear (2014).
\bibitem{NT} S.P.K. Ng and S.P. Tan. \newblock The complement of the Bowditch space in the ${\rm SL}(2,{\mathbb C})$ character variety. \newblock {\em Osaka J. Math. 44}, 247--254, 2007.
\bibitem{otal} J-P. Otal. \newblock Sur le coeur convexe d'une vari\'et\'e hyperbolique de dimension 3. \newblock Unpublished preprint, 1994.
\bibitem{serint} C. Series. \newblock The geometry of Markoff numbers. \newblock {\em Math. Intelligencer 7}, 20--29, 1985.
\bibitem{ser-wolp} C. Series. \newblock An extension of Wolpert's derivative formula. \newblock {\em Pacific J. Math. 197}, 223--239, 2001.
\bibitem{tan_gd} S.P. Tan, Y.L. Wong and Y. Zhang. \newblock Necessary and sufficient conditions for McShane's identity and variations. \newblock {\em Geometriae Dedicata 119}, 199--217, 2006.
\bibitem{tan_gen} S.P. Tan, Y.L. Wong and Y. Zhang. \newblock Generalized {M}arkoff maps and {M}c{S}hane's identity. \newblock {\em Adv. Math. 217}, 761--813, 2008.
\bibitem{wada} M. Wada. \newblock OPTi's algorithm for discreteness determination. \newblock {\em Experimental Math.} 15:1--124, 2006.
\bibitem{OPTi} \url{http://delta.math.sci.osaka-u.ac.jp/OPTi/}
\end{thebibliography}
\end{document}
\begin{document}
\title{Sorting Short Integers\footnote{Michal Koucký and Karel Král were supported by the Grant Agency of the Czech Republic under the grant agreement no. 19-27871X. Karel Král was also partially supported by the Charles University project SVV–2020–260578.}}
\begin{abstract} We build boolean circuits of size $\mathcal{O}(nm^2)$ and depth $\mathcal{O}(\log(n) + m \log(m))$ for sorting $n$ integers, each of $m$ bits. We also build circuits, of size $\mathcal{O}(nmk (1 + \log^*(n) - \log^*(m)))$ and depth $\mathcal{O}(\log^{3}(n))$, that sort $n$ integers each of $m$ bits according to their first $k$ bits. This improves on the results of Asharov~et~al.~\cite{asharov2021sorting} and resolves some of their open questions. \end{abstract}
\section{Introduction} \label{sec:introduction}
Sorting undoubtedly plays a central role in computer science. A great many problems can be solved using sorting as a subcomponent. There are many practical variants of sorting based either on what we sort (integers, rational numbers, strings, etc.) or how we sort (in parallel, in distributed fashion, in external memory, etc.). Despite lots of research there are still many basic questions about sorting unanswered. Classical comparison-based sorting takes time $\mathcal{O}(n \log(n))$ when sorting $n$ integers. A well-known lower bound shows that this is optimal for comparison-based sorting. However, this is a great over-simplification and the picture is much more nuanced: sorting integers from a domain of size $M$ can be done using binary search trees in time $\mathcal{O}(n \log M)$, thus sorting, for example, $m$-bit integers needs only $\mathcal{O}(n m)$ comparisons. Such an algorithm can be implemented on a pointer machine, for example. In the RAM model with word size $m$ we can sort even faster: When $m=O(\log(n))$ one can sort in time $\mathcal{O}(n)$ using radix sort, and when $m=\Omega(\log^3(n))$ one can also sort in linear time using the algorithm of Andersson~\cite{andersson1998sorting}. When $m=O(\log^3(n))$ one can sort in expected time $\mathcal{O}\left(n \sqrt{\log \frac{m}{\log(n)}}\right)$ and linear space using the algorithm of Han and Thorup~\cite{han2002sorting}. It is an easy exercise to design Turing machines that sort $m$-bit integers in time $\mathcal{O}(n m^2)$. In many cryptographic applications there is an interest in oblivious algorithms, algorithms in which the sequence of the operations is independent of the processed data. Sorting plays an important role in the construction of oblivious RAM. An oblivious comparison-based parallel model of computation intended for sorting is that of \emph{sorting networks}. Numbers in a sorting network are thought of as signals which can only be compared. The seminal paper by Ajtai, Komlós, and Szemerédi~\cite{aks_1983sorting} gives an asymptotically optimal sorting network of logarithmic depth, and thus with $\mathcal{O}(n \log(n))$ comparators, matching the comparison-based lower bound. The AKS network has immense applications in theoretical computer science, and we use it in this paper, too. Another oblivious model of computation heavily used throughout theoretical computer science is the boolean circuit. One can turn the AKS sorting network into a circuit of size $\mathcal{O}(n m \log(n))$ and depth $\mathcal{O}(\log(m) \log(n))$ (see Section~\ref{sec:sorting}).
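To give a feel for how a single comparator of a sorting network turns into boolean gates (the comparator circuit is made precise in Lemma~\ref{lem:switch} below), here is a small Python sketch operating on bit lists; it is our own illustration, not part of the construction. The sequential scan corresponds to a linear-depth circuit; evaluating the same prefix logic with a balanced tree yields the logarithmic depth claimed later.
\begin{verbatim}
# Illustration only: a comparator on two m-bit numbers given as bit lists
# (most significant bit first), built from AND/OR/NOT on single bits.
def compare_exchange(x, y):
    lt, eq = 0, 1                      # lt: "x < y so far", eq: "equal so far"
    for xb, yb in zip(x, y):           # scan from the most significant bit
        lt = lt | (eq & (1 - xb) & yb)
        eq = eq & (1 - (xb ^ yb))
    # multiplex each bit: if x < y output (x, y), otherwise (y, x)
    lo = [(lt & xb) | ((1 - lt) & yb) for xb, yb in zip(x, y)]
    hi = [(lt & yb) | ((1 - lt) & xb) for xb, yb in zip(x, y)]
    return lo, hi                      # lo = min(x, y), hi = max(x, y)

# Example: compare_exchange([1,0,1], [0,1,1]) returns ([0,1,1], [1,0,1]).
\end{verbatim}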
However, when building boolean circuits for sorting it is not clear whether one can take advantage of any of the faster algorithms for RAM or Turing machines, as simulating random access memory or Turing machine tapes by circuits requires substantial overhead. Asharov~et~al.~\cite{asharov2021sorting} asked whether one can sort $m$-bit integers in time $o(n m\log(n))$ when $m=o(\log(n))$. They provide an answer to this question by constructing circuits for sorting $m$-bit integers of size $\mathcal{O}(nm^2 (1 + \log^*(n) - \log^*(m))^{2 + \varepsilon})$ and polynomial depth, for any $\varepsilon > 0$. We improve their results: We build boolean circuits for sorting $m$-bit integers of size $\mathcal{O}(nm^2)$ and depth $\mathcal{O}(\log(n) + m \log(m))$. Pending some unexpected breakthrough, this size seems optimal. The depth is provably optimal whenever $m = O(\log(n)/\log \log(n))$. Asharov~et~al.~\cite{asharov2021sorting} solve an even more general problem, as their circuits partially sort $n$ numbers each of $m$ bits by their first $k$ bits using a circuit of size $\mathcal{O}(nmk (1 + \log^*(n) - \log^*(m))^{2 + \varepsilon})$. We improve on this result as well by presenting circuits of size $\mathcal{O}(nmk (1 + \log^*(n) - \log^*(m)))$ and depth $\mathcal{O}(\log^{3}(n))$ that sort $m$-bit integers according to their first $k$ bits. Our small circuits of poly-logarithmic depth answer some of the open questions of Asharov~et~al.~\cite{asharov2021sorting}. In a work subsequent to ours, Lin and Shi~\cite{lin2021optimal} obtain circuits of depth $\mathcal{O}(\log(n) + \log(k))$ and size $\mathcal{O}(nkm \cdot \text{poly}(\log^*(n) - \log^*(m)))$ whenever $n > 2^{4k+7}$. They use a substantially different approach. We state our results in the next section.
\subsection{Our Results} \label{sec:our_results}
We provide a family of boolean circuits that sort $m$-bit strings. Our circuits are smaller than the circuits directly derived from the AKS sorting network, and they improve on the result of Asharov~et~al.~\cite{asharov2021sorting}. Our circuits achieve optimal logarithmic depth whenever $m \log(m) \leq \log(n)$. Pending some unexpected breakthrough, their size also seems optimal.
\begin{theorem} For any integers $n,m\ge 1$ there is a circuit of size $\mathcal{O}(nm^2)$ and depth $\mathcal{O}(\log(n) + m \log(m))$ that sorts $n$ integers of $m$ bits each. \label{thm:main} \end{theorem}
For $m = \Omega(\log(n))$, the existence of such a circuit follows directly from AKS sorting networks. Our contribution is the construction of such circuits for $m = o(\log(n))$. Our construction also uses a sorting network as a building block. We use the AKS sorting network as one of our primitives, but in principle we could use any sorting network or sorting circuit. In particular, we could use any circuit sorting $n$ numbers of $\log(n)$ bits each in our construction. Any improvement in the asymptotic complexity of sorting $\log(n)$-bit numbers would give us improved complexity of sorting short numbers. The main idea behind our construction is to \emph{compress} the input by computing the number of occurrences of each $m$-bit integer. This gives a vector of $2^m$ integers, each of size $\mathcal{O}(\log(n))$. Decompressing this vector back gives the sorted input. Combining the counting and decompressing circuits gives us a circuit that sorts.
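As a plain (non-circuit) illustration of this compress-and-decompress idea, the following Python sketch sorts $m$-bit integers by counting occurrences and then expanding the counts; the point of the paper is to realize both steps by small boolean circuits.
\begin{verbatim}
# Illustration only: sorting m-bit integers by counting (compress) and
# re-expanding the counts (decompress); lists and loops stand in for circuits.
def sort_by_counting(xs, m):
    counts = [0] * (1 << m)       # the compressed representation: 2^m counters
    for x in xs:                  # compress: count occurrences of each value
        counts[x] += 1
    out = []
    for value in range(1 << m):   # decompress: emit each value counts[value] times
        out.extend([value] * counts[value])
    return out

# Example: sort_by_counting([3, 0, 3, 1], 2) == [0, 1, 3, 3]
\end{verbatim}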
The main technical lemma is our counting circuit, which is of independent interest.
\begin{lemma} For any integers $n,m\ge 1$ where $m \leq \log(n) / 10$ there is a circuit $$\cktFastCount{n}{m} \fromto{n \cdot m}{\lceil 1+\log(n) \rceil 2^{m}}$$ which, given a sequence of $n$ strings of $m$ bits each, outputs the number of occurrences of each possible $m$-bit string among the inputs; that is, for input $x_1, x_2, \ldots, x_n \in \left\{ 0,1 \right\}^{m}$ it outputs $n_{0^m}, n_{0^{m-1}1}, \ldots, n_{1^m}$ where for each string $y \in \left\{ 0,1 \right\}^{m}$, $n_y\in \left\{ 0,1 \right\}^{\lceil 1+\log(n) \rceil}$ represents $\abs{ \left\{ j \in [n] \mid x_j = y \right\} }$ in binary. The circuit \cktFastCount{n}{m} has size $\mathcal{O}(nm^2)$ and depth $\mathcal{O}(\log(n) + m \log(m))$. \label{lemma:fast_count} \end{lemma}
We also provide a family of boolean circuits which sort the input integers by their first $k$ bits only. One can view this as sorting (key, value) pairs, where keys have $k$ bits and values have $m-k$ bits. For the special case of $k=1$ (that is, partially sorting the numbers by a single bit) the problem is equivalent to routing in \emph{super-concentrators} (see Section~\ref{sec:technique}), and we use the super-concentrators of Pippenger~\cite{pippenger1996self} as our building block. We get a size improvement over the result of Asharov~et~al.~\cite{asharov2021sorting} while also achieving poly-logarithmic depth.
\begin{theorem} For any integers $n,m,k\ge 1$ where $k \leq m$ and $k \leq \log(n) / 11$ there is a circuit $$\cktSort{n}{m}{k} \fromto{nm}{nm}$$ which partially sorts $n$ numbers each of $m$ bits by their first $k$ bits. The circuit \cktSort{n}{m}{k} has size $\mathcal{O}(k n m (1 + \log^*(n) - \log^*(m)))$ and depth $\mathcal{O}(\log^3(n))$. \label{thm:sort_by_k_bits} \end{theorem}
\subsection{Our Techniques} \label{sec:technique}
One can take AKS sorting networks and turn them into circuits of size $\mathcal{O}(n m \log(n))$ and depth $\mathcal{O}(\log(m) \log(n))$. For $m=o(\log(n))$ this is sub-optimal, as shown by Asharov~et~al.~\cite{asharov2021sorting}. Asharov~et~al. show how to reduce the problem of sorting $m$-bit integers according to the first $k$ bits to the problem of sorting $m$-bit integers according to just a single bit. Sorting according to a single bit is essentially equivalent to routing in super-concentrators. Super-concentrators were originally studied by Valiant with the aim of proving circuit lower bounds. A super-concentrator is a graph with two disjoint subsets of vertices $A, B \subseteq V(G)$, called inputs and outputs, with the property that for any sets $S \subseteq A$ and $T \subseteq B$ of the same size there is a set of vertex-disjoint paths from each vertex of $S$ to some vertex of $T$. Pippenger~\cite{pippenger1996self} constructs super-concentrators with a linear number of edges and an algorithm that on input describing $S$ and $T$ outputs the list of edges forming the disjoint paths between $S$ and $T$. This can be turned into a circuit of size $\mathcal{O}(n \log(n))$ and depth~$\mathcal{O}(\log^2(n))$. The result of Pippenger~\cite{pippenger1996self} can be used to build a circuit sorting by one bit, but the circuit will be larger than we want (see Corollary~\ref{col:pippenger_sort}). Thus, Asharov~et~al.~\cite{asharov2021sorting} used the technique of Pippenger rather than his result to design a circuit sorting by one bit, and iterated it to sort by $k$ bits. Our technique differs substantially from that of Asharov~et~al.;
yet we use the circuits derived from AKS networks and from Pippenger's super-concentrators as black boxes. To sort $m$-bit integers for $2^m \ll n$, our approach is to count the number of occurrences of each number in the input. This compresses the input from $n m$ bits into $2^m \log(n)$ bits. We can then decompress the vector back to get the desired output. So the main challenge is to construct counting (compressing) circuits of size $\mathcal{O}(nm^2)$. Interestingly, we use the sorting circuits derived from AKS networks to do that. But to avoid the size blow-up we do not use them on all of the integers at once but on blocks of integers of size $2^{8m}$. Then the $\mathcal{O}(\log(n))$ overhead of the circuits turns into an acceptable $\mathcal{O}(m)$ overhead. Each sorted block is then subdivided into parts of size $2^{2m}$. Clearly, most parts in each block will be monochromatic, that is, they will contain copies of a single integer only. There will be at most $2^m$ non-monochromatic parts. We \emph{move} the parts within a block to one side using another application of the AKS sorting circuit. Then we can afford to build a fairly expensive counting circuit for the small fraction of non-monochromatic parts, while cheaply counting the monochromatic parts. Summing the results by a linear-size circuit gives us the desired compression. Our decompression essentially mirrors the compression. We also design a circuit to sort according to a single bit, improving the parameters of Asharov~et~al.~\cite{asharov2021sorting}. We take the circuit of Pippenger as a basis and apply it iteratively to larger and larger blocks of inputs. Again we start from blocks of size $2^{O(m)}$, and increase the size of the blocks exponentially at each iteration. We use Pippenger's circuit to sort each block by the bit. When we split the block into parts, at most one will be non-monochromatic. Merging multiple blocks into one gives a mega-block with only a small fraction of non-monochromatic parts. These non-monochromatic parts can be separated from the monochromatic ones, re-sorted, and re-partitioned to give only one non-monochromatic part in the mega-block. Each part takes on the role of an ``$m$''-bit integer in the next iteration. Iterating this process leads to the desired result. To sort according to the first $k$ bits we use the one-bit sorting similarly to Asharov~et~al.~\cite{asharov2021sorting}. Thanks to our efficient sorting circuits for $m$-bit integers, which we use to sort the $k$-bit keys, we can avoid the use of median-finding circuits.
\paragraph*{Organization.} In the next section we review our notation. We provide basic construction tools, including na\"{\i}ve constructions of counting and decompression circuits, in Section~\ref{sec:preliminaries}. In Section~\ref{sec:sorting} we recall basic facts about AKS sorting networks and related sorting circuits. In Section~\ref{sec:sorting_strings} we prove our main result by constructing efficient counting and decompression circuits. Finally, we provide a construction of partial sorting circuits for Theorem~\ref{thm:sort_by_k_bits} in Section~\ref{sec:sorting_with_payloads}.
\section{Notation} \label{sec:notation}
In this paper $\mathbb{N}$ denotes the set of natural numbers, and for $1\le a\le b \in \mathbb{N}$, $[a,b]=\{a,a+1,\dots,b\}$ and $[a]=\{1,\dots,a\}$. All logarithms are base two unless stated otherwise. For $m \in \mathbb{N}$, $\left\{ 0,1 \right\}^{m}$ is the set of all binary strings of length $m$.
A string $x \in \left\{ 0,1 \right\}^{m}$, $x=x_1x_2\cdots x_m$, represents the number $\sum_{j \in [m]} x_j 2^{m-j}$ in binary, and we often identify the string with that number. (As the same integer has multiple binary representations differing in the number of leading zeroes, the number of leading zeroes should be clear from the context.) The most significant bit of $x=x_1x_2\cdots x_m$ is $x_1$ and the least significant bit of $x$ is $x_m$. The symbol $\circ$ denotes the concatenation of two strings. For strings $x, y \in \left\{ 0,1 \right\}^{m}$, $x \oplus y$ denotes the bit-wise XOR of $x$ and $y$, $x \wedge y$ denotes the bit-wise AND, and $x \vee y$ the bit-wise OR. We assume the reader is familiar with boolean circuits (see for instance the book of Jukna~\cite{jukna2012boolean}). We assume boolean circuits consist of gates computing binary AND and OR, and unary gates computing negation. For us, boolean circuits might have multiple outputs, so a circuit with $n$ inputs and $m$ outputs computes a function $f\fromto{n}{m}$. We usually index a circuit family by multiple integral parameters. Inputs and outputs of boolean circuits are often interpreted as sequences of substrings, e.g., a circuit $C_{n, m} \fromto{nm}{nm}$ is viewed as taking $n$ binary strings of length $m$ as its input, and similarly for its output. We say a circuit family $(C_n)_{n\in\mathbb{N}}$ is uniform if there is an algorithm that on input $1^n$ outputs the description of the circuit $C_n$ in time polynomial in $n$.
\section{Preliminaries} \label{sec:preliminaries}
Here we review circuits for some basic primitives that we will use in our later constructions. Most of them are well-known facts, but for the others we provide proofs for the sake of completeness.
\newcommand{\cktPlusSizeDepth}[1]{\cktPlus{{#1}} has size $\mathcal{O}\left( {#1} \right)$ and depth $\mathcal{O}\left( \log\left({#1}\right) \right)$}
\begin{lemma}[Addition] There is a uniform family of boolean circuits $\cktPlus{m} \fromto{2m}{m+1}$ that given $x, y \in \left\{ 0,1 \right\}^{m}$ representing two numbers in binary outputs their sum $x+y \in \left\{ 0,1 \right\}^{m+1}$. The circuit \cktPlus{m} has size $\Theta\left( m \right)$ and depth $\Theta( \log(m) )$. \label{lem:sum_two_numbers} \end{lemma}
\newcommand{\cktDifferenceSizeDepth}[1]{\cktDifference{ {#1} } has size $\Theta\left( {#1} \right)$ and depth $\Theta\left( \log\left( {#1} \right) \right)$}
\begin{lemma}[Subtraction] There is a uniform family of boolean circuits $\cktDifference{m} \fromto{2m}{m}$ that given $x, y \in \left\{ 0,1 \right\}^{m}$ representing two numbers in binary outputs the absolute value of their difference $|x-y| \in \left\{ 0,1 \right\}^{m}$. The circuit \cktDifferenceSizeDepth{m}.
\label{lem:dif_two_numbers} \end{lemma}
\newcommand{\cktSumSizeDepthParenthesis}[2]{\cktSum{ {#1} }{ {#2} } has size $\Theta\left( \left( {#1} \right) \left( {#2} \right) \right)$ and depth $\Theta\left( \log\left( {#1} \right) + \log\left( {#2} \right) \right)$}
\newcommand{\cktSumSizeDepthTODO}[2]{\cktSum{ {#1} }{ {#2} } \todo[inline,color=green!40]{\cktSum{ {#1} }{ {#2} } size $\Theta\left( \left( {#1} \right) \left( {#2} \right) \right)$, depth $\Theta\left( \log\left( {#1} \right) + \log\left( {#2} \right) \right)$}}
\begin{lemma}[Summation] There is a uniform family of boolean circuits $$\cktSum{n}{m} \fromto{n \cdot m}{\lceil \log(n) \rceil + m}$$ that given $x_1, x_2, \ldots, x_n \in \left\{ 0,1 \right\}^{m}$ interpreted as $n$ numbers, each of $m$ bits, outputs their sum $\sum_{j = 1}^{n} x_j$. The circuit \cktSum{n}{m} has size $\Theta(nm)$ and depth $\Theta\left( \log(n) + \log(m) \right)$. \label{lem:sum_n_numbers} \end{lemma}
\begin{proof} We sketch the construction following the technique of Wallace~\cite{wallace1964suggestion}. Given three numbers $x, y, z \in \left\{ 0,1 \right\}^k$, in constant depth and using $\Theta(k)$ gates we can compute $p, q \in \left\{ 0,1 \right\}^{k+1}$ such that $x + y + z = p + q$. Here, $p$ is the coordinate-wise addition without carry, i.e., $0 \circ (x \oplus y \oplus z)$, and $q$ is the carry, i.e., $((x \wedge y) \vee (x \wedge z) \vee (y \wedge z)) \circ 0$. Thus, as long as there are at least three numbers to sum, we can use this to transform $x, y, z$, which take $3k$ bits, into $p, q$, which take $2k + 2$ bits, and continue summing those. Doing this in parallel for disjoint triples of summands, after $\mathcal{O}(\log_{3/2}(n)) = \mathcal{O}(\log(n))$ rounds we are left with just two numbers, and we sum those using Lemma~\ref{lem:sum_two_numbers}. \end{proof}
\begin{lemma}[Comparator] There is a uniform family of boolean circuits $\cktSwitch{m} \fromto{2m}{2m}$ that given two numbers $x, y \in \left\{ 0, 1 \right\}^{m}$ outputs these two numbers sorted as integers, i.e., $\min(x, y) \circ \max(x, y)$. The circuit \cktSwitch{m} has size $\Theta(m)$ and depth $\Theta(\log(m))$. \label{lem:switch} \end{lemma}
A technique similar to that in the proof of the next lemma will also be used later, in the proofs of Lemma~\ref{lemma:fast_count} and Lemma~\ref{lemma:fast_decompress}, in order to achieve a smaller circuit size. The main idea is to split inputs into smaller blocks and process the blocks independently by smaller circuits.
\newcommand{\cktOnesSizeDepthTODO}[1]{\cktOnes{ {#1} } \todo[inline,color=green!40]{\cktOnes{ {#1} } size $\Theta\left( 2^{\left( {#1} \right)} \right)$, depth $\Theta\left( {#1} \right)$}}
\begin{lemma}[Binary to unary] There is a uniform family of boolean circuits $$\cktOnes{b} \fromto{b+1}{2^b}$$ such that for any number $x \in \left\{ 0,1 \right\}^{b+1}$ represented in binary the output consists of $x$ ones followed by $2^{b} - x$ zeroes, provided $x \leq 2^b$. The circuit \cktOnes{b} has size $\Theta(2^{b})$ and depth $\Theta(\log(b))$. \label{lem:ones} \end{lemma}
\begin{proof} We first show how to construct a uniform family of boolean circuits $\left( \cktOnesDeep{b} \right)$ which computes the same function and has the same size, but depth $\mathcal{O}(b)$. Then we use \cktOnesDeep{\log(b)} to construct the desired circuit \cktOnes{b}.
The main idea of the construction of \cktOnesDeep{b} is to recursively split the number $x$ into two numbers $x_L, x_R$ which describe how many bits set to one there should be in the first and the second half of the output. Each of the two numbers $x_L, x_R$ will be represented by $b$ bits, with the convention that if the most significant bit is equal to one then the number is a power of two (corresponding to all output bits in this part of the output set to one). We recursively split the numbers $x_L, x_R$ in the same fashion until the numbers are represented by a single bit each, at which point they represent the output bits. We set
\begin{align*} x_L &= \min(2^{b-1}, x) \\ x_R &= \min(2^{b-1}, \max(0, x - 2^{b-1})) \end{align*}
Note that if the number $x$ is represented by $b+1$ bits ($x \in \left\{ 0,1 \right\}^{b+1}$) then the numbers $x_L, x_R$ can be represented by $b$ bits ($x_L, x_R \in \left\{ 0,1 \right\}^{b}$), as both of them are at most $2^{b-1}$. Given $x \in \left\{ 0,1 \right\}^{b+1}$ we can compute the maximum and minimum defining $x_L, x_R$ by inspecting the two most significant bits of $x$:
\begin{itemize}
\item If the most significant bit of $x$ is set to one (thus $x \ge 2^b$) we set $x_L = x_R = x/2$, each a power of two with the most significant bit set to one (and represented by the binary string~$10^{b-1}$).
\item If the most significant bit of $x$ is set to zero and the second most significant bit is set to one, then $x_L$ will be set to the binary number $10^{b-1}$ and $x_R$ will be $x - x_L$ (a copy of $x$ without the second most significant bit of~$x$).
\item If the two most significant bits of $x$ are equal to zero then $x_L = x$ (represented by one less bit than $x$) and $x_R = 0$.
\end{itemize}
See Figure~\ref{fig:ones_example} for an example of the splitting of $x$ into $x_L, x_R$.
\begin{figure} \caption{ An example of splitting numbers where $b = 3$. The input number $x = 5$ is represented as $0101$ and is split into $x_L = 100$, $x_R = 001$, which are themselves split recursively. The bottom nodes form the output. } \label{fig:ones_example} \end{figure}
Thus we can compute the transformation $x \mapsto (x_L, x_R)$, where $x \in \left\{ 0,1 \right\}^{b+1}$ and $x_L, x_R \in \left\{ 0,1 \right\}^{b}$, using a circuit of size $\Theta(b)$ and depth $\Theta(1)$. Then each of the numbers $x_L, x_R$ is again split into two, etc., until we get single-bit numbers which represent the final output. The depth of the circuit \cktOnesDeep{b} is $\Theta(b)$ as each splitting can be done in constant depth. If the circuit splitting a $(b+1)$-bit number into two $b$-bit numbers has size $s(b)\le cb+d$, for some universal constants $c$ and $d$, then the circuit \cktOnesDeep{b} has size:
\begin{align*} s(b) + 2 s(b-1) + 4 s(b-2) + \ldots + 2^{b-1} s(1) &= \sum_{j = 0}^{b-1} 2^{j} s(b-j) \\ &\leq \sum_{j = 0}^{b-1} 2^{j} \left( c(b-j) + d \right) \\ &\leq c\left( 2^{b+1} - b - 2 \right) + 2^{b} d \\ &= \mathcal{O}(2^{b}) \end{align*}
To build the circuit \cktOnes{b} of depth $\mathcal{O}(\log(b))$ we proceed as follows. For any $y > 1$ we denote the largest power of two that is at most $y$ by $\ell(y) = \max\left\{ 2^{j} \mid j \in \mathbb{N}, 2^{j} \leq y \right\}$.
We divide the output bits into blocks of $\ell(b)$ bits, and for each block $j \in \left[ \frac{2^{b}}{\ell(b)} \right]$ of output bits with positions $[(j - 1) \ell(b) + 1, j \ell(b)]$ (counting positions from one) we compute whether it should be constant (that is, either constantly zero when $x \leq (j-1) \ell(b)$ or constantly one when $x > j \ell(b)$). This check for constant values can be done in each block by a circuit of size $\Theta(b)$ and depth $\Theta(\log(b))$. We compute \cktOnesDeep{\log(\ell(b))} with the input being the $\log(\ell(b))$ least significant bits of $x$. This circuit is of size $\mathcal{O}(b)$ and depth $\mathcal{O}(\log(b))$. In each block, if the block should not be constant then we use the output of that circuit as the output of the block; otherwise we use the appropriate constant one or zero copied $\ell(b)$ times as the output of the block. \end{proof}
We will need a primitive that counts the number of occurrences of each string in the input. A counting circuit similar to that of Lemma~\ref{lem:count} appears in Appendix~A of the paper of Asharov~et~al.~\cite{asharov2021sorting}. The construction of the counting circuit is rather straightforward: we compare each input string $x_j$ with a given string $y$, obtaining an indicator bit set to one for equality and to zero for inequality, and then sum the indicator bits.
\newcommand{\cktCountSizeDepthTODO}[2]{\cktCount{ {#1} }{ {#2} } \todo[inline,color=green!40]{\cktCount{ {#1} }{ {#2} } size $\Theta\left( \left( {#1} \right) \left( {#2} \right) 2^{\left( {#2} \right)} \right)$, depth $\Theta\left( \log\left( {#1} \right) + \log\left( {#2} \right) \right)$}}
\begin{lemma}[Count] There is a uniform family of boolean circuits $\cktCount{n}{m} \fromto{n \cdot m}{2^m \lceil 1+\log(n) \rceil}$ that given $x_1, x_2, \ldots, x_n \in \left\{ 0,1 \right\}^{m}$ counts the number of occurrences of each $y \in \left\{ 0,1 \right\}^{m}$ among the inputs, i.e., the circuit outputs $n_{0^m}, n_{0^{m-1}1}, \ldots, n_{1^m}$ where for each $y \in \left\{ 0,1 \right\}^{m}$, $n_y$ represents in binary $\abs{\left\{ j \in [n] \mid y = x_j \right\}}$ using $\lceil 1+\log(n) \rceil$ bits. The size of the circuit \cktCount{n}{m} is $\mathcal{O}(nm2^m)$ and depth $\mathcal{O}(\log(n) + \log(m))$. \label{lem:count} \end{lemma}
\begin{proof} For each $y \in \left\{ 0,1 \right\}^{m}$ we build a sub-circuit computing the number of times $y$ occurs among the inputs $x_1,\dots,x_n$. This is done by comparing $y$ to each $x_i$ in parallel, $i\in [n]$, to get an indicator bit recording whether they are equal. We obtain $n_y$ by summing up the indicator bits using the circuit \cktSum{n}{1} of size $\Theta(n)$ and depth $\Theta(\log(n))$ from Lemma~\ref{lem:sum_n_numbers}. Comparing $y$ to $x_i$ can be done by a circuit of size $\mathcal{O}(m)$ and depth $\mathcal{O}(\log(m))$. So we get $n_y$ using a circuit of size $\Theta(nm)$ and depth $\Theta(\log(n) + \log(m))$. Doing this for each $y \in \left\{ 0,1 \right\}^{m}$ in parallel we get a circuit of size $\Theta(nm 2^{m})$ and depth $\Theta(\log(n) + \log(m))$. \end{proof}
We will also need an inverse operation for the counting. To construct a circuit that decompresses the counts, we would like to first compute the interval of positions where a given string $x$ should appear and then get indicator bits for this interval. We can compute the interval using prefix sums of the counts.
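Before giving the circuit, here is a small Python sketch (ours, purely for illustration) of this decompression idea: prefix sums $p_y$ locate the interval of positions for each string $y$, a unary `ones' vector marks an initial segment of positions, and the XOR of two such vectors is the indicator of the interval.
\begin{verbatim}
# Illustration only: decompress counts via prefix sums and unary indicators,
# mirroring the circuit of the next lemma (lists and loops stand in for gates).
def ones(t, length):
    # unary encoding: t ones followed by zeroes (cf. the Binary-to-unary lemma)
    return [1 if j < t else 0 for j in range(length)]

def decompress(counts, n, m):
    prefix = [0] * (1 << m)                # prefix[y] = p_y
    for y in range(1, 1 << m):
        prefix[y] = prefix[y - 1] + counts[y - 1]
    s = prefix[-1] + counts[-1]            # total number of strings
    out = [0] * n                          # uncovered positions stay 0 = 0^m padding
    for y in range(1 << m):
        upper = prefix[y + 1] if y + 1 < (1 << m) else s
        a, b = ones(prefix[y], n), ones(upper, n)
        indicator = [ai ^ bi for ai, bi in zip(a, b)]   # I_y
        for j in range(n):
            if indicator[j]:
                out[j] = y
    return out

# Example: decompress([1, 0, 2, 0], n=4, m=2) == [0, 2, 2, 0]
\end{verbatim}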
To get the indicator bits for the interval we utilize the circuit from Lemma~\ref{lem:ones}, which outputs a given number of bits set to one followed by bits set to zero.
\newcommand{\cktDecompressSizeDepthTODO}[2]{\cktDecompress{ {#1} }{ {#2} } \todo[inline,color=green!40]{\cktDecompress{ {#1} }{ {#2} } size $\Theta\left( \left( {#1} \right) \left( {#2} \right) 2^{ {#2} } + 2^{2({#2})} \log\left( {#1} \right) \right)$, depth $\Theta\left( {#2} + \log\left( {#1} \right) \right)$}}
\begin{lemma}[Decompress] There is a uniform family of boolean circuits $$\cktDecompress{n}{m} \fromto{\lceil 1+\log(n) \rceil 2^{m}}{n \cdot m}$$ that \emph{decompresses its input}, that is, on input numbers $n_{0^m}, n_{0^{m-1}1}, \ldots, n_{1^m}$, each represented in binary by $\lceil 1+\log(n) \rceil$ bits, where $\sum_{x \in \left\{ 0,1 \right\}^{m}} n_x = s \leq n$, it outputs the string $$(\underbrace{00\cdots 0}_m)^{n_{00\cdots 0}} \circ (\underbrace{00\cdots 0}_{m-1}1)^{ n_{00\cdots 01}} \circ (\underbrace{00\cdots 0}_{m-2}10)^{ n_{00\cdots 010}} \circ (\underbrace{00\cdots 0}_{m-2}11)^{ n_{00\cdots 011}} \circ \cdots \circ (\underbrace{11\cdots 1}_m)^{n_{11\cdots 1}} \circ (0^m)^{n-s}.$$ When $s>n$ the output may be arbitrary. The size of the circuit \cktDecompress{n}{m} is $\mathcal{O}(n m 2^m + 2^{2m} \log(n))$ and depth $\mathcal{O}(m + \log(\log(n)))$. \label{lem:decompress} \end{lemma}
\begin{proof} Given $n_{0^m}, n_{0^{m-1}1}, \ldots, n_{1^m}$ we can compute the total sum $s = \sum_{x \in \left\{ 0,1 \right\}^{m}} n_x$ and, for each $y\in \{0,1\}^m$, the number $p_y$ of input strings that come before the first occurrence of $y$, i.e., $p_y = \sum_{x \in \left\{ 0,1 \right\}^m \colon x < y} n_x$. Each of the numbers $p_y$ can be computed using the circuit \cktSum{y}{\lceil 1+\log(n) \rceil} from Lemma~\ref{lem:sum_n_numbers} of size $\mathcal{O}(2^{m} \log(n))$ and depth $\mathcal{O}(m + \log \log(n))$, and similarly for $s$. Thus we can get all the numbers $p_y$ in parallel by a circuit of size $\mathcal{O}(2^{2m} \log(n))$. A given string $y \in \left\{ 0,1 \right\}^{m}$, $y\neq 1^m$, should appear at each position $j \in [p_y + 1, p_{y+1}]$. Let $I_{y} \in \{0,1\}^n$ be the indicator vector of the positions where $y$ should appear in the output. We can use $\cktOnes{\lceil 1+\log(n) \rceil}(p_y) \oplus \cktOnes{\lceil 1+\log(n) \rceil}(p_{y+1})$ to calculate $I_y$ for each $y\neq 1^m$. For $y = 1^{m}$, $I_y = \cktOnes{\lceil 1+\log(n) \rceil}(p_y) \oplus \cktOnes{\lceil 1+\log(n) \rceil}({s})$. The size of \cktOnes{\lceil 1+\log(n) \rceil} is $\Theta(n)$. As there are $2^m$ different $y$'s, we need a circuit of size $\Theta(n2^m)$ and depth $\Theta(\log \log(n))$ to calculate all the $I_y$'s. If $x_1,x_2,\dots,x_n$ are the output integers, then for each output position $j \in [n]$ we calculate the $k$-th bit of $x_j$ as \begin{align*} \bigvee_{y \in \left\{ 0,1 \right\}^{m}} ((I_y)_j \wedge y_k). \end{align*} To compute all these ORs we need a circuit of total size $\Theta(nm2^{m})$ and depth $\Theta(m)$. \end{proof}
\section{Sorting Circuits from AKS Sorting Networks} \label{sec:sorting}
In this section we recall the construction of circuits for sorting from the Ajtai--Koml\'os--Szemer\'edi sorting networks. They will serve as the basic primitive for our later constructions.
\paragraph*{Sorting networks.} Sorting networks model parallel algorithms that sort values using only comparisons. A sorting network consists of $n$ wires and $s$ comparators. The wires extend from left to right in parallel.
Each wire carries an integer from left to right. Any two wires can be connected by a comparator at any point along their length. The comparator swaps the values carried along the two wires if the upper wire carries the larger value at that point; otherwise it has no effect. The sorting network should be such that when we input arbitrary integers to the wires on the left, the integers always exit in sorted order from top to bottom. The \emph{depth} of a sorting network is the maximum number of comparators a value can encounter on its way. A figure of a small sorting network is given in Figure~\ref{fig:aks_example}. \begin{figure} \caption{ An example of a sorting network with three inputs (the horizontal lines), three comparators (the vertical lines), and depth three. The inputs on the left are numbers $x,y,z$, and after each comparator we note what is carried on each horizontal line. Note that the bottommost output is $\max(\max(x,y),\max(\min(x,y),z)) = \max(x,y,z)$ and the middle one is $\min(\max(x,y),\max(\min(x,y),z))$, which is the median. } \label{fig:aks_example} \end{figure} For a formal definition see, e.g., \cite{aks_1983sorting}. Observe that if the depth of a sorting network is $d$ and the number of inputs is $n$ then there are at most $s \le nd$ comparators. Ajtai, Koml\'os and Szemer\'edi~\cite{aks_1983sorting} established the existence of sorting networks of logarithmic depth. \begin{theorem}[AKS~\cite{aks_1983sorting}] For any integer $n\ge 1$, there is a sorting network for $n$ integers of depth $\mathcal{O}(\log(n))$. \label{thm:aks_network} \end{theorem} \paragraph*{Sorting circuits.} Here we give a precise definition of sorting by a circuit. First we consider a circuit sorting $n$ integers, each of them $m$ bits long. \begin{definition}[Sort] Let $n, m \in \mathbb{N}$, and let $\left( C_{n, m} \right)$ be a family of boolean circuits. We say that the \emph{circuit $C_{n, m} \fromto{nm}{nm}$ sorts its input} interpreted as $n$ integers $x_1, x_2, \ldots, x_n$ each represented by $m$ bits if it outputs $y_1, y_2, \ldots, y_n \in \left\{ 0,1 \right\}^{m}$ such that: \begin{enumerate} \item The outputs are sorted: For any $i < j \in [n]$, $y_i \leq y_j$. \item The inputs and outputs form the same multiset: For each $j \in [n]$, $\abs{ \left\{ i \in [n] \mid y_i = x_j \right\} } = \abs{ \left\{ i \in [n] \mid x_i = x_j \right\} }$. \end{enumerate} \label{def:sorting_circuit} \end{definition} An immediate consequence of the existence of AKS sorting networks is the existence of shallow sorting circuits, since by Lemma~\ref{lem:switch}, each comparator can be replaced by a small circuit: \begin{corollary} There is a family of boolean circuits $\cktAKS{n}{m} \fromto{n \cdot m}{n \cdot m}$ that on input $x_1, x_2, \ldots, x_n \in \left\{ 0,1 \right\}^{m}$ sorts these numbers. The size of the circuit \cktAKS{n}{m} is $\mathcal{O}(n m \log(n))$ and its depth is $\mathcal{O}(\log(n) \log(m))$. \label{col:aks_circuit} \end{corollary} We also need circuits that sort the $n$ input integers, each of $m$ bits, by the $k$ most significant bits where $k<m$. Such sorting can be thought of as sorting (key, value) pairs, where keys are $k$ bits long and values are $(m-k)$ bits long.
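Before the formal definition of partial sorting, the following small Python sketch (purely illustrative and with our own naming; the circuits themselves are described in the corollaries) models what a single comparator computes on a pair of $m$-bit values when only the $k$ most significant bits serve as the key.
\begin{verbatim}
# One comparator: swap two m-bit values when the k most significant
# bits of the first exceed those of the second; k = m gives the plain
# comparator used for full sorting.

def comparator(a, b, m, k):
    key = lambda v: v >> (m - k)     # the k most significant bits
    return (b, a) if key(a) > key(b) else (a, b)

# m = 4, k = 2: keys 3 and 2 are out of order, so the pair is swapped;
# keys 1 and 1 are equal, so the pair is left alone.
assert comparator(0b1101, 0b1010, 4, 2) == (0b1010, 0b1101)
assert comparator(0b0111, 0b0100, 4, 2) == (0b0111, 0b0100)
\end{verbatim}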
Formally, partial sorting can be defined as follows: \begin{definition}[Partial Sort] Let $n, m, k \in \mathbb{N}$ be such that $k<m$, and let $\left( C_{n, m, k} \right)$ be a family of boolean circuits. We say that the \emph{circuit $C_{n, m, k} \fromto{nm}{nm}$ partially sorts its input by the first $k$ bits} interpreted as $n$ integers $x_1, x_2, \ldots, x_n$ each represented by $m$ bits if it outputs $y_1, y_2, \ldots, y_n \in \left\{ 0,1 \right\}^{m}$ such that: \begin{enumerate} \item The outputs are partially sorted: For any $i < j \in [n]$, $(y_i)_1 (y_i)_2 \cdots (y_i)_k \leq (y_j)_1 (y_j)_2 \cdots (y_j)_k$. \item The inputs and outputs form the same multiset: For each $j \in [n]$, $\abs{ \left\{ i \in [n] \mid y_i = x_j \right\} } = \abs{ \left\{ i \in [n] \mid x_i = x_j \right\} }$. \end{enumerate} \label{def:partial_sorting_circuit} \end{definition} Using a circuit of size $\mathcal{O}(m)$ and depth $\mathcal{O}(\log(k))$ implementing a comparator which swaps two $m$-bit integers based only on the first $k$ bits, we get the following variant of the previous corollary. \begin{corollary} There is a family of boolean circuits $\cktAKSPartial{n}{m}{k} \fromto{n \cdot m}{n \cdot m}$, for $k \leq m$ and $k \leq \log(n)$, that on input $x_1, x_2, \ldots, x_n \in \left\{ 0,1 \right\}^{m}$ partially sorts these numbers according to their $k$ most significant bits. That is, if $y_i, y_j$ are two output numbers with $i < j$, then $\lfloor y_i / 2^{m-k} \rfloor \leq \lfloor y_j / 2^{m-k} \rfloor$. The size of the circuit \cktAKSPartial{n}{m}{k} is $\mathcal{O}(n m \log(n))$ and its depth is $\mathcal{O}(\log(n) \log(k))$. \label{col:aks_partial_circuit} \end{corollary} \section{Sorting $n$ Binary Strings of Length $m$} \label{sec:sorting_strings} Here we present a sorting circuit for short numbers. The construction consists of two circuits. The first circuit counts the number of occurrences of various strings (as stated in Lemma~\ref{lemma:fast_count}) and the second circuit decompresses these counts. Both of these constructions make heavy use of the following technique: we divide the problem into blocks which can be efficiently sorted using the AKS-based circuit. These blocks will be of size between $2^{O(m)}$ and $n/2^{O(m)}$ where $m$ is the binary length of the input integers. Thus when we sort the numbers inside each block and subdivide the block into parts, then by the pigeon-hole principle, most of the parts will be monochromatic (containing copies of a single string only). We can then separately count the strings in monochromatic parts (count the first string and then multiply that by the length of the part) and in the non-monochromatic parts (there are not that many strings in total in non-monochromatic parts). However, a priori we do not know which parts will be monochromatic and which will not. To save on circuitry we use sorting (on whole parts) to move the non-monochromatic parts aside. We build the (expensive) counting circuits only for non-monochromatic parts. \begin{proof}[Proof of Lemma~\ref{lemma:fast_count}] For the sake of simplicity let us assume that $n$ is a power of two, so that it is divisible by $2^{8m}$.
(By our assumption $n \geq 2^{10m}$, thus if $n$ is not a power of two take the circuit for the closest power of two larger than~$n$ and feed ones for the extra input bits.) We partition the input into $n / 2^{8m}$ blocks each consisting of $2^{8m}$ numbers. We sort each block by the circuit \cktAKS{2^{8m}}{m} of size $\mathcal{O}(2^{8m} m \log(2^{8m})) = \mathcal{O}(2^{8m} m^2)$ and depth $\mathcal{O}(m\log(m))$ as given in Corollary~\ref{col:aks_circuit} . Thus for this phase we need a circuit of total size $\mathcal{O}(n m^2)$. Then we subdivide each block into $2^{6m}$ parts each consisting of $2^{2m}$ numbers. Observe that most of these parts are monochromatic: a part is monochromatic if it contains $2^{2m}$ copies of a single $m$-bit number. We can upper bound the number of non-monochromatic parts by $2^{m}$. We can add a single indicator bit to each part indicating whether this part is monochromatic. As the parts are sorted it is enough to compare the first and last number in each part and set the bit to $1$ if the numbers are equal and to $0$ otherwise. We sort the parts prefixed by their indicator bit using the circuit \cktAKSPartial{2^{6m}}{1 + m 2^{2m}}{1} from Corollary~\ref{col:aks_partial_circuit} to move all non-monochromatic parts to the front of each block. Thus the total size of the circuit sorting parts inside each block is $\mathcal{O}\left( \frac{n}{2^{8m}} (2^{6m}) (1 + m 2^{2m}) 6m \right) = \mathcal{O}(nm^2)$ and depth $\mathcal{O}(m)$. We call the first $2^{m}$ parts of each block \emph{potentially non-monochromatic}. The other parts are \emph{definitely monochromatic}. From each definitely monochromatic part we take the first $m$-bit number and we count them. This can be done by the circuit \cktCount{\frac{n}{2^{8m}} (2^{6m} - 2^{m})}{m} from Lemma~\ref{lem:count} of size $\mathcal{O}\left( \left( \frac{n}{2^{2m}} - \frac{n}{2^{7m}} \right) m 2^{m} \right) \leq \mathcal{O}(nm)$ and depth $\mathcal{O}(\log(n) + \log(m))$. By multiplying each count by $2^{2m}$ (that is by appending $2m$ zeroes) we get the number of occurrences of each number in the definitely monochromatic parts. As there are relatively few (exactly $\frac{n}{2^{8m}} 2^{m} 2^{2m}$) numbers overall in potentially non-monochromatic parts we can use the circuit \cktCount{n / 2^{5m}}{m} from Lemma~\ref{lem:count} to count those numbers by a circuit of size $\mathcal{O}\left( \frac{n}{2^{5m}} m 2^{m} \right) \leq \mathcal{O}(nm)$ and depth $\mathcal{O}(\log(n) + \log(m))$. Thus we get two vectors of counts for numbers in potentially non-monochromatic and definitely monochromatic blocks. Finally, we add the two vectors of $2^{m}$ numbers each consisting of at most $\lceil 1+\log(n) \rceil$ bits to get the resulting counts. This uses a circuit of size $\mathcal{O}(m2^m)=O(n)$ and depth $\mathcal{O}(\log \log(n))$. Thus, the overall size of the circuit is $\mathcal{O}(nm^2)$ and depth $\mathcal{O}(\log(n) + m\log(m))$. \end{proof} \begin{lemma} For integers $n, m\ge 1$ such that $m \leq \log(n) / 11$, there is a family of boolean circuits $$\cktFastDecompress{n}{m} \fromto{\lceil 1+\log(n) \rceil 2^{m}}{n \cdot m}$$ that decompresses its input as in Lemma~\ref{lem:decompress}. The size of \cktFastDecompress{n}{m} is $\mathcal{O}(nm^2)$ and its depth is $\mathcal{O}(m \log(m) + \log \log(n))$. \label{lemma:fast_decompress} \end{lemma} The construction of the decompression circuit mirrors the counting circuit albeit it is somewhat simpler with a different choice of parameters. 
We separately decompress monochromatic blocks (by decompressing just a single string from each block and then creating the right number of copies) and the strings from non-monochromatic blocks (as there are not many of those). We then use partial sorting to rearrange the blocks in the proper order to construct a sorted sequence. \begin{proof} For the sake of simplicity let us assume that $n$ is a power of two and let us set $k = n / 2^{8m}$. (Thus $k$ is an integer.) We will think of the output as partitioned into $2^{8m}$ blocks of size $k$. As in the proof of Lemma~\ref{lem:decompress} we compute the prefix sums \begin{align*} p_{x} &= \sum_{y \in \left\{ 0,1 \right\}^{m} \colon y < x} n_{y} & \text{for each $x \in \left\{ 0,1 \right\}^{m}$} \end{align*} and we set $p_{2^{m}} = n$. (Here, we identify $m$-bit strings $x$ and $y$ with integers they represent.) We can compute each $p_x$ using the circuit \cktSum{2^{m}}{1+\log(n)}, thus computing all of them using a circuit of size $\mathcal{O}(\log(n)2^{2m}) \leq \mathcal{O}(n)$ (by the assumption $m \leq \log(n) / 11$) and depth $\mathcal{O}(m + \log \log(n))$. Thus the string $x \in \left\{ 0,1 \right\}^{m}$ should appear at output positions $[p_{x} + 1, p_{x + 1}]$. For any $x \in \left\{ 0,1 \right\}^{m}$ we set: \begin{align*} r_{x} &= \left( \left( k - \left( p_x \mod k \right) \right) \mod k \right) + \left( p_{x+1} \mod k \right) \\ q_{x} &= \frac{n_x - r_x}{k} \end{align*} The meaning is that if we partition the output into blocks of $k$ consecutive numbers, then for any $x \in \left\{ 0,1 \right\}^{m}$ the number $r_x$ tells the number of times the string $x$ appears in non-monochromatic blocks. (These occurrences are located in at most two non-monochromatic blocks.) The number $q_{x}$ tells us in how many monochromatic blocks the string $x \in \left\{ 0,1 \right\}^{m}$ appears. Observe that $q_x$ is an integer. Since $n$ is a power of two, so is $k$, furthermore, $k$ is fixed for given $n$ and $m$, and thus computing mod $k$ and division by $k$ corresponds to selecting appropriate bits from the binary representation of numbers. All numbers $p_x$, $q_x$ and $r_x$ are integers represented by $1 + \log(n)$ bits. Hence, each $q_x$ and $r_x$ can be computed from $n_x$ and $p_x$ by one circuit \cktPlus{1 + \log(n)} and two \cktDifference{1 + \log(n)}. The circuit computing values $q_x$ and $r_x$ for all $x$ has total size $\mathcal{O}(2^m \log(n))$ and depth $\mathcal{O}(\log \log(n))$. The following holds: \begin{align*} n_x &= k q_x + r_x \\ \sum_{x \in \left\{ 0,1 \right\}^{m}} q_{x} &= \sum_{x \in \left\{ 0,1 \right\}^{m}} \frac{n_x - r_x}{k} \leq n / k = 2^{8m} \\ \sum_{x \in \left\{ 0,1 \right\}^{m}} r_{x} &\leq 2k 2^{m} = 2n / 2^{7m} \\ \end{align*} We use circuit $\cktDecompress{2^{8m}}{m}(q_{0^m}, q_{0^{m-1}1}, \ldots, q_{1^{m}})$ from Lemma~\ref{lem:decompress} of size $\mathcal{O}\left( m 2^{9m} \right)$ and depth $\mathcal{O}\left( m \right)$ to decompress monochromatic blocks. We then just copy each resulting number $k$ times to create sorted monochromatic blocks. Last $2^{8m}-\sum_{x \in \left\{ 0,1 \right\}^{m}} q_{x}$ blocks contain zero padding corresponding to the numbers in non-monochromatic blocks. They will be merged with the non-monochromatic blocks obtained next. 
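To make the split of the counts concrete, the following Python fragment (an illustrative check under the conventions above, with our own function name; the circuit realization is described in the text) computes $p_x$, $q_x$ and $r_x$ for a small example and verifies the identity $n_x = k q_x + r_x$.
\begin{verbatim}
# Split each count n_x into k*q_x occurrences filling monochromatic
# blocks and r_x occurrences falling into (at most two) mixed blocks.

def split_counts(counts, k):
    prefix = [0]
    for c in counts:
        prefix.append(prefix[-1] + c)            # p_x = sum_{y < x} n_y
    q, r = [], []
    for x, n_x in enumerate(counts):
        p_x, p_next = prefix[x], prefix[x + 1]
        r_x = ((k - (p_x % k)) % k) + (p_next % k)
        q_x = (n_x - r_x) // k
        assert n_x == k * q_x + r_x
        q.append(q_x)
        r.append(r_x)
    return q, r

# m = 2, k = 4, counts summing to n = 16:
q, r = split_counts([6, 9, 1, 0], k=4)
# q == [1, 1, 0, 0]: monochromatic blocks completely filled by each string
# r == [2, 5, 1, 0]: occurrences that land in mixed blocks
\end{verbatim}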
In order to properly match the non-monochromatic blocks to the padded zeroes we adjust the count $r_{0^m}$: \begin{align*} r'_{0^m} &= \left( 2n / 2^{7m} \right) - \sum_{x \in \left\{ 0,1 \right\}^m \colon x \neq 0^{m}} r_x \end{align*} using circuit \cktSum{2^{m}}{1+\log(n)} and \cktDifference{1 + \log(n)} of size $\mathcal{O}(n)$ and depth $\mathcal{O}(m + \log \log(n))$. We use the circuit $\cktDecompress{2n / 2^{7m}}{m}(r'_{0^m}, r_{0^{m-1}1}, \ldots, r_{1^{m}})$ from Lemma~\ref{lem:decompress} to decompress the non-monochromatic blocks. The circuit is of size $\mathcal{O}\left( \left( 2n/2^{7m} \right) m 2^{m} + 2^{2m} \log\left( 2n / 2^{7m} \right) \right) \leq \mathcal{O}\left( nm / 2^{6m} \right)$ and of depth $\mathcal{O}(m + \log(\log(n)))$. (Here, we used our assumption $m \leq \log(n) / 11$, to bound $n \geq 2^{11m}$ and $2^{2m} \leq n^{3/4} / 2^{6m}$.) Finally, we compute the bit-wise OR of the last $2^{m+1}$ blocks of the output from the previous step (monochromatic decompression) with the current output (non-monochromatic decompression). This way we get a sequence of $n$ numbers partitioned into blocks where each block corresponds to one of the blocks in the desired output. However, we still need to rearrange the blocks in the proper order. We will use partial sorting of the whole blocks to do that. For a given block let $x$ be the first number in that block. We prefix the block by a number $2x$ (represented by $m+1$ bits) if the block is monochromatic or the number $2x+1$ if the block is non-monochromatic. To determine whether the block is monochromatic we compare for equality the first and last number inside the block. We do this for each block. Thus each block of $k$ numbers is prefixed by an $m+1$ bit number. Computing these prefixes requires a circuit of total size $\mathcal{O}(2^{8m} m) = O(n)$ and depth $\mathcal{O}(\log(m))$. We then use the \cktAKSPartial{2^{8m}}{(m+1) + km}{m+1} circuit of size $\mathcal{O}(nm^2)$ and depth $\mathcal{O}(m \log(m))$ to sort the blocks. Finally, we ignore the $m+1$ bit prefixes of each block to get the desired output. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}] This is just a combination of Lemma~\ref{lemma:fast_count} with Lemma~\ref{lemma:fast_decompress}. \end{proof} Observe that the proofs of Lemma~\ref{lemma:fast_count} and Lemma~\ref{lemma:fast_decompress} do not depend on using specifically the AKS sorting. In particular for the case of Lemma~\ref{lemma:fast_count} if there is a circuit that sorts input numbers that is linear in the number of input bits then there is a linear size circuit that counts these numbers. \section{Partial Sorting by the First $k$ Bits in Poly-logarithmic Depth} \label{sec:sorting_with_payloads} Here we design a family of boolean circuits that partially sorts by the first $k$ bits out of $m$ bits which is asymptotically smaller than \cktAKSPartial{n}{m}{k}. We will need super-concentrators for our construction. A directed acyclic graph $G = (V, E, A, B)$, where $V$ is the set of vertices, $E$ is the set of directed edges, and $A$ and $B$ are disjoint subsets of vertices of the same size, is a \emph{super-concentrator} if the following hold: The vertices in $A$ (\emph{inputs}) have in-degree zero, vertices in $B$ (\emph{outputs}) have out-degree zero, and for any $S \subseteq A$ and for any $T \subseteq B \colon |S| = |T|$ there is a set of pairwise vertex disjoint paths connecting each vertex from $S$ to some vertex in $T$. 
We parametrize the super-concentrator by the number of input vertices $n$, and we measure its size by the number of edges. We want the graph to have as few edges as possible. The depth of the super-concentrator is the number of edges on the longest directed path. Pippenger~\cite{pippenger1996self} shows a construction of super-concentrators of linear size and logarithmic depth. He constructs a family of super-concentrators $S_n$ for $n$ being the number of inputs, where the in-degree and out-degree of each vertex is bounded by some universal constant, the number of edges is linear in $n$, and the depth is $\mathcal{O}(\log(n))$. Moreover, there are finite automata which, for any $S \subset A$ and $T \subset B$ with $|S| = |T|$, when put on the vertices of the super-concentrator, find the set of vertex disjoint paths from $S$ to $T$ in $\mathcal{O}(\log(n))$ iterations, each taking $\mathcal{O}(\log(n))$ steps, for a total of $\mathcal{O}(n)$ steps of the automata. We describe this construction using the language of circuits. On input the characteristic vectors of $S$ and $T$, the circuit computes the set of $|T|$ vertex disjoint paths connecting $S$ and $T$. The circuit outputs the characteristic vector of the set of edges participating in the paths. \begin{theorem}[Pippenger~\cite{pippenger1996self}] There is a family of super-concentrators $S_n$ as described above and boolean circuits $\cktRoute{n} \fromto{2n}{|S_n|}$ of size $\mathcal{O}(n \log(n))$ and depth $\mathcal{O}(\log^2(n))$ that, on input the characteristic vector of any set $T \subseteq [n]$ and the characteristic vector of any $S \subseteq [n]$ with $|T| = |S|$, outputs the characteristic vector of edges that form $|T|$ vertex disjoint paths between $S$ and $T$. \label{thm:pippenger_routing} \end{theorem} By routing $m$ bits along each path in the super-concentrator we can use the above circuit to build a circuit that partially sorts $m$-bit integers by their most significant bit. \begin{corollary} There is a family of boolean circuits $\cktPSort{n}{m}{1} \fromto{n \cdot m}{n \cdot m}$ that on input $x_1, x_2, \ldots, x_n \in \left\{ 0,1 \right\}^{m}$ partially sorts these numbers according to their most significant bit. The size of the circuit \cktPSort{n}{m}{1} is $\mathcal{O}(n m + n \log(n))$ and its depth is $\mathcal{O}(\log^2(n))$. \label{col:pippenger_sort} \end{corollary} \begin{proof} We give a sketch of the proof. First, we will use the graph $S_n$ to move all inputs starting with one to their proper places. Then, using the same construction, we will move all inputs starting with zero to their proper places. We transform the graph $S_n$ into a circuit by replacing each vertex of in-degree $d$ by a \emph{routing gadget} (circuit) which takes $d$ $m$-bit inputs together with $d$ control bits, one bit for each of the $m$-bit inputs, and outputs the bit-wise OR of the inputs whose control bit is set to 1. Such a routing gadget of size $\mathcal{O}(dm)$ and depth $\mathcal{O}(\log(d))$ can be easily constructed. If $(u,v)$ is the $j$-th incoming edge of $v$ in $S_n$, we connect the $j$-th block of $m$ input bits of the routing gadget corresponding to $v$ to the output of the routing gadget of $u$.
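For intuition, the Boolean function computed by one such routing gadget (again only a functional sketch in Python, with our own naming) is simply a control-gated OR of its input blocks; when at most one control bit is set, it forwards the selected block unchanged.
\begin{verbatim}
# One routing gadget: d blocks of m bits, d control bits; output the
# bit-wise OR of the blocks whose control bit is 1.

def routing_gadget(blocks, controls):
    out = [0] * len(blocks[0])
    for block, c in zip(blocks, controls):
        if c:                         # gate each block by its control bit
            out = [o | b for o, b in zip(out, block)]
    return out

# With controls (0, 1, 0) the middle block is forwarded unchanged.
assert routing_gadget([[1, 0], [0, 1], [1, 1]], [0, 1, 0]) == [0, 1]
\end{verbatim}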
The routing gadgets of input vertices of $S_n$ are connected directly to the appropriate inputs of the sorting circuit. Each routing gadget will be used with at most a single control bit set to one; thus it will route the corresponding input. It remains to calculate paths that will route the integers starting with 1 in the above circuit in the desired way. For that, we calculate the sum $s$ of the most significant bits by which we are sorting using \cktSum{n}{1} from Lemma~\ref{lem:sum_n_numbers}, we expand it back using $\cktOnes{\lceil \log(n) \rceil + 1}(s)$, and we reverse it to get the characteristic vector of the set $T$ to which we want to route. Together with the most significant bits of each input integer (which form the characteristic vector of $S$ from which we route) we feed this as an input to $\cktRoute{n}$. The output bits of $\cktRoute{n}$ are connected to the appropriate control bits of our routing gadgets. The sorted output will be obtained as the output of the $n$ routing gadgets corresponding to the output vertices of~$S_n$. The size of $\cktRoute{n}$ is $\mathcal{O}(n \log(n))$ and the total size of the circuits implementing the routing gadgets is $\mathcal{O}(mn)$. These two terms dominate the overall size of the circuit. The depth of the circuit is dominated by the depth of $\cktRoute{n}$. \end{proof} We can use the above circuit in an iterative fashion to build a smaller circuit for the same primitive. \begin{lemma} There is a family of boolean circuits $\cktRSort{n}{m}{1} \fromto{n \cdot m}{n \cdot m}$ that on input $x_1, x_2, \ldots, x_n \in \left\{ 0,1 \right\}^{m}$ partially sorts these numbers according to their most significant bit. The size of the circuit \cktRSort{n}{m}{1} is $\mathcal{O}(n m (1+\log^*(n)-\log^*(m)))$ and its depth is $\mathcal{O}(\log^2(n))$. \label{lem:route} \end{lemma} \begin{proof} Assume $m\le \log(n)/11$; otherwise use Corollary~\ref{col:pippenger_sort}. We will build the circuit iteratively using the circuit from Corollary~\ref{col:pippenger_sort} for blocks of various sizes. We will start with small blocks of items and we will iteratively sort larger and larger numbers of items organized into mostly monochromatic blocks. Without loss of generality we assume that $m$ is a power of two, and we will ignore the rounding issues. We will have two parameters $m_i$ and $n_i=2^{4m_i}$, where $m_0=m$ and $m_{i+1}=2^{m_i}$ for $i\ge 0$. At iteration $i$, all the items will be partitioned into \emph{parts} of consecutive numbers; each part will be either \emph{monochromatic}, containing all zeros or all ones, or it will be \emph{mixed}. (Here we refer to the most significant bits of the numbers in the part.) For each part we will maintain two indicator bits recording which of the three possibilities occurs: an indicator which is one if the part is mixed, and another \emph{color} indicator which specifies the highest order bit of the integers if the part is monochromatic. (For the latter we could use the first bit of the first integer in the part.) At each iteration $i>0$, $m_i$ will denote the number of items in each part. $n_i/m_i$ consecutive parts form a \emph{block}, so each block contains $n_i$ items. The blocks partition the input. We will maintain the invariant that the fraction of mixed parts in each block is at most $2/m^3_i$. At iteration $0$ we apply $\cktPSort{n_0}{m}{1}$ to consecutive blocks of $n_0$ input integers.
Afterwards, the block is partitioned into parts of size $m_1$ and for each part we determine its status by comparing the most significant bits of the first and last integer in the part. It is clear that each block of size $n_0$ contains at most one mixed part. As the number of parts in the block is $m_1^3$, the fraction of mixed parts in each block is at most $2/m^3_1$, and this is also true for blocks of size $n_1$. At iteration $i>0$, we divide the current sequence of parts of size $m_i$ into blocks containing $n_i/m_i$ parts, and we proceed in three steps: \begin{description} \item[Step 1.] Sort the parts in each block using $\cktPSort{n_i/m_i}{2+m_i\cdot m}{1}$ according to the mixed indicator. Hence, all the mixed parts will move to the end of the block. There are at most $2n_i/m^3_i$ mixed parts in each block; the remaining parts must be monochromatic. \item[Step 2.] In each block, sort all the $m$-bit integers in the last $2n_i/m^3_i$ parts according to their most significant bit using $\cktPSort{2n_i/m^2_i}{m}{1}$. This sorts together all the integers in the mixed parts (and perhaps a few other parts). Repartition them into parts of $m_i$ consecutive numbers and determine their indicator bits. Only one of the parts should be mixed at this point. Swap it with the last part in the block. (We provide details of the swap later.) \item[Step 3.] In each block, sort all the parts except for the last one according to their color indicator using $\cktPSort{(n_i/m_i)-1}{2+m_i\cdot m}{1}$. This moves all the parts of color 0 to the front. Repartition all the numbers in the block into parts of $m_{i+1}$ consecutive integers and determine their indicator bits, where the last part is marked as mixed. At most two of the new parts should be mixed at this point. Notice that, out of $m_{i+1}^3$ parts in each block, at most two are marked as mixed, so the invariant is maintained. We can move to the next iteration. \end{description} We iterate the algorithm until $m_i \ge \log(n)/4$. Once $m_i\ge \log(n)/4$, the number of integers in mixed parts is at most $2n/m^2_i = \mathcal{O}(n/\log^2(n))$; the remaining items are in monochromatic parts. At this point we cannot form a block of size $n_i$, but we can still perform the same type of actions as in Steps 1--3: We can bring the monochromatic parts forward as in Step~1, sort the last $32n/\log^2(n)$ integers belonging to the mixed parts, move the remaining mixed part to the end, sort the monochromatic parts, and swap the mixed part with the first monochromatic part of color 1. To swap a single mixed part with the last part, we can copy the mixed part into a buffer by AND-ing every part bit-wise with the indicator of whether it is the mixed part and OR-ing all the results together. In a similar fashion we can copy the last part into the now unused part by letting each part bit-wise copy to its place either its original content or the content of the last part, again conditioning on an appropriate indicator bit. Hence, the swap can be implemented by a circuit of size proportional to the total size of the parts and depth logarithmic in the number of parts. Now we will bound the total size of the circuit we constructed. Step~1 requires $n/n_i$ circuits of size $\mathcal{O}(n_i m + n_i/m_i \log(n_i/m_i))= \mathcal{O}(n_i m)$, as $\log(n_i)=O(m_i)$, and of depth at most $\mathcal{O}(\log^2(n_i))$.
Step~2 requires $n/n_i$ sorting circuits of size $\mathcal{O}(m n_i/m^2_i + 2n_i/m^2_i \log(2n_i/m^2_i))=\mathcal{O}(n_i)$ and of depth at most $\mathcal{O}(\log^2(n_i))$, together with a circuit of total linear size $\mathcal{O}(n)$ to recalculate the parts and do the swaps. The last step requires the same amount of circuitry as the first step. Hence, each step requires circuits of total size $\mathcal{O}(n m)$. The same goes for the initial sort at iteration 0, and the final sorts at the end. As there are at most $\log^*(n)-\log^*(m)$ iterations, the resulting size is $\mathcal{O}(n m (\log^*(n)-\log^*(m)))$. Each step requires a circuit of depth $\mathcal{O}(\log^2(n_i))$, recall that by our choice $n_i = 2^{4m_i}$, thus $\log(n_{i}) = 4m_i$. Since $m_{i+1} = 2^{m_i}$ and for each $i$ we have $m_i \leq \log(n) / 4$, thus the total depth is dominated by the last iteration where we use a circuit of depth $\mathcal{O}(\log^2(n))$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:sort_by_k_bits}] We assume that $k\le \log(n)/11$ otherwise we can use Corollary~\ref{col:aks_partial_circuit} to sort the elements. Without loss of generality we assume $n$ is a power of two. We think of the input as organized into an array. We extract the first $k$ bits (\emph{key}) from each input element and we sort the keys using the circuit from Theorem~\ref{thm:main} of size $\mathcal{O}(nk^2)$ and depth $\mathcal{O}(\log(n) + k \log(k))$. We will build recursively a circuit that will sort the input array of $n$ elements according to the first $k$ bits when the input is augmented with the array of sorted keys. Now our goal is to split the input array into two equal sized parts $L$ and $R$ where all elements in $L$ are less or equal to elements in $R$ when comparing only the keys. To do that we take the \emph{median}, the $n/2$-th element among the keys, and we partition the array according to it. We split the input array into three arrays $L$, $M$, and $R$ of length $n$ with elements less than, equal to, and greater than the median, resp., and we mark the unused elements as \emph{dummy} using an extra bit associated to each element. We sort $L$ and $M$ so that all non-dummy elements are to the left and $R$ so that all non-dummy elements are to the right. We use three circuits $\cktRSort{n}{m+1}{1}$ to do that. Now, we flip the first half of elements in $M$, i.e., swap the $i$-th element with the element in position $(n/2)-i+1$, and we replace the dummy elements in the first half of $L$ by the corresponding elements in $M$. By one application of $\cktRSort{n}{m+1}{1}$ we move all the remaining non-dummy elements in $M$ to the left, and we merge those elements with the second half of $R$. We discard the second and first half of $L$ and $R$, respectively. (They contain only dummy elements.) If the highest order bit of the median is set to $0$ then all the elements in $L$ have the highest order bit set to $0$, otherwise all the elements in $R$ have the highest order bit set to $1$. In either case we reduced the problem to one problem of sorting half of the elements according to $k-1$ bits and the other half according to $k$-bits. We recursively build a circuit to sort $\cktSort{n/2}{m}{k-1}$ and $\cktSort{n/2}{m}{k}$ when the input is augmented with the sorted array of keys. We pass to each of the sorting sub-circuits the appropriate sub-problem and we re-route the results from them to form the final output. 
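The divide-and-conquer step just described can be summarized by the following functional Python sketch (ours, purely illustrative; it models only the recursion on values and ignores the circuit realization, the dummy elements, and the bit-saving observation that one half needs only $k-1$ key bits).
\begin{verbatim}
# Recursive partial sorting by the k most significant of m bits:
# split around the median key, spread the median ties so that the two
# halves split the input evenly, and recurse on each half.

def partial_sort(xs, m, k):
    if len(xs) <= 1 or k == 0:
        return list(xs)
    key = lambda v: v >> (m - k)
    median = sorted(key(v) for v in xs)[len(xs) // 2 - 1]
    L = [v for v in xs if key(v) < median]
    M = [v for v in xs if key(v) == median]
    R = [v for v in xs if key(v) > median]
    cut = len(xs) // 2 - len(L)               # median copies going left
    left, right = L + M[:cut], M[cut:] + R
    return partial_sort(left, m, k) + partial_sort(right, m, k)

xs = [0b1101, 0b0010, 0b1000, 0b0111, 0b0100, 0b1111, 0b0001, 0b1010]
ys = partial_sort(xs, m=4, k=2)
assert [y >> 2 for y in ys] == sorted(x >> 2 for x in xs)
\end{verbatim}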
Not counting the two sub-circuits $\cktSort{n/2}{m}{k-1}$ and $\cktSort{n/2}{m}{k}$, this step requires four copies of the circuit $\cktRSort{n}{m+1}{1}$ and an additional $\mathcal{O}(nm)$ gates to do the moves and the element comparisons with the median. Denote the size of this part of the circuit by $L_{m}(n)=\mathcal{O}(n m (1+\log^*(n)-\log^*(m)))$. The depth of the resulting circuit to perform all those operations is $\mathcal{O}(\log^2(n))$ as the move operations are done in parallel (again, not counting the depth of $\cktSort{n/2}{m}{k-1}$ and $\cktSort{n/2}{m}{k}$). If we denote by $S_{m, k}(n)$ the size of the circuit \cktSort{n}{m}{k} we get the following recurrence: \begin{align*} S_{m, k}(1) &= \mathcal{O}(m) \\ S_{m, 1}(n) &= \mathcal{O}(n m (1 + \log^*(n) - \log^*(m))) \\ S_{m, k}(n) &\leq L_{m}(n) + S_{m, k-1}\left( \frac{n}{2} \right) + S_{m, k}\left( \frac{n}{2} \right) \end{align*} Iterating the recurrence we obtain: \begin{align*} S_{m, k}(n) &= L_{m}(n) + S_{m, k-1}(n/2) + S_{m, k}(n/2) \\ &= L_{m}(n) + S_{m, k-1}(n/2) + L_{m}(n/2) + S_{m, k-1}(n/4) + S_{m, k}(n/4) \\ &= L_{m}(n) + S_{m, k-1}(n/2) + L_{m}(n/2) \\ &{} \text{ \ \ \ } + S_{m, k-1}(n/4) + L_{m}(n/4) + S_{m, k-1}(n/8) + S_{m, k}(n/8) \\ &= \ldots \\ &= \left( L_{m}(n) + L_{m}(n/2) + \ldots + L_{m}(1) \right) \\ &{} \text{\ \ \ \ } + \left( S_{m, k-1}(n/2) + S_{m, k-1}(n/4) + \ldots + S_{m, k-1}(1) \right) + S_{m, k}(1) \\ &\leq L_{m}(2 n) + S_{m, k-1}(n) + \mathcal{O}(m) \end{align*} which gives us \begin{align*} S_{m, k}(n) &\leq k L_{m}(2 n) + (k-1) S_{m, k}(1) + S_{m, 1}(n) \\ &= k L_{m}(2 n) + \mathcal{O}(n m (1 + \log^*(n) - \log^*(m))) \\ &= \mathcal{O}(k n m (1 + \log^*(n) - \log^*(m))) \end{align*} To bound the depth $D_{m, k}(n)$ we use the following recurrence: \begin{align*} D_{m, k}(1) &= \mathcal{O}(1) \\ D_{m, k}(n) &\geq D_{m, k-1}(n) \\ D_{m, 1}(n) &= \mathcal{O}(\log^2(n)) \\ D_{m, k}(n) &= \mathcal{O}(\log^2(n)) + \max\left( D_{m, k}(n/2),\, D_{m, k-1}(n/2) \right) \\ &\leq \mathcal{O}(\log^2(n)) + D_{m, k}(n/2) \\ &\leq \mathcal{O}(\log^3(n)) \end{align*} \end{proof} \section{Conclusion} We have provided improved sorting circuits. Our technique used in the proof of Theorem~\ref{thm:main} can be viewed as information compression and decompression. This technique might prove useful for other related problems. We list some open problems: \begin{itemize} \item Most of our circuits are uniform. The non-uniform part is due to the use of the AKS circuits and Pippenger's super-concentrators. Can one make uniform circuits of the same size? \item Kospanov~\cite{kospanov1994scheme} shows that there is a family of sorting circuits with depth $\mathcal{O}(\log(n) + \log(m))$ and size $\mathcal{O}(m n^2)$ that sorts $n$ numbers each of $m$ bits. Is there a circuit family for sorting with circuits of depth $\mathcal{O}(\log(n) + \log(m))$ and size $\mathcal{O}(n m^2)$? In other words, can we get rid of the $m \log(m)$ factor in the circuit depth from Theorem~\ref{thm:main} while keeping the $\mathcal{O}(nm^2)$ size? \item Is it possible to partially sort $n$ numbers of $m$ bits each by their first bit using a circuit of size $\mathcal{O}(nm)$ and depth $\mathcal{O}(\log(n))$? \end{itemize} \paragraph{Acknowledgement:} The authors are grateful for insightful discussions with Mike Saks on sorting and to Veronika Slívová for her insights and comments regarding the first versions of this paper. The authors thank Igor Sergeev for pointing us to the paper of Kospanov~\cite{kospanov1994scheme}. \end{document}
\begin{document} \title{Non-vanishing complex vector fields and the Euler characteristic} \thanks{Submitted for publication July 25, 2008.} \author{Howard Jacobowitz} \address{Rutgers University\\ Camden, New Jersey 08012} \email{[email protected]} \subjclass[2000]{Primary 57R25; secondary 57R20} \begin{abstract} Every manifold admits a nowhere vanishing complex vector field. If, however, the manifold is compact and orientable and the complex bilinear form associated to a Riemannian metric is never zero when evaluated on the vector field, then the manifold must have zero Euler characteristic. \end{abstract} \maketitle One of the oldest and most basic results in global differential topology relates the topology of a manifold to the zeros of its vector fields. Let $M$ be a compact and orientable manifold and let $\chi (M)$ denote its Euler characteristic. Here is the simplest statement of this relation. \begin{eqnarray}\label{real} \mbox{If there is a global nowhere zero vector field on}\ M \mbox{ then}\ \chi (M) =0. \end{eqnarray} This of course is for a real vector field (that is, for a section $M\to TM$). On the other hand, it is easy to see that any manifold admits a nowhere zero complex vector field. (A complex vector field is a section $M\to \mathbb{C}\otimes TM$.) This can be seen most simply by observing that a generic perturbation of any section, even the zero section itself, must be everywhere different from zero. It is natural to seek a condition on a nowhere zero complex vector field which would again imply $\chi (M) =0$. Curiously, a trivial restatement of (\ref{real}) leads to such a condition. Let $g$ be any Riemannian metric on $M$. \noindent (2)\ Let $v : M\to TM $ be a global vector field on $M$. If the function $g(v,v)$ is never zero, then $\chi (M)=0$. \noindent Here is the condition for complex vector fields. \begin{thm}\nonumber Let $v : M\to \mathbb{C}\otimes TM$ be a global vector field on $ M$. If the bilinear form $g(v,v)$ is never zero, then $ \chi (M)=0$. \end{thm} \noindent Here $g$ is extended to complex vector fields by taking $g(v,w)$ to be complex linear in each argument; for $v=\xi + i\eta$ we have \[ g(v,v)=g(\xi ,\xi )-g(\eta ,\eta )+2ig(\xi ,\eta ). \] \begin{proof} We show that if $g(v,v)\not= 0$ then $v$ can be deformed to a nowhere zero real vector field. So the Euler characteristic is zero, according to (\ref{real}). We decompose $M$ as \[ M=A_+\cup B\cup A_- \] where $g(\xi ,\xi )>g(\eta ,\eta )$ on $A_+$, the opposite inequality holds on $A_-$, and equality holds on $B$. We assume for now that $B$ is not empty. Note that $\xi$ is nowhere zero in $A_+$ and $\eta$ is nowhere zero in $A_-$. Further, since $g(v,v)$ is never zero, we have that $g(\xi ,\eta )$ is never zero on $B$. Thus there is an open neighborhood\ $\Omega$ of $B$ on which $g(\xi ,\eta )$ is never zero. We may take $\Omega$ to have a smooth boundary. We have that $\eta$ is never zero in $A_-\cup\Omega$ and $\xi$ is never zero in $A_+\cup\Omega$. Let $\Omega _1$ be an open set chosen so that \[ B\subset \Omega _1,\mbox{ } \overline{\Omega _1}\subset \Omega \] and \[\overline{\Omega _1} \mbox{ is a neighborhood\ retract of }\overline{ \Omega}. \] The boundary of $\Omega$ has two components, one in $A_+$ and the other in $A_-$. (That is, the boundary of $\Omega$ is the union of two sets, neither of which need be connected.) The same is true for the boundary of $\Omega _1$. We will work only with the components in $A_+$. Call them $\Sigma$ and $\Sigma _1$.
Each of these sets separates $M$ into two components. We seek to deform $v$ to a nowhere vanishing real vector field $u$. Set $u=\xi$ on the component of $M-\Sigma$ which does not contain $A_-$. The sets $\Sigma$ and $\Sigma _1$ bound a region which retracts onto $\Sigma _1$. We want to rotate $\xi$ to $\eta$, (or to $-\eta$) as the retraction takes $\Sigma$ to $\Sigma _1$. Since $g(\xi ,\eta ) \neq 0$, in this region, this is easily done. Pick a point in this region. The angle $\theta$ between the vectors $\xi$ and $\eta$ satisfies one of the alternatives \[ 0\leq \theta \leq \pi /2 \mbox{ or } \pi /2 < \theta \leq \pi \] and whichever alternative is satisfied at that point is also satisfied at all points in the region. Thus as we retract $\Sigma$ to $\Sigma _1$, we may rotate $\xi$ to $\eta$, or, respectively to $-\eta$. Finally, define $u=\eta$, respectively $u= -\eta$, on the component of $M-\Sigma _1$ which contains $A_-$. If $B$ is empty, the proof is even easier. Now either $g(\xi ,\xi)>g(\eta ,\eta )$ everywhere and so $\xi$ is a nowhere zero real vector field or the opposite inequality holds and $\eta$ is a nowhere zero real vector field. \end{proof} {\it{Remark}}. \noindent We have proved the theorem by reducing to (\ref{real}). This latter result goes back to H. Hopf; an influential modern proof was given by Atiyah \cite{A}. Atiyah's proof makes use of the Clifford algebra structure on the bundle of exterior forms. Our Theorem can be proved directly, without reducing to (\ref{real}), by following Atiyah's proof using the corresponding complex Clifford algebra. There is a stronger version of (\ref{real}) which expresses the Euler characteristic as the algebraic sum of the indices of the zeros of the vector field. (Indeed this is the result of Hopf.) It would be interesting to generalize this to complex vector fields. \end{document}
\begin{document} \author{George Avalos \\ Department of Mathematics, University of Nebraska-Lincoln, USA \and Pelin G. Geredeli \\ Department of Mathematics, Iowa State University, USA \and Boris Muha \\ Department of Mathematics, Faculty of Science, University of Zagreb, Croatia} \title{ Wellposedness, Spectral Analysis and Asymptotic Stability of a Multilayered Heat-Wave-Wave System } \maketitle \begin{abstract} In this work we consider a multilayered heat-wave system where a 3-D heat equation is coupled with a 3-D wave equation via a 2-D interface whose dynamics is described by a 2-D wave equation. This system can be viewed as a simplification of a certain fluid-structure interaction (FSI) PDE model where the structure is of composite-type; namely it consists of a \textquotedblleft thin\textquotedblright\ layer and a \textquotedblleft thick\textquotedblright\ layer. We associate the wellposedness of the system with a strongly continuous semigroup and establish its asymptotic decay. Our first result is semigroup well-posedness for the (FSI) PDE dynamics. Utilizing here a Lumer-Phillips approach, we show that the fluid-structure system generates a $C_0$-semigroup on a chosen finite energy space of data. As our second result, we prove that the solution to the (FSI) dynamics generated by the $C_0$-semigroup tends asymptotically to the zero state for all initial data. That is, the semigroup of the (FSI) system is strongly stable. For this stability work, we analyze the spectrum of the generator $\mathbf A$ and show that the spectrum of $\mathbf A$ does not intersect the imaginary axis. \vskip.3cm \noindent \textbf{Key terms:} Fluid-structure interaction, heat-wave system, well-posedness, semigroup, strong stability \end{abstract} \section{Introduction} \subsection{Motivation and Literature} This work is motivated by a longstanding interest in the analysis of fluid-structure interaction (FSI) partial differential equation (PDE) dynamics. Such FSI problems deal with multi-physics systems consisting of fluid and structure PDE components. These systems are ubiquitous in nature and have many applications, e.g., in biomedicine \cite{FSIforBIO} and aeroelasticity \cite{Dowell15}. However, the resulting PDE systems are very complicated (due to nonlinearities, moving boundary phenomena and hyperbolic-parabolic coupling) and despite extensive research activity in last 20 years, the comprehensive analytic theory for such systems is still not available. Accordingly, by way of obtaining a better understanding of FSI dynamics, it would seem natural to consider those FSI PDE models, which although constitute a simplification of sorts, yet retain their crucial novelties and intrinsic difficulties. For example, in the past, coupled heat-wave PDE systems (and variations thereof) have been considered for study: the heat equation component is regarded as a simplification of the fluid flow component of the FSI dynamics; the wave equation component is regarded as a simplification of the structural (elastic) component; see e.g., [\cite{lions1969quelques}, Section 9] and \cite{RauchZhangZuazua}. See also the works \cite{Du, A-T, Barbu, Chambolle, Courtand}, in which the fluid PDE component of fluid-structure interactions is governed by Stokes or Navier-Stokes flow. Here we consider a multilayered version of such heat-wave system; where the coupling of the 3-D heat and the 3-D wave equations is realized via an additional 2-D wave equation on the boundary interface. 
This is a simplified (yet physically relevant) version of a benchmark fluid-component structure PDE model which was introduced in \cite{SunBorMulti}. This particular FSI problem was principally motivated by the mathematical modeling of vascular blood flow: such modeling PDE dynamics will account for the fact that the blood-transporting vessels are generally composed of several layers, each with different mechanical properties and are moreover separated by the thin elastic laminae (see \cite{multi-layered} for more details). In order to mathematically model these biological features, the multilayered structural component of such FSI dynamics is governed by a 3-D wave-2-D wave PDE system. For the physical interpretation and derivation of such coupled "thick-thin" structure models we refer reader to \cite[Chapter 2]{CiarletBook2} and references within. As we said, although the present multilayered heat- wave- wave system constitutes a simplification somewhat of the FSI model in \cite{SunBorMulti} -- in particular, the 2-D wave equation takes the place of a fourth order plate or shell PDE -- our results remain valid if we replace the 2-D wave equation with the corresponding linear fourth order equation. Within the context of the present multilayered heat-wave-wave coupled system, we are interested in asymptotic behavior of the solutions, and regularization effects of the fluid dissipation and coupling via the elastic interface, inasmuch as there is a dissipation of the natural energy of the heat-wave-wave PDE system – with this dissipation coming strictly from the heat component of the FSI dynamics – it is a reasonable objective to determine if this thermal dissipation actually gives rise to asymptotic decay (at least) to all three PDE solution components: That is, we seek to ascertain longtime decay of both 3-D and 2-D wave solution components, as well as the heat solution component. Such a strong stability can be seen as a measure of the "strength" of the coupling condition. For the classical heat-wave system (without the 2-D wave equation on the interface) this question is by now rather well understood and precise decay rates are well known (see \cite{AvalosLasieckaTriggiani16,Batty19} and references within.) (We should emphasize that the high-frequency oscillations in the structure are not efficiently dissipated and therefore there is no exponential decay of the energy.) Our present investigation into the multilayered wave-heat systems is motivated in part by \cite{SunBorMulti} which considered a nonlinear FSI comprised by 2-D (thick layer) wave equation and 1-D wave equation (thin layer) coupled to a 2-D fluid PDE across a boundary interface. For these dynamics, wellposedness was established in \cite{SunBorMulti}, in part by exploiting an underlying regularity which was available by the presence of said wave equation. (Such regularizing effects were observed numerically in \cite{multi-layered} and precisely quantified in the sense of Sobolev for a 1-D FSI system in \cite{BorisSimplifiedFSI}. For similar regularizing effects in the context of hyperbolic-hyperbolic PDE couplings, we refer to \cite{HansenZuazua, KochZauzua, LescarretZuazua15}.) By way of gaining a better qualitative understanding of FSI systems, such as those in \cite{SunBorMulti}, we here embark upon an investigation of the aforesaid 3-D heat-2-D wave-3-D wave coupled PDE system; in particular, we will establish the semigroup wellposedness and asymptotic decay to zero of the underlying energy of this FSI. 
These objectives of wellposedness and decay will entail a precise understanding of the role played by the coupling mechanisms on the elastic interface and by the fluid dissipation. In future work, we will investigate possible regularizing effects, at least for certain polygonal configurations of the boundary interface. We finish this section by giving a brief literature review, in addition to the ones mentioned above. FSI models have been very active and broad area of research in the last two decades and therefore here we avoid presenting a full literature review: we merely mention here a few recent monographs and review works \cite{FSIforBIO,bodnar2017particles,SunnyStentsSIAM,GazzolaReview,MTEFSI,RichterBook}, where interested reader can find further references. The study of various simplified FSI models which manifest parabolic-hyperbolic coupling has a long history going back at least to [\cite{lions1969quelques}, Section 9], where the Navier-Stokes equations are coupled with the wave equation along a fixed interface. However, even in the linear case the presence of the pressure term gives rise to significant mathematical challenges in developing the semigroup wellposednes theory \cite{AvalosTriggiani09}. Thus, the heat-wave system has been extensively studied in last decade as a suitable simplified model for stability analysis of parabolic-hyperbolic coupling occurring in FSI systems, see e.g. \cite{A-B, AvalosTriggiani2,Duyckaerts,Fathallah,ZhangZuazuaARMA07} and references within. To the best of our knowledge there are still no results about strong stability of FSI systems with an elastic interface. \subsection{PDE Model} Let the fluid geometry $\Omega _{f}$ $\subseteq \mathbb{R}^{3}$ be a Lipschitz, bounded domain. The structure domain $\Omega _{s}$ $ \subseteq \mathbb{R}^{3}$ will be \textquotedblleft completely immersed\textquotedblright\ in $\Omega _{f}$; with $\Omega _{s}$ being a convex polyhedral domain. \begin{center} \includegraphics[scale=0.4]{geo.pdf} \end{center} \begin{center} \textbf{Figure: Geometry of the FSI Domain} \end{center} In the figure, $\Gamma _{f}$ is the part of boundary of $\partial \Omega _{f} $ which does not come into contact with $\Omega _{s}$; $\Gamma _{s}=\partial \Omega _{s}$ is the boundary interface between $\Omega _{f}$ and $\Omega _{s} $ wherein the coupling between the two distinct fluid and elastic dynamics occurs. (And so, $\partial \Omega _{f}=\Gamma _{s}\cup \Gamma _{f}$.) We have that \begin{equation} \Gamma _{s}=\cup _{j=1}^{K}\overline{\Gamma }_{j}, \label{1} \end{equation} where $\Gamma _{i}\cap \Gamma _{j}=\emptyset $, for $i\neq j$. It is further assumed that each $\Gamma _{j}$ is an open polygonal domain. Moreover, $n_{j}$ will denote the unit normal vector which is exterior to $ \partial \Gamma _{j}$, $1\leq j\leq K$. With respect to this geometry, the $ \mathbb{R}^{3}$ wave--$\mathbb{R}^{2}$ wave--$\mathbb{R}^{3}$ heat interaction PDE model is given as follows:\\% For $i\leq j\leq K,$ \begin{equation} \left\{ \begin{array}{l} u_{t}-\Delta u=0\text{ \ \ \ in \ }(0,T)\times \Omega _{f} \\ u|_{\Gamma _{f}}=0\text{ \ \ \ on \ }(0,T)\times \Gamma _{f}; \end{array} \right. 
\label{2a} \end{equation} \begin{equation} \left\{ \begin{array}{l} \frac{\partial ^{2}}{\partial t^{2}}h_{j}-\Delta h_{j}+h_{j}=\frac{\partial w }{\partial \nu }|_{\Gamma _{j}}-\frac{\partial u}{\partial \nu }|_{\Gamma _{j}}\text{ \ \ \ on \ }(0,T)\times \Gamma _{j} \\ h_{j}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=h_{l}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ }(0,T)\times (\partial \Gamma _{j}\cap \partial \Gamma _{l})\text{, for all }1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset \\ \left. \dfrac{\partial h_{j}}{\partial n_{j}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=-\left. \dfrac{\partial h_{_{l}}}{\partial n_{l}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{on \ }(0,T)\times (\partial \Gamma _{j}\cap \partial \Gamma _{l})\text{, for all } 1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset .\text{\ } \end{array} \right. \label{2.5b} \end{equation} \begin{equation} \left\{ \begin{array}{l} w_{tt}-\Delta w=0\text{ \ \ \ on \ }(0,T)\times \Omega _{s} \\ w_{t}|_{\Gamma _{j}}=\frac{\partial }{\partial t}h_{j}=u|_{\Gamma _{j}}\text{ \ \ \ on \ }(0,T)\times \Gamma _{j}\text{, \ for }j=1,...,K \end{array} \right. \label{2d} \end{equation} \begin{equation} \lbrack u(0),h_{1}(0),\frac{\partial }{\partial t}h_{1}(0),...,h_{K}(0), \frac{\partial }{\partial t} h_{K}(0),w(0),w_{t}(0)]=[u_{0},h_{01},h_{11},...,h_{0K},h_{1K},w_{0},w_{1}]. \label{IC} \end{equation} Equation \eqref{2.5b}$_1$ is the dynamic coupling condition and represents a balance of forces on $\Gamma_j$. The left-hand side comes from the inertia and elastic energy of the thin structure, while the right-hand side accounts for the contact forces coming from the 3-D structure and the fluid, respectively. The last term of the left-hand side is added to ensure the uniqueness of the solution and physically means that the structure is anchored and therefore the displacement does not have a translational component. The coupling conditions \eqref{2.5b}$_2$ and \eqref{2.5b}$_3$ represent continuity of the displacement and contact force along the interface between sides $\Gamma_i$ and $\Gamma_l$, respectively. Equation \eqref{2d}$_2$ is a kinematic coupling condition and accounts for continuity of the velocity across the interface $\Gamma_j$. It corresponds to the no-slip boundary condition in fluid mechanics. Note that the boundary condition in (\ref{2d}) implies that for $t>0$, \begin{equation*} w(t)|_{\Gamma _{j}}-h_{j}(t)=w(0)|_{\Gamma _{j}}-h_{j}(0),\text{ \ \ for } j=1,...,K. \end{equation*} Accordingly, the associated space of initial data $\mathbf{H}$ incorporates a compatibility condition. Namely, \begin{equation} \begin{array}{l} \mathbf{H}=\{[u_{0},h_{01},h_{11},...,h_{0k},h_{1k},w_{0},w_{1}]\in L^{2}(\Omega _{f})\times H^{1}(\Gamma _{1})\times L^{2}(\Gamma _{1})\times ... \\ \text{ \ \ \ \ \ \ \ \ \ \ \ }\times H^{1}(\Gamma _{K})\times L^{2}(\Gamma _{K})\times H^{1}(\Omega _{s})\times L^{2}(\Omega _{s})\text{, \ such that for each }1\leq j\leq K\text{:\ (i) }w_{0}|_{\Gamma _{j}}=h_{0j}\text{; } \\ \text{ \ \ \ \ \ \ \ \ }\left. \text{(ii) }h_{0j}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=h_{0l}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}} \text{ on \ }\partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all } 1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset \right\} . 
\end{array} \label{H} \end{equation} Because of the given boundary interface compatibility condition, $\mathbf{H}$ is a Hilbert space with the inner product \begin{eqnarray} (\Phi _{0},\widetilde{\Phi }_{0})_{\mathbf{H}} &=&(u_{0},\widetilde{u} _{0})_{\Omega _{f}}+\sum\limits_{j=1}^{K}(\nabla h_{0j},\nabla \widetilde{h} _{0j})_{\Gamma _{j}}+\sum\limits_{j=1}^{K}(h_{0j},\widetilde{h} _{0j})_{\Gamma _{j}} \notag \\ &&+\sum\limits_{j=1}^{K}(h_{1j},\widetilde{h}_{1j})_{\Gamma _{j}}+(\nabla w_{0},\nabla \widetilde{w}_{0})_{\Omega _{s}}+(w_{1},\widetilde{w} _{1})_{\Omega _{s}}, \label{Hilbert} \end{eqnarray} where \begin{equation} \Phi _{0}=\left[ u_{0},h_{01},h_{11},...,h_{0K},h_{1K},w_{0},w_{1}\right] \in \mathbf{H}\text{; \ }\widetilde{\Phi }_{0}=\left[ \widetilde{u}_{0}, \widetilde{h}_{01},\widetilde{h}_{11},...,\widetilde{h}_{0K},\widetilde{h} _{1K},\widetilde{w}_{0},\widetilde{w}_{1}\right] \in \mathbf{H}. \label{stat} \end{equation} \subsection{Novelty and Challenges} The novelty of this work is that we consider an FSI model in which the interface is elastic and has mass. This is the simplest 3-D model of the interaction of the fluid with a composite structure which retains the basic mathematical properties of the physical model. To the best of our knowledge this is the first result about the asymptotic behavior of solutions to such problems. We work in a setting where the structure domain is a polyhedron and the dynamics of each polygonal side of the boundary is governed by the 2-D linear wave equation. The wave equations are coupled via dynamic and kinematic coupling conditions over the common boundaries. We choose this setting because it will directly translate to the numerical analysis of the problem.
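For orientation we note the formal energy relation that motivates the analysis below: multiplying the heat equation by $u$, the thin wave equations by $\frac{\partial }{\partial t}h_{j}$ and the thick wave equation by $w_{t}$, integrating by parts, and using the coupling conditions in \eqref{2.5b}--\eqref{2d} to cancel the interface terms (modulo the orientation conventions for the unit normal $\nu $), one obtains, at least formally for smooth solutions,
\begin{equation*}
\frac{1}{2}\frac{d}{dt}\left\Vert \Phi (t)\right\Vert _{\mathbf{H}}^{2}=-\left\Vert \nabla u(t)\right\Vert _{L^{2}(\Omega _{f})}^{2}\leq 0,
\end{equation*}
so that the only source of dissipation is the heat component. This heuristic computation is made rigorous through the dissipativity of the generator $\mathbf{A}$ established in Section 2.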
This work is an important first step toward a finer analysis of the asymptotic decay (e.g., decay rates) and regularity properties of the solutions, and toward a better understanding of the influence of the elastic interface with mass on the qualitative properties of the solutions.

By way of establishing the semigroup wellposedness of the multilayered FSI model \eqref{2a}-\eqref{IC} -- i.e., Theorem 1 below -- we will show that the associated generator $\mathbf{A}$, defined by \eqref{4a} and (A.i)-(A.iv) below, is maximal dissipative, and so generates a $C_{0}$-semigroup of contractions on the natural Hilbert space of finite energy \eqref{H}. The presence of the \textquotedblleft thin layer\textquotedblright\ wave equation on $\Gamma _{j}$, $1\leq j\leq K$, complicates this wellposedness work, vis-\`{a}-vis the situation which prevails for the previous 3-D heat-3-D wave models in \cite{AvalosLasieckaTriggiani16, AvalosTriggiani2, Duyckaerts, RauchZhangZuazua, ZhangZuazuaARMA07}, for which a relatively straightforward invocation of the Lax-Milgram Theorem suffices to establish the maximality of the associated FSI generator. In the present work, we will likewise apply Lax-Milgram in order to ultimately show the range condition $\mathrm{Range}(\lambda I-\mathbf{A})=\mathbf{H}$ for $\lambda >0$; in particular, Lax-Milgram will be applied for the solvability of a certain variational equation, relative to elements in a certain subspace of $H^{1}(\Omega _{f})\times H^{1}(\Gamma _{1})\times ...\times H^{1}(\Gamma _{K})\times H^{1}(\Omega _{s})$. (See \eqref{8} below.) This variational equation of course reflects the presence of the thin wave components $h_{j}$ in \eqref{2a}-\eqref{IC}. The complications arise in the subsequent justification that the solutions of said variational equation give rise to solutions of the resolvent equation (in \eqref{a1} below) which are indeed in $D(\mathbf{A})$. In particular, we must proceed delicately to show that the obtained thin layer solution components of the resolvent relation \eqref{a1} satisfy the continuity conditions \eqref{2.5b}$_{2}$ and \eqref{2.5b}$_{3}$.

Having established the existence of a $C_{0}$-semigroup of contractions $\left\{ e^{\mathbf{A}t}\right\} _{t\geq 0}\subset \mathcal{L}(\mathbf{H})$ which models the multilayer FSI PDE dynamics \eqref{2a}-\eqref{IC}, we will subsequently show the strong decay of this semigroup; this is Theorem \ref{SS} below. Inasmuch as our analysis of the regularizing effects of the resolvent operator $\mathcal{R}(\lambda ;\mathbf{A})$ is to be undertaken in future work -- assuming there is such underlying smoothness, at least for some geometrical configurations of the polygonal boundary segments; see Remark \ref{remark_3} below -- the compactness of the embedding $D(\mathbf{A})\hookrightarrow \mathbf{H}$ cannot be taken for granted. Accordingly, in order to establish asymptotic decay of solutions to the FSI PDE dynamics \eqref{2a}-\eqref{IC}, we will work to satisfy the conditions of the well-known stability result of \cite{A-B}; see also \cite{L-P}. In particular, we will show below that $\sigma (\mathbf{A})\cap i\mathbb{R}=\emptyset $. (In our future work on discerning uniform decay properties of solutions to the multilayered FSI system \eqref{2a}-\eqref{IC}, the spectral information in Theorem \ref{SS} is also requisite; see e.g., the resolvent criteria in \cite{huang} and \cite{tomilov}.)
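In this connection, we will use the Lumer--Phillips Theorem in the following standard form, recorded here only for the reader's convenience: if $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H}\rightarrow \mathbf{H}$ is a densely defined operator which is dissipative, i.e.,
\begin{equation*}
\text{Re}(\mathbf{A}\Phi ,\Phi )_{\mathbf{H}}\leq 0\text{ \ for all }\Phi \in D(\mathbf{A}),
\end{equation*}
and which moreover satisfies the range condition $\mathrm{Range}(\lambda I-\mathbf{A})=\mathbf{H}$ for some $\lambda >0$, then $\mathbf{A}$ generates a $C_{0}$-semigroup of contractions $\left\{ e^{\mathbf{A}t}\right\} _{t\geq 0}$ on $\mathbf{H}$. These are precisely the two properties -- dissipativity and maximality -- which are verified in Section 3.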
In showing the nonpresence of $\sigma (\mathbf{A})$ on the imaginary axis -- in particular, to handle the continuous spectrum of $\mathbf{A}$ -- we will proceed in a manner somewhat analogous to what was undertaken in \cite{A-P2} (in which another coupled PDE system, with the coupling accomplished across a boundary interface, is analyzed with a view towards stability). However, the thin layer wave equation in \eqref{2.5b} again gives rise to complications: in the course of eliminating the possibility of approximate spectrum of $\mathbf{A}$ on $i\mathbb{R}$, we find it necessary to invoke the wave multipliers which are used in PDE control theory for \emph{uniform} stabilization of boundary controlled waves. Namely, inasmuch as each $h_{j}$-wave equation in \eqref{2.5b} carries the difference of the 3-D wave and heat fluxes as a forcing term, we cannot immediately control the thick wave trace $\left. \frac{\partial w}{\partial \nu }\right\vert _{\Gamma _{s}}$ in the $H^{-\frac{1}{2}}(\Gamma _{s})$-norm, this control being needed for strong decay. (This issue does not appear for the previously considered 3-D heat-3-D wave FSI models of \cite{Duyckaerts} and the other works mentioned above, since therein the coupling boundary condition involves only the difference of heat and wave fluxes, which immediately leads to an $H^{-\frac{1}{2}}(\Gamma _{s})$ estimate of the wave normal derivative, owing to the thermal dissipation.) Consequently, we must invoke static versions of the wave identities in [14], \cite{trigg} and \cite{AG1}, by way of estimating the normal derivative of (a component of) the 3-D wave solution variable $w$ in \eqref{2d}; see relation (74) below.

\subsection{Notation}

For the remainder of the text, norms $\|\cdot\|$ are taken to be $L^2(D)$-norms for the pertinent domain $D$. Inner products in $L^2(D)$ are written $(\cdot,\cdot)$, while inner products in $L^2(\partial D)$ are written $\langle\cdot,\cdot\rangle$. The space $H^s(D)$ will denote the Sobolev space of order $s$ on a domain $D$, and $H^s_0(D)$ denotes the closure of $C_0^{\infty}(D)$ in the $H^s(D)$-norm, which we denote by $\|\cdot\|_{H^s(D)}$ or $\|\cdot\|_{s,D}$. We make use of the standard notation for the trace of a function defined on a Lipschitz domain $D$; i.e., for a scalar function $\phi \in H^1(D)$, $\gamma(\phi)$ denotes its trace, where $\gamma$ is the trace mapping from $H^1(D)$ to $H^{1/2}(\partial D)$. We will also denote pertinent duality pairings as $(\cdot, \cdot)_{X \times X'}$.

\section{Main Results}

\subsection{The thick wave-thin wave-heat Generator}

With respect to the above setting, the PDE system given in (\ref{2a})-(\ref{IC}) can be recast as an ODE in the Hilbert space $\mathbf{H}$. That is, if $\Phi (t)=\left[ u,h_{1},\frac{\partial }{\partial t}h_{1},...,h_{K},\frac{\partial }{\partial t}h_{K},w,w_{t}\right] \in C([0,T];\mathbf{H})$ solves (\ref{2a})-(\ref{IC}) for $\Phi _{0}\in \mathbf{H}$, then there is a modeling operator $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H}\rightarrow \mathbf{H}$ such that $\Phi(\cdot)$ satisfies
\begin{equation}
\frac{d}{dt}\Phi (t)=\mathbf{A}\Phi (t)\text{; \ }\Phi (0)=\Phi _{0}.
\label{ODE} \end{equation} In fact, this operator $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H} \rightarrow \mathbf{H}$ is defined as follows: \begin{equation} \mathbf{A}=\left[ \begin{array}{cccccccc} \Delta & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & I & \cdots & 0 & 0 & 0 & 0 \\ -\frac{\partial }{\partial \nu }|_{\Gamma _{1}} & (\Delta -I) & 0 & \cdots & 0 & 0 & \frac{\partial }{\partial \nu }|_{\Gamma _{1}} & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & I & 0 & 0 \\ -\frac{\partial }{\partial \nu }|_{\Gamma _{K}} & 0 & 0 & \cdots & (\Delta -I) & 0 & \frac{\partial }{\partial \nu }|_{\Gamma _{K}} & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 & 0 & I \\ 0 & 0 & 0 & \cdots & 0 & 0 & \Delta & 0 \end{array} \right] ; \label{4a} \end{equation} \begin{equation} \begin{array}{l} D(\mathbf{A})=\left\{ \left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] \in \mathbf{H}:\right. \\ \text{ \ \ \textbf{(A.i)} }u_{0}\in H^{1}(\Omega _{f})\text{, }h_{1j}\in H^{1}(\Gamma _{j})\text{ for }1\leq j\leq K\text{, }w_{1}\in H^{1}(\Omega _{s})\text{;} \\ \text{ \ }\left. \text{\textbf{(A.ii)} (a) }\Delta u_{0}\in L^{2}(\Omega _{f})\text{, }\Delta w_{0}\in L^{2}(\Omega _{s})\text{, (b) }\Delta h_{0j}-\frac{\partial u_{0}}{\partial \nu }|_{\Gamma _{j}}+\frac{\partial w_{0}}{\partial \nu } |_{\Gamma _{j}}\in L^{2}(\Gamma _{j})\text{ \ for \ } 1\leq j\leq K\text{;} \right. \\ \text{ \ \ \ \ \ \ (c) }\left. \dfrac{\partial h_{0j}}{\partial n_{j}} \right\vert _{\partial \Gamma _{j}}\in H^{-\frac{1}{2}}(\partial \Gamma _{j}) \text{, \ for \ } 1\leq j\leq K\text{;} \\ \text{ \ }\left. \text{\textbf{(A.iii)} }u_{0}|_{\Gamma _{f}}=0,\ \ u_{0}|_{\Gamma _{j}}=h_{1j}=w_{1}|_{\Gamma _{j}},\ \text{for }1\leq j\leq K\text{;}\right. \\ \text{ \ }\left. \text{\textbf{(A.iv)} For }1\leq j\leq K\text{: }\right. \\ \text{ \ \ \ \ \ \ (a) }h_{1j}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=h_{1l}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ } \partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all }1\leq l\leq K \text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset ; \\ \text{ \ \ \ \ \ \ \ }\left. \text{(b) }\left. \dfrac{\partial h_{0j}}{ \partial n_{j}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=-\left. \dfrac{\partial h_{0_{l}}}{\partial n_{l}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ }\partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all }1\leq l\leq K\text{ such that } \partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset \right\} . \end{array} \label{dom} \end{equation} Now, in our first result, we provide a semigroup wellposedness for $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H} \rightarrow \mathbf{H}$. This is given in the following theorem: \begin{theorem} \label{well}The operator $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H} \rightarrow \mathbf{H}$, defined in (\ref{4a})-(\ref{dom}), generates a $ C_{0}$-semigroup of contractions. Consequently, the solution $\Phi (t)=\left[ u,h_{1},\frac{\partial }{\partial t}h_{1},...,h_{K},\frac{\partial }{ \partial t}h_{K},w,w_{t}\right] $ of (\ref{2a})-(\ref{IC}), or equivalently ( \ref{ODE}), is given by \begin{equation*} \Phi (t)=e^{\mathbf{A}t}\Phi _{0}\in C([0,T];\mathbf{H})\text{,} \end{equation*} where $\Phi _{0}=\left[ u_{0},h_{01},h_{11},...,h_{0K},h_{1K},w_{0},w_{1} \right] \in \mathbf{H}$. 
\end{theorem}

After proving the existence and uniqueness of the solution, in our second result we investigate the long-time behavior of this solution. Our main goal here is to show that the solution to the system (\ref{2a})-(\ref{IC}) is strongly stable, which is given as follows:

\begin{theorem}
\label{SS} For the modeling generator $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H}\rightarrow \mathbf{H}$ of (\ref{2a})-(\ref{IC}), one has $\sigma(\mathbf{A})\cap i\mathbb{R}=\emptyset $. Consequently, the $C_{0}$-semigroup $\left\{ e^{\mathbf{A}t}\right\} _{t\geq 0}$ given in Theorem \ref{well} is strongly stable. That is, the solution $\Phi (t)$ of the PDE (\ref{2a})-(\ref{IC}) tends asymptotically to the zero state for all initial data $\Phi _{0}\in \mathbf{H}$.
\end{theorem}

\begin{remark}
\label{remark_1} The wellposedness and stability statements of Theorems 1 and 2 are equally valid in the lower dimensional setting $n=2$; i.e., for multilayered 2D heat -- 1D wave -- 2D wave coupled PDE systems (2)-(5), in which the interface $\Gamma _{s}$ is the boundary of a convex polygonal domain $\Omega _{s}$ (and so each segment $\Gamma _{j}$ is a line segment). (Also, analogous to the present 3D setting, $\Omega _{f}$ is a Lipschitz domain with $\partial \Omega _{f}=\Gamma _{s}\cup \Gamma _{f}$, with $\overline{\Gamma }_{s}\cap \overline{\Gamma }_{f}=\emptyset $.)
\end{remark}

\begin{remark}
\label{remark_2} Inasmuch as we wish in the future to turn our attention to the numerical analysis and simulation of solutions of the multilayered PDE system (2)-(5), the boundary interface is taken here to be polyhedral, with each polygonal boundary segment $\Gamma _{j}$ having its own wave equation IC-BVP in the variable $h_{j}$. Alternatively, Theorems 1 and 2 also hold true in the case that the boundary interface $\Gamma _{s}$ is smooth: in this case, the \textquotedblleft thin\textquotedblright\ wave equation -- in solution variable $h$, say -- will have its spatial displacements described by the Laplace--Beltrami operator $\Delta ^{\prime }$. That is, for the multilayered FSI model on a smooth boundary interface $\Gamma _{s}$, the thin wave PDE component in (3) is replaced with
\[
h_{tt}-\Delta ^{\prime }h+h=\left. \frac{\partial w}{\partial \nu }\right\vert _{\Gamma _{s}}-\left. \frac{\partial u}{\partial \nu }\right\vert _{\Gamma _{s}}\text{ \ on }(0,T)\times \Gamma _{s}\text{, }
\]
with the matching velocity B.C.'s
\[
\left. w_{t}\right\vert _{\Gamma _{s}}=h_{t}=\left. u\right\vert _{\Gamma _{s}}\text{ \ on }(0,T)\times \Gamma _{s}.
\]
The heat and thick wave PDE components in (2) and (4), respectively, are unchanged. In addition, there are the initial conditions
\[
\lbrack u(0),h(0),h_{t}(0),w(0),w_{t}(0)]=[u_{0},h_{0},h_{1},w_{0},w_{1}]\in L^{2}(\Omega _{f})\times H^{1}(\Gamma _{s})\times L^{2}(\Gamma _{s})\times H^{1}(\Omega _{s})\times L^{2}(\Omega _{s}).
\]
Also, the initial conditions satisfy the compatibility condition $\left. w_{0}\right\vert _{\Gamma _{s}}=h_{0}$.
\end{remark}

\begin{remark}
\label{remark_3} In line with what is observed in [31] and [32], it seems possible -- at least for certain configurations of the polygonal segments $\Gamma _{j}$, $j=1,\ldots,K$ -- that the domain $D(\mathbf{A})$ of the multilayer FSI generator (as prescribed in (A.i)-(A.iv) above) manifests a regularity higher than that of finite energy; i.e., $D(\mathbf{A})\subset H^{1}(\Omega _{f})\times H^{1+\rho _{1}}(\Gamma _{1})\times H^{1}(\Gamma _{1})\times ...\times H^{1+\rho _{1}}(\Gamma _{K})\times H^{1}(\Gamma _{K})\times H^{1+\rho _{2}}(\Omega _{s})\times H^{1}(\Omega _{s})$, where the parameters $\rho _{1},\rho _{2}>0$. In the course of our future work -- e.g., an analysis of uniform decay properties of the FSI model (2)-(5) -- this higher regularity will be fleshed out. We should note that in the case of a smooth boundary interface $\Gamma _{s}$ (see Remark \ref{remark_2}), smoothness of the associated FSI semigroup generator domain comes directly from classical elliptic regularity. In dimension $n=2$ (see Remark \ref{remark_1}), smoothness of the semigroup generator domain can be inferred from the work of P. Grisvard; see e.g., \cite{Grisvard2}, Theorem 2.4.3 on p. 57, along with Remarks 2.4.5 and 2.4.6 therein.
\end{remark}

\section{Wellposedness--\textit{Proof of Theorem \ref{well}}}

This section is devoted to proving the Hadamard well-posedness of the coupled system given in (\ref{2a})-(\ref{IC}). Our proof hinges on an application of the Lumer--Phillips Theorem, which assures the existence of a $C_{0}$-semigroup of contractions $\left\{ e^{\mathbf{A}t}\right\} _{t\geq 0}$ once we establish that $\mathbf{A}$ is maximal dissipative.\\

\noindent \textbf{\textit{Proof of Theorem \ref{well}:}} In order to prove the maximal dissipativity of $\mathbf{A}$, we will proceed in a few steps: \\

\noindent\emph{\textbf{Step 1 (Dissipativity of }}$\mathbf{A}$\emph{)} Given data $\Phi _{0}$ as in (\ref{stat}), with $\Phi _{0}\in D(\mathbf{A})$, we have
\begin{eqnarray}
(\mathbf{A}\Phi_0 ,\Phi_0 )_{\mathbf{H}} &=&(\Delta u_{0},u_{0})_{\Omega _{f}}+\sum\limits_{j=1}^{K}(\nabla h_{1j},\nabla h_{0j})_{\Gamma _{j}}  \notag \\
&&+\sum\limits_{j=1}^{K}(h_{1j},h_{0j})_{\Gamma _{j}}+\sum\limits_{j=1}^{K}([\Delta -I]h_{0j},h_{1j})_{\Gamma _{j}}  \notag \\
&&+\sum\limits_{j=1}^{K}\left\langle \frac{\partial w_{0}}{\partial \nu },h_{1j}\right\rangle _{\Gamma _{j}}-\sum\limits_{j=1}^{K}\left\langle \frac{\partial u_{0}}{\partial \nu },h_{1j}\right\rangle _{\Gamma _{j}}  \notag \\
&&+(\nabla w_{1},\nabla w_{0})_{\Omega _{s}}+(\Delta w_{0},w_{1})_{\Omega _{s}}  \notag \\
&=&-(\nabla u_{0},\nabla u_{0})_{\Omega _{f}}+\left\langle \frac{\partial }{\partial \nu }u_{0},u_{0}\right\rangle _{\Gamma _{s}}  \notag \\
&&+\sum\limits_{j=1}^{K}(\nabla h_{1j},\nabla h_{0j})_{\Gamma _{j}}+\sum\limits_{j=1}^{K}(h_{1j},h_{0j})_{\Gamma _{j}}  \notag \\
&&-\sum\limits_{j=1}^{K}\overline{(\nabla h_{1j},\nabla h_{0j})}_{\Gamma _{j}}-\sum\limits_{j=1}^{K}\overline{(h_{1j},h_{0j})}_{\Gamma _{j}}+\sum\limits_{j=1}^{K}\left( \frac{\partial h_{0j}}{\partial n_{j}},h_{1j}\right) _{\partial \Gamma _{j}}  \notag \\
&&+\sum\limits_{j=1}^{K}\left\langle \frac{\partial w_{0}}{\partial \nu },h_{1j}\right\rangle _{\Gamma _{j}}-\sum\limits_{j=1}^{K}\left\langle \frac{\partial u_{0}}{\partial \nu },h_{1j}\right\rangle _{\Gamma _{j}}  \notag \\
&&+(\nabla w_{1},\nabla w_{0})_{\Omega _{s}}-\overline{(\nabla w_{1},\nabla w_{0})}_{\Omega _{s}}-\left\langle \frac{\partial w_{0}}{\partial \nu },w_{1}\right\rangle _{\Gamma _{s}}.
\label{ten}
\end{eqnarray}
(In the last expression, we are implicitly using the fact that the unit normal vector $\nu $ is \emph{interior} with respect to $\Gamma _{s}$.) Note now that via domain criterion (A.iv), we have for fixed index $j$, $1\leq j\leq K$,
\begin{equation*}
\left( \frac{\partial h_{0j}}{\partial n_{j}},h_{1j}\right) _{\partial \Gamma _{j}}=\sum\limits_{\substack{ 1\leq l\leq K \\ \partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset }}-\left( \frac{\partial h_{0l}}{\partial n_{l}},h_{1l}\right) _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}.
\end{equation*}
This relation then gives
\begin{equation}
\sum\limits_{j=1}^{K}\left( \frac{\partial h_{0j}}{\partial n_{j}},h_{1j}\right) _{\partial \Gamma _{j}}=0.  \label{10.5}
\end{equation}
Applying this relation and domain criterion (A.iii) to (\ref{ten}), we then have
\begin{equation}
\begin{array}{l}
(\mathbf{A}\Phi_0,\Phi_0 )_{\mathbf{H}}=-||\nabla u_{0}||_{_{\Omega _{f}}}^{2} \\
\text{ \ \ }+2i\sum\limits_{j=1}^{K}\text{Im}(\nabla h_{1j},\nabla h_{0j})_{\Gamma _{j}}+2i\sum\limits_{j=1}^{K}\text{Im}(h_{1j},h_{0j})_{\Gamma _{j}} \\
\text{ \ \ \ }+2i\text{Im}(\nabla w_{1},\nabla w_{0})_{\Omega _{s}},
\end{array}
\label{dissi}
\end{equation}
which gives
$$\text{Re}(\mathbf{A}\Phi_0 ,\Phi_0 )_{\mathbf{H}}\leq 0.$$ \\

\noindent\emph{\textbf{Step 2 (The Maximality of} }$\mathbf{A}$\emph{)} Given a parameter $\lambda >0$, suppose $\Phi =\left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] \in D(\mathbf{A})$ is a solution of the equation
\begin{equation}
(\lambda I -\mathbf{A})\Phi =\Phi ^{\ast },  \label{a1}
\end{equation}
where $\Phi ^{\ast }=\left[ u_{0}^{\ast },h_{01}^{\ast },h_{11}^{\ast },\ldots ,h_{0K}^{\ast },h_{1K}^{\ast },w_{0}^{\ast },w_{1}^{\ast }\right] \in \mathbf{H}$. Then in PDE terms, the abstract equation (\ref{a1}) becomes
\begin{equation}
\left\{
\begin{array}{l}
\lambda u_{0}-\Delta u_{0} =u_{0}^{\ast }\text{ \ \ in \ \ \ }\Omega _{f} \\
u_{0}|_{\Gamma _{f}} =0\text{ \ \ on \ \ }\Gamma _{f};
\end{array}
\right.  \label{p1}
\end{equation}
and for $1\leq j\leq K,$
\begin{equation}
\left\{
\begin{array}{l}
\lambda h_{0j}-h_{1j}=h_{0j}^{\ast }\text{ \ \ in \ \ }\Gamma _{j} \\
\lambda h_{1j}-\Delta h_{0j}+h_{0j}-\dfrac{\partial w_{0}}{\partial \nu }+\dfrac{\partial u_{0}}{\partial \nu }=h_{1j}^{\ast }\text{\ \ \ in \ \ }\Gamma _{j} \\
u_{0}|_{\Gamma _{j}}=h_{1j}=w_{1}|_{\Gamma _{j}}\text{\ \ \ in \ \ }\Gamma _{j} \\
h_{0j}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=h_{0l}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ }\partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all }1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset \\
\label{p2}
\left. \dfrac{\partial h_{0j}}{\partial n_{j}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=-\left. \dfrac{\partial h_{0l}}{\partial n_{l}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ }\partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all }1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset ;
\end{array}
\right.
\end{equation}
and also
\begin{equation}
\left\{
\begin{array}{l}
\lambda w_{0}-w_{1} =w_{0}^{\ast }\text{ \ \ in \ \ \ }\Omega _{s} \\
\lambda w_{1}-\Delta w_{0} =w_{1}^{\ast }\text{ \ \ in \ \ \ }\Omega _{s}.
\end{array}
\right.
\label{p3}
\end{equation}

\noindent With respect to this static PDE system, we multiply the heat equation in (\ref{p1}) by a test function $\varphi \in H_{\Gamma _{f}}^{1}(\Omega _{f})$, where
\begin{equation*}
H_{\Gamma _{f}}^{1}(\Omega _{f})=\left\{ \zeta \in H^{1}(\Omega _{f}):\zeta |_{\Gamma _{f}}=0\right\} .  \label{4}
\end{equation*}
Upon integrating and invoking Green's Theorem, the solution component $u_{0}$ satisfies the variational relation
\begin{equation}
\lambda (u_{0},\varphi )_{\Omega _{f}}+(\nabla u_{0},\nabla \varphi )_{\Omega _{f}}-\left\langle \frac{\partial u_{0}}{\partial \nu },\varphi \right\rangle _{\Gamma _{s}}=(u_{0}^{\ast },\varphi )_{\Omega _{f}}\text{ \ for }\varphi \in H_{\Gamma _{f}}^{1}(\Omega _{f})\text{.}  \label{5}
\end{equation}
In addition, we define the Hilbert space $\mathcal{V}$ by
\begin{eqnarray}
\mathcal{V} &=&\left\{ \left[ \psi _{1},...,\psi _{K}\right] \in H^{1}(\Gamma _{1})\times ...\times H^{1}(\Gamma _{K}):\right. \text{For\ all \ }1\leq j\leq K,  \notag \\
&&\left. \psi _{j}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=\psi _{l}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ }\partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all }1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset \right\}  \label{V}
\end{eqnarray}

\noindent Therewith, we multiply both sides of the $h_{0j}$-wave equation in (\ref{p2}) by the component $\psi _{j}$ of $\mathbf{\psi }\in \mathcal{V}$, for $1\leq j\leq K$. Upon integration we have, for $\mathbf{\psi }\in \mathcal{V}$,
\begin{equation*}
\left[
\begin{array}{c}
\lambda (h_{11},\psi _{1})_{\Gamma _{1}}-(\Delta h_{01},\psi _{1})_{\Gamma _{1}}+(h_{01},\psi _{1})_{\Gamma _{1}}-(\frac{\partial }{\partial \nu }w_{0},\psi _{1})_{\Gamma _{1}}+(\frac{\partial }{\partial \nu }u_{0},\psi _{1})_{\Gamma _{1}} \\
\vdots \\
\lambda (h_{1K},\psi _{K})_{\Gamma _{K}}-(\Delta h_{0K},\psi _{K})_{\Gamma _{K}}+(h_{0K},\psi _{K})_{\Gamma _{K}}-(\frac{\partial }{\partial \nu }w_{0},\psi _{K})_{\Gamma _{K}}+(\frac{\partial }{\partial \nu }u_{0},\psi _{K})_{\Gamma _{K}}
\end{array}
\right] = \left[
\begin{array}{c}
(h_{11}^{\ast },\psi _{1})_{\Gamma _{1}} \\
\vdots \\
(h_{1K}^{\ast },\psi _{K})_{\Gamma _{K}}
\end{array}
\right]
\end{equation*}
For each vector component, we subsequently integrate by parts while invoking the resolvent relations in (\ref{p2}) (and using the domain criterion (A.iv.b)). Summing up the components of the resulting vectors, we see that the solution component $\left[ h_{11},...,h_{1K}\right] \in \mathcal{V}$ of (\ref{a1}) satisfies
\begin{eqnarray}
&&\sum\limits_{j=1}^{K}\left[ \lambda (h_{1j},\psi _{j})_{\Gamma _{j}}+\frac{1}{\lambda }(\nabla h_{1j},\nabla \psi _{j})_{\Gamma _{j}}+\frac{1}{\lambda }(h_{1j},\psi _{j})_{\Gamma _{j}}+(\frac{\partial }{\partial \nu }u_{0}-\frac{\partial }{\partial \nu }w_{0},\psi _{j})_{\Gamma _{j}}\right]  \notag \\
&&\text{ \ \ \ \ \ }=\sum\limits_{j=1}^{K}\left[ (h_{1j}^{\ast },\psi _{j})_{\Gamma _{j}}-\frac{1}{\lambda }(h_{0j}^{\ast },\psi _{j})_{\Gamma _{j}}-\frac{1}{\lambda }(\nabla h_{0j}^{\ast },\nabla \psi _{j})_{\Gamma _{j}}\right] \text{, \ for }\mathbf{\psi }\in \mathcal{V}.
\label{6}
\end{eqnarray}

\noindent Moreover, multiplying both sides of the wave equation in (\ref{p3}) by $\xi \in H^{1}(\Omega _{s})$, and integrating by parts -- while using the resolvent relations in (\ref{p3}) -- we see that the solution component $w_{1}$ of (\ref{a1}) satisfies
\begin{equation}
\lambda (w_{1},\xi )_{\Omega _{s}}+\frac{1}{\lambda }(\nabla w_{1},\nabla \xi )_{\Omega _{s}}+(\frac{\partial }{\partial \nu }w_{0},\xi )_{\Gamma _{s}}=(w_{1}^{\ast },\xi )_{\Omega _{s}}-\frac{1}{\lambda }(\nabla w_{0}^{\ast },\nabla \xi )_{\Omega _{s}}\text{, for }\xi \in H^{1}(\Omega _{s}).  \label{7}
\end{equation}

\noindent Set now
$$\mathbf{W} \equiv \left\{ [\varphi ,\psi _{1},...,\psi _{K},\xi ]\in H_{\Gamma _{f}}^{1}(\Omega _{f})\times \mathcal{V}\times H^{1}(\Omega _{s}):\varphi |_{\Gamma _{j}}=\psi _{j}=\xi |_{\Gamma _{j}},\text{ for\ }1\leq j\leq K \right\};$$
\begin{equation}
\left\Vert \lbrack \varphi ,\psi _{1},...,\psi _{K},\xi ]\right\Vert _{\mathbf{W}}^{2} =\left\Vert \nabla \varphi \right\Vert _{\Omega _{f}}^{2}+\sum\limits_{j=1}^{K}\left[ \left\Vert \nabla \psi _{j}\right\Vert _{\Gamma _{j}}^{2}+\left\Vert \psi _{j}\right\Vert _{\Gamma _{j}}^{2}\right] +\left\Vert \nabla \xi\right\Vert _{\Omega _{s}}^{2}.  \label{W}
\end{equation}

\noindent With respect to this Hilbert space, we have the following conclusion, upon adding (\ref{5}), (\ref{6}) and (\ref{7}): if $\Phi =\left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] \in D(\mathbf{A})$ solves (\ref{a1}), then necessarily its solution components $\left[ u_{0},h_{11},\ldots ,h_{1K},w_{1}\right] \in \mathbf{W}$ satisfy, for $\left[ \varphi ,\mathbf{\psi },\xi \right] \in \mathbf{W}$,
\begin{equation}
\begin{array}{l}
\lambda (u_{0},\varphi )_{\Omega _{f}}+(\nabla u_{0},\nabla \varphi )_{\Omega _{f}}+\lambda (w_{1},\xi )_{\Omega _{s}}+\frac{1}{\lambda }(\nabla w_{1},\nabla \xi )_{\Omega _{s}} \\
\text{ \ }+\sum\limits_{j=1}^{K}\left[ \lambda (h_{1j},\psi _{j})_{\Gamma _{j}}+\frac{1}{\lambda }(\nabla h_{1j},\nabla \psi _{j})_{\Gamma _{j}}+\frac{1}{\lambda }(h_{1j},\psi _{j})_{\Gamma _{j}}\right] =\mathbf{F}_{\lambda }\left( \left[
\begin{array}{c}
\varphi \\
\mathbf{\psi } \\
\xi
\end{array}
\right] \right) ;
\end{array}
\label{8}
\end{equation}
where
\begin{equation}
\mathbf{F}_{\lambda }\left( \left[
\begin{array}{c}
\varphi \\
\mathbf{\psi } \\
\xi
\end{array}
\right] \right) =(u_{0}^{\ast },\varphi )_{\Omega _{f}}+\sum\limits_{j=1}^{K}\left[ (h_{1j}^{\ast },\psi _{j})_{\Gamma _{j}}-\frac{1}{\lambda }(h_{0j}^{\ast },\psi _{j})_{\Gamma _{j}}-\frac{1}{\lambda }(\nabla h_{0j}^{\ast },\nabla \psi _{j})_{\Gamma _{j}}\right] +(w_{1}^{\ast },\xi )_{\Omega _{s}}-\frac{1}{\lambda }(\nabla w_{0}^{\ast },\nabla \xi )_{\Omega _{s}}.
\label{F}
\end{equation}

\noindent In sum, in order to recover the solution $\Phi =\left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] \in D(\mathbf{A})$ to (\ref{a1}), one can straightaway apply the Lax-Milgram Theorem to the operator $\mathbf{B}\in \mathcal{L}(\mathbf{W},\mathbf{W}^{\ast })$, given by
\begin{eqnarray}
&&\left\langle \mathbf{B}\left[
\begin{array}{c}
\varphi \\
\psi _{1} \\
\vdots \\
\psi _{K} \\
\xi
\end{array}
\right] ,\left[
\begin{array}{c}
\widetilde{\varphi } \\
\widetilde{\psi }_{1} \\
\vdots \\
\widetilde{\psi }_{K} \\
\widetilde{\xi }
\end{array}
\right] \right\rangle _{\mathbf{W}^{\ast }\times \mathbf{W}} = \lambda (\varphi ,\widetilde{\varphi })_{\Omega _{f}}+(\nabla \varphi ,\nabla \widetilde{\varphi })_{\Omega _{f}}+\lambda (\xi ,\widetilde{\xi })_{\Omega _{s}}+\frac{1}{\lambda }(\nabla \xi ,\nabla \widetilde{\xi })_{\Omega _{s}}  \notag
\end{eqnarray}
$$ +\sum\limits_{j=1}^{K}\left[ \lambda (\psi _{j},\widetilde{\psi }_{j})_{\Gamma _{j}}+\frac{1}{\lambda }(\nabla \psi _{j},\nabla \widetilde{\psi }_{j})_{\Gamma _{j}}+\frac{1}{\lambda }(\psi _{j},\widetilde{\psi }_{j})_{\Gamma _{j}}\right] . \label{8.5} $$
It is clear that $\mathbf{B}\in \mathcal{L}(\mathbf{W},\mathbf{W}^{\ast })$ is $\mathbf{W}$-elliptic; so by the Lax-Milgram Theorem, the equation (\ref{8}) has a unique solution
\begin{equation}
\left[ u_{0},h_{11},\ldots ,h_{1K},w_{1}\right] \in \mathbf{W}.  \label{9}
\end{equation}
Subsequently, we set
\begin{equation}
\left\{
\begin{array}{l}
h_{0j}=\frac{h_{1j}+h_{0j}^{\ast }}{\lambda }\text{,\ for }1\leq j\leq K, \\
w_{0}=\frac{w_{1}+w_{0}^{\ast }}{\lambda }.
\end{array}
\right.  \label{10}
\end{equation}
In particular, since the data $\left[ u_{0}^{\ast },h_{01}^{\ast },h_{11}^{\ast },\ldots ,h_{0K}^{\ast },h_{1K}^{\ast },w_{0}^{\ast },w_{1}^{\ast }\right] \in \mathbf{H},$ the relations in (\ref{10}) give that
\begin{equation}
w_{0}|_{\Gamma _{j}}=h_{0j},\text{ \ \ }1\leq j\leq K.  \label{18}
\end{equation}

\noindent We further show that the dependent variable $\Phi =\left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] ,$ given by the solution of (\ref{8}) and (\ref{10}), is an element of $D(\mathbf{A})$: If we take $[\varphi ,0,\ldots ,0,0]\in \mathbf{W}$ in (\ref{8}), where $\varphi \in \mathcal{D}(\Omega _{f})$, then we have
\begin{equation*}
\lambda (u_{0},\varphi )_{\Omega _{f}}-(\Delta u_{0},\varphi )_{\Omega _{f}}=(u_{0}^{\ast },\varphi )_{\Omega _{f}}\text{\ \ \ }\forall \text{ }\varphi \in \mathcal{D}(\Omega _{f}),
\end{equation*}
whence
\begin{equation}
\lambda u_{0}-\Delta u_{0}=u_{0}^{\ast }\text{ in }L^{2}(\Omega _{f}).  \label{11}
\end{equation}
Subsequently, the fact that $\left\{ \Delta u_{0},u_{0}\right\} \in L^{2}(\Omega _{f})\times H^{1}(\Omega _{f})$ gives
\begin{equation}
\frac{\partial u_{0}}{\partial \nu }|_{\Gamma _{s}}\in H^{-\frac{1}{2}}(\Gamma _{s}).  \label{12}
\end{equation}
In turn, using the relations in (\ref{10}), if we take $[0,0,\ldots ,0,\xi ]\in \mathbf{W}$, where $\xi \in \mathcal{D}(\Omega _{s})$, then upon integrating by parts, we have
\begin{equation*}
\lambda (w_{1},\xi )_{\Omega _{s}}-(\Delta w_{0},\xi )_{\Omega _{s}}=(w_{1}^{\ast },\xi )_{\Omega _{s}}\text{ \ \ \ }\forall \text{ }\xi \in \mathcal{D}(\Omega _{s}),
\end{equation*}
and so
\begin{equation}
\lambda w_{1}-\Delta w_{0}=w_{1}^{\ast }\text{ \ \ in \ }L^{2}(\Omega _{s}),  \label{13}
\end{equation}
which gives that $\left\{ \Delta w_{0},w_{0}\right\} \in L^{2}(\Omega _{s})\times H^{1}(\Omega _{s})$.
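For clarity, we recall the standard weak definition of the normal trace which underlies (\ref{12}) above, as well as the analogous statements (\ref{14}) and (\ref{19}) below; the formulation is stated for a generic bounded Lipschitz domain $D$ with outward unit normal $n$, and the signs are to be adjusted in the obvious way when, as in the present paper, the normal $\nu $ is taken with the opposite orientation on a portion of the boundary. Namely, if $\psi \in H^{1}(D)$ satisfies $\Delta \psi \in L^{2}(D)$, then $\left. \frac{\partial \psi }{\partial n}\right\vert _{\partial D}$ is well defined as an element of $H^{-\frac{1}{2}}(\partial D)$ via the relation
\begin{equation*}
\left\langle \frac{\partial \psi }{\partial n},\gamma (\zeta )\right\rangle _{\partial D}=(\Delta \psi ,\zeta )_{D}+(\nabla \psi ,\nabla \zeta )_{D}\text{ \ for all }\zeta \in H^{1}(D),
\end{equation*}
with the accompanying estimate $\left\Vert \frac{\partial \psi }{\partial n}\right\Vert _{H^{-\frac{1}{2}}(\partial D)}\leq C\left[ \left\Vert \Delta \psi \right\Vert _{D}+\left\Vert \psi \right\Vert _{H^{1}(D)}\right] $.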
A subsequent integration by parts yields that
\begin{equation}
\frac{\partial w_{0}}{\partial \nu }|_{\Gamma _{s}}\in H^{-\frac{1}{2}}(\Gamma _{s}).  \label{14}
\end{equation}

\noindent Moreover, let $\gamma _{s}^{+}\in \mathcal{L}(H^{\frac{1}{2}}(\Gamma _{s}),H^{1}(\Omega _{s}))$ be a continuous right inverse of the Sobolev trace map $\gamma _{s}\in \mathcal{L}(H^{1}(\Omega _{s}),H^{\frac{1}{2}}(\Gamma _{s}))$; viz.,
\begin{equation*}
\gamma _{s}(f)=f|_{\Gamma _{s}}\text{ \ for }f\in C^{\infty }(\overline{\Omega _{s}}).
\end{equation*}
Likewise, let $\gamma _{f}^{+}\in \mathcal{L}(H^{\frac{1}{2}}(\Gamma _{s}),H_{\Gamma _{f}}^{1}(\Omega _{f}))$ denote a continuous right inverse of the Sobolev trace map \newline $\gamma _{f}\in \mathcal{L}(H_{\Gamma _{f}}^{1}(\Omega _{f}),H^{\frac{1}{2}}(\Gamma _{s})).$ Also, for given $\psi _{j}\in H_{0}^{1}(\Gamma _{j}),$ \ $1\leq j\leq K,$ let
\begin{equation}
\left( \psi _{j}\right) _{ext}(x)\equiv \left\{
\begin{array}{l}
\psi _{j},\text{ \ \ }x\in \Gamma _{j} \\
0,\text{ \ \ }x\in \Gamma _{s}\backslash \Gamma _{j}.
\end{array}
\right.  \label{15}
\end{equation}
Then $\left( \psi _{j}\right) _{ext}\in H^{\frac{1}{2}}(\Gamma _{s})$ \ for all $1\leq j\leq K$. We now specify the test function $[\varphi ,\psi _{1},...,\psi _{K},\xi ]\in \mathbf{W}$ in (\ref{8}): namely, $\psi _{j}\in H_{0}^{1}(\Gamma _{j}),$ \ $1\leq j\leq K$, and
\begin{equation}
\varphi \equiv \gamma _{f}^{+}\left[ \sum\limits_{j=1}^{K}\left( \psi _{j}\right) _{ext}\right] ,\text{ \ \ \ \ \ \ \ \ }\xi \equiv \gamma _{s}^{+}\left[ \sum\limits_{j=1}^{K}\left( \psi _{j}\right) _{ext}\right] .  \label{16}
\end{equation}
Therewith we have, verbatim from (\ref{8}),
\begin{eqnarray*}
&&\lambda (u_{0},\varphi )_{\Omega _{f}}+(\nabla u_{0},\nabla \varphi )_{\Omega _{f}} \\
&&+\sum\limits_{j=1}^{K}\left[ \lambda (h_{1j},\psi _{j})_{\Gamma _{j}}+\frac{1}{\lambda }(\nabla h_{1j},\nabla \psi _{j})_{\Gamma _{j}}+\frac{1}{\lambda }(h_{1j},\psi _{j})_{\Gamma _{j}}\right] \\
&&+\lambda (w_{1},\xi )_{\Omega _{s}}+\frac{1}{\lambda }(\nabla w_{1},\nabla \xi )_{\Omega _{s}} \\
&=&(u_{0}^{\ast },\varphi )_{\Omega _{f}}+\sum\limits_{j=1}^{K}\left[ (h_{1j}^{\ast },\psi _{j})_{\Gamma _{j}}-\frac{1}{\lambda }(\nabla h_{0j}^{\ast },\nabla \psi _{j})_{\Gamma _{j}}-\frac{1}{\lambda }(h_{0j}^{\ast },\psi _{j})_{\Gamma _{j}}\right] \\
&&+(w_{1}^{\ast },\xi )_{\Omega _{s}}-\frac{1}{\lambda }(\nabla w_{0}^{\ast },\nabla \xi )_{\Omega _{s}}.
\end{eqnarray*}
Upon integrating by parts, and invoking the relations in (\ref{10}), as well as (\ref{11})-(\ref{14}), we get
\begin{equation}
\left\langle \frac{\partial u_{0}}{\partial \nu },\varphi \right\rangle _{\Gamma _{s}}+\sum\limits_{j=1}^{K}\left[ \lambda (h_{1j},\psi _{j})_{\Gamma _{j}}-(\Delta h_{0j},\psi _{j})_{\Gamma _{j}}+(h_{0j},\psi _{j})_{\Gamma _{j}}\right] -\left\langle \frac{\partial w_{0}}{\partial \nu },\xi \right\rangle _{\Gamma _{s}}=\sum\limits_{j=1}^{K}(h_{1j}^{\ast },\psi _{j})_{\Gamma _{j}}.  \label{inter}
\end{equation}
Since each test function component $\psi _{j}\in H_{0}^{1}(\Gamma _{j})$ is arbitrary, we then deduce from this relation and (\ref{15})-(\ref{16}) that each $h_{0j}$ solves
\begin{equation}
\lambda h_{1j}-\Delta h_{0j}+h_{0j}-\frac{\partial w_{0}}{\partial \nu }+\frac{\partial u_{0}}{\partial \nu }=h_{1j}^{\ast }\text{ \ \ in \ }\Gamma _{j},\text{ \ \ }1\leq j\leq K.
\label{17} \end{equation} \noindent In addition, we have from (\ref{17}), (\ref{9}), (\ref{12}), and (\ref{14}) that $\left\{ \Delta h_{0j},h_{0j}\right\} \in \lbrack H^{1}(\Gamma _{j})]^{\prime }\times H^{1}(\Gamma _{j})$, for $1\leq j\leq K$. Consequently, an integration by parts gives that \begin{equation} \frac{\partial h_{0j}}{\partial n_{j}}\in H^{-\frac{1}{2}}(\partial \Gamma _{j})\text{, \ for }1\leq j\leq K. \label{19} \end{equation} \noindent Finally: Let given indices $j^{\ast },l^{\ast }$, $1\leq j^{\ast },l^{\ast }\leq K$, satisfy $\partial \Gamma _{j^{\ast }}\cap $ $\partial \Gamma _{l^{\ast }}\neq \emptyset $. Let $g$ be a given element in $H_{0}^{\frac{1}{ 2}+\epsilon }(\partial \Gamma _{j^{\ast }}\cap $ $\partial \Gamma _{l^{\ast }})$. Then one has that $\tilde{g}_{j^{\ast }}\in H^{\frac{1}{2}+\epsilon }(\partial \Gamma _{j^{\ast }})$ and $\tilde{g}_{l^{\ast }}\in H^{\frac{1}{2} +\epsilon }(\partial \Gamma _{l^{\ast }})$, where \begin{equation*} \tilde{g}_{j^{\ast }}(x)\equiv\left\{ \begin{array}{l} g(x)\text{, }x\in \partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }} \\ 0\text{, }x\in \partial \Gamma _{j^{\ast }}\backslash \left( \partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}\right) ; \end{array} \right. \text{ \ \ }\tilde{g}_{l^{\ast }}(x)\equiv\left\{ \begin{array}{l} g(x)\text{, }x\in \partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }} \\ 0\text{, }x\in \partial \Gamma _{l^{\ast }}\backslash \left( \partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}\right) \end{array} \right. \end{equation*} (see e.g., Theorem 3.33, p. 95 of \cite{mclean}). Subsequently, by the (limited) surjectivity of the Sobolev Trace Map on Lipschitz domains-- see e.g., Theorem 3.38, p.102 of \cite{mclean} -- there exists $\psi _{j^{\ast }}\in H^{1+\epsilon }(\Gamma _{j^{\ast }})$ and $\psi _{l^{\ast }}\in H^{1+\epsilon }(\Gamma _{l^{\ast }})$ such that \begin{equation} \left. \psi _{j^{\ast }}\right\vert _{\partial \Gamma _{j^{\ast }}}=\tilde{g} _{j^{\ast }}\text{, \ and \ }\left. \psi _{l^{\ast }}\right\vert _{\partial \Gamma _{l^{\ast }}}=\tilde{g}_{l^{\ast }}. \label{g} \end{equation} In turn, by the Sobolev Embedding Theorem, if we define, on $\overline{ \Gamma }_{s}$ the function \begin{equation} \Upsilon (x)\equiv \left\{ \begin{array}{l} \psi _{j^{\ast }}(x)\text{, \ for }x\in \overline{\Gamma} _{j^{\ast }} \\ \psi _{l^{\ast }}(x)\text{, \ for }x\in \overline{\Gamma }_{l^{\ast }} \\ 0\text{, \ for }x\in \overline{\Gamma }_{s}\backslash \left( \overline{\Gamma} _{j^{\ast }}\cup \overline{\Gamma} _{l^{\ast }}\right) , \end{array} \right. \label{sig} \end{equation} then $\Upsilon (x)\in C(\overline{\Gamma }_{s})$. Since also $\psi _{j^{\ast }}\in H^{1}(\Gamma _{j^{\ast }})$ and $\psi _{l^{\ast }}\in H^{1}(\Gamma _{l^{\ast }})$, we eventually deduce via an integration by parts that $ \Upsilon \in H^{1}(\Gamma _{s})$. (See e.g., the proof of Theorem 2, p. 36 of \cite{ciarlet}.) With this $H^{1}$-function in hand, and with aforesaid continuous right inverses $\gamma _{s}^{+}\in \mathcal{L}(H^{\frac{1}{2} }(\Gamma _{s}),H^{1}(\Omega _{s}))$ and $\gamma _{f}^{+}\in \mathcal{L}(H^{ \frac{1}{2}}(\Gamma _{s}),H_{\Gamma _{f}}^{1}(\Omega _{f}))$, we specify the vector \begin{equation} \left[ \varphi ,\mathbf{\psi },\xi \right] \equiv\left[ \gamma _{f}^{+}(\Upsilon ),0,...,\psi _{j^{\ast }},0,...0,\psi _{l^{\ast }},...,0,\gamma _{s}^{+}(\Upsilon )\right] \in \mathbf{W}\text{,} \label{vec} \end{equation} where again, space $\mathbf{W}$ is given in (\ref{W}). 
With this vector in hand, we consider the thin wave equation in (\ref{17}): With respect to the two fixed indices $1\leq j^{\ast },l^{\ast }\leq K$, we have via (\ref{17}) \begin{equation*} \begin{array}{l} \lambda \left( h_{1j^{\ast }},\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}-\left( \Delta h_{0j^{\ast }},\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\left( h_{0j^{\ast }},\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}} \\ -\left( \frac{\partial w_{0}}{\partial \nu }-\frac{\partial u_{0}}{ \partial \nu },\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}} +\lambda \left( h_{1l^{\ast }},\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}-\left( \Delta h_{0l^{\ast }},\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}} \\ +\left( h_{0l^{\ast }},\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}-\left( \frac{\partial w_{0}}{\partial \nu }-\frac{\partial u_{0}}{ \partial \nu },\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}=\left( h_{1j^{\ast }}^{\ast },\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\left( h_{1l^{\ast }}^{\ast },\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}. \end{array} \end{equation*} A subsequent integration by parts, with (\ref{vec}) in mind, subsequently yields \begin{equation*} \begin{array}{l} \lambda \left( h_{1j^{\ast }},\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\left( \nabla h_{0j^{\ast }},\nabla \psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}-\left\langle \frac{\partial h_{0j^{\ast }}}{\partial n_{j^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}} +\left( h_{0j^{\ast }}, \psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}\\ \text{ \ }+\lambda \left( h_{1l^{\ast }},\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}+\left( \nabla h_{0l^{\ast }},\nabla \psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}-\left\langle \frac{\partial h_{0l^{\ast }}}{\partial n_{l^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}+\left( h_{0l^{\ast }},\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}} \\ \text{ \ }+\left( \nabla w_{0},\nabla \xi \right) _{\Omega _{s}}+\left( \Delta w_{0},\xi \right) _{\Omega _{s}}+\left( \nabla u_{0},\nabla \varphi \right) _{\Omega _{f}}+\left( \Delta u_{0},\varphi \right) _{\Omega _{f}}=\left( h_{1j^{\ast }}^{\ast },\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\left( h_{1l^{\ast }}^{\ast },\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}. 
\end{array}
\end{equation*}
Invoking (\ref{11}) and (\ref{13}), we then have
\begin{equation*}
\begin{array}{l}
-\left\langle \frac{\partial h_{0j^{\ast }}}{\partial n_{j^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}-\left\langle \frac{\partial h_{0l^{\ast }}}{\partial n_{l^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}} +\left( h_{0j^{\ast }}, \psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\left( h_{0l^{\ast }},\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}\\
+\lambda \left( h_{1j^{\ast }},\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\left( \nabla h_{0j^{\ast }},\nabla \psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\lambda \left( h_{1l^{\ast }},\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}+\left( \nabla h_{0l^{\ast }},\nabla \psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}} \\
+\left( \nabla w_{0},\nabla \xi \right) _{\Omega _{s}}+\lambda \left( w_{1},\xi \right) _{\Omega _{s}}-\left( w_{1}^{\ast },\xi \right) _{\Omega _{s}}+\left( \nabla u_{0},\nabla \varphi \right) _{\Omega _{f}}+\lambda \left( u_{0},\varphi \right) _{\Omega _{f}}-\left( u_{0}^{\ast },\varphi \right) _{\Omega _{f}} \\
=\left( h_{1j^{\ast }}^{\ast },\psi _{j^{\ast }}\right) _{\Gamma _{j^{\ast }}}+\left( h_{1l^{\ast }}^{\ast },\psi _{l^{\ast }}\right) _{\Gamma _{l^{\ast }}}.
\end{array}
\end{equation*}
Invoking the relations in (\ref{10}) and the variational equation (\ref{8}), which is satisfied by $\left[ u_{0},h_{11},\ldots ,h_{1K},w_{1}\right] $\\ (where again the vector $\left[ \varphi ,\mathbf{\psi },\xi \right] $ is given by (\ref{vec})), we have the relation
\begin{equation*}
\left\langle \frac{\partial h_{0j^{\ast }}}{\partial n_{j^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}=-\left\langle \frac{\partial h_{0l^{\ast }}}{\partial n_{l^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}\text{, \ for all }g\in H_{0}^{\frac{1}{2}+\epsilon }(\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }})\text{. }
\end{equation*}
Since $H_{0}^{\frac{1}{2}+\epsilon }(\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }})$ is dense in $H^{\frac{1}{2}}(\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }})$, we now deduce that
\begin{equation}
\frac{\partial h_{0j^{\ast }}}{\partial n_{j^{\ast }}}=-\frac{\partial h_{0l^{\ast }}}{\partial n_{l^{\ast }}}\text{, \ for }\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}\neq \emptyset .  \label{last}
\end{equation}
Collecting (\ref{9})-(\ref{14}), (\ref{17}), (\ref{19}) and (\ref{last}), we have that the obtained variable
\begin{equation*}
\left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] \in D(\mathbf{A})
\end{equation*}
solves the resolvent equation (\ref{a1}). This concludes the proof of Theorem \ref{well}, upon application of the Lumer-Phillips Theorem.

\section{Strong Stability--\textit{Proof of Theorem \ref{SS}}}

In this section, our main aim is to address the asymptotic behavior of the solution, as stated in Section 2. In this regard, we show that the system given in (\ref{2a})-(\ref{IC}) is strongly stable. Our proof will be independent of the compactness or noncompactness of the resolvent of $\mathbf{A}$ (see Remark \ref{remark_3}).
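Before turning to the proof, and purely for the reader's orientation, we recall the standard decomposition of the spectrum into its three parts,
\begin{equation*}
\sigma (\mathbf{A})=\sigma _{p}(\mathbf{A})\cup \sigma _{c}(\mathbf{A})\cup \sigma _{r}(\mathbf{A}),
\end{equation*}
where $\sigma _{p}(\mathbf{A})$, $\sigma _{c}(\mathbf{A})$ and $\sigma _{r}(\mathbf{A})$ denote the point, continuous and residual spectra, respectively. Moreover, since $\mathbf{A}$ is densely defined on the Hilbert space $\mathbf{H}$ (being the generator of a $C_{0}$-semigroup, by Theorem \ref{well}), any $\lambda \in \sigma _{r}(\mathbf{A})$ satisfies $\overline{\lambda }\in \sigma _{p}(\mathbf{A}^{\ast })$; this standard observation is one reason for computing the adjoint $\mathbf{A}^{\ast }$ in Proposition \ref{adj} below.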
The proof will hinge on an ultimate appeal to the following well-known result:

\begin{theorem}
\label{AB} (\cite{A-B}) Let $\{T(t)\}_{t\geq 0}$ be a bounded $C_{0}$-semigroup on a reflexive Banach space $X$, with generator $\mathbf{A}$. Assume that $\sigma_p(\mathbf{A})\cap i\mathbb{R}=\emptyset$, where $\sigma_p(\mathbf{A})$ is the point spectrum of $\mathbf{A}$. If $\sigma(\mathbf{A})\cap i\mathbb{R}$ is countable, then $\{T(t)\}_{t\geq 0}$ is strongly stable.
\end{theorem}

The application of this theorem entails the elimination of all three parts of the spectrum of the generator $\mathbf{A}$ from the imaginary axis. For this, we will give the necessary analysis of the spectrum in the following subsection.

\subsection{Spectral Analysis of the generator $\mathbf{A}$}

Since we wish to satisfy the conditions of Theorem \ref{AB}, we will prove that $\sigma(\mathbf{A})\cap i\mathbb{R}=\emptyset$, which is equivalent to showing that
$$ i\mathbb{R}\subset \rho (\mathbf{A}).$$
To do this, we start with the following Proposition:

\begin{proposition}
\label{invert} With the generator $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H}\rightarrow \mathbf{H}$ given in (\ref{4a})-(\ref{dom}), the point $0\in \rho (\mathbf{A})$. That is, $\mathbf{A}$ is boundedly invertible.
\end{proposition}

\begin{proof}
Given $\Phi ^{\ast }=\left[ u_{0}^{\ast },h_{01}^{\ast },h_{11}^{\ast },\ldots ,h_{0K}^{\ast },h_{1K}^{\ast },w_{0}^{\ast },w_{1}^{\ast }\right] \in \mathbf{H},$ we take up the task of finding $\Phi =\left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] \in D(\mathbf{A})$ which solves
\begin{equation}
\mathbf{A}\Phi =\Phi ^{\ast },  \label{43}
\end{equation}
or
\begin{equation}
\left[
\begin{array}{c}
\Delta u_{0} \\
h_{11} \\
-\frac{\partial u_{0}}{\partial \nu }|_{\Gamma _{1}}+(\Delta -I)h_{01}+\frac{\partial w_{0}}{\partial \nu }|_{\Gamma _{1}} \\
\vdots \\
h_{1K} \\
-\frac{\partial u_{0}}{\partial \nu }|_{\Gamma _{K}}+(\Delta -I)h_{0K}+\frac{\partial w_{0}}{\partial \nu }|_{\Gamma _{K}} \\
w_{1} \\
\Delta w_{0}
\end{array}
\right] =\left[
\begin{array}{c}
u_{0}^{\ast } \\
h_{01}^{\ast } \\
h_{11}^{\ast } \\
\vdots \\
h_{0K}^{\ast } \\
h_{1K}^{\ast } \\
w_{0}^{\ast } \\
w_{1}^{\ast }
\end{array}
\right] .  \label{44}
\end{equation}
From the thin and thick wave components of this equation we see that
\begin{equation}
w_{1}=w_{0}^{\ast }\in H^{1}(\Omega _{s})  \label{45}
\end{equation}
\begin{equation}
h_{1j}=h_{0j}^{\ast }\in H^{1}(\Gamma _{j}),\text{ \ \ \ for \ }1\leq j\leq K.  \label{46}
\end{equation}
Moreover, from the heat component of (\ref{44}), together with (\ref{45}) and the domain criterion (A.iii), we have that the solution component $u_{0}$ must satisfy the following BVP:
\begin{equation}
\left\{
\begin{array}{c}
\Delta u_{0}=u_{0}^{\ast }\text{ \ \ \ \ in \ }\Omega _{f} \\
u_{0}|_{\Gamma _{f}}=0 \\
u_{0}|_{\Gamma _{s}}=w_{0}^{\ast }|_{\Gamma _{s}}
\end{array}
\right.  \label{47}
\end{equation}
Solving this BVP, and estimating its solution, in part by the Sobolev Trace Theorem, we have
\begin{equation}
\left\Vert u_{0}\right\Vert _{H_{\Gamma _{f}}^{1}(\Omega _{f})}+\left\Vert \Delta u_{0}\right\Vert _{\Omega _{f}}\leq C\left[ \left\Vert u_{0}^{\ast }\right\Vert _{\Omega _{f}}+\left\Vert w_{0}^{\ast }\right\Vert _{H^{1}(\Omega _{s})}\right] .
\label{48}
\end{equation}
In turn, the use of this estimate in an integration by parts gives
\begin{equation}
\left\Vert \frac{\partial u_{0}}{\partial \nu }\right\Vert _{H^{-\frac{1}{2}}(\partial \Omega _{f})}\leq C\left[ \left\Vert u_{0}^{\ast }\right\Vert _{\Omega _{f}}+\left\Vert w_{0}^{\ast }\right\Vert _{H^{1}(\Omega _{s})}\right].  \label{49}
\end{equation}
In addition, with the space $\mathcal{V}$ as in (\ref{V}), we set
\begin{equation}
\chi \equiv \left\{ \left[ \psi ,\xi \right] \in \mathcal{V}\times H^{1}(\Omega _{s}):\psi _{j}=\xi |_{\Gamma _{j}}\text{ \ \ for \ }1\leq j\leq K\text{\ }\right\} .  \label{50}
\end{equation}
With this space in hand, and with the thin-wave and thick-wave components of equation (\ref{44}) in mind, we consider the variational relation
\begin{eqnarray}
&&(\nabla w_{0},\nabla \xi )_{\Omega _{s}}  \notag \\
&&+\sum\limits_{j=1}^{K}\left[ (\nabla h_{0j},\nabla \psi _{j})_{\Gamma _{j}}+(h_{0j},\psi _{j})_{\Gamma _{j}}\right]  \notag \\
&=&-(w_{1}^{\ast },\xi )_{\Omega _{s}}  \notag \\
&&-\sum\limits_{j=1}^{K}\left[ (h_{1j}^{\ast },\psi _{j})_{\Gamma _{j}}+(\frac{\partial u_{0}}{\partial \nu },\psi _{j})_{\Gamma _{j}}\right],  \label{51}
\end{eqnarray}
for every $\left[ \psi ,\xi \right] \in \chi $, where the boundary term involving $\frac{\partial u_{0}}{\partial \nu }|_{\Gamma _{s}}$ is well defined by (\ref{49}). Since the bilinear form $b(\cdot ,\cdot )$ on $\chi \times \chi $, given by
\begin{equation}
b(\left[ \psi ,\xi \right] ,\left[ \widetilde{\psi },\widetilde{\xi }\right] )=(\nabla \xi ,\nabla \widetilde{\xi })_{\Omega _{s}}+\sum\limits_{j=1}^{K}\left[ (\nabla \psi _{j},\nabla \widetilde{\psi }_{j})_{\Gamma _{j}}+(\psi _{j},\widetilde{\psi }_{j})_{\Gamma _{j}}\right]  \label{52}
\end{equation}
for every $\left[ \psi ,\xi \right] ,\left[ \widetilde{\psi },\widetilde{\xi }\right] \in \chi ,$ is continuous and $\chi $-elliptic, by the Lax-Milgram Theorem there exists a unique solution
\begin{equation}
\phi =\left[ (h_{01},h_{02},\ldots ,h_{0K}),w_{0}\right] \in \chi  \label{53}
\end{equation}
to the variational relation (\ref{51}). We now show that the obtained $\left[ u_{0},[h_{01},h_{11},\ldots ,h_{0K},h_{1K}],w_{0},w_{1}\right] \in \mathbf{H}$ is in $D(\mathbf{A})$ and satisfies the equation (\ref{44}).\newline Proceeding very much as we did in the proof of Theorem \ref{well}, we take in (\ref{51})
\begin{equation*}
\left[ \psi ,\xi \right] =\left[ \left[ 0,0,...,0\right] ,\xi \right] ,
\end{equation*}
where $\xi \in \mathcal{D}(\Omega _{s}).$ This gives
\begin{equation*}
(\nabla w_{0},\nabla \xi )_{\Omega _{s}}=-(w_{1}^{\ast },\xi )_{\Omega _{s}},
\end{equation*}
whence we obtain
\begin{equation}
-\Delta w_{0}=-w_{1}^{\ast }\text{ \ \ \ in \ \ }\Omega _{s},  \label{54}
\end{equation}
with
\begin{eqnarray}
\left\Vert \Delta w_{0}\right\Vert _{\Omega _{s}}+\left\Vert \frac{\partial w_{0}}{\partial \nu }\right\Vert _{H^{-\frac{1}{2}}(\Gamma _{s})} &\leq &C\left[ \left\Vert w_{1}^{\ast }\right\Vert _{\Omega _{s}}+\left\Vert w_{0}\right\Vert _{H^{1}(\Omega _{s})}\right]  \notag \\
&\leq &C\left\Vert \left[ u_{0}^{\ast },[h_{01}^{\ast },h_{11}^{\ast },\ldots ,h_{0K}^{\ast },h_{1K}^{\ast }],w_{0}^{\ast },w_{1}^{\ast }\right] \right\Vert _{\mathbf{H}},  \label{55}
\end{eqnarray}
after using (\ref{53}).
In turn, using aforesaid right continuous inverse $ \gamma _{s}^{+}\in \mathcal{L}(H^{\frac{1}{2}}(\Gamma _{s}),H^{1}(\Omega _{s})),$ let in (\ref{51}), test function \begin{equation*} \left[ \psi ,\xi \right] =\left[ \left[ (\psi _{1})_{ext},...,(\psi _{K})_{ext}\right] ,\gamma _{s}^{+}\left( \sum\limits_{j=1}^{K}\left( \psi _{j}\right) _{ext}\right) \right] \in \chi , \end{equation*} where each $\psi _{j}\in H_{0}^{1}(\Gamma _{j})$ $(1\leq j\leq K),$ and each $\left( \psi _{j}\right) _{ext}$ is as in (\ref{15}). Applying this function to (\ref{51}), integrating by parts and invoking (\ref{54}), we have \begin{eqnarray*} &&-(\Delta w_{0},\xi )_{\Omega _{s}}-\left\langle \frac{\partial w_{0}}{ \partial \nu },\xi |_{\Gamma _{s}}\right\rangle _{\Gamma _{s}} \\ &&+\sum\limits_{j=1}^{K}\left[ (\nabla h_{0j},\nabla \psi _{j})_{\Gamma _{j}}+(h_{0j},\psi _{j})_{\Gamma _{j}}\right] \\ &=&-\sum\limits_{j=1}^{K}\left[ \left\langle \frac{\partial u_{0}}{\partial \nu },\psi _{j}\right\rangle _{\Gamma _{j}}+(h_{1j}^{\ast },\psi _{j})_{\Gamma _{j}}\right] -(w_{1}^{\ast },\xi )_{\Omega _{s}}. \end{eqnarray*} Again, as each $\psi _{j}\in H_{0}^{1}(\Gamma _{j})$ is arbitrary, we deduce that each $h_{0j}$ solves the thin-wave equation \begin{equation} -\Delta h_{0j}+h_{0j}-\frac{\partial w_{0}}{\partial \nu }+\frac{\partial u_{0}}{\partial \nu }=-h_{1j}^{\ast },\text{\ in \ \ } \Gamma _{j},\text{ \ } 1\leq j\leq K. \label{56} \end{equation} A subsequent integration by parts, and invocation of (\ref{49}), (\ref{53}) and (\ref{55} ), give for $1\leq j\leq K,$ \begin{equation*} \left\Vert \Delta h_{0j}\right\Vert _{\Gamma _{j}}+\left\Vert \frac{\partial h_{0j}}{\partial n_{j}}\right\Vert _{H^{-\frac{1}{2}}(\partial \Gamma _{j})} \end{equation*} \begin{equation} \leq C\left\Vert \left[ u_{0}^{\ast },[h_{01}^{\ast },h_{11}^{\ast },\ldots ,h_{0K}^{\ast },h_{1K}^{\ast }],w_{0}^{\ast },w_{1}^{\ast }\right] \right\Vert _{\mathbf{H}}. \label{57} \end{equation} Now, proceeding as in the final stage of the proof of Theorem \ref{well}: let fixed indices $j^{\ast },l^{\ast }$, $1\leq j^{\ast },l^{\ast }\leq K$, satisfy $\partial \Gamma _{j^{\ast }}\cap $ $\partial \Gamma _{l^{\ast }}\neq \emptyset $. Given function $g$ $\in $ $H_{0}^{\frac{1}{2}+\epsilon }(\partial \Gamma _{j^{\ast }}\cap $ $\partial \Gamma _{l^{\ast }}),$ we invoke the associated functions $\psi _{j^{\ast }}\in H^{1+\epsilon }(\Gamma _{j^{\ast }})$ and $\psi _{l^{\ast }}\in H^{1+\epsilon }(\Gamma _{l^{\ast }}) $ as in (\ref{g}), also $\Upsilon \in H^{1}(\Gamma _{s})$\ as in (\ref{sig} ). With these functions, and said continuous right inverse $\gamma _{s}^{+}\in \mathcal{L}(H^{\frac{1}{2}}(\Gamma _{s}),H^{1}(\Omega _{s}))$, we consider test function \begin{equation*} \left[ \psi ,\xi \right] =\left[ \left[ 0,...,\psi _{j^{\ast }},0,...0,\psi _{l^{\ast }},...,0\right] ,\gamma _{s}^{+}\left( \Upsilon \right) \right] \in \chi. 
\end{equation*}
Applying this test function to the variational relation (\ref{51}), and subsequently invoking (\ref{54}), we obtain
\begin{eqnarray*}
&&-\left\langle \frac{\partial w_{0}}{\partial \nu },\xi |_{\Gamma _{s}}\right\rangle _{\Gamma _{s}}+(\nabla h_{0j^{\ast }},\nabla \psi _{j^{\ast }})_{\Gamma _{j^{\ast }}}+(h_{0j^{\ast }},\psi _{j^{\ast }})_{\Gamma _{j^{\ast }}} \\
&&+(\nabla h_{0l^{\ast }},\nabla \psi _{l^{\ast }})_{\Gamma _{l^{\ast }}}+(h_{0l^{\ast }},\psi _{l^{\ast }})_{\Gamma _{l^{\ast }}} \\
&=&-(h_{1j^{\ast }}^{\ast },\psi _{j^{\ast }})_{\Gamma _{j^{\ast }}}-\left\langle \frac{\partial u_{0}}{\partial \nu },\psi _{j^{\ast }}\right\rangle _{\Gamma _{j^{\ast }}} \\
&&-(h_{1l^{\ast }}^{\ast },\psi _{l^{\ast }})_{\Gamma _{l^{\ast }}}-\left\langle \frac{\partial u_{0}}{\partial \nu },\psi _{l^{\ast }}\right\rangle _{\Gamma _{l^{\ast }}}.
\end{eqnarray*}
Integrating by parts with respect to the thin wave components, and invoking (\ref{56}) and (\ref{g}), we then have
\begin{equation*}
\left\langle \frac{\partial h_{0j^{\ast }}}{\partial n_{j^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}+\left\langle \frac{\partial h_{0l^{\ast }}}{\partial n_{l^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}=0\text{.}
\end{equation*}
Since $g\in H_{0}^{\frac{1}{2}+\epsilon }(\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }})$ is arbitrary, a density argument yields
\begin{equation}
\left\langle \frac{\partial h_{0j^{\ast }}}{\partial n_{j^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}=-\left\langle \frac{\partial h_{0l^{\ast }}}{\partial n_{l^{\ast }}},g\right\rangle _{\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}}\text{, \ for all }1\leq j^{\ast },l^{\ast }\leq K  \label{58}
\end{equation}
such that $\partial \Gamma _{j^{\ast }}\cap \partial \Gamma _{l^{\ast }}\neq \emptyset .$ Collecting (\ref{45}), (\ref{46}), (\ref{48}), (\ref{49}), (\ref{53}), (\ref{54}), (\ref{56})-(\ref{58}), we now have that the obtained $\left[ u_{0},[h_{01},h_{11},\ldots ,h_{0K},h_{1K}],w_{0},w_{1}\right]\in D(\mathbf{A})$ satisfies the equation (\ref{43}) for arbitrary $\Phi ^{\ast }\in \mathbf{H}$. Since also $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H}\rightarrow \mathbf{H}$ is dissipative (and so injective), we conclude that $\mathbf{A}$ is boundedly invertible.
\end{proof}

In what follows, we will need the Hilbert space adjoint of $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H}\rightarrow \mathbf{H}$, which can be readily computed:

\begin{proposition}
\label{adj} The Hilbert space adjoint $\mathbf{A}^{\ast }:D(\mathbf{A}^{\ast })\subset \mathbf{H}\rightarrow \mathbf{H}$ of the thick wave-thin wave-heat generator is given by
\begin{equation*}
\mathbf{A}^{\ast }=\left[
\begin{array}{cccccccc}
\Delta & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -I & \cdots & 0 & 0 & 0 & 0 \\
-\frac{\partial }{\partial \nu }|_{\Gamma _{1}} & (I-\Delta ) & 0 & \cdots & 0 & 0 & -\frac{\partial }{\partial \nu }|_{\Gamma _{1}} & 0 \\
\vdots & \vdots & \vdots & \cdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & -I & 0 & 0 \\
-\frac{\partial }{\partial \nu }|_{\Gamma _{K}} & 0 & 0 & \cdots & (I-\Delta ) & 0 & -\frac{\partial }{\partial \nu }|_{\Gamma _{K}} & 0 \\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & -I \\
0 & 0 & 0 & \cdots & 0 & 0 & -\Delta & 0
\end{array}
\right] ,
\end{equation*}
where
\begin{equation*}
\begin{array}{l}
D(\mathbf{A}^{\ast })=\left\{ \left[ u_{0},h_{01},h_{11},\ldots ,h_{0K},h_{1K},w_{0},w_{1}\right] \in \mathbf{H}:\right. \\
\text{ \ \ (A}^{\ast }\text{.i) }u_{0}\in H^{1}(\Omega _{f})\text{, }h_{1j}\in H^{1}(\Gamma _{j})\text{ for }1\leq j\leq K\text{, }w_{1}\in H^{1}(\Omega _{s})\text{;} \\
\text{ \ }\left. \text{(A}^{\ast }\text{.ii) (a) }\Delta u_{0}\in L^{2}(\Omega _{f})\text{, }\Delta w_{0}\in L^{2}(\Omega _{s})\text{, (b) }-\Delta h_{0j}-\frac{\partial u_{0}}{\partial \nu }|_{\Gamma _{j}}-\frac{\partial w_{0}}{\partial \nu }|_{\Gamma _{j}}\in L^{2}(\Gamma _{j})\text{ \ for\ }1\leq j\leq K\text{;}\right. \\
\text{ \ \ \ \ \ \ (c) }\left. \dfrac{\partial h_{0j}}{\partial n_{j}}\right\vert _{\partial \Gamma _{j}}\in H^{-\frac{1}{2}}(\partial \Gamma _{j})\text{, \ for\ }1\leq j\leq K\text{;} \\
\text{ \ }\left. \text{(A}^{\ast }\text{.iii) }u_{0}|_{\Gamma _{f}}=0,\ \ u_{0}|_{\Gamma _{j}}=h_{1j}=w_{1}|_{\Gamma _{j}},\ \text{for }1\leq j\leq K\text{;}\right. \\
\text{ \ }\left. \text{(A}^{\ast }\text{.iv) For }1\leq j\leq K\text{: }\right. \\
\text{ \ \ \ \ \ \ (a) }h_{1j}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=h_{1l}|_{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ }\partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all }1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset ; \\
\text{ \ \ \ \ \ \ \ }\left. \text{(b) }\left. \dfrac{\partial h_{0j}}{\partial n_{j}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=-\left. \dfrac{\partial h_{0l}}{\partial n_{l}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ on \ }\partial \Gamma _{j}\cap \partial \Gamma _{l}\text{, for all }1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset \right\} .
\end{array}
\end{equation*}
\end{proposition}

Now, we continue with analyzing the point and continuous spectra of the generator $\mathbf{A}$:

\begin{lemma}
\label{point-cont-spec} The point spectrum $\sigma_p(\mathbf{A})$ and the continuous spectrum $\sigma_c(\mathbf{A})$ of $\mathbf{A}$ have empty intersection with $i\mathbb{R}$.
\end{lemma}

\textbf{Proof. } To prove this, it will be enough to show that $i\mathbb{R}\backslash \{0\}$ has empty intersection with the approximate spectrum of $\mathbf{A}$; see e.g., Theorem 2.27, pg. 128 of \cite{F}.
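For the reader's convenience, we record the standard facts behind this reduction. The approximate spectrum of $\mathbf{A}$ is the set
\begin{equation*}
\sigma _{ap}(\mathbf{A})=\left\{ \lambda \in \mathbb{C}:\text{there exist }\Phi _{n}\in D(\mathbf{A})\text{ with }\left\Vert \Phi _{n}\right\Vert _{\mathbf{H}}=1\text{ and }\left\Vert (\lambda I-\mathbf{A})\Phi _{n}\right\Vert _{\mathbf{H}}\rightarrow 0\right\} ,
\end{equation*}
and one always has $\sigma _{p}(\mathbf{A})\cup \sigma _{c}(\mathbf{A})\subseteq \sigma _{ap}(\mathbf{A})$. Moreover, the point $\lambda =0$ is already accounted for, since $0\in \rho (\mathbf{A})$ by Proposition \ref{invert}.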
To this end, given $\beta \neq 0,$ suppose that $i\beta $ is in the approximate spectrum of $\mathbf{A.}$ Then there exist sequences \begin{equation} \{\Phi _{n}\}=\left\{ \left[ \begin{array}{c} u_{n} \\ h_{1n} \\ \xi _{1n} \\ \vdots \\ h_{Kn} \\ \xi _{Kn} \\ w_{0n} \\ w_{1n} \end{array} \right] \right\} \subseteq D(\mathbf{A});\text{ \ \ \ \ \ \ \ }\{(i\beta I- \mathbf{A)}\Phi _{n}\}=\left\{ \left[ \begin{array}{c} u_{n}^{\ast } \\ \varphi _{1n}^{\ast } \\ \psi _{1n}^{\ast } \\ \vdots \\ \varphi _{Kn}^{\ast } \\ \psi _{Kn}^{\ast } \\ w_{0n}^{\ast } \\ w_{1n}^{\ast } \end{array} \right] \right\} \subseteq \mathbf{H}\text{\ ,} \label{60} \end{equation} which satisfy for $n=1,2,...,$ \begin{equation} \left\Vert \Phi _{n}\right\Vert _{\mathbf{H}}=1,\text{ \ \ \ }\left\Vert (i\beta I-\mathbf{A)}\Phi _{n}\right\Vert _{\mathbf{H}}<\frac{1}{n}. \label{61} \end{equation} As such, each $\Phi _{n}$ solves the following static system: \begin{equation} \left\{ \begin{array}{c} i\beta u_{n}-\Delta u_{n}=u_{n}^{\ast }\text{ \ \ in \ }\Omega _{f} \\ u_{n}|_{\Gamma _{f}}=0\text{ \ \ on \ }\Gamma _{f} \end{array} \right. \label{62} \end{equation} For $1\leq j\leq K,$ \begin{equation} \left\{ \begin{array}{c} i\beta h_{jn}-\xi _{jn}=\varphi _{jn}^{\ast }\text{ \ \ in \ }\Gamma _{j} \\ -\beta ^{2}h_{jn}-\Delta h_{jn}+h_{jn}+\frac{\partial u_{n}}{\partial \nu }- \frac{\partial w_{0n}}{\partial \nu }=\psi _{jn}^{\ast }+i\beta \varphi _{jn}^{\ast }\text{ \ \ in \ }\Gamma _{j} \end{array} \right. \label{63} \end{equation} Also \begin{equation} \left\{ \begin{array}{c} i\beta w_{0n}-w_{1n}=w_{0n}^{\ast }\text{ \ \ in \ \ }\Omega _{s} \\ -\beta ^{2}w_{0n}-\Delta w_{0n}=w_{1n}^{\ast }+i\beta w_{0n}^{\ast }\text{ \ \ in \ \ }\Omega _{s} \end{array} \right. \label{64} \end{equation} and again for $1\leq j\leq K,$ \begin{equation} \left\{ \begin{array}{c} u_{n}|_{\Gamma _{j}}=\xi _{jn}=w_{1n}|_{\Gamma _{j}} \\ \left. \dfrac{\partial h_{nj}}{\partial n_{j}}\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}=-\left. \dfrac{\partial h_{nl}}{\partial n_{l} }\right\vert _{\partial \Gamma _{j}\cap \partial \Gamma _{l}}\text{ for all }1\leq l\leq K\text{ such that }\partial \Gamma _{j}\cap \partial \Gamma _{l}\neq \emptyset . \end{array} \right. \label{65} \end{equation} Now the left part of the proof of Lemma \ref{point-cont-spec} will be given in five steps:\newline\\ \underline{\textbf{STEP 1:}}\textbf{\ (Estimating the heat component of }$ \Phi _{n}$\textbf{) }\\ \\Proceeding as we did in establishing the dissipativity of $\mathbf{A}:D(\mathbf{A})\subset \mathbf{H}\rightarrow \mathbf{H}$, (see relations (\ref{ten}) and (\ref{dissi})), if we denote \begin{equation*} \Phi _{n}^{\ast }=(i\beta I-\mathbf{A)}\Phi _{n} \end{equation*} then from the relation \begin{equation*} \left( (i\beta I-\mathbf{A)}\Phi _{n},\Phi _{n}\right) _{\mathbf{H}}=(\Phi _{n}^{\ast },\Phi _{n})_{\mathbf{H}}, \end{equation*} we obtain \begin{equation} \left\Vert \nabla u_{n}\right\Vert _{\Omega _{f}}^{2}=\text{Re}(\Phi _{n}^{\ast },\Phi _{n})_{\mathbf{H}}. \label{66} \end{equation} From (\ref{61}), we then have \begin{equation} \underset{n\rightarrow \infty }{\lim }u_{n}=0\text{ \ \ in \ \ }H^{1}(\Omega _{f}). \label{67} \end{equation} In turn, via the thin wave resolvent condition in (\ref{63}) and boundary conditions in (\ref{65}), we have for $1\leq j\leq K$ \begin{equation*} h_{jn}=-\frac{i}{\beta }u_{n}|_{\Gamma _{j}}-\frac{i}{\beta }\varphi _{jn}^{\ast }\text{ \ \ in \ }\Gamma _{j}. 
\end{equation*} From this relation, we can then invoke (\ref{67}), the Sobolev Trace Map, and (\ref{61}), to have \begin{equation} \underset{n\rightarrow \infty }{\lim }h_{jn}=0\text{ \ \ in \ \ }H^{\frac{1}{ 2}}(\Gamma _{j}) \label{68} \end{equation} for $1\leq j\leq K.$ Moreover, an integration by parts, with respect to the heat equation (\ref{62}), gives the estimate \begin{eqnarray*} \left\Vert \frac{\partial u_{n}}{\partial \nu }\right\Vert _{H^{-\frac{1}{2} }(\partial \Omega _{f})} &\leq &C\left[ \left\Vert \nabla u_{n}\right\Vert _{\Omega _{f}}+\left\Vert \Delta u_{n}\right\Vert _{\Omega _{f}}\right] \\ &\leq &C\left[ \left\Vert \nabla u_{n}\right\Vert _{\Omega _{f}}+\left\Vert i\beta u_{n}-u_{n}^{\ast }\right\Vert _{\Omega _{f}}\right] . \end{eqnarray*} Now, invoking (\ref{66}) and (\ref{61}) gives \begin{equation} \underset{n\rightarrow \infty }{\lim }\frac{\partial u_{n}}{\partial \nu }=0 \text{ \ \ in \ \ }H^{-\frac{1}{2}}(\Gamma _{j}) . \label{69} \end{equation} \newline \underline{\textbf{STEP 2:}} We start here by defining the "Dirichlet" map $ D_{s}:L^{2}(\Gamma _{s})\rightarrow L^{2}(\Omega _{s})$\ via \begin{equation*} D_{s}g=f\Longleftrightarrow \left\{ \begin{array}{c} \Delta f=0\text{ \ in \ }\Omega _{s} \\ f|_{\Gamma _{s}}=g\text{ \ on \ }\Gamma _{s}. \end{array} \right. \end{equation*} We know by the Lax-Milgram Theorem \begin{equation} D_{s}\in \mathcal{L}(H^{\frac{1}{2}}(\Gamma _{s}),H^{1}(\Omega _{s})). \label{70} \end{equation} Therewith, considering the resolvent relations in (\ref{64}), we set \begin{equation} z_{n}\equiv w_{0n}+\frac{i}{\beta}D_{s}[u_{n}|_{\Gamma _{s}}+w_{0n}^{\ast }|_{\Gamma _{s}}], \label{71} \end{equation} and so from (\ref{64}) $z_{n}$ satisfies the following BVP: \begin{equation} \left\{ \begin{array}{c} -\beta ^{2}z_{n}-\Delta z_{n}=w_{1n}^{\ast }+i\beta w_{0n}^{\ast }-i\beta D_{s}[u_{n}|_{\Gamma _{s}}+w_{0n}^{\ast }|_{\Gamma _{s}}] \text{ \ \ in \ \ }\Omega _{s} \\ z_{n}|_{\Gamma _{s}}=0\text{ \ \ on \ \ }\Gamma _{s}. \end{array} \right. \label{bvp} \end{equation} Since $\Omega _{s}$ is convex, then $z_{n}\in H^{2}(\Omega _{s}).$ See e.g., Theorem 3.2.1.2, pg. 147 of \cite{GV}. In consequence, we can apply the static version of the well-known wave identity which is often used in PDE control theory-- [see (Proposition 7 (ii) of \cite{AG1}), \cite{chen}, \cite{trigg}. 
To wit, let $m(x)$ be any $[C^{2}(\overline{\Omega _{s}})]^{3}$- vector field with associated Jacobian matrix \begin{equation*} \left[ M(x)\right] _{ij}=\frac{\partial m_{i}(x)}{\partial x_{j}},\text{ \ \ }1\leq i,j\leq 3 \end{equation*} Therewith, we have \begin{eqnarray} &&\int\limits_{\Omega _{s}}M\nabla z_{n}\cdot \nabla z_{n}d\Omega _{s} \notag \\ &=&-\text{Re}\int\limits_{\Gamma _{s}}\frac{\partial z_{n}}{\partial \nu } m\cdot \nabla \overline{z_{n}}d\Gamma _{s} \notag \\ &&-\frac{\beta ^{2}}{2}\int\limits_{\Gamma _{s}}\left\vert z_{n}\right\vert ^{2}m\cdot \nu d\Gamma _{s}+\frac{1}{2}\int\limits_{\Gamma _{s}}\left\vert \nabla z_{n}\right\vert ^{2}m\cdot \nu d\Gamma _{s} \notag \\ &&+\frac{1}{2}\int\limits_{\Omega _{s}}\{\left\vert \nabla z_{n}\right\vert ^{2}-\beta ^{2}\left\vert z_{n}\right\vert ^{2}\}\text{div}(m)d\Omega _{s} \notag \\ &&+\text{Re}\int\limits_{\Omega _{s}}\left[ F_{\beta }^{\ast }-i\beta D_{s}[u_{n}|_{\Gamma _{s}}+w_{0n}^{\ast }|_{\Gamma _{s}}]\right] m\cdot \nabla \overline{z_{n}}d\Omega _{s}, \label{72} \end{eqnarray} where \begin{equation} F_{\beta }^{\ast }=(\text{Re}w_{1n}^{\ast }-\beta I_{m}w_{0n}^{\ast })+i(I_{m}w_{1n}^{\ast }+\beta \text{Re}w_{0n}^{\ast }). \label{73} \end{equation} Again, relation (\ref{72}) holds for any $C^{2}-$vector field $m(x).$ We now specify it to be the smooth vector field of Lemma 1.5.1.9, pg. 40 of \cite{GV}. Namely, for some $\delta >0,$ the $C^{\infty }$ vector field $m(x)$ satisfies \begin{equation} -m(x)\cdot \nu \geq \delta \text{ \ \ a.e. \ on }\Gamma _{s} \label{74} \end{equation} Specifying this vector field in (\ref{72}), and considering that $ z_{n}|_{\Gamma _{s}}=0,$ we have then \begin{eqnarray} &&-\frac{1}{2}\int\limits_{\Gamma _{s}}\left\vert \frac{\partial z_{n}}{ \partial \nu }\right\vert ^{2}m\cdot \nu d\Gamma _{s} \notag \\ &=&\int\limits_{\Omega _{s}}M\nabla z_{n}\cdot \nabla z_{n}d\Omega _{s} \notag \\ &&+\frac{1}{2}\int\limits_{\Omega _{s}}\{\beta ^{2}\left\vert z_{n}\right\vert ^{2}-\left\vert \nabla z_{n}\right\vert ^{2}\}d\Omega _{s} \notag \\ &&-\text{Re}\int\limits_{\Omega _{s}}\left[ F_{\beta }^{\ast }-i\beta D_{s}[u_{n}|_{\Gamma _{s}}+w_{0n}^{\ast }|_{\Gamma _{s}}]\right] m\cdot \nabla \overline{z_{n}}d\Omega _{s}. \label{75} \end{eqnarray} Estimating this relation via (\ref{61}), ((\ref{67}), \ref{71}), (\ref{70}) and the Sobolev Trace map, we then have \begin{equation} \int\limits_{\Gamma _{s}}\left\vert \frac{\partial z_{n}}{\partial \nu } \right\vert ^{2}d\Gamma _{s}\leq C_{\delta ,\beta ,m}, \label{76} \end{equation} where positive constant $C_{\delta ,\beta ,m}$ is independent of $n=1,2,...$ \newline\\ \underline{\textbf{STEP 3:}} \textbf{( An energy estimate for $h_{jn}$ )} \\ \\We multiply both sides of the thin wave $h_{jn}-$ equation (\ref{63}) by $h_{jn},$ integrate and subsequently integrate by parts to have for $1\leq j\leq K,$ \begin{eqnarray} \int\limits_{\Gamma _{j}}\left\vert \nabla h_{jn}\right\vert ^{2}d\Gamma _{j} &=&\int\limits_{\Gamma _{j}}\frac{\partial w_{0n}}{\partial \nu } h_{jn}d\Gamma _{j} \notag \\ &&+(\beta ^{2}-1)\int\limits_{\Gamma _{j}}\left\vert h_{jn}\right\vert ^{2}d\Gamma _{j}-\int\limits_{\Gamma _{j}}\frac{\partial u_{n}}{\partial \nu }h_{jn}d\Gamma _{j} \notag \\ &&+\int\limits_{\Gamma _{j}}(\psi _{jn}^{\ast }+i\beta \varphi _{jn}^{\ast })h_{jn}d\Gamma _{j} \label{77} \end{eqnarray} Here, we are also implicitly using $D(\mathbf{A})$-criterion (A.iv). 
For the first term on RHS: we note that upon combining the regularity for $D_s$ in (\ref{70}) with an integration by parts, we have that \begin{equation} \frac{\partial }{\partial \nu }D_s \in \mathcal{L}(H^{\frac{1}{2}}(\Gamma _{s}),H^{-\frac{1}{2}}(\Omega _{s})) \label{add1} \end{equation} This gives the estimate, via the decomposition (\ref{71}), \begin{equation} \left\Vert \frac{\partial w_{0n}}{\partial \nu }\right\Vert _{H^{-\frac{1}{2} }(\Gamma _{s})}\leq C\left[ \left\Vert \frac{\partial z_{n}}{\partial \nu } \right\Vert _{H^{-\frac{1}{2}}(\Gamma _{s})}+\left\Vert i\beta\frac{\partial }{\partial \nu }D_{s}[u_{n}|_{\Gamma _{s}}+w_{0n}^{\ast }|_{\Gamma _{s}}]\right\Vert _{H^{-\frac{1}{2} }(\Gamma _{s})}\right] \leq C_{\beta }, \label{78} \end{equation} after also using (\ref{61}), (\ref{67}), The Sobolev Trace Map, and (\ref{76}). Applying this estimate to RHS of (\ref{77}), along with (\ref{68}), (\ref{69} ), and (\ref{61}) we have \begin{equation} \underset{n\rightarrow \infty }{\lim }h_{jn}=0\text{ \ \ in \ \ } H^{1}(\Gamma _{j}),\text{ \ \ }1\leq j\leq K. \label{79} \end{equation} \newline \underline{\textbf{STEP 4:}} \\ \\ We note from the previous step that the limit in (\ref{79}) when applied to the equation \begin{equation*} \frac{\partial w_{0n}}{\partial \nu }|_{\Gamma _{j}}=-\Delta h_{jn}+(1-\beta ^{2})h_{jn}+\frac{\partial u_{n}}{\partial \nu }-(\psi _{jn}^{\ast }+i\beta \varphi _{jn}^{\ast })\text{ \ \ in \ \ }\Gamma _{j},\text{ \ \ }1\leq j\leq K, \end{equation*} gives \begin{equation} \underset{n\rightarrow \infty }{\lim }\frac{\partial w_{0n}}{\partial \nu } |_{\Gamma _{j}}=0\text{ \ \ in \ \ }H^{-1}(\Gamma _{j}). \label{80} \end{equation} In obtaining this limit, along with (\ref{79}), we are also using (\ref{69}) and (\ref{61}). In turn, via an interpolation we have for $1\leq j\leq K,$ \begin{eqnarray} \left\Vert \frac{\partial z_{n}}{\partial \nu }\right\Vert _{H^{-\frac{1}{2} }(\Gamma _{j})} &\leq &C\left\Vert \frac{\partial z_{n}}{\partial \nu } \right\Vert _{H^{-1}(\Gamma _{j})}^{\frac{1}{2}}\left\Vert \frac{\partial z_{n}}{\partial \nu }\right\Vert _{L^{2}(\Gamma _{j})}^{\frac{1}{2}} \notag \\ &=&C\left\Vert \frac{\partial w_{0n}}{\partial \nu }+i\beta\frac{\partial }{\partial \nu }D_{s}[u_{n}|_{\Gamma _{s}}+w_{0n}^{\ast }|_{\Gamma _{s}}]\right\Vert _{H^{-1}(\Gamma _{s})}^{\frac{1}{2}}\left\Vert \frac{\partial z_{n}}{\partial \nu }\right\Vert _{L^{2}(\Gamma _{j})}^{\frac{ 1}{2}} \label{82} \end{eqnarray} Applying (\ref{add1}), (\ref{61}), (\ref{80}) and (\ref{76}) to RHS of ( \ref{82}), we have now (upon summing up over $j$), \begin{equation} \underset{n\rightarrow \infty }{\lim }\frac{\partial z_{n}}{\partial \nu }=0 \text{ \ \ in \ \ }H^{-\frac{1}{2}}(\Gamma _{s}) . \label{83} \end{equation} \newline \underline{\textbf{STEP 5}}: \ By (\ref{61}) we have that $\{z_{n}\}$ of ( \ref{71}) converges weakly to, say, $z$ in $H_{0}^{1}(\Omega _{s}).$ With this limit in mind, we multiply both sides of the wave equation in (\ref{bvp}) by given $\eta \in H^{1}(\Omega _{s}).$ Integrating by parts we then have \begin{eqnarray*} &&-\beta ^{2}(z_{n},\eta )_{\Omega _{s}}+(\nabla z_{n},\nabla \eta )_{\Omega _{s}}+\left\langle \frac{\partial z_{n}}{\partial \nu },\eta \right\rangle _{\Gamma _{s}} \\ &=&(w_{1n}^{\ast }+i\beta w_{0n}^{\ast }-i\beta D_{s}[u_{n}|_{\Gamma _{s}}+w_{0n}^{\ast }|_{\Gamma _{s}}],\eta )_{\Omega _{s}},\text{ \ \ \ }\forall \text{ }\eta \in H^{1}(\Omega _{s}). 
\end{eqnarray*} Taking the limit of both sides of this equation, while taking into account ( \ref{61}), (\ref{67}), (\ref{70}), The Sobolev Trace Map, and (\ref{83}), we obtain that $z\in $ $H_{0}^{1}(\Omega _{s})$ satisfies the variational problem \begin{equation*} -\beta ^{2}(z,\eta )_{\Omega _{s}}+(\nabla z,\nabla \eta )_{\Omega _{s}}=0, \text{ \ \ }\forall \text{ }\eta \in H^{1}(\Omega _{s}) \end{equation*} That is, $z$ satisfies the overdetermined eigenvalue problem \begin{equation*} \left\{ \begin{array}{c} -\Delta z=\beta ^{2}z\text{ \ \ in \ \ }\Omega _{s} \\ z|_{\Gamma _{s}}=\frac{\partial z}{\partial \nu }|_{\Gamma _{s}}=0 \end{array} \right. \end{equation*} which gives that \begin{equation*} z=0\text{ \ \ in \ \ }\Omega _{s} \end{equation*} Combining this convergence with (\ref{71}), (\ref{67}), (\ref{61}) and (\ref{70}), we get \begin{equation} \underset{n\rightarrow \infty }{\lim }w_{0n}=0\text{ \ \ in \ \ } H^{1}(\Omega _{s}). \label{84} \end{equation} \\ \textbf{Completion of the Proof of Lemma \ref{point-cont-spec}} \\ \noindent The resolvent relations in (\ref{63}), (\ref{64}) and the convergences (\ref{68}), (\ref{84}) give also \begin{equation} \left\{ \begin{array}{c} \underset{n\rightarrow \infty }{\lim }\xi _{jn}=0\text{ \ \ in \ \ } L^{2}(\Gamma _{j}),\text{ \ \ }1\leq j\leq K \\ \underset{n\rightarrow \infty }{\lim }w_{1n}=0\text{ \ \ in \ \ } H^{1}(\Omega _{s}) \end{array} \right. \label{85} \end{equation} Collecting now, (\ref{67}), (\ref{79}), (\ref{84}) and (\ref{85}) we have \begin{equation*} \underset{n\rightarrow \infty }{\lim }\Phi _{n}=0\text{ \ \ in \ \ }\mathbf{ H,} \end{equation*} which contradicts (\ref{61}) and finishes the proof of Lemma \ref{point-cont-spec}.\\ Lastly, we give the following Corollary regarding the residual spectrum $\sigma_r(\mathbf{A})$: \begin{corollary} \label{RS} The residual spectrum $\sigma_r(\mathbf{A})$ of $\mathbf{A}$ does not intersect the imaginary axis. \end{corollary} \begin{proof} Given the form of the adjoint operator $\mathbf{A}^{\ast }:\mathbf{ H\rightarrow H}$ in Proposition \ref{adj}, then proceeding identically as in the proof of Lemma \textbf{\ref{point-cont-spec}} we obtain \begin{equation*} \sigma _{p}(\mathbf{A}^{\ast })\cap i \mathbb{R} =\sigma _{c}(\mathbf{A}^{\ast })\cap i \mathbb{R} =\emptyset \end{equation*} which finishes the proof of Corollary \ref{RS}. \end{proof} \\ Now, having established the above results for the spectrum of $\mathbf{A}$, we are in a position to give the proof of Theorem \ref{SS}:\\ \textbf{Proof of Theorem \ref{SS}} \\ If we combine the above results Proposition \ref{invert}, Lemma \ref{point-cont-spec} and Corollary \ref{RS} and remember that $\left\{ e^{At}\right\} _{t\geq 0}$ is a contraction semigroup, the strong stability result follows immediately from the application of Theorem \ref{AB}. \end{document}
\begin{document} \title{Affine translation hypersurfaces in Euclidean and isotropic spaces} \author{Muhittin Evren Aydin} \address{Department of Mathematics, Faculty of Science, Firat University, Elazig, 23200, Turkey} \email{[email protected]} \thanks{} \subjclass[2000]{53A05, 53A35, 53B25.} \keywords{Affine translation hypersurface, Gauss-Kronecker curvature, isotropic space, relative curvature, isotropic mean curvature, Laplacian.} \begin{abstract} In this paper, we extend the notion of affine translation surfaces introduced by Liu and Yu (Proc. Japan Acad. Ser. A Math. Sci. 89, 111--113, 2013) in a Euclidean space $\mathbb{R}^{3}$ to higher dimensional ambient spaces. We prove that an affine translation hypersurface of constant Gauss-Kronecker curvature $K_{0}$ in $\mathbb{R}^{n+1}$ is a cylinder, i.e. $K_{0}=0$. As further applications we describe such hypersurfaces in the isotropic spaces satisfying certain conditions on the isotropic curvatures and the Laplacian. \end{abstract} \maketitle \section{Introduction} Let $\mathbb{R}^{n+1}$ be a Euclidean space and $\left( x_{1},x_{2},...,x_{n+1}\right) $ the orthogonal coordinate system in $\mathbb{R}^{n+1}$. Then a hypersurface in $\mathbb{R}^{n+1},$ $n\geq 2,$ is called a \textit{translation hypersurface} if it is the graph of the form \begin{equation} x_{n+1}\left( x_{1},x_{2},...,x_{n}\right) =f_{1}\left( x_{1}\right) +f_{2}\left( x_{2}\right) +...+f_{n}\left( x_{n}\right) , \tag{1.1} \end{equation} where $f_{1},f_{2},...,f_{n}$ are real-valued smooth functions of one variable (see \cite{2,7,30}). These hypersurfaces are obtained by translating the curves (called \textit{generating curves}) lying in mutually orthogonal planes of $\mathbb{R}^{n+1}$. Dillen et al. \cite{7} proved that a minimal (vanishing mean curvature) translation hypersurface in $\mathbb{R}^{n+1}$ is either a hyperplane or a product manifold $M^{2}\times \mathbb{R}^{n-2},$ where $M^{2}$ is \textit{Scherk's minimal translation surface} in $\mathbb{R}^{3}$ given in explicit form \begin{equation*} x_{3}\left( x_{1},x_{2}\right) =\frac{1}{c}\log \left\vert \frac{\cos \left( cx_{1}\right) }{\cos \left( cx_{2}\right) }\right\vert ,\text{ }c\in \mathbb{R}-\left\{ 0\right\} . \end{equation*} In the 3-dimensional context, many different generalizations of Scherk's surface were treated on $\mathbb{A}^{3}$ \cite{9,31}, $Nil_{3}$ \cite{12}, $\mathbb{H}^{3}$ \cite{16}, $Sol_{3}$ \cite{17}, $\mathbb{R}^{3}$ \cite{18,19}. Constant Gauss-Kronecker curvature (CGKC) and constant mean curvature (CMC) translation hypersurfaces in $\mathbb{R}^{n+1}$ (also in the Lorentz-Minkowski space $\mathbb{R}_{1}^{n+1}$) were described in \cite{28} by Seo. For lightlike counterparts of such results see \cite{11}. Most recently, Moruz and Munteanu \cite{22} introduced a new class of translation hypersurfaces in $\mathbb{R}^{4}$ as the graph of the form \begin{equation*} x_{4}\left( x_{1},x_{2},x_{3}\right) =f_{1}\left( x_{1}\right) +f_{2}\left( x_{2},x_{3}\right) . \end{equation*} This one appears as the sum of a curve in the $x_{1}x_{4}-$plane and a graph surface in the $x_{2}x_{3}x_{4}-$space. Immediately afterwards this new concept was generalized to higher dimensions by Munteanu et al. \cite{23} by considering the form \begin{equation} x_{n+m+1}\left( x_{1},x_{2},...,x_{n+m}\right) =f_{1}\left( x_{1},x_{2},...,x_{n}\right) +f_{2}\left( x_{n+1},x_{n+2},...,x_{n+m}\right) . \tag{1.2} \end{equation} The graph of the form (1.2) in $\mathbb{R}^{n+m+1}$ is called a \textit{translation graph}.
The authors in \cite{22,23} obtained new classifications and results by imposing the minimality condition. Due to the above framework, the following problems can be stated: \begin{problem} To obtain CMC and CGKC translation hypersurfaces in $\mathbb{R}^{n+1}$ (as defined by Dillen et al.) such that either \begin{enumerate} \item the generating curves are planar and lie in non-orthogonal planes; or \item some of the generating curves are planar and the others are not; or \item the generating curves are all non-planar (space curves). \end{enumerate} \end{problem} \begin{problem} To characterize CGKC and CMC translation graphs in $\mathbb{R}^{n+1}$ (as defined by Moruz et al.) without imposing restrictions. \end{problem} This study aims to solve a part of the first item of Problem 1, that is, to classify the CGKC translation hypersurfaces whose generating curves lie in non-orthogonal planes. For this, we are motivated by the notion of \textit{affine translation surface} introduced by Liu and Yu \cite{14} as a graph of the form \begin{equation} x_{3}\left( x_{1},x_{2}\right) =f_{1}\left( x_{1}\right) +f_{2}\left( x_{2}+cx_{1}\right) \notag \end{equation} for some nonzero constant $c$. Such surfaces with CMC were classified in \cite{15}. By a change of parameter, its parameterization becomes \begin{equation*} r\left( u,v\right) =\left( u,v-cu,f_{1}\left( u\right) +f_{2}\left( v\right) \right) , \end{equation*} which implies that the generating curves lie in non-orthogonal planes. In order to achieve our purpose, we consider the graph in $\mathbb{R}^{n+1}$ of the form \begin{equation} x_{n+1}\left( x_{1},x_{2},...,x_{n}\right) =f_{1}\left( y_{1}\right) +f_{2}\left( y_{2}\right) +...+f_{n}\left( y_{n}\right) , \tag{1.3} \end{equation} where \begin{equation} y_{i}=\sum_{j=1}^{n}a_{ij}x_{j},\text{ }i=1,2,...,n. \tag{1.4} \end{equation} If $A=\left( a_{ij}\right) $ in (1.4) is a non-orthogonal regular matrix, then we call the graph of the form (1.3) an \textit{affine translation hypersurface} and $\left( y_{1},y_{2},...,y_{n}\right) $ \textit{affine parameter coordinates}. Note that the generating curves of an affine translation hypersurface lie in non-orthogonal planes due to the non-orthogonality of $A$. In the particular case $y_{1}=x_{1},$ $y_{2}=x_{2},$ ..., $y_{n-1}=x_{n-1}$, Yang and Fu \cite{31} proposed to obtain some curvature classifications for such a hypersurface in $\mathbb{R}^{n+1}$. In the more general case, we prove the following: \begin{theorem} Let $M^{n}$ be an affine translation hypersurface in $\mathbb{R}^{n+1}$ with CGKC $K_{0}.$ Then it is congruent to a cylinder, i.e. $K_{0}=0$. \end{theorem} Combining this with the result of Seo \cite[Theorem 2.5]{28}, we derive: \begin{corollary} There is no translation hypersurface in $\mathbb{R}^{n+1}$ with nonzero CGKC, provided the generating curves are all planar. \end{corollary} Further we classify these hypersurfaces in isotropic spaces satisfying certain conditions on the isotropic curvatures and the Laplacian. \section{Preliminaries} \subsection{Basics on hypersurfaces in $\mathbb{R}^{n+1}$} Let $M^{n},\mathbb{S}^{n},\left\langle \cdot ,\cdot \right\rangle$ and $\left\Vert \cdot \right\Vert$ denote a hypersurface, the standard hypersphere, the Euclidean scalar product and the induced norm of $\mathbb{R}^{n+1}$, respectively. For further properties of submanifolds in $\mathbb{R}^{n+1}$ see \cite{3}.
The map $\nu :M^{n}\longrightarrow \mathbb{S}^{n}$ in $\mathbb{R}^{n+1}$ is called the \textit{Gauss map} of $M^{n}$ and its differential $d\nu$ is known as the \textit{shape operator} $A$ of $M^{n}.$ Let $T_{p}M^{n}$ be the tangent space at a point $p\in M^{n};$ then the following holds: \begin{equation*} \left\langle A_{p}\left( x_{p}\right) ,y_{p}\right\rangle =\left\langle d\nu \left( x_{p}\right) ,y_{p}\right\rangle ,\text{ }x_{p},y_{p}\in T_{p}M^{n}, \end{equation*} where the induced metric on $M^{n}$ from $\mathbb{R}^{n+1}$ is denoted by the same symbol $\left\langle \cdot ,\cdot \right\rangle .$ The real number $\det \left( A_{p}\right) $ is called the \textit{Gauss-Kronecker curvature }of $M^{n}$ at $p\in M^{n}$. A hypersurface in $\mathbb{R}^{n+1}$ for which the Gauss-Kronecker curvature at each point is zero is called \textit{flat.} The graph hypersurface in $\mathbb{R}^{n+1}$ of a given real-valued smooth function $z=z\left( x_{1},x_{2},...,x_{n}\right) $ is of the form \begin{equation*} r:\mathbb{R}^{n}\longrightarrow \mathbb{R}^{n+1}, \text{ } r\left( x_{1},x_{2},...,x_{n}\right) =\left( x_{1},x_{2},...,x_{n},z\left( x_{1},x_{2},...,x_{n}\right) \right) . \end{equation*} The Gauss-Kronecker curvature $K$ of such a hypersurface in $\mathbb{R}^{n+1}$ is given by \begin{equation} K=\frac{\det \left( Hess\left( z\right) \right) }{\left( 1+\sum_{i=1}^{n}\left( z_{,x_{i}}\right) ^{2}\right) ^{\frac{n+2}{2}}}, \tag{2.1} \end{equation} where $z_{,x_{i}}=\frac{\partial z}{\partial x_{i}}$ and $Hess\left( z\right) $ is the Hessian of $z$, namely \begin{equation} Hess\left( z\right) = \begin{bmatrix} z_{,x_{1}x_{1}} & z_{,x_{1}x_{2}} & ... & z_{,x_{1}x_{n}} \\ z_{,x_{2}x_{1}} & z_{,x_{2}x_{2}} & ... & z_{,x_{2}x_{n}} \\ \vdots & \vdots & ... & \vdots \\ z_{,x_{n}x_{1}} & z_{,x_{n}x_{2}} & ... & z_{,x_{n}x_{n}} \end{bmatrix} \tag{2.2} \end{equation} for $z_{,x_{i}x_{j}}=\frac{\partial ^{2}z}{\partial x_{i}\partial x_{j}},$ $i,j=1,2,...,n$. \subsection{Basics on hypersurfaces in $\mathbb{I}^{n+1}$} For general references of the isotropic space $\mathbb{I}^{n+1}$ we refer to \cite{5,8,20,21} and \cite{24}-\cite{27}. $\mathbb{I}^{n+1}$ is based on the following group of motions \begin{equation} \begin{bmatrix} A & 0 \\ B & 1 \end{bmatrix} , \tag{2.3} \end{equation} where $A \in \mathbb{R}_{n}^{n}$ is an orthogonal $n\times n-$matrix and $B\in \mathbb{R}_{1}^{n}$ is a $\left( 1\times n\right) -$matrix. The \textit{isotropic distance} of $\mathbb{I}^{n+1},$ which is invariant under $\left( 2.3\right) ,$ is defined as \begin{equation} \left\Vert p-q\right\Vert _{i}=\sqrt{\sum_{j=1}^{n}\left( q_{j}-p_{j}\right) ^{2}} \tag{2.4} \end{equation} for $p=\left( p_{1},p_{2},...,p_{n+1}\right) ,$ $q=\left( q_{1},q_{2},...,q_{n+1}\right) \in \mathbb{I}^{n+1}.$ Thereby $\mathbb{I}^{n+1}$ appears as a real affine space endowed with the metric (2.4). Let $\left( x_{1}, x_{2},...,x_{n+1}\right) $ be the standard affine coordinates of $\mathbb{I}^{n+1}.$ The metric (2.4) is degenerate along the $x_{n+1}-$direction, and we call the lines in the $x_{n+1}-$direction \textit{isotropic} \textit{lines}. A $k-$plane involving an isotropic line is called an \textit{isotropic }$k-$\textit{plane.} A hypersurface in $\mathbb{I}^{n+1}$ is called \textit{admissible} if it nowhere has an isotropic tangent hyperplane.
A \textit{graph hypersurface} $M^{n}$ in $\mathbb{I}^{n+1}$ of a given smooth function $z\left( x_{1},x_{2},...,x_{n}\right)$ is of the form \begin{equation*} r:\mathbb{R}^{n}\longrightarrow \mathbb{I}^{n+1}, \text{ } r\left( x_{1},x_{2},...,x_{n}\right) =\left( x_{1},x_{2},...,x_{n},z\left( x_{1},x_{2},...,x_{n}\right) \right) . \end{equation*} Note that $M^{n}$ is admissible since its tangent hyperplane spanned by $\left\{ r_{,x_{1}},r_{,x_{2}},...,r_{,x_{n}}\right\} $ does not involve an isotropic line. The induced metric $\left\langle \cdot ,\cdot \right\rangle $ on $M^{n}$ from $\mathbb{I}^{n+1}$ is given by \begin{equation} \left\langle \cdot ,\cdot \right\rangle =dx_{1}^{2}+...+dx_{n}^{2}. \tag{2.5} \end{equation} Thus, its Laplacian becomes \begin{equation} \bigtriangleup =\sum_{i=1}^{n}\frac{\partial ^{2}}{\partial x_{i}^{2}}. \tag{2.6} \end{equation} Now let us consider a curve on $M^{n}$ that has the position vector \begin{equation} r=r\left( s\right) =\mathbf{x}\left( s\right) +z\left( s\right) e_{n+1}, \tag{2.7} \end{equation} where \begin{equation} \mathbf{x}\left( s\right) =\left( x_{1}\left( s\right) ,x_{2}\left( s\right) ,...,x_{n}\left( s\right),0 \right), \text{ } e_{n+1}=\left( \underset{n-tuple}{\underbrace{0,0,...0}},1\right) . \notag \end{equation} Differentiating (2.7) with respect to $s$ leads to \begin{equation} r^{\prime } =\mathbf{x}^{\prime }+ \left\langle \mathbf{x}^{\prime } ,\nabla z \right\rangle e_{n+1}, \tag{2.8} \end{equation} where $\nabla $ denotes the gradient operator in $\mathbb{R}^{n}.$ Differentiating $\left( 2.8\right) $ once more with respect to $s,$ we obtain \begin{equation} r^{\prime \prime } =\mathbf{x}^{\prime \prime }+ \left\langle \mathbf{x}^{\prime \prime } ,\nabla z \right\rangle e_{n+1}+ \left( \mathbf{X}^{\prime }\right)^{T}\cdot Hess\left( z \right)\cdot \mathbf{X}^{\prime } e_{n+1}, \tag{2.9} \end{equation} where $\mathbf{X}^{\prime } $ is the column matrix associated with $\mathbf{x}^{\prime }$ and $\left( \mathbf{X}^{\prime }\right)^{T}$ its transpose. Therefore, in (2.9), the following decomposition occurs: \begin{equation*} Tan\left( r^{\prime \prime } \right) =\mathbf{x}^{\prime \prime } +\left\langle \mathbf{x}^{\prime \prime } ,\nabla z \right\rangle e_{n+1} \end{equation*} and \begin{equation*} Nor\left( r^{\prime \prime } \right) = \left( \mathbf{X}^{\prime }\right)^{T}\cdot Hess\left( z \right) \cdot \mathbf{X}^{\prime } e_{n+1}, \end{equation*} where $Tan\left( r^{\prime \prime }\right)$ denotes the projection of $r^{\prime \prime }$ onto the tangent hyperplane of $M^{n}$ and $Nor\left( r^{\prime \prime } \right)$ the isotropic component of $r^{\prime \prime }$, which is normal to $M^{n}$. If $\left\Vert Tan\left( r^{\prime \prime } \right) \right\Vert _{i} \neq 0,$ then this quantity is called the \textit{geodesic curvature function} $\kappa _{G}$ of $r;$ otherwise $\kappa _{G}=1$ is assumed. Accordingly the following function is called the \textit{normal curvature function} $\kappa _{N}$ of $r$: \begin{equation} \kappa _{N}= \left( \mathbf{X}^{\prime }\right)^{T}\cdot Hess\left( z \right)\cdot \mathbf{X}^{\prime }. \tag{2.10} \end{equation} The extremal values $\kappa _{1},...,\kappa _{n}$ of (2.10), corresponding to the eigenvalue functions of $Hess\left( z\right) ,$ are called the \textit{principal curvatures} of $M^{n}.$ Since $Hess\left( z \right) $ is symmetric, all eigenvalue functions are real.
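To illustrate these notions with an elementary example (which is not needed in the sequel), consider in $\mathbb{I}^{3}$ the graph of $z\left( x_{1},x_{2}\right) =\frac{1}{2}\left( c_{1}x_{1}^{2}+c_{2}x_{2}^{2}\right) $ with constants $c_{1},c_{2}\in \mathbb{R},$ and a curve on it whose tangent vector at the origin is $\mathbf{x}^{\prime }=\left( \cos \phi ,\sin \phi ,0\right) .$ Then $Hess\left( z\right) =\mathrm{diag}\left( c_{1},c_{2}\right) $ and (2.10) gives \begin{equation*} \kappa _{N}=c_{1}\cos ^{2}\phi +c_{2}\sin ^{2}\phi , \end{equation*} whose extremal values over $\phi $ are exactly $c_{1}$ and $c_{2},$ in agreement with the fact that the principal curvatures are the eigenvalue functions of $Hess\left( z\right) .$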
Thus one may define the following curvature functions: \begin{equation} K_{i}=\frac{1}{\binom{n}{i}}\left( \kappa _{1}...\kappa _{i}+\kappa _{1}...\kappa _{i-1}\kappa _{i+1}+...+\kappa _{n-i+1}...\kappa _{n}\right) . \tag{2.11} \end{equation} By $\left( 2.11\right) ,$ the \textit{isotropic mean curvature function} $H=K_{1}$ is \begin{equation} H=\frac{1}{n}trace\left( Hess\left( z\right) \right) =\frac{1}{n} \bigtriangleup z \tag{2.12} \end{equation} and the \textit{relative curvature} (or \textit{isotropic Gaussian curvature}) \textit{function} $K=K_{n}$ is \begin{equation} K=\det \left( Hess\left( z\right) \right). \tag{2.13} \end{equation} A hypersurface in $\mathbb{I}^{n+1}$ with vanishing relative curvature (resp. isotropic mean curvature) is called \textit{isotropic flat }(resp. \textit{isotropic minimal}). \section{Affine translation hypersurfaces in\textbf{\ }$\mathbb{R}^{n+1}$} Let $x=\left( x_{1},x_{2},...,x_{n}\right) $ denote the orthogonal coordinate system in $\mathbb{R}^{n}$ and \newline $z:\mathbb{R}^{n}\longrightarrow \mathbb{R}$, $z=z\left( y\right) ,$ be a smooth function, where \begin{equation} y=\left( y_{1},y_{2},...,y_{n}\right) ,\text{ }y_{i}= \sum_{j=1}^{n}a_{ij}x_{j},\text{ }a_{ij}\in \mathbb{R},\text{ }i=1,2,...,n. \tag{3.1} \end{equation} If $A=\left( a_{ij}\right) $ is a non-orthogonal $n\times n-$matrix and $\det \left( A\right) \neq 0,$ then we call the graph of $z\left( y\right) $ in $\mathbb{R}^{n+1}$ the \textit{affine graph} of $z\left( x\right) $ and $\left( y_{1},y_{2},...,y_{n}\right) $ \textit{affine parameter coordinates}. Hence we first establish the following result for later use. \begin{lemma} Let $z\left( y\right) $ be a smooth real-valued function on $\mathbb{R}^{n}$, where $y$ is the affine parameter coordinates given by $\left( 3.1\right) .$ Then the following relation holds: \begin{equation} \det \left[ Hess\left( z\left( x\right) \right) \right] =\det \left[ A\right] ^{2}\det \left[ Hess\left( z\left( y\right) \right) \right] \tag{3.2} \end{equation} for $x=\left( x_{1},x_{2},...,x_{n}\right) .$ \end{lemma} \begin{proof} The partial derivatives of $z$ with respect to $x_{i},$ $1\leq i\leq n,$ give \begin{equation*} z_{,x_{i}}=\sum_{k=1}^{n}a_{ki}z_{,y_{k}},\text{ }z_{,x_{i}x_{j}}= \sum_{k,l=1}^{n}a_{ki}a_{lj}z_{,y_{l}y_{k}},\text{ }1\leq j\leq n. \end{equation*} Then the Hessian of $z\left( x\right) $ becomes \begin{equation} Hess\left( z\left( x\right) \right) = \begin{bmatrix} \sum_{k,l=1}^{n}a_{k1}a_{l1}z_{,y_{l}y_{k}} & \sum_{k,l=1}^{n}a_{k1}a_{l2}z_{,y_{l}y_{k}} & ... & \sum_{k,l=1}^{n}a_{k1}a_{ln}z_{,y_{l}y_{k}} \\ \sum_{k,l=1}^{n}a_{k2}a_{l1}z_{,y_{l}y_{k}} & \sum_{k,l=1}^{n}a_{k2}a_{l2}z_{,y_{l}y_{k}} & ... & \sum_{k,l=1}^{n}a_{k2}a_{ln}z_{,y_{l}y_{k}} \\ \vdots & \vdots & \vdots & \vdots \\ \sum_{k,l=1}^{n}a_{kn}a_{l1}z_{,y_{l}y_{k}} & \sum_{k,l=1}^{n}a_{kn}a_{l2}z_{,y_{l}y_{k}} & ... & \sum_{k,l=1}^{n}a_{kn}a_{ln}z_{,y_{l}y_{k}} \end{bmatrix} . \tag{3.3} \end{equation} By considering matrix multiplication in $\left( 3.3\right) $ we deduce that \begin{equation} Hess\left( z\left( x\right) \right) =A^{T}\cdot Hess\left( z\left( y\right) \right) \cdot A, \tag{3.4} \end{equation} where $A^{T}$ denotes the transpose of $A.$ Thus by $\left( 3.4\right) $ we obtain (3.2). \end{proof} If $\det \left( A\right) \neq 0,$ Lemma 3.1 immediately implies the following result: \begin{corollary} A graph of a given smooth real-valued function is flat if and only if so is its affine graph in $\mathbb{R}^{n+1}$.
\end{corollary} In particular, the affine graph of (1.1), the so-called \textit{affine translation hypersurface}, has the form \begin{equation} z\left( x_{1},x_{2},...,x_{n}\right) =f_{1}\left( y_{1}\right) +f_{2}\left( y_{2}\right) +...+f_{n}\left( y_{n}\right) ,\text{ }z=x_{n+1}, \tag{3.5} \end{equation} where $f_{1},f_{2},...,f_{n}$ are arbitrary nonzero smooth functions and $\left( y_{1},y_{2},...,y_{n}\right) $ is the affine parameter coordinates given by $\left( 3.1\right) .$ Remark that such a hypersurface reduces to the standard translation hypersurface if $A$ is an orthogonal matrix. Denote by $A^{-1}=\left( a^{ij}\right) $ the inverse matrix of $A=\left( a_{ij}\right) .$ Then, by a change of parameter, the affine translation hypersurface $M^{n}$ has a parameterization \begin{equation} \left. \begin{array}{l} r\left( y_{1},y_{2},..,y_{n}\right) =\left( \sum_{i=1}^{n}a^{1i}y_{i},\sum_{i=1}^{n}a^{2i}y_{i},...,\sum_{i=1}^{n}f_{i} \left( y_{i}\right) \right) \\ =\underset{\alpha _{1}}{\underbrace{\left( a^{11}y_{1},a^{21}y_{1},...,f_{1}\left( y_{1}\right) \right) }}+\underset{ \alpha _{2}}{\underbrace{\left( a^{12}y_{2},a^{22}y_{2},...,f_{2}\left( y_{2}\right) \right) }}+...+ \\ +\underset{\alpha _{n}}{\underbrace{\left( a^{1n}y_{n},a^{2n}y_{n},...,f_{n}\left( y_{n}\right) \right) }}. \end{array} \right. \tag{3.6} \end{equation} Since $A$ is non-orthogonal, so is $A^{-1}$ and this yields that the row and column vectors of $A^{-1}$ form a non-orthogonal system. Thereby, the generating curves $\alpha _{1},\alpha _{2},...,\alpha _{n}$ lie in non-orthogonal planes. \subsection{Proof of Theorem 1.1.} We aim to describe the affine translation hypersurfaces in $\mathbb{R}^{n+1}$ with CGKC. For this we fix some notation to be used in the remaining part: \begin{equation} f_{k}^{\prime }=\frac{df_{k}}{dy_{k}},\text{ }f_{k}^{\prime \prime }=\frac{d^{2}f_{k}}{dy_{k}^{2}},\text{ }k=1,2,...,n, \tag{3.7} \end{equation} and \begin{equation} z_{,x_{i}}=\sum_{k=1}^{n}a_{ki}f_{k}^{\prime },\text{ }z_{,x_{i}x_{j}}= \sum_{k=1}^{n}a_{ki}a_{kj}f_{k}^{\prime \prime },\text{ }i,j=1,2,...,n. \tag{3.8} \end{equation} By (3.7), the Hessian of $z(y)$ turns to \begin{equation} Hess\left( z\left( y\right) \right) = \begin{bmatrix} f_{1}^{\prime \prime } & 0 & ... & 0 \\ 0 & f_{2}^{\prime \prime } & ... & 0 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & ... & f_{n}^{\prime \prime } \end{bmatrix} . \tag{3.9} \end{equation} Substituting (3.9) into (3.2) leads to \begin{equation} \det \left[ Hess\left( z(x)\right) \right] =\det \left[ A\right] ^{2}f_{1}^{\prime \prime }f_{2}^{\prime \prime }...f_{n}^{\prime \prime }, \tag{3.10} \end{equation} where $x=\left( x_{1},x_{2},...,x_{n}\right) $. Now we assume that the affine translation hypersurface $M^{n}$ in $\mathbb{R}^{n+1}$ has $K=K_{0}=const.$ Then (2.1), (3.7) and $\left( 3.10\right) $ imply that \begin{equation} K_{0}=\frac{\det \left( A\right) ^{2}\left( f_{1}^{\prime \prime }f_{2}^{\prime \prime }...f_{n}^{\prime \prime }\right) }{\left( 1+\sum_{i=1}^{n}\left( \sum_{j=1}^{n}a_{ji}f_{j}^{\prime }\right) ^{2}\right) ^{\frac{n+2}{2}}}. \tag{3.11} \end{equation} \begin{enumerate} \item[\textbf{Case 1}] If $K_{0}=0$ in $\left( 3.11\right) ,$ then at least one of $f_{1},f_{2},...,f_{n}$ is a linear function with respect to the variables $y_{1},y_{2},...,y_{n}$, respectively. Without loss of generality, we may assume that $f_{1}\left( y_{1}\right) =cy_{1}+d,$ $c,d\in \mathbb{R}$.
Substituting this into (3.6), we conclude \begin{equation*} r\left( y_{1},y_{2},..,y_{n}\right) =y_{1}\left( a^{11},a^{21},...,c\right) +\left( \sum_{i=2}^{n}a^{1i}y_{i},\sum_{i=2}^{n}a^{2i}y_{i},...,d+ \sum_{i=2}^{n}f_{i}\left( y_{i}\right) \right) , \end{equation*} which implies that $M^{n}$ is a cylinder. \item[\textbf{Case 2}] Otherwise, i.e. $K_{0}\neq 0,$ the functions $f_{1},f_{2},...,f_{n}$ have to be non-linear. Put $W:=1+\sum_{i=1}^{n}\left( \sum_{j=1}^{n}a_{ji}f_{j}^{\prime }\right) ^{2}.$ Taking the partial derivative of $\left( 3.11\right) $ with respect to $y_{p},$ $p=1,2,...,n,$ gives \begin{equation} \left( f_{1}^{\prime \prime }f_{2}^{\prime \prime }...f_{p}^{\prime \prime \prime }...f_{n}^{\prime \prime }\right) W=\left( n+2\right) \left( f_{1}^{\prime \prime }f_{2}^{\prime \prime }...\left( f_{p}^{\prime \prime }\right) ^{2}...f_{n}^{\prime \prime }\right) \left( \sum_{i,j=1}^{n}a_{pi}a_{ji}f_{j}^{\prime }\right) . \tag{3.12} \end{equation} Since $f_{1}^{\prime \prime }f_{2}^{\prime \prime }...f_{n}^{\prime \prime }\neq 0,$ $\left( 3.12\right) $ can be rewritten as \begin{equation} \frac{f_{p}^{\prime \prime \prime }}{\left( n+2\right) \left( f_{p}^{\prime \prime }\right) ^{2}}=\frac{\sum_{i,j=1}^{n}a_{pi}a_{ji}f_{j}^{\prime }}{W}. \tag{3.13} \end{equation} The partial derivative of (3.13) with respect to $y_{q}$, $p\neq q=1,2,...,n, $ gives \begin{equation} W\sum_{i=1}^{n}a_{pi}a_{qi}-2\left( \sum_{i,j=1}^{n}a_{qi}a_{ji}f_{j}^{\prime }\right) \left( \sum_{i,j=1}^{n}a_{pi}a_{ji}f_{j}^{\prime }\right) =0. \tag{3.14} \end{equation} Taking the partial derivative of $\left( 3.13\right) $ twice with respect to $y_{q}$ yields \begin{equation} \sum_{i=1}^{n}a_{pi}a_{qi}=0. \tag{3.15} \end{equation} Substituting $\left( 3.15\right) $ into $\left( 3.14\right) $ leads to either \begin{equation} \sum_{i,j=1}^{n}a_{qi}a_{ji}f_{j}^{\prime }=0\text{ or } \sum_{i,j=1}^{n}a_{pi}a_{ji}f_{j}^{\prime }=0. \tag{3.16} \end{equation} Taking the partial derivative in the second equality of $\left( 3.16\right) $ with respect to $y_{p}$ gives \begin{equation*} f_{p}^{\prime \prime }\sum_{i=1}^{n}\left( a_{pi}\right) ^{2}=0, \end{equation*} which implies $a_{p1}=a_{p2}=...=a_{pn}=0$. This is a contradiction since $\det (A)\neq 0$, which completes the proof. \end{enumerate} \section{Further applications} Before introducing the affine translation hypersurfaces in $\mathbb{I}^{n+1}$, let us reconsider the notion of translation hypersurface in $\mathbb{I}^{n+1}.$ By means of the isotropic motions given by (2.3), a \textit{translation hypersurface} in $\mathbb{I}^{n+1}$ generated by translating the curves lying in orthogonal isotropic planes is the graph of the form $\left(1.1\right). $ Such hypersurfaces in $\mathbb{I}^{n+1}$ with constant relative curvature (CRC) and constant isotropic mean curvature (CIMC) were described in \cite{1}. Therefore, similarly to the Euclidean case, we can state that an \textit{affine translation hypersurface} in $\mathbb{I}^{n+1}$ is the graph of a function given via $\left( 3.1\right) $ and $\left( 3.5\right) .$ Note that the generating curves of such a hypersurface lie in non-orthogonal isotropic planes. So, keeping in mind that the generating curves may also lie in non-isotropic planes, the problems given in the Introduction can also be considered in the isotropic spaces.
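For instance (this special case is only meant as an illustration and corresponds to the surfaces of Liu and Yu recalled in the Introduction), for $n=2$ the choice \begin{equation*} A=\begin{bmatrix} 1 & 0 \\ c & 1 \end{bmatrix} ,\text{ }c\neq 0, \end{equation*} in $\left( 3.1\right) $ gives $y_{1}=x_{1},$ $y_{2}=cx_{1}+x_{2},$ and hence the affine translation surface \begin{equation*} z\left( x_{1},x_{2}\right) =f_{1}\left( x_{1}\right) +f_{2}\left( cx_{1}+x_{2}\right) \end{equation*} in $\mathbb{I}^{3}.$ Its generating curves, obtained from $\left( 3.6\right) $ as $\alpha _{1}\left( y_{1}\right) =\left( y_{1},-cy_{1},f_{1}\left( y_{1}\right) \right) $ and $\alpha _{2}\left( y_{2}\right) =\left( 0,y_{2},f_{2}\left( y_{2}\right) \right) ,$ lie in the isotropic planes $x_{2}+cx_{1}=0$ and $x_{1}=0,$ which are non-orthogonal.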
By (2.13) and (3.9), for an affine translation hypersurface with CRC $K_{0}$ in $\mathbb{I}^{n+1}$, we get \begin{equation} K_{0}=\det \left( A \right) ^{2}f_{1}^{\prime \prime }f_{2}^{\prime \prime }...f_{n}^{\prime \prime }, \tag{4.1} \end{equation} where $f_{i}^{\prime \prime }=\frac{d^{2}f_{i}}{dy_{i}^{2}}$ and $(y_{1},y_{2},...,y_{n})$ is the affine parameter coordinates given by (3.1). Hence (4.1) immediately implies that $K_{0}$ vanishes when at least one of $f_{1},f_{2},...,f_{n}$ is a linear function with respect to the variables $y_{1},y_{2},...,y_{n}$, respectively. Suppose that $K_{0}\neq 0.$ Taking the partial derivative of $\left( 4.1\right) $ with respect to $y_{p}$ leads to \begin{equation*} f_{1}^{\prime \prime }f_{2}^{\prime \prime }...f_{p}^{\prime \prime \prime }...f_{n}^{\prime \prime }=0, \end{equation*} namely \begin{equation*} f_{p}\left( y_{p}\right) =c_{p}y_{p}^{2}+d_{p}y_{p}+e_{p},\text{ }p=1,2,...,n \end{equation*} for some constants $c_{p},d_{p},e_{p}\in \mathbb{R}$, $c_{p}\neq 0$ and $c_{1}c_{2}...c_{n}=\frac{K_{0}}{2^{n}\det \left( A \right) ^{2}}.$ Accordingly the following result can be expressed: \begin{theorem} Let $M^{n}$ be an affine translation hypersurface in $\mathbb{I}^{n+1}$ with CRC $K_{0}.$ Then, it is either congruent to a cylinder $\left( K_{0}=0\right) $ or given by $\left( K_{0}\neq 0\right) $ \begin{equation*} \left\{ \begin{array}{l} z\left( x_{1},x_{2},...,x_{n}\right) =\sum_{i=1}^{n}c_{i}y_{i}^{2}+d_{i}y_{i}+e_{i}, \\ c_{i},d_{i},e_{i}\in \mathbb{R},\text{ } c_{i}\neq 0,\text{ }c_{1}c_{2}...c_{n}= \frac{K_{0}}{2^{n}\det \left( A \right) ^{2}},\text{ }i=1,2,...,n, \end{array} \right. \end{equation*} where $\left( y_{1},y_{2},...,y_{n}\right) $ is the affine parameter coordinates given by $\left( 3.1\right) .$ \end{theorem} Next we assume that an affine translation hypersurface $M^{n}$ in $\mathbb{I}^{n+1}$ has CIMC $H_{0}$. Hence we have from (2.12) and (3.7) that \begin{equation} nH_{0}=\sum_{i,j=1}^{n}a_{ij}^{2}f_{i}^{\prime \prime }. \tag{4.2} \end{equation} Taking the partial derivative of $\left( 4.2\right) $ with respect to $y_{p},$ $p=1,2,...,n,$ gives \begin{equation*} \left(\sum_{i=1}^{n}a_{pi}^{2}\right)f_{p}^{\prime \prime \prime}=0, \end{equation*} that is, \begin{equation*} f_{p}\left( y_{p}\right) =\frac{c_{p}}{2\sum_{i=1}^{n}a_{pi}^{2}} y_{p}^{2}+d_{p}y_{p}+e_{p} \end{equation*} for some constants $c_{p},d_{p},e_{p}$ such that $\sum_{i=1}^{n}c_{i}=nH_{0}. $ Therefore we can present the following result. \begin{theorem} Let $M^{n}$ be an affine translation hypersurface in $\mathbb{I}^{n+1}$ with CIMC $H_{0}.$ Then, it is given in explicit form \begin{equation*} \left\{ \begin{array}{l} z\left( x_{1},x_{2},...,x_{n}\right) =\sum_{i=1}^{n}\left(\frac{c_{i}/2}{ \sum_{j=1}^{n}a_{ij}^{2}}\right)y_{i}^{2}+d_{i}y_{i}+e_{i}, \\ \sum_{i=1}^{n}c_{i}=nH_{0},c_{i},d_{i},e_{i}\in \mathbb{R}, \end{array} \right.
\end{equation*} where $\left( y_{1},y_{2},...,y_{n}\right) $ is the affine parameter coordinates given by $\left( 3.1\right) .$ In particular, $M^{n}$ is isotropic minimal provided $\sum_{i=1}^{n}c_{i}=0.$ \end{theorem} Finally we aim to study the affine translation hypersurface $M^{n}$ in $\mathbb{I}^{n+1}$ whose coordinate functions are eigenfunctions of the Laplacian, i.e., that satisfies the condition \begin{equation} \bigtriangleup r_{k}=\lambda _{k}r_{k},\text{ }\lambda _{k}\in \mathbb{R}, \text{ }k=1,2,...,n+1, \tag{4.3} \end{equation} where $r_{k}$ is the coordinate function of the position vector of an arbitrary point on $M^{n}$ and $\bigtriangleup $ the Laplace operator of $M^{n}$ with respect to the induced metric from $\mathbb{I}^{n+1}$. In the particular case $\lambda _{1}=\lambda _{2}=...=\lambda _{n+1}=\lambda ,$ the condition $\left( 4.3\right) $ was first studied for Riemannian submanifolds by Takahashi \cite{29}. Then Garay \cite{10} generalized this condition as follows: \begin{equation*} \bigtriangleup r=Ar, \text{ } A \in \mathbb{R}_{n+1}^{n+1}. \end{equation*} This condition is also related to the notion of \textit{submanifolds of finite type} introduced by Chen (see \cite{4,7}). An affine translation hypersurface $M^{n}$ in $\mathbb{I}^{n+1}$ is of the form \begin{equation*} r\left( x_{1},x_{2},...,x_{n}\right) =\left( x_{1},x_{2},...,x_{n},f_{1}\left( y_{1}\right) +f_{2}\left( y_{2}\right) +...+f_{n}\left( y_{n}\right) \right) , \end{equation*} where $\left( y_{1},y_{2},...,y_{n}\right) $ is the affine parameter coordinates given by $\left( 3.1\right) .$ Let us put \begin{equation} r_{1}=x_{1},\text{ }r_{2}=x_{2},...,\text{ }r_{n}=x_{n} \tag{4.4} \end{equation} and \begin{equation} r_{n+1}=f_{1}\left( y_{1}\right) +f_{2}\left( y_{2}\right) +...+f_{n}\left( y_{n}\right) . \tag{4.5} \end{equation} From (2.6), $\left( 4.4\right) $ and $\left( 4.5\right) ,$ we conclude that \begin{equation} \bigtriangleup r_{1}=\bigtriangleup r_{2}=...=\bigtriangleup r_{n}=0\text{ and }\bigtriangleup r_{n+1}=\sum_{i,j=1}^{n}a_{ij}^{2}f_{i}^{\prime \prime }. \tag{4.6} \end{equation} Now suppose that $M^{n}$ satisfies $\left( 4.3\right) .$ Then $\left( 4.6\right)$ implies $\lambda _{1}=\lambda _{2}=...=\lambda _{n}=0$ and the following system of ordinary differential equations: \begin{equation} \sum_{i,j=1}^{n}a_{ij}^{2}f_{i}^{\prime \prime }=\lambda \sum_{i=1}^{n}f_{i}, \text{ }\lambda _{n+1}=\lambda. \tag{4.7} \end{equation} In the case $\lambda =0,$ $M^{n}$ is isotropic minimal, a case already covered by Theorem 4.2. Hence it is meaningful to assume $\lambda \neq 0.$ Since $f_{1},f_{2},...,f_{n}$ depend on the variables $y_{1},y_{2},...,y_{n}$, (4.7) turns to \begin{equation} \sum_{j=1}^{n}a_{ij}^{2}f_{i}^{\prime \prime }-\lambda f_{i}=\mu _{i}, \tag{4.8} \end{equation} where $\mu _{i}$ are some constants such that $\sum_{i=1}^{n}\mu _{i}=0.$ If $\lambda >0$ in $\left( 4.8\right) ,$ then by solving it we obtain \begin{equation*} f_{i}\left( y_{i}\right) =c_{i}\exp \left( \sqrt{\frac{\lambda }{ \sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) +d_{i}\exp \left( -\sqrt{\frac{ \lambda }{\sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) -\frac{\mu _{i}}{\lambda }, \end{equation*} and if $\lambda <0$ \begin{equation*} f_{i}\left( y_{i}\right) =c_{i}\cos \left( \sqrt{\frac{-\lambda }{ \sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) +d_{i}\sin \left( \sqrt{\frac{ -\lambda }{\sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) -\frac{\mu _{i}}{\lambda }, \end{equation*} where $c_{i},d_{i}$ are some constants.
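As a consistency check (included only for the reader's convenience), note that in the case $\lambda >0$ the functions displayed above indeed solve $\left( 4.8\right) :$ writing $\omega _{i}^{2}=\lambda /\sum_{j=1}^{n}a_{ij}^{2},$ we have $f_{i}^{\prime \prime }=\omega _{i}^{2}\left( f_{i}+\frac{\mu _{i}}{\lambda }\right) ,$ so that \begin{equation*} \sum_{j=1}^{n}a_{ij}^{2}f_{i}^{\prime \prime }-\lambda f_{i}=\lambda \left( f_{i}+\frac{\mu _{i}}{\lambda }\right) -\lambda f_{i}=\mu _{i}, \end{equation*} and the case $\lambda <0$ is analogous. Moreover, since $\sum_{i=1}^{n}\mu _{i}=0,$ the constants $-\mu _{i}/\lambda $ cancel in the sum $z=\sum_{i=1}^{n}f_{i},$ which is why they may be omitted in the explicit forms given in the following theorem.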
Therefore we have proved the next result. \begin{theorem} Let $M^{n}$ be an affine translation hypersurface in $\mathbb{I}^{n+1}$ which is not isotropic minimal and satisfies $\bigtriangleup r_{k}=\lambda _{k}r_{k}$. Then $\left( \lambda_{1},\lambda_{2},...,\lambda_{n+1}\right) =\left( 0,0,...,\lambda \right) $ with $\lambda \neq 0,$ and $M^{n}$ is congruent to the graph of the function either \begin{equation*} z\left( x_{1},x_{2},...,x_{n}\right) =\sum_{i=1}^{n}c_{i}\exp \left( \sqrt{ \frac{\lambda }{\sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) +d_{i}\exp \left( - \sqrt{\frac{\lambda }{\sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) \end{equation*} or \begin{equation*} z\left( x_{1},x_{2},...,x_{n}\right) =\sum_{i=1}^{n}c_{i}\cos \left( \sqrt{ \frac{-\lambda }{\sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) +d_{i}\sin \left( \sqrt{\frac{-\lambda }{\sum_{j=1}^{n}a_{ij}^{2}}}y_{i}\right) , \end{equation*} where $\left( y_{1},y_{2},...,y_{n}\right) $ is the affine parameter coordinates given by $\left( 3.1\right) $ and $c_{i},d_{i}$ some constants. \end{theorem} \end{document}
\begin{document} \title{On the linear stability of vortex columns in the energy space} \author{ {\bf Thierry Gallay}\\ Institut Fourier\\ Universit\'e Grenoble Alpes, CNRS\\ 100 rue des Maths\\ 38610 Gi\`eres, France\\ {\small\tt [email protected]} \and {\bf Didier Smets}\\ Laboratoire Jacques-Louis Lions\\ Sorbonne Universit\'e\\ 4, Place Jussieu\\ 75005 Paris, France\\ {\small\tt [email protected]}} \date{June 17, 2019} \maketitle \begin{abstract} We investigate the linear stability of inviscid columnar vortices with respect to finite energy perturbations. For a large class of vortex profiles, we show that the linearized evolution group has a sub-exponential growth in time, which means that the associated growth bound is equal to zero. This implies in particular that the spectrum of the linearized operator is entirely contained in the imaginary axis. This contribution complements the results of our previous work \cite{GS1}, where spectral stability was established for the linearized operator in the enstrophy space. \end{abstract} \section{Introduction}\label{sec1} It is well known that radially symmetric vortices in two-dimensional incompressible and inviscid fluids are stable if the vorticity distribution is a monotone function of the distance to the vortex center \cite{Arn,MP}. In a three-dimensional framework, this result exactly means that {\em columnar vortices} with no axial flow are stable with respect to two-dimensional perturbations, provided Arnold's monotonicity condition is satisfied. Vortex columns play an important role in nature, especially in atmospheric flows, and are also often observed in laboratory experiments \cite{AKO}. It is therefore of great interest to determine their stability with respect to arbitrary perturbations, with no particular symmetry, but this question appears to be very difficult and the only rigorous results available so far are sufficient conditions for {\em spectral stability}. In a celebrated paper \cite{Ke}, Lord Kelvin considered the particular case of Rankine's vortex and proved that the linearized operator has a countable family of eigenvalues on the imaginary axis. The corresponding eigenfunctions, which are now referred to as {\em Kelvin's vibration modes}, have been extensively studied in the literature, also for more general vortex profiles \cite{FSJ,LL,RS}. An important contribution was made by Lord Rayleigh in \cite{Ra}, who gave a simple condition for spectral stability with respect to axisymmetric perturbations. Rayleigh's criterion, which requires that the angular velocity $\Omega$ and the vorticity $W$ have the same sign everywhere, is actually implied by Arnold's monotonicity condition for localized vortices. In the non-axisymmetric case, the only stability result one can obtain using the techniques introduced by Rayleigh is restricted to perturbations in a particular subspace, where the angular Fourier mode $m$ and the vertical wave number $k$ are fixed. In that subspace, we have a sufficient condition for spectral stability, involving a quantity that can be interpreted as a local Richardson number. However, as is emphasized by Howard and Gupta \cite{HG}, that criterion always fails when the ratio $k^2/m^2$ is sufficiently small, and therefore does note provide any unconditional stability result. In a recent work \cite{GS1}, we perform a rigorous mathematical study of the linearized operator at a columnar vortex, using the vorticity formulation of the Euler equations. 
We assume that the unperturbed vorticity profile satisfies Arnold's monotonicity condition, hence Rayleigh's criterion as well, and we impose an additional condition which happens to be satisfied in all classical examples and may only be technical. We work in the enstrophy space, assuming periodicity (with arbitrary period) in the vertical direction. In this framework, we prove that the spectrum of the linearized operator is entirely contained in the imaginary axis of the complex plane, which gives the first spectral stability result for columnar vortices with smooth velocity profile. More precisely, in any Fourier subspace characterized by its angular mode $m \neq 0$ and its vertical wave number $k \neq 0$, we show that the spectrum of the linearized operator consists of an essential part that fills an interval of the imaginary axis, and of a countable family of imaginary eigenvalues which accumulate only on the essential spectrum (the latter correspond to Kelvin's vibration modes). The most difficult part of our analysis is to preclude the existence of isolated eigenvalues with nonzero real part, which can eventually be done by combining Howard and Gupta's criterion, a homotopy argument, and a detailed analysis of the eigenvalue equation when critical layers occur. The goal of the present paper is to extend the results of \cite{GS1} in several directions. First, we use the velocity formulation of the Euler equations, and assume that the perturbations have finite energy. This functional framework seems more natural than the enstrophy space used in \cite{GS1}, but part of the analysis becomes more complicated. In particular, due to the pressure term in the velocity formulation, it is not obvious that the linearized operator in a given Fourier sector is the sum of a (nearly) skew-symmetric principal part and a compact perturbation. This decomposition, however, is the starting point of our approach, as it shows that the spectrum outside the imaginary axis is necessarily discrete. Also, unlike in \cite{GS1}, we do not have to assume periodicity in the vertical direction, so that our result applies to localized perturbations as well. Finally, we make a step towards linear stability by showing that the evolution group generated by the linearized operator has a mild, sub-exponential growth as $|t| \to \infty$. This is arguably the strongest way to express spectral stability. We now present our result in more detail. We consider the incompressible Euler equations in the whole space $\mathbb{R}^3$\thinspace : \begin{equation}\label{eq:Euler3d} \partial_t u + (u\cdot\nabla)u \,=\, -\nabla p\,, \qquad \mathop{\mathrm{div}}\nolimits u \,=\, 0\,, \end{equation} where $u = u(x,t) \in \mathbb{R}^3$ denotes the velocity of the fluid at point $x = (x_1,x_2,x_3) \in \mathbb{R}^3$ and time $t \in \mathbb{R}$, and $p = p(x,t) \in \mathbb{R}$ is the associated pressure. The solutions we are interested in are perturbations of flows with axial symmetry, and are therefore conveniently described using cylindrical coordinates $(r,\theta,z)$ defined by $x_1 = r\cos\theta$, $x_2 = r\sin\theta$, and $x_3 = z$. The velocity field is decomposed as \[ u \,=\, u_r(r,\theta,z,t) e_r + u_\theta(r,\theta,z,t) e_\theta + u_z(r,\theta,z,t) e_z\,, \] where $e_r$, $e_\theta$, $e_z$ are unit vectors in the radial, azimuthal, and vertical directions, respectively. 
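Let us also recall the elementary identities behind this decomposition (they are standard and are only included here to fix conventions): the moving frame satisfies $\partial_\theta e_r = e_\theta$ and $\partial_\theta e_\theta = -e_r$, so that the convective term expands as \begin{equation*} (u\cdot\nabla)u \,=\, \Bigl((u\cdot\nabla)u_r - \frac{u_\theta^2}{r}\Bigr)e_r + \Bigl((u\cdot\nabla)u_\theta + \frac{u_r u_\theta}{r}\Bigr)e_\theta + (u\cdot\nabla)u_z\, e_z\,. \end{equation*} This is the origin of the terms $-u_\theta^2/r$ and $u_r u_\theta/r$ in the system below.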
The evolution equation in \eqref{eq:Euler3d} is then written in the equivalent form \begin{equation}\label{eq:Eulercyl} \begin{split} \partial_t u_r + (u\cdot\nabla)u_r - \frac{u_\theta^2}{r} \,&=\, -\partial_r p\,, \\ \partial_t u_\theta + (u\cdot\nabla)u_\theta + \frac{u_r u_\theta}{r} \,&=\, -\frac1r \partial_\theta p\,, \\ \partial_t u_z + (u\cdot\nabla)u_z \,&=\, -\partial_z p\,, \end{split} \end{equation} where $u\cdot \nabla = u_r \partial_r + \frac1r u_\theta \partial_\theta + u_z \partial_z$, and the incompressibility condition becomes \begin{equation}\label{eq:incomp} \mathop{\mathrm{div}}\nolimits u \,=\, \frac1r\partial_r (ru_r) + \frac1r \partial_\theta u_\theta + \partial_z u_z \,=\, 0\,. \end{equation} Columnar vortices are described by stationary solutions of \eqref{eq:Eulercyl}, \eqref{eq:incomp} of the following form \begin{equation}\label{eq:column} u \,=\, V(r) \,e_\theta\,, \qquad p \,=\, P(r)\,, \end{equation} where the velocity profile $V : \mathbb{R}_+ \to \mathbb{R}$ is arbitrary, and the pressure $P : \mathbb{R}_+ \to \mathbb{R}$ is determined by the centrifugal balance $rP'(r) = V(r)^2$. Other physically relevant quantities that characterize the vortex are the angular velocity $\Omega$ and the vorticity $W$\thinspace : \begin{equation}\label{eq:OmW} \Omega(r) \,=\, \frac{V(r)}{r}\,, \qquad W(r) \,=\, \frac{1}{r}\,\frac{{\rm d}}{{\rm d} r}\bigl(r V(r)\bigr) \,=\, r \Omega'(r) + 2 \Omega(r)\,. \end{equation} To investigate the stability of the vortex \eqref{eq:column}, we consider perturbed solutions of the form \[ u(r,\theta,z,t) \,=\, V(r) \,e_\theta + \tilde u(r,\theta,z,t)\,, \qquad p(r,\theta,z,t) \,=\, P(r) + \tilde p(r,\theta,z,t)\,. \] Inserting this Ansatz into \eqref{eq:Eulercyl} and neglecting the quadratic terms in $\tilde u$, we obtain the linearized evolution equations \begin{equation}\label{eq:upert} \begin{split} \partial_t u_r + \Omega \partial_\theta u_r - 2 \Omega u_\theta \,&=\, -\partial_r p\,, \\ \partial_t u_\theta + \Omega \partial_\theta u_\theta + W u_r \,&=\, -\frac1r \partial_\theta p\,, \\ \partial_t u_z + \Omega \partial_\theta u_z \,&=\, -\partial_z p\,, \end{split} \end{equation} where we have dropped all tildes for notational simplicity. Remark that the incompressibility condition \eqref{eq:incomp} still holds for the velocity perturbations. Thus, taking the divergence of both sides in \eqref{eq:upert}, we see that the pressure $p$ satisfies the second order elliptic equation \begin{equation}\label{eq:pressure} -\partial_r^* \partial_r p -\frac{1}{r^2}\partial_\theta^2 p -\partial_z^2 p \,=\, 2 \bigl(\partial_r^* \Omega\bigr) \partial_\theta u_r - 2 \partial_r^* \bigl(\Omega\,u_\theta\bigr)\,, \end{equation} where we introduced the shorthand notation $\partial_r^* f = \frac1r \partial_r(rf) = \partial_r f + \frac1r f$. We want to solve the evolution equation \eqref{eq:upert} in the Hilbert space \[ X \,=\, \Bigl\{u = (u_r,u_\theta,u_z) \in L^2(\mathbb{R}^3)^3\,\Big|\, \partial_r^* u_r + \frac1r \partial_\theta u_\theta + \partial_z u_z = 0\Bigr\}\,, \] equipped with the standard $L^2$ norm. Note that the definition of $X$ incorporates the incompressibility condition \eqref{eq:incomp}. In Section~\ref{sec3} we shall verify that, for any $u \in X$, the elliptic equation \eqref{eq:pressure} has a unique solution (up to an irrelevant additive constant) that satisfies $\nabla p \in L^2(\mathbb{R}^3)^3$. 
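For the reader's convenience, we briefly indicate how \eqref{eq:pressure} is obtained; the following computation is only a sketch. Taking the divergence of \eqref{eq:upert}, the contribution of the time derivative vanishes by \eqref{eq:incomp}, while the remaining terms on the left-hand side satisfy \begin{equation*} \partial_r^*\bigl(\Omega\,\partial_\theta u_r - 2\Omega u_\theta\bigr) + \frac1r\partial_\theta\bigl(\Omega\,\partial_\theta u_\theta + W u_r\bigr) + \partial_z\bigl(\Omega\,\partial_\theta u_z\bigr) \,=\, \Omega\,\partial_\theta \mathop{\mathrm{div}}\nolimits u + \Bigl(\Omega' + \frac{W}{r}\Bigr)\partial_\theta u_r - 2\,\partial_r^*\bigl(\Omega\,u_\theta\bigr)\,. \end{equation*} Since $\mathop{\mathrm{div}}\nolimits u = 0$ and $\Omega' + W/r = 2\,\partial_r^*\Omega$ by \eqref{eq:OmW}, this gives the right-hand side of \eqref{eq:pressure}, while the divergence of $-\nabla p$ produces its left-hand side.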
Denoting that solution by $p = P[u]$, we can write Eq.~\eqref{eq:upert} in the abstract form $\partial_t u = L u$, where $L$ is the integro-differential operator in $X$ defined by \begin{equation}\label{eq:Ldef} L u \,=\, \begin{pmatrix*}[l] -\Omega \partial_\theta u_r + 2 \Omega u_\theta -\partial_r P[u] \\[1mm] -\Omega \partial_\theta u_\theta - W u_r -\frac1r\partial_\theta P[u] \\[1mm] -\Omega \partial_\theta u_z -\partial_z P[u]\end{pmatrix*}\,. \end{equation} If the angular velocity $\Omega$ and the vorticity $W$ are, for instance, bounded and continuous functions on $\mathbb{R}_+$, it is not difficult to verify that the operator $L$ generates a strongly continuous group of bounded linear operators in $X$, see Section~\ref{sec2}. Our goal is to show that, under additional assumptions on the vortex profile, the norm of this evolution group has a mild growth as $|t| \to \infty$. Following \cite{GS1}, we make the following assumptions. \noindent{\bf Assumption H1:} {\em The vorticity profile $W : [0,+\infty) \to (0,+\infty)$ is a $\mathcal{C}^2$ function satisfying $W'(0) = 0$, $W'(r) < 0$ for all $r > 0$, $r^3 W'(r) \to 0$ as $r \to \infty$, and} \begin{equation}\label{eq:Winteg} \Gamma \,:=\, \int_0^\infty W(r) r\,{\rm d} r \,<\, \infty\,. \end{equation} According to \eqref{eq:OmW}, the angular velocity $\Omega$ can be expressed in terms of the vorticity $W$ by the formula \begin{equation}\label{eq:Omrep} \Omega(r) \,=\, \frac{1}{r^2}\int_0^r W(s) s\,{\rm d} s\,, \qquad r > 0\,, \end{equation} and the derivative $\Omega'$ satisfies \begin{equation}\label{eq:Omrep2} \Omega'(r) \,=\, \frac{W(r)-2\Omega(r)}{r} \,=\, \frac{1}{r^3} \int_0^r W'(s)s^2\,{\rm d} s\,, \qquad r > 0\,. \end{equation} Thus $\Omega \in \mathcal{C}^2([0,+\infty)) \cap \mathcal{C}^3((0,+\infty))$ is a positive function satisfying $\Omega(0) = W(0)/2$, $\Omega'(0) = 0$, $\Omega'(r) < 0$ for all $r > 0$, and $r^2 \Omega(r) \to \Gamma$ as $r \to \infty$. Moreover, since $W$ is nonincreasing, it follows from \eqref{eq:Winteg} that $r^2 W(r) \to 0$ as $r \to \infty$, and this implies that $r^3 \Omega'(r) \to -2\Gamma$ as $r \to \infty$. Similarly $r^4 \Omega''(r) \to 6\Gamma$ as $r \to \infty$. Finally, assumption H1 implies that the {\em Rayleigh function} is positive\thinspace : \begin{equation}\label{eq:Phidef} \Phi(r) \,=\, 2 \Omega(r) W(r) \,>\, 0\,, \qquad r \ge 0\,. \end{equation} \noindent{\bf Assumption H2:} {\em The $\mathcal{C}^1$ function $J : (0,+\infty) \to (0,+\infty)$ defined by \begin{equation}\label{eq:Jdef} J(r) \,=\, \frac{\Phi(r)}{\Omega'(r)^2}\,, \qquad r > 0\,, \end{equation} satisfies $J'(r) < 0$ for all $r > 0$ and $rJ'(r)\to 0$ as $r\to \infty$.} The reader is referred to the previous work \cite{GS1} for a discussion of these hypotheses. We just recall here that assumptions H1, H2 are both satisfied in all classical examples that can be found in the physical literature. In particular, they hold for the Lamb-Oseen vortex\thinspace : \begin{equation}\label{eq:LOvortex} \Omega(r) \,=\, \frac{1}{r^2}\Bigl(1 - e^{-r^2}\Bigr)\,, \qquad W(r) \,=\, 2\,e^{-r^2}\,, \end{equation} and for the Kaufmann-Scully vortex\thinspace : \begin{equation}\label{eq:KSvortex} \Omega(r) \,=\, \frac{1}{1+r^2}\,, \qquad W(r) \,=\, \frac{2}{(1+r^2)^2}\,. \end{equation} Our main result can now be stated as follows\thinspace : \begin{thm}\label{thm:main} Assume that the vorticity profile $W$ satisfies assumptions H1, H2 above.
Then the linear operator $L$ defined in \eqref{eq:Ldef} is the generator of a strongly continuous group $(e^{tL})_{t \in \mathbb{R}}$ of bounded linear operators in $X$. Moreover, for any $\varepsilon > 0$, there exists a constant $C_\varepsilon \ge 1$ such that \begin{equation}\label{eq:eLbound} \|e^{tL}\|_{X \to X} \,\le\, C_\varepsilon\,e^{\varepsilon |t|}\,, \qquad \hbox{for all } t \in \mathbb{R}\,. \end{equation} \end{thm} \begin{rem}\label{rem:growth} Estimate \eqref{eq:eLbound} means that the {\em growth bound} of the group $e^{tL}$ is equal to zero, see \cite[Section~I.5]{EN}. Equivalently, the spectrum of $e^{tL}$ is contained in the unit circle $\{z \in \mathbb{C}\,|\, |z| = 1\}$ for all $t \in \mathbb{R}$. Invoking the Hille-Yosida theorem, we deduce from \eqref{eq:eLbound} that the spectrum of the generator $L$ is entirely contained in the imaginary axis of the complex plane, and that the following resolvent bound holds for any $a > 0$\thinspace : \begin{equation}\label{eq:resbound} \sup\Bigl\{ \|(z - L)^{-1}\|_{X \to X} \,\Big|\, z \in \mathbb{C}\,,~|\mathrm{Re}(z)| \ge a\Bigr\} \,<\, \infty\,. \end{equation} In fact, since $X$ is a Hilbert space, the Gearhart-Pr\"uss theorem \cite[Section~V.1]{EN} asserts that the resolvent bound \eqref{eq:resbound} is also equivalent to the group estimate \eqref{eq:eLbound}. \end{rem} \begin{rem}\label{rem:Ceps} The constant $C_\varepsilon$ in \eqref{eq:eLbound} may of course blow up as $\varepsilon \to 0$, but unfortunately our proof does not give any precise information. It is reasonable to expect that $C_\varepsilon = \mathcal{O}(\varepsilon^{-N})$ for some $N > 0$, which would imply that $\|e^{tL}\| = \mathcal{O}(|t|^N)$ as $|t| \to \infty$, but proving such an estimate is an open problem. \end{rem} \begin{rem}\label{rem:normal} In \eqref{eq:LOvortex}, \eqref{eq:KSvortex}, and in all that follows, we always assume that the vortex profile is normalized so that $W(0) = 2$, hence $\Omega(0) = 1$. The general case can be easily deduced by a rescaling argument. \end{rem} The rest of this paper is organized as follows. In Section~\ref{sec2}, we describe the main steps in the proof of Theorem~\ref{thm:main}. In particular, we show that the linearized operator \eqref{eq:Ldef} is the generator of a strongly continuous group in the Hilbert space $X$, and we reduce the linearized equations to a family of one-dimensional problems using a Fourier series expansion in the angular variable $\theta$ and a Fourier transform with respect to the vertical variable $z$. For a fixed value of the angular Fourier mode $m \in \mathbb{Z}$ and of the vertical wave number $k \in \mathbb{R}$, we show that the restricted linearized operator $L_{m,k}$ is the sum of a (nearly) skew-symmetric part $A_m$ and of a compact perturbation $B_{m,k}$. Actually, proving compactness of $B_{m,k}$ requires delicate estimates on the pressure, which are postponed to Section~\ref{sec3}. We then invoke the result of \cite{GS1} to show that $L_{m,k}$ has no eigenvalue, hence no spectrum, outside the imaginary axis. The last step in the proof consists in showing that, for any $a \neq 0$, the resolvent norm $\|(s - L_{m,k})^{-1}\|$ is uniformly bounded for all $m \in \mathbb{Z}$, all $k \in \mathbb{R}$, and all $s \in \mathbb{C}$ with $\mathrm{Re}(s) = a$.
This crucial bound is obtained in Section~\ref{sec4} using a priori estimates for the resolvent equation, which give explicit bounds in some regions of the parameter space, combined with a contradiction argument which takes care of the other regions. The proof of Theorem~\ref{thm:main} is thus concluded at the end of Section~\ref{sec2}, taking for granted the results of Sections~\ref{sec3} and \ref{sec4} which are the main original contributions of this paper. \noindent{\bf Acknowledgements.} This work was partially supported by grants ANR-18-CE40-0027 (Th.G.) and ANR-14-CE25-0009-01 (D.S.) from the ``Agence Nationale de la Recherche''. The authors warmly thank an anonymous referee for suggesting a more natural way to prove compactness of the operator $B_{m,k}$, which is now implemented in Section~\ref{sec32}. \section{Main steps of the proof} \label{sec2} The proof of Theorem~\ref{thm:main} can be divided into four main steps, which are detailed in the following subsections. The first two steps are rather elementary, but the remaining two require more technical calculations which are postponed to Sections~\ref{sec3} and \ref{sec4}. \subsection{Splitting of the linearized operator}\label{sec21} The linearized operator \eqref{eq:Ldef} can be decomposed as $L = A + B$, where $A$ is the first order differential operator \begin{equation}\label{eq:Adef} A u \,=\, -\Omega(r)\partial_\theta u + r\Omega'(r)u_r \,e_\theta\,, \end{equation} and $B$ is the nonlocal operator \begin{equation}\label{eq:Bdef} B u \,=\, -\nabla P[u] + 2\Omega u_\theta e_r - 2 (r\Omega)' u_r e_\theta\,. \end{equation} We recall that $W = r\Omega' + 2\Omega$, and that $P[u]$ denotes the solution $p$ of the elliptic equation \eqref{eq:pressure}. As is easily verified, both operators $A$ and $B$ preserve the incompressibility condition $\mathop{\mathrm{div}}\nolimits u = 0$, and this is precisely the reason for which we included the additional term $r\Omega'(r)u_r \,e_\theta$ in the definition \eqref{eq:Adef} of the advection operator $A$. \begin{lem}\label{lem:AB} Under assumption H1, the linear operator $A$ is the generator of a strongly continuous group in the Hilbert space $X$, and $B$ is a bounded linear operator in $X$. \end{lem} \begin{proof} The evolution equation $\partial_t u = Au$ is equivalent to the system \[ \partial_t u_r + \Omega(r) \partial_\theta u_r \,=\, 0\,, \quad \partial_t u_\theta + \Omega(r) \partial_\theta u_\theta \,=\, r \Omega' u_r\,, \quad \partial_t u_z + \Omega(r) \partial_\theta u_z \,=\, 0\,, \] which has the explicit solution \begin{align}\nonumber u_r(r,\theta,z,t) \,&=\, u_r\bigl(r,\theta-\Omega(r) t,z,0\bigr)\,, \\ \label{eq:Agroup} u_\theta(r,\theta,z,t) \,&=\, u_\theta\bigl(r,\theta-\Omega(r) t,z,0\bigr) + r\Omega'(r)t \,u_r\bigl(r,\theta-\Omega(r) t,z,0\bigr)\,, \\ \nonumber u_z(r,\theta,z,t) \,&=\, u_z\bigl(r,\theta-\Omega(r) t,z,0\bigr)\,, \end{align} for any $t \in \mathbb{R}$. Under assumption H1, the functions $\Omega$ and $r \mapsto r \Omega'(r)$ are bounded on $\mathbb{R}_+$. With this information at hand, it is straightforward to verify that the formulas \eqref{eq:Agroup} define a strongly continuous group $(e^{tA})_{t \in \mathbb{R}}$ of bounded operators in $X$. Moreover, there exists a constant $C > 0$ such that $\|e^{tA}\|_{X \to X} \le C(1+|t|)$ for all $t \in \mathbb{R}$.
On the other hand, in view of definition \eqref{eq:pressure}, the pressure $p = P[u]$ satisfies the energy estimate \begin{equation}\label{eq:pressurest} \|\partial_r p\|_{L^2(\mathbb{R}^3)}^2 + \|\frac{1}{r} \partial_\theta p\|_{L^2(\mathbb{R}^3)}^2 + \|\partial_z p\|_{L^2(\mathbb{R}^3)}^2 \,\le\, C\Bigl(\|u_r\|_{L^2(\mathbb{R}^3)}^2 + \|u_\theta\|_{L^2(\mathbb{R}^3)}^2\Bigr)\,, \end{equation} which is established in Section~\ref{sec3}, see Remark~\ref{rem:p} below. This shows that $B$ is a bounded linear operator in $X$. \end{proof} It follows from Lemma~\ref{lem:AB} and standard perturbation theory \cite[Section~III.1]{EN} that the linear operator $L = A+B$ is the generator of a strongly continuous group of bounded operators in $X$. Our goal is to show that, under appropriate assumptions on the vortex profile, this evolution group has a mild (i.e., sub-exponential) growth as $|t| \to \infty$, as specified in \eqref{eq:eLbound}. \subsection{Fourier decomposition}\label{sec22} To fully exploit the symmetries of the linearized operator \eqref{eq:Ldef}, whose coefficients only depend on the radial variable $r$, it is convenient to look for velocities and pressures of the following form \begin{equation}\label{eq:upFour} u(r,\theta,z,t) \,=\, u_{m,k}(r,t)\,e^{im\theta}\,e^{ikz}\,, \qquad p(r,\theta,z,t) \,=\, p_{m,k}(r,t)\,e^{im\theta}\,e^{ikz}\,, \end{equation} where $m \in \mathbb{Z}$ is the angular Fourier mode and $k \in \mathbb{R}$ is the vertical wave number. Of course, we assume that $\overline{u_{m,k}} = u_{-m,-k}$ and $\overline{p_{m,k}} = p_{-m,-k}$ so as to obtain real-valued functions after summing over all possible values of $m,k$. When restricted to the Fourier sector \begin{equation}\label{eq:Xmkdef} X_{m,k} \,=\, \Bigl\{u = (u_r,u_\theta,u_z) \in L^2(\mathbb{R}_+,r\,{\rm d} r)^3\,\Big|\, \partial_r^* u_r + \frac{im}{r} u_\theta + ik u_z = 0\Bigr\}\,, \end{equation} the linear operator \eqref{eq:Ldef} reduces to the one-dimensional operator \begin{equation}\label{eq:Lmkdef} L_{m,k} u \,=\, \begin{pmatrix*}[l] -im\Omega u_r + 2 \Omega u_\theta -\partial_r P_{m,k}[u] \\[1mm] -im\Omega u_\theta - W u_r -\frac{im}{r}P_{m,k}[u] \\[1mm] -im\Omega u_z -ik P_{m,k}[u]\end{pmatrix*}\,, \end{equation} where $P_{m,k}[u]$ denotes the solution $p$ of the following elliptic equation on $\mathbb{R}_+$\thinspace : \begin{equation}\label{eq:pmk} -\partial_r^* \partial_r p +\frac{m^2}{r^2}\,p + k^2 p \,=\, 2im \bigl(\partial_r^* \Omega\bigr) u_r - 2 \partial_r^* \bigl(\Omega\,u_\theta\bigr)\,. \end{equation} As in Section~\ref{sec21}, we decompose $L_{m,k} = A_m + B_{m,k}$, where \begin{equation}\label{eq:ABmkdef} A_m u \,=\, \begin{pmatrix*}[l] -im \Omega u_r \\[1mm] -im \Omega u_\theta + r\Omega'u_r \\[1mm] -im \Omega u_z \end{pmatrix*}\,, \qquad B_{m,k} u \,=\, \begin{pmatrix*}[l] -\partial_r P_{m,k}[u] + 2 \Omega u_\theta \\[1mm] -\frac{im}{r} P_{m,k}[u] - 2(r\Omega)' u_r \\[1mm] -ik P_{m,k}[u] \end{pmatrix*}\,. \end{equation} The following result is the analog of \cite[Proposition~2.1]{GS1} in the present context. \begin{prop}\label{prop:AB} Assume that the vorticity profile $W$ satisfies assumption H1 and the normalization condition $W(0) = 2$.
For any $m \in \mathbb{Z}$ and any $k \in \mathbb{R}$,\\[1mm] 1) The linear operator $A_m$ defined by \eqref{eq:ABmkdef} is bounded in $X_{m,k}$ with spectrum given by \begin{equation}\label{eq:spAm} \sigma(A_m) \,=\, \Bigl\{z \in \mathbb{C}\,\Big|\, z = -imb \hbox{ for some } b \in [0,1]\Bigr\}\,; \end{equation} 2) The linear operator $B_{m,k}$ defined by \eqref{eq:ABmkdef} is compact in $X_{m,k}$. \end{prop} \begin{proof} Definition \eqref{eq:ABmkdef} shows that $A_m$ is essentially the multiplication operator by the function $-im\Omega$, whose range is precisely the imaginary interval \eqref{eq:spAm} since the angular velocity is normalized so that $\Omega(0) = 1$. So the first assertion in Proposition~\ref{prop:AB} is rather obvious, and can be established rigorously by studying the resolvent operator $(z-A_m)^{-1}$, see \cite[Proposition~2.1]{GS1}. The proof of the second assertion requires careful estimates on a number of quantities related to the pressure, and is postponed to Section~\ref{sec32}. \end{proof} \subsection{Control of the discrete spectrum}\label{sec23} For any $m \in \mathbb{Z}$ and any $k \in \mathbb{R}$, it follows from Proposition~\ref{prop:AB} and Weyl's theorem \cite[Theorem~I.4.1]{EE} that the {\em essential spectrum} of the operator $L_{m,k} = A_m + B_{m,k}$ is the purely imaginary interval \eqref{eq:spAm}, whereas the rest of the spectrum entirely consists of isolated eigenvalues with finite multiplicities\footnote{It is not difficult to verify that, in the present case, the various definitions of the essential spectrum given e.g. in \cite[Section~I.4]{EE} are all equivalent.}. To prove spectral stability, it is therefore sufficient to show that $L_{m,k}$ has no {\em eigenvalue} outside the imaginary axis. Given any $s \in \mathbb{C}$ with $\mathrm{Re}(s) \neq 0$, the eigenvalue equation $(s - L_{m,k})u = 0$ is equivalent to the system \begin{equation}\label{eq:eigsys} \begin{array}{l} \gamma(r) u_r - 2\Omega(r)u_\theta \,=\, -\partial_r p\,, \\[1mm] \gamma(r) u_\theta + W(r)u_r \,=\, -\frac{im}{r} p\,, \\[1mm] \gamma(r) u_z \,=\, -ik p\,, \end{array} \qquad\quad \partial_r^* u_r + \frac{im}{r} u_\theta + ik u_z \,=\, 0\,, \end{equation} where $\gamma(r) = s + im\Omega(r)$. If $(m,k) \neq (0,0)$, one can eliminate the pressure $p$ and the velocity components $u_\theta$, $u_z$ from system \eqref{eq:eigsys}, which then reduces to a scalar equation for the radial velocity only\thinspace : \begin{equation}\label{eq:eigscalar} -\partial_r \biggl(\frac{r^2 \partial_r^* u_r}{m^2 + k^2 r^2}\biggr) + \biggl\{1 + \frac{1}{\gamma(r)^2}\frac{k^2 r^2 \Phi(r)}{m^2 + k^2r^2} + \frac{imr}{\gamma(r)}\partial_r \Bigl(\frac{W(r)}{m^2+k^2r^2}\Bigr)\biggr\}u_r \,=\, 0\,, \end{equation} where $\Phi = 2\Omega W$ is the Rayleigh function. The derivation of \eqref{eq:eigscalar} is standard and can be found in many textbooks, see e.g. \cite[Section 15]{DR}. It is reproduced in Section~\ref{sec41} below in the more general context of the resolvent equation. The main result of our previous work on columnar vortices can be stated as follows. \begin{prop}\label{prop:GS1} {\bf\cite{GS1}} Under assumptions H1 and H2, the elliptic equation \eqref{eq:eigscalar} has no nontrivial solution $u_r \in L^2(\mathbb{R}_+,r\,{\rm d} r)$ if $\mathrm{Re}(s) \neq 0$. \end{prop} \begin{cor}\label{cor:GS1} Under assumptions H1 and H2, the operator $L_{m,k}$ in $X_{m,k}$ has no eigenvalue outside the imaginary axis.
\end{cor} \begin{proof} Assume that $u \in X_{m,k}$ satisfies $L_{m,k} u = su$ for some complex number $s$ with $\mathrm{Re}(s) \neq 0$. If $m = k = 0$, the incompressibility condition shows that $\partial_r^* u_r = 0$, hence $u_r = 0$, and since $\gamma(r) = s \neq 0$ the second and third relations in \eqref{eq:eigsys} imply that $u_\theta = u_z = 0$. If $(m,k) \neq (0,0)$, the radial velocity $u_r$ satisfies \eqref{eq:eigscalar}, and Proposition~\ref{prop:GS1} asserts that $u_r = 0$. Using the relations \eqref{eq:uthetaexp}, \eqref{eq:uzexp} below (with $f = 0$), we conclude that $u_\theta = u_z = 0$. \end{proof} \subsection{Uniform resolvent estimates}\label{sec24} Under assumptions H1, H2, it follows from Proposition~\ref{prop:AB} and Corollary~\ref{cor:GS1} that the spectrum of the linear operator $L_{m,k} = A_m + B_{m,k}$ is entirely located on the imaginary axis. Equivalently, for any $s \in \mathbb{C}$ with $\mathrm{Re}(s) \neq 0$, the resolvent $(s - L_{m,k})^{-1}$ is well defined as a bounded linear operator in $X_{m,k}$. The main technical result of the present paper, whose proof is postponed to Section~\ref{sec4} below, asserts that the resolvent bound is uniform with respect to the Fourier parameters $m$ and $k$, and to the spectral parameter $s \in \mathbb{C}$ if $\mathrm{Re}(s)$ is fixed. \begin{prop}\label{prop:main} Assume that the vortex profile satisfies assumptions H1, H2. Then for any real number $a \neq 0$, one has \begin{equation}\label{eq:unifres} \sup_{\mathrm{Re}(s) = a} \,\sup_{m \in \mathbb{Z}} ~\sup_{k \in \mathbb{R}}~ \bigl\|(s - L_{m,k})^{-1}\bigr\|_{X_{m,k} \to X_{m,k}} \,<\, \infty\,. \end{equation} \end{prop} Equipped with the uniform resolvent estimate given by Proposition~\ref{prop:main}, it is now straightforward to conclude the proof of our main result. \noindent{\bf End of the proof of Theorem~\ref{thm:main}.} We know from Lemma~\ref{lem:AB} that the operator $L$ defined by \eqref{eq:Ldef} is the generator of a strongly continuous group of bounded linear operators in the Hilbert space $X$. For any $a \neq 0$, we set \begin{equation}\label{eq:Fdef} F(a) \,=\, \sup_{\mathrm{Re}(s) = a} \bigl\|(s - L)^{-1}\bigr\|_{X \to X} ~\leq\, \sup_{\mathrm{Re}(s) = a} \,\sup_{m \in \mathbb{Z}} ~\sup_{k \in \mathbb{R}}~ \bigl\|(s - L_{m,k})^{-1}\bigr\|_{X_{m,k} \to X_{m,k}}\,, \end{equation} where the last inequality follows from Parseval's theorem. The function $F : \mathbb{R}^* \to (0,\infty)$ defined by \eqref{eq:Fdef} is even by symmetry, and a straightforward perturbation argument shows that \[ \frac{F(a)}{1 + |b| F(a)} \,\le\, F(a+b) \,\le\, \frac{F(a)}{1 - |b| F(a)}\,, \] for all $a \neq 0$ and all $b \in \mathbb{R}$ with $|b| F(a) < 1/2$, so that $F$ is continuous. Moreover, the Hille-Yosida theorem \cite[Theorem~II.3.8]{EN} asserts that $F(a) = \mathcal{O}(|a|^{-1})$ as $|a| \to \infty$, and it follows that the resolvent bound \eqref{eq:resbound} holds for any $a > 0$. In particular, given any $\varepsilon > 0$, the semigroup $\bigl(e^{t(L-\varepsilon)}\bigr)_{t \ge 0}$ satisfies the assumptions of the Gearhart-Pr\"uss theorem \cite[Theorem~V.1.11]{EN}, and is therefore uniformly bounded. This gives the desired bound \eqref{eq:eLbound} for positive times, and a similar argument yields the corresponding estimate for $t \le 0$. The proof of Theorem~\ref{thm:main} is thus complete. \qed \section{Estimates for the pressure} \label{sec3} In this section, we give detailed estimates on the pressure $p = P_{m,k}[u]$ satisfying \eqref{eq:pmk}.
That quantity appears in all components of the vector-valued operator $B_{m,k}$ introduced in \eqref{eq:ABmkdef}, and our ultimate goal is to prove the last part of Proposition~\ref{prop:AB}, which asserts that $B_{m,k}$ is a compact operator in the space $X_{m,k}$. We assume henceforth that the vorticity profile $W$ satisfies assumption~H1 in Section~\ref{sec1}. To derive energy estimates, it is convenient in a first step to suppose that the divergence-free vector field $u \in X_{m,k}$ is smooth and has compact support in $(0,+\infty)$. As is shown in Proposition~\ref{prop:approximation} in the Appendix, the family of all such vector fields is dense in $X_{m,k}$, and the estimates obtained in that particular case remain valid for all $u \in X_{m,k}$ by a simple continuity argument. With this observation in mind, we now proceed assuming that $u$ is smooth and compactly supported. Equation \eqref{eq:pmk} has a unique solution $p$ such that the quantities $\partial_r p$, $mp/r$, and $kp$ all belong to $L^2(\mathbb{R}_+,r\,{\rm d} r)$; the only exception is the particular case $m = k =0$ where uniqueness holds up to an additive constant. One possibility to justify this claim is to return to the cartesian coordinates and to consider the elliptic equation \eqref{eq:pressure} for the pressure $p : \mathbb{R}^3 \to \mathbb{R}$, which can be written in the form \begin{equation}\label{eq:pressurebis} - \Delta p \,=\, 2r\Omega'(r) \bigl(e_r, (\nabla u)e_\theta\bigr) - 2 \Omega(r) (\mathop{\mathrm{curl}} u)\cdot e_z\,, \end{equation} where $r = (x_1^2 + x_2^2)^{1/2}$. Note that the right-hand side is regular (of class $\mathcal{C}^2$ under assumption H1) and has compact support in the horizontal variables. Eq.~\eqref{eq:pressurebis} thus holds in the classical sense, and uniqueness up to a constant of a bounded solution $p$ is a consequence of Liouville's theorem for harmonic functions in $\mathbb{R}^3$. In our framework, however, using \eqref{eq:pressurebis} is not the easiest way to prove existence, because according to \eqref{eq:upFour} we are interested in solutions which depend on the variables $\theta$, $z$ in a specific way, and do not decay to zero in the vertical direction. If we restrict ourselves to the Fourier sector indexed by $(m,k)$, existence of a solution to \eqref{eq:pmk} is conveniently established using the explicit representation formulas collected in Lemma~\ref{lem:rep} below. As can be seen from these expressions, the solution $p$ of \eqref{eq:pmk} is smooth near the origin and satisfies the homogeneous Dirichlet condition at the artificial boundary $r = 0$ if $|m| \ge 1$, and the homogeneous Neumann condition if $m = 0$ or $|m| \ge 2$. As $r \to \infty$, it follows from \eqref{eq:repmk}, \eqref{eq:repm0} that $p(r)$ decays to zero exponentially fast if $k \neq 0$, and behaves like $r^{-|m|}$ if $m \neq 0$ and $k = 0$. In the very particular case where $m = k = 0$, the pressure vanishes near infinity if $u$ has compact support. Boundary conditions and decay properties for the derivatives of $p$ can be derived in a similar way, and will (often implicitly) be used in the proofs below to neglect boundary terms when integrating by parts. For functions or vector fields defined on $\mathbb{R}_+$, we always use in the sequel the notation $\|\cdot\|_{L^2}$ to denote the Lebesgue $L^2$ norm with respect to the measure $r\,{\rm d} r$. The corresponding Hermitian inner product will be denoted by $\langle \cdot,\cdot\rangle$.
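As a simple illustration of these properties, which is not needed in what follows, consider the purely radial sector $m = k = 0$, where everything can be computed by hand. In that case \eqref{eq:pmk} reduces to $\partial_r^*\bigl(\partial_r p - 2\Omega u_\theta\bigr) = 0$, so that $r\bigl(\partial_r p(r) - 2\Omega(r)u_\theta(r)\bigr)$ is constant on $\mathbb{R}_+$; for the smooth, compactly supported vector fields considered above, the pressure is regular at the origin and the constant therefore vanishes, so that \[ \partial_r p(r) \,=\, 2\,\Omega(r)\,u_\theta(r)\,, \qquad r > 0\,, \] which is nothing but the linearization of the centrifugal balance $rP'(r) = V(r)^2$ around the vortex \eqref{eq:column}. In particular $\|\partial_r p\|_{L^2} \le 2\|\Omega\|_{L^\infty}\|u_\theta\|_{L^2}$, in agreement with the general energy estimate \eqref{eq:pmkest} below.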
\subsection{Energy estimates}\label{sec31} Throughout this section, we assume that $u \in X_{m,k}$ and we denote by $p = P_{m,k}[u]$ the solution of \eqref{eq:pmk} given by Lemma~\ref{lem:rep}. We begin with a standard $L^2$ energy estimate. \begin{lem}\label{lem:ell} For any $u \in X_{m,k}$ we have \begin{equation}\label{eq:pmkest} \|\partial_r p\|_{L^2}^2 + \Bigl\|\frac{mp}{r}\Bigr\|_{L^2}^2 + \|kp\|_{L^2}^2 \,\le\, C \bigl(\|u_r\|_{L^2}^2 + \|u_\theta\|_{L^2}^2\bigr)\,, \end{equation} where the constant $C > 0$ depends only on $\Omega$. \end{lem} \begin{proof} By density, it is sufficient to prove \eqref{eq:pmkest} under the additional assumption that $u$ is smooth and compactly supported in $(0,+\infty)$. To do so, we multiply both sides of \eqref{eq:pmk} by $r\bar p$ and integrate the result over $\mathbb{R}_+$. Integrating by parts and using H\"older's inequality, we obtain \begin{align*} \|\partial_r p\|_{L^2}^2 + \Bigl\|\frac{mp}{r}\Bigr\|_{L^2}^2 + \|kp\|_{L^2}^2 \,&=\, \int_0^\infty \bar p\Bigl(\frac{2im}{r} (r\Omega)' u_r - 2 \partial_r^* (\Omega u_\theta)\Bigr)r\,{\rm d} r \\ \,&=\, \int_0^\infty \Bigl(\frac{2im}{r}(r\Omega)' \bar p u_r + 2(\partial_r \bar p) \Omega u_\theta\Bigr)r\,{\rm d} r \\ \,&\le\, 2 \|(r\Omega)'\|_{L^\infty} \Bigl\|\frac{mp}{r}\Bigr\|_{L^2} \|u_r\|_{L^2} + 2 \|\Omega\|_{L^\infty}\|\partial_r p\|_{L^2} \|u_\theta\|_{L^2}\,, \end{align*} hence \[ \|\partial_r p\|_{L^2}^2 + \Bigl\|\frac{mp}{r}\Bigr\|_{L^2}^2 + \|kp\|_{L^2}^2 \,\le\, 4 \|(r\Omega)'\|_{L^\infty}^2 \|u_r\|_{L^2}^2 + 4 \|\Omega\|_{L^\infty}^2 \|u_\theta\|_{L^2}^2\,. \] Note that, by assumption H1, $\|(1{+}r)^2\Omega\|_{L^\infty} + \|(1{+}r)^3\Omega'\|_{L^\infty} + \|(1{+}r)^4\Omega''\|_{L^\infty} < \infty$. \end{proof} \begin{rem}\label{rem:p} The integrated pressure bound \eqref{eq:pressurest} can be established by an energy estimate as in the proof of Lemma~\ref{lem:ell}, or can be directly deduced from \eqref{eq:pmkest} using Parseval's theorem. \end{rem} For later use, we also show that the solution $p = P_{m,k}[u]$ of \eqref{eq:pmk} depends continuously on the parameter $k$ as long as $k \neq 0$. \begin{lem}\label{lem:pcont} Assume that $u_1 \in X_{m,k_1}$ and $u_2 \in X_{m,k_2}$, where $m \in \mathbb{Z}$ and $k_1, k_2 \neq 0$. If we denote $p = P_{m,k_1}[u_1] - P_{m,k_2}[u_2]$, we have the estimate \begin{equation}\label{eq:deltap} \|\partial_r p\|_{L^2}^2 + \Bigl\|\frac{mp}{r}\Bigr\|_{L^2}^2 + \|k_1 p\|_{L^2}^2 \,\le\, C \biggl(\|u_1 {-} u_2\|_{L^2}^2 + \|u_2\|_{L^2}^2 \Bigl|\frac{k_1}{k_2} - \frac{k_2}{k_1}\Bigr|^2 \biggr)\,, \end{equation} where the constant $C > 0$ depends only on $\Omega$. \end{lem} \begin{proof} In view of \eqref{eq:pmk}, the difference $p = p_1 - p_2 \equiv P_{m,k_1}[u_1] - P_{m,k_2}[u_2]$ satisfies the equation \[ -\partial_r^* \partial_r p +\frac{m^2}{r^2}\,p + k_1^2 p \,=\, \frac{2im}{r}\bigl(r \Omega\bigr)'(u_{1,r} {-} u_{2,r}) - 2 \partial_r^* \bigl(\Omega\,(u_{1,\theta} {-} u_{2,\theta})\bigr) + (k_2^2 - k_1^2)p_2\,. \] As in the proof of Lemma~\ref{lem:ell}, we multiply both sides by $r\bar p$ and we integrate over $\mathbb{R}_+$. Integrating by parts and using H\"older's inequality, we easily obtain \[ \|\partial_r p\|_{L^2}^2 + \Bigl\|\frac{mp}{r}\Bigr\|_{L^2}^2 + \|k_1p\|_{L^2}^2 \,\le\, C \biggl(\|u_1 {-} u_2\|_{L^2}^2 + \|k_2p_2\|_{L^2}^2 \Bigl|\frac{k_1}{k_2} - \frac{k_2}{k_1}\Bigr|^2\biggr)\,, \] where the constant $C > 0$ depends only on $\|\Omega\|_{L^\infty}$ and $\|(r\Omega)'\|_{L^\infty}$.
As $\|k_2 p_2\|_{L^2} \le C\|u_2\|_{L^2}$ by \eqref{eq:pmkest}, this gives the desired result. \end{proof} Finally, we derive a weighted estimate which allows us to control the pressure $p = P_{m,k}[u]$ in the far-field region where $r \gg 1$. \begin{lem}\label{lem:pressionloin} Assume that $k \neq 0$ or $|m| \ge 2$. If $u \in X_{m,k}$ and $p = P_{m,k}[u]$, then \begin{equation}\label{eq:pressionloin} \|r \partial_r p\|_{L^2}^2 + \|m p\|_{L^2}^2 + \|k r p\|_{L^2}^2 \,\le\, 3 \|p\|_{L^2}^2 + C \bigl(\|u_r\|_{L^2}^2 + \|u_\theta\|_{L^2}^2\bigr)\,, \end{equation} where the constant $C > 0$ depends only on $\Omega$. If $|m| \ge 2$ the first term in the right-hand side can be omitted. \end{lem} \begin{proof} We multiply both sides of \eqref{eq:pmk} by $r^3\bar p$ and integrate the result over $\mathbb{R}_+$. Note that the integrand decays to zero exponentially fast if $k \neq 0$, and like $r^{1 - 2|m|}$ if $k = 0$, so that the integral converges if we assume that $k \neq 0$ or $|m| \ge 2$. After integrating by parts, we obtain the identity \begin{equation}\label{eq:loin1} \|r \partial_r p\|_{L^2}^2 + \|m p\|_{L^2}^2 + \|k r p\|_{L^2}^2 \,=\, 2 \|p\|_{L^2}^2 + \mathrm{Re}\bigl(I_1 + I_2\bigr)\,, \end{equation} where $I_1 = 2im \langle rp,(r\Omega)' u_r\rangle$ and $I_2 = 2 \langle \partial_r (r^2 p),\Omega u_\theta\rangle = 2 \langle r \partial_r p,r\Omega u_\theta\rangle + 4 \langle p,r\Omega u_\theta\rangle$. We observe that \begin{align*} |I_1| \,&\le\, 2 \|r(r\Omega)'\|_{L^\infty} \|mp\|_{L^2} \|u_r\|_{L^2} \,\le\, \frac14 \|m p\|_{L^2}^2 + 4\|r(r\Omega)'\|_{L^\infty}^2 \|u_r\|_{L^2}^2\,, \\ |I_2| \,&\le\, 2 \|r\Omega\|_{L^\infty} \Bigl(\|r\partial_r p\|_{L^2} + 2 \|p\|_{L^2} \Bigr) \|u_\theta\|_{L^2} \,\le\, \frac14 \Bigl(\|r\partial_r p\|_{L^2}^2 + \|p\|_{L^2}^2 \Bigr) + 20 \|r\Omega\|_{L^\infty}^2 \|u_\theta\|_{L^2}^2\,, \end{align*} and inserting these estimates into \eqref{eq:loin1} we obtain \eqref{eq:pressionloin}. If $|m| \ge 2$, then $3\|p\|_{L^2}^2 \le \frac34 \|mp\|_{L^2}^2$, so that the first term in the right-hand side of \eqref{eq:pressionloin} can be included in the left-hand side. \end{proof} \begin{cor}\label{cor:comp1} For any $m \in \mathbb{Z}$ and any $k \in \mathbb{R}$, the linear map $u \mapsto k P_{m,k}[u]$ from $X_{m,k}$ into $L^2(\mathbb{R}_+,r\,{\rm d} r)$ is compact. \end{cor} \begin{proof} We can of course assume that $k \neq 0$. If $u$ lies in the unit ball of $X_{m,k}$, it follows from estimates \eqref{eq:pmkest} and \eqref{eq:pressionloin} that $\|k \partial_r p\|_{L^2}^2 + \|k r p\|_{L^2}^2 \le C(k,\Omega)$ for some constant $C(k,\Omega)$ independent of $u$. Applying Lemma~\ref{lem:compcrit}, we conclude that the map $u \mapsto kp$ is compact. \end{proof} \subsection{Compactness results}\label{sec32} The aim of this section is to complete the proof of Proposition~\ref{prop:AB}, by showing the compactness of the linear operator $B_{m,k}$ defined in \eqref{eq:ABmkdef}. In view of Corollary~\ref{cor:comp1}, which already settles the case of the third component $B_{m,k,z}u := -ikP_{m,k}[u]$, we are left to prove that the linear mappings \begin{align*} u \mapsto B_{m,k,r}u \,&:=\, -\partial_r P_{m,k}[u] + 2\Omega u_\theta\,, \qquad \hbox{and} \\ u \mapsto B_{m,k,\theta}u \,&:=\, -\frac{im}{r}P_{m,k}[u] - 2(r\Omega)' u_r\,, \end{align*} are compact from $X_{m,k}$ to $L^2(\mathbb{R}_+,r\,{\rm d} r)$, for any $m \in \mathbb{Z}$ and any $k \in \mathbb{R}$.
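Before going into the details, let us sketch, at a purely heuristic level, why estimates of the type derived below imply compactness; the precise criterion we invoke is Lemma~\ref{lem:compcrit} in the Appendix. If a family of functions $g$ on $\mathbb{R}_+$ satisfies a uniform bound $\|\partial_r g\|_{L^2} + \|r g\|_{L^2} \le C$, then on the one hand \[ \int_R^\infty |g(r)|^2\,r\,{\rm d} r \,\le\, \frac{1}{R^2}\,\|r g\|_{L^2}^2 \,\le\, \frac{C^2}{R^2}\,, \qquad R > 0\,, \] so that no mass escapes to infinity, and on the other hand the uniform control of the derivative rules out oscillations on the bounded region $\{r \le R\}$, by a Rellich-type argument. Both facts together yield relative compactness in $L^2(\mathbb{R}_+,r\,{\rm d} r)$. The estimates established below for the components of $B_{m,k}u$, as $u$ ranges over the unit ball of $X_{m,k}$, are exactly of this form, with slightly weaker weights in the borderline case $|m| = 1$, $k = 0$.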
In the sequel, to simplify the notation, we write $B_r$, $B_\theta$, $B_z$ instead of $B_{m,k,r}u$, $B_{m,k,\theta}u$, $B_{m,k,z}u$, respectively. We first treat the simple particular case where $m = 0$. \begin{lem}\label{lem:estimBm0} If $m = 0$ and $u \in X_{0,k}$, then \begin{equation}\label{eq:estimBm0} \|\partial_r^* B_r\|_{L^2} + \|\partial_r^* B_\theta\|_{L^2} + \|rB_r\|_{L^2} + \|rB_\theta\|_{L^2} \,\le\, C(k,\Omega) \|u\|_{L^2}\,, \end{equation} where the constant $C(k,\Omega)$ depends only on $k$ and $\Omega$. \end{lem} \begin{proof} If $m = 0$ and $k = 0$, then $\partial_r^* u_r = \mathop{\mathrm{div}}\nolimits u = 0$, and this implies that $u_r = 0$. Similarly, the incompressibility condition for the vector $B$ implies that $B_r = 0$, and in view of \eqref{eq:ABmkdef} it follows that $B$ vanishes identically. Thus estimate \eqref{eq:estimBm0} is trivially satisfied in that case. If $m = 0$ and $k \neq 0$, we deduce from \eqref{eq:pmkest} and \eqref{eq:pressionloin} that \[ \|rB_r\|_{L^2} \,\le\, \|r\partial_r p\|_{L^2} + 2\|r\Omega\|_{L^\infty} \|u_\theta\|_{L^2} \,\le\, C(k,\Omega) \bigl( \|u_r\|_{L^2} + \|u_\theta\|_{L^2} \bigr)\,, \] and \[ \|rB_\theta\|_{L^2} \,=\, 2\|r(r\Omega)'u_r\|_{L^2} \,\le\, 2 \|r(r\Omega)'\|_{L^\infty} \|u_r\|_{L^2}\,. \] As for the derivatives, we observe that $\partial_r^* B_r = -\partial_r^*\partial_r p + 2\partial_r^*(\Omega u_\theta) = -k^2p$ in view of \eqref{eq:pmk}, and therefore we deduce from \eqref{eq:pmkest} that \[ \|\partial_r^* B_r\|_{L^2} \,=\, \|k^2p\|_{L^2} \,\le\, C(k,\Omega) \bigl(\|u_r\|_{L^2} + \|u_\theta\|_{L^2}\bigr)\,. \] Finally, as $\partial_r^* u_r + ik u_z = 0$, we have $\partial_r^* B_\theta = -2(r\Omega)''u_r - 2(r\Omega)'\partial_r^*u_r = -2(r\Omega)''u_r + 2ik(r\Omega)'u_z$, and it follows that \[ \|\partial_r^* B_\theta\|_{L^2} \,\le\, C(k,\Omega) \bigl(\|u_r\|_{L^2} + \|u_z\|_{L^2}\bigr)\,. \] Collecting these estimates, we arrive at \eqref{eq:estimBm0}. \end{proof} When $m \neq 0$, useful estimates on the vector $B_{m,k}u$ can be deduced from an elliptic equation satisfied by the radial component $B_r$, which also involves the quantities $R_1, R_2$ defined by \begin{equation}\label{eq:R1R2def} R_1 \,=\, 2\bigl(-(r\Omega)'' u_r + im\Omega'u_\theta + ik(r\Omega)'u_z\bigr)\,, \quad\hbox{and}\quad R_2 \,=\, \frac{2}{r}p + 2\Omega u_\theta\,. \end{equation} To derive that equation, we first observe that, in view of the definitions \eqref{eq:ABmkdef} of $B_r, B_\theta$ and of the incompressibility condition for $u$, the following relation holds \begin{equation}\label{eq:elimBtheta} \partial_r\bigl(r B_\theta\bigr) \,=\, -im \partial_r p -2 \partial_r \bigl(r(r\Omega)'u_r\bigr) \,=\, imB_r + r R_1\,. \end{equation} Next, we have the incompressibility condition for $B$, which is equivalent to \eqref{eq:pmk}\thinspace : \begin{equation}\label{eq:divB} \partial_r^* B_r + \frac{im}{r}B_\theta + ikB_z \,=\, 0\,. \end{equation} If we multiply both members of \eqref{eq:divB} by $r^2$ and differentiate the resulting identity with respect to $r$, we obtain using \eqref{eq:elimBtheta} the desired equation \begin{equation}\label{eq:Br} - \partial^2_r B_r - \frac{3}{r} \partial_r B_r + \Bigl(\frac{m^2-1}{r^2} + k^2\Bigr) B_r \,=\, \frac{im}{r}R_1 + k^2 R_2\,. 
\end{equation} If $u \in X_{m,k}$ is smooth and compactly supported in $(0,+\infty)$, it is clear from the definition \eqref{eq:ABmkdef} that the radial component $B_r$ satisfies exactly the same boundary conditions at $r = 0$ as the pressure derivative $\partial_r p$, and has also the same decay properties at infinity. In particular $B_r$ decays to zero exponentially fast as $r \to \infty$ if $k \neq 0$, and behaves like $r^{-1-|m|}$ if $k = 0$ and $m \neq 0$. These observations also apply to the azimuthal component $B_\theta$. We now exploit \eqref{eq:Br} to estimate $B_r$ and $B_\theta$, starting with the general case where $|m| \ge 2$. \begin{lem}\label{lem:estimBm2} If $|m|\ge 2$ and $u \in X_{m,k}$, then \begin{equation}\label{eq:estimBm2} \|\partial_r B_r\|_{L^2} + \|\partial_r^* B_\theta\|_{L^2} + \|rB_r\|_{L^2} + \|rB_\theta\|_{L^2} \,\le\, C(m,k,\Omega) \|u\|_{L^2}\,, \end{equation} where the constant $C(m,k,\Omega)$ depends only on $m$, $k$ and $\Omega$. \end{lem} \begin{proof} We first observe that $\|R_1\|_{L^2} + \|r^2 R_1\|_{L^2} \le C \|u\|_{L^2}$, where the constant depends on $m$, $k$, and $\Omega$. Similarly, in view of \eqref{eq:pmkest} and \eqref{eq:pressionloin}, we have $\|krR_2\|_{L^2} + \|kr^2 R_2\|_{L^2} \le C \|u\|_{L^2}$. Now, we multiply \eqref{eq:Br} by $r\bar B_r$ and integrate by parts. This leads to the identity \begin{equation}\label{eq:eb3} \|\partial_r B_r\|_{L^2}^2 + (m^2-1)\Bigl\|\frac{B_r}{r}\Bigr\|_{L^2}^2 + \|kB_r\|_{L^2}^2 \,=\, \mathrm{Re} \langle B_r,\frac{im}{r}R_1 + k^2 R_2\rangle\,. \end{equation} To control the right-hand side, we use the estimates \[ \Bigl|\langle B_r,\frac{im}{r}R_1\rangle\Bigr| \,\le\, \Bigl\|\frac{mB_r}{r} \Bigr\|_{L^2} \|R_1\|_{L^2} \,\le\, C(m,k,\Omega)\Bigl\|\frac{mB_r}{r}\Bigr\|_{L^2} \|u\|_{L^2}\,, \] and \[ \bigl| \langle B_r,k^2 R_2\rangle\bigr| \,\le\, 2\Bigl\|\frac{B_r}{r}\Bigr\|_{L^2} \|k^2 r R_2\|_{L^2} \,\le\, C(m,k,\Omega) \Bigl\|\frac{B_r}{r}\Bigr\|_{L^2} \|u\|_{L^2}\,. \] Inserting these bounds into \eqref{eq:eb3} and using Young's inequality together with the assumption that $|m| \ge 2$, we easily obtain \begin{equation}\label{eq:eb4} \|\partial_r B_r\|_{L^2}^2 + m^2 \Bigl\|\frac{B_r}{r}\Bigr\|_{L^2}^2 + k^2\|B_r\|_{L^2}^2 \,\le\, C(m,k,\Omega) \|u\|_{L^2}^2\,. \end{equation} In exactly the same way, if we multiply \eqref{eq:Br} by $r^3\bar B_r$ and integrate by parts, we arrive at the weighted estimate \begin{equation}\label{eq:eb5} \|r\partial_r B_r\|_{L^2}^2 + m^2 \|B_r\|_{L^2}^2 + k^2 \|rB_r\|_{L^2}^2 \,\le\, C(m,k,\Omega) \|u\|_{L^2}^2\,. \end{equation} When $k = 0$, estimates \eqref{eq:eb4}, \eqref{eq:eb5} remain valid but they do not provide the desired control on $\|rB_r\|_{L^2}$. In that case, we multiply \eqref{eq:Br} by $r^5\bar B_r$ to derive the additional identity \[ \|r^2 \partial_r B_r\|_{L^2}^2 + (m^2-1)\|rB_r\|_{L^2}^2 \,=\, -2 \mathrm{Re}\langle rB_r , r^2\partial_r B_r\rangle + \mathrm{Re}\langle r^4 B_r,\frac{im}{r} R_1\rangle\,. \] To estimate the right-hand side, we use the following bounds \begin{align*} 2\bigl|\langle r B_r, r^2\partial_r B_r\rangle\bigr| \,&\le\, \frac{3}{4} \|r^2\partial_r B_r\|_{L^2}^2 + \frac{4}{3} \|r B_r\|_{L^2}^2\,, \\ \bigl|\langle rB_r,imr^2R_1\rangle\bigr| \,&\le\, \| mrB_r\|_{L^2} \|r^2R_1\|_{L^2} \,\le\, C(m,\Omega) \|mrB_r\|_{L^2} \|u\|_{L^2}\,.
\end{align*} Taking into account the assumption that $|m|\ge 2$, so that $\frac43 \le \frac{m^2-1}{2}$, we deduce that, for $k = 0$, \begin{equation}\label{eq:eb6} \|r^2\partial_r B_r\|_{L^2}^2 + m^2 \|rB_r\|_{L^2}^2 \,\le\, C(m,\Omega) \|u\|_{L^2}^2\,. \end{equation} Combining \eqref{eq:eb4}, \eqref{eq:eb5} and \eqref{eq:eb6} (when $k=0$), we obtain in particular the inequality \begin{equation}\label{eq:eb7} \|\partial_r B_r\|_{L^2} + \|rB_r\|_{L^2} \,\le\, C(m,k,\Omega) \|u\|_{L^2}\,. \end{equation} It remains to estimate the azimuthal component $B_\theta$, which satisfies $\partial_r^* B_\theta = \frac{im}{r}B_r + R_1$ by \eqref{eq:elimBtheta}. Using inequalities \eqref{eq:eb4} and \eqref{eq:pressionloin} (in the case where $|m| \ge 2$), we easily obtain \begin{equation}\label{eq:eb8} \|\partial_r^* B_\theta\|_{L^2} + \|rB_\theta\|_{L^2} \,\le\, C(m,k,\Omega) \|u\|_{L^2}\,, \end{equation} and estimate \eqref{eq:estimBm2} follows by combining \eqref{eq:eb7} and \eqref{eq:eb8}. \end{proof} The case where $m = \pm 1$ requires a slightly different argument, because an essential term in the elliptic equation \eqref{eq:Br} vanishes when $m^2 = 1$. It can be shown that this phenomenon is related to the translation invariance of the Euler equation in the original, cartesian coordinates. \begin{lem}\label{lem:estimBm1} Assume that $m = \pm 1$, $k\neq 0$ and $u \in X_{m,k}$. Then \begin{equation}\label{eq:estimBm1} \|\partial_r B_r\|_{L^2} + \|\partial_r^* D\|_{L^2} + \|rB_r\|_{L^2} + \|rD\|_{L^2} \,\le\, C(k,\Omega) \|u\|_{L^2}\,, \end{equation} where $D = B_r + imB_\theta$ and the constant $C(k,\Omega)$ depends only on $k$ and $\Omega$. \end{lem} \begin{proof} We multiply both sides of \eqref{eq:Br} by $r^2\partial_r \bar B_r$, take the real part, and integrate by parts. We obtain the identity \[ 2\|\partial_r B_r\|_{L^2}^2 + k^2\|B_r\|_{L^2}^2 \,=\, -\mathrm{Re} \langle \partial_r B_r , im R_1 + k^2 r R_2\rangle\,, \] where the right-hand side is estimated as in the previous lemma. This yields the bound \begin{equation}\label{eq:eb9} \|\partial_r B_r\|_{L^2}^2 + k^2\|B_r\|_{L^2}^2 \,\le\, C(k,\Omega)\|u\|_{L^2}^2\,. \end{equation} In exactly the same way, multiplying \eqref{eq:Br} by $r^4\partial_r \bar B_r$, we arrive at \begin{equation}\label{eq:eb10} \|r \partial_r B_r\|_{L^2}^2 + k^2\|r B_r\|_{L^2}^2 \,\le\, C(k,\Omega)\|u\|_{L^2}^2\,. \end{equation} In particular, combining \eqref{eq:pmkest}, \eqref{eq:pressionloin}, \eqref{eq:eb9} and \eqref{eq:eb10}, we find \begin{equation}\label{eq:eb11} \|D\|_{L^2} + \|rD\|_{L^2} \,\le\, C(k,\Omega)\|u\|_{L^2}\,. \end{equation} In addition, using the identity $\partial_r^* D = \partial_r B_r + \frac{1}{r}B_r + im(\frac{im}{r}B_r + R_1) = \partial_r B_r + imR_1$, we obtain \begin{equation}\label{eq:eb12} \|\partial_r^* D\|_{L^2} \,\le\, C(k,\Omega) \|u\|_{L^2}\,. \end{equation} Estimate \eqref{eq:estimBm1} follows directly from \eqref{eq:eb9}--\eqref{eq:eb12}. \end{proof} \begin{lem}\label{lem:estimBm1k0} Assume that $m = \pm 1$, $k = 0$, and $u \in X_{m,0}$. Then, for any $\alpha \in (0,1)$, \begin{equation}\label{eq:estimBm1k0} \|\partial_r B_r\|_{L^2} + \|\partial_r^* D\|_{L^2} + \|r^\alpha B_r\|_{L^2} + \|r^\alpha D\|_{L^2} \,\le\, C(\alpha,\Omega) \|u\|_{L^2}\,, \end{equation} where $D = B_r + imB_\theta$ and the constant $C(\alpha, \Omega)$ depends only on $\alpha$ and $\Omega$.
\end{lem} \begin{proof} If $|m| = 1$ and $k = 0$, equation \eqref{eq:Br} reduces to \[ -\frac{1}{r^3}\partial_r\left(r^3 \partial_r B_r\right) \,=\, \frac{im}{r}R_1\,, \] which can be explicitly integrated to give \begin{equation}\label{eq:solexpl1} \partial_r B_r(r) \,=\, -\frac{im}{r^3} \int_0^r s^2R_1(s)\,{\rm d} s\,, \qquad r > 0\,, \end{equation} and finally \begin{equation}\label{eq:solexpl2} B_r(r) \,=\, \frac{im}{2} \int_0^r R_1(s)\frac{s^2}{r^2} \,{\rm d} s + \frac{im}{2} \int_r^\infty R_1(s)\,{\rm d} s\,, \qquad r > 0\,. \end{equation} Since $|m| = 1$ and $k = 0$, it follows from \eqref{eq:R1R2def} that $\|r^{-1}R_1\|_{L^2} + \|r^3 R_1\|_{L^2} \le C \|u\|_{L^2}$, where the constant depends only on $\Omega$. Using that information, it is straightforward to deduce from the representations \eqref{eq:solexpl1} and \eqref{eq:solexpl2} that \[ \|\partial_r B_r\|_{L^2} + \|r^\alpha B_r\|_{L^2} \,\le\, C(\alpha,\Omega) \|u\|_{L^2}\,, \] for any $\alpha < 1$. Note that $r B_r \notin L^2(\mathbb{R}_+,r\,{\rm d} r)$ in general, because the first term in the right-hand side of \eqref{eq:solexpl2} decays exactly like $r^{-2}$ as $r \to \infty$. On the other hand, it follows from \eqref{eq:divB} that $imB_\theta = -r\partial_rB_r - B_r$, which implies that $D = -r\partial_r B_r$. Moreover, as in the previous lemma, we have $\partial_r^* D = \partial_r B_r + imR_1$. So, using the estimates above on $R_1$, we easily obtain the bound $\|\partial_r^* D\|_{L^2} + \|r^\alpha D\|_{L^2} \le C\|u\|_{L^2}$, which concludes the proof. \end{proof} \noindent {\bf End of the Proof of Proposition \ref{prop:AB}.} When $|m|\neq 1$, in view of Lemmas~\ref{lem:estimBm0} and \ref{lem:estimBm2}, the compactness of the maps $u \mapsto B_{m,k,r}u$ and $u \mapsto B_{m,k,\theta}u$ is a direct consequence of Lemma~\ref{lem:compcrit} in the Appendix. When $m = \pm1$, Lemma~\ref{lem:compcrit}, combined with Lemma~\ref{lem:estimBm1} or Lemma~\ref{lem:estimBm1k0}, shows that the maps $u \mapsto B_{m,k,r}u$ and $u \mapsto B_{m,k,r}u + imB_{m,k,\theta}u$ are compact, and so is the map $u \mapsto B_{m,k,\theta}u$. \qed \begin{rem}\label{rem:oldversion} It is also possible to obtain explicit representation formulas for the components of the vector-valued operator $B_{m,k}$ defined in \eqref{eq:ABmkdef}, and to use them to prove that the map $u \mapsto B_{m,k}u$ is compact in $X_{m,k}$. The computations, however, are rather cumbersome. That approach was followed in a previous version of this work \cite{GS2}. \end{rem} \section{Resolvent bounds on vertical lines} \label{sec4} This final section is entirely devoted to the proof of Proposition~\ref{prop:main}. Let $a$ be a nonzero real number. For any value of the angular Fourier mode $m \in \mathbb{Z}$, of the vertical wave number $k \in \mathbb{R}$, and of the spectral parameter $s \in \mathbb{C}$ with $\mathrm{Re}(s) = a$, we consider the resolvent equation $(s - L_{m,k})u = f$, which by definition \eqref{eq:Lmkdef} is equivalent to the system \begin{equation}\label{eq:eigsysf} \begin{array}{l} \gamma(r) u_r - 2\Omega(r)u_\theta \,=\, -\partial_r p + f_r\,, \\[1mm] \gamma(r) u_\theta + W(r)u_r \,=\, -\frac{im}{r} p+f_\theta\,, \\[1mm] \gamma(r) u_z \,=\, -ik p+f_z\,, \end{array} \end{equation} where $\gamma(r) = s + im \Omega(r)$ and $p = P_{m,k}[u]$ is the solution of \eqref{eq:pmk} given by Lemma~\ref{lem:rep}.
We recall that $u,f$ are divergence-free\thinspace : \begin{equation}\label{eq:incompuf} \partial_r^* u_r + \frac{im}{r} u_\theta + ik u_z \,=\, 0\,, \qquad \partial_r^* f_r + \frac{im}{r} f_\theta + ik f_z \,=\, 0\,. \end{equation} Our goal is to show that, given any $f \in X_{m,k}$, the (unique) solution $u \in X_{m,k}$ of \eqref{eq:eigsysf} satisfies $\|u\|_{L^2} \le C \|f\|_{L^2}$, where the constant $C > 0$ depends only on the spectral abscissa $a$ and on the angular velocity profile $\Omega$. In particular, the constant $C$ is independent of $m$, $k$, and $s$ provided $\mathrm{Re}(s) = a$. \begin{rem}\label{rem:symm} It is interesting to observe how the resolvent system \eqref{eq:eigsysf}, \eqref{eq:incompuf} is transformed under the action of the following isometries\thinspace : \[ \begin{array}{ll} \mathcal{I}_1 \,:\, X_{m,k} \to X_{-m,k}\,, \quad & u \,\mapsto\, \tilde u \,:=\, (u_r,-u_\theta,u_z)\,, \\ \mathcal{I}_2 \,:\, X_{m,k} \to X_{m,-k}\,, \quad & u \,\mapsto\, \hat u \,:=\, (u_r,u_\theta,-u_z)\,, \\ \mathcal{I}_3 \,:\, X_{m,k} \to X_{-m,-k}\,, \quad & u \,\mapsto\, \bar u \,:=\, (\bar u_r,\bar u_\theta,\bar u_z)\,, \end{array} \] where (as usual) $\bar u$ denotes the complex conjugate of $u$. If $u,f \in X_{m,k}$ and $s \in \mathbb{C}$, the resolvent equation $(s - L_{m,k})u = f$ is equivalent to any of the following three relations\thinspace : \[ \bigl(s + L_{-m,k}\bigr)\tilde u \,=\, \tilde f\,, \qquad \bigl(s - L_{m,-k}\bigr)\hat u \,=\, \hat f\,, \qquad \bigl(\bar s - L_{-m,-k}\bigr)\bar u \,=\, \bar f\,. \] This implies in particular that the spectrum of the operator $L_{m,k}$ in $X_{m,k}$ satisfies \begin{equation}\label{eq:symetriespectre} \sigma(L_{m,k}) \,=\, \sigma(L_{m,-k}) \,=\, -\sigma(L_{-m,k})\,, \qquad\hbox{and}\quad \sigma(L_{m,k}) \,=\, -\overline{\sigma(L_{m,k})}\,. \end{equation} As the spectrum $\sigma(L_{m,k})$ is symmetric with respect to the imaginary axis, due to the last relation in \eqref{eq:symetriespectre}, we can assume in what follows that the spectral abscissa $a$ is positive. Also, thanks to the first two relations, we can suppose without loss of generality that $m \in \mathbb{N}$ and $k \ge 0$. \end{rem} \subsection{The scalar resolvent equation}\label{sec41} A key ingredient in the proof of Proposition~\ref{prop:main} is the observation that the resolvent system \eqref{eq:eigsysf} is equivalent to a second order differential equation for the radial velocity $u_r$. \begin{lem}\label{lem:resODE} Assume that $(k,m) \neq (0,0)$. If $u \in X_{m,k}$ is the solution of the resolvent equation \eqref{eq:eigsysf} for some $f \in X_{m,k}$, the radial velocity $u_r$ satisfies, for all $r > 0$, \begin{equation}\label{eq:eigscalarf} -\partial_r\bigl(\mathcal{A}(r)\partial_r^* u_r\bigr) + \left(1 + \frac{k^2}{\gamma^2} \,\mathcal{A}(r)\Phi(r) + \frac{imr}{\gamma}\partial_r\Bigl(\frac{W(r)}{m^2+k^2r^2} \Bigr)\right) u_r \,=\, \mathcal{F}(r)\,, \end{equation} where $\mathcal{A}(r)\,=\,r^2/(m^2 + k^2 r^2)$ and \begin{equation}\label{eq:cFdef} \mathcal{F}(r) \,=\, \frac{1}{\gamma}f_r + \mathcal{A}\Bigl( \frac{2ik\Omega}{\gamma^2} + \frac{2km}{\gamma}\frac{1}{m^2+k^2r^2}\Bigr)\bigl( -ikf_\theta + \frac{im}{r}f_z\bigr) + \frac{im}{\gamma r}\mathcal{A}\partial_r^* f_\theta + \frac{ik}{\gamma} \mathcal{A} \partial_r f_z\,.
\end{equation} In addition, the azimuthal and vertical velocities are expressed in terms of $u_r$ by \begin{align}\label{eq:uthetaexp} u_\theta \,&=\, \frac{im\mathcal{A}}{r} \partial_r^*u_r -\frac{k^2\mathcal{A}}{\gamma}\big( W u_r -f_\theta\big) - \frac{mk\mathcal{A}}{\gamma r} f_z\,, \\ \label{eq:uzexp} u_z \,&=\, ik\mathcal{A} \partial_r^*u_r +\frac{mk\mathcal{A}}{\gamma r} \big( Wu_r-f_\theta\big) +\frac{m^2\mathcal{A}}{\gamma r^2} f_z\,. \end{align} \end{lem} \begin{proof} If we eliminate the pressure $p$ from the last two lines in \eqref{eq:eigsysf}, we obtain \begin{equation}\label{eq:ODE1} k W u_r + k\gamma u_\theta - \frac{\gamma m}{r} u_z \,=\, k f_\theta - \frac{m}{r}f_z\,. \end{equation} This first relation can be combined with the incompressibility condition in \eqref{eq:incompuf} to eliminate the azimuthal velocity $u_\theta$. This gives \begin{equation}\label{eq:ODE2} k\Bigl(\partial_r^* - \frac{imW}{\gamma r}\Bigr)u_r + i \Bigl(k^2 + \frac{m^2}{r^2} \Bigr) u_z \,=\, g_1 \,:=\, \frac{im^2}{\gamma r^2}f_z - \frac{imk}{\gamma r}f_\theta\,, \end{equation} which is \eqref{eq:uzexp}. As is easily verified, if in the previous step we eliminate the vertical velocity $u_z$ from \eqref{eq:ODE1} and \eqref{eq:incompuf}, we arrive at \eqref{eq:uthetaexp} instead of \eqref{eq:uzexp}. Alternatively, we can eliminate the pressure from the first and the last line in \eqref{eq:eigsysf}. This gives the second relation \begin{equation}\label{eq:ODE3} ik \gamma u_r -2ik\Omega u_\theta - \partial_r (\gamma u_z) \,=\, ik f_r -\partial_r f_z\,, \end{equation} which can in turn be combined with \eqref{eq:ODE1} to eliminate the azimuthal velocity $u_\theta$. Using the relations $\gamma' = im\Omega'$ and $W = r\Omega' + 2\Omega$, we obtain in this way \begin{equation}\label{eq:ODE4} \gamma^2 \Bigl(\partial_r + \frac{imW}{\gamma r}\Bigr)u_z -ik \bigl(\gamma^2 + \Phi\bigr)u_r \,=\, g_2 \,:=\, 2i\Omega \Bigl(\frac{m}{r}f_z - kf_\theta \Bigr) + \gamma\Bigl(\partial_r f_z -ikf_r\Bigr)\,, \end{equation} where $\Phi = 2 \Omega W$ is the Rayleigh function. Now, we multiply the equality \eqref{eq:ODE2} by $\mathcal{A} = r^2/(m^2 + k^2 r^2)$ and apply the differential operator $\partial_r + \frac{imW}{{\gamma r}}$ to both members of the resulting expression. In view of \eqref{eq:ODE4}, we find \begin{equation}\label{eq:ODE5} k \Bigl(\partial_r + \frac{imW}{\gamma r}\Bigr)\mathcal{A}\Bigl(\partial_r^* - \frac{imW}{\gamma r}\Bigr)u_r - k \Bigl(1 + \frac{\Phi}{\gamma^2}\Bigr)u_r \,=\, \Bigl(\partial_r + \frac{imW}{\gamma r}\Bigr)\mathcal{A} g_1 - \frac{i}{\gamma^2} g_2\,. \end{equation} If $k \neq 0$, this equation is equivalent to \eqref{eq:eigscalarf}, as is easily verified by expanding the expressions in both sides of \eqref{eq:ODE5} and performing elementary simplifications. In the particular case where $k = 0$ (and $m \neq 0$), equation \eqref{eq:eigscalarf} still holds but the derivation above is not valid anymore. Instead, one must eliminate the pressure $p$ from the first two lines in \eqref{eq:eigsysf}, and then express the azimuthal velocity $u_\theta$ using the incompressibility condition. The details are left to the reader. \end{proof} \begin{rem}\label{rem:obvious} If $f = 0$, then $\mathcal{F} = 0$ and Eq.~\eqref{eq:eigscalarf} reduces to the eigenvalue equation \eqref{eq:eigscalar}. 
\end{rem} \begin{cor}\label{cor:uthetaz} Under the assumptions of Lemma~\ref{lem:resODE}, we have the estimate \begin{equation}\label{eq:uthetaz} \max\bigl\{\|u_\theta\|_{L^2}, \|u_z\|_{L^2}\bigr\} \,\le\, \|\mathcal{A}^\frac12 \partial_r^* u_r\|_{L^2} + \frac{1}{a}\Bigl(\|W\|_{L^\infty}\|u_r\|_{L^2} + \|f_\theta\|_{L^2}+ \|f_z\|_{L^2}\Bigr)\,. \end{equation} \end{cor} \begin{proof} As $|\gamma(r)| \ge \mathrm{Re}(s) = a$ and \begin{equation}\label{eq:cAbounds} 0 \,<\, \mathcal{A}(r) \,\le\, \min\Bigl\{\frac{1}{k^2}, \frac{r^2}{m^2}\Bigr\}\,, \end{equation} estimate \eqref{eq:uthetaz} follows immediately from the representations \eqref{eq:uthetaexp}, \eqref{eq:uzexp}. \end{proof} \subsection{Explicit resolvent estimates in particular cases} \label{sec42} We first establish the resolvent bound in the relatively simple case where $m = 0$, which corresponds to axisymmetric perturbations of the columnar vortex. \begin{lem}\label{lem:axi} Assume that $m = 0$. For any $f \in X_{0,k}$, the solution $u \in X_{0,k}$ of \eqref{eq:eigsysf} satisfies \begin{equation}\label{eq:axibound} \|u\|_{L^2} \,\le\, C_0\Bigl(\frac{1}{a} + \frac{1}{a^4}\Bigr)\|f\|_{L^2}\,, \end{equation} where the constant $C_0 > 0$ depends only on $\Omega$. \end{lem} \begin{proof} When $k = 0$, the incompressibility condition \eqref{eq:incompuf} implies that $u_r = 0$, and since $\gamma(r) = s$ we deduce from the last two lines in \eqref{eq:eigsysf} that $u_\theta = f_\theta/s$ and $u_z = f_z/s$. As $|s| \ge \mathrm{Re}(s) = a$, we thus have $\|u\|_{L^2} \le \|f\|_{L^2}/a$, which is the desired conclusion. If $k \neq 0$, we assume without loss of generality that $k > 0$. Since $m = 0$, equation \eqref{eq:eigscalarf} satisfied by the radial velocity $u_r$ reduces to \[ -\partial_r \partial_r^* u_r + k^2\Bigl(1 + \frac{\Phi(r)}{s^2}\Bigr)u_r \,=\, \frac{k^2}{s}f_r + \frac{2k^2\Omega(r)}{s^2}f_\theta + \frac{ik}{s}\partial_r f_z\,. \] We multiply both sides by $s r \bar u_r $ and integrate the resulting equality over $\mathbb{R}_+$. After taking the real part, we obtain the identity \[ a\int_0^\infty \biggl\{|\partial_r^* u_r|^2 + k^2\Bigl(1 + \frac{\Phi(r)}{|s|^2}\Bigr)|u_r|^2\biggr\}r\,{\rm d} r \,=\, \mathrm{Re} \int_0^\infty \bar u_r \Bigl(k^2f_r + \frac{2k^2\Omega(r)}{s}f_\theta + ik\partial_r f_z\Bigr)r\,{\rm d} r\,. \] As $\Phi(r) \ge 0$ by assumption H1, we easily deduce that \[ a \Bigl(\|\partial_r^* u_r\|_{L^2}^2 + k^2 \|u_r\|_{L^2}^2\Bigr) \,\le\, k^2 \|u_r\|_{L^2} \Bigl(\|f_r\|_{L^2} + \frac{2\|\Omega\|_{L^\infty}}{a} \|f_\theta\|_{L^2}\Bigr) + k \|\partial_r^* u_r\|_{L^2} \|f_z\|_{L^2}\,, \] and applying Young's inequality we obtain \begin{equation}\label{eq:ur0} \frac{1}{k^2}\,\|\partial_r^* u_r\|_{L^2}^2 + \|u_r\|_{L^2}^2 \,\le\, \frac{C}{a^2} \bigl(\|f_r\|_{L^2}^2 + \|f_z\|_{L^2}^2\bigr) + \frac{C}{a^4}\|f_\theta\|_{L^2}^2\,, \end{equation} where the constant $C > 0$ depends only on $\Omega$. With estimate \eqref{eq:ur0} at hand, we deduce from the second line in \eqref{eq:eigsysf} that \begin{equation}\label{eq:utheta0} \|u_\theta\|_{L^2} \,\le\, \frac{1}{|s|}\Bigl(\|W\|_{L^\infty} \|u_r\|_{L^2} + \|f_\theta\|_{L^2}\Bigr) \,\le\, C\Bigl(\frac{1}{a} + \frac{1}{a^3}\Bigr)\|f\|_{L^2}\,.
\end{equation} Similarly, using the third line in \eqref{eq:eigsysf} and estimate \eqref{eq:pmkest} for the pressure, we obtain \begin{equation}\label{eq:uz0} \|u_z\|_{L^2} \,\le\, \frac{1}{|s|}\bigl(\|kp\|_{L^2} + \|f_z\|_{L^2} \bigr) \,\le\, \frac{C}{a}\bigl(\|u_r\|_{L^2} + \|u_\theta\|_{L^2} + \|f_z\|_{L^2}\bigr) \,\le\, C\Bigl(\frac{1}{a} + \frac{1}{a^4}\Bigr) \|f\|_{L^2}\,. \end{equation} Combining \eqref{eq:ur0}, \eqref{eq:utheta0}, and \eqref{eq:uz0}, we arrive at \eqref{eq:axibound}. \end{proof} In the rest of this section, we consider the more difficult case where $m \neq 0$. In that situation, given any $s \in \mathbb{C}$ with $\mathrm{Re}(s) = a$, there exists a unique $b \in \mathbb{R}$ such that \begin{equation}\label{eq:sdef} s \,=\, a - imb\,, \qquad \hbox{hence} \quad \gamma(r) \,=\, a + im\bigl(\Omega(r)-b\bigr)\,. \end{equation} Our goal is to obtain a resolvent bound that is uniform in the parameters $m$, $k$, and $b$. In view of Remark~\ref{rem:symm}, we can assume without loss of generality that $m \ge 1$ and $k \ge 0$. Unlike in the axisymmetric case, we are not able to obtain here an explicit resolvent bound of the form \eqref{eq:axibound} for all values of the parameters $m$, $k$, and $b$. In some regions, we will have to invoke Proposition~\ref{prop:GS1}, which was established in \cite{GS1} using a contradiction argument that does not provide any explicit estimate of the resolvent operator. Nevertheless, our strategy is to obtain explicit bounds in the largest possible region of the parameter space, and to rely on Proposition~\ref{prop:GS1} only when the direct approach does not work. We begin with the following elementary observation: \begin{lem}\label{lem:grandgamma} If $u,f\in X_{m,k}$ satisfy \eqref{eq:eigsysf}, then for any $M > 0$ we have the estimate \begin{equation}\label{eq:gg} \|1_{\{|\gamma|\ge M\}}\,u\|_{L^2} \,\le\, \frac{C_1}{M} \bigl( \|u\|_{L^2} + \|f\|_{L^2}\bigr)\,, \end{equation} where the constant $C_1$ depends only on $\Omega$. \end{lem} \begin{proof} We multiply all three equations in \eqref{eq:eigsysf} by $\gamma(r)^{-1} 1_{\{|\gamma|\ge M\}}$ and take the $L^2$ norm of the resulting expression. Using estimate \eqref{eq:pmkest} to control the pressure, we arrive at \eqref{eq:gg}. \end{proof} To obtain more general resolvent estimates, we exploit the differential equation \eqref{eq:eigscalarf} satisfied by the radial velocity $u_r$. As a preliminary step, we prove the following result. \begin{lem}\label{lem:estimF} If $u,f \in X_{m,k} \cap H^1(\mathbb{R}_+,r\,{\rm d} r)$ and $\mathcal{F}$ is defined by \eqref{eq:cFdef}, we have \begin{equation}\label{eq:estimFur} \Bigl| \int_0^\infty \mathcal{F}(r)\bar{u}_r r\,{\rm d} r\Bigr| \,\le\, \frac{2}{a}\, \|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2}\|f\|_{L^2} + C_2 \Bigl(\frac{1}{a} + \frac{1}{a^2}\Bigr)\|u_r\|_{L^2}\|f\|_{L^2}\,, \end{equation} where the constant $C_2$ depends only on $\Omega$. \end{lem} \begin{proof} We split the integral $\int_0^\infty \mathcal{F}(r)\bar{u}_r r\,{\rm d} r$ into four pieces, according to the expression of $\mathcal{F}$ in \eqref{eq:cFdef}. As $|\gamma(r)| \ge \mathrm{Re}(s) = a$, the first term is easily estimated\thinspace : \[ \Bigl|\int_0^\infty \frac{1}{\gamma(r)} \,\bar{u}_r f_r r \,{\rm d} r\Bigr| \,\le\, \frac{1}{a}\,\|u_r\|_{L^2}\|f_r\|_{L^2}\,.
\] As for the second term, we observe that $|k\mathcal{A}(-ikf_\theta + \frac{im}{r}f_z)| \le |f_\theta| + |f_z|$ by \eqref{eq:cAbounds}, so that \[ \Bigl| \int_0^\infty \mathcal{A}\Bigl( \frac{2ik\Omega}{\gamma^2} + \frac{2km}{\gamma}\frac{1}{m^2+k^2r^2}\Bigr)\bigl(-ikf_\theta + \frac{im}{r}f_z\bigr)\bar{u}_r r\,{\rm d} r\Bigr| \,\le\, \Bigl(\frac{2}{a^2} +\frac{2}{am}\Bigr)\|u_r\|_{L^2}\bigl(\|f_\theta\|_{L^2} +\|f_z\|_{L^2}\bigr)\,. \] The third term is integrated by parts as follows\thinspace : \[ \int_0^\infty \bar{u}_r im\frac{\mathcal{A}}{\gamma r^2}\partial_r(rf_\theta)\,r\,{\rm d} r \,=\, -\int_0^\infty (\partial_r^*\bar{u}_r)im\frac{\mathcal{A}}{\gamma r} f_\theta\,r\,{\rm d} r - \int_0^\infty im\bar{u}_r \partial_r\Bigl(\frac{\mathcal{A}}{\gamma r^2}\Bigr)rf_\theta \,r\,{\rm d} r\,. \] Since $|m\mathcal{A}^\frac12/r|\le 1$ by \eqref{eq:cAbounds}, we have on the one hand \[ \Bigl|\int_0^\infty (\partial_r^*\bar{u}_r)im\frac{\mathcal{A}}{\gamma r}\,f_\theta\,r \,{\rm d} r\Bigr| \,\le\, \frac{1}{a}\,\|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2} \|f_\theta\|_{L^2}\,, \] and on the other hand \[ \Bigl|mr\partial_r\Bigl(\frac{\mathcal{A}}{\gamma r^2}\Bigr)\Bigr| \,=\, \Bigl|\frac{im^2\Omega'\mathcal{A}}{\gamma^2 r} + \frac{2mk^2\mathcal{A}^2}{\gamma r^2} \Bigr| \,\le\, \frac{\|r\Omega'\|_{L^\infty}}{a^2} + \frac{2}{am}\,, \] so that \[ \Bigl|\int_0^\infty im\partial_r\Bigl(\frac{\mathcal{A}}{\gamma r^2}\Bigr)r f_\theta \bar{u}_r\,r\,{\rm d} r\Bigr| \,\le\, C\Bigl(\frac{1}{a^2} + \frac{1}{am}\Bigr) \|u_r\|_{L^2} \|f_\theta\|_{L^2}\,. \] In a similar way, the fourth and last term is integrated by parts\thinspace : \begin{equation}\label{eq:pouru} \int_0^\infty \bar{u}_r \frac{ik}{\gamma}\,\mathcal{A}\partial_rf_z\,r\,{\rm d} r \,=\, -\int_0^\infty (\partial_r^*\bar{u}_r)\frac{ik}{\gamma}\,\mathcal{A} f_z\,r\,{\rm d} r - \int_0^\infty ik\bar{u}_r \partial_r\Bigl(\frac{\mathcal{A}}{\gamma}\Bigr) f_z\,r\,{\rm d} r\,. \end{equation} Since $|k\mathcal{A}^\frac12|\le 1$ by \eqref{eq:cAbounds}, we have \[ \Bigl|\int_0^\infty(\partial_r^*\bar{u}_r)\frac{ik}{\gamma}\mathcal{A} f_z\,r\,{\rm d} r\Bigr| \,\le\, \frac{1}{a}\,\|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2}\|f_z\|_{L^2}\,. \] Moreover, using the relations $r\mathcal{A}' = 2\mathcal{A}(1-k^2\mathcal{A})$ and $\gamma' = im\Omega'$, we can estimate the last integral in \eqref{eq:pouru} as follows\thinspace : \begin{align*} \Bigl|\int_0^\infty ik\bar{u}_r\partial_r\Bigl(\frac{\mathcal{A}}{\gamma}\Bigr)f_z \,r\,{\rm d} r\Bigr| \,&\le\, \Big\|\frac{2k\mathcal{A}}{\gamma r}(1-k^2\mathcal{A})-imk\frac{ \Omega'\mathcal{A}}{\gamma^2}\Big\|_{L^\infty} \|u_r\|_{L^2}\|f_z\|_{L^2}\\ \,&\le\, \Bigl(\frac{2}{am} + \frac{\|r\Omega'\|_{L^\infty}}{a^2}\Bigr) \|u_r\|_{L^2}\|f_z\|_{L^2}\,. \end{align*} Collecting all estimates above and recalling that $m \ge 1$, we arrive at \eqref{eq:estimFur}. \end{proof} We next establish an explicit estimate that will be useful when the vertical wave number $k$ is small compared to the angular Fourier mode $m$. \begin{lem}\label{lem:kpetit} If $m \ge 1$ and $u,f \in X_{m,k}$ satisfy \eqref{eq:eigsysf}, we have the estimate \begin{equation}\label{eq:kpetit} \|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2}^2 +\|u_r\|_{L^2}^2 \,\le\, C_3\Bigl(\frac{1}{a^2} + \frac{1}{a^4}\Bigr)\,\frac{k^2}{m^2+k^2}\,\|u_r\|_{L^2}^2 + C_3 \Bigl(\frac{1}{a^2} + \frac{1}{a^6}\Bigr)\|f\|_{L^2}^2\,, \end{equation} where the constant $C_3 > 0$ depends only on $\Omega$. 
\end{lem} \begin{proof} We start from the scalar resolvent equation \eqref{eq:eigscalarf} satisfied by the radial velocity $u_r$. Multiplying both sides by $r\bar{u}_r$ and integrating the resulting expression over $\mathbb{R}p$, we obtain the following identity\thinspace : \begin{equation}\label{eq:pouru0} \|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2}^2 +\|u_r\|_{L^2}^2 + I_1 + I_2 \,=\, \int_0^\infty \mathcal{F}(r)\bar{u}_r r\,{\rm d} r\,, \end{equation} where $\mathcal{F}(r)$ is defined in \eqref{eq:cFdef} and \begin{equation}\label{eq:I12def} I_1 \,=\, \int_0^\infty \frac{k^2}{\gamma^2}\,\mathcal{A}\Phi\,|u_r|^2r\,{\rm d} r\,, \qquad I_2 \,=\, \int_0^\infty \frac{imr}{\gamma}\partial_r\Bigl(\frac{W}{m^2{+}k^2r^2}\Bigr) |u_r|^2r\,{\rm d} r\,. \end{equation} The right-hand side of \eqref{eq:pouru0} is estimated in Lemma~\ref{lem:estimF}. On the other hand, using \eqref{eq:cAbounds} and the fact that $|\gamma(r)| \ge \mathbb{R}e(s) = a$, we can bound \begin{equation}\label{eq:pouru1} \Big|\frac{k^2}{\gamma^2}\mathcal{A}\Phi\Big| \,\le\, \min\biggl\{\frac{\|\Phi\|_{L^\infty}}{a^2}\,,\,\frac{k^2}{a^2m^2}\, \|r^2\Phi\|_{L^\infty}\biggr\}\,, \quad \hbox{so that}\quad |I_1| \,\le\, \frac{C}{a^2}\,\frac{k^2}{m^2+k^2}\, \|u_r\|_{L^2}^2\,. \end{equation} Moreover, we have \begin{equation}\label{eq:pouru2} \Big| \frac{mr}{\gamma}\partial_r\Bigl(\frac{W}{m^2+k^2r^2}\Bigr)\Big| \,\le\, \frac{1}{am}\bigl(\|rW'\|_{L^\infty} + 2 \|W\|_{L^\infty}\bigl)\,, \quad \hbox{so that}\quad |I_2| \,\le\, \frac{C}{am}\|u_r\|_{L^2}^2\,. \end{equation} Combining \eqref{eq:pouru0}, \eqref{eq:I12def}, \eqref{eq:pouru1}, \eqref{eq:estimFur} and using Young's inequality, we obtain the preliminary estimate \begin{equation}\label{eq:estimur0} \|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2}^2 +\|u_r\|_{L^2}^2 \,\le\, C_4\Bigl( \frac{k^2}{a^2(m^2+k^2)}+\frac{1}{am}\Bigr)\|u_r\|_{L^2}^2 + C_4 \Bigl(\frac{1}{a^2} + \frac{1}{a^4}\Bigr) \|f\|_{L^2}^2\,, \end{equation} where the constant $C_4 > 0$ depends only on $\Omega$. If $ma \ge 2 C_4$, it is clear that \eqref{eq:estimur0} implies \eqref{eq:kpetit}. In the rest of the proof, we assume therefore that $ma \le 2C_4$. To obtain the improved bound \eqref{eq:kpetit}, the idea is to control the integral term $I_2$ in a different way. Denoting \[ Z(r) \,=\, -r\partial_r \Bigl(\frac{W(r)}{m^2{+}k^2r^2}\Bigr) \,>\, 0\,, \] we observe that \begin{equation}\label{eq:I2exp} I_2 \,=\, -\int_0^\infty \frac{im}{\gamma}\,Z(r)|u_r|^2r\,{\rm d} r \,=\, \int_0^\infty \frac{m^2(b-\Omega) -iam}{|\gamma|^2}\,Z(r)|u_r|^2r\,{\rm d} r\,. \end{equation} As $\Omega(r) \le 1$ for all $r$, a lower bound on $\mathbb{R}e I_2$ is obtained if we replace $b-\Omega$ by $b -1$ in \eqref{eq:I2exp}. Thus, taking the real part of \eqref{eq:pouru0}, we obtain the bound \begin{equation}\label{eq:Ireal} \|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2}^2 +\|u_r\|_{L^2}^2 + (b-1) \int_0^\infty \frac{m^2}{|\gamma|^2}\,Z(r)|u_r|^2r\,{\rm d} r \,\le\, |I_1| + |I_3|\,, \end{equation} where $I_3 = \int_0^\infty \mathcal{F}(r)\bar{u}_r r\,{\rm d} r$. If $b \ge 1$, we can drop the integral in the left-hand side, and using the estimates \eqref{eq:pouru1}, \eqref{eq:estimFur} on $|I_1|$, $|I_3|$ we arrive at \eqref{eq:kpetit}. If $b < 1$, we consider also the imaginary part of \eqref{eq:pouru0}, which gives the inequality \begin{equation}\label{eq:Iimag} \int_0^\infty \frac{am}{|\gamma|^2}\,Z(r)|u_r|^2r\,{\rm d} r \,\le\, |I_1| + |I_3|\,. 
\end{equation} Combining \eqref{eq:Ireal}, \eqref{eq:Iimag} so as to eliminate the integral term, we obtain \begin{equation}\label{eq:Itotal} \|\mathcal{A}^\frac12 \partial_r^*u_r\|_{L^2}^2 + \|u_r\|_{L^2}^2 \,\le\, \Bigl(1 + \frac{m(1-b)}{a}\Bigr)\bigl(|I_1| + |I_3|\bigr)\,. \end{equation} If $b \ge -1$, then $m(1-b)/a \le 2m/a \le 4 C_4/a^2$. If $b \le -1$, we can assume that $m(1-b) \le 4 C_1$, because in the converse case we have \[ |\gamma(r)| \,\ge\, m(\Omega(r) - b) \,\ge\, -mb \,\ge\, \frac{m(1-b)}{2} \,\ge\, 2 C_1\,, \quad \hbox{for all } r > 0\,, \] so that we can apply Lemma~\ref{lem:grandgamma} with $M = 2 C_1$ and deduce \eqref{eq:kpetit} from \eqref{eq:gg} and \eqref{eq:estimur0}. So, in all relevant cases, the right-hand side of \eqref{eq:Itotal} is smaller than $C(1+a^{-2}) \bigl(|I_1| + |I_3|\bigr)$, and using the estimates \eqref{eq:pouru1}, \eqref{eq:estimFur} on $|I_1|$, $|I_3|$ we obtain \eqref{eq:kpetit}. \end{proof} \begin{rem}\label{rem:Adr} Estimate \eqref{eq:estimur0} implies in particular that \begin{equation}\label{eq:borneh1} \|\mathcal{A}^\frac12 \partial_r^* u_r\|_{L^2}^2 \,\le\, C_4\Bigl(\frac{1}{a} + \frac{1}{a^2}\Bigr)\,\|u_r\|_{L^2}^2 + C_4 \Bigl(\frac{1}{a^2} + \frac{1}{a^4}\Bigr)\|f\|_{L^2}^2\,. \end{equation} In view of Corollary~\ref{cor:uthetaz}, this shows that controlling the quantity $\|u_r\|_{L^2}$ in terms of $\|f\|_{L^2}$ is equivalent to the full resolvent estimate, because the azimuthal and vertical velocities can be estimated using \eqref{eq:uthetaz}, \eqref{eq:borneh1}. As an aside, we also observe that \eqref{eq:estimur0} provides an explicit resolvent estimate if $a > 0$ is sufficiently large, for instance if $a \ge 2 C_4 +1$. Thus we may assume in the sequel that $a$ is bounded from above by a constant depending only on $\Omega$. \end{rem} To estimate the radial velocity $u_r$ in the regime where $k$ is large compared to $m$, it is convenient to introduce the auxiliary function $v(r) = \gamma(r)^{-1/2} u_r(r)$ (this idea already used in \cite{GS1} is borrowed from \cite{HG}). The new function $v$ satisfies the differential equation \begin{equation}\label{eq:pourv} -\partial_r\bigl(\mathcal{A}(r)\gamma(r)\partial_r^* v\bigr) + \mathcal{E}(r) v \,=\, \gamma(r)^{1/2} \mathcal{F}(r)\,, \qquad r > 0\,, \end{equation} where $\mathcal{A}(r)$, $\mathcal{F}(r)$ are as in \eqref{eq:eigscalarf} and \begin{equation*} \mathcal{E}(r) \,=\, \gamma(r) + \frac{k^2}{\gamma(r)}\,\mathcal{A}(r)\Phi(r) + \frac{imr}{2}\partial_r\Bigl(\frac{W(r)+2\Omega(r)}{m^2+k^2r^2}\Bigr) - \frac{m^2\Omega'(r)^2}{4\gamma(r)}\,\mathcal{A}(r)\,. \end{equation*} \begin{lem}\label{lem:HG12} If $m \ge 1$ and $u,f\in X_{m,k}$ satisfy \eqref{eq:eigsysf}, there exists a constant $C_5 > 0$ depending only on $\Omega$ such that the function $v(r)= \gamma(r)^{-1/2}u_r(r)$ satisfies the estimate \begin{equation}\label{eq:estimv} \|\mathcal{A}^{1/2}\partial_r^* v\|_{L^2}^2 + \|v\|_{L^2}^2 \,\le\, \frac{C_5}{a^2}\, \frac{m^2}{m^2+k^2}\,\|1_B v\|_{L^2}^2 + C_5\Bigl(\frac{1}{a^3}+\frac{1}{a^5} \Bigr)\|f\|_{L^2}^2\,, \end{equation} where $1_B$ is the indicator function of the set $B = \{r > 0\,;\, |\gamma(r)| \le r|\Omega'(r)|\}$. 
\end{lem} \begin{proof} Multiplying both sides of \eqref{eq:pourv} by $r \bar v$, integrating the resulting expression over $\mathbb{R}p$ and taking the real part, we obtain the identity \begin{equation}\label{eq:pourv0} a \int_0^\infty \biggl\{\mathcal{A} |\partial_r^* v|^2 + \Bigl(1 + \frac{k^2}{ |\gamma|^2}\mathcal{A}\Phi\Bigr)|v|^2\biggr\}r\,{\rm d} r = \mathbb{R}e\int_0^\infty \!\!\bar{v} \gamma^\frac12 \mathcal{F} r\,{\rm d} r + \frac{a}{4}\int_0^\infty \!\!m^2\Omega'^2 \frac{\mathcal{A}}{|\gamma|^2}|v|^2r\,{\rm d} r\,. \end{equation} Since $\Phi\ge 0$, the left-hand side of \eqref{eq:pourv0} is bounded from below by $a\bigl(\|\mathcal{A}^\frac12\partial_r^*v\|_{L^2}^2 + \|v\|_{L^2}^2\bigr)$. On the other hand, repeating the proof of Lemma~\ref{lem:estimF}, we can estimate the first integral in the right-hand side as follows\thinspace : \begin{equation}\label{eq:pourv7} \begin{split} \Bigl|\int_0^\infty \bar{v} \gamma^\frac12 \mathcal{F} r\,{\rm d} r\Bigr| \,&\le\, \frac{2}{a^{1/2}}\,\|\mathcal{A}^\frac12 \partial_r^*v\|_{L^2}\|f\|_{L^2} + C \Bigl(\frac{1}{a^{1/2}} + \frac{1}{a^{3/2}}\Bigr)\|v\|_{L^2}\|f\|_{L^2} \\ \,&\le\, \frac{a}{2}\bigl(\|\mathcal{A}^\frac12\partial_r^*v\|_{L^2}^2+\|v\|_{L^2}^2\bigr) + C\Bigl(\frac{1}{a^2} + \frac{1}{a^4}\Bigr)\|f\|_{L^2}^2\,, \end{split} \end{equation} where the constant $C > 0$ depends only on $\Omega$. It remains to estimate the second integral in the right-hand side of \eqref{eq:pourv0}. Defining $\mathcal{G}(r) = m^2\Omega'(r)^2\frac{\mathcal{A}(r)}{|\gamma(r)|^2}$, we observe that \begin{equation}\label{eq:pourv8} \begin{split} \frac{a}{4}\int_0^\infty m^2\Omega'^2\frac{\mathcal{A}}{|\gamma|^2}\,|v|^2r\,{\rm d} r \,&\le\, \frac{a}{4}\,\|1_{\{\mathcal{G}\le 1\}}v\|_{L^2}^2 + \frac{a}{4}\,\|\mathcal{G}\|_{L^\infty} \|1_{\{ \mathcal{G}\ge 1\}} v\|_{L^2}^2 \\ \,&\le\, \frac{a}{4}\,\|v\|_{L^2}^2 + \frac{1}{4a}\,\frac{m^2}{m^2+k^2}\, \|(1{+}r^2){\Omega'}^2\|_{L^\infty} \|1_{\{ \mathcal{G}\ge 1\}} v\|_{L^2}^2\,, \end{split} \end{equation} where the upper bound on the quantity $\|\mathcal{G}\|_{L^\infty}$ is obtained using the definition of $\mathcal{A}$ and the fact that $|\gamma(r)| \ge \mathbb{R}e(s) = a$. Now, if $\mathcal{G}(r) \ge 1$, then $|\gamma(r)|^2 \le m^2\Omega'(r)^2 \mathcal{A}(r) \le r^2 \Omega'(r)^2$, so that the set $\{\mathcal{G}\ge 1\}$ is contained in $B = \{r > 0\,;\, |\gamma(r)| \le r|\Omega'(r)|\}$. Thus, combining \eqref{eq:pourv0}, \eqref{eq:pourv7}, and \eqref{eq:pourv8}, we obtain \eqref{eq:estimv}. \end{proof} The following result is a rather direct consequence of Lemmas~\ref{lem:grandgamma} and \ref{lem:HG12}\thinspace : \begin{lem}\label{lem:ksmg} If $m \ge 1$ and $u,f \in X_{m,k}$ satisfy \eqref{eq:eigsysf}, there exists a constant $C_6 > 0$, depending only on $\Omega$, such that the inequality \begin{equation}\label{eq:ksmg0} \|u\|_{L^2} \,\le\, C_6\Bigl(\frac{1}{a} + \frac{1}{a^{7/2}}\Bigr)\|f\|_{L^2} \end{equation} holds in each of the following three situations\thinspace : \[ \hbox{i) } ak \ge C_6 m\,, \quad~ \hbox{ii) } am \ge C_6 \hbox{ and } C_6(1-b) \le a\,, \quad~ \hbox{iii) } am \ge C_6 \hbox{ and } C_6b \le a\,. \] \end{lem} \begin{proof} Applying Lemma~\ref{lem:grandgamma} with $M = 3 C_1$, we deduce from \eqref{eq:gg} that \begin{equation}\label{eq:ksmg1} \|1_{\{|\gamma|\ge M\}}u\|_{L^2} \,\le\, \frac12\|1_{\{|\gamma|\le M\}}u\|_{L^2} + \frac12\|f\|_{L^2}\,. \end{equation} In the sequel, we may thus focus our attention to the region where $|\gamma|\le M$. 
Our strategy is to use Lemma~\ref{lem:HG12}, which requires a good control on the term involving $1_Bv$ in the right-hand side of \eqref{eq:estimv}. We consider three cases separately. \noindent i) If $ak \ge m\sqrt{2C_5}$, we simply observe that \begin{equation}\label{eq:ksmg2} \frac{C_5}{a^2}\,\frac{m^2}{m^2+k^2}\,\|1_B v\|_{L^2}^2 \,\le\, \frac12 \|1_B v\|_{L^2}^2 \,\le\, \frac12 \|v\|_{L^2}^2\,. \end{equation} \noindent ii) By definition, for any $r > 0$, we have \begin{equation}\label{eq:Bdomain} r \in B \quad \hbox{if and only if} \quad a^2 + m^2(\Omega(r)-b)^2 \,\le\, r^2 \Omega'(r)^2\,. \end{equation} Clearly $B = \emptyset$ if $a > \|r\Omega'\|_{L^\infty}$, hence we may assume that $a \le \|r\Omega'\|_{L^\infty}$. Since $\Omega'(r) = \mathcal{O}(r)$ as $r \to 0$ by assumption H1, there exists a small constant $\varepsilonilon > 0$ (depending only on $\Omega$) such that inequality \eqref{eq:Bdomain} cannot be satisfied if $r \le \varepsilonilon a^{1/2}$. On the other hand, if $r \ge \varepsilonilon a^{1/2}$, then $\Omega(r) \le \Omega(\varepsilonilon a^{1/2}) \le 1 - 2\delta a$ for some sufficiently small $\delta > 0$. Thus, if we assume that $b \ge 1-\delta a$ and $m\delta a > \|r\Omega'\|_{L^\infty}$, we see that $m(b-\Omega(r)) \ge m\delta a > \|r\Omega'\|_{L^\infty}$, so that inequality \eqref{eq:Bdomain} is not satisfied either. Summarizing, we have $B = \emptyset$ if $ma \ge C$ and $C(1-b) \le a$ for some sufficiently large $C > 0$. \noindent iii) Similarly, since $r\Omega'(r) = \mathcal{O}(r^{-2})$ as $r \to \infty$ by assumption H1, there exists a large constant $\rho > 0$ (depending only on $\Omega$) such that \eqref{eq:Bdomain} cannot be satisfied if $r \ge \rho a^{-1/2}$. If $r \le \rho a^{-1/2}$, we have $\Omega(r) \ge \Omega(\rho a^{-1/2}) \ge 2\sigma a$ for some $\sigma > 0$. Thus, if we assume that $b \le \sigma a$ and $m\sigma a > \|r\Omega'\|_{L^\infty}$, inequality \eqref{eq:Bdomain} is never satisfied, so that $B = \emptyset$. In all three cases, we deduce from \eqref{eq:estimv} the estimate \begin{equation}\label{eq:estimvbis} \|\mathcal{A}^{1/2}\partial_r^* v\|_{L^2}^2 + \frac12 \|v\|_{L^2}^2 \,\le\, C_5 \Bigl(\frac{1}{a^3}+\frac{1}{a^5}\Bigr)\|f\|_{L^2}^2\,. \end{equation} As $u_r(r) = \gamma(r)^{1/2}v(r)$, we have $\|1_{\{|\gamma|\le M\}}u_r\|_{L^2} \le M^{1/2} \|1_{\{|\gamma|\le M\}}v\|_{L^2} \le M^{1/2} \|v\|_{L^2}$, and \[ \bigl\|1_{\{|\gamma|\le M\}} \mathcal{A}^{\frac12}\partial_r^*u_r\bigr\|_{L^2} \,\le\, M^{1/2} \bigl\|1_{\{|\gamma|\le M\}} \mathcal{A}^{\frac12}\partial_r^*v\bigr\|_{L^2} + \frac{\|r\Omega'\|_{L^\infty}}{2a^{1/2}}\,\|v\|_{L^2}\,. \] Thus, using the representations \eqref{eq:uthetaexp}, \eqref{eq:uzexp} of the azimuthal and vertical velocities, we deduce from \eqref{eq:estimvbis} that \[ \|1_{\{|\gamma|\le M\}}u_r\|_{L^2} + \|1_{\{|\gamma|\le M\}}u_\theta\|_{L^2} + \|1_{\{|\gamma|\le M\}}u_z\|_{L^2} \,\le\, C\Bigl(\frac{1}{a} +\frac{1}{a^{7/2}}\Bigr)\|f\|_{L^2}\,. \] Finally, invoking \eqref{eq:ksmg1} to bound $\|1_{\{|\gamma|\ge M\}}u\|_{L^2}$ in terms of $\|1_{\{|\gamma|\le M\}}u\|_{L^2}$, and recalling that we can assume $a \le 2C_4 +1$ by Remark~\ref{rem:Adr}, we arrive at \eqref{eq:ksmg0}. \end{proof} \begin{rem}\label{rema:alter} Alternatively, one can obtain the resolvent estimate in case iii) by the following argument. If $m \ge 1$ is large and $b > 0$ is small, the inequality $|\gamma(r)| \le M := 3C_1$ can be satisfied only if $r \gg 1$. 
In that region, the coefficients $\Omega(r)$ and $W(r)$ in \eqref{eq:eigsysf} are very small, and so is the pressure $p$ in view of Lemma~\ref{lem:pressionloin}. It is thus easy to estimate $\|1_{\{|\gamma| \le M\}}u\|_{L^2}$ in terms $\|f\|_{L^2}$ directly from \eqref{eq:eigsysf}. Combining this observation with Lemma~\ref{lem:grandgamma} gives the desired result. \end{rem} \subsection{End of the proof of Proposition~\ref{prop:main}} \label{sec43} If we combine Lemma~\ref{lem:axi}, Lemma~\ref{lem:kpetit}, Remark~\ref{rem:Adr}, and Lemma~\ref{lem:ksmg}, we obtain the following statement which specifies the regions in the parameter space where we could obtain a uniform resolvent estimate, with explicit (or at least computable) constant. \begin{cor}\label{cor:explicit} Assume that $m \in \mathbb{N}$, $k \ge 0$, and $s \in \mathbb{C}$ with $\mathbb{R}e(s) = a > 0$. There exists a constant $C > 0$, depending only on $\Omega$, such that the resolvent estimate \begin{equation}\label{eq:unifresksmg} \bigl\|(s - L_{m,k})^{-1}\bigr\|_{X_{m,k} \to X_{m,k}} \,\le\, C\Bigl(\frac{1}{a}+\frac{1}{a^4}\Bigr) \end{equation} holds in each of the following cases\thinspace : \begin{equation}\label{eq:6cases} \begin{array}{lll} 1)~m = 0\,, & 2)~a \ge C\,, & 3)~ma^2 \ge Ck\,, \\[1mm] 4)~ak \ge C m\,,\quad &5)~am \ge C \hbox{ and }C(1-b) \le a\,, \quad &6)~am \ge C \hbox{ and } Cb \le a\,. \end{array} \end{equation} We recall that $b$ is defined by \eqref{eq:sdef} when $m \neq 0$. \end{cor} To conclude the proof of Proposition~\ref{prop:main}, we use a contradiction argument to establish a resolvent estimate in the regions that are not covered by Corollary \ref{cor:explicit}. More precisely, if we consider a sequence of values of the parameters $m,k,s$ (with $\mathbb{R}e(s) = a$) such that none of the conditions 1)--6) in \eqref{eq:6cases} is satisfied, two possibilities can occur. Either the angular Fourier mode $m$ goes to infinity, as well as the vertical wave number $k$, and the parameter $b$ remains in the interval $[a/C,1{-}a/C] \subset (0,1)$. In that case, after extracting a subsequence, we can assume that $b$ converges to some limit. So, to establish the resolvent estimate, we have to prove that, for any $b \in (0,1)$, \begin{equation}\label{eq:remaining1} \sup_{\mathbb{R}e(s) = a} ~\limsup_{\substack{m\to +\infty,\\ \mathop{\mathrm{Im}}(s)/m \to -b}} \bigl\|(s - L_{m,k})^{-1}\bigr\|_{X_{m,k} \to X_{m,k}} \,<\, \infty\,. \end{equation} The other possibility is that the angular Fourier mode $m \ge 1$ stays bounded, as well as the vertical wave number $k \ge k_0 := a^2/C$. In that case, we have to prove that, for all $N \ge 1$, \begin{equation}\label{eq:remaining2} \sup_{\mathbb{R}e(s) = a} ~\sup_{\substack{1 \le m \le N,\\ k_0 \le k \le N}} ~\bigl\|(s - L_{m,k})^{-1}\bigr\|_{X_{m,k} \to X_{m,k}} \,<\, \infty\,. \end{equation} \noindent{\bf Proof of estimate \eqref{eq:remaining1}\thinspace :} \\ We argue by contradiction and assume the existence of sequences $(m_n)_{n\in \mathbb{N}}$ in $\mathbb{N}$, $(k_n)_{n\in \mathbb{N}}$ in $\mathbb{R}_+$, $(b_n)_{n\in \mathbb{N}}$ in $\mathbb{R}$ and $(u^n)_{n\in \mathbb{N}}$, $(f^n)_{n\in \mathbb{N}}$ in $X_{m_n,k_n}$ with the following properties\thinspace : $u^n,f^n$ are solutions of the resolvent system $(s_n-L_{m_n,k_n})u^n=f^n$ where $s_n=a-im_nb_n$, $\|u^n\|_{L^2}=1$ $\forall n \in \mathbb{N}$, and we have $\|f^n\|_{L^2} \to 0$, $m_n \to +\infty$, and $b_n \to b$ as $n\to +\infty$. 
Without loss of generality we may assume that $b_n \in (0,1)$ for all $n\in \mathbb{N}$, and we define $r_n = \Omega^{-1}(b_n)$; in particular $r_n \to \bar r := \Omega^{-1}(b)$ as $n \to +\infty$. We also denote by $(p_n)_{n\in \mathbb{N}}$ the sequence of pressures associated to $u^n$, namely $p_n=P_{m_n,k_n}[u^n]$, and we set $\gamma_n(r) = a+im_n(\Omega(r)-b_n)$. In view of inequalities \eqref{eq:uthetaz} and \eqref{eq:borneh1}, the normalization condition $\|u^n\|_{L^2} = 1$ and the assumption that $\|f^n\|_{L^2} \to 0$ as $n \to \infty$ imply that the quantity $\|u_r^n\|_{L^2}$ is bounded from below for large values of $n$, namely \begin{equation}\label{eq:urdomine} I_r \,:=\, \liminf_{n\to +\infty} \|u^n_r\|_{L^2}^2 \,>\, 0\,. \end{equation} Setting $M = C_1\sqrt{2/I_r}$, we deduce from \eqref{eq:urdomine} and Lemma~\ref{lem:grandgamma} that \begin{equation}\label{eq:urdomineloc} \liminf_{n\to +\infty} \int_{\{|\gamma_n|\le M\}}|u^n_r(r)|^2\,r\,{\rm d} r \,\ge\, \frac{I_r}{2} \,>\, 0\,. \end{equation} As the angular velocity $\Omega$ is continuously differentiable and strictly decreasing on $\mathbb{R}p$, the set $\{|\gamma_n|\le M\}$ is asymptotically contained in the interval $[r_n-R/m_n,r_n+R/m_n]$, where $R > 0$ is a constant that depends only on $\Omega$ and $I_r$ (one may take $R = 2M |\Omega'(\overline{r})|^{-1}$). Since the length of that interval shrinks to zero as $n \to \infty$, it is useful to introduce rescaled vector fields and functions by setting \[ u^n(r) = m_n^{1/2}\, \tilde u^n(m_n(r{-}r_n))\,, \quad f^n(r) = m_n^{1/2}\, \tilde f^n(m_n(r{-}r_n))\,, \quad p_n(r) = m_n^{-1/2}\, \tilde p_n(m_n(r{-}r_n))\,. \] Note that the new variable $y := m_n(r{-}r_n)$ is defined on the $n$-dependent domain $(-m_nr_n,\infty)$. Likewise, we set $\Omega(r)=\tilde \Omega_n(m_n(r{-}r_n))$, $W(r)=\tilde W_n(m_n(r{-}r_n))$ and $\gamma_n(r)= \tilde \gamma_n(m_n(r{-}r_n))$. The system \eqref{eq:eigsysf} may then be rewritten as \begin{equation}\label{eq:eigsysftilde1} \begin{array}{l} \tilde \gamma_n(y) \tilde u^n_r - 2\tilde \Omega_n(y)\tilde u^n_\theta \,=\, -\partial_y \tilde p_n + \tilde f^n_r\,, \\[1mm] \tilde \gamma_n(y) \tilde u^n_\theta + \tilde W_n(y)\tilde u^n_r \,=\, -\frac{i}{r_n+y/m_n} \tilde p_n+\tilde f^n_\theta\,, \\[1mm] \tilde \gamma_n(y)\tilde u^n_z \,=\, -i\frac{k_n}{m_n} \tilde p_n+\tilde f^n_z\,, \end{array} \end{equation} and the incompressibility condition becomes \begin{equation}\label{eq:eigsysftilde2} \partial_y \tilde u^n_r + \tfrac{i}{r_n+y/m_n} \tilde u^n_\theta + i\tfrac{k_n}{m_n} \tilde u^n_z \,=\, -\tfrac{1}{r_nm_n+y}\tilde u^n_r\,. \end{equation} After this change of variables, inequality \eqref{eq:urdomineloc} implies the lower bound \begin{equation}\label{eq:Urdomineloc} \liminf_{n\to +\infty} \int_{-R}^{R} |\tilde u^n_r(y)|^2\,{\rm d} y \,\ge\, \frac{I_r}{2\overline{r}} \,>\, 0\,. \end{equation} Since, by assumption, inequalities 3) and 4) in \eqref{eq:6cases} are not satisfied, we can suppose without loss of generality that $k_n/m_n \to \delta \in (0,+\infty)$ as $m\to +\infty$. By construction, we also have $\tilde \Omega_n(y) \to \Omega(\bar r)$, $\tilde W_n(y) \to W(\bar r)$ and $\tilde \gamma_n(y) \to \overline{\gamma}(y) := a+i\Omega'(\bar r)y$ as $n\to +\infty$, uniformly on any compact subset of $\mathbb{R}$. 
Using the normalization condition for $u^n$, we observe that \[ 1 \,=\, \int_{-m_n r_n}^{\infty} |\tilde u^n(y)|^2\Bigl(r_n+\frac{y}{m_n}\Bigr) \,{\rm d} y \,\ge\, \frac{r_n}{2} \int_{-m_n\frac{r_n}{2}}^{\infty} |\tilde u^n(y)|^2\,{\rm d} y\,. \] Extracting a subsequence if needed, we may therefore assume that $\tilde u^n \rightharpoonup U$ in $L^2(K)$ for each compact subset $K\subset \mathbb{R}$, where $U \in L^2(\mathbb{R})$ and $\|U\|_{L^2}^2 \le 2/\overline{r}$. Similarly, using the uniform bounds on the pressure given by Lemma~\ref{lem:ell}, we may assume that $\tilde p_n \to P$ and $\partial_y \tilde p_n \rightharpoonup P'$ in $L^2(K)$, for each compact subset $K\subset \mathbb{R}$, where $P \in H^1_{\rm loc}(\mathbb{R})$ and $P'\in L^2(\mathbb{R})$. The radial velocities $\tilde u^n_r$ have even better convergence properties. Indeed, it follows from \eqref{eq:borneh1} that the quantity $\|\mathcal{A}_n^{1/2}\partial_r^* u_r^n\|_{L^2}$ is uniformly bounded for $n$ large, and since $\|\mathcal{A}_n^{1/2} r^{-1} u_r^n\|_{L^2} \le 1/m_n \to 0$ we deduce that $\|\mathcal{A}_n^{1/2}\partial_r u_r^n\|_{L^2}$ is uniformly bounded too. After the change of variables, this implies that \[ C \,\ge\, \int_{-m_n r_n}^{\infty} \frac{m_n^2 r^2}{m_n^2 + k_n^2 r^2}\, |\partial_y \tilde u_r^n(y)|^2\,r\,{\rm d} y \,\ge\, \frac{r_n}{2}\, \frac{r_n^2}{4+\delta_n^2 r_n^2} \int_{-m_n\frac{r_n}{2}}^{\infty} |\partial_y \tilde u_r^n(y)|^2\,{\rm d} y\,, \] where $r = r_n+y/m_n$ and $\delta_n = k_n/m_n$. Thus $U_r \in H^1(\mathbb{R})$, and extracting a further subsequence if necessary we can assume that $\partial_y \tilde u^n_r \rightharpoonup U_r'$ and $\tilde u^n_r \to U_r$ in $L^2(K)$, for each compact subset $K\subset \mathbb{R}$. In particular, we deduce from \eqref{eq:Urdomineloc} that $U_r$ is not identically zero. Moreover, passing to the limit in \eqref{eq:eigsysftilde1}, \eqref{eq:eigsysftilde2}, we obtain the asymptotic system \begin{equation}\label{eq:eigsyslim} \begin{array}{l} (a+i\Omega'(\overline{r})y) U_r - 2 \Omega(\overline{r}) U_\theta \,=\, -P'\,,\\[1mm] (a+i\Omega'(\overline{r})y) U_\theta + W(\overline{r}) U_r \,=\, -\frac{i}{\overline{r}} P\,, \\[1mm] (a+i\Omega'(\overline{r})y) U_z \,=\, -i\delta P\,, \end{array} \qquad\quad U_r' + \tfrac{i}{\overline{r}} U_\theta + i\delta U_z \,=\, 0\,, \end{equation} where equalities hold almost everywhere. We claim that system \eqref{eq:eigsyslim} does not possess any solution such that $U \in L^2_{\rm loc}(\mathbb{R})$, $P \in H^1_{\rm loc}(\mathbb{R})$ and such that $U_r \in H^1(\mathbb{R})$ is nontrivial. This will provide the desired contradiction. Indeed, if we repeat the proof of Lemma~\ref{lem:resODE} (with $f = 0$), we can extract from system \eqref{eq:eigsyslim} a second-order differential equation for the radial velocity $U_r$. 
Eliminating the pressure $P$ and the azimuthal velocity $U_\theta$, we obtain as in \eqref{eq:ODE2}, \eqref{eq:ODE4}\thinspace : \[ \delta\Bigl(U_r' - \frac{iW(\overline{r})}{\overline{r}\,\overline{\gamma}(y)}U_r\Bigr) + i \Bigl(\delta^2 + \frac{1}{\overline{r}^2}\Bigr)U_z \,=\,0\,, \qquad U_z' + \frac{iW(\overline{r})}{\overline{r}\,\overline{\gamma}(y)}U_z -i\delta \Bigl(1 + \frac{\Phi(\overline{r})}{\overline{\gamma}(y)^2}\Bigr) \,=\, 0\,, \] and combining these relations we arrive at \begin{equation}\label{eq:lim3} -U_r'' + \biggl[\Bigl(\delta^2 + \frac{1}{\overline{r}^2}\Bigr) + \frac{\Phi(\overline{r})\delta^2}{\overline{\gamma}(y)^2}\Biggr] U_r \,=\, 0\,, \qquad y \in \mathbb{R}\,, \end{equation} where $\Phi(\overline{r}) = 2 \Omega(\overline{r})W(\overline{r}) > 0$. If we observe that $\overline{\gamma}(y) = a + i\Omega'(\overline{r})y = i\Omega'(\overline{r})(y + ic)$, where $c=-a/\Omega'(\overline{r})$, we can write \eqref{eq:lim3} in the equivalent form \begin{equation}\label{eq:besseltype} -U_r'' + \Bigl(\kappa^2 - \frac{J(\overline{r})\delta^2}{(y+ic)^2} \Bigr) U_r \,=\, 0\,,\qquad y \in \mathbb{R}\,, \end{equation} where $\kappa^2 = 1/\overline{r}^2+\delta^2$ and $J(\overline{r}) = \Phi(\overline{r})/\Omega'(\overline{r})^2$. Up to a multiplicative constant, the unique solution of \eqref{eq:besseltype} that belongs to $L^2(\mathbb{R}p)$ is \begin{equation}\label{eq:Knu} U_r(y) \,=\, (y+ic)^{1/2} K_\nu\bigl(\kappa(y+ic)\bigr)\,, \qquad y \in \mathbb{R}\,, \end{equation} where $K_\nu$ is the modified Bessel function, see \cite[Section~9.6]{AS}, and $\nu \in \mathbb{C}$ is determined, up to an irrelevant sign, by the relation $\nu^2 = \frac14 - J(\overline{r})\delta^2$. In fact, any linearly independent solution of \eqref{eq:besseltype} grows like $\exp(\kappa y)$ as $y \to +\infty$. Now, it is well known that the function $K_\nu(\kappa(y+ic))$ has itself an exponential growth as $y \to -\infty$, see \cite[Section~9.7]{AS}, and this implies that \eqref{eq:besseltype} has no nontrivial solution in $L^2(\mathbb{R})$. \noindent{\bf Proof of estimate \eqref{eq:remaining2}\thinspace :} \\ This is the only place where we use our assumption H2 on the vorticity profile. According to Proposition~\ref{prop:GS1}, which is the main result of \cite{GS1}, the resolvent operator $(s - L_{m,k})^{-1}$ is well defined as a bounded linear operator in $X_{m,k}$ for any $m \in \mathbb{N}$, any $k \in \mathbb{R}$, and any $s \in \mathbb{C}$ with $\mathbb{R}e(s) \neq 0$. To prove \eqref{eq:remaining2}, it remains to show that, for any fixed $m$, the resolvent estimate holds uniformly in $k$ on compact subsets of $\mathbb{R}p = (0,\infty)$, and uniformly in $s$ on vertical lines. Actually, we can assume that the spectral parameter lies in a compact set too, because if $m$ is fixed and $|\mathop{\mathrm{Im}}(s)| \ge m + 2C_1$, we have $|\gamma(r)| \ge |\mathop{\mathrm{Im}}(s)| - m \ge 2C_1$ and the resolvent bound follows from estimate \eqref{eq:gg} with $M = 2C_1$. So the only missing step is\thinspace : \begin{lem}\label{lem:localbd} For any $m \in \mathbb{Z}$, the resolvent norm $\|(s - L_{m,k})^{-1} \|_{X_{m,k} \to X_{m,k}}$ is uniformly bounded in the neighborhood of any point $(k,s) \in \mathbb{R} \times \mathbb{C}$ with $k \neq 0$ and $\mathbb{R}e(s) > 0$. \end{lem} \begin{proof} Since the function space $X_{m,k}$ changes when $k$ is varied, due to the incompressibility condition, the result does not immediately follow from standard perturbation theory. 
However, it is easy to reformulate the problem so that perturbation theory can be applied. It is sufficient to note that, for any fixed $k^*\neq 0$, the mappings \[ M_k : X_{m,k^*} \to X_{m,k}\,, \qquad \bigl(u_r,u_\theta,u_z\bigr) \,\mapsto\, \bigl(u_r,u_\theta,\frac{k^*}{k}u_z\bigr)\,, \] are linear homeomorphisms that depend continuously on $k$ in a neighborhood of $k^*$. Given $s \in \mathbb{C}$, $m \in \mathbb{Z}$, and $k \in \mathbb{R}$ close to $k^*$, the resolvent equation $(s - L_{m,k})u = f$ for $u,f \in X_{m,k}$ is equivalent to the conjugated equation $(s - \mathcal{L}_{m,k})v = g$, where $u = M_k v$, $f = M_k g$, and \begin{equation}\label{eq:conjugL} \mathcal{L}_{m,k} \,=\, M_k^{-1} L_{m,k} M_k \,:\, X_{m,k^*} \to X_{m,k^*}\,. \end{equation} Now, using in particular estimate \eqref{eq:deltap} in Lemma~\ref{lem:pcont}, it is straightforward to verify that the operator $\mathcal{L}_{m,k}$ depends continuously on $k$ as a bounded linear operator in $X_{m,k^*}$, as long as $k \neq 0$. This implies that the resolvent norm $\|(s - \mathcal{L}_{m,k})^{-1}\|_{X_{m,k^*} \to X_{m,k^*}}$ depends continuously on the parameters $s$ and $k$, when $k$ stays in a neighborhood of $k^*$, and the conclusion easily follows. \end{proof} \section{Appendix\thinspace : analysis in $X_{m,k}$} \label{sec5} We collect here various auxiliary results that are useful for our analysis in Section~\ref{sec3}. We first show that smooth and compactly supported divergence-free vector fields are dense in the space $X_{m,k}$ defined by \eqref{eq:Xmkdef}, and we give simple criteria for compactness in that space. Finally, we establish explicit representations formulas for the pressure $p$ satisfying \eqref{eq:pmk}. \subsection{Approximation in $X_{m,k}$}\label{sec51} Truncating divergence-free vector fields is not straightforward, and a general solution to that problem involves the so-called Bogovskii operator, see e.g. \cite{Ga}. However, in the particular case of the space $X_{m,k}$ introduced in \eqref{eq:Xmkdef}, localization can be performed in a rather elementary way, which we now describe. \begin{lem}\label{lem:truncation} For any $m \in \mathbb{Z}$ and any $k \in \mathbb{R}$, the set of all $u \in X_{m,k}$ with compact support in $(0,+\infty)$ is dense in $X_{m,k}$. \end{lem} \begin{proof} Let $\phi,\psi : \mathbb{R}_+ \to \mathbb{R}$ be smooth, monotonic functions such that \[ \phi(r) \,=\, \begin{cases} 0 &\hbox{if } r \le \frac12\,, \cr 1 &\hbox{if } r \ge 1\,, \end{cases} \qquad \hbox{and}\quad \psi(r) \,=\, \begin{cases} 1 &\hbox{if } r \le 1\,, \cr 0 &\hbox{if } r \ge 2\,. \end{cases} \qquad \] Given $\varepsilonilon \in (0,1)$, we define $\chi_\varepsilonilon(r) = \min\{\phi(r/\varepsilonilon), \psi(\varepsilonilon r)\}$. By construction $\chi_\varepsilonilon$ is smooth and satisfies $\chi_\varepsilonilon(r) = 0$ if $r \le \varepsilonilon/2$ or $r \ge 2/\varepsilonilon$, and $\chi_\varepsilonilon(r) = 1$ if $\varepsilonilon \le r \le 1/\varepsilonilon$. Assume first that $m \neq 0$. Given $u \in X_{m,k}$, we define $v_\varepsilonilon = u \chi_\varepsilonilon + w_\varepsilonilon e_\theta$, where \[ w_\varepsilonilon(r) \,=\, \frac{i}{m}\,r \chi_\varepsilonilon'(r) u_r(r)\,, \quad r > 0\,. \] The corrector $w_\varepsilonilon$ is tailored so that $\mathop{\mathrm{div}}\nolimits v_\varepsilonilon = (\mathop{\mathrm{div}}\nolimits u) \chi_\varepsilonilon + u_r \chi_\varepsilonilon' + \frac{im}{r} w_\varepsilonilon = 0$. 
Moreover $w_\varepsilonilon$ is supported in the set $[\varepsilonilon/2,2/\varepsilonilon]$ by construction. Since $\chi_\varepsilonilon(r) \to 1$ as $\varepsilonilon \to 0$ for any $r > 0$, it is clear that $\|u \chi_\varepsilonilon - u\|_{L^2} \to 0$ as $\varepsilonilon \to 0$. Moreover \begin{align*} \|w_\varepsilonilon\|_{L^2}^2 \,&=\, \frac{1}{m^2}\int_{\varepsilonilon/2}^{\varepsilonilon} \frac{r^2}{\varepsilonilon^2}\,|\phi'(r/\varepsilonilon)|^2\,|u_r(r)|^2 r\,{\rm d} r + \frac{1}{m^2}\int_{1/\varepsilonilon}^{2/\varepsilonilon} \varepsilonilon^2 r^2 |\psi'(\varepsilonilon r)|^2 \,|u_r(r)|^2 r\,{\rm d} r \\ \,&\le\, \frac{C}{m^2}\Bigl(\int_0^{\varepsilonilon} |u_r(r)|^2 r\,{\rm d} r + \int_{1/\varepsilonilon}^\infty |u_r(r)|^2 r\,{\rm d} r\Bigr) \,\xrightarrow[\varepsilonilon \to 0]{}\, 0\,. \end{align*} Thus $\|v_\varepsilonilon - u\|_{L^2} \to 0$ as $\varepsilonilon \to 0$, which is the desired result. Next we assume that $m = 0$ and $k \neq 0$. Given any $u \in X_{0,k}$, the divergence-free condition $\partial_r^* u_r + ik u_z = 0$ implies that \begin{equation}\label{urrep} u_r(r) \,=\, -\frac{ik}{r}\int_0^r u_z(s) s\,{\rm d} s\,, \qquad \hbox{hence} \quad |u_r(r)|^2 \,\le\, \frac{k^2}{2} \int_0^r |u_z(s)|^2 s\,{\rm d} s\,, \end{equation} for any $r > 0$. We now define $\tilde v_\varepsilonilon = u \chi_\varepsilonilon + \tilde w_\varepsilonilon e_z$, where \[ \tilde w_\varepsilonilon(r) \,=\, \frac{i}{k}\,\chi_\varepsilonilon'(r) u_r(r)\,, \quad r > 0\,. \] As before $\tilde v_\varepsilonilon$ is divergence-free and supported in $[\varepsilonilon/2,2/\varepsilonilon]$. Moreover, using \eqref{urrep}, we find \begin{align*} \|\tilde w_\varepsilonilon\|_{L^2}^2 \,&=\, \frac{1}{k^2}\int_{\varepsilonilon/2}^{\varepsilonilon} \frac{1}{\varepsilonilon^2}\,|\phi'(r/\varepsilonilon)|^2\,|u_r(r)|^2 r\,{\rm d} r + \frac{1}{k^2}\int_{1/\varepsilonilon}^{2/\varepsilonilon} \varepsilonilon^2 |\psi'(\varepsilonilon r)|^2 \,|u_r(r)|^2 r\,{\rm d} r \\ \,&\le\, C \int_0^{\varepsilonilon} |u_z(r)|^2 r\,{\rm d} r + \frac{C \varepsilonilon^2}{k^2} \int_{1/\varepsilonilon}^\infty |u_r(r)|^2 r\,{\rm d} r \,\xrightarrow[\varepsilonilon \to 0]{}\, 0\,, \end{align*} and this shows that $\|\tilde v_\varepsilonilon - u\|_{L^2} \to 0$ as $\varepsilonilon \to 0$. Finally, if $u \in X_{0,0}$, the divergence-free condition asserts that $\partial_r^* u_r = 0$, hence $u_r = 0$. It follows that $u\chi_\varepsilonilon$ is divergence-free, and we know that $\|u \chi_\varepsilonilon - u\|_{L^2} \to 0$ as $\varepsilonilon \to 0$. \end{proof} Using Lemma~\ref{lem:truncation} and a standard regularization procedure, we obtain: \begin{prop}\label{prop:approximation} For any $m \in \mathbb{Z}$ and any $k \in \mathbb{R}$, the set of all smooth, divergence-free vector fields with compact support in $(0,+\infty)$ is dense in $X_{m,k}$. \end{prop} \begin{proof} According to Lemma~\ref{lem:truncation}, it is sufficient to prove that any $u \in X_{m,k}$ with compact support can be approximated by smooth, divergence-free and compactly supported vector fields. Assume thus that $u \in X_{m,k}$ is such that $u(r) = 0$ for $r \le r_1$ and $r \ge r_2$, with $0 < r_1 < r_2 < \infty$. We consider the vector field $U = (U_1,U_2,U_3)$ in $\mathbb{R}^3$ defined by \begin{equation}\label{eq:urep1} U(r\cos\theta, r\sin\theta,z) \,=\, \Bigl( u_r(r) e_r(\theta) + u_\theta(r) e_\theta(\theta) + u_z(r) e_z\Bigr) \,e^{im\theta} \,e^{ikz}\,, \end{equation} where $r > 0$, $\theta \in \mathbb{R}/(2\pi\mathbb{Z})$, and $z \in \mathbb{R}$. 
Then $\mathop{\mathrm{div}}\nolimits U = 0$ and, for any fixed $x_3 \in \mathbb{R}$, the map $(x_1,x_2) \mapsto U(x_1,x_2,x_3)$ belongs to $L^2(\mathbb{R}^2,\mathbb{C}^3)$, because $\|U(\cdot,\cdot,x_3)\|_{L^2(\mathbb{R}^2)}^2 = 2\pi \|u\|_{L^2}^2 < \infty$. Given $\varepsilonilon > 0$, we define the approximation \[ U^\varepsilonilon(x_1,x_2,x_3) \,=\, \frac{1}{\varepsilonilon^2}\int_{\mathbb{R}^2} \chi\Bigl(\frac{x_1-y_1}{\varepsilonilon},\frac{x_2-y_2}{\varepsilonilon}\Bigr) \,U(y_1,y_2,x_3)\,{\rm d} y_1 \,{\rm d} y_2\,, \] where $\chi : \mathbb{R}^2 \to \mathbb{R}_+$ is smooth, {\em radially symmetric}, supported in the unit ball, and normalized so that $\int \chi \,{\rm d} x_1 \,{\rm d} x_2 = 1$. By construction, the vector field $U^\varepsilonilon$ is smooth, divergence-free, and close to $U$ in the sense that $\|U^\varepsilonilon(\cdot, \cdot,x_3) - U(\cdot,\cdot,x_3)\|_{L^2(\mathbb{R}^2)} \to 0$ as $\varepsilonilon \to 0$ for any $x_3 \in \mathbb{R}$. If $\varepsilonilon \le r_1/2$, we also have $U^\varepsilonilon(x_1,x_2,x_3) = 0$ whenever $r := (x_1^2 + x_2^2)^{1/2} \le r_1/2$ or $r \ge r_1 + r_2$. Under this assumption, since $\chi$ is radially symmetric, we can represent $U^\varepsilonilon$ as \begin{equation}\label{eq:urep2} U^\varepsilonilon(r\cos\theta, r\sin\theta,z) \,=\, \Bigl(u^\varepsilonilon_r(r) e_r(\theta) + u^\varepsilonilon_\theta(r) e_\theta(\theta) + u^\varepsilonilon_z(r) e_z\Bigr) \,e^{im\theta} \,e^{ikz}\,, \end{equation} for some {\em smooth} vector field $u^\varepsilonilon = u_r^\varepsilonilon e_r + u_\theta^\varepsilonilon e_\theta + u_z^\varepsilonilon e_z \in X_{m,k}$, which is supported in the compact interval $[r_1/2,r_2 + r_1] \subset (0,\infty)$. Here the condition on the support is essential, because the unit vectors $e_r, e_\theta$ are smooth only away from the axis $r = 0$. From \eqref{eq:urep1}, \eqref{eq:urep2} we deduce that \[ \|u^\varepsilonilon - u\|_{L^2(\mathbb{R}_+,r\,{\rm d} r)}^2 \,=\, \frac{1}{2\pi} \|U^\varepsilonilon(\cdot,\cdot,0) - U(\cdot,\cdot,0)\|_{L^2(\mathbb{R}^2)}^2 \,\xrightarrow[\varepsilonilon \to 0]{}\, 0\,, \] and this gives the desired result. \end{proof} \subsection{Compactness criteria}\label{sec52} We next mention two simple compactness criteria in the space $X = L^2(\mathbb{R}_+,r\,{\rm d} r)$. \begin{lem}\label{lem:compcrit} For any $\alpha > 0$ and any $M > 0$, the sets \begin{align*} E_{M,\alpha} \,&=\, \bigl\{f \in X\,;\, \|\partial_r f\|_{L^2} \le M\,,~ \|r^\alpha f\|_{L^2} \le M\bigr\}\,, \quad \hbox{and}\\ E_{M,\alpha}^* \,&=\, \bigl\{f \in X\,;\, \|\partial_r^* f\|_{L^2} \le M\,,~ \|r^\alpha f\|_{L^2} \le M\bigr\}\,, \end{align*} are compact in $X$. We recall that $\partial_r^* = \partial_r + \frac1r$. \end{lem} \begin{proof} If $f \in X$, we define $F : \mathbb{R}^2 \to \mathbb{C}$ by $F(x) = (2\pi)^{-1/2} f(|x|)$ for all $x \in \mathbb{R}^2$. The linear map $f \mapsto F$ is an isometric embedding of $X$ into $L^2(\mathbb{R}^2)$, and the image of $E_{M,\alpha}$ under that map is included in the set \[ \bigl\{F \in L^2(\mathbb{R}^2)\,;\, \|\nabla F\|_{L^2} \le M\,,~ \| |x|^\alpha F\|_{L^2} \le M\bigr\}\,, \] which is known to be compact in $L^2(\mathbb{R}^2)$ by Rellich's criterion, see \cite[Theorem~XIII.65]{ReSi}. This shows that the closed subset $E_{M,\alpha} \subset X$ is relatively compact, hence compact. Compactness of $E_{M,\alpha}^*$ can be established by a variant of the previous argument, but for a change we give here a direct proof based on the Arzel\`a-Ascoli theorem. 
If $f \in E_{M,\alpha}^*$, we observe that \begin{equation}\label{eq:fintrep} f(r) \,=\, \frac{1}{r}\int_0^r \partial_r^* f(s) s\,{\rm d} s\,, \qquad \hbox{for all } r > 0\,. \end{equation} This shows that $|f(r)| \le \|\partial_r^* f\|_{L^2} \le M$ for all $r > 0$, and we deduce that \[ \int_0^\varepsilonilon |f(r)|^2 r\,{\rm d} r \,\le\, M^2 \varepsilonilon^2\,, \qquad \int_L^\infty |f(r)|^2 r\,{\rm d} r \,\le\, \frac{1}{L^{2\alpha}} \|r^\alpha f\|_{L^2}^2 \,\le\, \frac{M^2}{L^{2\alpha}}\,, \] for any $\varepsilonilon > 0$ and any $L > 0$. In particular, the set $E_{M,\alpha}^*$ is bounded in $X$, and its elements are uniformly small near the origin and at infinity. Moreover, it follows from \eqref{eq:fintrep} and H\"older's inequality that \[ |r_1 f(r_1) - r_2 f(r_2)| \,\le\, M |r_1 - r_2|^{1/2}\,, \qquad \hbox{for all } r_1, r_2 > 0\,, \] which means that the elements of $E_{M,\alpha}^*$ are {\em uniformly equicontinuous} on any compact interval $[\varepsilonilon,L] \subset (0,\infty)$. These properties altogether imply that $E_{M,\alpha}^*$ is a compact subset of $X$. \end{proof} \subsection{Representation formulas}\label{sec53} Finally we give explicit representation formulas for the pressure $p$ satisfying \eqref{eq:pmk}, in terms of solutions of the homogeneous equation \begin{equation}\label{eq:homogene} -\partial_r^* \partial_r p(r) + \frac{m^2}{r^2}p(r) + k^2 p(r) \,=\, 0\,. \end{equation} If $k \neq 0$, a pair of linearly independent solutions of \eqref{eq:homogene} is given by the modified Bessel functions $I_m(|k|r)$ and $K_m(|k|r)$, see e.g. \cite[Section~9.6]{AS}. For later use, we recall that $I_{-m}(r) = I_m(r)$, $K_{-m}(r) = K_m(r)$, and $K_m(r) I_m'(r) - K_m'(r) I_m(r) = 1/r$ for all $r > 0$. Moreover, if $m \ge 1$, then \begin{equation}\label{eq:asym0} I_m(r) \,\sim\, \frac{1}{m!}\Bigl(\frac{r}{2}\Bigr)^m\,, \qquad K_m(r) \,\sim\, \frac{(m{-}1)!}{2}\Bigl(\frac{2}{r}\Bigr)^m\,, \qquad \hbox{as } r \to 0\,, \end{equation} whereas $I_0(r) \to 1$ and $K_0(r) \sim -\log(r)$ as $r \to 0$. For all $m \in \mathbb{Z}$, we also have \begin{equation}\label{eq:asyminf} I_m(r) \,\sim\, \frac{1}{\sqrt{2\pi}}\,\frac{e^r}{\sqrt{r}}\,, \qquad K_m(r) \,\sim\, \sqrt{\frac{\pi}{2}}\,\frac{e^{-r}}{\sqrt{r}}\,, \qquad \hbox{as } r \to +\infty\,. \end{equation} When $k = 0$ linearly independent solutions of \eqref{eq:homogene} are $r^{\pm m}$ if $m \neq 0$, and $\{1,\log(r)\}$ if $m = 0$. \begin{lem}\label{lem:rep} Assume that the vorticity profile $W$ satisfies assumption~H1. For any $m \in \mathbb{Z}$, $k \in \mathbb{R}$, and $u \in X_{m,k}$, the elliptic equation \eqref{eq:pmk} has a unique solution $p = P_{m,k}[u]$ such that $p(r) = \mathcal{O}(|\log r|^{1/2})$ as $r \to 0$ and $p(r) \to 0$ as $r \to +\infty$. If $k \neq 0$, we have $p = 2im p_1 + 2|k|p_2$ where \begin{equation}\label{eq:repmk} \begin{split} p_1(r) \,&=\, K_m(|k|r) \int_0^r I_m(|k|s) (s\Omega)' u_r(s) \,{\rm d} s + I_m(|k|r) \int_r^\infty K_m(|k|s) (s\Omega)' u_r(s) \,{\rm d} s\,, \\ p_2(r) \,&=\, K_m(|k|r) \int_0^r I_m'(|k|s) \Omega(s) u_\theta(s) s\,{\rm d} s + I_m(|k|r) \int_r^\infty K_m'(|k|s) \Omega(s) u_\theta(s) s\,{\rm d} s\,. 
\end{split} \end{equation} If $k = 0$ and $m \neq 0$, then $p = \sigma p_1 + p_2$ where $\sigma = m/|m|$ and \begin{equation}\label{eq:repm0} \begin{split} p_1(r) \,&=\, \frac{i}{r^{|m|}} \int_0^r s^{|m|} (s\Omega)'(s) u_r(s)\,{\rm d} s + i r^{|m|} \int_r^\infty \frac{1}{s^{|m|}} (s\Omega)'(s) u_r(s)\,{\rm d} s\,, \\ p_2(r) \,&=\, \frac{1}{r^{|m|}} \int_0^r s^{|m|} \Omega(s) u_\theta(s)\,{\rm d} s - r^{|m|} \int_r^\infty \frac{1}{s^{|m|}} \Omega(s) u_\theta(s)\,{\rm d} s\,. \end{split} \end{equation} Finally, if $k = m = 0$, then $p(r) = - 2 \int_r^\infty \Omega(s) u_\theta(s)\,{\rm d} s$. \end{lem} \begin{proof} In view of \eqref{eq:pmk} we can suppose without loss of generality that $k \ge 0$. If $k > 0$, we first assume that $u \in X_{m,k} \cap C^1_c(\mathbb{R}p)$ and we consider the linear elliptic equation \begin{equation}\label{eq:linell} -\partial_r^* \partial_r p(r) +\frac{m^2}{r^2}\,p(r) + k^2 p(r) \,=\, f(r)\,, \qquad r > 0\,, \end{equation} where $f = 2im(\partial_r^* \Omega)u_r - 2 \partial_r^* (\Omega\,u_\theta)$. The unique solution of \eqref{eq:linell} that is regular at the origin and decays to zero at infinity is \begin{equation}\label{eq:frep} p(r) \,=\, K_m(kr) \int_0^r I_m(ks) f(s) s\,{\rm d} s + I_m(kr) \int_r^\infty K_m(ks) f(s) s\,{\rm d} s\,, \qquad r > 0\,. \end{equation} Replacing $f$ by its expression and integrating by parts, we easily obtain the representation \eqref{eq:repmk}. The general case where $u$ is an arbitrary function in $X_{m,k}$ follows by a density argument, using Proposition~\ref{prop:approximation}. If $k = 0$ and $m \neq 0$, the solutions of the homogeneous equation \eqref{eq:homogene} are $r^{|m|}$ and $r^{-|m|}$, instead of $I_m(|k|r)$) and $K_m(|k|r)$. Proceeding exactly as above, we thus arrive at \eqref{eq:repm0} instead of \eqref{eq:repmk}. Finally, if $k = m = 0$, any solution of \eqref{eq:pmk} such that $\partial_r p \in L^2(\mathbb{R}p,r\,{\rm d} r)$ satisfies $\partial_r p = 2 \Omega u_\theta$, hence $p(r) = - 2 \int_r^\infty \Omega(s) u_\theta(s)\,{\rm d} s$. In all cases, the solution of \eqref{eq:pmk} given by the above formulas satisfies $p(r) = \mathcal{O}(|\log r|^{1/2})$ as $r \to 0$ and $p(r) \to 0$ as $r \to +\infty$, and is unique in that class. \end{proof} \end{document}
\begin{document} \mainmatter \title{The maximum time of 2-neighbour bootstrap percolation in grid graphs and some parameterized results} \begin{abstract} In 2-neighborhood bootstrap percolation on a graph $G$, an infection spreads according to the following deterministic rule: infected vertices of $G$ remain infected forever and in consecutive rounds healthy vertices with at least two already infected neighbors become infected. Percolation occurs if eventually every vertex is infected. The maximum time $t(G)$ is the maximum number of rounds needed to eventually infect the entire vertex set. In 2013, it was proved by Benevides et al \cite{eurocomb13} that $t(G)$ is NP-hard for planar graphs and that deciding whether $t(G)\geq k$ is polynomial time solvable for $k\leq 2$, but is NP-complete for $k\geq 4$. They left two open problems about the complexity for $k=3$ and for planar bipartite graphs. In 2014, we solved the first problem\cite{wg2014}. In this paper, we solve the second one by proving that $t(G)$ is NP-complete even in grid graphs with maximum degree 3. We also prove that $t(G)$ is polynomial time solvable for solid grid graphs with maximum degree 3. Moreover, we prove that the percolation time problem is W[1]-hard on the treewidth of the graph, but it is fixed parameter tractable with parameters treewidth$+k$ and maxdegree$+k$. \end{abstract} \keywords{2-neighbor bootstrap percolation, maximum percolation time, grid graph, fixed parameter tractability, treewidth} \section{Introduction} We consider a problem in which an infection spreads over the vertices of a connected simple graph $G$ following a deterministic spreading rule in such a way that an infected vertex will remain infected forever. Given a set $S \subseteq V(G)$ of initially infected vertices, we build a sequence $S_0, S_1, S_2, \ldots$ in which $S_0=S$ and $S_{i+1}$ is obtained from $S_i$ using such spreading rule. Under $r$-neighbor bootstrap percolation on a graph $G$, the spreading rule is a threshold rule in which $S_{i+1}$ is obtained from $S_i$ by adding to it the vertices of $G$ which have at least $r$ neighbors in $S_i$. We say that a set $S$ infects a vertex $v$ at time $i$ if $v \in S_i \setminus S_{i-1}$. Let, for any set of vertices $S$ and vertex $v$ of $G$, $t_r(G,S,v)$ be the minimum $t$ such that $v$ belongs to $S_t$ or, if there is no $t$ such that $v$ belongs to $S_t$, then $t_r(G,S,v) = \infty$. Also, we say that a set $S_0$ infects $G$, or that $S_0$ is a percolating set of $G$, if eventually every vertex of $G$ becomes infected, that is, there exists a $t$ such that $S_t = V(G)$. If $S$ is a percolating set of $G$, then we define $t_r(G,S)$ as the minimum $t$ such that $S_t = V(G)$. Also, define the {\em percolation time of $G$} as $t_r(G) = \max \{t_r(G,S) : S \text{ is a percolating set of } G\}$. In this paper, we shall focus on the case where $r=2$ and in such case we omit the subscript of the notations $t_r(G,S)$ and $t_r(G)$. Also, from the notation $t(G,S)$ and $t(G,S,v)$, when the parameter $G$ is clear from context, it will be omitted. Bootstrap percolation was introduced by Chalupa, Leath and Reich \cite{chalupa} as a model for certain interacting particle systems in physics. Since then it has found applications in clustering phenomena, sandpiles \cite{sandpiles}, and many other areas of statistical physics, as well as in neural networks \cite{neural2} and computer science \cite{cs1}. There are two broad classes of questions one can ask about bootstrap percolation. 
The first, and the most extensively studied, is what happens when the initial configuration $S_0$ is chosen randomly under some probability distribution? For example, vertices are included in $S_0$ independently with some fixed probability $p$. One would like to know how likely percolation is to occur, and if it does occur, how long it takes. The answer to these questions is now well understood for various types of graphs \cite{holroyd,balogh,balogh2,balogh4,bollobasHolmgrenSmithUzzell}. The second broad class of questions is the one of extremal questions. For example, what is the smallest or largest size of a percolating set with a given property? The size of the smallest percolating set in the $d$-dimensional grid, $[n]^d$, was studied by Pete and a summary can be found in \cite{BaloghPete}. Morris \cite{morris} and Riedl \cite{riedl} studied the maximum size of minimal percolating sets on the square grid $[n]^2$ and the hypercube $\{0,1\}^d$, respectively, answering a question posed by Bollob\'as. However, the problem of finding the smallest percolating set is NP-hard even on subgraphs of the square grid \cite{jayme13} and it is APX-hard even for bipartite graphs with maximum degree four \cite{waoa-13}. Moreover, it is hard \cite{ningchen} to approximate within a ratio $O(2^{\log^{1-\varepsilon}n})$, for any $\varepsilon>0$, unless $NP\subseteq DTIME(n^{polylog(n)})$. Another type of question is: what is the minimum or maximum time that percolation can take, given that $S_0$ satisfies certain properties? Recently, Przykucki \cite{Przykucki} determined the precise value of the maximum percolation time on the hypercube $2^{[n]}$ as a function of $n$, and Benevides and Przykucki \cite{fabricio1,fabriciob1} have similar results for the square grid $[n]^2$, also answering a question posed by Bollob\'as. In particular, they have a polynomial time dynamic programming algorithm to compute the maximum percolation time on rectangular grids \cite{fabricio1}. Here, we consider the decision version of the Percolation Time Problem, as stated below. \noindent{\sc Percolation Time} \\ {\em Input:} A graph $G$ and an integer $k$. \\ {\em Question:} Is $t(G) \geq k$? In 2013, Benevides et.al. \cite{eurocomb13}, among other results, proved that the Percolation Time Problem is polynomial time solvable for $k\leq 2$, but is NP-complete for $k\geq 4$ and, when restricted to bipartite graphs, it is NP-complete for $k\geq 7$. Moreover, it was proved that the Percolation Time Problem is NP-complete for planar graphs. They left three open questions about the complexity for $k=3$ in general graphs, the complexity for $3\leq k\leq 6$ in bipartite graphs and the complexity for planar bipartite graphs. In 2014, the first and the second questions were solved \cite{wg2014}: it was proved that the Percolation Time Problem is $O(mn^5)$-time solvable for $k=3$ in general graphs and, when restricted to bipartite graphs, it is $O(mn^3)$-time solvable for $k=3$, it is $O(m^2n^9)$-time solvable for $k=4$ and it is NP-complete for $k\geq 5$. In this paper, we solve the third question of \cite{eurocomb13}. We prove that the Percolation Time Problem is NP-complete for planar bipartite graphs. In fact, we prove a stronger result: the NP-completeness for grid graphs, which are induced subgraphs of grids, with maximum degree 3. There are NP-hard problems in grid graphs which are polynomial time solvable for solid grid graphs. 
For example, the Hamiltonian cycle problem is NP-complete for grid graphs \cite{itai82}, but it is polynomial time solvable for solid grid graphs \cite{lenhart97}. Motivated by the work of \cite{fabricio1} for rectangular grids, we obtain in this paper a polynomial time algorithm for solid grid graphs with maximum degree 3. Finally, we prove several complexity results for $t(G)$ in graphs with bounded maximum degree and bounded treewidth, some of which implies fixed parameter tractable algorithms for the Percolation Time Problem. Moreover, we obtain polynomial time algorithms for $(q,q-4)$-graphs, for any fixed $q$, which are the graphs such that every subset of at most $q$ vertices induces at most $q-4$ $P_4$'s. Cographs and $P_4$-sparse graphs are exactly the $(4,0)$-graphs and the $(5,1)$-graphs, respectively. These algorithms are fixed parameter tractable on the parameter $q$. \section{Percolation Time Problem in grid graphs with $\Delta = 3$} \label{griddelta3} In this section, we prove that the Percolation Time Problem is NP-complete in grid graphs with maximum degree $\Delta=3$. We also show that, when the graph is a grid graph with $\Delta=3$ and $k=O(\log n)$, the Percolation Time Problem can be solved in polynomial time. But, first, let us define a $S$-infection path and, then, prove two lemmas that will be useful in the proofs. Let $S$ be a percolating set. A path $P = v_0,v_1,\hdots,v_n$ is a $S$-\emph{infection path} if, for every $0\leq i\leq n-1$, $t(S,v_i) < t(S,v_{i+1})$. Notice that, if $t(S,v)=k$, then there is a $S$-infection path $v_0,v_1,\hdots,v_k=v$, where $t(S,v_i)=i$ for each $0\leq i\leq k$. Roughly speaking, the next lemma shows that, if the $S$-infection time of a vertex $v$ is decreased by the inclusion of a vertex $v'$ to $S$, then there is an infection path starting at $v'$ and ending at $v$. \begin{lemma} \label{lemacaminho} Let $S$ be a subset of $V(G)$, $v,v'$ be vertices of $G$ and $S' = S\cup\{v'\}$. If $t(S',v)<t(S,v)$, then there is a $S'$-infection path starting in $v'$ and ending in $v$. \end{lemma} \begin{proof} Notice that, since $t(S',v)<t(S,v)$, then $t(S',v)<\infty$, i.e., $S'$ infects $v$. Also, $t(S',v)$ cannot be 0; otherwise, $v\in S'$ and then $S=S'$ and $t(S,v)=t(S',v)$, a contradiction. Thus $t(S',v) \geq 1$. Let us prove by induction on $t(S',v)$ that there is a $S'$-infection path starting in $v'$ and ending in $v$. For $t(S',v) = 1$, we have that $v$ must be neighbor of two vertices in $S'$, where one of these two vertices must be $v'$ because, otherwise, $t(S,v) = 1 = t(S',v)$. Thus, the path $v',v$ is a $S'$-infection path. Now, suppose that the theorem holds for all values less than $k$. Let us prove that the theorem still holds if $t(S',v) = k$. We have that $v$ must have a neighbor $u$ such that $t(S',u) < t(S,u)$ and $t(S',u) < k$ because, otherwise, we would have that $t(S',v) = t(S,v)$. Thus, by our inductive hypothesis, there is a $S'$-infection path from some vertex $v'$ to $u$ and, since $t(S',u) < t(S',v) = k$, there is a $S'$-infection path from $v'$ to $v$. \end{proof} The next lemma, which is valid for every graph with maximum degree 3, is the main technical lemma of this section. \begin{lemma} \label{lemainfcam} Let $G$ be a connected graph with $\Delta(G)=3$ and $k$ a non-negative integer. 
Then, $t(G)\geq k$ if and only if $G$ has an induced path $P$ where either all vertices of $V(P)$ have degree 3 and $|E(P)|\geq 2k-2$ or all vertices of $V(P)$ have degree 3, except for one of his extremities, which has degree 2, and $|E(P)|\geq k-1$. \end{lemma} \begin{proof} First, suppose that $t(G)\geq k$. Let us prove that $G$ has an induced path $P$ where either all vertices in $V(P)$ have degree 3 and $|E(P)| \geq 2k-2$ or all vertices in $V(P)$ have degree 3, except for one of his extremities, which has degree 2, and $|E(P)| \geq k-1$. Since $t(G) \geq k$, there is a percolating set $S$ such that $t(S) \geq k$. Let $t=t(S)$ and let $v$ be a vertex that is infected by $S$ at time $t$. Note that $v$ cannot have degree 1, because otherwise $v\in S$, a contradiction. So, let us divide the proof in 2 cases. The first case occurs when $v$ has degree 2. In this case, let $P= v_1,\hdots,v_{t-1},v_t = v$ be a $S$-infection path where each $v_i$ is infected at time $i$ by $S$. Thus, we have that all vertices $v_1,v_2,\hdots,v_{t-2},v_{t-1}$ have degree three because each vertex $v_i$, for $1 \leq i \leq t-1$, must have two neighbors infected by $S$ at time $\leq i-1$ and, additionally, $v_i$ also has $v_{i+1}$, which is the next vertex in $P$, as his neighbor. Thus, since $\Delta = 3$, each vertex $v_i \in V(P)$, for $1 \leq i \leq t-1$, has exactly one neighbor infected at time $i+1$ by $S$, which is the vertex $v_{i+1}$, and has no neighbor infected at time $\geq i+2$ by $S$, which implies that no two consecutive vertices in $P$ are neighbors. Therefore, we have that $P$ is an induced path in $G$ such that $|E(P)| = t-1 \geq k-1$. The second case occurs when $v$ has degree 3. Thus, since there is no vertex infected by $S$ at time greater than the time of $v$, we have that $v$ has one neighbor infected by $S$ at time $\leq t-1$, another neighbor infected by $S$ at time $t-1$ and yet another neighbor infected by $S$ at time either $t-1$ or $t$. Let $t'$ be the infection time of the neighbor of $v'$ that is infected by $S$ at the greatest time among the neighbors of $v$, which may be $t$ or $t-1$. Let $P_1 = v_1,\hdots,v_{t-1},v_t = v$ and $P_2 = v'_1,\hdots,v'_{t'-1},v'_{t'}$ be two $S$-infection paths where each $v_i$ and $v'_i$ is infected at time $i$ by $S$ and $v'_{t'}$ is the neighbor of $v$ that is infected by $S$ at time $t'$. By the same arguments used in the first case, we have that both $P_1$ and $P_2$ are induced paths in $G$. Additionally, no vertex of $P_1$ is adjacent to a vertex of $P_2$, except the vertex $v$ that is adjacent to $v'_{t'}$ because, otherwise, we would have either that $v_{t-1} = v'_{t'}$ or that $v_t = v'_{t'}$. Thus, let $P$ be the path $v_1,\hdots,v_{t-1},v, v'_{t'},\hdots,v'_2,v'_1$, which is the resulting path from the union of the induced paths $P_1$ and $P_2$ through the edge between the vertices $v$ and $v'_{t'}$. We have that $P$ is an induced path in $G$ because there is no edge between two non-consecutive vertices of $P$. We also have that all vertices in $P$ have degree 3 and $|E(P)| = t + t' - 1 \geq t + (t-1) - 1 \geq 2k-2$. Now, suppose that $G$ has an induced path $P$ where either all of his vertices has degree 3 and $|E(P)| \geq 2k-2$ or all of his vertices has degree 3, except for one of his extremities, which has degree 2, and $|E(P)| \geq k-1$. Let us prove that $t(G) \geq k$. Suppose that $G$ has an induced path $P = v_1,v_2,\hdots,v_t$, for $t \geq k$, where all of his vertices have degree 3, except for $v_t$, which has degree 2. 
Since each vertex $v_i$ has degree 3, except $v_t$, which has degree 2, and $P$ is an induced path, then each vertex $v_i$ has exactly one neighbor outside of $P$, except for $v_1$, which has two neighbors outside of $P$. Let $S'$ be the set of all neighbors of the vertices $v_i$ that are not in $V(P)$. It is easy to see that $S'$ infects all vertices in $P$. Since $v_t$ has only two neighbors and one of them is in $S'$, then $t(S',v_t) = t(S',v_{t-1}) + 1$. Since $v_{t-1}$ has 3 neighbors, where one of them is in $S'$ and the other is $v_t$, which is infected by $S'$ after $v_{t-1}$, then $t(S',v_{t-1}) = t(S',v_{t-2}) + 1$. Therefore, by the same argument, we have that, for all $2 \leq i \leq t$, $t(S',v_i) = t(S',v_{i-1}) + 1$. Since $v_1$ has two neighbors in $S'$, then $v_1$ is infected by $S'$ at time 1 and, hence, for all $1 \leq i \leq t$, $t(S',v_i)=i$. Thus, there is a set $S'$ that infects $v_t$ at time $t$. Now, suppose that $G$ has an induced path $P = v_1,v_2,\hdots,v_{2t-2},v_{2t-1}$, where all of its vertices have degree 3. Since each vertex $v_i$ has degree 3 and $P$ is an induced path, then each vertex $v_i$ has exactly one neighbor outside of $P$, except for both $v_1$ and $v_{2t-1}$, which have two neighbors outside of $P$. Let $S'$ be the set of all neighbors of the vertices $v_i$ that are not in $V(P)$. Again, it is easy to see that $S'$ infects all vertices in $P$. Also, similarly to the prior case, it is not hard to see that all vertices of $P$ at distance $d$ from $v_t$ are infected by $S'$ at time $t - d$, i.e., $S'$ infects a vertex $v_i$ at time $t-|t-i|$. \begin{figure} \caption{A graph with $\Delta = 3$ infected by the set $S'$ to the left and by the percolating set $S$ to the right.} \label{infcammesmo} \end{figure} Thus, in both cases, we have that $S'$ infects each vertex $v_i$ in $P$ at time $t-|t-i|$ (in the first case, since $1 \leq i \leq t$, we have that $i = t - |t-i|$). Let $Y$ be the set of vertices that are neither in $V(P)$ nor in the neighborhood of any vertex in $V(P)$. Let $S = S' \cup Y$. We have that $S$ is a percolating set because all vertices in $V(G)$ are either in $V(P)$ or in the neighborhood of some vertex in $V(P)$ or in $Y$, and $S'$ infects all vertices that are either in $V(P)$ or in the neighborhood of some vertex in $V(P)$. Also, since all neighbors of the vertices in $V(P)$ that are outside of $P$ are in $S'$, $S$ cannot possibly infect the vertices in $V(P)$ at a time different from $S'$, as exemplified in Figure \ref{infcammesmo}. Thus, since $S$ is a percolating set that infects $v_t$ at time $t$, we have that $t(S) \geq t \geq k$ and, hence, $t(G) \geq k$. \end{proof} Before proving the NP-completeness result of this section, we use Lemma \ref{lemainfcam} to show that the Percolation Time Problem is polynomial time solvable for $k=O(\log n)$ when the graph has maximum degree 3. \begin{theorem}\label{teo-d3-logn} If $G$ is a graph with maximum degree 3, then deciding whether $t(G)\geq k$ can be done in time $O(n^{2c+3})$ for $k\leq c\cdot\log_2 n$ and $c>0$. \end{theorem} \begin{proof}[Sketch of the proof] We can decide whether $t(G) \geq k$ by making use of a modified version of the depth-first search. This version of the depth-first search with maximum search depth $\ell$ traverses all paths with $\ell+1$ vertices starting from some vertex $v$. For each $v \in V(G)$, we will run this version of the depth-first search starting at $v$. If $d(v) = 2$, we run the modified depth-first search with maximum search depth $k-1$.
If $d(v) = 3$, we run the modified depth-first search with maximum search depth $2k-2$. If there is a vertex $v$ such that the depth-first search that starts at $v$ finds a path that is an induced path, reaches the maximum depth and passes only through vertices of degree 3, except possibly for $v$, then, by Lemma \ref{lemainfcam}, $t(G) \geq k$. Otherwise, $t(G) < k$. Now, let us show that this algorithm runs in polynomial time. For each vertex $v$ in $G$, there are at most $3\cdot 2^{\ell-2}$ paths of length $\ell$ in $G$ that start at $v$, for any $\ell$. In this case, since $\ell\leq 2k-2$, there are at most $3\cdot 2^{2k-2}=3n^{2c}/4$ paths of length $\ell$ in $G$ that start at $v$. Therefore, since we have $n$ vertices and we take time $O(n^2)$ to obtain each path, the algorithm runs in time $O(n^{2c+3})$. \end{proof} Thus, if $k=O(\log n)$, we can decide whether $t(G) \geq k$ in polynomial time for every graph $G$ with $\Delta(G) = 3$. However, the following theorem states that the Percolation Time Problem is NP-complete, even when $G$ is restricted to be a grid graph with $\Delta = 3$. \begin{theorem} \label{teogrid} Deciding whether $t(G)\geq k$ is NP-complete when the input $G$ is restricted to be a grid graph with $\Delta(G)\leq 3$. \end{theorem} \begin{proof} Clearly, the problem is in NP. To prove that the problem is also NP-hard, we present a reduction from the Longest Path Problem with input restricted to grid graphs with maximum degree 3. The Longest Path Problem with input restricted to grid graphs with maximum degree 3 is NP-complete because the Hamiltonian Path Problem with the same restriction is also NP-complete \cite{papa1} and there is a trivial reduction from the Hamiltonian Path Problem to the Longest Path Problem that does not change the input graph: $G$ has a Hamiltonian path if and only if $G$ has a path of length greater than or equal to $n-1$. \begin{figure} \caption{An example grid graph $G$ with maximum degree 3 used as an instance of the Longest Path Problem.} \label{exemplo} \end{figure} Consider the following reduction from an instance $(G,k)$ of the Longest Path Problem, where $G$ is restricted to be a grid graph with maximum degree 3, to an instance $(G',3k+2)$ of the Percolation Time Problem, where $G'$ is also a grid graph with maximum degree 3: multiply the scale of the grid $G$ by three. Each edge in $G$ becomes a path in $G'$ with 4 vertices whose extremities are vertices that were originally in $G$. Let us call \textit{original vertices} the vertices in $G'$ that were originally in $G$. After that, for each original vertex $v$, if $d(v) < 3$, add to $G'$ $3 - d(v)$ vertices in free positions of the grid adjacent to $v$ and link them to $v$. Thus, after we do that, each original vertex has degree 3 in $G'$. Henceforth, if a vertex in $G'$ is not an original vertex at this point, then we will call it an \textit{auxiliary vertex}. Note that each auxiliary vertex is adjacent to exactly one original vertex and each original vertex is adjacent to 3 auxiliary vertices. After that, for each auxiliary vertex $v$, add a new vertex adjacent to $v$ in the following manner: if the original neighbor of $v$ is located above it, add a vertex adjacent to $v$ at its left position, if there is not one there already, and link it to $v$. If the original neighbor of $v$ is located below it, add a vertex adjacent to $v$ at its right position, if there is not one there already, and link it to $v$.
If the original neighbor of $v$ is located at its left position, add a vertex adjacent to $v$ at the position below it, if there is not one there already, and link it to $v$. If the original neighbor of $v$ is located at its right position, add a vertex adjacent to $v$ at the position above it, if there is not one there already, and link it to $v$. Figure \ref{bloco} shows how a $4\times 4$ block of $G'$ looks before and after we add these vertices. \begin{figure} \caption{A $4\times 4$ block before and after the addition of the auxiliary vertices' neighbors.} \label{bloco} \end{figure} Then, for each auxiliary vertex $v$, if $d(v) = 2$, add a new vertex adjacent to $v$ in the following position: if the original neighbor of $v$ is at the left position of $v$, add a vertex adjacent to $v$ at its right position. If the original neighbor of $v$ is at the right position of $v$, add a vertex adjacent to $v$ at its left position. If the original neighbor of $v$ is below $v$, add a vertex adjacent to $v$ above $v$. If the original neighbor of $v$ is above $v$, add a vertex adjacent to $v$ below $v$. Thus, the construction of $G'$ is finished. Since $G$ is a grid graph and, every time an original vertex and an auxiliary vertex are in adjacent positions in the grid, they are linked, then $G'$ is a grid graph. Note that all original and auxiliary vertices have degree 3 and they are the only vertices that have degree 3. Let us call \textit{corner vertices} all the vertices that have degree 2 in $G'$. Also, note that, for each corner vertex, there is exactly one original vertex at distance 2 from it, and, for each original vertex, there is exactly one corner vertex at distance 2 from it. This happens because each original vertex has degree exactly three. Let $f$ be the bijective function that maps each original vertex to the corner vertex that is at distance 2 from it. Figure \ref{exemplototal} shows the reduction applied to the grid graph of Figure \ref{exemplo}. It is worth noting that, in $G'$, a path $P$ that has only original and auxiliary vertices and starts at an original vertex has length multiple of 3 if and only if it ends at an original vertex. Also, among every 3 consecutive vertices of such a path, two are auxiliary vertices and one is an original vertex. \begin{figure} \caption{The graph $G'$ obtained by applying the reduction to the grid graph of Figure \ref{exemplo}.} \label{exemplototal} \end{figure} Now, let us prove that $G$ has a path of length $\geq k$ if and only if $t(G') \geq 3k+2$. Suppose that $G$ is a grid graph with maximum degree 3 that has a path of length $\geq k$. Let us prove that $t(G') \geq 3k+2$. Since $G$ has a path of length $\geq k$, we have that $G'$ has an induced path $P$ of length $\geq 3k$ that follows the same route in the grid, which implies that $P$ passes only through original and auxiliary vertices. Note that, when an auxiliary vertex is in $P$, its auxiliary neighbor is also in $P$. Let $v$ and $v'$ be the extremities of $P$ and let $q' = f(v')$. Since $v$ is an original vertex, let $w$ be any auxiliary neighbor of $v$ that is not in $V(P)$. Note that all neighbors of $w$, except $v$, are not in $V(P)$. Let $r$ be the auxiliary neighbor of $v'$ that is in $P$ and let $P'$ be the induced path that we obtain from $P$ by adding $w$, removing $v'$ and adding all vertices of a shortest path between $r$ and $q'$, excluding $r$, that contains only vertices not adjacent to $w$ and passes only through original and auxiliary vertices.
Since $P$ is an induced path and we removed one vertex and added only one induced path that has either 1 or 3 vertices to create $P'$, we have that $P'$ is an induced path with length $\geq 3k+1$ where all of its vertices have degree 3, except for $q'$, which has degree 2. Therefore, by Lemma \ref{lemainfcam}, we have that $t(G') \geq 3k+2$. Now, suppose that $G$ is a grid graph with maximum degree 3 such that, when we apply the reduction to $G$ to create $G'$, we have that $t(G') \geq 3k+2$. Let us prove that $G$ has a path of length $\geq k$. Since $t(G') \geq 3k+2$, applying Lemma \ref{lemainfcam}, we have that $G'$ has an induced path $P$ where either all vertices in $V(P)$ have degree 3 and $|E(P)| \geq 6k+2$ or all vertices in $V(P)$ have degree 3, except for one of its extremities, which has degree 2, and $|E(P)| \geq 3k+1$. First, suppose that $G'$ has an induced path $P$ where all vertices in $V(P)$ have degree 3 and $|E(P)| \geq 6k+2$. Since the only vertices that have degree 3 are the original and auxiliary vertices and, among every three consecutive vertices in $P$, there is one original vertex and two auxiliary vertices, it is easy to see that $P$ has at least $k+1$ original vertices and, thus, there is a path in $G$ of length at least $k$. Finally, suppose that $G'$ has an induced path $P$ where all vertices in $V(P)$ have degree 3, except for one of its extremities, which has degree 2, and $|E(P)| \geq 3k+1$. It is enough to analyze the case $|E(P)| = 3k+1$ because, if $|E(P)| > 3k+1$, any subpath of $P$ of length $3k+1$ that starts at the extremity of $P$ that has degree 2 is an induced path of length $3k+1$ where all of its vertices have degree 3, except for one of its extremities, which has degree 2. So, let us say that $P$ starts at the vertex that has degree 2. Since the only vertices that have degree 2 are corner vertices, then $P$ starts at a corner vertex. Let $q$ be that corner vertex, let $q' = f^{-1}(q)$ and let $v$ be the other extremity of $P$. Suppose that $P$ passes through $q'$. Since $P$ is an induced path, then $q'$ is the third vertex of $P$. Since $q$ and $q'$ are at distance 2 from each other and $|E(P)| = 3k+1$, then $v$ is an auxiliary vertex whose original neighbor, say $v'$, is not in $P$. Let us append $v'$ to $P$ and remove all vertices between $q$ and $q'$, including $q$ and excluding $q'$. So, since $P$ starts at $q'$, an original vertex, ends at $v'$, another original vertex, and has length $3k$, then there is a path in $G$ of length greater than or equal to $k$. Now, suppose that $P$ does not pass through $q'$. Since $|E(P)| = 3k+1$, then $v$ is an auxiliary vertex whose original neighbor, say $v'$, is in $P$. Let us remove $q$ and $v$ from $P$ and append $q'$ in the place of $q$. Thus, since $P$ starts at $q'$, an original vertex, ends at $v'$, another original vertex, and has length $3k$, then there is a path in $G$ of length greater than or equal to $k$. \end{proof} \section{Percolation Time Problem in solid grid graphs with $\Delta = 3$} \label{solidgriddelta3} A solid grid graph is a grid graph in which all of its bounded faces have area one. There are NP-hard problems in grid graphs that are polynomial time solvable for solid grid graphs. For example, since 1982 it is known that the Hamiltonian cycle problem is NP-hard for grid graphs \cite{itai82}, but, in 1997, it was proved that it is polynomial time solvable for solid grid graphs \cite{lenhart97}.
Motivated by the work of \cite{fabricio1} on the maximum percolation time for rectangular grids, we obtain in this section a polynomial time algorithm for solid grid graphs with maximum degree 3. However, the Percolation Time Problem for solid grid graphs with maximum degree 4 is still open. \begin{theorem} For any solid grid graph $G$ with $\Delta = 3$, $t(G)$ can be found in $O(n^2)$ time. \end{theorem} \begin{proof} If a solid grid graph has $\Delta = 3$, then, since it is $K_{1,4}$-free, it is a graph formed only by ladders $L_k$, which are grid graphs with dimensions $2 \times k$, and by paths, possibly linking these ladders through the vertices at their extremities. Let the extremities of a ladder be the four vertices that have only two neighbors in the ladder and let all the other vertices be the vertices internal to the ladder. Figure \ref{solidgrid} shows an example of a solid grid graph with $\Delta = 3$. \begin{figure} \caption{An example of a solid grid graph with $\Delta = 3$.} \label{solidgrid} \end{figure} To find the percolation time of $G$, according to Lemma \ref{lemainfcam}, it is enough to find both the longest induced path that starts at a degree-2 vertex and then passes only through vertices with degree 3, and the longest induced path that passes only through vertices with degree 3. Thus, since all bounded faces of $G$ are squares with area one and $G$ is composed only of paths and ladders, the only difficulty in calculating $t(G)$ is to find the longest induced paths in the ladders between any two extremities that pass only through vertices with degree 3. However, one can easily calculate the longest induced paths between any two extremities of a ladder $L_k$: if the two extremities are neighbors, the length of the longest induced paths between them is 1; if the two extremities are at distance $k-1$, the length of the longest induced paths between them is $(k-t) + 2 \cdot \lfloor (k-t+1)/4 \rfloor - 1 + t$; if the two extremities are at distance $k$, the length of the longest induced paths between them is $(k-t) + 2 \cdot \lfloor (k-t-1)/4 \rfloor + t$, where $t$ is how many of the two other extremities have degree 2. \begin{figure} \caption{The weighted graph obtained from the solid grid graph of Figure \ref{solidgrid} by replacing each ladder with a weighted $K_4$.} \label{solidgridtransf} \end{figure} So, first, we transform $G$ into a weighted graph $G'$, where $G'$ is the same graph as $G$ with all the ladders replaced by weighted $K_4$'s, in which the weight of an edge between two vertices of a $K_4$ represents the length of a longest induced path between the corresponding extremities of the ladder in $G$ that passes only through vertices with degree 3. The weight of all the other edges is 1. Figure \ref{solidgridtransf} shows the transformation applied to the graph of Figure \ref{solidgrid}. Note that there is exactly one induced path between any two vertices in $G'$, whose length is equal to that of the longest induced path between the same two vertices in $G$. It is not hard to see that this transformation from $G$ to $G'$ can be done in linear time.
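As an aside, the closed-form lengths above are the only ladder-specific ingredient of the transformation. The following Python fragment is our own illustrative sketch of their evaluation (the function name is ours and does not come from the paper); it takes the ladder length $k$, the grid distance between the two chosen extremities and the number $t$ of the two remaining extremities that have degree 2.
\begin{verbatim}
def ladder_edge_weight(k, dist, t):
    # Length of a longest induced path between two extremities of a ladder
    # L_k (a 2 x k grid) that uses only vertices of degree 3, following the
    # case analysis in the proof above.
    #   dist: grid distance between the two chosen extremities (1, k-1 or k)
    #   t:    how many of the two *other* extremities have degree 2
    if dist == 1:
        return 1
    if dist == k - 1:
        return (k - t) + 2 * ((k - t + 1) // 4) - 1 + t
    if dist == k:
        return (k - t) + 2 * ((k - t - 1) // 4) + t
    raise ValueError("two extremities of L_k are at distance 1, k-1 or k")
\end{verbatim}
These values become the edge weights of the corresponding $K_4$ in $G'$.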
\begin{algorithm}[ht] \DontPrintSemicolon \SetKwFunction{algo}{MaximumTimeSolidGrid$\Delta{}3$} \SetKwFunction{funo}{\texttt{LongestInducedPathFrom}} \SetKwProg{myalg}{Algorithm}{}{} \myalg{\algo{G}}{ $G' =$ Transform$(G)$\\ maxPercTime $= 0$\\ \ForAll{$u \in V(G')$ such that $d_G(u) \geq 2$}{ \If{$d_G(u) = 2$}{ percTimeU = \funo{$G',u$}$ + 1$\\ } \Else{ percTimeU $= \lfloor($\funo{$G',u$}$ + 2)/2\rfloor$\\ } \If{maxPercTime $<$ percTimeU}{ maxPercTime $=$ percTimeU\\ } } \Return {maxPercTime} } \caption{\label{algsolgriddelta3}Algorithm that finds $t(G)$ for any solid grid graph $G$ with $\Delta = 3$} \end{algorithm} In Algorithm \ref{algsolgriddelta3}, let $w(u,v)$ be the weight of the edge $(u,v)$. The algorithm, for each vertex $u \in V(G')$ such that $d_G(u) \geq 2$, calls the function \texttt{LongestInducedPathFrom}, which does a depth-first search to find the longest induced path in $G'$ from $u$ such that the last vertex is the only vertex in the path that either has degree $\leq 2$, besides perhaps the vertex $u$, or is in the neighborhood of a vertex already in the path, and then subtracts one from the length of the found path. This is necessary because a longest induced path from some vertex $u$ in $G$ can end in a vertex $v$ internal to a ladder, but internal vertices of a ladder are not represented in $G'$. However, if that happens, since all vertices internal to a ladder have degree 3, then $v$ must be adjacent to some vertex at the extremity of the ladder that has degree 2. In any case, the resulting length corresponds to the length of the longest induced path in $G$ beginning at $u$ whose last vertex has degree 3 and is not in the neighborhood of any vertex already in the path. Then, it compares all these values, according to Lemma \ref{lemainfcam}, to find $t(G)$. Since there is only one induced path between any two vertices in $G'$, we have that the recursive function \texttt{LongestInducedPathFrom} takes the same time as any depth-first search algorithm. Thus, since $m = O(n)$, the function \texttt{LongestInducedPathFrom} takes $O(n)$ time. Therefore, Algorithm \ref{algsolgriddelta3} takes $O(n^2)$ time. \end{proof} \section{Percolation Time Problem in graphs with bounded maximum degree} \label{delta4} In Section 2 (Theorem \ref{teo-d3-logn}), we proved that the Percolation Time Problem is polynomial time solvable in graphs with $\Delta(G)\leq 3$ for $k=O(\log n)$. In this section, we prove in Theorem \ref{teodelta4log} that this does not happen for general graphs with fixed maximum degree $\Delta\geq 4$, unless P$=$NP. However, if the percolation time $k$ is also fixed, then we prove in Theorem \ref{teo-fpt-delta} that the Percolation Time Problem is solvable in quadratic time (in other words, it is fixed parameter tractable on $\Delta(G)+k$). \begin{theorem}\label{teodelta4log} Let $\Delta\geq 4$ be fixed. Deciding whether $t(G)\geq k$ is NP-complete for graphs with bounded maximum degree $\Delta$ and any $k \geq \log_{\Delta-2}n$. \end{theorem} \begin{proof} \begin{figure} \caption{The gadget added to $G$ for each clause $C_i$ of $\mathcal{C}$.} \label{gadget-delta3} \end{figure} We present a reduction from the variation of the $\mathbf{SAT}$ problem where each clause has exactly three literals and each variable appears in at most four clauses \cite{craig1984}. Given $M$ clauses $\mathcal{C}=\{C_1,\ldots,C_M\}$ on $N$ variables $X=\{x_1, \ldots,x_N\}$ as an instance of $\mathbf{SAT}$, we denote the three literals of $C_i$ by $\ell_{i,1}$, $\ell_{i,2}$ and $\ell_{i,3}$.
Note that, since any variable can appear in at most 4 clauses, we have $N/3 \leq M \leq 4N/3$. So, first, let us show how to construct a graph $G$ with maximum degree $\Delta$. For each clause $C_i$ of $\mathcal{C}$, add to $G$ a gadget such as the one in Figure~\ref{gadget-delta3}. Then, for each pair of literals $\ell_{i,a}, \ell_{j,b}$ such that one is the negation of the other, add a vertex $y_{(i,a),(j,b)}$, link it to either $w^A_{i,a}$ or $w^B_{i,a}$ and link it to either $w^A_{j,b}$ or $w^B_{j,b}$, but always respecting the restriction of degree at most 4 for the vertices $w^A_{i,a}$, $w^B_{i,a}$, $w^A_{j,b}$ and $w^B_{j,b}$. Since each variable can appear in at most 4 clauses, it is always possible to do that. Let $Y$ be the set of all vertices $y_{(i,a),(j,b)}$ created this way. Notice that $y=|Y| \leq 4N$. \begin{figure} \caption{Linking the vertices of $Y$ to the leaves of the tree $T$: an example for $\Delta=4$ and $y=6$.} \label{linkYtoTree} \end{figure} Then, add the maximum full $(\Delta-2)$-ary tree with root $z$ such that the number of leaves is less than $y$ and, then, add a new vertex of degree one adjacent to each vertex of the tree. Let $T$ be this tree and $t=|V(T)|$. After that, link each leaf to at least one and at most $\Delta-2$ vertices in $Y$. Thus, each vertex in the tree has degree $\Delta$, except for the leaves, which have degree at most $\Delta$, and $z$, which has degree $\Delta-1$. Figure \ref{linkYtoTree} shows an example for $\Delta=4$ and $y=6$. Notice that $t \leq 2 \cdot \frac{(\Delta - 2)y - 1}{\Delta - 3}\leq 16N$ and $height(T) = \lceil \log_{\Delta - 2} y \rceil$. Let $c = (\Delta-2)^{-8}$, $\alpha = xc$, where $x = \max(41 + \lceil c \rceil,\lceil 1/c \rceil)$, $r = \lceil \log_{\Delta - 2} (4N\alpha) \rceil - \lceil \log_{\Delta - 2} y \rceil$ and $\beta = 4Nx - (39M + y + t + 2 + 2r)$. It is not difficult to see that $r$ and $\beta$ are non-negative integers. If $r>0$, add a path with $r$ vertices, link one end to $z$ and let $q$ be the other end. Also, add a new neighbor of degree one to each vertex that belongs to the path. Let $P'$ be the set of vertices in this path and their neighbors of degree one. If $r = 0$, let $q = z$. Finally, add a path with $\beta+2$ vertices and link one end to $q$. Let $P$ be the set of vertices in this path and let $x$ be the vertex in $P$ that is adjacent to $q$. By our construction, since $\Delta \geq 4$, we have that $G$ is a graph in which every vertex has degree at most $\Delta$. Notice that any percolating set must contain a vertex of $\{u_{i,1},u_{i,2},u_{i,3}\}$ for each clause $C_i$ of $\mathcal{C}$ and all vertices that have degree 1. Thus, following similar arguments presented in \cite{wg2014}, it is possible to prove that the maximum percolation time of the vertex $z$ is $height(T)+7$ if and only if $\mathcal{C}$ is satisfiable, which implies that the maximum percolation time of the vertex $x$ is $\lceil \log_{\Delta - 2} (4N\alpha) \rceil+8$ if and only if $\mathcal{C}$ is satisfiable. Then, we have that $\mathcal{C}$ is satisfiable if and only if $t(G) \geq \lceil \log_{\Delta - 2} (4N\alpha) \rceil+8$, but, since $n = |V(G)| = 39M + y + t + 2 + 2r + \beta$, then $4N\alpha = c \cdot n$. Therefore, since $c=(\Delta - 2)^{-8}$, $\mathcal{C}$ is satisfiable if and only if $t(G) \geq \lceil \log_{\Delta - 2} n \rceil$. \end{proof} We already saw that the Percolation Time Problem is NP-hard for graphs with maximum degree $\Delta(G)\geq 3$. In this section, we prove that the Percolation Time Problem is FPT on $\Delta(G)+k$; in fact, Theorem~\ref{teo-fpt-delta} below gives a stronger result. Its proof enumerates, for each vertex $u$, every way of initially infecting the ball of radius $k-1$ around $u$, while all vertices at distance at least $k$ from $u$ are taken as initially infected.
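The following Python fragment is our own illustrative sketch of this enumeration (the names are ours; it reuses an infection-time simulation routine such as the \texttt{infection\_times} sketch in Section~\ref{griddelta3}, passed as a parameter), and it is meant only to illustrate the idea, not to reproduce the running time stated in the theorem.
\begin{verbatim}
from itertools import combinations

def percolation_time_at_least(adj, k, infection_times):
    # Decide whether t(G) >= k (for k >= 1): t(G) >= k iff some vertex u is
    # infected at time exactly k by a percolating set that contains every
    # vertex at distance at least k from u.
    for u in adj:
        # Ball N_{<=k-1}(u), obtained by k-1 breadth-first expansions.
        ball, frontier = {u}, {u}
        for _ in range(k - 1):
            frontier = {w for v in frontier for w in adj[v]} - ball
            ball |= frontier
        outside = set(adj) - ball       # N_{>=k}(u), always initially infected
        inner = list(ball - {u})        # u itself is never chosen: t(S,u) = k >= 1
        for r in range(len(inner) + 1):
            for chosen in combinations(inner, r):
                S = outside | set(chosen)
                times = infection_times(adj, S)
                if None not in times.values() and times[u] == k:
                    return True
    return False
\end{verbatim}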
\begin{theorem}\label{teo-fpt-delta} The Percolation Time Problem is fixed parameter tractable with parameter $\Delta(G)+k$. Moreover, for fixed $\Delta$, the Percolation Time Problem is polynomial time solvable in graphs with bounded maximum degree $\Delta$ for $k=\log_\Delta O(\log n)$, if $\Delta\geq 4$, and for $k=O(\log n)$, if $\Delta = 3$. \end{theorem} \begin{proof} Let $\Delta=\Delta(G)$ and let $u\in V(G)$. Then $|N_{\leq k}(u)|\leq\Delta^k$ and, consequently, the power set $2^{N_{\leq k}(u)}$ has $2^{|N_{\leq k}(u)|}\leq 2^{\Delta^k}$ sets. We claim that $t(G)\geq k$ if and only if there is a vertex $u$ and a percolating set $S \supseteq N_{\geq k}(u)$ such that $t(G,S,u)=k$. If $t(G)\geq k$, then there is a percolating set $S'$ that infects some vertex $u$ at time $k$. In \cite{wg2014}, it was proved that, given a graph $G$, a set $Q\subseteq V(G)$ and a vertex $z\in V(G)\setminus Q$, if $t(G,Q,w)\geq k$, then $t(G,Q,w)\geq t(G,Q\cup\{z\},w)\geq k$, for any $k$ and any $w\in N_{\geq k}(z)$. Then, applying this result once for each vertex in $N_{\geq k}(u)$, the percolating set $S = S'\cup N_{\geq k}(u)$ also infects $u$ at time $k$. On the other hand, if there is a percolating set $S \supseteq N_{\geq k}(u)$ such that $t(G,S,u)=k$, for some vertex $u$, then, trivially, $t(G)\geq k$. Hence the claim is true. Therefore, since, for each vertex $u$ and set $S' \subseteq N_{\leq k-1}(u)$, it takes $O(km)$ time to know whether the set $S' \cup N_{\geq k}(u)$ infects $u$ at time $k$, this equivalence gives us an algorithm that decides whether $t(G)\geq k$ in time $n\cdot O(m + km\cdot 2^{\Delta^k}) = O(2^{\Delta^k}k\Delta \cdot n^2)$, since $m=O(\Delta n)$. Notice that, if $k=\log_\Delta O(\log n)$, then the time is polynomial in $n$. Moreover, if $\Delta = 3$, by Theorem \ref{teo-d3-logn}, we are done. \end{proof} \section{Fixed Parameter Tractability on the treewidth} \label{fpt} We say that a decision problem is \emph{fixed parameter tractable} (or just \emph{fpt}) on some parameter $\Psi$ if there exists an algorithm (called \emph{fpt-algorithm}) that solves the problem in time $f(\Psi)\cdot n^{O(1)}$, where $n$ is the size of the input and $f$ is an arbitrary function depending only on the parameter $\Psi$. In this section, we obtain some results regarding the fixed parameter tractability of the Percolation Time Problem when the parameter is the treewidth $tw(G)$ of the graph. We prove in Lemma \ref{fpt-tw1} that the Percolation Time Problem is FPT on $tw(G)+k$ (that is, deciding whether $t(G)\geq k$ is fpt on $tw(G)+k$). In fact, we prove a stronger (but technical) result in Theorem \ref{fpt-tw3}: the Percolation Time Problem is polynomial time solvable for fixed $tw(G)$ and it is linear time solvable for fixed $tw(G)+k$. And what happens when $tw(G)$ is fixed, but $k$ is not fixed? Is the problem FPT on the treewidth alone? We prove in Theorem \ref{fpt-tw2} that the Percolation Time Problem is W[1]-hard when parameterized by the treewidth. \begin{lemma}\label{fpt-tw1} The Percolation Time Problem is fixed parameter tractable with parameter $cwd(G)+k$, where $cwd(G)$ is the clique-width of $G$. Consequently, the Percolation Time Problem is fpt on $tw(G)+k$.
\end{lemma} \begin{proof} A consequence of Courcelle's theorem \cite{fpt1,fpt2} states that, if a decision problem on graphs can be expressed by a Monadic Second Order (MSO$_1$) sentence $\varphi$ (with quantification only over vertex subsets), then this problem is fixed parameter tractable on the parameter $cwd(G)+|\varphi|$, where $cwd(G)$ is the clique-width of $G$. It is known that fixed parameter tractability on the clique-width implies fixed parameter tractability on the treewidth. Moreover, the running time is linear in the size of the input. The Percolation Time Problem can be expressed by the following MSO$_1$-sentence: \[ maxtime_k\ :=\ \exists w,X_0,X_1,\ldots,X_k\ \forall x\ \Big(x\in X_k\Big)\ \wedge\ \left(\bigwedge_{0\leq i<k}\big(x\in X_i\to x\in X_{i+1}\big)\right)\ \wedge \] \[ \left(\bigwedge_{0\leq i<k}\Big(\big(x\in X_{i+1}\setminus X_i\big)\to\exists y,z\,\big(Exy\wedge Exz\wedge (y\in X_i)\wedge (z\in X_i)\big)\Big)\right)\ \wedge\ \Big(w\in X_k\setminus X_{k-1}\Big), \] where $X_i$ represents the set of vertices infected up to time $i$, $Exy$ is true if $xy$ is an edge (and false, otherwise) and $\wedge$ is the \emph{and} operator. This MSO$_1$ sentence asserts that all vertices are infected by time $k$, that a vertex infected by time $i$ remains infected by time $i+1$, that a vertex infected by time $i+1$, but not infected by time $i$, has two neighbors infected by time $i$, and that there exists a vertex $w$ infected by time $k$ but not infected by time $k-1$. \end{proof} The proof of the next theorem uses some technicalities inspired by \cite{Fellows11}. \begin{theorem}\label{fpt-tw2} The Percolation Time Problem is W[1]-hard with parameter $tw(G)$. \end{theorem} \begin{proof} Let us prove that the Percolation Time Problem is W[1]-hard when parameterized by the treewidth. We present a reduction from the Multicolored Clique Problem, which is W[1]-hard when parameterized by the number of colors $k$ \cite{multihard}. Given a graph $G$, an integer $k$ and a partition $V_1,V_2,\hdots,V_k$ of $V(G)$, which represents the colors of the vertices, the Multicolored Clique Problem asks for a $k$-clique containing exactly one vertex from each partition $V_i$. Let us show a parameterized reduction from an instance $(G,k,C)$ of the Multicolored Clique Problem to an instance of the Percolation Time Problem whose graph $G'$ has treewidth bounded by $k' = 2k+3$. To save some space in the figures, every time we have a path where every vertex of this path has one pendant neighbor, we will replace it by a weighted edge, as in Figure \ref{path}. So, if the vertex $i$ is infected at time $t$ by some set, then, if the infection follows the path $v_1,v_2,\hdots,v_{n-1}$, the vertex $v_{n-1}$ will be infected at time $t + (n-1)$ by the same set. \begin{figure} \caption{A path in which every vertex has one pendant neighbor and the weighted edge that replaces it in the figures.} \label{path} \end{figure} Let us describe how to construct the graph $G'$. Let $n = |V(G)|$. First, assign the labels $v_1,v_2,\hdots, v_n$ arbitrarily to the vertices in $G$ and, for $1 \leq i \leq n$, let $M_i^a = 2n + 2i - 2$ and $M_i^b = 2n - 2i + 2$. For $1 \leq x \leq k$, we add the gadget $Q^x$ shown in Figure \ref{ColorGadget}, where the vertices $v_i, v_j, \hdots$ are the vertices of the partition $V_x$ in the graph $G$. Let $T = \bigcup_{1\leq i \leq k} \{a^i,b^i\}$. Also, let $V^x$ be the set of all vertices in $\{v_1,v_2,\hdots,v_n\} \cap V(Q^x)$.
\begin{figure} \caption{The gadget $Q^x$ added for each partition $V_x$.} \label{ColorGadget} \end{figure} For each pair of vertices $v_i \in V_x$ and $v_j \in V_y$, for $x \neq y$, such that $(v_i,v_j) \notin E(G)$, we add a gadget such as the one in Figure \ref{ChoiceGadget}. Also, we add the vertices $a_i^j,b_i^j,a_j^i,b_j^i$ and link $a_i^j$ to $a^x$, $b_i^j$ to $b^x$, $a_j^i$ to $a^y$ and $b_j^i$ to $b^y$. After that, link the vertex $vo_i^j$ to $a_i^j$ and to $b_i^j$ with edges with weights $M_i^a$ and $M_i^b$, respectively, and link the vertex $vo_j^i$ to $a_j^i$ and to $b_j^i$ with edges with weights $M_j^a$ and $M_j^b$, respectively. Let $VC_i^j = VC_j^i = \{vc_i^j,vc_j^i\}$. Figure \ref{exemploChoiceGadget} shows an example where $v_i,v_s \in V(Q^x)$ and $(v_i,v_j),(v_s,v_r) \notin E(G)$. \begin{figure} \caption{The gadget added for each pair of non-adjacent vertices $v_i \in V_x$ and $v_j \in V_y$ with $x \neq y$.} \label{ChoiceGadget} \end{figure} \begin{figure} \caption{An example where $v_i,v_s \in V(Q^x)$ and $(v_i,v_j),(v_s,v_r) \notin E(G)$.} \label{exemploChoiceGadget} \end{figure} Let $E$ be the number of vertices added up to this point, including the vertices hidden by the weighted edges. We continue the construction of $G'$ by adding, for each pair of vertices $a_i^j$ and $b_i^j$, a vertex $l_i^j$ and linking both $a_i^j$ and $b_i^j$ to it with edges with weights $E-M_i^a-2$ and $E-M_i^b-2$, respectively. Let $L$ be the set of all vertices $l_i^j$ added in that manner. Then, add a vertex $p$, add another vertex and link it to $p$, and also link all vertices in $L$ to $p$. Let $D$ be the number of vertices added up to this point. Finally, add a vertex $z$, add another vertex and link it to $z$, and link $p$ to $z$ with an edge with weight $D$. Figure \ref{linksFinale} shows an example of this step of the construction. Since $|V(G')| = O(n^5)$, $G'$ can be constructed in polynomial time. Note that it is necessary and sufficient to choose one vertex from each set $V^x$ and each set $VC_i^j$ to be part of a set $S$ in order for $S$ to infect the whole graph $G'$. \begin{figure} \caption{The final step of the construction, linking the vertices $l_i^j$ to $p$ and $p$ to $z$.} \label{linksFinale} \end{figure} \begin{figure} \caption{An example instance of the Multicolored Clique Problem with 3 partitions.} \label{GraphG} \end{figure} Figure \ref{GraphG} shows an example instance of the Multicolored Clique Problem with 3 partitions. Figures \ref{GraphGl1} and \ref{GraphGl2} show the graph $G'$ that results from our construction applied to the graph in Figure \ref{GraphG}. \begin{figure} \caption{The graph $G'$ obtained from the instance of Figure \ref{GraphG} (first part).} \label{GraphGl1} \end{figure} \begin{figure} \caption{The graph $G'$ obtained from the instance of Figure \ref{GraphG} (second part).} \label{GraphGl2} \end{figure} Since the set $S = T \cup \{p\}$ is a separator of $G'$ and each connected component of $G' - S$ has treewidth at most 2, we have that $tw(G') \leq |S| + (2+1) - 1 = 2k+3$. Now, let us prove that there is a $k$-clique in $G$ such that all vertices are in different partitions if and only if $t(G') \geq D + E + 1$. First, assume that there is a $k$-clique in $G$ such that all vertices are in different partitions. Let us build a percolating set $S$ such that $t(S) \geq D+E+1$. Let $C = \{v_{c_1},v_{c_2},\hdots,v_{c_k}\}$ be the set of vertices in this $k$-clique. Initially, set $S = C$. For each set $VC_i^j$ such that, for some $1 \leq s \leq k$, either $i = c_s$ or $j = c_s$, add to $S$ the vertex $vc_i^j$, if $j = c_s$, or add $vc_j^i$ to $S$, if $i = c_s$. Since $C$ induces a $k$-clique in $G$, we do not have a set $VC_i^j$ such that, for some $1 \leq s,r \leq k$, $i = c_s$ and $j = c_r$.
Finally, for each set $VC_i^j$ such that, for all $1 \leq s \leq k$, neither $i = c_s$ nor $j = c_s$, add, arbitrarily, either $vc_i^j$ or $vc_j^i$ to $S$. We have that $S$ infects each vertex $a^x$ at time $M_{v_{c_x}}^a$ and each vertex $b^x$ at time $M_{v_{c_x}}^b$. Since, for all $1 \leq i \neq j \leq n$, either $M_i^a \geq M_j^a + 2$ and $M_i^b \leq M_j^b - 2$ or $M_i^a \leq M_j^a - 2$ and $M_i^b \geq M_j^b + 2$, then, for all vertices $a_i^j$ and $b_i^j$, either $a_i^j$ is infected by $S$ at time at least $M_i^a + 2$ or $b_i^j$ is infected by $S$ at time at least $M_i^b + 2$ and, thus, all vertices $l_i^j \in L$ are infected by $S$ at time at least $E$. Then, we can conclude that $p$ is infected by $S$ at time at least $E+1$ and $z$ is infected by $S$ at time at least $D+E+1$. Therefore, we have that $t(G') \geq D + E + 1$. Now, assume that there is no $k$-clique in $G$ such that all vertices are in different partitions. Since it is necessary and sufficient to choose one vertex, and only one, from each set $V^x$ and each set $VC_i^j$ so that the resulting set infects the whole graph $G'$, we can focus on proving that the percolating sets built in that way infect $G'$ in time less than $D+E+1$. Let $SS$ be the family of such percolating sets. By our construction and the values of $D$ and $E$, we have that only $z$ can possibly be infected at time at least $D+E+1$ by some percolating set in $SS$, because $z$ is the only vertex in $G'$ such that there is a simple path from some vertex in $S$ to $z$ with length greater than $D+E$, for any $S \in SS$. Let $S$ be an arbitrary percolating set in $SS$. Since there is no $k$-clique in $G$ such that all vertices are in different partitions, there will be vertices $a^x,b^x$ and $vo_i^j$ such that $S$ infects $a^x$ at time $M_i^a$, $b^x$ at time $M_i^b$ and $vo_i^j$ at time 1. Thus, we have that there will be vertices $a_i^j$ and $b_i^j$ that are infected by $S$ at times $M_i^a + 1$ and $M_i^b + 1$, respectively. Then, we have that there will be some vertex $l_i^j$ infected by $S$ at time $E-1$. We can then conclude that $p$ is infected by $S$ at time at most $E$ and $z$ at time at most $D+E$. Since $z$ is the only vertex that could possibly be infected at time at least $D+E+1$ by $S$, we have that $t(S) < D+E+1$ and, since $S$ is an arbitrary set of $SS$, therefore, $t(G') < D+E+1$. \end{proof} The next theorem proves that the Percolation Time Problem is polynomial time solvable for fixed $tw(G)$ and it is linear time solvable for fixed $tw(G)+k$. \begin{theorem}\label{fpt-tw3} Let $w$ be an integer and let $G$ be a graph with treewidth $tw(G)=w$. Then $t(G)$ can be computed in time $O(50^{w+1} n^{w+2})$ and we can decide whether $t(G)\geq k$ in time $O((50k)^{w+1}n)$. \end{theorem} In order to prove this theorem, we have to give some definitions first. A \emph{tree decomposition} \cite{rob-seymour84} of a graph $G$ is a tuple $(\mathcal{T},\mathcal{B})$, where $\mathcal{T}$ is a tree, $\mathcal{B}$ contains a subset $B_t\subseteq V(G)$ for each node $t\in \mathcal{T}$ and: \begin{enumerate} \item[(i)] for each vertex $u$ of $G$, there exists some $B_t\in \mathcal{B}$ containing $u$; \item[(ii)] for each edge $uv$ of $G$, there exists some $B_t\in \mathcal{B}$ containing $u$ and $v$; and \item[(iii)] if $B_i$ and $B_j$ both contain a vertex $v$, then, for every node $t$ of the tree in the (unique) path between $i$ and $j$, $B_t$ also contains $v$. \end{enumerate} Observe from (iii) that $\mathcal{T}[\{t:v\in B_t\}]$ is connected, for all $v\in V(G)$.
That is, the nodes associated with vertex $v$ form a connected subtree of $\mathcal{T}$. In other words, if $t$ is on the unique path of $\mathcal{T}$ between $i$ and $j$, then $B_i\cap B_j\subseteq B_t$. The \emph{width of $(\mathcal{T},\mathcal{B})$} is $\max\{|B_t|-1:t\in T, B_t\in\mathcal{B}\}$ and the \emph{treewidth $tw(G)$ of $G$} is the minimum width over all tree decompositions of $G$ \cite{rob-seymour84}. Consider $\mathcal{T}$ to be rooted at $r$. We say that $(\mathcal{T},\mathcal{B})$ is a \emph{nice tree decomposition} \cite{kloks91} if each node $t$ of $\mathcal{T}$ is either a leaf, or $t$ has exactly two children $t_1$ and $t_2$ with $B_{t}=B_{t_1}=B_{t_2}$ (called \emph{join node}), or $t$ has exactly one child $t'$ and either $B_t=B_{t'}\setminus\{x\}$ (called \emph{forget node}) or $B_t=B_{t'}\cup\{x\}$ (called \emph{introduce node}), for some $x\in V(G)$. It is known that, for a fixed $w$, we can determine whether the treewidth of a given graph $G$ is at most $w$ and, if so, find a nice tree decomposition of $G$ with $O(n)$ nodes and width at most $w$ in linear time \cite{bodlaender96}. Given a node $t$ of $\mathcal{T}$, we denote by $G_t$ the subgraph of $G$ induced by $\bigcup_{t'\in V(\mathcal{T}_t)}B_{t'}$, where $\mathcal{T}_t$ is the subtree of $\mathcal{T}$ rooted at $t$. For each node $t \in \mathcal{T}$, we will compute a table $W_t$ with integer values for every pair $(p,f)$, where $p$ is a function $p: B_t \rightarrow \{0,1,\hdots,n-1\}$ that is an assignment of times for the vertices in $B_t$ and $f$ is a function $f : B_t \rightarrow \{z,o_1,o_2,t_1,t_2\}$ which associates a status to each vertex in $B_t$. Let $v \in B_t$: \begin{itemize} \item $f(v) = z$ indicates that $v$ does not have any neighbor in $G_t$ infected at time less than $p(v)$; \item $f(v) = o_1$ indicates that $v$ has exactly one neighbor in $G_t$ infected at time less than $p(v)$ and this neighbor is infected at time less than $p(v)-1$; \item $f(v) = o_2$ indicates that $v$ has exactly one neighbor in $G_t$ infected at time less than $p(v)$ and this neighbor is infected at time equal to $p(v)-1$; \item $f(v) = t_1$ indicates that $v$ has two or more neighbors in $G_t$ infected at time less than $p(v)$ and exactly one of them is infected at time less than $p(v)-1$; \item $f(v) = t_2$ indicates that $v$ has two or more neighbors in $G_t$ infected at time less than $p(v)$ and all of them are infected at time equal to $p(v)-1$. \end{itemize} Since $|B_t| \leq w+1$, we have that the numbers of functions $p$ and $f$ are bounded by $n^{w+1}$ and $5^{w+1}$, respectively. Let $f$ be a function $V(G) \rightarrow \{0,1,\hdots,n-1\}$. We say that $f$ is an \emph{infection time function} of $G$ if there is a percolating set $S$ such that $t(S,v) = f(v)$ for all $v \in V(G)$. We say that $f$ is a \emph{quasi-infection time function} of $G$ if, for each $v \in V(G)$, there is at most one vertex $u \in N(v)$ such that $f(u) < f(v) - 1$. Let $W_t(p,f) = \max_g \max_{v \in V(G_t)} g(v)$, where $g$ ranges over all quasi-infection time functions of $G_t$ such that: each vertex $v\in V(G_t)\setminus B_t$, where $g(v) > 0$, has at least two neighbors $u_1$ and $u_2$ such that $g(u_1) < g(v)$ and $g(u_2) < g(v)$; $p$ is $g$ restricted to $B_t$ and $g$ does not contradict $f$ regarding the vertices in $B_t$. We say that $g$ contradicts $f$ if, for some $v \in B_t$, $f(v)$ does not correspond to the actual situation of $g$ on $v$ and its neighbors in $G_t$.
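For concreteness, the status of a vertex with respect to a given time assignment is a purely local computation. The following Python fragment is our own illustrative sketch (not code from the paper); it derives the status of $v$ from the times of its neighbors inside $G_t$ and raises an error if the assignment is not a quasi-infection time function at $v$.
\begin{verbatim}
def status(times, nbrs_in_Gt, v):
    # Status of v in {z, o1, o2, t1, t2} with respect to the time assignment
    # `times`, looking only at the neighbors of v inside G_t (nbrs_in_Gt[v]).
    pv = times[v]
    earlier = [times[u] for u in nbrs_in_Gt[v] if times[u] < pv]
    strictly_before = [x for x in earlier if x < pv - 1]
    if not earlier:
        return "z"
    if len(earlier) == 1:
        return "o1" if strictly_before else "o2"
    if len(strictly_before) == 1:
        return "t1"
    if not strictly_before:
        return "t2"
    # More than one neighbor infected before time pv - 1 cannot happen
    # for a quasi-infection time function.
    raise ValueError("not a quasi-infection time function at v")
\end{verbatim}
In this terminology, $g$ contradicts $f$ exactly when $f(v)$ differs from this status for some $v \in B_t$.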
Set $W_t(p,f) = -1$ if there is no quasi-infection time function $g$ satisfying these conditions. Our algorithm computes $W_t(p,f)$ for every $t\in T$ and every pair $(p,f)$ of functions. Clearly, $t(G)$ is equal to the highest $W_r(p,f)$ (recall that $r$ is the root of $\mathcal{T}$) such that, for all vertices $v \in B_r$, either $f(v) = t_1$ or $f(v) = t_2$ or $p(v) = 0$. For some indexes $h=(p,f)$, we can conclude directly that $W_t(h) = -1$ for some $t \in T$. We will say that such indexes are invalid. We are only interested in valid indexes, defined as below. Let $t \in T$ and $h = (p,f)$ be an index of $W_t$. We say that $h$ is valid if and only if, for each vertex $v \in B_t$, we have that: \begin{itemize} \item If $f(v) = z$ then $p(z) \geq p(v)$ for all $z \in N(v) \cap B_t$; \item If $f(v) = o_1$ then there is at most one vertex $z \in N(v) \cap B_t$ such that $p(z) < p(v)$ and, if there is one, we have that $p(z) < p(v)-1$; \item If $f(v) = o_2$ then there is at most one vertex $z \in N(v) \cap B_t$ such that $p(z) < p(v)$ and, if there is one, we have that $p(z) = p(v)-1$; \item If $f(v) = t_1$ then there is at most one vertex $z \in N(v) \cap B_t$ such that $p(z) < p(v) - 1$; \item If $f(v) = t_2$ then all vertices $z \in N(v) \cap B_t$ where $p(z) < p(v)$, are such that $p(z) = p(v)-1$. \end{itemize} Clearly, if $h=(p,f)$ is an invalid index then $W_t(h) = -1$ because any extension of $p$ for $G_t$ would contradict $f$. Now, we will determine the value of $W_t(h)$ depending on the type of the node $t$. \begin{lemma} If $t$ is a leaf node and $h = (p,f)$ is an index of $W_t$, then we can compute $W_t(h)$ in $O(w^2)$ time. \end{lemma} \begin{proof} We have that $W_t(h) = \max_{v \in B_t} p(v)$ if and only if $h$ is valid, which can be checked in $O(w^2)$ time. \end{proof} \begin{lemma} Let $t$ be a forget node with child $t'$ such that $B_t = B_{t'} \setminus \{v\}$. Let $h = (p,f)$ be a valid index of $W_t$. Then $W_t(h) = \max_{h'} W_{t'}(h')$ where $h' = (p',f')$ iterates over all $p'$ and $f'$ such that $p'$ and $f'$ are extensions of, respectively, $p$ and $f$ for $B_{t'}$ and either $f'(v) = t_1$ or $f'(v) = t_2$ or $p'(v) = 0$. \end{lemma} \begin{proof} First suppose that $\max_{h'} W_{t'}(h') = -1$, i.e., for each valid index $h'=(p',f')$ as stated in the Lemma, $W_{t'}(h') = -1$. Then, since $G_t = G_{t'}$, $B_t \cup \{v\} = B_{t'}$ and each $p'$ and $f'$ are extensions of, respectively, $p$ and $f$ for $B_{t'}$, we have that there is no quasi-infection time function $g$ of $G_t$ such that each vertex $u\in V(G_t)\setminus B_t$, where $g(u) > 0$, has at least two neighbors $u_1$ and $u_2$ such that $g(u_1) < g(u)$ and $g(u_2) < g(u)$; $p$ is $g$ restricted to $B_t$ and $g$ does not contradict $f$ regarding the vertices in $B_t$. Therefore, $W_t(h) = -1 = \max_{h'} W_{t'}(h')$. Now, suppose that $\max_{h'} W_{t'}(h') > -1$. Let us prove that $W_t(h) = \max_{h'} W_{t'}(h')$. First, we are going to prove that $W_t(h) \geq \max_{h'} W_{t'}(h')$. Let $i = (x,y)$ be an index that realizes the previous maximum and let $g$ be the quasi-infection time function of $G_{t'}$ such that $W_{t'}(i) = \max_{r \in V(G_{t'})} g(r)$ and each vertex $r\in V(G_t)\setminus B_t$, where $g(r) > 0$, has at least two neighbors $u_1$ and $u_2$ such that $g(u_1) < g(r)$ and $g(u_2) < g(r)$; $x$ is $g$ restricted to $B_t$ and $g$ does not contradict $y$ regarding the vertices in $B_t$. 
Since $g$ does not contradict $y$ and $G_t = G_{t'}$, we have that each vertex $r \in \{v\} \cup (V(G_{t'}) \setminus B_{t'}) = V(G_t) \setminus B_t$ either has at least two neighbors $z_1$ and $z_2$ such that $g(z_1) < g(r)$ and $g(z_2) < g(r)$ or $g(r) = 0$. Also, since $x$ is an extension of $p$ for $B_{t'}$ and $x$ is $g$ restricted to $B_{t'}$ and $B_t \subseteq B_{t'}$, we have that $p$ is $g$ restricted to $B_t$. Additionally, since $g$ does not contradict $y$ regarding the vertices in $B_{t'}$ and $y$ is an extension of $f$ for $B_{t'}$, $g$ does not contradict $f$ regarding the vertices in $B_t$. Therefore, we have that $W_t(h) = \max_{g'} \max_{r \in V(G_t)} g'(r) \geq \max_{r \in V(G_t)} g(r) = W_{t'}(i) = \max_{h'} W_{t'}(h')$. Now, let us prove that $W_t(h) \leq \max_{h'} W_{t'}(h')$. We have that $W_t(h) = \max_{g'} \max_{v \in V(G_t)} g'(v)$. Let $g$ be a quasi-infection time function of $G_t$ that realizes this maximum. Let $p'$ be $g$ restricted to $B_{t'}$, which is an extension of $p$ for $B_{t'}$, and let $f'$ be set according to $g$, which is an extension of $f$ for $B_{t'}$. Since $v \in V(G_t) \setminus B_t$ and $f'$ was set according to $g$, we have that either $f'(v)=t_1$ or $f'(v)=t_2$ or $p'(v) = 0$. Thus, $g$ is also a quasi-infection time function of $G_{t'}$ that realizes $\max_{g'} \max_{r \in V(G_{t'})} g'(r)$ and, since also each vertex $r \in V(G_{t'}) \setminus B_{t'}$, where $g(r) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g(z_1) < g(r)$ and $g(z_2) < g(r)$; $p'$ is $g$ restricted to $B_{t'}$ and $g$ does not contradict $f'$ regarding the vertices in $B_{t'}$, then $W_{t'}(p',f') = \max_{r \in V(G_t)} g(r)$. Therefore, we have that $W_t(h) = \max_{r \in V(G_t)} g(r) = W_{t'}(p',f') \leq \max_{h'} W_{t'}(h')$. \end{proof} \begin{lemma} Let $t$ be an introduce node with child $t'$ such that $B_t = B_{t'} \cup \{v\}$. Let $h = (p,f)$ be a valid index of $W_t$. Also, let $p'$ be $p$ restricted to $B_{t'}$. Then, if $\max_{f'} W_{t'}(p',f') > -1$, $W_t(h) = \max\big(p(v), \max_{f'} W_{t'}(p',f')\big)$, where $f'$ iterates over all functions $f' : B_{t'} \rightarrow \{z,o_1,o_2,t_1,t_2\}$ such that, for all $r \in B_{t'} \cap N(v)$ where $p(r) > p(v)$, we have: \begin{enumerate} \item If $p(v) < p(r)-1$ and $f(r) = o_1$ then $f'(r) = z$; \item If $p(v) = p(r)-1$ and $f(r) = o_2$ then $f'(r) = z$; \item If $p(v) = p(r)-1$ and $f(r) = t_1$ then either $f'(r) = o_1$ or $f'(r) = t_1$; \item If $p(v) < p(r)-1$ and $f(r) = t_1$ then either $f'(r) = o_2$ or $f'(r) = t_2$; \item If $p(v) = p(r)-1$ and $f(r) = t_2$ then either $f'(r) = o_2$ or $f'(r) = t_2$; \end{enumerate} and, for all other vertices $r$, $f'(r) = f(r)$. If, for all such $f'$, $\max_{f'} W_{t'}(p',f') = -1$, then $W_t(h) = -1$. \end{lemma} \begin{proof} First, suppose that $\max_{f'} W_{t'}(p',f') = -1$. Then we have that there is no quasi-infection time function $g$ of $G_{t'}$ that extends $p'$ and respects some $f'$ as in the lemma. Since $V(G_t) = V(G_{t'}) \cup \{v\}$, $B_t = B_{t'} \cup \{v\}$ and $p'$ is $p$ restricted to $B_{t'}$, by a simple case analysis on $f'$, we can conclude that there is no quasi-infection time function $g$ of $G_t$ that extends $p$ and does not contradict $f$. Therefore, $W_t(h) = -1$. Now, suppose that $\max_{f'} W_{t'}(p',f') > -1$. Let us prove that $W_t(h) = \max\big(p(v), \max_{f'} W_{t'}(p',f')\big)$. First, we are going to prove that $W_t(h) \geq \max\big(p(v), \max_{f'} W_{t'}(p',f')\big)$.
Let $y : B_{t'} \rightarrow \{z,o_1,o_2,t_1,t_2\}$ be a function that realizes the previous maximum ($\max_{f'} W_{t'}(p',f')$) and let $g'$ be the quasi-infection time function of $G_{t'}$ such that $W_{t'}(p',y) = \max_{r \in V(G_{t'})} g'(r)$, where each vertex $r \in V(G_{t'}) \setminus B_{t'}$, where $g'(r) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g'(z_1) < g'(r)$ and $g'(z_2) < g'(r)$; $p'$ is $g'$ restricted to $B_{t'}$ and $g'$ does not contradict $y$ regarding the vertices in $B_{t'}$. Let $g$ be the extension of $g'$ to $V(G_t)$ such that $g(v) = p(v)$. So, we have that $g$ is a quasi-infection time function of $G_t$ where each vertex $r \in V(G_t) \setminus B_t$, where $g(r) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g(z_1) < g(r)$ and $g(z_2) < g(r)$; $p$ is $g$ restricted to $B_t$ and, by a case analysis on $y$, $g$ does not contradict $f$ regarding the vertices in $B_t$. Therefore, we have that $W_t(h) = \max_{g'} \max_{r \in V(G_t)} g'(r) \geq \max_{r \in V(G_t)} g(r) = \max (p(v),W_{t'}(p',y)) = \max (p(v),\max_{f'} W_{t'}(p',f'))$. Now, let us prove that $W_t(h) \leq \max (p(v),\max_{f'} W_{t'}(p',f'))$. We have that $W_t(h) = \max_{g'} \max_{r \in V(G_t)} g'(r)$. Let $g$ be a quasi-infection time function of $G_t$ that realizes this maximum. Let $p'$ be $g$ restricted to $B_{t'}$, which implies that $p'$ is $p$ restricted to $B_{t'}$, and let $y : B_{t'} \rightarrow \{z,o_1,o_2,t_1,t_2\}$ be set according to $g$ restricted to $B_{t'}$, which, by a simple case analysis on $f$ and $p$, falls in one of the cases of the lemma. Thus, at least one of the two cases occurs: \begin{enumerate} \item $\max_{r \in V(G_t)} g(r) = g(v)$; \item $\max_{r \in V(G_t)} g(r) = \max_{r \in V(G_{t'})} g(r)$. \end{enumerate} If $g(v) = \max_{r \in V(G_t)} g(r)$, then we have that $W_t(h) = \max_{r \in V(G_t)} g(r) = g(v) = p(v) \leq \max (p(v),\max_{f'} W_{t'}(p',f'))$. On the other hand, if $\max_{r \in V(G_t)} g(r) = \max_{r \in V(G_{t'})} g(r)$, then $g$ is also a quasi-infection time function of $G_{t'}$ that realizes $\max_{g'} \max_{r \in V(G_{t'})} g'(r)$ and, since also each vertex $r \in V(G_{t'}) \setminus B_{t'}$, where $g(r) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g(z_1) < g(r)$ and $g(z_2) < g(r)$; $p'$ is $g$ restricted to $B_{t'}$ and $g$ does not contradict $y$ regarding the vertices in $B_{t'}$, then $W_{t'}(p',y) = \max_{r \in V(G_{t'})} g(r)$ and, therefore, we have that $W_t(h) = \max_{r \in V(G_t)} g(r) = W_{t'}(p',y) \leq \max_{f'} W_{t'}(p',f') \leq \max (p(v),\max_{f'} W_{t'}(p',f'))$. \end{proof} \begin{lemma} Let $t$ be a join node with children $t_1$ and $t_2$ and let $h = (p,f)$ be a valid index of $W_t$.
If there is some pair $(f_1,f_2)$ such that $\min(W_{t_1}(p,f_1),W_{t_2}(p,f_2)) > -1$, then $W_t(h) = \max_{(f_1,f_2)} \max(W_{t_1}(p,f_1),W_{t_2}(p,f_2))$, where the pair $(f_1,f_2)$ iterates over all pairs of functions $f_1 : B_{t_1} \rightarrow \{z,o_1,o_2,t_1,t_2\}$ and $f_2 : B_{t_2} \rightarrow \{z,o_1,o_2,t_1,t_2\}$ such that, for each $r \in B_t$, there are $i \neq j \in \{1,2\}$ where: \begin{enumerate} \item If $f(r) = z$ then $f_i(r) = z$ and $f_j(r) = z$; \item If $f(r) = o_1$ then $f_i(r) = z$ and $f_j(r) = o_1$; \item If $f(r) = o_2$ then $f_i(r) = z$ and $f_j(r) = o_2$; \item If $f(r) = t_1$ then either: \begin{enumerate} \item $f_i(r) = z$ and $f_j(r) = t_1$; or \item $f_i(r) = o_1$ and $f_j(r) = o_2$; or \item $f_i(r) = o_1$ and $f_j(r) = t_2$; or \item $f_i(r) = o_2$ and $f_j(r) = t_1$; or \item $f_i(r) = t_1$ and $f_j(r) = t_2$. \end{enumerate} \item If $f(r) = t_2$ then either: \begin{enumerate} \item $f_i(r) = z$ and $f_j(r) = t_2$; or \item $f_i(r) = o_2$ and $f_j(r) = o_2$; or \item $f_i(r) = o_2$ and $f_j(r) = t_2$; or \item $f_i(r) = t_2$ and $f_j(r) = t_2$. \end{enumerate} \end{enumerate} If there is no pair $(f_1,f_2)$ such that $\min(W_{t_1}(p,f_1),W_{t_2}(p,f_2)) > -1$, then $W_t(h) = -1$. \end{lemma} \begin{proof} First, suppose that there is no pair $(f_1,f_2)$ such that $\min(W_{t_1}(p,f_1),W_{t_2}(p,f_2)) > -1$, where $f_1$ and $f_2$ are as stated in the lemma. Thus, we have that, for all pairs $(f_1,f_2)$ as stated in the lemma, there is an $i$ in $\{1,2\}$ such that there is no quasi-infection time function $g_i$ of $G_{t_i}$ such that each vertex $v \in V(G_{t_i}) \setminus B_{t_i}$, where $g_i(v) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g_i(z_1) < g_i(v)$ and $g_i(z_2) < g_i(v)$; $p$ is $g_i$ restricted to $B_{t_i}$ and $g_i$ does not contradict $f_i$ regarding the vertices in $B_{t_i}$. Suppose, by contradiction, that there is a quasi-infection time function $g$ of $G_t$ such that each vertex $v \in V(G_t) \setminus B_t$, where $g(v) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g(z_1) < g(v)$ and $g(z_2) < g(v)$; $p$ is $g$ restricted to $B_t$ and, for each vertex $v \in B_t$, $g$ does not contradict $f$ regarding the vertices in $B_t$. Letting $g_1$ be $g$ restricted to $G_{t_1}$, $g_2$ be $g$ restricted to $G_{t_2}$ and $(f_1,f_2)$ be as stated in the lemma, based on $f$, we have that, for all $i \in \{1,2\}$, $g_i$ is a quasi-infection time function of $G_{t_i}$ such that each vertex $v \in V(G_{t_i}) \setminus B_{t_i}$, where $g_i(v) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g_i(z_1) < g_i(v)$ and $g_i(z_2) < g_i(v)$; $p$ is $g_i$ restricted to $B_{t_i}$ and $g_i$ does not contradict $f_i$ regarding the vertices in $B_{t_i}$. However, this contradicts the former paragraph, and, hence, we have that there is no quasi-infection time function $g$ of $G_t$ such that each vertex $v \in V(G_t) \setminus B_t$, where $g(v) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g(z_1) < g(v)$ and $g(z_2) < g(v)$; $p$ is $g$ restricted to $B_t$ and, for each vertex $v \in B_t$, $g$ does not contradict $f$ regarding the vertices in $B_t$. Therefore, $W_t(h) = -1$. Now, suppose that there is some pair $(f_1,f_2)$ such that $\min(W_{t_1}(p,f_1),W_{t_2}(p,f_2)) > -1$. First, let us prove that $W_t(h) \geq \max_{(f_1,f_2)} \max(W_{t_1}(p,f_1),W_{t_2}(p,f_2))$. Let $(y_1,y_2)$ be a pair of functions that realizes the previous maximum ($\max_{(f_1,f_2)} \max(W_{t_1}(p,f_1),W_{t_2}(p,f_2))$).
Also, for all $i \in \{1,2\}$, let $g_i$ be the quasi-infection time function of $G_{t_i}$ such that $W_{t_i}(p,y_i) = \max_{r \in V(G_{t_i})} g_i(r)$, where each vertex $r \in V(G_{t_i}) \setminus B_{t_i}$, where $g_i(r) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g_i(z_1) < g_i(r)$ and $g_i(z_2) < g_i(r)$; $p$ is $g_i$ restricted to $B_{t_i}$ and $g_i$ does not contradict $y_i$ regarding the vertices in $B_{t_i}$. Let $g$ be a quasi-infection time function of $G_t$ where $g(v) = g_1(v)$, if $v \in V(G_{t_1})$, and $g(v) = g_2(v)$, if $v \in V(G_{t_2})$. Note that, since, for all $v \in V(G_{t_1}) \cap V(G_{t_2}) = B_t$, $g_1(v) = g_2(v) = p(v)$, then $g$ is well defined. We have that $g$ is a quasi-infection time function of $G_t$ such that each vertex $r \in V(G_t) \setminus B_t$, where $g(r) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g(z_1) < g(r)$ and $g(z_2) < g(r)$; $p$ is $g$ restricted to $B_t$ and, by a simple case analysis on $y_1$ and $y_2$, we have that $g$ does not contradict $f$ regarding the vertices in $B_t$. Therefore, $W_t(h) = \max_{g'} \max_{r \in V(G_t)} g'(r) \geq \max_{r \in V(G_t)} g(r) = \max(W_{t_1}(p,y_1),W_{t_2}(p,y_2)) = \max_{(f_1,f_2)} \max(W_{t_1}(p,f_1),W_{t_2}(p,f_2))$. Now, let us prove that $W_t(h) \leq \max_{(f_1,f_2)} \max(W_{t_1}(p,f_1),W_{t_2}(p,f_2))$. We have that $W_t(h) = \max_{g'} \max_{r \in V(G_t)} g'(r)$. Let $g$ be a quasi-infection time function of $G_t$ that realizes this maximum. Let $g_1$ be $g$ restricted to $G_{t_1}$ and $g_2$ be $g$ restricted to $G_{t_2}$. Also, let $y_1 : B_{t_1} \rightarrow \{z,o_1,o_2,t_1,t_2\}$ be set according to $g_1$ restricted to the graph $G_{t_1}$ and $y_2 : B_{t_2} \rightarrow \{z,o_1,o_2,t_1,t_2\}$ be set according to $g_2$ restricted to the graph $G_{t_2}$. By case analysis on $f$, it is easy to see that $y_1$ and $y_2$ fall in one of the cases of the lemma. Thus, we have that, for all $i \in \{1,2\}$, $g_i$ is a quasi-infection time function of $G_{t_i}$ such that each vertex $v \in V(G_{t_i}) \setminus B_{t_i}$, where $g_i(v) > 0$, has at least two neighbors $z_1$ and $z_2$ such that $g_i(z_1) < g_i(v)$ and $g_i(z_2) < g_i(v)$; $p$ is $g_i$ restricted to $B_{t_i}$ and $g_i$ does not contradict $y_i$ regarding the vertices in $B_{t_i}$. Additionally, note that $\max_{r \in V(G_t)} g(r) = \max_{r \in V(G_{t_1})} g_1(r)$ or $\max_{r \in V(G_t)} g(r) = \max_{r \in V(G_{t_2})} g_2(r)$ or possibly both. Therefore, we have that $W_t(h)=\max_{r \in V(G_t)} g(r) = \max(\max_{r \in V(G_{t_1})} g_1(r),\max_{r \in V(G_{t_2})} g_2(r)) \leq \max(W_{t_1}(p,y_1),W_{t_2}(p,y_2)) \leq \max_{(f_1,f_2)} \max(W_{t_1}(p,f_1),W_{t_2}(p,f_2))$. \end{proof} \begin{proof}[Proof of Theorem \ref{fpt-tw3}] For each node $t$, we have that $W_t$ has $5^{w+1} \cdot n^{w+1}$ indexes. Each index can be computed by checking, in a join node, $O(10^{w+1})$ indexes of its children, in an introduce node, $O(1)$ indexes of its child, and, in a forget node, $O(n)$ indexes of its child.
However, one can reduce the time needed to compute the entire table of the forget node simple by computing its table while computing the table of its child, i.e., if $t$ is the forget node and $t'$ is its child, we can initialize $W_t(h)$ with $-1$ for every index $h$ of $W_t$ and every time we compute the value $W_{t'}(p,f)$, letting $p'$ and $f'$ be, respectively, $p$ and $f$ restricted to $B_t$, if $W_{t'}(p,f) > W_t(p',f')$ and either $f(v) = t_1$ or $f(v) = t_2$ or $p(v) = 0$, then we update the value of $W_t(p',f')$ with the value $W_{t'}(p,f)$. This would add only $O(1)$ time to compute each index of its child's table. Thus, we can find $t(G)$ in $O(n \cdot 10^{w+1} \cdot (5^{w+1}\cdot n^{w+1})) = O(50^{w+1} \cdot n^{w+2})$ time. In a very similar way, considering that we only have to check the indexes where $p$ is a function $p : B_t \rightarrow \{0,1,\hdots,k\}$, we can show that, for each node $t$, we have that $W_t$ has $5^{w+1} \cdot k^{w+1}$ indexes and each index can be checked worst case in $O(10^{w+1})$, which happens when $t$ is a join node, and then we can find $t(G)$ in $O(n \cdot 10^{w+1} \cdot (5^{w+1} \cdot k^{w+1})) = O((50k)^{w+1}n)$ time. \end{proof} \section{Acknowledgments} The statements of some of the results of this paper appeared in WG-2015 (Workshop on Graph-Theoretic Concepts in Computer Science). This research was partially supported by CNPq (Universal Proc. 478744/2013-7) and FAPESP (Proc. 2013/03447-6). \end{document} \appendix \section{Fixed parameter tractability on the clique-width} \end{document}
{\mathbf b}egin{document} \title{fMBN-E: Efficient Unsupervised Network Structure Ensemble and Selection for Clustering} \author{Xiao-Lei~Zhang \thanks{Xiao-Lei Zhang is with the Research \& Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen, China, and with the School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, China. (e-mail: [email protected]).} } \maketitle {\mathbf b}egin{abstract} It is known that unsupervised nonlinear dimensionality reduction and clustering is sensitive to the selection of hyperparameters, particularly for deep learning based methods, which hinders its practical use. How to select a proper network structure that may be dramatically different in different applications is a hard issue for deep models, given little prior knowledge of data. In this paper, we aim to automatically determine the optimal network structure of a deep model, named multilayer bootstrap networks (MBN), via simple ensemble learning and selection techniques. Specifically, we first propose an MBN ensemble (MBN-E) algorithm which concatenates the sparse outputs of a set of MBN base models with different network structures into a new representation. Then, we take the new representation produced by MBN-E as a reference for selecting the optimal MBN base models. Moreover, we propose a fast version of MBN-E (fMBN-E), which is not only theoretically even faster than a single standard MBN but also does not increase the estimation error of MBN-E. Importantly, MBN-E and its ensemble selection techniques maintain the simple formulation of MBN that is based on one-nearest-neighbor learning. Empirically, comparing to a number of advanced deep clustering methods and as many as 20 representative unsupervised ensemble learning and selection methods, the proposed methods reach the state-of-the-art performance without manual hyperparameter tuning. fMBN-E is empirically even hundreds of times faster than MBN-E without suffering performance degradation. The applications to image segmentation and graph data mining further demonstrate the advantage of the proposed methods. {\mathbf e}nd{abstract} {\mathbf b}egin{IEEEkeywords} Ensemble selection, cluster ensemble, multilayer bootstrap networks, unsupervised learning {\mathbf e}nd{IEEEkeywords} {\mathbf s}etlength{\arraycolsep}{0.2em} {\mathbf s}ection{Introduction} {\mathbf I}EEEPARstart{U}{nsupervised} learning and clustering is a fundamental task of machine learning. It finds wide applications in data mining, text analysis, etc. Early works, like principal component analysis (PCA) and k-means clustering, conduct clustering in the original data space. Because the data in the original space is usually linearly-inseparable and noisy, later on, research turned to projecting data in the original space into a probability space where the data is supposed to be uniformly distributed and linearly separable, such as kernel methods, probabilistic models, and manifold and subspace learning {\mathbf c}ite{fu2022latent}. However, a proper probability space is usually found by tuning parameters manually, e.g. kernel widths {\mathbf c}ite{ng2001spectral} or regularization parameters, which is a long term headache problem. Although some work has tried to find the optimal parameters automatically, e.g. {\mathbf c}ite{soares2004meta}, the learned representation, which is produced from a single layer nonlinear transform, is not abstract enough to describe the semantic classes of data. 
To learn highly abstract representations, deep neural network based data clustering has received much attention recently. The first work {\mathbf c}ite{hinton2006reducing} extracts abstract representations from the bottleneck layer of a deep belief network. To make the deep representations suitable for clustering, some work adds additional terms, such as constraints {\mathbf c}ite{huang2014deep}, clustering-like loss functions and models {\mathbf c}ite{yang2017towards}, or novel network structures {\mathbf c}ite{ji2017deep}, to the network training; while some work learns deep representations and refines cluster assignments iteratively {\mathbf c}ite{xie2016unsupervised}. Recently, a new kind of deep learning based clustering, named self-supervised clustering optimizes cleverly designed objective functions of some pretext tasks, such as image completion, image colorization, or clustering, in which supervised pseudo labels are automatically obtained from the input data without manual annotations. It can be generally categorized into predictive self-supervised clustering {\mathbf c}ite{chang2018deep,wang2021progressive,wang2022local}, generative self-supervised clustering {\mathbf c}ite{jiang2016variational,xia2021adversarial}, and contrastive self-supervised clustering {\mathbf c}ite{ji2019invariant,dang2021nearest}, respectively. See {\mathbf c}ite{liu2021self} for an recent overview. Although the methods achieve superior performance over conventional clustering methods, many of them apply handcrafted priors to the benchmark data case by case, such as strong prior knowledge of data, data augmentation with clear intrinsic data structures, or hyperparameter tuning with the ground-truth labels. {If prior knowledge is insufficient, then some methods have to make a compromise with default hyperparameter settings, e.g. {\mathbf c}ite{zhang2018multilayer}, which may degrade performance apparently.} {\mathbf b}egin{figure*}[t] {\mathbf c}entering \resizebox{13cm}{!}{\includegraphics*{general4.pdf}} {\mathbf c}aption{{On the network structure selection problem of MBN.} Each square of MBN in figure (a) represents a base clustering, while the black circles connected to the square represent the input/output of the base clustering. The hyperparameter ``$\delta$'' controls the network structure of MBN. The words in red color are two ensemble selection criteria for MBN-SO and MBN-SD respectively. The word ``ACC'' is short for clustering accuracy. The demo data is the COIL20 dataset {\mathbf c}ite{nene1996columbia}.} \label{fig:1} {\mathbf e}nd{figure*} As we know, a long term goal of unsupervised learning and clustering is to design algorithms that are tuning-free and with little human labor, like k-means clustering. {However, from the above literature review, it seems that this topic is far from explored yet. Because the topic is a rather broad research area, this paper focuses on the network structure selection problem of a special deep model, named \textit{multilayer bootstrap network} (MBN) {\mathbf c}ite{zhang2018multilayer}, given little prior knowledge of data. It seems a difficult problem, since that the network structure of MBN, which is controlled by hyperparameters, is strongly related to the unknown intrinsic property of the input data. See Section \ref{preliminary} for the details of the problem.} We address the network structure selection problem of MBN by unsupervised ensemble learning and ensemble selection. 
See Section \ref{related} for an overview of the state-of-the-art works on unsupervised ensemble learning and selection. Although many ensemble selection methods may be applied successfully, we aim to exploit a simple and efficient way under the rule of Occam's Razor. We find empirically that, when applied to MBN, even a very simple ensemble selection method is able to achieve comparable top performance with the advanced ones, where 20 unsupervised ensemble learning and selection methods are used for comparison in Appendix D of the Supplement Material. Eventually, this finding derives a simple and efficient tuning-free unsupervised deep learning algorithm for practical use. To summarize, as shown in Fig. \ref{fig:1}, the contribution of this paper is listed as follows: {\mathbf b}egin{itemize} \item We theoretically prove that increasing the depth of MBN does not always improve the performance, which induces the network structure selection problem of MBN. \item To address the aforementioned problem, we propose a simple MBN ensemble (MBN-E) algorithm. It groups the sparse outputs of a number of MBN base models with different network structures into a new representation. \item To reduce the high computational complexity problem of MBN-E, we propose the fast MBN-E (fMBN-E) by a simple modification of MBN-E. It accelerates MBN-E by over hundreds of times both theoretically and empirically. We have proved that the acceleration does not degrade the performance. \item To further improve the performance of MBN-E, we propose (i) the MBN ensemble selection with optimization-like criteria (MBN-SO) for the case when the number of classes is known, and (ii) the MBN ensemble selection with distribution divergence criteria (MBN-SD) when the number of classes is unknown. Both of them select a number of highly-effective MBN base models from MBN-E to group into a new MBN-E. The difference between them lies in the selection criteria of the base models. \item We have run experiments on a number of benchmark datasets where the optimal network structure of MBN appears in fundamentally different ranges. Experimental results show that MBN-E significantly outperforms the MBN with the default setting and approaches to the MBN with the optimal setting. fMBN-E achieves similar performance with MBN-E, and is over dozens of times faster than MBN-E. MBN-SO and MBN-SD further improves the performance of MBN-E. \item Because the proposed algorithms intend to solve the difficulty of real-world applications of MBN, we further applied the proposed methods to image segmentation and graph data mining. Experimental results verified the effectiveness of the proposed methods. {\mathbf e}nd{itemize} The rest of the paper is organized as follows. In Section \ref{related}, we present related work. In Section \ref{preliminary}, we review MBN. In Section \ref{sec:anal}, we analyze the structure selection problem of MBN both theoretically and empirically. In Sections \ref{mbn-e} and \ref{mbn-s}, we present MBN-E, fMBN-E, MBN-SO, and MBN-SD, respectively. In Section \ref{experiment}, we present an extensive experiment. In Section \ref{sec:appl}, we apply the proposed methods to image segmentation and graph data mining. Finally, in Section \ref{conclusion}, we conclude the paper. {\mathbf s}ection{Related work}\label{related} The proposed MBN-E essentially is rooted in clustering ensemble. The proposed MBN-SO and MBN-SD are essentially rooted in ensemble selection and reweighting. 
The selection criteria of the base models of MBN-SD, which measures the divergence between data distributions, are rooted in unsupervised domain adaptation. We present the three aspects as follows. {\mathbf s}ubsection{Clustering ensemble} Ensemble learning, such as {\it{bagging}}, {\it{boosting}}, and their variations, has demonstrated its effectiveness on many learning problems {\mathbf c}ite{dietterich2000ensemble}. Unsupervised ensemble learning inherits the fundamental theories and methods of classifier ensemble. The mostly studied unsupervised ensemble learning is \textit{clustering ensemble}. It aims to combine multiple \textit{base clusterings} with a so-called \textit{meta-clustering function}, a.k.a \textit{consensus function}, for enhancing the stability and accuracy of the base clusterings {\mathbf c}ite{strehl2003cluster,vega2011survey}. Meta-clustering functions can be categorized generally to two classes {\mathbf c}ite{vega2011survey}. The first class analyzes the co-occurrence of objects: how many times an object belongs to one cluster or how many times two objects belong to the same cluster. The second class, called the median partition, pursues the maximal similarity with all partitions in the ensemble {{\mathbf c}ite{li2007solving,nguyen2007consensus,yu2020clustering}}. Recently, some unsupervised deep ensemble learning methods have been proposed {\mathbf c}ite{liu2016infinite,koohzadi2020unsupervised,hu2022representation}. For example, {\mathbf c}ite{liu2016infinite} takes deep neural networks act like a meta-clustering function. {\mathbf c}ite{koohzadi2020unsupervised} decomposes each layer of a deep neural network into an ensemble of encoders or decoders and mask operations. To our knowledge, unsupervised deep ensemble learning is not prevalent, due to maybe that neural networks need supervised signals to maximize their discriminant ability. See {\mathbf c}ite{vega2011survey,ganaie2021ensemble} for the reviews of clustering ensemble. {\mathbf s}ubsection{Clustering ensemble reweighting and selection} Because not all base clusterings contribute equivalently to a cluster ensemble, it is needed to conduct ensemble reweighting and selection, which mainly focuses on three respects: (i) different types of weights, (ii) algorithms for calculating the weights, and (iii) cluster validation criteria for measuring the diversity and quality of the base models. The most common type of weights is to assign a weight to each base clustering according to its quality or/and diversity in the ensemble, e.g. {\mathbf c}ite{zhou2006clusterer}. A special case of this type is to constrain the weights of some weak base clusterings to zero, named \textit{clustering selection} {\mathbf c}ite{fern2008cluster,azimi2009adaptive}. However, weak base clusterings may also contain some high quality clusters, and vise versa. With this perspective, many reweighting strategies at levels of clusters {\mathbf c}ite{yang2010temporal,huang2017locally}, data structures {\mathbf c}ite{yu2016distribution}, and data points {\mathbf c}ite{li2019clustering} were proposed. The algorithms for calculating the weights can be categorized into two types {\mathbf c}ite{zhang2019weighted}. The first type calculates weights by measuring the similarity between the predicted labels of the clustering ensemble and its base clusterings {\mathbf c}ite{zhou2006clusterer,fern2008cluster}. The second type treats the weights as variables of consensus functions which are obtained by advanced optimization algorithms, e.g. 
{\mathbf c}ite{li2008weighted}. The criteria for measuring the diversity and quality of the base models can be categorized into two classes. The first class of measurements calculates the normalized mutual information {\mathbf c}ite{zhou2006clusterer,fern2008cluster}, adjusted rand index {\mathbf c}ite{jia2011bagging}, clustering accuracies {\mathbf c}ite{hong2009resampling}, and their variants {\mathbf c}ite{duarte2006weighted} or aggregations {\mathbf c}ite{huang2021toward} between the sets of the predicted labels. The second class of validation criteria is based on data distributions {\mathbf c}ite{vendramin2010relative,halkidi2001clustering}. They usually calculate some kinds of statistics of data {\mathbf c}ite{naldi2013cluster,yu2016distribution}. Some systematical studies on cluster validation indices {\mathbf c}ite{vendramin2010relative,halkidi2001clustering} have been carried out as well. To summarize, when the number of classes is given, we evaluate the quality of the base models by \textit{optimization-like criteria} {\mathbf c}ite{vendramin2010relative}, for MBN-SO. When the number of classes is not given, we propose to evaluate the quality of the base models by so-called \textit{distribution divergence criteria} for MBN-SD, which measure the learned representations of data directly without predicted labels. {\mathbf s}ubsection{Unsupervised domain adaptation} Domain adaptation is the ability of applying an algorithm trained in one or more ``source domains'' to a different but related ``target domain''. Unsupervised domain adaptation is a subtask of domain adaptation where the target domain does not have labels. The algorithms can be categorized into three branches {\mathbf c}ite{kouw2019review}, which are sample-based, feature-based, and inference-based approaches. No matter how the approaches vary, the distribution divergence measurement between the source domains and the target domain always lies in the core of unsupervised domain adaptation. The most popular measurement is maximum mean discrepancy (MMD) {\mathbf c}ite{borgwardt2006integrating}. Other measurements include Kullback-Leibler divergence, total variation distance, second-order (covariance) statistics, and Hellinger distance. Although the distribution divergence measurement has been extensively studied in unsupervised domain adaptation, it seems far from explored in unsupervised ensemble selection. In this paper, we name this kind of measurements as distribution divergence criteria, and apply them to MBN-SD. Because MMD performs generally well among the measurements and is applicable to all data types, from high-dimensional vectors to strings and graphs, we focus on using MMD. {\mathbf s}ection{Preliminaries}\label{preliminary} This section presents MBN and its theoretical foundation briefly. See Appendices A and B of the Supplementary Material for the summary of important notations and detailed description of MBN as well as its geometric and theoretical foundations. {\mathbf s}ubsection{Multilayer bootstrap networks} This paper takes MBN {\mathbf c}ite{zhang2018multilayer} as a research object. It is a simple deep model. As shown in Fig. \ref{fig:1}a, suppose we are to build an $M$-layer MBN from bottom-up, it can be described as follows: {\mathbf b}egin{itemize} \item Step 1, for each layer, MBN trains $V$ mutually-independent $k$-centroids base clusterings, where the parameter $k$ of all clusterings at the same layer is the same. 
For each base clustering, it takes the following three operators successively to generate a new representation of data: {\mathbf b}egin{itemize} \item \textbf{Random selection of features:} It first randomly selects some features of the input data, which yields a new representation of the data. \item \textbf{Random sampling of data:} It randomly samples $k$ data points from the data with the new representation as the $k$ centroids. \item \textbf{One nearest neighbor optimization:} It assigns each input data to one of the $k$ clusters, and outputs a $k$-dimensional one-hot code, indicating which cluster the input data belongs to. {\mathbf e}nd{itemize} The one-hot representations from all base clusterings are concatenated as the input of the upper layer. \item Step 2, MBN stacks the cluster ensemble described in Step 1 for $M$ times. The parameter $k$ at two adjacent layers have the following connection: {\mathbf b}egin{equation}\label{eq:delta} k_{m} = \delta k_{m-1} {\mathbf e}nd{equation} where $k_m$ and $k_{m-1}$ are the parameter $k$ at the $m$-th and $(m-1)$-th adjacent layers respectively, and $\delta\in(0,1)$ is a hyperparameter controlling the network structure of MBN. Because $\delta\in(0,1)$, we must have {\mathbf b}egin{equation}\label{eq:k} k_1>k_2>\ldots>k_m>\ldots >k_o {\mathbf e}nd{equation} where $k_o$ is the parameter $k$ at the top layer. Note that, the total number of layers of MBN is usually determined automatically by $k_1$, $k_o$, and $\delta$. {\mathbf e}nd{itemize} {\mathbf s}ubsection{Estimation error of a single layer of MBN} {\mathbf c}ite{zhang2018multilayer} analyzed the estimation error of a single layer of MBN, which explains the empirical success of MBN. We summarize the analysis here. Given an input ${\mathbf x}$ of MBN at a layer, it is easy to image that each $k$-centroids clustering contributes a nearest neighbor ${\mathbf w}_v$ to ${\mathbf x}$, $\forall v=1,\ldots,V$, then, the new location of ${\mathbf x}$ in the input data space, denoted as ${\mathbf h}at{{\mathbf x}}$, is given by the $V$ nearest neighbors as: {\mathbf b}egin{eqnarray}\label{eq:soafaj} {\mathbf h}at{{\mathbf x}} = \frac{1}{V}{\mathbf s}um_{v=1}^V {\mathbf w}_v {\mathbf e}nd{eqnarray} If ${\mathbf h}at{{\mathbf x}}$ is an effective estimation of ${\mathbf x}$, then the \textit{locally linear assumption} between $\{{\mathbf w}_v\}_{v=1}^V$ and ${\mathbf x}$ must hold; otherwise, ${\mathbf h}at{{\mathbf x}}$ is not an accurate estimation. 
Under the locally linear assumption, the estimation error $\mathbb{E}(x-{\mathbf h}at{x})$ can be decomposed into the following form using the famous \textit{bias-variance decomposition of expectation risk} {\mathbf c}ite{hastie2009unsupervised}: {\mathbf b}egin{eqnarray}\label{eq:bv} \mathbb{E}(({\mathbf x}-{\mathbf h}at{{\mathbf x}})^2)& = &({\mathbf x}-\mathbb{E}({\mathbf h}at{{\mathbf x}}))^2 + \mathbb{E}\left(({\mathbf x}-\mathbb{E}({\mathbf h}at{{\mathbf x}}))^2\right)\nonumber\\ &=&\mathrm{Bias}^2({\mathbf h}at{{\mathbf x}}) + \mathrm{Var}({\mathbf h}at{{\mathbf x}}) {\mathbf e}nd{eqnarray} Given {\mathbf e}qref{eq:bv}, we can derive the following theorem for the estimation error of a single layer of MBN: {\mathbf b}egin{thm}\label{thm:4} { The estimation error of a single layer of MBN $\mathbb{E}_{\mathrm{ensemble}}$ and the estimation error of a single $k$-centroids clustering $\mathbb{E}_{\mathrm{single}}$ in the layer have the following relationship: {\mathbf s}etlength{\arraycolsep}{0.2em} {\mathbf b}egin{eqnarray}\label{eq:important} \mathbb{E}_{\mathrm{ensemble}} = \left(\frac{1}{V}+\left(1-\frac{1}{V}\right)\rho\right)\mathbb{E}_{\mathrm{single}} {\mathbf e}nd{eqnarray} where $\rho$ is the pairwise positive correlation coefficient between the $k$-centroids clusterings, $0\leq \rho \leq 1$ {\mathbf c}ite{zhang2018multilayer}.} {\mathbf e}nd{thm} {\mathbf s}ection{Analysis of the network structure problem of MBN}\label{sec:anal} It is expected that adding more layers to a deep network could improve the representation learning ability of the network. However, this is not always the case empirically, so as to MBN. In this section, we first give an empirical demo on how different network structures affect the performance in Section \ref{subsec:empirical}, and then derive the estimation error of the entire MBN in Section \ref{subsec:theoretical} by extending Theorem \ref{thm:4} to the multilayer scenario, which explains the empirical phenomenon theoretically and motivates the novel algorithms of this paper. {\mathbf s}ubsection{{Empirical justification}}\label{subsec:empirical} A core problem of MBN is that its effectiveness is strongly related to the network structure which is controlled by parameter $\delta$. Given parameters $k_1$ and $k_o$ in {\mathbf e}qref{eq:k} fixed, how fast $k$ drops from $k_1$ to $k_o$ layer by layer according to {\mathbf e}qref{eq:k}, which is determined by $\delta$, should match the nonlinearity and noise level of data. When $\delta$ approaches to 0, MBN builds a shallow network with a single nonlinear layer, which is suitable for linearly separable data. When $\delta$ is enlarged towards 1, MBN becomes deeper and deeper, which is suitable for highly nonlinear and non-Gaussian data. If the above regularity is violated, the performance of MBN may drop sharply. In Fig. \ref{fig:1}a, we can see that, increasing $\delta$ from 0.1 to 0.9 yields gradually improved performance on COIL20. The gap between the best performance and poorest performance is as high as 58\%. However, in Fig. \ref{fig:new}, we see that (i) the best performance of MBN on the Dermatology dataset appears at $\delta = 0.1$, and the performance degrades gradually along with the increase of $\delta$, which is contrary to the trend on COIL20; (ii) the best performance on MNIST(5000) appears at $\delta = 0.5$, which significantly outperforms the performance when $\delta = 0.1$ and $\delta = 0.9$. Moreover, as will be shown in Table \ref{table:data_set_info} and Fig. 
\ref{fig:scores} in the experiment, the best $\delta$ for different datasets appears at dramatically different ranges. {\mathbf b}egin{figure}[t] {\mathbf c}entering \resizebox{8.5cm}{!}{\includegraphics*{untitled_new.pdf}} {\mathbf c}aption{{Visualization of features produced by MBN with different $\delta$ on the Dermatology and MNIST(5000) datasets, where Dermatology is a dataset from UCI, and MNIST(5000) is a subset of MNIST dataset that consists of 5000 randomly selected data points.} } \label{fig:new} {\mathbf e}nd{figure} Because it is difficult to evaluate the properties of data in unsupervised learning, MBN has to make a compromise by setting $\delta = 0.5$. This may lead to far inferior performance from the optimal one, though $\delta = 0.5$ happens to be the best choice on some data like MNIST. In this paper, we aim to address this issue by detecting the optimal $\delta$ automatically. {\mathbf s}ubsection{Theoretical explanation}\label{subsec:theoretical} A fundamental element of MBN is the locally linear assumption defined in {\mathbf e}qref{eq:soafaj}. The correctness of the assumption is strongly related to the choice of $\delta$. Suppose the optimal performance of MBN appears at $\delta = \delta_0$. Then, a diagram in Fig. \ref{fig:fajo} explains the empirical phenomenon in Section \ref{subsec:empirical}. {\mathbf b}egin{figure}[t] {\mathbf c}entering \resizebox{8.5cm}{!}{\includegraphics*{theory_explain2.png}} {\mathbf c}aption{{Diagram of the density estimation process of MBN with different $\delta$. The notation $\delta_0$ denotes the optimal $\delta$. The black cross ${\mathbf h}at{\mathbf{x}}$ denotes the coordinate of the learned representation of an input data ${\mathbf x}$. The four red points, which are ${\mathbf w}_1$, ${\mathbf w}_2$, ${\mathbf w}_3$ and ${\mathbf w}_4$ respectively, are the nearest centroids of four $k$-centroids clusterings to an input data point ${\mathbf x}$. The blue dotted oval is the area of the locally linear assumption.} } \label{fig:fajo} {\mathbf e}nd{figure} When we set $\delta \ll \delta_0$, the locally linear assumption {\mathbf e}qref{eq:soafaj} may be violated, which makes MBN fail to learn correct representations. For example, in Fig. \ref{fig:fajo}a, given an input data point ${\mathbf x}$ that is sampled from the nonlinear data distribution, its representation ${\mathbf h}at{\mathbf{x}}$ learned by the nearest centroids ${\mathbf w}_1$, ${\mathbf w}_2$, ${\mathbf w}_3$, and ${\mathbf w}_4$ is even out of the data distribution, which is clearly wrong. This explains the empirical phenomenon that MBN does not reach the top performance on COIL 20 when $\delta\ll 0.9$, and on MNIST(5000) when $\delta\ll 0.5$. To explain the failure of MBN at $\delta {\mathbf g}g \delta_0$, we first give the following theorem: {\mathbf b}egin{thm}\label{thm:MBN_error} When $\delta >\delta_0$, the estimation error of MBN is: {\mathbf b}egin{equation}\label{eq:error_accumulate} \mathbb{E}_{\mathrm{MBN}} {\mathbf g}eq {\mathbf s}um_{m=1}^M \left(\frac{1}{V}+\left(1-\frac{1}{V}\right)\left(\frac{a k_1}{n}\right)^2\delta^{2(m-1)}\right)\mathbb{E}_{(\mathrm{single,1})} {\mathbf e}nd{equation} where $a\in(0,1]$ is the ratio of the number of randomly selected features over the number of all features in Step 1 of MBN, $\mathbb{E}_{\mathrm{single,1}}$ is the estimation error of a single $k$-centroids clustering at the bottom layer, and $M$ is the number of nonlinear layers of MBN. 
{\mathbf e}nd{thm} {\mathbf b}egin{proof} First of all, we should emphasize that, when $\delta <\delta_0$, the locally linear assumption for {\mathbf e}qref{eq:soafaj} does not hold, which makes Theorem \ref{thm:4} do not hold as well. Because the following proof is built on Theorem \ref{thm:4}, Theorem \ref{thm:MBN_error} is effective only when $\delta >\delta_0$. Because the probability that any two $k$-centroids clusterings select the same element of the same input data point as one of their centroids is $(ak/n)^2$, then we can imagine easily that the correlation is {\mathbf b}egin{eqnarray}\label{eq:rho} \rho=(ak/n)^2 {\mathbf e}nd{eqnarray} We denote the correlation at the $m$th layer as $\rho_m$. Substituting {\mathbf e}qref{eq:delta} into {\mathbf e}qref{eq:rho} derives {\mathbf b}egin{eqnarray}\label{eq:connection} \rho_{m} = (a k_{m-1}/n)^2\delta^2=\ldots= (a k_{1}/n)^2\delta^{2(m-1)} {\mathbf e}nd{eqnarray} We denote the estimation error of a single $k$-centroids clustering and an ensemble of clusterings at the $m$th layer as $\mathbb{E}_{({\mathrm{single}},m)}$ and $\mathbb{E}_{({\mathrm{ensemble}},m)}$ respectively. Because reducing $k$ makes $\mathbb{E}_{{\mathrm{single}}}$ enlarged, we may assume that $\mathbb{E}_{({\mathrm{single}},m)}$ is lower-bounded by $\mathbb{E}_{({\mathrm{single}},1)}$. Substituting {\mathbf e}qref{eq:connection} into {\mathbf e}qref{eq:important} derives: {\mathbf b}egin{equation}\label{eq:error_accumulatex} \mathbb{E}_{(\mathrm{ensemble},m)} {\mathbf g}eq \left(\frac{1}{V}+\left(1-\frac{1}{V}\right)\left(\frac{a k_1}{n}\right)^2\delta^{2(m-1)}\right)\mathbb{E}_{(\mathrm{single,1})} {\mathbf e}nd{equation} Because $\mathbb{E}_{\mathrm{MBN}}$ accumulates $\mathbb{E}_{(\mathrm{ensemble},m)}$ of all layers from bottom-up, we can derive the overall estimation error of MBN as {\mathbf e}qref{eq:error_accumulate}. {\mathbf e}nd{proof} We further derive the following corollary from Theorem \ref{thm:MBN_error}: {\mathbf b}egin{cor}\label{cor:faoo} When $\delta>\delta_0$ and $V\rightarrow \infty$, the estimation error of MBN is: {\mathbf b}egin{equation} \mathbb{E}_{\mathrm{MBN}} {\mathbf g}eq C {\mathbf s}um_{m=1}^M \delta^{2(m-1)} {\mathbf e}nd{equation} where $C = \left({a k_1}/{n}\right)^2\mathbb{E}_{(\mathrm{single,1})}$ is a constant. {\mathbf e}nd{cor} Corollary \ref{cor:faoo} can be visualized in Fig. \ref{fig:fajofa}. From the figure, we see that, when $\delta$ approaches to 1, $\mathbb{E}_{\mathrm{MBN}}$ is increased exponentially. {\mathbf b}egin{figure}[t] {\mathbf c}entering \resizebox{5.5cm}{!}{\includegraphics*{E_MBN_delta.png}} {\mathbf c}aption{{Connection between the estimation error of MBN and $\delta$ when $\delta>\delta_0$, where $C = \left({a k_1}/{n}\right)^2\mathbb{E}_{(\mathrm{single,1})}$ is a constant.} } \label{fig:fajofa} {\mathbf e}nd{figure} Fig. \ref{fig:fajo}c gives an example on how the large estimation error occurs when $\delta{\mathbf g}g\delta_0$. In this figure, we see that, because the four $k$-centroids clusterings have strong correlation, three out of four nearest centroids to ${\mathbf x}$, i.e. ${\mathbf w}_1$, ${\mathbf w}_2$, and ${\mathbf w}_3$, share the same location, which makes MBN difficult to learn a good representation. The above analysis explains the phenomenon why the performance of MBN on Dermatology and MNIST(5000) drops sharply when $\delta=0.9$. As shown in Fig. 
\ref{fig:fajo}b, only when $\delta\approx \delta_0$, not only the locally linear assumption holds, but also the $k$-centroids clusterings have weak correlation, which makes MBN learn the best representation for ${\mathbf x}$. However, avoiding the sensitivity of MBN to $\delta$ is not straightforward, which motivates the proposed methods in the following of this paper. {\mathbf s}ection{Multilayer bootstrap network ensemble}\label{mbn-e} In this section, we first introduce MBN-E in Section \ref{mbn-e1}, then present an efficient algorithm for MBN-E, named fMBN-E, in Section \ref{fMBN-E}, and finally discuss why fMBN-E can accelerate MBN-E without degrading the estimation accuracy in Section \ref{discussion}. {\mathbf s}ubsection{MBN-E}\label{mbn-e1} Because MBN is sensitive to $\delta$, a straightforward thought is to integrate a number of MBN base models with different $\delta$ into MBN-E. We present MBN-E in Algorithm \ref{alg:mbn-e}. In Algorithm \ref{alg:mbn-e}, we usually conduct PCA preprocessing to $\{{\mathbf x}_i \}_{i=1}^{n}$ before MBN-E, which not only reduces the computational complexity of the bottom layers of the MBN base models but also de-correlates the input features. After getting the output $\{{\mathbf b}by_{i}\}_{i=1}^n$, we sometimes need to reduce $\{{\mathbf b}by_{i}\}_{i=1}^n$ to a low-dimensional representation $\{{\mathbf b}bu_{i}\}_{i=1}^n$ in an Euclidian space by, e.g. PCA, for applications, since that $\{{\mathbf b}by_{i}\}_{i=1}^n$ is very high dimensional. Likewise, we denote the low-dimensional representation of the base models $\{{\mathbf y}_{z,i}\}_{i=1}^n$ as $\{{\mathbf u}_{z,i}\}_{i=1}^n$. The computational complexity of MBN-E, which is $Z$ times higher than MBN, is too high to be intolerable in practice when $Z{\mathbf g}g1$: {\mathbf b}egin{thm}\label{thm:2} {The computational complexity of MBN-E approximates to $ Z({\mathcal O}(\alpha kVn)+{\mathcal O}(kVn))$ empirically, where ${\mathcal O}(\alpha kVn)$ and ${\mathcal O}(kVn)$ are the complexity of a single MBN at the bottom layer and the other layers respectively, and $\alpha$ is a constant related to the sparse property of the input data.} {\mathbf e}nd{thm} {\mathbf b}egin{algorithm}[t] {\mathbf c}aption{MBN-E.} {\mathbf b}egin{algorithmic}[1]\label{alg:mbn-e} {\mathcal R}EQUIRE A $h$-dimensional unlabeled dataset $\{{\mathbf x}_i \}_{i=1}^{n}$, parameter $k_o$, and number of MBN base models $Z$\\ \renewcommand{\textbf{Initialization: }}{\textbf{Initialization: }} \ENSURE $\{{\mathbf b}ar{{\mathbf y}}_i \}_{i=1}^{n}$ \FOR{$z=1,\ldots,Z$} \STATE Randomly generate $\delta$ from the range $[0.05,0.95]$; \STATE $\{{\mathbf y}_{z,i} \}_{i=1}^{n}\leftarrow \mathrm{MBN}(\{{\mathbf x}_i \}_{i=1}^{n}, k_o,\delta)$\\ \ENDFOR \FOR{$i=1,\ldots,n$} \STATE ${\mathbf b}ar{{\mathbf y}}_{i}\leftarrow [{\mathbf y}^T_{1,i},{\mathbf y}^T_{2,i},\ldots,{\mathbf y}^T_{Z,i}]^T$ \ENDFOR {\mathbf e}nd{algorithmic} {\mathbf e}nd{algorithm} {\mathbf b}egin{figure}[t] {\mathbf c}entering \resizebox{7.8cm}{!}{\includegraphics*{MBN-E2.pdf}} {\mathbf c}aption{{Architecture of fMBN-E.} Different color represents different MBN base models with random $\delta$ values.} \label{fig:x} {\mathbf e}nd{figure} {\mathbf s}ubsection{fMBN-E}\label{fMBN-E} {\mathbf b}egin{algorithm}[t] {\mathbf c}aption{fMBN-E.} {\mathbf b}egin{algorithmic}[1]\label{alg:fmbn-e} {\mathcal R}EQUIRE A $h$-dimensional unlabeled dataset $\{{\mathbf x}_i \}_{i=1}^{n}$, parameter $k_o$, and number of MBN base models $Z$\\ \renewcommand{\textbf{Initialization: 
}}{\textbf{Initialization: }} {\mathcal R}EQUIRE{ $k_1= \lfloor n/2\rfloor$, number of base clusterings per layer $V=400$ } \ENSURE $\{{\mathbf b}ar{{\mathbf y}}_i \}_{i=1}^{n}$\\ \STATE /* train a shared bottom layer */ \STATE $\{{\mathbf y}_{i} \}_{i=1}^{n}\leftarrow \mathrm{MBN}(\{{\mathbf x}_i \}_{i=1}^{n}, k_1-1,\delta=0)$\\ \STATE /* train an ensemble of fast MBN */ \FOR{$z=1,\ldots,Z$} \STATE ${\mathbf x}_{z,i} \leftarrow {\mathbf y}_{i},\mbox{ }\forall i = 1,\ldots,n$\\ \STATE $m\leftarrow 2$ \STATE Randomly generate $\delta$ from the range $[0.05,0.95]$\\ {\mathbf W}HILE{$k_m{\mathbf g}e k_o$} \FOR{$v=1,\ldots,V$} \STATE Calculate pairwise similarity matrix $\mathbf{B} = \mathbf{X}_z^T\mathbf{X}_z$ where $\mathbf{X}_z = [\mathbf{x}_{z,1},\ldots,\mathbf{x}_{z,n}]$\\ \STATE Randomly select $k_m$ columns of $\mathbf{B}$ to form a new matrix $\mathbf{B}'$, which is the similarity scores between the input data and the centroids of the $v$-th clustering at the $m$-th layer\\ \FOR{$i=1,\ldots,n$} \STATE Find the largest element of the $i$th row of $\mathbf{B}$, supposed to be the $j$th element\\ \STATE Derive a one-hot code ${\mathbf s}_{i,v}= [s_{i,v,1},\ldots,s_{i,v,k_m}]^T$ where {\mathbf b}egin{equation} s_{i,v,t}=\left\{ {\mathbf b}egin{array}{ll} 1, &\mbox{if } t = j\\ 0, &\mbox{otherwise} \\ {\mathbf e}nd{array},\mbox{ } \forall t = 1,\ldots,k_m \right.\nonumber {\mathbf e}nd{equation}\\ \ENDFOR \ENDFOR \STATE ${\mathbf x}_{z,i}\leftarrow [{\mathbf s}_{i,1}^T,\ldots,{\mathbf s}_{i,k_m}^T]^T,\mbox{ } \forall i = 1,\ldots,n$ \STATE $k_{m+1}\leftarrow \delta k_m$\\ \STATE $m\leftarrow m+1$\\ \ENDWHILE \STATE ${\mathbf b}ar{{\mathbf y}}_{z,i}\leftarrow {\mathbf b}ar{{\mathbf x}}_{z,i},\mbox{ } \forall i = 1,\ldots,n,\mbox{ }\forall z= 1,\ldots,Z$ \ENDFOR \STATE ${\mathbf b}ar{{\mathbf y}}_{i}\leftarrow [{\mathbf y}^T_{1,i},{\mathbf y}^T_{2,i},\ldots,{\mathbf y}^T_{Z,i}]^T,\mbox{ } \forall i = 1,\ldots,n$ {\mathbf e}nd{algorithmic} {\mathbf e}nd{algorithm} To reduce the computational complexity of MBN-E, we design a new algorithm fMBN-E in Algorithm \ref{alg:fmbn-e}. Its architecture is shown in Fig. \ref{fig:x}. Specifically, fMBN-E and MBN-E differs in the following two aspects. {\mathbf b}egin{itemize} \item \textbf{The first novel aspect:} fMBN-E trains a single bottom layer, instead of training $Z$ independent bottom layers as that in MBN-E. \item \textbf{The second novel aspect:} For training each MBN base model, fMBN-E removes the random feature selection step from MBN. This modification makes us able to train the MBN base learners by random resampling of similarity scores, instead of random resampling of data. {\mathbf e}nd{itemize} From the above algorithm, we can easily obtain that: {\mathbf b}egin{thm}\label{thm:3} \textit{The computational complexity of fMBN-E is ${\mathcal O}(\alpha kVn)+{\mathcal O}(Zn^2)$.} {\mathbf e}nd{thm} Comparing Theorems \ref{thm:2} and \ref{thm:3}, we see that the computational complexities of the bottom layer and the other layers are reduced by $Z$ and $kV/n$ times respectively. For example, in a typical setting where $k=n/2$, $Z = 40$, and $V=400$, the computational complexity of MBN-E is as high as $ ({\mathcal O}(8000\alpha n^2)+{\mathcal O}(8000n^2))$, while the complexity of fMBN-E is ${\mathcal O}(200\alpha n^2)+{\mathcal O}(40n^2)$ which may be hundreds of times faster than MBN-E. 
Particularly, because the complexity of the original MBN model is $({\mathcal O}(\alpha kVn)+{\mathcal O}(kVn))$ {\mathbf c}ite{zhang2018multilayer}, we can see that fMBN-E may be even faster than a single MBN described in {\mathbf c}ite{zhang2018multilayer} since that $V$ is larger than $Z$ in practice. {\mathbf s}ubsection{Analysis}\label{discussion} Here we explain theoretically how the two novel aspects of fMBN-E reduce the computational complexity of MBN-E without suffering significant performance degradation. {\mathbf s}ubsubsection{On the first novel aspect of fMBN-E} Based on Theorem \ref{thm:4}, we can draw the connections between $\mathbb{E}_{\mathrm{ensemble}}/\mathbb{E}_{\mathrm{single}}$, $\rho$, and $V$ in Fig. \ref{fig:jfoa}, and further derive the following corollary from {\mathbf e}qref{eq:important}. {\mathbf b}egin{figure}[t] {\mathbf c}entering \resizebox{5cm}{!}{\includegraphics*{untitled.pdf}} {\mathbf c}aption{Relationship between the estimation error $\mathbb{E}_{\mathrm{ensemble}}/\mathbb{E}_{\mathrm{single}}$, correlation coefficient $\rho$, and number of $k$-centroids clusterings per layer $V$.} \label{fig:jfoa} {\mathbf e}nd{figure} {\mathbf b}egin{cor}\label{cor:0} The estimation errors of the bottom layers of fMBN-E $\mathbb{E}_{\mathrm{fMBN-E}}$ and MBN $\mathbb{E}_{\mathrm{MBN-E}}$ have the following connection: {\mathbf b}egin{equation} \frac{\mathbb{E}_{\mathrm{fMBN-E}}}{\mathbb{E}_{\mathrm{MBN-E}}} = \frac{\left(\frac{1}{V}+\left(1-\frac{1}{V}\right)\rho\right)\mathbb{E}_{\mathrm{single}}}{\left(\frac{1}{ZV}+\left(1-\frac{1}{ZV}\right)\rho\right)\mathbb{E}_{\mathrm{single}}}=\frac{Z+(ZV-Z)\rho}{1+(ZV-1)\rho} {\mathbf e}nd{equation} {\mathbf e}nd{cor} From Corollary \ref{cor:0}, we can further derive the following corollary: {\mathbf b}egin{cor}\label{cor:1} When $V$ is large enough, the estimation error of the bottom layer of fMBN-E is similar to that of $Z$ independent bottom layers of MBN-E: {\mathbf b}egin{equation} {\mathbb{E}_{\mathrm{fMBN-E}}}\approx {\mathbb{E}_{\mathrm{MBN-E}}} {\mathbf e}nd{equation} {\mathbf e}nd{cor} {\mathbf b}egin{proof} According to Corollary \ref{cor:0}, we see that, when $V$ and $Z$ are both large enough, ${\mathbb{E}_{\mathrm{fMBN-E}}}/{\mathbb{E}_{\mathrm{MBN-E}}}$ is determined by $\rho$. For the first case when $\rho\rightarrow 0$, ${\mathbb{E}_{\mathrm{fMBN-E}}}\approx Z {\mathbb{E}_{\mathrm{MBN-E}}} $; for the second case when $\rho{\mathbf g}g 0$, ${\mathbb{E}_{\mathrm{fMBN-E}}}\approx {\mathbb{E}_{\mathrm{MBN-E}}} $. In the following, we show that the second case is true. It is easy to know that enlarging $k$ reduces $\mathbb{E}_{\mathrm{single}}$. From {\mathbf e}qref{eq:rho}, we also observe that, when $k$ is enlarged, $\rho$ is enlarged as well. According to Theorem \ref{thm:4}, for the bottom layer of MBN, empirically, setting $k$ to a proper number balances $\mathbb{E}_{\text{single}}$ and $\rho$, which produces the minimum $\mathbb{E}_{\text{ensemble}}$. Here we take the common setting $k=n/2$ and $a=0.5$ as an example. In this setting, we may have $\rho \approx 0.0625$, which supports that ${\mathbb{E}_{\mathrm{fMBN-E}}}\approx {\mathbb{E}_{\mathrm{MBN-E}}} $. Corollary \ref{cor:1} is proved. {\mathbf e}nd{proof} Corollary \ref{cor:1} motivates us to train a single bottom layer as fMBN-E, instead of training $Z$ independent bottom layers as MBN-E. 
{\mathbf s}ubsubsection{On the second novel aspect of fMBN-E} This subsection explains why fMBN-E is able to discard the random feature selection step of MBN when training the upper layers. {\mathbf b}egin{cor}\label{cor:3} The random feature selection step has limited effect on the upper layers of the MBN base models of fMBN-E. {\mathbf e}nd{cor} {\mathbf b}egin{proof} For the upper layers of fMBN-E, the parameter $k$ is usually far smaller than $n$, e.g. $k=n/2^3$ at the third layer from bottom-up. According to {\mathbf e}qref{eq:rho} if we remove the random feature selection step by setting $a=1$, we may have $\rho \approx 1/2^6$ . From Fig. \ref{fig:jfoa}, we see that $\mathbb{E}_{\text{ensemble}}$ is far smaller than $\mathbb{E}_{\text{single}}$ when $\rho \approx 1/2^6$. Therefore, we do not need the random feature selection step to further pursue a marginal reduction of $\mathbb{E}_{\text{ensemble}}$. {\mathbf e}nd{proof} Corollary \ref{cor:3} motivates us to remove the random feature selection step at the upper layers of fMBN-E, which provides the opportunity to reduce the computational complexity significantly. Following a similar explanation with the proof of Corollary \ref{cor:3}, we can obtain: {\mathbf b}egin{cor}\label{cor:2} The random feature selection step reduces the estimation error of the bottom layer of fMBN-E significantly. {\mathbf e}nd{cor} Corollary \ref{cor:2} motivates us to retain the random feature selection step at the bottom layer of fMBN-E. {\mathbf s}ection{Unsupervised Network Structure Selection}\label{mbn-s} In this section, we first present an unsupervised ensemble selection framework for MBN-E in Section \ref{subsec:framework}, and then present MBN-SO and MBN-SD in Sections \ref{subsec:MBN-SO} and \ref{subsec:MBN-SD} respectively. {\mathbf s}ubsection{Framework}\label{subsec:framework} {\mathbf b}egin{algorithm}[t] {\mathbf c}aption{Unsupervised ensemble selection for MBN-E.} {\mathbf b}egin{algorithmic}[1]\label{alg:so} {\mathcal R}EQUIRE Sparse output of MBN-E $\{{\mathbf b}by_i \}_{i=1}^{n}$ and its low-dimensional representation $\{{\mathbf b}bu_i\}_{i=1}^n$; \\ Sparse outputs of the MBN base models $\{\{{\mathbf y}_{z,i} \}_{i=1}^{n}\}_{z=1}^Z$ and their low-dimensional representations $\{\{{\mathbf u}_{z,i}\}_{i=1}^n\}_{z=1}^Z$;\\ Number of selected base models $B$\\ Number of classes $c$ (optional). \\ \ENSURE $\{{\mathbf b}ar{{\mathbf b}ar{{\mathbf y}}}_{i}\}_{i=1}^n$, $\{{\mathbf b}ar{{\mathbf b}ar{{\mathbf u}}}_{i}\}_{i=1}^n$. 
{\mathbf I}F{$c$ is given} \STATE $\{l_i\}_{i=1}^n\leftarrow\mathrm{clustering}(\{{\mathbf b}bu_i \}_{i=1}^{n},c)$\\ \FOR{$z=1$ to $Z$} \STATE ${\mathbf o}mega_z \leftarrow f_{\textrm{MBN-SO}}(\{l_i\}_{i=1}^n, \{{\mathbf u}_{z,i} \}_{i=1}^{n})$ \\$\quad$(or ${\mathbf o}mega_z \leftarrow f_{\textrm{MBN-SO}}(\{l_i\}_{i=1}^n, \{{\mathbf y}_{z,i}\}_{i=1}^{n})$)\\ \ENDFOR \ELSE \FOR{$z=1$ to $Z$} \STATE ${\mathbf o}mega_z \leftarrow f_{\textrm{MBN-SD}}(\{{\mathbf b}by_i \}_{i=1}^{n}, \{{\mathbf y}_{z,i} \}_{i=1}^{n})$ \\$\quad$(or ${\mathbf o}mega_z \leftarrow f_{\textrm{MBN-SD}}(\{{\mathbf b}bu_i \}_{i=1}^{n}, \{{\mathbf u}_{z,i} \}_{i=1}^{n})$)\\ \ENDFOR \ENDIF \STATE Pick $B$ sparse representations that correspond to the $B$ largest weights of $\{{\mathbf o}mega_z\}_{z=1}^Z$, supposed to be $\{\{{\mathbf x}_{b,i} \}_{i=1}^{n}\}_{b=1}^B$ without loss of generality \STATE ${\mathbf b}ar{{\mathbf b}ar{{\mathbf x}}}_{i} \leftarrow [{\mathbf x}_{1,i}^T,\ldots,{\mathbf x}_{B,i}^T]^T, \mbox{ } \forall i = 1,\ldots,n$ \STATE $\{{\mathbf b}ar{{\mathbf b}ar{{\mathbf y}}}_{i}\}_{i=1}^n \leftarrow \mathrm{PCA}(\{{\mathbf b}ar{{\mathbf b}ar{{\mathbf x}}}_{i}\}_{i=1}^n)$ {\mathbf e}nd{algorithmic} {\mathbf e}nd{algorithm} Algorithm \ref{alg:so} presents the unsupervised ensemble selection framework for MBN-E. If the number of classes $c$ is given, it adopts MBN-SO to select $B$ effective MBN base models. Specifically, it first conducts clustering on $\{{\mathbf b}by_i\}_{i=1}^n$, which generates a set of predicted labels $\{l_i\}_{i=1}^n$. Then, it calculates a weight ${\mathbf o}mega_z$ for the $z$-th MBN base model by an optimization-like criterion $f_{\textrm{MBN-SO}}(\{l_i\}_{i=1}^n, \{{\mathbf y}_{z,i} \}_{i=1}^{n})$. The larger the weight ${\mathbf o}mega_z$ is, the more important the corresponding MBN base model is. If $c$ is not given, it adopts MBN-SD to select the base models. Specifically, it first calculates the weight ${\mathbf o}mega_z$ by evaluating the difference between the distributions $\{{\mathbf b}bx_i \}_{i=1}^{n} $ and $\{{\mathbf x}_{z,i} \}_{i=1}^{n}$ directly via a distribution divergence criterion $f_{\textrm{MBN-SD}}({\mathbf c}dot)$. After obtaining $\{{\mathbf o}mega_z\}_{z=1}^Z$, it concatenates the sparse output of the $B$ ($B\ll Z$) MBN base models whose weights are the $B$ largest ones among $\{{\mathbf o}mega_z\}_{z=1}^Z$ into a new sparse representation of data $\{{\mathbf b}ar{{\mathbf b}ar{{\mathbf x}}}_{i}\}_{i=1}^n$. Note that there are a vast number of ensemble selection algorithms manipulating on $\{{\mathbf o}mega_z\}_{z=1}^Z$. Because this is not the focus of this paper, here we prefer the simple yet effective one. {\mathbf s}ubsection{MBN-SO: Ensemble selection with optimization-like criteria}\label{subsec:MBN-SO} MBN-SO follows the comparison conclusion on the optimization-like criteria {\mathbf c}ite{vendramin2010relative}, and picks four best criteria, which are the silhouette width criterion (SWC), point-biserial (PB), PBM, and variance ratio criterion (VRC), respectively. Because they are defined in Euclidian spaces, MBN-SO takes the low-dimensional representations $\{{\mathbf y}_{z,i}\}_{z=1}^Z$ of the MBN base models for evaluation. Due to the length limitation of the paper, we present the four criteria in Appendix C of the Supplementary Material. 
{\mathbf s}ubsection{MBN-SD: Ensemble selection with distribution divergence criteria}\label{subsec:MBN-SD} MBN-SD adopts MMD, which is a common distribution divergence criterion in unsupervised domain adaptation, to evaluate the distribution divergence between the outputs of MBN-E and its MBN base models. See Appendix C of the Supplementary Material for the detailed derivation of MMD. Note that, we have studied many probability distribution divergence criteria in literature, including the Kullback-Leibler divergence, total variance distance, L2-norm distance, Hellinger distance, Wasserstein distance, Bhattacharyya distance, etc. Unfortunately, they do not work for MBN-SD. However, it does not mean that MMD is the only choice, which needs further investigation in the future. {\mathbf s}ection{Experiments}\label{experiment} In this section, we first compare the proposed methods with a number of representative methods on several benchmark datasets in Section \ref{subsec:main_result}. Then, we demonstrate how fMBN-E accelerates MBN-E without sacrificing accuracy in Section \ref{subsec:fMBN-E}, and compare the ensemble selection criteria in Section \ref{subsec:effect1}. Finally, we present the experimental conclusions of some important aspects in Section \ref{subsec:effect2}. {\mathbf s}ubsection{Datasets} {\mathbf b}egin{table}[t] {\mathbf c}aption{\label{table:data_set_info}{{Description of data sets.} The term ``optimal $\delta$'' denotes where the optimal performance of MBN appears by searching $\delta$ from a range of $(0,1)$.}} \renewcommand{1.5}{1.5} {\mathbf c}enterline{{\mathbf s}calebox{0.75}{ {\mathbf b}egin{tabular}{llllll} {\mathbf h}line {Name} & {\# samples} & \# dimensions & \# classes & Attribute & Optimal $\delta$\\ {\mathbf h}line Dermatology & 366 & 34 & 6 & Biomedical & $ (0, 0.2)$\\ New-Thyroid & 255 & 5 & 3 & Biomedical & $ (0, 0.35)$\\ UMIST & 575 & 1024 & 20 & Faces & $ (0.75, 0.85)$\\ Extended-Yale B & 2414 & 32256 & 38 & Faces & $ (0.6, 0.75)$\\ COIL20 & 1440 & 4096 & 20 & Images& $ (0.8, 0.9)$\\ COIL100 & 7200 & 1024 & 100 & Images& $ (0.8, 0.9)$\\ 20-Newsgroups & 18846 & 26214 &20 & Text& $ (0.4, 0.5)$\\ MNIST & 70000 & 768 & 10 & Images& $(0.35, 0.75)$ \\ {\mathbf h}line {\mathbf e}nd{tabular} }} {\mathbf e}nd{table} We selected 8 benchmark datasets as summarized in Table \ref{table:data_set_info}. For Extended-Yale B, because the luminance of the images dominates the similarity measurement instead of the faces themselves, we preprocessed Extended-Yale B by the dense scale invariant feature transform as in {\mathbf c}ite{maggu2020deeply}. For 20-Newsgroups, we extracted the term frequency-inverse document frequency (TF-IDF) text feature. PCA preprocessing was applied to the image datasets, which reduced the original features to 100 dimensions. Cosine similarity measurement was used to measure the similarity between the documents of 20-Newsgroups. All other datasets used Euclidean distance as the similarity measurement. Clustering accuracy (ACC) was used as the evaluation metric. From the table, we see that the operating range of the optimal $\delta$ of MBN appears at dramatically different positions, which are sufficient to demonstrate how the proposed methods address the network structure selection problem, as well as how the proposed methods behave when comparing with the state-of-the-art referenced methods. 
{\mathbf s}ubsection{Parameter settings} {\mathbf b}egin{table*}[t] {\mathbf c}entering {\mathbf c}aption{{ACC comparison between the proposed methods and the state-of-the-art referenced methods.} The results of the referenced methods on the datasets marked with ``$*$'' are copied from their original publications or the ``papers with code'' website. The number in bold denotes the best performance.} {\mathbf s}calebox{0.93}{{\mathbf b}egin{tabular}{lllll} \toprule[1pt] & Dermatology & New-Thyroid & UMIST* & Extended-Yale B* \\ {\mathbf h}line kmeans & 0.261 & 0.860 & 0.408 & 0.311 {\mathbf b}igstrut[t]\\ Rank1 & 0.313 (DREC {\mathbf c}ite{zhou2019ensemble}) & 0.863 (Borda {\mathbf c}ite{sevillano2007bordaconsensus}) & \textbf{0.769 (DASC {\mathbf c}ite{zhou2018deep})} & \textbf{0.992 (DMSC {\mathbf c}ite{abavisani2018deep})} \\ Rank2 & 0.307 (LinkClueE {\mathbf c}ite{iam2011link}) & 0.859 (LinkClueE {\mathbf c}ite{iam2011link}) & 0.750 (DSC-Net-L2 {\mathbf c}ite{ji2017deep}) & 0.973 (DSC-Net-L2 {\mathbf c}ite{ji2017deep}) \\ Rank3 & 0.306 (HGPA {\mathbf c}ite{strehl2003cluster}) & 0.853 (ECPCS\_MC {\mathbf c}ite{huang2018enhanced}) & 0.732 (J-DSSC {\mathbf c}ite{lim2020doubly})) & 0.924 (J-DSSC {\mathbf c}ite{lim2020doubly})) \\ Rank4 & 0.299 (CSPA {\mathbf c}ite{strehl2003cluster}) & 0.851 (MCLA {\mathbf c}ite{strehl2003cluster}) & 0.728 (DSC-Net-L1 {\mathbf c}ite{ji2017deep}) & 0.917 (A-DSSC {\mathbf c}ite{lim2020doubly}) \\ Rank5 & 0.297 (ECPCS\_HC {\mathbf c}ite{huang2018enhanced}) & 0.845 (Vote {\mathbf c}ite{dimitriadou2002combination}) & 0.725 (A-DSSC {\mathbf c}ite{lim2020doubly})) & 0.776 (SSC-OMP {\mathbf c}ite{you2016scalable}) \\ MBN (default) & 0.855 & 0.881 & 0.544 & 0.934 \\ MBN-E & 0.866 & 0.860 & 0.670 & 0.973 \\ MBN-SO (VRC) & 0.714 & 0.771 & \textbf{0.767 } & 0.941 \\ MBN-SD & \textbf{0.947 } & \textbf{0.941 } & 0.547 & 0.909 {\mathbf b}igstrut[b]\\ {\mathbf h}line \rowcolor[rgb]{ .851, .851, .851} MBN$^\dagger$ & 0.971 & 0.964 & 0.770 & 0.969 {\mathbf b}igstrut\\ \toprule[1pt] {\mathbf s}pecialrule{0em}{1pt}{4pt} \toprule[1pt] & COIL20* & COIL100* & 20-Newsgroups & MNIST* \\ {\mathbf h}line kmeans & 0.679 & 0.511 & 0.416 & 0.527 {\mathbf b}igstrut[t]\\ Rank1 & \textbf{1.000 (JULE {\mathbf c}ite{yang2016joint})} & \textbf{0.911 (JULE {\mathbf c}ite{yang2016joint})} & 0.600 (LTM {\mathbf c}ite{cai2009probabilistic}) & \textbf{0.979 (N2D {\mathbf c}ite{mcconville2021n2d})} \\ Rank2 & 0.858 (AGDL {\mathbf c}ite{zhang2012graph}) & 0.824 (A-DSSC {\mathbf c}ite{lim2020doubly}) & 0.523 (DFPA {\mathbf c}ite{henao2015deep}) & 0.969 (DDC-DA {\mathbf c}ite{ren2020deep}) \\ Rank3 & 0.858 (GDL {\mathbf c}ite{zhang2012graph}) & 0.796 (J-DSSC {\mathbf c}ite{lim2020doubly})) & 0.490 (LDA {\mathbf c}ite{blei2003latent}) & 0.965 (PSSC {\mathbf c}ite{villar2020scattering}) \\ Rank4 & 0.793 (DBC {\mathbf c}ite{li2018discriminatively}) & 0.775 (DBC {\mathbf c}ite{li2018discriminatively}) & 0.447 (AnchorFree {\mathbf c}ite{fu2018anchor}) & 0.964 (GDL {\mathbf c}ite{zhang2012graph}) \\ Rank5 & N/A & 0.731 (GDL {\mathbf c}ite{zhang2012graph}) & 0.435 (LapPLSI {\mathbf c}ite{cai2008modeling}) & 0.939 (SR-K-means {\mathbf c}ite{jabi2019deep}) \\ MBN (default) & 0.795 & 0.683 & 0.623 & 0.964 \\ MBN-E & 0.929 & 0.832 & 0.584 & 0.964 \\ MBN-SO (VRC) & \textbf{0.995 } & \textbf{0.908 } & \textbf{0.623 } & 0.964 \\ MBN-SD & 0.973 & 0.803 & 0.611 & 0.963 {\mathbf b}igstrut[b]\\ {\mathbf h}line \rowcolor[rgb]{ .851, .851, .851} MBN$^\dagger$ & 0.994 & 0.901 & 0.623 & 0.965 {\mathbf b}igstrut\\ 
\toprule[1pt] {\mathbf e}nd{tabular} } \label{tab:addlabel_main} {\mathbf e}nd{table*} The parameter settings of MBN and the proposed methods are summarized as follows: {\mathbf b}egin{itemize} \item \textbf{MBN (default) {\mathbf c}ite{zhang2018multilayer}:} We used its default setting as in {\mathbf c}ite{zhang2018multilayer}. \item \textbf{MBN-E:} It used 40 MBN base models. The base models of MBN-E used the same parameter setting as MBN except that $\delta$ was randomly selected from $[0.05,0.95]$. \item \textbf{fMBN-E:} It is the fast version of MBN-E without performance degradation. It discards the random feature selection step in the upper layers of the MBN base models. \item \textbf{fMBN-Ev2: } It is a \textit{variant of fMBN-E} that discards the random feature selection step at the bottom layer, and uses the random resampling of similarity scores instead of the random data resampling to train the bottom layer as its upper layers. It accelerates the training time of the bottom layer of fMBN-E, with a risk of performance degradation. \item \textbf{MBN-SO:} The number of selected base models $B$ was set to 3. The MBN-SO with the four optimization-like criteria are denoted as ``MBN-SO (SWC)'', ``MBN-SO (PB)'', ``MBN-SO (PBM)'', and ``MBN-SO (VRC)'', respectively. \item \textbf{MBN-SD:} The parameter $B$ was set to $10$. {\mathbf e}nd{itemize} Agglomerative hierarchical clustering (AHC) was used for partitioning data into clusters. Although the MMD criterion in MBN-SD is designed to handle the case where the number of classes is unknown, we still give AHC the number of classes during the clustering stage, for a comparable study on how the distribution divergence criterion differs from the optimization-like criteria in MBN-SO. All reported results are average ones over 5 independent runs. The time efficiency was evaluated on an Intel(R) Xeon(R) Platinum 8160 CPU server with 512 GB memory, where the CPU has 48 physical cores. All experiments were run with 48 parallel workers of MATLAB. The source code is available at {\mathbf u}rl{http://www.xiaolei-zhang.net/mbn-e.htm}. {\mathbf s}ubsection{Comparison methods} The comparison strategy is described as follows. For the image datasets, we copied the ranking lists of the image clustering methods from {{\mathbf u}rl{https://paperswithcode.com/}}, which reflects the state-of-the-art performance on the datasets. {Note that because self-supervised deep learning based methods explore strong handcrafted features from augmented data {\mathbf c}ite{chen2020simple}, we omit them from the experiments to maintain the fairness of the comparison.} For the small-scale Dermatology and New-Thyroid datasets that deep learning methods usually do not handle with, we compared with 12 representative clustering ensemble methods, see Supplementary Material for the referenced methods. All these clustering ensemble methods are meta-clustering functions, which can be used jointly with any base clusterings, such as k-means or spectral clustering. Here we took 40 k-means clusterings as the base clusterings for each meta-clustering function. Like many clustering ensemble methods, e.g. {\mathbf c}ite{fred2005combining}, we selected the number of clusters of each k-means base clustering randomly from a range of $[2c, 10c]$. For the 20-Newsgroups text corpus, we compared with 9 text clustering methods, see {\mathbf c}ite{wang2021deep} for the referenced methods. Besides, k-means clustering are also provided as a baseline. 
Because k-means clustering suffers from bad local minima, we ran k-means clustering on each dataset 100 times and picked the run with the minimum objective value. All reported results are averages over 5 independent runs.
\begin{table*}[t]
  \centering
  \caption{ACC comparison between MBN-E, fMBN-E, and fMBN-Ev2.}
  \scalebox{0.9}{
  \begin{tabular}{lllllllll}
  \hline
   & Dermatology & New-Thyroid & UMIST & Extended-Yale B & COIL20 & COIL100 & 20-Newsgroups & MNIST \bigstrut\\
  \hline
  MBN-E & \textbf{0.866 } & 0.860 & \textbf{0.670 } & \textbf{0.973 } & 0.929 & \textbf{0.832 } & 0.584 & \textbf{0.964 } \bigstrut[t]\\
  fMBN-E & \textbf{0.868 } & \textbf{0.907 } & 0.659 & 0.964 & \textbf{0.938 } & \textbf{0.837 } & 0.582 & \textbf{0.964 } \\
  fMBN-Ev2 & 0.528 & 0.576 & 0.653 & 0.896 & 0.902 & 0.828 & \textbf{0.595 } & 0.963 \bigstrut[b]\\
  \hline
  \end{tabular}}
  \label{tab:afa1}
\end{table*}
\begin{table*}[t]
  \centering
  \caption{Running time (in seconds) of the bottom layers of MBN-E, fMBN-E, and fMBN-Ev2.}
  \scalebox{0.9}{
  \begin{tabular}{lllllllll}
  \hline
   & Dermatology & New-Thyroid & UMIST & Extended-Yale B & COIL20 & COIL100 & 20-Newsgroups & MNIST \bigstrut\\
  \hline
  MBN-E & 225.08 & 14.96 & 118.00 & 2190.72 & 834.64 & 22148.48 & 59997.16 & 979832.20 \bigstrut[t]\\
  fMBN-E & 0.63 & 0.36 & 3.44 & 70.96 & 24.99 & 679.75 & 1356.35 & 5525.12 \\
  fMBN-Ev2 & 0.84 & 0.74 & 0.82 & 2.74 & 1.17 & 20.58 & 278.06 & 1216.84 \bigstrut[b]\\
  \hline
  \end{tabular}}
  \label{tab:afa2}
\end{table*}
\begin{table*}[t]
  \centering
  \caption{Running time (in seconds) of the upper layers of MBN-E, fMBN-E, and fMBN-Ev2.}
  \scalebox{0.9}{
  \begin{tabular}{lllllllll}
  \hline
   & Dermatology & New-Thyroid & UMIST & Extended-Yale B & COIL20 & COIL100 & 20-Newsgroups & MNIST \bigstrut\\
  \hline
  MBN-E & 293.85 & 165.15 & 508.75 & 1829.94 & 1413.17 & 5617.11 & 26002.17 & 63939.58 \bigstrut[t]\\
  fMBN-E & 3.02 & 1.63 & 3.38 & 31.85 & 20.05 & 206.46 & 2085.35 & 9108.11 \\
  fMBN-Ev2 & 1.95 & 1.34 & 2.37 & 21.52 & 10.17 & 103.35 & 1141.76 & 8638.58 \bigstrut[b]\\
  \hline
  \end{tabular}}
  \label{tab:afa3}
\end{table*}

\subsection{General results}\label{subsec:main_result}
Table \ref{tab:addlabel_main} lists the results of the aforementioned comparison methods and the proposed methods. Because it is too lengthy to list all results, here we only list the results of the top 5 referenced methods; for the proposed MBN-SO variants, we only provide ``MBN-SO (VRC)'' as a representative. See the Supplementary Material for the results of the other three variants of MBN-SO. We also list the performance of the MBN with the optimal $\delta$, denoted as MBN$^\dagger$. Note that because it is unlikely that the optimal $\delta$ can be selected manually in real-world applications, MBN$^\dagger$ only provides an upper bound for the proposed methods. From the table, we see that the proposed methods outperform ``MBN (default)'' in general, as targeted in this paper. Specifically, MBN-E outperforms ``MBN (default)'' significantly on UMIST, Extended Yale B, COIL20, and COIL100, where the optimal operating range of $\delta$ of MBN is far from the default value of 0.5.
It is also comparable to ``MBN (default)'' on Dermatology and New-Thyroid. As for MNIST and 20-Newsgroups, even if the default $\delta$ happens to be in the optimal operating range, MBN-E can still be competitive with ``MBN (default)'' when the optimal range is wide enough, as on MNIST. MBN-SO further improves the performance of MBN-E, and outperforms ``MBN (default)'' significantly on most datasets, except the small-scale Dermatology and New-Thyroid. Finally, MBN-SD outperforms ``MBN (default)'' significantly on Dermatology, New-Thyroid, COIL20, and COIL100, and is comparable to the latter on the remaining four datasets.
\begin{figure*}[t]
  \centering
  \resizebox{14cm}{!}{\includegraphics*{plot_scores2.pdf}}
  \caption{Weights of the MBN base models produced by different ensemble selection criteria, where SWC, PB, PBM and VRC are optimization-like criteria for MBN-SO, and MMD is a distribution divergence criterion for MBN-SD. The dotted lines in grey color are the accuracies of MBN with respect to $\delta$, which are references for evaluating the effectiveness of the weights.}
  \label{fig:scores}
\end{figure*}
The proposed MBN-SO also approaches the top performance of the referenced methods on most datasets. Although it behaves worse than DMSC on Extended Yale B, it still ranks among the top 5 comparison methods. Here we need to emphasize one merit of MBN-SO: it is implemented in a simple mathematical form and behaves robustly across datasets without carefully selected architectures or hyperparameters, which facilitates its practical use. It is interesting to observe that the clustering ensemble methods do not show significant performance improvement over k-means on the small-scale Dermatology and New-Thyroid data. Note also that the performance of text clustering is strongly related to the text features. If bag-of-words is used instead of TF-IDF, then the performance of all referenced methods on 20-Newsgroups degrades significantly. To improve the performance of text clustering, new text features that incorporate context information of words may be helpful.

Focusing on our three algorithms, we see that MBN-SO is at least comparable to MBN-E and MBN-SD on most of the challenging datasets, except the two small-scale ones, where a shallow MBN is already able to produce a highly accurate result. Comparing MBN-E and MBN-SD, we see that MBN-SD outperforms MBN-E on the two small-scale datasets, COIL20, and 20-Newsgroups, and is inferior to the latter on UMIST, Extended Yale B, and COIL100. Although the result of MBN-SD is not very impressive, it introduces a new class of ensemble selection criteria---distribution divergence criteria---into clustering ensemble, which may motivate new criteria beyond MMD for further improving the performance of MBN-SD.

\subsection{Comparison between MBN-E and fMBN-E}\label{subsec:fMBN-E}
Table \ref{tab:afa1} lists the clustering accuracies of MBN-E, fMBN-E, and fMBN-Ev2. From the table, we see that MBN-E and fMBN-E achieve similar performance. This phenomenon supports the correctness of Corollaries \ref{cor:1} and \ref{cor:3}. Moreover, fMBN-E behaves better than fMBN-Ev2, particularly on Dermatology, New-Thyroid, and Extended Yale-B, which supports the correctness of Corollary \ref{cor:2}.

Tables \ref{tab:afa2} and \ref{tab:afa3} summarize the running time of the comparison methods. From the tables, we see that fMBN-E is dozens of times faster than MBN-E in training the bottom layers.
Moreover, fMBN-E and fMBN-Ev2 are even hundreds of times faster than MBN-E in training the upper layers. This phenomenon supports the theoretical analysis of Theorem \ref{thm:3}.

\subsection{Comparison between different ensemble selection criteria for MBN-SO and MBN-SD}\label{subsec:effect1}
To study how different ensemble selection criteria affect the weights of the MBN base models, we compared the weights with the clustering accuracy of the MBN base models in a single run in Fig. \ref{fig:scores}. From the figure, we see that the weights produced by all ensemble selection criteria reflect the quality of the base models well on most datasets except Dermatology. In particular, the weights produced by ``VRC'' seem to be the most accurate among the ensemble selection criteria. Although the weights produced by ``MMD'', which is a distribution divergence criterion, seem less accurate than those of the optimization-like criteria, the optimal MBN base models may still be selected when a sufficient number of base models is picked.

\subsection{Discussions}\label{subsec:effect2}
This subsection reports the main conclusions on some important aspects, leaving the detailed description of the experiments to Appendix D of the Supplementary Material.

\subsubsection{Effect of the number of selected base models on MBN-SO and MBN-SD}
To study how the number of selected MBN base models affects the performance of MBN-SO and MBN-SD, we tuned the hyperparameter $B$ from 1 to 10. We find that, for MBN-SO, we can set the hyperparameter $B$ to a small number to save computing resources; however, for MBN-SD, we should set $B$ to a large number in order to achieve the optimal performance.

\subsubsection{Effect of the referenced labels on MBN-SO}
MBN-SO needs referenced labels to calculate the weights of the MBN base models, where we adopt the predicted labels from MBN-E as the reference. After studying different generation methods for the referenced labels, including (i) randomly generated labels, (ii) predicted labels from ``MBN (default)'', (iii) predicted labels from MBN-E, and (iv) ground-truth labels, we find that the accuracy of the referenced labels has a significant impact on the performance, and that the predicted labels generated from MBN-E yield good performance.

\subsubsection{On candidate meta-clustering functions of MBN-E}
It is known that combining the base clusterings via a meta-clustering function is important for clustering ensemble technologies. In this paper, we combine the MBN base models by simply concatenating their sparse outputs without resorting to an advanced meta-clustering function. In the Supplementary Material, we have tried 12 representative meta-clustering functions to fuse the output of the MBN base models. Empirical results show that simply concatenating the outputs of the MBN base models yields similar performance to the best meta-clustering functions.

\subsubsection{On candidate ensemble selection methods of MBN-SO}
MBN-SO simply selects the MBN base models with the highest weights (a schematic sketch of this top-$B$ selection is given below). In the literature, there are many studies on how to select the base models given the weights, which may lead to higher performance and lower computational cost than the proposed method. In the Supplementary Material, we have compared with 8 representative ensemble selection methods as well as their 5 variants.
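The sketch below schematically illustrates the top-$B$ selection. It is only one plausible reading of the weighting step, in which the Calinski--Harabasz (VRC) index is computed on each base model's output with the referenced labels of MBN-E; the exact weighting procedure of MBN-SO may differ, and all names are illustrative.
\begin{verbatim}
import numpy as np
from sklearn.metrics import calinski_harabasz_score

def select_top_models(base_outputs, referenced_labels, B=3):
    # base_outputs: list of (n_samples x d_m) representations,
    #               one per MBN base model
    # referenced_labels: predicted labels of MBN-E, used as the
    #               reference partition
    scores = [calinski_harabasz_score(Z, referenced_labels)
              for Z in base_outputs]
    top = np.argsort(scores)[::-1][:B]  # B highest-weighted models
    return top, np.asarray(scores)
\end{verbatim}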
Empirical results show that simply picking the top MBN base models is enough to reach the highest performance, while further exploring the diversity between the base models via complicated ensemble selection algorithms is unnecessary.

\section{Applications}\label{sec:appl}
In this section, we apply the proposed algorithms to image segmentation and graph data mining.
\begin{figure*}[t]
  \centering
  \resizebox{13cm}{!}{\includegraphics*{image_segmentation.pdf}}
  \caption{Results of the image segmentation methods on 2 randomly selected examples from the 2017 Val images of the COCO dataset.}
  \label{fig:imageseg}
\end{figure*}

\subsection{Application to image segmentation}
Image segmentation partitions an image into multiple image segments, so as to simplify the analysis of the image. It is the process of assigning a label to every pixel of an image such that pixels with the same label share certain characteristics. It is a core task of image signal processing. It can be either unsupervised or supervised. Unsupervised image segmentation, which is usually used as a preprocessing step for supervised segmentation, is formulated as a clustering problem on pixels such that pixels with similar colors and nearby locations are grouped into the same cluster.
\begin{table}[t]
  \centering
  \caption{Description of the GEMSEC-facebook datasets.}
  \scalebox{0.9}{
  \begin{tabular}{lccc}
  \hline
   & Number of nodes & Density & Transitivity \bigstrut\\
  \hline
  Politicians & 5,908 & 0.0024 & 0.3011\\
  Companies & 14,113 & 0.0005 & 0.1532\\
  Athletes & 13,866 & 0.0009 & 0.1292\\
  News sites & 27,917 & 0.0005 & 0.1140\\
  Public figures & 11,565 & 0.0010 & 0.1666\\
  Artists & 50,515 & 0.0006 & 0.1140\\
  Government & 7,057 & 0.0036 & 0.2238\\
  TV shows & 3,892 & 0.0023 & 0.5906\\
  \hline
  \end{tabular}}
  \label{tab:fjaow}
\end{table}
\begin{table*}[t]
  \centering
  \caption{Modularity of the community detection algorithms on the GEMSEC-facebook datasets.
  The results of the referenced methods are copied from \cite{rozemberczki2019gemsec}.}
  \scalebox{0.9}{
  \begin{tabular}{lccccccccc}
  \hline
   & Politicians & Companies & Athletes & News sites & Public figures & Artists & Government & TV shows & \textbf{Ranking}\\
  \hline
  \multirow{2}*{Overlap factorization \cite{ahmed2013distributed}} & 0.810 & 0.553 & 0.601 & 0.471 & 0.551 & 0.474 & 0.608 & 0.786 & \multirow{2}*{\textbf{4.57}}\\
   & \scriptsize{($\pm$0.008)} & \scriptsize{($\pm$0.010)} & \scriptsize{($\pm$0.020)} & \scriptsize{($\pm$0.016)} & \scriptsize{($\pm$0.01)} & \scriptsize{($\pm$0.018)} & \scriptsize{($\pm$0.024)} & \scriptsize{($\pm$0.008)}\\
  \multirow{2}*{Walktrap \cite{pons2005computing}} & 0.841 & 0.639 & 0.670 & 0.514 & 0.628 & 0.554 & 0.675 & 0.790 & \multirow{2}*{\textbf{2.00}}\\
   & \scriptsize{($\pm$0.023)} & \scriptsize{($\pm$0.016)} & \scriptsize{($\pm$0.021)} & \scriptsize{($\pm$0.023)} & \scriptsize{($\pm$0.023)} & \scriptsize{($\pm$0.026)} & \scriptsize{($\pm$0.043)} & \scriptsize{($\pm$0.036)}\\
  \multirow{2}*{Fast greedy \cite{clauset2004finding}} & 0.819 & 0.665 & 0.605 & 0.531 & 0.630 & 0.464 & 0.615 & 0.835 & \multirow{2}*{\textbf{2.86}}\\
   & \scriptsize{($\pm$0.008)} & \scriptsize{($\pm$0.014)} & \scriptsize{($\pm$0.026)} & \scriptsize{($\pm$0.020)} & \scriptsize{($\pm$0.011)} & \scriptsize{($\pm$0.023)} & \scriptsize{($\pm$0.046)} & \scriptsize{($\pm$0.006)}\\
  \multirow{2}*{Label propagation \cite{gregory2010finding}} & 0.826 & 0.647 & 0.647 & 0.243 & 0.612 & 0.393 & 0.659 & 0.839 & \multirow{2}*{\textbf{3.29}}\\
   & \scriptsize{($\pm$0.009)} & \scriptsize{($\pm$0.075)} & \scriptsize{($\pm$0.094)} & \scriptsize{($\pm$0.159)} & \scriptsize{($\pm$0.027)} & \scriptsize{($\pm$0.018)} & \scriptsize{($\pm$0.041)} & \scriptsize{($\pm$0.004)}\\
  \multirow{2}*{fMBN-E} & 0.830 & 0.549 & 0.657 & 0.518 & 0.580 & 0.502 & 0.681 & 0.809 & \multirow{2}*{\textbf{2.29}}\\
   & \scriptsize{($\pm$0.004)} & \scriptsize{($\pm$0.011)} & \scriptsize{($\pm$0.002)} & \scriptsize{($\pm$0.014)} & \scriptsize{($\pm$0.015)} & \scriptsize{($\pm$0.003)} & \scriptsize{($\pm$0.009)} & \scriptsize{($\pm$0.005)} \\
  \hline
  \end{tabular}}
  \label{tab:dadaa}
\end{table*}
We randomly selected several images from the 2017 Val images of the COCO dataset\footnote{https://cocodataset.org} for evaluation. We reduced the length and width of each image to about 1/7 of their original sizes, and further transformed the color space from RGB to CIELAB. Finally, for each pixel, we concatenated its three-dimensional color values and its two-dimensional coordinates as the feature (a minimal sketch of this feature construction is given below). We compared with the classic mean-shift clustering and k-means clustering. The bandwidth of mean-shift was set to 0.2. The number of clusters for both k-means clustering and the proposed methods was set to 8. We applied k-means clustering to the output of the proposed methods. Two examples of the comparison results are shown in Fig. \ref{fig:imageseg}, while more examples are listed in Appendix E of the Supplementary Material.
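The following sketch illustrates the pixel-feature construction and the k-means segmentation baseline described above. It assumes scikit-image and scikit-learn, uses illustrative names, and normalizes the pixel coordinates, which is our choice rather than a detail specified above; the preprocessing in the released code may differ.
\begin{verbatim}
import numpy as np
from skimage.color import rgb2lab
from skimage.transform import rescale
from sklearn.cluster import KMeans

def pixel_features(image_rgb, scale=1/7):
    # downscale, convert RGB -> CIELAB, and build 5-D features
    # (3 color channels + 2 normalized pixel coordinates)
    small = rescale(image_rgb, scale, channel_axis=-1,
                    anti_aliasing=True)
    lab = rgb2lab(small)
    h, w, _ = lab.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy / h, xx / w], axis=-1)
    feats = np.concatenate([lab, coords], axis=-1)
    return feats.reshape(-1, 5), (h, w)

def kmeans_segmentation(image_rgb, n_segments=8):
    X, (h, w) = pixel_features(image_rgb)
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(X)
    return labels.reshape(h, w)  # one segment label per pixel
\end{verbatim}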
From the figure, we see that the proposed methods not only preserve more details of the images than mean-shift, but also yield smoother and more accurate results than k-means. As for the proposed methods, MBN-SO behaves similarly to fMBN-E.

\subsection{Application to graph data mining}
All of the aforementioned experiments were conducted on data whose features are given explicitly. However, the data points in many real-world applications do not have explicit features, e.g. graph data, where only the connections between the data points are given. Here we give an example of how to apply the proposed methods to graph data.

Community detection is a method for finding groups within complex systems that are represented as graphs. It is a core task of network science, and finds applications in network security, recommendation systems, etc. As collected at \url{https://snap.stanford.edu/data/}, the data used in community detection are various sparse graphs. Here we used the undirected GEMSEC-facebook data in the collection for evaluation. The statistics of the GEMSEC-facebook data are summarized in Table \ref{tab:fjaow}. For each link between a node $i$ and a node $j$, we set the elements $b_{i,j}$ and $b_{j,i}$ of the graph adjacency matrix $\mathbf{B}$ to the weight of the link. Because the pairwise similarity between the nodes has already been given as $\mathbf{B}$, the output of each $k$-centroids clustering at the bottom layer of fMBN-E is simply a random sample of the columns of $\mathbf{B}$.

Because the ground-truth number of communities is unknown, we used \textit{modularity} as the evaluation metric, as in \cite{rozemberczki2019gemsec}. Because the modularity can be calculated in an unsupervised manner by comparing $\mathbf{B}$ with the prediction result, we are able to search for the optimal modularity results as in \cite{rozemberczki2019gemsec}. Specifically, we set the parameter $k_o$ of fMBN-E to $1.5c$, where $c$ was set to 10, 20, 30, and 40, respectively. For each $k_o$, we grouped the nodes into 2 to 50 communities and picked the optimal result in terms of the modularity. We applied k-means clustering to the output of the proposed methods. Following \cite{rozemberczki2019gemsec}, we reported the average results over 5 independent runs.

Table \ref{tab:dadaa} lists the comparison results with four well-known community detection algorithms \cite{ahmed2013distributed,pons2005computing,clauset2004finding,gregory2010finding}. From the average ranking over the 8 community detection tasks, we see that the proposed fMBN-E ranks second, slightly worse than the walktrap algorithm \cite{pons2005computing}. Note that because MBN-SD yields almost identical performance to fMBN-E, we omit its result here.

\section{Conclusions}\label{conclusion}
In this paper, we aim to derive a simple and tuning-free deep clustering tool that is able to yield performance comparable to the state-of-the-art deep clustering methods, with the goal of alleviating the heavy parameter-tuning problem in clustering. To achieve this goal, we propose to automatically determine the network structure of the deep clustering algorithm---MBN---by ensemble learning and selection. The proposed MBN-E simply concatenates the sparse output of a number of MBN base models with different $\delta$ into a meta-representation.
The proposed MBN-SO and MBN-SD use the output of MBN-E to select the base models whose output distributions have the highest discriminability, without further exploring the diversity between the base models as conventional ensemble selection methods do. Because training an ensemble of MBN is expensive, we proposed fMBN-E, which first discards the random feature selection step of MBN and then replaces the step of random data resampling by random resampling of similarity scores. We proved theoretically that this simplification does not degrade the estimation accuracy of MBN-E. Together, the above methods constitute an efficient off-the-shelf deep clustering tool.

Experimental comparison results on a wide variety of benchmark datasets show that the proposed methods significantly outperform the MBN with the default network structure; fMBN-E is empirically hundreds of times faster than MBN-E without suffering performance degradation; MBN-SO is able to detect the optimal MBN base model, and reaches performance comparable to the state-of-the-art clustering methods; although MBN-SD is less effective than MBN-SO, it is the first work on unsupervised ensemble selection based on distribution divergence criteria. Further studies also show that the proposed methods reach top performance with only a simple formulation, compared to as many as 20 candidate meta-clustering functions and clustering ensemble selection functions. Finally, we demonstrated the advantages of the proposed methods in image segmentation and graph data mining.

\bibliography{zxlrefs,mywork}
\bibliographystyle{IEEEtran}
\end{document}
\begin{document}
\begin{abstract}
Using a natural representation of a $1/s$-concave function on ${\mathbb R}^d$ as a convex set in ${\mathbb R}^{d+1},$ we derive a simple formula for the integral of its $s$-polar. This leads to convexity properties of the integral of the $s$-polar function with respect to the center of polarity. In particular, we prove that the reciprocal of the integral of the polar function of a log-concave function is log-concave as a function of the center of polarity. Also, we define the Santal\'o regions for $s$-concave and log-concave functions and generalize the Santal\'o inequality for them in the case where the origin is not the Santal\'o point.
\end{abstract}
\title{Geometric representation of classes of concave functions and duality}

Log-concave and $s$-concave functions provide a natural extension of the theory of convex bodies. Starting with the functional version of the famous Blaschke--Santal\'o inequality \cite{artstein2004santalo, artstein2007characterization, KBallthesis, fradelizi2007some, lehec2009partitions}, much research has been devoted to the study of such functions in recent years. This has led to, e.g., functional analogs of the floating body \cite{LiSchuettWerner}, John ellipsoids \cite{Alonso-Gutierrez2017, ivanov2020functional} and L\"{o}wner ellipsoids \cite{Alonso-Gutierrez2020, LiSchuettWerner2019}. More examples can be found in, e.g., \cite{ColesantiLudwigMussnig2017, ColesantiLudwigMussnig, Gardner, Rotem2020}.
\par
Motivated by the setting of convex bodies, we investigate in this paper the properties of log-concave and $1/s$-concave functions related to duality transforms associated with the corresponding class of functions.
\vskip 2mm
\subsection{Background}
For $s > 0,$ a non-negative function $f$ on ${\mathbb R}^d$ is called $1/s$-concave if the function $f^{1/s}$ is concave on its support. It is well known that for a positive integer $s,$ a $1/s$-concave function on ${\mathbb R}^d$ is the marginal of the indicator function of a convex set in ${\mathbb R}^{d+s}$. A non-negative function on ${\mathbb R}^d$ is called log-concave if its logarithm is concave on its support. It is also well known that any log-concave function is the local uniform limit of certain $1/s$-concave functions, as $s$ tends to infinity, e.g., \cite[Section 2.2]{brazitikos2014geometry}. This observation has been useful in many instances in the setting of log-concave functions, e.g., \cite{artstein2004santalo, CaglarWerner2014, klartag2007marginals}, as it allows one to pass results from $1/s$-concave functions to log-concave functions.
\par
The concept of duality is a cornerstone of both geometry in general and asymptotic geometric analysis in particular. In convex analysis, the concept of duality is tightly connected with the notion of a polar set. Recall that the polar of a set $K$ in ${\mathbb R}^d$ is the set $\circset{K}$ given by
\[
\circset{K} = \{y \in {\mathbb R}^d :\; \iprod{x}{y} \leq 1 \quad \text{for all } \ x \in K \}.
\]
Natural generalizations of this definition to the setting of classes of concave functions are as follows.
For any $s > 0,$ the \emph{$s$-polar transform}, introduced in \cite{artstein2004santalo}, is defined by
\[
\slogleg{f}(y) = \inf\limits_{\{x :\; f(x) > 0\}} \frac{\truncf{1 - {\iprod{x}{y}}}^{s}} {f(x)},
\]
where $\truncf{a} = \max \{a, 0\}.$ The \emph{polar} (or \emph{log-conjugate}) of a non-negative function $f$ on ${\mathbb R}^d$ is defined by
\[
\slogleg[\infty]{f}(y) = \inf\limits_{\{x :\; f(x) > 0\}} \frac{e^{-\iprod{x}{y}}}{f(x)}.
\]
As shown in \cite[Theorem 1]{artstein2008concept}, the $s$-polar transform is essentially the only order reversing involution on the class of upper semi-continuous ${1}/{s}$-concave functions containing the origin in the interior of their support. Likewise, as shown in \cite[Corollary 12]{artstein2007characterization}, $ \slogleg[\infty]{}$ is essentially the only order reversing involution on the class of upper semi-continuous log-concave functions. See also \cite{boroczky2008characterization} for a similar characterization of the polar transformation in the setting of convex bodies.
\par
Alexandrov \cite{aleksandrov1967mean} noticed that the reciprocal of the volume of the polar of a convex body $K$ in ${\mathbb R}^d$, as a function of the center of polarity, is a $1/d$-concave function on the interior of $K.$ More formally, denote the shift of a body $K$ by a vector $z$ by
\[
\shift{K}{z} = K - z.
\]
Then Alexandrov's result says that for a convex body $K \subset {\mathbb R}^d,$ the function
\[
z \mapsto \parenth{\vol{d} \parenth{\circset{\parenth{\shift{K}{z}}}}}^{-1/d}
\]
is concave on the interior of $K,$ where $\vol{d}$ is the standard $d$-dimensional volume.
\vskip 5mm
\subsection{The main theorems}
\vskip 3mm
\noindent
For a function $f$ on ${\mathbb R}^d,$ we denote its shift by a vector $z$ by
\[
\shift{f}{z} (x)= f(x - z), \hskip 3mm x \in {\mathbb R}^d.
\]
The \emph{barycenter} of an integrable function $f$ on ${\mathbb R}^d$ is defined as
\[
\frac{\int_{{\mathbb R}^d} x f(x) \,\mathrm{d} x}{\int_{{\mathbb R}^d} f},
\]
if the quotient exists. Any log-concave function of finite integral has a barycenter \cite[Lemma 2.2.1]{brazitikos2014geometry}.
\vskip 2mm
A main result of this paper is the following generalization of Alexandrov's theorem to the setting of log-concave functions.
\vskip 2mm
\noindent
\begin{thm}\label{thm:Alexandrov_log-conc}
Let $f : {\mathbb R}^d \to [0, \infty)$ be an upper semi-continuous log-concave function with finite integral. Then the function
\[
z \mapsto \int_{{\mathbb R}^d} \slogleg[\infty]\! \parenth{\shift{f}{z}}
\]
is convex and its reciprocal is log-concave on the interior of the support of $f.$
\end{thm}
\par
\noindent
We derive this theorem via a limit argument from its analog in the $1/s$-concave setting.
\par
\noindent
\begin{thm}\label{thm:Alexandrov_s-conc}
Let $s \in (0, \infty)$ and $f : {\mathbb R}^d \to [0, \infty)$ be an upper semi-continuous, $1/s$-concave function with finite integral. Then we have
\begin{enumerate}
\item\label{ass:1_thm:Alexandrov_s-conc} The function $z \mapsto \int_{{\mathbb R}^d} \slogleg \! \parenth{\shift{f}{z}}$ is convex on the interior of the support of $f$. It attains its minimum at a point $\tilde{z}$ such that the origin is the barycenter of $\slogleg \! \parenth{\shift{f}{\tilde{z}}}$.
\item\label{ass:2_thm:Alexandrov_s-conc} The function
\[
z \mapsto \parenth{ \int_{{\mathbb R}^d} \slogleg \! \parenth{\shift{f}{z}} }^{-\frac{1}{d + s}}
\]
is concave on the interior of the support of $f$.
\end{enumerate}
\end{thm}
\vskip 2mm
\noindent
Note that Theorem \ref{thm:Alexandrov_s-conc} yields Alexandrov's result, since the indicator function of a convex body is $1/s$-concave for any positive $s$.
\par
To prove this theorem, we use a representation of a $1/s$-concave function as a convex set in ${\mathbb R}^{d+1}$ suggested in \cite{ivanov2020functional}, see also \cite{pivovarov2020stochastic}. This representation allows us to express the integral of an $s$-polar function via a simple formula, even for non-integer $s$. We elaborate on this in Section \ref{sec:slifting}.
\vskip 2mm
For any function $f$ on ${\mathbb R}^d,$ define
\begin{equation} \label{eq:logconv_approx_s_conc}
f_s (x) = \truncf{ 1 + \frac{\log f(x)}{s}}^{s}.
\end{equation}
Clearly, for a log-concave function $f,$ the function $f_s$ is $1/s$-concave and $f_s \to f$ pointwise on ${\mathbb R}^d$ as $s \to + \infty.$ Then, to carry out the limit argument, we use the following technical observation, which, surprisingly, seems to be new.
\par
\noindent
\begin{thm} \label{thm:approx_by_s-conc_dual}
Let $f \colon {\mathbb R}^d \to [0, \infty)$ be an upper semi-continuous, log-concave function with finite integral, containing the origin in the interior of its support. Denote by $A$ the set of all points in ${\mathbb R}^d$ that are not in the boundary of the support of $ \slogleg[\infty] f$. Then, as $s \to \infty$,
$$\slogleg f_s \!\parenth{\frac{x}{s}} \to \slogleg[\infty] f (x)$$
locally uniformly on $A$.
\end{thm}
\vskip 2mm
\noindent
A weaker version of this result is \cite[Lemma 3.3]{artstein2004santalo}.
\vskip 3mm
One of the most important results in convex geometry is the Blaschke--Santal\'o inequality, see, e.g., \cite{GruberBook, SchneiderBook}. It says that there is a unique $z_0$ in $\text{int} (K)$, the interior of $K$, at which the function $z \mapsto \vol{d} \parenth{\circset{\parenth{\shift{K}{z}}}}$ attains its minimum, and then
\[
\vol{d} (K) \cdot \vol{d} \parenth{\circset{\parenth{\shift{K}{z_0}}}} \leq \parenth{\vol{d} \ball{d}}^2.
\]
Equality holds if and only if $K$ is an ellipsoid. Here, and throughout the paper, $\ball{d}$ denotes the $d$-dimensional Euclidean unit ball. Meyer and Pajor \cite{meyer1990blaschke} proved a more general form of this inequality, which we state now: \\
\noindent
\emph{Let $K$ be a convex body in ${\mathbb R}^d$. Let $H$ be an affine hyperplane with half-spaces $H_{+}$ and $H_{-}$, such that $\vol{d} \left(H_{+} \cap K\right) = \lambda \vol{d} \left(K\right)$. Then there exists $z \in H \cap \text{int} (K)$ such that
\begin{equation} \label{eq:unbalanced_santalo_sets}
\vol{d} K \cdot \vol{d} \circset{\parenth{\shift{K}{z}}} \leq \frac{\parenth{\vol{d} \ball{d}}^2}{4 \lambda(1 -\lambda)}.
\end{equation}}
\par
\noindent
We note here that $\ball{d}$ is the only self-polar set in ${\mathbb R}^d$. Let $|\cdot |$ be the Euclidean norm and put
\[
\hat{h}(x) =
\begin{cases}
\left[1 -\enorm{x}^2\right]^{1/2},& \text{ if }x \in \ball{d}\\
0,&\text{ otherwise}.
\end{cases}
\]
It is not hard to see that $\hat{h}^s$ is self-$s$-polar (indeed, for $x, y \in \ball{d}$ one has $1 - \iprod{x}{y} \geq \sqrt{(1-\enorm{x}^2)(1-\enorm{y}^2)}$, with equality at $x = y$), that is,
$$\slogleg \! \parenth{\hat{h}^s} = \hat{h}^s,$$
and thus this function plays the role of the unit ball in the class of $1/s$-concave functions. Similarly, the standard Gaussian density $e^{-\enorm{x}^2/2}$ is self-polar in the class of log-concave functions. The Blaschke--Santal\'o inequality for log-concave functions was obtained in, e.g., \cite{artstein2004santalo, KBallthesis, lehec2009partitions}.
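We record one more elementary example, which keeps the analogy with convex bodies in plain sight: for the indicator function $\mathbf{1}_K$ of a convex body $K$, the infimum in the definition of the $s$-polar transform is taken over $K$, and since $t \mapsto \truncf{1-t}^{s}$ is non-increasing,
\[
\slogleg{\mathbf{1}_K}(y) = \inf\limits_{x \in K} \truncf{1 - \iprod{x}{y}}^{s} = \truncf{1 - \sup\limits_{x \in K} \iprod{x}{y}}^{s},
\]
a function that is positive precisely on the interior of the polar body $\circset{K}$ whenever the origin lies in the interior of $K$. In particular, $\mathbf{1}_K$ is $1/s$-concave for every $s > 0$, which is the observation behind the remark after Theorem \ref{thm:Alexandrov_s-conc}.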
The Blaschke--Santal\'o inequality for a more general ``duality relation'' including the $s$-polar transform was obtained in \cite{fradelizi2007some}. In \cite{lehec2009direct}, Lehec generalized \eqref{eq:unbalanced_santalo_sets} to log-concave functions.
\par
Using Lehec's approach, we prove the following extension of \eqref{eq:unbalanced_santalo_sets} to the class of $1/s$-concave functions.
\par
\noindent
\begin{thm} \label{thm:santalo_s_concave_lambda}
Let $f \colon {\mathbb R}^d \to [0, \infty)$ be a $1/s$-concave function with finite integral. Let $H$ be an affine hyperplane with half-spaces $H_{+}$ and $H_{-}$ and such that $\lambda \int_{{\mathbb R}^d} f = \int_{H_{+}} f$ for some $\lambda \in (0,1).$ Then there exists $z \in H$ such that
\begin{equation} \label{eq:s_santalo_lambda}
\int_{{\mathbb R}^d} f \int_{{\mathbb R}^d} \slogleg{\parenth{\shift{f}{z}}} \leq \frac{\volbs^2}{4 \lambda(1 - \lambda)},
\end{equation}
where
\(
\volbs = \int_{{\mathbb R}^d} \hat{h}^s.
\)
\end{thm}
\par
\noindent
Again, this theorem implies both the analogous result \eqref{eq:unbalanced_santalo_sets} for convex sets and Lehec's result \cite{lehec2009direct} for log-concave functions.
\vskip 2mm
Finally, generalizing the definition of Meyer and Werner \cite{meyer1998santalo}, we define and list several properties of the Santal\'o $s$-region of a non-negative function with bounded support. This region is essentially the set of points $z$ such that the integral of the $s$-polar transform of $\shift{f}{z}$ is bounded by some positive constant. We give formal definitions and discuss possible definitions of the Santal\'o $s$-function in Section \ref{sec:santalo_func}.
\vskip 2mm
The rest of the paper is organized as follows. In Section \ref{sec:slifting}, we define the $s$-lifting of a function, study its properties related to duality, and prove Theorem \ref{thm:Alexandrov_s-conc}. In Section \ref{sec:log_conc_as_limit}, we recall several definitions of convex analysis and prove Theorem \ref{thm:approx_by_s-conc_dual}. We also show that Theorem \ref{thm:Alexandrov_log-conc} is a consequence of Theorems \ref{thm:Alexandrov_s-conc} and \ref{thm:approx_by_s-conc_dual}. Next, we prove Theorem \ref{thm:santalo_s_concave_lambda} in Section \ref{sec:santalo_ineq_s_conc}. Finally, we introduce and study the Santal\'o $s$-region in Section \ref{sec:santalo_func}.

\subsection*{Notation}
We use $\iprod{x}{y}$ to denote the standard inner product of vectors $x$ and $y$ of ${\mathbb R}^d$. We write ${\mathbb R}^d \subset {\mathbb R}^{d+1}$ when we consider ${\mathbb R}^d$ as the subspace of ${\mathbb R}^{d+1}$ of vectors with zero last coordinate. We say that a set $K \subset {\mathbb R}^{d+1}$ is \emph{$d$-symmetric} if $K$ is symmetric with respect to ${\mathbb R}^d \subset {\mathbb R}^{d+1}$. The \emph{closure} of a set $K \subset {\mathbb R}^d$ is denoted by $\mathop{\rm cl} K$. The \emph{support function} of a convex body $K$ at $y\neq 0$ is defined by
$$
h_{K}(y) = \sup_{x \in K} \langle x, y \rangle.
$$
The \emph{convex hull} of a set $K$ is denoted by $\mathrm{conv} K.$ The \emph{support} of a non-negative function $f$ on ${\mathbb R}^d$ is the set
\[
\mathop{\rm supp} f = \{x \in {\mathbb R}^d :\; f(x) > 0\}.
\]
We will refer to an upper semi-continuous function of finite integral as a \emph{proper} function.
We will integrate over domains in ${\mathbb R}^d$ and ${\mathbb R}^{d+1}$ and use $\lambda_{d+1}$ to denote the standard Lebesgue measure on ${\mathbb R}^{d+1}$ and $\sigma$ to denote the uniform measure on the unit sphere $S^d \subset {\mathbb R}^{d+1}.$

\section{The $s$-lifting and its properties}\label{sec:slifting}
\vskip 2mm
\noindent
The notion of the $s$-lifting of a function was first introduced in \cite{ivanov2020functional}. We give its definition and prove Theorem \ref{thm:Alexandrov_s-conc}.
\vskip 2mm
Let $f \colon {\mathbb R}^d \to [0, \infty)$ be a function and $s>0$. The \emph{$s$-lifting} of $f$ is a $d$-symmetric set in ${\mathbb R}^{d+1}$ defined by
\begin{equation}\label{sLift}
\slift{f} = \left\{ (x,\xi) \in {\mathbb R}^{d+1} :\; x \in \mathop{\rm cl} \mathop{\rm supp}{f}, \; \enorm{\xi} \le \parenth{f(x)}^{1/s} \right\}.
\end{equation}
\par
\noindent
Clearly, the $s$-lifting of a $1/s$-concave function is a convex set. Thus, the $s$-lifting gives a convenient way of representing a $1/s$-concave function on ${\mathbb R}^d$ as a convex set in ${\mathbb R}^{d+1}$; in fact, it is represented as a $d$-symmetric convex set. There are other representations of a $1/s$-concave function as a convex set. One of them is mentioned in (\ref{def.Ksf}). The advantages of the representation (\ref{sLift}) are
\begin{itemize}
\item it holds for non-integer $s$;
\par
\item one can investigate the properties of the $s$-lifting instead of studying $1/s$-concave functions directly.
\end{itemize}
\vskip 2mm
\noindent
For example, the following simple lemma shows that the $s$-polar transform of a $1/s$-concave function is just the polar set of its $s$-lifting.
\vskip 2mm
\begin{lem}\label{lem:dual_s_lifting}
Let $f \colon {\mathbb R}^d \to [0, \infty).$ Then
\[
\left(\slift{f}\right)^{\circ} = \slift{\left(\slogleg f \right)}.
\]
\end{lem}
\begin{proof}
Let $y \in {\mathbb R}^d$ and $\tau \in {\mathbb R}.$ Then $(y, \tau) \in \left(\slift{f}\right)^{\circ}$ if and only if the inequality
\[
\iprod{x}{y} + t \tau \leq 1
\]
holds for all $x \in \mathop{\rm supp}{f}$ and $t \in {\mathbb R}$ such that $(x, t) \in \slift{f}.$ By the symmetry of the $s$-lifting, we conclude that $(y, \tau) \in \left(\slift{f}\right)^{\circ}$ if and only if
\[
\iprod{x}{y} + \enorm{\tau} f^{1/s}(x) \leq 1
\]
for any $x \in \mathop{\rm supp} f.$ Thus, $\iprod{x}{y} \leq 1$ and
\[
\enorm{\tau}^{s} \leq \frac{\left(1 - \iprod{x}{y}\right)^s}{f(x)}.
\]
Taking the infimum, we see that $(y, \tau) \in \left(\slift{f}\right)^{\circ}$ if and only if
\[
\enorm{\tau}^{s} \leq \inf\limits_{x \in \mathop{\rm supp}{f}} \frac{\left(1 - \iprod{x}{y}\right)^s}{f(x)} = \slogleg{f}( y).
\]
That is, $(y, \tau) \in \left(\slift{f}\right)^{\circ}$ if and only if $(y, \tau)$ belongs to the $s$-lifting of $\slogleg{f}.$
\end{proof}
\vskip 3mm
\noindent
Let ${C}\subset{\mathbb R}^{d+1}$ be a $d$-symmetric Borel set. The \emph{$s$-volume} of ${C}$ is defined by
\[
\smeasure{{C}}=\int_{{\mathbb R}^d} \left[\frac{1}{2}\length{{C}\cap\ell_x}\right]^s \,\mathrm{d} x,
\]
where $\ell_x = \{(x, \xi) :\; \xi \in {\mathbb R}\}$ denotes the line through $x \in {\mathbb R}^d$ parallel to $e_{d+1}$. By Fubini's theorem, we have
\begin{equation} \label{eq:integral_slift_repr}
\int_{{\mathbb R}^d} f = \smeasure{\slift{f}} = \frac{s}{2}\int_{\slift{f}} \abs{\iprod{e_{d+1}}{x}}^{s-1} \,\mathrm{d} \lambda_{d+1}(x).
\end{equation}
\vskip 3mm
\begin{lem}\label{lem:int_s-polar_via_suppfunc}
Fix $s > 0.$ Let $K$ be a $d$-symmetric convex body in ${\mathbb R}^{d+1}.$ The functional
\[
z \mapsto \frac{s}{2(d+s)}\int_{S^{d}} \frac{\abs{\iprod{e_{d+1}}{ u}}^{s-1}}{ \left(h_{\shift{K}{z}}(u)\right)^{d+s}} \,\mathrm{d} \spherem{u}
\]
is convex on the interior of $K.$ Moreover, if $K$ is the $s$-lifting of a $1/s$-concave function $f$ and $z \in {\mathbb R}^d,$ then the value of this functional at $z$ is equal to
\[
\int_{{\mathbb R}^d} \slogleg{\!\parenth{\shift{f}{z}}}.
\]
\end{lem}
\begin{proof}
Since $h_{\shift{K}{z}}(u) = h_{K}(u) - \iprod{z}{u}$ is positive and affine in $z$ on the interior of $K$, and since $t \mapsto t^{-(d+s)}$ is convex and decreasing on $(0, \infty)$, one sees that $\parenth{h_{\shift{K}{z}}(u)}^{-(d+s)}$ is a convex function of $z$ for a fixed $u \in S^{d}$ and any $s > 0.$ The convexity of the functional follows immediately.
\par
\noindent
Assume now that $z \in {\mathbb R}^d$ and put $K = \slift{f}$ for a $1/s$-concave function $f$. By Lemma \ref{lem:dual_s_lifting} and equation \eqref{eq:integral_slift_repr}, we get
\[
\int_{{\mathbb R}^d} \slogleg \!\parenth{\shift{f}{z}} = \smeasure{\circset{\shift{K}{z}}}= \frac{s}{2}\int_{\circset{\shift{K}{z}}} \abs{\iprod{e_{d+1}}{x}}^{s-1} \,\mathrm{d} \lambda_{d+1}(x).
\]
Using spherical coordinates gives
\[
\int_{{\mathbb R}^d} \slogleg\!\parenth{ \shift{f}{z}} = \frac{s}{2}\int_{\circset{\shift{K}{z}}} \abs{\iprod{e_{d+1}}{r u}}^{s-1} r^{d}\,\mathrm{d} r \,\mathrm{d} \spherem{u} =
\]
\[
\frac{s}{2}\int_{S^{d}} \abs{\iprod{e_{d+1}}{ u}}^{s-1} \int_{\left[0, \frac{1}{h_{\shift{K}{z}}(u)}\right]} r^{d+s-1}\,\mathrm{d} r \,\mathrm{d} \spherem{u} .
\]
That is,
\begin{equation} \label{eq:integ_stransf_rep}
\int_{{\mathbb R}^d} \slogleg \!\parenth{\shift{f}{z}} = \frac{s}{2(d+s)}\int_{S^{d}} \frac{\abs{\iprod{e_{d+1}}{ u}}^{s-1}}{ \parenth{h_{\shift{K}{z}}(u)}^{d+s}} \,\mathrm{d} \spherem{u}.
\end{equation}
This completes the proof.
\end{proof}
\vskip 3mm
\noindent
\begin{proof}[Proof of Theorem \ref{thm:Alexandrov_s-conc}]
Denote $K_z = \slift{\parenth{\shift{f}{z}}},$ $K^\circ_{z} = \slift{\slogleg \! \parenth{\shift{f}{z}}},$ and
\[
\Phi(z) = \int_{{\mathbb R}^d} \slogleg \!\parenth{\shift{f}{z}} .
\]
Lemma \ref{lem:int_s-polar_via_suppfunc} implies that $\Phi$ is convex on the interior of the support of $f.$
\par
\noindent
We now address the second assertion of the theorem. Denote $\Psi(z) = \Phi(z)^{-\frac{1}{d+s}},$ and let $\nu_z $ be a measure on $S^d$ with density given by
\[
\,\mathrm{d} \nu_z(u) = \frac{s}{2(d+s)} \frac{\abs{\iprod{e_{d+1}}{ u}}^{s-1}}{ \parenth{h_{K_z}(u)}^{d+s}} \,\mathrm{d} \spherem{u}.
\]
The directional derivative of $z \mapsto h_{K_z}(u)$ in the direction of the $i$-th standard basis vector $e_i$ is $ -u[i],$ where $a[i]$ stands for the $i$-th coordinate of $a.$ Differentiating \eqref{eq:integ_stransf_rep}, we obtain that
\[
\Phi^\prime_{e_i}(z) = (d + s) \frac{s}{2(d+s)}\int_{S^{d}} \frac{\abs{\iprod{e_{d+1}}{ u}}^{s-1} u [i]}{ \parenth{h_{K_z}(u) }^{d+s+1} } \,\mathrm{d} \spherem{u} = (d + s) \int_{S^{d}} \frac{u [i]}{ h_{K_z}(u)} \,\mathrm{d} \nu_z(u),
\]
and
\[
\Phi^{\prime\prime}_{e_i e_j}(z) = (d+s)(d+s+1) \int_{S^{d}} \frac{u [i] u [j]}{ \parenth{h_{K_z}(u)}^2} \,\mathrm{d} \nu_z(u).
\]
On the other hand,
\begin{eqnarray*}
\Psi^{\prime\prime}_{e_i e_j}(z) &=& \frac{1}{d+s} \parenth{\frac{d+s+1}{d+s}} \Phi(z)^{-\frac{1}{d+s} - 2} \Phi^\prime_{e_i}(z) \Phi^\prime_{e_j}(z) - \frac{1}{d+s} \Phi(z)^{-\frac{1}{d+s} - 1} \Phi^{\prime\prime}_{e_i e_j}(z)\\
&=& - \frac{1}{d+s} \Phi(z)^{-\frac{1}{d+s} - 2} \parenth{ \Phi(z) \Phi^{\prime\prime}_{e_i e_j}(z) - \frac{d+s+1}{d+s} \Phi^\prime_{e_i}(z) \Phi^\prime_{e_j}(z) }.
\end{eqnarray*}
Therefore, $\Psi$ is concave on its support if and only if the matrix $A_z$ given by
\[
A_z[i,j] = \Phi(z) \Phi^{\prime\prime}_{e_i e_j}(z) - \frac{d+s+1}{d+s} \Phi^\prime_{e_i}(z) \Phi^\prime_{e_j}(z)
\]
is positive semi-definite at every point of the interior of the support of $\Psi.$ Using the formulas for the partial derivatives of $\Phi$ and \eqref{eq:integ_stransf_rep}, we get
\begin{eqnarray*}
&&\frac{A_z[i,j]}{(d+s)(d+s+1)} = \\
&& \int_{S^{d}} \,\mathrm{d} \nu_z(u) \cdot \int_{S^{d}} \frac{u [i] u [j]}{ \parenth{h_{K_z}(u)}^2} \,\mathrm{d} \nu_z(u) - \int_{S^{d}} \frac{u [i]}{ h_{K_z}(u)} \,\mathrm{d} \nu_z(u) \int_{S^{d}} \frac{u [j]}{ h_{K_z}(u)} \,\mathrm{d} \nu_z(u).
\end{eqnarray*}
That is, up to a positive factor, $A_z$ is a covariance matrix, and hence it is positive semi-definite.
\par
\noindent
Finally, differentiating \eqref{eq:integ_stransf_rep} again, we get that the directional derivative $\Phi^\prime_{v}$ of $\Phi$ in a direction $v \in {\mathbb R}^d$ is
\[
\Phi^\prime_{v} (z) = \frac{s}{2}\int_{S^{d}} \frac{\abs{\iprod{e_{d+1}}{ u}}^{s-1} }{ \parenth{h_{K_z}(u) }^{d+s+1} } \iprod{v}{u}\,\mathrm{d} \spherem{u} .
\]
Reversing the chain of identities in the proof of Lemma \ref{lem:int_s-polar_via_suppfunc}, one gets
\[
\frac{\Phi^\prime_{v} (z)}{d+s+1} = \frac{s}{2} \int_{S^{d}} \abs{\iprod{e_{d+1}}{ u}}^{s-1} \iprod{v}{u} \int_{\left[0, \frac{1}{h_{K_z}(u)}\right]} r^{d+s} \,\mathrm{d} r \,\mathrm{d} \spherem{u} =
\]
\[
\frac{s}{2} \int_{K^\circ_{z}} \abs{\iprod{e_{d+1}}{r u}}^{s-1} \iprod{v}{r u} r^{d}\,\mathrm{d} r \,\mathrm{d} \spherem{u} = \frac{s}{2} \int_{K^\circ_{z}} \abs{\iprod{e_{d+1}}{x}}^{s-1} \iprod{v}{x} \,\mathrm{d} \lambda_{d+1}(x).
\]
Since $v \in {\mathbb R}^d$ and by the definition of $K^\circ_{z},$ the latter is
\[
\int_{{\mathbb R}^d} \iprod{v}{y} \slogleg \! \parenth{\shift{f}{z}} (y) \,\mathrm{d} y.
\]
By convexity, all directional derivatives of $\Phi$ vanish at the argmin. Consequently, the above calculations show that at the argmin $\tilde{z}$, the identity
\[
\int_{{\mathbb R}^d} {y} \slogleg \! \parenth{\shift{f}{\tilde{z}}} (y) \,\mathrm{d} y =0
\]
holds. Thus, the origin is the barycenter of $\slogleg \! \parenth{\shift{f}{\tilde{z}}}.$
\vskip 2mm
\noindent
This finishes the proof of Theorem \ref{thm:Alexandrov_s-conc}.
\end{proof}
\vskip 10mm
\section{Log-concave functions}
\label{sec:log_conc_as_limit}
In this section, we study the properties of log-concave functions related to the polar transform and prove Theorems \ref{thm:approx_by_s-conc_dual} and \ref{thm:Alexandrov_log-conc}.
\vskip 2mm
\noindent
\subsection{Consequences of Theorem \ref{thm:approx_by_s-conc_dual}}
\par
\noindent
Before proving Theorem \ref{thm:approx_by_s-conc_dual}, we derive several results from it, including Theorem \ref{thm:Alexandrov_log-conc}.
\par
\noindent
\begin{proof}[Proof of Theorem \ref{thm:Alexandrov_log-conc}]
Recall that
\begin{equation} \label{eq:s-to-zero_limit}
\lim\limits_{s \to + \infty} \parenth{\lambda a^{\frac{1}{d+s}} + (1- \lambda) b^{\frac{1}{d+s}}}^{d+s} = a^{\lambda}b^{1-\lambda}
\end{equation}
for any positive real numbers $a$ and $b$.
\par
\noindent
Now let $f$ be a proper log-concave function containing the origin in the interior of its support. Then, to prove Theorem \ref{thm:Alexandrov_log-conc}, it suffices to show that
\begin{equation} \label{eq:log-conv_approx_conv_integ}
\int_{{\mathbb R}^d} \slogleg[\infty] f = \lim\limits_{s \to + \infty} s^d \int_{{\mathbb R}^d} \slogleg f_s ,
\end{equation}
where $f_s$ is as in (\ref{eq:logconv_approx_s_conc}). Indeed, the convexity of $z \mapsto \int_{{\mathbb R}^d} \slogleg[\infty]\! \parenth{\shift{f}{z}}$ follows immediately from assertion (\ref{ass:1_thm:Alexandrov_s-conc}) of Theorem \ref{thm:Alexandrov_s-conc} and \eqref{eq:log-conv_approx_conv_integ}. The log-concavity of
\[
z \mapsto \frac{1}{\int_{{\mathbb R}^d} \slogleg[\infty]\! \parenth{\shift{f}{z}}}
\]
follows from assertion (\ref{ass:2_thm:Alexandrov_s-conc}) of Theorem \ref{thm:Alexandrov_s-conc} and identity \eqref{eq:s-to-zero_limit}.
\par
\noindent
Identity (\ref{eq:log-conv_approx_conv_integ}) follows from Theorem \ref{thm:approx_by_s-conc_dual} and \cite[Lemma 3.2]{artstein2004santalo}. We use the following simplified version of this lemma.
\vskip 2mm
\noindent
\begin{lem}\cite{artstein2004santalo} \label{lem:convergence_log-conc}
Let $\{f_n\}_{n=1}^{\infty}$ be a sequence of log-concave functions converging to a log-concave function $f$ of finite integral on a dense subset $A \subset {\mathbb R}^d.$ Then $\int_{{\mathbb R}^d} f_n \to \int_{{\mathbb R}^d} f.$
\end{lem}
\par
\noindent
This completes the proof of Theorem \ref{thm:Alexandrov_log-conc}.
\end{proof}
\vskip 3mm
\noindent
The next corollary is another consequence of Theorem \ref{thm:approx_by_s-conc_dual}.
\vskip 2mm
\noindent
\begin{cor}\label{cor:approx_mahler_int}
Let $f : {\mathbb R}^d \to [0, \infty)$ be a proper log-concave function containing the origin in the interior of its support. Then
\[
\int f \cdot \int \slogleg[\infty] f = \lim\limits_{s \to + \infty} s^d \int f_s \cdot \int \slogleg f_s.
\]
\end{cor}
\vskip 5mm
\subsection{Proof of Theorem \ref{thm:approx_by_s-conc_dual}}
To prove Theorem \ref{thm:approx_by_s-conc_dual}, we recall several definitions and facts of convex analysis, which can be found in, e.g., \cite{rockafellar1970convex}.
\vskip 2mm
\noindent
We start with the definition of the classical \emph{convex conjugate transform} or \emph{Legendre transform} $\mathcal{L}$ defined for functions $\varphi: {\mathbb R}^d \to {\mathbb R}\cup \{+\infty\}$ by
\[
(\slogleg[]{\varphi})(y) = \sup\limits_{x \in {\mathbb R}^d} \{\iprod{x}{y} - \varphi(x)\}.
\]
Thus, for $f = e^{-\psi} : {\mathbb R}^d \to [0, \infty)$,
\[
\slogleg[\infty]{f}(y) = e^{- (\slogleg[] \psi)(y)}.
\]
\par
\noindent
A vector $p$ is said to be a \emph{subgradient} of a convex function $\psi$ on ${\mathbb R}^d$ at the point $x$ if
\[
\psi(y) \geq \psi(x) + \iprod{p}{y-x}
\]
for all $y \in {\mathbb R}^d$. The set of all subgradients of $\psi$ at $x$ is called the \emph{subdifferential} of $\psi$ at $x$ and is denoted by $\partial \psi(x)$.
\par
\noindent
The \emph{effective domain} $\dom \psi$ of a convex function $\psi$ on ${\mathbb R}^d$ is the set
\[
\dom \psi = \left\{x :\; \psi(x) < + \infty\right\}.
\]
The \emph{epigraph} of a convex function $\psi$ on ${\mathbb R}^d$ is the set
\[
\left\{(x, \xi) :\; x\in \dom \psi, \; \xi \in {\mathbb R}, \; \xi \geq \psi(x) \right\}.
\]
\vskip 3mm
\noindent
In the remainder of the paper we work with convex functions that have non-empty effective domain and closed epigraph.
\vskip 2mm
\noindent
The following statement is a direct consequence of the definition of the subdifferential.
\vskip 2mm
\noindent
\begin{lem}[Geometric meaning of the subdifferential] \label{lem:geom_meaning_of_subdif}
Let $\varphi: {\mathbb R}^d \to {\mathbb R} \cup \{+\infty\}$ be a lower semi-continuous, convex function with non-empty effective domain. Let $x \in \dom \varphi$. If $p \in {\mathbb R}^d$ and a negative number $\xi$ are such that
\[
\iprod{(p, \xi)}{(y, \varphi(y))} \leq \iprod{(p, \xi)}{(x, \varphi(x))}
\]
for all $y \in \dom \varphi,$ then there are a subgradient $q$ of $\varphi$ at $x$ and a positive constant $\alpha$ such that
\[
(p, \xi) = \alpha (q,-1).
\]
\end{lem}
\vskip 2mm
\noindent
\begin{rem}
The assertion of Lemma \ref{lem:geom_meaning_of_subdif} can be rephrased as: $(p, \xi)$ belongs to the \emph{normal cone} to the epigraph of $\varphi$ at the point $(x, \varphi(x))$.
\end{rem}
\vskip 3mm
\noindent
The following facts on the subgradient can be found in, e.g., \cite[Chapter 23]{rockafellar1970convex}. See also \cite{clarke1990optimization}.
\vskip 2mm
\noindent
Let $\psi: {\mathbb R}^d \to {\mathbb R} \cup \{+\infty\}$ be a lower semi-continuous convex function with non-empty effective domain, and let $z$ be in the interior of $\dom \slogleg[]{\psi}$. Let $\varepsilon >0$ be such that $z + \varepsilon \, \ball{d}_2 \subset \dom{\slogleg[]{\psi}}$. Then $\slogleg[]{\psi}$ is Lipschitz on $z + \varepsilon\, \ball{d}_2$ and, denoting the Lipschitz constant by $C$, the following statements hold.
\begin{enumerate}
\item Fact 1: $\partial \slogleg[] \psi (z)$ is a non-empty compact convex set and $\partial \slogleg[] \psi (z) \subset C \ball{d}_2$.
\vskip 2mm
\item Fact 2: Let $q \in \partial \slogleg[]\psi(z).$ Then $z \in \partial \psi(q),$ and
\begin{equation} \label{eq:fenchel_moro_lem}
\psi(q) + \slogleg[]\psi(z) = \iprod{q}{z}.
\end{equation}
\vskip 2mm
\item Fact 3: Let $q \in \partial \slogleg[]\psi(z).$ Then, by the previous assertion,
\begin{equation} \label{eq:dual_subgrad_norm}
\enorm{\psi(q)} \leq \enorm{\slogleg[] \psi(z)} + C\enorm{z}.
\end{equation}
\vskip 2mm
\item Fact 4: Let $q \in \partial \slogleg[]\psi(z)$. Then for all $p \in {\mathbb R}^d$,
\begin{equation} \label{eq:conjugate_func_ineq}
\iprod{z}{p} - \psi(p) \leq \slogleg[]\psi(z).
\end{equation}
\end{enumerate}
\vskip 3mm
\noindent
To prove Theorem \ref{thm:approx_by_s-conc_dual}, we will also need the following lemmas.
\vskip 2mm
\begin{lem}\label{lem:point_in_supp_L_infty}
Let $f \colon {\mathbb R}^d \to [0, \infty)$ be a proper log-concave function. Let $z$ be a point in the interior of $\mathop{\rm supp} \slogleg[\infty] f.$ Let $C$ be the Lipschitz constant of the convex function $\slogleg[]\psi = -\log \slogleg[\infty] f$ on some open neighborhood of $z.$ Then for any $s > \enorm{\slogleg[] \psi(z)} + C\enorm{z},$
\begin{equation} \label{eq:spolar_sfunc_value}
\slogleg f_s \parenth{\frac{z}{\slogleg[]\psi(z) + s}} = \parenth{{1+ \frac{\slogleg[]\psi(z)}{s}}}^{-s}.
\end{equation}
\end{lem}
\vskip 2mm
\noindent
\begin{proof}
Denote $\psi = - \log f$. By the above Fact 1, there is $q \in \partial \slogleg[]\psi(z)$.
By \eqref{eq:dual_subgrad_norm},
\[
f_s^{1/s}(q) = \truncf{1 - \frac{\psi(q)}{s}} = 1 - \frac{\psi(q)}{s} > 0.
\]
Using \eqref{eq:conjugate_func_ineq} and \eqref{eq:fenchel_moro_lem}, one gets for all $p \in \mathop{\rm supp}{f_s}$
\[
\iprod{ \parenth{p, {f_s(p)^{1/s}}}}{\parenth{\frac{z}{s},1}} = 1+ \frac{\iprod{z}{p} - \psi(p)}{s} \leq 1+ \frac{\slogleg[]\psi(z)}{s}.
\]
Since equality in the rightmost inequality is achieved at $p =q$ and since $1+ \frac{\slogleg[]\psi(z)}{s} > 0$ by the choice of $s$, we conclude that the vector
\[
\frac{1}{1+ \frac{\slogleg[]\psi(z)}{s}} \parenth{\frac{z}{s},1} = \parenth{\frac{z}{\slogleg[]\psi(z) + s}, \frac{1}{1+ \frac{\slogleg[]\psi(z)}{s}}}
\]
belongs to the boundary of $\parenth{\slift{f_s}}^\circ$. Thus, \eqref{eq:spolar_sfunc_value} follows from Lemma \ref{lem:dual_s_lifting}.
\end{proof}
\vskip 3mm
\noindent
\begin{lem}\label{lem:s_dual_point_representation}
Let $f=e^{-\psi} \colon {\mathbb R}^d \to [0, \infty)$ be a proper log-concave function such that the origin is in the interior of $ \mathop{\rm supp} f_s.$ Denote $\slogleg[]\psi = - \log\slogleg[\infty] f.$ If $z$ is in the interior of $\mathop{\rm supp} \slogleg{f_s},$ then there exists $z_s$ in the support of $\slogleg[\infty]{f}$ such that
\begin{equation} \label{eq:spolar_sfunc_value2}
z= \frac{z_s }{\slogleg[]\psi(z_s) + s} \quad \text{and} \quad \slogleg f_s \parenth{{z}} = \parenth{{1+ \frac{\slogleg[]\psi(z_s)}{s}}}^{-s}.
\end{equation}
\end{lem}
\vskip 2mm
\noindent
\begin{proof}
Let $\parenth{y, f_s^{1/s}(y)}$ be a dual point to the convex set $\slift{\slogleg f_s}$ at $\parenth{z, \parenth{\slogleg{f_s}}^{1/s} \parenth{{z}}},$ that is,
\begin{equation} \label{eq:s_dual_point_rep1}
\iprod{\parenth{y, f_s^{1/s}(y)}}{\parenth{\tilde{z}, \parenth{\slogleg{f_s}}^{1/s} \parenth{{\tilde{z}}}}} \leq \iprod{\parenth{y, f_s^{1/s}(y)}}{\parenth{z, \parenth{\slogleg{f_s}}^{1/s} \parenth{z}}} = 1
\end{equation}
for all $\tilde{z} \in \mathop{\rm supp} \slogleg f_s$ and
\begin{equation} \label{eq:s_dual_point_rep2}
\iprod{\parenth{\tilde{y}, f_s^{1/s}(\tilde{y})}} {\parenth{{z}, \parenth{\slogleg{f_s}}^{1/s} \parenth{z}}} \leq \iprod{\parenth{y, f_s^{1/s}(y)}}{\parenth{{z}, \parenth{\slogleg{f_s}}^{1/s} \parenth{{z}}}} = 1
\end{equation}
for all $\tilde{y} \in \mathop{\rm supp} f_s$.
\par
\noindent
Since $z$ is in the interior of $\mathop{\rm supp} \slogleg f_s, $ inequality \eqref{eq:s_dual_point_rep1} implies that $f_s(y) > 0$. Note that $y$ might belong to the boundary of the support of $f_s$. Consider the convex function
\[
\varphi(x) =
\begin{cases}
- f_s^{1/s}(x), & x \in \mathop{\rm cl} \mathop{\rm supp} f_s \\
+\infty, & x \notin \mathop{\rm cl} \mathop{\rm supp} f_s.
\end{cases}
\]
Using inequality \eqref{eq:s_dual_point_rep2} in Lemma \ref{lem:geom_meaning_of_subdif}, we conclude that
\[
\parenth{z, \parenth{\slogleg{f_s}}^{1/s} \parenth{z}} = \alpha \parenth{\frac{z_s}{s},1},
\]
where $\alpha > 0$ and $\frac{z_s}{s}$ belongs to the subdifferential of the convex function $-f_{s}^{1/s}$ at $y$. Since $f_s^{1/s}(y) > 0$ and $f_{s}$ is lower semi-continuous, there is an open neighborhood $U$ of $y$ such that for all $\tilde{y}$ that are in $U$ and in the boundary of $\mathop{\rm cl} \mathop{\rm supp} f_{s}$ we have $f_{s}(\tilde{y})> 0$.
Moreover, since $\psi(y) < s$ and $\psi$ is upper semi-continuous, one has that $\psi(\tilde{y}) = + \infty$ for all $\tilde{y} \in \parenth{y+ \varepsilon\, \ball{d}_2} \cap \parenth{{\mathbb R}^d \setminus \mathop{\rm supp} f_{s}}$ for some $\varepsilon > 0.$ That is, the function $\varphi$ coincides with $ -1 + \frac{\psi(x)}{s}$ on some open neighborhood of $y.$ Hence, $z_s \in \partial \psi(y).$ Therefore, by \eqref{eq:fenchel_moro_lem},
\[
\iprod{\parenth{y, f_s^{1/s}(y)}}{\parenth{\frac{z_s}{s},1} } = 1+ \frac{\iprod{z_s}{y} - \psi(y)}{s} = 1+ \frac{\slogleg[]\psi(z_s)}{s} < + \infty,
\]
and the identities \eqref{eq:spolar_sfunc_value2} hold.
\end{proof}
\vskip 3mm
The following corollary is an immediate consequence of Lemma \ref{lem:s_dual_point_representation}.
\vskip 2mm
\noindent
\begin{cor} \label{cor:support_of_slogleg}
Let $f \colon {\mathbb R}^d \to [0, \infty)$ be a proper log-concave function containing the origin in its support. Denote $\slogleg[]\psi = -\log \slogleg[\infty] f$ and $M = \min_{{\mathbb R}^d} \slogleg[]\psi.$ Then for any $s$ such that the origin is in the interior of $\mathop{\rm supp} f_s$ and $s + M > 0,$ one has
\[
\mathop{\rm supp} \slogleg f_s \subset \frac{1}{M+s} \mathop{\rm cl} \mathop{\rm supp} \slogleg[\infty] f.
\]
\end{cor}
\vskip 3mm
We are now ready for the proof of Theorem \ref{thm:approx_by_s-conc_dual}.
\vskip 2mm
\noindent
\begin{proof}[Proof of Theorem \ref{thm:approx_by_s-conc_dual}]
We put $\psi = - \log f$. Then $\slogleg[] \psi = - \log \slogleg[\infty] f$.
\vskip 2mm
\noindent
Corollary \ref{cor:support_of_slogleg} shows that it suffices to consider points in the interior of the support of $\slogleg[\infty] f$.
\par
Let therefore $x$ be a point in the interior of the support of $\slogleg[\infty] f$. That is, $x$ is an interior point of $\dom \slogleg[]\psi$. Let $ \varepsilon_1$ be such that $x + \varepsilon_1 \ball{d}_2 \subset \mathop{\rm supp} \slogleg[\infty] f$. As noted above, $\slogleg[]\psi$ is then Lipschitz on some open neighborhood of $x$ contained in $x + \varepsilon_1 \ball{d}_2$, with Lipschitz constant $C$. By Lemma \ref{lem:point_in_supp_L_infty}, we have for all sufficiently large $s$,
\[
\left\{ \parenth{\frac{z}{\slogleg[]\psi(z) + s}} :\; z \in x + \varepsilon_1 \ball{d}_2 \right\} \subset \mathop{\rm supp} \slogleg f_s.
\]
\par
\noindent
Denote by $\ell$ the line passing through the origin and $x$. If $x$ and the origin coincide, then $\ell$ is an arbitrary linear one-dimensional subspace of ${\mathbb R}^d$. By continuity, for any $0 < \varepsilon_2 <1$ there exists $s_0$ such that
\[
x \in \left\{\frac{s\, z}{\slogleg[]\psi(z) + s} :\; z \in \ell, \; \enorm{z-x} < \varepsilon_2 \right\}
\]
for all $s > s_0$. It follows that for all sufficiently large $s$ there exists $z_s$ satisfying
\begin{enumerate}
\item $\frac{s\, z_s}{\slogleg[]\psi(z_s) + s} = x$
\vskip 2mm
\item $z_s \to x$ as $s \to \infty.$
\end{enumerate}
This, continuity, and identity \eqref{eq:spolar_sfunc_value} yield
\[
\lim \limits_{s \to \infty} \slogleg f_s \parenth{ \frac{x}{s}} = \lim \limits_{s \to \infty} \slogleg f_s \parenth{\frac{z_s}{\slogleg[]\psi(z_s) + s}} = \lim \limits_{s \to +\infty} \parenth{{1+ \frac{\slogleg[]\psi(z_s)}{s}}}^{-s} = \slogleg[\infty] f \parenth{x}.
\]
Since all the functions considered are continuous at any point of the interior of the support, we conclude that $\slogleg f_s \!\parenth{\frac{\cdot}{s}}$ converges locally uniformly to $\slogleg[\infty] f$ on $A$.
\vskip 2mm
\noindent
This finishes the proof of Theorem \ref{thm:approx_by_s-conc_dual}.
\end{proof} \vskip 10mm \section{A Blaschke--Santal\'o inequality} \lambdabel{sec:santalo_ineq_s_conc} \vskip 2mm \noindent In this section, we prove the following theorem, which implies Theorem \ref{thm:santalo_s_concave_lambda}. Recall also that \begin{equation}\lambdabel{kappa} \volbs = \int_{\ball{d}_2} \left[1 -\enorm{x}^2\right]^{s/2}\, \,\mathrm{d} x. \end{equation} \vskip 2mm \noindent \begin{thm} \lambdabel{thm:santalo_s_general} Let $f \mathop{\rm co}lon {\mathbb R}^d \to [0, \infty)$ be $1/s$-concave function with finite integral. Let $H$ be an affine hyperplane with half-spaces $H_{+}$ and $H_{-}$ and such that $\lambdambda \int_{{\mathbb R}^d} f = \int_{H_{+}} f$ for some $\lambdambda \in (0,1)$. Then there exists $z \in H$ such that for any Borel function $g \mathop{\rm co}lon {\mathbb R}^d \to [0, \infty)$ satisfying \[ \forall x, y \in {\mathbb R}^d, \quad f(x + z) g(y) \leq \truncf{1 - {\iprod{x}{y}}}^s \] the inequality \begin{equation} \lambdabel{eq:s_santalo_gen} \int_{{\mathbb R}^d} f \int_{{\mathbb R}^d} g \leq \frac{\volbs^2}{4 \lambdambda (1-\lambdambda)}. \end{equation} holds. \end{thm} \vskip 3mm \noindent The idea of our proof mostly follows Lehec's arguments \cite{lehec2009direct} -- namely, we prove it by induction on the dimension. However, in our setting the one dimensional case requires a more subtle analysis of the Lebesgue level sets of the functions than in the case of log-concave functions. \vskip 5mm \subsection{The one-dimensional case} \begin{lem}\lambdabel{lem:s_santalo_onedim} Let $\varphi_1 \mathop{\rm co}lon [0, \infty) \to [0, \infty)$ and $\varphi_2 \mathop{\rm co}lon [0, \infty) \to [0, \infty)$ be two Borel functions satisfying the duality relation \begin{equation} \lambdabel{eq:onedim_santalo_lemma} \text{for all} \;\; t_1, t_2 \in [0, \infty), \quad \varphi_1( t_1) \varphi_2( t_2) \leq \truncf{1 - t_1 t_2}^s . \end{equation} Then \begin{equation} \lambdabel{eq:s_santalo_lemma_onedim} \int_{[0, \infty)} \varphi_1 \int_{[0, \infty)} \varphi_2 \leq \left(\frac{:\;hing\kappa_{2}}{2}\right)^2. \end{equation} \end{lem} \begin{proof} For any Borel function $\varphi \mathop{\rm co}lon [0, \infty) \to [0, \infty)$ such that $\int_{{\mathbb R}^d} \varphi\, \,\mathrm{d} x < \infty$, we define its $s$-level transform $\sltrans\Psi_{\varphi} \mathop{\rm co}lon {\mathbb R} \to [0, \infty)$ by \[ \sltrans\Psi_{\varphi} (\alpha) = s e^{s \alpha} \vol{1} \left\{\tau \in {\mathbb R} :\; \left(\varphi(e^{\tau}) e^{ \tau}\right)^{1/s} \geq e^{\alpha} \right\}. \] The $s$-level transform allows us to write the integral of a function in a convenient way. \vskip 2mm \noindent \begin{claim}\lambdabel{claim:fubini_onedim} For any Borel function $\varphi \mathop{\rm co}lon [0, \infty) \to [0, \infty)$ such that $\int_{{\mathbb R}} \varphi\, \,\mathrm{d} x < \infty$, one has that \[ \int_{{\mathbb R}} \sltrans\Psi_{\varphi}= \int_{[0, \infty)} \varphi. \] \end{claim} \begin{proof} By Fubini's Theorem, we have for any Borel function $f \mathop{\rm co}lon {\mathbb R} \to [0, \infty)$, \begin{equation} \lambdabel{eq:fubini_s_power} \int_{{\mathbb R}} f = \int_{{\mathbb R}} \int\limits_{0}^{f^{1/s}(t)} s \tau^{s-1} \,\mathrm{d} \tau \,\mathrm{d} t = \int_{[0, \infty)} s \tau^{s-1} \vol{1} \left[f^{1/s} \geq \tau\right] \,\mathrm{d} \tau. 
\end{equation} We put $g(\beta) = \vol{1} \left\{\tau \in {\mathbb R} :\; \varphi^{1/s}(e^{\tau}) e^{ \tau /s } \geq \beta \right\}.$ Using \eqref{eq:fubini_s_power} with $f(\tau) = \varphi(e^{\tau}) e^{\tau},$ we obtain that \[ \int_{{\mathbb R}} \sltrans\Psi_{\varphi} = \int_{{\mathbb R}} s e^{s\alpha} g(e^{\alpha}) \,\mathrm{d} \alpha \stackrel{\beta = e^{\alpha}}{=} \int_{[0, \infty)} s \beta^{s-1} g(\beta) \,\mathrm{d} \beta = \int_{{\mathbb R}} \varphi(e^t) e^{t} \,\mathrm{d} t = \int_{[0, \infty)} \varphi. \] \end{proof} \vskip 3mm \noindent Define $h : [0, \infty) \to [0,\infty)$ by $h(t) = \sqrt{\truncf{1 - t^2}}$ and, for given Borel functions $\varphi_1, \varphi_2 \mathop{\rm co}lon [0, \infty) \to [0, \infty)$, we denote \[ \Phi_1 (\alpha) = \frac{\sltrans\Psi_{\varphi_1}(\alpha)}{s e^{s \alpha}}, \quad \Phi_2 (\alpha) = \frac{\sltrans\Psi_{\varphi_2}(\alpha)}{s e^{s \alpha}}, \quad \text{and} \quad H(\alpha) = \frac{\sltrans\Psi_{h^s}(\alpha)}{s e^{s \alpha}}. \] In other words, \[ \Phi_i (\alpha) = \vol{1} \left\{\tau :\; \varphi_i^{1/s}(e^{\tau}) e^{ \tau/s} \geq e^{\alpha} \right\},\ i =1,2 \] and \[ H (\alpha) = \vol{1} \left\{\tau :\; \sqrt{1 - e^{2\tau}} \cdot e^{ \tau/s}\geq e^{\alpha} \right\}. \] \vskip 2mm \noindent \begin{claim}\lambdabel{claim:onedim_level_transform_inclusion} For all $\alpha_1, \alpha_2 \in {\mathbb R}$, for all Borel functions $\varphi_1, \varphi_2 \mathop{\rm co}lon [0, \infty) \to [0, \infty)$ for which (\ref{eq:onedim_santalo_lemma}) holds, we have \[ H \! \left(\frac{\alpha_1 + \alpha_2}{2} \right) \geq \sqrt{\Phi_1(\alpha_1)\Phi_2(\alpha_2)}. \] \end{claim} \begin{proof} Assume that $\tau_i \in \left\{\tau :\; \varphi_i^{1/s}(e^{\tau}) e^{ \tau/s} \geq e^{\alpha_i} \right\}$, for $i=1, 2$. Using inequality \eqref{eq:onedim_santalo_lemma} with $t_i = e^{\tau_i}, $ one has \[ \truncf{1 - e^{\tau_1+\tau_2}} \geq e^{\alpha_1 + \alpha_2} e^{-\frac{(\tau_1 + \tau_2)}{s}}. \] This implies that $\frac{\tau_1 +\tau_2}{2}$ belongs to the set $\left\{\tau :\; \sqrt{1 - e^{2\tau}} \cdot e^{ \tau/s}\geq e^{\frac{\alpha_1 + \alpha_2}{2}} \right\}.$ Thus, the desired inequality follows from the multiplicative form of the Brunn--Minkowski inequality, e.g., \cite{Gardner, SchneiderBook}, which finishes the proof of this claim. \end{proof} \vskip 2mm \noindent We continue the proof of Lemma \ref{lem:s_santalo_onedim}. By Claim \ref{claim:onedim_level_transform_inclusion}, we have that \[ \sltrans\Psi_{h^s} \! \left(\frac{\alpha_1 + \alpha_2}{2} \right) \geq \sqrt{\sltrans\Psi_{\varphi_1}(\alpha_1)\sltrans\Psi_{\varphi_2}(\alpha_2)}. \] Therefore, the Pr\'ekopa--Leindler inequality \cite{prekopa1971logarithmic}, see also \cite{Gardner, SchneiderBook}, yields that \[ \left(\int_{{\mathbb R}} \sltrans\Psi_{h^s} \right)^2 \geq \int_{{\mathbb R}} \sltrans\Psi_{\varphi_1} \int_{{\mathbb R}} \sltrans\Psi_{\varphi_2}. \] By Claim \ref{claim:fubini_onedim} and (\ref{kappa}), this is exactly inequality \eqref{eq:s_santalo_lemma_onedim}. This completes the proof of Lemma \ref{lem:s_santalo_onedim}. \end{proof} \vskip 3mm \noindent \begin{cor}\lambdabel{cor:onedim_satalo_s_lambda} Let $f \mathop{\rm co}lon {\mathbb R} \to [0, \infty)$ be a Borel function such that $\int_{{\mathbb R}} f \, \,\mathrm{d} t < \infty,$ and let $\int_{[0, \infty)} f = \lambdambda \int_{{\mathbb R}} f $ for some $\lambdambda \in (0,1)$. 
Then for any Borel function $g \mathop{\rm co}lon {\mathbb R} \to [0, \infty)$ satisfying \[ \forall t_1, t_2 \in {\mathbb R}, \quad f(t_1) g(t_2) \leq \truncf{1 - {\iprod{t_1}{t_2}}}^s \] the inequality \begin{equation} \lambdabel{eq:s_santalo_gen_onedim} \int_{{\mathbb R}} f \int_{{\mathbb R}} g \leq \frac{:\;hing\kappa^2_2}{{4 \lambdambda (1-\lambdambda)}} \end{equation} holds. \end{cor} \begin{proof} We use Lemma \ref{lem:s_santalo_onedim} twice. First with $\varphi_1(t) = f(t)$ and $\varphi_2(t) = g(t)$ and then with $\varphi_1(t) = f(- t)$ and $\varphi_2(t) = g(- t)$. In both cases, the condition $$ \varphi_1( t_1)\, \varphi(t_2) \leq \truncf{1 - t_1 t_2}^s $$ for all $t_1, t_2 \in [0, \infty)$ is satisfied. Therefore, \[ \left(\frac{:\;hing\kappa_2}{2}\right)^2 \geq \int_{[0,\infty)} f(t) \,\mathrm{d} t \int_{[0,\infty)}g(t) \,\mathrm{d} t \] and \[ \left(\frac{:\;hing\kappa_2}{2}\right)^2 \geq \int_{(-\infty,0]} f(t) \,\mathrm{d} t \int_{(-\infty,0]} g( t) \,\mathrm{d} t . \] By the assumption $\int_{[0, \infty)} f = \lambdambda \int_{{\mathbb R}} f$, we get \[ \parenth{\frac{:\;hing\kappa_2}{2}}^2 \geq {\lambdambda} \int_{{\mathbb R}} f( t) \,\mathrm{d} t \int_{[0,\infty)} g( t) \,\mathrm{d} t \] and \[ \left(\frac{:\;hing\kappa_2}{2}\right)^2 \geq \parenth{1 - \lambdambda} \int_{{\mathbb R}} f( t) \,\mathrm{d} t \int_{(-\infty,0]} g( t) \,\mathrm{d} t. \] Summing these inequalities, we obtain \[ \left(\frac{:\;hing\kappa_2}{2}\right)^2 \frac{1}{ \lambdambda(1 - \lambdambda)} = \left(\frac{:\;hing\kappa_2}{2}\right)^2 \left(\frac{1}{ \lambdambda} + \frac{1}{(1 - \lambdambda)}\right) \geq \int_{{\mathbb R}} f \, \int_{{\mathbb R}} g . \] \end{proof} \vskip 5mm \noindent \subsection{Induction on the dimension} \vskip 3mm \noindent \begin{proof}[Proof of Theorem \ref{thm:santalo_s_general}] We prove the theorem by induction on the dimension. The one-dimensional case was proved in Corollary \ref{cor:onedim_satalo_s_lambda}. \par \noindent Assume now that the theorem is true in dimension $d-1$. Let $b_{+} = \int_{H_{+}} xf(x) \,\mathrm{d} x$ and $b_{-}=\int_{H_{-}} xf(x) \,\mathrm{d} x,$ that is, $b_{\pm}$ is a scalar multiple of the barycenter of the restriction of the measure with density $f$ on $H_{\pm}$, respectively. Since $f$ is not concentrated on $H$, the point $b_{+}$ belongs to the interior of $H_{+}$, and similarly for $b_{-}.$ Hence the line passing through $b_{+}$ and $b_{-}$ intersects $H$ at one point, which we call $z$. We will show that $z$ satisfies \eqref{eq:s_santalo_gen_onedim}, for all functions $g$ that satisfy the assumption. \par \noindent Clearly, replacing $f$ by $\shift{f}{-z}$ and $H$ by $H - z$, we can assume that $z=0$. Let $g$ be such that \begin{equation} \lambdabel{eq:reduality_lambda_santalo} \forall x,y\in {\mathbb R}^d, \qquad f(x) g(y) \leq \truncf{1- \iprod{x}{y}}^s . \end{equation} Let $e_1, \dots, e_d$ be an orthonormal basis of ${\mathbb R}^d$ such that $H = e_{d}^{\perp}$ and $\iprod{b_+}{e_d} > 0$. Let $v = b_+ / \iprod{b_+}{ e_d}$ and let $A$ be the linear operator on ${\mathbb R}^d$ that maps $e_d$ to $v$ and $e_i$ to itself, for $i=1\dots d-1$. Let $B=\parenth{ A^{-1} }^t$. Define \[ F_+ : H \to {\mathbb R}_+ \hskip 3mm \text{by} \hskip 3mm y_1 \mapsto \int_{{\mathbb R}_+} f( y_1 + t v) \, \,\mathrm{d} t \] and \[ G_+: H \to {\mathbb R}_+ \hskip 3mm \text{by} \hskip 3mm y_2 \mapsto \int_{{\mathbb R}_+} g (B y_2 +t e_d) \, \,\mathrm{d} t . 
\] By Fubini's theorem, and since $A$ has determinant $1$, \begin{equation} \lambdabel{Fplus} \int_H F_+ = \int_{H_+} f\circ A=\lambdambda \int_{{\mathbb R}^d} f. \end{equation} \par \noindent Also, letting $P$ be the projection with range $H$ and kernel ${\mathbb R} v$, one sees that the barycenter of $F_{+}$ is \[ \frac{\int_H x F_+ \,\mathrm{d} x }{\int_H F_+ \,\mathrm{d} x } = \frac{\int_{H_+} P(Ax) f(Ax) \, \,\mathrm{d} x}{\lambdambda \int_{{\mathbb R}^d} f} = P \Bigl(\frac{ \int_{H_+} x f(x) \, \,\mathrm{d} x } { \int_{H_{+}} f}\Bigr) = P ( b_+ ) , \] and this is $0$ by the definition of $P$. Since $\iprod{A x_1}{B x_2} = \iprod{x_1}{x_2}$ for all $x_1,x_2 \in {\mathbb R}^d$, we have $ \iprod{ y_1 + t_1 v}{B y_2 + t_2 e_d } = \iprod{y_1}{y_2} + t_1 t_2$ for all $t_1 , t_2 \in {\mathbb R}$ and $y_1, y_2 \in H$. So \eqref{eq:reduality_lambda_santalo} implies \begin{equation} \lambdabel{eq:santalo_dim_reduction} f ( y_1 + t_1 v ) g ( By_2 + t_2 e_d) \leq \truncf{1 - t_1 t_2- \iprod{y_1}{y_2} }^s . \end{equation} Let $t_1, t_2 > 0$ and assume that $ \iprod{y_1}{y_2} < 1$. Set $c = \sqrt{1 - \iprod{y_1}{y_2}}.$ Then, using \eqref{eq:santalo_dim_reduction} with $\tau_i = t_i/c,$ we get \[ \frac{f(y_1 + {c v \cdot \tau_1 })g(B y_2 + {c e_d \cdot \tau_2 })}{c^{2s}} \leq \truncf{1 - \tau_1 \tau_2}^s \] By Lemma \ref{lem:s_santalo_onedim}, we get that \[ \frac{1}{c^{2s}}\int_{[0, \infty)} f(y_1 + {c v \cdot \tau_1) } \,\mathrm{d} \tau_1 \int_{[0, \infty)} g(B y_2 + {c e_d \cdot \tau_2 }) \,\mathrm{d} \tau_2 \leq \left(\frac{:\;hing\kappa_2}{2}\right)^2. \] Returning to $t_1$ and $t_2$ in the integrals, one has \[ F_{+} (y_1)\, G_{+}(y_2) \leq \left(\frac{:\;hing\kappa_2}{2}\right)^2 (1 - \iprod{y_1}{y_2})^{s+1} \] for all $y_1, y_2$ in $H$ satisfying $\iprod{y_1}{y_2} < 1$. \par \noindent If $\iprod{y_1}{y_2} \geq 1,$ inequality \eqref{eq:santalo_dim_reduction} implies that $F_{+} (y_1) G_{+}(y_2)= 0.$ Thus, \begin{equation} \lambdabel{eq:FG_marginal_santalo} F_{+} (y_1)\, G_{+}(y_2) \leq \parenth{\frac{:\;hing\kappa_2}{2}}^2 \truncf{1 - \iprod{y_1}{y_2}}^{s+1}. \end{equation} The function $F_{+} $ is a function with finite integral on the $(d-1)$-dimensional space $H,$ and it is $\frac{1}{1+s}$-concave by the Borell--Brascamb--Lieb inequality \cite{brascamp1976extensions}. Thus, by induction assumption, there exists $v \in H$ such that \[ \int_{H} F_{+} \int_{H} \slogleg[s+1]\!\parenth{{\shift{F_{+}}{v}}} \leq \parenth{:\;hing[s+1]\kappa_{d}}^2. \] Such a $v$ can be found in any hyperplane inside $H$ bisecting $F_{+}$. Since the origin is the barycenter of $F_+$, we see that the integral \[ \int_{H} \slogleg[s+1]\!\!\parenth{ \shift{\slogleg[s+1]F_{+}}{q}} \] attains the minimum at the origin, applying Theorem \ref{thm:Alexandrov_s-conc} to $ \slogleg[s+1] F_{+}$. Using again the induction assumption, we can assume that $v = 0$ in the previous inequality. Inequality \eqref{eq:FG_marginal_santalo} yields that $G_+$ is pointwise less or equal to $ \parenth{\frac{:\;hing\kappa_2}{2}}^2 \slogleg[s+1]\!\parenth{F_{+}}.$ Hence, we get \[ \int_{H} F_{+} \int_{H}{G_{+}} \leq {\parenth{\frac{:\;hing\kappa_2}{2}}^2} \, \int_{H} F_{+} \, \int_{H} \slogleg[s+1]\!\parenth{F_{+}} \leq \parenth{\frac{:\;hing\kappa_2}{2}}^2 \parenth{:\;hing[s+1]\kappa_{d}}^2. 
\] However, by direct computation, \[ \volbs =\smeasure{\ball{d+1}} = \pi^{d/2} \frac{\gammaf{ s/2 +1}}{ \gammaf{ s/2 + d/2 + 1}}, \] where $\gammaf{\cdot}$ is the Euler Gamma function, and thus $$ \parenth{\frac{:\;hing\kappa_2}{2}}^2 \parenth{:\;hing[s+1]\kappa_{d}}^2= \parenth{\frac{:\;hing\kappa_{d+1}}{2}}^2. $$ Hence, also using (\ref{Fplus}), \[ \parenth{\frac{:\;hing\kappa_{d+1}}{2}}^2 \geq \int_{H} F_{+} \int_{H}{G_{+}} = \lambdambda \int_{{\mathbb R}^d} f \int_{H_{+}} g(Bx) \,\mathrm{d} x = \lambdambda \int_{{\mathbb R}^d} f \int_{H_{+}} g. \] Similarly, \[ \parenth{\frac{:\;hing\kappa_{d+1}}{2}}^2 \geq (1-\lambdambda) \int_{{\mathbb R}^d} f \int_{H_{-}} g. \] Therefore, \begin{eqnarray*} \int_{{\mathbb R}^d} f \int_{{\mathbb R}^d} g &=& \int_{{\mathbb R}^d} f \left( \int_{H_{+}} g \ + \ \int_{H_{-}} g \right) \leq \parenth{\frac{:\;hing\kappa_{d+1}}{2}}^2 \parenth{\frac{1}{\lambdambda} + \frac{1}{1 - \lambdambda}} \\ &= & \parenth{\frac{:\;hing\kappa_{d+1}}{2}}^2 \frac{1}{\lambdambda(1-\lambdambda)}. \end{eqnarray*} This completes the proof of Theorem \ref{thm:santalo_s_general}. \end{proof} \vskip 10mm \section{Santal\'o \tpdfs-regions} \lambdabel{sec:santalo_func} In this section, we will define Santal\'o regions for functions and discuss possible approaches to define Santal\'o functions. \par \noindent For a convex body $K$, its \emph{Santal\'o region} $S\parenth{K, t}$ with parameter $t$ was introduced and studied in \cite{meyer1998santalo}. It is defined as \begin{equation}\lambdabel{Santalo-body} S\parenth{K, t} = \left\{ x \in K :\; \vol{d} K \cdot \vol{d}\shift{K}{x} \leq t \parenth{\vol{d} \ball{d}}^2 \right\}. \end{equation} Note that the Santal\'o region approximates the initial set as $t \to \infty$. More importantly, it was shown in \cite[Theorem 10]{meyer1998santalo} that the {\em affine surface area} of the initial convex body can be computed as a limit as $t \to \infty$ of the volume difference of the convex body and its Santal\'o regions. \par Affine surface area was first introduced by Blaschke \cite{Blaschke1923} for dimensions $2$ and $3$ and for smooth enough convex bodies as an integral over the boundary $\partial K$ of a power of the Gauss curvature $\kappa_K$, $$ as(K) = \int_{\partial K} \kappa_K(x)^\frac{1}{n+1} d\mu(x). $$ Integration is with respect to the usual surface area measure $\mu$ on $\partial K$. It was successively extend within the last decades. Aside from the afore mentioned successful approach using the Santal\'o region, there are other successful approaches via the (convex) floating body resp. the illumination body in \cite{SchuettWerner1990, Werner1994} or in e.g., \cite{Lutwak1991, HanSlomkaWerner}, and they all coincide. Such extensions are desirable as the affine surface area is one of the most powerful tools in convex and differential geometry. It proved to be fundamental in the solution of the affine Plateau problem by Trudinger and Wang \cite{TrudingerWang2005,TrudingerWang2008}, in the theory of valuations where the affine and centro-affine surface areas have been characterized by Ludwig and Reitzner \cite{Ludwig-Reitzner} and Haberl and Parapatits \cite{HaberlParapatitis} as unique valuations satisfying certain invariance properties. Affine surface area appears naturally in the approximation of general convex bodies by polytopes, e.g., \cite{Boeroetzky2000, Reitzner, SchuettWerner2003}. 
Furthermore, there are connections to e.g., PDEs and ODEs and concentration of volume (e.g., \cite{FGP2007, LutwakOliker}), to information theory (e.g., \cite{artstein2012functional, CaglarWerner2014, PaourisWerner2012, Werner2012}), and to a spherical and hyperbolic setting \cite{BesauWerner2015a, BesauWerner2018}. \par \noindent It would be extremely interesting to develop an analogue of affine surface area in the functional setting. There are already several approaches to defining it for log-concave functions \cite{artstein2012functional, CaglarWerner2014, caglar2016functional, LiSchuettWerner}, but those do not all coincide. We think that an approach via a Santal\'o function will not only help to clarify this point, but can also reveal new properties and inequalities related to log-concave functions and their integrals. Towards this goal, we define Santal\'o functions for $1/s$-concave functions in the next subsection. \vskip 2mm \noindent First, following (\ref{Santalo-body}), we define the Santal\'o $s$-region $\santalosreg{f}{s}{t}$ of a non-negative Borel function $f$ on ${\mathbb R}^d$, for a fixed positive $s$, by \[ \santalosreg{f}{s}{t} = \left\{ x \in \mathop{\rm co} \mathop{\rm supp} f :\; \int_{{\mathbb R}^d} f \cdot \int_{{\mathbb R}^d} \slogleg \parenth{\shift{f}{x}} \leq t \parenth{\volbs}^2 \right\}, \] where $\text{co}\, A$ denotes the convex hull of a set $A$. In the limit case $ s =\infty$ we define the Santal\'o $\infty$-region by \[ \santalosreg{f}{\infty}{t} = \left\{ x \in \mathop{\rm co} \mathop{\rm supp} f :\; \int_{{\mathbb R}^d} f \cdot \int_{{\mathbb R}^d} \slogleg[\infty] \parenth{\shift{f}{x}} \leq t \cdot (2 \pi)^d \right\}. \] \vskip 2mm \noindent We summarize the properties of the Santal\'o regions in the next lemmas. They follow from Lemmas \ref{lem:dual_s_lifting} and \ref{lem:int_s-polar_via_suppfunc} and Theorems \ref{thm:Alexandrov_log-conc}-\ref{thm:santalo_s_general}. \vskip 2mm \noindent \begin{lem} Let $s$ be a fixed positive real number. Let $f$ be a non-negative function on ${\mathbb R}^d$ such that $\slogleg \slogleg f$ has positive integral. Then $\santalosreg{f}{s}{t}$ \begin{enumerate} \vskip 2mm \item is non-empty if $t \geq 1,$ and has non-empty interior if $t > 1$, \vskip 2mm \item is a convex set if it is non-empty, \vskip 2mm \item is strictly convex if it has non-empty interior, \vskip 2mm \item has $C^{\infty}$-smooth boundary if it has non-empty interior. \end{enumerate} \end{lem} \vskip 2mm \noindent \begin{lem} Let $f \mathop{\rm co}lon {\mathbb R}^d \to [0, \infty)$ be a proper log-concave function. If $\santalosreg{f}{\infty}{t}$ is non-empty, then \[ \santalosreg{f_s}{s}{t} \to \santalosreg{f}{\infty}{t} \] in the Hausdorff metric as $s \to \infty.$ \end{lem} \vskip 5mm \subsection{Marginals of convex sets} \par Let $s$ be a positive integer. Then, as we already discussed, any $1/s$-concave function on ${\mathbb R}^d$ is the marginal of a convex set in ${\mathbb R}^{d+s}$. In particular, one can lift a $1/s$-concave function $f \mathop{\rm co}lon {\mathbb R}^d \to [0, \infty)$ into ${\mathbb R}^{d+ s}$ as follows \cite{artstein2004santalo}, \begin{equation}\lambdabel{def.Ksf} {K}_s(f) = \left\{ (x, y) \in {\mathbb R}^{d} \times {\mathbb R}^{s} :\; x \in \mathop{\rm cl} \mathop{\rm supp} f,\, \enorm{y} \leq \parenth{\frac{f(x)}{\vol{s} \ball{s}}}^{\!1/s} \right\}. 
\end{equation} Clearly, \[ \vol{d +s} {K}_s(f) = \int_{\mathop{\rm supp} f} \vol{s} \ball{s} \parenth{ \frac{f(x)}{\vol{s} \ball{s}}} \,\mathrm{d} x = \int_{{\mathbb R}^d} f. \] From \cite[Lemma 3.1]{artstein2004santalo} follows that for any $z$ in the interior of the support of $f,$ one has \[ \vol{d+s} \shift{K_s(f)}{z} = \parenth{\vol{s} \ball{s}}^2 \int_{{\mathbb R}^d} \slogleg\! \parenth{\shift{f}{z}}, \] and therefore, \[ \int_{{\mathbb R}^d} f \, \int_{{\mathbb R}^d} \slogleg \! \parenth{\shift{f}{z}} = \frac{\vol{d+s} K_s(f) \, \vol{d+s} \shift{K_s(f)}{z}} {\parenth{\vol{s} \ball{s}}^2}. \] On the other hand, $\vol{d+s} \ball{d+s}= \volbs \vol{s} \ball{s}$. \vskip 2mm \noindent We then define the {\em Santal\'o $m$-function} $S_{m}(f,s,t)$ of a $1/s$-concave function $f$ with integer $s$ to be such that \begin{equation}\lambdabel{Santalofunction-sconcave} K_s \! \parenth{S_m(f,s,t)}= S \! \parenth{K_s(f), t}. \end{equation} \par \noindent It follows that the Santal\'o m-function $S_{m}(f,s,t)$ of a $1/s$-concave function $f$ is $1/s$-concave. Moreover, the following identity holds. \vskip 2mm \noindent \begin{prp} Let $s \in {\mathbb N},$ and let $f \mathop{\rm co}lon {\mathbb R}^d \to [0, \infty)$ be a $1/s$-concave function of finite integral. Then \begin{eqnarray*} &&\hskip -10mm \lim_{t \to \infty} \frac{ \int_{{\mathbb R}^d} f - \int_{{\mathbb R}^d} S_m(f,s,t) } { t^{-\frac{2}{ d+ {s} +1 }} }= \\ && \hskip 20mm \frac{s}{2} \, \parenth{ \frac{\vol{s} \ball{s}}{\vol{d+s} \ball{d+s}} \int_{{\mathbb R}^d} f}^\frac{2}{d+s+1} \, \int_{{\mathbb R}^d} \left| \det \parenth{\text{Hess} \left(f^\frac{1}{s}} \right) \right|^\frac{1}{d+s+1} f^{\frac{(s-1)(d+s)}{s(d+s+1)} }, \end{eqnarray*} where $\text{Hess} \left(f^\frac{1}{s}\right)$ is the Hessian of $ f^\frac{1}{s}.$ \end{prp} \vskip 2mm \noindent \begin{proof} The proof follows immediately from Theorem 10 of \cite{meyer1998santalo} and Proposition 6 of \cite{artstein2012functional}. \end{proof} \vskip 3mm \noindent It remains to find a reasonable definition of a Santal\'o function for a log-concave function. The natural first approach $\lim_{s \to \infty} S_m(f_s,s,t)$ does not lead to anything meaningful. Lemma \ref{lem:int_s-polar_via_suppfunc} and Theorem \ref{thm:Alexandrov_s-conc} provide another possible approach. \par \noindent For $s>0$, we define the set \begin{eqnarray*} &&:\;hing S_p(f, s, t) =\\ && \left\{ y \in \slift{f} :\; \frac{s}{2(d+s)}\int_{S^{d}} \frac{\abs{\iprod{e_{d+1}}{ u}}^{s-1}}{ \left(h_{\shift{K}{z}}(u)\right)^{d+s}} \,\mathrm{d} \spherem{u} \cdot {\int_{{\mathbb R}^d} f } \leq t \parenth{ \volbs }^2 \right\}. \end{eqnarray*} Lemma \ref{lem:int_s-polar_via_suppfunc} shows that $:\;hing S_p(f, s, t)$ is a convex $d$-symmetric set, and hence, it is the $s$-lifting of a $1/s$-concave function which we denote $S_p(f, s, t)$. The function $S_p(f, s, t)$ seems a good candidate for a Santal\'o function and we investigate this in a forthcoming paper. \vskip 20mm \noindent Grigory Ivanov \\ {\small Institute of Science and Technology Austria }\\ {\small Klosterneuburg, 3400, Austria} \\ {\small \tt [email protected]} \vskip 10mm \noindent Elisabeth M. Werner\\ {\small Department of Mathematics \hskip 42 mmUniversit\'{e} de Lille 1}\\ {\small Case Western Reserve University \hskip 34mm UFR de Math\'{e}matique }\\ {\small Cleveland, Ohio 44106, U. S. A. \hskip 36mm 59655 Villeneuve d'Ascq, France}\\ {\small \tt [email protected]}\\ \end{document}
\begin{document} \begin{abstract} We shall present an elementary approach to extremal decompositions of (quantum) covariance matrices determined by densities. We give a new proof of former results and provide a sharp estimate of the ranks of the densities that appear in the decomposition theorem. \end{abstract} \title{A note on extremal decompositions of covariances} \section{Introduction} Let $D \in M_n(\mathbb{C})$ denote an $n \times n$ (complex) density matrix (i.e. $D \geq 0$ and $\mbox{Tr } D = 1$), and let $X_i$ ($1 \leq i \leq k$) stand for self-adjoint matrices in $M_n(\mathbb{C}).$ Then the non-commutative covariance matrix is defined by $$ \mbox{Var}_D(\mathbf{X})_{ij} := \mbox{Tr } DX_iX_j - \left(\mbox{Tr } DX_i\right) \left(\mbox{Tr } DX_j\right) \quad 1 \leq i,j \leq k,$$ where $\mathbf{X}$ stands for the tuple $(X_1, \hdots, X_k),$ see \cite[p. 13]{P}. We note that there are more general versions of variances and covariance matrices. For instance, in \cite{B}, \cite{BD} R. Bhatia and C. Davis introduced them by means of completely positive maps and applied the concept to improve non-commutative Schwarz inequalities. Covariances naturally appear in quantum information theory as well, and there has recently been interest in understanding their extremal properties \cite{PD}, \cite{PD2}. More precisely, in \cite{PD} D. Petz and G. T\'oth proved that any density matrix $D$ can be written as a convex combination of projections $\{P_l\},$ i.e. $D = \sum_l \lambda_l P_l,$ such that $$ \mbox{Var}_D(X) = \sum_l \lambda_l \mbox{Var}_{P_l}(X)$$ holds, where $X$ denotes a fixed Hermitian. It is worth mentioning here that quite recently S. Yu pointed out some extremal aspects of the variances which yield a description of the quantum Fisher information in terms of variances (for the details, see \cite{Y}). In this short note we study analogous questions in the multivariable case. Actually, we are interested in the following problem: let us find densities $D_l \in M_n(\mathbb{C})$ such that $$ D = \sum_l \lambda_lD_l \quad \mbox{and} \quad \mbox{Var}_D(\mathbf{X}) = \sum_l \lambda_l \mbox{Var}_{D_l}(\mathbf{X}),$$ where $\sum_l \lambda_l = 1$ and $0 < \lambda_l < 1.$ Let us call a density $D$ {\bf extreme with respect to} $\mathbf{X} = (X_1, \hdots, X_k)$ if it admits only the trivial decomposition (i.e. $D_l = D$ for every $l$). It was proved in the cases $k = 1$ and $k = 2$ that the extreme densities are rank-one projections \cite{LP}, \cite{PD}. Furthermore, the number of projections used, i.e. the length of the decomposition, is polynomial in rank $D$ (see \cite{LP}). The aim of this note is to present a simple approach to the extremal problem above and to look at the question from the point of view of the theory of extreme correlation matrices (see \cite{CV}, \cite{GPW} and \cite{CT}). In this context we shall give a new proof of the decomposition theorems that appeared in \cite{LP}, \cite{PD}, \cite{PD2}, and we present a sharp rank estimate of the extreme densities. \section{Results and examples} First we collect some basic properties of the covariance matrix $\mbox{Var}_D(\mathbf{X}).$ We note that the matrix does not change under (real) scalar perturbations of the tuple $(X_1, \hdots, X_k).$ In fact, an elementary calculation on the entries gives that $$(1) \qquad \mbox{Var}_D(\mathbf{X}) = \mbox{Var}_D(X_1 - \lambda_1I, \hdots, X_k - \lambda_kI),$$ where $\lambda_i \in \mathbb{R}$ for every $i.$ Moreover, one can readily check that $\mbox{Var}_D(\mathbf{X})$ is positive. 
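For instance, if $D = \frac{1}{2}I_2$ and $$ X_1 = \left[\begin{matrix} 0 & 1 \cr 1 & 0 \end{matrix}\right], \qquad X_2 = \left[\begin{matrix} 1 & 0 \cr 0 & -1 \end{matrix}\right],$$ then $\mbox{Tr } DX_1 = \mbox{Tr } DX_2 = 0,$ $\mbox{Tr } DX_1X_2 = \mbox{Tr } DX_2X_1 = 0$ and $\mbox{Tr } DX_1^2 = \mbox{Tr } DX_2^2 = 1,$ so $\mbox{Var}_D(\mathbf{X}) = I_2 \geq 0.$ 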
For the sake of completeness, here is a simple proof. \begin{lemma} $ {\rm Var}_D(\mathbf{X}) \geq 0. $ \end{lemma} \begin{proof} By (1), without loss of generality, one can assume that Tr $DX_i = 0$ holds for every $1 \leq i \leq k.$ The density $D$ defines a semi--inner product $ \langle A,B \rangle_D := \mbox{Tr } DA^*B$ on $M_n(\mathbb{C}).$ Since ${\rm Var}_D(\mathbf{X})_{ij} = \langle X_i, X_j \rangle_D,$ for any $y = (y_1, \hdots, y_k) \in \mathbb{C}^k,$ we get that $$ y {\rm Var}_D(\mathbf{X}) y^* = \langle \: \sum_i y_i X_i,\sum_i y_i X_i \rangle_D \geq 0$$ and the proof is done. \end{proof} Next we show that the covariance is a concave function on the set of the density matrices. \begin{lemma} Let $D = \sum_l \lambda_l D_l$ be a finite sum of densities $D_l \in M_n(\mathbb{C})$ such that $\sum_l \lambda_l = 1$ and $0 \leq \lambda_l \leq 1.$ Then $$ {\rm Var}_D(\mathbf{X}) \geq \sum_l \lambda_l {\rm Var}_{D_l}(\mathbf{X}). $$ \end{lemma} \begin{proof} Choose $0 < \lambda < 1.$ If $D = \lambda D_1 + (1-\lambda)D_2,$ a straightforward calculation gives that $${\rm Var}_D(\mathbf{X}) - (\lambda{\rm Var}_{D_1}(\mathbf{X}) + (1-\lambda) {\rm Var}_{D_2}(\mathbf{X})) = \lambda(1-\lambda)[x_{ij}]_{1 \leq i,j \leq k},$$ where $x_{ij} = {\rm Tr} \: (D_1-D_2)X_i {\rm Tr} \: (D_1-D_2)X_j.$ Therefore $[x_{ij}]_{1 \leq i,j \leq k} = XX^* \geq 0$ holds with $$ X = \left[\begin{matrix} {\rm Tr} \: (D_1-D_2)X_1 & 0 & \hdots & 0 \cr \vdots & \vdots & & \vdots \cr {\rm Tr} \: (D_1-D_2)X_k & 0 & \hdots & 0 \end{matrix}\right] \: \in \: M_k(\mathbb{C}),$$ and the lemma readily follows. \end{proof} The scalar perturbation property $\mbox{Var}_D(\mathbf{X}) = \mbox{Var}_D(\mathbf{X} - {\bf \lambda})$ guarantees that it is enough to solve the extremal problem when $\mbox{Tr } DX_i = 0$ comes for every $1 \leq i \leq k.$ Then the nonlinear part of the covariance vanishes, thus we can simply transform our problem into a geometrical one: let $X_i \in M_n(\mathbb{C})$ ($1 \leq i \leq k$) be self-adjoints and define the set \begin{eqnarray*} \begin{split} \mathcal{D}(\mathbf{X}) := \{ D \colon D \in M_n(\mathbb{C}) &\mbox{ is density and } \\ &\mbox{Tr } DX_i = 0 \mbox{ for every } 1 \leq i \leq k \}. \end{split} \end{eqnarray*} Clearly, $\mathcal{D}(\mathbf{X})$ is a convex, compact set. From the Krein--Milman theorem, $\mathcal{D}(\mathbf{X})$ is the convex hull of its extreme points. Precisely, these extreme points are the extreme densities we are looking for in the decomposition of Var$_D(\mathbf{X}).$ Notice that there is no restriction if we assume that $X_1, \hdots, X_k$ are linearly independent over $\mathbb{R}.$ Hence from here on we shall use this assumption on $X_i$-s. When $k \geq 3,$ one can see that it is no longer true that the extreme points of $\mathcal{D}(\mathbf{X})$ are rank-one projections. In fact, look at the following simple example in $M_2(\mathbb{C})$ with $k = 3.$ \noindent {\bf Example 1.} Recall that the Pauli matrices are given by $$ \sigma_x = \left[\begin{matrix} 0 & 1 \cr 1 & 0 \end{matrix}\right] \qquad \sigma_y = \left[\begin{matrix} 0 & {\rm -i} \cr {\rm i} & 0 \end{matrix}\right] \qquad \sigma_z = \left[\begin{matrix} 1 & 0 \cr 0 & -1 \end{matrix}\right]. $$ Any $2 \times 2$ Hermitian $Z$ with Tr $Z = 1$ can be expressed in the form $$ Z = {1 \over 2}(I_2 + x\sigma_x + y\sigma_y + z \sigma_z), $$ where $x,y$ and $z \in \mathbb{R}.$ Then the points of the Bloch sphere, i.e. $x^2 + y^2 + z^2 = 1,$ correspond to the rank-one projections. 
It is standard that the self-adjoints of trace $1,$ which are orthogonal to a fixed $Z,$ form an affine $2$-dimensional subspace of $\mathbb{R}^3.$ Hence one can find $X_1, X_2$ and $X_3$ so that the only density $D$ that satisfies $\mbox{Tr } DX_i = 0$ $(1 \leq i \leq 3)$ is inside the Bloch ball. Then $\mathcal{D}(\mathbf{X})= \{ D \}$ and $D$ is a density of rank $2.$ We shall present a simple characterization of extreme densities or the extreme points of $\mathcal{D}(\mathbf{X}).$ We recall that for any positive operators $D$ and $C,$ $D - \varepsilon C$ is positive for some $\varepsilon > 0$ if and only if $\mbox{ran } C \leq \mbox{ran } D$ holds. Then we can prove \begin{lemma} The following statements are equivalent: \begin{itemize} \item [(i)] $D$ is an extreme point of $\mathcal{D}(\mathbf{X}),$ \item [(ii)] if $C \in \mathcal{D}(\mathbf{X})$ such that ${\rm ran} \: C \leq {\rm ran } \: D$ then $C = D.$ \end{itemize} \end{lemma} \begin{proof} Let us assume that ${\rm ran} \: C \leq {\rm ran } \: D$ and $D \neq C \in \mathcal{D}(\mathbf{X}).$ Then $$ (1-\varepsilon) \left( {1 \over 1 - \varepsilon} (D - \varepsilon C) \right) + \varepsilon C = D,$$ where $0 < \varepsilon < 1,$ hence $D$ cannot be an extreme point of $\mathcal{D}(\mathbf{X}).$ Conversely, if $D$ is not extreme then $D = {1 \over 2} D_1 + {1 \over 2} D_2$ which implies that $\mbox{ran } D - {1 \over 2} D_1 \leq \mbox{ran } D,$ since $D - {1 \over 2} D_1 $ is positive. \end{proof} To produce a description of ext $\mathcal{D}(\mathbf{X})$ which is more effective for our purposes, we need some basic facts about correlation matrices. We recall that a positive semidefinite matrix is a correlation matrix if its diagonal entries are $1$-s. Correlation matrices form a convex, compact set in $M_n(\mathbb{C}).$ Its extreme points, or extreme correlation matrices, were described by several authors, see e.g. \cite{GPW}, \cite{CT}. It is well-known that an $n \times n$ extreme correlation matrix has rank at most $\sqrt{n}$ (see e.g. \cite{CV}). Later we shall present an estimate of the rank of extreme densities matrices (with respect to tuples). The perturbation method used by C.-K. Li and B.-S. Tam is relevant for us. Let us say that a nonzero Hermitian $S \in M_n(\mathbb{C})$ is a {\bf perturbation} of $D$ if there exists an $\varepsilon > 0$ such that $D \pm \varepsilon S$ are density matrices as well. Then $D$ is an extreme density with respect to $X_1, \hdots, X_k$ if and only if there does not exist perturbation $S$ of $D$ such that Tr $S = 0$ and Tr $SX_i = 0$ for every $1 \leq i \leq k.$ In fact, if $D$ is not extreme, one can find $D_1$ and $D_2$ densities such that $D = {1 \over 2} D_1 + {1\over 2} D_2$ and $\mbox{Tr } D_jX_i = 0.$ It follows that $S = D_1 - D_2$ is a perturbation of $D.$ The converse statement is trivial. From here on let $H_n(\mathbb{C})$ denote the real Hilbert space of $n \times n$ complex Hermitian matrices with the usual inner product $\langle A, B \rangle = \mbox{Tr } AB.$ One can easily conclude that an extreme density $D$ (with respect to $\mathbf{X}$) must be singular if $n^2 > k+1.$ Actually, the last inequality guarantees the existence of a Hermitian perturbation $S$ which satisfies the orthogonality constraints; i.e. $S$ is orthogonal to $X_i$-s and $I.$ Moreover, the continuity of the spectra here gives that any small perturbation $D \pm \varepsilon S$ is positive if $D$ is invertible. 
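For instance, if $n = 2$ and $k \leq 2,$ then $n^2 = 4 > k+1,$ so every extreme density in $M_2(\mathbb{C})$ is singular; being of trace $1,$ it is therefore a rank-one projection, in accordance with the results of \cite{LP}, \cite{PD} recalled in the introduction. 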
Let $\sigma(A)$ denote the spectrum of any $A \in M_n(\mathbb{C}).$ Suppose that the matrix $D$ is of rank $r$. Then there exist $Y \in M_{n \times r}(\mathbb{C})$ and $R \in H_r(\mathbb{C})$ such that $D = YRY^*.$ Now one can prove the following lemma which is analogous to \cite[Theorem 1. (a)]{CT}. \begin{lemma} Let $D = YRY^* \in \mathcal{D}(\mathbf{X})$ be a density of rank $r.$ Then $S$ is a perturbation of $D$ if and only if ${\rm Tr} \: S = 0$ and $S = YQY^*$ where $Q \in H_r(\mathbb{C}).$ \end{lemma} \begin{proof} First, assume that $S = YQY^*.$ Then $S$ is nonzero if and only if $Q \neq 0.$ Indeed, we have $\mbox{rank } S = \mbox{rank } Q$ because $Y$ has full column rank $r.$ Since $D = YRY^*$ is positive, we obtain that $R$ is positive and invertible. Since $0 \notin \sigma(R),$ there exists an $\varepsilon > 0$ such that $D \pm \varepsilon S = Y(R\pm \varepsilon Q)Y^* $ are positive. Obviously, we get that $S$ is a perturbation. Conversely, let us assume that $S$ is a perturbation of $D.$ Clearly, Tr $S = 0$ must hold. Expand $Y$ with a matrix $Z \in M_{n \times (n-r)}(\mathbb{C})$ such that $V = (Y|Z)$ is invertible and $V(R \oplus 0_{n-r})V^* = D.$ Next, let us write $V^{-1}S(V^*)^{-1}$ in blocks that correspond to the block form of $R \oplus 0_{n-r}.$ Since $V^{-1}(D \pm \varepsilon S)(V^{-1})^*$ are positive for some $\varepsilon > 0,$ it follows that $S = V(Q \oplus 0_{n-r})V^* $ must hold for some $ Q \in H_r(\mathbb{C}).$ \\ \end{proof} After this lemma, here is our main result, which reflects some similarity with the characterization theorem of extreme correlations, see \cite[Theorem 1]{CT}. \begin{thm} Let $X_i \in H_n(\mathbb{C}),$ $1 \leq i \leq k,$ and $D = YRY^* \in \mathcal{D}(\mathbf{X})$ be a density of rank $r,$ where $Y \in M_{n \times r}(\mathbb{C}).$ The following are equivalent: \begin{itemize} \item[(i)] $D$ is an extreme point of $\mathcal{D}(\mathbf{X}),$ \item[(ii)] $ {\rm span} \: \{ Y^*X_1Y, \hdots, Y^*X_kY, Y^*Y \} = H_r(\mathbb{C}),$ \item[(iii)] $ \{ DX_1D, \hdots, DX_kD, D^2 \} \mbox{ has (real) rank } r^2.$ \end{itemize} Moreover, if $D = YY^*$ then the above statements are equivalent to \begin{itemize} \item[(iv)] $r^{-1}I_r$ is an extreme density with respect to $Y^*\mathbf{X}Y;$ that is, $$\mathcal{D}(Y^*\mathbf{X}Y) = \{r^{-1} I_r\}.$$ \end{itemize} \end{thm} \begin{proof} \noindent (i) $\Leftrightarrow$ (ii) From Lemma 4, $D$ is extreme if and only if there does not exist $0 \neq YQY^*$ such that Tr $YQY^*X_i = \mbox{Tr } Q(Y^*X_iY) = 0$ and Tr $YQY^* = \mbox{Tr } Q(Y^*Y) = 0.$ We notice that the only such $Q$ is $Q = 0$ if and only if the linear span of $Y^*X_1Y, \hdots,$ $Y^*X_kY$ and $Y^*Y$ is the full space $H_r(\mathbb{C}).$ \\ \noindent (iii) $\Leftrightarrow$ (ii) Let us choose the decomposition $D = YY^*;$ that is, $R = I_r.$ Note that the self-adjoint $Y^*Y \in M_r(\mathbb{C})$ is invertible. In fact, $\sigma(YY^*) \cup \{0\} = \sigma(Y^*Y) \cup \{0\}$ holds, thus $\sigma(Y^*Y)$ equals the set of positive eigenvalues of $D$ (with multiplicities). This implies that $\sum_{i = 0}^k \alpha_i Y^*X_iY = 0$ if and only if $\sum_{i = 0}^k \alpha_i YY^*X_iYY^* = 0$ $(\alpha_i \in \mathbb{R}, \: X_{0} = I_n),$ so the systems $\{ Y^*X_1Y, \hdots, Y^*X_kY, Y^*Y \}$ and $\{ DX_1D, \\ \hdots, DX_kD, D^2 \}$ have the same rank. 
\\ \noindent (i) $\Rightarrow$ (iv) Since $D$ is an extreme point, we get from (ii) that $\{Y^*X_1Y, \\ \ldots, Y^*X_kY\}$ has rank at least $r^2-1.$ However, $I_r$ is not in the linear span of the above system because it is orthogonal to every matrix $Y^*X_iY.$ Adjoining $r^{-1}I_r$ to $Y^*\mathbf{X}Y$, we get a system of full rank in $H_r(\mathbb{C}).$ Hence by (iii) we conclude that $r^{-1}I_r$ is an extreme point of $\mathcal{D}(Y^*\mathbf{X}Y).$ \\ \noindent (iv) $\Rightarrow$ (i) If $r^{-1}I_r$ is an extreme point, it has no perturbation $S$ which is orthogonal to every $Y^*X_iY.$ Thus it follows that $I_r, Y^*X_1Y, \ldots, Y^*X_kY$ must span $H_r(\mathbb{C});$ that is, $\mathcal{D}(Y^*\mathbf{X}Y) = \{r^{-1}I_r\}.$ Note that $Y^*Y, Y^*X_1Y, \\ \ldots,$ $Y^*X_kY$ span $H_r(\mathbb{C})$ as well because $\mbox{Tr } Y^*Y = \mbox{Tr } D = 1$ and the $Y^*X_iY$-s are traceless. Thus (ii) implies that $D$ is an extreme point. \end{proof} The theorem gives a straightforward estimate of the rank of extreme densities. \begin{cor} Let $D \in M_n(\mathbb{C})$ be an extreme density with respect to $X_1, \hdots, X_k \in H_n(\mathbb{C}).$ Then $$ {\rm rank} \: D \leq \sqrt{k+1}. $$ \end{cor} The Krein--Milman theorem implies that $\mbox{Var}_D(\mathbf{X})$ can be written as a convex combination of covariances determined by densities of rank at most $\sqrt{k+1}.$ Moreover, one can easily deduce the following result which first appeared in \cite{LP}, \cite{PD2} and \cite[Theorem]{PD}. \begin{cor} Let $D \in M_n(\mathbb{C})$ denote a density matrix. In the case of $k = 1$ and $k = 2,$ there exist projections $P_1, \hdots, P_m$ such that $$D = \sum_{l=1}^m \lambda_l P_l \quad \mbox{and} \quad {\rm Var}_D(\mathbf{X}) = \sum_{l=1}^m \lambda_l {\rm Var}_{P_l}(\mathbf{X})$$ hold, where $\sum_{l=1}^m \lambda_l = 1$ and $0 \leq \lambda_l \leq 1.$ \end{cor} In the case of $k \geq 3$, one might expect that the covariance matrix can still be decomposed by means of projections if $n$ is large enough. However, this is not necessarily true. The next example shows that the estimate of Corollary 1 is sharp if $n$ is large enough. \noindent {\bf Example 2.} Let $n = \lfloor \sqrt{k+1} \rfloor.$ The special unitary group $SU(n)$ has dimension $n^2-1,$ so let $\lambda_i$ $(1 \leq i \leq n^2-1)$ denote a collection of its traceless, Hermitian infinitesimal generators. One can also assume that $\mbox{Tr } \lambda_i \lambda_j = 0$ holds for every $i \neq j$ (for the generalized Gell--Mann matrices, see e.g. \cite{SZ}). Then the matrices $\{I_n, \lambda_1, \hdots, \lambda_{n^2-1}\}$ span the real vector space $H_n(\mathbb{C}).$ Thus it follows that $$ \mathcal{D}(\lambda_1, \hdots, \lambda_{n^2-1}) = \left\{{I_n \over n} \right\}$$ is a singleton, hence $(1/n) I_n$ is an extreme density of rank $n.$ If $n^2 < k + 1,$ let us choose arbitrary linearly independent Hermitians $\lambda_{n^2}, \hdots, \lambda_k \in M_m(\mathbb{C}),$ where $m$ is large enough. From Theorem 1 (iii), $(1/n)I_n \oplus 0_m$ remains extremal with respect to $\lambda = (\lambda_1 \oplus 0_m, \hdots, \lambda_{n^2-1} \oplus 0_m, 0_n \oplus \lambda_{n^2}, \hdots, 0_n \oplus \lambda_k),$ hence $\mbox{Var}_{(1/n)I_n \oplus 0_m}({\bf \lambda})$ is not decomposable. Applying direct sums as above, for every large $n$ one can construct $n \times n$ extreme densities of arbitrary rank between $1$ and $\sqrt{k+1}.$ The method we used is very similar to that of describing extreme correlations. 
However, the next example shows that $\mbox{Var}_D(\mathbf{X})$ is not necessarily extreme even if it is a correlation matrix and $D$ is an extreme density (with respect to some tuple). \noindent {\bf Example 3.} Let $D$ be the projection $\mbox{diag}(1,0,\hdots, 0) \in H_{n+1}(\mathbb{C}).$ We define the Hermitians in $H_{n+1}(\mathbb{C})$ $$ X_1 := \left[\begin{matrix} 0 & 1 \cr 1 & 0 \end{matrix} \right] \oplus 0_{n-1}, \; X_2 := \left[\begin{matrix} 0 & 0 & 1 \cr 0 & 0 & 0 \cr 1 & 0 & 0 \end{matrix} \right] \oplus 0_{n-2}, \; \hdots \; , $$ $$ X_n := \left[\begin{matrix} 0 & \hdots & 0 & 1 \cr \vdots & \vdots & \vdots & 0 \cr 0 & \vdots & \vdots & \vdots \cr 1 & 0 & \hdots & 0 \end{matrix} \right]. $$ Then a simple calculation gives that $\mbox{Var}_D(\mathbf{X}) = I_n,$ which is obviously not an extreme correlation matrix. Finally, for the converse, we give an example showing that $\mbox{Var}_D(\mathbf{X})$ can be an extreme correlation matrix while $D$ is not necessarily extremal (with respect to $\mathbf{X}$). \noindent {\bf Example 4.} Consider $D = (1/n) I_n \oplus 0_n \in H_{2n}(\mathbb{C}),$ $n > 2.$ Let us choose reals $x_1, \hdots, x_n$ such that $\sum_{i=1}^n x_i = 0$ and $\sum_{i=1}^n x_i^2 = n$ hold. For any $\tilde{X}_i \in H_n(\mathbb{C}),$ $1 \leq i \leq n,$ we set $$ X_i = \mbox{diag} (x_1, \hdots, x_n) \oplus \tilde{X}_i \in H_{2n}(\mathbb{C}) \qquad 1 \leq i \leq n. $$ Then we get that $\mbox{Var}_D(\mathbf{X})$ is the $n \times n$ matrix which consists only of $1$-s; that is, it is a rank-one extreme correlation matrix. From Corollary 1, $D$ cannot be extreme with respect to $\mathbf{X}.$ \end{document}
\begin{document} \title{Computations on Some Hankel Matrices} \author{Ruiming Zhang} \begin{abstract} In this note, we present the determinant, the inverse and a lower bound for the smallest eigenvalue of some Hankel matrices. \end{abstract} \subjclass[2000]{Primary 15A09; Secondary 33D45. } \curraddr{School of Mathematical Sciences\\ Guangxi Normal University\\ Guilin City, Guangxi 541004\\ P. R. China.} \keywords{\noindent Orthogonal Polynomials; Hilbert matrices; Hankel Matrices; Determinants; Inverse Matrices; Smallest eigenvalue.} \email{[email protected]} \maketitle \section{Introduction} For each nonnegative integer $n$, the $n$-th Hilbert matrix is \cite{Weisstein} \[ \left(\frac{1}{j+k+1}\right)_{j,k=0}^{n}.\] These matrices are the moment matrices associated with the Legendre polynomials. The generalized Hilbert matrices, which are also called Hankel matrices, arise as the generalized moment matrices associated with some more general orthogonal polynomials. Some interesting questions for Hankel matrices concern the determinants, the inverses and lower bounds for the smallest eigenvalues. In \cite{Zhang} we have developed a general method to compute the determinants, inverses and lower bounds for the smallest eigenvalues for the generalized moment matrices associated with some orthogonal systems (not just limited to orthogonal polynomials). In this note we apply the results to some Hankel matrices. The following theorem is adapted from \cite{Zhang} and we will not repeat the proof here. \begin{thm} \label{thm:1}Given a probability measure $P(dx)$ on $\mathbb{R}$, for each nonnegative integer $n$, let\[ \mu_{n}=\int_{\mathbb{R}}x^{n}P(dx),\] \[ G_{n}=\left(\mu_{j+k}\right)_{j,k=0}^{n}\] and\begin{align*} p_{n}(x) & =\sum_{k=0}^{n}a_{n,k}x^{k},\quad n=0,1,\dots\end{align*} be the orthonormal polynomials. Then \begin{align*} \det G_{n} & =\prod_{j=0}^{n}a_{j,j}^{-2}\end{align*} and\begin{align*} G_{n}^{-1} & =\left(\gamma_{j,k}\right)_{j,k=0}^{n}\end{align*} with\begin{align*} \gamma_{j,k} & =\sum_{\ell=\max(j,k)}^{n}\overline{a_{\ell,j}}a_{\ell,k}.\end{align*} Furthermore, if there is a complex number $z_{0}$ with $|z_{0}|=1$ such that for each nonnegative integer $n$, the terms of the sequence \[ a_{n,k}z_{0}^{k},\quad k=0,1,\dots\] all have the same sign, then the smallest eigenvalue $\lambda_{s}$ of the matrix $G_{n}$ has a lower bound\begin{align*} \lambda_{s} & \ge\frac{1}{\sum_{m=0}^{n}|p_{m}(z_{0})|^{2}}.\end{align*} \end{thm} \begin{rem} In the case that all the $p_{m}(z_{0})$ are real, we can apply the Christoffel-Darboux formula to get \cite{Andrews,Szego} \[ \lambda_{s}\ge\frac{a_{n+1,n+1}}{a_{n,n}\left\{ p'_{n+1}(z_{0})p_{n}(z_{0})-p_{n+1}(z_{0})p'_{n}(z_{0})\right\} }.\] \end{rem} Recall that Euler's $\Gamma(z)$ is defined as \cite{Andrews,Koekoek,Szego} \begin{align*} \Gamma(z) & =\int_{0}^{\infty}x^{z-1}e^{-x}dx,\quad\Re(z)>0\end{align*} and it can be analytically extended to a meromorphic function on the complex plane. The shifted factorial of $z$ is defined as\begin{align*} (z)_{n} & =\frac{\Gamma(z+n)}{\Gamma(z)},\quad n\in\mathbb{Z}.\end{align*} The hypergeometric function ${}_{2}F_{1}$ is defined as\begin{align*} {}_{2}F_{1}\left(\begin{array}{c} a,b\\ c\end{array};z\right) & =\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}n!}z^{n}\end{align*} for $|z|<1$. 
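To illustrate Theorem \ref{thm:1} in the simplest classical case from the introduction, take $P(dx)$ to be Lebesgue measure on $[0,1]$, so that $\mu_{n}=\frac{1}{n+1}$ and $G_{n}$ is the $n$-th Hilbert matrix. For $n=1$ the orthonormal polynomials are $p_{0}(x)=1$ and $p_{1}(x)=\sqrt{3}\,(2x-1)$ (the Legendre polynomials shifted to $[0,1]$), so $a_{0,0}=1$, $a_{1,1}=2\sqrt{3}$, and the theorem gives \[ \det G_{1}=a_{0,0}^{-2}a_{1,1}^{-2}=\frac{1}{12},\] in agreement with the direct computation $\det G_{1}=\frac{1}{3}-\frac{1}{4}=\frac{1}{12}$. 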
Euler's Beta integral could be evaluated in terms of $\Gamma(z)$,\begin{align*} \int_{0}^{1}x^{\alpha-1}(1-x)^{\beta-1}dx & =\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)},\quad\Re(\alpha),\Re(\beta)>0.\end{align*} For any complex number $a$ and $0<q<1$, we define \cite{Andrews,Koekoek} \begin{align*} (a;q)_{\infty} & =\prod_{m=0}^{\infty}(1-aq^{m}),\quad(a;q)_{m}=\frac{(a;q)_{\infty}}{(aq^{m};q)_{\infty}}.\end{align*} The $q$-Binomial theorem is\begin{align*} \frac{(az;q)_{\infty}}{(z;q)_{\infty}} & =\sum_{k=0}^{\infty}\frac{(a;q)_{k}}{(q;q)_{k}}z^{k},\quad|z|<1,\end{align*} one of its direct consequences is \begin{align*} (z;q)_{\infty} & =\sum_{k=0}^{\infty}\frac{q^{\binom{k}{2}}\left(-z\right)^{k}}{(q;q)_{k}}.\end{align*} The confluent $q$-hypergeometric series ${}_{1}\phi_{1}$ \begin{align*} {}_{1}\phi_{1}\left(a;b;q,z\right) & =\sum_{n=0}^{\infty}\frac{(a;q)_{n}q^{\binom{n}{2}}(-z)^{n}}{(b;q)_{n}(q;q)_{n}}\end{align*} satisfies the following identity,\begin{align*} {}_{1}\phi_{1}\left(a;b;q,b/a\right) & =\frac{(b/a;q)_{\infty}}{(b;q)_{\infty}}.\end{align*} Hold $|b|<1$ fixed and let $a\to\infty$ in the above formula to obtain the Cauchy's formula \[ \sum_{n=0}^{\infty}\frac{q^{n(n-1)}b^{n}}{(q,b;q)_{n}}=\frac{1}{(b;q)_{\infty}}.\] \section{Applications} \subsection{Laguerre Polynomials} The Laguerre polynomials $\left\{ L_{n}^{\alpha}(x)\right\} _{n=0}^{\infty}$ are defined as \cite{Andrews,Koekoek,Szego} \begin{align*} L_{n}^{\alpha}(x) & =\frac{(\alpha+1)_{n}}{n!}\sum_{k=0}^{n}\frac{(-n)_{k}x^{k}}{(\alpha+1)_{k}k!}\end{align*} for $n\ge0$ and we assume that \begin{align*} L_{-1}^{\alpha}(x) & =0.\end{align*} For any $\alpha>-1$, the orthogonal relation for the Laguerre polynomials is \begin{align*} \int_{0}^{\infty}L_{m}^{\alpha}(x)L_{n}^{\alpha}(x)\frac{x^{\alpha}e^{-x}}{\Gamma(\alpha+1)}dx & =\frac{(\alpha+1)_{n}}{n!}\delta_{mn}\end{align*} for any nonnegative integers $m,n$ where\begin{align*} \delta_{mn} & =\begin{cases} 1 & m=n\\ 0 & m\neq n\end{cases}.\end{align*} Clearly, the $n$-th moment is\begin{align*} m_{n} & =\frac{\int_{0}^{\infty}x^{\alpha+n}e^{-x}dx}{\Gamma(\alpha+1)}=(\alpha+1)_{n},\end{align*} and \begin{align*} \ell_{n}^{(\alpha)}(x) & =(-1)^{n}\sqrt{\frac{n!}{(\alpha+1)_{n}}}L_{n}^{\alpha}(x)\end{align*} is the $n$-th orthonormal polynomial. 
Applying Theorem \ref{thm:1} with\begin{align*} a_{n,k} & =\sqrt{\frac{(\alpha+1)_{n}}{n!}}\frac{(-n)_{k}(-1)^{n}}{(\alpha+1)_{k}k!}\end{align*} we get \begin{align*} \det\left((\alpha+1)_{j+k}\right)_{0\le j,k\le n} & =\prod_{k=0}^{n}\left\{ k!(\alpha+1)_{k}\right\} ,\end{align*} and its inverse matrix is\begin{align*} \left((\alpha+1)_{j+k}\right)_{0\le j,k\le n}^{-1} & =\left(\sum_{\ell=0}^{n}\frac{(\alpha+1)_{\ell}(-\ell)_{j}(-\ell)_{k}}{\ell!(\alpha+1)_{j}(\alpha+1)_{k}j!k!}\right)_{j,k=0}^{n}.\end{align*} For $\alpha>-1$, the smallest eigenvalue $\lambda_{s}$ of the matrix $\left((\alpha+1)_{j+k}\right)_{0\le j,k\le n}$ satisfies \begin{align*} \lambda_{s} & \ge\left\{ \sum_{\ell=0}^{n}\frac{\ell!}{(\alpha+1)_{\ell}}L_{\ell}^{(\alpha)}(-1)^{2}\right\} ^{-1}\\ & =\frac{(\alpha+1)_{n}}{(n+1)!}\frac{1}{L_{n}^{(\alpha+1)}(-1)L_{n}^{(\alpha)}(-1)-L_{n+1}^{(\alpha)}(-1)L_{n-1}^{(\alpha+1)}(-1)},\end{align*} where we applied the formula\[ \frac{d}{dx}L_{n}^{(\alpha)}(x)=-L_{n-1}^{(\alpha+1)}(x).\] From the Perron formula \cite{Szego} \begin{align*} L_{n}^{(\alpha)}(x) & =\frac{e^{x/2}}{2\pi}(-x)^{-\alpha/2-1/4}n^{\alpha/2-1/4}\exp\left\{ 2(-nx)^{1/2}\right\} \left\{ 1+\mathcal{O}\left(\frac{1}{n^{1/2}}\right)\right\} \end{align*} for $x\in\mathbb{C}\backslash(0,\infty)$ as $n\to\infty$, we get\begin{align*} L_{n}^{(\alpha+1)}(-1)L_{n}^{(\alpha)}(-1)-L_{n+1}^{(\alpha)}(-1)L_{n-1}^{(\alpha+1)}(-1) & =\frac{n^{\alpha-1}\exp(4n^{1/2})}{8\pi e}\left\{ 1+\mathcal{O}\left(\frac{1}{n^{1/2}}\right)\right\} \end{align*} as $n\to\infty$. This together with \begin{align*} \frac{\Gamma(n+\alpha)}{\Gamma(n+\beta)} & =n^{\alpha-\beta}\left\{ 1+\mathcal{O}\left(\frac{1}{n}\right)\right\} \end{align*} as $n\to\infty$ gives\begin{align*} \left\{ \sum_{\ell=0}^{n}\frac{\ell!}{(\alpha+1)_{\ell}}L_{\ell}^{(\alpha)}(-1)^{2}\right\} ^{-1} & =\frac{8\pi e}{\Gamma(\alpha+1)}\exp(-4n^{1/2})\left\{ 1+\mathcal{O}\left(\frac{1}{n^{1/2}}\right)\right\} \end{align*} as $n\to\infty$. 
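As a quick check, for $n=1$ the determinant formula reads\[ \det\begin{pmatrix}1 & \alpha+1\\ \alpha+1 & (\alpha+1)(\alpha+2)\end{pmatrix}=(\alpha+1)(\alpha+2)-(\alpha+1)^{2}=\alpha+1=\prod_{k=0}^{1}\left\{ k!(\alpha+1)_{k}\right\} .\] The formulas above are also easy to test numerically; the following short script is added here only as a sanity check and assumes SciPy's \texttt{poch} and \texttt{eval\_genlaguerre}:
\begin{verbatim}
# Numerical sanity check of the determinant and the eigenvalue bound above.
import numpy as np
from math import factorial
from scipy.special import poch, eval_genlaguerre

alpha, n = 0.5, 4
# Hankel matrix ((alpha+1)_{j+k})_{j,k=0..n}
G = np.array([[poch(alpha + 1, j + k) for k in range(n + 1)]
              for j in range(n + 1)])
det_formula = np.prod([factorial(k) * poch(alpha + 1, k)
                       for k in range(n + 1)])
print(np.linalg.det(G), det_formula)       # the two values should agree
# lower bound ( sum_l  l!/(alpha+1)_l * L_l^{(alpha)}(-1)^2 )^{-1}
bound = 1.0 / sum(factorial(l) / poch(alpha + 1, l)
                  * eval_genlaguerre(l, alpha, -1.0) ** 2
                  for l in range(n + 1))
print(np.linalg.eigvalsh(G).min(), bound)  # smallest eigenvalue vs. its bound
\end{verbatim}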
\subsection{The Jacobi Polynomials $\{P_{n}^{(\alpha,\beta)}(x)\}_{n=0}^{\infty}$} The Jacobi polynomials $\left\{ P_{n}^{(\alpha,\beta)}(x)\right\} _{n=0}^{\infty}$ are defined as \cite{Andrews,Koekoek,Szego}\begin{align*} P_{n}^{(\alpha,\beta)}(x) & =\frac{(\alpha+1)_{n}}{n!}{}_{2}F_{1}\left(\begin{array}{c} -n,n+\alpha+\beta+1\\ \alpha+1\end{array};\frac{1-x}{2}\right)\end{align*} for $n\ge0$ and \begin{align*} P_{-1}^{(\alpha,\beta)}(x) & =0.\end{align*} For $\alpha,\beta>-1$, they satisfy the orthogonal relation \begin{align*} \int_{-1}^{1}P_{m}^{(\alpha,\beta)}(x)P_{n}^{(\alpha,\beta)}(x)w(x)dx & =h_{n}\delta_{mn}\end{align*} for all nonnegative integers $n,m$ where\begin{align*} w(x) & =(1-x)^{\alpha}(1+x)^{\beta},\end{align*} and\begin{align*} h_{n} & =\frac{2^{\alpha+\beta+1}\Gamma(\alpha+n+1)\Gamma(\beta+n+1)}{(2n+\alpha+\beta+1)\Gamma(\alpha+\beta+n+1)n!}.\end{align*} Since \begin{align*} P_{n}^{(\alpha,\beta)}(x) & =\frac{(\beta+1)_{n}}{(-1)^{n}n!}{}_{2}F_{1}\left(\begin{array}{c} -n;n+\alpha+\beta+1\\ \beta+1\end{array};\frac{1+x}{2}\right),\end{align*} we let\begin{align*} A_{n}^{(\alpha,\beta)}(y) & =P_{n}^{(\alpha,\beta)}(2y-1)\end{align*} then, for $\alpha,\beta>-1$ we have \begin{align*} \int_{0}^{1}A_{m}^{(\alpha,\beta)}(y)A_{n}^{(\alpha,\beta)}(y)\tilde{w}(y)dy & =h_{n}^{(\alpha,\beta)}\delta_{mn}\end{align*} for all nonnegative integers $n,m$ where\begin{align*} \tilde{w}(y) & =\frac{y^{\alpha}(1-y)^{\beta}\Gamma(\alpha+\beta+2)}{\Gamma(\alpha+1)\Gamma(\beta+1)},\end{align*} and\[ h_{n}^{(\alpha,\beta)}=\frac{(\alpha+\beta+1)(\alpha+1)_{n}(\beta+1)_{n}}{(2n+\alpha+\beta+1)n!(\alpha+\beta+1)_{n}}.\] Clearly, the $n$-th moment is \begin{align*} \mu_{n} & =\int_{0}^{1}y^{n}\tilde{w}(y)dy=\frac{(\alpha+1)_{n}}{(\alpha+\beta+2)_{n}}\end{align*} and the $n$-th orthonormal polynomial is \begin{align*} a_{n}^{(\alpha,\beta)}(y) & =\frac{1}{\sqrt{h_{n}^{(\alpha,\beta)}}}A_{n}^{(\alpha,\beta)}(y),\end{align*} or\begin{align*} a_{n}^{(\alpha,\beta)}(y) & =\sqrt{\frac{(2n+\alpha+\beta+1)(\beta+1)_{n}(\alpha+\beta+1)_{n}}{(\alpha+\beta+1)(\alpha+1)_{n}n!}}\\ \times & (-1)^{n}{}_{2}F_{1}\left(\begin{array}{c} -n;n+\alpha+\beta+1\\ \beta+1\end{array};y\right).\end{align*} Thus, \begin{align*} a_{n,k} & =\sqrt{\frac{(2n+\alpha+\beta+1)(\beta+1)_{n}(\alpha+\beta+1)_{n}}{(\alpha+\beta+1)(\alpha+1)_{n}n!}}\\ \times & \frac{(-n)_{k}(n+\alpha+\beta+1)_{k}}{(-1)^{n}(\beta+1)_{k}k!},\end{align*} and for each nonnegative integer $n$, we have\begin{align*} \det\left(\frac{(\alpha+1)_{j+k}}{(\alpha+\beta+2)_{j+k}}\right)_{0\le j,k\le n} & =\prod_{m=0}^{n}\frac{(\beta+1)_{m}(\alpha+1)_{m}m!}{(\alpha+\beta+2)_{2m}(m+\alpha+\beta+1)_{m}}\end{align*} and\begin{align*} \left(\frac{(\alpha+1)_{j+k}}{(\alpha+\beta+2)_{j+k}}\right)_{0\le j,k\le n}^{-1} & =\left(\gamma_{j,k}\right)_{0\le j,k\le n}\end{align*} with\begin{align*} \gamma_{j,k} & =\sum_{m=0}^{n}\frac{(2m+\alpha+\beta+1)(\beta+1)_{m}(\alpha+\beta+1)_{m}}{(\alpha+\beta+1)(\alpha+1)_{m}m!}\\ & \frac{(-m)_{j}(-m)_{k}(m+\alpha+\beta+1)_{j}(m+\alpha+\beta+1)_{k}}{j!k!(\beta+1)_{j}(\beta+1)_{k}}.\end{align*} Then, its smallest eigenvalue of the matrix $\left(\frac{(\alpha+1)_{j+k}}{(\alpha+\beta+2)_{j+k}}\right)_{0\le j,k\le n}$ has a lower bound \begin{align*} \lambda_{s} & \ge\left\{ \sum_{m=0}^{n}\frac{\left(P_{m}^{(\alpha,\beta)}(-3)\right)^{2}}{h_{m}^{(\alpha,\beta)}}\right\} ^{-1}.\end{align*} From the Christoffel-Darboux formula and the formula\[ 
\frac{d}{dx}P_{n}^{(\alpha,\beta)}(x)=\frac{(n+\alpha+\beta+1)}{2}P_{n-1}^{(\alpha+1,\beta+1)}(x),\] and\[ P_{n}^{(\alpha,\beta)}(-x)=(-1)^{n}P_{n}^{(\beta,\alpha)}(x)\] to get\begin{align*} \sum_{m=0}^{n}\frac{\left(P_{m}^{(\alpha,\beta)}(-3)\right)^{2}}{h_{m}^{(\alpha,\beta)}} & =\frac{(\beta+1+n)(\alpha+\beta+1+n)}{(\alpha+\beta+1+2n)(\alpha+\beta+2+2n)h_{n}^{(\alpha,\beta)}}\\ \times & \left\{ (\alpha+\beta+2+n)P_{n}^{(\beta+1,\alpha+1)}(3)P_{n}^{(\beta,\alpha)}(3)\right.\\ - & \left.(\alpha+\beta+1+n)P_{n+1}^{(\beta,\alpha)}(3)P_{n-1}^{(\beta+1,\alpha+1)}(3)\right\} .\end{align*} From the asymptotic formula of Jacobi polynomials to obtain \cite{Szego}\[ P_{n}^{(\alpha,\beta)}(3)=\frac{\left(3+2\sqrt{2}\right)^{n}}{n^{1/2}}\left\{ \phi_{0}(\alpha,\beta;3+2\sqrt{2})+\mathcal{O}\left(\frac{1}{n}\right)\right\} \] as $n\to\infty$ and\begin{align*} \sum_{m=0}^{n}\frac{\left(P_{m}^{(\alpha,\beta)}(-3)\right)^{2}}{h_{m}^{(\alpha,\beta)}} & =\phi_{0}(\alpha,\beta;3+2\sqrt{2})\phi_{0}(\alpha+1,\beta+1;3+2\sqrt{2})\\ \times & \frac{\Gamma(\alpha+1)\Gamma(\beta+1)(3+2\sqrt{2})^{2n}}{2\Gamma(\alpha+\beta+2)}\left\{ 1+\mathcal{O}\left(\frac{1}{n}\right)\right\} \end{align*} as $n\to\infty$. Thus\[ \left\{ \sum_{m=0}^{n}\frac{\left(P_{m}^{(\alpha,\beta)}(-3)\right)^{2}}{h_{m}^{(\alpha,\beta)}}\right\} ^{-1}=\mathcal{O}\left(\frac{1}{(3+2\sqrt{2})^{2n}}\right)\] as $n\to\infty$. \subsection{The $q$-Laguerre Polynomials $\{L_{n}^{(\alpha)}(x;q)\}_{n=0}^{\infty}$ with $q\in(0,1)$ and $\alpha>-1$} The $q$-Laguerre polynomials $\{L_{n}^{(\alpha)}(x;q)\}_{n=0}^{\infty}$ are defined as \cite{Andrews,Koekoek} \begin{align*} L_{n}^{(\alpha)}(x;q) & =\frac{(q^{\alpha+1};q)_{n}}{(q;q)_{n}}\sum_{k=0}^{n}\frac{(q^{-n};q)_{k}q^{\binom{k+1}{2}}q^{(\alpha+n)k}x^{k}}{(q;q)_{k}(q^{\alpha+1};q)_{k}}\end{align*} for $n\ge0$, and we assume that\[ L_{-1}^{(\alpha)}(x;q)=0.\] The moment problem of the $q$-Laguerre polynomials is indeterminate and one of the orthogonality for $\{L_{n}^{(\alpha)}(x;q)\}_{n=0}^{\infty}$ is\begin{align*} \sum_{k=-\infty}^{\infty}L_{m}^{(\alpha)}(q^{k};q)L_{n}^{(\alpha)}(q^{k};q)w_{ql}(q^{k};\alpha) & =\frac{(q^{\alpha+1};q)_{n}}{(q;q)_{n}q^{n}}\delta_{m,n},\end{align*} for $m,n\ge0$ with\begin{align*} w_{ql}(q^{k};\alpha) & =\frac{(q^{\alpha+1};q)_{\infty}(-q;q)_{\infty}(-1;q)_{k}q^{k(\alpha+1)}}{(q;q)_{\infty}(-q^{\alpha+1};q)_{\infty}(-q^{-\alpha};q)_{\infty}}.\end{align*} Clearly, the $n$-th moment is\begin{align*} \mu_{n}(\alpha) & =\sum_{k=0}^{\infty}q^{kn}w_{ql}(q^{k};\alpha)=\left(q^{\alpha+1};q\right)_{n}q^{-\alpha n-\binom{n+1}{2}},\end{align*} and the orthonormal system is given by \begin{align*} \ell_{n}^{(\alpha)}(x;q) & =\sqrt{\frac{(q^{\alpha+1};q)_{n}q^{n}}{(q;q)_{n}}}\sum_{k=0}^{n}\frac{(q^{-n};q)_{k}q^{\binom{k+1}{2}}q^{(\alpha+n)k}x^{k}}{(-1)^{n}(q;q)_{k}(q^{\alpha+1};q)_{k}},\end{align*} then,\begin{align*} a_{n,k} & =\sqrt{\frac{(q^{\alpha+1};q)_{n}q^{n}}{(q;q)_{n}}}\frac{(q^{-n};q)_{k}q^{\binom{k+1}{2}}q^{(\alpha+n)k}}{(-1)^{n}(q;q)_{k}(q^{\alpha+1};q)_{k}}.\end{align*} According to Theorem \ref{thm:1}, the matrix \begin{equation} \left((q^{\alpha+1};q)_{j+k}q^{-\binom{j+k+1}{2}-\alpha(j+k)}\right)_{j,k=0}^{n}\label{eq:2.2.1}\end{equation} has inverse \[ \left(\gamma_{j,k}\right)_{j,k=0}^{n}\] where\begin{align*} \gamma_{j,k} & =\sum_{m=0}^{n}\frac{(q^{\alpha+1};q)_{m}q^{m(j+k+1)}(q^{-m};q)_{j}(q^{-m};q)_{k}q^{\alpha(j+k)+\binom{j+1}{2}+\binom{k+1}{2}}}{(q;q)_{m}(q;q)_{j}(q;q)_{k}(q^{\alpha+1};q)_{j}(q^{\alpha+1};q)_{k}}.\end{align*} Its determinant is \begin{align*} 
\det\left((q^{\alpha+1};q)_{j+k}q^{-\binom{j+k+1}{2}-\alpha(j+k)}\right)_{j,k=0}^{n} & =\frac{{\displaystyle \prod_{m=0}^{n}(q;q)_{m}(q^{\alpha+1};q)_{m}}}{q^{n(n+1)(4n+6\alpha+5)/6}},\end{align*} and a lower bound for the smallest eigenvalue is\[ \left(\sum_{m=0}^{n}\left|\ell_{m}^{(\alpha)}(-1;q)\right|^{2}\right)^{-1}.\] But\begin{align*} \ell_{m}^{(\alpha)}(-1;q) & =\sqrt{\frac{(q^{\alpha+1};q)_{m}q^{m}}{(q;q)_{m}}}\sum_{k=0}^{m}\frac{(q^{-m};q)_{k}q^{\binom{k}{2}}\left(-q^{m+\alpha+1}\right)^{k}}{(-1)^{m}(q;q)_{k}(q^{\alpha+1};q)_{k}},\\ = & (-1)^{m}\sqrt{\frac{(q^{\alpha+1};q)_{m}q^{m}}{(q;q)_{m}}}{}_{1}\phi_{1}\left(q^{-m};q^{\alpha+1};q,q^{m+\alpha+1}\right)\\ = & (-1)^{m}\sqrt{\frac{(q^{\alpha+1};q)_{m}q^{m}}{(q;q)_{m}}}\frac{(q^{m+\alpha+1};q)_{\infty}}{(q^{\alpha+1};q)_{\infty}}\\ = & (-1)^{m}\sqrt{\frac{q^{m}}{(q,q^{\alpha+1};q)_{m}}},\end{align*} thus the lower bound for the smallest eigenvalue is\[ \left(\sum_{m=0}^{n}\frac{q^{m}}{(q,q^{\alpha+1};q)_{m}}\right)^{-1}.\] Since everything involved here is a rational function of the variable $q$, we may replace $q$ by $q^{-1}$ and then apply the relation\[ (a;q)_{n}=(-a)^{n}q^{\binom{n}{2}}(a^{-1};q^{-1})_{n}\] and the diagonal matrix \[ D_{n}=\left((-1)^{j}\delta_{j,k}\right)_{j,k=0}^{n},\quad D_{n}^{2}=I\] to obtain\begin{align*} \det\left((q^{\alpha+1};q)_{j+k}\right)_{0\le j,k\le n} & =q^{n(n+1)\alpha/2}q^{n(n+1)(2n+1)/6}\prod_{k=0}^{n}(q;q)_{k}(q^{\alpha+1};q)_{k},\end{align*} and\begin{align*} \left((q^{\alpha+1};q)_{j+k}\right)_{0\le j,k\le n}^{-1} & =\left(\sum_{m=0}^{n}\frac{q^{j+k}(q^{\alpha+1};q)_{m}(q^{-m};q)_{j}(q^{-m};q)_{k}}{(q;q)_{j}(q;q)_{k}(q^{\alpha+1};q)_{j}(q^{\alpha+1};q)_{k}(q;q)_{m}q^{(\alpha+1)m}}\right)_{j,k=0}^{n},\end{align*} and the smallest eigenvalue has the lower bound\[ \left(\sum_{m=0}^{n}\frac{q^{2\binom{m}{2}}q^{m(\alpha+1)}}{(q,q^{\alpha+1};q)_{m}}\right)^{-1},\] which is in turn bounded from below by the absolute constant\[ \left(\sum_{m=0}^{\infty}\frac{q^{2\binom{m}{2}}q^{m(\alpha+1)}}{(q,q^{\alpha+1};q)_{m}}\right)^{-1}=(q^{\alpha+1};q)_{\infty}.\] \end{document}
\begin{document} \title{Tight Vector Bin Packing with Few Small Items via Fast Exact Matching in Multigraphs} \begin{abstract} We solve the Bin Packing problem in $O^*(2^k)$ time, where $k$ is the number of items less or equal to one third of the bin capacity. This parameter measures the distance from the polynomially solvable case of only large (i.e., greater than one third) items. Our algorithm is actually designed to work for a more general Vector Bin Packing problem, in which items are multidimensional vectors. We improve over the previous fastest $O^*(k! \cdot 4^k)$ time algorithm. Our algorithm works by reducing the problem to finding an exact weight perfect matching in a (multi-)graph with $O^*(2^k)$ edges, whose weights are integers of the order of $O^*(2^k)$. To solve the matching problem in the desired time, we give a variant of the classic Mulmuley-Vazirani-Vazirani algorithm with only a~linear dependence on the edge weights and the number of edges -- which may be of independent interest. Moreover, we give a tight lower bound, under the Strong Exponential Time Hypothesis (SETH), showing that the constant $2$ in the base of the exponent cannot be further improved for Vector Bin Packing. Our techniques also lead to improved algorithms for Vector Multiple Knapsack, Vector Bin Covering, and Perfect Matching with Hitting Constraints. \end{abstract} \section{Introduction} NP-hard problems often have special cases that can be solved in polynomial time, e.g., Vertex Cover is tractable in graphs with the Kőnig property, Dominating Set is tractable in trees, and Longest Common Subsequence is tractable in permutations. Many of these problems remain (fixed-parameter) tractable with a distance from the polynomially solvable case taken as a parameter, e.g., Vertex Cover Above Matching~\cite{RazgonO08}, Dominating Set in bounded treewidth graphs~\cite{AlberBFKN02}, or Longest Common Subsequence parameterized by the maximum occurrence number~\cite{GuoHN04}. In parameterized complexity, this concept is sometimes dubbed \emph{distance from triviality}~\cite{GuoHN04}. In the Bin Packing problem, we are given $n$ \emph{items} from $\mathbb{Q}_{\geqslant 0}$, and we have to pack them into the smallest possible number of unit-sized \emph{bins}. It is a classic strongly NP-hard problem. When all items are \emph{large}, i.e., greater than $\nicefrac{1}{3}$, then no three items can fit into a single bin and the problem reduces to the Maximum Matching problem, and hence it can be solved in polynomial time~\cite{Edmonds65}. Bannach et al.~\cite{Bannach20} are the first to study Bin Packing parameterized by the number $k$ of \emph{small} (i.e., $\leqslant \nicefrac{1}{3}$) items -- which is the distance from the above tractable case. They give algorithms running in randomized $O^{*}\!(k! \cdot 4^k)$ time, and deterministic $O^{*}\!\big((k!)^2 \cdot 2^k\big)$ time.\footnote{We use $O^{*}\!(\cdot)$ notation to suppress factors polynomial in the input size $n$, i.e., $O^{*}\!(f(k)) = f(k) \cdot n^{O(1)}$\!.} Their randomized algorithm works even for a more general Vector Bin Packing problem, in which items are $d$-dimensional vectors from $\mathbb{Q}_{\geqslant 0}^d$, and a set of items fits into a bin if their coordinate-wise sum does not exceed $1$ in any coordinate. (The notion of a small item is more complex in the multidimensional case; see Section~\ref{sec:definition} for the definition.) We improve upon their result by giving an $O^{*}\!(2^k)$ randomized time algorithm for Vector Bin Packing. 
We complement it with a matching conditional lower bound, showing that the constant $2$ in the base of the exponent cannot be further improved, unless the Strong Exponential Time Hypothesis ({\small SETH}) fails. Our algorithm works by reducing the problem to finding a perfect matching of a~given total weight in an edge-weighted (multi-)graph. The graph has only $O(n)$ nodes, but can have up to $O(2^kn^2)$ edges, whose weights are integers of the order of $2^k \cdot k$. To solve the matching problem in the desired $O^{*}\!(2^k)$ time, we give a variant of the classic Mulmuley-Vazirani-Vazirani algorithm~\cite{Mulmuley1987-mc} with only a linear dependence on the edge weights and the number of edges -- which may be of independent interest. Our techniques also lead to improved algorithms for the two other problems studied by Bannach et al.~\cite{Bannach20}, i.e., the Vector Multiple Knapsack and Vector Bin Covering problems, as well as for the Perfect Matching with Hitting Constraints problem, studied by Marx and Pilipczuk~\cite{Marx14}. \subsection{Vector Bin Packing with Few Small Items} \label{sec:definition} First, let us formally define Vector Bin Packing as a decision problem. We remark that (Vector) Bin Packing is also often studied as an optimization problem -- especially in the context of approximation algorithms -- but one can always switch between the two variants via binary search, losing at most a factor of $O(\log n)$ in the running time. \begin{problem}{Vector Bin Packing} Given: & a set of $n$ items $V = \{v_1, \ldots, v_n\} \subseteq \mathbb{Q}_{\geqslant 0}^d$, \\ & and an integer $\ell \in \mathbb{Z}_+$ denoting the number of unit-sized bins. \\[3pt] Decide: & if the items can be partitioned into $\ell$ bins $B_1 \cup \cdots \cup B_\ell = V$ such that \\ & $\sum_{v \in B_i} v[j] \leqslant 1$ for every bin $i \in [\ell]$ and every dimension $j \in [d]$.\footnotemark \end{problem} \footnotetext{We use $[n]$ to denote the set of integers $\{1,2,\ldots,n\}$.} Note that the assumption that bins are unit-sized is without loss of generality, as one can always independently scale each dimension in order to meet that constraint. We can also safely assume that $V$ is a set, as one can handle multiple occurrences of the same item by introducing one extra dimension with negligibly small but unique coordinates. Unlike in the one-dimensional Bin Packing problem, where a small item can be defined simply as smaller than or equal to $\nicefrac{1}{3}$, we use a more complex definition, introduced by Bannach et al.~\cite{Bannach20}. Let $V \subseteq \mathbb{Q}_{\geqslant 0}^d$ be a set of $d$-dimensional items. We say that a subset $V' \subseteq V$ is \textbf{3-incompatible} if no three distinct items from $V'$ fit into a unit-sized bin, i.e., for every distinct $u, v, w \in V'$ there exists a dimension $i \in [d]$ such that $u[i]+v[i]+w[i] > 1$. Now we can define the parameterized problem that we study. \begin{problem}{Vector Bin Packing with Few Small Items} Parameter: & the number of \emph{small} items $k$. \\[3pt] Given: & a set of $n$ items $V = \{v_1, \ldots, v_n\} \subseteq \mathbb{Q}_{\geqslant 0}^d$, \\ & a subset of $k$ items $V_S \subseteq V$ such that $V_L = V \setminus V_S$ is \textbf{3-incompatible}, \\ & and an integer $\ell \in \mathbb{Z}_+$ denoting the number of unit-sized bins. 
\\[3pt] Decide: & if the items can be partitioned into $\ell$ bins $B_1 \cup \cdots \cup B_\ell = V$ such that \\ & $\sum_{v \in B_i} v[j] \leqslant 1$ for every bin $i \in [\ell]$ and every dimension $j \in [d]$. \end{problem} We say that items in $V_S$ are \emph{small}, and the remaining items in $V_L = V \setminus V_S$ are \emph{large}. Note that we assume that a subset of small items is specified in the input. This way we can study the complexity of the packing problem independently of the complexity of finding a (smallest) subset of small items. This is similar, e.g., to the standard practice for treewidth parameterization, where one assumes that a suitable tree decomposition is given in the input (see, e.g., \cite{CyganFKLMPPS15}). We remark that if only the set of all items $V$ is given, a smallest possible subset of small items can be found in $O^{*}\!(2.0755^k)$ time~\cite{Wahlstrom07} via a reduction to the 3-Hitting Set problem~\cite{Bannach20}. \subsection{Our results} Our main result is an $O^{*}\!(2^k)$ time randomized algorithm for Vector Bin Packing with Few Small Items. The algorithm consists of two parts: reducing the packing problem to a matching problem, and solving the matching problem. More formally, we first prove the following. \begin{restatable}{lemma}{lempacktomatch} \label{lem:packtomatch} An $n$-item instance of Vector Bin Packing with $k$ small items can be reduced, in deterministic time $O(2^kn^2kd)$, to the problem of finding an exact-weight perfect matching in a (multi-)graph. The graph has $O(n)$ vertices, $O(2^kn^2)$ edges, and non-negative integer edge weights that do not exceed $O(2^kk)$. The target exact total weight of a matching is $O(2^kk)$. \end{restatable} The above matching problem is dubbed Exact Matching, and is known to be in randomized\footnote{It is a big open problem to derandomize the algorithm, see, e.g., \cite{SvenssonT17}.} (pseudo-)polynomial time since the Mulmuley-Vazirani-Vazirani algorithm~\cite{Mulmuley1987-mc}. The algorithm directly solves the $0$/$1$ weights variant of Exact Matching in simple graphs. A prior reduction of Papadimitriou and Yannakakis~\cite{Papadimitriou1982-qg} handles arbitrary non-negative integer edge weights and multiple parallel edges. The reduction replaces each edge of weight $w$ by a path of length $2w+1$ with alternating $0$/$1$ edge weights. The reduction multiplies the number of vertices by the edge weights and by the number of edges. Further, Mulmuley-Vazirani-Vazirani is not a linear time algorithm. Hence, this would give us only a $2^{O(k)}n^{O(1)}$ time algorithm for Vector Bin Packing. This is already an improvement over the previous factorial time algorithm, but still not our desired $2^k n^{O(1)}$ running time. There are more direct and faster ways to solve the general Exact Matching problem than going through the Papadimitriou-Yannakakis reduction. It seems folklore to handle arbitrary edge weights by replacing a monomial $x$, corresponding to a weight-one edge in the Mulmuley-Vazirani-Vazirani algorithm, with $x^w$, where $w$ is the edge weight. It remains to handle multiple parallel edges. A crucial part of the algorithm is the so-called \emph{isolation lemma}. It assigns random \emph{costs} to edges, ensuring that the minimum cost perfect matching of the target weight is unique, and hence it cannot cancel out in the algebraic computations. 
The range of costs, required to ensure that property, on one hand depends on the number of edges, and on the other hand, determines the bitsize of the costs, on which the algorithm later needs to do arithmetic. Due to the number of edges in Lemma~\ref{lem:packtomatch}, a direct application of the isolation lemma would lead to an $O^{*}\!(4^k)$ time algorithm. To mitigate this issue, we carefully apply the isolation lemma to pairs of vertices, and hence the number of edges appears in the running time only as a linear additive factor. \begin{restatable}{theorem}{thmmatching} \label{thm:matching} Given an edge-weighted multigraph with $n$ nodes and $m$ edges, and an integer~$t$, a randomized Monte Carlo algorithm can decide whether there is a perfect matching of total weight exactly $t$ in $\widetilde O(t \cdot n^8 + m)$ time. \end{restatable} Lemma~\ref{lem:packtomatch} and Theorem~\ref{thm:matching} together imply our main result. \begin{theorem} \label{thm:binpacking} There is a randomized Monte Carlo algorithm solving Vector Bin Packing with Few Small Items in $O^{*}\!(2^k)$ time. \end{theorem} In Appendix~\ref{sec:alt} we give an alternative proof of Theorem~\ref{thm:binpacking}, using a different algorithm, whose running time has a better dependence on the number of items $n$. This algorithm, however, presents a more complicated and tailored approach; in particular, it does not seem to generalize to the Vector Bin Covering problem that we discuss later. \subsubsection*{Lower bound} We show that the above result is tight, via a matching conditional lower bound, under the Strong Exponential Time Hypothesis ({\small SETH})~\cite{ImpagliazzoP01}. The hypothesis states that deciding $k${\small-CNF-SAT} with $n$ variables requires $2^{s_k n}$ time for some constants $s_k$ with $\lim_{k \to \infty} s_k = 1$. In particular, it implies that deciding {\small CNF-SAT} requires $2^{(1-\varepsilon) n}$ time, for every $\varepsilon > 0$. {\small SETH} is a standard hardness assumption for conditional lower bounds in fine-grained and parameterized complexity~\cite{CyganFKLMPPS15,williams2018some}. We prove the following lower bound for the (non-parameterized) Vector Bin Packing problem. \begin{restatable}{theorem}{thmlowerbound} Unless {\small SETH} fails, Vector Bin Packing cannot be solved in $O^{*}\!(2^{(1-\varepsilon) n})$ time, for any $\varepsilon > 0$. This holds even restricted to instances with only two bins and dimension $d = O(n)$. \end{restatable} Since $k \leqslant n$, the corollary for the parameterized version of the problem follows immediately, proving that the algorithm of Theorem~\ref{thm:binpacking} is tight. \begin{corollary} Unless {\small SETH} fails, Vector Bin Packing with Few Small Items cannot be solved in $O^{*}\!(2^{(1-\varepsilon) k})$ time, for any $\varepsilon > 0$. This holds even restricted to instances with only two bins and dimension $d = O(n)$. \end{corollary} We remark that our lower bound crucially relies on multiple dimensions. The best known hardness result for the (one-dimensional) Bin Packing problem rules out only $2^{o(n)}$~time algorithms~\cite{JansenLL16}, assuming the Exponential Time Hypothesis ({\small ETH})~\cite{Impagliazzo98}. It is a big open problem whether an $O(1.99^n)$ time algorithm for Bin Packing exists. Recently, Nederlof et al.~\cite{NederlofPSW21} gave such an algorithm for any constant number of bins. This is in contrast to Vector Bin Packing, which, as we show, requires $2^{(1-\varepsilon) n}$ time already for two bins. 
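As a small, self-contained illustration of the probabilistic tool behind Theorem~\ref{thm:matching}, the following Python sketch (an informal illustration with arbitrarily chosen parameters, not part of the algorithm or its analysis) estimates how often costs drawn uniformly from $\{1, \ldots, 2|S|\}$ isolate a unique minimum-cost member of a fixed set family; the isolation lemma, restated as Lemma~\ref{lem:isolation-lemma} below, guarantees probability at least $\nicefrac{1}{2}$, and the observed frequency is typically much higher.
\begin{verbatim}
import random

def isolation_frequency(universe_size=10, family_size=200, trials=2000):
    # S is the ground set; F is an arbitrary fixed family of distinct subsets of S
    # (here chosen at random once, before any costs are drawn).
    S = list(range(universe_size))
    F = {frozenset(x for x in S if random.random() < 0.5)
         for _ in range(family_size)}
    unique = 0
    for _ in range(trials):
        # Draw a cost c(x) uniformly from {1, ..., 2|S|} for every element x.
        c = {x: random.randint(1, 2 * len(S)) for x in S}
        totals = sorted(sum(c[x] for x in A) for A in F)
        # The trial succeeds if the minimum total cost is attained by a unique set.
        if len(totals) == 1 or totals[0] != totals[1]:
            unique += 1
    return unique / trials

print(isolation_frequency())  # typically close to 1; the lemma guarantees >= 1/2
\end{verbatim}
In the actual algorithm the family consists of perfect matchings and the random costs are assigned to pairs of vertices rather than to single elements, but the isolation phenomenon exploited is the same.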
\subsubsection*{Other applications} Bannach et al.~\cite{Bannach20} studied two further problems closely related to the Vector Bin Packing problem -- namely, Vector Multiple Knapsack and Vector Bin Covering -- under similar parameterizations. In the Vector Multiple Knapsack problem, each item comes with a \emph{profit}, and instead of having to pack all the items, we aim to pack a subset of the items into a fixed number of bins while maximizing the overall profit of the packed items. In the few small items regime, the fastest known algorithm so far has a running time of $O^{*}\!(k! \cdot 4^k)$, where $k$ is the number of small items~\cite{Bannach20}. Adapting our algorithm to handle the profits and the obstacle that only a subset of items might be packed, we obtain the following theorem. \begin{restatable}{theorem}{thmKnapsack} There is a randomized Monte Carlo algorithm solving Vector Multiple Knapsack with Few Small Items in $O^{*}\!(2^k)$ time when item profits are bounded by $\operatorname{poly}(n)$. \end{restatable} In the Vector Bin Covering problem, we aim to \emph{cover} bins. Intuitively speaking, instead of packing the items into as few bins as possible, we want to partition them into as many bins as possible while satisfying a \emph{covering constraint} for each bin. This new desired property of a solution leads to a slightly different definition of the set of small items: instead of any three large items not fitting together into a bin, now they cover a bin. So far, the fastest algorithm solving this problem parameterized by the number $k$ of small items\footnote{Even though Bannach et al.~do not explicitly adapt their definition of a small item to this problem, they indeed work with the same definition as we do. In the full version on arXiv~\cite[page 11]{Bannach20Long} they write: ``The large vectors have the property that every subset of three vectors cover a container.''} runs in time $O^{*}\!(k! \cdot 4^k)$~\cite{Bannach20}. We give the following improvement. \begin{restatable}{theorem}{thmCover} There is a randomized Monte Carlo algorithm solving Vector Bin Covering with Few Small Items in $O^{*}\!(2^k)$ time. \end{restatable} Further, our results directly imply an improved running time for the Perfect Matching with Hitting Constraints problem. This problem asks whether we can find a~perfect matching in a graph using at least one edge from each of given subsets of edges. It was studied by Marx and Pilipczuk~\cite{Marx14} as a tool for solving a subgraph isomorphism problem in forests. They gave an algorithm (for the matching problem) running in time $2^{O(k)}n^{O(1)}$, where $k$ is the number of edge subsets. Their algorithm shares certain similarities with our Vector Bin Packing algorithm. They use, however, a~less efficient encoding of subsets into edge weights (using $2k$ bits, compared to $k \log k$ bits we achieve in Lemma~\ref{lem:edgeWeights}), and they only coarsely analyze the polynomial dependence on the weights when solving Exact Matching. Avoiding these two inefficiencies, we prove the following theorem. \begin{restatable}{theorem}{thmPMwHC} There is a randomized Monte Carlo algorithm solving Perfect Matching with Hitting Constraints in $O^{*}\!(2^k)$ time. \end{restatable} \section{From Vector Bin Packing to Exact Matching}\label{sec:PackingToMatching} \lempacktomatch* \begin{proof} We interpret the problem of finding a packing as the problem of finding a~perfect matching with a certain total weight in an edge-weighted (multi-)graph. 
Intuitively, each large item is represented by a vertex, and an edge connects two large items if they fit together into a bin. The edge weight indicates a set of small items which can be packed together with the endpoints~(i.e., the corresponding large items). The goal is to match (pack) all large items while achieving the total weight that corresponds to all small items being assigned to some pairing of large items. Formally, we first add $2\ell - \lvert V_L\rvert$ dummy items $\langle 0, \dots, 0 \rangle \in \mathbb{Q}_{\geqslant 0}^d$ to the set $V_L$ so that each bin will contain exactly two large items. A dummy item can be paired with another dummy item (no original large item is in that bin), or with an original large item (only one original large item is in that bin). For each large item $v \in V_L$ (including the dummy items), create a vertex $u_v$. For each pair of large items $v_1, v_2 \in V_L$, $v_1 \neq v_2$, and for each subset $V'_S \subseteq V_S$ of small items, introduce an edge between $u_{v_1}$ and $u_{v_2}$ if $v_1[i] + v_2[i] + \sum_{v \in V'_S}v[i] \leqslant 1$ for all $i \in [d]$, i.e., the small items fit together with the two large ones into a bin.\footnote{Note that it is important to add an edge for each fitting subset, and not, e.g., only for inclusion-wise maximal fitting subsets. That is because we design the edge weights so that an exact matching corresponds to a partition (and not to a cover) of the set $V_s$.} The weight of the edge will depend on $V'_S$ (but not on $v_1$ and $v_2$). We need to design the edge weights such that each collection of edges of a certain total weight corresponds to a collection of subsets of small items that form a partition of the set of all small items $V_s$, and vice versa. A naive, but incorrect, solution would be to label the small items with integers $1, 2, \ldots, k$, and assign to a subset $X \subseteq [k]$ the integer whose binary representation corresponds to the indicator vector of $X$, i.e., $\sum_{x \in X} 2^{x-1}$. It is true that, with such weights, any collection of edges whose associated subsets form a partition of $V_s$ has the total weight $1\ldots1_2 = 2^k - 1$. However, the reverse statement is not true: it is possible to obtain the total weight $2^k-1$ by, e.g., taking $2^k-1$ edges that each allow small item $1$ but no other small items. As we will show in Lemma~\ref{lem:edgeWeights}, in order to prevent such false positives, it suffices to concatenate the indicator vectors with $(\log k)$-bit counters denoting the number of elements in a set.\footnote{Marx and Pilipczuk~\cite{Marx14} solve a similar issue by concatenating the indicator vector with its reverse, i.e., they assign to $X$ weight $\sum_{x \in X} (2^{2k-x} + 2^{x-1})$. Their approach results in weights of the order of $4^k$, which is prohibitively large for achieving $O^{*}\!(2^k)$ running time.} More formally, we assign to a subset $X \subseteq [k]$ the weight $|X| \cdot 2^k + \sum_{x \in X} 2^{x-1}$, i.e., the $(k+\log k)$-bit integer whose $k$ least significant bits correspond to the indicator vector of $X$ and the $\log k$ most significant bits form the integer equal to the cardinality of $X$. The target total weight $k \cdot 2^k + (2^k - 1)$ can only be achieved by summing weights given to subsets forming a partition of $V_s$, i.e., by assigning each small item to (exactly) one matching edge. 
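As a quick sanity check of this weight scheme on the smallest interesting case (an illustration only; the general statement is Lemma~\ref{lem:edgeWeights} below), take $k=2$: the subsets $\emptyset$, $\{1\}$, $\{2\}$, $\{1,2\}$ receive the weights \[0,\qquad 1\cdot 2^{2}+1=5,\qquad 1\cdot 2^{2}+2=6,\qquad 2\cdot 2^{2}+3=11,\] and the target total weight is $2\cdot 2^{2}+(2^{2}-1)=11$. The only multisets of these weights summing to $11$ are $\{11\}$ and $\{5,6\}$ (possibly padded with weight-$0$ empty sets), i.e., exactly the partitions of $\{1,2\}$, whereas under the naive indicator weights $1,2,3$ with target $3$, three edges each carrying the subset $\{1\}$ would already produce a false positive.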
\end{proof} \begin{lemma}\label{lem:edgeWeights} Fix the universe size $k \in \mathbb{N}$, and let $f: 2^{[k]} \to \mathbb{N}$ be given by \[f(X) = |X| \cdot 2^k + \sum_{x \in X} 2^{x - 1}.\] Then, a family $X_1, \ldots, X_n \subseteq [k]$ is a partition\footnote{That is, $X_1 \cup \cdots \cup X_n = [k]$, and $X_i \cap X_j = \emptyset$ for every $i \neq j$.} of $[k]$ if and only if \[f(X_1) + \cdots + f(X_n) = k \cdot 2^k + (2^k - 1).\] \end{lemma} \begin{proof} The ``partition\,$\Rightarrow$\,sum'' direction follows from a simple calculation. Let us prove the ``sum\,$\Rightarrow$\,partition'' direction. For $i \in [k]$, let $c_i$ denote the number of sets containing element~$i$. We want to show that $c_i = 1$, for every $i$. We have \[f(X_1) + \cdots + f(X_n) = \bigg(\sum_{i=1}^{k} c_i\bigg) \cdot 2^k + \sum_{i=1}^{k} c_i2^{i-1}.\] Note that the $k$ least significant bits of the sum $f(X_1) + \cdots + f(X_n)$ are lower bounding the term $\sum_{i=1}^{k} c_i2^{i-1}$, and the remaining bits are upper bounding the term $\sum_{i=1}^{k} c_i$, that is, \[ \sum_{i=1}^{k} c_i2^{i-1} \geqslant 2^k - 1 = {\overbrace{1\ldots1}^{k \text{ ones}}}_2, \quad \text{and} \quad \sum_{i=1}^{k} c_i \leqslant k. \] For $i = 0, 1, \dots, k$, let $p_i = c_1 + \cdots + c_i$, with $p_0 = 0$. Observe that $p_i \geqslant i$, for every $i$, as otherwise there are not enough bits to set the one in every position among the $i$ least significant bits of the sum $f(X_1) + \cdots + f(X_n)$.\footnote{It follows from the fact that the number of one-bits in the sum is less or equal to the total number of one-bits in the summands, and that this holds even if we look only at the $i$ least significant bits.} Moreover, $p_k = \sum_{i=1}^{k} c_i \leqslant k$, and thus $p_k = k$. Last but not least, by definition, $c_i = p_i - p_{i-1}$. We have \begin{spreadlines}{.8em} \begin{align*} 2^k - 1 \: & \leqslant \: \sum_{i=1}^{k} 2^{i-1} c_i \:=\: \sum_{i=1}^{k} 2^{i-1} (p_i - p_{i-1}) \:=\: \sum_{i=1}^{k} 2^{i-1} p_i - \sum_{i=1}^{k} 2^{i-1} p_{i-1} \\ & = \: \sum_{i=1}^{k} 2^{i-1} p_i - \sum_{i=0}^{k-1} 2^i p_i \:=\: 2^k p_k + \sum_{i=1}^{k} (2^{i-1}-2^i) p_i - 2^0 p_0 \:=\: 2^k p_k - \sum_{i=1}^{k} 2^{i-1} p_i \\ & = \: 2^k \cdot k - \sum_{i=1}^{k} 2^{i-1} p_i \\ & \leqslant \: 2^k \cdot k - \sum_{i=1}^{k} 2^{i-1} i \:=\: 2^k \cdot k - \big( (k-1) \cdot 2^k + 1 \big) \:=\: 2^k - 1. \end{align*} \end{spreadlines} Hence, all the inequalities must be tight. In particular, $p_i = i$ for every $i$, and thus $c_i = 1$, i.e., each element of the universe is contained in exactly one set of the family. \end{proof} \section{Fast Exact Weight Matching in Multigraphs} In this section we give our variant of the Mulmuley-Vazirani-Vazirani algorithm, with only a linear dependence on the edge weights and a linear additive dependence on the number of edges, proving Theorem~\ref{thm:matching}. \subsection{The Pfaffian} At the heart of the matching algorithm lies a computation of the Pfaffian of a skew-symmetric matrix of certain polynomials. In order to introduce the notion of a Pfaffian properly, let us fix some definitions and notation first. For an $n \times n$ matrix $A$, we denote by $A[i, j]$ the value in the $i$-th row and $j$-th column. We say that $A$ is skew-symmetric if and only if $A[i,j] = -A[j, i]$ for every $i, j \in [n]$. Let $\mathcal{M}$ be a perfect matching in the complete graph $K_{n}$. 
We can look at $\mathcal{M}$ as a sequence of edges in some arbitrary order, i.e., \[\mathcal{M} = (i_1, j_1), (i_2, j_2), \ldots, (i_{n/2}, j_{n/2}),\] where, by convention, $i_k \leqslant j_k$ for any $k$. Now, we define the sign of $\mathcal{M}$ as follows: \[\operatorname{sgn} \mathcal{M} = \operatorname{sgn} \left( \begin{smallmatrix} 1 & 2 & 3 & 4 & \cdots & n-1 & n \\ i_1 & j_1 & i_2 & j_2 & \cdots & i_{n/2} & j_{n/2} \end{smallmatrix} \right),\] where the right-hand side is the sign of a permutation. One can easily show that this definition does not depend on the chosen order of the edges. Now, we are ready to give the definition of a Pfaffian. \begin{definition}[Pfaffian] Let $A$ be an $n \times n$ skew-symmetric matrix. The Pfaffian of $A$ is denoted by $\operatorname{pf}(A)$ and is defined as follows \[\operatorname{pf}(A) = \sum \biggl\{ \operatorname{sgn} \mathcal{M} \cdot \prod_{\mathclap{(i_k, j_k) \in \mathcal{M}}} A[i_k, j_k] \biggm\vert \mathcal{M} \text{ perfect matching in } K_n \biggr\}. \] \end{definition} We note that since $A$ is skew-symmetric, our convention that $i_k \leqslant j_k$ does not affect the definition of the Pfaffian at all -- if we were to switch $i_k$ and $j_k$, the sign of the matching changes, but so does the sign of the product of the weights. Several equivalent definitions of a Pfaffian exist in the literature. However, we have chosen this one, as it immediately illustrates the connection between the Pfaffian and perfect matchings. The Pfaffian of a matrix over an arbitrary field can be computed by, e.g., a variant of the Gaussian elimination. However, since we are dealing with polynomial matrices, we would like to avoid divisions. Fortunately, several division-free polynomial time algorithms for computing Pfaffian exist \cite{Mahajan99, Rote2001-ra, Urbanska07}. Incidentally, Mahajan, Subramanya, and Vinay \cite{Mahajan99} give a dynamic programming algorithm computing the Pfaffian of a matrix with entries from an arbitrary ring that makes $O(n^4)$ additions and multiplications (see also survey \cite{Rote2001-ra} for an alternative exposition)\footnote{Urbańska's algorithm \cite{Urbanska07} runs even faster, in $O(n^{3.005})$ time. But since we care more about getting linear dependence on the target weight in our matching algorithm, rather than optimizing the polynomial dependence on $n$, we have chosen to use a slightly slower, yet simpler algorithm for the sake of clarity.}. By analysing the structure of their algorithm, we get the following result for matrices with polynomial entries. \begin{theorem}[cf.~\cite{Mahajan99}, Section 4] \label{alg:pfaffian} Given an $n \times n$ matrix $A$ of univariate polynomials of degree at most~$d$ and integer coefficients bounded by $M$, the Pfaffian $\operatorname{pf}(A)$ can be computed in $\widetilde O(n^6 d \log M)$ time. \end{theorem} \begin{proof} The algorithm in \cite{Mahajan99}, Section 4, is described as a weighted {\small DAG} $H_A$ with each vertex corresponding to a state of the dynamic program. The weights on the edges are signed entries of the matrix $A$. There is an auxiliary starting state $s \in H_A$ and the dynamic programming value for a state $v \in H_A$ is a sum of products of weights along all the paths from $s$ to $v$. Moreover, $H_A$ has $O(n^3)$ vertices, depth equal to $O(n)$ and indegree of each vertex equal to $O(n)$. 
Therefore, if the entries of $A$ are polynomials of degree $d$ and coefficients bounded by $M$, then the values of the dynamic programming states are polynomials with a degree bounded by $O(nd)$ and coefficients bounded by $O(n^n M^n)$. Hence, by using {\small FFT}, we can perform each arithmetic operation in $\widetilde O(n^2 d \log M)$ time. The number of arithmetic operations needed is proportional to the number of edges in $H_A$, which is $O(n^4)$. This yields the desired time bound. \end{proof} Since we do not need to compute the whole Pfaffian in the Exact Matching problem, but are only interested in the coefficient of the monomial $x^t$ (which conveys the information about matchings of the target weight $t$), we can speed up the computation by a factor of $n$. \begin{corollary} \label{alg:pfaffian-coefficient} Given an integer $t$ and an $n \times n$ matrix $A$ of univariate polynomials with integer coefficients bounded by $M$, a coefficient of the monomial $x^t$ in $\operatorname{pf}(A)$ can be computed in $\widetilde O(n^5 t \log M)$ time. \end{corollary} \begin{proof} In the algorithm from Theorem \ref{alg:pfaffian}, we can perform all the arithmetic operations modulo $x^{t+1}$. Then, the degree of the polynomials is bounded by $O(t)$ instead of $O(nd)$, and a similar analysis follows. \end{proof} \subsection{The algorithm} We first recall the central lemma of the Mulmuley-Vazirani-Vazirani algorithm, used to deal with possible cancellations caused by varying signs in the Pfaffian definition. \begin{lemma}[Isolation Lemma, cf.~\cite{Mulmuley1987-mc}] \label{lem:isolation-lemma} Let $S$ be a finite set, and let $F \subseteq 2^S$ be a family of subsets of $S$. To each element $x \in S$, we assign an integer cost $c(x)$ chosen uniformly and independently at random from $\{1, \ldots, 2 |S|\}$. For a subset $S' \subseteq S$, we define a total cost of $S'$ to be $c(S') = \sum_{x \in S'} c(x)$. Then, \[\mathbb{P}(\text{there is a unique minimum total cost set in } F) \geqslant \frac{1}{2}.\] \end{lemma} \noindent Now we are ready to present the matching algorithm. \thmmatching* \begin{proof} We first present the algorithm. Then we argue its correctness and analyse the running time. \vskip 1em \noindent\textbf{Algorithm.}\quad For every $\{u, v\} \in \binom{V}{2}$, let $E_{\{u,v\}} = \{ e \in E : e \text{ connects $u$ and $v$}\}$ denote the set of (parallel) edges between nodes $u$ and $v$. For an edge $e \in E$, we use $w(e) \in \mathbb{Z}_{\geqslant 0}$ to denote the weight of $e$. Moreover, we assume w.l.o.g.~that $V = [n]$. The algorithm works as follows. \begin{enumerate} \item Set $\lambda = 2m^{n}$. \item For every $\{i,j\} \in \binom{V}{2}$, assign a cost $c(\{i,j\})$ uniformly at random from $\{1, \ldots, 2\binom{n}{2} \}$. \item Set up an $n \times n$ matrix $A$ of univariate polynomials: For each $i, j \in [n]$, $i \leqslant j$, put \[A[i,j] = \lambda^{c(\{i,j\})} \sum_{\mathclap{e \in E_{\{i,j\}}}} x^{w(e)}, \quad \text{and} \quad A[j,i] = -A[i,j].\] \item Compute the coefficient of $x^t$ in $\operatorname{pf}(A)$ using the algorithm from Corollary \ref{alg:pfaffian-coefficient}. \item If the coefficient of $x^t$ in $\operatorname{pf}(A)$ is nonzero return {\small YES}, otherwise return {\small NO}. \end{enumerate} \vskip 1em \noindent\textbf{Correctness.}\quad We use $\operatorname{coef}_{t}(\operatorname{pf}(A))$ to denote the coefficient of $x^t$ in $\operatorname{pf}(A)$. 
For every perfect matching $\mathcal{M}$ in the complete graph $K_n$, let \[ f(\mathcal{M}) = \operatorname{sgn}~\mathcal{M} \cdot \lambda^{c(\mathcal{M})} \cdot \, \# \text{perfect matchings in } G \text{ of weight } t \text{ contained\footnotemark{} in } \mathcal{M} \] \footnotetext{We say that a matching (in multigraph $G$) is contained in another matching (in the complete graph~$K_n$) if the set of $n/2$ pairs of endpoints is the same for the two matchings.} denote the contribution of matching $\mathcal{M}$ to the coefficient $\operatorname{coef}_t(\operatorname{pf}(A))$. Now, we have \begin{equation}\label{eq:pft} \operatorname{coef}_t(\operatorname{pf}(A)) = \sum \bigl\{ f(\mathcal{M}) \bigm\vert \mathcal{M} \text{ perfect matching in } K_n \bigr\}. \end{equation} Let $F$ be the family of all perfect matchings in $K_n$ that contain a perfect matching in $G$ of weight exactly $t$. If $F = \emptyset$, then every summand in (\ref{eq:pft}) is zero. Hence, $\operatorname{coef}_t(\operatorname{pf}(A)) = 0$ and our algorithm answers correctly. If $F \neq \emptyset$, then by Isolation Lemma, with probability at least $\nicefrac{1}{2}$, there is only one minimum cost perfect matching $\mathcal{N} \in F$. Let $c = c(\mathcal{N})$. Observe that the number of perfect matchings in $G$ of weight $t$ that are contained in $\mathcal{N}$ is trivially bounded by $m^n < \lambda$. This means that $|f(\mathcal{N})| < \lambda^{c+1}$. In other words, $f(\mathcal{N})$ is divisible by $\lambda^c$, but not by $\lambda^{c+1}$. On the other hand, every other summand in~(\ref{eq:pft}) is divisible by $\lambda^{c+1}$, as $\mathcal{N}$ is the unique minimum cost matching. Therefore, $\operatorname{coef}_t(\operatorname{pf}(A))$ is divisible by $\lambda^c$, but not by $\lambda^{c+1}$ -- so it cannot be zero. If we want to amplify the probability of giving the correct answer to $1 - \nicefrac{1}{n^C}$, for some constant $C > 0$, we repeat the algorithm $C\log n$ times. \vskip 1em \noindent\textbf{Time cost analysis.}\quad The time needed to complete steps 1--3 is $O(n^2 + m)$. Since the coefficients of the polynomial entries of $A$ are bounded by $2m^{n\cdot2\binom{n}{2}} = 2^{O(n^3 \log m)}$, we get that invoking the algorithm from Corollary \ref{alg:pfaffian-coefficient} takes $\widetilde O(t \cdot n^8 \log m)$ time. In total, that yields $\widetilde O(t \cdot n^8 + m)$ time complexity. \end{proof} \section{Lower bound} \thmlowerbound* \begin{proof} Given a {\small CNF} formula with $n$ variables and $m$ clauses,\footnote{Note that, thanks to the \emph{sparsification lemma}~\cite{Impagliazzo98}, we can assume that $m = O(n)$.} we will construct $n+1$ instances of Vector Bin Packing such that the formula is satisfiable if and only if at least one of them is a yes-instance. Intuitively, this corresponds to guessing the number of variables set to true in a satisfying assignment. Formally, for $t \in \{0,\ldots,n\}$, the $t$-th Vector Bin Packing instance is a yes-instance if and only if the formula has a satisfying assignment with exactly $t$ variables set to true. Let us fix $t$. The $t$-th instance consists of $n+2$ items $V = \{v_1, \ldots, v_n, T, F\} \subseteq \mathbb{Q}_{\geqslant 0}^{m+2}$. The first $n$ items correspond to the $n$ variables; the remaining two items $T$ and $F$ are used to break the symmetry -- in any feasible solution they are necessarily in two different bins, which we call the $T$-bin and the $F$-bin, respectively. 
Packing item $v_i$ to the $T$-bin corresponds to setting variable $i$ to true, and packing it to the $F$-bin corresponds to setting the variable to false. The items are $(m+2)$-dimensional. The first $m$ dimensions correspond to clauses, and we will discuss them in a moment. Dimension $m+1$ ensures that $T$ and $F$ go to different bins; we have $T[m+1]=F[m+1]=1$, and $v_i[m+1] = 0$ for every $i \in [n]$. Dimension $m+2$ ensures that (at most) $t$ items go to the $T$-bin and (at most) $n-t$ items go to the $F$-bin; we have $T[m+2] = (n-t)/n$, $F[m+2]=t/n$, and $v_i[m+2]=1/n$ for every $i \in [n]$. Now, fix a clause $j \in [m]$. We set \[v_i[j] = \begin{cases} \nicefrac{0}{2n}, & \text{if variable $i$ appears in a positive literal in clause $j$}, \\ \nicefrac{1}{2n}, & \text{if variable $i$ does not appear in clause $j$}, \\ \nicefrac{2}{2n}, & \text{if variable $i$ appears in a negative literal in clause $j$}. \end{cases}\] Let $n_j$ denote the number of variables that appear negated in clause $j$. We set \[T[j] = 1 - \frac{t + n_j - 1}{2n}, \quad \text{and} \quad F[j] = 0.\] This ends the description of the instance. To finish the proof, it remains to show that the above items can be packed into two bins if and only if the formula has a~satisfying assignment with exactly $t$ variables set to true. Note that there is a natural one-to-one correspondence between (not necessarily satisfying) assignments that set exactly $t$ variables to true and (not necessarily feasible) Vector Bin Packing solutions that are feasible in the last two dimensions. We now show that, for $j \in [m]$, such an assignment satisfies clause $j$ if and only if the corresponding solution is feasible in dimension~$j$. The $F$-bin is never overfull in dimension $j$. To analyse the $T$-bin, let $\alpha$, $\beta$, $\gamma$ denote the numbers of variables set to true that, in clause $j$, appear in a positive literal, do not appear, and appear in a negative literal, respectively. Let $\delta$ denote the number of variables set to false that appear in clause $j$ in a negative literal. Note that $t = \alpha + \beta + \gamma$, and $n_j = \gamma + \delta$. Consider the following chain of equivalent inequalities, starting with the condition saying that the $T$-bin is not overfull in dimension $j$. \begin{align*} \alpha \cdot \nicefrac{0}{2n} + \beta \cdot \nicefrac{1}{2n} + \gamma \cdot \nicefrac{2}{2n} & \leqslant 1 - T[j] \\ \beta + 2\gamma & \leqslant t + n_j - 1 \\ \beta + 2\gamma & \leqslant (\alpha + \beta + \gamma) + (\gamma + \delta) - 1 \\ 1 & \leqslant \alpha + \delta \end{align*} The last inequality states that clause $j$ is satisfied. \end{proof} \section{Other applications} In this section we explain how the techniques presented in our paper can be adapted to also solve Vector Multiple Knapsack and Vector Bin Covering, two closely related problems to the Vector Bin Packing problem. The main difference lies in the reduction to the Exact Matching problem, which has to integrate profits of the items, or the new covering property, respectively. Further, we show that our techniques directly apply to the Perfect Matching with Hitting Constraints problem, leading to an improved running time. \subsubsection*{Vector Multiple Knapsack} In Vector Multiple Knapsack, instead of packing all items into the smallest number of bins, we aim to place a subset of items with profits into a fixed number of bins while maximizing the profit of the packed items. 
Like in Vector Bin Packing, small items hinder us from solving the problem using a polynomial time algorithm for the maximum weight perfect matching. Hence, following Bannach et al.~\cite{Bannach20}, we study the problem parameterized by the number $k$ of small items. \begin{problem}{Vector Multiple Knapsack with Few Small Items} Parameter: & the number of \emph{small} items $k$. \\[3pt] Given: & a set of $n$ items $V = \{v_1, \ldots, v_n\} \subseteq \mathbb{Q}_{\geqslant 0}^{d}$, \textbf{item profits} $p(v_1), \dots, p(v_n) \in \mathbb{Z}_{+}$,\\ & a subset of $k$ items $V_S \subseteq V$ such that $V_L = V \setminus V_S$ is \textbf{3-incompatible},\\ & an integer $\ell \in \mathbb{Z}_+$ denoting the number of unit-sized bins, \\ & and an integer $P \in \mathbb{Z}_{+}$, denoting the \textbf{goal profit}.\\[3pt] Decide: & if a subset $V'$ of the items can be partitioned into $\ell$ bins $B_1 \cup \cdots \cup B_\ell = V'$ \\ & such that $\sum_{v \in B_i} v[j] \leqslant 1$ for every bin $i \in [\ell]$ and every dimension $j \in [d]$, \\ & and $\sum_{v \in V'} p(v) \geqslant P$. \end{problem} To solve the problem, we reduce the instance to the Exact Matching problem as in Section~\ref{sec:PackingToMatching}. It remains to handle the fact that only a subset of items has to be packed, and that we need to integrate the profits. We do so in the following manner: With each edge between $v_1$ and $v_2$ and the weight corresponding to $V'_S \subseteq V_S$, we associate the \emph{cost} of $p(v_1)+p(v_2)+\sum_{v \in V'_S} p(v)$. Further, we introduce $g = n - 2 \cdot \ell$ new vertices $b_1, b_2, \dots, b_g$, called \emph{blocker} vertices. These vertices serve as ``garbage collectors'' for the items which are not packed in any of the $\ell$ bins, i.e., they match $g$ unpacked items, and by that block them. To do so, for each $V'_S \subseteq V_S$, each large vector $v_i$, and each blocker vertex $b_j$, introduce an edge between $v_i$ and $b_j$ with weight dependent on $V'_S$ as before, and cost $0$. Note that, because of the dummy items introduced in the reduction in Section~\ref{sec:PackingToMatching}, we can assume that each bin in an optimal solution contains exactly two large items (some original, some dummy), so we know that exactly $g = n - 2 \cdot \ell$ large items have to be handled by blockers. Using Lemma~\ref{lem:edgeWeights}, it is clear that each yes-instance of the Vector Multiple Knapsack problem has a perfect matching of weight exactly $k \cdot 2^k + (2^k -1)$ and cost at least $P$ in the above graph, and vice versa. This is due to the equivalence of choosing $\ell$ edges with non-zero costs and the packing of the $\ell$ bins. The remaining items can be matched with the blocker vertices, and all small items are covered due to the weights. We are left with solving the following matching problem: Given a (multi-)graph with edge weights and edge costs, find a perfect matching with a given total weight and the maximum possible total cost. This can be done with a slight modification of the algorithm of Theorem~\ref{thm:matching}. Indeed, note that the algorithm already looks for a perfect matching minimizing the sum of edge costs coming from the Isolation Lemma. All we have to do is to (1) combine input costs with Isolation Lemma costs, and (2) turn minimization into maximization. For (1), it suffices to put the input cost into the most significant bits, and the Isolation Lemma cost into the least significant bits of the combined edge cost. 
For (2), to find out what the maximum (instead of the minimum) possible total cost is, it suffices to look at the most (instead of the least) significant digit in the $\lambda$-ary representation of the coefficient $\operatorname{coef}_t(\operatorname{pf}(A))$. Last but not least, we remark that the Isolation Lemma is symmetric with respect to minimization/maximization, i.e., it also ensures that the maximum total cost set is unique with probability at least $\nicefrac{1}{2}$. To analyze the running time, let $p_{\max} = \max_{v \in V} p(v)$ denote the maximum item profit. The maximum input cost of an edge is $(k+2) p_{\max}$. Hence, the coefficients of the polynomial entries of matrix $A$ are now bounded by $2m^{(k+2)p_{\max}n\cdot2\binom{n}{2}} = 2^{O(p_{\max}n^4 \log m)}$, and the matching algorithm takes $\widetilde O(t \cdot p_{\max} n^9 \log m)$ time. This leads to the following theorem. \thmKnapsack* \subsubsection*{Vector Bin Covering} Another set of problems asks to \emph{cover} the largest number of bins possible. In the one-dimensional setting, covering typically refers to the bin capacity being exceeded by the set of items packed into it. This property can be extended in multiple ways to a~$d$-dimensional case, for example by requiring that at least one dimension is exceeded. However, other properties, such as ``all dimensions have to be exceeded'', ``certain combinations of dimensions have to be exceeded'', et cetera, are possible as well. Our algorithm works for all such definitions of covering. Thus, in the following, we refer to the one chosen as the covering property $\mathcal P$. Following our storyline of studying a parameter capturing the distance from triviality, we consider the problem variant parameterized by the number $k$ of small items. However, the property of being a small item depends on $\mathcal P$, so we introduce a new definition for the covering problems: We say that a subset $V' \subseteq V$ is \textbf{3-covering} if every three distinct items from $V'$ cover a unit-sized bin w.r.t.\ $\mathcal P$. \begin{problem}{Vector Bin Covering with Few Small Items} Parameter: & the number of \emph{small} items $k$. \\[3pt] Given: & a set of $n$ items $V = \{v_1, \ldots, v_n\} \subseteq \mathbb{Q}_{\geqslant 0}^d$, \\ & a subset of $k$ items $V_S \subseteq V$ such that $V_L = V \setminus V_S$ is \textbf{3-covering} w.r.t.~$\mathcal P$, \\ & and an integer $\ell \in \mathbb{Z}_+$ denoting the number of unit-sized bins. \\[3pt] Decide: & if the items can be partitioned into $\ell$ bins $B_1 \cup \cdots \cup B_\ell = V$ such that \\ & $\sum_{v \in B_i} v$ satisfies $\mathcal P$ for every bin $i \in [\ell]$. \end{problem} The algorithm proceeds similarly to the one for Vector Bin Packing. However, we have to handle the fact that a bin can contain more than two large items in this case. Thus, we first guess the number of bins $\ell_i$ admitting $i$ large items for $i \in \{0,1,2\}$. This yields $O(\ell^3) = O(n^3)$ guesses. The remaining bins will be covered by triples of the unassigned large items. Hence, the guess has to satisfy that $\ell_0+\ell_1+\ell_2+\lfloor(n-k-\ell_1-2\ell_2)/3\rfloor \geqslant \ell$. Now we construct the graph as in Section~\ref{sec:PackingToMatching} with $2\ell_0 + \ell_1$ dummy items. For each $V'_S \subseteq V_S$, an edge is introduced between $v_1$ and $v_2$ if $v_1 + v_2 +\sum_{v \in V'_S} v$ covers the bin w.r.t.\ $\mathcal P$. The weight of the edge is defined by $V'_S$ as before. 
Additionally, we introduce $(n-k-\ell_1-2\ell_2)$ \emph{blocker} vertices, and introduce an edge of weight $0$ between each blocker vertex and each large item. The blocker vertices collect all large items not being placed into bins with 0, 1, or 2 large items. Given Lemma~\ref{lem:edgeWeights}, it is clear that each yes-instance of the Vector Bin Covering problem has a perfect matching of weight $k \cdot 2^k + (2^k -1)$ in the above graph, and vice versa. Indeed, a matching has to choose $(n-k-\ell_1-2\ell_2)$ edges between blocker vertices and large items. These are the ones greedily packed as triples. Note that this might leave up to two large items unpacked, which will be assigned to an arbitrary, already covered bin. The remaining packing is defined by the remaining matching edges as previously. This together with Theorem~\ref{thm:matching} leads to the following result. \thmCover* \subsubsection*{Perfect Matching with Hitting Constraints} The Perfect Matching with Hitting Constraints problem asks whether there exists a perfect matching in a graph using at least one edge from each given set of edges. Formally, the problem is defined as follows. \begin{problem}{Perfect Matching with Hitting Constraints} Parameter: & the number of edge subsets $k$. \\[3pt] Given: & a graph $G = \langle V, E \rangle$, \\ & and $k$ (not necessarily disjoint) edge subsets $E_1,\dots, E_k \subseteq E$.\\[3pt] Decide: & if there is a perfect matching $M$ in $G$ such that \\ & there exist $k$ \emph{distinct} edges $e_1, \dots, e_k \in M$ such that $e_i \in E_i$ for every $i \in [k]$. \end{problem} We again reduce this problem to finding an exact weight perfect matching in a~multigraph. Our approach is similar to the one of Marx and Pilipczuk~\cite{Marx14}. However, in their reduction, they introduce larger edge weights, and, by that, obtain a larger running time. We can circumvent this using edge weights as defined in Lemma~\ref{lem:edgeWeights}. In detail, we create a copy of each edge $e \in E_i$, for each $i \in [k]$, and assign weight $1 \cdot 2^k + 2^{i-1}$ to it -- i.e., we concatenate the indicator vector of the singleton $\{i\}$ with the counter set to $1$, as previously. The original edge gets weight $0$. The target weight is $t=k \cdot 2^k + (2^k -1)$. Clearly, there exists a perfect matching with hitting constraints in $G$ if and only if there is a perfect matching in the transformed graph with edge weights summing up to the correct target value~$t$, see Lemma~\ref{lem:edgeWeights}. This together with Theorem~\ref{thm:matching} leads to the following result. \thmPMwHC* \appendix \section{Alternative algorithm for Vector Bin Packing} \label{sec:alt} In this section, we develop an alternative algorithm for Vector Bin Packing with a better dependence on the number of items $n$ than the algorithm from Theorem~\ref{thm:binpacking}. We follow the approach of Gutin et al.~\cite{DBLP:journals/jcss/GutinWY17} for the Conjoining Bipartite Matching problem, which is based on the following lemma by Wahlstr{\"{o}}m \cite{DBLP:conf/stacs/Wahlstrom13}. \begin{lemma}[cf. \cite{DBLP:conf/stacs/Wahlstrom13}, Lemma 2] \label{lem:Qpolynomial} Let $P(x_1, \ldots, x_n)$ be a polynomial over a field of characteristic $2$. For a set $I \subseteq \{x_1,\ldots,x_n\}$, define $P_{-I}(x_1, \ldots, x_n) = P(y_1, \ldots, y_n)$ where $y_i = 0$ if $x_i \in I$, and $y_i = x_i$ otherwise. For a set $J \subseteq \{x_1,\ldots,x_n\}$, define $$ \Phi_J(P) = \sum\limits_{I \subseteq J} P_{-I}. 
$$ For a monomial $T$ and a polynomial $Q$ let us use\,\ $\operatorname{coef}_T(Q)$ to denote the coefficient of $T$ in $Q$. Then, for any monomial $T$ we have \[\operatorname{coef}_T(\Phi_J(P)) = \begin{cases} \operatorname{coef}_T(P), & \text{if~} T \text{~is divisible by~} \prod_{x_i \in J} x_i, \\ 0, & \text{otherwise}. \end{cases}\] \end{lemma} We are also going to use the classic Schwartz-Zippel lemma: \begin{lemma}[Schwartz-Zippel, \cite{DBLP:journals/jacm/Schwartz80, DBLP:conf/eurosam/Zippel79}] Let $P(x_1, \ldots, x_n)$ be a multivariate polynomial of maximum degree at most $d$ over a field\:\ $\mathbb{F}$, and assume that $P$ is not identically equal to zero. Pick $r_1, \ldots, r_n$ uniformly at random from $\mathbb{F}$. Then $\mathbb{P}(P(r_1, \ldots, r_n) = 0) \leqslant d/|\mathbb{F}|$. \end{lemma} \begin{theorem} There is a randomized Monte Carlo algorithm solving Vector Bin Packing with Few Small Items in $O(2^k n^{\omega + o(1)} + 2^k k n^2\operatorname{polylog}(n))$ time. \end{theorem} \begin{proof} Recall the (multi-)graph that we construct in the proof of Lemma~\ref{lem:packtomatch}. Let us call it $G$, and assume w.l.o.g.~that the vertex set of $G$ is $[N]$, for $N = O(n)$. Recall that each of the $M = O(2^k n^2)$ edges of $G$ represented a subset of $[k]$. For the purpose of the current proof, we modify the edge weights -- in Lemma \ref{lem:packtomatch}, we set integer weights, but now, to every edge of $G$, we assign a weight that is an appropriately chosen polynomial over a field of characteristic $2$. More specifically, for every $i \in [k]$, we create a variable $x_i$. Moreover, for every edge $e$, we create an auxiliary variable $z_e$. Now, for every pair of vertices $i, j \in [N]$ and every subset $J \subseteq [k]$, we define \[P_{i, j, J} = \begin{cases} z_e\prod\limits_{\mathclap{s \in J}} x_s, & \text{if between vertices $i$ and $j$ there is edge $e$ corresponding to $J$}, \\ 0, & \text{otherwise}. \end{cases}\] Note that we interpret $P_{i, j, J}$ as a polynomial over a finite field $\mathbb{F}$ of characteristic $2$, with the exact size of $\mathbb{F}$ to be determined later. Next, we construct a matrix $A$ of polynomials over $\mathbb{F}$ in such a way that the Pfaffian of $A$ conveys information about existence of a solution for our Vector Bin Packing instance. For each $i, j \in [N]$, with $i \leqslant j$, let us put \[A[i,j] = \sum_{\mathclap{J \subseteq [k]}} P_{i, j, J}, \quad \text{and} \quad A[j,i] = -A[i,j].\] Observe that, thanks to the introduction of $z$-variables, there are no term cancellations in the Pfaffian $\operatorname{pf}(A)$. Therefore, a solution to the Vector Bin Packing instance exists if and only if there exists a monomial in $\operatorname{pf}(A)$ divisible by $\prod_{i=1}^{k} x_i$. By Lemma~\ref{lem:Qpolynomial}, this is equivalent to $\Phi_X(\operatorname{pf}(A))$, for $X = \{x_1,\ldots,x_k\}$, being not identically equal to zero. That, on the other hand, can be checked using Schwartz-Zippel lemma. In order to use that lemma, we would like to show how to evaluate $\Phi_X(\operatorname{pf}(A))$ efficiently. Observe that $\deg \Phi_X(\operatorname{pf}(A)) \leqslant (k+1)n$. Hence, let us fix $\mathbb{F}$ to be the field of size $2^q$ for $q = \lceil\log(2(k+1)n)\rceil$. Now, let us choose values $r_1, \ldots, r_k$ and $s_1, \ldots, s_M$ (for $x$-variables and $z$-variables, respectively) uniformly at random from $\mathbb{F}$. Let us refer to these two vectors of numbers as $\Bar{r}$ and $\Bar{s}$ for brevity. 
By the Schwartz-Zippel lemma, if $\Phi_X(\operatorname{pf}(A)) \not\equiv 0$, then $\mathbb{P}(\Phi_X(\operatorname{pf}(A))(\Bar{r}, \Bar{s}) = 0) \leqslant 1/2$. We have $$\Phi_X(\operatorname{pf}(A))(\Bar{r}, \Bar{s}) = \sum\limits_{I \subseteq [k]} \operatorname{pf}(A)_{-I}(\Bar{r}, \Bar{s}).$$ It is well known that $\operatorname{pf}(A)^2 = \det(A)$ (for a proof see e.g. \cite{Rote2001-ra}). In a field of characteristic $2$, for any $x \in \mathbb{F}$, we have $x = -x$, and moreover $\sqrt x$ exists and is unique. Therefore, we can write $$\Phi_X(\operatorname{pf}(A))(\Bar{r}, \Bar{s}) = \sum\limits_{I \subseteq [k]} \sqrt{\det(A)_{-I}(\Bar{r}, \Bar{s})}. $$ For every $I \subseteq [k]$, let us define an auxiliary $N \times N$ matrix $B_I$ over $\mathbb{F}$ such that $$ B_I[i,j] = A[i,j]_{-I}(\Bar{r}, \Bar{s}) $$ for every $i, j \in [N]$. It is easy to see that $\det(A)_{-I}(\Bar{r}, \Bar{s}) = \det(B_I)$. Hence, \begin{equation}\label{eq:pfdet} \Phi_X(\operatorname{pf}(A))(\Bar{r}, \Bar{s}) = \sum\limits_{I \subseteq [k]} \sqrt{\det(B_I)}. \end{equation} In order to compute the expression \eqref{eq:pfdet}, we need to know the matrices $B_I$ for all $I \subseteq [k]$. However, there are $2^k$ such matrices and each entry in $A_{-I}$ is a polynomial built of up to $2^k$ monomials, making the naive evaluation far too slow. Fortunately, for fixed $i, j \in [N]$, we can compute all the values $B_I[i,j]$ simultaneously and efficiently, using dynamic programming, as follows. Let us fix $i, j \in [N]$ and define a function $f : 2^{[k]} \to \mathbb{F}$ such that, for every $J \subseteq [k]$, \[f(J) = P_{i, j, J}(\Bar{r}, \Bar{s}).\] It is straightforward to compute all the values of $f$ in $O(2^k k)$ arithmetic operations. Note that $\mathbb{F}$ is isomorphic to $\mathbb{F}_2[x]$ modulo an irreducible polynomial of degree~$q$. Hence, any arithmetic operation takes $O(q \operatorname{polylog}(q)) = O(\operatorname{polylog}(n))$ time. This yields $O(2^k k \operatorname{polylog}(n))$ time for computing all the values of $f$. Now, let us define a function $g : 2^{[k]} \to \mathbb{F}$ such that, for every $I \subseteq [k]$, $g(I) = \sum_{J \subseteq I} f(J)$. It is now easy to see that $B_I[i,j] = g([k]\setminus I)$. Further, $g$ is the so-called \emph{zeta transform} of $f$, and can be computed in $O(2^k k)$ arithmetic operations~\cite{yates1937design} using dynamic programming. This leads to $O(2^k k \operatorname{polylog}(n))$ time in our case. We can therefore compute the functions $f$ and $g$ for all $i,j \in [N]$, and consequently find the matrices $B_I$ for all $I \subseteq [k]$, in $O(2^k k n^2\operatorname{polylog}(n))$ total time. It is well known that the determinant of an $n \times n$ matrix with entries from a field can be computed in $O(n^{\omega + o(1)})$ arithmetic operations (see, e.g., \cite{DBLP:books/aw/AhoHU74}, Chapter 6), where $\omega < 2.37286$ is the \emph{matrix multiplication exponent} \cite{DBLP:conf/soda/AlmanW21}. This means that we can evaluate the expression \eqref{eq:pfdet} using an additional $O(2^k n^{\omega + o(1)})$ time. \end{proof} \end{document}
\begin{document} \title[Weight decomposition and tropical cycle classes]{Weight decomposition of de Rham cohomology sheaves and tropical cycle classes for non-Archimedean spaces} \author{Yifeng Liu} \address{Department of Mathematics, Northwestern University, Evanston IL 60208, United States} \email{[email protected]} \date{\today} \subjclass[2010]{14G22} \begin{abstract} We construct a functorial decomposition of de Rham cohomology sheaves, called weight decomposition, for smooth analytic spaces over non-Archimedean fields embeddable into $\b C_p$, which generalizes a construction of Berkovich and solves a question raised by him. We then investigate complexes of real tropical differential forms and currents introduced by Chambert-Loir and Ducros, by establishing a relation with the weight decomposition and defining tropical cycle maps with values in the corresponding Dolbeault cohomology. As an application, we show that algebraic cycles that are cohomologically trivial in the algebraic de Rham cohomology are cohomologically trivial in the Dolbeault cohomology of currents as well. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} \label{ss:0} Let $K$ be a complete non-Archimedean field of characteristic zero with a nontrivial valuation. Let $X$ be a smooth $K$-analytic space in the sense of Berkovich. Let $\c O_X$ (resp.\ $\f c_X$) be the structure sheaf (resp.\ the sheaf of constant analytic functions \cite{Berk04}*{\S 8}) of $X$ in either analytic or \'{e}tale topology. We have the following complex of $\f c_X$-modules in either analytic or \'{e}tale topology: \begin{align}\label{eq:derham} \Omega^\bullet_X\colon 0\to \c O_X=\Omega^0_X \xrightarrow{\r d} \Omega^1_X \xrightarrow{\r d}\Omega^2_X \xrightarrow{\r d}\cdots, \end{align} known as the \emph{de Rham complex}, which satisfies $\f c_X=\Ker(\r d\colon\c O_X\to\Omega^1_X)$. It is \emph{not} exact from the term $\Omega^1_X$ if $\dim(X)\geq 1$. The cohomology sheaves of the de Rham complex $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X$ are called \emph{de Rham cohomology sheaves}. For $q\geq 0$, denote by $\Upsilon^q_X$ the subsheaf of $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X$ generated by sections of the form \[\sum c_i\frac{\r d f_{i1}}{f_{i1}}\wedge\cdots\wedge\frac{\r d f_{iq}}{f_{iq}}\] where the sum is finite, $c_i$ are sections of $\f c_X$, and $f_{ij}$ are sections of $\c O^*_X$. In particular, we have $\Upsilon^0_X=\f c_X$, and $\Upsilon^1_X$ is simply the sheaf $\Upsilon_X$ defined in \cite{Berk07}*{\S 4.3} in the case of \'{e}tale topology. \begin{theorem}\label{th:1} Suppose that $K$ is embeddable into $\b C_p$. Let $X$ be a smooth $K$-analytic space. Then for every $q\geq 0$, we have a decomposition \[\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X=\bigoplus_{w\in\b Z}(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w\] of $\f c_X$-modules in either analytic or \'{e}tale topology. It satisfies that \begin{enumerate}[(i)] \item $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w=0$ unless $q\leq w\leq 2q$; \item $\Upsilon^q_X\subset(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_{2q}$, and they are equal if $q=1$; \item $(\Omega^{1,\r{cl}}_X/\r d\c O_X)_1$ coincides with the sheaf $\Psi_X$ defined in \cite{Berk07}*{\S 4.5} in the case of \'{e}tale topology. \end{enumerate} Such a decomposition is stable under base change and cup product, and is functorial in $X$. \end{theorem} The proof of this theorem will be given at the end of Section \ref{ss:log}.
We call the decomposition in the above theorem the \emph{weight decomposition of de Rham cohomology sheaves}. \b egin{corollary}\label{co:1} Suppose that $K$ is embeddable into $\b C_p$. Then for every smooth $K$-analytic space $X$, we have $\Omega^{1,\r{cl}}_X/\r d\c O_X=\Upsilon_X\r{op}lus\Psi_X$ in \'{e}tale topology. This answers the question in \cite{Berk07}*{Remark 4.5.5} for such $K$. \end{corollary} \b egin{remark} We expect that Theorem \r ef{th:1} and thus Corollary \r ef{co:1} hold by only requiring that the residue field of $K$ is algebraic over a finite field (and $K$ is of characteristic zero). \end{remark} For the rest of Introduction, we work in the analytic topology only. In particular, the de Rham complex $(\Omega^\b ullet_X,\r d)$ is a complex of sheaves on (the underlying topological space of) $X$.\\ In \cite{CLD12}, Chambert-Loir and Ducros define, for every $K$-analytic space $X$, a bicomplex $(\s A_X^{\b ullet,\b ullet},\r d',\r d'')$ of sheaves of real vector spaces on $X$ concentrated in the first quadrant. It is a non-Archimedean analogue of the bicomplex of $(p,q)$-forms on complex manifolds. In particular, we may define analogously the \emph{Dolbeault cohomology} (of forms) as \[H^{q,q'}_{\s A}(X)\coloneqq\f rac{\Ker(\r d''\colon\s A_X^{q,q'}(X)\to\s A_X^{q,q'+1}(X))} {\IM(\r d''\colon\s A_X^{q,q'-1}(X)\to\s A_X^{q,q'}(X))}.\] By \cite{CLD12} and \cite{Jel16}, we know that for every $q\geq 0$, the complex $(\s A_X^{q,\b ullet},\r d'')$ is a fine resolution of the sheaf $\Ker(\r d''\colon\s A_X^{q,0}\to\s A_X^{q,1})$. Thus, $H^{q,q'}_{\s A}(X)$ is canonically isomorphic to the sheaf cohomology $H^{q'}(X,\Ker(\r d''\colon\s A_X^{q,0}\to\s A_X^{q,1}))$. If $X$ is of dimension $n$ and without boundary, then we may define the integration \[\int_X\omega\] for every top form $\omega\in\s A^{n,n}_X(X)$ with compact support. In particular, if $X$ is moreover compact, then the integration induces a real linear functional on $H^{n,n}_\s A(X)$. The next theorem reveals a connection between $\Ker(\r d''\colon\s A_X^{q,0}\to\s A_X^{q,1})$ and the algebraic de Rham cohomology sheaves of $X$. \b egin{theorem}[Lemma \r ef{le:log}, Theorem \r ef{th:kernel}] Let $K$ be a non-Archimedean field embeddable into $\b C_p$ and $X$ a smooth $K$-analytic space. Let $\s L_X^q$ be the subsheaf of $\b Q$-vector spaces of $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X$ generated by sections of the form $\f rac{\r d f_1}{f_1}\wedge\cdots\wedge\f rac{\r d f_q}{f_q}$ where $f_j$ are sections of $\c O^*_X$. Then \b egin{enumerate} \item the canonical map $\s L_X^q\otimes_\b Q\f c_X\to\Upsilon_X^q$ is an isomorphism; \item there is a canonical isomorphism $\s L_X^q\otimes_\b Q\b R\simeq \Ker(\r d''\colon\s A_X^{q,0}\to\s A_X^{q,1})$. \end{enumerate} \end{theorem} The above theorem implies that the Dolbeault cohomology $H^{q,q'}_\s A(X)$ for $X$ in the theorem has a canonical rational structure through the isomorphism $H^{q,q'}_\s A(X)\simeq H^{q'}(X,\s L_X^q)\otimes_\b Q\b R$.\\ Recall that in the complex world, for a smooth complex algebraic variety $\c X$, we have a cycle class map from $\CH^q(\c X)$ to the classical Dolbeault cohomology $H^{q,q}_{\overline\partial}(\c X^\r{an})$ of the associated complex manifold $\c X^\r{an}$. Over a non-Archimedean field $K$, we may associate a scheme $\c X$ of finite type over $K$ a $K$-analytic space $\c X^\r{an}$. The following theorem is an analogue of the above cycle class map in the non-Archimedean world. 
\b egin{theorem}[Definition \r ef{de:tropical}, Theorem \r ef{th:cycle}, Corollary \r ef{co:cycle}] Let $K$ be a non-Archimedean field and $\c X$ a smooth scheme over $K$ of dimension $n$. Then there is a \emph{tropical cycle class map} \[\r{cl}_\s A\colon\CH^q(\c X)\to H^{q,q}_\s A(\c X^\r{an}),\] functorial in $\c X$ and $K$, such that for every algebraic cycle $\c Z$ of $\c X$ of codimension $q$, \b egin{align}\label{eq:integral} \int_{\c X^\r{an}}\r{cl}_\s A(\c Z)\wedge\omega=\int_{\c Z^\r{an}}\omega \end{align} for every $\r d''$-closed form $\omega\in\s A^{n-q,n-q}_{\c X^\r{an}}(\c X^\r{an})$ with compact support. In particular, if $\c X$ is proper and $\c Z$ is of dimension $0$, then \[\int_{\c X^\r{an}}\r{cl}_\s A(\c Z)=\deg\c Z.\] \end{theorem} \b egin{remark} Let the situation be as in the above theorem. \b egin{enumerate} \item Our construction actually shows that the image of $\r{cl}_\s A$ is in the canonical rational subspace $H^q(\c X^\r{an},\s L_{\c X^\r{an}}^q)$. \item The tropical cycle class respects products on both sides. More precisely, for $\c Z_i\in\CH^{q_i}(\c X)$ with $i=1,2$, we have $\r{cl}_\s A(\c Z_1\cdot\c Z_2)=\r{cl}_\s A(\c Z_1)\cup\r{cl}_\s A(\c Z_2)$. \item We may regard the formula \eqref{eq:integral} as a tropical version of Cauchy formula in multi-variable complex analysis. \item Even when $\c X$ is proper, one can \emph{not} use \eqref{eq:integral} to define $\r{cl}_\s A(\c Z)$ as we do not know whether the pairing \[H^{q,q}_\s A(\c X^\r{an})\mathtt{i}mes H^{n-q,n-q}_\s A(\c X^\r{an})\xrightarrow{\cup}H^{n,n}_\s A(\c X^\r{an})\xrightarrow{\int_X}\b R\] is perfect or not at this moment. \end{enumerate} \end{remark} For a proper smooth scheme $\c X$ of dimension $n$ over a general field $K$ of characteristic zero, we have a cycle class map $\r{cl}_\r{dR}\colon\CH^q(\c X)\to H^{2q}_\r{dR}(\c X)$ with values in the algebraic de Rham cohomology. It is known that when $K=\b C$, the kernel of $\r{cl}_\r{dR}$ coincides with the kernel of the cycle class map valued in Dolbeault cohomology. In particular, if $\r{cl}_\r{dR}(\c Z)=0$, then $\int_{\c Z^\r{an}}\omega=0$ for every $\overline\partial$-closed $(n-q,n-q)$-form $\omega$ on $\c X^\r{an}$. In the following theorem, we prove that the same conclusion holds in the non-Archimedean setting as well, with mild restriction on the field $K$. \b egin{theorem}(Theorem \r ef{th:trivial})\label{th:2} Let $K\subset\b C_p$ be a finite extension of $\b Q_p$ and $\c X$ a proper smooth scheme over $K$ of dimension $n$. Let $\c Z$ be an algebraic cycle of $\c X$ of codimension $q$ such that $\r{cl}_\r{dR}(\c Z)=0$. Then \[\int_{(\c Z\otimes_K\b C_p)^\r{an}}\omega=0\] for every $\r d''$-closed form $\omega\in\s A^{n-q,n-q}((\c X\otimes_K\b C_p)^\r{an})$. \end{theorem} We emphasize again that in the above theorem, we do not know whether $\r{cl}_\s A(\c Z)=0$ or not. If we know the Poincar\'{e} duality for $H^{\b ullet,\b ullet}_\s A(\c X^\r{an})$, then $\r{cl}_\s A(\c Z)=0$. Nevertheless, we have the following result for lower degree. \b egin{theorem}(Theorem \r ef{th:line}) Let $\c X$ be a proper smooth scheme over $\b C_p$. Then \b egin{enumerate} \item $H^{1,1}_\s A(\c X^\r{an})$ is finite dimensional; \item for a line bundle $\c L$ on $\c X$ such that $\r{cl}_\r{dR}(\c L)=0$, we have $\r{cl}_\s A(\c L)=0$. 
\end{enumerate} \end{theorem} To the best of our knowledge, the first conclusion in the above theorem is the only known case of the finiteness of $\dim H^{q,q'}_\s A(\c X^\r{an})$ when both $q,q'$ are positive and $\c X$ is of general dimension. Note that in the above theorem, we do not require that $\c X$ can be defined over a finite extension of $\b Q_p$. \b egin{remark} We can interpret Theorem \r ef{th:2} in the following way. Let $k$ be a number field. Let $\c X$ be a proper smooth scheme over $k$ of dimension $n$, and $\c Z$ an algebraic cycle of $\c X$ of codimension $q$. Suppose that there exists \emph{one} embedding $\iota_\infty\colon k\hookrightarrow\b C$ such that \[\int_{(\c Z\otimes_{k,\iota_\infty}\b C)^\r{an}}\omega=0\] for every $\overline\partial$-closed $(n-q,n-q)$-form $\omega$ on $(\c X\otimes_{k,\iota_\infty}\b C)^\r{an}$. Then for \emph{every} prime $p$ and \emph{every} embedding $\iota_p\colon k\hookrightarrow\b C_p$, we have \[\int_{(\c Z\otimes_{k,\iota_p}\b C_p)^\r{an}}\omega=0\] for every $\r d''$-closed $(n-q,n-q)$-form $\omega$ on $(\c Z\otimes_{k,\iota_p}\b C_p)^\r{an}$.\\ \end{remark} The article is organized as follows. We review the basic theory of rigid cohomology in Section \r ef{ss:rigid}, which is one of the main tools in our work. We construct the weight decomposition of de Rham cohomology sheaves in the \'{e}tale topology in Section \r ef{ss:weight}. In Section \r ef{ss:log}, we study the behavior of logarithmic differential forms in rigid cohomology and deduce Theorem \r ef{th:1} for both topologies. We will not use \'{e}tale topology after this point. We start Section \r ef{ss:cycle} by reviewing the theory of real forms developed by Chambert-Loir and Ducros; and then we study its relation with de Rham cohomology sheaves. Next, we define the tropical cycle class maps and establish their relation with integration of real forms. In the last Section \r ef{ss:triviality}, we study algebraic cycles that are cohomologically trivial in the sense of algebraic de Rham cohomology. In particular, we show that they are cohomologically trivial in the sense of Dolbeault cohomology of currents (of forms if they are of codimension $1$). \subsection*{Conventions and Notation} \b egin{itemize} \item Throughout the article, by a \emph{non-Archimedean field} we mean a complete topological field of characteristic \emph{zero} whose topology is induced by a nontrivial non-Archimedean valuation $|\;|$ of rank $1$. If the valuation is discrete, then we say that it is a \emph{discrete non-Archimedean field} by abuse of terminology. \item Let $K$ be a non-Archimedean field. Put \[K^\circ=\{x\in K\mathbin{|} |x|\leq 1\},\quad K^{\circ\circ}=\{x\in K\mathbin{|} |x|<1\},\quad \widetilde{K}=K^\circ/K^{\circ\circ}.\] Denote by $K^\r a$ the algebraic closure of $K$ and $\widehat{K^\r a}$ its completion. A \emph{residually algebraic} extension of $K$ is an extension $K'/K$ of non-Archimedean fields such that the induced extension $\widetilde{K'}/\widetilde{K}$ is algebraic. In the text, discrete non-Archimedean fields are usually denoted by lower-case letters like $k,k'$, etc. And $\varpi$ will always be a uniformizer of a discrete non-Archimedean field, though we will still remind readers of this. \item Let $K$ be a non-Archimedean field, and $A$ an affinoid $K$-algebra. We then have the $K$-analytic space $\c M(A)$. Denote by $A^\circ$ the subring of power-bounded elements, which is a $K^\circ$-algebra. Put $\widetilde{A}=A^\circ\otimes_{K^\circ}\widetilde{K}$. 
We say that $A$ is \emph{integrally smooth} if $A$ is strictly $K$-affinoid and $\Spf A^\circ$ is a smooth formal $K^\circ$-scheme. \item Let $K$ be a non-Archimedean field. For a real number $r>0$, we denote by $D(0;r)$ the open disc over $K$ with center at zero of radius $r$. For real numbers $R>r>0$, we denote by $B(0;r,R)$ the open annulus over $K$ with center at zero of inner radius $r$ and outer radius $R$. An \emph{open poly-disc (of dimension $n$)} over $K$ is the product of finitely many open discs $D(0;r_i)$ (of number $n$). \item For a non-Archimedean field $K$, all $K$-analytic (Berkovich) spaces are assumed to be Hausdorff and strictly $K$-analytic \cite{Berk93}*{1.2.15}. Suppose that $K'/K$ is an extension of non-Archimedean fields. For a $K$-analytic space $X$ and a $K'$-analytic space $Y$, we put \[X\widehat\otimes_KK'=X\mathtt{i}mes_{\c M(K)}\c M(K'),\quad Y\mathtt{i}mes_KX=Y\mathtt{i}mes_{\c M(K')}(X\widehat\otimes_KK');\] and for a formal $K^\circ$-scheme $\f X$ and a formal $K'^\circ$-scheme $\f Y$, we put \[\f X\widehat\otimes_{K^\circ}K'^\circ=\f X\mathtt{i}mes_{\Spf K^\circ}\Spf K'^\circ,\quad \f Y\mathtt{i}mes_{K^\circ}\f X=\f Y\mathtt{i}mes_{\Spf K'^\circ}(\f X\widehat\otimes_{K^\circ}K'^\circ).\] \item If $k$ is a discrete non-Archimedean field and $\f X$ is a special formal $k^\circ$-scheme in the sense of \cite{Berk96}, then we have the notion $\f X_\acute{\r{e}}\r{t}a$, the generic fiber of $\f X$, which is a $k$-analytic space; and $\f X_s$, the closed fiber of $\f X$, which is a scheme locally of finite type over $\widetilde{k}$; and a reduction map $\pi\colon\f X_\acute{\r{e}}\r{t}a\to\f X_s$. For a general non-Archimedean field $K$, we say a formal $K^\circ$-scheme $\f X$ is special if there exist a discrete non-Archimedean field $k\subset K$ and a special formal $k^\circ$-scheme $\f X'$ such that $\f X\simeq\f X'\widehat\otimes_{k^\circ}K^\circ$. For a special formal $K^\circ$-scheme, we have similar notion $\pi\colon\f X_\acute{\r{e}}\r{t}a\to\f X_s$ which is canonically defined. In this article, all formal $K^\circ$-schemes will be special. Note that if $\c Z$ is a subscheme of $\f X_s$, then $\pi^{-1}\c Z$ is usually denoted as $]\c Z[_{\f X_\acute{\r{e}}\r{t}a}$ in rigid analytic geometry. \item If $\c X$ is a scheme over an affine scheme $\Spec A$ and $B$ is an $A$-algebra, then we put $\c X_B=\c X\mathtt{i}mes_{\Spec A}\Spec B$. Such abbreviation will be applied only to schemes, neither formal schemes nor analytic spaces. If $\c X$ is a scheme over $\Spec K^\circ$ for a non-Archimedean field $K$, then we write $\c X_s$ for $\c X_{\widetilde{K}}$. \item Let $K$ be a non-Archimedean field and $X$ a $K$-analytic space. For a point $x\in X$, one may associate nonnegative integers $s_K(x),t_K(x)$ as in \cite{Berk90}*{\S 9.1}. For readers' convenience, we recall the definition. The number $s_K(x)$ is equal to the transcendence degree of $\widetilde{\c H(x)}$ over $\widetilde{K}$, and the number $t_K(x)$ is equal to to the dimension of the $\b Q$-vector space $\sqrt{|\c H(x)^*|}/\sqrt{|K^*|}$, where $\c H(x)$ is the completed residue field of $x$. In the text, the field $K$ will always be clear so will be suppressed in the notation $s_K(x),t_K(x)$. \item Let $X$ be a site. Whenever we have a suitable notion of de Rham complex $(\Omega^\b ullet_X,\r d)$ on $X$, we denote by $H^\b ullet_\r{dR}(X)\coloneqq H^\b ullet(X,\Omega^\b ullet_X)$ the corresponding de Rham cohomology of $X$, as the hypercohomology of the de Rham complex. 
\end{itemize} \section{Review of rigid cohomology} \label{ss:rigid} In this section, we review the theory of rigid cohomology developed in, for example, \cite{Bert97} and \cite{LS07}. Let $\f R$ be the category of triples $(K,X,Z)$ where $K$ is a non-Archimedean field; $X$ is a scheme of finite type over $\widetilde{K}$; and $Z$ is a Zariski closed subset of $X$. A morphism from $(K',X',Z')$ to $(K,X,Z)$ consists of a field extension $K'/K$ and a morphism $X'\to X\otimes_{\widetilde{K}}\widetilde{K'}$ whose restriction to $Z'$ factors through $Z\otimes_{\widetilde{K}}\widetilde{K'}$. Let $\f V$ be the category of pairs $(K,V^\b ullet)$ where $K$ is a non-Archimedean field and $V^\b ullet$ is a graded $K$-vector space. A morphism from $(K,V^\b ullet)$ to $(K',V'^\b ullet)$ consists of a field extension $K'/K$ and a graded linear map $V^\b ullet\otimes_KK'\to V'^\b ullet$. We have a functor of \emph{rigid cohomology with support}: $\f R^{\r{opp}}\to\f V$ sending $(K,X,Z)$ to $H^\b ullet_{Z,\r ig}(X/K)$. Put $H^\b ullet_\r ig(X/K)=H^\b ullet_{X,\r ig}(X/K)$ for simplicity. We list the following properties which will be used extensively in this article: \b egin{itemize} \item Suppose that we have a morphism $(K',X',Z')\to (K,X,Z)$ with $X'\simeq X\otimes_{\widetilde{K}}\widetilde{K'}$ and $Z'\simeq Z\otimes_{\widetilde{K}}\widetilde{K'}$. Then the induced map $H^\b ullet_{Z,\r ig}(X/K)\otimes_KK'\to H^\b ullet_{Z',\r ig}(X'/K')$ is an isomorphism of finite dimensional graded $K'$-vector spaces (\cite{GK02}*{Corollary 3.8} and \cite{Berk07}*{Corollary 5.5.2}). \item For $Y=X\b ackslash Z$, we have a long exact sequence: \b egin{align}\label{eq:support} \cdots\to H^i_{Z,\r ig}(X/K) \to H^i_\r ig(X/K) \to H^i_\r ig(Y/K) \to H^{i+1}_{Z,\r ig}(X/K)\to \cdots. \end{align} \item If both $X$, $Z$ are smooth, and $Z$ is of codimension $r$ in $X$, then we have a Gysin isomorphism $H^i_{Z,\r ig}(X/K)\simeq H^{i-2r}_\r ig(Z/K)$. \item Suppose that $K$ is residually algebraic over $\b Q_p$ (in other words, $\widetilde{K}$ is a finite extension of $\b F_p$). Then the sequence \eqref{eq:support} is equipped with a Frobenius action of sufficiently large degree. In particular, each item $V$ in \eqref{eq:support} admits a direct sum decomposition $V=\b igoplus_{w\in\b Z}V_w$ where $V_w$ consists of vectors of generalized weight $w$ (\cite{Chi98}*{\S 1 \& \S 2}). \item Suppose that $X$ is smooth and $Z$ is of codimension $r$, then $H^i_{Z,\r ig}(X/K)_w=0$ unless $i\leq w\leq 2(i-r)$ (\cite{Chi98}*{Theorem 2.3}).\\ \end{itemize} We will extensively use the notion of $K$-analytic germs (\cite{Berk07}*{\S 5.1}), rather than $K$-dagger spaces. Roughly speaking, a $K$-analytic germ is a pair $(X,S)$ where $X$ is a $K$-analytic space and $S\subset X$ is a subset. We say that $(X,S)$ is a strictly $K$-affinoid germ if $S$ is a strictly affinoid domain. We say that $(X,S)$ is smooth if $X$ is smooth in an open neighborhood of $S$. We have the structure sheaf $\c O_{(X,S)}$, and the de Rham complex $\Omega_{(X,S)}^\b ullet$ when $(X,S)$ is smooth. (See \cite{Berk07}*{\S 5.2} for details.) In particular, we have the de Rham cohomology $H^\b ullet_\r{dR}(X,S)$ when $(X,S)$ is smooth. For a smooth $K$-analytic germ $(X,S)$ where $S=\c M(A)$ for an integrally smooth $K$-affinoid algebra $A$, we have a canonical functorial isomorphism $H^\b ullet_\r{dR}(X,S)\simeq H^\b ullet_\r ig(\Spec\widetilde{A}/K)$ (see \cite{Bert97}*{Proposition 1.10}, whose proof actually works for general $K$). 
The following lemma generalizes the construction in \cite{GK02}*{Lemma 2}. \b egin{lem}\label{le:dagger} Let $(X_1, Y_1)$ and $(X_2,Y_2)$ be two smooth strictly $K$-affinoid germs. Then for a morphism $\phi\colon Y_2\to Y_1$ of strictly $K$-affinoid domains, there is a canonical restriction map $\phi^*\colon H^\b ullet_\r{dR}(X_1,Y_1)\to H^\b ullet_\r{dR}(X_2,Y_2)$. It satisfies the following conditions: \b egin{enumerate}[(i)] \item if $\phi$ extends to a morphism $(X_2,Y_2)\to (X_1,Y_1)$ of germs, then $\phi^*$ coincides with the usual pullback; \item for a finite extension $K'$ of $K$, if we write $X'_i$ (resp.\ $Y'_i$) for $X_i\widehat\otimes_KK'$ (resp.\ $Y_i\widehat\otimes_KK'$) for $i=1,2$ and $\phi'$ for $\phi\widehat\otimes_KK'$, then $\phi'^*$ coincides with the scalar extension of $\phi^*$, in which we identify $H^\b ullet_\r{dR}(X'_i,Y'_i)$ with $H^\b ullet_\r{dR}(X_i,Y_i)\otimes_KK'$ for $i=1,2$; \item if $Y_1=\c M(A_1)$ and $Y_2=\c M(A_2)$ for some integrally smooth $K$-affinoid algebras $A_1$ and $A_2$, then $\phi^*$ coincides with $\widetilde\phi^*\colon H^\b ullet_\r ig(\Spec \widetilde{A_1}/K)\to H^\b ullet_\r ig(\Spec \widetilde{A_2}/K)$ under the canonical isomorphism $H^\b ullet_\r{dR}(X_i,Y_i)\simeq H^\b ullet_\r ig(\Spec\widetilde{A_i}/K)$ for $i=1,2$, where $\widetilde\phi\colon\Spec\widetilde{A_2}\to\Spec\widetilde{A_1}$ is the induced morphism; \item if $(X_3,Y_3)$ is another smooth strictly $K$-affinoid germ with a morphism $\psi\colon Y_3\to Y_2$, then $(\phi\circ\psi)^*=\psi^*\circ\phi^*$. \end{enumerate} \end{lem} \b egin{proof} Put $X=X_1\mathtt{i}mes_K X_2$, $Y=Y_1\mathtt{i}mes_K Y_2$, and $\Delta\subseteq Y$ the graph of $\phi$, which is isomorphic to $Y_2$ via the projection to the second factor. Denote by $a_i\colon X\to X_i$ the projection morphism. We have maps \[H^\b ullet_\r{dR}(X_1,Y_1)\xrightarrow{a_1^*}\varinjlim_V H^\b ullet_\r{dR}(V) \xleftarrow{a_2^*}H^\b ullet_\r{dR}(X_2,Y_2),\] where $V$ runs through open neighborhoods of $\Delta$ in $X$. We show that $a^*_2$ is an isomorphism. Then we define $\phi^*$ as $(a_2^*)^{-1}\circ a_1^*$. The proof is similar to that of \cite{GK02}*{Lemma 2}. To show that $a^*_2$ is an isomorphism is a local problem. Thus we may assume that there are elements $t_1,\dots,t_m\in\s O_{X_1}(X_1)$ such that $\r d t_1,\dots,\r d t_m$ form a basis of $\Omega^1(X_1,Y_1)$ over $\s O(X_1,Y_1)$, and there exist a strictly $K$-affinoid neighborhood $U_\epsilon\subset X$ of $\Delta$ with an element $\epsilon\in|K^\mathtt{i}mes|$, and an isomorphism \[ U_\epsilon\cap Y\xrightarrow{\sim} \c M(K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle)\mathtt{i}mes_K\Delta,\] in which $\epsilon^{-1}Z_i$ is sent to $\epsilon^{-1}(t_i\otimes1-1\otimes\phi^*(t_i))$. Note that $K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle$ is an integrally smooth $K$-affinoid algebra, and $\Spec\widetilde{K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle}$ is canonically isomorphic to $\b A^m_{\widetilde{K}}$. Thus by \cite{GK02}*{Lemma 2}, the restriction map $H^\b ullet_\r{dR}(X_2,Y_2)\to H^\b ullet_\r{dR}(X,U_\epsilon\cap Y)$ is an isomorphism. We may choose a sequence of such $U_\epsilon$ with $\b igcap_\epsilon U_\epsilon=\Delta$. Then $\varinjlim_\epsilon H^\b ullet_\r{dR}(X,U_\epsilon\cap Y)\simeq \varinjlim_V H^\b ullet_\r{dR}(V)$ and thus $a_2^*$ is an isomorphism. Properties (i) and (ii) follow easily from the construction. Property (iv) is straightforward but tedious to check; we will leave it to readers. 
We now check Property (iii), as it is important for our later argument. The induced projection morphism \[\c M(K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle)\mathtt{i}mes_K\Delta\simeq U_\epsilon\cap Y\to Y_i\] extends canonically to a morphism of formal $K^\circ$-schemes \[\Spf(K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle\widehat\otimes_KA_\Delta)^\circ\to\Spf A_i^\circ,\] where $A_\Delta$ is the coordinate $K$-affinoid algebra of $\Delta$ which is isomorphic to $A_2$. Therefore, the restriction map $H^\b ullet_\r{dR}(X_i,Y_i)\to H^\b ullet_\r{dR}(X,U_\epsilon\cap Y)$ coincides with the map \[\widetilde{a_i}^*\colon H^\b ullet_\r ig(\Spec \widetilde{A_i}/K)\to H^\b ullet_\r ig(\Spec\widetilde{K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle\widehat\otimes_KA_\Delta}/K)\] induced from the homomorphism $\widetilde{A_i}\to\widetilde{K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle\widehat\otimes_KA_\Delta}$ of $\widetilde{K}$-algebras. Note that $\widetilde{a_2}^*$ is an isomorphism, and that $(\widetilde{a_2}^*)^{-1}$ coincides with the restriction map \[H^\b ullet_\r ig(\Spec\widetilde{K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle\widehat\otimes_KA_\Delta}/K) \to H^\b ullet_\r ig(\Spec \widetilde{A_2}/K)\] induced from the homomorphism $\widetilde{K\langle\epsilon^{-1}Z_1,\dots\epsilon^{-1}Z_m\r angle\widehat\otimes_KA_\Delta}\to\widetilde{A_2}$ sending $\epsilon^{-1}Z_i$ to $0$ for all $i$. Property (iii) follows. \end{proof} The following example will be used in the computation later. \b egin{example}\label{ex:torus} Let $K$ be a non-Archimedean field. For an integer $t\geq 0$ and an element $\varpi\in K$, define the formal $K^\circ$-scheme \[\f E^t_\varpi=\Spf K^\circ[[T_0,\dots,T_t]]/(T_0\cdots T_t-\varpi)\] and let $\b E^t_\varpi$ be its generic fiber. Let $E^t_\varpi$ be the $K$-affinoid algebra \[K\langle |\varpi|^{-\f rac{1}{t+1}}T_0,\cdots,|\varpi|^{-\f rac{1}{t+1}}T_t, |\varpi|^{\f rac{1}{t+1}}T_0^{-1},\cdots,|\varpi|^{\f rac{1}{t+1}}T_t^{-1}\r angle/(T_0\cdots T_t-\pi),\] which is integrally smooth. Moreover, $\c M(E^t_\varpi)$ is canonically a strictly $K$-affinoid domain in $\b E^t_\varpi$, and the restriction map \[H^\b ullet_\r{dR}(\b E^t_\varpi)\to H^\b ullet_\r{dR}(\b E^t_\varpi,\c M(E^t_\varpi))\simeq H^\b ullet_\r ig(\Spec\widetilde{E^t_\varpi}/K)\] is an isomorphism by \cite{GK02}*{Lemma 3}. If $K$ is residually algebraic over $\b Q_p$, then we have $H^q_\r ig(\Spec\widetilde{E^t_\varpi}/K)=H^q_\r ig(\Spec\widetilde{E^t_\varpi}/K)_{2q}$. \end{example} \section{Weight decomposition in \'{e}tale topology} \label{ss:weight} In this section, we construct the weight decomposition of de Rham cohomology sheaves in the \'{e}tale topology. Therefore, in this section, sheaves like $\c O_X$, $\f c_X$, and the de Rham complex $(\Omega^\b ullet_X,\r d)$ are understood in the \'{e}tale topology. \b egin{definition}[Marked pair]\label{de:marked} Let $k$ be a discrete non-Archimedean field. \b egin{enumerate} \item We say that a scheme $\c X$ over $k^\circ$ is \emph{strictly semi-stable of dimension $n$} if $\c X$ is locally of finite presentation, Zariski locally \'{e}tale over $\Spec K^\circ[T_0,\dots,T_n]/(T_0\cdots T_t-\varpi)$ for some uniformizer $\varpi$ of $k$, and $\c X_k$ is smooth over $k$. For every $0\leq t\leq n$, denote by $\c X_s^{[t]}$ the union of intersection of $t+1$ distinct irreducible components of $\c X_s$. It is a closed subscheme of $\c X_s$ with each irreducible component smooth. 
\item A \emph{marked $k$-pair $(\c X,\c D)$ of dimension $n$ and depth $t$} consists of an affine strictly semi-stable scheme $\c X$ over $k^\circ$ of dimension $n$, and an irreducible component $\c D$ of $\c X_s^{[t]}$ that is geometrically irreducible. \end{enumerate} \end{definition} We start from the following lemma, which generalizes \cite{Berk07}*{Lemma 2.1.2}. \b egin{lem}\label{le:212} Suppose that $K$ is embeddable into $\widehat{k^\r a}$ for some discrete non-Archimedean field $k$. Let $X$ be a smooth $K$-analytic space, and $x$ a point of $X$ with $s(x)+t(x)=\dim_x(X)$. Given a morphism of strictly $K$-analytic spaces $X\to\f Y_\acute{\r{e}}\r{t}a$, where $\f Y$ is a special formal $K^\circ$-scheme, there exist \b egin{itemize} \item a finite extension $K'$ of $K$, a finite extension $k'$ of $k$ contained in $K'$, \item a marked $k'$-pair $(\c X,\c D)$ of dimension $\dim_x(X)$ and depth $t(x)$, \item an open neighborhood $U$ of $(\widehat\c X_{/\c D})_\acute{\r{e}}\r{t}a\widehat\otimes_{k'}K'$ in $\c X_{K'}^\r{an}$, \item a point $x'\in(\widehat\c X_{/\c D})_\acute{\r{e}}\r{t}a\widehat\otimes_{k'}K'$, \item a morphism of $K$-analytic spaces $\varphi\colon U\to X$, and \item a morphism of formal $K^\circ$-schemes $\widehat\c X_{/\c D}\widehat\otimes_{k'^\circ}K'^\circ\to\f Y$, \end{itemize} such that the following are true: \b egin{enumerate}[(i)] \item $\varphi$ is \'{e}tale and $\varphi(x')=x$; \item the induced morphism $(\widehat\c X_{/\c D})_\acute{\r{e}}\r{t}a\widehat\otimes_{k'}K'\to\f Y_\acute{\r{e}}\r{t}a$ coincides with the composition \[(\widehat\c X_{/\c D})_\acute{\r{e}}\r{t}a\widehat\otimes_{k'}K'\hookrightarrow U\xrightarrow{\varphi}X\to\f Y_\acute{\r{e}}\r{t}a.\] \end{enumerate} \end{lem} \b egin{proof} Put $t=t(x)$, $s=s(x)$, and $n=t+s$. By \cite{Berk07}*{Proposition 2.3.1}, by possibly taking finite extensions of $k$ (and $K$), we may replace $X$ by $(B\mathtt{i}mes_k Y)\widehat\otimes_kK$, where $B=\prod_{j=1}^t B(0;r_j,R_j)$ for some $0<r_j<R_j$ and $Y$ is a smooth $k$-analytic space of dimension $s$, and $x$ projects to $b\in B$ with $t(b)=t$ and $y\in Y$ with $s(y)=s$. Denote by $\c P$ the $k^\circ$-scheme $\b P^1_{k^\circ}$ with the point $0$ on the special fiber blown up, and by $\f P$ the formal completion of $\c P$ along the open subscheme $\c P_s\b ackslash\{\pi(0),\pi(\infty)\}$, which is isomorphic to $\Spf k^\circ\langle X,Y\r angle/(XY-\varpi)$ for some uniformizer $\varpi$ of $k$. By taking further finite extensions of $k$ (and $K$), we may assume that there is an open immersion $\prod_t\f P_\acute{\r{e}}\r{t}a\subset B$ containing $b$ such that $\pi(b)=\b{0}$, where $\b{0}$ is the closed point in $\prod_t\c P_s$ that is nodal in every component. For $Y$, we proceed exactly as in the Step 1 of the proof of \cite{Berk07}*{Lemma 2.1.2}. We obtain two strictly $k$-affinoid domains $Z'\subset W'\subset Y$. As in the beginning of Step 3 of the proof of \cite{Berk07}*{Lemma 2.1.2}, we also get an integral scheme $\c Y'$ proper and flat over $k^\circ$ with an embedding $Y\subset\c Y^{\r{prim}e\r{an}}_\acute{\r{e}}\r{t}a$, open subschemes $\c Z'\subset\c W'\subset\c Y'_s$, such that $Z'=\pi^{-1}\c Z'$, $W'=\pi^{-1}\c W'$. Now we put two parts together. Define $\c Y=\prod_t\c P\mathtt{i}mes\c Y'$ where the fiber product is taken over $k^\circ$, and $\c W=\prod_t\f P_s\mathtt{i}mes\c W'$ where the fiber product is taken over $\widetilde{k}$. Then $W\coloneqq\prod_t\f P_\acute{\r{e}}\r{t}a\mathtt{i}mes W'$ coincides with $\pi^{-1}\c W$ in $\c Y_k^\r{an}$. 
Moreover, $W_K$ is an open neighborhood of $x$ where $W_K$ denotes the inverse image of $W$ in $X=(B\mathtt{i}mes_k Y)\widehat\otimes_kK$. Write $W=\b igcup_{i=1}^l W_i$ as in Step 2 of the proof of \cite{Berk07}*{Lemma 2.1.2}. By \cite{Berk07}*{Lemma 2.1.3 (ii)}, we may assume that $W_i$ are all $k$-affinoid by taking finite extensions of $k$ (and $K$). Making a finite number of additional blow-ups, we may also assume that there are open subschemes $\c W_i\subset\c W$ with $W_i=\pi^{-1}\c W_i$ and $\c W=\b igcup_{i=1}^l\c W_i$. Now we proceed as in Step 4 of the proof of \cite{Berk07}*{Lemma 2.1.2}. Take an alteration $\phi\colon\c X'\to\c Y$ after further finite extensions of $k$ (and $K$), and a point $x'\in\c X'^\r{an}_K$ such that $\phi(x')=x$. By a similar argument, one can show that $\pi(x')\in\c X'_s\otimes_{\widetilde{k}}\widetilde{K}$ has dimension at least $s$. On the other hand, we have $s(x')\geq s$ and $t(x')\geq t$. Thus, $s(x')=s$ and $t(x')=t$. Denote by $\c C$ the Zariski closure of $\pi(x')$ in $\c X'_s$, equipped with the reduced induced scheme structure. Suppose that $\c C$ is contained in $t'$ distinct irreducible components of $\c X'_s$. Then $t'\leq t$. We take an open subscheme $\c U'$ of $\c X'$ satisfying: $\c D'\coloneqq\c U'\cap\c C$ is open dense in $\c C$; $\phi(\c D')$ is contained in $\c W$; $\c U'$ is \'{e}tale over $\Spec k^\circ[T_0,\dots,T_n]/(T_0\cdots T_{t'}-\pi)$ for some uniformizer $\varpi$ of $k$ such that $\c D'$ is the zero locus of the ideal generated by $(T_0,\dots,T_t,\pi)$. Now we blow up the closed ideal generated by $(T_{t'+1},\varpi)$, and then the strict transform of the closed ideal generated by $(T_{t'+2},\varpi)$, and continue to obtain an affine strictly semi-stable scheme $\c X$ over $k^\circ$ such that the strict transform $\c D$ of $\c D'$ is an irreducible component of $\c X^{[t]}_s$. After further finite extensions of $k$ (and $K$) and replacing $\c X$ by an affine open subscheme such that $\c X_s\cap\c D$ is dense in $\c D$, we obtain a marked $k$-pair $(\c X,\c D)$ of dimension $n$ and depth $t$ such that $\phi\colon\c X\to\c Y$ is \'{e}tale on the generic fiber. Note that $(\phi_K^\r{an})^{-1}W_K$ is a neighborhood of $x'$ containing $\pi^{-1}\c D$ as $\phi(\c D)\subseteq\c W$. Here, $x'\in\c X^\r{an}_K$ is an arbitrary preimage of the original $x'\in\c X'^\r{an}_K$, which exists by construction. We take $U$ to be an arbitrary open neighborhood of $\pi^{-1}\c D$ contained in $(\phi_K^\r{an})^{-1}W_K$, and $\varphi$ to be $\phi_K^\r{an}\mathbin{|}_U$. By the same argument in Step 5 of the proof of \cite{Berk07}*{Lemma 2.1.2}, $\phi$ induces a morphism of $K^\circ$-formal schemes $\widehat\c X_{/\phi^{-1}\c W}\widehat\otimes_{k^\circ}K^\circ\to\f Y$ and thus a morphism $\widehat\c X_{/\c D}\widehat\otimes_{k^\circ}K^\circ\to\f Y$. The conclusions of the lemma are all satisfied by the construction. \end{proof} From now on, we assume that $K$ is a residually algebraic extension of $\b Q_p$. \b egin{definition}[Fundamental chart]\label{de:chart} Let $X$ be a $K$-analytic space and $x\in X$ a point. 
A \emph{fundamental chart} of $(X;x)$ consists of data $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)$ where \b egin{itemize} \item $(\c Y,\c D)$ is a marked $k$-pair of dimension $t(x)+s(x)$ and depth $t(x)$, where $k$ is a finite extension of $\b Q_p$, \item $\b D$ is an open poly-disc over $L$ of dimension $\dim_x(X)-t(x)-s(x)$, where $L$ is simultaneously a finite extension of $K$ and a (residually algebraic) extension of $k$, \item $D$ is an integrally smooth affinoid $k$-algebra, and \b egin{align}\label{eq:splitting} \delta\colon\Spf D^\circ[[T_0,\dots,T_t]]/(T_0\cdots T_t-\varpi)\xrightarrow{\sim}\widehat\c X_{/\c D} \end{align} is an isomorphism of formal $k^\circ$-schemes, where $\varpi$ is a uniformizer of $k$, \item $W$ is an open neighborhood of $(\c Y_{/\c D})_\acute{\r{e}}\r{t}a\widehat\otimes_kL=\pi^{-1}\c D_{\widetilde{L}}$ in $\c Y_L^\r{an}$, \item $y$ is a point in $\b D\mathtt{i}mes_L W$ such that it projects to $0$ in $\b D$ and a point in $W$ whose reduction is the generic point of $\c D_{\widetilde{L}}$, \item $\alpha\colon\b D\mathtt{i}mes_L W\to X$ is an \'{e}tale morphism of $K$-analytic spaces. \end{itemize} Note that the fields $k$ and $L$ will be implicit from the notation (as they are not important). \end{definition} The isomorphism \eqref{eq:splitting} induces an isomorphism $\Spec\widetilde{D}\simeq\c D$ of $\widetilde{k}$-schemes, and an isomorphism \b egin{align}\label{eq:decomposition} \delta^*\colon H^q_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))\xrightarrow{\sim} \b igoplus_{j=0}^q H^j_\r ig(\c D/k)\otimes_k H^{q-j}_\r{dR}(\b E^t_\varpi)\otimes_kL \end{align} of $L$-vector spaces \cite{GK02}*{Lemmas 2 \& 3} and \cite{Berk07}*{Corollary 5.5.2}. Here, $\b E^t_\varpi$ is the $k$-analytic space defined in Example \r ef{ex:torus}. Denote by $H^q_w(\b D,(\c Y,\c D),(D,\delta),W)$ the subspace of the left-hand side of \eqref{eq:decomposition} corresponding to the subspace \[\b igoplus_{j=0}^q H^j_\r ig(\c D/k)_{w-2(q-j)}\otimes_k H^{q-j}_\r{dR}(\b E^t_\varpi)\otimes_kL\] on the right-hand side. In particular, all elements in $H^{q-j}_\r{dR}(\b E^t_\varpi)$ are regarded to be of weight $2(q-j)$. Then we have a direct sum decomposition \b egin{align}\label{eq:decomp} H^q_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))=\b igoplus_{w\in\b Z}H^q_w(\b D,(\c Y,\c D),(D,\delta),W). \end{align} Finally, we denote by $H^q_{(w)}(\b D,(\c Y,\c D),(D,\delta),W)$ the subspace of $H^q_\r{dR}(\b D\mathtt{i}mes_LW)$ as the inverse image of $H^q_w(\b D,(\c Y,\c D),(D,\delta),W)$ under the restriction map \[H^q_\r{dR}(\b D\mathtt{i}mes_LW)\to H^q_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}})).\] In what follows, if $\b D$ is of dimension $0$, then we suppress it from all notations. \b egin{remark}\label{re:range} Note that $H^q_w(\b D,(\c Y,\c D),(D,\delta),W)=0$ unless $q\leq w\leq 2q$, and the decomposition \eqref{eq:decomp} is stable under base change along a residually algebraic extension of $K$ (and $L$ accordingly). We warn that the decomposition \eqref{eq:decomp} depends on all of the data $(\b D,(\c Y,\c D),(D,\delta),W)$, not just the $L$-analytic germ $\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}})$. (However, the dependence on $\b D$ and $W$ is very weak.) \end{remark} \b egin{definition} Let $X$ be a $K$-analytic space and $x\in X$ a point. 
\b egin{enumerate} \item Let $\f Et(X;x)$ be the category whose objects are fundamental charts of $(X;x)$, and a morphism \[\phi\colon(\b D_2,(\c Y_2,\c D_2),(D_2,\delta_2),W_2,\alpha_2;y_2)\to (\b D_1,(\c Y_1,\c D_1),(D_1,\delta_1),W_1,\alpha_1;y_1)\] consists implicitly extensions of related fields $K\subset L_1\subset L_2$ such that $k_1\subset k_2$, and a morphism $\Phi(\phi)\colon \b D_2\mathtt{i}mes_{L_2}W_2\to\b D_1\mathtt{i}mes_{L_1}W_1$ of $L_1$-analytic spaces sending $y_2$ to $y_1$, and such that \[\Phi(\phi)^*H^q_{(w)}(\b D_1,(\c Y_1,\c D_1),(D_1,\delta_1),W_1)\subset H^q_{(w)}(\b D_2,(\c Y_2,\c D_2),(D_2,\delta_2),W_2)\] for all $q,w\in \b Z$. Note that $\Phi(\phi)$ needs \emph{not} to respect each factors. \item Let $\acute{\r{E}}\r{t}(X;x)$ be the category of \'{e}tale neighborhoods of $(X;x)$. Recall that its objects are triples $(Y,\alpha;y)$ where $\alpha\colon Y\to X$ is an \'{e}tale morphism sending $y\in Y$ to $x$, and morphisms are defined in the obvious way. In the notation $(Y,\alpha;y)$, the morphism $\alpha$ will be suppressed if it is not relevant. For a presheaf $\s F$ on $X_{\acute{\r{e}}\r{t}}$, the stalk of $\s F$ at $x$ is defined to be $\s F_x\coloneqq \varinjlim_{(Y,\alpha;y)} \s F(Y)$ where the colimit is taken over the category $\acute{\r{E}}\r{t}(X;x)$. \item We have a functor $\Phi\colon\f Et(X;x)\to\acute{\r{E}}\r{t}(X;x)$ sending an object $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)$ of $\f Et(X;x)$ to $(\b D\mathtt{i}mes_LW,\alpha;y)$, and a morphism $\phi$ to $\Phi(\phi)$. \end{enumerate} \end{definition} The following lemma generalizes \cite{Berk07}*{Proposition 2.1.1}. \b egin{lem}\label{le:initial} Suppose that $K$ is embeddable into $\b C_p$ and $X$ is a smooth $K$-analytic space. Fix an arbitrary point $x\in X$ and let $(Y,\alpha_0;y_0)$ be an object of $\acute{\r{E}}\r{t}(X;x)$. Then \b egin{enumerate} \item there exists an object $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)\in\f Et(X;x)$ such that its image under $\Phi$ admits a morphism to $(Y,\alpha_0;y_0)$; \item given two morphisms $\b eta_i\colon\Phi(\b D_i,(\c Y_i,\c D_i),(D_i,\delta_i),W_i,\alpha_i;y_i) \to(Y,\alpha_0;y_0)$ in $\acute{\r{E}}\r{t}(X;x)$ for $i=1,2$, there exists an object $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)\in\f Et(X;x)$ together with morphisms $\phi_i$ to $(\b D_i,(\c Y_i,\c D_i),(D_i,\delta_i),W_i,\alpha_i;y_i)$ in $\f Et(X;x)$ for $i=1,2$ such that the following diagram \[\xymatrix{ & \Phi(\b D_1,(\c Y_1,\c D_1),(D_1,\delta_1),W_1,\alpha_1;y_1) \ar[rd]^-{\b eta_1} \\ \Phi(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y) \ar[ru]^-{\Phi(\phi_1)}\ar[rd]_-{\Phi(\phi_2)} && (Y,\alpha_0;y_0) \\ & \Phi(\b D_2,(\c Y_2,\c D_2),(D_2,\delta_2),W_2,\alpha_2;y_2) \ar[ru]_-{\b eta_2} }\] commutes. \end{enumerate} In particular, the functor $\Phi\colon\f Et(X;x)\to\acute{\r{E}}\r{t}(X;x)$ is initial. \end{lem} \b egin{proof} We may assume that $X$ is of dimension $n$. Put $t=t(x)$ and $s=s(x)$. For (1), by \cite{Berk07}*{Proposition 2.3.1}, after taking a finite extension of $K$, we may assume that $Y=\b D\mathtt{i}mes_K X'$ and $y_0=(0,x')$ (which makes sense) for a point $x'\in X'$ with $t(x')=t$ and $s(x')=s$, where $X'$ is a smooth $K$-analytic space of dimension $s+t$. Now we only need to apply Lemma \r ef{le:212} to $\f Y=\Spf K^\circ$, the pair $(X';x')$, and the structure morphism $X'\to\f Y_\acute{\r{e}}\r{t}a=\c M(K)$. The existence of $(D,\delta)$ is due to the argument in Part (iv) of the proof of \cite{GK02}*{Theorem 2.3}. For (2), we may assume that $K=L_1=L_2$. 
For $i=1,2$, we choose a relative compactification $\c Y_i\hookrightarrow\overline{\c Y_i}$ over $k_i^\circ$, where $\overline{\c Y_i}$ is proper. Then $W_i$ is open in $\f Y_{i\acute{\r{e}}\r{t}a}$, where $\f Y_i=\widehat{\overline{\c Y_i}}\widehat\otimes_{k_i^\circ}K^\circ$. Consider the \'{e}tale morphism \[\alpha'_0\colon Y'\coloneqq(\b D_1\mathtt{i}mes_KW_1)\mathtt{i}mes_Y(\b D_2\mathtt{i}mes_KW_2) \to Y,\] and a point $y'_0\in Y'$ projecting to $y_1$ (resp.\ $y_2$) in the first (resp.\ second) factor. Again by \cite{Berk07}*{Proposition 2.3.1}, we may find an object of the form $(\b D\mathtt{i}mes_K X',\alpha';(0,x'))$ in $\acute{\r{E}}\r{t}(X;x)$ as in (1) with a morphism to $(Y',\alpha'_0;y'_0)$. Now we apply Lemma \r ef{le:212} to $X'$, the point $x'$, $\f Y=\f Y_1\mathtt{i}mes_{K^\circ}\f Y_2$, the morphism \[X'\xrightarrow{(\b eta_1,\b eta_2)} W_1\mathtt{i}mes_K W_2 \subset \f Y_\acute{\r{e}}\r{t}a,\] where $\b eta_i$ equals the composition \[X'\simeq\{0\}\mathtt{i}mes_KX'\subset\b D\mathtt{i}mes_K X'\to\b D_i\mathtt{i}mes_K W_i\to W_i \quad(i=1,2)\] with the last arrow being the projection. We obtain a marked $k$-pair $(\c Y,\c D)$ of dimension $s+t$ and depth $t$, for some discrete non-Archimedean field $k$ containing $k_1,k_2$ and contained in (possibly a finite extension of) $K$; an open neighborhood $W$ of $(\widehat\c Y_{/\c D})_\acute{\r{e}}\r{t}a\widehat\otimes_kK$ in $\c Y_K^\r{an}$, a point $y'\in W$, a morphism of $K$-analytic spaces $\varphi\colon W\to X'$ such that $\varphi(y')=x'$, and a morphism of formal $K^\circ$-schemes $\psi=(\psi_1,\psi_2)\colon\widehat\c Y_{/\c D}\widehat\otimes_{k^\circ}K^\circ\to\f Y_1\mathtt{i}mes_{K^\circ}\f Y_2$ compatible with $\varphi$. As $\psi_i$ maps the generic point of $\c D_{\widetilde{K}}$ to the generic point of $(\c D_i)_{\widetilde{K}}$, we may replace $(\c Y,\c D)$ by an affine open such that $\psi_i(\c D_{\widetilde{K}})\subset(\c D_i)_{\widetilde{K}}$ for $i=1,2$. In particular, we have morphisms $\psi_i\colon\widehat\c Y_{/\c D}\widehat\otimes_{k^\circ}K^\circ\to\widehat{\c Y_i}_{/\c D_i}\widehat\otimes_{k_i^\circ}K^\circ$. Note that $\psi_i$ does not necessarily descent to any finite extension of $k$. By the proof of \cite{GK02}*{Theorem 2.3}, there is an integrally smooth $k$-affinoid algebra $D$ and an isomorphism $\delta$ as in \eqref{eq:splitting}. Now the object $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)$ has been constructed with $y=(0,y')$ and the obvious $\alpha$. Let $\Phi(\phi_i)$ be the composite morphism $\b D\mathtt{i}mes_KW\to\b D\mathtt{i}mes_KX'\to\b D_i\mathtt{i}mes_K W_i$ for $i=1,2$. 
It remains to show that \b egin{enumerate}[(i)] \item For $i=1,2$, every $q$, every $w$, and an element $\omega\in H^q_\r ig(\c D_i/k_i)_w$, we have \[(\b eta_i\circ\varphi)^*(\delta_i^*)^{-1}\omega\in H^q_{(w)}((\c Y,\c D),(D,\delta),W).\] \item For $i=1,2$ and an arbitrary coordinate $T$ of $\b E^t_{\varpi_i}$ (where $\varpi_i$ is a uniformizer of $k_i$), we have \[(\b eta_i\circ\varphi)^*(\delta_i^*)^{-1}\f rac{\r d T}{T}\in H^1_{(2)}((\c Y,\c D),(D,\delta),W).\] \end{enumerate} Note that the composite morphism of formal $k^\circ$-schemes \[\Spf((E^t_\varpi)^\circ\widehat\otimes_{k^\circ}D^\circ)\to\Spf D^\circ[[T_1,\dots,T_t]]/(T_1\cdots T_t-\varpi)\xrightarrow{\delta}\widehat\c Y_{/\c D}\] induces an isomorphism \[H^q_\r{dR}(W,\pi^{-1}\c D_{\widetilde{K}})\simeq H^q_\r ig(\Spec\widetilde{E^t_\varpi}\mathtt{i}mes_{\widetilde{k}}\c D_{\widetilde{K}}/K)\] under which \[H^q_w((\c Y,\c D),(D,\delta),W)=H^q_\r ig(\Spec\widetilde{E^t_\varpi}\mathtt{i}mes_{\widetilde{k}}\c D_{\widetilde{K}}/K)_w\] for every $q$ and every $w$. For (i), as we have morphisms of formal $K^\circ$-schemes \[\Spf((E^t_\varpi)^\circ\widehat\otimes_{k^\circ}D^\circ\widehat\otimes_{k^\circ}K^\circ) \to\widehat{\c Y}_{/\c D}\widehat\otimes_{k^\circ}K^\circ\xrightarrow{\psi_i} \widehat{\c Y_i}_{/\c D_i}\widehat\otimes_{k_i^\circ}K^\circ\to\Spf D^\circ_i\widehat\otimes_{k_i^\circ}K^\circ,\] Lemma \r ef{le:dagger} implies that $(\b eta_i\circ\varphi)^*(\delta_i^*)^{-1}\omega$ coincides with $\varphi_i^*\omega$ in $H^q_\r{dR}(W,\pi^{-1}\c D_{\widetilde{K}})$, where \[\varphi_i\colon\Spec\widetilde{E^t_\varpi}\mathtt{i}mes_{\widetilde{k}}\c D_{\widetilde{K}} \to(\c D_i)_{\widetilde{K}}\] is the induced morphism of (affine smooth) $\widetilde{K}$-schemes. For (ii), we may assume that $\Spf D^\circ_i$ has a $K^\circ$-point by replacing $K$ by a finite extension (at the very beginning). Thus we have morphisms of formal $K^\circ$-schemes \[\Spf((E^t_\varpi)^\circ\widehat\otimes_{k^\circ}D^\circ\widehat\otimes_{k^\circ}K^\circ) \to\widehat{\c Y}_{/\c D}\widehat\otimes_{k^\circ}K^\circ\xrightarrow{\psi_i} \widehat{\c Y_i}_{/\c D_i}\widehat\otimes_{k_i^\circ}K^\circ\to\f E^t_{\varpi_i}\widehat\otimes_{k_i^\circ}K^\circ \xrightarrow{T}\Spf K^\circ[[T]].\] On the generic fiber, the image of the induced morphism $\c M(E^t_\varpi\widehat\otimes_kD\widehat\otimes_kK)\to D(0;1)$ does not contain $0$, which implies that it factors through a morphism $\c M(E^t_\varpi\widehat\otimes_kD\widehat\otimes_kK)\to\c M(K\langle r^{-1}T,rT^{-1}\r angle)$ for a unique $r<1$ in $\sqrt{|K^\mathtt{i}mes|}$ as $(E^t_\varpi)^\circ\widehat\otimes_{k^\circ}D^\circ$ is smooth over $k^\circ$. By taking a finite extension of $K$, we may assume that $r\in|K^{\circ\circ}|$. Then $K\langle r^{-1}T,rT^{-1}\r angle$ is integrally smooth, and we have $\Spec\widetilde{K\langle r^{-1}T,rT^{-1}\r angle}\simeq(\b G_\r{m})_{\widetilde{K}}$. Moreover, \[H^1_\r ig(\b G_\r{m}/K)_2=H^1_\r ig(\b G_\r{m}/K)\simeq H^1_\r{dR}(D(0,1),\c M(K\langle r^{-1}T,rT^{-1}\r angle))=K\{\f rac{\r d T}{T}\}.\] Thus Lemma \r ef{le:dagger} implies (ii). \end{proof} \b egin{remark}\label{re:decomp} The above lemma with its proof implies the following: For part of the data $(\b D,(\c Y,\c D),(D,\delta),W)$ from Definition \r ef{de:chart} and $f\in\c O^*(\b D\mathtt{i}mes_LW)$, we have \[\f rac{\r d f}{f}\in H^1_{(2)}(\b D,(\c Y,\c D),(D,\delta),W).\] Here, we regard $\f rac{\r d f}{f}$, a priori a closed $1$-form on $\b D\mathtt{i}mes_LW$, as an element in $H^1_\r{dR}(\b D\mathtt{i}mes_LW)$. 
\end{remark} Now we are ready to define the desired direct summand $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ in the weight decomposition of de Rham cohomology sheaves. \b egin{definition}[De Rham cohomology sheaves with weights]\label{de:weight} Suppose that $K$ is residually algebraic over $\b Q_p$ and $X$ is a smooth $K$-analytic space. For every object $U$ of $X_{\acute{\r{e}}\r{t}}$, define $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)(U)^\r{pre}_w\subset(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)(U)$ to be the image of elements $\omega\in\Omega^{q,\r{cl}}_X(U)$ such that for every point $u\in U$, there exists a fundamental chart $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)$ of $(U;u)$ such that $\alpha^*\omega$, regarded as an element in $H^q_\r{dR}(\b D\mathtt{i}mes_L W)$, belongs to $H^q_{(w)}(\b D,(\c Y,\c D),(D,\delta),W)$. The assignment $U\mapsto(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)(U)^\r{pre}_w$ defines a sub-presheaf $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)^\r{pre}_w$ of $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X$. We define $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ to be the sheafification of $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)^\r{pre}_w$, which is canonically a subsheaf of $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X$. \end{definition} The following lemma can be proved by the same way as for \cite{Berk07}*{Corollary 5.5.3}. \b egin{lem}\label{le:base_change} Let $K'/K$ be an extension such that $K'$ is embeddable into $\b C_p$. Let $X$ be a smooth $K$-analytic space and $\varsigma\colon X'\coloneqq X\widehat\otimes_KK'\to X$ the canonical projection. Then the canonical map of sheaves on $X'_{\acute{\r{e}}\r{t}}$ \[\varsigma^{-1}(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)\otimes_LK'\to\Omega^{q,\r{cl}}_{X'}/\r d\Omega^{q-1}_{X'}\] is an isomorphism, where $L$ is the algebraic closure of $K$ in $K'$. \end{lem} The following theorem establishes the functorial weight decomposition of de Rham cohomology sheaves in Theorem \r ef{th:1} in the case of \'{e}tale topology. \b egin{theorem}\label{th:weight} If $K$ is embeddable into $\b C_p$ and $X$ is a smooth $K$-analytic space, then we have that \b egin{enumerate} \item under the situation of Lemma \r ef{le:base_change}, \[\varsigma^{-1}(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w\otimes_LK'=(\Omega^{q,\r{cl}}_{X'}/\r d\Omega^{q-1}_{X'})_w,\] for every $w\in\b Z$; \item the image of the composite map \[(\Omega^{q_1,\r{cl}}_X/\r d\Omega^{q_1-1}_X)_{w_1}\otimes(\Omega^{q_2,\r{cl}}_X/\r d\Omega^{q_2-1}_X)_{w_2} \to\Omega^{q_1,\r{cl}}_X/\r d\Omega^{q_1-1}_X\otimes\Omega^{q_2,\r{cl}}_X/\r d\Omega^{q_2-1}_X \xrightarrow{\wedge}\Omega^{q_1+q_2,\r{cl}}_X/\r d\Omega^{q_1+q_2-1}_X\] is contained in the subsheaf $(\Omega^{q_1+q_2,\r{cl}}_X/\r d\Omega^{q_1+q_2-1}_X)_{w_1+w_2}$; \item the sheaf $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ is zero unless $q\leq w\leq 2q$; \item the canonical map \[\b igoplus_{w\in\b Z}(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w\to\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X\] is an isomorphism; \item for every morphism $f\colon Y\to X$ of smooth $K$-analytic spaces, we have \[f^\#(f^{-1}(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w)\subset(\Omega^{q,\r{cl}}_Y/\r d\Omega^{q-1}_Y)_w\] for every $w\in\b Z$. Here, $f^\#$ denotes the canonical map $f^{-1}\Omega_X^\b ullet\to\Omega_Y^\b ullet$ and induced maps of cohomology sheaves. \end{enumerate} \end{theorem} \b egin{proof} Part (1) follows from the definition and Remark \r ef{re:range}. Part (2) follows from definition and Lemma \r ef{le:initial} (2). 
For the remaining parts, it suffices to work on stalks. Thus we fix a point $x\in X$ with $t=t(x)$ and $s=s(x)$. For (3), take an element $[\omega]$ in the stalk of $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ at $x$ for some $w<q$ or $w>2q$. We may assume that it has a representative $\omega\in\Omega^{q,\r{cl}}_X(U)$ for some \'{e}tale neighborhoods $(U;u)$ of $(X;x)$. By definition, we have a fundamental chart $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)$ of $(U;u)$ such that $\alpha^*\omega=0$ in $H^q_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))$ by Remark \r ef{re:range}. Then there exists an open neighborhood $W^-$ of $\pi^{-1}\c D_{\widetilde{L}}$ in $W$, such that $\alpha^*\omega=0$ in $H^q_\r{dR}(\b D\mathtt{i}mes_LW)$. In other words, $[\omega]=0$ in the stalk of $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ at $x$. For (4), we first show that the map is injective. Let $[\omega]$ be an element in the stalk $\Omega^{q,\r{cl}}_{X,x}/\r d\Omega^{q-1}_{X,x}$. Suppose that we have $[\omega]=\sum[\omega]^1_w=\sum[\omega]^2_w$ in which both $[\omega]^1_w$ and $[\omega]^2_w$ are in the stalk of $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ at $x$. We may choose an object $(U;u)\in\acute{\r{E}}\r{t}(X,x)$ such that $[\omega]^i_w$ has a representative $\omega^i_w\in(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)(U)^\r{pre}_w$ for $i=1,2$ and every $w\in\b Z$, and $\sum\omega^1_w=\sum\omega^2_w$. In particular, $[\omega]$ has a representative $\omega\coloneqq\sum\omega^1_w=\sum\omega^2_w$ on $(U;u)$. Fix a weight $w\in\b Z$. It suffices to show that $[\omega^1_w]=[\omega^2_w]$ in the stalk at $x$. By Definition \r ef{de:weight}, there exist two fundamental charts $(\b D_i,(\c Y_i,\c D_i),(D_i,\delta_i),W_i,\alpha_i;y_i)$ of $(U;u)$ such that $\alpha_i^*\omega^i_w$ belongs to $H^q_{(w)}(\b D_i,(\c Y_i,\c D_i),(D_i,\delta_i),W_i)$ for $i=1,2$. By Lemma \r ef{le:initial}, we may find another fundamental chart $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)\in\f Et(U;u)$ as in that lemma. Then we have $\Phi(\phi_i)^*\omega^i_w\in H^q_{(w)}(\b D,(\c Y,\c D),(D,\delta),W)$ for both $i=1,2$. However, $\Phi(\phi_1)^*\omega^1_w$ and $\Phi(\phi_2)^*\omega^2_w$, after restriction to $H^q_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))$, must be equal, as they are both the weight $w$ component of $\alpha^*\omega$ in $H^q_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))$ under the decomposition \eqref{eq:decomp}. As the map $H^q_\r{dR}(\b D\mathtt{i}mes_LW)\to (\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_x$ factors through $H^q_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))$, we have $[\omega]^1_w=[\omega]^2_w$. Finally, Lemma \r ef{le:stalk} below implies that the map in (4) is surjective as well. For (5), we take a point $y\in Y$ such that $f(y)=x$. We may take a fundamental chart $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)$ of $(X;x)$ and replace $X$ by $\b D\mathtt{i}mes_LW$ and $x$ by a point $(0,x)$ where $x\in W$ with $t(x)=t$ and $s(x)=s$ such that $\dim W=s+t$. By the same proof of Lemma \r ef{le:initial} (2), we may find a fundamental chart $(\b D',(\c Y',\c D'),(D',\delta'),W',\alpha';y')$ of $(Y;y)$ such that $(f\circ\alpha')^*H^q_{(w)}(\b D,(\c Y,\c D),(D,\delta),W)\subset H^q_{(w)}(\b D',(\c Y',\c D'),(D',\delta'),W')$. 
This confirms Part (5) since $H^q_{(w)}(\b D,(\c Y,\c D),(D,\delta),W)$ (resp.\ $H^q_{(w)}(\b D',(\c Y',\c D'),(D',\delta'),W')$) restricts to the weight $w$ part in the stalk of $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X$ (resp.\ $\Omega^{q,\r{cl}}_Y/\r d\Omega^{q-1}_Y$) at $x$ (resp.\ $y$), by Lemma \r ef{le:stalk} below. \end{proof} The following lemma is the most crucial and difficult part in the proof of the weight decomposition. \b egin{lem}\label{le:stalk} Let the assumptions be as in Theorem \r ef{th:weight}. We take a point $x\in X$. For any fixed weight $w$, an object $(\b D,(\c Y,\c D),(D,\delta),W,\alpha;y)\in\f Et(X,x)$, and an element $\omega\in H^q_{(w)}(\b D,(\c Y,\c D),(D,\delta),W)$, the induced class $[\omega]\in\Omega^{q,\r{cl}}_{X,x}/\r d\Omega^{q-1}_{X,x}$ belongs to the stalk of $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ at $x$. \end{lem} \b egin{proof} We may assume $L=K$ where $L$ is the finite extension of $K$ implicitly contained in the data of the fundamental chart. To simplify notation, we denote by $V$ the strictly $K$-affinoid domain $\pi^{-1}\c D_{\widetilde{K}}$ in $W$. As $\b D$ will be irrelevant in the discussion, we will regard $y$ as a point in $V$. Moreover, by possibly shrinking $(\c Y,\c D)$, the decomposition \eqref{eq:decomposition} and Remark \r ef{re:decomp}, we may assume that the image of $\omega$ in $H^q_\r{dR}(\b D\mathtt{i}mes_K(W,V))$ is in $H^q_\r ig(\c D/K)_w$. \mathtt{e}xtbf{Step 1.} We choose a smooth $k^\circ$-algebra $D^\natural$ (of dimension $s$) such that its $\varpi$-adic completion is $D^\circ$, where we recall that $\varpi$ is a uniformizer of the discrete non-Archimedean field $k\subset K$. In particular, we may identify $(\Spec D^\natural)_s$ with $\c D$, and $\c M(D)$ with a strictly $k$-affinoid domain in $(\Spec D^\natural)_k^\r{an}$. As in Lemma \r ef{le:dagger}, we have germs $(W,V)$ and $((\Spec D^\natural)_k^\r{an},\c M(D))$ and a morphism $V\to\c M(D)\widehat\otimes_kK$ induced from $\delta$. We choose a neighborhood $U_\epsilon$ of the graph of the previous morphism as in the proof of Lemma \r ef{le:dagger}, such that the induced map \[H^\b ullet_\r{dR}(W,V)\to H^\b ullet_\r{dR}(W\mathtt{i}mes_k(\Spec D^\natural)_k^\r{an},U_\epsilon\cap(V\mathtt{i}mes_k\c M(D)))\] is an isomorphism. By a similar argument in the proof of \cite{GK02}*{Lemma 2}, we may replace $W$ by a smaller open neighborhood of $V$ such that there is a morphism $W\to U_\epsilon$ sending $V$ into $U_\epsilon\cap(V\mathtt{i}mes_k\c M(D))$ whose induced map \[H^\b ullet_\r{dR}(W\mathtt{i}mes_k(\Spec D^\natural)_k^\r{an},U_\epsilon\cap(V\mathtt{i}mes_k\c M(D)))\to H^\b ullet_\r{dR}(W,V)\] is the inverse of the previous isomorphism. In other words, we have a morphism $\delta'\colon W\to(\Spec D^\natural)_K^\r{an}$ sending $V$ into $\c M(D)\widehat\otimes_kK$ such that, although $\delta'\mathbin{|}_{V}$ might not coincide with the original morphism $V\to\c M(D)\widehat\otimes_kK$ induced from $\delta$, we still have that the induced map \[H^\b ullet_\r ig(\c D/K)\simeq H^\b ullet_\r{dR}((\Spec D^\natural)_k^\r{an},\c M(D))\otimes_kK \xrightarrow{\delta'^*} H^\b ullet_\r{dR}(W,V)\] coincides with the map induced from the K\"{u}nneth decomposition \eqref{eq:decomposition} (where $\b D$ is trivial). \mathtt{e}xtbf{Step 2.} We choose a compactification $(\Spec D^\natural)_k\hookrightarrow\overline{\c S}_k$ over $k$, and define $\overline{\c S}$ to be the $k^\circ$-scheme $\overline{\c S}_k\coprod_{(\Spec D^\natural)_k}\Spec D^\natural$. 
Apply \cite{dJ96}*{Theorem 8.2} to the $k^\circ$-variety $\overline{\c S}$ and $Z=\emptyset$. We obtain a finite extension $k'/k$, an alteration $\c S^\natural\to\overline{\c S}_{k'^\circ}$, and a $k'^\circ$-compactification $\c S^\natural\hookrightarrow\c S$, where $\c S$ is a projective strictly semi-stable scheme over $k'^\circ$ such that $\c S\b ackslash\c S^\natural$ is a strict normal crossing divisor of $\c S$ (concentrated on the special fiber). We may further assume that all irreducible components of $\c S_s$ are geometrically irreducible. To ease notation, we replace $k$ by $k'$ and possibly $K$ by a finite extension. We may fix an irreducible component $\c E$ of $\c S_s$ such that its generic point belongs to $\c S^\natural_s$ and maps to the generic point of $\overline{\c S}_s\simeq\c D$. Note that the complement of $\c E^\natural\coloneqq\c E\cap\c S^\natural_s$ in $\c E$ is exactly $\c S_s^{[1]}\cap\c E$. Denote by $\sigma_\c E$ the unique point in $\c S_K^\r{an}$ whose reduction is the generic point of $\c E_{\widetilde{K}}$. Then $\pi^{-1}\c E_{\widetilde{K}}$ is an open neighborhood of $\sigma_\c E$. Define $W^\natural$ via the pullback square in the following diagram: \[\xymatrix{ W^\natural \ar[r]\ar[d]\ar@/^2pc/[rr]^-{\delta^\natural} & (\c S^\natural)_K^\r{an} \ar@{^(->}[r]\ar[d] & \c S_K^\r{an} \\ W \ar[r]^-{\delta'} & (\overline{\c S})_K^\r{an}, }\] and $\delta^\natural\colon W^\natural\to\c S_K^\r{an}$ as the composition in the upper row. We may choose a point $y^\natural\in W^\natural$ which lifts $y$ and maps to $\sigma_\c E$ in $\c S_K^\r{an}$. The image of the form $\omega$ in $H^q_\r ig(\c D/K)_w$ induces a class $[\omega^\natural]\in H^q_\r ig(\c E^\natural/K)_w$ via restriction along the alteration. By taking a finite unramified extension of $k$ (and possibly a finite extension of $K$), we may assume that $(\Fr^*-p^{fw})^N[\omega^\natural]=0$ for some integer $N\geq 1$, where $\#\widetilde{k}=p^{2f}$ and $\Fr$ denotes the relative Frobenius of $\c E^\natural/\widetilde{k}$. Put $\f S=\widehat{\c S}_{/\c E}$. We fix an open neighborhood $U$ of $\pi^{-1}\c E^\natural$ in $\f S_\acute{\r{e}}\r{t}a$ such that $[\omega^\natural]$ has a representative $\omega^\natural\in H^q_\r{dR}(U\widehat\otimes_kK)$. \mathtt{e}xtbf{Step 3.} We are now going to shrink $U$ such that $\omega^\natural$ has controlled behavior on $U\b ackslash\pi^{-1}\c E^\natural$. We may cover $\f S$ by finitely many special open formal $k^\circ$-subschemes $\f S_i$ satisfying the following conditions: Each $\f S_i$ is \'{e}tale over \[\Spf k^\circ[[t_0]]\langle t_1,\dots, t_r,t_{r+1},t_{r+1}^{-1},\dots,t_s,t_s^{-1}\r angle/(t_0\cdots t_r-\varpi)\] for some $0\leq r=r_i\leq s$; $\c E_i\coloneqq\c E\cap\f S_{i,s}$ is affine; and if we write $f_{i,j}$ for the image of $t_j$ in $\f S_i$, then $\c E^\natural_i\coloneqq\c E^\natural\cap\f S_{i,s}$ is defined by the equations $f_{i,0}=0$ and $f_{i,1}\cdots f_{i,r}\neq 0$. Define the formal $k^\circ$-scheme $\f S_i^\natural$ via the following pullback diagram \[\xymatrix{ \f S_i^\natural \ar[r]\ar[d] & \f S_i \ar[d] \\ \Spf k^\circ\langle t_1,t_1^{-1},\dots,t_s,t_s^{-1}\r angle \ar[r] &\Spf k^\circ[[t_0]]\langle t_1,\dots, t_r,t_{r+1},t_{r+1}^{-1},\dots,t_s,t_s^{-1}\r angle/(t_0\cdots t_r-\varpi). }\] Then $(\f S_i^\natural)_\acute{\r{e}}\r{t}a=\pi^{-1}\c E_i^\natural$ in $\f S_{i\acute{\r{e}}\r{t}a}$.
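The following special case is included only for orientation and plays no role in the argument. When $r_i=s=1$ and $\f S_i$ equals the displayed local model $\Spf k^\circ[[t_0]]\langle t_1\rangle/(t_0t_1-\varpi)$, the generic fiber of $\f S_i$ is the semi-open annulus \[\{|\varpi|<|t_1|\leq 1\},\qquad |t_0|=|\varpi|/|t_1|,\] the locus $\c E_i^\natural$ is $\{t_0=0,\ t_1\neq 0\}$ in the special fiber, and the tube $\pi^{-1}\c E_i^\natural$ is the subset $\{|t_1|=1\}$ of this annulus, in accordance with the identification in the preceding sentence.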
For $0<\epsilon<1$, denote by $\f S_{i\acute{\r{e}}\r{t}a}(\epsilon)$ the open subset of $\f S_{i\acute{\r{e}}\r{t}a}$ defined by the inequality $|f_{i,0}|<|\varpi|^{1-\epsilon}$. Then the subsets $\f S_{i\acute{\r{e}}\r{t}a}(\epsilon)$ form a fundamental system of open neighborhoods of $(\f S_i^\natural)_\acute{\r{e}}\r{t}a$ in $\f S_{i\acute{\r{e}}\r{t}a}$. In fact, the open subset $\f S_{i\acute{\r{e}}\r{t}a}(\epsilon)$ does not depend on the choice of the \'{e}tale coordinates as above. We choose an open neighborhood $U_i$ of $(\f S_i^\natural)_\acute{\r{e}}\r{t}a$ in $\f S_\acute{\r{e}}\r{t}a$ contained in $U$, together with an absolute Frobenius lifting $\phi_i\colon U_i\to U$ satisfying the following properties: \b egin{enumerate}[(a)] \item $\phi_i^*f_{i,j}=f_{i,j}^{p^{2f}}$ for $j=1,\dots,r$ (as in \cite{Chi98}*{Lemma 3.1.1}); \item $|(\phi_i^*g-g^{p^{2f}})(x)|<1$ for all regular functions $g$ on $\f S_i^\natural$ and all $x\in U_i$ at which both $g$ and $\phi_i^*g$ are defined (as in \cite{Berk07}*{Lemma 6.1.1}); \item $(\phi_i^*-p^{fw})^M\omega^\natural=0$ in $H^q_\r{dR}(U_i\widehat\otimes_kK)$ for some integer $M\geq 1$. \end{enumerate} Since $U_i\cap\f S_{i\acute{\r{e}}\r{t}a}$ is an open neighborhood of $(\f S_i^\natural)_\acute{\r{e}}\r{t}a$ in $\f S_{i\acute{\r{e}}\r{t}a}$, there exists some $\epsilon_i>0$ such that $\f S_{i\acute{\r{e}}\r{t}a}(\epsilon_i)\subset U_i$. Take $\epsilon=\min_i\{\epsilon_i\}>0$, and replace $U$ by the union $\f S_\acute{\r{e}}\r{t}a(\epsilon)\coloneqq\b igcup_i\f S_{i\acute{\r{e}}\r{t}a}(\epsilon)$ in $\f S_\acute{\r{e}}\r{t}a$, which is an intrinsically defined open neighborhood of $\pi^{-1}\c E^\natural$ in $\f S_\acute{\r{e}}\r{t}a$. We further assume that $\epsilon$ is sufficiently close to $0$ in terms of $p^f$, $s$, and $|\varpi|$. Now we replace $W^\natural$ by $W^\natural\cap(\delta^\natural)^{-1}(\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kK)$. By construction, we may remove a Zariski closed subset of $W^\natural$ of dimension at most $s+t-1$ such that the resulting morphism $\b D\mathtt{i}mes_K W^\natural\to \b D\mathtt{i}mes_K W\xrightarrow{\alpha} X$ is \'{e}tale. In particular, $(\b D\mathtt{i}mes_K W^\natural;(0,y^\natural))$ is an object of $\acute{\r{E}}\r{t}(X;x)$. \mathtt{e}xtbf{Step 4.} It remains to show the following claim: For every point $u\in W^\natural$, there exists a fundamental chart $(\b D',(\c Y',\c D'),(D',\delta'),W',\alpha';y')$ of $(W^\natural;u)$ such that $\alpha'^*(\delta^\natural)^*\omega^\natural$ belongs to $H^q_{(w)}(\b D',(\c Y',\c D'),(D',\delta'),W')$. We start as in the proof of Lemma \r ef{le:initial}. By \cite{Berk07}*{Proposition 2.3.1}, after taking a finite extension of $K$, we may assume that $W^\natural=\b D\mathtt{i}mes_K X'$ and $u=(0,x')$ for a point $x'\in X'$ with $t(x')=t(u)$ and $s(x')=s(u)$, where $X'$ is a smooth $K$-analytic space of dimension $s(u)+t(u)$. Thus we have a morphism $\delta^\natural\colon X'\simeq\{0\}\mathtt{i}mes_KX'\to \f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kK$. If $\delta^\natural(u)$ belongs to $(\pi^{-1}\c E^\natural)\widehat\otimes_kK$, then our claim follows in the same way as Claim (i) in the proof of Lemma \r ef{le:initial} (2). In general, $\delta^\natural(u)$ belongs to $\f S_{i\acute{\r{e}}\r{t}a}(\epsilon)\widehat\otimes_kK$ for some $i$, and its reduction $\pi(\delta^\natural(u))$ belongs to $(\c S_s^{[r']}\b ackslash\c S_s^{[r'+1]})\cap\c E_i$ for a unique $0\leq r'\leq r_i$. (If $r'=0$, then we are back to the previous special case.)
Without loss of generality, we may assume that $r'=r_i=r$. Let $\c F\subset\c E$ be the irreducible component of $\c S_s^{[r]}\b ackslash\c S_s^{[r+1]}$ to which $\pi(\delta^\natural(u))$ belongs. By shrinking $\f S_i$, we may assume that $\c F$ is defined by the equations $f_{i,0}=\cdots=f_{i,r}=0$, and there exists an integrally smooth $k$-affinoid algebra $F$ together with an isomorphism \b egin{align}\label{eq:splitting_temp} \Spf F^\circ[[t_{i,0},\dots,t_{i,r}]]\simeq\widehat{\f S_i}_{/\c F} \end{align} of formal $k^\circ$-schemes, sending $t_{i,j}$ to $f_{i,j}$. Therefore, we have an isomorphism of graded $k$-algebras \b egin{align}\label{eq:decomp_temp} H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a,\pi^{-1}\c F) \simeq H^\b ullet_\r ig(\Spec\widetilde{F}/k)\otimes_kH^\b ullet_\r{dR}(\b E^r_\varpi), \end{align} where $\b E^r_\varpi$ is the $k$-analytic space in Example \r ef{ex:torus}. By \cite{GK02}*{Lemma 3} and the above isomorphism, the restriction map \b egin{align}\label{eq:decomp_temp_1} H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a,\pi^{-1}\c F)\to H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a(\epsilon),\f S_\acute{\r{e}}\r{t}a(\epsilon)\cap\pi^{-1}\c F) \end{align} is an isomorphism. Now it suffices to show that the class of $\omega^\natural$ in \[H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kK,\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kK\cap\pi^{-1}\c F_{\widetilde{K}}) \simeq H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a(\epsilon),\f S_\acute{\r{e}}\r{t}a(\epsilon)\cap\pi^{-1}\c F)\otimes_kK\] is of weight $w$ with respect to the decomposition \eqref{eq:decomp_temp} and the isomorphism \eqref{eq:decomp_temp_1}. Then our claim follows in the same way as in the proof of Lemma \r ef{le:initial} (2). Without loss of generality, we now assume that $\omega^\natural$ is an element in $H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a(\epsilon),\f S_\acute{\r{e}}\r{t}a(\epsilon)\cap\pi^{-1}\c F)$. \mathtt{e}xtbf{Step 5.} To compute the weight, we use the Frobenius lifting $\phi_i\colon U_i\to\f S_\acute{\r{e}}\r{t}a(\epsilon)$ where $U_i\subset\f S_\acute{\r{e}}\r{t}a(\epsilon)$ is an open neighborhood of $(\f S_i^\natural)_\acute{\r{e}}\r{t}a$ in $\f S_\acute{\r{e}}\r{t}a$, which might be smaller than the one we start with. Assume that $U_i\cap\f S_{i\acute{\r{e}}\r{t}a}$ contains $\f S_{i\acute{\r{e}}\r{t}a}(\epsilon')$ for some $0<\epsilon'<\epsilon$. We introduce more notation as follows: we fix a positive integer $N$ such that $0<1/N<p^{-2f}\epsilon'$. Replacing $K$ by a finite extension, we may assume that there exists a totally ramified extension $k_+/k$ contained in $K$ with an element $\varpi_+\in k_+^\circ$ such that $\varpi_+^{rN}=\varpi$. We consider the following $k_+$-affinoid algebras \b egin{align*} F_0&=F\widehat\otimes_kk_+\langle\mathtt{a}u_1,\mathtt{a}u_1^{-1},\dots,\mathtt{a}u_r,\mathtt{a}u_r^{-1}\r angle, \\ F_1&=F\widehat\otimes_kk_+\left\langle\f rac{t_{i,0}}{\varpi_+^{rN-r}},\f rac{\varpi_+^{rN-r}}{t_{i,0}}, \f rac{t_{i,1}}{\varpi_+},\f rac{\varpi_+}{t_{i,1}},\dots,\f rac{t_{i,r}}{\varpi_+},\f rac{\varpi_+}{t_{i,r}} \r ight\r angle/(t_{i,0}\cdots t_{i,r}-\varpi),\\ F_2&=F\widehat\otimes_kk_+\left\langle\f rac{t_{i,0}}{\varpi_+^{rN-rp^{2f}}},\f rac{\varpi_+^{rN-rp^{2f}}}{t_{i,0}}, \f rac{t_{i,1}}{\varpi_+^{p^{2f}}},\f rac{\varpi_+^{p^{2f}}}{t_{i,1}},\dots,\f rac{t_{i,r}}{\varpi_+^{p^{2f}}},\f rac{\varpi_+^{p^{2f}}}{t_{i,r}} \r ight\r angle/(t_{i,0}\cdots t_{i,r}-\varpi). \end{align*} Note that $F_0$ is integrally smooth.
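The following elementary verification is recorded only for the reader's convenience and is not used elsewhere; it explains the role of the condition $0<1/N<p^{-2f}\epsilon'$. Since $\varpi_+^{rN}=\varpi$, the exponents in the definitions of $F_1$ and $F_2$ are compatible with the relation $t_{i,0}\cdots t_{i,r}=\varpi$, and on the corresponding affinoid spectra one computes \[(rN-r)+r\cdot 1=(rN-rp^{2f})+r\cdot p^{2f}=rN,\qquad |t_{i,0}|=|\varpi|^{1-\frac{1}{N}}\ \text{on}\ \c M(F_1),\qquad |t_{i,0}|=|\varpi|^{1-\frac{p^{2f}}{N}}\ \text{on}\ \c M(F_2).\] As $p^{2f}/N<\epsilon'$, both values are smaller than $|\varpi|^{1-\epsilon'}$, which is why the affinoid domains constructed from $F_1$ and $F_2$ below land inside $U_i\widehat\otimes_kk_+$.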
We have natural isomorphisms \b egin{align*} \r ho_1&\colon F_1\xrightarrow{\sim} F_0, \quad t_{i,j}\mapsto\varpi_+\mathtt{a}u_j, 1\leq j\leq r,\; t_{i,0}\mapsto\varpi_+^{rN-r}\prod_{j=1}^r\mathtt{a}u_j^{-1};\\ \r ho_2&\colon F_2\xrightarrow{\sim} F_0, \quad t_{i,j}\mapsto\varpi_+^{p^{2f}}\mathtt{a}u_j, 1\leq j\leq r,\; t_{i,0}\mapsto\varpi_+^{rN-rp^{2f}}\prod_{j=1}^r\mathtt{a}u_j^{-1}. \end{align*} For $\alpha=1,2$, we define a formal $k_+^\circ$-scheme $\f S_i\langle\alpha\r angle$ via the following pullback diagram \[\xymatrix{ \f S_i\langle\alpha\r angle \ar[r]\ar[d] & \widehat{\f S_i}_{/\c F}\widehat\otimes_{k^\circ}k_+^\circ \ar[d]^-{\eqref{eq:splitting_temp}}\\ \Spf F_\alpha^\circ \ar[r] & \Spf F^\circ[[t_{i,0},\dots,t_{i,r}]]\otimes_{k^\circ}k_+^\circ },\] so that $\f S_i\langle\alpha\r angle_\acute{\r{e}}\r{t}a$ is canonically a strictly $k_+$-affinoid domain in $U_i\widehat\otimes_kk_+$ by our choice of $N$. Moreover, $\r ho_\alpha$ induces an isomorphism, denoted again by $\r ho_\alpha$, \[\r ho_\alpha\colon \Spf F_0^\circ\xrightarrow{\sim}\f S_i\langle\alpha\r angle\] of formal $k_+^\circ$-schemes. Properties (a) and (b) of the Frobenius lifting $\phi_i$ implies that it induces by restriction a morphism $\phi_i\colon\f S_i\langle1\r angle_\acute{\r{e}}\r{t}a\to\f S_i\langle2\r angle_\acute{\r{e}}\r{t}a$, and the composition $\r ho_{2\acute{\r{e}}\r{t}a}^{-1}\circ\phi_i\circ\r ho_{1\acute{\r{e}}\r{t}a}\colon\c M(F_0)\to\c M(F_0)$ is a Frobenius lifting. We fix a smooth $k_+$-affinoid germ $(V,\c M(F_0))$. Note that for $\alpha=1,2$, we have isomorphisms \[H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a\widehat\otimes_kk_+,\pi^{-1}\c F) \xrightarrow{\sim} H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kk_+,\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kk_+\cap\pi^{-1}\c F) \xrightarrow{\sim} H^\b ullet_\r{dR}(U_i\widehat\otimes_kk_+,\f S_i\langle\alpha\r angle_\acute{\r{e}}\r{t}a)\] by \cite{GK02}*{Lemma 3}. In particular, we may equip $H^\b ullet_\r{dR}(U_i\widehat\otimes_kk_+,\f S_i\langle\alpha\r angle_\acute{\r{e}}\r{t}a)$ with a weight decomposition inherited from \eqref{eq:decomp_temp}. By construction and \cite{Bos81}*{Corollary 1}, we have \b egin{itemize} \item a morphism $\r ho_1^\dag\colon(V,\c M(F_0))\to(U_i\widehat\otimes_kk_+,\f S_i\langle1\r angle_\acute{\r{e}}\r{t}a)$ such that $\r ho_1^\dag\mathbin{|}_{\c M(F_0)}$ is very close to $\r ho_{1\acute{\r{e}}\r{t}a}$ which induces the same morphism on the special fiber, and moreover the induced restriction map \[(\r ho_1^\dag)^*\colon H^\b ullet_\r{dR}(U_i\widehat\otimes_kk_+,\f S_i\langle1\r angle_\acute{\r{e}}\r{t}a)\to H^\b ullet_\r ig(\Spec\widetilde{F_0}/k_+)\] is an isomorphism respecting weights, \item a morphism $\r ho_2^\dag\colon(U_i\widehat\otimes_kk_+,\f S_i\langle2\r angle_\acute{\r{e}}\r{t}a)\to(V,\c M(F_0))$ such that $\r ho_2^\dag\mathbin{|}_{\f S_i\langle2\r angle_\acute{\r{e}}\r{t}a}$ is very close to $\r ho_{2\acute{\r{e}}\r{t}a}^{-1}$ (not $\r ho_{2\acute{\r{e}}\r{t}a}$ !) which induces the same morphism on the special fiber, and moreover the induced restriction map \[(\r ho_2^\dag)^*\colon H^\b ullet_\r ig(\Spec\widetilde{F_0}/k_+)\to H^\b ullet_\r{dR}(U_i\widehat\otimes_kk_+,\f S_i\langle2\r angle_\acute{\r{e}}\r{t}a)\] is an isomorphism respecting weights. 
\end{itemize} In summary, we have weight preserving isomorphisms \[\mathbin{|}izebox{16cm}{!}{\xymatrix{ & H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kk_+,\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kk_+\cap\pi^{-1}\c F)\ar[ld]\ar[rd] \\ H^\b ullet_\r{dR}(U_i\widehat\otimes_kk_+,\f S_i\langle1\r angle_\acute{\r{e}}\r{t}a) \ar[rd]_-{(\r ho_1^\dag)^*} && H^\b ullet_\r{dR}(U_i\widehat\otimes_kk_+,\f S_i\langle2\r angle_\acute{\r{e}}\r{t}a) \\ & H^\b ullet_\r ig(\Spec\widetilde{F_0}/k_+). \ar[ur]_-{(\r ho_2^\dag)^*} }}\] We will identify the top three objects in the above commutative diagram. Recall that we regard $\omega^\natural$ as an element in $H^\b ullet_\r{dR}(\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kk_+,\f S_\acute{\r{e}}\r{t}a(\epsilon)\widehat\otimes_kk_+\cap\pi^{-1}\c F)$. Let $\omega_0$ be the element in $H^q_\r ig(\Spec\widetilde{F_0}/k_+)$ such that $(\r ho_2^\dag)^*\omega_0=\omega^\natural$. By Property (c) of the Frobenius lifting $\phi_i$, we have that $((\r ho_1^\dag)^*\circ\phi_i^*\circ(\r ho_2^\dag)^*-p^{fw})^M\omega_0=0$. However, $\r ho_2^\dag\circ\phi_i\circ\r ho_1^\dag\colon (V,\c M(F_0))\to(V,\c M(F_0))$ is a Frobenius lifting of the Frobenius endomorphism of $\Spec\widetilde{F_0}$ over $\widetilde{k_+}=\widetilde{k}$. Therefore, $\omega_0$ and hence $\omega^\natural$ are of weight $w$. The lemma is finally proved! \end{proof} \b egin{remark} From the proof of Theorem \r ef{th:weight}, we know that the support of $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w$ is contained in the subset $\{x\in X\mathbin{|} s(x)\geq 2q-w, s(x)+t(x)\geq q\}$. \end{remark} \section{Logarithmic differential forms} \label{ss:log} In this section, we study the behavior of logarithmic differential forms in the rigid cohomology. Based on this and Theorem \r ef{th:weight}, we finish the proof of Theorem \r ef{th:1} for both topologies. Let $k$ be a finite extension of $\b Q_p$. Let $\c S$ be a proper strictly semi-stable scheme over $k^\circ$ of dimension $s$. We fix an irreducible component $\c E$ of $\c S_s$ and let $\c E_1,\dots,\c E_M$ be all other irreducible components that intersect $\c E$. For a subset $I\subset\{1,\dots,M\}$, put $\c E_I=(\b igcap_{i\in I}\c E_i)\cap\c E$ (in particular, $\c E_\emptyset=\c E$) and $\c E_I^\heartsuit=\c E_I\b ackslash\c S_s^{[|I|+1]}$. For two subsets $I,J$ of $\{1,\dots,M\}$, we write $I\prec J$ if $I\subset J$ and numbers in $J\b ackslash I$ are all greater than those in $I$. For $I\subset\{1,\dots,M\}$, we have the open immersion $\c E_I^\heartsuit\subset\c E_I\b ackslash\c S_s^{[|I|+2]}$, whose compliment is $\coprod_{I\subset J,|J|=|I|+1}\c E_J^\heartsuit$. Thus we have maps \[H^\b ullet_\r ig(\c E_I^\heartsuit/k)\to \b igoplus_{I\subset J,|J|=|I|+1}H^{\b ullet+1}_{\c E_J^\heartsuit,\r ig}(\c E_I\b ackslash\c S_s^{[|I|+2]}/k) \xrightarrow{\sim}\b igoplus_{I\subset J,|J|=|I|+1}H^{\b ullet-1}_\r ig(\c E_J^\heartsuit/k),\] where the second map is the Gysin isomorphism. In the above composite map, denote by $\xi^I_J$ the induced map from $H^\b ullet_\r ig(\c E_I^\heartsuit/k)$ to the component $H^{\b ullet-1}_\r ig(\c E_J^\heartsuit/k)$ if $I\prec J$, and the zero map if not. 
In general, for $I\prec J$, there is a unique strictly increasing sequence $I=I_0\prec I_1\prec\cdots\prec I_{|J\b ackslash I|}=J$ and we define \[\xi^I_J\coloneqq\xi^{I_{|J\b ackslash I|-1}}_J\circ\dots\circ\xi^I_{I_1}\colon H^\b ullet_\r ig(\c E_I^\heartsuit/k)\to H^{\b ullet-|J\b ackslash I|}_\r ig(\c E_J^\heartsuit/k),\] and $\xi^I_J=0$ if $I\prec J$ does not hold. Together, for $i\leq j$, they induce a map \b egin{align}\label{eq:gamma} \xi^i_j\colon\b igoplus_{|I|=i}H^\b ullet_\r ig(\c E_I^\heartsuit/k)\to\b igoplus_{|J|=j}H^{\b ullet+i-j}_\r ig(\c E_J^\heartsuit/k), \end{align} such that $\xi^i_j\mathbin{|}_{H^\b ullet_\r ig(\c E_I^\heartsuit/k)}$ is the direct sum of $\xi^I_J$ for all $J$ with $|J|=j$. First, we have the following lemma. \b egin{lem}\label{le:highest} Let notation be as above. For every $0\leq q\leq s$, the restriction of \[\xi^0_q\colon H^q_\r ig(\c E^\heartsuit/k)\to\b igoplus_{|J|=q}H^0_\r ig(\c E_J^\heartsuit/k)\] to $H^q_\r ig(\c E^\heartsuit/k)_{2q}$ is injective. \end{lem} \b egin{proof} By the long exact sequence of cohomology with support \eqref{eq:support}, the kernel of the map $\xi^0_q$ is a weight preserving extension of $k$-vector spaces $H^{q+|I|}_{\c E_I,\r ig}(\c E/k)$ for $|I|<q$. Therefore, the lemma follows since $H^{q+|I|}_{\c E_I,\r ig}(\c E/k)$ is pure of weight $q+|I|<2q$ by \cite{Tsu99}*{Theorems 5.2.1 \& 6.2.5} (with constant coefficients). \end{proof} Denote by $Z^i(\c E)^\heartsuit$ the abelian group generated by $\c E_I$ with $|I|=i$, modulo the subgroup generated by $\c E_I$ with $\c E_I=\emptyset$. Put $Z(\c E)^\heartsuit=\b igoplus_{i=0}^M Z^i(\c E)^\heartsuit$. The image of $\c E_I$ in $Z(\c E)^\heartsuit$ will be denoted by $[\c E_I]$. We define a wedge product \[\wedge\colon Z(\c E)^\heartsuit\otimes Z(\c E)^\heartsuit\to Z(\c E)^\heartsuit,\] which is group homomorphism uniquely determined by the following conditions: \b egin{itemize} \item $Z_1\wedge Z_2=(-1)^{ij}Z_2\wedge Z_1$, if $Z_1\in Z^i(\c E)^\heartsuit$ and $Z_2\in Z^j(\c E)^\heartsuit$; \item $[\c E_I]\wedge[\c E_J]=0$ if $I\cap J\neq\emptyset$; \item $[\c E_I]\wedge[\c E_J]=[\c E_{I\cup J}]$ if $I\cap J=\emptyset$ and $I\prec I\cup J$. \end{itemize} It is easy to see that $\wedge$ is associative and maps $Z^i(\c E)^\heartsuit\otimes Z^j(\c E)^\heartsuit$ into $Z^{i+j}(\c E)^\heartsuit$. We have an (injective) class map \[\r{cl}^\heartsuit\colon Z(\c E)^\heartsuit\to \b igoplus_{I}H^0_\r ig(\c E_I^\heartsuit/k)\simeq \b igoplus_{I}k^{\r{op}lus\pi_0(\c E_I^\heartsuit)}\] sending $[\c E_I]$ to the canonical generator on (each connected component of) $\c E_I^\heartsuit$. For an element $f\in\c O^*(\c S_k^\r{an},\pi^{-1}\c E^\heartsuit)$, that is, an invertible function on some open neighborhood of $\pi^{-1}\c E^\heartsuit$ in $\c S_k^\r{an}$, we can associate canonically an element $\DIV(f)\in Z^1(\c E)^\heartsuit$. In fact, there exists an element $c\in k^\mathtt{i}mes$ such that $|cf|=1$ on $\pi^{-1}\c E^\heartsuit$. Thus the reduction $\widetilde{cf}$ is an element in $\c O^*_\c E(\c E^\heartsuit)$, and we define $\DIV(f)$ to be the associated divisor of $\widetilde{cf}$, which is an element in $Z^1(\c E)^\heartsuit$. Obviously, it does not depend on the choice of $c$. Finally, note that by the definition of rigid cohomology, we have a canonical isomorphism $H^\b ullet_\r{dR}(\c S_k^\r{an},\pi^{-1}\c E^\heartsuit)\simeq H^\b ullet_\r ig(\c E^\heartsuit/k)$. \b egin{proposition}\label{pr:log} Let notation be as above. 
Given $f_1,\dots,f_q\in\c O^*(\c S_k^\r{an},\pi^{-1}\c E^\heartsuit)$, if we regard $\f rac{\r d{f_1}}{f_1}\wedge\cdots\wedge\f rac{\r d{f_q}}{f_q}$ as an element in $H^q_\r{dR}(\c S_k^\r{an},\pi^{-1}\c E^\heartsuit)\simeq H^q_\r ig(\c E^\heartsuit/k)$, then we have \b egin{align}\label{eq:log} \xi^0_q\left(\f rac{\r d{f_1}}{f_1}\wedge\cdots\wedge\f rac{\r d{f_q}}{f_q}\right) =\r{cl}^\heartsuit\left(\DIV(f_1)\wedge\cdots\wedge\DIV(f_q)\right). \end{align} \end{proposition} \b egin{proof} The question is local around the generic point of every connected component of $\c E_I$ with $|I|=q$. Thus, we may assume that $\c S$ is affine and admits a smooth morphism \[f\colon\c S\to\Spec k^\circ[T_0,\dots, T_q]/(T_0\cdots T_q-\varpi)\] where $\varpi$ is a uniformizer of $k$, such that \b egin{itemize} \item $\c E=\c E_0$ and $\c E_i$ ($i=1,\dots,q$) are all the irreducible components of $\c S_s$ that intersect $\c E$, where $\c E_i$ is defined by the ideal $(f^*T_i,\varpi)$; \item $\c E_I$ is irreducible and nonempty for $I\subset\{1,\dots,q\}$. \end{itemize} Since $\f rac{\r d{(cf)}}{cf}=\f rac{\r d{f}}{f}$; both sides of \eqref{eq:log} are multi-linear in $f_1,\dots,f_q\in\c O^*(\c S_k^\r{an},\pi^{-1}\c E^\heartsuit)$; and $\f rac{\r d f}{f}=\f rac{\r d f'}{f'}$ in $H^1_\r ig(\c E^\heartsuit/k)$ if $|f|=|f'|=1$ on $\pi^{-1}\c E^\heartsuit$ and $\widetilde{f}=\widetilde{f'}$, we may assume that $f_i=f^*T_i$. Then as both sides of \eqref{eq:log} are functorial in $f$ under pullback, we may assume that $\c S=\Spec k^\circ[T_0,\dots, T_q]/(T_0\cdots T_q-\varpi)$ and $f_i=T_i$. Put $\c S'=\Spec k^\circ[T_1,\dots,T_q]$ and let $g\colon\c S\to\c S'$ be the morphism sending $T_i$ to $T_i$ ($1\leq i\leq q$). For $I\subset\{1,\dots,q\}$, let $\c E'_I$ be the closed subscheme of $\c S'_s$ defined by the ideal $(\varpi,T_i\mathbin{|} i\in I)$. Then $g$ induces an isomorphism $\c E_I\simeq\c E'_I$. Similarly, we have maps \[{\xi'}^I_J\colon H^\b ullet_\r ig(\c E'^\heartsuit_I/k)\to H^{\b ullet+i-j}_\r ig(\c E'^\heartsuit_J/k)\] for $I\subset J$ and ${\xi'}^i_j$ for $i\leq j$, where $\c E'^\heartsuit_I=g(\c E^\heartsuit_I)$. It is easy to see that ${\xi'}^I_J=\xi^I_J$ if we identify $H^\b ullet_\r ig(\c E'^\heartsuit_I/k)$ with $H^\b ullet_\r ig(\c E^\heartsuit_I/k)$ through $g^*$. Therefore, it suffices to show the equality \eqref{eq:log} for $\c S'$, that is, \[{\xi'}^0_q\left(\f rac{\r d{T_1}}{T_1}\wedge\cdots\wedge\f rac{\r d{T_q}}{T_q}\right) =1\in H^0_\r ig(\c E'_{\{1,\dots,q\}}/k)\simeq k.\] However, $\c S'$, which is isomorphic to $\b A^q_{k^\circ}$ can be canonically embedded into the proper smooth scheme $\b P^q_{k^\circ}$ over $k^\circ$. Thus, the rigid cohomology $H^\b ullet_\r ig(\c E'^\heartsuit_I/k)$ and the map ${\xi'}^i_j$ can be computed on $(\b P^q_k)^\r{an}$. On the generic fiber $\c S'_k$, we similarly define $\c T_I$ to be the closed subscheme $\Spec k[T_1,\dots,T_q]/(T_i\mathbin{|} i\in I)$ of $\c S'_k$ for $I\subset\{1,\dots,q\}$, and $\c T_I^\heartsuit=\c T_I\b ackslash\b igcup_{I\varsubsetneq J}\c T_J$. We may similarly define maps $\alpha^I_J\colon H^\b ullet_\r{dR}(\c T_I^\heartsuit)\to H^{\b ullet-|J\b ackslash I|}_\r{dR}(\c T_J^\heartsuit)$ and $\alpha^i_j$ via algebraic de Rham cohomology theory. 
Then we have canonical vertical isomorphisms rendering the diagram \[\xymatrix{ H^\b ullet_\r{dR}(\c T_I^\heartsuit) \ar[r]^-{\alpha^I_J}\ar[d]_-{\simeq} & H^{\b ullet-|J\b ackslash I|}_\r{dR}(\c T_J^\heartsuit) \ar[d]^-{\simeq} \\ H^\b ullet_\r ig(\c E'^\heartsuit_I/k) \ar[r]^-{{\xi'}^I_J} & H^{\b ullet-|J\b ackslash I|}_\r ig(\c E'^\heartsuit_J/k) }\] commutative. From the standard computation in algebraic de Rham cohomology, we have \[\alpha^0_q\left(\f rac{\r d{T_1}}{T_1}\wedge\cdots\wedge\f rac{\r d{T_q}}{T_q}\right) =1\in H^0_\r{dR}(\c T_{\{1,\dots,q\}}),\] where $\c T_{\{1,\dots,q\}}$ is just the point of origin. Thus, the proposition is proved. \end{proof} Now we are ready to prove Theorem \r ef{th:1}. We begin with the case of \'{e}tale cohomology and then the case of analytic topology. \b egin{proof}[Proof of Theorem \r ef{th:1} in \'{e}tale topology] As in the previous section, sheaves like $\c O_X$, $\f c_X$, and the de Rham complex $(\Omega^\b ullet_X,\r d)$ are understood in the \'{e}tale topology. The direct sum decomposition has been proved in Theorem \r ef{th:weight} (4). Property (i) follows from Theorem \r ef{th:weight} (3). For Property (ii), the inclusion $\Upsilon^q_X\subset(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_{2q}$ follows from Theorem \r ef{th:weight} (2) and Remark \r ef{re:decomp}. Now we show that $(\Omega^{1,\r{cl}}_X/\r d\c O_X)_2\subset\Upsilon^1_X$. We check the inclusion on stalks. Take a point $x\in X$ with $s=s(x)$ and $t=t(x)$. For every class $[\omega]$ in the stalk of $(\Omega^{1,\r{cl}}_X/\r d\c O_X)_2$ at $x$, we may find a fundamental chart $(\b D,(\c Y,\c D),(D,\delta),W,\c Z,\alpha;y)$ of $(X;x)$ such that $[\omega]$ has a representative $\omega\in H^1_{(2)}(\b D,(\c Y,\c D),(D,\delta),W)$. Note that the decomposition \eqref{eq:decomposition} specializes to the decomposition \[H^1_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))=H^1_\r ig(\c D/L)\r{op}lus H^1_\r{dR}(\b E^t_\varpi\widehat\otimes_kL).\] If the restriction of $\omega$ to $H^1_\r{dR}(\b D\mathtt{i}mes_L(W,\pi^{-1}\c D_{\widetilde{L}}))$ belongs to $H^1_\r{dR}(\b E^t_\varpi\widehat\otimes_kL)$, then we are done. Otherwise, $\omega$ restricts to $H^1_\r ig(\c D/L)_2$. It suffices to show that classes in $H^1_\r ig(\c D/L)_2$ can be represented by logarithmic differential of invertible functions \'{e}tale locally, up to a constant multiple. We repeat certain process in Step 2 of the proof of Lemma \r ef{le:stalk}. Choose a smooth $k^\circ$-algebra $D^\natural$ (of dimension $s$) such that its $\varpi$-adic completion is $D^\circ$, a compactification $(\Spec D^\natural)_k\hookrightarrow\overline{\c S}_k$ over $k$, and define $\overline{\c S}$ to be the $k^\circ$-scheme $\overline{\c S}_k\coprod_{(\Spec D^\natural)_k}\Spec D^\natural$. Then we obtain a finite extension $k'/k$, an alteration $\c S^\natural\to\overline{\c S}_{k'^\circ}$ and a $k'^\circ$-compactification $\c S^\natural\hookrightarrow\c S$ where $\c S$ is a projective strictly semi-stable scheme over $k'^\circ$ such that $\c S\b ackslash\c S^\natural$ is a strict normal crossing divisor of $\c S$. We may further assume that all irreducible components of $\c S_s$ are geometrically irreducible. To ease notation, we replace $k$ by $k'$ and possibly $L$ by a finite extension. We may fix an irreducible component $\c E$ of $\c S_s$ such that its generic point belongs to $\c S^\natural_s$ and maps to the generic point of $\overline{\c S}_s\simeq\c D$. 
Thus there is a unique point $\sigma_\c E\in(\widehat{\c S^\natural})_\acute{\r{e}}\r{t}a$ such that $\pi(\sigma_\c E)$ is the generic point of $\c E$. Now we apply the setup in the beginning of this section to $\c S$ and $\c E$. Note that $\c E\cap\c S^\natural_s$ coincides with $\c E^\heartsuit$. It suffices to show that every class in $H^1_\r ig(\c E^\heartsuit/k)_2$ can be represented by the logarithmic differential of an invertible function on some \'{e}tale neighborhood of $\sigma_\c E$. Put $\c E^{[i]}=\c E\cap\c S_s^{[i]}$ for $i\geq 1$. We have $\c E^{[1]}\b ackslash\c E^{[2]}=\coprod_{i=1}^M\c E_{\{i\}}^\heartsuit$. There are exact sequences \b egin{align}\label{eq:gysin} H^1_\r ig(\c E/k)\to H^1_\r ig(\c E^\heartsuit/k)\to H^2_{\c E^{[1]},\r ig}(\c E/k) \to H^2_\r ig(\c E/k), \end{align} and \b egin{align*} H^2_{\c E^{[2]},\r ig}(\c E/k)\to H^2_{\c E^{[1]},\r ig}(\c E/k) \to H^2_{\c E^{[1]}\b ackslash\c E^{[2]},\r ig}(\c E\b ackslash\c E^{[2]}/k) \to H^3_{\c E^{[2]},\r ig}(\c E/k). \end{align*} As the codimension of $\c E^{[2]}$ in $\c E$ is at least $2$, we have $H^2_{\c E^{[2]},\r ig}(\c E/k)=H^3_{\c E^{[2]},\r ig}(\c E/k)=0$. Thus, \[H^2_{\c E^{[1]},\r ig}(\c E/k)\simeq H^2_{\c E^{[1]}\b ackslash\c E^{[2]},\r ig}(\c E\b ackslash\c E^{[2]}/k) \simeq\b igoplus_{i=1}^M H^0_\r ig(\c E^\heartsuit_{\{i\}}/k).\] Since the composition \[\b igoplus_{i=1}^M H^0_\r ig(\c E_{\{i\}}^\heartsuit/k)\simeq\b igoplus_{i=1}^M H^2_{\c E_{\{i\}},\r ig}(\c E/k)\to H^2_{\c E^{[1]},\r ig}(\c E/k)\to\b igoplus_{i=1}^M H^0_\r ig(\c E_{\{i\}}^\heartsuit/k)\] is an isomorphism, we may replace the term $H^2_{\c E^{[1]},\r ig}(\c E/k)$ in \eqref{eq:gysin} by $\b igoplus_{i=1}^M H^0_\r ig(\c E_{\{i\}}/k)$, which is isomorphic to the $k$-vector space $Z^1(\c E)^\heartsuit\otimes k$, and the boundary map \[Z^1(\c E)^\heartsuit\otimes k\to H^2_\r ig(\c E/k)\] becomes the cycle class map in rigid cohomology. As $H^1_\r ig(\c E/k)$ is of pure weight $1$, we have the isomorphism \b egin{align}\label{eq:gysin1} H^1_\r ig(\c E^\heartsuit/k)_2\xrightarrow{\sim}\Ker(Z^1(\c E)^\heartsuit\otimes k\to H^2_\r ig(\c E/k)). \end{align} Now take a divisor $D=\sum_{i=1}^Mc_i[\c E_{\{i\}}]$ with $c_i\in\b Z$ such that its cycle class in $H^2_\r ig(\c E/k)\simeq H^2_{\r{cris}}(\c E/k)$ is trivial. Then there exists some integer $\mu>0$ such that $\mu D$ is algebraically equivalent to zero, and in particular $\c O_{\c E}(\mu D)$ is an element in $\Pic^0_{\c E/\widetilde{k}}(\widetilde{k})$. Since $\Pic^0_{\c E/\widetilde{k}}$ is a projective scheme over the finite field $\widetilde{k}$, one may replace $\mu$ by some multiple such that $\c O_\c E(\mu D)$ is a trivial line bundle. Therefore, there exists a function $\mathtt{i}lde{f}\in\c O^*_\c E(\c E^\heartsuit)$ with $\DIV(\mathtt{i}lde{f})=\mu D$. We may assume that $\mathtt{i}lde{f}$ lifts to a function $f\in\c O^*(\c S_k^\r{an},\pi^{-1}\c E^\heartsuit)$. (Otherwise, we may take an affine open subscheme $\Spec D'$ of $\c S$ such that $(\Spec D')_s$ is densely contained in $\c E^\heartsuit$ and $\mathtt{i}lde{f}\mathbin{|}_{(\Spec D')_s}$ lifts to a function $f\in\c O(\Spec D'_k)$, and repeat the above process for $\Spec D'$.) By Proposition \r ef{pr:log}, $\f rac{\r d{f}}{f}$ has image $\mu D$ under the map $H^1_\r ig(\c E^\heartsuit/k)\to H^2_{\c E^{[1]},\r ig}(\c E/k)\simeq Z^1(\c E)^\heartsuit\otimes k$; in particular, $\mu^{-1}\f rac{\r d{f}}{f}$ has image $D$, which provides the desired representative up to a constant multiple. Thus, (ii) is proved. For Property (iii), when $X$ has dimension $1$, it follows from (the proof of) \cite{Berk07}*{Theorem 4.3.1}.
In general, it suffices to show that $(\Omega^{1,\r{cl}}_X/\r d\Omega^0_X)_1\subset\Psi_X$ by \cite{Berk07}*{Theorem 4.5.1 (i)} and Theorem \r ef{th:weight} (4). However, this follows from the definition of $\Psi_X$, Theorem \r ef{th:weight} (5), and the case of curves. \end{proof} \b egin{proof}[Proof of Theorem \r ef{th:1} in analytic topology] Now sheaves like $\c O_X$, $\f c_X$, $\Upsilon_X^q$, and the de Rham complex $(\Omega^\b ullet_X,\r d)$ are understood in the analytic topology. The corresponding objects in the \'{e}tale topology will be denoted by $\c O_{X_{\acute{\r{e}}\r{t}}}$, $\f c_{X_{\acute{\r{e}}\r{t}}}$, $\Upsilon_{X_{\acute{\r{e}}\r{t}}}^q$, and $(\Omega^\b ullet_{X_{\acute{\r{e}}\r{t}}},\r d)$. Note that we have a canonical morphism $\nu\colon X_{\acute{\r{e}}\r{t}}\to X$ of sites, and $\c O_X=\nu_*\c O_{X_{\acute{\r{e}}\r{t}}}$, $\f c_X=\nu_*\f c_{X_{\acute{\r{e}}\r{t}}}$, $\Omega^q_X=\nu_*\Omega^q_{X_{\acute{\r{e}}\r{t}}}$ for every $q$. We claim that the canonical map $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X\to\nu_*(\Omega^{q,\r{cl}}_{X_{\acute{\r{e}}\r{t}}}/\r d\Omega^{q-1}_{X_{\acute{\r{e}}\r{t}}})$ is an isomorphism. It will follow from: (a) $\Omega^{q,\r{cl}}_X=\nu_*\Omega^{q,\r{cl}}_{X_{\acute{\r{e}}\r{t}}}$ as subsheaves of $\Omega^q_X$; (b) $\r d\Omega^{q-1}_X=\nu_*\r d\Omega^{q-1}_{X_{\acute{\r{e}}\r{t}}}$ as subsheaves of $\Omega^q_X$; (c) $\r R^1\nu_*\r d\Omega^{q-1}_{X_{\acute{\r{e}}\r{t}}}=0$. Assertion (a) is obvious. Both (b) and (c) will follow from the general fact that $\r R^i\nu_*\s F=0$ for $i>0$ and any sheaf of $\b Q$-vector spaces $\s F$ on $X_{\acute{\r{e}}\r{t}}$. In fact, for every $x\in X$, we have $(\r R^i\nu_*\s F)_x=H^i(\c H(x),i_x^{-1}\s F)$, where $\c H(x)$ is the completed residue field of $x$ and $i_x\colon\c M(\c H(x))\to X$ is the canonical morphism, and we know that the profinite Galois cohomology $H^i(\c H(x),i_x^{-1}\s F)$ is torsion, hence trivial, for $i>0$. Now for $w\in\b Z$, we define $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w=\nu_*(\Omega^{q,\r{cl}}_{X_{\acute{\r{e}}\r{t}}}/\r d\Omega^{q-1}_{X_{\acute{\r{e}}\r{t}}})_w$. Then we have a decomposition \[\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X=\b igoplus_{w\in\b Z}(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_w,\] stable under base change and functorial in $X$ and satisfying Property (i). For Property (ii), we have the inclusion $\Upsilon^q_X\subset\nu_*\Upsilon^q_{X_{\acute{\r{e}}\r{t}}}$ as subsheaves of $\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X$, which is canonically isomorphic to $\nu_*(\Omega^{q,\r{cl}}_{X_{\acute{\r{e}}\r{t}}}/\r d\Omega^{q-1}_{X_{\acute{\r{e}}\r{t}}})$. Thus, we have the inclusion of sheaves $\Upsilon^q_X\subset(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)_{2q}$. When $q=1$, we have to show that $\nu_*\Upsilon^1_{X_{\acute{\r{e}}\r{t}}}\subset\Upsilon^1_X$. We check this on the stalk at an arbitrary point $x\in X$. Take an element $[\omega]$ in $(\nu_*\Upsilon^1_{X_{\acute{\r{e}}\r{t}}})_x$. We may assume that it has a representative $\omega\in\Omega^1_X(U)$ for some open neighborhood $U$ of $x$ such that $\alpha^*\omega=\f rac{\r d f'}{f'}+\r d g'$ for some finite \'{e}tale surjective morphism $\alpha\colon U'\to U$ and $f'\in\c O^*(U')$, $g'\in\c O(U')$. Applying $\deg(\alpha)^{-1}$ times the trace along $\alpha$, and using that the trace of $\f rac{\r d f'}{f'}$ equals $\f rac{\r d f}{f}$ for $f$ the multiplicative trace (norm) of $f'$, we obtain $\omega=\deg(\alpha)^{-1}(\f rac{\r d f}{f}+\r d g)$, where $g$ is the additive trace of $g'$ along $\alpha$.
\end{proof} \section{Tropical cycle class map} \label{ss:cycle} In this section, we study the sheaf $\Ker(\r d''\colon\s A_X^{q,0}\to\s A_X^{q,1})$ and its relation with de Rham cohomology sheaves. We construct tropical cycle class maps and show their compatibility with integration. In this section, sheaves like $\c O_X$, $\f c_X$, and the de Rham complex $(\Omega^\b ullet_X,\r d)$ are understood in the analytic topology. \b egin{definition}[Sheaf of rational Milnor $K$-theory]\label{de:milnor} Let $(X,\c O_X)$ be a ringed site. We define the \emph{$q$-th sheaf of rational Milnor $K$-theory} $\s K^q_X$ for $(X,\c O_X)$ to be the sheaf associated to the presheaf assigning an open $U$ in $X$ to $K^M_q(\c O_X(U))\otimes\b Q$ (\cite{Sou85}*{\S 6.1}). Here, $K^M_q(\c O_X(U))$ is the abelian group generated by the symbols $\{f_1,\dots,f_q\}$ where $f_1,\dots,f_q\in\c O_X^*(U)$, modulo the relations \b egin{itemize} \item $\{f_1,\dots,f_if'_i,\dots,f_q\}=\{f_1,\dots,f_i,\dots,f_q\}+\{f_1,\dots,f'_i,\dots,f_q\}$, \item $\{f_1,\dots,f,\dots,1-f,\dots,f_q\}=0$. \end{itemize} \end{definition} \b egin{example}\label{ex:scheme} Let $\c X$ be a smooth scheme of finite type over an arbitrary field $k$ of dimension $n$. Then by \cite{Sou85}*{\S 6.1, Remarque}, we have an isomorphism \b egin{align}\label{eq:class_milnor} \r{cl}_\s K\colon\CH^q(\c X)_\b Q\coloneqq\CH^q(\c X)\otimes\b Q\xrightarrow{\sim}H^q(\c X,\s K^q_\c X) \end{align} for every integer $q$. It can be viewed as a universal cycle class map. If $\c Z$ is an irreducible closed subscheme of $\c X$ of codimension $q$ that is a locally complete intersection, then $\r{cl}_\s K(\c Z)$ has an explicit description as follows: Choose a finite affine open covering $\c U_i$ of $\c X$ and $f_{i1},\dots,f_{iq}\in\c O_\c X(\c U_i)$ such that $\c Z\cap\c U_i$ is defined by the ideal $(f_{i1},\dots,f_{iq})$. Let $\c U_{ij}$ be the nonvanishing locus of $f_{ij}$. Then $\{\c U_{ij}\mathbin{|} j=1,\dots,q\}$ is an open covering of $\c U_i\b ackslash\c Z$. Thus the element $\{f_{i1},\dots,f_{iq}\}\in K_q^M(\c O_\c X(\b igcap_{j=1}^q\c U_{ij}))$ gives rise to an element in $H^{q-1}(\c U_i\b ackslash\c Z,\s K_\c X^q)$ and hence in $H^q_{\c Z\cap\c U_i}(\c U_i,\s K_\c X^q)$. One can show that the image in $H^q_{\c Z\cap\c U_i}(\c U_i,\s K_\c X^q)$ does not depend on the choice of $\{f_{i1},\dots,f_{iq}\}$. Therefore, we obtain a class $c(\c Z)$ in $H^0(\c X,\underline{H}^q_\c Z(\c X,\s K_\c X^q))$. By \cite{Sou85}*{Th\'{e}or\`{e}me 5}, we know that the map $\underline{H}^i(\c X,\s K_\c X^q)\to \underline{H}^i(\c X\b ackslash\c Z,\s K_\c X^q)$ is a bijection (resp.\ an injection) if $i\leq q-2$ (resp.\ $i=q-1$), and thus $\underline{H}^i_\c Z(\c X,\s K_\c X^q)=0$ for $i\leq q-1$. Thus, the local to global spectral sequence induces an isomorphism $H^q_\c Z(\c X,\s K_\c X^q)\simeq H^0(\c X,\underline{H}^q_\c Z(\c X,\s K_\c X^q))$. Then $\r{cl}_\s K(\c Z)$ is the image of $c(\c Z)$ under the map $H^0(\c X,\underline{H}^q_\c Z(\c X,\s K_\c X^q))\simeq H^q_\c Z(\c X,\s K_\c X^q)\to H^q(\c X,\s K_\c X^q)$. \end{example} We recall some facts from the theory of real forms on non-Archimedean analytic spaces developed by Chambert-Loir and Ducros in \cite{CLD12}. (See also \cite{Gub13} for a slightly different formulation.) Let $X$ be a $K$-analytic space. 
There is a bicomplex $(\s A_X^{\b ullet,\b ullet},\r d',\r d'')$ of sheaves of real vector spaces on (the underlying topological space of) $X$, where $\s A_X^{q,q'}$ is the \emph{sheaf of $(q,q')$-forms} (\cite{CLD12}*{\S 3.1}). Moreover, they define another bicomplex $(\s D_X^{\b ullet,\b ullet},\r d',\r d'')$ of sheaves of real vector spaces on $X$, where $\s D_X^{q,q'}$ is the \emph{sheaf of $(q,q')$-currents}, together with a canonical map \[\kappa_X\colon(\s A_X^{\b ullet,\b ullet},\r d',\r d'')\to (\s D_X^{\b ullet,\b ullet},\r d',\r d'')\] of bicomplexes given by integration (\cite{CLD12}*{\S 4.2 \& \S 4.3}). It is known that $\s A_X^{q,q'}=\s D_X^{q,q'}=0$ unless $0\leq q,q'\leq\dim(X)$. \b egin{definition}[Dolbeault cohomology] Let $X$ be a $K$-analytic space. We define the \emph{Dolbeault cohomology} (of forms) to be \[H^{q,q'}_{\s A}(X)\coloneqq\f rac{\Ker(\r d''\colon\s A_X^{q,q'}(X)\to\s A_X^{q,q'+1}(X))} {\IM(\r d''\colon\s A_X^{q,q'-1}(X)\to\s A_X^{q,q'}(X))},\] and the \emph{Dolbeault cohomology} (of currents) to be \[H^{q,q'}_{\s D}(X)\coloneqq\f rac{\Ker(\r d''\colon\s D_X^{q,q'}(X)\to\s D_X^{q,q'+1}(X))} {\IM(\r d''\colon\s D_X^{q,q'-1}(X)\to\s D_X^{q,q'}(X))},\] together with an induced map $\kappa_X\colon H^{q,q'}_{\s A}(X)\to H^{q,q'}_{\s D}(X)$. \end{definition} By \cite{Jel16}*{Corollary 4.6} and \cite{CLD12}*{Corollaire 3.3.7}, the complex $(\s A_X^{q,\b ullet},\r d'')$ is a fine resolution of $\Ker(\r d''\colon\s A^{q,0}_X\to\s A^{q,1}_X)$. In particular, we have a canonical isomorphism \[H^\b ullet(X,\Ker(\r d''\colon\s A^{q,0}_X\to\s A^{q,1}_X))\simeq H^{q,\b ullet}_\s A(X).\] Suppose that $X$ is of dimension $n$. By definition, we have a bilinear pairing \[\s D^{q,q'}_X(U)\mathtt{i}mes \s A^{n-q,n-q'}_X(U)_c\to\b R,\] for every open $U\subset X$, where $\s A^{n-q,n-q'}_X(U)_c\subset \s A^{n-q,n-q'}_X(U)$ is the subset of forms whose support is compact and disjoint from the boundary of $X$. In particular, if $X$ is compact and without boundary, then we have an induced pairing \[\langle\;,\;\r angle_X\colon H^{q,q'}_\s A(X)\mathtt{i}mes H^{n-q,n-q'}_\s A(X)\to\b R.\] \b egin{definition}\label{de:kernel} Let $X$ be a $K$-analytic space. We have the sheaf of rational Milnor $K$-theory $\s K^\b ullet_X$ for the ringed topological space $(X,\c O_X)$ (Definition \r ef{de:milnor}). \b egin{enumerate} \item We define a map of sheaves \[\mathtt{a}u^q_X\colon\s K_X^q\to\Ker(\r d''\colon\s A^{q,0}_X\to\s A^{q,1}_X)\] as follows. For a symbol $\{f_1,\dots,f_q\}\in\s K_X^q(U)$ with $f_1,\dots,f_q\in\c O_X^*(U)$, we have the induced moment morphism $(f_1,\dots,f_q)\colon U\to(\b G_{\r{m},K}^\r{an})^q$. Composing with the evaluation map $-\log|\;|\colon(\b G_{\r{m},K}^\r{an})^q\to\b R^q$, we obtain a continuous map \[\trop_{\{f_1,\dots,f_q\}}\colon U\to\b R^q.\] If we endow the target with coordinates $x_1,\dots,x_q$ where $x_i=-\log|f_i|$, then we define \[\mathtt{a}u^q_X(\{f_1,\dots,f_q\})=\r d x_1\wedge\cdots\wedge\r d x_q\in\Ker(\r d''\colon\s A^{q,0}_X(U)\to\s A^{q,1}_X(U)).\] It is easy to see that $\mathtt{a}u^q_X$ factors through the relations of Milnor $K$-theories, and thus induces a map of corresponding sheaves. \item If $X$ is moreover smooth, then we define another map of sheaves \[\lambda^q_X\colon\s K_X^q\to\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X\] as follows. 
For a symbol $\{f_1,\dots,f_q\}\in\s K_X^q(U)$ with $f_1,\dots,f_q\in\c O_X^*(U)$, we put \[\lambda^q_X(\{f_1,\dots,f_q\})=\f rac{\r d f_1}{f_1}\wedge\cdots\wedge\f rac{\r d f_q}{f_q},\] where the right-hand side is regarded as an element in $\Omega^{q,\r{cl}}_X(U)$ and hence in $(\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)(U)$. It is easy to see that $\lambda^q_X$ factors through the relations of Milnor $K$-theories, and thus induces a map of corresponding sheaves. \item We introduce the following quotient sheaves: \[\s T_X^q=\s K_X^q/\Ker\mathtt{a}u^q_X,\quad\s L_X^q=\s K_X^q/\Ker\lambda^q_X\] whenever the maps are defined. \end{enumerate} \end{definition} \b egin{proposition}\label{pr:kernel} Let $K$ be a non-Archimedean field and $X$ a smooth $K$-analytic space. Then $\mathtt{a}u^q_X$ induces an isomorphism \[\s T_X^q\otimes_\b Q\b R\simeq\Ker(\r d''\colon\s A^{q,0}_X\to\s A^{q,1}_X).\] \end{proposition} \b egin{proof} If suffices to show the isomorphism on stalks. We fix a point $x\in X$ with $s=s(x)$ and $t=t(x)$. We first describe a typical section of $\Ker(\r d''\colon\s A^{q,0}_X\to\s A^{q,1}_X)$ around $x$. We say a collection of data $(U;f_1,\dots,f_N)$ where $U$ is an open neighborhood of $x$ and $f_1,\dots,f_N\in\c O^*_X(U)$ is \emph{basic at $x$} if, under the induced tropicalization map \[\trop_U\colon U\xrightarrow{f_1,\dots,f_N}(\b G_{\r{m},K}^\r{an})^N\xrightarrow{-\log|\;|} T_N\otimes_\b Z\b R\simeq\b R^N\] where $T_N$ is the cocharacter lattice of $\b G_{\r{m}}^N$, there exists a rational polyhedral complex $\c C$ of dimension $s+t$ with a unique minimal polyhedron $\sigma_U$, which is of dimension $t$, such that $\trop_U(U)$ is an open subset of $\c C$ and $\trop_U(x)$ is contained in $\sigma_U$. For every polyhedron $\mathtt{a}u$ of $\c C$, we denote by $\d L(\mathtt{a}u)$ the underlying linear $\b Q$-subspace of $T_{N,\b Q}\coloneqq T_N\otimes_\b Z\b Q$. Then we have an inclusion \[\sum_{\sigma_U\prec\mathtt{a}u\in\c C}\wedge^q\d L(\mathtt{a}u)\subset\wedge^qT_{N,\b Q}\] of $\b Q$-vector spaces, and thus a map \[\Hom_\b Q(\wedge^qT_{N,\b Q},\b R)\to\Hom_\b Q(\sum_{\sigma_U\prec\mathtt{a}u\in\c C}\wedge^q\d L(\mathtt{a}u),\b R).\] By \cite{JSS15}*{Proposition 3.16}, the canonical map \[\Hom_\b Q(\wedge^qT_{N,\b Q},\b R)\to\Ker(\r d''\colon\s A^{q,0}_X(U)\to\s A^{q,1}_X(U))\] factors through $\Hom_\b Q(\sum_{\sigma_U\prec\mathtt{a}u\in\c C}\wedge^q\d L(\mathtt{a}u),\b R)$, and moreover every element in the stalk $\Ker(\r d''\colon\s A^{q,0}_{X,x}\to\s A^{q,1}_{X,x})$ has a representative in $\Hom_\b Q(\sum_{\sigma_U\prec\mathtt{a}u\in\c C}\wedge^q\d L(\mathtt{a}u),\b R)$ for some basic data $(U;f_1,\dots,f_N)$. This implies that the induced map $\s T_X^q\otimes_\b Q\b R\simeq\Ker(\r d''\colon\s A^{q,0}_X\to\s A^{q,1}_X)$ is injective, as well as surjective since elements in $\Hom_\b Q(\wedge^qT_{N,\b Q},\b Q)$ are in the image of $\mathtt{a}u_X^q$. \end{proof} \b egin{remark} Proposition \r ef{pr:kernel} implies that for all $q,q'\geq 0$, we have a canonical isomorphism \[H^{q'}(X,\s T_X^q\otimes_\b Q\b R)\simeq H^{q,q'}_\s A(X).\] In particular, the real vector space $H^{q,q'}_\s A(X)$ has a canonical rational structure coming from the isomorphism $H^{q'}(X,\s T_X^q\otimes_\b Q\b R)\simeq H^{q'}(X,\s T_X^q)\otimes_\b Q\b R$. \end{remark} \b egin{definition}[Tropical cycle class map]\label{de:tropical} Let $K$ be a non-Archimedean field and $\c X$ a smooth scheme over $K$. 
\b egin{enumerate} \item The \emph{tropical cycle class map (in forms)} $\r{cl}_\s A$ is defined to be the composition \b egin{multline*} \r{cl}_\s A\colon\CH^q(\c X)_\b Q\xrightarrow{\r{cl}_\s K}H^q(\c X,\s K_\c X^q)\to H^q(\c X^\r{an},\s K_{\c X^\r{an}}^q)\\ \xrightarrow{H^q(\c X^\r{an},\mathtt{a}u^q_{\c X^\r{an}})} H^q(\c X^\r{an},\s T_{\c X^\r{an}}^q)\to H^q(\c X^\r{an},\s T_{\c X^\r{an}}^q\otimes_\b Q\b R)\xrightarrow{\sim} H^{q,q}_\s A(\c X^\r{an}), \end{multline*} which can be regarded as a cycle class map valued in Dolbeault cohomology of forms. \item The \emph{tropical cycle class map (in currents)} $\r{cl}_\s D$ is defined to be the further composition \[\r{cl}_\s D\colon H^q(\c X,\s K_\c X^q)\xrightarrow{\r{cl}_\s A}H^{q,q}_\s A(\c X^\r{an}) \xrightarrow{\kappa_{\c X^\r{an}}}H^{q,q}_\s D(\c X^\r{an}),\] which can be regarded as a cycle class map valued in Dolbeault cohomology of currents. \end{enumerate} It is clear that both $\r{cl}_\s A$ and $\r{cl}_\s D$ are homomorphisms of graded $\b Q$-algebras. \end{definition} The following theorem establishes the compatibility of tropical cycle class maps and integration, which can be viewed as a tropical version of the Cauchy formula in multi-variable complex analysis. \b egin{theorem}\label{th:cycle} Let $K$ be a non-Archimedean field and $\c X$ a smooth scheme over $K$ of dimension $n$. Then for every algebraic cycle $\c Z$ of $\c X$ of codimension $q$, we have the equality \[\langle\r{cl}_\s D(\c Z),\omega\r angle_{\c X^\r{an}}=\int_{\c Z^\r{an}}\omega\] for every $\r d''$-closed form $\omega\in \s A^{n-q,n-q}_{\c X^\r{an}}(\c X^\r{an})_c$ with compact support. \end{theorem} \b egin{proof} We may assume that $\c Z$ is prime, that is, a reduced irreducible closed subscheme of $\c X$ of codimension $q$. Let $\c Z_{\r{sing}}\subset\c Z$ be the singular locus, which is a closed subscheme of $\c X$ of codimension $>q$. Put $\c U=\c X\b ackslash\c Z_{\r{sing}}$, $\c Z_{\r{sm}}=\c Z\b ackslash\c Z_{\r{sing}}$, $X=\c X^\r{an}$, $U=\c U^\r{an}$, and $Z=\c Z_{\r{sm}}^\r{an}$. In particular, $Z$ is a Zariski closed subset of $U$. To ease notation, we put \[\s A^{q,q',\r{cl}}_X=\Ker(\r d''\colon\s A_X^{q,q'}(X)\to\s A_X^{q,q'+1}(X)),\quad \s D^{q,q',\r{cl}}_X=\Ker(\r d''\colon\s D_X^{q,q'}(X)\to\s D_X^{q,q'+1}(X)).\] We fix a form $\omega\in\s A^{n-q,n-q,\r{cl}}_X(X)_c$. By \cite{CLD12}*{Lemme 3.2.5}, $\omega$ belongs to $\s A^{n-q,n-q,\r{cl}}_X(U)_c$. \mathtt{e}xtbf{Step 1.} Using Example \r ef{ex:scheme}, we describe explicitly the class $\r{cl}_\s D(\c Z_{\r{sm}})$. We choose a finite affine open covering $\c U_i$ of $\c U$ and $f_{i1},\dots,f_{iq}\in\c O_\c U(\c U_i)$ such that $\c Z_{\r{sm}}\cap\c U_i$ is defined by the ideal $(f_{i1},\dots,f_{iq})$. Let $\c U_{ij}$ be the nonvanishing locus of $f_{ij}$. Put $U_i=\c U_i^\r{an}$ and $U_{ij}=\c U_{ij}^\r{an}$. Then $\{ U_{ij}\mathbin{|} j=1,\dots,q\}$ is an open covering of $U_i\b ackslash Z$. Thus the element $\mathtt{a}u^q_U(\{f_{i1},\dots,f_{iq}\})$ gives rise to an element in $H^{q-1}(U_i\b ackslash Z,\s A^{q,0,\r{cl}}_U)\simeq H^{q-1}(U_i\b ackslash Z,\s A^{q,\b ullet}_U)$, and we denote its image under the composite map \[H^{q-1}(U_i\b ackslash Z,\s A^{q,\b ullet}_U)\to H^{q-1}(U_i\b ackslash Z,\s D^{q,\b ullet}_U) \to H^q_{Z\cap U_i}(U_i,\s D^{q,\b ullet}_U)\] by $c(Z)_i$. It is easy to see that $c(Z)_i$ does not depend on the choice of $f_{i1},\dots,f_{iq}$. Therefore, $\{c(Z)_i\}$ gives rise to an element $c(Z)\in H^0(U,\underline{H}^q_Z(\s D^{q,\b ullet}_U))$.
Again by \cite{CLD12}*{Lemme 3.2.5}, $\underline{H}^i_Z(\s D^{q,\b ullet}_U)=0$ for $i<q$, and we have an isomorphism $H^q_Z(U,\s D^{q,\b ullet}_U)\simeq H^0(U,\underline{H}^q_Z(\s D^{q,\b ullet}_U))$. The image of $c(Z)$ in $H^q(U,\s D^{q,\b ullet}_U)\simeq H^{q,q}_\s D(U)$ coincides with $\r{cl}_\s D(\c Z_{\r{sm}})$. \mathtt{e}xtbf{Step 2.} We study $H^q_{U_i\cap Z}(U_i,\s D^{q,\b ullet}_U)$ in more details. Put \[\s D^{q,\b ullet}_{U,Z}(U_i)=\Ker(\s D^{q,\b ullet}_U(U_i)\to\s D^{q,\b ullet}_U(U_i\b ackslash Z)),\] with the induced differential $\r d''$, and put \[H^{q,q'}_{Z,\s D}(U_i)=\f rac{\Ker(\r d''\colon\s D^{q,q'}_{U,Z}(U_i)\to\s D^{q,q'+1}_{U,Z}(U_i))} {\IM(\r d''\colon\s D^{q,q'-1}_{U,Z}(U_i)\to\s D^{q,q'}_{U,Z}(U_i))}.\] As $\s D^{q,\b ullet}_U$ is a complex of flasque sheaves, we have the following commutative diagram \[\xymatrix{ H^{q,q'-1}_\s D(U_i\b ackslash Z) \ar[r]^-{\delta''}\ar[d]_-{\simeq} & H^{q,q'}_{Z,\s D}(U_i) \ar[r]\ar[d]^-{\simeq} & H^{q,q'}_\s D(U_i) \ar[d]^-{\simeq} \ar[r]\ar[d]^-{\simeq} & H^{q,q'}_\s D(U_i\b ackslash Z) \ar[d]^-{\simeq} \\ H^{q'-1}(U_i\b ackslash Z,\s D^{q,\b ullet}_U) \ar[r] & H^{q'}_{U_i\cap Z}(U_i,\s D^{q,\b ullet}_U) \ar[r] & H^{q'}(U_i,\s D^{q,\b ullet}_U) \ar[r] & H^{q'}(U_i\b ackslash Z,\s D^{q,\b ullet}_U). }\] In particular, when $q'=q$ we have \[H^q_{U_i\cap Z}(U_i,\s D^{q,\b ullet}_U)\simeq H^{q,q}_{Z,\s D}(U_i)\simeq \Ker(\s D^{q,q,\r{cl}}_U(U_i)\to\s D^{q,q}_U(U_i\b ackslash Z))\subset\s D^{q,q,\r{cl}}_U(U_i).\] Let $\theta_i\in\s A^{q,q-1,\r{cl}}_U(U_i\b ackslash Z)$ be a Dolbeault representative of $\mathtt{a}u^q_U(\{f_{i1},\dots,f_{iq}\})$ as a cohomology class in $H^{q-1}(U_i\b ackslash Z,\s A^{q,0,\r{cl}}_U)$, with induced class $[\theta_i]\in H^{q,q-1}_\s D(U_i\b ackslash Z)$. By partition of unity, we may write $\omega=\sum_i\omega_i$ with $\omega_i\in\s A^{n-q,n-q}_U(U_i)_c$. Note that \[\langle\r{cl}_\s D(\c Z),\omega\r angle_{\c X^\r{an}}=\langle\r{cl}_\s D(\c Z_{\r{sm}}),\omega\r angle_U =\langle\delta''([\theta]),\omega\r angle_U=\sum_i\langle\delta''([\theta]),\omega_i\r angle_U =\sum_i\langle\delta''([\theta_i]),\omega_i\r angle_U\] and \[\quad \int_{\c Z^\r{an}}\omega=\sum_i\int_{U_i\cap Z}\omega_i.\] To prove the theorem, it suffices to show that \[\langle\delta''([\theta_i]),\omega_i\r angle_U=\int_{U_i\cap Z}\omega_i\] for every $i$. \mathtt{e}xtbf{Step 3.} In what follows, we suppress the subscript $i$. We summarize our data as follows: \b egin{itemize} \item an affine smooth scheme $\c U$ over $K$ of dimension $n$, with $U=\c U^\r{an}$, \item a smooth irreducible closed subscheme $\c Z$ of codimension $q$ defined by the ideal $(f_1,\dots,f_q)$ where $f_1,\dots,f_q\in\c O_\c U(\c U)$, with $Z=\c Z^\r{an}$, \item $\theta\in\s A^{q,q-1,\r{cl}}_U(U\b ackslash Z)$ a Dolbeault representative of $\mathtt{a}u^q_U(\{f_1,\dots,f_q\})$ as a cohomology class in $H^{q-1}(U\b ackslash Z,\s A^{q,\b ullet}_U)$, and \item $\omega\in\s A^{n-q,n-q}_U(U)_c$. \end{itemize} Our goal is to show that \[\langle\delta''([\theta]),\omega\r angle_U=\int_Z\omega.\] Here we recall that $[\theta]\in H^{q,q-1}_\s D(U\b ackslash Z)$ is the class induced by $\theta$, and $\delta''\colon H^{q,q-1}_\s D(U\b ackslash Z)\to H^{q,q}_{Z,\s D}(U)$ is the coboundary map, in which the target $H^{q,q}_{Z,\s D}(U)$ is a subspace of $\s D^{q,q,\r{cl}}_U(U)$. As $Z$ is a closed Zariski subset of $U$ of codimension $q$, the image of $\s A^{n-q,n-q}_U(U)_c$ under $\r d''$ is in $\s A^{n-q,n-q+1,\r{cl}}_U(U\b ackslash Z)_c$. 
By definition, the following diagram \[\xymatrix{ H^{q,q-1}_\s D(U\b ackslash Z)\!\!\!\!\!\!\!\!\!\! \ar[d]_-{\delta''} & \mathtt{i}mes & \!\!\!\!\!\!\!\!\!\!\s A^{n-q,n-q+1,\r{cl}}_U(U\b ackslash Z)_c \ar[rr]^-{\langle\;,\;\r angle_U} && \b R \ar@{=}[d] \\ \s D^{q,q,\r{cl}}_U(U)\!\!\!\!\!\!\!\!\!\! & \mathtt{i}mes & \!\!\!\!\!\!\!\!\!\!\s A^{n-q,n-q}_U(U)_c \ar[u]_-{\r d''}\ar[rr]^-{\langle\;,\;\r angle_U} && \b R }\] is commutative. Therefore, we have \[\langle\delta''([\theta]),\omega\r angle_U=\int_{U\b ackslash Z}\theta\wedge\r d''\omega.\] Thus it suffices to show that \[\int_{U\b ackslash Z}\theta\wedge\r d''\omega=\int_Z\omega.\] Obviously, the equality does not depend on the choice of the Dolbeault representative. \mathtt{e}xtbf{Step 4.} Let $\c U_i\subset\c U$ be the nonvanishing locus of $f_i$. Then we have an open covering $\underline{U}=\{U_i\}$ of $U\b ackslash Z$, where $U_i=\c U_i^\r{an}$. For $I\subset\{1,\dots,q\}$, put $U_I=\b igcap_{i\in I}U_i$. Let us recall the construction of a Dolbeault representative $\theta$. We inductively construct elements $\theta_i\in H^{q-i-1}(U\b ackslash Z,\s A_U^{q,i,\r{cl}})$ represented by an (alternating) closed \v{C}ech cocycle \[\theta_i=\{\theta_{i,I}\in\s A^{q,i,\r{cl}}_U(U_I)\mathbin{|} |I|=q-i\}\] for $i=0,\dots,q-1$. The class $\theta_0$ is simply \[\{\theta_{0,\{1,\dots,q\}}=\mathtt{a}u^q_U(\{f_1,\dots,f_q\})\in\s A^{q,0,\r{cl}}_U(U_{\{1,\dots,q\}})\}.\] Suppose that we have $\theta_{i-1}$ for some $1\leq i\leq q-1$. By the Poincar\'{e} lemma, we have an exact sequence \[0\to\s A^{q,i-1,\r{cl}}_U\to\s A^{q,i-1}_U\to\s A^{q,i,\r{cl}}_U\to 0.\] As $\s A^{q,i-1}_U$ is a fine sheaf, the \v{C}ech cohomology $H^{q-i}(\underline{U},\s A^{q,i-1}_U)$ is trivial. Thus there exists $\vartheta_i=\{\vartheta_{i,J}\in\s A^{q,i-1}_U(U_J)\mathbin{|}|J|=q-i\}$ with $\delta_{\underline{U}}\vartheta_i=\theta_{i-1}$, where $\delta_{\underline{U}}$ denotes the \v{C}ech differential for the covering $\underline{U}$. Now we set $\theta_i=\r d''\vartheta_i\coloneqq\{\r d''\vartheta_{i,J}\in\s A^{q,i,\r{cl}}_U(U_J)\mathbin{|}|J|=q-i\}$. The last closed \v{C}ech cocycle $\theta_{q-1}=\{\theta_{q-1,\{i\}}\in\s A^{q,q-1,\r{cl}}_U(U_i)\mathbin{|} i=1,\dots,q\}$ is simply a Dolbeault representative of $\mathtt{a}u^q_U(\{f_1,\dots,f_q\})\in H^{q-1}(U\b ackslash Z,\s A^{q,\b ullet}_U)$. For $\epsilon>0$ and $I\subset\{1,\dots,q\}$, put \[V_\epsilon^I=\{x\in U\mathbin{|} f_i(x)\in \partial\overline{D(0,\epsilon)},i\in I;\; f_j(x)\in \overline{D(0,\epsilon)},j\not\in I\},\] and $U_\epsilon=\overline{U\b ackslash V^\emptyset_\epsilon}$. Here, $\overline{D(0,\epsilon)}$ is the closed disc of radius $\epsilon$ with center at zero, and $\overline{U\b ackslash V^\emptyset_\epsilon}$ is the closure of $U\b ackslash V^\emptyset_\epsilon$ in $U$. As $\r d''\omega\in\s A^{n-q,n-q+1}_U(U\b ackslash Z)_c$, there is a real number $\epsilon_0>0$ such that $\r d''\omega=0$ on $V_{\epsilon_0}^\emptyset$. Thus for every $0<\epsilon<\epsilon_0$, we have \b egin{align}\label{eq:cycle_1} \int_{U\b ackslash Z}\theta\wedge\r d''\omega=\int_{\overline{U\b ackslash V^\emptyset_\epsilon}}\theta\wedge\r d''\omega= -\int_{\overline{U\b ackslash V^\emptyset_\epsilon}}\r d''(\theta\wedge\omega)=-\int_{U_\epsilon}\r d''(\theta_{q-1}\wedge\omega). \end{align} Since $U_\epsilon$ is a closed subset of $U$, the forms $\omega$ and hence $\theta\wedge\omega$ have compact support on $U_\epsilon$.
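Though not needed in the sequel, the simplest instance may help orient the reader: when $q=1$, the covering $\underline{U}$ consists of the single open subset $U_1=U\setminus Z$, no zig-zag is necessary, and $\theta=\theta_0$ is the single $(1,0)$-form attached to the symbol $\{f_1\}$, namely \[\theta=\r d x_1,\qquad x_1=-\log|f_1|,\] so that \eqref{eq:cycle_1} reads directly $\int_{U\setminus Z}\r d x_1\wedge\r d''\omega=-\int_{U_\epsilon}\r d''(\r d x_1\wedge\omega)$.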
\mathtt{e}xtbf{Step 5.} Now we have to use integration on boundaries $V^I_\epsilon$ and the corresponding Stokes' formula. We use the formulation of boundary integration through contraction as in \cite{Gub13}*{\S 2}. We consider first a tropical chart $\trop_W\colon W\to(\b G_{\r{m},K}^\r{an})^N\xrightarrow{-\log|\;|}\b R^N$, where $W$ is an open subset of $U_\epsilon$. Since $V^I_\epsilon$ is a $K^I_\epsilon$-analytic space of dimension $n-|I|$ for some extension $K^I_\epsilon/K$ of non-Archimedean fields, the image $\sigma_I\coloneqq\trop_W(W\cap V^I_\epsilon)$ consists of closed faces of codimension $|I|$ of $\trop_W(W)$. For every $i\in I$, we choose a tangent vector $\omega_i$ for the closed face $\sigma_{\{i\}}$ of $\sigma_\emptyset$ of codimension $1$, as defined in \cite{Gub13}*{2.8}. Suppose that $I=\{m_1,\dots,m_j\}$ where $1\leq m_1<\cdots<m_j\leq q$. If $\alpha$ is an $(n,n-|I|)$-superform on $W$ with compact support, then we define \[\int_{\sigma_I}\alpha\coloneqq \int_{\sigma_I}\langle\alpha;-\omega_{m_1},\dots,-\omega_{m_j}\r angle_{\{1,\dots,j\}}.\] It is easy to see that the above integral does not depend on the choice of $\omega_i$; however, it does depend on the order. We may patch the above integral to define the integral $\int_{V^I_\epsilon}\alpha$ for an $(n,n-|I|)$-form $\alpha$ on $V^I_\epsilon$ with compact support. The negative signs for $\omega_i$ ensure that we have the following Stokes' formula \[\int_{V^I_\epsilon}\r d''\alpha=\sum_{j\not\in I}(-1)^{(j,I\cup\{j\})}\int_{V^{I\cup\{j\}}_\epsilon}\alpha\] for an $(n,n-|I|-1)$-form $\alpha$ on $V^I_\epsilon$ with compact support, for $|I|\geq 1$. Here, $(j,J)$ is the position from the rear of the index $j$ when $J$ is ordered in the usual manner. However, for the initial Stokes' formula, we have \[\int_{U_\epsilon}\r d''\alpha=-\int_{\partial U_\epsilon}\alpha=-\sum_{|I|=1}\int_{V^I_\epsilon}\alpha\] for an $(n,n-1)$-form $\alpha$ on $U_\epsilon$ with compact support. In particular, we have \b egin{align}\label{eq:cycle_2} -\int_{U_\epsilon}\r d''(\theta_{q-1}\wedge\omega)=\int_{\partial U_\epsilon}\theta_{q-1}\wedge\omega =\sum_{|I|=1}\int_{V^I_\epsilon}\theta_{q-1,I}\wedge\omega. \end{align} In general, for $1\leq i\leq q-1$, we have \b egin{align*} \sum_{|I|=i}\int_{V^I_\epsilon}\theta_{q-i,I}\wedge\omega&=\sum_{|I|=i}\int_{V^I_\epsilon}\r d''\vartheta_{q-i,I}\wedge\omega\\ &=\sum_{|I|=i}\int_{V^I_\epsilon}\r d''(\vartheta_{q-i,I}\wedge\omega)\\ &=\sum_{|I|=i}\sum_{j\not\in I}(-1)^{(j,I\cup\{j\})}\int_{V^{I\cup\{j\}}_\epsilon}\vartheta_{q-i,I}\wedge\omega\\ &=\sum_{|J|=i+1}\int_{V^J_\epsilon}\sum_{j\in J}(-1)^{(j,J)}\vartheta_{q-i,J\b ackslash\{j\}}\wedge\omega\\ &=\sum_{|J|=i+1}\int_{V^J_\epsilon}(\delta_{\underline{U}}\vartheta_{q-i})_J\wedge\omega\\ &=\sum_{|J|=i+1}\int_{V^J_\epsilon}\theta_{q-(i+1),J}\wedge\omega. \end{align*} Combining with \eqref{eq:cycle_1}, \eqref{eq:cycle_2}, we have \b egin{align}\label{eq:cycle_3} \int_{U\b ackslash Z}\theta\wedge\r d''\omega=\int_{V^{\{1,\dots,q\}}_\epsilon}\theta_{0,\{1,\dots,q\}}\wedge\omega =\int_{V^{\{1,\dots,q\}}_\epsilon}\mathtt{a}u^q_U(\{f_1,\dots,f_q\})\wedge\omega \end{align} for every $0<\epsilon<\epsilon_0$. \mathtt{e}xtbf{Step 6.} By \eqref{eq:cycle_3}, the theorem is reduced to the formula \b egin{align}\label{eq:cycle_4} \int_{V^{\{1,\dots,q\}}_\epsilon}\mathtt{a}u^q_U(\{f_1,\dots,f_q\})\wedge\omega =\int_Z\omega \end{align} for sufficiently small $\epsilon>0$.
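For comparison only (nothing here is used in the proof), \eqref{eq:cycle_4} is the non-Archimedean counterpart of the iterated Cauchy formula alluded to before the statement of the theorem: for a smooth compactly supported $(n-q,n-q)$-form $\omega$ on $\b C^n$ and $Z=\{z_1=\cdots=z_q=0\}$, one has, under the standard orientation conventions, \[\lim_{\epsilon\to 0}\frac{1}{(2\pi i)^q}\int_{|z_1|=\cdots=|z_q|=\epsilon}\frac{\r d z_1}{z_1}\wedge\cdots\wedge\frac{\r d z_q}{z_q}\wedge\omega=\int_Z\omega.\] In the present setting no limit is necessary: \eqref{eq:cycle_4} holds on the nose for all sufficiently small $\epsilon$.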
We may choose a finite admissible covering of $U$ by affinoid domains $W_k$, a tropical chart $\trop_{W_k}\colon W_k\to(\b G_{\r{m},K}^\r{an})^{N_k}\xrightarrow{-\log|\;|}\b R^{N_k}$, and an $(n-q,n-q)$-superform $\alpha_k$ on $\trop_{W_k}(W_k)$ whose support is contained in the interior of $\trop_{W_k}(W_k)$, such that $\omega=\sum_k\trop_{W_k}^*\alpha_k$. It suffices to check \eqref{eq:cycle_4} on each $W_k$. Now we fix an arbitrary $k$ and suppress it from the notation. Suppose that the moment morphism $W\to (\b G_{\r{m},K}^\r{an})^N$ is defined by functions $g_1,\dots,g_N\in\c O^*_U(W)$. To check \eqref{eq:cycle_4}, we may assume that the morphism $(f_1,\dots,f_q)\colon W\to (\b A^q_K)^\r{an}$ is purely of relative dimension $n-q$ and $W_{\b{0}}\neq\emptyset$, where $W_{\b{0}}$ is the fiber over the origin. Put $W_\epsilon=W\cap (V^\emptyset_\epsilon\backslash Z)$. Applying \cite{CLD12}*{Proposition 4.6.6} successively, we know that there is some $\delta>0$ such that $\trop_W(W_\delta)_n$ is isomorphic to $\trop_W(W_{\b{0}})_{n-q}\times[-\log\delta,+\infty)^q$. Here, for a polyhedral complex $\c C$ of dimension $n$, we denote by $\c C_n$ the union of all polyhedra of dimension $n$. Therefore, \eqref{eq:cycle_4} follows for every $0<\epsilon<\delta$, as on $\trop_W(W_\delta)$ we may take $\omega_i$ to be $-\frac{\partial}{\partial x_i}$, where $(x_1,\dots,x_q)$ is the natural coordinate on $[-\log\delta,+\infty)^q$.
\end{proof}

\begin{corollary}\label{co:cycle}
Let $K$ be a non-Archimedean field and $\c X$ a proper smooth scheme over $K$ of dimension $n$. Then for every algebraic cycle $\c Z$ of $\c X$ of dimension $0$, we have
\[\int_{\c X^\r{an}}\r{cl}_\s A(\c Z)=\deg\c Z.\]
\end{corollary}

The last result in this section establishes the relation between the maps $\tau_X^q$ and $\lambda^q_X$.

\begin{theorem}\label{th:kernel}
Let $K$ be a non-Archimedean field embeddable into $\b C_p$, and $X$ a smooth $K$-analytic space. Then $\Ker\tau^q_X=\Ker\lambda_X^q$. In other words, we have a canonical isomorphism $\s T_X^q\simeq\s L_X^q$.
\end{theorem}

\begin{proof}
It suffices to check the equality on stalks. Thus we fix a point $x\in X$ with $s=s(x)$ and $t=t(x)$.

\textbf{Step 1.} Let $U$ be an open neighborhood of $x$. Take an element $F=\sum_{i=1}^N c_i\{f_{i1},\dots,f_{iq}\}\in\s K^q_X(U)$ where $c_i\in\b Q$ and $f_{ij}\in\c O^*_X(U)$.
By \cite{Berk07}*{Propositions 2.1.1, 2.3.1}, the K\"{u}nneth formula, and (the proof of) Theorem \ref{th:1} (ii), there exist
\begin{itemize}
\item a proper strictly semi-stable scheme $\c S$ over $k^\circ$ of dimension $s$, where $k$ is a finite extension of $\b Q_p$;
\item an irreducible component $\c E$ of $\c S_s$ that is geometrically irreducible;
\item an open neighborhood $W$ of $\pi^{-1}\c E_{\widetilde{L}}$ in $\c S_L^\r{an}$ where $L$ is a finite extension of $K$ containing $k$;
\item a closed subset $\c Z$ of dimension at most $s-1$ of $\c S_k$;
\item a point $y\in V\coloneqq \b D\times\prod_{k=1}^tB(0;r_k,R_k)\times W$ which projects to $\sigma_\c E$ in $W$;
\item a morphism $\alpha\colon V\to U$ that is \'{e}tale away from $\b D\times\prod_{k=1}^tB(0;r_k,R_k)\times(W\cap\c Z_L^\r{an})$, such that $\alpha(y)=x$;
\item for each $i,j$, integers $d_{ij1},\dots,d_{ijt}$ and $g_{ij}\in\c O^*(W,\pi^{-1}\c E_{\widetilde{L}})$, such that
\[\alpha^*\frac{\r d f_{ij}}{f_{ij}}-\frac{\r d\left(\beta^*g_{ij}\prod_{k=1}^tT_k^{d_{ijk}}\right)}{\beta^*g_{ij}\prod_{k=1}^tT_k^{d_{ijk}}}\]
is an exact $1$-form on $V$. Here, $T_k$ is the coordinate function on $B(0;r_k,R_k)$ for $1\leq k\leq t$, which will be regarded as a function in $\c O^*(V)$ via the obvious pullback; and $\beta\colon V\to W$ is the projection morphism.
\end{itemize}
In particular, if we put $h_{ij}=\beta^*g_{ij}\prod_{k=1}^tT_k^{d_{ijk}}$, then $|\alpha^*f_{ij}\cdot h^{-1}_{ij}|$ is equal to a constant $c_{ij}\in\b R_{>0}$ on $V$.

\textbf{Step 2.} We define three tropical charts as follows.
\begin{itemize}
\item The first one uses the functions $f_{ij}$ ($1\leq i\leq N,1\leq j\leq q$), which induce a moment morphism $U\to(\b G_{\r{m},K}^\r{an})^{Nq}$, and thus a tropicalization map $\trop_U\colon U\to(\b G_{\r{m},K}^\r{an})^{Nq}\xrightarrow{-\log|\;|}\b R^{Nq}$.
\item The second one uses the functions $g_{ij}$ ($1\leq i\leq N,1\leq j\leq q$), which induce a moment morphism $W\to(\b G_{\r{m},L}^\r{an})^{Nq}$, and thus a tropicalization map $\trop_W\colon W\to(\b G_{\r{m},L}^\r{an})^{Nq}\xrightarrow{-\log|\;|}\b R^{Nq}$.
\item The third one uses the functions $T_k$ ($1\leq k\leq t$) and $\beta^*g_{ij}$ ($1\leq i\leq N,1\leq j\leq q$), which induce a moment morphism $V\to(\b G_{\r{m},L}^\r{an})^{t+Nq}$, and thus a tropicalization map $\trop_V\colon V\to(\b G_{\r{m},L}^\r{an})^{t+Nq}\xrightarrow{-\log|\;|}\b R^{t+Nq}$.
\end{itemize}
We have a commutative diagram
\[\xymatrix{ W \ar[rr]^-{\trop_W} && \b R^{Nq} \\ V \ar[d]_-{\alpha}\ar[u]^-{\beta}\ar[rr]^-{\trop_V} && \b R^{t+Nq} \ar[d]^-{\breve\alpha}\ar[u]_-{\breve\beta} \\ U \ar[rr]^-{\trop_U} && \b R^{Nq} }\]
in which $\breve\alpha$ sends a point $(x_k,x_{ij})\in\b R^t\times\b R^{Nq}$ to $(y_{ij})$ where $y_{ij}=-\log c_{ij}+x_{ij}+\sum_{k=1}^td_{ijk}x_k$, and $\breve\beta$ is the projection onto the last $Nq$ factors. Note that
\[\tau^q_X(F)=\sum_{i=1}^Nc_i\bigwedge_{j=1}^q\r d y_{ij},\]
and thus
\[\breve\alpha^*\tau^q_X(F)=\sum_{i=1}^Nc_i\bigwedge_{j=1}^q \left(\r d x_{ij}+\sum_{k=1}^td_{ijk}\r d x_k\right)\]
as a $q$-form on $\b R^{t+Nq}$. We may write $\breve\alpha^*\tau^q_X(F)=\sum_{I\subset\{1,\dots,t\},|I|\leq q}\r d x_I\wedge\breve\beta^*\zeta_I$ for some $(q-|I|)$-form $\zeta_I$ on $\b R^{Nq}$.

\textbf{Step 3.} We show that $(\Ker\lambda_X^q)_x\subset(\Ker\tau^q_X)_x$.
Thus we assume that $\lambda^q_X(F)$ is an exact $q$-form on $U$, and we need to show that $\tau^q_X(F)=0$ on a possibly smaller open neighborhood of $x$. It suffices to show that $\breve\alpha^*\tau^q_X(F)=0$ when restricted to $\trop_V(V)$. This is true as, by Proposition \ref{pr:log}, we have that $\zeta_I=0$ when restricted to $\trop_W(W)$ for every $I$.

\textbf{Step 4.} We show that $(\Ker\tau^q_X)_x\subset(\Ker\lambda_X^q)_x$. Thus we may assume that $\tau^q_X(F)=0$ when restricted to $\trop_U(U)$, and we need to show that $\lambda^q_X(F)$ is an exact $q$-form on a possibly smaller open neighborhood of $x$. Then $\breve\alpha^*\tau^q_X(F)=0$ when restricted to $\trop_V(V)$, and thus $\zeta_I=0$ when restricted to $\trop_W(W)$ for every $I$. By Proposition \ref{pr:log}, the image of $\alpha^*\lambda^q_X(F)$ in $H^q_\r{dR}(V)$ is $0$ after possibly replacing $W$ by a smaller open neighborhood of $\pi^{-1}\c D_{\widetilde{L}}$, as the map $\xi^0_q$ \eqref{eq:gamma} is injective on $H^q_\r{rig}(\c E^\heartsuit/k)_{2q}$. In particular, there is an open neighborhood $V'$ of $y$ in $V$ such that the induced morphism $\alpha\colon V'\to U'$ is finite \'{e}tale, where $U'\subset U$ is the image of $\alpha\mathbin{|}_{V'}$, and $\alpha^*\lambda^q_X(F)\mathbin{|}_{V'}=\r d\omega'$ for some $(q-1)$-form $\omega'$ on $V'$. Thus $\lambda^q_X(F)\mathbin{|}_{U'}=\deg(\alpha\mathbin{|}_{V'})^{-1}\r d\omega$ where $\omega$ is the trace of $\omega'$ along $\alpha\colon V'\to U'$. The theorem follows.
\end{proof}

\section{Cohomological triviality}
\label{ss:triviality}

In this section, we study the relation between algebraic de Rham cycle classes and tropical cycle classes. Throughout this section, sheaves like $\c O_X$, $\f c_X$, $\Upsilon_X^q$, and the de Rham complex $(\Omega^\bullet_X,\r d)$ are understood in the analytic topology, and we fix an embedding $\b R\hookrightarrow\b C_p$. Moreover, we have to use the adic topology. By \cite{Sch12}*{Theorem 2.24}, we may associate to a $K$-analytic space $X$ an adic space $X^\r{ad}$, and we have a canonical continuous map $\gamma_X\colon X^\r{ad}\to X$ of topological spaces.

\begin{lem}\label{le:log}
Let $K$ be a non-Archimedean field embeddable into $\b C_p$, and $X$ a smooth $K$-analytic space. Then the canonical map $\s L^q_X\otimes_\b Q\f c_X\to\Upsilon_X^q$ is an isomorphism for every $q\geq 0$.
\end{lem}

\begin{proof}
By definition, it suffices to show that the map $\s L^q_X\otimes_\b Q\f c_X\to\Upsilon_X^q$ is injective on stalks. Thus we fix a point $x\in X$ with $s=s(x)$ and $t=t(x)$. Take an element $\sum_{l=1}^M b_l \lambda_X^q(F^l)\in\s L^q_X(U)\otimes_\b Q\f c_X(U)$ such that $\sum_{l=1}^M b_l\lambda_X^q(F^l)=0$ in $\Upsilon_X^q(U)$, where $U$ is a connected open neighborhood of $x$, and $b_l\in\f c_X(U)$, $F^l\in\s K^q_X(U)$. It suffices to show that, possibly after shrinking $U$, the elements $\lambda_X^q(F^l)$ are linearly dependent in $\Upsilon_X^q(U)$ over $\b Q$. Write $F^l=\sum_{i=1}^{N_l} c^l_i\{f^l_{i1},\dots,f^l_{iq}\}$ where $c^l_i\in\b Q$ and $f^l_{ij}\in\c O^*_X(U)$. We apply Step 1 of the proof of Theorem \ref{th:kernel} to the element $F\coloneqq\sum_{l=1}^M b_lF^l$.
Then for every $I\subset\{1,\dots,t\}$ with $|I|\leq q$, we have that
\begin{align}\label{eq:dependence}
\sum_{l=1}^Mb_l\sum_{i=1}^{N_l}c^l_i\sum_{\jmath}\epsilon_\jmath\left(\prod_{k\in I}d^l_{i\jmath(k)k}\right) \r{cl}^\heartsuit\left(\bigwedge_{j\not\in\IM\jmath}\DIV g^l_{ij}\right)\in \bigoplus_{J,|J|=q-|I|}H^0_\r{rig}(\c E_J^\heartsuit/L)
\end{align}
vanishes, for some finite extension of non-Archimedean fields $L/\f c_X(U)$. Here, $\jmath$ runs over all injective maps $I\to\{1,\dots,q\}$; the multi-wedge product $\bigwedge_{j\not\in\IM\jmath}\DIV g^l_{ij}$ is taken in increasing order of the index $j$; and $\epsilon_\jmath\in\{\pm1\}$ is determined by $\jmath$. Note that $H^0_\r{rig}(\c E_J^\heartsuit/L)$ is canonically isomorphic to $\b Q^{\oplus\pi_0(\c E_J^\heartsuit)}\otimes_\b Q L$, and for every $l$,
\[\sum_{i=1}^{N_l}c^l_i\sum_{\jmath}\epsilon_\jmath\left(\prod_{k\in I}d^l_{i\jmath(k)k}\right) \r{cl}^\heartsuit\left(\bigwedge_{j\not\in\IM\jmath}\DIV g^l_{ij}\right)\in \bigoplus_{J,|J|=q-|I|}\b Q^{\oplus\pi_0(\c E_J^\heartsuit)}.\]
Thus, there exist $b'_l\in\b Q$, not all zero, such that \eqref{eq:dependence} vanishes for every $I$ if we replace $b_l$ by $b'_l$. This implies that there is an open neighborhood $V'$ of $y$ in $V$ such that the induced morphism $\alpha\colon V'\to U'$ is finite \'{e}tale, where $U'\subset U$ is the image of $\alpha\mathbin{|}_{V'}$, and $\alpha^*\lambda^q_X(F')=\r d\omega'$ for some $\omega'\in\Omega^{q-1}(V')$, where $F'=\sum_{l=1}^M b'_l F^l$. Then $\lambda^q_X(F')=\deg(\alpha)^{-1}\r d\omega$ where $\omega$ is the trace of $\omega'$ along $\alpha\colon V'\to U'$. The lemma follows.
\end{proof}

The following theorem shows the finiteness of $H^{1,1}_\s A$ and studies the tropical cycle class of line bundles.

\begin{theorem}\label{th:line}
Let $\c X$ be a proper smooth scheme over $\b C_p$. Then
\begin{enumerate}
\item $H^{1,1}_\s A(\c X^\r{an})$ is finite dimensional;
\item for a line bundle $\c L$ on $\c X$ whose (algebraic) de Rham Chern class $\r{cl}_\r{dR}(\c L)\in H^2_\r{dR}(\c X)$ is trivial, we have $\r{cl}_\s A(\c L)=0$.
\end{enumerate}
\end{theorem}

\begin{proof}
We put $X=\c X^\r{an}$. By Theorem \ref{th:1} and Lemma \ref{le:log}, we know that $H^1(X,\s L_X^1\otimes_\b Q\b C_p)\simeq H^1(X,\s L_X^1)\otimes_\b Q\b C_p$ is a direct summand of $H^1(X,\Omega^{1,\r{cl}}_X/\r d\c O_X)$. For (1), it suffices to show that $\dim_{\b C_p}H^1(X,\Omega^{1,\r{cl}}_X/\r d\c O_X)<\infty$. In fact, we have a spectral sequence $E^{p,q}_r$ abutting to $H^\bullet_\r{dR}(X)=H^\bullet(X,\Omega^\bullet_X)$ with the second page terms $E^{p,q}_2=H^p(X,\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)$. Thus, it suffices to show that both $H^3(X,\b C_p)$ and $H^2(X,\Omega^\bullet_X)$ are finite dimensional. Since $X$ has the homotopy type of a finite CW complex, $\dim_{\b C_p}H^i(X,\b C_p)<\infty$ for every $i\in\b Z$. By GAGA, $H^i(X,\Omega^\bullet_X)$ is canonically isomorphic to the algebraic de Rham cohomology $H^i_\r{dR}(\c X)$ for every $i$, and thus finite dimensional. For (2), note that the map $\r{cl}_\s A\colon \CH^q(\c X)_\b Q\to H^{q,q}_\s A(X)$ factors through $H^q(X,\s L^q_X)$. We denote by $\r{cl}(\c L)$ the corresponding class in $H^1(X,\s L^1_X)$. It suffices to show that $\r{cl}(\c L)$ is zero in $H^1(X,\Omega^{1,\r{cl}}_X/\r d\c O_X)$. Now we regard $\r{cl}(\c L)$ as an element in the latter cohomology group.
Note that $\r{cl}(\c L)$ maps to zero under the coboundary map $\delta\colon H^1(X,\Omega^{1,\r{cl}}_X/\r d\c O_X)\to H^3(X,\b C_p)$, as the composite map $\CH^1(\c X)_\b Q\to H^3(X,\b C_p)$ fits into the following commutative diagram
\[\xymatrix{ \CH^1(\c X)_\b Q \ar[r] & H^1(\c X,\Omega^{1,\r{cl}}_\c X/\r d\c O_\c X) \ar[r]\ar[d]& H^1(X,\Omega^{1,\r{cl}}_X/\r d\c O_X) \ar[d]^-{\delta} \\ & H^3(\c X,\b C_p) \ar[r] & H^3(X,\b C_p) }\]
in which $H^3(\c X,\b C_p)$ vanishes. Thus, it suffices to show that the image of $\r{cl}(\c L)$ vanishes under the map $\Ker(\delta\colon H^1(X,\Omega^{1,\r{cl}}_X/\r d\c O_X)\to H^3(X,\b C_p))\to H^2(X,\Omega_X^\bullet)/H^2(X,\b C_p)$. However, by comparing the definitions of the two cycle class maps, we know that it is also the image of $\r{cl}_\r{dR}(\c L)$ under the composite map $H^2_\r{dR}(\c X)\simeq H^2(X,\Omega_X^\bullet)\to H^2(X,\Omega_X^\bullet)/H^2(X,\b C_p)$, and thus vanishes.
\end{proof}

The following lemma is the analytic version of the corresponding statement in the algebraic setting.

\begin{lem}\label{le:top}
Let $K$ be a non-Archimedean field. Let $\c X$ be a geometrically connected proper smooth scheme over $K$ of dimension $n$. Then we have $H^n(\c X^\r{an},\Omega^n_{\c X^\r{an}}/\r d\Omega^{n-1}_{\c X^\r{an}}) \simeq K$.
\end{lem}

\begin{proof}
Put $X=\c X^\r{an}$. By the spectral sequence $E^{p,q}_2=H^p(X,\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)\Rightarrow H^{p+q}(X,\Omega^\bullet_X)$ and the GAGA comparison isomorphism $H^\bullet_\r{dR}(X)\simeq H^\bullet_\r{dR}(\c X)$, it suffices to show that $H^i(X,\s F)=0$ for $i>n$ and every abelian sheaf $\s F$ on $X$. Then we have $H^n(X,\Omega^n_X/\r d\Omega^{n-1}_X)\simeq H^{2n}_\r{dR}(\c X)\simeq K$. By \cite{Berk93}*{Proposition 1.3.6 \& Lemma 1.6.2} and \cite{Sch12}*{Theorem 2.21}, we have a canonical isomorphism $H^i(X,\s F)\simeq H^i(X^\r{ad},\gamma_X^{-1}\s F)$. In fact, we will show that $H^i(X^\r{ad},\s F)=0$ for $i>n$ and every abelian sheaf $\s F$ on $X^\r{ad}$. Recall that a formal model of $X$ is a proper flat formal $K^\circ$-scheme $\f X$ with an isomorphism $\f X_\eta\simeq X$. A formal model $\f X$ induces a continuous map $\gamma_\f X\colon X^\r{ad}\to\f X$. By \cite{Sch12}*{Theorem 2.22} and \cite{SP}*{094L, 0A2Z}, we have an isomorphism $X^\r{ad}\simeq\varprojlim_\f X\f X$ of spectral spaces, where the (cofiltered) limit is taken over all formal models $\f X$ of $X$. By \cite{SP}*{0A37}, we have an isomorphism $\varinjlim_{\f X}H^i(\f X,\gamma_{\f X*}\s F)\simeq H^i(X^\r{ad},\s F)$. Now, as (the underlying space of) $\f X$ is a Noetherian topological space of dimension (at most) $n$, it follows that $H^i(\f X,\gamma_{\f X*}\s F)=0$ for $i>n$ by the Grothendieck vanishing theorem \cite{SP}*{02UZ}. The lemma then follows.
\end{proof}

\begin{definition}
Let $X$ be a compact smooth $\b C_p$-analytic space of dimension $n$. We have a total integration map
\[\int_X\colon H^n(X,\s T^n_X\otimes_\b Q\b R)\simeq H^{n,n}_\s A(X)\to\b R.\]
By the isomorphism $\s T^n_X\simeq\s L^n_X$ of Theorem \ref{th:kernel} and Lemma \ref{le:log}, and by extending the above map linearly over $\b C_p$, we obtain a $\b C_p$-linear map
\[\Tr_X^\s A\colon H^n(X,\Upsilon^n_X)\simeq H^n(X,\s L^n_X\otimes_\b Q\b C_p)\to\b C_p,\]
called the \emph{trace map} for $X$.
\end{definition}

The following proposition can be regarded as a certain algebraicity property of the transcendental map of integration.
\begin{proposition}\label{pr:trace}
Let $k\subset\b C_p$ be a discrete non-Archimedean subfield, and $\c X$ a geometrically connected proper smooth scheme over $k$ of dimension $n$. If we put $\c X_\r a=\c X\otimes_k\b C_p$, then the map $\Tr_{\c X^\r{an}_\r a}^\s A$ factors through the canonical map
\[H^n(\c X_\r a^\r{an},\Upsilon^n_{\c X_\r a^\r{an}})\to H^n(\c X_\r a^\r{an},\Omega^n_{\c X_\r a^\r{an}}/\r d\Omega^{n-1}_{\c X_\r a^\r{an}})\simeq\b C_p,\]
where we have used Lemma \ref{le:top} for the last isomorphism. In particular, $H^n(\c X_\r a^\r{an},(\Omega^n_{\c X_\r a^\r{an}}/\r d\Omega^{n-1}_{\c X_\r a^\r{an}})_w)\simeq\b C_p$ if $w=2n$, and is trivial otherwise.
\end{proposition}

\begin{proof}
We put $X=\c X_\r a^\r{an}$. Define $\Xi^n_X$ to be the quotient sheaf in the following exact sequence
\[0\to \Upsilon^n_X \to (\Omega^n_X/\r d\Omega^{n-1}_X)_{2n}\to \Xi^n_X \to 0.\]
It is functorial in $X$. The best hope is that $\Xi^n_X$ is trivial, but we do not know this so far. However, one can show that $\Xi^n_X$ is supported on $\{x\in X\mathbin{|} s(x)\geq 2\}$. This suggests that one should expect $H^i(X,\Xi^n_X)=0$ for $i\geq n-1$, which suffices for the proposition. In fact, such a vanishing result can be proved if we have semi-stable resolution instead of alteration. In the absence of semi-stable resolution, we need an \emph{ad hoc} argument. We may assume that $\c X$ is projective, as we will eventually take an alteration of $\c X$.

Take a cohomology class $\alpha\in H^{n-1}(X,\Xi^n_X)$. Since $X$ is (Hausdorff and) compact, by \cite{SP}*{09V2, 01FM}, there is a finite open covering $\underline{U}=\{U_i\mathbin{|} i=1,\dots,N\}$ of $X$ such that $\alpha$ is represented by an (alternating) \v{C}ech cocycle $\underline\alpha=\{\alpha_I\in\Xi^n_X(U_I)\mathbin{|} I\subset\{1,\dots,N\}, |I|=n\}$ on $\underline{U}$, where $U_I=\bigcap_{i\in I}U_i$ as always. By refining $\underline{U}$, we may assume that $\alpha_I$ is in the image of the map $(\Omega^n_X/\r d\Omega^{n-1}_X)(U_I)^\r{pre}_{2n}\to\Xi^n_X(U_I)$ for every $I$ (see Definition \ref{de:weight} for the notation). By \cite{Pay09}*{Theorem 4.2}, taking blow-ups, and possibly taking a finite extension of $k$ inside $\b C_p$, we have a (proper flat) integral model $\c Y$ of $\c X$ such that if $\c Z_1,\dots,\c Z_M$ are all the reduced irreducible components of $\c Y_s$, then the covering $\{\pi^{-1}\c Z_i\widehat\otimes_k\b C_p\mathbin{|} i=1,\dots,M\}$ refines $\underline{U}$. By \cite{dJ96}*{Theorem 8.2}, possibly after taking a further finite extension of $k$ inside $\b C_p$, we have a (proper) strictly semi-stable scheme $\c Y'$ over $k^\circ$ with an alteration $f\colon\c Y'\to \c Y$. For simplicity, we may also assume that every irreducible component of $\c Y'^{[t]}_s$ ($0\leq t\leq n$) is geometrically irreducible. In particular, if we denote by $\c Z'_1,\dots,\c Z'_{M'}$ all the irreducible components of $\c Y'_s$, then the covering $\{\pi^{-1}\c Z'_i\widehat\otimes_k\b C_p\mathbin{|} i=1,\dots,M'\}$ refines $f^{-1}\underline{U}\coloneqq\{f^{-1}U_i\mathbin{|} i=1,\dots,N\}$. We fix an index function $\varrho\colon\{1,\dots,M'\}\to\{1,\dots,N\}$ for the refinement; in other words, $\pi^{-1}\c Z'_i\widehat\otimes_k\b C_p$ is contained in $f^{-1}U_{\varrho(i)}$. We fix a uniformizer $\varpi$ of $k$, and put $X'=(\c Y'\otimes_k\b C_p)^\r{an}$. We claim that $f^*\alpha=0$, where $f^*\alpha$ is the canonical image of $f^{-1}\alpha$ in $H^{n-1}(X',\Xi^n_{X'})$.
For every $1\leq i\leq M'$ and $0<\epsilon<1$, we denote by $U_i(\epsilon)$ the open subset of $\pi^{-1}\c Z'_i\widehat\otimes_k\b C_p\subset X'$ as in Step 3 of the proof of Lemma \ref{le:stalk}. Then we have that
\[\pi^{-1}\c Z'_i\widehat\otimes_k\b C_p=\bigcup_{0<\epsilon<1}U_i(\epsilon),\quad \pi^{-1}(\c Z'_i\backslash\c Y'^{[1]}_s)\widehat\otimes_k\b C_p=\bigcap_{0<\epsilon<1}U_i(\epsilon).\]
By definition, $\underline{U}(\epsilon)\coloneqq\{U_i(\epsilon)\mathbin{|} 1\leq i\leq M'\}$ forms an open covering if and only if $\epsilon>1/2$. Fix a real number $1/2<\epsilon<1$. We study a typical $n$-fold intersection of $\underline{U}(\epsilon)$. Without loss of generality, we consider $U_{\{1,\dots,n\}}(\epsilon)\coloneqq\bigcap_{i=1}^nU_i(\epsilon)$. If $\bigcap_{i=1}^n\c Z'_i=\emptyset$, then $U_{\{1,\dots,n\}}(\epsilon)=\emptyset$. So we may assume that $\bigcap_{i=1}^n\c Z'_i=\bigsqcup_{l=1}^L\c C_l$, where each $\c C_l$ is a geometrically irreducible proper smooth curve over $\widetilde{k}$. Take a typical member $\c C$ of $\{\c C_1,\dots,\c C_L\}$ and put $U_\c C(\epsilon)=U_{\{1,\dots,n\}}(\epsilon)\cap\pi^{-1}\c C\widehat\otimes_k\b C_p$. Recall the $k$-analytic space $\b E^{n-1}_\varpi$ defined in Example \ref{ex:torus}. Let $\b E^t_\varpi(\epsilon)\subset\b E^t_\varpi$ be the subspace such that $|T_i|<|\varpi|^{1-\epsilon}$ for every $0\leq i\leq t$. By \cite{GK02}*{Lemma 3}, the canonical map $H^\bullet_\r{dR}(\b E^t_\varpi)\to H^\bullet_\r{dR}(\b E^t_\varpi(\epsilon))$ is an isomorphism. Put $\c C^\heartsuit=\c C\backslash\c Y'^{[n]}_s$ and $U_\c C^\heartsuit(\epsilon)=U_\c C(\epsilon)\cap\pi^{-1}\c C^\heartsuit\widehat\otimes_k\b C_p$. By the proof of \cite{GK02}*{Theorem 2.3}, we have isomorphisms
\begin{align*}
H^\bullet_\r{dR}(U_\c C(\epsilon),U_\c C^\heartsuit(\epsilon))&\simeq \Tot(H^\bullet_\r{dR}(\b E^{n-1}_\varpi(\epsilon)) \otimes_k H^\bullet_\r{rig}(\c C^\heartsuit/k))\otimes_k\b C_p\\
&\simeq\Tot(H^\bullet_\r{dR}(\b E^{n-1}_\varpi) \otimes_k H^\bullet_\r{rig}(\c C^\heartsuit/k))\otimes_k\b C_p
\end{align*}
of graded $\b C_p$-vector spaces. By Theorem \ref{th:1} (ii), there is an open neighborhood $U_1$ of $U_\c C^\heartsuit(\epsilon)$ in $U_\c C(\epsilon)$ such that the image of $f^{-1}\alpha_{\{\varrho(1),\dots,\varrho(n)\}}\mathbin{|}_{U_1}$ in $\Xi^n_{X'}(U_1)$ is zero. Put $U_2\coloneqq U_\c C(\epsilon)\cap\pi^{-1}(\c C\backslash\c C^\heartsuit)\widehat\otimes_k\b C_p$. Then $H^\bullet_\r{dR}(U_2)$ is isomorphic to a finite number of copies of $H^\bullet_\r{dR}(\b E^n_\varpi)\otimes_k\b C_p$, and in particular, the image of $f^{-1}\alpha_{\{\varrho(1),\dots,\varrho(n)\}}\mathbin{|}_{U_2}$ in $\Xi^n_{X'}(U_2)$ is zero. Finally, note that $U_\c C(\epsilon)=U_1\cup U_2$, which implies that $f^*\alpha=0$.

Going back to $X$, we have the following commutative diagram
\[\xymatrix{ H^{n-1}(X,\Xi^n_X) \ar[r]\ar[d]_-{f^*} & H^n(X,\Upsilon^n_X) \ar[rr]^-{\Tr_X^\s A}\ar[d]^-{f^*} && \b C_p \ar[d]^-{\deg(f)} \\ H^{n-1}(X',\Xi^n_{X'}) \ar[r] & H^n(X',\Upsilon^n_{X'}) \ar[rr]^-{\Tr_{X'}^\s A} && \b C_p, }\]
where the right vertical arrow is multiplication by $\deg(f)$. Therefore, $\Tr_X^\s A$ factors through the map $H^n(X,\Upsilon^n_X)\to H^n(X,\Omega^n_X/\r d\Omega^{n-1}_X)$.
The last statement follows from the combination of the following facts:
\begin{itemize}
\item $\Tr_X^\s A$ is surjective, as one can write down an $(n,n)$-form on $X$ with nonzero total integral;
\item $H^n(X,\Omega^n_X/\r d\Omega^{n-1}_X)=\bigoplus_{w\in\b Z}H^n(X,(\Omega^n_X/\r d\Omega^{n-1}_X)_w)$ by Theorem \ref{th:1};
\item $H^n(X,\Omega^n_X/\r d\Omega^{n-1}_X)\simeq\b C_p$ by Lemma \ref{le:top}; and
\item the image of $H^n(X,\Upsilon^n_X)\to H^n(X,\Omega^n_X/\r d\Omega^{n-1}_X)$ is contained in $H^n(X,(\Omega^n_X/\r d\Omega^{n-1}_X)_{2n})$.
\end{itemize}
\end{proof}

The next theorem shows that algebraic cycles that are cohomologically trivial in the algebraic de Rham cohomology are cohomologically trivial in the Dolbeault cohomology of currents as well.

\begin{theorem}\label{th:trivial}
Let $k\subset\b C_p$ be a finite extension of $\b Q_p$ and $\c X$ a proper smooth scheme over $k$ of dimension $n$. Let $\c Z$ be an algebraic cycle of $\c X$ of codimension $q$ such that $\r{cl}_\r{dR}(\c Z)=0$. If we put $\c X_\r a=\c X\otimes_k\b C_p$ and $\c Z_\r a=\c Z\otimes_k\b C_p$, then $\r{cl}_\s D(\c Z_\r a)=0$, that is,
\[\int_{\c Z_\r a^\r{an}}\omega=0\]
for every $\r d''$-closed form $\omega\in\s A^{n-q,n-q}(\c X_\r a^\r{an})$.
\end{theorem}

We need some preparation before the proof of the theorem. We start with the following lemma.

\begin{lem}\label{le:finite}
Let the assumption and notation be as in Theorem \ref{th:trivial}. Then for every $i$ and $q$, the canonical map
\[\varinjlim_{k'}H^i(\c X_{k'}^\r{an},\s T^q_{\c X_{k'}^\r{an}})\to H^i(\c X_\r a^\r{an},\s T^q_{\c X_\r a^\r{an}})\]
is an isomorphism, where the colimit is taken over all finite extensions $k'$ of $k$ in $\b C_p$.
\end{lem}

\begin{proof}
We have an isomorphism of spectral spaces $\c X_\r a^\r{ad}\simeq\varprojlim_{k'}\c X_{k'}^\r{ad}$. Thus, by \cite{SP}*{0A37}, it suffices to show that the canonical map $\varinjlim_{k'}\varsigma_{k'}^{-1}\gamma^{-1}_{\c X_{k'}^\r{an}}\s T^q_{\c X_{k'}^\r{an}}\to\gamma^{-1}_{\c X_\r a^\r{an}}\s T^q_{\c X_\r a^\r{an}}$ is an isomorphism, where $\varsigma_{k'}\colon\c X_\r a^\r{ad}\to\c X_{k'}^\r{ad}$ is the canonical map. However, this follows from the fact that for every $f\in\c O^*(\c X_\r a^\r{an},V)$, where $V$ is a rational affinoid domain, there is a function $g\in\c O^*(\c X_{k'}^\r{an},V')$ for some $k'$ such that $\varsigma_{k'}^{-1}(V')=V$ and $f^{-1}\cdot\varsigma_{k'}^*g$ has norm $1$ on some open neighborhood of $V$. Here, we have used \cite{Berk07}*{Lemma 2.1.3 (ii)}.
\end{proof}

We review some facts about cup products from \cite{SP}*{01FP}. Let $X$ be a topological space, $k$ a field, and $n\geq 0$ an integer. Let $\Omega$ be a sheaf of $k$-vector spaces on $X$. Suppose that we have two bounded complexes $\s F^\bullet,\s G^\bullet$ of sheaves of $k$-vector spaces on $X$, with a map of complexes of sheaves of $k$-vector spaces
\[\chi\colon\Tot(\s F^\bullet\otimes_k\s G^\bullet)\to\Omega[n].\]
Then we have a bilinear pairing
\[\cup_\chi\colon H^i(X,\s F^\bullet)\times H^{2n-i}(X,\s G^\bullet)\to H^{2n}(X,\Omega[n])=H^n(X,\Omega)\]
for every $i\in\b Z$.
Now suppose that we have four bounded complexes $\s F^\bullet_j,\s G^\bullet_j$ ($j=1,2$) of sheaves of $k$-vector spaces on $X$, maps $\alpha_1\colon\s F_1^\bullet\to\s F_2^\bullet$, $\alpha_2\colon\s G_2^\bullet\to\s G_1^\bullet$, and $\chi_j\colon\Tot(\s F^\bullet_j\otimes_k\s G^\bullet_j)\to\Omega[n]$ ($j=1,2$), such that $\chi_1\circ(\r{id}_{\s F_1^\bullet}\otimes \alpha_2)=\chi_2\circ(\alpha_1\otimes\r{id}_{\s G_2^\bullet})$. Then we have the following commutative diagram
\begin{align}\label{eq:cup}
\xymatrix{ H^i(X,\s F^\bullet_1)\!\!\!\!\!\!\!\!\!\! \ar[d]_-{H^i(X,\alpha_1)} & \times & \!\!\!\!\!\!\!\!\!\!H^{2n-i}(X,\s G^\bullet_1) \ar[rr]^-{\cup_{\chi_1}} && H^n(X,\Omega) \ar@{=}[d] \\ H^i(X,\s F^\bullet_2)\!\!\!\!\!\!\!\!\!\! & \times & \!\!\!\!\!\!\!\!\!\!H^{2n-i}(X,\s G^\bullet_2) \ar[u]_-{H^{2n-i}(X,\alpha_2)}\ar[rr]^-{\cup_{\chi_2}} && H^n(X,\Omega) }
\end{align}
for every $i\in\b Z$.

\begin{proof}[Proof of Theorem \ref{th:trivial}]
Without loss of generality, we may assume that $\c X$ is geometrically irreducible over $k$ (of dimension $n$). Put $X=\c X^\r{an}$.

\textbf{Step 1.} By Proposition \ref{pr:kernel} and Theorem \ref{th:kernel}, we have the following commutative diagram
\[\xymatrix{ H^{q,q}_\s A(\c X_\r a^\r{an})\times H^{n-q,n-q}_\s A(\c X_\r a^\r{an}) \ar[r]^-{\cup}\ar[d]_-{\simeq}& H^{n,n}_\s A(\c X_\r a^\r{an}) \ar[d]^-{\simeq} \\ H^q(\c X_\r a^\r{an},\s T_{\c X_\r a^\r{an}}^q\otimes_\b Q\b R) \times H^{n-q}(\c X_\r a^\r{an},\s T_{\c X_\r a^\r{an}}^{n-q}\otimes_\b Q\b R) \ar[r]^-{\cup}\ar[d] & H^n(\c X_\r a^\r{an},\s T_{\c X_\r a^\r{an}}^n\otimes_\b Q\b R) \ar[d]\\ H^q(\c X_\r a^\r{an},\Omega^{q,\r{cl}}_{\c X_\r a^\r{an}}/\r d\Omega^{q-1}_{\c X_\r a^\r{an}}) \times H^{n-q}(\c X_\r a^\r{an},\Omega^{n-q,\r{cl}}_{\c X_\r a^\r{an}}/\r d\Omega^{n-q-1}_{\c X_\r a^\r{an}}) \ar[r]^-{\cup}& H^n(\c X_\r a^\r{an},\Omega^n_{\c X_\r a^\r{an}}/\r d\Omega^{n-1}_{\c X_\r a^\r{an}}), }\]
in which the first cup product is induced by the wedge product of real forms. To prove the theorem, it suffices to consider an arbitrary element $\omega\in H^{n-q}(\c X_\r a^\r{an},\s T_{\c X_\r a^\r{an}}^{n-q})$. In view of Lemma \ref{le:finite}, after replacing $k$ by a finite extension in $\b C_p$, we may assume that $\omega\in H^{n-q}(X,\s T_X^{n-q})$. Note that the tropical cycle class map $\r{cl}_\s A$ (Definition \ref{de:tropical}) factors as
\[\CH^q(\c X)_\b Q\to H^q(X,\s T_X^q)\to H^q(\c X_\r a^\r{an},\s T_{\c X_\r a^\r{an}}^q) \to H^q(\c X_\r a^\r{an},\s T_{\c X_\r a^\r{an}}^q\otimes_\b Q\b R)\simeq H^{q,q}_\s A(\c X_\r a^\r{an}),\]
in which we denote the first map by $\r{cl}_\s T$. By Theorem \ref{th:cycle} and Proposition \ref{pr:trace}, it suffices to show that the image of $\r{cl}_\s T(\c Z)\cup\omega$ in $H^n(X,\Omega^n_X/\r d\Omega^{n-1}_X)$, which is isomorphic to $k$ by Lemma \ref{le:top}, is zero. Denote by $\zeta$ the image of $\r{cl}_\s T(\c Z)$ in $H^q(X,\Omega^{q,\r{cl}}_X/\r d\Omega^{q-1}_X)$, and regard $\omega$ as an element of $H^{n-q}(X,\Omega^{n-q,\r{cl}}_X/\r d\Omega^{n-q-1}_X)$. In fact, we can prove that $\zeta\cup\omega=0$ if we have semi-stable resolution instead of alteration. In the absence of semi-stable resolution, we need an \emph{ad hoc} argument.

\textbf{Step 2.} To proceed, we need the adic topology of $X$. Recall that we have a continuous map $\gamma_X\colon X^\r{ad}\to X$. Let $(\Omega^\bullet_{X^\r{ad}},\r d)$ be the de Rham complex on $X^\r{ad}$.
Then we have a canonical map $\gamma_X^{-1}(\Omega^\bullet_X,\r d)\to(\Omega^\bullet_{X^\r{ad}},\r d)$ of complexes of sheaves of $k$-vector spaces on $X^\r{ad}$. Denote by $\zeta_\r{ad}$ (resp.\ $\omega_\r{ad}$) the image of $\zeta$ (resp.\ $\omega$) under the canonical map
\[H^i(X,\Omega^{i,\r{cl}}_X/\r d\Omega^{i-1}_X)\to H^i(X^\r{ad},\Omega^{i,\r{cl}}_{X^\r{ad}}/\r d\Omega^{i-1}_{X^\r{ad}})\]
for $i=q$ (resp.\ $i=n-q$). Note that when $i=n$, the above map is an isomorphism, by the same argument as for Lemma \ref{le:top}. We claim that there exists an alteration $f\colon\c X'\to\c X$, possibly after a finite extension of $k$ in $\b C_p$, such that $f^*\omega_\r{ad}$ is in the image of the canonical map
\[H^{2n-2q}(X'^\r{ad},\tau_{\leq n-q}\Omega^\bullet_{X'^\r{ad}}) \to H^{n-q}(X'^\r{ad},\Omega^{n-q,\r{cl}}_{X'^\r{ad}}/\r d\Omega^{n-q-1}_{X'^\r{ad}}),\]
where $X'=\c X'^\r{an}$. Assuming the above claim, we deduce the theorem as follows. Applying \eqref{eq:cup} to $X'^\r{ad}$ and the sheaf $\Omega\coloneqq\Omega^n_{X'^\r{ad}}/\r d\Omega^{n-1}_{X'^\r{ad}}$, we obtain the following commutative diagram
\[\xymatrix{ H^q(X'^\r{ad},\Omega^{q,\r{cl}}_{X'^\r{ad}}/\r d\Omega^{q-1}_{X'^\r{ad}})\!\!\!\!\!\!\!\!\!\! \ar[d]_-{\alpha_1} & \times & \!\!\!\!\!\!\!\!\!\!H^{n-q}(X'^\r{ad},\Omega^{n-q,\r{cl}}_{X'^\r{ad}}/\r d\Omega^{n-q-1}_{X'^\r{ad}}) \ar[rr] && H^n(X'^\r{ad},\Omega) \ar@{=}[d] \\ H^{2q}(X'^\r{ad},\tau_{\geq q}\Omega^\bullet_{X'^\r{ad}})\!\!\!\!\!\!\!\!\!\! & \times & \!\!\!\!\!\!\!\!\!\!H^{2n-2q}(X'^\r{ad},\tau_{\leq n-q}\Omega^\bullet_{X'^\r{ad}}) \ar[u]_-{\alpha_2}\ar[d]^-{\beta_2}\ar[rr] && H^n(X'^\r{ad},\Omega) \ar@{=}[d] \\ H^{2q}(X'^\r{ad},\Omega^\bullet_{X'^\r{ad}})\!\!\!\!\!\!\!\!\!\! \ar[u]^-{\beta_1} & \times & \!\!\!\!\!\!\!\!\!\!H^{2n-2q}(X'^\r{ad},\Omega^\bullet_{X'^\r{ad}}) \ar[rr] && H^n(X'^\r{ad},\Omega) }\]
in which the maps among the various complexes of sheaves are defined in the obvious way. By the above claim, there exists $\omega'\in H^{2n-2q}(X'^\r{ad},\tau_{\leq n-q}\Omega^\bullet_{X'^\r{ad}})$ such that $\alpha_2(\omega')=f^*\omega_\r{ad}$. Thus,
\[f^*\zeta_\r{ad}\cup f^*\omega_\r{ad} = f^*\zeta_\r{ad}\cup\alpha_2(\omega') =\alpha_1(f^*\zeta_\r{ad})\cup\omega'=\beta_1(\r{cl}_\r{dR}(f^*\c Z))\cup\omega' =\r{cl}_\r{dR}(f^*\c Z)\cup\beta_2(\omega'),\]
where we regard $\r{cl}_\r{dR}(f^*\c Z)$ as an element in $H^{2q}(X'^\r{ad},\Omega^\bullet_{X'^\r{ad}})$ under the comparison map (which is in fact an isomorphism)
\[H^{2q}_\r{dR}(\c X')=H^{2q}(\c X',\Omega^\bullet_{\c X'})\to H^{2q}(X'^\r{ad},\Omega^\bullet_{X'^\r{ad}}).\]
As $\r{cl}_\r{dR}(\c Z)=0$, we have $\r{cl}_\r{dR}(f^*\c Z)=0$ and hence $f^*\zeta_\r{ad}\cup f^*\omega_\r{ad}=0$. Thus, $f^*\zeta\cup f^*\omega=0$, and in particular,
\[\int_{\c Z_\r a^\r{an}}\omega=\deg(f)^{-1}\int_{(f^*\c Z)_\r a^\r{an}}f^*\omega=0.\]
The theorem is proved.

\textbf{Step 3.} Now we focus on the claim in Step 2. For an integral model $\c Y$ of $X$, define $\s K^q_{X,\c Y}$ to be the sheaf on $\c Y_s$ associated to the presheaf
\[\c U\mapsto \varinjlim_{\pi^{-1}\c U\subset U} K^M_q(\c O_X(U))\otimes\b Q,\quad\c U\subset\c Y_s,\]
where the colimit is taken over all open neighborhoods $U$ of $\pi^{-1}\c U$ in $X$. We remark that there is a canonical morphism $\s K^q_{X,\c Y}\to\gamma_{\c Y*}\gamma^{-1}_X\s K^q_X$ which is in general not an isomorphism, where $\gamma_\c Y\colon X^\r{ad}\to\c Y_s$ is the induced continuous map.
Put $\Omega^{\dag,q}_{X,\c Y}=\gamma_{\c Y*}\gamma^{-1}_X\Omega^q_X$. Then we have a complex of sheaves of $k$-vector spaces $(\Omega^{\dag,\bullet}_{X,\c Y},\r d)$ on $\c Y_s$, and we have a canonical map
\[\lambda_{X,\c Y}^q\colon \s K^q_{X,\c Y}\to \Omega^{\dag,q,\r{cl}}_{X,\c Y}/\r d\Omega^{\dag,q-1}_{X,\c Y}\]
similar to the one in Definition \ref{de:kernel}. Denote by $\s L_{X,\c Y}^q$ the image sheaf of $\lambda_{X,\c Y}^q$ in the above map. Since sheafification commutes with pullback and with taking colimits, we have a canonical isomorphism
\[\varinjlim_\c Y\gamma_\c Y^{-1}\s K^q_{X,\c Y}\simeq\gamma^{-1}_X\s K^q_X\]
of sheaves on $X^\r{ad}$, where the filtered colimit is taken over all integral models $\c Y$ of $X$. On the other hand, we have an obvious isomorphism
\[\varinjlim_\c Y\gamma_\c Y^{-1}\Omega^{\dag,\bullet}_{X,\c Y}\simeq\gamma_X^{-1}\Omega^\bullet_X.\]
Passing to the quotient, we have a canonical isomorphism
\[\varinjlim_\c Y\gamma_\c Y^{-1}\s L_{X,\c Y}^q\simeq\gamma_X^{-1}\s L_X^q.\]
Note that, originally, $\omega$ belongs to $H^{n-q}(X,\s T_X^{n-q})\simeq H^{n-q}(X,\s L_X^{n-q})$. By an argument similar to the one for Lemma \ref{le:top}, there is an integral model $\c Y$ of $X$ such that $\omega$ is in the image of the canonical map
\[H^{n-q}(\c Y_s,\s L_{X,\c Y}^{n-q}) \to H^{n-q}(X^\r{ad},\gamma^{-1}\s L_X^{n-q})\simeq H^{n-q}(X,\s L_X^{n-q}).\]
By \cite{dJ96}*{Theorem 8.2}, possibly after taking a further finite extension of $k$ inside $\b C_p$, we have a projective strictly semi-stable scheme $\c Y'$ over $k^\circ$ with an alteration $f\colon\c Y'\to\c Y$. Put $X'=\c Y'^\r{an}$. If we put $\Omega^q_{X',\c Y'}=\gamma_{\c Y'*}\Omega^q_{X'^\r{ad}}$, then $(\Omega^\bullet_{X',\c Y'},\r d)$ is a complex of sheaves of $k$-vector spaces on $\c Y'_s$ and we have a canonical map $(\Omega^{\dag,\bullet}_{X',\c Y'},\r d)\to (\Omega^\bullet_{X',\c Y'},\r d)$. The claim in Step 2 will follow if we can show that the composite map
\begin{multline}\label{eq:trivial}
H^{n-q}(\c Y'_s,\s L_{X',\c Y'}^{n-q}) \to H^{n-q}(\c Y'_s,\Omega^{\dag,n-q,\r{cl}}_{X',\c Y'}/\r d\Omega^{\dag,n-q-1}_{X',\c Y'})\\
\to H^{n-q}(\c Y'_s,\Omega^{n-q,\r{cl}}_{X',\c Y'}/\r d\Omega^{n-q-1}_{X',\c Y'}) \to H^{2n-2q+1}(\c Y'_s,\tau_{\leq n-q-1}\Omega^\bullet_{X',\c Y'})
\end{multline}
is zero. The advantage of $(\Omega^\bullet_{X',\c Y'},\r d)$ is that the \emph{entire} complex admits a canonical Frobenius action. More precisely, we fix a uniformizer $\varpi$ of $k$; let $\c Y'^\times_s$ be the log scheme $\c Y'_s$ equipped with the log structure as in \cite{HK94}*{(2.13.2)}; and let $\Spf W(\widetilde{k})^\times$ be the formal log scheme $\Spf W(\widetilde{k})$ equipped with the log structure $1\mapsto 0$. Here, we use the Zariski topology in the construction of log schemes and log crystalline sites instead of the \'etale one used in \cite{HK94}. There is a canonical morphism $u\colon(\c Y'^\times_s/\Spf W(\widetilde{k})^\times)_{\r{log}\text{-}\r{cris}}\to\c Y'_s$ of sites. Then by (the proof of) \cite{HK94}*{Theorem 5.1}, we have a canonical isomorphism
\[\r R u_*\c O_{\c Y'^\times_s/\Spf W(\widetilde{k})^\times}^{\r{log}\text{-}\r{cris}}\otimes_{W(\widetilde{k})} k\simeq(\Omega^\bullet_{X',\c Y'},\r d)\]
in the derived category of abelian sheaves on $\c Y'_s$, where $\c O_{\c Y'^\times_s/\Spf W(\widetilde{k})^\times}^{\r{log}\text{-}\r{cris}}$ denotes the structure sheaf of the log crystalline site.
Since $\c O_{\c Y'^\times_s/\Spf W(\widetilde{k})^\times}$ admits a Frobenius action over $\Spec\widetilde{k}$, we obtain a Frobenius action on the entire complex $(\Omega^\bullet_{X',\c Y'},\r d)$ in the derived category. For $w\in\b Z$, denote by $(\Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'})_w$ the maximal subsheaf of $\Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'}$ generated by sections of generalized weight $w$. We claim that
\begin{enumerate}[(a)]
\item the image of the canonical map $\s K^q_{X',\c Y'}\to\s L_{X',\c Y'}^q\to \Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'}$ is contained in the subsheaf $(\Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'})_{2q}$ for every $q$;
\item the image of the canonical map $\Omega^{\dag,q,\r{cl}}_{X',\c Y'}/\r d\Omega^{\dag,q-1}_{X',\c Y'}\to \Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'}$ is contained in the subsheaf $\bigoplus_{w=0}^{2q}(\Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'})_w$ for every $q$.
\end{enumerate}
Then the triviality of the map \eqref{eq:trivial} follows easily from a spectral sequence argument. In fact, we have a spectral sequence $E^{p,q}_r$ abutting to $H^\bullet(\c Y'_s,\Omega^\bullet_{X',\c Y'})$ equipped with a Frobenius action, such that $E^{p,q}_2=H^p(\c Y'_s,\Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'})$. By (a) and (b), the restriction of all the differentials $\r d^{n-q,n-q}_r$ in the spectral sequence with $r\geq 2$ to the image of the map
\[H^{n-q}(\c Y'_s,\s L_{X',\c Y'}^{n-q})\to H^{n-q}(\c Y'_s,\Omega^{n-q,\r{cl}}_{X',\c Y'}/\r d\Omega^{n-q-1}_{X',\c Y'})=E^{n-q,n-q}_2\]
is zero by weight considerations. Thus, \eqref{eq:trivial} is the zero map.

\textbf{Step 4.} The last step is devoted to the proof of the two claims (a) and (b) in Step 3. We remark that they are not formal consequences of Theorem \ref{th:1}. By definition, $\Omega^{q,\r{cl}}_{X',\c Y'}/\r d\Omega^{q-1}_{X',\c Y'}$ is the sheaf on $\c Y'_s$ associated to the presheaf $\c U\mapsto H^q_\r{dR}(\pi^{-1}\c U)$, and $\Omega^{\dag,q,\r{cl}}_{X',\c Y'}/\r d\Omega^{\dag,q-1}_{X',\c Y'}$ is the sheaf on $\c Y'_s$ associated to the presheaf $\c U\mapsto H^q_\r{dR}(X',\pi^{-1}\c U)$ by \cite{Berk07}*{Lemma 5.2.1}. We check (a) and (b) on stalks and thus fix a point $x\in\c Y'_s$. To prove (a), it suffices to consider the case where $q=1$. Let $\c U\subset\c Y'$ be an open affine neighborhood of $x$. Take $f\in\c O^*(X',\pi^{-1}\c U)$; we have to show that the image of $\frac{\r d f}{f}$ in $H^1_\r{dR}(\pi^{-1}\c U_s)$ is of generalized weight $2$ for a possibly smaller open neighborhood $\c U$ of $x$. First, we may replace $f$ by the restriction of an (algebraic) function $f\in\c O(\c U_k)$ without changing the image of $\frac{\r d f}{f}$ in $H^1_\r{dR}(\pi^{-1}\c U_s)$. Since $\c Y'$ is projective, we may choose a closed embedding $\c Y'\hookrightarrow\b P^N_{k^\circ}$ into a projective space. Choose an open affine neighborhood $\c V$ of $x$ in $\b P^N_{k^\circ}$ such that $\c V\cap\c Y'\subset\c U$ and $f\mathbin{|}_{\c V\cap\c Y'}=g\mathbin{|}_{\c V\cap\c Y'}$ for some $g\in\c O^*(\c V_k)$. Since $\b P^N_{k^\circ}$ is smooth, by Remark \ref{re:decomp}, $\frac{\r d g}{g}$ belongs to $H^1_\r{rig}(\c V_s/k)_2$ and thus its image in $H^1_\r{dR}(\pi^{-1}\c V_s)$ is of generalized weight $2$.
By the functoriality of log crystalline sites applied to the morphism $\c Y'\to\b P^N_{k^\circ}$, we conclude that the image of $\frac{\r d f}{f}$ in $H^1_\r{dR}(\pi^{-1}(\c V\cap\c Y')_s)$ is of generalized weight $2$. Here, $\pi^{-1}(\c V\cap\c Y')_s$ is the inverse image in $X'$. Claim (b) is a consequence of \cite{GK05}*{Theorem 0.1} and \cite{Chi98}*{Theorem 2.3}. In fact, we have a functorial map of spectral sequences $\pres{\prime}E^{p,q}_r\to\pres{\prime\prime}E^{p,q}_r$ abutting to $H^\bullet_\r{dR}(X',\pi^{-1}\c U)\to H^\bullet_\r{dR}(\pi^{-1}\c U)$ with the first page being
\[\pres{\prime}E^{p,q}_1=H^q_\r{rig}(\c U_s^{(p)}/\Spf W(\widetilde{k})^\times)\otimes_{W(\widetilde{k})[1/p]}k \to\pres{\prime\prime}E^{p,q}_1=H^q_{\r{log}\text{-}\r{cris}}(\c U_s^{(p)}/\Spf W(\widetilde{k})^\times)\otimes_{W(\widetilde{k})}k,\]
where $\c U_s^{(p)}$ is the disjoint union of the irreducible components of $\c U_s^{[p]}$, equipped with the log structure induced from $\c Y'^\times_s$. By \cite{GK05}*{Theorem 3.1, Lemma 4.6} and \cite{Chi98}*{Theorem 2.3}, we know that the weights of (the finite dimensional $k$-vector space) $\pres{\prime}E^{p,q}_1$ are in the range $[q,2q]$, and thus the weights of $H^q_\r{dR}(X',\pi^{-1}\c U)$ are in the range $[0,2q]$.
\end{proof}

\appendix

\begin{bibdiv}
\begin{biblist}

\bib{SP}{book}{ label={SP}, author={The Stacks Project Authors}, title={Stacks Project}, note={available at \url{http://math.columbia.edu/algebraic_geometry/stacks-git/}}, }

\bib{Berk90}{book}{ author={Berkovich, Vladimir G.}, title={Spectral theory and analytic geometry over non-Archimedean fields}, series={Mathematical Surveys and Monographs}, volume={33}, publisher={American Mathematical Society, Providence, RI}, date={1990}, pages={x+169}, isbn={0-8218-1534-2}, review={\MR{1070709}}, }

\bib{Berk93}{article}{ author={Berkovich, Vladimir G.}, title={\'Etale cohomology for non-Archimedean analytic spaces}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={78}, date={1993}, pages={5--161 (1994)}, issn={0073-8301}, review={\MR{1259429}}, }

\bib{Berk96}{article}{ author={Berkovich, Vladimir G.}, title={Vanishing cycles for formal schemes. II}, journal={Invent. Math.}, volume={125}, date={1996}, number={2}, pages={367--390}, issn={0020-9910}, review={\MR{1395723}}, doi={10.1007/s002220050078}, }

\bib{Berk04}{article}{ author={Berkovich, Vladimir G.}, title={Smooth $p$-adic analytic spaces are locally contractible. II}, conference={ title={Geometric aspects of Dwork theory. Vol. I, II}, }, book={ publisher={Walter de Gruyter GmbH \& Co. KG, Berlin}, }, date={2004}, pages={293--370}, review={\MR{2023293}}, }

\bib{Berk07}{book}{ author={Berkovich, Vladimir G.}, title={Integration of one-forms on $p$-adic analytic spaces}, series={Annals of Mathematics Studies}, volume={162}, publisher={Princeton University Press, Princeton, NJ}, date={2007}, pages={vi+156}, isbn={978-0-691-12862-7}, isbn={0-691-12862-6}, review={\MR{2263704}}, doi={10.1515/9781400837151}, }

\bib{Bert97}{article}{ label={Bert97}, author={Berthelot, Pierre}, title={Finitude et puret\'e cohomologique en cohomologie rigide}, language={French}, note={With an appendix in English by Aise Johan de Jong}, journal={Invent. Math.}, volume={128}, date={1997}, number={2}, pages={329--377}, issn={0020-9910}, review={\MR{1440308}}, doi={10.1007/s002220050143}, }

\bib{Bos81}{article}{ author={Bosch, Siegfried}, title={A rigid analytic version of M. Artin's theorem on analytic equations}, journal={Math.
Ann.}, volume={255}, date={1981}, number={3}, pages={395--404}, issn={0025-5831}, review={\MR{615859}}, doi={10.1007/BF01450712}, }

\bib{CLD12}{article}{ author={Chambert-Loir, A.}, author={Ducros, A.}, title={Formes diff\'{e}rentielles r\'{e}elles et courants sur les espaces de Berkovich}, note={\href{http://arxiv.org/abs/1204.6277}{arXiv:math/1204.6277}}, date={2012}, }

\bib{Chi98}{article}{ author={Chiarellotto, Bruno}, title={Weights in rigid cohomology applications to unipotent $F$-isocrystals}, language={English, with English and French summaries}, journal={Ann. Sci. \'Ecole Norm. Sup. (4)}, volume={31}, date={1998}, number={5}, pages={683--715}, issn={0012-9593}, review={\MR{1643966}}, doi={10.1016/S0012-9593(98)80004-9}, }

\bib{dJ96}{article}{ author={de Jong, A. J.}, title={Smoothness, semi-stability and alterations}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, number={83}, date={1996}, pages={51--93}, issn={0073-8301}, review={\MR{1423020}}, }

\bib{GK02}{article}{ author={Grosse-Kl{\"o}nne, Elmar}, title={Finiteness of de Rham cohomology in rigid analysis}, journal={Duke Math. J.}, volume={113}, date={2002}, number={1}, pages={57--91}, issn={0012-7094}, review={\MR{1905392}}, doi={10.1215/S0012-7094-02-11312-X}, }

\bib{GK05}{article}{ author={Grosse-Kl{\"o}nne, Elmar}, title={Frobenius and monodromy operators in rigid analysis, and Drinfeld's symmetric space}, journal={J. Algebraic Geom.}, volume={14}, date={2005}, number={3}, pages={391--437}, issn={1056-3911}, review={\MR{2129006}}, doi={10.1090/S1056-3911-05-00402-9}, }

\bib{Gub13}{article}{ author={Gubler, W.}, title={Forms and currents on the analytification of an algebraic variety (after Chambert-Loir and Ducros)}, note={\href{http://arxiv.org/abs/1303.7364}{arXiv:math/1303.7364}}, date={2013}, }

\bib{HK94}{article}{ author={Hyodo, Osamu}, author={Kato, Kazuya}, title={Semi-stable reduction and crystalline cohomology with logarithmic poles}, note={P\'eriodes $p$-adiques (Bures-sur-Yvette, 1988)}, journal={Ast\'erisque}, number={223}, date={1994}, pages={221--268}, issn={0303-1179}, review={\MR{1293974}}, }

\bib{Jel16}{article}{ author={Jell, P.}, title={A Poincar\'e lemma for real-valued differential forms on Berkovich spaces}, journal={Math. Z.}, volume={282}, date={2016}, number={3-4}, pages={1149--1167}, issn={0025-5874}, review={\MR{3473662}}, doi={10.1007/s00209-015-1583-8}, }

\bib{JSS15}{article}{ author={Jell, P.}, author={Shaw, K.}, author={Smacka, J.}, title={Superforms, tropical cohomology and Poincar\'{e} duality}, note={\href{http://arxiv.org/abs/1512.07409}{arXiv:math/1512.07409}}, date={2015}, }

\bib{LS07}{book}{ author={Le Stum, Bernard}, title={Rigid cohomology}, series={Cambridge Tracts in Mathematics}, volume={172}, publisher={Cambridge University Press, Cambridge}, date={2007}, pages={xvi+319}, isbn={978-0-521-87524-0}, review={\MR{2358812}}, doi={10.1017/CBO9780511543128}, }

\bib{Pay09}{article}{ author={Payne, Sam}, title={Analytification is the limit of all tropicalizations}, journal={Math. Res. Lett.}, volume={16}, date={2009}, number={3}, pages={543--556}, issn={1073-2780}, review={\MR{2511632}}, doi={10.4310/MRL.2009.v16.n3.a13}, }

\bib{Sch12}{article}{ author={Scholze, Peter}, title={Perfectoid spaces}, journal={Publ. Math. Inst.
Hautes \'Etudes Sci.}, volume={116}, date={2012}, pages={245--313}, issn={0073-8301}, review={\MR{3090258}}, doi={10.1007/s10240-012-0042-x}, }

\bib{Sou85}{article}{ author={Soul{\'e}, Christophe}, title={Op\'erations en $K$-th\'eorie alg\'ebrique}, language={French}, journal={Canad. J. Math.}, volume={37}, date={1985}, number={3}, pages={488--550}, issn={0008-414X}, review={\MR{787114}}, doi={10.4153/CJM-1985-029-x}, }

\bib{Tsu99}{article}{ author={Tsuzuki, Nobuo}, title={On the Gysin isomorphism of rigid cohomology}, journal={Hiroshima Math. J.}, volume={29}, date={1999}, number={3}, pages={479--527}, issn={0018-2079}, review={\MR{1728610}}, }

\end{biblist}
\end{bibdiv}

\end{document}
\begin{document}

\begin{center}
{\Large\textbf{On the number of coverings of the sphere\\ ramified over given points}}\\
{\textbf{Boris~Bychkov}}\footnote{Research is supported by the RFBR grants 12-01-31233, 13-01-00383}\\
{\small\emph{Department of mathematics, National Research University Higher School of Economics, Vavilova str. 7, 117312, Moscow, Russia }}\\
{\small\emph{[email protected]}}
\end{center}

\noindent\textbf{Abstract} We present the generating function for the numbers of isomorphism classes of coverings of the two-dimensional sphere by a genus $g$ compact oriented surface that are not ramified outside of a given set of $m+1$ points in the target, have a fixed ramification type over one of these points, and have arbitrary ramification types over the remaining $m$ points. We present the genus expansion of this generating function and prove that the generating function of coverings of genus $0$ satisfies a certain system of differential equations. We show that this generating function is a specialization of the function from the paper \cite{GJ} and, therefore, satisfies the KP-hierarchy.

\section{Introduction}

The problem of enumerating coverings of the sphere by two-dimensional surfaces with fixed ramification types over given points was posed by Hurwitz~\cite{Hu}. During the last decades, there has been growing interest in this problem, due to the discovery of its connections with various physical theories, Gromov--Witten invariants, and the geometry of moduli spaces of complex curves. In this paper, we consider a closely related problem, namely, that of enumerating isomorphism classes of coverings of the two-dimensional sphere by a genus $g$ compact oriented surface that are not ramified outside of a given set of $m+1$ points in the target, with a fixed ramification type over one point and arbitrary ramification types over the remaining $m$ points. We start with the paper \cite{BMS} by M.~Bousquet-Melou and G.~Schaeffer, where the following formula for the number of such coverings of genus~$g=0$ was deduced.

{\theorem[Bousquet-Melou, Schaeffer] Let $\sigma_0\in S_n$ be a permutation having $d_i$ cycles of length $i\;(i=1,2,3,\dots)$ and let $l(\sigma_0)$ be the total number of cycles in $\sigma_0$. Then
\begin{equation}
G_{\sigma_0}(m) = m\dfrac{(mn-n-1)!}{(mn-n-l(\sigma_0)+2)!}\prod\limits_{i\geq 1}\left(i\binom{mi-1}{i}\right)^{d_i}.\label{0}
\end{equation}}

In this paper, we present the generating function for the numbers of isomorphism classes of coverings of arbitrary genus. Our argument is based on M.~Kazaryan's remark about the operator acting in the center of the group algebra of a symmetric group by multiplication by the sum of all permutations, and its eigenvalues.

\section{Results}

\hspace{6mm}Let $S_n$ be the group of permutations of $n$ elements. Let $\nu_1,\ldots,\nu_t$ be the lengths of the cycles in the decomposition of a permutation $\sigma \in S_n$ into a product of disjoint cycles; we say that $\sigma$ {\it has the cycle type $\nu$}. Introduce the notation $l(\nu) = t,\; n = |\nu| = \nu_1 + \cdots + \nu_t$ and let $\nu = \rho(\sigma) = \{\nu_1,\ldots,\nu_t\}$ be the set of the lengths of the cycles. Irreducible representations of $S_n$ are in one-to-one correspondence with partitions $\nu\vdash n$ of $n$ (see, e.g., \cite[\S 3]{Na}). We will denote irreducible representations by partitions. Denote by $\mathrm{dim}_{\nu}$ the dimension of the irreducible representation $\nu$ of $S_{|\nu|}$. Denote by $|\mathrm{Aut}(\nu)|$ the order of the automorphism group of the partition $\nu$.
If the partition $\nu$ has $d_i$ parts of length $i\;(i=1,2,3,\ldots)$, then $|\mathrm{Aut}(\nu)| = d_1!d_2!d_3!\ldots$. Denote by $p_i = p_i(x_1,\ldots,x_n)$ the symmetric power polynomial in $n$ variables defined by the formula
\begin{equation}
p_i(x_1,x_2,\ldots,x_n) = \sum\limits_{j=1}^n x_j^i.\label{7}
\end{equation}
Denote by $s_\nu$ the {\it Schur function} corresponding to the partition $\nu$. Schur functions are polynomials in the symmetric power polynomials $p_i$ (Definition~\ref{d1} below). Define the corresponding {\it scaled Schur function} by the equation
\begin{equation*}
s\hbar_\nu(\hbar,p_1,p_2,p_3,\ldots) = s_\nu(p_1,p_2\hbar,p_3\hbar^2,\ldots).
\end{equation*}
For a partition $\nu = \{\nu_1,\ldots,\nu_t\}$, denote by $p_\nu$ the product $p_\nu = p_{\nu_1}\ldots p_{\nu_t}$.

Recall that partitions can be usefully represented by intuitive geometric objects, {\it Young diagrams}. A Young diagram is a finite subset of the two-dimensional positive integer quadrant such that together with any point of the integer lattice it contains all the lexicographically smaller points. Points are usually represented by unit squares, the total number of points is equal to $|\nu|$, the squares are arranged in left-justified rows of nonincreasing lengths $\nu_1\ge\nu_2\ge\dots\ge\nu_t$, and the total number~$t$ of the rows is equal to the length $l(\nu)$ of the partition $\nu$.

\df The {\it content} of a cell $k$, which lies on the intersection of the $i$th column and the $j$th row of the Young diagram, is the value $c(k) = j - i$. For an arbitrary cell in the Young diagram, consider the set of cells consisting of the cell itself and all the cells that lie to the right of it in the same row or below it in the same column. This set is the {\it hook} of the cell~$k$. The {\it length of the hook} $h(k)$ is the number of cells in this hook.

Let us recall the connection between ramified coverings of the two-dimensional sphere and decompositions of a permutation into products of permutations.

\df Let $X$ and $Y$ be two topological spaces, $Y$ being path connected, let $f:X\rightarrow Y$ be a continuous mapping, and let~$S$ be a discrete set. If for any point $y\in Y$ there exists a neighbourhood $V$ of $y$ such that the preimage $f^{-1}(V)\subset X$ is homeomorphic to $V\times S$, then the triple $(X,Y,f)$ is called an \textit{unramified covering} of $Y$ by $X$ with the fiber~$S$. The cardinality of the set $S$ is called the {\it degree} of the covering. A covering of degree~$n$ is also said to be {\it $n$-sheeted}.

Let $f:X\rightarrow Y$ be an unramified covering, $y_0\in Y$. Every continuous path $\gamma : [0,1] \rightarrow Y$ that starts and ends in $y_0$ defines a permutation of the set $f^{-1}(y_0)$. This permutation is called the {\it monodromy\/} along~$\gamma$.

Let $f:X\rightarrow\mathbb{C}P^1\setminus T$ be a finite-sheeted covering of $\mathbb{C}P^1$ with $k$ punctures $y_1,\ldots, y_k$, $T = \{y_1,\ldots, y_k\}\subset\mathbb{C}P^1$. The projective line $\mathbb{C}P^1$ is endowed with the orientation induced by the complex structure. Let $y_0 \in \mathbb{C}P^1\setminus T$. Consider $k$ oriented paths $c_i$ connecting $y_0$ with $y_i,\;i = 1,\ldots, k$, on $\mathbb{C}P^1$ such that they do not have intersections outside $y_0$ and enter $y_0$ in the order prescribed by their numbering.
Transform every path $c_i$ into a loop $\gamma_i \in \pi_1(\mathbb{C}P^1\setminus T,y_0)$ going around the point~$y_i$ in the positive direction and returning back along~$c_i$ to~$y_0$. We denote by~$g_i$ the monodromy permutation along the loop $\gamma_i$. The group $G=\langle g_1,\ldots,g_k\rangle$ generated by the monodromy permutations is called \textit{the monodromy group\/} of the covering. It consists of all the monodromy permutations along paths starting and ending in~$y_0$.

To each finite-sheeted covering of a punctured sphere, a unique ramified covering of the sphere without punctures is associated: compactify the punctured sphere $\mathbb{C}P^1\setminus T$ by adding all the points $y_1,\ldots, y_k$, and for every point $y_i$ add to $X$ as many points as there are independent cycles in the permutation $g_i$. The new points in $X$ will be the preimages of the new points $y_i$, with multiplicities equal to the lengths of the corresponding cycles in the permutation $g_i$.

\df Let $X$ be a compact two-dimensional orientable surface. If there exists a finite set of points $T = \{y_1,\ldots, y_k\}\subset \mathbb{C}P^1$ such that $f$ is obtained from an unramified covering of the punctured sphere $\mathbb{C}P^1\setminus T$ by the construction described above, then the continuous mapping $f:X\rightarrow\mathbb{C}P^1$ is called a \textit{ramified covering}. The {\it ramification type\/} of a ramification point $y_i$ is the cycle type of the permutation $g_i$.
\\
Let $f_1:X_1\rightarrow \mathbb{C}P^1$ and $f_2:X_2\rightarrow \mathbb{C}P^1$ be two ramified coverings. If there exists an orientation preserving homeomorphism $u:X_1\rightarrow X_2$ such that the diagram below is commutative, then the two ramified coverings are said to be {\it isomorphic}:
$$
\xymatrix{ X_1 \ar[dr]_{f_1} \ar[rr]^u && X_2 \ar[dl]^{f_2}\\ & \mathbb{C}P^1 }
$$

Denote by $b_{g,\nu,m}$ the total number of isomorphism classes of ramified coverings of the two-dimensional sphere by a surface of genus $g$ having $m$ ramification points with arbitrary ramification types and a distinguished ramification point with the given ramification type $\nu = \{\nu_1,\ldots,\nu_c\}$. Denote by $S$ the following generating function for the numbers $b_{g,\nu,m}$:
\begin{equation*}
S(\hbar,p_1,p_2,\ldots;m) = \sum\limits_{n=0}^{\infty}\sum\limits_{\nu\vdash n} b_{g,\nu,m}p_\nu \hbar^{2g},
\end{equation*}
where $p_\nu = p_{\nu_1}p_{\nu_2}\ldots$ and the genus $g$ of the covering surface can be computed from the Riemann--Hurwitz formula $2-2g = 2n - \sum\limits_P (k(P)-1)$, where $k(P)$ is the ramification order at the ramification point~$P$; see Sec.~\ref{s4}.

The main result of this paper is the following

{\theorem The function $S$ admits the following representation:
\begin{equation}
S(\hbar,p_1,p_2,\ldots;m) = \hbar^2\log \left(\sum\limits_{n=0}^{\infty}\sum_{\nu\vdash n} \prod\limits_{k\in\nu}(1+c(k)\hbar)^m \dfrac{\mathrm{dim}_\nu}{n!} \hbar^{-2n}s\hbar_\nu \right). \label{1}
\end{equation}\label{t1} }

The generating function $S$ has the genus expansion
\begin{multline*}
S(\hbar,p_1,p_2,\ldots;m) = S_0(p_1,p_2,\ldots;m) + \hbar^2S_1(p_1,p_2,\ldots;m) + \\ \hbar^4S_2(p_1,p_2,\ldots;m)+\ldots,
\end{multline*}
where the functions $S_g,\; g=0,1,2,\ldots$, are the generating functions for the numbers of coverings of genus $g$.
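As a quick illustration of formula (\ref{1}), one can trace the part of $S$ that is linear in $p_1$. The only partition of $n=1$ is $\nu=\{1\}$; its single cell has content $0$, $\mathrm{dim}_{\{1\}}=1$ and $s\hbar_{\{1\}}=p_1$, so it contributes $(1+0\cdot\hbar)^m\hbar^{-2}p_1=\hbar^{-2}p_1$ to the sum under the logarithm. Every summand coming from a partition of $n\geq 2$, as well as every higher term in the expansion of the logarithm, has weighted degree at least two in the variables $p_i$ (where $p_i$ carries weight $i$) and therefore cannot contribute to the coefficient of $p_1$. Hence
\begin{equation*}
S(\hbar,p_1,p_2,\ldots;m) = \hbar^2\log\left(1+\hbar^{-2}p_1+\ldots\right) = p_1+\ldots,
\end{equation*}
so $b_{0,\{1\},m}=1$ and $b_{g,\{1\},m}=0$ for $g\geq 1$: for every $m$, the unique $1$-sheeted covering is unramified and has genus $0$.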
The coefficients $b_{0,\nu,m}$ of the function $S_0$ are the Bousquet-Melou--Schaeffer numbers $G_{\sigma_0}(m)$ (see Eq.~(\ref{0})) divided by $|\mathrm{Aut}(\nu)|\prod\limits_{i=1}^{l(\nu)}\nu_i$, where $\nu$ is the cycle type of the permutation $\sigma_0$. Indeed, the Bousquet-Melou--Schaeffer number is equal to the number of genus~$0$ decompositions of a given permutation $\sigma_0$, while $b_{0,\nu,m}$ enumerates decompositions of arbitrary permutations from a given conjugacy class. For higher genera, Eq.~(\ref{1}) produces an efficient way to compute the coefficients of the expansion. For example, the generating function $S_1$ for genus~1 (the coefficient of $\hbar^2$ in the series (\ref{1})) up to the covering degree~$4$ is \begin{multline*} S_1(p_1,p_2,\ldots;m) = \dfrac{1}{48}m(m-1)(m-2)(m-3)p_1^2 + \dfrac{1}{12}m(m-1)(m-2)p_2 + \\ \dfrac{1}{72}m(m-1)(m-2)(4m^3-21m^2+35m-20)p_1^3 + \\ \dfrac{1}{6}m(2m-3)(m-2)(m-1)^2p_2p_1 + \dfrac{1}{24}m(3m-5)(m-1)(3m-2)p_3 + \\ \dfrac{1}{96}m(m-1)(m-2)(13m^5-99m^4+297m^3-445m^2+337m-105)p_1^4 + \\ \dfrac{1}{24}m(m-1)^2(m-2)(26m^3-103m^2+135m-60)p_2p_1^2 + \\ \dfrac{1}{24}m(m-1)^2(4m-5)(4m^2-10m+5)p_2^2 + \\ \dfrac{1}{16}m(m-1)^2(3m-5)(3m-4)(3m-2)p_3p_1 + \\ \dfrac{1}{12}m(m-1)(4m-3)(2m-1)(2m-3)p_4+\dots. \end{multline*} Generating series enumerating coverings are often solutions to integrable hierarchies, see, e.g.,~\cite{Ok}. The generating function $S$ is a specialization of a function from paper \cite{GJ} and, therefore, is a solution to the KP-hierarchy. In particular, it satisfies the first of the infinite series of equations in the KP-hierarchy: \begin{equation*} \dfrac{\partial^2 S}{\partial p_1\partial p_{3}} - \dfrac{\partial^2 S}{\partial p_2^2} = \dfrac{1}{2}\dfrac{\partial^2 S}{\partial p_1^2} - \dfrac{1}{12}\dfrac{\partial^4 S}{\partial p_1^4}. \end{equation*} Let $Y(\nu) = \prod\limits_{k\in\nu}y_{c(k)}$ denote the content product for the Young diagram corresponding to the partition $\nu\vdash n$, for the indeterminates $y_c$, $c=\dots,-2,-1,$ $0,1,2,\dots$. Denote by $F$ the generating function \begin{equation*} F(\dots,y_{-2},y_{-1},y_{0},y_{1},y_{2},\dots;p_1,p_2,\ldots)=\log\left(\sum_{n=0}^\infty\sum_{\nu\vdash n}\prod_{k\in\nu}y_{c(k)} \frac{\mathrm{dim}_\nu}{n!}p_\nu\right). \end{equation*} The following statement is an immediate consequence of Theorems 2.3 and 3.1 from \cite{GJ}: {\stm The generating function $F$ is a solution to the KP-hierarchy.} Consider the perturbation of the KP-hierarchy given by the substitution $p_i = \dfrac{p_i}{\hbar^{i+1}}$, where $\hbar$ is a formal parameter and $i=1,2,\dots$. For example, the first equation of the hierarchy becomes \begin{equation*} \dfrac{\partial^2 S}{\partial p_1\partial p_{3}} - \dfrac{\partial^2 S}{\partial p_2^2} = \dfrac{1}{2}\dfrac{\partial^2 S}{\partial p_1^2} - \dfrac{\hbar^2}{12}\dfrac{\partial^4 S}{\partial p_1^4}. \end{equation*} {\imp The function $S(\hbar,p_1,p_2,\ldots;m)$ is a solution to the perturbed KP-hierarchy.} {\it Proof.}\;\; One can obtain the function $S$ from $F$ by the substitutions $y_{c} = (1+c\hbar)^m$ and $p_i = \dfrac{p_i}{\hbar^{i+1}}$. \qed Computations using Theorem~\ref{t1} lead to the following conjectural extension of the Bousquet-Melou--Schaeffer formula to the case of genus $g=1$. {\conj The Bousquet-Melou--Schaeffer numbers for coverings of the sphere by the torus admit the following representation: \begin{equation} b_{1,\nu,m} = P_{2t-1}(m,\nu)m\prod\limits_{i=1}^t (m\nu_i-2)_{(\nu_i-1)}.
\end{equation} Here $P_{2t-1}$ is a polynomial of degree $2t-1$ and $(m\nu_i-2)_{\nu_i-1} = (m\nu_i-2)(m\nu_i-3)\ldots (m\nu_i-\nu_i)$ is the descending factorial. } In Sec.~3, we present the necessary well-known facts about symmetric functions and representations of symmetric groups. In Sec.~4, we prove the main statements of the paper. \section{Representations of symmetric groups} \hspace{6mm}Let us reformulate our problem in the language of permutations. We will follow the terminology of~\cite{LZ}. \df A {\it constellation} is a sequence of permutations $[g_1,\ldots, g_k],$ $g_i \in S_n$ such that the group $\langle g_1,\ldots, g_k \rangle$ acts transitively on an~$n$-element set and $g_1\cdot\ldots\cdot g_k = id$. It is not hard to check that the $k$-tuple of permutations $[g_1,\ldots, g_k]$ generating the monodromy group of an unramified covering of a punctured sphere forms a constellation. This construction works in the opposite direction as well: for any constellation, there exists a corresponding unramified covering of the punctured sphere. Consider two constellations $[g_1,\ldots, g_k]$ and $[g'_1,\ldots, g'_k]$. If there exists a permutation $h \in S_n$ such that $g'_i = h^{-1}g_ih$ for all $i = 1,\ldots, k,$ then these two constellations are said to be {\it isomorphic}. Two unramified coverings of a punctured sphere $\mathbb{C}P^1\setminus T$ are isomorphic if and only if the corresponding constellations are isomorphic. {\theorem[\rm\bf Riemann's existence theorem] Consider any sequence of points $[y_1,\ldots, y_k]\in \mathbb{C}P^1$ and any constellation $[g_1,\ldots, g_k],\; g_i \in S_n$. Then there exists a Riemann surface~$X$ and a meromorphic function $f:X\rightarrow \mathbb{C}P^1$ such that $y_1,\ldots, y_k$ are the points of ramification of~$f$ and $g_1,\ldots, g_k$ are the corresponding monodromy permutations. The covering $f:X\rightarrow \mathbb{C}P^1$ is unique up to isomorphism.} One can find a proof in \cite{LZ}. Thus, the numbers $b_{g,\nu,m}$ enumerate decompositions of permutations $\sigma_0\in S_n$ with given cycle type $\nu$ into a product of~$m$ permutations from $S_n$ (we count $m$-tuples of permutations up to common conjugation) provided the group generated by these permutations acts transitively on the set of $n$ elements. The genus of the covering surface can be determined from the Riemann--Hurwitz formula. \df The {\it group algebra} $\mathbb{K}G$ of a finite group $G$ is the $|G|$-dimensional vector space over the field $\mathbb{K}$ freely spanned by the elements of $G$. The product in $\mathbb{K}G$ is induced by the group operation in~$G$. We will use only the field $\mathbb{K} = \mathbb{C}$. Proofs of all the facts below about the group algebras of symmetric groups can be found, e.g., in~\cite{Vi} and~\cite{Mc}. It is well known that every linear representation~$R:G\to GL(V)$ of a group~$G$ in a vector space $V$ over~$\mathbb{C}$ admits a unique extension to a linear representation of the algebra $\mathbb{C}G$ in the same space according to the formula $R(\sum\limits_{g\in G}a_gg) = \sum\limits_{g\in G}a_gR(g)$. Note that the converse statement is also true, so that there is a natural bijection between representations of a group $G$ and those of the group algebra $\mathbb{C}G$.
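For instance, for the sign representation of the group $S_2 = \{e, (12)\}$, given by $R(e) = 1$ and $R((12)) = -1$, the extension acts on an arbitrary element $a\,e + b\,(12)$ of the two-dimensional algebra $\mathbb{C}S_2$ by $R(a\,e + b\,(12)) = a - b$; one checks directly that this is multiplicative, since $(a\,e + b\,(12))(c\,e + d\,(12)) = (ac+bd)\,e + (ad+bc)\,(12)$ and $(ac+bd)-(ad+bc) = (a-b)(c-d)$.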
Let~$T$ be the regular representation of the algebra $\mathbb{C}G$, that is, the representation of $\mathbb{C}G$ on itself (regarding $\mathbb{C}G$ as a vector space) defined by the rule $T(a)x = ax,\; \forall a,x\in \mathbb{C}G$. Define the scalar product in $\mathbb{C}G$ in the standard way: \begin{equation} (a,b) = \mathrm{tr}\,T(ab) = \mathrm{tr}\,T(a)T(b).\label{d} \end{equation} Denote by $C_\nu$ the element in the group algebra~$\mathbb{C}S_{|\nu|}$ of the symmetric group equal to the sum of all permutations whose cycle type is~$\nu$. The elements $C_\nu$ form a basis of the center of the group algebra $\mathbb{C}S_{|\nu|}$. Consider the space $\mathbb{C}[G]$ of functions on a group~$G$. The formula \begin{equation} \varphi(\sum\limits_{g\in G}a_gg) = \sum\limits_{g\in G}a_g\varphi(g),\; \varphi\in\mathbb{C}[G], \label{5} \end{equation} extends every function $\varphi\in\mathbb{C}[G]$ to a linear function on the group algebra $\mathbb{C}G$. Hence we have a natural bijection between $\mathbb{C}[G]$ and the dual space $\mathbb{C}G^*$ to the space $\mathbb{C}G$ given by Eq.~(\ref{5}). The scalar product~(\ref{d}) defines an isomorphism between $\mathbb{C}G$ and its dual space, $g\mapsto\varphi_g,$ where \begin{equation} \varphi_g(h) = (g,h) = \left\{\begin{aligned} |G|,\; gh = e\\ 0,\; gh \neq e\\ \end{aligned} \right. \;\;\forall g,h\in G.\label{d2} \end{equation} Let us transfer the scalar product to the space $\mathbb{C}[G]$: \begin{equation} (\varphi_1,\varphi_2) = \dfrac{1}{|G|}\sum\limits_{g\in G}\varphi_1(g)\varphi_2(g^{-1}).\label{d3} \end{equation} Let $R:G\rightarrow GL(V)$ be an arbitrary linear representation of a group $G$. \df Define the function $\chi\in \mathbb{C}[G]$ by the formula $\chi(g) = \mathrm{tr}\,R(g),\; g\in G$. This function is called the \textit{character} of the representation $R$. Denote by $\chi^\nu$ the character of the irreducible representation of $S_n$ corresponding to the partition $\nu\vdash n$. Characters $\chi^\nu$ are idempotents: \begin{equation} \chi^\mu\chi^\nu = \dfrac{\mathrm{dim}_\nu}{n!}\delta_{\mu}^{\nu}\chi^\mu. \end{equation} Consider the mapping $\psi$ from $S_n$ to the space of quasihomogeneous polynomials of degree $n$ in the variables $p_i$, $\psi:\;\sigma\mapsto p_{\rho(\sigma)} = p_{\nu_1}\ldots p_{\nu_t}$, where $\rho(\sigma)$ denotes the cycle type of $\sigma$ and weight $i$ is assigned to the variable $p_i$. Define the \textit{characteristic mapping} $\mathrm{ch}$ from $Z\mathbb{C}S_n^*$ to the space of quasihomogeneous polynomials of degree~$n$ in the variables~$p_i$ by the formula: \begin{equation*} \mathrm{ch}(f) = \dfrac{1}{n!}\sum\limits_{g\in S_n}f(g)\psi(g). \end{equation*} \df Let $\nu$ be a partition of length at most $l$; then the \textit{Schur function} $s_\nu(x_1,x_2,\ldots,x_l)$ of $\nu$ is the quotient of two determinants:\label{d1} \begin{equation} s_\nu(x_1,x_2,\ldots,x_l) = \dfrac{\mathrm{det}(x_i^{\nu_j+l-j})_{1\leq i,j\leq l}}{\mathrm{det}(x_i^{l-j})_{1\leq i,j\leq l}}. \end{equation} We will use Schur functions rewritten in the variables $p_1,p_2,\ldots$. Their equivalent definition is as follows. First, the Schur function $s_k(p_1,p_2,\ldots)$ of a one-part partition is the coefficient of $t^k$ in the series $\mathrm{exp}\left(\sum\limits_{i=1}^\infty\dfrac{p_i}{i}t^i\right)=\sum\limits_{k=0}^\infty s_kt^k$.
Next, the Schur function $s_\nu(p_1,p_2,\ldots)$ is the determinant of the following matrix formed by one-part Schur functions: $$ s_\nu(p_1,p_2,\ldots) = \mathrm{det}(s_{\nu_i-i+j})_{1\leq i,j\leq l(\nu)}. $$ {\stm[\cite{Mc},1.7.3] The mapping $\mathrm{ch}$ is an isometric bijection between the space $Z\mathbb{C}S_n^*$ dual to the center of the group algebra and the space of quasihomogeneous polynomials of degree $n$; under this isomorphism, $\mathrm{ch}(\chi^\nu) = s_\nu$.} In addition, there is a correspondence between $Z\mathbb{C}S_n$ and $Z\mathbb{C}S_n^*$. Thus, the element $C_\nu=C_{\nu_1,\ldots,\nu_t} \in Z\mathbb{C}S_n$ is taken to the monomial $|C_\nu|p_{\nu_1}\ldots p_{\nu_t}$, where $|C_\nu|$ is the number of elements in the conjugacy class of a permutation of cycle type $\nu$. \section{Proof of the main theorem}\label{s4} \hspace{6.2mm}In this section we prove the main Theorem \ref{t1}. Any element $a\in Z\mathbb{C}S_n$ has an expansion in both bases $C_\nu$ and $\chi^\nu$: $$ a = \sum\limits_{\nu\vdash n} \dfrac{(a,C_\nu)}{|C_\nu|n!}C_\nu = \sum\limits_{\nu\vdash n} (a,\chi^\nu)\chi^\nu. $$ Due to the fact that the characters $\chi^\nu$ are idempotent, we have $$ a\cdot b = \sum\limits_{\nu\vdash n} (a,\chi^\nu)(b,\chi^\nu)\chi^\nu,\; \forall\; a, b\in Z\mathbb{C}S_n. $$ Let us associate to an arbitrary element $a\in Z\mathbb{C}S_n$ the operator on $Z\mathbb{C}S_n$ acting by multiplication by $a$. Then the $\chi^\nu$ are eigenvectors of this operator with the eigenvalues $(a,\chi^\nu).$ Consider the operator $B:Z\mathbb{C}S_n\rightarrow Z\mathbb{C}S_n$ defined by the formula $B = \sum\limits_{\nu \vdash n} \hbar^{|\nu|-l(\nu)} C_\nu$ (in other words, $B$ is a scaled sum of all the elements of the group $S_n$); here $\hbar$ is a formal variable. Recall that we count $m$-tuples of permutations whose product is a permutation of cyclic type $\nu$. Expand the operator $B^m$ in the eigenbasis of characters and note that the eigenvalue on the vector $\chi^\nu$ is exactly the desired number. Denote by $B_\nu$ the eigenvalue of the operator $B$ on the eigenvector $\chi^\nu$. When computing~$B_\nu$, we will use the following statement. {\lemma[\cite{KOV},4.1.2] The function $\Gamma\in \mathbb{C}[S_n], \Gamma:\sigma_\nu\mapsto \hbar^{l(\nu)}$, where $\hbar$ is a formal variable and $\sigma_\nu$ is an arbitrary permutation of cyclic type $\nu$, has the following expansion in the basis of characters: \begin{equation*} \Gamma(\cdot) = \sum\limits_{\nu\vdash n} \prod\limits_{k\in\nu}\dfrac{\hbar+c(k)}{h(k)}\chi^\nu; \end{equation*} here $k$ runs over the set of cells of the Young diagram corresponding to the partition $\nu$, $c(k)$ is the content of the cell $k$, and $h(k)$ is the length of the corresponding hook.\label{l} } For the sake of completeness, we reproduce the proof of this statement from~\cite{KOV}. {\it Proof.}\;\; By definition (see \cite{Mc}, 1.7), for an arbitrary function $f\in \mathbb{C}[ZS_n]$, the equality $\mathrm{ch}(f) = \sum\limits_{\nu\vdash n}z_\nu^{-1}f(\sigma_\nu)p_\nu$ is valid, where $z_\nu = 1^{\nu_1}2^{\nu_2}\ldots\nu_1!\nu_2!\ldots$ (in this formula and in the computation below, $\nu_i$ denotes the number of parts of $\nu$ equal to $i$). Consider the generating series $\sum\limits_{n\geq 1} \mathrm{ch}(\Gamma)u^n$ for the polynomials $\mathrm{ch}(\Gamma)$.
By definition of the functions $\mathrm{ch}$ and $z_\nu$, we have \begin{equation*} 1 + \sum\limits_{n\geq 1} \mathrm{ch}(\Gamma)u^n = \sum\limits_{n=\nu_1+2\nu_2+\ldots} \dfrac{\hbar^{\nu_1+\nu_2+\ldots}u^{\nu_1+2\nu_2+\ldots}}{1^{\nu_1}2^{\nu_2}\ldots\nu_1!\nu_2!\ldots}p_1^{\nu_1}p_2^{\nu_2}\ldots \end{equation*} Rewrite the last sum in more detail: \begin{equation*} \sum\limits_{n=\nu_1+2\nu_2+\ldots} \dfrac{\hbar^{\nu_1+\nu_2+\ldots}u^{\nu_1+2\nu_2+\ldots}}{1^{\nu_1}2^{\nu_2}\ldots\nu_1!\nu_2!\ldots}p_1^{\nu_1}p_2^{\nu_2}\ldots = \prod\limits_{n=1}^{\infty}\sum\limits_{k=0}^{\infty}\dfrac{\hbar^k u^{nk}}{n^kk!}p_n^k. \end{equation*} Note that this is the exponential \begin{equation*} \prod\limits_{n=1}^{\infty}\sum\limits_{k=0}^{\infty}\dfrac{\hbar^k u^{nk}}{n^kk!}p_n^k = \mathrm{exp}\,\hbar\,\left(up_1+\dfrac{u^2p_2}{2}+\dfrac{u^3p_3}{3}+\ldots\right), \end{equation*} and after rewriting the polynomials $p_i$ in the variables $x_i$ (see Eq.~(\ref{7})) we obtain \begin{equation*} \mathrm{exp}\,\hbar\,\left(up_1+\dfrac{u^2p_2}{2}+\dfrac{u^3p_3}{3}+\ldots\right) = \mathrm{exp}\,\hbar\,\sum\limits_{i=1}^\infty\left(ux_i+\dfrac{(ux_i)^2}{2}+\dfrac{(ux_i)^3}{3}+\ldots\right). \end{equation*} The last sum can be rewritten as \begin{multline*} \mathrm{exp}\,\hbar\,\sum\limits_{i=1}^\infty\left(ux_i+\dfrac{(ux_i)^2}{2}+\dfrac{(ux_i)^3}{3}+\ldots\right) = \mathrm{exp}\left(-\,\hbar\,\sum\limits_{i=1}^\infty\mathrm{ln}(1-ux_i)\right) = \\ = \prod\limits_{i=1}^\infty (1-ux_i)^{-\hbar}. \end{multline*} Therefore, it remains to prove that \begin{equation*} \prod\limits_{i=1}^\infty (1-x_i)^{-\hbar} = \sum\limits_{\nu} \prod\limits_{k\in\nu}\dfrac{\hbar+c(k)}{h(k)}\cdot s_\nu(x_1,x_2,\ldots), \end{equation*} where the summation is over all partitions $\nu$. The right hand side of the last equality is polynomial in~$\hbar$, whence it suffices to prove the equality for natural values of $\hbar$. For $N\in\mathbb{N}$, we will prove the equality \begin{equation} \prod\limits_{i=1}^\infty (1-x_i)^{-N} = \sum\limits_{\nu} \prod\limits_{k\in\nu}\dfrac{N+c(k)}{h(k)}\cdot s_\nu(x_1,x_2,\ldots).\label{2} \end{equation} We will use two statements from \cite{Mc}. First, \begin{equation} \prod_{j=1}^{\infty}\prod_{i=1}^{\infty}(1-u_jx_i)^{-1} = \sum_\nu s_\nu(u_1,u_2,\ldots)s_\nu(x_1,x_2,\ldots).\label{3} \end{equation} The relationship between the left and the right hand sides of Eq.~(\ref{3}) follows from statements about complete symmetric polynomials. The complete symmetric polynomial~$h_r$ is the sum of all monomials of total degree~$r$: \begin{equation*} h_r(x_1,x_2,\ldots) = \sum\limits_{i_1\leq i_2\leq\ldots\leq i_r} x_{i_1}x_{i_2}\ldots x_{i_r}. \end{equation*} The generating series $H(t)$ for the polynomials $h_r$ can be represented in the form $H(t) = \sum\limits_{r\geq 0}h_rt^r = \prod\limits_{i\geq 1}(1-x_it)^{-1}.$ Note that we can represent every factor on the right hand side of the last equality as the sum of an infinite geometric progression. For details of the proof of Eq.~(\ref{3}), see \cite{Mc}, Sec.~1.4, p. 48.
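As a quick low-degree check of Eq.~(\ref{3}), compare the parts of the two sides that are linear both in the $x_i$ and in the $u_j$: on the left hand side, expanding each factor as a geometric series and keeping a single linear term yields $\sum_{i,j}u_jx_i$, while on the right hand side the only partition of $1$ is $\nu=(1)$, which contributes $s_{(1)}(u_1,u_2,\ldots)s_{(1)}(x_1,x_2,\ldots) = \bigl(\sum_j u_j\bigr)\bigl(\sum_i x_i\bigr)$; the two expressions agree.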
Next, \cite{Mc}, Sec.~1.3, Example~4 yields \begin{equation} \prod\limits_{k\in\nu}\dfrac{N+c(k)}{h(k)} = s_\nu\underbrace{(1,\ldots,1)}_{N}.\label{4} \end{equation} Recall (Definition \ref{d1}) that $s_\nu(x_1,x_2,\ldots)$ is the quotient of two determinants. If we substitute $x_i = q^{i-1}$, then both determinants become Vandermonde determinants and \begin{equation} s_\nu(1,q,q^2,\ldots,q^{n-1}) = q^{n(\nu)}\prod\limits_{x\in\nu}\dfrac{1-q^{n+c(x)}}{1-q^{h(x)}}, \label{6} \end{equation} where $n$ is the number of variables and $n(\nu) = \sum\limits_{i\ge 1} (i-1)\nu_i$ (for details, see \cite{Mc}, Sec.~1.3, Example~1). Now, as $t\to 1$, the expression $\dfrac{\prod\limits_{i=1}^k (1-t^i)}{(1-t)^k}$ tends to $k!$. Hence, Eq.~(\ref{4}) follows from Eq.~(\ref{6}) in the limit $q\to 1.$ Equation~(\ref{2}) becomes obvious if we put $u_1=\ldots=u_N=1,\;u_{N+1}=\ldots = 0$ in Eq.~(\ref{3}) and use Eq.~(\ref{4}). The lemma is proved. $\qed$ {\imp The eigenvalue $B_\nu$ of the operator $B$ on the eigenvector $\chi^\nu$ is $$ B_\nu = \dfrac{\mathrm{dim}_\nu}{n!} \prod\limits_{k\in\nu}(1+c(k)\hbar). $$} {\it Proof.}\;\; Consider the function $\tilde\Gamma:\sigma_\nu\mapsto \hbar^{n-l(\nu)}$. Let us expand $\tilde\Gamma$ in the basis of characters using Lemma~\ref{l}: \begin{multline*} \tilde\Gamma(\cdot)= \sum\limits_{\nu\vdash n} \hbar^n \prod\limits_{k\in\nu}\dfrac{\dfrac{1}{\hbar}+c(k)}{h(k)}\chi^\nu = \sum\limits_{\nu\vdash n} \dfrac{\hbar^n}{\hbar^n} \prod\limits_{k\in\nu}\dfrac{1+c(k)\hbar}{h(k)}\chi^\nu = \\ = \sum\limits_{\nu\vdash n} \dfrac{\mathrm{dim}_\nu}{n!}\prod\limits_{k\in\nu}(1+c(k)\hbar)\chi^\nu. \end{multline*} To deduce the last equality, we made use of the famous hook-length formula: \begin{equation*} \mathrm{dim}_\nu = \dfrac{n!}{\prod\limits_{k\in\nu} h(k)}. \end{equation*} The corollary is proved. $\qed$ \\ \textit{Proof of Theorem~\ref{t1}.\hspace{2mm}} Consider the series \begin{equation*} \sum_{n=0}^\infty\sum_{\nu\vdash n} \prod\limits_{k\in\nu}(1+c(k)\hbar)^m \dfrac{\mathrm{dim}_\nu}{n!} s\hbar_\nu(\hbar,p_1,p_2,\ldots) \hbar^{-2n} \end{equation*} in the variables $\hbar$ and $p_1, p_2,\ldots$. Its coefficients are the eigenvalues of the operator $B^m$ on the eigenvectors $\chi^\nu$ written in the basis of scaled Schur functions. This power series enumerates both connected and disconnected coverings. Take the logarithm of this series to isolate the connected coverings. The Riemann--Hurwitz formula gives the genus of the covering surface: \begin{equation*} 2g = 2 - 2n + \sum\limits_{P} (k(P)-1). \end{equation*} Here the summation is over all ramification points $P$ of the covering surface, and $k(P)$ is the ramification order at~$P$. The ramification orders can be read off the monodromy: if the monodromy permutation over a branch point $y\in\mathbb{C}P^1$ has $l$ cycles, then the points lying over $y$ contribute $n-l$ to the sum $\sum\limits_P (k(P)-1)$. Notice that the contribution to the power of~$\hbar$ from the scaling of the Schur functions is equal to the sum $\sum\limits_P (k(P)-1)$, due to the Riemann--Hurwitz formula. We multiply by $\hbar^{-2n}$ and by $\hbar^2$ (see Eq.~(\ref{1})) so that the total power of $\hbar$ carried by a covering of genus~$g$ becomes $(2g-2+2n)-2n+2 = 2g$. $\qed$ It would be interesting to give a geometric interpretation of these formulas.
In particular, is it true that the Bousquet-Melou--Schaeffer numbers are equal to the intersection numbers of certain natural characteristic classes on suitable moduli spaces, as in the well-known ELSV formula \cite{ELSV}? Let us recall this formula enumerating coverings by a surface of given genus~$g$ with simple ramification over all ramification points but one, where the ramification type is $\nu_1,\ldots,\nu_t$: \begin{equation*} h_{g,\{\nu_1,\ldots,\nu_t\}} = \left(\prod_{i=1}^t \dfrac{\nu_i^{\nu_i}}{\nu_i!}\right)\int\limits_{\mathcal{\overline{M}}_{g,t}} \dfrac{1 - \lambda_1+ \ldots + (-1)^g\lambda_g}{(1-\nu_1\psi_1)\cdot\ldots\cdot(1-\nu_t\psi_t)}. \end{equation*} In particular, for the genus zero coverings it gives Hurwitz's formula \cite{Hu}: \begin{equation*} h_{0,\{\nu_1,\ldots,\nu_t\}} = n^{t-3}\left(\prod_{i=1}^t \dfrac{\nu_i^{\nu_i}}{\nu_i!}\right). \end{equation*} The Bousquet-Melou--Schaeffer formula \begin{equation*} |\mathrm{Aut}(\nu)|\, b_{0,\nu,m} = m((m-1)n-1)_{t-3}\prod\limits_{i=1}^t \dfrac{(m\nu_i-1)_{\nu_i}}{\nu_i!} \end{equation*} is very similar to the Hurwitz formula for $h_{0,\{\nu_1,\ldots,\nu_t\}}$, with powers replaced by the descending factorials, but its geometric interpretation is unknown. \end{document}
\begin{document} {\Large \bf Beyond Topologies, Part I} \\ {\bf Elem\'{e}r E Rosinger, Jan Harm van der Walt} \\ Department of Mathematics \\ and Applied Mathematics \\ University of Pretoria \\ Pretoria \\ 0002 South Africa \\ [email protected] \\ {\bf Abstract} \\ Arguments on the need, and usefulness, of going beyond the usual Hausdorff-Kuratowski-Bourbaki, or in short, HKB concept of topology are presented. The motivation comes, among others, from well known {\it topological type processes}, or in short TTP-s, in the theories of Measure, Integration and Ordered Spaces. These TTP-s, as shown by the classical characterization given by the {\it four Moore-Smith conditions}, can {\it no longer} be incorporated within the usual HKB topologies. One of the most successful recent ways to go beyond HKB topologies is that developed in Beattie \& Butzmann. It is shown in this work how that extended concept of topology is a {\it particular} case of the earlier one suggested and used by the first author in the study of generalized solutions of large classes of nonlinear partial differential equations. \\ \\ {\large \bf 1. Introduction} \\ {\bf Some of the Main Facts} \\ The starting observation of the approach to pseudo-topologies presented here and introduced earlier in Rosinger [1-7] is the difference between {\it rigid}, and on the other hand, {\it nonrigid} mathematical structures, a difference which is explained in the sequel. \\ As it happens, the usual concept of HKB topology is a {\it rigid} mathematical structure, and this is precisely one of the main reasons for a number of its important deficiencies. In contradistinction to that, the pseudo-topologies we deal with are {\it nonrigid} mathematical structures, and thus turn out to have a convenient flexibility. \\ There are two novelties in the way the usual HKB concept of topology is extended in this work. \\ First, the usual sequences, nets or filters used in describing limits, convergence, etc., are replaced with pre-ordered sets. This extension proves to be quite natural, and not excessive, since under rather general conditions, it can - if so required - lead back to the usual sequences, nets or filters. \\ Second, the basic concept is {\it not} that of convergence, but one of {\it Cauchy-ness} that models in a rather extended fashion the property of {\it Cauchy} sequences, nets or filters. Of course, convergence is recovered as a particular case of Cauchy-ness, and the advantage of such an approach is that {\it completion} becomes easily available in the general case. \\ To be more precise, let us consider the case when filters are used. Then convergence on a space $X$ is typically a relationship between some filters ${\cal F}$ on $X$ and corresponding points $x$ in $X$. \\ On the other hand, the mentioned Cauchy-ness is a {\it binary relation} $\Xi$ between certain pairs of filters on $X$. And a filter ${\cal F}$ on $X$ is Cauchy, if it relates to {\it itself} according to that binary relation, that is, if ${\cal F} ~\Xi~ {\cal F}$. \\ The particular case of a filter ${\cal F}$ convergent to a point $x$ is recovered when that filter is in the relation $\Xi$ with the ultrafilter generated by $x$, or more generally, with some filters closely related to that ultrafilter. \\ The point however is that instead of filters, sequences or nets, one uses this time pre-ordered sets. Furthermore, instead of starting with convergence, one starts with the concept of Cauchy-ness. 
\\ As it turns out, the extension of the usual HKB concept of topology presented here is such that it can {\it no longer} be contained within the usual Eilenberg--Mac Lane concept of category, Rosinger [28,29]. In other words, the totality of pseudo-topologies dealt with here no longer constitutes a usual Eilenberg--Mac Lane category, but one in a more general sense. The reason for that, however, is not the possibly excessive nature of the extension of the HKB concept of topology implemented here, but is simpler. Namely, it is due to the fact that the HKB concept of topology is rigid, while the concept of pseudo-topology dealt with here is nonrigid. \\ This issue, however, will not be pursued here, and the respective details can be found in Rosinger [29]. \\ {\bf Pseudo-Topologies are Nonrigid Structures} \\ Let us note that, as pointed out in Rosinger [2, p. 225], in the case of the usual HKB concept of topology all the topological entities, such as open sets, closed sets, compact sets, convergence, continuous functions, etc., are {\it uniquely} determined by {\it one single} such entity, for instance, that of open sets. In other words, if one for example chooses to start with the open sets, then all the other topological entities can be defined in a unique manner based on the given open sets. \\ In contradistinction to that usual situation encountered with the HKB topologies, and as seen in the sequel, in pseudo-topological structures there is a {\it relative independence} between various topological entities. Namely \begin{itemize} \item the connection between topological entities is no longer rigid to the extreme as in the case of the usual HKB concept of topology, where one of the topological entities determines in a unique manner all the other ones; instead, the various topological entities are only required to satisfy certain compatibility conditions, Rosinger [2, p. 225], \item there exist, however, several topological entities of prime importance, among them that of {\it Cauchy-ness}, which is a binary relation between two arbitrary sequences, nets, filters, etc., and which was first introduced in Rosinger [2], see also Rosinger [1,3-7]. \end{itemize} In this way, the usual HKB concept of topology can be seen as a {\it rigid} mathematical structure, while the pseudo-topologies dealt with in this work are {\it nonrigid}. \\ Needless to say, there are many other nonrigid mathematical structures, such as rings, spaces with measure, topological groups, topological vector spaces, and so on, Rosinger [29, pp. 8,9]. \\ Obviously, an important advantage of a rigid mathematical structure, and in particular, of the usual HKB concept of topology, is the simplicity of the respective theoretical development. Such simplicity comes from the fact that one can start with only one single entity, like for instance the open sets in the case of HKB topologies, and then based on that first entity, all the other entities can be defined or constructed in a clear and unique manner. \\ Consequently, the impression may be created that one has managed to develop a kind of universal theory, universal in the sense that there may not be any need for alternative theories in the respective discipline, as for instance is often the perception about the HKB topology.
\\ The disadvantage of a rigid mathematical structure is in a consequent built-in lack of flexibility regarding the interdependence of the various entities involved, since each of them, except for a single starting one, is determined uniquely in terms of the latter. And in the case of the HKB topologies this is manifested, among others, in the difficulties related to dealing with suitable topologies on spaces of continuous functions, as seen in the sequel. \\ Nonrigid mathematical structures, and in particular, pseudo-topologies, can manifest fewer difficulties coming from a lack of flexibility. \\ A disadvantage of such nonrigid mathematical structures - as for instance with various approaches to pseudo-topologies - is in the large variety of ways the respective theories can be set up. Also, their respective theoretical development may turn out to be more complex than is the case with rigid mathematical structures. \\ Such facts can lead to the impression that one could not expect to find a universal enough nonrigid mathematical structure, and for instance, certainly not in the realms of topological type structures, or in short, TTS-s, and in particular, not in the case of pseudo-topologies. \\ As it happens so far in the literature on pseudo-topologies, there seems not to be a wide and explicit enough awareness about the following two facts \begin{itemize} \item one should rather use nonrigid structures in order to avoid the difficulties coming from the lack of flexibility of the rigid concept of usual HKB topology, \item the likely consequence of using nonrigid structures is the lack of a sufficiently universal concept of pseudo-topology. \end{itemize} As it happens, such a lack of awareness leads to a tendency to develop more and more general concepts of pseudo-topology, hoping to reach a sufficiently universal one, thus being able to replace once and for all the usual HKB topology with "THE" one and only "winning" concept of pseudo-topology. \\ Such an unchecked search for increased generality, however, may easily lead to rather meagre theories. \\ It also happens in the literature that, even if mainly intuitively, a certain restraint is manifested when setting up various concepts of pseudo-topology, that is, when going away from a rigid theory towards some nonrigid ones. And certainly, the reason for such a restraint is that one would like to hold on to the advantage of rigid theories, which are simpler to develop than the nonrigid ones. \\ After some decades, the literature on pseudo-topologies appears to have settled in some of its main trends. And one of them is the preference to start with formalizing, in a rather large variety of ways, the concept of {\it convergence}. \\ In this regard it is worth noting that, when back in 1914 Hausdorff created the modern concept of topology, metric spaces, with their open sets, were taken as a starting point for generalization. And obviously, a natural way to generalize metric spaces is to leave aside the metric, but keep the concept of open sets. \\ Unfortunately, the resulting HKB concept of topology proved to have a number of important deficiencies. And they were manifested not so much on the level of a given topological space, as on the level of continuous mappings between two arbitrary topological spaces. \\ This points to the fact that the HKB concept of topology tends to fail in a {\it categorial} sense, that is, through its {\it morphisms}, rather than through its {\it objects}.
And by now this categorial level of failure is quite well understood and formulated by pointing out the fact that the category of usual HKB topologies is {\it not} Cartesian closed, Herrlich. \\ From this point of view the tendency to base notions of pseudo-topology on concepts of convergence seems indeed to be more deep and sophisticated than the traditional basing of the usual HKB concept of topology on open sets. Indeed, unlike open subsets, convergence does inevitably involve {\it nontrivial} morphisms. Namely, mappings from pre-ordered directed sets to the respective pseudo-topological spaces. \\ {\bf Limits of Axiomatic Theories} \\ In modern Mathematics it is "axiomatic" that theories are built as {\it axiomatic systems}. \\ Unfortunately however, ever since the early 1930s and G\"{o}del's Incompleteness Theorem, we cannot disregard the deeply inherent limitations of axiomatic mathematical theories. \\ And that limitation cannot be kept away from nonrigid mathematical structures either, since such structures are also built as axiomatic theories. \\ Consequently, various theories of pseudo-topologies, including the one dealt with here, are quite likely bound to suffer from limitations when trying to model in their ultimately inclusive extent such nontrivial mathematical phenomena like topological type structures, or in short, TTS-s. \\ After all, G\"{o}del's incompleteness already is manifested in the Peano axiomatic construction of the natural numbers $\mathbb{N}$. And obviously, TTS-s do not by any means seem to be simpler than the story of $\mathbb{N}$ ... \\ A possible further development, away from such limitations, has recently appeared, even if it is not yet clearly in the awareness of those dealing with topology and pseudo-topology. Namely, {\it self-referential} axiomatic mathematical theories are being developed, Barwise \& Moss. The novelty - until recently considered to be nothing short of a sheer scandal - in such theories is in the use of concepts given by definitions which involve a {\it vicious circle}, that is, are self-referential. \\ It may, however, happen that a further better understanding and modelling of TTS-s may benefit from such a truly novel approach ... \\ {\bf Origins of the Usual Concept of Topology} \\ Our present usual concept of {\it topology} was first formulated in modern terms by Felix Hausdorff in his 1914 book "Grundz\"{u}ge der Mengenlehre". Hausdorff built on the earlier work of M Fr\'{e}chet. During the next two decades, major contributions to the establishment of that abstract or general concept of topology in its present form have been made by LEJ Brouwer, K Kuratowski, and the Bourbaki group of mathematicians. The Bourbaki group completed that topological venture with the introduction of the concept of {\it uniform} topology, towards the end of the 1930s. That was a highly important {\it particular} case of topology, yet extending fundamental properties of metric spaces, such as for instance, the construction of the topological {\it completion} of a space. \\ {\bf Difficulties with the Usual Concept of Topology} \\ Strangely enough, soon after, serious {\it limitations} and {\it deficiencies} of that Hausdorff-Kuratowski-Bourbaki, or in short, HKB concept of topology started to surface. \\ One of the more important such problems came from the shockingly surprising difficulties in setting up suitable topologies on important and frequently used {\it spaces of functions}. 
In this regard even the most simple {\it linear} situations would already give highly worrying signals. \\ Let for instance $E$ be a locally convex topological vector space over $\mathbb{R}$, and let $E^*$ be its topological dual made up of all the continuous linear functionals from $E$ to $\mathbb{R}$. Let $E^*$ be endowed with any locally convex topology. We consider the usual {\it evaluation} mapping defined by duality, namely \\ $~~~~~~ ev : E \times E^* \ni ( x, x^* ) \longmapsto\,\, <\, x, x^* \,>\,\, =~ x^* ( x ) \in \mathbb{R} $ \\ Then rather surprisingly - and also, most inconveniently - it follows that the {\it joint continuity} of this evaluation mapping $ev$ will actually {\it force} the locally convex topology on $E$ to be so particular, as to be {\it normable}. \\ Indeed, let us assume that the evaluation mapping $ev : E \times E^* ~\longrightarrow~ \mathbb{R}$ is jointly continuous. Then there exist neighbourhoods $U \subseteq E,~ U^* \subseteq E^*$ of $0 \in E$ and $0 \in E^*$, respectively, such that $ev ( U, U^* ) \subseteq [ -1, 1]$. But then $U$ is contained in the polar of $U^*$, therefore $U$ is bounded in $E$. And since $E$ admits a bounded neighbourhood $U$, it follows that $E$ is normable. \\ There have also been other somewhat less shocking, but on the other hand more pervasive instances of the deficiencies of the HKB concept of topology. \\ Measure and Integration Theory has been a long ongoing source of some of them. Indeed, about a decade prior to Hausdorff's introduction of the modern concept of topology, H Lebesgue established his "dominated convergence" theorem which plays a fundamental role in Lebesgue integration, and it has no similarly strong version in the case of the Riemann integral. \\ This theorem, involving infinite sequences of integrable functions, their limits, as well as the limits of their integrals, is obviously about certain {\it topological type processes}, or in short, TTP-s, in the space of integrable functions. \\ Similar is the situation with Lusin's theorem about the approximation of measurable functions by continuous ones, or with Egorov's theorem about the almost uniform convergence of point-wise convergent sequences of measurable functions. \\ Yet none of these three theorems has ever been expressed in terms of HKB topologies. And the reason - although apparently hardly known well enough, and seldom, if ever, stated explicitly - is that, simply, they {\it cannot} be expressed in such a manner, due to the fact that they {\it fail} the Moore-Smith criterion, mentioned in the sequel. \\ Connected with measure theory, one can also mention such a simple and basic concept as convergence almost everywhere which, however, proves in general {\it not} to be expressible in terms of HKB topologies, Ordman, since it does not satisfy the mentioned Moore-Smith criterion. \\ There have been several notable early studies related to extensions of the usual HKB concept of topology, Choquet, Fischer. However, the respective problems with HKB topologies did not receive much attention until recent times, Beattie \& Butzmann. \\ Topological type structures, or in short, TTS-s, on ordered spaces have been another rather considerable source bringing to the fore the limitations of the HKB concept of topology. For instance, on ordered spaces there is a wealth of natural and useful concepts of convergence which - in view of the same Moore-Smith criterion - cannot be incorporated within HKB topologies, Luxemburg \& Zaanen, Zaanen.
\\ On the other hand, the particular usefulness of such order based topological structures is obvious. One of such examples is the 1936 "spectral theorem" of Freudenthal, mentioned later. A recent remarkable example can be seen in Van der Walt, and for further details in this regard see also Rosinger [27]. \\ Recently, in connection with attempts in establishing a systematic topological and differential study of infinite dimensional smooth manifolds, the HKB concept of topology has met yet another challenging alternative in what came to be called a "convenient setting", Kriegl \& Michor. That approach - even if so far has not attained all of its main objectives - can nevertheless quite clearly show the extent of the need to go beyond the HKB concept of topology. \\ Further relevant facts and arguments related to the need for an extension of the usual HKB concept of topology are presented in Beattie \& Butzmann [pp. xi-xiii], Beattie, as well as in the literature mentioned there. \\ {\bf Spaces of Functions and Continuous Convergence} \\ There is a notorious, even if less often mentioned, difficulty with finding suitable HKB topologies on spaces of functions. And this already happens with linear mappings between vector spaces. In this regard, the case of the evaluation mapping \\ $~~~~~~ ev : E \times E^* \ni ( x, x^* ) \longmapsto\,\, <\, x, x^* \,>\,\, =~ x^* ( x ) \in \mathbb{R} $ \\ mentioned above is just one of the simpler, even if critically inconvenient, examples. \\ Here we recall some of the difficulties with the HKB concept of topology related to spaces of continuous linear mappings, and do so in a more general manner, as pointed out in Rosinger [2, pp. 223,224]. \\ Let $E$ and $F$ be two locally convex topological vector spaces over $\mathbb{R}$, with their topology generated respectively by the families of semi-norms $( p_i )_{i \in I}$ and $( q_j )_{j \in J}$. \\ In such a case it would be most convenient in many situations if the set ${\cal L} ( E, F )$ of linear and continuous mappings from $E$ to $F$ could be obtained in a simple and explicit manner from the family of sets of linear continuous mappings ${\cal L} ( E_i, F_j )$, with $i \in I, j \in J$, where $E_i$ and $F_j$ denote respectively the topologies on $E$ and $F$, generated by the semi-norms $p_i$ and $q_j$. Indeed, the structure of these sets ${\cal L} ( E_i, F_j )$ of linear continuous mappings is perfectly well understood, since they act between semi-normed spaces. \\ Now, since the topologies on $E$ and $F$ are respectively given by $\sup_{i \in I} E_i$ and $\sup_{j \in J} F_j$, we obviously have the relations \\ $~~~ \bigcup_{\, i \in I} {\cal L} ( E_i, F ) ~\subseteq~ {\cal L} ( E, F ) ~\subseteq~ \bigcap_{\, j \in J} {\cal L} ( E, F_j ) $ \\ $~~~ {\cal L} ( E_i, F ) ~\subseteq~ \bigcap_{\, j \in J} {\cal L} ( E_i, F_j ),~~~ i \in I $ \\ $~~~ {\cal L} ( E, F_j ) ~\supseteq~ \bigcup_{\, i \in I} {\cal L} ( E_i, F_j ),~~~ j \in J $ \\ Or to argue more simply, let us assume that $J$ has one single element $j$. 
Then it follows immediately that \\ $~~~ {\cal L} ( E, F ) ~=~ {\cal L} ( \sup_{i \in I} E_i, F_j ) ~\supseteq~ \bigcup_{i \in I} {\cal L} ( E_i, F_j ) $ \\ Similarly, if $I$ has only one element $i$, then \\ $~~~ {\cal L} ( E, F ) ~=~ {\cal L} ( E_i, \sup_{j \in J} F_j ) ~\subseteq~ \bigcap_{j \in J} {\cal L} ( E_i, F_j ) $ \\ Thus in conclusion, in the general case, the natural relation to expect would be \\ ($\cup\cap$) $~~~ {\cal L} ( E, F ) ~=~ \bigcup_{i \in I} \bigcap_{j \in J} {\cal L} ( E_i, F_j ) $ \\ or alternatively \\ ($\cap\cup$) $~~~ {\cal L} ( E, F ) ~=~ \bigcap_{j \in J} \bigcup_{i \in I} {\cal L} ( E_i, F_j ) $ \\ However, in general, neither of these two relations holds, since the connection between ${\cal L} ( E, F )$, and on the other hand, the family ${\cal L} ( E_i, F_j )$, with $i \in I, j \in J$, turns out typically to be far more involved, thus also far less explicit or simple. \\ This failure to have relations like ($\cup\cap$) or ($\cap\cup$) in general is precisely the problem addressed by Marinescu [1,2], although the respective formulation is somewhat different and less general than the one above. And the suggested solution only regards certain frequent and useful particular cases, by introducing a suitable concept of {\it pseudo-topology} as an extension of the HKB concept of topology. \\ One of the remarkable and surprisingly useful concepts in Beattie \& Butzmann is that of the {\it continuous convergence}. And it addresses precisely the problem of finding suitable topological type structures on spaces of functions, Beattie \& Butzmann [25-42]. \\ An important consequence of using this concept of continuous convergence is the far more convenient {\it duality} theory it allows for a large class of vector spaces, Beattie \& Butzmann [chap. 4], Beattie. \\ {\bf Category Language Helps in Understanding} \\ Put in short terms, a major {\it failure} of the HKB concept of topology is that the corresponding {\it category} of topological spaces is {\it not} Cartesian closed, Herrlich. In other words, let $X, Y$ and $Z$ be three spaces with respective usual HKB topologies, and let us endow $X \times Y$ with the corresponding product topology. Set theoretically we have the relation \\ (EXP) $~~~ Z^{X \times Y} ~=~ ( Z^X )^Y $ \\ which means the existence of the {\it bijective} mapping \\ $~~~~~~ Z^{X \times Y} \ni f ~~\longmapsto~~ f_{ev} \in ( Z^X )^Y $ \\ where \\ $~~~~~~ ( f_{ev} ( y ) ) ( x ) ~=~ f ( x, y ),~~~ x \in X,~ y \in Y $ \\ On the other hand, one {\it cannot} in general find a usual HKB topology on the space ${\cal C} ( X, Z )$ of continuous functions from $X$ to $Z$, such that by restricting the above mapping $f \longmapsto f_{ev}$, one would again obtain a bijection, namely \\ $~~~~~~ {\cal C} ( X \times Y, Z ) \ni f ~~\longmapsto~~ f_{ev} \in {\cal C} ( Y, {\cal C} ( X, Z ) ) $ \\ In other words, within the HKB topologies, we {\it cannot} in general obtain the relation \\ (CONT-EXP) $~~~ {\cal C} ( X \times Y, Z ) ~=~ {\cal C} ( Y, {\cal C} ( X, Z ) ) $ \\ which would be the particular continuous version of (EXP). \\ {\bf The Four Moore-Smith Conditions} \\ The early 1922 paper of Moore \& Smith introduced the systematic use of {\it nets} in topology, Kelley [pp. 62-83]. As it happened at the time, the motivation came from a study of summability. \\ Nets with values in a given set $E$ are natural generalizations of usual sequences $s : \mathbb{N} \longrightarrow E$.
Namely, the index set $\mathbb{N}$ is replaced with an {\it arbitrary} set $I$, which however is assumed to be endowed with a {\it pre-order} relation $\leq$ that is at the same time {\it directed}. More precisely, a net in the set $E$ is any mapping $s : I \longrightarrow E$, where the index set $I$ has a binary relation $\leq$ which is {\it reflexive} and {\it transitive}, thus it is a {\it pre-order}, and in addition, it has the {\it directedness} property that $\forall~ i, j \in I : \exists~ k \in I : i, j \leq k$. In general, $\leq$ need not be antisymmetric as well, namely, one does not necessarily ask that $\forall~ i, j \in I : (~ i \leq j,~ j \leq i ~) \Longrightarrow i = j $. \\ One of the basic topological issues arising in connection with nets is the following one, Kelley [pp. 73,74]. \\ Suppose for a certain set $E$, we are given a class ${\cal S}$ of pairs $( s, x )$, where $s$ are nets in $E$, while $x \in E$. \\ The intended meaning of the binary relationship $( s, x ) \in {\cal S}$ is that the net $s$ {\it converges to} $x$ in the sense of ${\cal S}$, and it is denoted by \\ $~~~~~~ \lim_{\,{\cal S}}~ s ~=~ x $ \\ The topological issue is : \begin{itemize} \item Which are the {\it necessary} and {\it sufficient} conditions on the class ${\cal S}$, so that there exists a usual HKB topology $\tau$ on $E$, for which we have the {\it equivalence} between the usual convergence in topology, and on the other hand, that given by the class ${\cal S}$, namely \end{itemize} (1.1)~~~ $ \begin{array}{l} \forall~ x \in E,~ s : I \longrightarrow E ~~\mbox{net in}~~ E : \\ \\ ~~~~ \lim_{\,\tau}~ s ~=~ x ~~~\Longleftrightarrow~~~ \lim_{\,{\cal S}}~ s ~=~ x \end{array} $ \\ The {\it characterization} of this equivalence is given by the following {\it four Moore-Smith conditions} (1.2) - (1.5) on the class ${\cal S}$, Kelley [p. 74], namely \\ (1.2)~~~ $ \begin{array}{l} \forall~ x \in E,~ s : I \longrightarrow E ~~\mbox{net in}~~ E : \\ \\ ~~~~ (~~ s ( i ) ~=~ x, ~~\mbox{for}~ i \in I ~~) ~~~\Longrightarrow~~~ \lim_{\,{\cal S}}~ s ~=~ x \end{array} $ \\ \\ \\ (1.3)~~~ $ \begin{array}{l} \forall~ ( s , x ) \in {\cal S} : \\ \\ \forall~ t : I \longrightarrow E ~~\mbox{net in}~~ E : \\ \\ ~~~~ (~~ t ~~\mbox{subnet of}~~ s ~~) ~~~\Longrightarrow~~~ \lim_{\,{\cal S}}~ t ~=~ x \end{array} $ \\ \\ \\ (1.4)~~~ $ \begin{array}{l} \forall~ x \in E,~ s : I \longrightarrow E ~~\mbox{net in}~~ E : \\ \\ ~~~~~~ ( s, x ) \notin {\cal S} ~~~\Longrightarrow~~~ \left ( ~~ \begin{array}{l} \exists~ t ~~\mbox{subnet of}~~ s : \\ \forall~ t^{\,\prime} ~~\mbox{subnet of}~~ t : \\ ~~~~ ( t^{\,\prime}, x ) \notin {\cal S} \end{array} ~~ \right ) \end{array} $ \\ \\ We note that an equivalent form of condition (1.4) is the following one \\ (1.4$^{\,\prime}$\,)~~~ $ \begin{array}{l} \forall~ x \in E,~ s : I \longrightarrow E ~~\mbox{net in}~~ E : \\ \\ ~~~~\left ( ~~ \begin{array}{l} \forall~ t ~~\mbox{subnet of}~~ s : \\ \exists~ t^{\,\prime} ~~\mbox{subnet of}~~ t : \\ ~~~~ \lim_{\,{\cal S}}~ t^{\,\prime} ~=~ x \end{array} ~~ \right ) ~~~\Longrightarrow~~~ \lim_{\,{\cal S}}~ s ~=~ x \end{array} $ \\ \\ For the formulation of the last, that is, {\it fourth Moore-Smith condition}, we need some preparation. Let $( \Lambda, \leq )$ be a directed pre-order. Let $( I_\lambda, \leq )$ be a directed pre-order, for every $\lambda \in \Lambda$.
Then on the set \\ $~~~~~~ J ~=~ \Lambda \times \prod_{\, \lambda \in \Lambda}\, I_\lambda $ \\ there is a natural directed pre-order $\leq$ defined by \\ $~~~~~~ ( \lambda, f ) ~\leq~ ( \mu, g ) ~~~\Longleftrightarrow~~~ \left( \begin{array}{l} ~~~*)~~ \lambda \leq \mu \\ \\ ~**)~~ f ( \nu ) \leq g ( \nu ), ~~\mbox{for}~ \nu \in \Lambda \end{array} ~~\right) $ \\ \\ where $\lambda, \mu \in \Lambda$ and $f, g \in \prod_{\, \nu \in \Lambda}\, I_\nu$. Let us now define the {\it diagonal} mapping \\ $~~~~~~ \delta : J ~=~ \Lambda \times \prod_{\, \nu \in \Lambda}\, I_\nu \ni ( \lambda, f ) ~\longmapsto~ f ( \lambda ) \in I_\lambda $ \\ Suppose for every $\lambda \in \Lambda$, one is given $( s_\lambda, x_\lambda ) \in {\cal S}$, where $s_\lambda : I_\lambda \longrightarrow E$ is a net and $x_\lambda \in E$. Then one can define the net $s : \Lambda \longrightarrow E$ by $s ( \lambda ) = x_\lambda$. Further, one defines the net $t : J \longrightarrow E$ by $t ( \lambda, f ) = s_\lambda ( \delta ( \lambda, f ) ) = s_\lambda ( f ( \lambda ) )$, for $\lambda \in \Lambda$ and $f \in \prod_{\, \nu \in \Lambda}\, I_\nu$. \\ Finally, suppose given $x \in E$. \\ Then the {\it fourth Moore-Smith condition} is as follows \\ (1.5)~~~ $ \left( \begin{array}{l} ~~~*)~~ \lim_{\,{\cal S}}~ s_\lambda ~=~ x_\lambda, ~~\mbox{for}~ \lambda \in \Lambda \\ \\ ~**)~~ \lim_{\,{\cal S}}~ s ~=~ x \end{array} ~~\right) ~~~\Longrightarrow~~~ \lim_{\,{\cal S}}~ t ~=~ x $ \\ \\ A consequence of the above four Moore-Smith conditions is that, in case one or more of them are {\it not} satisfied by the class ${\cal S}$, then for whichever usual HKB topology on $E$, there {\it cannot} be precisely the same amount of convergent nets, thus the equivalence (1.1) does {\it not} hold, since at least one of the two implications "$\Longrightarrow$" or " $\Longleftarrow$" will fail to be valid. \\ Clearly, it is easy to construct topologies $\tau$ on $E$ such that one or the other of the implications in (1.1) holds. \\ For instance, we can consider the set ${\cal T}op_{+}$ of all topologies $\tau$ on $E$, for which we have the implication \\ (1.6)~~~ $ \begin{array}{l} \forall~ x \in E,~ s : I \longrightarrow E ~~\mbox{net in}~~ E : \\ \\ ~~~~ \lim_{\,\tau}~ s ~=~ x ~~~\Longrightarrow~~~ \lim_{\,{\cal S}}~ s ~=~ x \end{array} $ \\ Obviously, this set ${\cal T}op_{+}$ is not void, since the finest topology on $E$ belongs to it. Indeed, in that finest topology, only constant nets converge. And then, according to (1.2), the implication "$\Longrightarrow$" in (1.6) does hold. \\ Conversely, we can consider the set ${\cal T}op_{-}$ of all topologies $\tau$ on $E$, for which we have the implication \\ (1.7)~~~ $ \begin{array}{l} \forall~ x \in E,~ s : I \longrightarrow E ~~\mbox{net in}~~ E : \\ \\ ~~~~ \lim_{\,\tau}~ s ~=~ x ~~~\Longleftarrow~~~ \lim_{\,{\cal S}}~ s ~=~ x \end{array} $ \\ Then again, this set ${\cal T}op_{-}$ is not void, since the coarsest topology on $E$ belongs to it. Indeed, in that coarsest topology, all nets converge. And then the implication "$\Longleftarrow$" in (1.7) trivially holds. \\ It should be mentioned that there are more recent alternative versions of the four Moore-Smith conditions (1.2) - (1.5), Schechter [pp. 413, 414]. \\ Two typical examples regarding the {\it failure} of the equivalence in (1.1) are given by the {\it convergence almost everywhere} and the {\it uniform convergence almost everywhere}, Ordman, Cohn, Schechter [p. 564]. \\ For convenience, let us recall here a few details. 
Let $( X, \Sigma, \mu )$ be a space with a measure. Let ${\cal M} ( X, \Sigma )$ be the space of real valued measurable functions on $X$. Further, let $s : I \longrightarrow {\cal M} ( X, \Sigma )$ be any net, and let $f \in {\cal M} ( X, \Sigma )$ be any real valued measurable function. \\ We say that the measurable functions $f_i = s ( i )$, with $i \in I$, {\it converge almost everywhere} to the measurable function $f \in {\cal M} ( X, \Sigma )$, if and only if \\ $~~~~~~ \mu^*\, ( X \setminus \{ x \in X ~|~ \lim_{\,i \in I}~ f_i ( x ) = f ( x ) \} ) ~=~ 0 $ \\ where $\mu^*$ is the {\it outer measure} generated by $\mu$. \\ Let us now take $E = {\cal M} ( X, \Sigma )$, and define the class ${\cal S}_{ae}$ by the condition \\ $ ( s, f ) \in {\cal S}_{ae} ~~~\Longleftrightarrow~~~ (~~ f_i = s ( i ), ~\mbox{with}~ i \in I, ~~\mbox{converge almost everywhere to}~ f ~~) $ \\ where the measurable functions $f \in E = {\cal M} ( X, \Sigma )$ and the nets $s : I \longrightarrow E = {\cal M}( X, \Sigma )$ are arbitrary. \\ Then as is well known, Schechter [p. 564], the third of the above Moore-Smith conditions, namely (1.4), is not satisfied in general. \\ Therefore, there {\it cannot} in general exist any usual HKB topology on the space $E = {\cal M}( X, \Sigma )$ of measurable functions which would give precisely the same convergence as the convergence almost everywhere. \\ A similar situation happens, Schechter [p. 564], with uniform convergence almost everywhere, which is defined as follows. The measurable functions $f_i = s ( i )$, with $i \in I$, {\it converge uniformly almost everywhere} to the measurable function $f \in {\cal M} ( X, \Sigma )$, if and only if \\ $~~~~~~ \begin{array}{l} \forall~~ \epsilon > 0 : \\ \\ \exists~~ X_\epsilon \in \Sigma,~~ \mu ( X_\epsilon ) ~\leq~ \epsilon : \\ \\ ~~~ f_i ~~\mbox{converge uniformly to}~~ f ~~\mbox{on}~~ X \setminus X_\epsilon \end{array} $ \\ A consequence of the above is that the celebrated "dominated convergence" theorem of Lebesgue {\it cannot} be formulated in terms of HKB topologies. Indeed, this theorem is about sequences of integrable functions which {\it converge almost everywhere}, and as mentioned above, such a convergence is beyond the reach of HKB topologies. \\ {\bf Going Beyond the HKB Topology} \\ Nearly a century of parallel developments have taken place in which the usual HKB concept of topology has been pursued along with a variety of alternative topological type processes and structures in Measure and Integration Theory, Theory of Ordered Spaces, etc. These parallel developments, however, have left the issue of a proper extension of the HKB concept of topology just about as open as it had always been. \\ And it is by now quite obvious that the way forward towards such an extension is not simply by setting up one or another concept of convergence with possible relaxations of some of the four Moore-Smith conditions which characterize convergence in usual HKB topologies. \\ In other words, finding a more general concept of topology, and at the same time, not losing the essential - even if less explicitly manifest - phenomena, aspects, properties, etc., involved has proven not to be an easy task. \\ It is, of course, due precisely to this difficulty that a variety of rather different pseudo-topological attempts have been made in order to go beyond the HKB framework. \\ And as if to highlight the {\it complexity} and {\it depth} of that enterprise, we can recall that even Category Theory has got involved in it.
For instance, as earlier in Herrlich, so more recently in Clementino et al., it is stated that: \begin{quote} "Failure to be Cartesian closed is one of the main defects of the category of topological spaces." \end{quote} {\large \bf 2. Topological Type Structures, or TTS-s} \\ {\bf Towards an Extension of the HKB Concept of Topology} \\ We shall present one of the extensions of the HKB concept of topology which was suggested in the 1960s, in Rosinger [1-7]. \\ Then we shall show how that extension of the HKB concept of topology incorporates as a {\it particular} case the more recent - and highly successful - such extended concept of topology in Beattie \& Butzmann. \\ The motivation for Rosinger [1-7] was given by well known difficulties in Functional Analysis manifested by the HKB concept of topology. More specifically, these difficulties arose in the study of generalized solutions of linear and nonlinear partial differential equations. \\ One of the early formulations of these difficulties was in the celebrated 1954 impossibility result of L Schwartz, which claimed that the space ${\cal D}^{\,\prime}$ of the Schwartz distributions could not be usefully embedded into differential algebras of generalized functions. \\ As it turned out not much later, that claim proved to be {\it incorrect}. \\ Indeed, large classes of differential algebras of generalized functions were constructed and used in the solution of a considerable variety of linear and nonlinear partial differential equations, as well as in applications to singular differential geometry, Lie groups, and so on, see Rosinger [5-7, 9-26], Mallios \& Rosinger [1,2], Rosinger \& Walus [1,2], Mallios, and also the whole subject field 46F30 at www.ams.org/msc/46Fxx.html. And all these algebras contain, among other spaces of generalized functions, also the space ${\cal D}^{\,\prime}$ of the Schwartz distributions. \\ Over the years, a variety of ways for going beyond the HKB concept of topology have been suggested in the literature. Some of the earlier ones can be found in Alfsen \& Njastad, Choquet, Cs\'{a}sz\'{a}r, Doicinov, Dupont, Fischer, Fr\"{o}licher \& Bucher, Hacque, Haddad, Hammer, Jarchow [1,2], Leader, Marinescu [1,2], Steiner. More recent references are presented in Beattie \& Butzmann. \\ One of the most seminal and successful extensions of the HKB concept of topology was presented in Beattie \& Butzmann. \\ As mentioned, however, this extended concept proves to be a {\it particular} case of the {\it topological type structures}, or in short, TTS-s, introduced and developed in Rosinger [2-7], and presented briefly in the sequel. \\ {\bf Topological Type Structures and Topological Type Processes} \\ Let us make a further clarification of the terminology. \\ "Topological type structures", or TTS-s, are meant in this work to be the discipline studied by the usual HKB concept of topology, as well as by its various pseudo-topological type extensions. And as we have seen, in Measure and Integration Theory, or in the Theory of Ordered Spaces, there are plenty of important TTS-s which cannot be dealt with conveniently by the HKB concept of topology. \\ On the other hand, such topological type structures are supposed to be defined in terms of certain "topological type processes", or TTP-s, such as for instance sequences, nets, filters, and so on. \\ In this way, topological type processes, or TTP-s, are but the {\it building blocks} of topological type structures, or TTS-s.
\\ {\bf Topological Type Processes, or TTP-s} \\ Let $E$ be any nonvoid set on which we are interested in a {\it topological type structure}, or TTS. \\ As mentioned, a first departure - introduced up-front - from the usual ways encountered in topology, or for that matter pseudo-topology, appears already in the definition of the {\it topological type processes}, or TTP-s, on any given set $E$. Namely, instead of considering for that purpose the usual sequences, nets, filters, etc., on the set $E$, we go one level higher in abstraction and consider any nonvoid set $E^{\,\prime}$ which may give all the {\it topological type processes} on $E$. \\ Also as mentioned, this rather general step, however, is tempered by the fact that, without loss of generality, we can consider such sets $E^{\,\prime}$ of TTP-s as being endowed with a pre-order. And if and when desired, this pre-order structure proves to be able to lead back to nets or filters on $E$. \\ The motivation for such a generalization is simple. Indeed, as is well known, usual sequences of elements in $E$, that is, mappings $s : \mathbb{N} \longrightarrow E$, are in general insufficient for the description of a large variety of topological type structures on $E$ even in the particular case of HKB topologies. On the other hand, nets which, as mentioned, are mappings $s : I \longrightarrow E$, where $I$ is an arbitrary {\it pre-ordered directed} set of {\it indices}, present technical difficulties. First of all, since they involve arbitrary sets of indices $I$, the totality of nets on $E$ is no longer a set, but a category. Further, in case there is an algebraic structure on $E$, such as a group, vector space, algebra, etc., it is not easy to consider a similar algebraic structure on the totality of nets on $E$, since obviously not all such nets have the same set $I$ of indices. Consequently, it is not easy in terms of arbitrary nets to bring algebra and topology together. \\ Such technical difficulties were in part the reason why filters on $E$ have long been considered. And such filters have the major advantage to be definable in terms of $E$ alone, without the need for any other sets, such as for instance, the arbitrary index sets $I$ needed for nets. As it happens, however, filters on $E$ are again not convenient when there is some algebraic structure on that set and we want in addition to bring in a topology compatible with it. Indeed, say, E is a group with the group operation $\circ$. Let $Fil ( E )$ denote the set of all filters on $E$. Then we can easily and quite naturally extend the operation $\circ$ to $Fil ( E )$, as follows. Given on $E$ two filters ${\cal F, G} \in Fil ( E )$, we can define ${\cal F} \circ {\cal G} = \{~ F \circ G ~~|~~ F \in {\cal F},~ G \in {\cal G} ~\}$, where as usual, we defined $F \circ G = \{~ x \circ y ~~|~~ x \in F,~ y \in G ~\}$, taking into account that $F, G \subseteq E$, in view of the definition of filters on $E$. However, this extended operation $\circ$ on $Fil ( E )$ will {\it no} longer be a group, unless we are in the trivial case when $E$ has one single element. \\ In this way, in order to avoid such rather simple but awkward technical difficulties, it appears to be quite appropriate to consider any suitable, but otherwise quite arbitrary set $E^{\, \prime}$ as giving all the topological type processes, or TTP-s, which we shall use in order to define one or another topological type structure, or TTS, on $E$. 
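\\ Returning for a moment to the above remark on filters, here is a minimal illustration of the failure of the group structure, under the simplifying assumption that the group $E$ has at least two elements $e \neq a$, with $e$ its neutral element. The principal ultrafilter ${\cal U}_e$ generated by $\{ e \}$ acts as a neutral element for the extended operation $\circ$ on $Fil ( E )$, since $F \circ \{ e \} = \{ e \} \circ F = F$ for every $F \subseteq E$. However, the principal filter ${\cal F}$ generated by the two element set $\{ e, a \}$ can have no inverse : every set of the form $F \circ G$, with $F \in {\cal F}$ and $G \in {\cal G}$, contains the two distinct elements $e \circ y$ and $a \circ y$, for some $y \in G$, hence it can never equal $\{ e \}$, and thus ${\cal F} \circ {\cal G} \neq {\cal U}_e$, for every filter ${\cal G}$ on $E$. 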
\\ And as seen later, assuming the existence of a pre-order on $E^{\, \prime}$ is not going to lead to technical difficulties. \\ {\bf Presence of Natural Pre-Order on Topological Type \\ Processes} \\ It is important to note, Rosinger [7], that in the case of the usual topological type processes, or TTP-s, namely, when $E^{\,\prime}$ is given by the set of all sequences, nets, filters, etc., on $E$, there is a natural {\it pre-order} relation $\leq$ on $E^{\,\prime}$, that is, a binary relation which is {\it reflexive} and {\it transitive}, but need {\it not} always be {\it antisymmetric} as well. \\ Indeed, sequences have subsequences, nets have subnets, filters have finer filters, and so on. \\ Therefore, by considering such a pre-order $\leq$ on $E^{\,\prime}$, one can compensate for the rather high level of abstraction which brings in an arbitrary set $E^{\,\prime}$ as the topological type processes, or TTP-s, on the space of interest $E$. \\ And as mentioned, such a compensation can always be made naturally, thus without loss of generality. \\ We shall deal with this issue in more detail in the sequel. \\ {\bf Cauchy-ness as the Basic Concept} \\ As a second departure, we note that in the usual HKB concept of topology the first and most general concept is {\it not} that of a uniform space, but that of a topological one, which need not be uniform. \\ In other words, the first and most general concept is that of {\it convergent} topological type processes, TTP-s, be they sequences, nets, filters, etc., and {\it not} of {\it Cauchy} TTP-s. \\ This specific order of priority in defining concepts creates difficulties when we try to go beyond the usual HKB concept of topology, since in the case of the corresponding more general TTS-s defined in terms of convergence it is not so easy to find appropriate extended concepts of Cauchy-ness. \\ Consequently, we reverse the above usual priority order, and when defining the most general TTS-s, we shall start with {\it Cauchy} TTP-s, and after that define the {\it convergent} TTP-s, and do so in terms of the already defined Cauchy TTP-s. \\ {\bf Cauchy-ness as a Binary Relation} \\ The third departure is based on a fact which, however obvious upon a more careful analysis, is nevertheless not always brought explicitly enough to the fore. Let us present the respective issue in a simple and well known situation. Let $E$ be a space with a metric $d : E \times E \longrightarrow [ 0, \infty )$. Then, as is well known, a sequence $s : \mathbb{N} \longrightarrow E$, with terms $x_k = s ( k )$, is called {\it Cauchy}, if and only if \\ ~~~~~~$ \begin{array}{l} \forall~ \epsilon > 0 : \\ \exists~ n \in \mathbb{N} : \\ \forall~ k, l \in \mathbb{N} : \\ ~~~ k, l \geq n ~~\Longrightarrow~~ d ( x_k, x_l ) \leq \epsilon \end{array} $ \\ Now clearly, the property involved, namely \\ ~~~~~~$ k, l \geq n ~~\Longrightarrow~~ d ( x_k, x_l ) \leq \epsilon $ \\ is a {\it binary} relation with respect to the terms of the given sequence $s$. And it is expressed exclusively with reference to the terms of the respective sequence $s$. That is, no limit of the sequence is involved, even if such a limit may happen to exist. \\ This is in obvious contradistinction with what happens in the case where the sequence $s$ is {\it convergent} in the usual sense. 
Indeed, $s$ is convergent in the given metric space to some $x \in E$, if and only if \\ ~~~~~~$ \begin{array}{l} \forall~ \epsilon > 0 : \\ \exists~ n \in \mathbb{N} : \\ \forall~ m \in \mathbb{N} : \\ ~~~ m \geq n ~~\Longrightarrow~~ d ( x, x_m ) \leq \epsilon \end{array} $ \\ And here, the property involved, namely \\ ~~~~~~$ m \geq n ~~\Longrightarrow~~ d ( x, x_m ) \leq \epsilon $ \\ is clearly a {\it unary} and {\it not} binary relation with respect to the terms of the sequence $s$ under consideration. Furthermore, it is {\it not} formulated exclusively in terms of the sequence $s$ alone, since it refers to its limit $x$ as well. \\ In view of the above, and in view of the fact that here in this work the first basic concept is Cauchy-ness, and not convergence, we shall define a TTS on the set$E$ by a {\it binary} relation $\Xi$ on the TTP-s given by $E^{\, \prime}$, namely \\ (2.1)~~~ $ \Xi ~\subseteq~ E^{\, \prime} \times E^{\, \prime} $ \\ The presence of this binary relation $\Xi$ ~- supposed to model the {\it Cauchy-ness} of the TTP-s in $E^{\, \prime}$ - is in fact the main {\it novelty} of the approach in Rosinger [2-7]. \\ Given two arbitrary TTP-s $x^{\, \prime}, y^{\, \prime} \in E^{\, \prime}$, the meaning of the fact that they are in the binary relation $\Xi$, that is \\ (2.2)~~~ $ ( x^{\, \prime}, y^{\, \prime} ) ~\in~ \Xi $ \\ or written equivalently and more simply \\ (2.2$^{\, \prime}$)~~~ $ x^{\, \prime} ~\Xi~ y^{\, \prime} $ \\ is that, in case the space $E$ had a completion $E^{\#}$, then {\it both}~ $x^{\, \prime}$ and $y^{\, \prime}$ would {\it converge} in that completion to the {\it same} element $x^{\#}$, see (2.14) in the sequel. \\ Whenever for two TTP-s\, $x^{\, \prime}, y^{\, \prime} \in E^{\, \prime}$ we have the relationship $x^{\, \prime} ~\Xi~ y^{\, \prime}$, we shall say that $x^{\, \prime}$ and $y^{\, \prime}$ are {\it Cauchy related}. \\ It is important to note that, in general, the binary relation $\Xi$ is {\it not} supposed to be reflexive or transitive. However, as seen in (TTS2) below, it is always required to be {\it symmetric}. Also, when restricted to the Cauchy TTP-s, it is reflexive as well. \\ {\bf Localization of Topological Type Processes} \\ As is well known, in topology it is important to consider sequences, nets, filters, etc., which are defined not only on the whole space $E$ under consideration, but also on various of its nonvoid subsets $A \subseteq E$. \\ The generalization we shall use in this work is given by mappings \\ (2.3)~~~ $ T : {\cal P} ( E ) ~\longrightarrow~ {\cal P} ( E^{\, \prime} ) $ \\ from the power set of the space $E$ under consideration to the power set of $E^{\, \prime}$ of associated TTP-s. For $A \subseteq E$, the corresponding subset $T ( A ) \subseteq E^{\, \prime}$ is supposed to represent all the TTP-s based in $A$. For instance, if $E^{\, \prime}$ is given by all the usual sequences $s : \mathbb{N} \longrightarrow E$, then $T ( A )$ is the set of all usual sequences with values in $A$, that is, the set of all the sequences $s : \mathbb{N} \longrightarrow A$. Of course, upon convenience, one may as well define $T ( A )$ as the larger set \\ ~~~~~~$ T ( A ) ~=~ \left \{~ s : \mathbb{N} \longrightarrow E ~~~ \begin{array}{|l} ~~~ \exists~ n \in \mathbb{N} : \\ ~~~ \forall~ m \in \mathbb{N} : \\ ~~~~~ m \geq n ~\Longrightarrow~ s ( m ) \in A \end{array} ~\right \} $ \\ Obviously, if one uses nets, filters, etc., for TTP-s, one can define the mapping $T$ similarly. 
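\\ For instance, if the TTP-s are given by filters, that is, $E^{\, \prime} = Fil ( E )$, then one natural choice - which will in fact be used in (2.12) below - is $T ( A ) = \{~ {\cal F} \in Fil ( E ) ~~|~~ A \in {\cal F} ~\}$, for nonvoid $A \subseteq E$, together with $T ( \phi ) = \phi$. 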
\\ The {\it two} conditions such a mapping $T$ is supposed to satisfy are \\ (TTP1)~~~ $ \phi \neq A \subseteq B \subseteq E ~~\Longrightarrow~~ \phi \neq T ( A ) \subseteq T ( B ) \subseteq E^{\, \prime} $ \\ (TTP2)~~~ $ T ( E ) ~=~ E^{\, \prime} $ \\ The meaning and necessity of condition (TTP1) is obvious. The condition (TTP2) simply means that in the set $E^{\, \prime}$ of TTP-s associated with $E$ there are {\it no} redundant elements. In other words, all TTP-s in $E^{\, \prime}$ are based in $E$ since they belong to $T ( E )$. \\ {\bf Definition of Topological Type Structures} \\ With the above preparation and motivation, we can now come to the general definition of {\it topological type structures}, or in short, TTS-s, Rosinger [2-7], which are given by any {\it quadruplet} \\ (TTS)~~~ $ \sigma ~=~ ( E, E^{\, \prime}, T , \Xi ) $ \\ which satisfies the conditions (TTP1), (TTP2) with respect to the mapping $T$, as well as the following three additional ones, with respect to the binary relation $\Xi$ of Cauchy-ness, namely \\ (TTS1)~~~ $ \begin{array}{l} \forall~ x \in E : \\ \\ \forall~ x^{\, \prime}, y^{\, \prime} \in T ( \{ x \} ) \subseteq E^{\, \prime} : \\ \\ ~~~ x^{\, \prime} ~\Xi~ y^{\, \prime} \end{array} $ \\ also \\ (TTS2)~~~ $ \begin{array}{l} \forall~ x^{\, \prime}, y^{\, \prime} \in E^{\, \prime} : \\ \\ ~~~ x^{\, \prime} ~\Xi~ y^{\, \prime} ~~\Longrightarrow~~ y^{\, \prime} ~\Xi~ x^{\, \prime} \end{array} $ \\ as well as \\ (TTS3)~~~ $ \begin{array}{l} \forall~ x^{\, \prime}, y^{\, \prime} \in E^{\, \prime} : \\ \\ ~~~ x^{\, \prime} ~\Xi~ y^{\, \prime} ~~\Longrightarrow~~ x^{\, \prime} ~\Xi~ x^{\, \prime} \end{array} $ \\ The meaning of these three conditions is as follows. Condition (TTS1) is in fact a rather {\it trivial} requirement. Indeed, it says that if two TTP-s $x^{\, \prime}, y^{\, \prime} \in E^{\,\prime}$ are based in the same point $x$ of the space $E$, then they are Cauchy related. Here we note that TTP-s based in a single point $x \in E$ are usually constant sequences or nets with the terms equal to $x$, filters generated by the single point $x$, thus fixed ultrafilters, etc. Therefore, in usual HKB topologies they are certainly convergent to such an $x$, while in uniform topologies they are also Cauchy. \\ The condition (TTS2) is {\it natural}, since it requires that the binary relation $\Xi$ be symmetric. And certainly, there seems to exist no a priori reason why one should not assume that. \\ Finally, the condition (TTS3) is the other {\it novelty}. And it requires that in case two TTP-s $ x^{\, \prime}, y^{\, \prime} \in E^{\, \prime}$ are Cauchy related to one another, then each of them is Cauchy related to {\it itself}. The meaning of this condition is further clarified in the definition of {\it Cauchy TTP-s} given in (2.4) below. 
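\\ To fix ideas, here is a simple illustration, under the assumption that $E$ is a metric space with metric $d$. Take $E^{\, \prime}$ to be the set of all sequences in $E$, let $T ( A )$ consist of the sequences with values in $A$, and let $x^{\, \prime} ~\Xi~ y^{\, \prime}$ hold, if and only if the interleaved sequence $x_1, y_1, x_2, y_2, \ldots$ is Cauchy with respect to $d$. Then the conditions (TTS1) - (TTS3) are readily verified, and as seen from (2.4), (2.5) below, this $\Xi$ recovers precisely the usual Cauchy, respectively convergent, sequences in $( E, d )$. 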
\\ {\bf Cauchy and Convergent Topological Type Processes} \\ Given $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ any TTS on $E$, we define the set of {\it Cauchy TTP-s}~ in $\sigma$ as given by \\ (2.4)~~~ $ Cauchy~ ( \sigma ) ~=~ \{~ x^{\, \prime} \in E^{\, \prime} ~~|~~ x^{\, \prime} ~\Xi~ x^{\, \prime} ~\} $ \\ Further, for any $x \in E$, we define the set of {\it TTP-s convergent to} $x$ in $\sigma$ as being given by \\ (2.5)~~~ $ Conver~ ( \sigma, x ) ~=~ \left \{~ x^{\, \prime} \in E^{\, \prime} ~~ \begin{array}{|l} ~~ \exists~ x^{\, \prime}_0 \in T ( \{ x \} ) \subseteq E^{\, \prime} : \\ \\ ~~~~~ x^{\, \prime} ~\Xi~ x^{\, \prime}_0 \end{array} ~\right \} $ \\ Clearly, in view of (TTS1), (TTS3), we have the inclusions \\ (2.6)~~~ $ \begin{array}{l} \forall~ x \in E : \\ \\ ~~~~ \phi \neq T ( \{ x \} ) ~\subseteq~ Conver~ ( \sigma, x ) ~\subseteq~ Cauchy~ ( \sigma ) ~\subseteq~ E^{\, \prime} \end{array} $ \\ which extend the customary relationships between constant, convergent and Cauchy sequences, nets, filters, etc., in case of usual uniform spaces. \\ We note, see (2.4), that in a TTS $\sigma = ( E, E^{\, \prime}, T , \Xi )$, the binary relation $\Xi$ is {\it reflexive} precisely on the subset $Cauchy~ ( \sigma )$ of all the TTP-s $E^{\,\prime}$. \\ {\bf Complete Spaces} \\ From the above follows naturally the definition of complete TTS-s. Namely, given any $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$, we say that $\sigma$ is {\it complete}, if and only if \\ (2.7)~~~ $ Cauchy~ ( \sigma ) ~=~ \bigcup_{x \in E}~ Conver~ ( \sigma, x ) $ \\ In view of (2.6), the inclusion $"\supseteq"$ in (2.7) always holds. \\ {\bf Topological Support} \\ Given $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ any TTS on $E$, it will often be convenient to consider its {\it topological support on} $E$ \\ (2.8)~~~ $ ( E, E^{\, \prime}, T ) $ \\ Clearly, such a topological support on a set $E$ can be defined or considered independently of the binary relation $\Xi \subseteq E^{\,\prime} \times E^{\,\prime}$. \\ Also, a given topological support (2.8) can be associated with a variety of binary relations $\Xi \subseteq E^{\,\prime} \times E^{\,\prime}$, so as to form corresponding TTS-s $\sigma = ( E, E^{\, \prime}, T , \Xi )$. \\ {\bf Pre-Order on Topological Type Processes} \\ As mentioned, it is natural to assume the existence of a pre-order $\leq$ on the set $E^{\,\prime}$ of topological type processes, or TTP-s, associated with the space $E$. Let us therefore see how the above definitions leading to that in (TTS) are adapted in such a case. \\ First, the mapping $T : {\cal P} ( E ) \longrightarrow {\cal P} ( E^{\,\prime} )$ can naturally be required to satisfy in addition to (TTP1) and (TTP2), also the condition \\ (TTP3)~~~ $ A \subseteq E,~ x^{\,\prime} \in T ( A ),~ y^{\,\prime} \in E^{\,\prime},~ x^{\,\prime} \leq y^{\,\prime} ~~~\Longrightarrow~~~ y^{\,\prime} \in T ( A ) $ \\ Then the binary relation $\Xi \subseteq E^{\,\prime} \times E^{\,\prime}$ of Cauchy-ness, in addition to (TTS1) - (TTS3), can naturally be required to satisfy the condition \\ (TTS4)~~~ $ x^{\,\prime} ~\Xi~ y^{\,\prime},~ x^{\,\prime} \leq u^{\,\prime},~ y^{\,\prime} \leq v^{\,\prime} ~~~\Longrightarrow~~~ u^{\,\prime} ~\Xi~ v^{\,\prime} $ \\ for every $x^{\,\prime}, y^{\,\prime}, u^{\,\prime}, v^{\,\prime} \in E^{\,\prime}$. \\ Related or alternative such natural conditions, together with their consequences, are presented in (3.1) - (3.3) and (3.20). 
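\\ For instance, when the TTP-s are sequences, one may read $x^{\,\prime} \leq y^{\,\prime}$ as "$y^{\,\prime}$ is a subsequence of $x^{\,\prime}$", and when they are filters, as "$y^{\,\prime}$ is finer than $x^{\,\prime}$". Condition (TTP3) then simply says that subsequences, respectively finer filters, remain based in the same subsets $A \subseteq E$, while (TTS4) says that the Cauchy relation $\Xi$ is preserved when passing to such refinements. 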
\\ {\bf Topological Type Structures with Refinement, or TTSR-s} \\ In this way we can augment the definition of topological type structures in (TTS) with the following definition of {\it topological type structures with refinement of their topological type processes}, or in short TTSR-s, given by \\ (TTSR)~~~ $ \sigma ~=~ ( E, ( E^{\,\prime}, \leq ), T, \Xi ) $ \\ where $( E^{\,\prime}, \leq )$ is a pre-ordered set, and the conditions (TTP1) - (TTP3), together with (TTS1) - (TTS4), are satisfied. \\ In view of (2.4), (2.5), it follows easily that we have \\ (2.9)~~~ $ x^{\,\prime} \in Cauchy ( \sigma ),~ y^{\,\prime} \in E^{\,\prime},~ x^{\,\prime} \leq y^{\,\prime} ~~~\Longrightarrow~~~ y^{\,\prime} \in Cauchy ( \sigma ) $ \\ while for every $x \in E$, we have \\ (2.10)~~~ $ \begin{array}{l} x^{\,\prime} \in Conver ( \sigma, x ),~ y^{\,\prime} \in E^{\,\prime},~ x^{\,\prime} \leq y^{\,\prime} ~~~\Longrightarrow \\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\Longrightarrow~~~ y^{\,\prime} \in Conver ( \sigma, x ) \end{array} $ \\ \\ {\bf Two Basic Examples} \\ {\bf Usual Uniform Topologies as TTS-s.} Suppose we are given a usual uniform topology $\upsilon$ on $E$. In this case we can, for instance, choose \\ (2.11)~~~ $ E^{\, \prime} = Fil ( E ) $ \\ Then the mapping (2.3) can be taken as \\ (2.12)~~~ $ T ( A ) ~=~ \{~ {\cal F} \in Fil ( E ) ~~|~~ A \in {\cal F} ~\},~~~ \mbox{for}~~ A \subseteq E,~~ A \neq \phi $ \\ while $T ( \phi ) = \phi$, and we obtain the respective topological support on $E$ given by \\ (2.13)~~~ $ ( E, E^{\,\prime}, T ) ~=~ ( E, Fil ( E ), T ) $ \\ As for the binary relation (2.1), we can take it defined for any two filters ${\cal F, G} \in Fil ( E )$, by \\ (2.14)~~~ $ {\cal F} ~\Xi_{\,\upsilon}~ {\cal G} ~~~\Longleftrightarrow~~~ \left(~~ \begin{array}{l} \exists~ x^{\#} \in E^{\#} : \\ \\ ~~~~ {\cal F} ~~\mbox{and}~~ {\cal G} ~~\mbox{converge in}~~ E^{\#} ~~\mbox{to}~~ x^{\#} \end{array} ~~\right) $ \\ where by $E^{\#}$ we denoted the usual completion of $E$ in the uniform topology $\upsilon$. \\ It is now easy to see that with the above choices \\ (2.15)~~~ $\sigma_\upsilon ~=~ ( E, E^{\, \prime}, T, \Xi_{\,\upsilon} ) $ \\ is a TTS on $E$, and $Cauchy~ ( \sigma_\upsilon )$ and $Conver~ ( \sigma_\upsilon, x )$, with $x \in E$, are the respective sets of Cauchy and convergent filters on $E$ in the usual uniform topology $\upsilon$ on $E$. \\ Also, $E$ is complete in the TTS $\sigma_\upsilon$, if and only if it is complete in the usual uniform topology $\upsilon$. \\ Furthermore, in this case the binary relation $\Xi_{\,\upsilon}$ is obviously {\it transitive}, which in the general definition of TTS-s is not required. \\ If we consider on $E^{\, \prime} = Fil ( E )$ the usual pre-order of filter refinement, which in fact is a partial order, simply given by the inclusion $\subseteq$ among filters, then we have the stronger result that \\ (2.15$^{\,\prime}$)~~~ $\sigma_\upsilon ~=~ ( E, ( E^{\, \prime}, \subseteq ), T, \Xi_{\,\upsilon} ) $ \\ is a TTSR on $E$. \\ {\bf Usual Topologies as TTS-s.} Now, let us be given a usual topology $\tau$ on $E$. Then we can take on $E$ the same topological support as in (2.13). 
For the binary relation (2.1), we take \\ (2.16)~~~ $ \Xi_{\,\tau} ~=~ \left \{~ ( {\cal F, G } ) \in Fil ( E) \times Fil ( E ) ~~ \begin{array}{|l} \exists~ x \in E : \\ \\ ~~~~ {\cal F, G} ~~\stackrel{\tau} \longrightarrow~~ x \end{array} ~ \right \} $ \\ where ${\cal F} ~~\stackrel{\tau} \longrightarrow~~ x$ means that the filter ${\cal F}$ on $E$ converges to $x$ in the usual topology $\tau$ on $E$. \\ It follows easily that with the above choices \\ (2.17)~~~ $ \sigma_\tau ~=~ ( E, E^{\, \prime}, T, \Xi_{\,\tau} ) $ \\ is a TTS on $E$, with $Conver~ ( \sigma_\tau, x )$ being the set of filters on $E$ convergent to $x$ in the usual topology $\tau$ on $E$. \\ Clearly, we also have \\ (2.18)~~~ $ Cauchy~ ( \sigma_\tau ) ~=~ \bigcup_{x \in E}~ Conver~ ( \sigma_\tau, x ) $ \\ which means that $\sigma_\tau$ is a complete TTS on $E$. Therefore this particular way, namely, $\tau \longmapsto \sigma_\tau$, of associating a TTS on $E$ with a usual topology $\tau$ on $E$ is rather trivial. \\ We also note that the binary relation $\Xi_{\,\tau}$ is {\it transitive}. \\ If, again, we consider on $E^{\, \prime} = Fil ( E )$ the usual partial order of filter refinement, which in fact is simply given by the inclusion $\subseteq$ among filters, then we have the stronger result that \\ (2.17$^{\,\prime}$)~~~ $ \sigma_\tau ~=~ ( E, ( E^{\, \prime}, \subseteq ), T, \Xi_{\,\tau} ) $ \\ is a TTSR on $E$. \\ {\bf Associations of Alternative TTS-s.} Obviously, to a given usual topology $\tau$ on $E$ one may as well associate other TTS-s than the above $\sigma_\tau$. And this may, among others, be done in order to obtain a completion $E^{\#}$ which, unlike in (2.18), may be larger than $E$ itself. One way to do that is by considering on $E$ certain TTS-s \\ (2.19)~~~ $ \sigma ~=~ ( E, E^{\,\prime}, T, \Xi ) $ \\ for which we have \\ (2.20)~~~ $ \Xi_{\,\tau} ~\subseteq~ \Xi $ \\ In such a case obviously we will have \\ (2.21)~~~ $ \begin{array}{l} Cauchy~ ( \sigma_\tau ) ~\subseteq~ Cauchy~ ( \sigma ) \\ \\ Conver~ ( \sigma_\tau, x ) ~\subseteq~ Conver~ ( \sigma, x ), ~~\mbox{for}~~ x \in E \end{array} $ \\ while at the same time it may happen that $\sigma$ itself is complete, that is, it satisfies (2.7). \\ {\bf The Nonrigidity of TTSR-s} \\ It is important to note that the TTSR-s defined above are {\it nonrigid} mathematical structures. Indeed, given $\sigma = ( E, ( E^{\,\prime}, \leq ), T, \Xi )$ a TTSR on a set $E$, it involves {\it four} defining entities, namely, a set $E^{\,\prime}$, a pre-order $\leq$ on that set, a mapping $T$, and a binary relation $\Xi$ on $E^{\,\prime}$. And as shown by the respective conditions required about these four entities, those conditions are merely {\it compatibility} conditions, thus they do {\it not} determine any one of these four entities as a function of the other three. \\ This is in sharp contradistinction to the case of the usual HKB concept of topology, where, for instance, the open sets are supposed to determine uniquely all the other topological entities. \\ Therefore, as mentioned earlier, it is precisely this {\it nonrigid} aspect of the TTSR-s which offers the possibility of a suitable extension of the usual HKB concept of topology. \\ \\ {\large \bf 3. Topological Type Structures with Pre-ordered \\ \hspace*{0.55cm} Topological Type Processes Can Lead Back to \\ \hspace*{0.55cm} Usual HKB Topologies} \\ Here we shall show that, under rather general and natural conditions, TTS-s which have pre-ordered TTP-s can {\it lead back} to usual HKB topologies. 
And in fact, we shall give two such constructions, Rosinger [7, pp. 141-151]. Needless to say, there may be other similar constructions as well. \\ These constructions give an indication of the fact that the concept of TTS as defined in this work and originated in Rosinger [7], is {\it not} unduly abstract or general. \\ Also, they show how natural is to consider a pre-order on TTP-s. \\ On the other hand, one should note that the mentioned natural constructions leading from TTS-s back to usual HKB topologies do {\it not} necessarily mean that the former are reduced to the latter, since as mentioned, the former will typically be more general due, among others, to their {\it nonrigid} structure. Moreover, one can only do so under certain conditions which, albeit, appear to be quite natural. \\ This argument is further supported by the fact that, as seen next, from a given TTS one can go back in more than one natural way to usual HKB topologies. \\ {\bf Compatible Pre-ordered Topological Type Processes} \\ In order to illustrate the conceptual nonrigidity involved in the present approach to pseudo-topologies, here we deal with the possible connection between TTS-s and pre-orders on TTP-s in an alternative manner to that leading to the definition of TTSR-s given in (TTSR) above. \\ It is important to note that usual topological type processes, or TTP-s, have a {\it two fold} connection with pre-orders, Rosinger [7, pp. 141-151], namely : \begin{itemize} \item each such TTP on a given space $E$ is a mapping of a pre-ordered directed set into a certain suitably given set, \item there is a further pre-order on the set of all such TTP-s associated to the space $E$. \end{itemize} As mentioned earlier, certainly the above is the case when, for instance, TTP-s are given by nets or filters. Indeed, nets on a given nonvoid set $E$ are mappings of directed pre-orders into $E$, while two such nets can be compared with one another when one is a subnet of the other. Similarly, filters ${\cal F}$ on $E$ can be seen as mappings ${\cal F} \ni F \longmapsto F \in {\cal P} ( E )$. Thus they are nets with values in ${\cal P} ( E )$, since ${\cal F}$ is directed with respect to the partial order $\supseteq$. Furthermore, two such filters on $E$ can be compared with one another when one is more fine than the other. \\ Let $\sigma = ( E, E^{\, \prime}, T , \Xi )$ be any TTS on the nonvoid set $E$ and let $\leq$ be any pre-order on $E^{\, \prime}$, which we recall, is the set of TTP-s of $\sigma$. 
\\ We call $\sigma = ( E, E^{\, \prime}, T , \Xi )$ and $( E^{\, \prime}, \leq )$ {\it compatible}, if and only if the following three conditions hold \\ (3.1)~~~ $ \begin{array}{l} \forall~~~ A \subseteq E,~ x^{\, \prime} \in T ( A ),~ y^{\, \prime} \in E^{\, \prime} ~: \\ \\ ~~~~~ x^{\, \prime} ~\leq~ y^{\, \prime} ~~~\Longrightarrow~~~ y^{\, \prime} \in T ( A ) \end{array} $ \\ \\ \\ (3.2)~~~ $ \begin{array}{l} \forall~~~ x \in E,~ x^{\, \prime} \in Conver~ ( \sigma, x ),~ y^{\, \prime} \in E^{\, \prime} ~: \\ \\ ~~~~~ x^{\, \prime} ~\leq~ y^{\, \prime} ~~~\Longrightarrow~~~ y^{\, \prime} \in Conver~ ( \sigma, x ) \end{array} $ \\ \\ \\ (3.3)~~~ $ \begin{array}{l} \forall~~~ x^{\, \prime} \in Cauchy~ ( \sigma ),~ y^{\, \prime} \in E^{\, \prime} ~: \\ \\ ~~~~~ x^{\, \prime} ~\leq~ y^{\, \prime} ~~~\Longrightarrow~~~ \left ( ~~ \begin{array}{l} y^{\, \prime} \in Cauchy~ ( \sigma ) \\ \\ ( x^{\, \prime}, y^{\, \prime} ) \in~ \Xi \end{array} ~~ \right ) \end{array} $ \\ \\ It is easy to see that in case the TTP-s are given by nets or filters, then the above compatibility conditions (3.1) - (3.3) are indeed satisfied. \\ Let us further note that (3.1) is but an equivalent reformulation of the earlier (TTP3), while in view of (TTS3), the condition (3.3) is implied by (TTS4). \\ {\bf Closed and Open Subsets, Neighbourhoods} \\ Let us for a moment return to general TTS-s $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$. A subset $A \subseteq E$ is called $\sigma$-{\it closed}, if and only if \\ (3.4)~~~ $ \begin{array}{l} \forall~~~ x^{\, \prime} \in T ( A ),~ x \in E ~: \\ \\ ~~~~~ x^{\, \prime} \in Conver~ ( \sigma, x ) ~~~\Longrightarrow~~~ x \in A \end{array} $ \\ \\ We denote by $Cl~ ( \sigma )$ the set of all $\sigma$-closed subsets of $E$. \\ A subset $A \subseteq E$ is called $\sigma$-{\it open}, if and only if $E \setminus A$ is $\sigma$-closed. And we denote by $Op~ ( \sigma )$ the set of all $\sigma$-open subsets of $E$. 
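\\ As a simple check, let us note that in the case of the TTS $\sigma_\tau$ in (2.17), associated with a usual topology $\tau$ on $E$, a subset $A \subseteq E$ is $\sigma_\tau$-closed in the sense of (3.4), if and only if it is closed in $\tau$. Indeed, it is a well known fact that $A$ is closed in $\tau$ precisely when, for every filter ${\cal F}$ on $E$ with $A \in {\cal F}$ which converges in $\tau$ to some $x \in E$, one has $x \in A$. 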
\\ It is easy to see that \\ (3.5)~~~ \mbox{arbitrary intersections of}~ $\sigma$-closed ~\mbox{sets are}~ $\sigma$-closed \\ (3.6)~~~ \mbox{arbitrary unions of}~ $\sigma$-open ~\mbox{sets are}~ $\sigma$-open \\ (3.7)~~~ $\phi$ ~\mbox{and}~ $E$ ~\mbox{are both}~ $\sigma$-closed ~\mbox{and}~ $\sigma$-open \\ Consequently, we can define the operation of $\sigma$-{\it closure} as follows \\ (3.8)~~~ $ E \supseteq A ~~~\longmapsto~~~ cl_\sigma A ~=~ \bigcap_{\, B\, \in\, Cl\, ( \sigma ),~ B \supseteq A}~ B \in Cl~ ( \sigma ) $ \\ and of $\sigma$-{\it interior} by \\ (3.9)~~~ $ E \supseteq A ~~~\longmapsto~~~ in_\sigma A ~=~ \bigcup_{\, B\, \in\, Op\, ( \sigma ),~ B \subseteq A}~ B \in Op~ ( \sigma ) $ \\ It is easy to see that the following properties hold for all subsets $A \subseteq E$ \\ (3.10)~~~ $ A \in Cl~ ( \sigma ) ~~~\Longleftrightarrow~~~ A ~=~ cl_\sigma A $ \\ (3.11)~~~ $ A ~\subseteq~ cl_\sigma A ~=~ cl_\sigma~ cl_\sigma A $ \\ (3.12)~~~ $ A \in Op~ ( \sigma ) ~~~\Longleftrightarrow~~~ A ~=~ in_\sigma A $ \\ (3.13)~~~ $ A ~\supseteq~ in_\sigma A ~=~ in_\sigma~ in_\sigma A $ \\ Also, for $A \subseteq B \subseteq E$ we have \\ (3.14)~~~ $ cl_\sigma A ~\subseteq~ cl_\sigma B,~~~ in_\sigma A ~\subseteq~ in_\sigma B $ \\ while \\ (3.15)~~~ $ cl_\sigma \phi ~=~ in_\sigma \phi ~=~ \phi,~~~ cl_\sigma E ~=~ in_\sigma E ~=~ E $ \\ Let us now define the set of $\sigma$-{\it neighbourhoods} of a point $x \in E$ by \\ (3.16)~~~ $ {\cal V}_\sigma ( x ) ~=~ \{~ A \subseteq E ~~|~~ \exists~~ B \in Op~ ( \sigma ) ~:~ x \in B \subseteq A ~\} $ \\ It follows easily that for a subset $A \subseteq E$, we have \\ (3.17)~~~ $ A \in Op~ ( \sigma ) ~~~\Longleftrightarrow~~~ \forall~~ x \in A ~:~ A \in {\cal V}_\sigma ( x ) $ \\ {\bf From TTS-s Back to HKB Topologies} \\ As seen above in (3.5) - (3.17), within arbitrary TTS-s $\sigma = ( E, E^{\, \prime}, T , \Xi )$ one can naturally define corresponding concepts of closed and open sets, closure and interior operators, as well as neighbourhoods of points in $E$. And in view of (3.5) and (3.6), the only thing which prevents such associated concepts from being identical with those in usual HKB topologies is that, in general, finite unions of closed sets need not be closed, and thus, finite intersections of open sets need not be open. \\ Here we show that in case TTS-s $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ are {\it compatible} with suitably associated pre-orders $( E^{\, \prime}, \leq )$, then the above association in (3.5) - (3.17) with corresponding topological concepts does in fact lead to usual HKB topologies. \\ For that purpose, it is useful to introduce the following $\sigma$-{\it adherence} operator, Rosinger [7, pp. 141-151] \\ (3.18)~~~ $ T_\sigma : {\cal P} ( E ) \times {\cal P} ( E ) ~~~\longrightarrow~~~ {\cal P} ( E^{\, \prime} ) $ \\ defined for $A, B \subseteq E$ by \\ (3.19)~~~ $ T_\sigma ( A, B ) ~=~ \{~ x^{\, \prime} \in T ( B ) ~~|~~ \exists~~~ y^{\, \prime} \in T ( A ) ~:~ x^{\, \prime} \leq y^{\, \prime} ~\} $ \\ In terms of this operator, the earlier compatibility conditions (3.1) - (3.3) are now strengthened as follows. 
\\ We call $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ and $( E^{\, \prime}, \leq )$ {\it strongly compatible}, if and only if in addition to (3.1) - (3.3), the following condition holds : \\ (3.20)~~~ $ \begin{array}{l} \forall~~~ A, B \subseteq E ~: \\ \\ ~~~~~ T ( A \cup B ) ~\subseteq~ T_\sigma ( A, A \cup B ) \cup T_\sigma ( B, A \cup B ) \end{array} $ \\ Once again it is easy to see that in case the TTP-s are given by nets or filters, then the above compatibility condition (3.20) is indeed satisfied. \\ Now we can obtain the following \\ {\bf Proposition} ( Rosinger [7, pp. 145,146] )\\ Given $\sigma$ and $( E^{\, \prime}, \leq )$ strongly compatible, then \\ (3.21)~~~ finite unions of $\sigma$-closed sets are $\sigma$-closed \\ (3.22)~~~ finite intersections of $\sigma$-open sets are $\sigma$-open \\ Also, for every $x \in E,~ x^{\, \prime} \in E^{\, \prime}$, we have \\ (3.23)~~~ $ x^{\, \prime} \in Conver ( \sigma, x ) \Longrightarrow \left ( \begin{array}{l} \forall~ A \in {\cal V}_\sigma ( x ) ~: \\ \\ \exists~ y^{\, \prime} \in Conver ( \sigma, x ) \cap T ( A ) ~: \\ \\ ~~~ x^{\, \prime} \leq y^{\, \prime} \end{array} \right ) $ \\ {\bf Proof} \\ For (3.21), let $A, B$ be $\sigma$-closed in $E$, $x^{\, \prime} \in T ( A \cup B )$ and $x \in E$, such that $x^{\, \prime} \in Conver ( \sigma, x )$. Then we show that $x \in A \cup B$. Thus in view of (3.4), that will result in $A \cup B$ being $\sigma$-closed. \\ Indeed $x^{\, \prime} \in T ( A \cup B )$ and (3.20) give $x^{\, \prime} \in T_\sigma ( A, A \cup B )$, or $x^{\, \prime} \in T_\sigma ( B, A \cup B )$. In case the first relation holds, then according to (3.19) it follows that $~\exists~~ y^{\, \prime} \in T ( A ) ~:~ x^{\, \prime} \leq y^{\, \prime}$. Hence $x^{\, \prime} \in Conver ( \sigma, x )$ and (3.2) give $y^{\, \prime} \in Conver ( \sigma, x )$. But then $y^{\, \prime} \in T ( A )$ and $A$ being $\sigma$-closed imply $x \in A$. In case the second relation holds, the same argument with $B$ in place of $A$ gives $x \in B$. Thus indeed (3.21) holds. Consequently, so does (3.22). \\ We show now (3.23). Let $A \in {\cal V}_\sigma ( x )$, then (3.16) gives $B \in Op~ ( \sigma )$, with $x \in B \subseteq A$. But obviously $x^{\, \prime} \in Conver ( \sigma, x ) \subseteq E^{\, \prime} = T ( E ) \subseteq T_\sigma ( B, E ) \cup T_\sigma ( E \setminus B, E )$. Assume now that $x^{\, \prime} \in T_\sigma ( E \setminus B, E )$. Then, by (3.19), there is $z^{\, \prime} \in T ( E \setminus B )$ with $x^{\, \prime} \leq z^{\, \prime}$, hence (3.2) gives $z^{\, \prime} \in Conver ( \sigma, x )$, and since $E \setminus B$ is $\sigma$-closed, this implies $x \in E \setminus B$, which in view of the above is absurd. \\ Thus $x^{\, \prime} \in T_\sigma ( B, E )$ which according to (3.19) gives $y^{\, \prime} \in T ( B )$, with $x^{\, \prime} \leq y^{\, \prime}$. But then (3.2) yields $y^{\, \prime} \in Conver ( \sigma, x )$, and the proof of (3.23) is completed since $y^{\, \prime} \in T ( B ) \subseteq T ( A )$, as we have $B \subseteq A$. $\Box$ \\ In view of (3.21), (3.22), it follows that under the conditions of the above Proposition, we can associate with the TTS $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ a usual HKB topology $\tau_\sigma$ on $E$ which will have the {\it same} closed and open sets. \\ Here, however, one should note that the associated topology $\tau_\sigma$ does {\it not} in general express all the structure or information in the initial TTS $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$. For instance, it need not fully express the {\it uniform} aspects involved in $\Xi$. 
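\\ For instance - taking, say, $E = ( 0, 1 )$ - the usual metric on $E$ and the complete metric transported onto $E$ from $\mathbb{R}$ via a homeomorphism $( 0, 1 ) \simeq \mathbb{R}$ generate one and the same topology, hence the corresponding TTS-s (2.15) lead to one and the same associated topology $\tau_\sigma$ on $E$, while their binary relations $\Xi$ differ : the filter generated by the tails of the sequence $( 1/n )_{n \geq 2}$ is Cauchy for the first, but not for the second. 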
\\ {\bf Associating Compatible Topological Type Processes} \\ In view of the above, there is an interest in associating to {\it arbitrary} TTS-s $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ certain {\it compatible} pre-ordered TTP-s, in order to be further able to associate usual HKB topologies, in case the respective pre-ordered TTP-s turn out to be strongly compatible as well. \\ Here we shall give {\it two} such constructions in which we associate compatible pre-ordered TTP-s with {\it arbitrary} TTS-s. \\ As for conditions when such associated pre-ordered TTP-s are in fact strongly compatible with the given TTS-s, this is an issue which can be dealt with separately. \\ {\bf The First Association of Pre-ordered TTP-s} \\ Let $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ be any TTS on the nonvoid set $E$. We associate with $\sigma$ the following TTS on the same set $E$, defined by \\ (3.24)~~~ $ \sigma_P ~=~ ( E, E^{\, \prime}_P, T_P , \Xi_P ) $ \\ Here \\ (3.25)~~~ $ E^{\, \prime}_P ~=~ {\cal P} ( E^{\, \prime} ) $ \\ while the pre-order $\leq_P$ on $E^{\, \prime}_P$, in fact a partial order, is defined by \\ (3.26)~~~ $ x^{\, \prime}_P ~\leq_P~ y^{\, \prime}_P ~~~\Longleftrightarrow~~~ x^{\, \prime}_P ~\supseteq~ y^{\, \prime}_P $ \\ for every $x^{\, \prime}_P, y^{\, \prime}_P \in E^{\, \prime}_P$. \\ Further, the mapping \\ (3.27)~~~ $ T_P : {\cal P} ( E ) ~~\longrightarrow~~ {\cal P} ( E^{\, \prime}_P ) $ \\ is defined for $A \subseteq E$, by \\ (3.28)~~~ $ T_P ( A ) ~=~ {\cal P} ( T ( A ) ) $ \\ Finally, $\Xi_P$ is defined as being the set of all pairs $( x^{\, \prime}_P, y^{\, \prime}_P ) \in E^{\, \prime}_P \times E^{\, \prime}_P$ such that \\ (3.29)~~~ $ \begin{array}{l} \forall~~~ x^{\, \prime}, y^{\, \prime} \in x^{\, \prime}_P \cup y^{\, \prime}_P ~: \\ \\ ~~~~~ ( x^{\, \prime}, y^{\, \prime} ) \in \Xi \end{array} $ \\ \\ The result of interest for us is in the following easy to prove \\ {\bf Proposition} ( Rosinger [7, p. 148] ) \\ Let $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ be any TTS on the nonvoid set $E$. Then $\sigma_P$ and $( E^{\, \prime}_P, \leq_P )$ are compatible. \\ {\bf The Second Association of Pre-ordered TTP-s} \\ Let $\eta = ( E, E^{\, \prime}, T )$ be a topological support on the nonvoid set $E$, see (2.8). We define the mapping \\ (3.30)~~~ $ E^{\, \prime} \ni x^{\, \prime} ~\longmapsto~ {\cal W}_\eta ( x^{\, \prime} ) ~=~ \{~ \phi \neq A ~\subseteq~ E ~~|~~ x^{\, \prime} \in T ( A ) ~\} $ \\ Then we have for $x^{\, \prime} \in E^{\, \prime}$ the relations \\ (3.31)~~~ $ \phi \notin {\cal W}_\eta ( x^{\, \prime} ) ~\neq~ \phi $ \\ while for $\phi \neq A \subseteq B \subseteq E$, we have \\ (3.32)~~~ $ A \in {\cal W}_\eta ( x^{\, \prime} ) ~~\Longrightarrow~~ B \in {\cal W}_\eta ( x^{\, \prime} ) $ \\ Consequently, ${\cal W}_\eta ( x^{\, \prime} )$, with $x^{\, \prime} \in E^{\, \prime}$, are filters on $E$, if and only if \\ (3.33)~~~ $ T ( A ) \cap T ( B ) ~\subseteq~ T ( A \cap B ),~~ \mbox{with}~ \phi \neq A, B \subseteq E $ \\ Finally, we define the pre-order $\leq_\eta$ on $E^{\, \prime}$, by \\ (3.34)~~~ $ x^{\, \prime} ~\leq_\eta~ y^{\, \prime} ~~~\Longleftrightarrow~~~ {\cal W}_\eta ( x^{\, \prime} ) ~\subseteq~ {\cal W}_\eta ( y^{\, \prime} ) $ \\ where $x^{\, \prime}, y^{\, \prime} \in E^{\, \prime}$. \\ Now we obtain the following \\ {\bf Proposition} ( Rosinger [7, p. 151] ) \\ Let $\sigma ~=~ ( E, E^{\, \prime}, T , \Xi )$ be a TTS on the nonvoid set $E$, and let $\eta = ( E, E^{\, \prime}, T )$ be its topological support. 
\\ Then $\sigma$ and $( E^{\, \prime}, \leq_\eta )$ are compatible, if and only if (3.2) and (3.3) hold. \\ {\bf Proof} \\ It suffices to show that (3.1) is satisfied. This however follows immediately from (3.30) and (3.34). Indeed, if $\phi \neq A \subseteq E$, $x^{\, \prime} \in T ( A )$ and $x^{\, \prime} \leq_\eta y^{\, \prime}$, then $A \in {\cal W}_\eta ( x^{\, \prime} ) \subseteq {\cal W}_\eta ( y^{\, \prime} )$, that is, $y^{\, \prime} \in T ( A )$. $\Box$ \\ \\ {\large \bf 4. The Particular Case of Beattie-Butzmann \\ \hspace*{0.55cm} Convergence and Uniform Convergence Spaces} \\ {\bf Convergence Spaces} \\ As mentioned, one of the most successful and seminal ways to go beyond usual topological or uniform spaces was developed in a series of publications by Beattie \& Butzmann. Let us for convenience recall here their first basic definition, Beattie \& Butzmann [p. 2]. \\ {\bf Definition.} On a given set $E$ we call {\it convergence structure} any mapping \\ (4.1)~~~ $ \lambda : E ~\longrightarrow~ {\cal P} ( Fil ( E ) ) $ \\ of $E$ into the power set of the set of all filters on $E$, which satisfies the following three conditions. First, we have \\ (4.2)~~~ $ {\cal U}_{\,x} \in \lambda ( x ), ~~\mbox{for}~~ x \in E $ \\ where ${\cal U}_{\,x}$ denotes the filter, in fact an ultrafilter, generated on $E$ by the one point set $\{ x \}$. Second, we have for all $x \in E$ \\ (4.3)~~~ $ {\cal F, G} \in \lambda ( x ) ~~~\Longrightarrow~~~ {\cal F} \cap {\cal G} \in \lambda ( x ) $ \\ Finally, for all $x \in E$, we also have \\ (4.4)~~~ $ {\cal F} \in \lambda ( x ),~ {\cal G} \in Fil ( E ),~ {\cal G} ~\supseteq~ {\cal F} ~~~\Longrightarrow~~~ {\cal G} \in \lambda ( x ) $ \\ In such a case one calls $( E, \lambda )$ a {\it convergence space} on $E$. Furthermore, for every $x \in E$ and ${\cal F} \in \lambda ( x )$, we say that the filter ${\cal F}$ {\it converges to} $x$ in the convergence structure $( E, \lambda )$, and thus write ${\cal F} \stackrel{\lambda} \longrightarrow x$. \\ {\bf Motivation.} The motivation for the above definition is rather straightforward. Let $\tau$ be any usual topology on $E$. Then the mapping $\lambda$ in (4.1) is defined by \\ (4.5)~~~ $ \lambda ( x ) ~=~ \{~ {\cal F} \in Fil ( E ) ~~|~~ {\cal V}_x \subseteq {\cal F} ~\},~~ x \in E $ \\ where ${\cal V}_x$ denotes the filter of {\it neighbourhoods} of $x$ in the topology $\tau$. \\ {\bf Inclusion Among TTS-s.} We show now the simple way in which the convergence spaces $( E, \lambda )$ are {\it particular} cases of the TTS-s defined in section 2. \\ Indeed, there is a natural way one can associate with each convergence space $( E, \lambda )$ a TTS~ $\sigma_{\,\lambda}$ on $E$. Namely, we take \\ (4.6)~~~ $ \sigma_{\,\lambda} ~=~ ( E, E^{\,\prime}, T, \Xi_{\,\lambda} ) $ \\ where the topological support $( E, E^{\,\prime}, T)$ is the same as the one in (2.13), while, similarly to (2.16), we take \\ (4.7)~~~ $ \Xi_{\,\lambda} ~=~ \left \{~ ( {\cal F, G } ) \in Fil ( E) \times Fil ( E ) ~~ \begin{array}{|l} ~~\exists~ x \in E : \\ \\ ~~~~~~ {\cal F, G} ~~\stackrel{\lambda} \longrightarrow~~ x \end{array} ~ \right \} $ \\ It is easy to see that $\sigma_{\,\lambda} ~=~ ( E, E^{\,\prime}, T, \Xi_{\,\lambda} )$ is indeed a TTS on $E$. \\ Furthermore, as in (2.17$^{\,\prime}$) we can associate with the convergence space $( E, \lambda )$ not only a TTS, but also a TTSR. \\ {\bf Uniform Convergence Spaces} \\ The second basic definition in Beattie \& Butzmann, see [p. 
61], is the following \\ {\bf Definition.} On a given set $E$ a {\it uniform convergence structure} is any set {\large \bf U} of filters on $E \times E$, that is, {\large \bf U} $\subseteq Fil ( E \times E )$, such that the following five conditions hold \\ (4.8)~~~ ${\cal U}_{( x, x )} \in$~{\large \bf U},~~ for $x \in E$ \\ (4.9)~~~ ${\cal U,~ V} \in$~{\large \bf U} $~~~\Longrightarrow~~~ {\cal U} \bigcap {\cal V} \in$~{\large \bf U} \\ (4.10)~~~ ${\cal U} \in$~{\large \bf U},~ ${\cal V} \in Fil ( E \times E ),~ {\cal V} \supseteq {\cal U} ~~~\Longrightarrow~~ {\cal V} \in$~ {\large \bf U} \\ (4.11)~~~ ${\cal U} \in$~{\large \bf U} $~~~\Longrightarrow~~~ {\cal U}^{-1} \in$~{\large \bf U} \\ and \\ (4.12)~~~ ${\cal U,~ V} \in$~{\large \bf U},~~ ${\cal U} \circ {\cal V} \in Fil ( E \times E ) ~~~\Longrightarrow~~~ {\cal U} \circ {\cal V} \in$~{\large \bf U} \\ Above, as usual, we denoted \\ ~~~~~~ $ {\cal U}_{( x, x )} = \{~ C \subseteq E \times E ~|~ ( x, x ) \in C ~\} $ \\ ~~~~~~ $ {\cal U}^{-1} = \{~ C^{-1} ~|~ C \in {\cal U} ~\} $ \\ where \\ ~~~~~~ $ C^{-1} = \{~ ( y, x ) ~|~ ( x, y ) \in C ~\} $ \\ and lastly \\ ~~~~~~ $ {\cal U} \circ {\cal V} = \{~ C \subseteq E \times E ~|~ \exists~ A \in {\cal U},~ B \in {\cal V} : A \circ B \subseteq C ~\} $ \\ where \\ ~~~~~~ $ A \circ B = \{~ ( x, z ) ~|~ \exists~ y \in E : ( x, y ) \in A,~ ( y, z ) \in B ~\} $ \\ In this case one calls ( $E$, {\large \bf U} ) a {\it uniform convergence space}. \\ {\bf Inclusion Among TTS-s.} We can now show the simple way in which the uniform convergence spaces ( $E$, {\large \bf U} ) are again {\it particular} cases of the TTS-s defined in section 2. \\ There is, indeed, a natural way one can associate with each uniform convergence space ( $E$, {\large \bf U} ) a TTS~ $\sigma_{\,{\large \bf U}}$ on $E$. Namely, we take \\ (4.13)~~~ $ \sigma_{\,{\large \bf U}} ~=~ ( E, E^{\,\prime}, T, \Xi_{\,{\large \bf U}} ) $ \\ where the topological support $( E, E^{\,\prime}, T)$ is the same as the one in (2.13), while \\ (4.14)~~~ $ \Xi_{\,{\large \bf U}} ~=~ \{~ ( {\cal F, G } ) \in Fil ( E) \times Fil ( E ) ~~|~~ {\cal F} \times {\cal G} \in$~ {\large \bf U} ~\} \\ where as usual, we denoted \\ ~~~~~~ $ {\cal F} \times {\cal G} ~=~ \{~ C \subseteq E \times E ~~|~~ \exists~ A \in {\cal F},~ B \in {\cal G} : A \times B \subseteq C ~\} $ \\ It is easy to see that $\sigma_{\,{\large \bf U}} ~=~ ( E, E^{\,\prime}, T, \Xi_{\,{\large \bf U}} )$ is indeed a TTS on $E$. \\ \end{document}
\begin{document} \title{Knowing Values and Public Inspection} \titlerunning{Knowing Values and Public Inspection} \author{Jan van Eijck\inst{1,2}, Malvin Gattinger\inst{1}, Yanjing Wang\inst{3}} \tocauthor{van Eijck et al.} \institute{ILLC, University of Amsterdam, Amsterdam, The Netherlands \and SEN1, CWI, Amsterdam, The Netherlands \and Department of Philosophy, Peking University, Beijing, China} \maketitle \begin{abstract} We present a basic dynamic epistemic logic of ``knowing the value''. Analogous to public announcement in standard DEL, we study ``public inspection'', a new dynamic operator which updates the agents' knowledge about the values of constants. We provide a sound and strongly complete axiomatization for the single and multi-agent case, making use of the well-known Armstrong axioms for dependencies in databases. \keywords{Knowing what, Bisimulation, Public Announcement Logic.} \end{abstract} \section{Introduction} Standard epistemic logic studies propositional knowledge expressed by ``knowing that''. However, in everyday life we talk about knowledge in many other ways, such as ``knowing what the password is'', ``knowing how to swim'', ``knowing why he was late'' and so on. Recently, the epistemic logics of such expressions have been drawing more and more attention (see \cite{Wang16} for a survey). Reasoning about static knowledge is important, but it is also interesting to study how knowledge changes. Dynamic Epistemic Logic (DEL) is an important tool for this: it handles how knowledge (and belief) is updated by events or actions \cite{DitHoekKooi2007:del}. For example, extending standard epistemic logic, one can update the propositional knowledge of agents by making propositional announcements. Such announcements are nicely studied by public announcement logic \cite{Plaza2007:LoPC}, which includes reduction axioms that completely describe the interplay of ``knowing that'' and ``announcing that''. Given this, we can also ask: What are natural dynamic counterparts of the knowledge expressed by other expressions such as knowing what, knowing how, etc.? How do we formalize ``announcing what''? In this paper, we study a basic dynamic operation which updates the knowledge of the values of certain constants.\footnote{In this paper, by \textit{constant} we mean something which has a single value given the actual situation. The range of possible values of a constant may be infinite. This terminology is motivated by first-order modal logic, as will become clear later.} The action of \textit{public inspection} is the ``knowing value'' counterpart of public announcement, and we will see that it fits well with the logic of knowing value. As an example, we may use a sensor to measure the current temperature of the room. It is reasonable to say that after using the sensor you will know the temperature of the room. Note that it is not reasonable to encode this by standard public announcement, since it may result in a possibly infinite formula: $[t=\SI{27.1}{\degreeCelsius}]\ensuremath{\textit{K}}(t=\SI{27.1}{\degreeCelsius})\land [t=\SI{27.2}{\degreeCelsius}]\ensuremath{\textit{K}}(t=\SI{27.2}{\degreeCelsius})\land \dots$, and the inspection action itself may require an infinite action model in the standard DEL framework of \cite{BalMosSol98:tlopa}, with a separate event for each possible value. Hence public inspection can be viewed as a public announcement of the actual value, but new techniques are required to express it formally. 
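In the language introduced below, for instance, the effect of reading the sensor is captured by the single formula $[t]\ensuremath{\textit{K}}v(t)$: after the value of the constant $t$ is publicly inspected, the agent knows it, whatever it happens to be. 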
In our simple framework we define knowing and inspecting values as primitive operators, leaving the actual values out of our logical language. The notions of knowing and inspecting values have a natural connection with dependencies in databases. This will also play a crucial role in the later technical development of the paper. In particular, our completeness proofs employ the famous set of axioms from \cite{armstrong1974dependency}. For now, consider the following example. \begin{example} Suppose a university course was evaluated using anonymous questionnaires which besides an assessment for the teacher also asked the students for their main subject. See Table \ref{table:ExampleEvaluation} for the results. Now suppose a student tells you, the teacher, that his major is Computer Science. Then clearly you know how that student assessed the course, since there is some dependency between the two columns. More precisely, in the cases of students 3 and 4, telling you the value of ``Subject'' effectively also tells you the value of ``Assessment''. In practice, a better questionnaire would only ask for combinations of questions that do not allow the identification of students. \begin{table} \centering \begin{tabular}{llll} Student & Subject & Assessment \\ \hline 1 & Mathematics & good \\ 2 & Mathematics & very good \\ 3 & Logic & good \\ 4 & Computer Science & bad \\ \end{tabular} \caption{Evaluation Results} \label{table:ExampleEvaluation} \end{table} \end{example} Other examples abound: The author of \cite{Sweeney2015:oy} gives an account of how easily so-called `de-identified data' produced from medical records could be `re-identified', by matching patient names to publicly available health data. These examples illustrate that reasoning about knowledge of values in isolation, i.e.~separated from knowledge \emph{that}, is both possible and informative. It is such knowledge and its dynamics that we will study here. \section{Existing Work} Our work relates to a collection of papers on epistemic logics with other operators than the standard ``knowing that'' $\ensuremath{\textit{K}}\varphi$. In particular we are interested in the $\ensuremath{\textit{K}}v$ operator expressing that an agent knows a value of a variable or constant. This operator is already mentioned in the seminal work \cite{Plaza2007:LoPC} which introduced public announcement logic (PAL). However, a complete axiomatization of PAL together with $\ensuremath{\textit{K}}v$ was only given in \cite{WangFan2013KvPAL,WangFan2014CondKWhat} using the relativized operator $\ensuremath{\textit{K}}v(\varphi,c)$ for the single and multi-agent cases. Moreover, it has been shown in \cite{GuWang2016KvNormal} that by treating the negation of $\ensuremath{\textit{K}}v$ as a primitive diamond-like operator, the logic can be seen as a normal modal logic in disguise with binary modalities. Inspired by a talk partly based on an earlier version of this paper, Baltag proposed the very expressive Logic of Epistemic Dependency (LED) \cite{Baltag2016:KVV}, where knowing that, knowing value, announcing that, announcing value can all be encoded in a general language which also includes equalities like $c=4$ to facilitate the axiomatization. In this paper we go in the other direction: Instead of extending the standard PAL framework with $\ensuremath{\textit{K}}v$, we study it in isolation together with its dynamic counterpart $[c]$ for public inspection. 
In general, the motto of our work here is to see how far one can get in formalizing knowledge and inspection of values without going all the way to or even beyond PAL. In particular we do not include values in the syntax and we do not have any nested epistemic modalities. As one would expect, our simple language is accompanied by simpler models and also the proofs are less complicated than existing methods. Still we consider our Public Inspection Logic (PIL) more than a toy logic. Our completeness proof includes a novel construction which we call ``canonical dependency graph'' (Definition \ref{def:canonical-g-and-m}). We also establish the precise connection between our axioms and the Armstrong axioms widely used in database theory \cite{armstrong1974dependency}. Table \ref{table:LanguageComparison} shows how PIL fits into the family of existing languages. Note that \cite{Baltag2016:KVV} is the most expressive language in which all operators are encoded using $\ensuremath{\textit{K}}_i^{t_1, \dots, t_n}t$ which expresses that given the current values of $t_1$ to $t_n$, agent $i$ knows the value of $t$. Moreover, to obtain a complete proof system for LED one also needs to include equality and rigid constants in the language. It is thus an open question to find axiomatizations for a language between PIL and LED without equality. \begin{table} \[ \arraycolsep=5pt \begin{array}{l l l l l l l l l} \text{PAL} & p & \ensuremath{\textit{K}} \varphi & & & & [!\varphi] \varphi & & \text{\cite{Plaza2007:LoPC}} \\ \text{PAL}+\ensuremath{\textit{K}}v & p & \ensuremath{\textit{K}} \varphi & \ensuremath{\textit{K}}v(c) & & & [!\varphi] \varphi & & \text{\cite{Plaza2007:LoPC}}\\ \text{PAL}+\ensuremath{\textit{K}}v^r & p & \ensuremath{\textit{K}} \varphi & \ensuremath{\textit{K}}v(c) & \ensuremath{\textit{K}}v(\varphi,c) & & [!\varphi] \varphi & & \text{\cite{WangFan2013KvPAL,WangFan2014CondKWhat,GuWang2016KvNormal}}\\ \text{PIL} & & & \ensuremath{\textit{K}}v(c) & & [c] \varphi & & & \text{this paper}\\ \text{PIL}+\ensuremath{\textit{K}} & & \ensuremath{\textit{K}} \varphi & \ensuremath{\textit{K}}v(c) & & [c] \varphi & & & \text{future work}\\ \text{LED} & p & \ensuremath{\textit{K}} \varphi & \ensuremath{\textit{K}}v(c) & \ensuremath{\textit{K}}v(\varphi,c) & [c] \varphi & [!\varphi] \varphi & c=c & \text{\cite{Baltag2016:KVV} }\\ \end{array} \] \caption{Comparison of Languages} \label{table:LanguageComparison} \end{table} All languages include the standard boolean operators $\top$, $\lnot$ and $\land$ which we do not list in Table \ref{table:LanguageComparison}. We also discuss other related works not in this line at the end of the paper. \section{Single-Agent PIL} We first consider a simple single-agent language to talk about knowing and inspecting values. Throughout the paper we assume a fixed set of constants $\mathbb{C}$. \begin{definition}[Syntax] Let $c$ range over $\mathbb{C}$. The language $\mathcal{L}_1$ is given by: \[ \varphi ::= \top \mid \lnot\varphi \mid \varphi\land\varphi \mid \ensuremath{\textit{K}}v(c) \mid [c]\varphi \] \end{definition} Besides standard interpretations of the boolean connectives, the intended meanings are as follows: $\ensuremath{\textit{K}}v(c)$ reads ``the agent knows the value of $c$'' and the formula $[c]\varphi$ is meant to say ``after revealing the actual value of $c$, $\varphi$ is the case''. We also use the standard abbreviations $\varphi\lor\psi := \lnot(\lnot\varphi\land\lnot\psi)$ and $\varphi\to\psi := \lnot\varphi\lor\psi$. 
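For instance, writing $s$ and $a$ for two constants recording the entries ``Subject'' and ``Assessment'' of Table \ref{table:ExampleEvaluation}, the formula $[s]\ensuremath{\textit{K}}v(a)$ says that after a student reveals their subject, the teacher knows their assessment; intuitively, this should hold for students 3 and 4, but not for students 1 and 2. 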
\begin{definition}[Models and Semantics]\label{def:PIL-models-and-semantics} A model for $\mathcal{L}_1$ is a tuple $\mathcal{M} = \langle S, \mathcal{D}, V \rangle$ where $S$ is a non-empty set of worlds (also called states), $\mathcal{D}$ is a non-empty domain and $V$ is a valuation $V : (S \times \mathbb{C}) \to \mathcal{D}$. To denote $V(s,c)=V(t,c)$, i.e.~that $c$ has the same value at $s$ and $t$ according to $V$, we write $s =_c t$. If this holds for all $c \in C\subseteq \mathbb{C}$ we write $s =_C t$. The semantics are as follows: \[ \begin{array}{lll} \hline \mathcal{M},s\vDash \top & & \textrm{always}\\ \mathcal{M},s\vDash \neg\varphi &\Leftrightarrow& \mathcal{M},s\nvDash \varphi \\ \mathcal{M},s\vDash \varphi\land \psi &\Leftrightarrow&\mathcal{M},s\vDash \varphi \textrm{ and } \mathcal{M},s\vDash \psi \\ \mathcal{M},s\vDash \ensuremath{\textit{K}}v(c)&\Leftrightarrow&\text{for all } t \in S : s =_c t\\ \mathcal{M},s\vDash [c]\varphi & \Leftrightarrow & \mathcal{M}|^s_c,s\vDash\varphi\\ \hline \end{array} \] where $\mathcal{M}|^s_c$ is $\langle S', \mathcal{D}, V|_{S' \times \mathbb{C}} \rangle$ with $S'=\{t \in S \mid s =_c t \}$. If for a set of formulas $\Gamma$ and a formula $\varphi$ we have that whenever a model $\mathcal{M}$ and a state $s$ satisfy $\mathcal{M},s \vDash \Gamma$ then they also satisfy $\mathcal{M},s \vDash \varphi$, then we say that $\varphi$ follows semantically from $\Gamma$ and write $\Gamma \vDash \varphi$. If this hold for $\Gamma = \varnothing$ we say that $\varphi$ is semantically valid and write $\vDash \varphi$. \end{definition} Note that the actual state $s$ plays an important role in the last clause of our semantics: Public inspection of $c$ at $s$ reveals the \emph{local actual} value of $c$ to the agent. The model is restricted to those worlds which agree on $c$ with $s$. This is different from PAL and other DEL variants based on action models, where updates are usually defined on models directly and not on pointed models. We employ the usual abbreviation $\langle c \rangle \varphi$ as $\neg [c]\neg \varphi$. Note however, that public inspection of $c$ can always take place and is deterministic. Hence the determinacy axiom $\langle c \rangle \varphi \leftrightarrow [c] \varphi$ is semantically valid and we include it in the following system. \begin{definition} The proof system $\mathbb{SPIL}_1$ for $\textsf{PIL}$ in the language $\mathcal{L}_1$ consists of the following axiom schemata and rules. If a formula $\varphi$ is provable from a set of premises $\Gamma$ we write $\Gamma \vdash \varphi$. If this holds for $\Gamma = \varnothing$ we also write $\vdash \varphi$. 
\begin{minipage}[t][][b]{0.6\textwidth} \begin{center} \begin{tabular}{lc} \multicolumn{2}{l}{\textbf{Axiom Schemata}}\\[0.4em] {\texttt{TAUT}} & all instances of propositional tautologies\\ {\texttt{DIST}} & $[c](\varphi \rightarrow \psi) \rightarrow ([c]\varphi \rightarrow [c]\psi)$\\ {\texttt{LEARN}} & $[c]\ensuremath{\textit{K}}v(c)$\\ {\texttt{NF}} & $\ensuremath{\textit{K}}v(c) \to [d]\ensuremath{\textit{K}}v(c)$\\ {\texttt{DET}} & $\langle c\rangle\varphi \leftrightarrow [c]\varphi$\\ {\texttt{COM}} & $[c][d]\varphi \leftrightarrow [d][c]\varphi$\\ {\texttt{IR}} & $\ensuremath{\textit{K}}v(c) \rightarrow ([c]\varphi \to \varphi)$\\ \textbf{ } \end{tabular} \end{center} \end{minipage} \hspace{1em} \begin{minipage}[t][][b]{0.2\textwidth} \begin{center} \begin{tabular}{lc} \multicolumn{2}{l}{\textbf{Rules}}\\[0.4em] {\texttt{MP}} & $\dfrac{\varphi,\varphi\to\psi}{\psi}$\\ \\ {\texttt{NEC}} &$\dfrac{\varphi}{[c]\varphi}$ \end{tabular} \end{center} \end{minipage} \end{definition} Intuitively, {\texttt{LEARN}}\ captures the effect of the inspection; {\texttt{NF}}\ says that the agent does not forget; {\texttt{DET}}\ says that inspection is deterministic; {\texttt{COM}}\ says that inspections commute; finally, {\texttt{IR}}\ expresses that inspection does not bring any new information if the value is known already. Note that {\texttt{DET}}\ says that $[c]$ is a function. It also implies seriality, which we list in the following lemma. \begin{lemma}\label{lem:provable} The following schemes are provable in $\mathbb{SPIL}_1$: \begin{itemize} \item $\langle c \rangle \top$ (seriality) \item $\ensuremath{\textit{K}}v(c) \rightarrow (\varphi \to [c]\varphi)$ ({\texttt{IR}}') \item $[c](\varphi \land \psi) \leftrightarrow [c]\varphi \land [c]\psi$ ({\texttt{DIST}}') \item $[c_1]\dots[c_n](\varphi \to \psi) \to ([c_1]\dots[c_n]\varphi \to [c_1]\dots[c_n]\psi)$ (multi-{\texttt{DIST}}) \item $[c_1]\dots[c_n](\varphi \land \psi) \leftrightarrow [c_1]\dots[c_n]\varphi \land [c_1]\dots[c_n]\psi$ (multi-{\texttt{DIST}}') \item $[c_1]\dots[c_n](\ensuremath{\textit{K}}v(c_1) \land \dots \land \ensuremath{\textit{K}}v(c_n))$ (multi-{\texttt{LEARN}}) \item $(\ensuremath{\textit{K}}v(c_1) \land \dots \land \ensuremath{\textit{K}}v(c_n)) \rightarrow [d_1]\dots[d_n] (\ensuremath{\textit{K}}v(c_1) \land \dots \land \ensuremath{\textit{K}}v(c_n))$ (multi-{\texttt{NF}}) \item $(\ensuremath{\textit{K}}v(c_1) \land \dots \land \ensuremath{\textit{K}}v(c_n)) \rightarrow ([c_1]\dots[c_n]\varphi \to \varphi)$ (multi-{\texttt{IR}}) \end{itemize} Moreover, the multi-{\texttt{NEC}}\ rule is admissible: If $\vdash \varphi$, then $\vdash [c_1]\dots[c_n]\varphi$. \end{lemma} \begin{proof} For reasons of space we only prove three of the items and leave the others as an exercise for the reader. For {\texttt{IR}}', we use {\texttt{IR}}, {\texttt{DET}}\ and {\texttt{TAUT}}: \begin{prooftree} \AxiomC{} \RightLabel{({\texttt{IR}})} \UnaryInfC{$\ensuremath{\textit{K}}v(c) \rightarrow ([c]\lnot\varphi \to \lnot\varphi)$} \RightLabel{({\texttt{DET}})} \UnaryInfC{$\ensuremath{\textit{K}}v(c) \rightarrow (\lnot[c]\varphi \to \lnot\varphi)$} \RightLabel{({\texttt{TAUT}})} \UnaryInfC{$\ensuremath{\textit{K}}v(c) \rightarrow (\varphi \to [c]\varphi)$} \end{prooftree} To show multi-{\texttt{DIST}}, we use {\texttt{DIST}}, {\texttt{NEC}}\ and {\texttt{TAUT}}. For simplicity, consider the case where $C=\{c_1, c_2\}$. 
\begin{prooftree}
\AxiomC{} \RightLabel{({\texttt{DIST}})} \UnaryInfC{$[c_2](\varphi \to \psi) \to ([c_2]\varphi \to [c_2]\psi)$}
\RightLabel{({\texttt{NEC}})} \UnaryInfC{$[c_1]([c_2](\varphi \to \psi) \to ([c_2]\varphi \to [c_2]\psi))$}
\RightLabel{({\texttt{DIST}}, {\texttt{TAUT}})} \UnaryInfC{$[c_1][c_2](\varphi \to \psi) \to [c_1]([c_2]\varphi \to [c_2]\psi)$}
\RightLabel{({\texttt{DIST}}, {\texttt{TAUT}})} \UnaryInfC{$[c_1][c_2](\varphi \to \psi) \to ([c_1][c_2]\varphi \to [c_1][c_2]\psi)$}
\end{prooftree}
For multi-{\texttt{LEARN}}, we use {\texttt{LEARN}}, {\texttt{NEC}}, {\texttt{COM}}, {\texttt{DIST}}' and {\texttt{TAUT}}:
\begin{prooftree}
\AxiomC{} \RightLabel{({\texttt{LEARN}})} \UnaryInfC{$[c_1]\ensuremath{\textit{K}}v(c_1)$}
\RightLabel{({\texttt{NEC}})} \UnaryInfC{$[c_2][c_1]\ensuremath{\textit{K}}v(c_1)$}
\RightLabel{({\texttt{COM}})} \UnaryInfC{$[c_1][c_2]\ensuremath{\textit{K}}v(c_1)$}
\AxiomC{} \RightLabel{({\texttt{LEARN}})} \UnaryInfC{$[c_2]\ensuremath{\textit{K}}v(c_2)$}
\RightLabel{({\texttt{NEC}})} \UnaryInfC{$[c_1][c_2]\ensuremath{\textit{K}}v(c_2)$}
\RightLabel{({\texttt{DIST}}', {\texttt{TAUT}})} \BinaryInfC{$[c_1]([c_2]\ensuremath{\textit{K}}v(c_1) \land [c_2]\ensuremath{\textit{K}}v(c_2))$}
\RightLabel{({\texttt{DIST}}', {\texttt{TAUT}})} \UnaryInfC{$[c_1][c_2](\ensuremath{\textit{K}}v(c_1) \land \ensuremath{\textit{K}}v(c_2))$}
\end{prooftree}
\end{proof}
\begin{definition}\label{def:abbreviations} We use the following abbreviations for any two finite sets of constants $C=\{c_1,\dots,c_m\}$ and $D=\{d_1,\dots,d_n\}$.
\begin{itemize}
\item $\ensuremath{\textit{K}}v(C) := \ensuremath{\textit{K}}v(c_1) \land \dots \land \ensuremath{\textit{K}}v(c_m)$
\item $[C]\varphi := [c_1]\dots [c_m]\varphi$
\item $\ensuremath{\textit{K}}v(C,D) := [C]\ensuremath{\textit{K}}v(D)$.
\end{itemize}
\end{definition}
Note that by multi-{\texttt{DIST}}' and {\texttt{COM}}, the exact enumeration of $C$ and $D$ in Definition \ref{def:abbreviations} does not matter modulo logical equivalence. In particular, these abbreviations allow us to shorten the ``multi'' items from Lemma \ref{lem:provable} to $\ensuremath{\textit{K}}v(C,C)$, $\ensuremath{\textit{K}}v(C) \to \ensuremath{\textit{K}}v(D,C)$ and $\ensuremath{\textit{K}}v(C) \to ([C]\varphi \to \varphi)$. The abbreviation $\ensuremath{\textit{K}}v(C,D)$ allows us to define dependencies and it will be crucial in our completeness proof. We have that:
\[ \begin{array}{c}
\hline
\mathcal{M},s\vDash \ensuremath{\textit{K}}v(C,D) \Leftrightarrow \text{for all } t \in S : \text{if } s =_C t \text{ then } s =_D t \\
\hline
\end{array} \]
\begin{definition} Let $\mathcal{L}_2$ be the language given by $\varphi ::= \top \mid \lnot\varphi \mid \varphi\land\varphi \mid \ensuremath{\textit{K}}v(C,C)$. \end{definition}
Note that this language is essentially a fragment of $\mathcal{L}_1$ due to the above abbreviation, where (possibly multiple) $[c]$ operators only occur in front of $\ensuremath{\textit{K}}v$ operators (or conjunctions thereof). Moreover, the next Lemma might count as a small surprise.
\begin{lemma}\label{lemma:equiexpressive} $\mathcal{L}_1$ and $\mathcal{L}_2$ are equally expressive. \end{lemma}
\begin{proof} As $\ensuremath{\textit{K}}v(\cdot,\cdot)$ was just defined as an abbreviation, we already know that $\mathcal{L}_1$ is at least as expressive as $\mathcal{L}_2$: we have $\mathcal{L}_2 \subseteq \mathcal{L}_1$. We can also translate in the other direction by pushing all inspection operators through negations and conjunctions.
Formally, let $t : \mathcal{L}_1 \to \mathcal{L}_2$ be defined by
\[\begin{array}{lcl}
\ensuremath{\textit{K}}v(d) & \mapsto & \ensuremath{\textit{K}}v(\varnothing,\{d\})\\
\lnot \varphi & \mapsto & \lnot t(\varphi)\\
\varphi \land \psi & \mapsto & t(\varphi) \land t(\psi)\\
\end{array}
\hspace{2em}
\begin{array}{lcl}
[c] \lnot \varphi & \mapsto & \lnot t([c]\varphi)\\
{[c]} (\varphi \land \psi) & \mapsto & t([c]\varphi) \land t([c]\psi)\\
{[c]} \top &\mapsto & \top\\
{[c_1]} \dots [c_n] \ensuremath{\textit{K}}v(d) & \mapsto & \ensuremath{\textit{K}}v(\{c_1,\dots,c_n\},\{d\})
\end{array}\]
Note that this translation preserves and reflects truth because determinacy and distribution are valid (determinacy allows us to push $[c]$ through negations, distribution to push $[c]$ through conjunctions). At this stage we have not yet established completeness, but determinacy and distribution are also axioms. Hence we can note separately that $\varphi \leftrightarrow t(\varphi)$ is provable and that $t$ preserves and reflects provability and consistency. \end{proof}
\begin{example} Note that the translation of $[c]\varphi$ formulas also depends on the top connective within $\varphi$. For example we have
\[ \begin{array}{rcl}
t([c](\lnot\ensuremath{\textit{K}}v (d) \land [e]\ensuremath{\textit{K}}v (f))) & = & t([c]\lnot\ensuremath{\textit{K}}v (d)) \land t([c][e]\ensuremath{\textit{K}}v (f))\\
& = & \lnot\ensuremath{\textit{K}}v(\{c\},\{d\}) \land \ensuremath{\textit{K}}v(\{c,e\},\{f\})
\end{array} \]
\end{example}
The language $\mathcal{L}_2$ allows us to connect PIL to what are arguably the most famous axioms in database theory and dependence logic, due to \cite{armstrong1974dependency}.
\begin{lemma}\label{lemma:armstrong} Armstrong's axioms are semantically valid and derivable in $\mathbb{SPIL}_1$:
\begin{itemize}
\item $\ensuremath{\textit{K}}v(C,D)$ for any $D \subseteq C$ (projectivity)
\item $\ensuremath{\textit{K}}v(C,D) \land \ensuremath{\textit{K}}v(D,E) \rightarrow \ensuremath{\textit{K}}v(C,E)$ (transitivity)
\item $\ensuremath{\textit{K}}v(C,D) \land \ensuremath{\textit{K}}v(C,E) \rightarrow \ensuremath{\textit{K}}v(C,D \cup E)$ (additivity)
\end{itemize}
\end{lemma}
\begin{proof} The semantic validity is easy to check, hence we focus on the derivations. For projectivity, take any two finite sets $D \subseteq C$. If $D=C$, then we only need a derivation like the following, which basically generalizes learning to finite sets.
\begin{prooftree}
\AxiomC{} \RightLabel{({\texttt{LEARN}})} \UnaryInfC{$[c_1]\ensuremath{\textit{K}}v(c_1)$}
\RightLabel{({\texttt{NEC}})} \UnaryInfC{$[c_2][c_1]\ensuremath{\textit{K}}v(c_1)$}
\RightLabel{({\texttt{COM}})} \UnaryInfC{$[c_1][c_2]\ensuremath{\textit{K}}v(c_1)$}
\AxiomC{} \RightLabel{({\texttt{LEARN}})} \UnaryInfC{$[c_2]\ensuremath{\textit{K}}v(c_2)$}
\RightLabel{({\texttt{NEC}})} \UnaryInfC{$[c_1][c_2]\ensuremath{\textit{K}}v(c_2)$}
\RightLabel{({\texttt{DIST}}', {\texttt{TAUT}})} \BinaryInfC{$[c_1]( [c_2]\ensuremath{\textit{K}}v(c_1) \land [c_2]\ensuremath{\textit{K}}v(c_2) )$}
\RightLabel{({\texttt{DIST}}', {\texttt{TAUT}})} \UnaryInfC{$[c_1][c_2] ( \ensuremath{\textit{K}}v(c_1) \land \ensuremath{\textit{K}}v(c_2) )$}
\end{prooftree}
If $D \subsetneq C$, then continue by applying {\texttt{NEC}}\ for all elements of $C\setminus D$ (and {\texttt{COM}}\ to reorder the boxes) to get $\ensuremath{\textit{K}}v(C,D)$. Transitivity follows from {\texttt{IR}}\ and {\texttt{NF}}\ as follows. For simplicity, we first consider only the case where $C$, $D$ and $E$ are singletons.
\begin{prooftree}
\AxiomC{} \RightLabel{({\texttt{NF}})} \UnaryInfC{$\ensuremath{\textit{K}}v(e) \to [c]\ensuremath{\textit{K}}v(e)$}
\RightLabel{({\texttt{NEC}})} \UnaryInfC{$[d](\ensuremath{\textit{K}}v(e) \to [c]\ensuremath{\textit{K}}v(e))$}
\RightLabel{({\texttt{DIST}})} \UnaryInfC{$[d]\ensuremath{\textit{K}}v(e) \to [d][c]\ensuremath{\textit{K}}v(e)$}
\RightLabel{({\texttt{COM}})} \UnaryInfC{$[d]\ensuremath{\textit{K}}v(e) \to [c][d]\ensuremath{\textit{K}}v(e)$}
\AxiomC{} \RightLabel{({\texttt{IR}})} \UnaryInfC{$\ensuremath{\textit{K}}v(d) \to ([d] \ensuremath{\textit{K}}v(e) \to \ensuremath{\textit{K}}v(e))$}
\RightLabel{({\texttt{NEC}})} \UnaryInfC{$[c] (\ensuremath{\textit{K}}v(d) \to ([d] \ensuremath{\textit{K}}v(e) \to \ensuremath{\textit{K}}v(e)))$}
\RightLabel{({\texttt{DIST}})} \UnaryInfC{$[c] \ensuremath{\textit{K}}v(d) \to [c]([d] \ensuremath{\textit{K}}v(e) \to \ensuremath{\textit{K}}v(e))$}
\RightLabel{({\texttt{DIST}})} \UnaryInfC{$[c] \ensuremath{\textit{K}}v(d) \to ([c][d] \ensuremath{\textit{K}}v(e) \to [c]\ensuremath{\textit{K}}v(e))$}
\RightLabel{({\texttt{TAUT}})} \BinaryInfC{$[c] \ensuremath{\textit{K}}v(d) \to ( [d] \ensuremath{\textit{K}}v(e) \to [c]\ensuremath{\textit{K}}v(e) ) $}
\end{prooftree}
Now consider arbitrary finite sets of constants $C$, $D$ and $E$. Using the abbreviations from Definition \ref{def:abbreviations} and the ``multi'' rules given in Lemma \ref{lem:provable} it is easy to generalize the proof. In fact, the proof is exactly the same with capital letters. Similarly, additivity follows immediately from multi-{\texttt{DIST}}'. \end{proof}
We can now use Armstrong's axioms to prove completeness of our logic. The crucial idea is a new definition of a canonical dependency graph.
\begin{theorem}[Strong Completeness]\label{thm:PIL-strong-completeness} For all sets of formulas $\Delta \subseteq \mathcal{L}_1$ and all formulas $\varphi \in \mathcal{L}_1$, if $\Delta \vDash \varphi$, then also $\Delta \vdash \varphi$. \end{theorem}
\begin{proof} By contraposition using a canonical model. Suppose $\Delta \nvdash \varphi$. Then $\Delta \cup \{\lnot \varphi\}$ is consistent and there is a maximally consistent set $\Gamma \subseteq \mathcal{L}_1$ such that $\Gamma \supseteq \Delta \cup \{\lnot \varphi\} $. We will now build a model $\mathcal{M}_\Gamma$ such that for the world $\mathbb{C}$ in that model we have $\mathcal{M}_\Gamma, \mathbb{C} \vDash \Gamma$, which implies $\Delta \nvDash \varphi$.
\begin{definition}[Canonical Graph and Model]\label{def:canonical-g-and-m} Let the graph $G_\Gamma := (\mathcal{P}(\mathbb{C}),\rightarrow)$ be given by $A \rightarrow B$ iff $\ensuremath{\textit{K}}v(A,B) \in \Gamma$. By Lemma \ref{lemma:armstrong} this graph has properties corresponding to the Armstrong axioms: projectivity, transitivity and additivity. We call a set of variables $s \subseteq \mathbb{C}$ \emph{closed} under $G_\Gamma$ iff whenever $A \subseteq s$ and $A \to B$ in $G_\Gamma$, then also $B \subseteq s$. Then let the canonical model be $\mathcal{M}_\Gamma := (S, \mathcal{D}, V)$ where
\[ S := \{ s \subseteq \mathbb{C} \mid s \text{ is closed under $G_\Gamma$} \}, \mathcal{D} := \{0, 1\} \text{ and } V(s,c) = \left\{ \begin{array}{cl} 0 & \text{if }c \in s \\ 1 & \text{otherwise} \end{array} \right. \]
\end{definition}
Note that our domain is just $\{0,1\}$. This is possible because we do not have to find a model where the dependencies hold globally.
Instead, $\ensuremath{\textit{K}}v(C,d)$ only says that all worlds which agree with the actual world on the $C$-values also agree with it on the value of $d$. The dependency does not need to hold between two non-actual worlds. This distinguishes our models from the relations discussed in \cite{armstrong1974dependency}, where no actual world or state is used; see Example \ref{ex:pointed-difference} below. Given the definition of a canonical model we can now show:
\begin{lemma}[Truth Lemma] $\mathcal{M}_\Gamma, \mathbb{C} \vDash \varphi$ iff $\varphi \in \Gamma$. \end{lemma}
Before going into the proof, let us emphasize two peculiarities of our Truth Lemma: First, the states in our canonical model are not maximally consistent sets of formulas but sets of constants. Second, we only claim the Truth Lemma at one specific state, namely $\mathbb{C}$, where all constants have value $0$. As our language does not include nested epistemic modalities, we actually never evaluate formulas at other states of our canonical model.
\begin{proof}[Proof of the Truth Lemma] Note that it suffices to show this for all $\varphi$ in $\mathcal{L}_2$: Given some $\varphi \in \mathcal{L}_1$, by Lemma \ref{lemma:equiexpressive} we have that $\mathcal{M}_\Gamma, \mathbb{C} \vDash \varphi \iff \mathcal{M}_\Gamma, \mathbb{C} \vDash t(\varphi)$ because the translation preserves and reflects truth. Moreover, we have $\varphi \in \Gamma \iff t(\varphi) \in \Gamma$, because $\varphi \leftrightarrow t(\varphi)$ is provable in $\mathbb{SPIL}_1$. Hence it suffices to show that $\mathcal{M}_\Gamma, \mathbb{C} \vDash t(\varphi)$ iff $t(\varphi) \in \Gamma$, i.e.~to show the Truth Lemma for $\mathcal{L}_2$. Again, negation and conjunction are standard; the crucial case is that of dependencies.
Suppose $\ensuremath{\textit{K}}v(C,D) \in \Gamma$. By definition $C\to D$ in $G_\Gamma$. To show $\mathcal{M}_\Gamma, \mathbb{C} \vDash \ensuremath{\textit{K}}v(C,D)$, take any $t$ such that $\mathbb{C} =_C t$ in $\mathcal{M}_\Gamma$. Then by definition of $V$ we have $C \subseteq t$. As $t$ is closed under $G_\Gamma$, this implies $D \subseteq t$. Now by definition of $V$ we have $\mathbb{C} =_D t$.
For the converse, suppose $\ensuremath{\textit{K}}v(C,D) \not\in \Gamma$. Then by definition $C \not\to D$ in $G_\Gamma$. Now, let $t := \{ c' \in \mathbb{C} \mid C \to \{c'\} \text{ in } G_\Gamma \}$. This gives us $C \subseteq t$. But we also have $D \not\subseteq t$ because otherwise additivity would imply $C \to D$ in $G_\Gamma$. Moreover, because $G_\Gamma$ is transitive it is enough to ``go one step'' in $G_\Gamma$ to get a set that is closed under $G_\Gamma$. This means that $t$ is closed under $G_\Gamma$ and therefore a state in our model, i.e.~we have $t \in S$. Now by definition of $V$ and projectivity, we have $\mathbb{C} =_C t$ but $\mathbb{C} \neq_D t$. Thus $t$ is a witness for $\mathcal{M}_\Gamma, \mathbb{C} \nvDash \ensuremath{\textit{K}}v(C,D)$.
\end{proof}
This also finishes the completeness proof. Note that we used all three properties corresponding to the Armstrong axioms. \end{proof}
\begin{example}\label{ex:canonical-graph} To illustrate the idea of the canonical dependency graph, let us study a concrete example of what the graph and model look like. Consider the maximally consistent set $\Gamma = \{ \lnot \ensuremath{\textit{K}}v(c), \lnot \ensuremath{\textit{K}}v(d), \ensuremath{\textit{K}}v(e), \ensuremath{\textit{K}}v(c,d), \dots \}$.
The interesting part of the canonical graph $G_\Gamma$ then looks as follows, where the nodes are subsets of $\{c,d,e\}$. For clarity we only draw the non-inclusion edges $\to \cap \not\subseteq$, i.e.~we omit edges given by inclusions. For example, every node also has an edge going to the $\varnothing$ node.
\begin{center}
\begin{tikzpicture}[node distance=2cm,>=latex,->]
\node (cde) {$\{c,d,e\}$};
\node (ec) [left of=cde] {$\{e,c\}$};
\node (cd) [below of=cde, node distance=1cm] {$\{c,d\}$};
\node (de) [left of=ec] {$\{d,e\}$};
\node (c) [left of=cd] {$\{c\}$};
\node (d) [left of=c] {$\{d\}$};
\node (0) [left of=d] {$\varnothing$};
\node (e) [left of=de] {$\{e\}$};
\draw (c) -- (d);
\draw (c) -- (cd);
\draw (0) -- (e);
\draw (d) -- (de);
\draw (c) -- (ec);
\draw (cd) -- (cde);
\draw (ec) -- (cde);
\draw (c) -- (de);
\draw (c) -- (cde);
\end{tikzpicture}
\end{center}
To get a model out of this graph, note that there are exactly three subsets of $\mathbb{C}$ closed under following the edges. Namely, let $S = \{ s:\{e\},\ t:\{d,e\},\ u:\{c,d,e\} \}$ and use the binary valuation which says that a constant has value $0$ iff it is an element of the state. It is then easy to check that $\mathcal{M}, u \vDash \Gamma$.
\begin{center}
\begin{tabular}{cccc}
 & s & t & u \\ \hline
c & 1 & 1 & 0 \\
d & 1 & 0 & 0 \\
e & 0 & 0 & 0 \\
\end{tabular}
\end{center}
\end{example}
It is also straightforward to define an appropriate notion of bisimulation.
\begin{definition}\label{def:single-bisim} Two pointed models $((S, \mathcal{D}, V),s)$ and $((S', \mathcal{D}', V'),s')$ are \emph{bisimilar} iff (i) For all finite $C \subseteq \mathbb{C}$ and all $d \in \mathbb{C}$: If there is a $t \in S$ such that $s =_C t$ and $s \neq_d t$, then there is a $t' \in S'$ such that $s' =_C t'$ and $s' \neq_d t'$; and (ii) Vice versa. \end{definition}
Note that we do not need the bisimulation to also link non-actual worlds. This is because all formulas are evaluated at the same world. In fact it would be too strong for the following characterization.
\begin{theorem}\label{thm:PIL-bisim} Two pointed models satisfy the same formulas iff they are bisimilar. \end{theorem}
\begin{proof} By Lemma \ref{lemma:equiexpressive} we only have to consider formulas of $\mathcal{L}_2$. Moreover, it suffices to consider formulas $\ensuremath{\textit{K}}v(C,d)$ with a singleton in the second set because $\ensuremath{\textit{K}}v(C,D)$ is equivalent to $\bigwedge_{d\in D} \ensuremath{\textit{K}}v(C,d)$. Then it is straightforward to show that if $\mathcal{M},s$ and $\mathcal{M}',s'$ are bisimilar then $\mathcal{M},s\vDash\neg \ensuremath{\textit{K}}v(C,d)\iff \mathcal{M}',s'\vDash\neg \ensuremath{\textit{K}}v(C,d)$ by definition of our bisimulation. The other direction is also obvious, since the two conditions for bisimulation are based on the semantics of $\neg \ensuremath{\textit{K}}v(C,d)$. \end{proof}
Note that a bisimulation characterization for a language without the dynamic operator can be obtained by restricting Definition \ref{def:single-bisim} to $C=\varnothing$. We leave it as an exercise for the reader to use this and Theorem \ref{thm:PIL-bisim} to show that $[c]$ is not reducible, which distinguishes it from the public announcement $[\varphi]$ in PAL.
\begin{example}[Pointed Models Make a Difference]\label{ex:pointed-difference} It seems that the following theorem of our logic does not translate to Armstrong's system from \cite{armstrong1974dependency}.
\[ [c](\ensuremath{\textit{K}}v(d) \lor \ensuremath{\textit{K}}v(e)) \leftrightarrow ([c]\ensuremath{\textit{K}}v(d) \lor [c]\ensuremath{\textit{K}}v(e)) \]
First, to see that this is provable, note that it follows from determinacy and seriality. Second, it is valid because we consider pointed models, which convey more information than a simple list of possible values. Consider the following table which represents $4$ possible worlds.
\begin{center}
\begin{tabular}{lll}
$c$ & $d$ & $e$ \\ \hline
1 & 1 & 3 \\
1 & 1 & 2 \\
2 & 2 & 1 \\
2 & 3 & 1 \\
\end{tabular}
\end{center}
Here we would say that ``After learning $c$ we know $d$ or we know $e$.'', i.e.~the left-hand side of the formula above holds. However, the right-hand side only holds if we evaluate formulas while pointing at a specific world/row: It is globally true that given $c$ we will learn $d$ or that given $c$ we will learn $e$. But neither of the two disjuncts holds globally, which would be needed for a dependency in Armstrong's sense. Note that this is more a matter of expressiveness than of logical strength. In Armstrong's system there is just no way to express $[c](\ensuremath{\textit{K}}v(d) \lor \ensuremath{\textit{K}}v(e))$.
\end{example}
\section{Multi-Agent PIL}
We now generalize Public Inspection Logic to multiple agents. In the language we use $\ensuremath{\textit{K}}v_i c$ to say that agent $i$ knows the value of $c$, and in the models an accessibility relation for each agent is added to describe their knowledge. To obtain a complete proof system we can leave most axioms as above but have to restrict the irrelevance axiom. Again the completeness proof uses a canonical model construction and a truth lemma for a restricted but equally expressive syntax. The only change is that we now define a dependency graph for each agent in order to define accessibility relations instead of restricted sets of worlds.
\begin{definition}[Multi-Agent PIL]\label{def:multi-agent-PIL} We fix a non-empty set of agents $I$. The language $\mathcal{L}^I_1$ of multi-agent Public Inspection Logic is given by
$$\varphi ::= \top \mid \lnot\varphi \mid \varphi\land\varphi \mid \ensuremath{\textit{K}}v_i c \mid [c]\varphi$$
where $i\in I$. We interpret it on models $\langle S, \mathcal{D}, V, R \rangle$ where $S$, $\mathcal{D}$ and $V$ are as before and $R$ assigns to each agent $i$ an equivalence relation $\sim_i$ over $S$. The semantics are standard for the booleans and as follows:
\[ \begin{array}{lll}
\hline
\mathcal{M},s \vDash \ensuremath{\textit{K}}v_i c & \iff & \forall t \in S : s \sim_i t \Rightarrow s =_c t \\
\mathcal{M},s\vDash [c]\varphi & \iff & \mathcal{M}|^s_c,s\vDash\varphi\\
\hline
\end{array} \]
where $\mathcal{M}|^s_c$ is $\langle S', \mathcal{D}, V|_{S' \times \mathbb{C}}, R|_{S' \times S'} \rangle$ with $S'=\{t \in S \mid s =_c t \}$.
Analogous to Definition \ref{def:abbreviations} we define the following abbreviation to express dependencies known by agent $i$, and note its semantics (for $C=\{c_1,\dots,c_m\}$ and $D=\{d_1,\dots,d_n\}$):
\[ \ensuremath{\textit{K}}v_i(C,D) := [c_1]\dots[c_m](\ensuremath{\textit{K}}v_i(d_1) \land \dots \land \ensuremath{\textit{K}}v_i(d_n)) \]
\[ \begin{array}{c}
\hline
\mathcal{M}, s \vDash \ensuremath{\textit{K}}v_i(C,D) \Leftrightarrow \text{for all } t \in S : \text{if } s \sim_i t \text{ and } s =_C t \text{ then } s =_D t \\
\hline
\end{array} \]
The proof system $\mathbb{SPIL}$ for $\textsf{PIL}$ in the language $\mathcal{L}^I_1$ is obtained by replacing each $\ensuremath{\textit{K}}v$ in the axioms of $\mathbb{SPIL}_1$ by $\ensuremath{\textit{K}}v_i$, and replacing {\texttt{IR}}\ by the following restricted version:
\begin{center}
\begin{tabular}{ll}
{\texttt{RIR}} \hspace{1.5em} & $\ensuremath{\textit{K}}v_i c \rightarrow ([c]\varphi \to \varphi)$ where $\varphi$ does not mention any agent besides $i$ \\
\end{tabular}
\end{center}
\end{definition}
Before summarizing the completeness proof for the multi-agent setting, let us highlight some details of this definition. As before, the actual state $s$ plays an important role in the semantics of $[c]$. However, we could also use an alternative but equivalent definition: instead of deleting states, only delete the $\sim_i$ links between states that disagree on the value of $c$. Then the update no longer depends on the actual state. For traditional reasons we define $\sim_i$ to be an equivalence relation. This is not strictly necessary, because our language cannot tell whether the relation is reflexive, transitive or symmetric. Removing this constraint and extending the class of models would thus not make any difference in terms of validities. For the proof system, note that the original irrelevance axiom {\texttt{IR}}\ is \emph{not} valid in the multi-agent setting because $\varphi$ might talk about other agents for which the inspection of $c$ does matter.
\begin{theorem}[Strong Completeness for $\mathbb{SPIL}$] For all sets of formulas $\Delta \subseteq \mathcal{L}^I_1$ and all formulas $\varphi \in \mathcal{L}^I_1$, if $\Delta \vDash \varphi$, then also $\Delta \vdash \varphi$. \end{theorem}
\begin{proof} By the same methods as for Theorem \ref{thm:PIL-strong-completeness}. Given a maximally consistent set $\Gamma \subseteq \mathcal{L}^I_1$ we want to build a model $\mathcal{M}_\Gamma$ such that for the world $\mathbb{C}$ in that model we have $\mathcal{M}_\Gamma, \mathbb{C} \vDash \Gamma$. First, for each agent $i \in I$, let $G_\Gamma^i$ be the graph given by $A \rightarrow_i B \ \ :\iff \ \Gamma \vdash \ensuremath{\textit{K}}v_i(A,B)$. Given that the proof system $\mathbb{SPIL}$ was obtained by indexing the axioms of $\mathbb{SPIL}_1$, it is easy to check that indexed versions of the Armstrong axioms are provable and therefore all the graphs $G_\Gamma^i$ for $i \in I$ will have the corresponding properties. In particular {\texttt{RIR}}\ suffices for this. Second, define the canonical model $\mathcal{M}_\Gamma := (S, \mathcal{D}, V, R)$ where $S := \mathcal{P}(\mathbb{C})$, $\mathcal{D} := \{0,1\}$, $V(s,c):=0$ if $c \in s$ and $V(s,c):=1$ otherwise, and $s \sim_i t$ iff $s$ and $t$ are both closed or both not closed under $G_\Gamma^i$.
\begin{lemma}[Multi-Agent Truth Lemma] $\mathcal{M}_\Gamma, \mathbb{C} \vDash \varphi$ iff $\varphi \in \Gamma$.
\end{lemma}
\begin{proof} Again it suffices to show the Truth Lemma for a restricted language, and we only consider the state $\mathbb{C}$. We proceed by induction on $\varphi$. The crucial case is when $\varphi$ is of the form $\ensuremath{\textit{K}}v_i(C,D)$.
Suppose $\ensuremath{\textit{K}}v_i(C,D) \in \Gamma$. Then by definition $C \to D$ in $G_\Gamma^i$. To show $\mathcal{M}_\Gamma, \mathbb{C} \vDash \ensuremath{\textit{K}}v_i(C,D)$, take any $t$ such that $\mathbb{C} \sim_i t$ and $\mathbb{C} =_C t$ in $\mathcal{M}_\Gamma$. Then by definition of $V$ we have $C \subseteq t$. Moreover, $\mathbb{C}$ is closed under $G_\Gamma^i$. Hence by definition of $\sim_i$ also $t$ must be closed under $G_\Gamma^i$, which implies $D \subseteq t$. Now by definition of $V$ we have $\mathbb{C} =_D t$.
For the converse, suppose $\ensuremath{\textit{K}}v_i(C,D) \not\in \Gamma$. Then by definition $C \not\to D$ in $G_\Gamma^i$. Now, let $t := \{ c' \in \mathbb{C} \mid C \to \{c'\} \text{ in } G_\Gamma^i \}$. This gives us $C \subseteq t$. But we also have $D \not\subseteq t$ because otherwise additivity would imply $C \to D$ in $G_\Gamma^i$. Moreover, because $G_\Gamma^i$ is transitive it is enough to ``go one step'' in $G_\Gamma^i$ to get a set that is closed under $G_\Gamma^i$. This means that $t$ is closed under $G_\Gamma^i$ and therefore by definition of $\sim_i$ we have $\mathbb{C} \sim_i t$. Now by definition of $V$ and projectivity, we have $\mathbb{C} =_C t$ but $\mathbb{C} \neq_D t$. Thus $t$ is a witness for $\mathcal{M}_\Gamma, \mathbb{C} \nvDash \ensuremath{\textit{K}}v_i(C,D)$.
\end{proof}
Again the Truth Lemma also finishes the completeness proof. \end{proof}
\begin{figure}
\caption{Two canonical dependency graphs and the resulting canonical model.}
\label{fig:multi-canonical-constructs}
\end{figure}
\begin{example} Analogous to Example \ref{ex:canonical-graph}, the following illustrates the multi-agent version of our canonical construction. Consider the maximally consistent set $\Gamma = \{ \lnot \ensuremath{\textit{K}}v_1(d), \ensuremath{\textit{K}}v_1(c,d), \lnot \ensuremath{\textit{K}}v_1(d,c), \lnot \ensuremath{\textit{K}}v_2(c), \lnot \ensuremath{\textit{K}}v_2(c,d), \ensuremath{\textit{K}}v_2(d,c), \dots \}$. Note that agents $1$ and $2$ do not differ in which values they know right now, but there is a difference in what they will learn from inspections of $c$ and $d$. The two canonical dependency graphs generated from $\Gamma$ are shown in Figure \ref{fig:multi-canonical-constructs}. Again for clarity we only draw the non-inclusion arrows. The subsets of $\mathbb{C}=\{c,d\}$ closed under the graphs are thus $\{ \{c,d\}, \{d\}, \varnothing \}$ and $\{ \{c,d\}, \{c\}, \varnothing \}$ for agents $1$ and $2$ respectively, inducing the equivalence relations shown in Figure \ref{fig:multi-canonical-constructs}.
\end{example}
It is also not hard to find the right notion of bisimulation for $\mathbb{SPIL}$.
\begin{definition} Given two models $(S,\mathcal{D},V,R)$ and $(S',\mathcal{D}',V',R')$, a relation $Z \subseteq S \times S'$ is a \emph{multi-agent bisimulation} iff for all $s Z s'$ we have (i) For all finite $C \subseteq \mathbb{C}$, all $d \in \mathbb{C}$ and all agents $i$: If there is a $t \in S$ such that $s \sim_i t$ and $s =_C t$ and $s \neq_d t$, then there is a $t' \in S'$ such that $t Z t'$ and $s' \sim_i t'$ and $s' =_C t'$ and $s' \neq_d t'$; and (ii) Vice versa.
\end{definition}
\begin{theorem} Two pointed models satisfy the same formulas of the multi-agent language $\mathcal{L}^I_1$ iff there is a multi-agent bisimulation linking them. \end{theorem}
As it is very similar to the one of Theorem \ref{thm:PIL-bisim}, we omit the proof here.
\section{Future Work}
Between our specific approach and the general language of \cite{Baltag2016:KVV}, a lot can still be explored. An advantage of having a weaker language with explicit operators, instead of encoding them in a more general language, is that we can clearly see the properties of those operators showing up as intuitive axioms. The framework can be extended in different directions. We could for example add equalities $c=d$ to the language, together with knowledge $\ensuremath{\textit{K}}(c=d)$ and announcement $[c=d]$. No changes to the models are needed, but axiomatizing these operators does not seem straightforward. Alternatively, just like Plaza added $\ensuremath{\textit{K}}v$ to PAL, we can also add $\ensuremath{\textit{K}}$ to PIL. A natural next language to study is thus $\textsf{PIL} + \ensuremath{\textit{K}}$ from Table \ref{table:LanguageComparison} above, given by
$$\varphi ::= \top \mid \lnot\varphi \mid \varphi\land\varphi \mid \ensuremath{\textit{K}}v_i c \mid \ensuremath{\textit{K}}_i \varphi \mid [c]\varphi.$$
Note that in this language we can also express \emph{knowledge of} dependency, in contrast to \textit{de facto} dependency. For example, $\ensuremath{\textit{K}}_i [c]\ensuremath{\textit{K}}v_i d$ expresses that agent $i$ knows that $d$ functionally depends on $c$, while $[c]\ensuremath{\textit{K}}v_i d$ expresses that the value of $d$ (given the information state of $i$) is determined by the \textit{actual value} of $c$ \textit{de facto}. In particular, the latter does not imply that $i$ knows this. The agent can still consider other values of $c$ possible that would not determine the value of $d$. To see the difference technically, we can spell out the truth condition for $\ensuremath{\textit{K}}_i[c]\ensuremath{\textit{K}}v_i(d)$ under standard Kripke semantics for $\ensuremath{\textit{K}}_i$ on S5 models:
\[ \mathcal{M},s\vDash \ensuremath{\textit{K}}_i[c]\ensuremath{\textit{K}}v_i(d) \Leftrightarrow \text{ for all }t_1\sim_is, t_2\sim_i s: t_1=_{c} t_2 \implies t_1=_{d} t_2 \]
Now consider Example~\ref{ex:pointed-difference}: $[c]\ensuremath{\textit{K}}v(d)$ holds in the first row, but $\ensuremath{\textit{K}}[c]\ensuremath{\textit{K}}v(d)$ does not hold, since the semantics of $\ensuremath{\textit{K}}$ require $[c]\ensuremath{\textit{K}}v(d)$ to hold at \textit{all} worlds considered possible by the agent. This also shows that $[c]\ensuremath{\textit{K}}v(d)$ is not positively introspective (i.e.~the formula $[c]\ensuremath{\textit{K}}v(d)\to \ensuremath{\textit{K}}_i [c]\ensuremath{\textit{K}}v(d)$ is not valid), and it is essentially not a subjective epistemic formula. In this way, $\ensuremath{\textit{K}} [c]\ensuremath{\textit{K}}v(d)$ can also be viewed as the atomic formula $=\!\!(c, d)$ in \textit{dependence logic} (DL) from \cite{Depbook}. A \textit{team model} of DL can be viewed as the set of epistemically accessible worlds, i.e., a single-agent model in our case. The connection with dependence logic also brings PIL closer to the first-order variant of \textit{epistemic inquisitive logic} by \cite{CiardelliR15}, where knowledge of entailment of interrogatives can also be viewed as knowledge of dependency. For a detailed comparison with our approach, see \cite[Sec. 6.7.4]{Ivano16}.
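The distinction between $[c]\ensuremath{\textit{K}}v(d)$ and $\ensuremath{\textit{K}}[c]\ensuremath{\textit{K}}v(d)$ can also be checked mechanically. The following small Python script is only an illustrative sketch (it is not part of the formal development; the encoding of worlds as dictionaries and the function names are ours): it evaluates both formulas on the four-row model of Example~\ref{ex:pointed-difference} for a single agent whose accessibility relation is the total relation on the set of worlds.
\begin{verbatim}
# Minimal model checker for [c]Kv(d) and K[c]Kv(d) on the
# four-row model of Example "Pointed Models Make a Difference".
worlds = [
    {'c': 1, 'd': 1, 'e': 3},
    {'c': 1, 'd': 1, 'e': 2},
    {'c': 2, 'd': 2, 'e': 1},
    {'c': 2, 'd': 3, 'e': 1},
]

def kv(ws, s, d):
    """Kv(d) at world s: all worlds in ws agree with s on d."""
    return all(t[d] == s[d] for t in ws)

def inspect(ws, s, c):
    """Public inspection of c at s: keep the worlds agreeing with s on c."""
    return [t for t in ws if t[c] == s[c]]

def box_kv(ws, s, c, d):
    """[c]Kv(d) at s."""
    return kv(inspect(ws, s, c), s, d)

def know_box_kv(ws, s, c, d):
    """K[c]Kv(d) at s, for an agent whose accessibility is all of ws."""
    return all(box_kv(ws, t, c, d) for t in ws)

for i, s in enumerate(worlds):
    print(f"row {i}: [c]Kv(d) = {box_kv(worlds, s, 'c', 'd')}, "
          f"K[c]Kv(d) = {know_box_kv(worlds, s, 'c', 'd')}")
\end{verbatim}
As expected, $[c]\ensuremath{\textit{K}}v(d)$ holds at the first two rows and fails at the last two, so $\ensuremath{\textit{K}}[c]\ensuremath{\textit{K}}v(d)$ fails at every row.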
Another approach is to make the dependency more explicit and include functions in the syntax. In \cite{Ding15thesis} a functional dependency operator $\ensuremath{\textit{K}}f_i$ is added to the epistemic language with $\ensuremath{\textit{K}}v_i$ operators: $\ensuremath{\textit{K}}f_i(c, d):= \exists f \ensuremath{\textit{K}}_i(d=f(c))$, where $f$ ranges over a pool of functions.
Finally, there is an independent but related line of work on (in)dependency of variables using predicates, see for example \cite{MoreN10,Naumov12,NaumovN14,HarNau13:FunDepStratG}. In particular, \cite{NaumovN14} also uses a notion of dependency as an epistemic implication ``Knowing $c$ implies knowing $d$.'', similar to our formula $\ensuremath{\textit{K}}v(c,d)$. In \cite{HarNau13:FunDepStratG} a ``dependency graph'' is also used to describe how different variables, in this case payoff functions in strategic games, may depend on each other. Note, however, that these graphs are not the same as our canonical dependency graphs from Definition \ref{def:canonical-g-and-m}. Our graphs are directed and describe determination between sets of variables. In contrast, the graphs in \cite{HarNau13:FunDepStratG} are undirected and consist of singleton nodes for each player in a game. We leave a more detailed comparison for a future occasion.
\subsubsection{Acknowledgements.} We thank the following people for useful comments on this work: Alexandru Baltag, Peter van Emde Boas, Hans van Ditmarsch, Jie Fan, Kai Li and our anonymous reviewers.
\end{document}
\begin{document}
\begin{abstract} For given integers $m,n \geq 2$ there are examples of ideals $I$ of complete determinantal local rings $(R,\mathfrak{m}), \dim R = m+n-1, \grade I = n-1,$ with the canonical module $\omega_R$ and the property that the socle dimensions of $H^{m+n-2}_I(\omega_R)$ and $H^m_{\mathfrak{m}}(H^{n-1}_I(\omega_R))$ are not finite. In the case of $m = n$, i.e. a Gorenstein ring, the socle dimensions provide further information about the $\tau$-numbers as studied in \cite{MS}. Moreover, the endomorphism ring of $H^{n-1}_I(\omega_R)$ is studied and shown to be an $R$-algebra of finite type but not finitely generated as $R$-module, generalizing an example of \cite{Sp6}. \end{abstract}
\subjclass[2010]{Primary: 13D45; Secondary: 13H10, 13J10}
\keywords{local cohomology, determinantal ring, ring of endomorphisms}
\maketitle
\section{Introduction}
Let $I$ denote an ideal of a local ring $(R,\mathfrak{m})$ with $\Bbbk = R/\mathfrak{m}$ its residue field. Let $M$ be a finitely generated $R$-module, and let $H^i_I(M), i \in \mathbb{Z},$ denote the local cohomology modules of $M$ with respect to $I$ (see \cite{Ga2} or \cite{BrS} for definitions). By $\grade (I,M)$ we denote the length of a maximal $M$-regular sequence contained in $I$. In the case where $M = R$ is an equicharacteristic complete regular local ring, the following is known:
\begin{itemize}
\item[(a)] The Bass numbers $\dim_{\Bbbk} \Ext_R^j(\Bbbk,H^i_I(R))$ are finite (see \cite{HS} and \cite{Lg1}).
\item[(b)] The natural homomorphism $R \to \Hom_R(H^c_I(R),H^c_I(R)), c = \grade I,$ is an isomorphism if and only if $H^i_I(R) = 0$ for $i = d-1,d,$ and $c < d-1, d = \dim R$ (see \cite{Sp6}).
\end{itemize}
The socle $\Hom_R(\Bbbk, H^i_I(M))$ is in general not finite dimensional, as was first shown by R. Hartshorne (see \cite{Hr4}), disproving a question of A. Grothendieck about cofiniteness of local cohomology, i.e. the finiteness of $\Hom_R(R/I, H^i_I(R))$. In their paper Huneke and Koh (see \cite{HK}) studied cofiniteness of local cohomology modules for various ideals. Moreover they showed that for a field $\Bbbk$ of characteristic zero and a polynomial ring $R = \Bbbk[x_{i,j} : 1 \leq i \leq 2, 1 \leq j \leq 3]$ the local cohomology $H^3_I(R)$ is not $I$-cofinite for $I$ the ideal generated by the maximal minors of the $2 \times 3$-matrix $(x_{i,j})$. A large class of local cohomology modules with infinite socle has been constructed by T. Marley and C. Vassilev (see \cite{MV}).
The main topic of the present note is to discuss and relate the properties (a) and (b) above in the non-smooth situation of certain determinantal rings. More precisely:
\begin{theorem} \label{rem-1} Let $R$ denote the completion of the coordinate ring of the variety defined by the $2\times2$ minors of an $m\times n$-matrix of variables; therefore $\dim R = m+n-1$. For the ideal $I$ generated by the first $n-1$ columns we have $\dim R/I = m$ and $\grade I = n-1$. Let $\omega_R$ denote the canonical module of $R$.
\begin{itemize}
\item[(a)] $\Hom_R(\Bbbk, H^{m+n-2}_I(\omega_R))$ and $\Hom_R(\Bbbk, H^{m+n-2}_I(R))$ are not finite dimensional, that is, $I$ is not cofinite; moreover $H^i_I(\omega_R) = 0$ for $i \not= n-1,m+n-2$.
\item[(b)] $\Hom_R(H^{n-1}_I(\omega_R),H^{n-1}_I(\omega_R))$ is an $R$-algebra of finite type, not finitely generated over $R$.
\item[(c)] $\Hom_R(\Bbbk,H^m_{\mathfrak{m}}(H^{n-1}_I(\omega_R)))$ and $\Hom_R(\Bbbk,H^0_{\mathfrak{m}}(H^{m+n-2}_I(\omega_R)))$ are not finite dimensional.
\end{itemize}
\end{theorem}
In the case of $m=n$ the ring $R$ is a Gorenstein ring and $\omega_R \cong R$. For $m=n=2$ we recover R. Hartshorne's example (see \cite{Hr4}) by completely different arguments. For an ideal $I$ the cohomological dimension is defined by $\cd I = \sup \{i \in \mathbb{N}\mid H^i_I(R) \not= 0\}$ (as introduced by R. Hartshorne, see \cite{Hr2}). Whence $\cd I = m+n-2$ for the examples above. That is, non-cofiniteness may occur at the highest non-vanishing cohomological level. In the case of a $d$-dimensional Gorenstein ring $(R,\mathfrak{m})$ the socle dimensions $\dim_{\Bbbk}\Hom_R(\Bbbk, H^i_{\mathfrak{m}}(H^{d-j}_I(R)))$ are called the $\tau$-numbers of type $(i,j)$ of $I$ (see \cite{MS}). In the case of a regular local ring containing a field they coincide with the Lyubeznik numbers introduced by Lyubeznik in \cite{Lg1} (see \cite[3.5]{MS} for the details). Therefore, the above results yield further information about these $\tau$-numbers.
In Section 2 we prove the essentials about local cohomology needed for our purposes. Theorem \ref{thm-1} is the technical result needed for our constructions. Furthermore, we introduce the basics for the construction of our examples, based on some results about Segre varieties. In Section 3 we prove the statements of the examples introduced in Section 2. For the needed results from commutative algebra we refer to Matsumura's book \cite{Mh}. For a few homological arguments we refer to \cite{SS}, and for some facts about determinantal ideals to \cite{BrH}.
\section{Preliminaries and constructions}
At the beginning let us recall a few basics from local duality.
\begin{notation and recalls} \label{not-1} (A) Let $(R,\mathfrak{m})$ denote a $d$-dimensional Cohen-Macaulay ring with $E = E_R(R/\mathfrak{m})$ the injective hull of its residue field. Suppose that $R$ possesses a normalized dualizing complex $D$ in the sense of \cite[11.4.6]{SS}. Then $R$ admits a canonical module $\omega_R$ (see \cite{Sp1}). Moreover, for an arbitrary $R$-module $X$ there is the following Local Duality Theorem
\[ H^i_{\mathfrak{m}}(X) \cong \Hom_R(\Ext_R^{d-i}(X,\omega_R),E) \]
for all $i \geq 0$ (see e.g. \cite[10.4.3]{SS}). For further properties of $\omega_R$ we refer to \cite{SS} and \cite{Sp1}. \\
(B) With the notation of (A) let $M$ denote a finitely generated $R$-module. Then $\omega_R(M)$, the canonical module of $M$ (in the sense of \cite{Sp1}), exists and there is an isomorphism $\Ext_R^c(M,\omega_R) \cong \omega_R(M)$, where $c = \dim R - \dim_RM$ (see also \cite{Sp1} for more details). \\
(C) With the notation of (A) it follows that a minimal injective resolution $D$ of $\omega_R$ (so that $\omega_R \qism D$) is a normalized dualizing complex in the sense of \cite[11.4.6]{SS}. Let $M$ denote a finitely generated $R$-module. Then there is a natural morphism
\[ M \to \Hom_R(\Hom_R(M,D),D) \]
that is an isomorphism in cohomology. With $c = d -\dim_RM$ it induces a natural homomorphism
\[ M \to \Ext_R^c(\Ext_R^c(M,\omega_R),\omega_R) = \omega_R(\omega_R(M)). \]
(D) It is known that for an $R$-module $M$ the module $\omega_R(\omega_R(M))$ is the $S_2$-fication of $M$ (see \cite{Sp1}). Moreover, the natural homomorphism $M \to \omega_R(\omega_R(M))$ induces a short exact sequence
\[ 0 \to M/u(M) \to \omega_R(\omega_R(M)) \to C \to 0, \]
where $u(M)$ denotes the intersection of those primary components of $0$ in $M$ whose associated primes are of highest dimension $\dim_RM$, and where $\dim_R C \leq \dim_R M -2$ for the cokernel $C$.
Let $I \subset R$ be an ideal of grade $c$ in the Cohen-Macaulay ring $R$. Then we get an inverse system of homomorphisms $R/I^{\alpha} \to \Ext_R^c(\Ext_R^c(R/I^{\alpha},\omega_R), \omega_R)$. By passing to the inverse limit there is a homomorphism
\[ \hat{R}^I \to B := \varprojlim \Ext_R^c(\Ext_R^c(R/I^{\alpha},\omega_R), \omega_R) \cong \Ext_R^c(H^c_I(\omega_R),\omega_R), \]
where $\hat{R}^I$ denotes the $I$-adic completion of $R$. Note that the maps of the inverse system are induced by $R/I^{\alpha +1} \to R/I^{\alpha}$. By a slight modification of the arguments of \cite[Section 3]{Sp6} it follows that $B$ is a commutative ring. \\
(E) For an arbitrary $R$-module $X$ there is a natural homomorphism $R \to \Hom_R(X,X), r \mapsto \phi(r)$, where $\phi(r)$ denotes the multiplication by $r \in R$ on $X$. Its kernel is $\Ann_RX$. In general the endomorphism ring $\Hom_R(X,X)$ is not commutative.
\end{notation and recalls}
For the study of the ring $B$ in \ref{not-1} (D) we need a few more intrinsic properties.
\begin{theorem} \label{thm-1} Let $(R,\mathfrak{m},\Bbbk)$ be a $d$-dimensional complete local Cohen-Macaulay ring. Let $\omega_R$ denote its canonical module. For an ideal $I \subset R$ with $c = \grade I$ we have:
\begin{itemize}
\item[(a)] There are isomorphisms
\[ \Hom_R(H^c_I(\omega_R),H^c_I(\omega_R)) \cong \Ext_R^c(H^c_I(\omega_R),\omega_R) \cong \Hom_R(H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R)),E). \]
\item[(b)] The endomorphism ring $\Hom_R(H^c_I(\omega_R),H^c_I(\omega_R))$ is a finitely generated $R$-module if and only if $\dim_{\Bbbk} \Hom_R(\Bbbk,H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R)))$ is finite.
\item[(c)] The natural ring homomorphism $\rho :R \to \Hom_R(H^c_I(\omega_R),H^c_I(\omega_R))$ is onto if and only if $\dim_{\Bbbk} \Hom_R(\Bbbk,H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R))) = 1$.
\end{itemize}
\end{theorem}
\begin{proof} (a): In order to prove the first isomorphism we refer to \cite[2.2 (d)]{Sp8}. In order to prove the second one note that
\[ \Ext_R^c(H^c_I(\omega_R),\omega_R) \cong \Hom_R(H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R)),E) \]
by the Local Duality Theorem for the complete local Cohen-Macaulay ring $R$ (see \ref{not-1} (A)). \\
(b): By virtue of (a) we have isomorphisms $B/\mathfrak{m}^{\alpha}B \cong \Hom_R(\Hom_R(R/\mathfrak{m}^{\alpha},H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R))),E)$ that form an inverse system. By passing to the inverse limit there are isomorphisms
\[ \hat{B}^{\mathfrak{m}} = \varprojlim \Hom_R(\Hom_R(R/\mathfrak{m}^{\alpha},H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R))),E) \cong \Hom_R(H^0_{\mathfrak{m}}(H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R))),E) \cong B \]
because of $\Supp_R H^{d-c}_{\mathfrak{m}}(H^c_I(\omega_R)) \subseteq V(\mathfrak{m})$. That is, $B$ is $\mathfrak{m}$-adically complete. Then $B$ is a finitely generated $R$-module if and only if $\dim_{\Bbbk} B/\mathfrak{m}B < \infty$ (see e.g. \cite[Theorem 8.4]{Mh}). \\
(c): Since $H^c_I(\omega_R) \not= 0$ it follows that the natural homomorphism
\[ \rho \otimes 1_{\Bbbk}: R\otimes_R \Bbbk \to \Hom_R(H^c_I(\omega_R),H^c_I(\omega_R)) \otimes_R\Bbbk \]
is not zero. Then the statement follows by (b). \end{proof}
\begin{construction} \label{con-1} (A) Let $\Bbbk$ denote a field. Let $m, n \geq 2$ denote integers and let
$$ \mathcal{X} = \begin{pmatrix} x_{11} & \ldots & x_{1n} \\ \vdots & \ddots & \vdots\\ x_{m1} & \ldots & x_{mn} \end{pmatrix} $$
denote an $m \times n$ matrix of $mn$ variables over $\Bbbk$. Let $P = \Bbbk[[\mathcal{X}]]$ denote the formal power series ring in the $mn$ variables of $\mathcal{X}$.
We define $R = P/I_2$, where $I_2 = I_2(\mathcal{X})$ denotes the ideal generated by the $2 \times 2$ minors of $\mathcal{X}$. Then $R$ is a local Cohen-Macaulay domain of dimension $m+n-1$ (see \cite{HR}).\\
(B) We put $\underline x_i = x_{1i}, \ldots, x_{mi}, i = 1,\ldots,n,$ the elements of the $i$-th column of $\mathcal{X}$ and $\underline x = \underline x_n$ the elements of the last column. We define $I = (\underline x_1, \ldots,\underline x_{n-1})R$, the ideal generated by the elements of the first $n-1$ columns of $\mathcal{X}$. Then $\dim R/I = m$ and $\height I = \grade I = n-1$. In the following we recall the form ring $Gr_I(R)$ of $I \subset R$. First we define an $m \times n$-matrix
\[ \mathcal{Y} = \begin{pmatrix} T_{11} & \ldots & T_{1,n-1}& x_{1n} \\ \vdots & \ddots & \vdots & \vdots \\ T_{m1} & \ldots & T_{m,n-1} & x_{mn}\end{pmatrix} \]
where $T_{ij}, 1 \leq i \leq m, 1 \leq j \leq n-1,$ are variables over $R$. Let $J = I_2(\mathcal{Y})$ denote the ideal generated by the $2 \times 2$-minors of $\mathcal{Y}$. Let $\mathcal{T}$ denote the set of all variables $T_{ij}, 1 \leq i \leq m, 1 \leq j \leq n-1,$ of degree 1 and let $\mathcal{G} =(R/I)[\mathcal{T}]$. Then consider the induced homogeneous map
\[ \Theta: \mathcal{G}/J \to Gr_I(R), \; T_{ij} +J \mapsto x_{ij} +I^2 \in I/I^2, \; 1 \leq i \leq m, 1 \leq j \leq n-1. \]
Note that it is well defined and surjective. Clearly, $\mathcal{G}/J$ is a Cohen-Macaulay determinantal domain with $\dim \mathcal{G}/J = m+n-1$, i.e. $J$ is a prime ideal. Moreover $Gr_I(R)$ is a ring with $\dim Gr_I(R) = m+n-1$. Let $\mathcal{J}$ be the preimage of $\ker \Theta$ in $\mathcal{G}$. Let $\mathcal{P} \in \Spec \mathcal{G}$ denote the preimage of a highest dimensional prime ideal in $Gr_I(R)$. Then $J \subseteq \mathcal{J} \subseteq \mathcal{P}$ and $\dim \mathcal{G}/J = \dim \mathcal{G}/\mathcal{P}$, and therefore $J = \mathcal{J}$, i.e. $\ker \Theta = 0$. That is, $\Theta$ is an isomorphism and $Gr_I(R)$ is a domain. \\
(C) Next we introduce the ring $S = \Bbbk[[\underline x,\underline a]]$, where $\underline x = x_1,\ldots,x_m$ with $x_i= x_{in}, i = 1,\ldots,m,$ and $\underline a = a_1,\ldots,a_{n-1}$ are sequences of variables over $\Bbbk$. Then $S$ is a regular local ring with $\underline a = a_1,\ldots,a_{n-1}$ a regular sequence. Therefore $Gr_{(\underline a)}(S) \cong (S/\underline a S)[\underline T] \cong \Bbbk[[\underline x]][\underline T]$ with $\underline T = T_1,\ldots,T_{n-1}$ variables of degree $1$. We define a ring homomorphism
\[ \phi : R \to S, \; x_{ij} \mapsto a_j x_i \text{ for } i = 1,\ldots, m, \; j = 1,\ldots,n-1, \quad x_{in} \mapsto x_i \text{ for } i = 1,\ldots,m. \]
Note that $S$ is a domain and $\dim R = \dim S = m+n-1$. Therefore $\phi$ is injective. The homomorphism $\phi$ extends to a homomorphism
\[ \psi : Gr_I(R) \to Gr_{(\underline a)}(S), \; T_{ij} +J \mapsto x_i T_j, \; i = 1,\ldots, m, \; j = 1,\ldots,n-1. \]
Since both $Gr_I(R)$ and $Gr_{(\underline a)}(S)$ are domains of the same dimension, $\psi$ is injective.
\end{construction}
\section{Examples and remarks}
In the following we denote by $\omega_R$ the canonical module of a Cohen-Macaulay ring $R$. For basic definitions and properties of $\omega_R$ we refer to \cite[3.3]{BrH}, the generalization in \cite{Sp1} and the summary in \ref{not-1}. For a local ring $(R,\mathfrak{m})$ we denote by $E = E_R(R/\mathfrak{m})$ the injective hull of the residue field.
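For orientation, and since the case $m = n = 2$ reappears below as Hartshorne's example, let us unwind the construction of \ref{con-1} in this smallest case; the following is only an illustration and is not used in the proofs. Here
\[ \mathcal{X}=\begin{pmatrix} x_{11} & x_{12}\\ x_{21} & x_{22}\end{pmatrix}, \quad R = \Bbbk[[x_{11},x_{12},x_{21},x_{22}]]/(x_{11}x_{22}-x_{12}x_{21}), \quad I = (x_{11},x_{21})R, \]
so that $\dim R = 3$, $\dim R/I = 2$ and $\grade I = 1$. With $S = \Bbbk[[x_1,x_2,a_1]]$ the homomorphism $\phi$ is given by
\[ x_{11}\mapsto a_1x_1,\quad x_{21}\mapsto a_1x_2,\quad x_{12}\mapsto x_1,\quad x_{22}\mapsto x_2, \]
which indeed annihilates the single $2\times 2$ minor, since $x_{11}x_{22}-x_{12}x_{21}\mapsto a_1x_1x_2 - x_1a_1x_2 = 0$.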
\begin{example} \label{ex-1} We fix the notation of \ref{con-1}, that is, $R$ is the determinantal Cohen-Macaulay ring of dimension $m+n-1$ with $I = (\underline x_1, \ldots,\underline x_{n-1})R$ the ideal generated by the elements of the first $n-1$ columns of $\mathcal{X}$. Then there are the following results:
\begin{itemize}
\item[(a)] $H^i_I(\omega_R) \not= 0, i = n-1, m+n-2,$ that is, $\cd(I) = m+n-2$.
\item[(b)] $\dim_{\Bbbk} \Hom_R(\Bbbk, H^{m+n-2}_I(\omega_R))$ and $\dim_{\Bbbk} \Hom_R(\Bbbk, H^{m+n-2}_I(R))$ are not finite, that is, $I$ is not cofinite.
\item[(c)] $\Hom_R(H^{n-1}_I(\omega_R),H^{n-1}_I(\omega_R))$ is a commutative Noetherian ring, not finitely generated over $R$.
\item[(d)] $\dim_{\Bbbk} \Hom_R(\Bbbk,H^m_{\mathfrak{m}}(H^{n-1}_I(\omega_R)))$ and $\dim_{\Bbbk} \Hom_R(\Bbbk,H^0_{\mathfrak{m}}(H^{m+n-2}_I(\omega_R)))$ are not finite.
\end{itemize}
\end{example}
\begin{proof} The ring homomorphism $\phi$ of \ref{con-1} (C) induces ring homomorphisms
\[ \phi^{\alpha} : R/I^{\alpha} \to S/(\underline a)^{\alpha}S \; \mbox{ for all } \alpha \geq 1, \]
which are well-defined. We claim that $\phi^{\alpha}$ is injective for all $\alpha \geq 1$. The restriction $\psi^{\alpha}$ of the injection $\psi$ of \ref{con-1} (C) to degree $\alpha$ yields injective homomorphisms
$$ {\psi}^{\alpha} : I^{\alpha}/I^{\alpha +1} \to (\underline a)^{\alpha}S /(\underline a)^{\alpha +1}S \; \mbox{ for all }\alpha \geq 0. $$
Let $D_{\alpha} := \Coker \psi^{\alpha}$ and let $\underline x \cdot \underline a$ be the sequence of elements $\{x_i a_j : i = 1,\ldots, m, \; j = 1,\ldots,n-1\}$. We define $C_{\alpha} = \Coker \phi^{\alpha} = S/((\underline a)^{\alpha}S + S[[\underline x\cdot\underline a]])$ because of $\im \phi = S[[\underline x \cdot \underline a]]$. Now it follows that
\[ D_{\alpha} \cong ((\underline a)^{\alpha}S + S[[\underline x \cdot \underline a]])/((\underline a)^{\alpha +1}S +S[[\underline x \cdot \underline a]]) \]
as a consequence of the following commutative diagram with exact rows and the snake lemma
\[ \begin{array}{ccccccccc}
0 & \to & I^{\alpha}/I^{\alpha +1} & \to & R/I^{\alpha +1}& \to &R/I^{\alpha} & \to & 0 \\
& & \downarrow \psi^{\alpha} & & \downarrow \phi^{\alpha +1}& & \downarrow \phi^{\alpha}& & \\
0 & \to & (\underline a)^{\alpha}S/(\underline a)^{\alpha +1}S & \to & S/(\underline a)^{\alpha +1}S & \to & S/(\underline a)^{\alpha}S & \to & 0. \end{array} \]
Since $\psi^{\alpha}$ is injective for all $\alpha \geq 0$ (being the restriction of the injection $\psi$), it follows by induction that $\phi^{\alpha}$ is injective too. Therefore $D_{\alpha}$ is spanned over $\Bbbk$ by $(\underline x)^i (\underline a)^{\alpha}$ with $0 \leq i < \alpha$. As an $R$-module it is finitely generated. For the radical of the annihilator it follows that $\Rad_S \Ann_S D_{\alpha} = (\underline x ,\underline a)$, so that $D_{\alpha}, \alpha \geq 1,$ is an $S$-module of finite length. There is the short exact sequence
\[ 0 \to R/I^{\alpha} \to S/(\underline a)^{\alpha}S \to C_{\alpha} \to 0 \; \mbox{ for all } \alpha \geq 1. \]
Because of $C_{\alpha} \cong \oplus_{\beta = 1}^{\alpha-1} D_{\beta}$ it follows that $C_{\alpha}$ is an $R$-module of finite length. Since $S/(\underline a)^{\alpha}S$ is a finitely generated Cohen-Macaulay $R$-module it follows that $H^1_{\mathfrak{m}}(R/I^{\alpha}) \cong C_{\alpha}$ and $H^i_{\mathfrak{m}}(R/I^{\alpha}) = 0$ for all $i \not= 1,m$. The above short exact sequences form an inverse system of short exact sequences.
By passing to the inverse limit we get a short exact sequence
\[ 0 \to R \to S \to \varprojlim H^1_{\mathfrak{m}}(R/I^{\alpha}) \to 0. \]
Note that the maps in the inverse system $\{R/I^{\alpha}\}$ are surjective, so that the sequence stays exact in the limit. By virtue of the Local Duality Theorem for a Cohen-Macaulay ring (see \ref{not-1} (A)) there are isomorphisms
$$ H^i_{\mathfrak{m}}(R/I^{\alpha}) \cong \Hom_R(\Ext_R^{m+n-1-i}(R/I^{\alpha},\omega_R),E) $$
for all $\alpha \geq 1$ and by passing to the inverse limit $\varprojlim H^i_{\mathfrak{m}}(R/I^{\alpha}) \cong \Hom_R(H^{m+n-1-i}_I(\omega_R),E)$ for all $i$. Therefore $H^i_I(\omega_R) = 0$ for $i \not= n-1,m+n-2$ and
$$ \varprojlim H^1_{\mathfrak{m}}(R/I^{\alpha}) \cong \Hom_R(H^{m+n-2}_I(\omega_R),E) \not= 0. $$
Because $S$ is not a finitely generated $R$-module, $\Hom_R(H^{m+n-2}_I(\omega_R),E)$ is not finitely generated either. By the isomorphism
\[ \Bbbk \otimes_R \Hom_R(H^{m+n-2}_I(\omega_R),E) \cong \Hom_R(\Hom_R(\Bbbk, H^{m+n-2}_I(\omega_R)),E) \]
and by Matlis Duality $\dim_{\Bbbk} \Hom_R(\Bbbk, H^{m+n-2}_I(\omega_R))$ is not finite. Moreover, $H^{m+n-2}_I(\omega_R) \cong H^{m+n-2}_I(R) \otimes_R\omega_R$, so that $\Hom_R(H^{m+n-2}_I(\omega_R),E) \cong \Hom_R(\omega_R, \Hom_R(H^{m+n-2}_I(R),E))$. Because $\omega_R$ is a finitely generated $R$-module, $\Hom_R(H^{m+n-2}_I(R),E)$ is not finitely generated and -- as above -- $\dim_{\Bbbk} \Hom_R(\Bbbk, H^{m+n-2}_I(R))$ is not finite. That is, (a) and (b) of \ref{ex-1} are proved.
Now recall that $S/(\underline a)^{\alpha}S$ is an $m$-dimensional Cohen-Macaulay ring, finitely generated over $R$, and that $\dim_R C_{\alpha} = 0$. Therefore,
$$ S/(\underline a)^{\alpha}S \cong \omega_R(\omega_R(R/I^{\alpha})) \cong \Ext_R^{n-1}(\Ext_R^{n-1}(R/I^{\alpha}, \omega_R),\omega_R) $$
(in view of \ref{not-1} (D)). By passing to the inverse limit of the corresponding inverse system and by Theorem \ref{thm-1} (a) it follows that $S \cong \Hom_R(H^{n-1}_I(\omega_R),H^{n-1}_I(\omega_R))$, whence (c) is shown. The first claim in (d) follows by \ref{thm-1} (b) since $S$ is not finitely generated over $R$. The second one is clear by (b) since $\Supp_R H^{m+n-2}_I(R) = V(\mathfrak{m})$.
\end{proof}
For the case of $m = n$ in \ref{ex-1} we get that $R$ is a Gorenstein ring and therefore $\omega_R \cong R$. For $m = n= 2$ we recover Hartshorne's example in this different context with additional properties.
\begin{question} Let $I \subset R$ denote an ideal of a local ring $(R,\mathfrak{m})$ with $c = \grade I.$ We do not know whether the endomorphism ring $\Hom_R(H^c_I(R),H^c_I(R))$ is in general commutative and Noetherian. \end{question}
For further results about the endomorphism ring $\Hom_R(H^c_I(R),H^c_I(R))$ we refer also to \cite{Sp8}.
\end{document}
\begin{document}
\begin{frontmatter}
\title{On nonadiabatic SCF calculations of molecular properties}
\author{Francisco M. Fern\'{a}ndez \thanksref{FMF}}
\address{INIFTA (UNLP,CCT La Plata-CONICET), Divisi\'{o}n Qu\'{i}mica Te\'{o}rica,\\ Diag. 113 y 64 (S/N), Sucursal 4, Casilla de Correo 16,\\ 1900 La Plata, Argentina}
\thanks[FMF]{e--mail: [email protected]}
\begin{abstract}
We argue that the dynamic extended molecular orbital (DEMO) method may be less accurate than expected because the motion of the center of mass was not properly removed prior to the SCF calculation. Under such conditions the virial theorem is a misleading indication of the accuracy of the wavefunction.
\end{abstract}
\end{frontmatter}
The first step in any quantum--mechanical treatment of atomic and molecular systems is the separation of the motion of the center of mass. The nonrelativistic Hamiltonian operator with only Coulomb interactions between the constituent particles for such systems is of the form $\hat{H}_{T}=\hat{T}+V$, where $\hat{T}$ is the total kinetic--energy operator and $V$ is the sum of all the Coulomb interactions between the charged particles. By means of a straightforward linear combination of variables one rewrites the kinetic--energy operator as $\hat{T}=\hat{T}_{CM}+\hat{T}_{rel}$, where $\hat{T}_{CM}$ and $\hat{T}_{rel}$ are the operators for the kinetic energies of the center of mass and relative motion, respectively. Then one solves the Schr\"{o}dinger equation for the internal Hamiltonian $\hat{H}=\hat{T}_{rel}+V$\cite{BD03,KW66,KA00}.
It is well known that the eigenfunctions of $\hat{H}_{T}$ are not square integrable. For this reason, it is at first sight striking that Tachikawa et al\cite{TMNI98,TO00} carried out their dynamic extended molecular orbital (DEMO) method on the total Hamiltonian operator $\hat{H}_{T}$. A question therefore arises: how does this omission affect the results of the nonadiabatic calculation of molecular properties? In this letter we will try to answer it.
Suppose that we try to approximate the energy of the system by minimization of the variational energy $W=\left\langle \hat{H}_{T}\right\rangle =\left\langle \varphi \right| \hat{H}_{T}\left| \varphi \right\rangle /\left\langle \varphi \right| \left. \varphi \right\rangle $ as in the DEMO method of Tachikawa et al\cite{TMNI98,TO00}. If $\varphi $ depends only on translation--invariant coordinates then $W=W_{rel}=\left\langle \hat{H}\right\rangle $ because $\left\langle \hat{T}_{CM}\right\rangle =0$. However, if $\varphi $ depends on the coordinates of the particles in the laboratory--fixed set of axes, as in the case of the SCF wavefunction used by Tachikawa et al (see, for example, equations (10) and (7) in references \cite{TMNI98} and \cite{TO00}, respectively), then $W=\left\langle \hat{T}_{CM}\right\rangle +\left\langle \hat{H}\right\rangle >W_{rel}$. From the variational principle we know that $W_{rel}>E_{0}$, where $E_{0}$ is the exact ground--state energy of the atomic or molecular system. Therefore, the use of $\hat{H}_{T}$ (instead of $\hat{H}$) and a laboratory--fixed set of axes for the electronic and nuclear coordinates in $\varphi $ will result in an even larger estimate of the molecular energy.
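In summary, for such a laboratory--fixed trial function the argument of the preceding paragraph can be condensed into the chain of inequalities
\[ W=\left\langle \hat{T}_{CM}\right\rangle +\left\langle \hat{H}\right\rangle > W_{rel} > E_{0}. \]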
It is well--known that the SCF wavefunction satisfies the virial theorem\cite{FC87,TO00} $2\left\langle \hat{T}\right\rangle =-\left\langle V\right\rangle $, but in this case the theorem refers to the wrong kinetic energy because $\left\langle \hat{T}\right\rangle =\left\langle \hat{T}_{CM}\right\rangle +\left\langle \hat{T}_{rel}\right\rangle >\left\langle \hat{T}_{rel}\right\rangle $. Therefore, under such conditions the virial theorem may be a misleading indication of the quality of the wavefunction.
Table~\ref{tab:energies} shows the ground--state energies of some diatomic molecules calculated with the internal Hamiltonian operator\cite{KW66,KA00} and also the corresponding DEMO results of Tachikawa and Osamura\cite{TO00}, who did not remove the motion of the center of mass. As expected, the uncorrelated SCF energies are greater than those in which particle correlation is explicitly taken into account\cite{KW66,KA00}. In addition, we expect the energy difference $\Delta W=W^{TO}-W^{KA}$ (where TO and KA stand for Tachikawa and Osamura and Kinghorn and Adamowicz, respectively) to depend on the expectation value $\left\langle \hat{T}_{CM}\right\rangle $, which should decrease as the molecular mass increases. In fact, the third column of Table~\ref{tab:energies} shows this trend, as expected from the fact that $\left\langle \hat{T}_{CM}\right\rangle $ is inversely proportional to the total molecular mass. If this argument were correct then $\Delta W$ would exhibit an almost linear relation with the inverse of the mass number $A$. Fig.~\ref{fig:TCM} shows that this is in fact the case for the values of the energy difference shown in Table~\ref{tab:energies}.
In order to illustrate (and in some way corroborate) the arguments above we consider a simple but nontrivial toy example given by the anharmonic oscillator
\begin{equation}
\hat{H}_{T}=-\frac{\hbar ^{2}}{2m_{1}}\frac{\partial ^{2}}{\partial x_{1}^{2}}-\frac{\hbar ^{2}}{2m_{2}}\frac{\partial ^{2}}{\partial x_{2}^{2}}+k(x_{1}-x_{2})^{4} \label{eq:HT_osc}
\end{equation}
In terms of the relative $x=x_{1}-x_{2}$ and center--of--mass $X=(m_{1}x_{1}+m_{2}x_{2})/M$ coordinates, where $M=m_{1}+m_{2}$, we have
\begin{equation}
\hat{H}_{T}=-\frac{\hbar ^{2}}{2M}\frac{\partial ^{2}}{\partial X^{2}}-\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}}{\partial x^{2}}+kx^{4} \label{eq:HT_osc_CM}
\end{equation}
where $m=m_{1}m_{2}/M$ is the reduced mass. The first and second terms on the right--hand side of this equation are simple examples of the $\hat{T}_{CM}$ and $\hat{T}_{rel}$ operators, respectively, mentioned above. This toy model may seem rather too unrealistic at first sight, but it exhibits some of the necessary features. First, it is separable into center of mass and relative degrees of freedom. Second, we can apply simple variational functions of coordinates defined in the laboratory--fixed set of axes as well as functions of more convenient relative variables. Third, we can calculate the eigenvalues of the relative Hamiltonian operator quite accurately, which are useful for comparison.
To simplify the calculation we resort to the dimensionless coordinates $q_{i}=x_{i}/L$, where $L=[\hbar ^{2}/(m_{1}k)]^{1/6}$, and the total dimensionless Hamiltonian operator
\begin{equation}
\hat{H}_{Td}=\frac{m_{1}L^{2}}{\hbar ^{2}}\hat{H}_{T}=-\frac{1}{2}\frac{\partial ^{2}}{\partial q_{1}^{2}}-\frac{\beta }{2}\frac{\partial ^{2}}{\partial q_{2}^{2}}+(q_{1}-q_{2})^{4} \label{eq:HT_osc_d}
\end{equation}
where $\beta =m_{1}/m_{2}$.
Analogously, the relative Hamiltonian operator is given by \begin{equation} \hat{H}_{d}=-\frac{\beta +1}{2}\frac{\partial ^{2}}{\partial q^{2}}+q^{4}. \label{eq:H_osc_d} \end{equation} where $q=q_1-q_2$ is the translation--invariant coordinate. We first consider the variational function $\varphi _{r}(a,q)=\exp (-aq^{2})$ , where $a$ is a variational parameter, and the total dimensionless Hamiltonian operator (\ref{eq:HT_osc_d}). Notice that this trial function depends only on the relative coordinate $q$. The calculation is straightforward and we obtain $W_{r}=3\cdot 6^{1/3}(\beta +1)^{2/3}/8$. Obviously, the optimized trial function satisfies the virial theorem $\left\langle \hat{T}\right\rangle =\left\langle \hat{T} _{rel}\right\rangle =2\left\langle \hat{V}\right\rangle =6^{1/3}(\beta +1)^{2/3}/4$. In order to simulate an SCF function of the laboratory--fixed coordinates we consider $\varphi _{nr}(a,b,q_{1},q_{2})=\exp (-aq_{1}^{2}-bq_{2}^{2})$. The calculation is also straightforward and we obtain $W_{nr}=3\cdot 6^{1/3}(\sqrt{\beta }+1)^{2}/[8(\sqrt{\beta } +1)^{2/3}]>W_{r}$. The optimized trial function also satisfies the virial theorem $\left\langle \hat{T}\right\rangle =2\left\langle \hat{V} \right\rangle $, but in this case $\left\langle \hat{T}\right\rangle >\left\langle \hat{T}_{rel}\right\rangle $ as discussed above. Fig.~\ref{fig:energies} shows $W_{r}$, $W_{nr}$ and an accurate numerical calculation of the ground--state energy of the dimensionless relative Hamiltonian operator (\ref{eq:H_osc_d}) for $0<\beta <1$. We clearly appreciate the advantage of using a trial wavefunction of internal coordinates, or of properly removing the motion of the center of mass. We do not claim that the error in the DEMO calculation of molecular energies\cite{TMNI98,TO00} is as large as the one suggested by present anharmonic--oscillator, but this simple model shows (at least) two aspects of the problem. First, that the energy calculated by trial functions of the laboratory--fixed coordinates may be considerably greater than those coming from the use of relative coordinates if we do not remove the motion of the center of mass properly. And, second, that the virial theorem is not a reliable indication of the quality of the wavefunction if it is not based on the relative kinetic energy. We can carry out another numerical experiment with the toy model. The total mass in units of $m_{1}$ is $M/m_{1}=(1+\beta )/\beta $. Fig.~\ref {fig:energies2} shows that $\Delta W=W_{nr}-W_{r}$ depends almost linearly on $\beta /(1+\beta )$ (at least for some values of $\beta $) as suggested by the argument above about the actual molecular energies. We appreciate that the toy model gives us another hint on the difference between the actual molecular energies calculated by Kinghorn and Adamowicz\cite{KA00} and Tachikawa and Osamura\cite{TO00}. Summarizing: if we do not properly separate the motion of the center of mass in a calculation of atomic or molecular properties we expect inaccurate results unless the approximate trial function depends only on internal, translation--free coordinates. Otherwise, the effect of the kinetic energy of the center of mass will be a too large estimate of the energy. Under such conditions the virial theorem will result in a misleading indication of a supposedly accurate wavefunction. 
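For the interested reader, the following short numerical sketch (not part of the original argument; it assumes Python with NumPy) reproduces the variational energies $W_{r}$ and $W_{nr}$ given above together with a finite--difference estimate of the exact ground--state energy of the relative Hamiltonian operator (\ref{eq:H_osc_d}), so that the ordering $E_{0}<W_{r}<W_{nr}$ can be checked directly for any value of $\beta $:
\begin{verbatim}
import numpy as np

def w_r(beta):
    # Gaussian trial function of the relative coordinate q only
    return 3.0 * 6.0**(1.0/3.0) * (beta + 1.0)**(2.0/3.0) / 8.0

def w_nr(beta):
    # product of Gaussians in the laboratory-fixed coordinates q1, q2
    s = np.sqrt(beta) + 1.0
    return 3.0 * 6.0**(1.0/3.0) * s**2 / (8.0 * s**(2.0/3.0))

def e0_relative(beta, L=6.0, n=1201):
    # finite-difference ground state of H_d = -(beta+1)/2 d^2/dq^2 + q^4
    q, h = np.linspace(-L, L, n, retstep=True)
    main = (beta + 1.0)/h**2 + q**4                 # diagonal entries
    off = -(beta + 1.0)/(2.0*h**2) * np.ones(n - 1)  # off-diagonal entries
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

for beta in (0.1, 0.5, 1.0):
    print(beta, e0_relative(beta), w_r(beta), w_nr(beta))  # E0 < W_r < W_nr
\end{verbatim}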
These arguments apply to the case in which all the particles are allowed to move\cite{TO00} and may not be valid when some heavy particles\cite{TMNI98} (or all the nuclei\cite{TO00}) are considered as merely point charges (a sort of clamped nucleus approximation). \begin{table}[H] \caption{Nonadiabatic energies of some diatomic molecules} \label{tab:energies} \begin{center} \begin{tabular}{lll} \hline Ref. & \multicolumn{1}{c}{$W$} & \multicolumn{1}{c}{$\Delta W$} \\ \hline \multicolumn{3}{c}{H$_2$} \\ \hline KA00 & -1.1640250232 & 0.111654 \\ TO00 & -1.052371 & \\ \hline \multicolumn{3}{c}{HD} \\ \hline KW66 & -1.1654555 & \\ KA00 & -1.1654718927 & 0.102116 \\ TO00 & -1.063356 & \\ \hline \multicolumn{3}{c}{HT} \\ \hline KA00 & -1.1660020061 & 0.0987868 \\ TO00 & -1.068382 & \\ \hline \multicolumn{3}{c}{D$_2$} \\ \hline KA00 & -1.1671688033 & 0.0918650 \\ TO00 & -1.074137 & \\ \hline \multicolumn{3}{c}{DT} \\ \hline KA00 & -1.1678196334 & 0.0885406 \\ TO00 & -1.079279 & \\ \hline \multicolumn{3}{c}{T$_2$} \\ \hline KA00 & -1.1685356688 & 0.0844127 \\ TO00 & -1.084123 & \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \caption{$\Delta W$ vs. $A^{-1} \label{fig:TCM} \end{figure} \begin{figure} \caption{Ground--state energy of the anharmonic oscillator calculated with the variational function of the relative (solid line) and laboratory--fixed (dashed line) coordinates and the accurate numerical results (circles).} \label{fig:energies} \end{figure} \begin{figure} \caption{$\Delta W$ vs. $\beta/(1+\beta)$ for the ground--state of the anharmonic oscillator.} \label{fig:energies2} \end{figure} \end{document}
\begin{document} \setcounter{page}{1} \title[Well-posedness of Tricomi-Gellerstedt-Keldysh equations]{Well-posedness of Tricomi-Gellerstedt-Keldysh-type fractional elliptic problems} \author[M. Ruzhansky, B. T. Torebek, B. Kh. Turmetov]{Michael Ruzhansky, Berikbol T. Torebek$^*$, Batirkhan Kh. Turmetov} \address{\textcolor[rgb]{0.00,0.00,0.84}{Michael Ruzhansky \newline Department of Mathematics: Analysis, Logic and Discrete Mathematics \newline Ghent University, Krijgslaan 281, Ghent, Belgium \newline and \newline School of Mathematical Sciences \newline Queen Mary University of London, United Kingdom}} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}} \address{\textcolor[rgb]{0.00,0.00,0.84}{Berikbol T. Torebek \newline Department of Mathematics: Analysis, Logic and Discrete Mathematics \newline Ghent University, Krijgslaan 281, Ghent, Belgium \newline and \newline Al--Farabi Kazakh National University \newline Al--Farabi ave. 71, 050040, Almaty, Kazakhstan \newline and \newline Institute of Mathematics and Mathematical Modeling \newline 125 Pushkin str., 050010 Almaty, Kazakhstan}} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}} \address{\textcolor[rgb]{0.00,0.00,0.84}{Batirkhan Kh. Turmetov \newline Department of Mathematics, Akhmet Yasawi University, \newline 29 B.Sattarkhanov str., 161200 Turkistan, Kazakhstan}} \email{\textcolor[rgb]{0.00,0.00,0.84}{[email protected]}} \thanks{The first author was supported in parts by the FWO Odysseus Project 1 grant G.0H94.18N: Analysis and Partial Differential Equations, by the EPSRC grant EP/R003025/2 and by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021). The second author was supported in parts by the FWO Odysseus Project 1 grant G.0H94.18N: Analysis and Partial Differential Equations and by a grant No.AP08052046 from the Ministry of Science and Education of the Republic of Kazakhstan.} \let\thefootnote\relax\footnote{$^{*}$Corresponding author} \subjclass[2010]{34A08, 35R11, 74S25.} \keywords{Caputo derivative, fractional Laplacian, Kilbas-Saigo function, boundary value problem.} \begin{abstract} In this paper Tricomi-Gellerstedt-Keldysh-type fractional elliptic equations are studied. The results on the well-posedness of fractional elliptic boundary value problems are obtained for general positive operators with discrete spectrum and for Fourier multipliers with positive symbols. As examples, we discuss results in half-cylinder, star-shaped graph, half-space and other domains. 
\end{abstract} \maketitle \tableofcontents \section{Introduction} \subsection{Statement of the problem and historical background} The main purpose of this paper is to study the following fractional elliptic equation \begin{equation}\label{1.1} \mathcal{D}^{2\alpha } u(x,y) - x^{2\beta}\mathcal{L}u(x,y) = 0,\,\left({x,y} \right) \in \mathbb{R}_+\times\Omega,\end{equation} where $1/2<\alpha\leq 1,\,\beta>-\alpha,$ $\Omega\subset\mathbb{R}^N$ is a bounded domain with smooth boundary or $\Omega=\mathbb{R}^N$, and $\mathcal{D}_x^{2\alpha }$ means $\mathcal{D}_x^{2\alpha }= \partial_{0+,x}^\alpha \partial_{0+,x}^\alpha.$ Here $\partial_{0+,x}^\alpha$ is a Caputo fractional derivatives of order $\alpha:$ $$\partial_{0+,x}^\alpha u(x,y) = \frac{1}{{\Gamma \left(1- \alpha \right)}}\int\limits_0^x {\left( {x - s} \right)^{-\alpha} \partial_s u\left( s, y\right)} ds,$$ and $\mathcal L$ satisfies one of the following properties \begin{description} \item[(A)] a linear self-adjoint positive operator with a discrete spectrum $\{\lambda_k\geq 0:k\in \mathbb N\}$ on the Hilbert space $L^2(\Omega)$. According to $\lambda_k$, the operator $\mathcal L$ has the system of orthonormal eigenfunctions $\{e_k:k\in \mathbb N\}$ on $L^2(\Omega)$.\\ As an example of $\mathcal L$, we can consider all self-adjoint positive operators that were given in \cite{RuzT1, RuzT2}. For example: \begin{itemize} \item Dirichlet-Laplacian, Neumann-Laplacian or fractional Dirichlet-Laplacian in a bounded domain; \item Sturm-Liouville operator or its involution perturbations in a finite interval; \item integro-differential operators with fractional derivatives. \end{itemize} \item[(B)] Fourier multiplier $a(D)$ with symbol $a(\xi)\geq 0,\,\,\xi\in \mathbb{R}^N,$ i.e. $a(D)=\mathcal{F}^{-1}\left(a(\xi)\mathcal{F}\right),\,\,\xi\in \mathbb{R}^N,$ where $\mathcal{F}$ is the Fourier transform and $\mathcal{F}^{-1}$ is the inverse Fourier transform.\\ As an example of $\mathcal L$, we can consider all operators with nonnegative symbol (see \cite{Ruzh}). For example: \begin{itemize} \item Laplace operator $-\Delta$ with symbol $|\xi|^2$ or fractional Laplacian $(-\Delta)^s,\,s\in(0,1),$ with symbol $|\xi|^{2s}$; \item Linear partial differential operator $\sum\limits_{|\beta|\leq m} a_\beta D^\beta,\,\,a_\beta\geq 0,$ with nonnegative symbol $\sum\limits_{|\beta|\leq m} a_\beta \xi^\beta\geq 0,$ with $D^\beta=\left(\frac{1}{i}\partial_{x_1}\right)^{\beta_1}\cdot ... \cdot \left(\frac{1}{i}\partial_{x_N}\right)^{\beta_N}$. \end{itemize} \end{description} The need to study the boundary value problems for the fractional elliptic equations to describe the production processes in mathematical modeling of socio-economic systems was shown in \cite{Nak}. In \cite{Nak} the attention was drawn to the fact that the problem of finding a generalized two-factor Cobb-Douglas function is reduced to the Dirichlet problem for the fractional elliptic equation. 
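As an elementary illustration of the Caputo derivative $\partial_{0+,x}^{\alpha}$ defined above (this is only a numerical aside and is not used in the proofs below), the following sketch, assuming Python with SciPy, compares a direct quadrature of the defining integral with the classical closed form $\partial_{0+,x}^{\alpha}x^{p}=\frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}\,x^{p-\alpha}$, valid for $p>0$ and $0<\alpha<1$:
\begin{verbatim}
from scipy.integrate import quad
from scipy.special import gamma

def caputo(du, x, alpha):
    # Caputo derivative of order alpha in (0,1) at x > 0; du is u'.
    # weight='alg' lets QUADPACK handle the (x-s)^(-alpha) singularity.
    val, _ = quad(du, 0.0, x, weight='alg', wvar=(0.0, -alpha))
    return val / gamma(1.0 - alpha)

alpha, p = 0.75, 2.5
du = lambda s: p * s**(p - 1.0)           # derivative of u(x) = x**p
for x in (0.5, 1.0, 2.0):
    numeric = caputo(du, x, alpha)
    closed = gamma(p + 1.0)/gamma(p + 1.0 - alpha) * x**(p - alpha)
    print(x, numeric, closed)             # the two columns agree
\end{verbatim}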
The equation \eqref{1.1} is a generalization of the following well-known equations: \begin{itemize} \item If $\alpha=1,$ $\beta=0$ and $\mathcal{L}=-\Delta=-\sum\limits_{j=1}^n\frac{\partial^2}{\partial y^2_j},$ then the equation \eqref{1.1} coincides with the classical Laplace equation $$u_{xx}(x,y)+\Delta_yu(x,y)=0,\,x>0,\,y\in\mathbb{R}^N;$$ \item If $N=1,$ $\alpha=1,$ $\beta=\frac{1}{2}$ and $\mathcal{L}=-\frac{\partial^2}{\partial y^2},$ then the equation \eqref{1.1} coincides with the classical Tricomi equation (\cite{Tricomi}) $$u_{xx}(x,y)+xu_{yy}(x,y)=0,\,x>0,\,y\in\mathbb{R};$$ \item If $N=1,$ $\alpha=1,$ $\beta=m>0$ and $\mathcal{L}=-\frac{\partial^2}{\partial y^2},$ then the equation \eqref{1.1} coincides with the classical Gellerstedt equation (\cite{Geller}) $$u_{xx}(x,y)+x^{m}u_{yy}(x,y)=0,\,x>0,\,y\in\mathbb{R};$$ \item If $N=1,$ $\alpha=1,$ $\beta=-k\in(-2,0)$ and $\mathcal{L}=-\frac{\partial^2}{\partial y^2},$ then the equation \eqref{1.1} coincides with the classical Keldysh equation (\cite{Keldysh}) $$u_{xx}(x,y)+x^{-k}u_{yy}(x,y)=0,\,x>0,\,y\in\mathbb{R}.$$ \end{itemize} The above equations are used in transonic gas dynamics \cite{Bers58}, and in mathematical models of cold plasma \cite{Otway}. Note that the study of Tricomi, Gellerstedt and Keldysh equations was done in many papers \cite{Alg, Gelf1, Gelf2, Gelf3, Mois, Xu}. The boundary value problems for the fractional elliptic equations are studied in \cite{Amb, Caff, Mas, Turm}. \subsection{Three-parameter Mittag-Leffler (Kilbas-Saigo) function} First, we recall the definition of the Kilbas-Saigo function (three-parameter Mittag-Leffler function) and some of its particular cases. \begin{itemize} \item {\bf Classical Mittag-Leffler function.} The classical Mittag-Leffler function $E_{\alpha,1}(z)$ defined by (\cite{M-L03}) \begin{equation*} E_{\alpha,1}(z)=\sum\limits_{k=0}^\infty \frac{z^k}{\Gamma(\alpha k+1)},\,\,\alpha>0,\, z\in \mathbb{C}, \end{equation*} is a natural extension of the exponential function $E_{1,1}(z)=\exp(z),$ and also of the hyperbolic cosine function $E_{2,1}(z)=\cosh{\sqrt{z}}.$ The most interesting properties of Mittag-Leffler function are associated with its upper-lower estimates for $0<\alpha<1$ as follows (\cite{TSim15}): \begin{equation}\label{MLF1} \frac{1}{1+\Gamma(1-\alpha)z}\leq E_{\alpha, 1}(-z)\leq \frac{1}{1+\frac{1}{\Gamma(1+\alpha)}z},\, z\geq 0. \end{equation} \end{itemize} \begin{itemize} \item {\bf Two-parameter Mittag-Leffler function.} The two-parameter Mittag-Leffler function $E_{\alpha,\beta}(z)$ is defined by \begin{equation*} E_{\alpha,\beta}(z)=\sum\limits_{k=0}^\infty \frac{z^k}{\Gamma(\alpha k+\beta)},\,\,\alpha>0,\, \beta>0,\, z\in \mathbb{C}. \end{equation*} This function, sometimes called a Mittag-Leffler-type function, first appeared in \cite{W05}. When $\beta=1$, $E_{\alpha,\beta}(z)$ coincides with the classical Mittag-Leffler function $E_{\alpha,1}(z).$ \end{itemize} \begin{itemize} \item {\bf Three-parameter (Kilbas-Saigo) Mittag-Leffler function.} Another generalization of the Mittag-Leffler function was introduced by Kilbas and Saigo \cite{KS95} in terms of a special function of the form \begin{equation} \label{SF-01} E_{\alpha, m, n}(z)=1+\sum_{k=1}^{\infty}\prod_{j=0}^{k-1}\frac{\Gamma(\alpha(jm+n)+1)}{\Gamma(\alpha(jm+n+1)+1)}\,z^{k}, \end{equation} where $\alpha,\, m$ are real numbers and $n\in \mathbb{C}$ such that \begin{equation}\label{cond1} \alpha>0,\, m>0,\, \alpha(jm+n)+1\neq -1,-2,-3,... (j\in\mathbb N_0). 
\end{equation} In particular, if $m = 1,$ the function $E_{\alpha, m, n}(z)$ is reduced to the two-parameter Mittag-Leffler function: \begin{equation*} E_{\alpha, 1, n}(z)=\Gamma(\alpha n+1)E_{\alpha, \alpha n+1}(z), \end{equation*} and if $m = 1, n=0,$ then it coincides with the classical Mittag-Leffler function: \begin{equation*} E_{\alpha, 1, 0}(z)=E_{\alpha, 1}(z). \end{equation*} Recently Simon et al. \cite{TSim19} obtained the following interesting estimates of the Kilbas-Saigo functions: \begin{equation}\label{estim1} \frac{1}{1+\Gamma(1-\alpha)z}\leq E_{\alpha, m, m-1}(-z)\leq \frac{1}{1+\frac{\Gamma(1+(m-1)\alpha)}{\Gamma(1+m\alpha)}z},\, z\geq 0, \end{equation} where $m>0$ and $0<\alpha<1$. \end{itemize} \subsection{Ill-posedness of the non-sequential problem} As generally $$\partial_x^\alpha \partial_x^\alpha\neq \partial_x^{2\alpha},$$ the equation \eqref{1.1} is different from the following non-sequential equation \begin{equation}\label{001} \partial_x^{2\alpha}u(x,y) - x^{2\beta}\mathcal{L}u(x,y) = 0,\,\left({x,y} \right) \in \mathbb{R}_+\times\Omega.\end{equation} However, we cannot consider the problem of bounded solutions of equation \eqref{001} in $x\in \mathbb{R}_+$, since for such class of functions, nontrivial solutions of equation \eqref{001} may not exist. We demonstrate this with the following example:\\ Let $1<2\alpha<2,$ $\beta=0,$ and $\mathcal{L}=-\Delta=-\sum\limits_{j=1}^n\frac{\partial^2}{\partial y^2_j}$ in \eqref{001}. Then using the Fourier transform to \eqref{001} with respect to $y$ we have \begin{equation}\label{002} \partial_x^{2\alpha}\hat{u}(x,\xi) - |\xi|^{2}\hat{u}(x,\xi) = 0,\,x>0,\,\xi\in\mathbb{R}^N.\end{equation} The general solution to the equation \eqref{002} has the form \cite[Example 4.10]{1} \begin{equation*}\hat{u}(x,\xi) = C_1(\xi) E_{\alpha, 1} \left( |\xi|^{2} x^{\alpha}\right) + C_2(\xi) t E_{\alpha, 2} \left( {|\xi|^{2} x^{\alpha}}\right),\end{equation*} where $C_1(\xi),$ $C_2(\xi)$ are arbitrary constants and $E_{\alpha,\beta}(z)$ is the Mittag-Leffler function. From the asymptotic estimate of the Mittag-Leffler function $$E_{\alpha,\beta}(z)\sim z^{\frac{1-\beta}{\alpha}}e^{z^{\frac{1}{\alpha}}},\,z\rightarrow\infty,$$ it follows that $$\lim\limits_{x\rightarrow\infty}E_{\alpha, 1} \left( |\xi|^{2s} x^{\alpha}\right)\rightarrow\infty\,\,\,\textrm{and}\,\,\, \lim\limits_{x\rightarrow\infty}E_{\alpha, 2} \left( |\xi|^{2s} x^{\alpha}\right)\rightarrow\infty.$$ Therefore, the equation \eqref{002} does not have a bounded solution in $x\in\mathbb{R}_+.$ \subsection{One dimensional fractional differential equation} Let $ 0 < \alpha \le 1 ,$ $ \mu $ is a positive real number. For further exposition we need to give some information about the exact solutions of differential equations of the form: \begin{equation}\label{2.1} \mathcal{D}^{2\alpha } h\left( x \right) - \mu^2 x^{2\beta} h\left( x \right) = 0,\,x > 0.\end{equation} Using the method of constructing the solution of the fractional-order differential equations developed in \cite{Tur1, Tur2}, one can show that the functions \begin{equation}\label{2.3} \left\{E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { \mu x^{\alpha+\beta} }\right), E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { -\mu x^{\alpha+\beta} } \right)\right\},\end{equation} are solutions of the equation \eqref{2.1}. It is easy to show that the functions \eqref{2.3} are linearly independent. 
Hence, the system of functions \eqref{2.3} is a fundamental system for the equation \eqref{2.1}, and therefore the general solution of this equation has the form: \begin{equation}\label{2.4}h\left(x\right) = C_1 E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { \mu x^{\alpha+\beta} }\right) + C_2 E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \mu x^{\alpha+\beta} }\right),\end{equation} where $C_1$ and $C_2 $ are arbitrary constants. It is easy to see that, if $ x \to +\infty ,$ then $$E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { \mu x^{\alpha+\beta} }\right)\to +\infty,$$ since \begin{equation}\label{2.5}E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { \mu x^{\alpha+\beta} }\right)\geq \frac{\mu\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}x^{\alpha+\beta},\,x>0.\end{equation} And for the function $E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \mu x^{\alpha+\beta} }\right),$ the following estimate holds (\cite{TSim19}): \begin{equation}\label{2.6}E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \mu x^{\alpha+\beta} }\right)\leq \frac{1}{1+\frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\mu x^{\alpha+\beta}},\, x>0.\end{equation} \section{Well-posedness in a bounded domain} Let $\mathcal{L}$ be a self-adjoint, positive operator with the discrete spectrum $\{\lambda_k\geq 0:\;k\in\mathbb N\}$ on $L^2(\Omega)$. The main assumption in this section is that the system of eigenfunctions $\{e_k\in L^2(\Omega):k\in\mathbb N\}$ of the operator $\mathcal{L}$ is an orthonormal basis in $L^2(\Omega)$. The Hilbert space $\mathcal{H}^\mathcal{L}(\Omega)$ is defined by $$\mathcal{H}^\mathcal{L}(\Omega)=\{u\in L^2(\Omega):\, \sum\limits_{k = 0}^\infty \lambda^2_k|(u,e_k)|^2<\infty\},$$ with the norm $$\|u\|^2_{\mathcal{H}^\mathcal{L}(\Omega)}=\sum\limits_{k = 0}^\infty \lambda^2_k|(u,e_k)|^2.$$ \begin{definition} The generalised solution of equation \eqref{1.1} in $\Omega\subset\mathbb{R}^N$ is a bounded function $u\in C\left(\mathbb{R}_+;L^2(\Omega)\right),$ such that $x^{-2\beta}\mathcal{D}_x^{2\alpha } u, \mathcal{L}u\in C\left(\mathbb{R}_+;L^2(\Omega)\right).$\end{definition} \begin{theorem}\label{th1}Let $\phi \in \mathcal{H}^\mathcal{L}(\Omega).$ Then the generalised solution of equation \eqref{1.1} satisfying conditions \begin{equation}\label{1.2} u(0,y)=\phi(y),\,y\in\Omega, \end{equation} and \begin{equation}\label{1.3} \lim\limits_{x\rightarrow+\infty}u(x,y)\,\,\,\,\text{is bounded for almost every}\,\,\,\,y\in\Omega, \end{equation} exists, it is unique and can be represented as \begin{equation}\label{1.5}u\left( {x,y} \right) = \sum\limits_{k = 0}^\infty {\phi_k E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right) e_k\left( y \right)},\,(x,y)\in [0,\infty)\times\Omega,\end{equation} where $\phi_k=\int\limits_\Omega {\phi\left( y \right) \overline{e_k \left( y \right)}dy},\,k\in\mathbb{Z}_+=0, 1, 2, ... ,$ and $E_{\alpha, m, l} \left( z \right)$ is a Kilbas-Saigo function. 
In addition, the solution $u$ satisfies the following estimates: \begin{align*}\|u\|_{C(\mathbb{R}_+;L^2(\Omega))}\leq\|\phi\|_{L^2(\Omega)},\end{align*} \begin{align*}\sup\limits_{x\in(0,\infty)}\left\|x^{-2\beta}\mathcal{D}_x^{2\alpha}u\left( {x, \cdot} \right)\right\|_{L^2(\Omega)} \leq \|\phi\|_{\mathcal{H}^\mathcal{L}(\Omega)},\end{align*} and \begin{align*}\sup\limits_{x\in(0,\infty)}\|\mathcal{L}u\left({x, \cdot} \right)\|_{L^2(\Omega)}\leq\|\phi\|_{\mathcal{H}^\mathcal{L}(\Omega)}.\end{align*} \end{theorem} \begin{remark} If in Theorem \ref{th1} we replace the boundedness condition \eqref{1.3} by condition \begin{equation}\label{1.4} \lim\limits_{x\rightarrow+\infty}u(x,y)=0,\,\,\,\,y\in\Omega, \end{equation} then the problem \eqref{1.1}, \eqref{1.2}, \eqref{1.4} for the self-adjoint operators $\mathcal{L}$ with nonnegative eigenvalues $\lambda_k\geq 0,\, k\in \mathbb{N},$ becomes ill-posed. Indeed, it is easy to show that the bounded solution to Problem \eqref{1.1}, \eqref{1.2} has the form \eqref{1.5}. However, if we take into account condition \eqref{1.4}, then, for the existence of a solution to problem \eqref{1.1}, \eqref{1.2}, \eqref{1.4}, it is necessary and sufficient to have the condition $$\int\limits_\Omega {\phi\left( y \right) dy}=0.$$ \end{remark} \subsection{Particular cases} We now specify Theorem \ref{th1} to several concrete cases. \subsubsection{Laplace equation in the half-strip and in the star-shaped graphs} Our first example will focus on the Laplace equation. $\bullet$ Let $\Omega=(0,1),$ $\alpha=1,$ $\beta=0$ and $$\mathcal{L}=-\frac{\partial^2}{\partial y^2},\,D(\mathcal{L}):=\{u\in W^1_2([0,1]),\, u(0)=u(1)=0\}.$$ Then the equation \eqref{1.1} coincides with the classical Laplace equation on the half-strip \begin{equation}\label{1.1a} u_{xx}(x,y) + u_{yy}(x,y) = 0,\,\left({x,y} \right) \in \mathbb{R}_+\times(0,1).\end{equation} It is known that the unique solution to problem \eqref{1.1a}, \eqref{1.2}, \eqref{1.3} is represented in the form \begin{equation*}u\left( {x,y} \right) = \sum\limits_{k = 1}^\infty \phi_k e^{-k\pi x} \sin k\pi y.\end{equation*} $\bullet$ Let $\Omega$ be a star-shaped metric graph consisting of $d$ segments of equal length, $\alpha=1,$ $\beta=0,$ and let $\mathcal{L}$ be a differential operator $\mathcal{L}=-\frac{\partial^2 v_j(y)}{\partial y^2},\,j=1,...,d,$ with boundary conditions \begin{align*}&v_j(0)=0,\,j=1,...,d,\\& v_1(\pi)=v_2(\pi)=...=v_d(\pi),\\& v'_1(\pi)+v'_2(\pi)+...+v'_d(\pi)=0.\end{align*} It is known (\cite{yang}) that the above operator is self-adjoint in $L^2_d([0,\pi])=\bigotimes\limits_{i=1}^dL^2([0,\pi])$ and has discrete spectrum $\lambda_k^d=\left(k-\frac{1}{2}\right)^2,\,k\in\mathbb{N}.$ Then the equation \eqref{1.1} coincides with the Laplace equation on the star-shaped graphs \begin{equation}\label{1.1aa} \Delta u(x,y)\equiv \Delta \left(\begin{array}{l}u_1(x,y)\\ u_2(x,y)\\ \vdots\\ u_d(x,y)\end{array}\right) = 0.\end{equation} Then the unique solution to problem \eqref{1.1aa}, \eqref{1.2}, \eqref{1.3} is represented in the form \begin{equation*}u(x,y)\equiv \left(\begin{array}{l}u_1(x,y)\\ u_2(x,y)\\ \vdots\\ u_d(x,y)\end{array}\right) = \sum\limits_{k = 1}^\infty \phi_k e^{- \left(k-\frac{1}{2}\right)x} \left(\begin{array}{l}1\\ 1\\ \vdots\\ 1\end{array}\right) \sin \left(k-\frac{1}{2}\right) y.\end{equation*} \subsubsection{Fractional analogue of the Laplace equation with involution} Let $\Omega=(-\pi,\pi),$ $\beta=0,$ and $$\mathcal{L}u(x)=-\frac{\partial^2}{\partial y^2}u(x)+\varepsilon 
\frac{\partial^2}{\partial y^2}u(-x),\,|\varepsilon|<1,$$ $$D(\mathcal{L}):=\{u\in W^1_2([-\pi,\pi]),\, u(-\pi)=u(\pi)=0\}.$$ Then the equation \eqref{1.1} coincides with the fractional analogue of the Laplace equation with involution on the half-strip \begin{equation}\label{1.1b} \mathcal{D}_x^{2\alpha }u(x,y) + u_{yy}(x,y) - \varepsilon u_{yy}(x,-y) = 0,\,\left({x,y} \right) \in \mathbb{R}_+\times(-\pi,\pi).\end{equation} It is known (\cite{Turm}) that there exist a unique solution to problem \eqref{1.1b}, \eqref{1.2}, \eqref{1.3} and it can be represented in the form \begin{equation*}u\left( {x,y} \right) = \sum\limits_{k = 1}^\infty \phi_k E_{\alpha,1}\left({-\left(1+(-1)^k\varepsilon\right)k\pi x^\alpha}\right) \sin k\pi y.\end{equation*} \subsubsection{Elliptic Tricomi and Gellerstedt equation} Let $\alpha=1,$ $\beta>-2,$ and $\mathcal{L}=-\frac{\partial^2}{\partial y^2},\,D(\mathcal{L}):=\{u\in W^1_2([0,1]),\, u(0)=u(1)=0\}.$ $\bullet$ If $\beta=1$ then the equation \eqref{1.1} coincides with the classical Tricomi equation \begin{equation}\label{1.1c}u_{xx}(x,y)+x u_{yy}(x,y)=0,\,x>0,\,y\in(0,1),\end{equation} and the unique solution to problem \eqref{1.1c}, \eqref{1.2}, \eqref{1.3} can be written as \begin{equation*}u\left( {x,y} \right) = \sum\limits_{k = 1}^\infty \phi_k\, \text{Ai}({-k\pi x}) \sin k\pi y,\end{equation*} where $\text{Ai}(z)$ is the Airy function. $\bullet$ If $\beta>-2$ then the equation \eqref{1.1} coincides with the classical Gellerstedt equation \begin{equation}\label{1.1cc}u_{xx}(x,y)+x^\beta u_{yy}(x,y)=0,\,x>0,\,y\in(0,1),\end{equation} and the unique solution to problem \eqref{1.1cc}, \eqref{1.2}, \eqref{1.3} can be written as (see \cite{Mois}) \begin{equation*}u\left( {x,y} \right) = \sum\limits_{k = 1}^\infty \phi_k\, \sqrt{x}K_{\frac{1}{\beta+2}}\left(\frac{2\pi k x^{\frac{2}{\beta+2}}}{\beta+2}\right) \sin k\pi y,\end{equation*} where $K_{\nu}(z)$ is the Macdonald function. 
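We also remark that the Kilbas-Saigo function entering \eqref{1.5} is easy to evaluate numerically from the series \eqref{SF-01}. The following sketch (an illustration only, assuming Python with SciPy; truncation at $200$ terms is ample for the moderate arguments used here) recovers the reduction $E_{1,1,0}(z)=e^{z}$ and checks the upper bound \eqref{estim1} for $E_{\alpha,m,m-1}(-z)$ with $m=1+\frac{\beta}{\alpha}$, as in \eqref{2.6}:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def kilbas_saigo(alpha, m, n, z, K=200):
    # truncated series (SF-01); log-gamma avoids overflow in the products
    total, logc = 1.0, 0.0
    for j in range(K):
        logc += gammaln(alpha*(j*m + n) + 1.0) \
                - gammaln(alpha*(j*m + n + 1.0) + 1.0)
        total += np.exp(logc) * z**(j + 1)
    return total

# (i) m = 1, n = 0, alpha = 1: reduction to the exponential function
for z in (-1.0, 0.5, 2.0):
    print(z, kilbas_saigo(1.0, 1.0, 0.0, z), np.exp(z))

# (ii) upper bound (estim1) for E_{alpha,m,m-1}(-z), z >= 0
alpha, beta = 0.75, 0.5
m = 1.0 + beta/alpha                      # so m - 1 = beta/alpha, cf. (2.6)
c = np.exp(gammaln(1.0 + (m - 1.0)*alpha) - gammaln(1.0 + m*alpha))
for z in (0.5, 2.0, 5.0):
    print(z, kilbas_saigo(alpha, m, m - 1.0, -z), 1.0/(1.0 + c*z))
\end{verbatim}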
\subsubsection{Fractional elliptic equation with variable coefficients} If $\beta=0$ and $$\mathcal{L}=(1-y)^\mu (1+y)^\mu D^\mu_{1-,y}\partial^\mu_{-1+,y},$$ $$u(-1)=I^{1-\mu}_{1-,y}\partial^\mu_{-1+,y}u(1)=0,$$ then the equation \eqref{1.1} coincides with the equation \begin{equation}\label{1.1d}u_{xx}(x,y)+(1-y)^\mu (1+y)^\mu D^\mu_{1-,y}\partial^\mu_{-1+,y}u(x,y)=0,\,x>0,\,y\in(-1,1),\end{equation} where $\mu\in(0,1),$ $D_{1-,y}^\mu$ is a right-side Riemann-Liouville fractional derivative of order $\mu\in(0,1)$ $$D_{1-,y}^\mu u(x,y) = \frac{1}{{\Gamma \left(1- \mu \right)}}\frac{\partial}{\partial y}\int\limits_{y}^1 {\left( {s - y} \right)^{-\mu} u\left( x, s\right)} ds,$$ $\partial_{-1+,y}^\mu$ is a left-side Caputo fractional derivative of order $\mu\in(0,1)$ $$\partial_{-1+,y}^\mu u(x,y) = \frac{1}{{\Gamma \left(1- \mu \right)}}\int\limits_{-1}^y {\left( {y - s} \right)^{-\mu} u_s\left( x, s\right)} ds,$$ $I_{1-,y}^{1-\mu}$ is a right-side Riemann-Liouville fractional integral of order $\mu\in(0,1)$ $$I_{1-,y}^{1-\mu} u(x,y) = \frac{1}{{\Gamma \left(1- \mu \right)}}\int\limits_{y}^1 {\left( {s - y} \right)^{-\mu} u\left( x, s\right)} ds.$$ The unique solution of problem \eqref{1.1d}, \eqref{1.2}, \eqref{1.3} can be written as \begin{equation*}u\left( {x,y} \right) = \sum\limits_{k = 1}^\infty \phi_k\, \exp\left({-\frac{\Gamma(k+\mu)}{\Gamma(k-\mu)} x}\right) (1+y)^\mu P^{-\mu,\mu}_{k-1}(y),\end{equation*} where $P^{-\mu,\mu}_{k-1}(y)$ is the Jacobi polynomial (\cite{JCP}) $$P^{-\mu,\mu}_{k-1}(y)=\sum\limits_{n=0}^{k-1}\left(\begin{array}{l}k-1-\mu\\k-1-n\end{array}\right) \left(\begin{array}{l}k-1+\mu\\n\end{array}\right)\left(\frac{y-1}{2}\right)^n\left(\frac{y+1}{2}\right)^{k-1-n}.$$ \subsection{Proof of Theorem \ref{th1}} \subsubsection{Existence of solution.} As $\mathcal{L}$ is self-adjoint in $L^2(\Omega),$ any solution of problem \eqref{1.1}, \eqref{1.2}--\eqref{1.3} can be represented as: \begin{equation}\label{3.2}u\left( {x,y} \right) = \sum\limits_{k = 0}^\infty {u_k \left( x \right) e_k\left( y \right)},\,(x,y)\in \mathbb{R}_+\times\Omega.\end{equation} It is clear that if $\phi\in \mathcal{H}^\mathcal{L}(\Omega)$, then it can be represented in the form \begin{equation}\phi\left( y \right) = \sum\limits_{k = 0}^\infty {\phi_k e_k \left( y \right)},\,y\in\Omega,\end{equation} where $\phi_k=\int\limits_\Omega {\phi\left( y \right) \overline{e_k \left( y \right)}dy}.$ Substituting function \eqref{3.2} into equation \eqref{1.1}, we obtain the following problem for $u_k(x),$ \begin{equation}\label{3.3}\mathcal{D}^{2\alpha } u_k \left( x \right) - \lambda_k x^{2\beta}u_k \left( x \right) = 0,\, x >0,\end{equation} \begin{equation}\label{3.4}u_k \left(0\right) = \phi_k,\, u_k \left( \infty \right)\leq C,\,C=const,\end{equation} where $\lambda_k>0$ are eigenvalues of $\mathcal{L}$. According to formula \eqref{2.4}, the general solution to equation \eqref{3.3} has the form: \begin{equation*}u_k \left( x \right) = C_1 E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { \sqrt{\lambda_k} x^{\alpha+\beta} }\right) + C_2 E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right),\end{equation*} where $C_1$ and $C_2 $ are arbitrary constants. 
Since $$E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { \sqrt{\lambda_k} x^{\alpha+\beta} }\right)\to +\infty,\,\,\, \text{as} \,\,\,x \to +\infty,$$ we have $C_1=0.$ Since $$E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( { -\sqrt{\lambda_k} x^{\alpha+\beta} }\right)\to 0,\,\,\, \text{as} \,\,\,x \to +\infty,$$ then by \eqref{3.4} we have \begin{equation}\label{3.6}u_k \left( x \right) = \phi_k E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right),\end{equation} hence \begin{equation*}u\left( {x,y} \right) = \sum\limits_{k = 0}^\infty {\phi_k E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right) e_k\left( y \right)},\,(x,y)\in \mathbb{R}_+\times\Omega.\end{equation*} \subsubsection{Convergence of solution.} The estimate \eqref{2.6} gives \begin{align*}|u_k \left( x \right)| \leq \frac{|\phi_k|}{1+\frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\sqrt{\lambda_k} x^{\alpha+\beta}},\end{align*} which implies \begin{align*}\sup\limits_{x\geq 0}\|u\left( {x, \cdot} \right)\|_{L^2(\Omega)}^2 &\leq \sup\limits_{x\geq 0}\sum\limits_{k = 0}^\infty {|\phi_k|^2 \left|E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right)\right|^2 \|e_k\|_{L^2(\Omega)}^2}\\& \leq \sup\limits_{x\geq 0}\sum\limits_{k = 0}^\infty \frac{|\phi_k|^2}{\left(1+\frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\sqrt{\lambda_k} x^{\alpha+\beta}\right)^2} \\&\leq \sum\limits_{k = 0}^\infty {|\phi_k|^2}=\|\phi\|_{L^2(\Omega)}^2<\infty,\end{align*} thanks to Parseval's identity. Let us calculate $\mathcal{D}_x^{2\alpha } u$ and $\mathcal{L}u.$ We have \begin{align*}\mathcal{D}_x^{2\alpha}u\left( {x,y} \right)& = \sum\limits_{k = 0}^\infty {\phi_k \mathcal{D}_x^{2\alpha}E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right) e_k\left( y \right)}\\&= x^{2\beta}\sum\limits_{k = 0}^\infty \lambda_k{\phi_k E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right) e_k\left( y \right)},\,(x,y)\in \mathbb{R}_+\times\Omega,\end{align*} and \begin{align*}\mathcal{L}u\left( {x,y} \right)& = \sum\limits_{k = 0}^\infty {\phi_k E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right) \mathcal{L}e_k\left( y \right)}\\&= \sum\limits_{k = 0}^\infty \lambda_k{\phi_k E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{\lambda_k} x^{\alpha+\beta} }\right) e_k\left( y \right)},\,(x,y)\in \mathbb{R}_+\times\Omega.\end{align*} Applying the above calculations and Parseval's identity we have \begin{align*}\sup\limits_{x\in(0,\infty)}\left\|x^{-2\beta}\mathcal{D}_x^{2\alpha}u\left( {x, \cdot} \right)\right\|^2_{L^2(\Omega)} \leq \sum\limits_{k = 0}^\infty \lambda_k^2{|\phi_k|^2 }=\|\phi\|^2_{\mathcal{H}^\mathcal{L}(\Omega)}<\infty,\end{align*} and \begin{align*}\sup\limits_{x\in(0,\infty)}\|\mathcal{L}u\left({x, \cdot} \right)\|^2_{L^2(\Omega)}\leq\sum\limits_{k = 0}^\infty \lambda_k^2{|\phi_k|^2}=\|\phi\|^2_{\mathcal{H}^\mathcal{L}(\Omega)}<\infty.\end{align*} \subsubsection{Uniqueness of solution.} Suppose that there are two solutions $u_1(x, y)$ and $u_2(x,y)$ of problem \eqref{1.1}, \eqref{1.2}--\eqref{1.3}. Let $$u(x,y)=u_1(x,y)-u_2(x,y).$$ Then $u(x,y)$ satisfies the equation \eqref{1.1} and homogeneous conditions \eqref{1.2}--\eqref{1.3}. 
Let us consider the function \begin{equation}\label{3.5}u_k(x)=\int\limits_\Omega u(x,y)\overline{e_k(y)}dy,\,k\in\mathbb{Z}_+,\,x\geq 0.\end{equation} Applying $\mathcal{D}^{2\alpha}$ to the function \eqref{3.5} by \eqref{1.1} we have \begin{align*}\mathcal{D}^{2\alpha}u_k(x)&=\int\limits_\Omega \mathcal{D}^{2\alpha}_xu(x,y)\overline{e_k(y)}dy =x^{2\beta}\int\limits_\Omega \mathcal{L}u(x,y)\overline{e_k(y)}dy \\& =x^{2\beta}\int\limits_\Omega u(x,y)\mathcal{L}\overline{e_k(y)}dy=x^{2\beta}\lambda_k\int\limits_\Omega u(x,y)\overline{e_k(y)}dy\\&= x^{2\beta}\lambda_k u_k(x),\,k\in\mathbb{Z}_+,\,x\geq 0.\end{align*} Also from \eqref{1.2} and \eqref{1.3} we have $u_k(0)=0,\,\,u_k(\infty)\,\,\,\text{is bounded}.$ Then from \eqref{3.6} we conclude that $u_k(x)=0,\,x\geq 0.$ This implies $\int\limits_\Omega u(x,y)\overline{e_k(y)}dy=0,$ and the completeness of the system $e_k(x),\,k\in \mathbb{Z}_+,$ gives $u(x,y)\equiv 0,\,(x,y)\in [0,\infty)\times\Omega.$ \section{Well-posedness in $\mathbb{R}^N$} The Sobolev space $\mathcal{H}^\mathcal{L}(\mathbb{R}^N)$ is defined by $$\mathcal{H}^\mathcal{L}(\mathbb{R}^N)=\{f\in L^2(\mathbb{R}^N):\, a(\xi)\hat{f}\in L^2(\mathbb{R}^N)\},$$ where $\hat{f}(\xi)=\frac{1}{(2\pi)^N}\int\limits_{\mathbb{R}^N}e^{-iy\xi} f(y)dy,\,\,\xi\in\mathbb{R}^N.$ The space $\mathcal{H}^\mathcal{L}(\mathbb{R}^N)$ is a Hilbert space; it is equipped with the norm \[ \|f\|^2_{\mathcal{H}^\mathcal{L}(\mathbb{R}^N)}=\int\limits_{\mathbb{R}^N}|a(\xi)\hat{f}(\xi)|^2d\xi. \] \begin{definition} The generalised solution of equation \eqref{1.1} in $\mathbb{R}^N$ is a function $ u\in C\left([0,\infty);L^2(\mathbb{R}^N)\right),$ such that $x^{-2\beta}\mathcal{D}_x^{2\alpha } u, \mathcal{L}u\in C\left((0,\infty);L^2(\mathbb{R}^N)\right).$\end{definition} \begin{theorem}\label{th2}Let $\phi \in \mathcal{H}^\mathcal{L}(\mathbb{R}^N).$ Then the generalized solution of equation \eqref{1.1} satisfying conditions \begin{equation}\label{1.2*} u(0,y)=\phi(y),\,y\in\mathbb{R}^N, \end{equation} and \begin{equation}\label{1.3*} \lim\limits_{x\rightarrow+\infty}u(x,y)\,\,\,\,\text{is bounded for almost every}\,\,\,\,y\in\mathbb{R}^N, \end{equation} exists, it is unique and can be represented as \begin{equation}\label{1.5*}u\left( {x,y} \right) = \int\limits_{\mathbb{R}^N}e^{-iy\xi} \hat{\phi}(\xi) E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right) d\xi,\,(x,y)\in \mathbb{R}_+\times\mathbb{R}^N,\end{equation} where $\hat{\phi}(\xi)=\frac{1}{(2\pi)^N}\int\limits_{\mathbb{R}^N}e^{-i\xi s}\phi(s)ds.$ In addition, the solution $u$ satisfies the following estimates: \begin{align*}\|u\|_{C(\mathbb{R}_+;L^2(\mathbb{R}^N))}\leq\|\phi\|_{L^2(\mathbb{R}^N)},\end{align*} \begin{align*}\sup\limits_{x\in(0,\infty)}\left\|x^{-2\beta}\mathcal{D}_x^{2\alpha}u\left( {x, \cdot} \right)\right\|_{L^2(\mathbb{R}^N)} \leq \|\phi\|_{\mathcal{H}^\mathcal{L}(\mathbb{R}^N)},\end{align*} and \begin{align*}\sup\limits_{x\in(0,\infty)}\|\mathcal{L}u\left({x, \cdot} \right)\|_{L^2(\mathbb{R}^N)}\leq\|\phi\|_{\mathcal{H}^\mathcal{L}(\mathbb{R}^N)}.\end{align*} \end{theorem} \subsection{Particular cases} We now specify Theorem \ref{th2} to several concrete cases. \subsubsection{Laplace equation in the half-space} Our first example will focus on the Laplace equation. 
Let $\alpha=1,$ $\beta=0$ and $\mathcal{L}=-\Delta=-\sum\limits_{j=1}^N\frac{\partial^2}{\partial y_j^2}.$ Then the equation \eqref{1.1} coincides with the classical Laplace equation on the half-space \begin{equation}\label{1.1a*} u_{xx}(x,y) + \Delta_y u(x,y) = 0,\,\left({x,y} \right) \in \mathbb{R}_+\times\mathbb{R}^N.\end{equation} It is known that the unique solution to problem \eqref{1.1a*}, \eqref{1.2*}, \eqref{1.3*} is represented by the Poisson integral (\cite{Stein}) \begin{equation*}u\left( {x,y} \right) = \frac{\Gamma((N+1)/2)}{\pi^{(N+1)/2}}\int\limits_{\mathbb{R}^N} \frac{x\phi(s)}{(|y-s|^2+x^2)^{(N+1)/2}}ds.\end{equation*} \subsubsection{Multidimensional degenerate elliptic equations} Let $\alpha=1,$ $\beta>-2$ and $\mathcal{L}=-\Delta_y.$ $\bullet$ If $\beta=1,$ then the equation \eqref{1.1} coincides with the multidimensional Tricomi equation \begin{equation}\label{1.1c*}u_{xx}(x,y)+x \Delta_y u(x,y)=0,\,x>0,\,y\in\mathbb{R}^N,\end{equation} and the unique solution to problem \eqref{1.1c*}, \eqref{1.2*}, \eqref{1.3*} can be written as (\cite{Alg}) \begin{equation*}u\left( {x,y} \right) = \frac{3^{N+1/2}\Gamma(2/3)\Gamma(N/2+1/3)}{2^{1/3}\pi^{N/2+1}}\int\limits_{\mathbb{R}^N} \frac{x\phi(s)}{(9|y-s|^2+4x^3)^{N/2+1/3}}ds.\end{equation*} $\bullet$ If $\beta=m>-2$ then the equation \eqref{1.1} coincides with the multidimensional Gellerstedt equation \begin{equation}\label{1.1cc*}u_{xx}(x,y)+x^m \Delta_y u(x,y)=0,\,x>0,\,y\in\mathbb{R}^N,\end{equation} and the unique solution to problem \eqref{1.1cc*}, \eqref{1.2*}, \eqref{1.3*} can be written as (\cite{Alg}) \begin{equation*}u\left( {x,y} \right) = \frac{(m+2)^{N+\frac{1}{2}}\Gamma\left(\frac{2}{3}\right)\Gamma\left(\frac{N}{2}+\frac{1}{m+2}\right)}{2^{N}\pi^{\frac{N}{2}} \Gamma\left(\frac{1}{m+2}\right)}\int\limits_{\mathbb{R}^N} \frac{x\phi(s)}{\left(x^{m+2}+\left(\frac{m+2}{2}\right)^2|y-s|^2\right)^{\frac{N}{2}+\frac{1}{m+2}}}ds.\end{equation*} \subsubsection{Fractional Laplace equation} Let $\beta=0$ and $$\mathcal{L}v=(-\Delta)^sv=C_{N, s}\,\mathrm{P.V.} \int_{\mathbb{R}^N}\frac{v(x)-v(y)}{|x-y|^{N+2s}}dy,$$ where $s\in(0,1)$ and $C_{N,s}$ is a normalizing constant (whose value is not important here). 
Then the equation \eqref{1.1} coincides with the equation \begin{equation}\label{1.1d*}\mathcal{D}_x^{2\alpha}u(x,y)+(-\Delta)^s_y u(x,y)=0,\,x>0,\,y\in\mathbb{R}^N.\end{equation} From Theorem \ref{th2} we have the unique solution of the problem \eqref{1.1d*}, \eqref{1.2*}, \eqref{1.3*} in the form \begin{equation*}u\left( {x,y} \right) = \int\limits_{\mathbb{R}^N}e^{-iy\xi} \hat{\phi}(\xi) E_{\alpha, 1} \left( {- |\xi|^s} x^{\alpha} \right) d\xi,\,(x,y)\in \mathbb{R}_+\times\mathbb{R}^N.\end{equation*} Rearranging the order of integration in the last representation, according to Fubini's Theorem, we have \begin{equation*}u\left( {x,y} \right) = \int\limits_{\mathbb{R}^N}\phi(s) \int\limits_{\mathbb{R}^N} e^{-i\xi(y-s)} E_{\alpha, 1} \left( {- |\xi|^s} x^{\alpha} \right) d\xi ds,\,(x,y)\in \mathbb{R}_+\times\mathbb{R}^N.\end{equation*} Using the calculation of the Fourier transform of Mittag-Leffler functions from \cite{Zacher}, we have \begin{equation*}u\left( {x,y} \right) = \pi^{-\frac{N}{2}}\int\limits_{\mathbb{R}^N}\frac{\phi(s)}{|y-s|^{N}} H_{3\,2}^{1\,2}\left(\frac{2^s x^\alpha}{|y-s|^{s}}\Big{|} \begin{array}{l}(1-N/2-s/2), \,\,(0,1),\,\, (0,s/2)\\ (0,1),\,\, (0,\alpha) \end{array}\right) ds.\end{equation*} Here $H_{pq}^{mn}(\cdot)$ is the Fox H-function defined via a Mellin-Barnes type integral as $$H_{p\,q}^{m\,n}\left(z\Big{|} \begin{array}{l}(a^1_i, a^2_i)_{1,p}\\ (b^1_i, b^2_i)_{1,q}\end{array}\right)=\frac{1}{2\pi i}\int\limits_{\mathcal{I}}\mathcal{H}_{p, q}^{m, n}(\tau) z^{-\tau} d\tau,$$ where $(a^1_i, a^2_i)_{1,p}=((a^1_1, a^2_1), (a^1_2, a^2_2), ..., (a^1_p, a^2_p))$ and $$\mathcal{H}_{p, q}^{m, n}(\tau)=\frac{\prod\limits_{j=1}^m\Gamma(b^1_j+b^2_j\tau) \prod\limits_{i=1}^n\Gamma(1-a^1_i-a^2_j\tau)}{\prod\limits_{i=n+1}^p\Gamma(a^1_i+a^2_i\tau)\prod\limits_{j=m+1}^q\Gamma(1-b^1_j-b^2_j\tau)}.$$ \subsection{Proof of Theorem \ref{th2}} \subsubsection{Existence of solution.} Applying the Fourier transform $\mathcal{F}$ to problem \eqref{1.1}, \eqref{1.2*}--\eqref{1.3*} with respect to space variable $y$ yields \begin{equation}\label{3.3*}\mathcal{D}_x^{2\alpha } \hat{u} \left( x, \xi\right) - a(\xi) x^{2\beta}\hat{u}\left( x, \xi \right) = 0,\, x >0,\,\xi\in \mathbb{R}^N,\end{equation} \begin{equation}\label{3.4*}\hat{u} \left(0,\xi\right) = \hat{\phi}(\xi),\, \hat{u} \left( \infty, \xi\right) \,\,\text{is bounded for}\,\,\xi\in \mathbb{R}^N,\end{equation} thank to $\mathcal{F}\left\{\mathcal{L} u(x,y)\right\}=a(\xi)\hat{u}(x,\xi).$ Then the solution of problem \eqref{3.3*}-\eqref{3.4*} can be represented as \begin{equation}\label{3.6*}\hat{u} \left( x, \xi\right) = \hat{\phi}(\xi) E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right).\end{equation} By applying the inverse Fourier transform $\mathcal{F}^{-1}$ we have \eqref{1.5*}, i.e. \begin{equation*}u\left( {x,y} \right) = \int\limits_{\mathbb{R}^N}e^{iy\xi} \hat{\phi}(\xi) E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right) d\xi,\,(x,y)\in \mathbb{R}_+\times\mathbb{R}^N.\end{equation*} \subsubsection{Convergence of solution.} Now we prove the convergence of the obtained solution. 
Applying estimate \eqref{2.6} and Plancherel theorem we have \begin{align*}\sup\limits_{x\in[0,\infty)}\int\limits_{\mathbb{R}^N}|u\left( {x,y} \right)|^2dy&=\sup\limits_{x\in[0,\infty)}\int\limits_{\mathbb{R}^N}|\hat{u}\left( {x,\xi} \right)|^2d\xi\\& \leq \sup\limits_{x\in[0,\infty)}\int\limits_{\mathbb{R}^N}\left|\hat{\phi}(\xi)\right|^2 \left|E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right)\right|^2d\xi\\& \leq \int\limits_{\mathbb{R}^N}\left|\hat{\phi}(\xi)\right|^2 d\xi=\|\hat{\phi}\|^2_{L^2(\mathbb{R}^N)}=\|{\phi}\|^2_{L^2(\mathbb{R}^N)}<\infty.\end{align*} Let us calculate $\mathcal{D}_x^{2\alpha } u:$ \begin{align*}\mathcal{D}_x^{2\alpha}u\left( {x,y} \right)& = \int\limits_{\mathbb{R}^N}e^{iy\xi} \hat{\phi}(\xi) \mathcal{D}_x^{2\alpha}E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right) d\xi\\&= x^{2\beta}\int\limits_{\mathbb{R}^N}e^{iy\xi} \hat{\phi}(\xi)a(\xi)E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right) d\xi,\,\,(x,y)\in \mathbb{R}_+\times\mathbb{R}^N.\end{align*} Hence \begin{align*}\sup\limits_{x\in(0,\infty)}\left\|x^{-2\beta}\mathcal{D}_x^{2\alpha}u\left( {x, \cdot} \right)\right\|^2_{L^2(\mathbb{R}^N)} &\leq \sup\limits_{x\in(0,\infty)}\int\limits_{\mathbb{R}^N}a^2(\xi)|\hat{\phi}(\xi)|^2\left|E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right)\right|^2 d\xi\\& \leq \int\limits_{\mathbb{R}^N}|a(\xi)\hat{\phi}(\xi)|^2d\xi=\|\phi\|_{\mathcal{H}^\mathcal{L}(\mathbb{R}^N)}^2<\infty.\end{align*} Similarly, for $\mathcal{L}u$ we have \begin{align*}\sup\limits_{x\in(0,\infty)}\left\|\mathcal{L}u\left( {x, \cdot} \right)\right\|^2_{L^2(\mathbb{R}^N)} &\leq \sup\limits_{x\in(0,\infty)}\int\limits_{\mathbb{R}^N}a^2(\xi)|\hat{\phi}(\xi)|^2\left|E_{\alpha, 1+\frac{\beta}{\alpha}, \frac{\beta}{\alpha}} \left( {- \sqrt{a(\xi)} x^{\alpha+\beta} }\right)\right|^2 d\xi\\& \leq \|\phi\|_{\mathcal{H}^\mathcal{L}(\mathbb{R}^N)}^2<\infty.\end{align*} \subsubsection{Uniqueness of solution.} Suppose that there are two solutions $u_1(x, y)$ and $u_2(x,y)$ of problem \eqref{1.1}, \eqref{1.2*}--\eqref{1.3*}. Let $u(x,y)=u_1(x,y)-u_2(x,y).$ Then $u(x,y)$ satisfies the equation \eqref{1.1} and homogeneous conditions \eqref{1.2*}--\eqref{1.3*}. Let us consider the function \begin{equation}\label{3.5*}\hat{u}(x,\xi)=\int\limits_{\mathbb{R}^N}e^{-iy\xi} u(x,y)dy,\,\,x\geq 0,\,\xi\in\mathbb{R}^N.\end{equation} As $u$ is bounded continuous in $x$ function, applying $\mathcal{D}^{2\alpha}_x$ to the function \eqref{3.5*} by \eqref{1.1} we have \begin{align*}\mathcal{D}^{2\alpha}_x\hat{u}(x,\xi)&=\int\limits_{\mathbb{R}^N}e^{-iy\xi}\mathcal{D}^{2\alpha}_x u(x,y)dy\\&=x^{2\beta}\int\limits_{\mathbb{R}^N}e^{-iy\xi}\mathcal{L}u(x,y)dy \\& =x^{2\beta}\mathcal{F}\left[\mathcal{F}^{-1}(a(\xi)\hat{u}(x,y))\right]\\& = x^{2\beta}a(\xi) \hat{u}(x,\xi),\,x\geq 0,\, \xi\in\mathbb{R}^N.\end{align*} Also from \eqref{1.2*} and \eqref{1.3*} we have $\hat{u}(0,\xi)=0,\,\,\hat{u}(\infty,\xi)\,\,\text{is bounded}.$ Then from \eqref{3.6*} we conclude that $\hat{u}(x,\xi)=0,\,x\geq 0,\,\xi\in\mathbb{R}^N.$ Applying the inverse Fourier transform we have $u(x,y)\equiv 0,\,(x,y)\in [0,\infty)\times\mathbb{R}^N.$ The proof is complete. \end{document}
\begin{document} \title{Capacity Planning Frameworks for Electric Vehicle Charging Stations with Multi-Class Customers} \author{Islam~Safak~Bayram,~\IEEEmembership{Member,~IEEE,}~Ali~Tajer,~\IEEEmembership{Member,~IEEE,}~Mohamed~Abdallah,~\IEEEmembership{Senior~Member,~IEEE}~and ~Khalid~Qaraqe,~\IEEEmembership{Senior~Member,~IEEE} \thanks{Islam Safak Bayram (corresponding author) is with Qatar Environment and Energy Research Institute, Qatar Foundation, Doha, Qatar. Email: [email protected]} \thanks{Mohamed Abdallah and Khalid Qaraqe are with the Department of Electrical and Computer Engineering, Texas A\&M University at Qatar. Emails:\{mohamed.abdallah, khalid.qaraqe\}@qatar.tamu.edu.} \thanks{Ali Tajer is with the Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, USA, Email: [email protected].}} \markboth{{IEEE Transactions on Smart Grid \emph{accepted for publication}}} {Bayram \MakeLowercase{\textit{et al.}}: Capacity Planning Frameworks for Electric Vehicle Charging Stations with Multi-Class Customers} \maketitle \IEEEpeerreviewmaketitle \begin{abstract} In order to foster electric vehicle (EV) adoption, there is a strong need for designing and developing charging stations that can accommodate different customer classes, distinguished by their charging preferences, needs, and technologies. As such charging station networks grow, the power grid becomes more congested and, therefore, the control of charging demands should be carefully aligned with the available resources. This paper focuses on an EV charging network equipped with different charging technologies and proposes two frameworks. In the first framework, appropriate for large networks, the EV population is expected to constitute a sizable portion of the light duty fleets. This necessitates controlling the EV charging operations to prevent potential grid failures and distribute the resources efficiently. This framework leverages pricing dynamics in order to control the EV customer request rates and to provide a charging service with the best level of quality of service. The second framework, on the other hand, is more appropriate for smaller networks, in which the objective is to compute the minimum amount of resources required to provide certain levels of quality of service to each class. The results show that the proposed frameworks ensure grid reliability and lead to significant savings in capacity planning. \end{abstract} \printnomenclature \section{Introduction} \subsection{Motivation} Electric vehicles (EVs) are becoming a viable transportation option as they offer solutions to an array of current societal problems ranging from high oil prices to environmental concerns. Consequently, the EV population is expected to reach a sizable market share in the next decade (e.g., $10$\% of the U.S. national fleet and similar targets in Europe)~\cite{jsac}. However, achieving such penetration rates requires wide deployment of charging facilities that can serve different types of charging requests. On the other hand, if not controlled, EV charging can easily lead to transformer and line overloading. Moreover, it can deteriorate power quality and even endanger the security of supply. The impacts of EV charging on the grid have been well documented in~\cite{5356176, jsac,generationPortfolio} and \cite{oakRidge}. For instance, \cite{oakRidge} argues that if $5\%$ of the EVs charge simultaneously using fast charging technology, $5.5$ GW of extra power would be needed in the Virginia and Carolinas region by $2018$. 
Similarly, NERC regions would require an expansion of $5.5$\% in their power generation capacity in a typical EV penetration rate of $25$\%. Furthermore distribution grid could easily become a bottleneck. For example, authors of \cite{jsac} state that even adding two Level-2 chargers in a typical neighborhood in the US could easily cause transformer overloading. Also uncontrolled demand can decrease the efficiency of the power grid operations and increase the power generation cost if it occurs during peak hours. \subsection{Related Works}\label{relatedWork2} There has been an increasing body of literature on developing intelligent charging station architectures\cite{power1,power2,power4,sgc12, queue2, queue1,queue3}, controlling and scheduling EV demand \cite{TSG14,jsac,CallawayX,price1,price2,price3,globecom} and rather sparse literature on capacity planning on charging infrastructures serving multiple classes of customers \cite{sgc13,energyCon}. This section provides a brief overview of related literature. The studies on charging station design can be divided into two categories. The first category includes works from power engineering perspective \cite{power1,power2,power4} where the studies aim to minimize the charging duration by improving the efficiency of power electronics and aiding the system with energy storage system. On the other hand, approaches in the second category are mostly focused on the system level where they abstract the details of the underlying power system components and evaluate the system performance in terms of long term statistical metrics e.g., mean waiting time and percentage of served customer using the arguments from queuing theory~\cite{ sgc12, queue2, queue3}. Study in \cite{sgc12} presents a small scale fast charging station design and blocking probability, that is the probability that the request of an EV is rejected, is used as the main performance metric to solve the optimal resource provisioning problem. In\cite{queue1} authors use M/M/s queues to model the EV demand at fast charging stations near highway exits. In~\cite{queue2} residential charging infrastructures are modeled using M/M/$\infty$ queue, where they broadcast the arrival rates to customers such that probability of blocking (exceeding circuit capacity) is zero. Furthermore, the work in~\cite{queue3} uses BCMP queueing network model to estimate the EV charging demand. In this paper we perform a more holistic approach by taking into account that the charging station can serve multiple classes of customer that are differentiated by the charger technology used. Also, we employ loss-of-load-probability (LoLP) as our main performance metric, which measures the probability that grid resources cannot accommodate EV demand \cite{von2006electric}. The pricing-based control methods have been employed successfully in communication networks where the goal is to match scarce bandwidth resources to insatiable user demand~\cite{priceSurvey}. In a similar fashion, pricing mechanisms applications gained popularity in the literature on controlling EV demand~\cite{jsac,globecom, price1, price2, price3} and \cite{fan2011}. Authors in \cite{jsac} and \cite{globecom} propose a control mechanism in a network of charging stations to route customers to idle stations. The work in~\cite{price1} proposes a pricing scheme for EV chargings that leads to socially optimal solution, whereas \cite{fan2011} uses proportional fairness pricing from communication literature to control the charging rates of EVs. 
In our work we also consider the technological constraints and current state of affairs that charging rate is constant until the end of the service. The work presented in \cite{price3} is most relevant to our work, in which researchers model general power consumers behavior with utility functions and propose a pricing algorithm to control the consumption of smart grid users. Our work mainly focuses on multi-class EV chargings and our framework ensures grid reliability at all times. The literature on resource provisioning is rather sparse. The study in \cite{sgc13} presents a capacity planning framework in a large scale charging stations with single class customers. This framework is based on computing a deterministic quantity ``effective power" using On-Off Markov models. In this study we assume that charging infrastructure can serve multiple types of customers. The proposed methodology is rooted in multi dimensional loss systems in teletraffic engineering, where the goal is to provide statistical quality of service(QoS) guarantees to customers with different demand profiles. The control mechanisms with congestion pricing in multi-rate Erlang-B systems and related resource provisioning problems are addressed in~\cite{hampshire, kaufman, nilsson, cong1,cong2}. The works presented in \cite{cong1} and \cite{kaufman,nilsson,fan2011} focus on efficient computation of blocking probabilities and the their derivatives, whereas studies in \cite{hampshire} and \cite{cong2} provide efficient algorithms to solve bandwidth provisioning problem in congestible networks. \begin{figure} \caption{Two design problems. } \label{comp1} \end{figure} \section{Proposed Frameworks}\label{ChargingStationModel} Electric vehicle charging demands are primarily dominated by customers' needs and preferences. Based on this premise, designing different aspects of the network of charging stations is governed by the economic dynamics between network operators, on one hand, and the customers on the other hand. In such dynamics the former seek operational reliability and profit margins and the latter seek economic incentives. Such dynamics and their pertinent design challenges vary by the size of network and customers population. In this paper we consider two canonical design problems for EV charging infrastructures fed by a single substation suited for large metropolitan areas and small cities as summarized in Fig.~\ref{comp1}. Specifically, the first framework proposes a pricing-based EV control for charging stations located in large metropolitan cities in which the EV population is assumed to be large. In this framework, the objective is to incentivize and control the customers to submit their charging demands at certain optimal rates which maximize a social welfare utility. This pricing-based approach framework, at its core, enables station operators to alleviate the congestion and the degradations in the service quality during increased demand periods through incentivizing the customers to defer their charging needs to less congested periods. This control mechanism ensures charging services with desirable level of QoS guarantees. The second framework focuses on small cities, in which customer demand can be obtained from profiling studies, and proposes a capacity provisioning mechanism for EV charging stations. In contrary to large metropolitan areas, small cities with fewer charging stations and customers enjoy less flexibility for incentivizing the customers. 
Hence, the primary goal in this framework is to compute the minimum amount of required resources to provide a charging service that meets a target level of QoS. \subsection{Network Model}\label{sysModel} \nomenclature{$N$}{Number of customers} \nomenclature{$i$}{Customer index} \nomenclature{$J$}{Number of distinct customer classes} \nomenclature{$j$}{Customer class index} \nomenclature{$b_j$}{Amount of resources requested by customer class $j$} \nomenclature{$k$}{Time index} \nomenclature{$C_k$}{Amount of power drawn from the grid during period $k$} \nomenclature{$\beta_j(\cdot)$}{Loss-of-Load-Probability for class $j$} \nomenclature{$p_j(k)$}{Price of service for customer type $j$ at time $k$} \nomenclature{$\lambda_j^k$}{Arrival rate for customer class $j$ at time $k$} \nomenclature{$\lambda_j(\cdot)$}{Aggregated arrival rate for customer class $j$ at time $k$ given in \eqref{aggrArrival}} \nomenclature{${\boldsymbol \lambda}^n$}{Arrival rate vector of customer $n$ for all customer classes} \nomenclature{${\boldsymbol \lambda}$}{Aggregated arrival rate vector for all customer classes} \nomenclature{$U^n(\cdot)$}{Utility function for customer $n$} \nomenclature{$\delta_j$}{Loss-of-Load Probability target for customer class $j$} We consider a network of charging stations serving $N$ customers. Since the customers do not necessarily seek identical charging services, we consider $J$ distinct customer classes, which are distinguished by their preferences: the size of the battery packs, the amount of requested demand, and the available charger technologies. The charging rate of customers of type $j \in \left\{ {1,..,J} \right\}$ is denoted by $b_j$. Since the amount of resources is finite, upon the arrival of a customer of type $j$, if the amount of available resources is less than $b_j$, the customer will not be served, resulting in an outage. Hence, the probability of being blocked (or loss-of-load-probability) constitutes a natural system performance metric. Due to temporal fluctuations in network conditions (e.g., congestion), the amount of available resources varies over time. Hence, we consider a dynamic system indexed by the time index $k\in\mathds{N}$ in which $C_k$ denotes the aggregate units of grid power available to the entire EV fleet at time $k$. Note that $C_k$ is an exogenous parameter given to the station. Since the charging stations reside in a small, well-confined region, the set of chargers can be abstracted collectively as a super-station with multiple classes of customers. We further assume that the system capacity is large when compared to the demand of each EV type. Increased outage events lead to reduced utility for the network operator and service disruption for the customers. Hence, optimizing the operation of the network strongly hinges on finding the decision rules that lead to the optimal levels of the Loss-of-Load-Probability (LoLP), which is defined as the probability that grid resources fall short of the aggregated customer demand \cite{von2006electric}. The leverage that the network operators have for adjusting the LoLPs to the desired optimal levels is service pricing, through which they can influence the behavior of the customers, abstracted by their arrival rates. To formalize this, we define $p_j(k)$ as the price of service of type $j$ and define the price vector $\boldsymbol{p}(k)\triangleq[p_1(k),\dots,p_J(k)]$, accordingly. 
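Before describing the pricing dynamics, we note that the loss-system abstraction above is straightforward to simulate. The following Python sketch (an illustration only; it assumes Poisson request arrivals and, purely for simplicity, exponentially distributed charging durations, with capacity and demands expressed in integer units) applies the admission rule described above, i.e., a class-$j$ request is served only if at least $b_j$ units of power are free, and estimates the per-class LoLP empirically:
\begin{verbatim}
import random

def simulate_lolp(lam, mu, b, C, horizon=20000.0, seed=1):
    # lam[j]: arrival rate, 1/mu[j]: mean charging duration,
    # b[j]: power units per class-j EV, C: total grid power units
    random.seed(seed)
    J = len(lam)
    t = 0.0
    in_service = []                      # (finish_time, power_units) pairs
    arrived, blocked = [0]*J, [0]*J
    while t < horizon:
        t += random.expovariate(sum(lam))            # next request epoch
        j = random.choices(range(J), weights=lam)[0]  # class of the request
        in_service = [(ft, bj) for (ft, bj) in in_service if ft > t]
        used = sum(bj for _, bj in in_service)
        arrived[j] += 1
        if used + b[j] <= C:
            in_service.append((t + random.expovariate(mu[j]), b[j]))
        else:
            blocked[j] += 1
    return [blocked[j] / max(arrived[j], 1) for j in range(J)]

# two hypothetical classes (slow and fast chargers); numbers are illustrative
print(simulate_lolp(lam=[8.0, 2.0], mu=[1.0, 2.0], b=[1, 3], C=12))
\end{verbatim}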
\tikzstyle{block} = [rectangle, draw,top color =light-gray , bottom color = processblue!20, text width=5em, text centered, rounded corners, minimum height=3em]
\tikzstyle{line} = [draw, -triangle 45]
\begin{figure}
\caption{Pricing-based control mechanism for social welfare maximization.}
\label{illustrate}
\end{figure}
Based on the prices ${\boldsymbol p}(k)$ and its own needs, each customer decides whether to generate a service request of a certain type. We denote the rate of type-$j$, $j \in \left\{ {1,..,J} \right\}$, requests generated by customer $n\in \left\{ {1,..,N} \right\}$ during period $k$ by ${\lambda _j^{n}(k;{\boldsymbol p}(k))}$. Hence, the aggregate arrival rate of service requests of type $j$ is
\begin{equation}\label{aggrArrival}
\lambda_j(k;{\boldsymbol p}(k))\triangleq\sum_{n=1}^N\lambda_j^n(k;{\boldsymbol p}(k)) \ .
\end{equation}
From the network operators' point of view, these requests are constantly generated across the network and, as discussed in \cite{queue1,queue2,jsac}, a common assumption is that such aggregated requests (and not necessarily the individual requests) arrive according to a Poisson process. Accordingly, we define the arrival rate vectors
\begin{align}
{\boldsymbol \lambda}^n(k;{\boldsymbol p}(k))\; & \triangleq \; [\lambda^n_1(k;{\boldsymbol p}(k)),\dots,\lambda^n_J(k;{\boldsymbol p}(k))]\ ,\\
\mbox{and}\qquad {\boldsymbol \lambda}(k;{\boldsymbol p}(k))\; & \triangleq[\lambda_1(k;{\boldsymbol p}(k)),\dots,\lambda_J(k;{\boldsymbol p}(k))]\ .
\end{align}
By noting that the LoLP of each class is a function of the arrival rates, we finally define $\beta_j(k;{\boldsymbol \lambda}(k;{\boldsymbol p}(k)))$ as the probability that customers of type $j$ are blocked, and ${\boldsymbol \beta}(k;{\boldsymbol \lambda}(k;{\boldsymbol p}(k)))$ as the corresponding LoLP vector. Since different customer types do not necessarily charge their batteries at the same rate, we denote the {\em average} charging duration of customers of type $j$ by $1/{\mu _j}$. It is noteworthy that, from the station operator's standpoint, the information required to characterize the customers' profiles consists of the average service durations and the charging rates; these parameters enable modeling the customers' behavior with fine granularity. The main goal is to keep the aggregated demand below a constant power $C_{k}$ with minimal or controlled loss-of-load (outage) events. Drawing constant power is highly desirable from the viewpoint of the network operators, and its benefits include: (1) grid components are isolated from stochastic variations and hence grid reliability is ensured~\cite{sgc12,jsac}; (2) the station operator can sign long-term contracts and benefit from lower prices; (3) constant demand reduces the peak-to-average demand ratio of the whole power system and, accordingly, the average spot prices; and (4) it leads to a more efficient market equilibrium \cite{jsac}. Since the charging stations reside in a small, well-confined region, the whole set of chargers acts as one big station with multiple classes of customers; consequently, power system losses are assumed to be negligible and all customers share the same resource pool. For notational simplicity, the explicit dependence on $k$ is omitted in the rest of the paper.
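To illustrate the Poisson superposition assumption behind \eqref{aggrArrival}, the following minimal Python sketch (the parameter values are illustrative and are not taken from the case studies) merges $N$ independent per-customer request streams and checks empirically that the aggregated stream behaves like a single Poisson process with rate $\lambda_j=\sum_{n}\lambda_j^n$, i.e., the per-window counts have variance approximately equal to their mean.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

N = 200                                   # number of customers (illustrative)
lam_n = rng.uniform(0.01, 0.05, size=N)   # per-customer request rates for one type j
lam_agg = lam_n.sum()                     # aggregate rate lambda_j, cf. Eq. (aggrArrival)
T, W = 2000.0, 1.0                        # horizon and window length
windows = int(T / W)

# Superpose the N independent Poisson streams by counting requests per window.
counts = rng.poisson(lam_n[:, None] * W, size=(N, windows)).sum(axis=0)

# For a Poisson stream the per-window counts have variance equal to the mean.
print(f"aggregate rate lambda_j  : {lam_agg:.3f}")
print(f"empirical mean per window: {counts.mean():.3f}")
print(f"empirical var per window : {counts.var():.3f}")
\end{verbatim}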
\tikzstyle{block} = [rectangle, draw,top color =light-gray , bottom color = processblue!20, text width=5em, text centered, rounded corners, minimum height=3em]
\tikzstyle{line} = [draw, -triangle 45]
\begin{figure}
\caption{Customer preferences can be determined by EV type, amount of requested demand, and the charger technology.}
\label{depend}
\end{figure}
\subsection{Large Networks: Pricing-based Control Framework}\label{pricing}
As the EV population is expected to constitute a sizable portion of the national light-duty fleet, there is a need to control EV charging operations in order to protect grid components from potential failures. To this end, this framework focuses on charging infrastructures located in big metropolitan areas with high EV penetration rates. The primary goal is to leverage pricing in order to control the EV customer arrival rates such that the station operator can provide a charging service with a small LoLP for each customer class. Here, the aim is to control the large EV demand of a big city for a given amount of grid power ($C_k$), which is assumed to be computed by the utility operator and to respect the power network constraints (e.g., congestion, transformer ratings, etc.). The pricing-based control scheme is depicted in Fig.~\ref{illustrate}. Specifically, the objective is to design a distributed control framework for maximizing an aggregate utility (a social welfare measure), which requires that the EVs operate at an optimal set of arrival rates. In order to establish the tools for the distributed design, let
\begin{align*}
U^n({\boldsymbol \lambda}^{n};{\boldsymbol \beta}({\boldsymbol \lambda}))
\end{align*}
denote the utility of customer $n$ for a given set of arrival rates ${\boldsymbol \lambda}^n$ and blocking probabilities ${\boldsymbol \beta}$. Hence, the aggregate utility in the network is
\begin{align}\label{mainProblem}
R\;=\; \sum_{n = 1}^N U^n({\boldsymbol \lambda}^{n};{\boldsymbol \beta}({\boldsymbol \lambda}))\ .
\end{align}
Therefore, the social welfare problem is defined as the problem of maximizing the aggregate utility $R$ over all possible choices of the arrival rates $\{{\boldsymbol \lambda}^n\}_{n=1}^N$, i.e.,
\begin{align}\label{mainProblem2}
\max_{\{{\boldsymbol \lambda}^1,\dots,{\boldsymbol \lambda}^N\}} &\sum_{n = 1}^N U^n({\boldsymbol \lambda}^{n};{\boldsymbol \beta}({\boldsymbol \lambda}))\ .
\end{align}
We assume that the utility function $U^n$ is increasing in the arrival rates $\{\lambda_j\}$ and decreasing in the LoLPs $\beta_j$. Furthermore, it is assumed that $U^n$ is concave in $\lambda_j$ and continuously differentiable in all of its arguments. This problem is treated in Section~\ref{sec:price}.
\subsection{Small Networks: Resource Provisioning Framework}\label{resourceProv}
In the resource provisioning problem, we are interested in computing the minimum amount of grid resources $C$ such that the system operator can guarantee certain levels of reliability for serving the different types of users, by enforcing $\beta_j \leq \delta_j$ for each customer type. This structure suits resource provisioning in small cities, in which customer arrival rates can be obtained from profiling studies with reliable accuracy. Based on the constraints defined on the LoLPs, the problem can be cast as
\begin{equation}\label{argmin}
{C^*} = \left\{ {\begin{array}{ll}
\min & C \\
\mbox{s.t.} & \;\beta_j \le {\delta _j},\;\mbox{for}\; j\in\{1,\dots,J\}
\end{array}} \right. \ .
\end{equation}
This problem is treated in Section~\ref{sec:res}.
\begin{figure}
\caption{Numerical evaluation of multi-class EV LoLP. Each EV class's arrival rate is assumed to be equal to one third of the total arrival rate.}
\label{blockingGraph}
\end{figure}
\subsection{Toy Example}
We provide a toy example before the case studies in order to put some of the details into perspective. Even though the objectives of the two proposed frameworks are different, in both cases the LoLP is computed for a fixed parameter setting (e.g., $\lambda$, $\mu$, etc.); the methods for computing the blocking probabilities are common to both frameworks, and the details are given in Section~\ref{lossComp}. Let us assume that the charging infrastructure draws $C =1000$ units of power from the grid and serves three types of EV customers, i.e., $j \in \left\{ {1,2,3} \right\}$. The customer classes are differentiated by the charging technology they use, mimicking the current charging standards (fast charging, and level-II three-phase and single-phase). It is assumed that $\boldsymbol{b}=\left\{ {50,7,5} \right\}$, the service rates are $\mu_1=3$, $\mu_2=0.42$, and $\mu_3=0.2$, and the arrival rates of the three types are $\lambda_1=\lambda_2=\lambda_3=14$ (on average 14 customers arrive per time unit). Consequently, the resulting LoLPs are $\beta_{1000}^1$=$0.0152$, $\beta_{1000}^2$=$0.0015$, and $\beta_{1000}^3$=$0.0011$. Note that customers of type $1$ have the highest LoLP, mainly because the grid resources are shared equally among all customer classes and type-$1$ customers use the fast charging technology, which draws more power than the other classes.
\begin{figure}
\caption{Numerical evaluation of multi-class EV LoLP for different system capacities.}
\label{blockingCapacity}
\end{figure}
Next, we evaluate the same system for a wide range of arrival rates. We assume that the sum $\lambda=\sum\nolimits_{j = 1}^3 {{\lambda _j}}$ varies in the range $[1,80]$ and $\lambda_{j}=\lambda / 3$ for $j\in \left\{ {1,2,3} \right\}$. The results are depicted in Fig.~\ref{blockingGraph}. It is important to notice that, for the given amount of grid resources (in our case $C$=$1000$), the stations can provide very good service in light-traffic scenarios, in which the aggregated arrival rate varies in the range $\lambda\in[1,50]$; this can be a very typical setting for small cities. It is noteworthy that, instead of imposing extremely stringent LoLP constraints (near-zero outage probability), the network operator can back off from $C$=$1000$ to a smaller level $\hat{C}<C$ while still guaranteeing a reasonable LoLP target (e.g., $1$\%) for each customer class. On the other hand, in heavy-traffic regimes (e.g., $\lambda>50$), which can occur in large metropolitan cities, in order to provide good QoS either the charging resources ($C$) should be increased or the arrival rates should be controlled. To shed more light on this, we evaluate the system performance for a fixed set of arrival rates ($\lambda_1$=$\lambda_2$=$\lambda_3$=$10$) and for a range of serving capacities $C\in[500,1000]$. As clearly depicted in Fig.~\ref{blockingCapacity}, more customers can be accommodated as the amount of resources increases. Hence, besides upgrading the grid to a higher capacity, which might not be economically viable in the short term, an alternative approach is to provide charging services based on the available resources with the best possible QoS by controlling the customer arrival rates. The pricing-based control framework discussed in Section~\ref{pricing} addresses this problem.
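As a rough sanity check on the toy example (a back-of-the-envelope sketch only; the exact LoLPs require the loss-system analysis of Section~\ref{lossComp}), one can compare the mean offered load $\sum_j b_j \lambda_j/\mu_j$ against the provisioned capacity. With $\lambda_j=\lambda/3$, the mean load reaches $C$=$1000$ at an aggregate rate of roughly $\lambda\approx 51$, which is consistent with the degradation of service quality observed in Fig.~\ref{blockingGraph} for $\lambda>50$.
\begin{verbatim}
# Mean offered load for the toy example (b and mu as in the text); this is a
# rough check only -- exact LoLPs require the loss-system analysis given later.
b  = [50, 7, 5]          # per-class charging rates (power units)
mu = [3.0, 0.42, 0.2]    # per-class service rates
C  = 1000                # provisioned grid power

def mean_offered_load(lam_total):
    # E[S] = sum_j b_j * q_j with q_j = lambda_j / mu_j and lambda_j = lam_total / 3
    return sum(bj * (lam_total / 3.0) / muj for bj, muj in zip(b, mu))

for lam in (42, 50, 80):
    print(f"lambda = {lam:3d}  ->  mean load = {mean_offered_load(lam):7.1f}   (C = {C})")
\end{verbatim}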
\section{Pricing-Based Control} \label{sec:price} \label{sec:provision}
\nomenclature{$R$}{Aggregated utility of all customers}
\subsection{Global Problem}
The treatment provided in this section serves a two-fold purpose. First, it provides an optimal LoLP policy in order to maintain an optimal social welfare utility as characterized in \eqref{mainProblem}. Second, it shows that the resulting solution is amenable to distributed implementation. This is of paramount significance since, in an EV charging network, the EVs are autonomous entities that operate based on the common information known to all EVs (e.g., charging prices) and on their local perception of the network dynamics. Such a distributed nature of EV networks necessitates solving \eqref{mainProblem} in a distributed manner. As will be shown, an important feature of the proposed framework is that each EV adjusts its arrival rates solely based on the updated prices that are dynamically announced by the station operators, and this distributed adjustment of the arrival rates by the EVs leads to an optimal global welfare for the entire network. By recalling that the utility function $U^n$ is increasing in the arrival rates $\{\lambda_j\}$, decreasing in the LoLPs $\beta_j$, concave in $\lambda_j$, and continuously differentiable in all of its arguments, the aggregate social welfare $R=\sum_{n=1}^N U^n$ is maximized when, $\forall n\in\{1,\dots,N\}$ and $\forall j\in\{1,\dots, J\}$,
\begin{equation}\label{eq:derivative}
\frac{{\partial R}}{{\partial \lambda _j^{n} }} = \frac{{\partial {U^n}}}{{\partial \lambda _j^{n}}} + \sum\limits_{l = 1}^N \sum\limits_{s = 1}^J {\frac{{\partial {U^l}}}{{\partial {\beta_s}}}\cdot\frac{{\partial {\beta_s}}}{{\partial \lambda _j^{n}}}} = 0 \ .
\end{equation}
Since the LoLP of each charge type depends only on the sum of the arrival rates of that type, ${\lambda _j} = \sum\nolimits_{n = 1}^N \lambda _j^{n}$, from \eqref{eq:derivative} we immediately have, $\forall n\in\{1,\dots,N\}$ and $\forall j\in\{1,\dots, J\}$:
\begin{equation}\label{global}
\frac{{\partial R}}{{\partial \lambda _j^{n} }} = \frac{{\partial {U^n}}}{{\partial \lambda _j^{n}}} + \sum\limits_{l = 1}^N \sum\limits_{s = 1}^J {\frac{{\partial {U^l}}}{{\partial {\beta_s}}}\cdot\frac{{\partial {\beta_s}}}{{\partial \lambda _j}}} = 0 \ .
\end{equation}
Solving \eqref{global} yields a {\em globally} optimal set of arrival rates for the customers.
\subsection{Local Problems}
We show that the globally optimal solution characterized by \eqref{global} can be attained in a distributed way, through local problems formed and solved at the customer side. In order to lay the foundation, we first formulate the local problems in this subsection and relegate establishing the connection between the local and global solutions to Section III-C. In order to enforce the desired arrival rates, the charging stations compute and announce prices for each customer type, which in turn induces the customers to operate at the desired (optimal) arrival rates.
In order to solve this global problem and formalize the dynamic between posted prices and adjusted arrival rates, recalling that $p_{j}$ is the price charged to customers of type $j$, we first define the following {\em local} optimization problem for each customer $n\in\{1,\dots,N\}$:
\begin{align}\label{localOptim}
\tilde U^n({\boldsymbol \lambda}^{n};{\boldsymbol \beta}({\boldsymbol \lambda}))\;\triangleq\; \underbrace{U^n({\boldsymbol \lambda}^{n};{\boldsymbol \beta}({\boldsymbol \lambda}))}_{\text{gain}}\;-\;\underbrace{\sum\limits_{l = 1}^J {{p_{l}}\lambda _l^{n}(1 - {\beta_l}})}_{\text{cost}}\ .
\end{align}
By solving this local optimization problem, customer $n$ computes its locally optimal arrival rates ${\boldsymbol \lambda}^n=[\lambda_1^{n},\dots,\lambda_J^{n}]$. Specifically, the locally optimal arrival rates that maximize the {\em local} objectives $\{\tilde U^n\}_n$ satisfy, for all $n\in\{1,\dots,N\}$ and $j\in\{1,\dots,J\}$,
\begin{equation}\label{localOptim2}
\frac{{\partial {U^n}}}{{\partial \lambda _j^{n}}} - {p_{j}}(1 - {\beta_j}) = 0\ .
\end{equation}
By solving \eqref{localOptim2}, each customer $n$ can compute its own arrival rates based on the announced prices. The structure of the optimal local arrival rates depends on the selection of the utility function $U^n$, which is discussed in more detail in Section~\ref{Example}.
\subsection{Connection Between Local and Global Solutions}
Before proceeding to the details of how the globally optimal solution relates to these local solutions, it is important to note that individual customers do not have knowledge of the derivatives of the LoLPs $\beta_j$ with respect to their arrival rates, which is what leads to the local optimization problem above~\cite{courcoubetis1999pricing}. Also, the size of the system is fairly large compared to the demand of each individual customer; hence, each EV does not have a significant impact on the LoLPs. By comparing equations (\ref{global}) and (\ref{localOptim2}), we can deduce that the optimal prices for $j\in\{1,\dots,J\}$ satisfy
\begin{equation}\label{prices}
p_{j}^{{*}} = - (1 - {\beta_{j}})^{ - 1} \sum\limits_{l = 1}^N \sum\limits_{s = 1}^J {\frac{{\partial {U^l}}}{{\partial {\beta_{s}}}}\cdot\frac{{\partial {\beta_{s}}}}{{\partial \lambda _j}}} \ .
\end{equation}
When a customer of type $j$ is presented with the price $p_{j}^*$, the solution of her local optimization problem \eqref{localOptim} satisfies the conditions given in \eqref{eq:derivative}, which in turn guarantees that the solution of the global optimization problem in (\ref{global}) is equivalent to the local ones given in \eqref{localOptim}, as established in~\cite{courcoubetis1999pricing}. Hence, the locally optimal solutions constitute an equilibrium, since no single customer can obtain a higher utility by deviating from its locally optimal solution. It is noteworthy that the prices $\{p_{1}^*,\dots,p_{J}^*\}$ can be considered congestion prices, since each customer compensates the other customers for the marginal decrease in their utility caused by the rise in the LoLPs due to the increase in the arrival rates.
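To make the customer-side response to the announced prices concrete, the following sketch assumes the logarithmic utility that is adopted later in the simulations of Section~\ref{Example}; under this assumption the local condition \eqref{localOptim2} admits the closed form $\lambda_j^{n}=\omega_j/\big(p_j(1-\beta_j)\big)-1$, truncated at zero. The numbers below are purely illustrative and are not taken from the case studies.
\begin{verbatim}
import numpy as np

# Customer-side best response under a logarithmic utility (an assumption used
# for illustration; the framework allows any concave, differentiable U^n).
# From the local optimality condition dU/d(lambda_j^n) = p_j (1 - beta_j):
#   omega_j / (1 + lambda_j^n) = p_j (1 - beta_j)
#   =>  lambda_j^n = omega_j / (p_j (1 - beta_j)) - 1   (clipped at zero).
def best_response(omega, prices, beta):
    lam = omega / (prices * (1.0 - beta)) - 1.0
    return np.maximum(lam, 0.0)

omega  = np.array([20.0, 10.0])   # utility weights per class (illustrative)
prices = np.array([2.0, 1.5])     # announced prices p_j (illustrative)
beta   = np.array([0.01, 0.001])  # current LoLP estimates beta_j (illustrative)

print(best_response(omega, prices, beta))   # responding arrival rates
\end{verbatim}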
\section{Resource Provisioning Framework}\label{sec:res}
\nomenclature{$Q^j_{C}$}{Number of simultaneous customers of type $j$ requesting $b_j$ units}
\nomenclature{$S$}{Aggregated customer load given in \eqref{offeredLoad}}
\nomenclature{$q_j$}{Offered load of class $j$, $q_j=\lambda_j/\mu_j$}
In the resource provisioning problem characterized in \eqref{argmin}, the optimal solution $C^*$ increases monotonically as the target outage upper bounds become more stringent. Hence, one naive approach to solve this problem is a linear brute-force search, which can be computationally prohibitive for large-scale systems. Instead, we aim to establish an analytical connection between the optimal solution and the target outage requirements. The proposed approach, moreover, is versatile enough to be applied to other relevant optimization scenarios, e.g., the integration of renewable generation or energy storage systems, and varying customer demand. In order to furnish the tools, we model the request arrivals of the customers as a collection of $J$ independent queues (see~\cite{kleinrock}). We start the analysis by first assuming that the grid resources are infinite, and treat the case of finite resources in the next stage. Let $Q_\infty ^j$ denote the number of customers of type $j$ requesting $b_j$ units of power concurrently. Further, let $S$ be the total offered load of the system, given by
\begin{equation}\label{offeredLoad}
S = \sum\limits_{j = 1}^J {{b_j}Q_\infty ^j}\ .
\end{equation}
Due to the Poisson arrivals and the infinite resources, $Q_\infty^j$ is Poisson distributed, and hence its mean and variance are $\mathbb{E}\left[ {Q_\infty ^j} \right] = {\rm var}\left[ {Q_\infty ^j} \right] = {q_j} = \frac{{{\lambda _j}}}{{{\mu _j}}}$. Therefore,
\begin{equation}\label{stats}
\mathbb{E}\left[ S \right] = \sum\limits_{j = 1}^J {{b_j}{q_j}}, \hspace{2mm}{\rm{ and }}\hspace{3mm}{\rm var}\left[ S \right] = \sum\limits_{j = 1}^J {b_j^2} {q_j}.
\end{equation}
In the case of finite resources $C$, we can reformulate the carried load on the system by defining $Q_C^j$ as the number of simultaneous customers requesting $b_j$ units of grid resources. A blocking (loss-of-load) event then occurs if, upon the arrival of a type-$j$ customer, the load on the system $\sum\nolimits_{l = 1}^J {b_l Q_C^l}$ is greater than $C-b_j$. For a given set of charging rates $\boldsymbol{b}\triangleq [b_1,\dots, b_J]$ and offered loads $\boldsymbol{q}\triangleq [q_1,\dots, q_J]$, let us denote the LoLP of customer type $j$ by $\beta_j:\mathds{R}^J \to [0,1]$, which captures the connection between the blocking probability, on one hand, and $\boldsymbol b$ and $\boldsymbol q$, on the other hand. By noting that the numbers of customers of the different classes, $Q_\infty ^j$, are mutually independent Poisson random variables, we have
\begin{align}
\beta _j(\boldsymbol{q},\boldsymbol{b}) &= {\mathds P}\left\{ {C - {b_j} < \sum_{l = 1}^J {{b_l}Q_C^l} } \right\} \label{blockings1}\\
&= {\mathds P}\left\{ {C - {b_j} < \sum_{l = 1}^J {{b_l} Q_\infty ^l} \le C} \;\Big|\; \sum_{l = 1}^J {{b_l}Q_\infty ^l \le C}\right\}\label{blockings2}\\
&=\frac{{{\mathds P}\left\{ {C - {b_j} < \sum_{l = 1}^J {{b_l}Q_\infty ^l} \le C } \right\}}}{{{\mathds P}\left\{ {\sum_{l = 1}^J {{b_l}Q_\infty ^l \le C} } \right\}}}\label{blockings3} \ .
\end{align}
In the resource provisioning problem of interest, in order to meet the multi-class QoS targets $\{\delta_j\}$, $C^*$ should be at least as large as the mean offered load on the system, that is, $\mathbb{E}\left[ S \right]=\sum\nolimits_{j = 1}^J {{b_j}{q_j}} $. However, since customers arrive in a stochastic fashion, extra capacity must be added to accommodate the fluctuations beyond the average offered load. For this purpose, $C^*$ is set equal to the mean of the total load, $\mathbb{E}\left[ S \right]$, adjusted by an extra term that is a multiple $x$ of its standard deviation, $x\sqrt{{\rm var}\left[ S \right]}$. In this formulation, more stringent QoS targets lead to larger values of $x$. The objective of the resource provisioning problem is to provide a closed-form solution to~\eqref{argmin}. To this end, we first scale the system by $\varsigma>0 $, according to which the network capacity is
\begin{equation}
{\bar{C}(\varsigma,x)} = \varsigma \sum\limits_{j = 1}^J {{b_j}{q_j} + x\sqrt {\varsigma \sum\limits_{j = 1}^J {b_j^2{q_j}} } }\ ,
\end{equation}
and we have the following limiting result \cite{ErlangMitra, hampshire}:
\begin{equation}\label{result1}
\lim_{\varsigma \to \infty } \sqrt \varsigma\, \beta _j(\boldsymbol{q},\boldsymbol{b}) = \frac{{{b_j}}}{{\sqrt {\sum\nolimits_{j = 1}^J {b_j^2{q_j}} } }}\cdot \frac{{\phi (x)}}{{\varphi (x)}}\ ,
\end{equation}
where $\phi (x) = \frac{1}{\sqrt{2\pi }}{e^{ -x^2/2}}$ and $\varphi (x) = \frac{1}{\sqrt{2\pi }}\int_{ - \infty}^x e^{-t^2/2}\; dt $. Now let us define the function $\psi $ as the inverse of $\frac{\phi }{\varphi }$, that is, for all $x$,
\begin{equation}
\frac{{\phi (\psi (x))}}{{\varphi (\psi (x))}} = x \ .
\end{equation}
Note that $\psi(\cdot)$ is a strictly decreasing function with $\psi (y)+y>0$ for all $y$ (\cite{ErlangMitra, hampshire} and references therein). Then, the asymptotic behavior of the QoS constraint $\beta_j(\boldsymbol{q},\boldsymbol{b})\leq\delta_j$ yields
\begin{equation}\label{xxx}
x \ge \psi \left( {\frac{{{\delta _j}}}{{{b_j}}}\sqrt {\sum\limits_{j = 1}^J {b_j^2{q_j}} } } \right) \ .
\end{equation}
The provisioned grid resources should satisfy the QoS targets of all classes. Hence, the inequality in~\eqref{xxx} yields
\begin{equation}
x \ge \psi \left( {\mathop {\min }\limits_{1 \le j \le J} \frac{{{\delta _j}}}{{{b_j}}}\sqrt {\sum\limits_{j = 1}^J {b_j^2{q_j}} } } \right) \ .
\end{equation}
By operating at the lowest possible value of $x$, the minimum amount of resources solving the provisioning problem in~(\ref{argmin}) is
\begin{equation}\label{minCapacity}
{C^*} = \sum\limits_{j = 1}^J {{b_j}{q_j}} + \psi \left( {\mathop {\min }\limits_{1 \le j \le J} \frac{{{\delta _j}}}{{{b_j}}}\sqrt {\sum\limits_{j = 1}^J {b_j^2{q_j}} } } \right)\sqrt {\sum\limits_{j = 1}^J {b_j^2{q_j}} }\ ,
\end{equation}
where $q_j=\lambda_j/\mu_j$ and $\psi(\cdot)$ can be computed numerically by solving~\cite{tian2007analysis}
\begin{equation}
{x^{ - 1}}{e^{ - 0.5\psi {{(x)}^2}}} - \sqrt {0.5\pi }\;{\rm erf}\left( {\frac{1}{{\sqrt 2 }}\psi (x)} \right) - \sqrt {0.5\pi } = 0 \ ,
\end{equation}
and the result can be plugged back into (\ref{minCapacity}) to compute the required capacity. It is important to notice that providing enough resources to meet the QoS target of the most dominant class, i.e., the one with the minimum $\delta_j/b_j$, is sufficient for the remaining customer classes.
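As a numerical illustration of the provisioning rule (a sketch only: it inverts $\phi/\varphi$ directly with a SciPy root finder rather than through the equation from \cite{tian2007analysis}, and the parameter values are illustrative), the following Python snippet computes $\psi(\cdot)$ and evaluates \eqref{minCapacity}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def psi(x):
    """Inverse of phi(y)/Phi(y): returns y with norm.pdf(y)/norm.cdf(y) = x.
    The bracket (-30, 30) suffices for any practical x in (0, 30)."""
    return brentq(lambda y: norm.pdf(y) / norm.cdf(y) - x, -30.0, 30.0)

def provisioned_capacity(b, lam, mu, delta):
    """C* = sum_j b_j q_j + psi(min_j delta_j/b_j * sd) * sd, with
    q_j = lambda_j / mu_j and sd = sqrt(sum_j b_j^2 q_j)  (cf. Eq. (minCapacity))."""
    b, lam, mu, delta = map(np.asarray, (b, lam, mu, delta))
    q = lam / mu
    sd = np.sqrt(np.sum(b ** 2 * q))
    x = psi(np.min(delta / b) * sd)
    return np.sum(b * q) + x * sd

# Illustrative two-class example (fast and slow charging).
print(provisioned_capacity(b=[50, 7], lam=[5, 5], mu=[3.0, 0.42], delta=[0.03, 0.03]))
\end{verbatim}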
\section{Computing Loss-of-Load Probabilities}\label{lossComp}
Both the pricing-based control problem (summarized in Section~\ref{pricing} with \eqref{mainProblem}) and the resource provisioning problem (summarized in Section~\ref{resourceProv} with \eqref{argmin}) fall into the class of multi-dimensional loss systems (or multi-rate Erlang loss systems), which are used to evaluate the guaranteed performance of networks with limited resources. In such systems, an arriving customer requesting a certain amount of grid resources is either admitted to the system or blocked, and we are interested in computing the LoLP as a function of the system parameters. Computing the LoLP requires analyzing $J$ independent, time-reversible Markov chains in which the state of the system is defined as the number of customers of each type, that is, $\boldsymbol{Q}\triangleq [Q_C^1,\dots, Q_C^J ]$, and the state space is denoted by $\Omega\triangleq \{ {\boldsymbol{Q}: \sum_{j = 1}^J {{b_j}Q_C ^j \le C} } \}$. Let $\tilde{Q}_C^j$ denote the maximum number of customers of type $j$ that can be served simultaneously. Assuming, without loss of generality, the order $b_1\geq\dots \geq b_J\geq 0$ provides that $0 \leq \tilde{Q}_C^1\leq \dots \leq \tilde{Q}_C^J$. The probability of being at state $\boldsymbol{Q}$ in the unconstrained (infinite-capacity) system is~\cite{kleinrock}
\begin{equation}
{{\overline{\pi}}}(\boldsymbol{Q})=\prod\limits_{j = 1}^J {\frac{{q_j^{{Q^j}}}}{{{Q^j}!}}{e^{ - {q_j}}}} \ .
\end{equation}
Next, similarly to (\ref{blockings2}), we condition on the finite capacity and compute the probability of a generic state $\boldsymbol{Q}$ as
\begin{equation}
{ \pi}(\boldsymbol{Q})=\frac{{\overline{\pi}}(\boldsymbol{Q})}{\sum\nolimits_{ \tilde{\boldsymbol{Q}} \in \Omega } {{ \overline{\pi}}(\tilde{\boldsymbol{Q}})}} \cdot
\end{equation}
Next, let us define the set of blocking states for customer type $j$ as
\begin{align*}
\Psi_j = \{{\boldsymbol{Q}: { {C - {b_j} \;<\; \sum\limits_{k= 1}^J {{b_k}Q_C ^k} \; \leq \; C}}} \}\ .
\end{align*}
Hence, (\ref{blockings3}) can be re-written as
\begin{equation}\label{block}
{\beta_j}(\boldsymbol{q},\boldsymbol{b}) = \sum_{s \in {\Psi _j}} {\pi (s)} = 1 - \sum_{s \notin \Psi _j} {\pi (s)}\ ,
\end{equation}
where the second term is the probability that the occupied resources do not exceed $C-b_j$, and $\pi(s)$ is the steady-state probability mass function. Furthermore, let us define the function $H(C,J)$ as
\begin{equation}\label{eq:H}
H(C,J) \triangleq \sum_{\left\{ {\boldsymbol{Q}:\;{\boldsymbol b}\boldsymbol{Q}\; \leq\; C} \right\}}{\prod\limits_{j = 1}^J {\frac{{q_j^{{Q^j}}}}{{{Q^j}!}}} }\ ,
\end{equation}
based on which the LoLP of class $j$ can be computed explicitly as
\begin{equation}\label{betaResult}
{\beta_j}(\boldsymbol{q},\boldsymbol{b})=1 - \frac{{H(C - {b_j},J)}}{{H(C,J)}} \cdot
\end{equation}
The set $\{ {\boldsymbol{Q}:\;{\boldsymbol b}\boldsymbol{Q}\; \leq\; C}\} $ in \eqref{eq:H} contains all the states for which {\em no} outage occurs. While (\ref{betaResult}) provides an explicit representation of ${\beta_j}$, the associated computation can be costly when the system capacity $C$ is large.
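For small systems, \eqref{betaResult} can be evaluated by direct enumeration of the state space. The following sketch does exactly that; it is an illustration only (the enumeration is impractical for megawatt-scale $C$), and it uses the toy-example charging rates with a reduced capacity and reduced offered loads so that the state space remains small.
\begin{verbatim}
from itertools import product
from math import exp, lgamma, log

def H(b, q, cap):
    """H(cap, J): sum over states {Q : b.Q <= cap} of prod_j q_j^{Q_j}/Q_j!  (Eq. (eq:H))."""
    total = 0.0
    for Q in product(*[range(cap // bj + 1) for bj in b]):
        if sum(bj * nj for bj, nj in zip(b, Q)) <= cap:
            # evaluate the product in log-space for numerical stability
            total += exp(sum(nj * log(qj) - lgamma(nj + 1)
                             for qj, nj in zip(q, Q) if nj))
    return total

def lolp(b, q, C):
    """beta_j = 1 - H(C - b_j, J) / H(C, J)  (Eq. (betaResult))."""
    H_C = H(b, q, C)
    return [1.0 - H(b, q, C - bj) / H_C for bj in b]

# Illustrative parameters only: toy-example charging rates with a reduced
# capacity and reduced offered loads q_j = lambda_j / mu_j.
print(lolp(b=[50, 7, 5], q=[0.33, 2.38, 5.0], C=100))
\end{verbatim}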
Considering a real-world scenario in which the number of classes $J$ typically varies between 3 and 5 and $C$ is on the order of megawatts, the LoLP can be computed efficiently via the Kaufman-Roberts algorithm (Algorithm~\ref{algo}), which relies on a simple recursion over the amount of occupied resources~\cite{kaufman}. Let $c$ denote the amount of resources in use and
\begin{align*}
\alpha(c)\;\triangleq\; {\mathds P}\left\{c \mbox{ units of power in use} \right\}\ .
\end{align*}
We subsequently have
\begin{equation}\label{kauf}
\alpha(c)=\frac{1}{{H(C,J)}}\sum_{\left\{ {\boldsymbol{Q}:\;{\boldsymbol b}\boldsymbol{Q} \;=\; c} \right\}}{\prod\limits_{j = 1}^J \frac{{q_j^{{Q^j}}}}{{{Q^j}!}}}\ ,
\end{equation}
and the LoLP corresponding to customer type $j$ can be calculated using
\begin{equation}
\beta_j(\boldsymbol{q},\boldsymbol{b}) = \sum\limits_{i = 0}^{{b_j} - 1} {\alpha (C - i)} \ .
\end{equation}
Note that the above derivations are based on the assumption that grid resources are discretized (e.g., $1$ kW is treated as $1000$ discrete serving units), and the interpretation of Algorithm~\ref{algo} is that the LoLP of customer type $j$ equals the sum of the occupancy probabilities of the states in the range $C-(b_j-1) ,\dots, C-1, C$. Another important measure of interest for solving the optimal pricing problem given in \eqref{prices} is the set of derivatives of the LoLPs with respect to the offered load of each customer class. In the proposed multi-class customer model, there are no explicit formulas for the performance measures (LoLPs) in terms of the input parameters ($C$, $\lambda_{j}$, $\mu_{j}$, etc.). In this paper we follow the convolution-based methods of~\cite{iversen2007derivatives} to compute the derivatives of the LoLPs. To that end, the derivative of the LoLP associated with customer type $j_1$ with respect to the traffic intensity of another class $j_{2} \ne j_{1}$ can be computed using the function $\alpha(\cdot)$ as follows:
\begin{eqnarray}\label{derivatives}
\frac{{\partial {\beta _{j_{1}}}}}{{\partial {q_{j_{2}}}}} &=& \alpha (C - {b_{{j_1}}}) + \alpha (C - {b_{{j_{2}}}} - 1) + \cdots \nonumber\\
&&+\; \alpha (C - {b_{{j_{2}}}} - {b_{j_{1}}} - 1) - (1 - {\beta _{j_{2}}}){\beta _{j_{1}}}\ .
\end{eqnarray}
A useful property of the LoLP is its elasticity, which entails that the sensitivity of the LoLP of class $j_1$ to the traffic of class $j_2$ is the same as the sensitivity of the LoLP of class $j_2$ to the traffic of class $j_1$~\cite{mazumdar}, i.e., $\frac{{\partial {\beta_{{j_{1}}}}}}{{\partial {q_{{j_{2}}}}}} = \frac{{\partial {\beta_{{j_{2}}}}}}{{\partial {q_{{j_{1}}}}}}$.
\begin{algorithm}[t]
\begin{small}
\caption{Kaufman-Roberts Algorithm~\cite{kaufman}}\label{algo}
\begin{algorithmic}
\STATE Set $\kappa(0)=1$ and $\kappa(i)=0$ for $i<0$
\FOR {$i$=$1$ to $C$}
\STATE $\kappa(i) = \frac{1}{i}\sum\nolimits_{j = 1}^J {{b_j}{q_j}\,\kappa(i - {b_j})}$
\ENDFOR
\STATE Compute $H = \sum\nolimits_{i = 0}^C {\kappa(i)} $
\FOR {$i$=$0$ to $C$}
\STATE $\alpha(i)=\frac{{\kappa (i)}}{H}$
\ENDFOR
\FOR {$j$=$1$ to $J$}
\STATE $\beta _j(\boldsymbol{q},\boldsymbol{b})=\sum\nolimits_{i = C - {b_j} + 1}^C {\alpha (i)} $
\ENDFOR
\end{algorithmic}
\end{small}
\end{algorithm}
Notice that the Kaufman-Roberts algorithm is used in both of the proposed frameworks, and its complexity of \BigO{CJ} provides a considerable improvement over computing the LoLP through \eqref{betaResult}, whose complexity is \BigO{C^J}~\cite{kaufman,nilsson}.
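A minimal Python transcription of the Kaufman-Roberts recursion in Algorithm~\ref{algo} is sketched below (integer resource units are assumed). For small instances it can be cross-checked against the direct evaluation of \eqref{betaResult} above, and its \BigO{CJ} cost makes it practical for the megawatt-scale capacities mentioned earlier.
\begin{verbatim}
def kaufman_roberts(b, q, C):
    """Kaufman-Roberts recursion: kappa(0) = 1, kappa(c) = 0 for c < 0, and
    kappa(c) = (1/c) * sum_j b_j * q_j * kappa(c - b_j).  Returns the per-class LoLPs."""
    kappa = [0.0] * (C + 1)
    kappa[0] = 1.0
    for c in range(1, C + 1):
        kappa[c] = sum(bj * qj * kappa[c - bj]
                       for bj, qj in zip(b, q) if c - bj >= 0) / c
    H = sum(kappa)                      # normalization constant
    alpha = [k / H for k in kappa]      # alpha(c) = P{c units of power in use}
    return [sum(alpha[C - bj + 1:]) for bj in b]

# Same illustrative instance as the brute-force sketch above; the two results
# should agree up to floating-point error.
print(kaufman_roberts(b=[50, 7, 5], q=[0.33, 2.38, 5.0], C=100))
\end{verbatim}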
For the pricing-based control problem, we compute the derivatives of the multi-rate blocking probabilities using the convolution-based algorithm of \cite{iversen2007derivatives}; similarly to the Kaufman-Roberts algorithm, its complexity is \BigO{CJ}.
\section{Simulation Results}\label{Example}
\begin{figure*}
\caption{Social welfare computation.}
\label{socialWelf}
\caption{Numerical Evaluation-I: time-varying system capacity $C(k)$, optimal arrival rates, and prices.}
\label{optRates}
\label{optPrices}
\label{results1}
\end{figure*}
\subsection{Case Study-I: Pricing-Based Control}
In this subsection we present a case study to evaluate the social welfare maximization problem of the pricing-based control framework presented in Section~\ref{sec:price}. The parameter setting for our case study is as follows. For ease of presentation, we assume that there are two customer types, namely fast charging customers (type-I) and slow charging customers (type-II). In order to mimic the typical charging rate of fast DC charging (that is, $50$ kW), we set $b_1$=$50$ units. Also, a charging session takes around $20$ minutes, so the mean service rate is set to $\mu_1$=$3$. In a similar manner, we tune the parameters for the slow charging customers as $b_2$=$7$ units and $\mu_2$=$0.42$. For the utility functions we adopt the widely used logarithmic utility~\cite{fan2011, cong2}. The utility of a single customer increases with the arrival rates and decreases with the customer LoLPs according to
\begin{equation}
U=\left\{ {\begin{array}{ll}
{\sum\nolimits_{j = 1}^J {\omega_j\log (1 + {\lambda _j}) - {\theta _j}\log (1 + \beta _C^j)} }&{{\lambda _j} > 0}\\
&\\
0&{{\lambda _j} \le 0}
\end{array}} \right.\ ,
\end{equation}
where $\omega_j$ and $\theta_j$ are the weights of each class. Note that the weights are higher for customer types with higher demand $b_j$, mainly because fast charging customers demand more resources and hence gain more utility. A simple example is presented to clarify matters. Assume that the system capacity is $C=500$ and the weights are chosen as $\omega_1$=$20$, $\omega_2$=$10$ and $\theta_1$=$60$, $\theta_2$=$20$. Then the social welfare is maximized by setting the arrival rates to $\lambda_1$=$8.6638$ and $\lambda_2$=$5.2001$. For these arrival rates, the prices $p_1$ and $p_2$ are computed to be $0.3197$ and $0.2211$, the resulting LoLPs are $\beta_1$=$0.0097$ and $\beta_2$=$0.0009$, and the maximum utility is computed to be $59.1238$ units. Notice that $\theta_j$ is chosen greater than $\omega_j$ so that the station operator is motivated to provide a good level of QoS. We further explore the relationship between the optimal arrival rates and the social welfare. As shown in Fig.~\ref{socialWelf}, if the arrival rates deviate from their optimal values, the social welfare decreases.
\begin{figure*}
\caption{LoLP performance.}
\label{blocking}
\caption{Social welfare computation.}
\label{TotalWelfare}
\caption{Capacity planning for varying QoS targets with two classes.}
\label{capPlanning}
\caption{Numerical Evaluation-II.}
\end{figure*}
We proceed to provide more numerical evaluations. In the first setting, we present the relationship between the system capacity and the optimal arrival rates for the time-dependent case, where the station capacity varies over time according to $C(k)$=$450+50\sin(2\pi k / 80)$ (due to grid conditions) and is assumed to remain constant over windows of $T=10$ time units. The results depicted in Fig.
\ref{optRates} show that it is more beneficial to accept fast charging customers, as they improve the social welfare more than the slow charging ones. For the given set of arrival rates, the corresponding prices in \eqref{prices} and loss-of-load probabilities are given in Figs.~\ref{optPrices} and~\ref{blocking}, respectively. Since fast charging uses more resources, the corresponding prices are higher, and due to the higher arrival rate, the corresponding LoLP is higher than that of the slow charging customers. Finally, we present the corresponding social welfare in Fig.~\ref{TotalWelfare}.
\subsection{Case Study-II: Capacity Planning}
We proceed to compute the minimum amount of grid resources that provides QoS guarantees to two customer classes with the same set of parameters ($\mu_1$=$3$, $\mu_2$=$0.42$, $b_1$=$50$, $b_2$=$7$) for a wide range of QoS targets ($0.001 \le {\delta _1} \le 0.05$, $0.001 \le {\delta _2} \le 0.05$) with fixed arrival rates $\lambda_{1}$=$\lambda_2$=$5$. The results depicted in Fig.~\ref{capPlanning} show that, since type-$1$ is the dominant class in (most of) the region, i.e., where $\delta_1/b_1<\delta_2/b_2$, providing resources for class $1$ already satisfies the QoS target for class $2$. Next, we compute the required capacity for a range of arrival rates and fixed QoS targets $\delta_1$=$\delta_2$=$0.03$. The results depicted in Fig.~\ref{capPlanningX} can be used as a guideline for choosing the required capacity for given arrival rates. We proceed to investigate the percentage of reduction in station capacity ($C$) for given LoLP targets. The motivation is that, instead of targeting a near-zero LoLP, the station operator can tolerate rejecting a small fraction of customers and thereby reduce the stress on the grid. We use the same parameter setting as above and compare the required capacity with that of the almost-zero-LoLP case ($\delta_{1}$=$\delta_2$=$10^{-6}$). As presented in Fig.~\ref{capPlanning2}, even a one percent QoS target leads to significant savings in station capacity. Our final evaluation considers the system performance for non-homogeneous arrival rates. Assume that the arrival rate of class II is constant, $\lambda_2(k)$=$10$, while $\lambda_1(k)$=$10+2\sin(2\pi k / 80)$. The QoS targets are set as $\delta_1$=$0.04$ and $\delta_2$=$0.01$. The station operator can then provision the system according to the peak-hour demand (computed as $C$=$683$) and hence meet the QoS requirements at all times. The results are presented in Fig.~\ref{lambdas}.
\begin{figure*}
\caption{Resource provisioning for varying arrival rates (fixed $\delta_{1}$=$\delta_2$=$0.03$).}
\label{capPlanningX}
\caption{\% of savings in station capacity.}
\label{capPlanning2}
\caption{LoLP performance, $C_{\min}$=$683$.}
\label{lambdas}
\caption{Numerical Evaluation-III.}
\end{figure*}
\section{Concluding Remarks}
In this paper we studied two important design problems for electric vehicle charging stations with multiple classes of customers. In the first one, given the capacity of a station and a large volume of customer requests, we provided a framework to compute the optimal arrival rates such that the total social welfare is maximized; this is a typical case for charging stations located in big cities. In the second case, we considered charging stations located in small cities; here, our primary concern was to calculate the minimum amount of grid resources such that each customer class is guaranteed a certain QoS target (LoLP). This initial work can be expanded in different directions.
For example, each station can employ an energy storage system (ESS) to further reduce the strain on the power grid; the ESS can be charged during light-traffic periods, and the stored energy can be used to meet EV demand during peak hours. Another future research direction is to consider different resource policies to optimize resource usage (e.g., partitioning the resources per class). In this paper we assumed charging at the level of a single micro-grid; another research direction is customer routing among micro-grids or stations. Finally, we have concentrated on a network of stations fed by a single substation; our last research direction is therefore to consider a general network setting and address congestion issues in a grid composed of power lines and other elements with different capacity ratings.
\section*{Acknowledgment}
This publication was made possible by NPRP grant \# 6-149-2-058 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors. The authors would like to thank Dr. Sercan Teleke for the fruitful discussions.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document}